text (stringlengths 54–548k) | label (stringclasses, 4 values) | id_ (stringlengths 32)
---|---|---|
Moreover, the transferability of current vision foundation models is somewhat narrow considering the wide spectrum of video applications. These models {{cite:855736a1aa2f2fcd22f7179934eb7a7fb1474cd5}}, {{cite:eebf99fd236761a84e264a5d7c5031f48776f481}}, {{cite:2345e1e2913e183389c2454453fed712e809b49b}}, {{cite:c201e2bb17778023371741328121a1b1535dd69c}} either concentrate on action understanding tasks (action recognition, spatiotemporal action localization, etc.) or video-language alignment ones (video retrieval, video question answering, etc.). We suppose this results from their learning schemes, as well as the lack of a comprehensive benchmark for measuring video understanding capabilities. Thus, these works {{cite:855736a1aa2f2fcd22f7179934eb7a7fb1474cd5}}, {{cite:eebf99fd236761a84e264a5d7c5031f48776f481}}, {{cite:2345e1e2913e183389c2454453fed712e809b49b}}, {{cite:c201e2bb17778023371741328121a1b1535dd69c}} focus on a few specific tasks to demonstrate their spatiotemporal perception.
The community desires a general foundation model that enables a broader application domain.
| i | b21affcab8dc3d4d97b3cf4342b59a0a |
This research note presents the algorithms needed to efficiently compute
gradients of GP models applied to large datasets using the celerite
method.
These developments increase the performance of inference methods based on
celerite and improve the convergence properties of non-linear
optimization routines.
Furthermore, the derivation of reverse accumulation algorithms for
celerite allows its integration into popular model-building and
automatic differentiation libraries like Stan {{cite:e01e84bbd12e8fd4a3b215d5307de82260c8d952}},
TensorFlow {{cite:383c5dcc753f50e9bf772fd83435916ddc2a13c3}}, and others.
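As a toy illustration of reverse accumulation for GP models (not the O(N) celerite recursions themselves; the kernel, data, and parameter values below are illustrative assumptions), one can differentiate a dense-kernel GP log-likelihood with JAX:

```python
# Toy illustration only: reverse-mode (reverse accumulation) gradients of a
# GP negative log-likelihood via autodiff. A dense Cholesky solve is used
# for clarity; celerite instead exploits semiseparable structure to reach
# linear scaling in the number of data points.
import jax
import jax.numpy as jnp

def gp_neg_log_like(params, t, y):
    amp, tau, sigma = params
    # Exponential kernel, a special case of the celerite kernel family.
    K = amp * jnp.exp(-jnp.abs(t[:, None] - t[None, :]) / tau)
    K = K + sigma**2 * jnp.eye(t.size)
    L = jnp.linalg.cholesky(K)
    alpha = jax.scipy.linalg.cho_solve((L, True), y)
    return 0.5 * y @ alpha + jnp.sum(jnp.log(jnp.diag(L)))

t = jnp.linspace(0.0, 10.0, 50)
y = jnp.sin(t)
grad_fn = jax.grad(gp_neg_log_like)   # reverse accumulation through the solve
print(grad_fn(jnp.array([1.0, 2.0, 0.1]), t, y))
```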
| d | 9acb39df46c93ff907243a8ffee9b7af |
Inspired by the close link between reasoning and the cause-and-effect relationship, causality has recently been incorporated to compactly represent the aforementioned structured knowledge in RL training {{cite:2465699211371a4ff9dac1b0ea0a0fa52cf19e72}}.
Based on the form of causal knowledge, we divide the related works into two categories, i.e., implicit and explicit causation.
With implicit causal representation, researchers ignore the detailed causal structure. For instance, {{cite:490130a56fddae18665d6a82279a684977dbdf50}} extracts invariant features as one node that influences the reward function, while the other node consists of task-irrelevant features {{cite:a18d0c02e73f7663322359817f6743f42f11233b}}, {{cite:d4cdd1d8493a282590e7e47b87f43d8dc3ca9e0d}}, {{cite:2541d08ca6b109ff590ff94caca9ce26ac0ceeda}}, {{cite:472f277df879ca4b3cbbaace624644439312c5a8}}.
This neat structure has good scalability but requires access to multiple environments that share the same invariant feature {{cite:490130a56fddae18665d6a82279a684977dbdf50}}, {{cite:472f277df879ca4b3cbbaace624644439312c5a8}}, {{cite:d6a137effede2505f82f773e54525d94bf76a0d3}}.
One can also turn to the explicit side by estimating detailed causal structures {{cite:1cd8852fc57fb77ff5b5031f6c972cda7c4c33a1}}, {{cite:b6867879badd1344089045417ffce5af31cd09e5}}, {{cite:c5b8bb81168b1346ca2481139876951cf86e23f7}}, {{cite:fafa8c034fd81d035fb9f628adafa6736d8e1967}}, which uses directed graphical models to capture the causality in the environment.
A prerequisite for this estimation is an object-level or event-level abstraction of the observation, which is available in most tasks and is also becoming a frequently studied problem {{cite:4ca2f97184648ae8b9e124dd4b39fac3d78fe01d}}, {{cite:734a4752f3c5722772ee3b6e38961a8f30ac063a}}, {{cite:e45131141841cd4f9bf562e300aa6e8923db0d3e}}.
However, existing explicit causal reasoning RL models either rely on heuristic design without theoretical guarantees {{cite:1cd8852fc57fb77ff5b5031f6c972cda7c4c33a1}} or require the true causal graph {{cite:dad0af1071f9bb20d8bef5255f4fdc4ddb908769}}.
| i | 2d2906bd49b9fb8288c664f0f46972e2 |
The positive mass theorem also involves a rigidity statement when the mass is zero, but we will not concern ourselves with rigidity questions in this paper. This theorem was generalized to the (now commonplace) general asymptotically flat setting in {{cite:91778dd291ddccab2c33b204ceb28f0331965d20}} via the density theorem, to dimensions {{formula:a2c22de8-ba50-45f6-af2d-5479fe64e254}} by Schoen in {{cite:72397a1b3cff2570d241a06966159024a0ce30df}} by studying the strong stability inequality, and finally to all dimensions in {{cite:331bc59b32b0bcf8c0bae85fd384bdcc3da5e52b}} using {{formula:126e691e-5710-43d1-bf57-932defad996e}} -slicings and the convenient compactification technique of Lohkamp {{cite:9d8876ccceea9dc5d8d626634b546efb888a3af9}}.
| i | 0d80b28c7a651fea268dcdfe94822b62 |
Equation (REF ) is closely related to models that describe the alignment of either biological individuals (fish, birds, etc.) or physical rods {{cite:13e011e127f4a0fa0362664abf216057a5187fdf}}, {{cite:861db35b46c242deeb618ae63f86e6fecc3eea03}}, {{cite:0b9cc61a6c5fbdc4acb2aceb6648b10c45b5aef9}}. More broadly, it is related to inelastic Boltzmann equations {{cite:45f3a4aace1ca5bfbb7d1d175c45fa38253f37f3}}. In this context, Wasserstein distances have been used to describe the long-time dynamics of solutions {{cite:5cf1b90ec20125a0a591e6b04a228c95742acbbd}}, {{cite:49a80308d4b50e6831b8dfe5c329b5a9f9a05035}}, following the seminal work of Tanaka {{cite:ed903e49490866a0c9ac79dfe519bbb5d019e88f}}. Note that a related set of works uses Fourier-based distances that have related properties {{cite:9c67597e426d214a92acd93986e746d81fdced46}}, {{cite:61fb3954b6f12fdb9dd99d72aad9093974c26694}}.
| i | b6ef6b8e5d55cbdddef132de2305e423 |
More precisely, as for the analytical investigation, we apply interpolating techniques (see e.g., {{cite:ec7e77f1f2d8db32adace109856eb05261f0beba}}, {{cite:39abc10eb6b2be58ffe4cb3e242772657b49b451}}) and extend their range of applicability to include the challenging case of non-Gaussian local fields, as happens for dense networks.
As for the numerical investigation, we propose a strategy based on the Plefka expansion {{cite:036a471c1f441fbf845f9e48f8fcc6adee08a0be}}, {{cite:c7e1422eceda3a47b42e5995c8d52c06e745264e}} to overcome the extreme effort implied by updating giant synaptic tensors in Monte Carlo (MC) simulations, with a remarkable speed-up.
| i | 57b8eda933f2e33d2d16d9ca82b298a9 |
Using this benchmark, we performed an analysis of the sample efficiency of existing machine learning models and their ability to harness compositionality. Our results suggest that even the best pretrained neural architectures require 50 times more training samples than humans to reach the same level of accuracy, which is consistent with prior work on sample efficiency {{cite:fa3c21124ec1396fdef7782f63af1b8e138a3f04}}. Our evaluation further revealed that current neural architectures fail to learn several tasks even when provided an abundance of samples and extensive prior visual experience. These results highlight the importance of developing more data-efficient and vision-oriented neural architectures toward achieving human-level artificial intelligence. In addition, we evaluated models' generalization ability across rules – from elementary rules to compositions and vice versa. We find that convolutional architectures benefit from learning all visual reasoning tasks jointly and transfer skills learned during training on elementary rules. However, they also failed to generalize systematically from compositions to their individual rules. These results indicate that convolutional architectures are capable of transferring skills across tasks but do not learn by decomposing a visual task into its elementary components.
| d | 9efde6c1a70ca6a67944b72b94a119cd |
Natural Language Processing (NLP) is a sub-discipline of computer science providing a bridge between natural languages and computers.
It helps empower machines to understand, process, and analyze human language {{cite:37a66d4ba2f3dd1fc661a7e73b7f54587c4deb2d}}.
NLP's significance as a tool for understanding human-generated data follows from the context-dependency of that data.
Data becomes more meaningful through a deeper understanding of its context, which in turn facilitates text analysis and mining.
NLP enables this by modeling the communication structures and patterns of humans.
| i | 4daf15c970bbefa7252c54b0d9a3cfb6 |
Other researchers carried out post-processing on reconstructed images with deep learning models, so as to remove the artifacts and noise and upgrade the quality of these images {{cite:8ad8f2f0092beec62320456e1818c8040c8e60e4}}, {{cite:596f59b7a510a02e1c00f8f0e9ca031f123db19f}}, {{cite:09d168f706222b12af294777d6569ec0a02094d6}}, {{cite:a8a59be02448ebd07d9ecf5c6f1a423c2286e9c4}}, {{cite:d89de1987fa2db80aa512b8f51bdf24dbe1cf6a3}}, {{cite:cad713bb45fe06058780d51f3e035822da443d41}}, {{cite:069f6e324603380ff5638ebd9a7210215c3f372f}}. In 2016, a deep convolutional neural network {{cite:09d168f706222b12af294777d6569ec0a02094d6}} was proposed to learn an end-to-end mapping between FBP and artifact-free images. In 2018, Yoseob Han and Jong Chul Ye designed dual frame and tight frame U-Nets {{cite:8ad8f2f0092beec62320456e1818c8040c8e60e4}} which satisfy the frame condition and perform better for the recovery of high-frequency edges in sparse-view CT. In 2019, Xie et al. {{cite:a8a59be02448ebd07d9ecf5c6f1a423c2286e9c4}} built an end-to-end cGAN model with a joint loss function for removing artifacts from limited-angle CT reconstruction images. In 2020, Wang et al. {{cite:d89de1987fa2db80aa512b8f51bdf24dbe1cf6a3}} developed a limited-angle TCT image reconstruction algorithm based on U-Net, which could suppress the artifacts and preserve the structures. Experiments have shown that U-Net-like structures are efficacious for image artifact removal and texture restoration {{cite:d438e0e942cdc7192be8a06bb378baa039785f1c}}, {{cite:4ea1ac60a0753ca0cb323e79cd90eedd12d6e51d}}, {{cite:8ad8f2f0092beec62320456e1818c8040c8e60e4}}, {{cite:d89de1987fa2db80aa512b8f51bdf24dbe1cf6a3}}, {{cite:069f6e324603380ff5638ebd9a7210215c3f372f}}.
| i | a438efe78adfd6a251a40c0d431a94e0 |
The results are reported in Figure REF and Table
REF for the regression and classification tasks, and in Figure REF and Table REF for the preference elicitation task. For all models, considering the average of weights always leads to better performance than considering only the weights of the last epoch. Such an approach is called stochastic weight averaging {{cite:3b15c89d0a5502e36a78b1a1c55b776085802c00}}, {{cite:2604632b07f876ba50fb69eff258a74fa849e682}}.
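A minimal sketch of the weight-averaging scheme (illustrative names and averaging schedule; not the exact training setup used in these experiments):

```python
# Stochastic weight averaging: maintain a running mean of the weights
# visited over the last epochs instead of keeping only the final-epoch
# weights. The toy "training" step below is a placeholder.
import numpy as np

def train_epoch(w):
    return w - 0.01 * np.random.randn(*w.shape)   # stand-in for one epoch of SGD

w = np.zeros(10)
w_swa, n_avg = np.zeros_like(w), 0
for epoch in range(100):
    w = train_epoch(w)
    if epoch >= 75:                               # average the tail of training
        w_swa = (n_avg * w_swa + w) / (n_avg + 1)
        n_avg += 1
# Evaluate the model with `w_swa` rather than the last-epoch `w`.
```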
| d | 9354110b8a9020ae172db656abcfb8c2 |
Figures REF (a) and (b) compare the predicted pose trajectory (for varying depth network size) from the proposed monocular localization against an equivalent framework where joint training of the depth network and filtering model is not performed. The comparison uses the RGBD scenes dataset {{cite:403e9f61f12ff7fa3a6def5c9a9bd520af946387}}. Figure REF (c) shows the corresponding degradation in depth images, measured using the structural similarity index measure (SSIM). In Figure REF (d), despite significant degradation in depth image quality and reduction of the depth predictor to one-third of its parameters, the proposed joint training maintains pose prediction accuracy by learning and adapting against systematic inaccuracies in depth prediction. Another crucial feature is that the original depth predictor can be trained on any dataset and then tuned (on the last layer) for the application domain. For example, in the presented results, the original depth network was trained on NYU-Depth {{cite:80c7dfc7a8ecabf43cbe107f9e106853577f827d}} and applied on RGBD scenes {{cite:403e9f61f12ff7fa3a6def5c9a9bd520af946387}}. Thus, the predictor has access to vast training data that can be independent of the application domain.
| r | 415911acbeb4195f142c7779336560b7 |
We motivated the omission of data-augmentation based on results that indicate
the benefits may be dataset specific {{cite:5062acb21f5e014b750afc965c41c3fff6b9fe4f}}. More
recently, the same authors conducted a more thorough analysis, providing
stronger indications that a sensibly designed data-augmentation procedure would
provide benefits across datasets and organs {{cite:451ad80c9046780b07e5ffcd79a62cc2f459b68c}}. Thus
we expect the addition of data-augmentation would further improve our results.
| d | 1f9938fa5e618fd799668df66fd483e0 |
We expect that VTT will enable important downstream applications such as visuo-tactile curiosity. Curiosity {{cite:14ae559d92419f98818befc8bf7cc388110c646e}} makes exploration more effective by maximizing a proxy reward. This reward is often formulated as a series of actions that increase representation entropy. Similarly to {{cite:fd1a4e8b7e4e24b8a4c837d2f4e1242c13f9babd}}, VTT can measure potential mismatch between vision and touch and use this score to explore. VTT advantageously incorporates spatial attention to localize regions of interest in an image, which sets it apart from other multi-modal representation methods. Despite its promise, the current iteration of VTT has a few limitations.
Firstly, our implementation uses 6-dimensional reaction wrenches from F/T sensors as tactile feedback. This low-dimensional signal can often alias important contact events and is generally not as informative as collocated tactile sensing like {{cite:309ce9dad65f295bf3b70e3ea22f3fb25ba40282}}, {{cite:d9e40304c8cff02735c48af7ef2f6574b5578d09}}. To address this, the tactile patching and embedding need to be adapted to handle high-dimensional tactile signatures. Secondly, we only evaluated VTT on rigid-body interactions. This choice was motivated by the availability of simulation platforms that incorporate both tactile feedback and deformables and that are fast enough for RL applications. It is not entirely clear how VTT will handle deformables; though we suspect that attention will be distributed over both the deformation and the contact region.
| d | 8da932c373f75a6cf9465bdb8437f836 |
The main difference between NCSR {{cite:680bb33ad4ea72be38680400456fa01b5930a03e}} and the proposed GSR-NLS is that NCSR is essentially a patch-based sparse coding method, which usually ignores the relationships among similar patches {{cite:c76f643ef8687b370c9568384bdfda14470df4e5}}, {{cite:ac05cbdad4acbdfc61d7a2ac2fe7e640cb7ffc7e}}, {{cite:96a24aeae54d0902a90055e299a15d03b0a6acd9}}, {{cite:4341e7c29e221bee4a544cd9c91f74e8bd313f39}}. In addition, NCSR extracts image patches from the noisy image {{formula:aa351f67-030d-4245-b7f2-afa44e1bde9a}} and uses the {{formula:46d104af-a6dc-4eb7-9697-929ee5958114}}-means algorithm to generate {{formula:be5c24e2-4470-472f-bb85-aa6da980914b}} clusters. Following this, it learns {{formula:d10cba6e-9fbe-496b-80cc-ad56c8cc996c}} PCA sub-dictionaries, one from each cluster. However, since each cluster includes thousands of patches, the dictionary learned by PCA from each cluster may not accurately characterize the image features. The proposed GSR-NLS learns a PCA dictionary from each group, and since the patches in each group are similar, our PCA dictionary is more appropriate. An additional advantage of GSR-NLS is that it requires only one third of the computational time of NCSR while achieving {{formula:6b96ea3d-3707-4746-917f-be134fc86113}} 0.5dB improvement on average over NCSR (see Section for details).
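The contrast between the two dictionary-learning strategies can be sketched as follows (an illustrative toy, not the authors' exact pipeline; patch size, cluster count, and group size are assumptions):

```python
# NCSR-style: PCA sub-dictionaries learned on large k-means clusters,
# versus GSR-NLS-style: a PCA dictionary learned on a small group of
# mutually similar patches.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

patches = np.random.randn(5000, 64)              # vectorized 8x8 patches

# NCSR-style: one PCA sub-dictionary per (large) cluster.
n_clusters = 70
labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(patches)
cluster_dicts = [PCA().fit(patches[labels == k]).components_
                 for k in range(n_clusters)]

# GSR-NLS-style: gather the nearest neighbors of a reference patch into a
# small, homogeneous group and learn the PCA dictionary from it alone.
ref = patches[0]
nearest = np.argsort(np.sum((patches - ref) ** 2, axis=1))[:60]
group_dict = PCA().fit(patches[nearest]).components_
```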
| m | 7abc6a4f94105794e7151f6669090829 |
One problem we have to solve before we can calculate distances is the delineation of the research field. It is not feasible and not necessary that the electric current between two papers flows through the total citation network of all papers published in the year considered. Field delineation should be done by an appropriate method for finding thematically coherent communities of papers {{cite:a3de238b44021f6f929da0e85ce49e6e45cb7f64}}, {{cite:ffbfec300482c56bae0a853fb7860e842c64d412}}.
| m | 799079ea434b1f3a38b4c446fa98822f |
We begin by enforcing the conditions of unitarity, perturbativity and
stability of the potential, by requiring that Eqs. (REF ) are respected.
These theoretical conditions are applied as hard cuts
to ensure that every sampled point is theoretically meaningful.
Considering that the oblique parameters, dominantly {{formula:b1c78acb-ba63-44be-bb04-96941d381c22}} and {{formula:ecff6278-1508-40b7-85bf-6447a40ea0bd}} ,
shift the value for {{formula:fb2510c7-50d8-405e-a53c-2b3d5d4b3554}} ,
we will require their values to be consistent
with both the recent measurement of the mass of the {{formula:35c0f9d5-ccbd-4f89-9a92-791e05e2b5b7}} by the CDF collaboration
and with the current experimental average for the weak mixing angle.
To calculate the theoretical constraints and oblique parameters,
as well as other observables, we employ 2HDMC {{cite:2cf32f6c0c1d9bf3c456211f34c508f67efef3cf}}.
This tool is then interfaced with HiggsBounds {{cite:6403b786b3822f015b4627c1a599b954d52d2d91}}
and HiggsSignals {{cite:0ed2dea213f6db7e35a3525de24b31cd2c004993}}
to incorporate several constraints from LEP, Tevatron and LHC on the Higgs sector
and obtain a {{formula:b8bf7aae-10c9-45df-9842-adf1264a722a}} for the currently observed signals of the Higgs.
Finally, to calculate flavor physics observables,
that will be relevant mostly for the type II 2HDM as mentioned in Sec. REF ,
we process our obtained data with SuperIso {{cite:8e107ba291d38c0b3f65a1edd4fed5e6bcf26101}}.
We use the Markov Chain Monte Carlo sampler emcee {{cite:ae946bc8b19149a89fef95e04eb55c1bf9fcc2f6}} to explore the parameter space.
The free parameters of our model are given by
the mass of the heavy Higgs, {{formula:cf62fdb0-ccb4-46ed-9593-3787070ce2bd}} ,
the charged Higgs mass, {{formula:f3d49b70-d4d2-4e79-9651-8613389eb2e5}} ,
the mass of the pseudoscalar, {{formula:378d6ba4-6d2f-4d4e-ad04-309a5a7fdde3}} ,
the mixing {{formula:25f3deee-56f7-4e56-9414-cf77d7a0fd99}} ,
the parameter {{formula:97652572-a157-452b-beb7-bb5c287c4360}}
and the squared mass parameter {{formula:7ea364ed-65bf-43a4-9d5a-a5e110a7b1d5}} .
The limits of our parameter scan are given by:
{{formula:6127738a-7346-4465-b66a-c5713fb51f4f}}
| m | ab58ca35e5201630f77d389895354f99 |
Remarkably, Table REF shows that the proposed framework outperforms all the other state-of-the-art supervised debiasing methods. Notably, only about 16 samples remain within the (Gender=1, HeavyMakeup=1) group after subsampling. Thus it is almost impossible to prevent deep networks from memorizing those samples even with strong regularization if we train the networks from scratch, which explains the failure of existing upweighting protocols such as JTT. In contrast, the proposed framework can fully take advantage of unlabeled samples, where contrastive learning helps prevent memorization of the minority counterexamples {{cite:0a9357f69521584deec3e9f79d9ca78b6256ab0c}}. This highlights the importance of pretraining using unlabeled samples, which most prior debiasing works do not consider. Moreover, such implicit bias of deep networks towards memorizing samples may seriously deteriorate the performance of existing bias-conflicting sample mining algorithms {{cite:e98bf6434b52950bdf37b987a0951297e8fb0137}}, {{cite:6e4695354f62e992a3e1ef7e2e72b2893d84c9ed}}, {{cite:94d80a92fd04728e06c35eaa08ab2f74c931a4fe}} when the number of labeled samples is strictly limited. However, such failure is unlikely to be reproduced in the proposed framework, since we only train a simple linear classifier on top of a frozen biased encoder to identify such bias-conflicting samples.
{{table:a6b05633-2245-488b-aa14-b6afd085fc09}}{{table:bb30d83d-2fca-4ddd-9520-67117f967a08}} | r | 9693d5d26b31273e0a6e60ed9b1da13f |
Motivated by the latest measurement of the muon anomalous magnetic moment {{formula:1380ad21-380f-466f-bd13-30c65b271f53}} by the Muon g-2 Collaboration at Fermilab {{cite:9d305010dccef881fd52beec5145d0de89da6111}} and those of electrons {{formula:046c38b2-6148-4f7d-8268-8ed22ea4c9b0}} at LKB {{cite:345d87e3a5c4ab8e819877dcb1079a14f2393f13}} and Berkeley {{cite:fa099d21fb0ce3ffe6fcf4a6215174c2559e6350}}, we have explored the possibility of explaining these anomalies in terms of the presence of a massive spin-2 particle {{formula:53ffdea3-a4be-46e3-8d0c-28dad95e149e}}, which can be identified as the first KK excitation of the ordinary graviton in the generalized RS scenario. By calculating the associated one-loop Feynman diagrams, we have given the analytic expression of the leading-order contributions to the lepton anomalous magnetic moments induced by {{formula:6b4216ae-042f-441b-b786-2eb4fc9aca98}}. Note that the integrals over the loop momentum in the Feynman diagrams are all highly power-law divergent, i.e., of {{formula:b584fafc-fc09-4bdf-8ca3-f7834af80612}}, with {{formula:6dbaa149-69b6-46e0-a3ae-2c3f486f5497}} representing the UV cutoff scale. Moreover, in the Barr-Zee type diagrams {{cite:31401ae7e6a48a28095bd0ecea4c9f249d426836}}, {{cite:6f62aea2e0b91547cbce8e33205eda7270eacf52}}, the gauge invariance involving the internal photon line should be preserved, which is another difficulty in our computation. In order to keep the gauge invariance and the power-law divergence structure, the loop regularization method {{cite:76eb3baef05e67b48dc06fa3d8c28cf4ac86051b}}, {{cite:0158e85eaccb8c040fbad1daf7ebf193d62d8aa3}} has been applied. In particular, we have explicitly checked the photon gauge invariance by performing our calculation of the Barr-Zee diagrams in both the Feynman-'t Hooft gauge and the general gauge with the parameter {{formula:52a8e81f-05d0-48b4-8d9d-551afd851998}} kept free. We have further considered theoretical constraints from the perturbativity of the effective field theory of the massive graviton. Interestingly, we have given a new cutoff-dependent perturbativity constraint on the associated Wilson coefficients of the nonrenormalizable operators, which is a natural but nontrivial generalization of the counterpart for dimensionless renormalizable operators. As a result, we have shown that there exists a substantial amount of parameter space which can accommodate the {{formula:84e1fd26-8757-42ff-a42c-d53ad91c8599}} anomaly and the LKB {{formula:4ed26e76-00c9-4915-9bdc-93af4eb8a9f4}} data simultaneously without disturbing the perturbativity bounds. On the other hand, the present simple massive-graviton framework seems unable to accommodate Berkeley's result for the electron's anomalous magnetic moment in light of the current {{formula:e1ac60c6-303e-4585-86fe-4c03c251175e}} data, due to the strong theoretical bounds from perturbativity.
| d | e1af67282a7b34ba2aa35909d57db049 |
The gluon dissociation describes the process of a color-singlet state converting to a color-octet state by absorbing a gluon {{cite:1b8d870aa4c6e73d7a041e09af7ac5f3986dcb3d}}. The cross section in vacuum, neglecting the color-octet interaction in the final state, was first calculated by Bhanot and Peskin via the operator-product-expansion (OPE) method {{cite:ddccfc67de22bbf2871cf114b7c9ad91d120fa4e}}, {{cite:967c8b020ee09b7c2996b67877c0629aa5c7fcaf}}. Peskin's perturbative analysis can be represented by a gauge-invariant effective action from which one can get a non-relativistic Hamiltonian for heavy quark systems via QCD multipole expansion {{cite:fe2a25f4184c918ba3fbcbccd7cdacc7d0d33fa4}}, {{cite:abe19fafbdd77fdb283996842bb0bbc9bfdbbcce}}, {{cite:2a079d717b2f92aed3f7c7b98bcc10a1bb691a26}}. Based on this effective Hamiltonian, the cross section of gluon dissociation in a hot medium is derived in the framework of quantum-mechanical perturbation theory {{cite:10e6cc029ffb24d0cf1d3f82e345563e53106898}}, {{cite:8ef245157c62c85603ea0051dad8a65230d45783}}, {{cite:dda3cc3133fc519822a8703addaa3edc2083754b}}. The result in the Coulomb approximation is consistent with the OPE result.
| i | 6ee6bda5bd8a78e1f90602c5bac130b0 |
We know an object A can be decomposed into Haar wavelet bases {{formula:2b5c08fa-d533-4b1f-bc0d-17433378043e}} and corresponding expansion coefficients {{formula:275017a6-cb8d-49dd-a45d-78cef1663016}} {{cite:cb0ed9aa3185a934dde7d8b8cb34121c46753dbb}}. In fact, as eq. REF shows, at the first level of decomposition (L), we have four components: {{formula:947e80ab-b91c-4cbc-b3cb-a0c22b9cbc9c}}, {{formula:e09bf73d-ec3f-42ad-a136-5ed266a02cf8}}, {{formula:0a8df95e-e4f8-436b-b440-4b499c18a532}}, and {{formula:ba33389c-7686-461d-8c1b-4047f481e541}}. In the same way, {{formula:a4b6fdfd-918e-46fd-a324-8d1f5ea4ccb2}} can be decomposed at the second level of the DWT (LL), i.e. {{formula:b8484c32-b316-470b-bd79-e8600782a783}}, {{formula:38d197cf-d4ee-49aa-9bf0-c799577089d7}}, {{formula:eb0a910f-ad87-4472-99cd-751808535674}}, and {{formula:e673ce47-25d4-43fc-9596-e031b533dc1d}}.
{{formula:ea06d553-4767-4647-bc62-2e170a87f420}}
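A minimal sketch of this two-level decomposition with orthonormal single-level Haar steps (an illustrative implementation; band-naming conventions vary between references):

```python
import numpy as np

def haar2d_level(x):
    # Separable single-level 2D Haar transform on 2x2 blocks (orthonormal).
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
    return a, h, v, d        # approximation band + three detail bands

A = np.random.rand(8, 8)
LL1, LH1, HL1, HH1 = haar2d_level(A)     # first level (L)
LL2, LH2, HL2, HH2 = haar2d_level(LL1)   # second level (LL) from the LL band
```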
| m | e8c7829ef8ddf6272011475343f9f3e9 |
In recent years, many efforts have been made to utilize PDs as features for downstream machine learning tasks, such as material science {{cite:6a91b59e6685b7a2be9a327ab3e5e448993249c2}}, signal analysis {{cite:bd0b59e8d1e3e34104b17c78a5b5b7bd4f11e86f}}, cellular data {{cite:448e8361c11be94bf9be221f9efbc11259c0a048}} and shape recognition {{cite:2431fd4341b6c53cb16582d5ffe97b05e6bb9fba}}. However, the geometry of the PD does not lend itself easily to well-adopted classifiers due to the lack of a Hilbert structure. In particular, several basic operations, such as mean and addition, are not well defined for PDs {{cite:ad599baea54ddf1bebc1528b4cec432263f78a32}}, making it difficult to utilize PDs straightforwardly in machine learning.
| i | 7c659a32eb17b36c570dd6cd50efdbf6 |
SCE on FC Layer-1 (FC1) or Layer-2 (FC2).
One may argue that SCE need not be applied on an additional classifier FC2. We conduct experiments using SCE on FC1 (i.e., w/o FC2) and show the results in the upper block of Table REF . “{{formula:e36593d4-b8d6-444f-a4de-9e35e30c1186}} only” is the baseline of using only the BCE loss for FC1. “{{formula:22783c5e-97db-4366-98b5-284da374aef1}} only” uses SCE only for FC1, modifying the original multi-hot labels to be normalized (summing to 1). For example, {{formula:2875911d-9b5c-460c-86f4-61d235261cd7}} is modified as {{formula:82589350-f2cf-494b-ae78-7aaf77d82e48}}. “{{formula:63be3304-eacb-486d-860a-feafdda48e5d}} for single only” applies BCE for learning multi-label images but SCE for single-label images (i.e., the subset of training images containing one object class). It shows that “{{formula:9b39fd68-8458-4889-9f75-7894cee5a456}} only” performs the worst. This is because SCE does not make sense for multi-label classification tasks, where the probabilities of different classes are not independent {{cite:f84d16c94d64d7b213c9f0742343e850bbd215e0}}. “{{formula:453e88a0-94bf-4bdb-9d91-a7a064ad0836}} for single only” combines two losses to handle different images, which increases the complexity of the method. Moreover, it does not gain much, especially for the MS COCO dataset, which contains fewer single-label images and represents a more general segmentation scenario in practice.
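The two loss variants compared above can be sketched as follows (illustrative shapes; the label normalization mirrors the description of "SCE only", and for a single-label image the normalized target reduces to a one-hot vector):

```python
# BCE treats classes as independent Bernoullis; the normalized-SCE variant
# rescales the multi-hot label to sum to 1 and applies softmax cross-entropy.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 20)                 # batch of 4 images, 20 classes
multi_hot = torch.zeros(4, 20)
multi_hot[0, [2, 5]] = 1.0                  # image 0 contains classes 2 and 5
multi_hot[1, 7] = 1.0                       # image 1 is single-label

bce = F.binary_cross_entropy_with_logits(logits, multi_hot)

soft_target = multi_hot / multi_hot.sum(dim=1, keepdim=True).clamp(min=1)
sce = -(soft_target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```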
| r | ae14dfb60f290f88650075e34a82c7f8 |
The decoder of {{formula:756adb42-2d7d-473e-b5aa-06fc307b3171}} consists of five upsampling blocks in order to obtain an output resolution that is identical to the original image size. Each block contains two convolutional layers with a kernel size equal to 3 and zero padding equal to one. In addition, we use batch normalization after the last convolution layer before the activation function. The first four layers' activation function is a ReLU, while the last layer's activation function is sigmoid. Each layer receives a skip connection from the block of the encoder that has the same spatial resolution {{cite:a595f92b103d8ee3a4e585db165b8db317a459ca}}.
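A sketch of one such decoder block under the stated hyperparameters (the bilinear upsampling and the concatenation of the encoder skip connection are assumptions; `last=True` switches ReLU to sigmoid for the final block):

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch, last=False):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv1 = nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)   # batch norm after the last conv
        self.act = nn.Sigmoid() if last else nn.ReLU()

    def forward(self, x, skip):
        x = self.up(x)
        # Skip connection from the same-resolution encoder block.
        x = torch.relu(self.conv1(torch.cat([x, skip], dim=1)))
        return self.act(self.bn(self.conv2(x)))
```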
| m | 4f1cc450c2594a209cfa04d435d6fdd1 |
In scientific fields ranging from systems biology and systems engineering to social sciences, physical systems and finance, differential equations are omnipresent and constitute an essential tool to simulate, analyze, predict, and to ultimately make informed decisions. Due to the wide range of applications, the search for efficient, flexible, and reliable numerical schemes is still a timely topic despite its long history. Numerical solutions of ordinary differential equations (ODEs), in particular for initial-value problems (IVPs), are predominantly obtained by a rich variety of finite difference single/multistep schemes, which lead to both implicit and explicit solvers that are now standard in many programming languages {{cite:2bc26d0bd9ef75547bde561e659a5650d0b85fc6}}, {{cite:5a2767c5ae002a12b9b227642930697cffd6b1c8}}, {{cite:c1b4d14ffd3f975fef3da2a9f406af0e855c8a23}}, {{cite:6ea6a01c35297a2920b408f418a39a2edecf5191}}, {{cite:a11a8c8c39cffc68c8e5ce25d15da791a607b536}}. In contrast, finite element methods for ODEs are much less investigated, despite the works on continuous and discontinuous Galerkin methods (see {{cite:87e9a60008bc161426dcdc0f39bc5d269588ddb0}} for a brief review) and collocation methods {{cite:24ff8e7e01137c7d575d07c843ae67a7c5140e4d}}. The same can be said for delay differential equations (DDEs) as well {{cite:f1da1b52390cb3afdc56d10e21d5cf36a3ee274a}}.
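For instance, a standard explicit Runge-Kutta IVP solve of the kind these libraries provide (the logistic equation below is an arbitrary example problem):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solve y' = y(1 - y), y(0) = 0.1 with an adaptive explicit RK scheme.
sol = solve_ivp(lambda t, y: y * (1 - y), t_span=(0.0, 10.0), y0=[0.1],
                method="RK45", dense_output=True)
print(sol.sol(np.linspace(0.0, 10.0, 5)))   # evaluate the continuous interpolant
```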
| i | c1d802375470ce7bab7084c0363ceb8f |
Commercially available semiconducting wafers were purchased from different vendors. {{formula:b8f82397-bd99-4dcd-a683-f391b89fd905}} -Si and {{formula:3b67d4ff-75ca-4de8-93c4-355bb903e98b}} -GaAs samples were doped with P (2-6{{formula:d294719f-0e4e-49f3-aff5-71e09fbaedb4}}{{formula:11882689-ae28-4b8b-8e3a-4496f52c4184}} cm{{formula:313e2b99-9586-40f2-9531-8ff82a423867}} ) and Si (3-6{{formula:77797b65-687c-42c1-a321-3f1551835bf4}}{{formula:eb4ac1fa-ffbd-4dc4-831b-95e1a40f0f31}} cm{{formula:bf58619c-3180-4bb6-9947-bc26817f581e}} ) respectively. Epilayers of {{formula:295d23a4-d82c-449f-8274-93eb59a36ecd}} -GaN and {{formula:3dc688f1-f41e-4fae-a6dc-6fe240c17f73}} -4H-SiC, 3-6 {{formula:9fad8bf4-ab34-43ad-a0c7-ca640ce85252}} m-thick, were grown on semi-insulating sapphire substrates with Si (1-3{{formula:dbe99e0c-0b15-44fd-a01d-1bb5712dac4d}}{{formula:7adc82e4-34c9-436b-a6b0-d3add8d2afc8}} cm{{formula:12240ef6-1833-474e-b00d-ee4032614274}} ) and N (1-3{{formula:bb04b7b3-b2a6-4ab6-8f5e-d3c91b4d6895}}{{formula:6cae4334-0817-4970-a4dd-de6107160cf4}} cm{{formula:59fd8d09-7cda-4b53-816e-fd234cc100d8}} ) dopants. During the sample preparation and before the graphene transfer, the wafers were cleaned using typical surface cleaning techniques. Ohmic contacts to the semiconductors were formed using conventional ohmic contact recipes{{cite:e50b5090711ee7cdc3aa2bb1cc18f72c54ad734e}}, {{cite:d05ec69b8fd2a4f725a041ca1e5743a67277e67e}}, {{cite:32e2ff5d5d4e9a98a4eb5bf8ae992dc4d3ab1e35}}, {{cite:2f6d97c954a18f510944d7a5f566b9618c9710cf}}. Multilayer ohmic contacts were thermally grown at the back/front side of the semiconductor and were annealed at high temperatures using rapid thermal annealing. After the ohmic contact formation, a 0.5-1.0 {{formula:c7ac976a-b1c5-4994-9332-e4af115a4756}} m thick SiO{{formula:a5e37683-775d-414a-869f-e5657908f1c0}} window was grown on various semiconductors using a plasma enhanced chemical vapor deposition (PECVD) system, and {{formula:ca0c9ece-2e5a-4fe8-a2ca-ed6ce77e7b91}} 500 nm thick gold electrodes were thermally evaporated onto SiO{{formula:68abf6e4-0b42-41cf-8917-d37481d04571}} windows at 5{{formula:c5176dc1-3f9c-41f7-9c07-4e2c935db884}} Torr. The graphene contacting areas were squares with sides in the range 500 {{formula:2e8eda8f-7d18-434f-9647-cc32d059d5fa}} m to 2000 {{formula:0b7f1c7c-63df-4fa8-b937-657d4ca17d60}} m. Application of IPA improves the success rate of the graphene transfer and does not affect the measurements presented here. After depositing the graphene/PMMA films, the samples were placed in an acetone vapor rich container for periods ranging from 10 minutes to {{formula:e9bf0c59-8207-4d1b-88b8-e87c04cb670a}} 10 hours. The acetone bath allows slow removal of the PMMA films without noticeable deformation of the graphene sheets.
| m | da139066459d5242a5b37f4bc43cdb9b |
We also evaluate the proposed approach against deep generative approaches: GAIN {{cite:bacfa04920b1086abd3d603ae74176dd821f9aa6}}, MIDA {{cite:11ddfc62e860c0f217c95772ec5746a68537696b}} (for both random and structured missing values), MIWAE (for random missing values) {{cite:678bc074be9ebbad0f1cd7b4f35ec9ebbeca9724}} and not-MIWAE {{cite:a88fb27c299ec52f7177319f8e1cf3b08fc161c2}} (for structured missing values).
For each approach, we report the best obtained result after multiple runs.
| m | 44028ff6eb90af68e4fee457a4fbfda6 |
Our implementation of Anchor+BM25 labels uses better neural rankers and also combines it with base retrieval, compared to its vanilla form in previous research {{cite:75fc075fe9cf1e0e8ce196e459b97396194f8900}}. Still, it does not yet outperform No Weak Supervision. BM25 scores can be used as pseudo relevance feedback (PRF) or to find stronger negative documents {{cite:986d07bafb1473b4732bf41e7581f232a8526b0f}}, {{cite:2271c1217290547c320cea991fcd78da6533cb19}}.
More in line with the latter, ReInfoSelect uses BM25 to find negative documents for anchors.
| r | c8e87ec0935045c79a6eede213ddc20e |
Node Clustering.
We conduct node clustering experiments on the DBLP and IMDb datasets, using the same setting as {{cite:68923bc71940fc8209423b52f3d5961c113207c8}}, {{cite:02e9806dbdf839d0a42404ad898e5bbc814c9ee9}}.
We feed the embeddings of labeled nodes to a K-Means algorithm. The number of clusters K is set to 3 for IMDb and 4 for DBLP.
Since the clustering result of the K-Means algorithm is highly dependent on the initialization of the centroids, we repeat K-Means 10 times and report the averaged normalized mutual information (NMI) and adjusted Rand index (ARI).
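A sketch of this evaluation protocol (the embeddings and labels below are random placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

embeddings = np.random.randn(500, 64)   # embeddings of labeled nodes
labels = np.random.randint(0, 4, 500)   # ground-truth classes (K = 4, DBLP)

nmis, aris = [], []
for seed in range(10):                  # repeat K-Means with fresh centroids
    pred = KMeans(n_clusters=4, n_init=1, random_state=seed).fit_predict(embeddings)
    nmis.append(normalized_mutual_info_score(labels, pred))
    aris.append(adjusted_rand_score(labels, pred))
print(np.mean(nmis), np.mean(aris))     # averaged NMI and ARI
```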
| r | 25bb7aa66436c611ee11c1ec13646358 |
Federated learning is essential for institutions handling privacy-sensitive data such as medical, driving, voice, and facial data, where it is difficult for each institution to share its data.
Though training data and models are publicly available through open-source machine learning libraries such as TensorFlow or Keras, it is not easy to share training models in federated learning.
Because each FL client has a different amount of data, computing power, and learning schedule, clients perform the learning process in federated learning asynchronously {{cite:cbe65e188bd68ee2fe2ed502144176e219eb0d94}}.
In addition, there is no convenient tool for developers to exchange machine learning models in federated learning.
| i | 3f44de2b453fc29fee693c3005ec63a6 |
In any of these smaller spaces – typically in the basis of left- and
right-orthogonal environments as produced naturally during left-to-right and right-to-left sweeps – it is very easy to solve the
time-dependent Schrödinger equation exactly from time {{formula:9eff05a6-e46b-44dd-a764-dadbc3d63d4b}} to time
{{formula:a8348209-f98a-4967-a596-44d918623809}} using a local Krylov procedure or any other exponential
solver{{cite:43c336e03294969ff7b5f3b143c4f8c67963585a}}. This results then in a new effective state
{{formula:d11e770a-4058-4201-a94c-524b6be08118}} . Unfortunately, the new
state will be represented in a relatively bad basis of the
environments {{formula:0fc8b23e-ce5f-4380-bfee-f3ffc821338d}} and {{formula:fad6c67b-4260-4ad3-9830-20cd30501379}} which are optimized to represent
the original state at time {{formula:3080617b-2ae6-4951-9a23-cec3039e4925}} . To alleviate this issue, after
evolving the site tensor {{formula:5d70736e-383f-472b-b624-e41016cf2332}} forwards by {{formula:0504ce4d-a4dc-4ff2-b47f-7e708ad6f39a}} , one now wishes to
obtain a global state {{formula:850f7f7e-2adc-418c-86c9-ce1843108cd7}} at the original time but in the
new left auxiliary basis. If this is achieved, we may iteratively
update the basis transformation tensors to be suitable for the new
state {{formula:1a349d39-cb0f-45b0-8d5d-77b6fff6a758}} .
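A toy illustration of the local exponential step (a random dense Hermitian matrix stands in for the projected effective Hamiltonian; production codes apply an iterative Krylov exponential to the effective problem):

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply

dim, dt = 64, 0.05
H = np.random.randn(dim, dim)
H = (H + H.T) / 2                            # Hermitian effective Hamiltonian
psi = np.random.randn(dim).astype(complex)
psi /= np.linalg.norm(psi)

# |psi(t + dt)> = exp(-i H dt) |psi(t)>, evaluated without forming exp(-i H dt).
psi_new = expm_multiply(-1j * dt * H, psi)
```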
| m | 83a478404cedf4523f92f332fa4c0830 |
In Supplementary material, we provide more GAN models {{cite:bfe6b03e0679832e1caf0451bbd9085b441200d1}}, {{cite:a44dd89d4af9088c7e9cc314c0a8ce3d1bc7b230}} with no high frequency decay discrepancies.
We also investigate whether such high frequency decay discrepancies are found in other types of computational image synthesis methods (synthesis using the Unity game engine, https://unity.com/) {{cite:6fa75f5fdd4eaca4fd143370e35f592d5220cd03}}, {{cite:fd6feb06b2ec300d90517cca4c28c9982562f952}}.
To conclude, through this work we hope to help image forensics research manoeuvre in more plausible directions in the fight against CNN-synthesized visual disinformation.
{{figure:f6a0fe5f-8a94-4116-8631-1a661e4a9dcd}} | d | e64ae1a4b1d81c92514420996a1f30b2 |
The regularized regret objective, AdaptationObjective, can be interpreted in several different ways. The regularization can be motivated as preventing overfitting to a limited number of samples of a noisy reward function.
We could also interpret this term as reflecting the cost of adapting to the new task, but under a somewhat idealized model of the learning process. Precisely, the adaptation objective is equivalent to performing one step of natural gradient descent {{cite:b2db4d595463e809359690f0210cda8cddab741e}}, {{cite:985be5cf20e89f5b632373548e29a57702d28935}}, {{cite:b6685dfd9b14d21374533498f4816747f5fd2a69}}, where the underlying metric is the state marginal distribution {{cite:9206caba821492cb8072971893e64ed56a33ec70}}. This natural gradient is different from the standard natural policy gradient {{cite:985be5cf20e89f5b632373548e29a57702d28935}}, which does not depend on the environment dynamics and is much easier to estimate.
This equivalence suggests that our notion of adaptation is idealized: it assumes that this adaptation step can be performed exactly, without considering how data is collected, how state marginal distributions are estimated, or how the underlying policy parameters should be updated. Nonetheless, this paper is (to the best of our knowledge) the first to provide any formal connection between the mutual information objective optimized by unsupervised skill learning algorithms and performance on downstream tasks.
| r | 0834b0dbbe963ea298697bf102bc4608 |
Feature comparison. For each category, we randomly sample an equal number of pixels from the ResNet-50-based features for each domain (200 pixels per domain and category) and present the t-SNE comparison with the GA baseline {{cite:add7490497586f729bde73053191a8370c39ed1e}} in Figure REF . It can be observed that similar categories (person, rider, and bike) are separated clearly in the feature space by our method, which significantly benefits the subsequent detection head in terms of object recognition.
| r | 176aae977c2077e29d01472b742cae35 |
A thorough comparison of the methods has shown that AETv2 {{cite:6e7057a2a228b68005125c15dbf54d7c0c44d12e}} performs best in terms of learning relevant features from images and contributing to lower object recognition error rates on the ImageNet {{cite:0a91e8181e59be965558526a7e40578eba1f7add}} and CIFAR-10 {{cite:0fa1a5601da82e74e12782935d9268c9bbccf7e3}} datasets. As shown in Figure REF , the transformation-prediction loss follows a path similar to those of the classification error and top-1 accuracy on CIFAR-10 {{cite:0fa1a5601da82e74e12782935d9268c9bbccf7e3}} and ImageNet {{cite:0a91e8181e59be965558526a7e40578eba1f7add}}. This indicates that predicting transformations better implies better classification results when making use of the learned representations, which validates the choice of AET and its variants for supervising the learning of feature representations.
| d | 192cf6a371c7b3fe020e8dfbb0055056 |
To further investigate the nonequilibrium quantum phenomena in the system, the phase diagram for identifying different numbers of steady-state solutions is mapped in the {{formula:82adda07-f0d9-4bf9-ac4d-54ecaa28e8d2}}-{{formula:f9f0cc6b-feb6-4849-8ac5-86f34fa36bf2}} parameter plane with {{formula:3efbf718-2894-4581-b6d3-876fceb7eb3c}} fixed, as displayed in Fig. REF (a). It reveals that the bistability behavior with three steady-state solutions exists in a large parameter regime (light-red regime). Besides the emergence of optical bistability, the number of steady-state solutions can be more than three, corresponding to the existence of multistability with five steady-state solutions (light-yellow regime). Remarkably, the bistability and multistability only occur at nonzero optical Stark shift ({{formula:136d7826-a11a-42d6-9aad-f7a0c14f19b3}}) in the single-atom cavity-EIT system, which demonstrates that the optical Stark shift plays an important role in enhancing the optical nonlinearity {{cite:276ae13d50cf6463161628ee2aee4ec3b5f47eb8}}, {{cite:ab9328a3a953dcc7b200664a065dc49ec2d8a447}}. Moreover, the dynamical characteristics of the bistability-to-multistability phase transition can be conveniently controlled by tuning the cavity-light detuning {{formula:92ecd3bd-8d18-45ad-8026-7ef8c1886af9}} or the Stark shift {{formula:37d913d8-e85f-44f3-b9d4-8ce291bb97bd}}. Indeed, we have checked that similar behaviors are observed for negative detunings when {{formula:41ee21e0-89fe-4c31-a5bd-b94c58be4d93}} changes from positive to negative values. Compared with the proposed Stark-shift-mediated EIT, we should note that optical bistability and multistability behaviors have been studied in a {{formula:d55791b5-9970-49a0-88d9-0b456bc62aa0}}-type four-level atom system {{cite:9cde68261a4c7e9284db487fb948d54a54d132d9}}, where the predicted nonlinear phenomenon can be essentially ascribed to the emerging giant Kerr nonlinearity in cavity-EIT {{cite:7a657e567b3e6b61191f7eec3b42f7766bf80c63}}, {{cite:82baeab57d25b48d1069b87459d21e2638980601}}.
| r | 56da6a55d7166a865f49deb43b50bb3e |
The photometric distance modulus/distance of Be 55, {{formula:0b97d6f3-5271-432e-8a63-4897de70438b}} mag ({{formula:d7a3d05c-9dcb-4231-91b2-3288d8387e50}} kpc), is well consistent with the median Gaia EDR3 distance (3.15{{formula:2d2d4c25-0631-4945-8fd5-c531cfc76202}}0.59 kpc). These distances locate Be 55 near the Perseus spiral arm.
The photometric distance of this paper is in concordance with the {{formula:a55f1fd3-481c-4d75-80f5-964e60d468d4}} kpc of A20 and the 3.02 kpc of {{cite:8fcff1ff050e5276a725f9878c2c7febff87f21c}} (Table 7). It is rather smaller than that of N12 but farther than that of L18. L18's distance is from the period-luminosity relation (PLR) of the Cepheid {{formula:0669128c-8540-465f-baf2-7cfad3061ad0}} (Table 8). N12 obtains its distance modulus from the dereddened ZAMS of {{cite:1cb30a1f43cb653af8fd141ea4b22684b172fa18}} and SK82 as 13.0{{formula:a7e1dc18-9f1e-4319-aa89-76a1d86c0e90}}0.30 mag on {{formula:4c56a33e-c14d-476f-9efa-5ab6c88886cc}}. This corresponds to a distance of 3.98{{formula:45f98db8-71ca-41bb-8db7-66a17321585e}}0.55 kpc. L18's distance locates Be 55 on the outer edge of the Local arm rather than in the Perseus arm as N12 suggests. The mismatch between the distances of L18 and N12 is explained by the underestimated uncertainties in N12's distance modulus. The other literature gives similar distances.
| d | ad695212a64373ffb570d7df01f2842a |
Given the gradient and the Hessian of {{formula:f5ee71ce-50d3-484c-b155-95f9cd69f1de}} in (REF ) and (REF ), one may employ an iterative optimization scheme such as Newton's method or the (L-)BFGS method {{cite:c4d585091ec801fcb8368700ec9fba2d53c389df}} to find {{formula:23948c47-d405-43a5-a057-d24404abf95a}}, the solution of the convex program {{formula:dc5444bb-7acf-4290-a91e-35193f189054}}. Note that {{formula:62f2a9e2-cd21-42b1-8ecd-dc01a8a06b25}} is feasible in (REF ).
Also, one can see that the feasible set in (REF ) is a convex set with non-empty interior, and consequently, it is guaranteed {{cite:c4d585091ec801fcb8368700ec9fba2d53c389df}} that {{formula:b2fdaf97-ef23-4ba7-b52e-40e739e21a73}} converges to the solution of (REF ).
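A generic sketch of such an iterative scheme, here minimizing a strictly convex quadratic stand-in for the objective with L-BFGS and an analytic gradient:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[3.0, 1.0], [1.0, 2.0]])       # SPD, so f is strictly convex
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x        # smooth convex objective
grad = lambda x: A @ x - b                   # analytic gradient

res = minimize(f, x0=np.zeros(2), jac=grad, method="L-BFGS-B")
print(res.x, np.linalg.solve(A, b))          # both give the unique minimizer
```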
| m | eeeb93abf730d07ffb918c83db03ec40 |
The significance of dendrites in neural computation is well documented {{cite:04481347f49e3917e39f0e55e6516144bc84dcf6}}, {{cite:49cd6207e337f1b781be901cb7640fdeee89c064}}, {{cite:af46b712a9adb61a79ff4e1679c6135b9a2d4d33}}, {{cite:c249c23a991bc8aa0e3f11a29da3b616f5836be4}}, {{cite:721d4f4f635a290860d86be0d71ac882c0c3c356}}, {{cite:549a5b7c77098180b1958aeb4c5ead722a0e1408}}, {{cite:131d41c7666c8c2ff2634ed25f4440bc6aabfeaf}}, yet their incorporation in neuromorphic hardware has received proportionally less attention {{cite:9f4d169d3c98dc9fd44c095987b2f156a66cac3c}}, {{cite:d688f7f1db38b124728f7453bfb57e36e959bb76}}, {{cite:05e030c236b75a05ff520a216628ac3f441e6fbc}}, {{cite:87f384eb1d26c6e61f78bf0bf2bfc7fde1126b92}}, {{cite:f6d5b3c0de6514edf45d5c6caea5507709378a0e}}. The dendrites studied here may provide valuable means for implementing credit assignment in training algorithms that utilize local information in conjunction with population information {{cite:583760f8a201153f4108579da966a30bfd1c06b5}}, {{cite:5b681610c975c569b20b8ccb0881a96b01662881}}, {{cite:346f94d603717421d4aa60d38aa9c91d270f0a92}}, {{cite:d8be933387297ae346b1e4fececdfffa01a881c5}}, {{cite:8134742652b58a461a0de6544355d1fcc7f76968}}, potentially leveraging much of what is known from neuroscience about the key role of dendrites in learning {{cite:3996ac35a4a35648a10387a146c2bfe210bedf6e}}, {{cite:9a7f31a68246f39c4895e30c2fdb37499b03b520}}, {{cite:04b8ea0dad4a71b0a37255e96efdcfae60cee4f9}}, {{cite:aa7aa61b4e5c1847e64465b509c2bfe986060d52}}, {{cite:b08697573fb62a83a1725f8cc3686c61aabf894e}}, {{cite:e27fa6f01d9db07a9f29d9a457c66a06feeba772}}, {{cite:57d15053b270a367ced57d93b235bbfec24b54e6}}, {{cite:264fb0428bb53fc11c44fd18208064ea92e8e22a}}, and overcoming a major obstacle to widespread adoption of spiking neural networks for artificial intelligence. The presence of dendritic signals that are continuous in time and not erased following a spike may provide new methods for training spiking neural networks that are not available based on spiking activity alone. Perhaps these continuous signals will be useful for constructing cost functions and training networks with variants of the backpropagation technique, a feat which has been difficult in the case of spiking neurons outside of the rate-coding domain, and a subject which is significant in bridging machine learning and neuroscience {{cite:2997484c68a84559ef70cfb24219fedd29abfeae}}.
| d | 559f7fa393894329b6b6c7921bfa4e17 |
In this paper, we propose two quantum-inspired algorithms for solving linear systems. One is based on the randomized Kaczmarz method {{cite:029646648f7442768f2b84a440794ec1726cfd71}}, and the other one is based on the randomized coordinate descent method {{cite:29e6ad4aa7871b518f5c4fd531a419f7965a3fbc}}. The second algorithm only works for symmetric positive-definite (SPD) matrices. Our results are summarized in Table REF .
{{table:c71d2f88-bbb8-413e-8cdd-f10d748c0889}} | r | bef558af734cb85e23d690f18dae54db |
We also compare to GIRAFFE {{cite:5c556e3162072fb25b82c178b670d22b4a618cad}} in Fig. REF .
Our method maintains the consistency of both pose and shape components better.
Quantitatively, GIRAFFE achieves similar scores compared to our method on FFHQ using the metrics defined in the main paper.
It achieves an appearance consistency score of 0.05, geometry consistency score of 0.32, and appearance variation score of 0.09.
However, our results have better multi-view consistency and better qualitative disentanglement, as shown in Fig. REF .
{{figure:1a64fce5-b0e0-46ca-a02a-774954f1488f}} | r | 33b6193f6ef3bd34abbab3f0c3ec7fad |
In the analysis of a spectroscopic time series of a pulsating star, there are generally two methods that can be used: (i) the moment method {{cite:5c3448bdb6c49590687f9dcd40efdfa7b0c32646}}, {{cite:e29d39e51e3d568557d2901c50d13d89fe7d0d71}}, {{cite:7e150093da1c57867bc8b4c16a22dcef034400a1}}, {{cite:738853a42eb0373d3f8e8f7ea0f78944c95912ee}}, {{cite:0fe5826400b3797e350c44c07fd2c053b1731b56}}, {{cite:84c3c82ef14a2ff8a124447060b9b893b8287962}}, and (ii) the pixel-by-pixel method {{cite:ecac2bae4d2fb141a4f77d0ee59c704e2fbe0789}}, {{cite:52c80b727906a255c2d637bcfa6bd9d31c363e28}}, {{cite:f8f6d815b085c54b91459b40bc797ceb6d6b1a94}}, {{cite:18a7e71f301a78dc6190fe5021382b39ac058120}}. The moment method requires the numerical integration of the statistical moments of an observed spectral line and describes LPV in terms of: equivalent width (0th moment); centroid velocity (1st moment); profile width (2nd moment); and profile skewness (3rd moment). Such a method is most effective for slowly rotating stars (i.e. {{formula:122d105f-8bd0-477c-b7e6-0a24586c987a}} km s{{formula:e56b46a5-c62f-4d11-91ea-aaf793f6ff15}} ), because the rotation period is then much longer than the pulsation period, so the dominant line-broadening mechanism is pulsation rather than rotation. The analysis of slowly rotating {{formula:66bbe90b-ca6d-49d9-aa7a-dcb5f45ed6f3}} Cep stars typically leads to the identification of low-angular-degree pulsation modes, which are ideal observational inputs for forward asteroseismic modelling. On the other hand, the pixel-by-pixel method is sensitive to the phase and amplitude of a pulsation mode as a function of position across the line profile. It is more suitable for moderately and rapidly rotating stars (i.e. {{formula:9c6f8cbe-fb39-4049-a37b-cc82d6eefba0}} km s{{formula:d3528f2f-696c-412e-a655-f705d5104a91}} ), and typically leads to the identification of high-angular-degree pulsation modes, which are seen as bumps moving through the spectral line profile. However, the pixel-by-pixel method has the disadvantage that inference from it is limited by the signal-to-noise ratio of the spectrum. Therefore, since we have optimised the CubeSpec target list for slowly rotating stars with pulsationally dominated spectral line broadening to identify low-angular-degree pulsation modes, we will primarily apply the moment method to maximise the feasibility of future forward asteroseismic modelling.
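A schematic computation of the four moments on a toy Gaussian absorption line (normalization conventions differ between implementations):

```python
import numpy as np

v = np.linspace(-50.0, 50.0, 401)                            # velocity grid (km/s)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((v - 2.0) / 12.0) ** 2)    # toy line profile

line = 1.0 - flux                             # line depth
m0 = np.trapz(line, v)                        # equivalent width (0th moment)
m1 = np.trapz(v * line, v) / m0               # centroid velocity (1st moment)
m2 = np.trapz((v - m1) ** 2 * line, v) / m0   # profile width (2nd moment)
m3 = np.trapz((v - m1) ** 3 * line, v) / m0   # profile skewness (3rd moment)
```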
| m | 04c0ca9e2dbee736c78c475633cd0682 |
Our algorithm can also be used to prepare stationary states of slowly-evolving Markov chains, i.e. given a sequence of Markov chains {{formula:7901c36f-7253-4473-bc85-7e9fa0b4e02b}} , such that there is a significant overlap between the stationary distributions of any two consecutive Markov chains, meaning {{formula:44c2a5de-8c68-46b9-ab0c-cbde86305d10}} is large {{cite:5227bd444a393ef8a36f346657732baaa4b7753a}}, {{cite:376cc18363a50be649f9b52e1458ac3d3d830017}}, {{cite:e8541442039135ab7994d43a0f3a896f9e83efce}}. Given that one can prepare {{formula:e10e3be3-c490-4ea9-abb7-f43e6e1fe269}} efficiently, the task is to prepare {{formula:864b29d7-deb8-4eab-890d-c00e05320296}} . Such situations arise in a host of approximation algorithms for counting as has been pointed out in Ref. {{cite:5227bd444a393ef8a36f346657732baaa4b7753a}}. Our algorithm will provide a quadratic speedup over that of Ref. {{cite:5227bd444a393ef8a36f346657732baaa4b7753a}} as given any {{formula:e191ad43-5e8d-4894-9feb-a8ef350130bb}} , the spectral gap of the Hamiltonian defined in Sec. is amplified quadratically over the corresponding discriminant matrix, which acts as the Hamiltonian for the approach in {{cite:5227bd444a393ef8a36f346657732baaa4b7753a}}.
| d | e9388337b782951d4a123c2e26f84c56 |
Fig. REF presents the 4-QAM {{formula:5b788de2-856f-45b4-9494-77d8050241b1}} MIMO-OTFS BER performance for low ({{formula:9b53b03b-3f4f-4f2e-97d3-e662cac4b5ac}} ) and high ({{formula:fb680cfe-3675-42ec-831d-e9a0b7db5c42}} ) correlation at the Rx. We consider practical channel estimation, where the channel coefficients are obtained using the single pilot method proposed in {{cite:b6c3ae2720fa87eda1637e1240656c24829a18ce}}, {{cite:15183df651abce9b277eec4376b759f795412ffd}}. The pilot symbol energy for each OTFS frame is given as {{formula:13c33e1e-ba72-4b2a-ae7f-d323b8fae86b}} , where {{formula:43d48de2-9399-49d2-b950-92df77d8aec2}} is the average symbol energy. The LMMSE detection performance is plotted alongside for comparison. The quality of the channel estimation depends on the pilot power as observed in this figure. It can be observed that the MRCw detector offers around 5dB gain compared to LMMSE for the same excess pilot power {{formula:bb2228a8-aeb5-4564-8346-acc7ce2cce44}} for both low and high correlation at the Rx. For the perfect CSI case (dashed lines), it can be noted for both MRCw and LMMSE, that a spatial correlation of 0.9 causes a performance degradation of around 7 dB due to reduction in available space diversity as compared to the case with no correlation. In both cases MRCw gains 2dB over LMMSE at a much lower complexity.
| r | 4879e25c40783b667d39686861730a18 |
We retrieve many associations that make sense for humans in {{formula:56cd6c8b-fbb9-4a66-9661-b8e3f5b50206}}: blues and jazz; rock and pop; motivation, dancing, running, and party are close to one another – which aligns with previous work {{cite:914dc16b9ab7c669e466509f128f569a05ec2934}}. More interestingly, we observe that "LSD Trip" is the parent of "Surf", "Summer of love", and "Stoner Rock", effectively associating an activity with a sport, a historic phenomenon, and a music genre. Yet, we also find some associations that make sense for the model but not for humans: e.g. "Rock Christmas" is also a child of "LSD Trip". This phenomenon is hard to avoid with fully unsupervised methods {{cite:c0533abdf3bdb42e7297dc6b1b3312fd59f639bc}}, {{cite:d55ca612f65f8899bb49bb32cd93eab00b503be8}}.
We can find many more interesting associations, but we have to beware of confirmation biases, as often in XAI {{cite:1ceeb82d03e674d05851d6869e9195d3ef5a212c}}, {{cite:14b750a289a617794d1a81b7ee334d05df56cc35}}, {{cite:fd34b7b26d0fc96e952cf0ec8fb0bd39367def23}}. We provide online visualisations of the resultsresearch.deezer.com/concept_hierarchy/.
| d | d30637e731f768f33de7d6386348260a |
Lemma 14 (Chernoff bound for Bernoulli variables {{cite:9136671d0b712488abe2bb8f512bf917d188cf24}})
Let {{formula:324e239f-d80a-47dc-8357-06abc9536b36}} be independent random variables taking values in {{formula:14e33774-6f7f-4fdc-804b-6c56ab7c1b28}} . Let {{formula:cdcd941b-7834-41f1-bd6e-32d481de1ff4}} and {{formula:62a4e32d-7856-47c6-98b2-22c2459c8fb2}} . The following statements hold.
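For reference, one common textbook form of such bounds, stated in generic notation since the symbols above are placeholders (the lemma's exact constants may differ):

```latex
% Multiplicative Chernoff bounds for X = \sum_i X_i with \mu = \mathbb{E}[X].
\begin{align}
  \Pr\left[X \ge (1+\delta)\mu\right] &\le \exp\left(-\frac{\delta^2\mu}{2+\delta}\right)
    && \text{for } \delta \ge 0, \\
  \Pr\left[X \le (1-\delta)\mu\right] &\le \exp\left(-\frac{\delta^2\mu}{2}\right)
    && \text{for } 0 \le \delta \le 1.
\end{align}
```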
| r | 0654c37c530e68515d5c3b3704c3204a |
We first conduct experiments with the synthetic dataset provided by Kar {{cite:f747d9b3e873f26cfde83940cf92d1657502f1aa}}, which contains 43,784 objects in 13 classes from ShapeNet {{cite:db620c35ad22e39ceacfbec2d8fe37907e12a277}}.
Each sample includes a 3D CAD model, 20 camera viewpoints for rendering, and the corresponding rendered images at a resolution of 224{{formula:068665e0-257d-48fe-beba-b036e0ba5c0e}} 224 pixels.
We use the same training/validation/testing splits as the original dataset.
The training images are augmented by randomly shuffling the RGB channels and by horizontal flipping.
We only use one view per object for training and evaluate on all the 20 views in the test set (with independent single-view reconstruction on each view).
The ground-truth 3D shapes are only used for testing.
We show some qualitative results of our method and other baselines in Figure REF .
More part reconstruction results on car, lamp, rifle, table, and watercraft samples are shown in Figure REF to demonstrate that our method generalizes well across diverse object classes.
| r | 4d8097bbae37c4ed4750fff9f1d9e954 |
The results shown here extend the utility of local-dimension-invariant stabilizer codes, and so naturally there are questions as to what other uses this technique will have. Is it possible to apply this technique to show some foundational aspect of quantum measurements? Can this technique in some way be used for other varieties of stabilizer-like codes, such as Entanglement-Assisted Quantum Error-Correcting Codes {{cite:09151aa168543ae773f021f4a216ad11b3547066}}, {{cite:8270e0851ad4998bb016a2a207638a6ce5dcaaa6}}? If this method can be applied in that situation, it could possibly remove the need for entanglement in these codes, so long as the local dimension is altered. However, even so, the required local dimension would likely be quite large, so the importance of decreasing the bounds for {{formula:1e302ec1-860e-49c0-8465-0594871233f9}} and {{formula:9baea539-1605-4ccc-ba92-4522bb4541b9}} would become that much greater.
| d | 702dd735bd2fe22ce4103e2dcb2957f4 |
The BH mass estimates have a cubic dependence on the inclination (Equation 4). Therefore BH mass estimates may not be reliable, given the presence of systematic errors in the measurement of inclination. The two main sources of systematics on light curves are superhump modulation and contamination from rapid aperiodic variability. If there is a large spread of inclination values, there will be a larger range of BH masses {{cite:9a916d26f22d1f21f49dc704ae8cfcd2e87c64a0}}. The BH mass estimates also have two limitations for High-Mass X-ray Binaries (HMXBs). Firstly, the value of the mass ratio `q' is very small (given the spectral type of their companion star; {{cite:c1703fb14f5d64f29dc44ec76f7b1f63e972b332}}), and therefore the BH mass would not be very accurate. Secondly, the mass transfer is powered by stellar winds {{cite:70bc4cca7ebcec9478fcf5da29e237de926f4c08}}.
| m | d599231f10516b23e01577764c49b416 |
The structure of the Datalog program can be analysed to provide clues
about the predicates to focus on.
Following {{cite:9dbe911605a33d1df322badc7acb16abb8eb5ed5}}, we introduce the notion of
precedence graph {{formula:dc74d158-1257-4370-b16e-2095c201e45d}} of a Datalog program {{formula:e4a619f7-b5d9-4356-b0e5-d86b52b0e6af}} . The nodes are
the {{formula:e51cd916-93cc-4f3b-b813-3eabafb60d43}} predicates and the edges are pairs of {{formula:75e330a7-0570-4647-a818-9072435463e4}} predicates {{formula:d0f73caa-376e-4c72-91ad-9946d59b2ce3}} where {{formula:4dcc6d4f-5ddc-4978-92b2-2d91d69ddcff}} occurs at the head of a rule of {{formula:54c1d9ea-db04-4bbe-a975-933b6c6f3123}} with {{formula:d6c9260f-44b5-4390-ac96-3c00a7550c99}} belonging to the tail.
{{formula:d95306e3-edc9-4abe-aa8e-5a62c81863c9}} is a recursive program if {{formula:63902938-08b9-4cc8-a053-05142b0874cb}} has a directed cycle.
Two predicates {{formula:79381439-1ca5-402e-8d1f-9754c05d1c0e}} and {{formula:2c218fe6-f64a-46b5-876a-1e890e7ebba1}} are mutually recursive if {{formula:2e31315a-0880-4951-973a-801d5864c30b}} or {{formula:535648b5-29cb-499a-a8a6-f860380688e4}} and
{{formula:a2acb6a3-1d18-40a6-84b2-85bad64ad2b8}} participate in the same cycle of {{formula:ec6cc08c-aca9-44ab-a158-a55cd5053edc}} . This defines equivalence
classes.
Putting it together, we obtain as Algorithm our final algorithm.
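As an illustration of the construction just described, here is a minimal Python sketch; the rule encoding, the predicate names, and the use of networkx are assumptions, and the edge orientation (tail predicate to head predicate) is one of the two equivalent conventions.

```python
import networkx as nx

# Hypothetical IDB rules, each encoded as (head predicate, [tail predicates]).
rules = [
    ("path", ["edge"]),
    ("path", ["path", "edge"]),  # a recursive rule
]

G = nx.DiGraph()
for head, tail in rules:
    for q in tail:
        G.add_edge(q, head)  # q occurs in the tail of a rule whose head is `head`

# The program is recursive iff the precedence graph has a directed cycle.
recursive = not nx.is_directed_acyclic_graph(G)

# Mutually recursive predicates form the strongly connected components,
# which are exactly the equivalence classes mentioned above.
classes = list(nx.strongly_connected_components(G))
```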
| m | d1707df07f5cf0fda8e5e96e544116e8 |
Results. Tab.REF illustrates the results on large-scale datasets. Our method is consistently effective, outperforming existing mainstream methods and achieving a distinct improvement over the previous SOTA c-RT {{cite:b39c725961ad42237828a3f4a1f5d8d443c2749e}} across the compared backbones. In particular, our method outperforms the baseline on ImageNet-LT and iNaturalist 2018 by 9.53% and 8.27% with ResNet-50, respectively. As can be noticed in Tab.REF , the proposed method also surpasses the well-known two-stage methods {{cite:b39c725961ad42237828a3f4a1f5d8d443c2749e}}, {{cite:6e6fb4e626fd7221559626adf736720e1fa16d22}}, {{cite:bbe7e9e85d6b6509ea17884fdd67e9cbf4fa77e5}}, achieving superior accuracy with less computational load in a concise training manner.
| r | 5ce5ac632d4e996c736721340b888dae |
These four steps are performed either a preset number of times or until reaching given optimization objectives. The main obstacle to implementing this scheme is that GCNNs do not naturally return the uncertainties that evaluation of the acquisition function requires. We overcome it using either MC-Dropout {{cite:a1abd322fe4ab15bd2b071b251fa2907581185c2}} or Deep Ensembles {{cite:ae6be8926c808f05bd3c16800bd30510d15e1d8c}}, as described below.
{{figure:0145ba16-c0c9-48e2-8f2e-9fcfa07d06f6}} | m | e587f577e51425b89d79709e7561c42c |
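A minimal sketch of the MC-Dropout side of this scheme (PyTorch is assumed, as are the model and sample count): the network is run several times with dropout kept active, and the spread of the outputs serves as the uncertainty consumed by the acquisition function.

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Predictive mean and spread from stochastic forward passes."""
    model.train()  # keep Dropout active at inference (assumes no BatchNorm side effects)
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```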
The BSDS500 dataset {{cite:ccf67bf7d2f85fc6c96786a173fc83c89093223b}} is the standard benchmark for superpixel segmentation; it contains 200 training images, 100 validation images, and 200 test images, each of size {{formula:19e0f701-1330-4b3d-94e4-d60fb6983537}} . Each image has more than 5 segmentation ground truths labeled by different annotators, so in this study we choose the ground truth that achieves the highest segmentation scores. Since SSN {{cite:622a0f703399879e4416f11f7f6b90c107bfbadc}} is a supervised superpixel segmentation method that must be trained on the training and validation sets to optimize its parameters, we compare all the superpixel segmentation methods mentioned above on the test set only. Quantitative results on the BSDS dataset are shown in Fig. REF : our LNS-Net has the highest performance among all the unsupervised superpixel segmentation methods (ERS{{cite:d562021998e960f3ca6157395c9ebccaa1cc5d22}}, SNIC{{cite:d2c384983eab54472b8df6ea9cfd794dcf3b5ed6}}, LSC{{cite:c8028bc69ce49f596ae5655361d878af3a2b00de}}, RIM{{cite:a848ee2657bb53052e563500a1b37ad3e77d60ac}}). This benefits from our sequential training strategy, which optimizes the model parameters without supervision. Moreover, our LNS-Net is sensitive to contours in a broad sense rather than only to semantic boundaries, as shown in Fig. REF . This trait contributes to our higher BR than the supervised segmentation method SSN.
| r | ee725304e166a3ad9b69daf652c14321 |
Using the idea of the immersed interface method {{cite:aa9670bc558acd5e9a65c46e357fb6e07fc4f26d}}, {{cite:8154059125b6e3dc7c39d876a9e6a6add5d50d28}}, {{cite:10667b00e922b4b7d423558fd0566dff9ff48d3a}}, we can use the jump conditions of the solutions and their partial derivatives instead of a singular source expressed in terms of the Dirac delta function. Thus, the incompressible Stokes equations with an immersed interface {{formula:d9e79757-d2e8-40c9-b303-7f78342ecaf3}} can also be written as
{{formula:01fe42cb-2daa-4d27-a5e3-a1fbb3023a6f}}
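The system itself is templated out in this extract. For orientation only, a generic jump-condition formulation of the incompressible Stokes equations reads as follows; the exact jumps used by the method are left abstract here and are an assumption.

```latex
% u: velocity, p: pressure, g: smooth body force, Gamma: immersed interface.
\[
  \mu\,\Delta\mathbf{u} - \nabla p + \mathbf{g} = \mathbf{0}, \qquad
  \nabla\cdot\mathbf{u} = 0 \qquad \text{in } \Omega\setminus\Gamma,
\]
% supplemented by prescribed jumps across Gamma (determined by the interfacial force):
\[
  [\![\,\mathbf{u}\,]\!]_{\Gamma},\qquad [\![\,p\,]\!]_{\Gamma},\qquad
  [\![\,\partial_n\mathbf{u}\,]\!]_{\Gamma},\qquad [\![\,\partial_n p\,]\!]_{\Gamma}.
\]
```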
| m | 28e76d5392a643776fbb18c0aedec84b |
Moreover, in addition to the Recall and mean Recall results with the graph constraint, we also report the results of our methods without the graph constraint in Table REF . As we can observe, RTPB also significantly improves the mean Recall of the corresponding methods. Besides, DTrans+RTPB(CB) performs the best on nearly all mean Recall metrics. Recall drops when RTPB is used, because the less frequently seen relationships have been highlighted while most ground truths are frequently seen. It is therefore common in unbiased SGG strategies that an increase in mean Recall is accompanied by a decrease in Recall {{cite:7d75635d20dac0b46cea6ec520c9ab10184842fa}}.
{{table:02feffb4-c69a-413d-8f29-d0ff40e600a5}} | r | d16c7d744395a401c9836a89772413a9 |
Tables REF and REF give the quantitative results of our method evaluated on sfKITTI {{cite:29a9197cd2b8c652555d125b6a941c3e2ffabc90}}. The accuracy of our method is substantially ahead of the supervised learning methods FlowNet3 {{cite:dd46a738dc2b768f546f66704038157eecf48c1f}}, FlowNet3D {{cite:9e13b545e23442f6504637db09cddb8e1e35705b}}, and FLOT {{cite:ad47c28ba69ce3900858aad96387d885ef62c854}}. Compared with the self-supervised learning methods {{cite:59205ecf198aaa13bbb33fc9f60170c74b90389e}}, {{cite:c3a0bef4148f2d613352e6806b3fde9ecfe7e5cd}}, {{cite:96a256d6f6983dbd88353fbffe5958d304f812d1}}, {{cite:6fc61bbb2101fb05020eec2468731aede218c585}}, {{cite:0302ed6feb534fd22a5a6050e5f144c6b234b2d8}}, {{cite:e7d92495432677b6d11b3983dec59cfa1cf02a6f}} based on point clouds, learning 3D scene flow on pseudo-LiDAR from real-world images proves impressively effective. Compared to PointPWC-Net {{cite:59205ecf198aaa13bbb33fc9f60170c74b90389e}}, our method improves by over 45% on EPE3D, Acc3DS, and EPE2D, and by over 30% on Acc3DR, Outliers, and Acc2D, which is a powerful demonstration that the Chamfer loss and Laplacian regularization loss can be more effective on pseudo-LiDAR. Our method without fine-tuning still outperforms the results of Mittal et al. (ft) {{cite:96a256d6f6983dbd88353fbffe5958d304f812d1}}, which was fine-tuned on sfKITTI.
| r | 6432c2c098d90cb466bc4d65a2a69457 |
CNN with bilinear models (Bilinear-CNN):
Bilinear-CNN {{cite:a1dc346e7ffe7077da738c372c4d57b827b43b48}} employs bilinear models on feature maps from convolutional layers. The outputs of the convolutional layers of two CNN streams are multiplied using the outer product at each location and pooled for recognition. For a fair comparison, we employ the pre-trained MobileNet V2 as the CNN streams for feature extraction. Feature maps from the last convolutional layer are pooled with bilinear models. We reduce the number of output channels of the CNN streams from 1280 to 128 with a 1{{formula:086ec2fa-b83d-4eab-82ad-bb9fcd1b2a94}} 1 convolutional layer before the bilinear models. The dimension of the feature maps for the bilinear models is 7{{formula:7b20cfca-c15e-4159-8354-213929eb73ee}} 7{{formula:f10d632f-9951-4a74-96be-4c5eabc40cb1}} 128 and the pooled bilinear feature is of size 128{{formula:9522b586-3e8a-4604-88ff-673fb1ff06d1}} 128. The pooled bilinear feature is fed into the classification layer.
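A sketch of the pooling step described above (PyTorch assumed). Since both streams here are the same MobileNet V2, the symmetric form suffices; the signed square root and L2 normalisation are the usual post-processing and an assumption, not stated in the text.

```python
import torch
import torch.nn.functional as F

def bilinear_pool(feat):
    """Symmetric bilinear pooling of conv features, feat: (B, C, H, W)."""
    B, C, H, W = feat.shape
    f = feat.reshape(B, C, H * W)
    y = torch.bmm(f, f.transpose(1, 2)) / (H * W)    # (B, C, C) pooled outer products
    y = y.reshape(B, C * C)
    y = torch.sign(y) * torch.sqrt(y.abs() + 1e-12)  # signed square root
    return F.normalize(y, dim=1)                     # L2 normalisation

pooled = bilinear_pool(torch.randn(2, 128, 7, 7))    # -> (2, 16384), i.e. 128 x 128
```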
| r | a96085d8f61f4737c8a3c64400a5699f |
whose target is the dual of the Lie algebra of {{formula:202aa497-cf52-46b1-b7cc-114fca8923ed}} . According to the Marsden–Weinstein Theorem {{cite:f2a766045d2d0211f6c00cc594388b2d4556e21c}}, we may use this moment map to “reduce” the symmetries of the manifold {{formula:942c6459-9d7d-4e64-ba12-105ca16e0fc9}} at any point {{formula:738703a2-92a5-4b80-92e7-009404a472a4}} of {{formula:ef07a490-1200-4960-9484-b63adcc52d68}} that is a regular value, by taking the quotient
{{formula:3dba5bb6-d1f6-4731-8057-c8db6fc4b13d}}
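The quotient is templated out in this extract; the standard form of the Marsden–Weinstein reduced space, quoted for reference under the usual freeness and properness assumptions, is

```latex
% mu : M -> g* the moment map, p a regular value, G_p the stabiliser of p
% under the coadjoint action; the reduced space
\[
  M_{p} \;=\; \mu^{-1}(p)\,/\,G_{p}
\]
% inherits a symplectic form from M.
```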
| i | 35cd4a4b6b92572e30e629a260d3cf8f |
The proof follows along the lines of {{cite:d57d13aa636f17800ce8d9f08b45ac8410aec0bc}}, using the distribution as the transition probabilities.
| r | b308150c5d41d406f1532ea7aee457bb |
As was shown by the success of RAEs {{cite:01bc235e441d80872ad340fcd25284c6c38c82a9}}, there is often a mismatch between the induced posterior and the prior of generative models, which can be removed by an ex-post density estimator. InvGAN is also amenable to ex-post density estimation. When applied to the tiled latent codes, it estimates a joint density of the tiles for unseen data. This would recover a generative model without going through unstable GAN training.
| d | e928c0ae66d04c99c0ff7b1b08f09925 |
In all cases shown there was an “explosive” transition as the coupling strength was varied, either from one fixed point to another or between a fixed point and a periodic solution; an exception occurs at lower degree-frequency correlations in the case of power-law distributed frequencies. A variety of scenarios was seen. This is in contrast to the similar and much more widely studied Kuramoto model, for which transitions occur only between an incoherent fixed point and a partially synchronous periodic solution {{cite:f7009e260cb81ecfe46ffdca53323ef8c78546a1}}, {{cite:3e42a5d4147daf4dd549253afa5785d5ab78c874}}.
The range of scenarios is a result of the form of a Winfree oscillator: for strong
enough coupling an oscillator will approximately lock to a fixed point, so that even though
the measure of synchrony {{formula:e883fb33-453d-4d15-b9ed-d7e3d7063eb8}} may increase as coupling strength is increased,
this does not necessarily correspond to synchronous firing, as can be seen in Figs. REF
and REF .
| d | b0c629ef727dae68787e5aab9eb419d3 |
In order to avoid results stating that classical algorithms are more powerful than quantum algorithms, we could consider classical algorithms with access to an oracle that is at most as powerful as inputs given to the quantum algorithms.
For example, when comparing to quantum algorithms with quantum state inputs, we could consider classical algorithms with access to classical data obtained by measuring the quantum states.
Classical algorithms with access to measurement data are still very powerful, being capable of predicting outcomes of quantum experiments {{cite:210e3428196062fc6d4a4423bc3afaadbf1c7a38}}, {{cite:90551db8022a87f1da97e252372837381f7bf4b7}}, {{cite:52e077bd2627f9c1845b74a94929b81fe67b5a75}}, {{cite:0608cbe55edc494d8610dc55bd5fff6bf3441725}}, classifying quantum phases of matter {{cite:c3563cad11b5dc192218933f82f946d2ffb5499e}}, predicting ground state properties {{cite:c3563cad11b5dc192218933f82f946d2ffb5499e}}, etc.
However, classical algorithms with measurement data access will never be more powerful than quantum algorithms with quantum state inputs because quantum algorithms can always perform the measurements within the algorithm.
This would then avoid perplexing statements such as the claim that classical algorithms can be exponentially faster than quantum algorithms.
| d | d102d6502298d323df27f1eceaba0b5c |
The idea behind this work is to model the task of selecting the most suitable features for a given problem as a meta-heuristic optimization process. As stated in Section , feature selection stands for a proper selection of features, reducing a particular problem's dimensionality and usually enhancing its performance. Also, as the proposed approach is a wrapper-based one, an objective function must be defined to conduct the optimization process. The proposed approach therefore aims at selecting the subset of features that minimizes the classification error (maximizes the classification accuracy) of a given supervised classifier over a validation set. Although any supervised pattern recognition classifier could be applied, we opted to use the Optimum-Path Forest (OPF) {{cite:8d389345a12beb207dbd1f23045414e424b4651f}}, {{cite:675b0ecb877afcd3a6a689eabd82bb1a0b7ec13e}} since it is parameterless and has a fast training procedure. Essentially, the OPF encodes each dataset sample as a node in a graph whose connections are defined by an adjacency relation. Its learning process finds prime samples called prototypes and tries to conquer the remaining samples by offering them optimum paths according to a path-cost function. In the end, optimum-path trees are obtained, each rooted at a different prototype node.
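A minimal sketch of the wrapper objective (Python). Since OPF is not part of scikit-learn, a k-nearest-neighbour classifier stands in for it here; the function names and data splits are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X_tr, y_tr, X_val, y_val):
    """Wrapper objective: validation error of a classifier on the selected features."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 1.0  # penalise empty feature subsets
    clf = KNeighborsClassifier().fit(X_tr[:, idx], y_tr)
    return 1.0 - clf.score(X_val[:, idx], y_val)
```

The meta-heuristic would then search over binary masks to minimize `fitness`.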
| m | 0ecd92c42630fa0a1445936b6b2a2778 |
This paper considers causal discovery for ordinal categorical data. Categorical data are common across multiple disciplines. For example, psychologists often use questionnaires to measure latent traits such as personality and depression. The responses to those questionnaires are often categorical, say, with five levels (5-point Likert scale):
“strongly disagree",
“disagree",
“neutral",
“agree", and
“strongly agree". In genetics, single-nucleotide polymorphisms are categorical variables with three levels (mutation on neither, one, or both alleles).
Categorical data also arise as a result of discretization of non-categorical (e.g., continuous and count) data. For instance, in biology, gene expression data are often trichotomized to “underexpression",
“normal expression", and
“overexpression" {{cite:127fd609f0da4b887fa3c461a30ba796521775d4}}, {{cite:c10beb063551b826a3956526c3e4ba1e8cfa9291}}, {{cite:fcc4386dba482e6f16c997c29ce2cae08dd25b90}}
in order to reduce sequencing technical noise while retaining biological interpretability.
| i | 3be30c8079436133c77054702d3b301e |
Another natural generalization emerges by analyzing the Krylov complexity of {{formula:30ff5512-38ad-48fb-ae3e-62c2fed92289}} , as time evolves, i.e., by considering composite operators at non-coincident locations. In this setup, the OPE does not collapse to a single operator but is expanded in terms of OPE blocks, see {{cite:341c194bd8a38b49f81e206ae940b9cce5e5d23d}}, {{cite:8824251195559b9b8f9331627a71bb98d056e913}}, {{cite:45d3f22b607f64a4a5f86acc7e5d7f60b00cdef7}}. This suggests considering first the Krylov complexity of OPE blocks. In the light of the present article, the block Liouvillian is expected to take the generic form (REF ) and the block complexity may be related to volumes in kinematic space {{cite:8824251195559b9b8f9331627a71bb98d056e913}}, {{cite:45d3f22b607f64a4a5f86acc7e5d7f60b00cdef7}}. We leave these interesting problems for future work as well.
| d | b9187c0f951a5264e291037c9a39839d |
Choice of the equilibrium point
In a first step we choose an equilibrium point corresponding to a sustainable state of the system: if the system is at that point, it will never leave it.
An equilibrium point, as known in the dynamical systems theory, is a state such as if the system reaches that state, then it will always remain there. The strategy we are looking for is one that forces the states of the dynamical system to approach the desired equilibrium point as time progresses.
The state of the system may never become exactly the equilibrium point; it will get gradually closer and closer to it, hence the behavior of the system will get closer to its behavior at the equilibrium point. In particular, we will choose an equilibrium point where the utility functions of each stakeholder are above a certain threshold. We will call such equilibrium points sustainable.
Calculating a safe set and a feedback strategy
In addition to reaching the equilibrium point it is necessary to find a set that contains the equilibrium point and whose elements are all sustainable. Recall that by sustainability we mean that the utility function of each stakeholder is above a certain critical value. We will call such a set a safe set.
In parallel to calculating a safe set we also calculate a strategy such that, when the strategy is applied, the safe set is invariant. By invariance we mean that if the initial collection of attributes is in this set, then the attributes at any later time instance will also be in that set. Moreover, under the application of this strategy, in the absence of disturbances, the attributes converge to the chosen equilibrium point.
As a consequence, if we start in the safe set, we are sure that we will always remain there and converge towards the equilibrium point. If the initial state is not in the safe set, then the proposed strategy is not guaranteed to yield a sustainable behavior. Indeed, the safe set must ensure that all elements that belong to it are sustainable and satisfy the constraints on attributes and actions. That is to say, even with the disturbance, the system will still be sustainable, although the attribute vector will no longer converge towards the equilibrium point.
Application of the strategy: feedback.
The calculated strategy will be in the form of feedback. That is, at each time instance, the action prescribed by the strategy is a function of the current attribute values. The use of feedback and the properties of the safe set guarantee that the strategy is robust. If the actual attribute values differ slightly from the ones prescribed by the model, due to external shocks (disturbances) or modeling error, but they are still in the safe set, then the application of the feedback will ensure sustainability and convergence to the equilibrium point in the absence of further disturbances. This property of feedback strategies is widely used in engineering ({{cite:c7867e3c1a5c1a7e2604313367ff7b264ca57c90}}).
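A toy sketch of such a feedback strategy for linear attribute dynamics (NumPy; all matrices and numbers are hypothetical, and in practice the gain would be chosen so that the safe set is invariant):

```python
import numpy as np

A = np.array([[1.00, 0.05],
              [0.00, 0.95]])      # hypothetical attribute dynamics x' = A x + B u
B = np.array([[0.0], [1.0]])
K = np.array([[0.5, 1.2]])        # stabilising gain, assumed precomputed (e.g. via LQR)
x_eq = np.array([1.0, 0.0])       # chosen sustainable equilibrium

def feedback(x):
    """Action as a function of the current attributes only."""
    return -K @ (x - x_eq)

x = np.array([0.2, 0.4])          # initial attributes, assumed inside the safe set
for _ in range(50):
    u = feedback(x)
    x = A @ x + (B @ u).ravel()   # approaches x_eq in the disturbance-free case
```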
| m | d89e8347bdcfe9b8867b87260c57c58a |
From the table, we see that the performance of all methods consistently improves with data augmentation. Additionally, among all methods, BRL performs the best. However, the difference between BRL and soft-label knowledge distillation is not as prominent as before (see Figure REF ). Nonetheless, such behavior is not surprising as we expect soft-label knowledge distillation and our method to eventually reach the teacher's performance, given that a sufficient amount of data (i.e., training examples) for distillation is provided {{cite:dfbd59033da08921067019b0af6fbcf176b29548}}. As we show in the main experiments, BRL delivers a clear advantage in a setup with a limited number of samples.
| r | 65888cc39c59998ea95dca8cc92a4dee |
So far, the focus of this study has been on designing the acoustic model. However, performance of the acoustic model can further improve by deploying more robust input features other than MFCC. In the final section, we evaluate the proposed method trained on noise-invariant Wav2Vec features {{cite:eaeafc02e042f0c6aa6d96912f862dc809260f3d}}. Wav2Vec representation has been trained on large amounts of unlabeled audio data in an unsupervised manner. Fig. REF (left) shows the WER of the baseline ResNet18 trained on MFCC and Wav2Vec features, which manifests the effectiveness of Wav2Vec in reducing the WER across all test sets in the absence of speaker auxiliary information. The highest improvement is achieved for overlap speech with SIR 15dB, which is {{formula:1791f604-d2f9-426f-8f1c-d590087591c5}} absolute improvement in WER. Fig.REF (right) depicts the WER of the proposed speaker conditioned ResNet18 trained on MFCC and Wav2Vec. Similar to the baseline, the speaker conditioned acoustic model benefits from the Wav2Vec features by achieving {{formula:7c26883a-a909-4918-9fb5-c56c1e0db2d4}} absolute improvement in WER for SIR 15dB. However, due to the availability of speaker information, the acoustic model is less sensitive to the robustness of the input acoustic features, and thus, the amount of improvement from Wav2Vec is less in the proposed speaker conditioned ResNet18 compared to the baseline. In conclusion, the WER across all test sets is improved by using the proposed speaker conditioned acoustic model trained on wav2Vec. For example, on the overlap speech test set with SIR 15dB, the proposed ResNet18 with Affine Transformation trained on Wav2Vec gains +33% relative (+11% absolute) improvement in WER compared to the original ResNet trained on MFCC.
| r | c91579a50d2d2582f0ff06b6cb157a45 |
{{cite:c2d3303f5712d9c02b47d69a16d220dcb9042987}} considered the case of edges representing different types of relation between nodes, and proposed to specialise the message function {{formula:7b4590f8-00b3-4722-837e-1f1402f8c2cb}} with regards to relation type {{formula:e4ec74c5-60ba-4e07-9f73-a2f13859a3f1}} . This was achieved with distinct perceptrons. In {{cite:30ca6c572ed285010cbe210b11ca8dcb4b849588}}, Zhang et al. obtained a similar specialisation of {{formula:8e95c70a-0841-4c89-a436-caa141753686}} to the relation type using specialised weights. In {{cite:f2cc09debbb8de9d5696d490914f870b8fe0b8b5}}, Chen et al. took a different approach to account for relation type, with separate and specialised GNNs for each sub-graph of a given edge type, before final fusion of learnt representations.
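A sketch of the relation-specialised message function shared by the first two variants (PyTorch assumed; the class and argument names are illustrative):

```python
import torch.nn as nn

class RelationalMessage(nn.Module):
    """One message transform per relation type."""
    def __init__(self, num_relations, dim):
        super().__init__()
        self.f = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_relations))

    def forward(self, h_neighbor, r):
        # r indexes the relation type of the edge carrying the message
        return self.f[r](h_neighbor)
```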
| i | 8ac2cf4b3de4a453dcae1446b36a339d |
Data collected for the purpose of machine learning often lies in a high-dimensional space, yet is believed to possess a certain low-dimensional structure; that is, the collected dataset can be well approximated by a low-dimensional manifold sitting inside a high-dimensional Euclidean space. See {{cite:000305c26e17536b11f94fe6950dd8a0b8b736ce}}, {{cite:37ec07f4bc8bf10e9963a83f534fa9ddbecf9486}}, {{cite:2a2ec43e06c75f3c5793ceb3cf73b7e200688eb5}}, {{cite:375835d9851a7cfbea6a68ce1ef475f2a0b81aca}} for a far from complete list of the available literature.
How to analyze a dataset under this assumption is generally called the manifold learning problem.
One particular goal is to recover the nonlinear low dimensional structure of the manifold and to reduce the dimensionality of the space where the dataset lies inside.
Mathematically, this problem is formulated as asking whether the manifold (and hence the dataset) can be embedded into a finite-dimensional Euclidean space by a map that is bi-Lipschitz, or even isometric. Although the embedding problem was first positively answered by Whitney {{cite:88301d8b3c23f8acd0bb280a73fa2a2458226772}}, and the isometric embedding problem was first solved by Nash {{cite:1ccf3c59e482397eadf7af0b64652b910e0f2811}}, these approaches are not canonical and are essentially infeasible for data analysis. In Bérard-Besson-Gallot's breakthrough paper {{cite:538c1975fbbb13856b7e6c773344ebda3071fdc1}}, the spectral embedding idea is explored: they show that a manifold can be embedded into the sequence space {{formula:a58d01a5-3fc4-48e2-8398-87b9c563bef6}} using all eigenfunctions.
| i | d39ec7b518a87e955e471081522aaabf |
Also, the amount of {{formula:07e2b794-6c65-41a6-baf5-c9f441dfcb20}} violation manifested in Jarlskog invariant ({{formula:9d20a2fc-cfa9-48fa-81c3-1a2d1c1ecaf4}} ){{cite:fb69d00d78310ec95fdde802021913386ff1f3d2}}, {{cite:6cdc75a33bcba289761b9ae3b47ac4cb3075fc88}} is defined as
{{formula:2cee5d0f-c33b-4a8a-bcdc-f91612ee8908}}
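The expression is templated out in this extract. For reference, in the standard parametrization the Jarlskog invariant takes the familiar form below; that the source uses exactly this convention is an assumption.

```latex
% s_ij = sin(theta_ij), c_ij = cos(theta_ij), delta the CP-violating phase.
\[
  J \;=\; \operatorname{Im}\!\left( U_{e1}\,U_{\mu 2}\,U_{e2}^{*}\,U_{\mu 1}^{*} \right)
    \;=\; s_{12}\,c_{12}\,s_{23}\,c_{23}\,s_{13}\,c_{13}^{2}\,\sin\delta .
\]
```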
| d | f5d694e30b92606f4aed62adc83ea2d4 |
Transferability with Data Augmentation. Models trained with data augmentation are generally more robust than those trained without it {{cite:b9c640f1950d59fb9dd69628f0acbe70f0f0ac75}}, {{cite:b882eeb989eb0044eafca6742008b50db0a5bdc3}}. In this subsection, we explore the transferability of adversarial samples generated by AOF when transferred to models trained with data augmentation. The augmented models are trained on ModelNet40 with random scaling, random rotation, random shifting, random dropping, and random point cloud jittering; the adversarial samples are generated by attacking a DGCNN model trained without any data augmentation. Here, we pick {{formula:87e5770e-4677-4c1c-9e57-5860bd2b6cae}} ; the other experimental settings are the same as in Sec 4.3. We denote the attack success rate on models without data augmentation as {{formula:c8897573-1f03-4a97-94c3-331c6d6032ed}} and the attack success rate on models with data augmentation as {{formula:e2f8758e-f9d5-40cb-8486-6b511dbd1944}} . Then, the drop rate of transferability can be expressed as
{{formula:64aa0d7d-7926-4ac3-bdef-4b81c1984d66}}
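The expression above is templated out in this extract. One natural definition consistent with the surrounding text (an assumption, not necessarily the source's exact formula) is the relative decrease

```latex
\[
  \text{drop rate} \;=\; \frac{R_{\mathrm{ori}} - R_{\mathrm{aug}}}{R_{\mathrm{ori}}},
\]
```

where R_ori and R_aug stand for the two attack success rates just defined.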
| d | 3a1e06f7c5f433e4389fa30755a01bcc |
Unlike circumplanetary disks, circumstellar disks have been thoroughly characterized from observations during the last decade thanks to optical/near-IR instruments like VLT/SPHERE and GPI {{cite:a1ced593addbd33a542966adbc004ffd2e8fb04a}}, {{cite:1caea006e50b4e8a6ac3d81a7e684ac350bc1392}} and to the (sub-)mm interferometer ALMA {{cite:6b77ceb0adfec0de41f6be1463189e74a1781a28}}. Among the near-IR observations, the most successful technique for directly imaging circumstellar disks is currently polarized differential imaging {{cite:57cd9128c3e3d22ef98c315d89081ac5c1e816a4}}, {{cite:2adf2a74a0709b018b5de4c72511a5225e0e687e}}. This technique allows a very good removal of the strong stellar flux by separating the polarized light (mostly scattered light from the disk) from the unpolarized light (mainly stellar light). Therefore, most of the available high-resolution near-IR maps of circumstellar disks trace the polarized scattered light from the disk surface. In principle, these polarized scattered light observations also open the way to detecting circumplanetary disks in the same manner, although this has yet to be demonstrated observationally.
| i | ceab9adf9162cb800a2f4670e152e41a |
Similarly to the low speed regime, the drag reduction properties of each sample relative to the control disk are plotted in Fig. REF . The best results were observed for the 100 {{formula:97cd6202-62aa-4504-85a6-afa5d0d7ac2d}} m deep groove sample, which reduced the drag by up to 20 %, even at high rotational speed. The 10 and 1,000 {{formula:21a7a8ba-8c20-40db-8ec6-78455bfffdd5}} m deep groove samples were capable of reducing the drag by 5 % to 10 % only. Note that larger drag reduction was observed for all the samples at 44 rad/s, which corresponds to the transition to turbulence for the cone-and-plate flow {{cite:852b3e15e2be6342d8b08568c079ae06e038daad}}. This larger drag reduction may be caused by a delay in the transition to turbulence, which would result in lower torque measurement.
| r | bbf4eb142c61b907b116d7df4ff97e17 |
Improving the calibration and reliability of confidence estimates is an active research area. Guo et al. {{cite:c87fd90b1f5cef91002bc7ab7cca95e1b55512d9}} found that a DL model is often overconfident in its predictions; they examined a range of methods to calibrate model output confidences. For example, temperature scaling gives the best empirical calibration results due to its strongly non-linear nature; it requires additional labeled data for calibration and does not change the rank order of the original uncalibrated confidence estimates. In {{cite:fd9ff423335b4d515969fd7f33e0701c01f048a3}}, {{cite:4beee2049414131304feea65566a260b7958e415}}, it was proposed to use model ensembles to improve the reliability of prediction confidence estimates. Dropout-based and Bayesian methods {{cite:96ca4fc6f99cd3dbd9a57e31ec9542ac6c5a92fc}}, {{cite:4ca94ef1a590bb562e654e097c8f30bd10f8ec43}} are also popular choices for better uncertainty/confidence estimates. On the evaluation side, Expected Calibration Error (ECE) {{cite:c87fd90b1f5cef91002bc7ab7cca95e1b55512d9}}, {{cite:4beee2049414131304feea65566a260b7958e415}}, {{cite:23e71b7f6048e71574c982daa908bf577f9924d6}} is widely used for evaluating the calibration of prediction confidences. ECE computes the absolute difference between the correctness score and the confidence score for each prediction (or each bin of predictions) and takes the average of the computed absolute differences as output. Low ECE values indicate that confidences are better matched with the actual correctness scores. Similarly, the Brier score (Br) {{cite:e79607e8e0a3cb2a78944f1db8f2ed3d1e69491e}} measures the mean squared difference between each probabilistic prediction and its corresponding ground-truth label. ECE and Br quantify how well a model's confidence estimates (CE) match the correctness scores (CS) in value, but they do not measure alignment/matching in rank.
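A compact sketch of the two metrics as described (NumPy; equal-width binning and the variable names are assumptions):

```python
import numpy as np

def ece(conf, correct, n_bins=10):
    """Expected Calibration Error with equal-width confidence bins."""
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            # bin weight times |accuracy - mean confidence| inside the bin
            total += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return total

def brier(prob, label):
    """Mean squared difference between predicted probability and 0/1 outcome."""
    return np.mean((prob - label) ** 2)
```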
| i | 3dd713532f120c19f2d305e5f03ac070 |
We analyze and alleviate a single shortcoming of using post hoc explanations. However, the post hoc explainers we consider have also been shown to be inconsistent, unfaithful, and intractable {{cite:beca28d20cb8879395132db5d8e363540832a1fc}}, {{cite:fc396c15c446b409cc712bba0869cd008bb32d52}}, {{cite:7aa4ee0e8672ed6ae3147507e62acffa6afbc856}}, {{cite:f941e3d24b262d685b52752f31cef7275f610402}}, {{cite:472eb881825b7bf984e1e7dd147e301e86ebb88a}}.
Consequently, we believe that a potential source of negative societal impact in this work arises from practitioners overtrusting post hoc explainers {{cite:2e0facbc79142b387f5d3e2a1571335d2e493b51}}, {{cite:beca28d20cb8879395132db5d8e363540832a1fc}}. While this is the case, our study demonstrates that the explainers backed with our proposed defense not only detect adversarial behavior but also faithfully identify the most important features in decisions. Moreover, if an explainer is not to be trusted, our approach at least can exploit it to identify misbehaving algorithms.
In future work, the ramifications of an adversarial organization caught red-handed should be explored in the context of existing regulatory guidelines, such as the EU AI Act draft {{cite:16795a17ee174ead3540c139092752290af334d9}}.
| d | 5dc0f87016cdf7fc9b1f293ece1f7efb |
Anomalies flow by an AB phase. It is known that anomalies in four dimensions are related to
global topology of the space through the index theorem.{{cite:0d2760dad6602473026a366178edea51288b1732}}, {{cite:97db92749f8e45bb11284f26815a53ee3ec7474c}}
It is challenging to understand the anomaly flow
by an AB phase from the viewpoint of the index theorem.{{cite:6e56f16c259f58202826669c221c7478c1622baa}}, {{cite:85c04026db8bef87ee91c1a277e9dd228d771bbf}}
Gauge theory in the RS space or in the flat {{formula:056e5ced-36e6-480a-a6d9-fe352f7f4748}} spacetime can be formulated
as gauge theory on an interval {{formula:1cda4199-ca83-4039-b9d3-e68bb556b173}} in the fifth dimension
with a special class of orbifold boundary conditions at {{formula:59f04d3c-3cd2-4c63-bf2a-2b491f776375}} and {{formula:bbf10e9c-69d4-4163-a9eb-f2c0a37d1b65}} .
In the twisted gauge in GHU the AB phase {{formula:a7aca5cc-4145-4053-954a-1fc28068dee5}} appears as a phase parameter specifying orbifold
boundary conditions. Anomaly and the index theorem in orbifold gauge theory with nonvanishing {{formula:ac7de7a3-ad7a-49ff-9098-99e76bdfa182}}
have not been well understood so far.
To elucidate the anomaly flow by {{formula:ce96bcbc-580a-432a-b823-485244aeb637}}, the RS space provides a powerful tool,
as no level crossing occurs in the mass spectrum and the anomaly changes smoothly with {{formula:f2b1bd0e-79d3-4435-92b8-ea09136141c1}},
quite in contrast to the behavior in flat space.
| d | bfa9b5ae65172d17bca4d9ba879562ea |
The crystal structures of {{formula:211cfffb-4800-4366-89fe-85e3637817a0}} , {{formula:9fe8071d-4265-46a6-9edd-04bfbeb928e4}} , and {{formula:c22154a0-fd4a-424b-88f7-8c425a9bff0f}} -CeXH{{formula:9174cb79-d2c9-4a5f-878f-9d43c2050041}} (X=B, Be) are shown in Fig. REF . Red spheres represent cerium atoms, green ones represent beryllium or boron atoms, and white spheres represent hydrogen atoms. For the {{formula:a5ef8bba-a8b3-44e9-8088-b23dc72e7ce6}} phase, Ce and Be/B atoms occupy the 4{{formula:8994b651-b240-4e04-b393-260f5cc0bb95}} and 4{{formula:6b93dd7c-bddb-44dc-906c-404431763bdf}} Wyckoff positions, respectively, while H atoms are located at the 32{{formula:d7824383-1e6c-4c0a-a80b-898d6a9396f2}} position. The hydrogen atoms form spherical polyhedra with a Ce atom located at the center, and a series of small hydrogen cubes with Be/B atoms at their centers is inserted in the polyhedra. The structure of {{formula:7d230e12-23cb-4791-b16e-945009655b5f}} -CeXH{{formula:e7b7f284-956a-4859-b895-f0b5840f6857}} is similar to sodalite-like hydrides, e.g., CeH{{formula:8a0c9164-0b6f-43ed-8912-5b1edb6d11d8}} , in which the guest Ce atoms act as a scaffold to exert mechanical pressure on the hydrogen sublattice{{cite:e0dde1ba242226564b5c2b0f9af21aa361c1a49a}}, {{cite:f24f7fabeb9c6485e6585351f94559334de94bb6}}. In CeXH{{formula:467137a2-b79a-40cf-b158-67517ee5b5fa}} , Be/B atoms fill the interstitial sites between the second-nearest Ce atoms, efficiently filling the remaining voids in the CeH{{formula:20d068c4-87d8-481f-89ba-9f88f974d2fb}} structure{{cite:db7cc557eec1307045aa300fda7e74ff8346ddd7}}, {{cite:cdec2da28aeae6166058c7f7fdb17ba2b6b1d961}}. In the {{formula:4d2133d3-db22-4627-bfd2-8c20e62b32cc}} structure, Ce and Be/B atoms occupy the 3{{formula:305a5b61-a8d4-485a-9791-cd4288842e5f}} and 3{{formula:d634b1ad-dafc-4d7e-9161-40e4c64ba69a}} positions, respectively, while H atoms are located at the 18{{formula:1b5b0c10-f71c-461e-8d48-84e11d64a859}} and 6{{formula:15b8ccd8-6347-476c-9114-68c1d74fd665}} positions. In {{formula:f5b19863-3acc-4b1c-aae3-e20600cbccb3}} -CeXH{{formula:a3dd8d6a-5f86-4da1-a8cb-d4bf76a63be9}} , the Ce atom occupies the 2{{formula:a9ff1f9c-35f0-4c36-a099-657f93200dc5}} position, Be/B occupies the 2{{formula:31737872-9a2e-473a-9e38-8f7cef18972e}} position, and H atoms are located at the 8{{formula:49a08e51-07c9-43fa-b179-1aa2f1769520}} and 4{{formula:70b056bb-fe02-4bdf-9559-5af3330b9262}} positions. The enthalpy curves of the different phases can be found in the Supplementary material. At pressures below 60 GPa for CeBeH{{formula:2fedada4-cfb0-4943-b7b4-fc7cf1531693}} and below 200 GPa for CeBH{{formula:8ca1467c-016b-4923-84d1-c880f4187499}} , the {{formula:683fdf14-769d-4e51-aef7-d8e9f1965ae9}} and {{formula:0ab159bb-d08a-4115-87a2-af178c42fd4a}} phases exhibit lower enthalpies; above these pressures, the three phases coexist with similar enthalpies up to the maximum pressure we studied.
{{figure:524452c0-7b3e-457c-9cbb-38193325001e}} | r | ac3f76c3412dafc76578c9136e235318 |
Here, we present the results of our classifier fine-tuning experiments on both the validation and test sets, over five randomised dataset seeds (splits) on the balanced version of the EMNIST dataset {{cite:c1fa7a68e7ab7360496aaeda5f83e31c58f34c03}}. Over these splits, the test set is the same (consisting of 10 classes), but the training and validation sets comprise different class splits corresponding to 28 and 9 classes, respectively. By fine-tuning, we refer to freezing the pre-trained classifier (Section REF ) and replacing its output (logits) layer with a new layer which defines a probability distribution over either the validation classes {{formula:eb4e4b22-2649-491e-a6bf-a916548ea446}} or the test classes {{formula:37aa331c-b077-4a4f-bf08-017d01bcc2b3}} . Note that unlike with generalised few-shot learning, here we are not interested in maximising performance over both the old and new classes – we simply wish to maximise performance over the latter. Here, we use the same parameters for ADAM as described in Section REF but lower the learning rate to {{formula:9e22e1c1-b5a8-4279-9864-998f618a0b35}} . We describe our experiments in the following paragraphs.
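A sketch of the fine-tuning setup described above (PyTorch assumed; the attribute name of the logits layer and the feature dimension are assumptions). Only the new head's parameters would then be passed to ADAM with the lowered learning rate.

```python
import torch.nn as nn

def prepare_for_finetuning(model, num_new_classes, feature_dim=512):
    """Freeze the pre-trained classifier, then attach a fresh logits layer."""
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(feature_dim, num_new_classes)  # trainable replacement head
    return model
```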
| r | 922c34251e572d86400cefa5a1099bbf |
The speech processing community uses the term segmentation to describe a multitude of tasks: from classifying the audio signal into three classes {speech, music, other}, to detecting breath groups, localizing word boundaries, or even partitioning speech regions into phonetic units.
On this coarse-to-fine time scale, speaker segmentation lies somewhere between {speech, non-speech} classification and breath groups detection. It consists in partitioning speech regions into smaller chunks containing speech from a single speaker. It has been addressed in the past as the combination of several sub-tasks. First, voice activity detection (VAD) removes any region that does not contain speech. Then, speaker change detection (SCD) partitions remaining speech regions into speaker turns, by looking for time instants where a change of speaker occurs {{cite:c579e3ba0356322fcb81409b5b3fb3da2fbb9d16}}. From a distance, this definition of speaker segmentation may appear clear and unambiguous. However, when looking more carefully, lots of complex phenomena happen in real-life spontaneous conversations – overlapped speech, interruptions, and backchannels being the most prominent ones. Therefore, researchers have started working on the overlapped speech detection (OSD) task as well {{cite:d4f8670d30adc6cba3d50710a506f7bac330fc02}}, {{cite:e716e6041e388e7de71a165efa56e1ce4c351f5f}}, {{cite:ea2c6b46d9e9b123e0268cf4b1561273ad55edc4}}.
| i | d01b404da8b17dfcfd1f4d89a4ee3e9a |
On the other hand, problem (REF ) is finite-dimensional. As a result {{formula:9ba4d9f1-d5da-4f4f-bed9-efe7be6c0607}} has far less capacity to overfit than does {{formula:49ad4bf1-e915-43f8-b047-4aa5b811c86c}} , for any given sample size {{formula:58a54211-82fd-450b-8d1a-d91313575d3f}} . Discretization is not the only way to make the problem (REF ) more tractable: for instance, one can replace the penalty {{formula:6baf892d-197f-4e91-b0a4-15de63510881}} with a stricter choice like {{formula:e5a6ba91-b7cf-4f39-8f5f-f3d0b28d8101}} , or conduct the optimization over some finite-dimensional linear subspace of {{formula:abd7929b-fa99-49a4-b5e2-cdc1fd25047f}} (i.e., use a sieve). While these solutions do improve the statistical properties of {{formula:3e5d2cdd-8ccb-4958-ab43-f6003d7182db}} for {{formula:8400c361-53b4-427c-af19-893c1fa7d28f}} (see e.g., {{cite:08733dd92cc187986cdbf0ccfebc5a888c0233e5}}, {{cite:206c31ba6158fde2c50c7a09abaae9a3fd8184d4}}, {{cite:c7f5d9556add8433855eaadba5efdd145283e58f}}), Laplacian smoothing is generally speaking much simpler and more computationally friendly. In addition, the other approaches are usually specifically tailored to the domain {{formula:fec3c49e-7bd0-47a9-9edb-4289a63ba5ee}} , in stark contrast to {{formula:74f13660-3c3c-40e0-a83f-ab5be195eb21}} .
| d | 0e0e87a5a5ac7a4ac1fd0ac836582247 |
We numerically solve Eq. (REF ) by a self-consistent method using QuTiP {{cite:d3649dab8effefc192f05ab4a66a4ddec9e34cd2}}, {{cite:efe087dc1a7dd59df0377126898d37057480f677}}. As in the case of the classical network, here also we take {{formula:e8670011-628f-4964-881c-a93d1686f124}} for the active elements {{formula:b46d34d8-06cb-4158-8d44-f821d4406287}} , and {{formula:7c0dd8f7-f2b8-41d6-b5d9-9f05ec9c8b20}} for the inactive elements {{formula:a21f2d87-3dea-4e4d-819d-632f8c059e37}} . In the network, we distinguish the oscillatory and oscillation-collapsed states by computing the mean boson number per oscillator: {{formula:caf75525-215f-4b43-b8aa-17fad82be292}} . Based on this, we define an order parameter, namely the normalized mean boson number
{{formula:fe880d53-d92d-4ffe-9238-a5a71cdeb2aa}}
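For orientation, a single damped quantum oscillator shows how a mean boson number of this kind is computed in QuTiP; this is a toy stand-in for the coupled-network calculation, and the truncation and rates are assumptions.

```python
import qutip as qt

N = 15                                   # Fock-space truncation (assumed)
a = qt.destroy(N)
H = a.dag() * a                          # oscillator Hamiltonian, units of its frequency
c_ops = [0.2 * a, 0.1 * a.dag()]         # illustrative damping and pumping rates
rho_ss = qt.steadystate(H, c_ops)
n_mean = qt.expect(a.dag() * a, rho_ss)  # mean boson number <a†a>
```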
| r | 6c5d0ab8375414848fd0c43a80aeecb2 |
The Bernstein condition for the log loss is satisfied, for example, when the model satisfies the Lipschitz and strongly convex property for the parameter {{formula:959dffad-dd04-45a9-af48-97a3a6402b29}} {{cite:ef4ab73d2981ef24e68fc8ff42e6e89ab62c1f37}}. Moreover, the Bernstein condition leads to the fast convergence, which results in {{formula:ca4fffe5-7210-4608-bb39-344ea20dbc80}} in PAC-Bayesian bound; see {{cite:ef4ab73d2981ef24e68fc8ff42e6e89ab62c1f37}} for the detail.
| d | 2d3d044892a696092bcfb2babb9ab001 |
Let us introduce a last kind of scale, the quantization scale. It generalizes the quantization dimension, which has attracted much research interest {{cite:6aedf6aea7587cbd82b68a0cba080f169b4a044e}}, {{cite:5890ed7796c5e9f9adb2fbd1c26c745dc3780e24}}, {{cite:332d1f7bdc852098c0aca835811dc28a1f45d3d2}}, {{cite:fdc53bae360c9bebd81326d7e76e669ed19cbf6a}}, {{cite:815293b370c099226c356e48105b3c3f4dd15537}}, {{cite:9b9988cbfca7fda70ab6e81e7b6f8bc40c6b6c1a}}, {{cite:7d79a0e44112ac2790cf0945df0fda28f64442a8}}.
| r | 2c1d09dd7716c9b807346073e9e24807 |
While many of these observations should generalize beyond our particular settings, additional modeling or randomization may be necessary for other tasks or robot morphologies. For example, due to different actuation, the ANYmal robot has been found to require additional actuator modeling {{cite:f1a356c107985f639ea321c667ffba9433d42ab7}}, and significant latency in the control loop of the Ghost Minitaur robot has been found to require randomization of latency {{cite:bc92775a193d4251bffec5933e267466aea9de43}}. We advocate identifying important, i.e., high-sensitivity, sim-to-real bottlenecks using simulations and performing necessary additional modeling or randomization only for the relevant parameters instead of arbitrarily adding randomization to a larger set of parameters, as has sometimes been done in the past.
| d | 2708b18fbe713febdc66890acc4e2089 |
Recently, face hallucination methods based on convolutional neural networks (CNNs) have achieved significant improvements over conventional face hallucination methods because of their powerful feature representation capacity.
Compared with general images, a face image is a highly structured object {{cite:63b9c3442d1421d3ae6277d95af0fb0070cb99ca}}, composed of a facial shape model and a facial shape-free model (facial texture model) {{cite:45df97a167693fef005274bc9e8eaf2ac30b772e}}, {{cite:426857b30b264e5a8835a4c698cc869bf1e8b224}}.
From this point of view, CNN-based face hallucination methods can be divided into two categories: texture-oriented {{cite:cd0873c25f07b55837de4a4199b6884d04d6aad8}}, {{cite:e6d822c6dc6f6f3cde7d452ae139472fdd906e51}}, {{cite:8e5d3c803bcdd92731d5eb169160cf24aab04681}}, {{cite:b0da1a148e987c870039361bcafd1b05a36dfb2e}} and shape-oriented methods {{cite:6e204479aba73ab1c75a12667199dffdbbd2c0d1}}, {{cite:9116bfa6d7e0a8316a86c778932eaca89fcdf82f}}, {{cite:dcc56c6959d9e97b1794efcd090d5ac7d1366691}}, {{cite:00c0a908ee8764e56d5ae7f1439769900c7cddb7}}.
Texture-oriented methods aim to restore fine-grained facial texture details through deep semantic features extracted by CNN.
For example, Zhou et al. {{cite:667e5e900f1e1e68c6ec19958e26ac27f8ad5e3f}} first introduced CNNs into face hallucination, designing a bi-channel convolutional neural network to learn a mapping function from LR face images to HR face images.
Lu et al. {{cite:2a3eb8d75a71b587b461efc45034ac2e1bf89892}} proposed a global-local fusion network to refine high-frequency information, thereby recovering fine facial texture details.
| i | 3e2b791beba7dd1ec0d26c534b289e1f |
This appearance of the multi-differential groomed jet distributions motivates obtaining a more fundamental and precise description of these cross sections.
In recent years there has been significant progress on the analytic treatment of multi-differential distributions. (By multi-differential we refer to observables where the same set of particles in a given jet region is subjected to multiple measurements, excluding cases that are essentially an overall kinematic property of the entire jet, such as jet rapidity or jet-{{formula:093587e5-73fb-44fe-b9cb-98e1d4612eb4}} .) Some of the most significant recent advances {{cite:cf86a4f060184c9ef1d75311682c39271ae37b85}}, {{cite:a6770396cf81519d26bff661c138e07a8c24d27b}}, {{cite:bc3be7743a0b14999dddabca31dbe6e627283468}} have been made using SCET {{cite:0decf34b688aaffc0e692799e143822fd07c9c69}}, {{cite:717fad7a8d5152ff4f6e15ed4eca6db308b0d43d}}, {{cite:9951e64ead93a31bc8e56464dcf2927e93f74283}}, {{cite:348175672c593c87412a7d006979050b5776a0b0}}, {{cite:4501d415d1fd04a118c9e45f47f9327587c24073}}. In particular, double differential cross sections have allowed us to understand correlations between two different observables, such as the simultaneous measurement of two angularities on a single jet {{cite:cf86a4f060184c9ef1d75311682c39271ae37b85}}, {{cite:a6770396cf81519d26bff661c138e07a8c24d27b}}. Doubly differential cross sections, such as 2- and 3-point energy correlation functions, have been used to calculate the distribution of the groomed {{formula:447efc00-5c2c-44c7-8498-82d5a9e48b21}} at NNLL accuracy {{cite:f0052270497e1fdd56e25361d4e9b4c5f82fc04e}}, {{cite:537f4e479b1942c51fdc7374f2773170ed9765f7}}, and to develop a novel formalism for non-global logarithm resummation {{cite:0ad217d7959f3cadac57f7dceddb7de6a0241e9d}}. In Ref. {{cite:bc3be7743a0b14999dddabca31dbe6e627283468}} a joint resummation for 0-jettiness and total color singlet transverse momentum {{formula:dccbbc0e-2c44-4f5d-982b-229adc0d9648}} was performed at NNLL accuracy, where the novelty of this work lies in the fact that the two observables have different sensitivity to the recoil effects, leading to different logarithmic structures that are simultaneously resummed.
| i | c35440854a55e7d2bf7b54e2f6dd2290 |
Let {{formula:e09ba82b-9042-4ff0-82da-1d8c715a876b}} denote the variables from which independent and identically distributed data is collected, and let {{formula:db72657a-bf16-4477-8ef3-c46d8277d9ef}} be the finite-dimensional parameter of interest, assumed to be regular {{cite:3e3bbfee4ca1bb0528444c7fa0409c90d7520d24}}.
We consider the class of parameters {{formula:1d20b707-e757-40ef-9076-20a6ccb8c25b}} for which the influence function is of the form
{{formula:24bf6717-2ac4-4898-b4a6-1597d1210db8}}
| m | d1fbc31996e8e024aaded8189a8ed22d |
We showed that the morphology-independent analysis with BayesWave can successfully reconstruct beyond-GR signals for all injected values of {{formula:72d6cb33-ff78-4be0-a7e8-a3a54d22cb3d}} . This analysis makes minimal assumptions, assuming only that the same signal be detected in different interferometers, come from one sky location, travel at the speed of light, and contain only tensor polarizations. Both GR and dCS fit into this class of assumptions. Should LIGO-Virgo observe a signal with significant beyond-GR effects, a morphology-independent analysis such as BayesWave will be able to fully reconstruct the signal. When compared to a Bilby analysis that is based on GR waveform models, comparing the two reconstructions will reveal a discrepancy and additional, unmodeled dynamics in the observed signal. Though not considered in this study, parametrized or residual tests might also be able to identify the beyond-GR effects {{cite:c5035b835f74be56c9684aba2a08117033c06a52}}, {{cite:52e5353c36c9c9fd8d5f388a6f95ca0b4c901097}}, {{cite:ea5c91b5f109f3127eea52d877d035a12f82fa2a}}, {{cite:6303839392fd19a875bc60e0348f5a78b6e85a07}}, though the latter is only sensitive to large deviations {{cite:e339b8a8ee76ab4e02c569346f98ac803f53b1d9}}.
| d | a16d22ee7ebf86ab33c16bc0cd9cef2a |
The following unusual identity was discovered through different
manipulations of the saddle-point method in order to derive
Stirling's formula, which has a huge literature since de Moivre's
and Stirling's pioneering analysis in the early eighteenth century;
see for example the survey {{cite:56a269518469ac5daabf0db3af86f5e22bfbf9a9}} (and the references
therein) and the book {{cite:77436170be2d543c5f57a9ebb48054c56f043852}} for five different
analytic proofs. While the identity can be deduced from known
expansions for {{formula:534e1109-a763-4bb3-a124-bc65349cad71}} (e.g., {{cite:3700b45cb9da9c71c0514d6fdbb94f7904dc1cf5}}, {{cite:5ab1b590fce59179e1f3065fca543d276fbaf115}}), the
formulation, as well as the proof given here, is of independent
interest per se. Denote by {{formula:817ef936-7c1a-4958-8561-9a3b7292dafc}} the coefficient of
{{formula:3b06b743-4407-472f-b4a1-5941edf7318b}} in the Taylor expansion of {{formula:3b159445-974f-4756-b8e5-981643ba2e66}} .
| i | a932e9e54d6531cdc1c5f14562c46c48 |
Figures 1 to 5 show the multiplicity distributions of target evaporated fragments emitted in the forward hemisphere, the backward hemisphere, and the whole space. It is found that the distribution is well fitted by a Gaussian distribution for 3.7 A GeV {{formula:a3410b0b-21d5-4685-943c-6ad2de506578}} O, 60 A GeV {{formula:fdde1d0d-c90f-47af-9fc5-fc5b7237ea67}} O, 1.7 A GeV {{formula:2a168e21-449f-4107-b861-f36b6f550afa}} Kr and 10.7 A GeV {{formula:93d4f816-298e-47aa-a50b-dae28aa6de6b}} Au induced AgBr interactions, but for 12 A GeV {{formula:0427455b-a234-4cb7-a852-65ccbd20fba9}} He induced AgBr target interactions the distributions can be fitted by the superposition of two Gaussian distributions. The Gaussian fitting parameters (mean value and error) and {{formula:fc5c8620-111e-44d4-b4bf-fd64bed14ca6}} /DOF are presented in Table 2, where DOF denotes the number of degrees of freedom. For comparison, the results in Ref. {{cite:1397577e42db5f26ae7864be62e37a20969970f7}} are also included in the table. It is found that the fitting parameters differ between the forward and backward hemispheres for all the interactions: the mean values and errors of the Gaussian distributions in the forward hemisphere are greater than those in the backward hemisphere. The difference in the nature of the multiplicity distribution between the two hemispheres may be attributed to the fact that the mechanism of the target fragmentation process is different in the forward and backward hemispheres. Based on the cascade evaporation model {{cite:dac4d051ef4902839a479bc25922fa45a7b0f033}}, the emission of the target evaporated fragments should be isotropic in the laboratory frame, but, owing to the electromagnetic field of the projectile, the emission of target evaporated fragments is close to {{formula:74c5f75a-6d0f-4cba-944b-7d16524c996b}} symmetric and the emission probability in the forward hemisphere is greater than that in the backward hemisphere. According to the model proposed by Stocker et al. {{cite:f83537f037f52a7fbe0894bbb47383a68c890eb2}} using three-dimensional nuclear fluid dynamics, the emission of target fragments in the backward hemisphere can be explained with the help of the side-splash phenomenon. In a nucleus-nucleus collision, a head shock zone may develop during the dividing phase of the projectile nucleus with the target. A strongly compressed and highly excited projectile-like object continues to interpenetrate the target with supersonic velocity and may push the matter sideward. This results in the generation of shock waves that give rise to particle evaporation in the backward directions. At intermediate impact parameters the highly inelastic bounce-off appears, where the large compression potential leads to the sideward deflection of the projectile, which then explodes. A large collective transverse momentum transfer to the target leads to an azimuthally asymmetric fragment distribution.
{{figure:abbb0371-664a-443a-b61a-25fbd4930c5d}}{{figure:4af2c8a3-0ef5-4ea4-8e79-2012a13f1b0b}}{{figure:4f6d866e-eff1-407f-aee0-7427b86b6004}}{{figure:f03e40cd-96b7-46be-8705-e1ec4b6f47f8}}{{figure:706a35da-3397-493b-91e6-0c4c9d96c12c}} | r | 1104b34ec3ef0fa55cce4ae44770357a |
Huang et al. {{cite:0f1dbe1ba59cdeecb63619d2714d9bcb1eff65ea}} adopted spatial patch matching and a flow-based method but suffered from mismatching issues caused by corrupted flows.
Instead, {{cite:29b877e3f640a6af9eb5c8ac4744322f30a863a0}}, {{cite:7ac80e38f44d3d0500536bb01c0112b875784be3}} first inpainted the corrupted flows and then propagated pixels along their flow trajectories.
Despite the success, they still have some limitations to produce better results:
First, to estimate the corrupted flows, they fed the original frames to the flow estimator and removed the values covered by the masks.
This procedure works well in object removal scenarios where the original frames exist.
The problem is when the original frames are not available as in video restoration scenarios.
If the flow estimator takes the corrupted frames as input, the masks can interfere with estimating the motion of objects in the video (Fig. REF ).
The errors in the corrupted flows will affect the flow completion stage, resulting in incorrect flow values. Second, it is difficult to use the spatio-temporal information between the corrupted flows for flow completion, as mentioned in the previous section.
| m | 33171ebbe72b4bc297c9b685170ed403 |
By requiring {{formula:587db3f3-3f12-4e51-97f1-6935093b0a22}} , one can control the deviations {{formula:914b157e-7d71-4283-8636-56ab401a4cd5}} . Then the minimization of the empirical loss function {{formula:53f6eeec-ce79-4fed-9453-fe19b00ca14d}} is close to that of the population loss function {{formula:b6459afd-de23-496b-8fa3-4f3b39ba8994}} . There are many methods to control the randomness. For example, the authors in {{cite:78f2f1a64065edff5eabd9a9a34ead9814dfec3d}}, {{cite:27e5e1886f9044c5ca2b4b51a39484a1837264c6}} use truncation methods to obtain more delicate concentration properties. In this way, we can understand the mechanism behind many random sensing problems by analyzing the global convergence of the population dynamics.
| d | fc44b1e24f6deff0b96d791a856ab6e3 |
In its original conception, Dyson, along with Gaudin and Mehta, studied the Fokker-Planck equation associated to equation (REF ) and the laws of the eigenvalues of the self-dual Gaussian ensembles GOE, GUE and GSE (corresponding to {{formula:55148401-25a9-41f2-9811-b2f10c0e4cb3}} , 2, and 4, respectively) {{cite:54ffd8a1b9ed8c9950a32c73bb53d2536d57c5fc}}. Later, Dumitriu and Edelman constructed ensembles of Jacobi matrices, the law of whose eigenvalues corresponds to equation (REF ) with arbitrary {{formula:55e35b49-c76b-406d-94ce-d02ebb6a53b3}} {{cite:f8e3f58524e44df6b0b70de08fe2bb32ad811832}}. Both constructions place central importance on the study of eigenvalues of self-dual matrices.
| i | 7b8da3353ec7dd93665fb8c7905a7618 |
{{formula:471a2331-417f-4d6b-9f8b-f5dc84a6464f}} -norm has been well-studied in the streaming model {{cite:67095f0af497d99fdb1bf56f2a1a5bbd149a3679}}, where the tuples come one-by-one and the goal is to approximate the {{formula:512d5112-3d2c-4e57-b5a7-396455d199e9}} -norm. It is known that
approximating {{formula:fb3ddc2d-e99d-467c-8729-c745409b553a}} takes only logarithmic space {{cite:67095f0af497d99fdb1bf56f2a1a5bbd149a3679}} while approximating all other {{formula:79d6c5f3-1096-4e17-bfea-a2ef769374a6}} takes polynomial space {{cite:06f06fcdddafd129b097400d9aab361eec69fa2d}}. From the joins point of view, streaming joins (in the context of update queries) with worst-case optimality guarantees have been studied for the triangle case {{cite:7d4700b37326fd7b24376629c6f13756bc2ef83d}} and we expect {{formula:ee530c03-509e-48ac-a04a-6602566ba839}} -norms to find applications in this space.
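For reference, the quantity itself is simple to state: the p-th frequency moment of a stream is the sum of the p-th powers of the item frequencies. A minimal exact (non-streaming) computation in Python:

```python
from collections import Counter

def frequency_moment(stream, p):
    """F_p: sum of p-th powers of the item frequencies (exact, non-streaming)."""
    return sum(f ** p for f in Counter(stream).values())

frequency_moment(["a", "b", "a", "c"], 2)  # 2**2 + 1**2 + 1**2 = 6
```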
| r | 02b7931fe9648853f7edbfb3e689d5b8 |
Besides using the generalized entropy to investigate the thermodynamic properties of the black holes, there have been intensive investigations on the role of generalized entropy in the context of cosmology via the so-called holographic dark energy (HDE) {{cite:fd7838698498986b38ad6ca5123371246d1a5269}}, {{cite:5e7f71574309eaa637ec6e19ec36348600de348f}}, {{cite:13e059b4353bea120e42cfc57ae19e786918ff26}}, {{cite:c224df4966bad761c89221794e58eb8598a89ed1}}, {{cite:41265ee8c3fd4a84dd7e15ced2c029240e835cc0}}, {{cite:336298ab5933a2bf2997559033edf7e6c2f9a8d1}}, {{cite:22dab412296e0fc88fce020a9686438bb915cf25}}, {{cite:ecc7e386e85762f18d2370898ede7ab21bae31cf}}.
For the HDE, the energy density of the dark energy is obtained from the notion that the matter entropy in quantum field theory is not greater than the black hole entropy {{cite:38f2cb6ae945667b2d9e330c3bfab8783a108b00}}. Recently, it was argued that the energy density of the dark energy in the HDE model can be obtained from the first law of black hole thermodynamics {{cite:5e6c3c8240d79fb2a197cb2962ace49ca4f090f5}}. When the equation for the first law presented in this paper is taken into account, it is interesting to investigate the role of a generalized entropy such as the Rényi entropy in the cosmological context. We leave this investigation for future work.
| d | a8a6e57ec7b213c66e183f59a4a7a939 |
Here {{formula:34fb353c-07d8-402d-9e8e-c8ff5d8cbb76}} is the modified Bessel function of the second kind {{cite:944eb5d09d1d909fd8e9708daed1cad020d09c06}}. In terms of {{formula:e3205f95-29d7-4dd8-9d36-358af1e9bc91}} , Eq. (REF ) can be written as
{{formula:9b959859-4b16-4796-9a34-e3f1502270fe}}
| r | 3fbede3068632c72e7426491ba1f788f |
Furthermore, in both questions it would also be interesting to know the dependence of the results on {{formula:fe5c340b-b0b5-48a4-9886-7d7bd80127c8}} . For example, in {{formula:527fdbc8-292b-4fea-9651-3e6d93aa20d1}} it is known that for {{formula:9ee9913e-b5a4-491e-bd81-28d049da2c4b}} the largest component is of order {{formula:130fecd8-900c-475b-8f69-ccedcf696614}} , the length of the longest cycle is of order {{formula:cd028dbb-a80e-4d87-9057-c48007f8c539}} (see, for example, {{cite:28dd1a5530215deb563ab6251b5e3b06bab58487}}) and the order of the largest complete minor is {{formula:bfe319a8-4294-4129-97d2-e33b81199c5f}} (see {{cite:1397e5fd6f3cab0090c8270f9b0d328047ca0629}}).
| d | fb0dd01db6168b750b3ae8f7a74d58fc |
Based on this observation, a couple of recent works tried to learn separate distributions (one for motion and another for appearance) and have shown improved video quality as well as more natural motion dynamics {{cite:14f728fc33cee66852872a9936569c34a13051b4}}, {{cite:e1aa9433ce737a5621c2b48c5ca4b3cb06ca7d82}}.
However, existing approaches, while separately learning motion and appearance to some extent, fail to learn the true dynamics of motion as they all treat videos as a sequence of frames bound by discretized, fixed-interval timesteps.
Such rigid treatment of the temporal aspect limits the model's capacity to learn natural motions as indicated in Fig. REF , which eventually leads to generating videos of suboptimal quality.
| i | b427bd5ccceea26ba29e9cf60c5b1b5d |
In practical terms, this concept of generalisation as being
grounded in relations of similarity underlies a variety of
machine learning techniques such as
k-nearest-neighbour classification {{cite:bcff87cdeb91ac3f170e58833b5ad0dfbecbb950}}
and kernel machines {{cite:93ba687a18649bbbce1a85b304a68b086c505296}}.
As previously discussed, the view that
the abilities of artificial neural networks
are also limited to generalisation between similar items,
in the sense of sharing overlapping features,
has been a common assumption among both
supporters and critics of connectionist modelling
{{cite:1b924186f94a29d309fe084ab6cd9c77234b06c7}}, {{cite:0e74c5e2667707667b0fbfe93539d67745f9d489}}, {{cite:f6a7c2bd836347352c3e05ef44cca6c331d035f2}}.
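As a concrete instance of similarity-based generalisation, a minimal k-nearest-neighbour classifier (scikit-learn; the dataset choice is arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict(X[:3]))  # labels assigned by proximity to training items
```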
| d | 379cffb02c340f4cb4260d9dff5e4ca7 |
Penalizing the HSIC as we do for each mini-batch implies that no information is learned about distribution {{formula:1778f57c-d80b-4938-b9f4-83bb97220119}} or {{formula:7c98d56f-f8ea-49e8-bdc2-6aa3f9b49d93}} during training. On the one hand, this is positive since we do not have to estimate more parameters, especially if the joint estimation would imply a minimax problem as in {{cite:7a5d5beffa331a7284f07f35df2e6c72c29a9253}}, {{cite:b8d8d43063c8e25d89a3a78652b9525935fac33a}}. On the other hand, it could be harmful if the HSIC could not be estimated from only a mini-batch. Our experiments show this does not happen in a reasonable set of configurations.
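A minimal sketch of a per-mini-batch HSIC estimator of the kind being penalised (NumPy; the Gaussian kernel, its bandwidth, and the biased estimator are assumptions):

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gaussian-kernel Gram matrix for rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic_biased(X, Z, sigma=1.0):
    """Biased mini-batch estimator: tr(K H L H) / (m - 1)^2."""
    m = X.shape[0]
    H = np.eye(m) - np.full((m, m), 1.0 / m)  # centering matrix
    K, L = rbf_gram(X, sigma), rbf_gram(Z, sigma)
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2
```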
| d | 5bd0eedb04415389d0f8e60c20959b39 |