Columns: text (string, lengths 54 to 548k), label (string, 4 classes), id_ (string, length 32)
While polaritons can be understood as semiclassical excitations of the electromagnetic field in a polarizable medium {{cite:f3bf59ead23a79a3bad4545b317f5c2541ac032e}}, a full quantum-mechanical theory of polaritons was originally proposed by Hopfield {{cite:60ff43c81adcc1afcc7bd2e55f7d554b8f1d4be4}} and Agranovich {{cite:c6e21dd7506ccc3e8c4a9d93a46dcd8fe2a3072a}} in two seminal papers. In particular, the Hopfield formulation is based on a transformation of operators and can be extended to include imaginary parts of the oscillator energies, thus making the theory non-Hermitian. While a generalized formulation of the Hopfield method has been reported in the literature {{cite:0732dac2d0978115d44fcc008b0ebb221ff3bdbe}}, {{cite:eee3c710ed97989c6bacd2dd245517a321d679c3}}, a Hopfield-based quantum theory accounting for arbitrary multilayered structures with an in-plane pattern is still lacking. In this work, we extend a previously formulated theory of radiation-matter interaction in fully etched thin photonic crystal slabs containing a single active layer {{cite:acc053af3f2dde57b865220e53a6dd26e693dc17}} to describe an arbitrarily etched multilayered structure. In particular, we consider a guided mode expansion (GME) basis for the approximate photonic modes, which are the closed-system solutions of a multilayered planar waveguide with a partially etched core {{cite:c14159864139c69f26b4287709a430e6c41ecf2a}}, {{cite:c65bc1249280633928d1eccca4ab1cb2d362f145}}; the losses, i.e. the imaginary parts of the eigenvalues, are calculated within a perturbative approach by coupling to the continuum of radiative modes in the light cone {{cite:c14159864139c69f26b4287709a430e6c41ecf2a}}. We then formulate a generalized Hopfield Hamiltonian accounting for radiation-matter coupling to an arbitrary number of active layers characterized by an excitonic response. We generalize previous treatments by including the polarization response of the excitonic transition.
i
8d3c171ea1e0eace44ba9bb150d9de55
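The excerpt above refers to a Hopfield-type Hamiltonian with complex (non-Hermitian) oscillator energies. As a minimal illustration of that idea only (not the authors' multilayer formulation), the two-coupled-oscillator case can be diagonalized numerically; all numbers below are hypothetical placeholders.

```python
import numpy as np

# Minimal two-oscillator (photon-exciton) non-Hermitian Hopfield-like matrix.
# E_c, E_x: bare photon/exciton energies; gamma_c, gamma_x: loss rates; g: coupling.
# All values are illustrative placeholders, not values from the paper.
E_c, gamma_c = 1.500, 0.002   # eV
E_x, gamma_x = 1.495, 0.001   # eV
g = 0.010                     # eV

H = np.array([[E_c - 1j * gamma_c, g],
              [g, E_x - 1j * gamma_x]], dtype=complex)

# Complex eigenvalues: real parts give the polariton energies,
# imaginary parts give the linewidths of the mixed modes.
eigvals, eigvecs = np.linalg.eig(H)
for E in sorted(eigvals, key=lambda z: z.real):
    print(f"E = {E.real:.4f} eV, linewidth ~ {-2 * E.imag * 1e3:.2f} meV")
```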
Next, we performed resistivity measurements under a magnetic field {{formula:eac6bcd9-51c5-465f-aa21-4da81ab1dff8}} perpendicular to and parallel with {{formula:b1414f09-701c-4dbf-8565-4945e92c96d0}}, and the results are displayed in Fig. REF (b) and (c), respectively. For {{formula:f3ebb5d2-a11c-4188-b676-a7c1fab02bbb}}, the magnetic field gradually suppresses the SC transition, and when {{formula:c3f56d28-402d-4414-a49a-9a74679e38c7}} T, almost no resistivity drop can be seen. The upper critical field {{formula:4359889f-0188-45e2-8b02-87901125e1af}} can be obtained from the {{formula:0757a4e4-6624-4482-8937-c8d922bcde79}} data by fitting the field-dependent {{formula:483bd0bc-f55b-4739-9529-e4af81ca8883}} to the Ginzburg-Landau formula {{cite:642dfe2619374c4ceeee0c9f5378e7a0043f93e6}}, {{formula:257007a9-f3c3-4010-9913-a9f77918370a}}, where {{formula:1a2b0f3f-2c5e-467a-b519-a8adee70f825}} is the reduced temperature. This fit yields {{formula:98509b26-3c04-4641-8e32-14c3f383b1b7}} T. The measurements for {{formula:727e4d22-804f-4d8a-992e-2cda0050cef9}} were made on another sample whose {{formula:f36b856f-819e-47b5-88a2-e0437177426f}} is slightly higher, but we notice that a second step is visible between 0.4 and 0.8 K before {{formula:1eeeba5f-b6ca-4d2b-bff9-92a7a04d685b}} finally drops to {{formula:e992a31e-c814-4ecf-beb6-098f82b554bb}} 0. This is probably due to phase separation or weak links between superconducting grains. In this field orientation, we derived {{formula:cbd05526-2161-45c3-bb3c-53c51cb33126}} T. The anisotropy of the upper critical field is thus {{formula:7c561c7a-52f3-4a32-87d8-3a8f69254e51}} 2.3. These {{formula:032a4c6a-eebf-4936-aa2b-59bd9691e3ee}} values are far below the Pauli paramagnetic limit {{formula:10589da4-c7f1-45db-b529-73617e2c90b9}} T {{cite:ec644ca58ef77f480fff6c2aa3f3ef929c3b6ad0}}, {{cite:d3e200a722d72b0b07554df113c5eb4caef9f4a7}}, implying that spin-triplet Cooper pairing is very unlikely. {{figure:9ed94408-e610-4f0b-9f05-79af0af4d922}}
r
5624746225d1723eeca80c517e4b214e
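The excerpt above fits the field-dependent transition temperature to a Ginzburg-Landau expression for the upper critical field. A minimal sketch of such a fit is given below, assuming the commonly used form Hc2(T) = Hc2(0) (1 - t^2)/(1 + t^2) with reduced temperature t = T/Tc; the data points are made-up placeholders, not the measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def hc2_gl(T, Hc2_0, Tc):
    """Ginzburg-Landau form Hc2(T) = Hc2(0) * (1 - t^2) / (1 + t^2), t = T/Tc."""
    t = T / Tc
    return Hc2_0 * (1 - t**2) / (1 + t**2)

# Hypothetical (T, Hc2) pairs in kelvin and tesla, standing in for the measured
# onset of the resistive transition at each applied field.
T_data = np.array([0.3, 0.5, 0.7, 0.9, 1.1])
H_data = np.array([0.55, 0.45, 0.33, 0.20, 0.07])

popt, pcov = curve_fit(hc2_gl, T_data, H_data, p0=[0.6, 1.2])
print(f"Hc2(0) = {popt[0]:.2f} T, Tc = {popt[1]:.2f} K")
```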
There are different approaches to designing trajectory prediction models in the literature. Scene-based methods mainly rely on the semantic map of the environment alongside the agents' positional information, represented as a point-cloud map {{cite:487bce901a1984ef450e3f3b1294f56be9bfad97}}, {{cite:1eb6fbeb3f40696034be12fb2344973e168539b5}}, {{cite:16b71b20a372cb5191f5967fbca34d0c82ad2ed8}}, {{cite:182d8a316aa18dcc8b5bdc64b379218f73c0da89}}, or 2D representations of trajectories {{cite:b30867fcd31690d5ddd4d70d48474dc96a9eca8e}}, {{cite:943a224f635602e5de89e47d72b5806cf2730e11}}, {{cite:9e26e735571d561ab1c6b4a359edcaef7e72c981}}. These approaches often use convolutional networks to encode map and positional information. Point-based approaches, on the other hand, explicitly process agents' positional information as time-series data using recurrent {{cite:a128b0ab14b23bc5fe0a019d9beb1ffbeb17d79b}}, {{cite:b72766909d3f979447a11393716b2757136edc72}}, {{cite:020d2a7d3b1c001d09d8d7abaa4d335ba408ec3d}} or transformer-based {{cite:6bb9e22fb6a997edb9e58025cf787d682d071a3f}}, {{cite:41b5b828d654409cc195ec3b5c0036778a9a48c7}}, {{cite:8983dbc9300e2513ae3dd4a770aefdb2c8b2dbdb}} techniques. Unlike scene-based techniques, these methods often explicitly model the interactions between the agents and their surroundings.
m
ddbf08d12e1cf3470c7b8eb7394b8439
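As an illustration of the point-based, recurrent family of approaches mentioned in the excerpt above, the sketch below encodes an agent's observed (x, y) positions with an LSTM and regresses a fixed number of future positions. It is a generic, minimal example, not a reimplementation of any cited method; all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class RecurrentTrajectoryPredictor(nn.Module):
    """Encode an observed (x, y) track with an LSTM and predict future positions."""
    def __init__(self, hidden_dim=64, pred_len=12):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, pred_len * 2)
        self.pred_len = pred_len

    def forward(self, obs_traj):
        # obs_traj: (batch, obs_len, 2) past positions of one agent
        _, (h_n, _) = self.encoder(obs_traj)
        out = self.decoder(h_n[-1])               # (batch, pred_len * 2)
        return out.view(-1, self.pred_len, 2)     # (batch, pred_len, 2)

# Example: 8 observed steps -> 12 predicted steps for a batch of 4 agents.
model = RecurrentTrajectoryPredictor()
future = model(torch.randn(4, 8, 2))
print(future.shape)  # torch.Size([4, 12, 2])
```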
Quantitative Experiments For comparison, the state-of-the-art methods HKS-Net {{cite:863c53f14c513980a607b95c3f1dad7bcc1e15b2}}, HMR {{cite:6cbc5ad6961934aa48e1b6c3ecce322c0af50e5f}} and SMPLify {{cite:7f349e795970b215b9627e04cdfbbf28364a52d1}} are trained with the same data and compared to the proposed network. HKS-Net uses the UF-US-2 architecture and was trained with RGB images. HMR {{cite:6cbc5ad6961934aa48e1b6c3ecce322c0af50e5f}} and SMPLify {{cite:7f349e795970b215b9627e04cdfbbf28364a52d1}} use only the frontal RGB image. For SMPLify {{cite:7f349e795970b215b9627e04cdfbbf28364a52d1}}, the joint locations estimated by DeepCut {{cite:677890a884c032ae6156c3ccd5790b0e1f44e957}} are provided, and the original models of {{cite:7f349e795970b215b9627e04cdfbbf28364a52d1}} and {{cite:6cbc5ad6961934aa48e1b6c3ecce322c0af50e5f}} were used. The mean measurement errors on the reconstructed meshes are reported in Tables REF and REF, and illustrations of the results are provided in the supplementary material. Our method achieves competitive performance compared to state-of-the-art works on both datasets. Our method shows significantly better performance on the upper torso (chest, waist and pelvis). The error distributions over these measurements for our method and HKS-Net on the XXX-fits dataset are plotted in Figure REF . {{table:39744be3-18b6-4ba0-946c-2db41e10dba5}}{{table:6d404a8c-f29b-4743-9223-573c8bbc48ce}}
r
c0e25e80afd88251c78497730264744e
We next consider the N-jettiness beam function, which is a SCET-1 observable with {{formula:8f40b9c4-12f1-4690-a65f-b7bc8d6c4d1b}} . In this case we extract the two-loop non-cusp anomalous dimension {{formula:50195e35-ae9c-42c9-a69c-5cb4a5515ae6}} and the non-logarithmic terms of the matching kernels {{formula:d6551f72-45bf-4733-a864-279e498047d9}} . The former are given in Table REF , and the latter are displayed in the lower panels of Figure REF . Our numbers are again in excellent agreement with the known analytic results {{cite:760d976e2b35d80ed0a58aff581e736fd1b1e0a2}}.
r
4e05d2594b90e84f9e1fe422e250f1bb
Primordial hydrogen may reduce mantle oxides and produce H{{formula:5d5c8590-3015-4849-8661-9c05dac3c8cc}}O {{cite:79d048d68a77a89b2f9ffba63639c68480d16fd3}} or FeH, but it is unclear if reduced Fe would merge with the core {{cite:758bc67f6b636f1fa3658be910750d565efafc53}}. In global chemical equilibrium, mantle oxygen may be able to produce water in a mole number comparable to or larger than that of hydrogen {{cite:7eb65c90a48357435232c467b5648278ef8e1576}}. Thus, primordial H/He atmospheres may be enriched with water vapour {{cite:1cbbaddd97cef7fbd7e816265c6a7299a864f905}}. As a consequence, this water will largely dissolve into the mantle, and atmospheric water abundances of sub-Neptune envelopes would consequently become sub-solar. For now, there is no comprehensive interior model that accurately accounts for the chemically reactive atmosphere-magma ocean boundary as well as the partitioning of water and other volatiles in the deep interior of sub-Neptunes. Only a few studies have investigated individual aspects of it {{cite:33a0d328b43bb7346edf58c723a749b8d79d2bcd}}, {{cite:758bc67f6b636f1fa3658be910750d565efafc53}}, {{cite:7eb65c90a48357435232c467b5648278ef8e1576}}, {{cite:9c48757ac3643dad21c8dd0f148d34ed8a35fecb}}, {{cite:1cbbaddd97cef7fbd7e816265c6a7299a864f905}}, {{cite:d90b513c3f0d9dc0f258f2468b31daf6f7fc8fc4}}, {{cite:80ddd3dec817eb7bbea9bec1c8f990c8ec44a94e}}. As volatile partitioning in the deep interior has been neglected in inference studies of exoplanets, previous interior predictions have thus far generally underestimated the amount of water and hydrogen in sub-Neptunes. Sub-Neptune envelopes may also possess compositional gradients {{cite:2cffcc1849056be80fcaad1c738ae0f34d2cd3b0}}, {{cite:19d22131543d60225bb6b13332fa48199c097076}}; e.g., water might only be mixed within a hydrogen layer up to heights where water condenses. This effect itself influences the calculated radii and should ideally be considered in parallel with the partitioning of volatiles in the deeper planetary parts. This is particularly important for very water-rich atmospheres as a potential explanation for the radius valley {{cite:90722959c1a75aea141a2a30c3379154d98a8562}}, {{cite:339ed9f6d4714f6dbf9e9da0c18d0c10b4eef288}}: most of the atmospheric water will be dissolved in the deep interior and does not contribute to the atmospheric layer thickness.
d
b97e2fb554390870915f7dff2d2266a1
It is well known {{cite:cbcd6492d0b09b1d50e54a65125ee363a330c0c1}} that BDFp (and hence NDFp) are unstable for {{formula:c5e1ad5b-2b09-43fd-8189-2dbb0c27e58f}}, and for {{formula:7c060dab-4305-403d-901e-086b355cd29b}} the stability region is small and hence not practically useful in our case. Further, the celebrated Dahlquist barrier {{cite:cbcd6492d0b09b1d50e54a65125ee363a330c0c1}} implies that BDFp (and hence NDFp) cannot be absolutely stable [that is, {{formula:b5fc01f5-a005-47f8-9d64-28adba8c6f69}}-stable with {{formula:7879a2de-93e9-46b5-b7aa-5510d682a441}}] for {{formula:f2f50da6-e4b2-4715-8d9d-514ad6039963}}.
m
7a164376a578b93ec10eb49b0bb875b3
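As a quick numerical illustration of the stability limitations quoted above, the boundary locus of the BDF family can be traced with the standard root-locus formula z(theta) = sum_{j=1..p} (1/j)(1 - e^{-i theta})^j. The sketch below plots it for several orders; this is a generic textbook construction, not code from the cited work.

```python
import numpy as np
import matplotlib.pyplot as plt

def bdf_boundary_locus(p, n=400):
    """Boundary of the linear-stability region of BDFp via the root-locus formula."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    return sum((1.0 / j) * (1.0 - np.exp(-1j * theta)) ** j for j in range(1, p + 1))

for p in (1, 2, 3, 4, 5, 6):
    z = bdf_boundary_locus(p)
    plt.plot(z.real, z.imag, label=f"BDF{p}")

plt.axvline(0.0, color="k", lw=0.5)
plt.xlabel("Re(h*lambda)")
plt.ylabel("Im(h*lambda)")
plt.legend()
plt.title("Boundary loci of BDF methods (stability region is the exterior of each curve)")
plt.show()
```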
GPDs are projections of Wigner distributions that give access to the unknown mechanical properties of the nucleon involving both space and momentum correlations. Among these are the quark and gluon angular momentum, along with spin directed {{formula:7846cc00-5558-4da3-aba2-2fd8e1f54335}} interactions {{cite:dd40591206a464cb69cb4c5ca8244f6b227d260b}}, {{cite:c196c5465ce94765990fd071c30fb77fb5a759fe}}, {{cite:47f3ab02c54806257bab484eeb46a0367fe81ddf}}, {{cite:9b34d4f19ad6c5e8ca56a8e01a53304de527c728}}, {{cite:e0c81960191047cb191cd5bc9fbd59e7fc19d357}}, {{cite:3c0cd247420d3087376941a07bcb9cdc25669f3c}}. An accurate knowledge of GPDs would unveil an unprecedented amount of information on nucleon structure and on the workings of the strong interaction. Nevertheless, after two decades of experimental and phenomenological efforts, it has so far been impossible to extract these important quantities directly from experiment. The problem lies at the core of their connection with observables: the cleanest probe of GPDs is the matrix elements for deeply virtual Compton scattering (DVCS) (Fig. REF and Sec. REF). In a nutshell, GPDs are multi-variable functions depending on the kinematic set of variables {{formula:36e1b456-5a81-48fd-be56-2700b0decd53}} (see Eq. (REF)), which enter the DVCS cross section in the form of convolutions with complex kernels, calculable in perturbative QCD, known as Compton Form Factors (CFFs). Furthermore, because GPDs are defined at the amplitude level, they appear in bilinear forms in all observables, including various types of asymmetries. An additional consequence is that all four GPDs, {{formula:5d87d6fa-00bb-427d-870a-9761e7c9793a}}, {{formula:dd67ae99-7d78-4d89-beec-3e0cabfd4898}}, {{formula:68278c53-4bd0-49b1-bdfb-0d41bcb6aa7e}}, {{formula:3333c330-301c-42e0-b87b-6e15dbe550d7}}, enter simultaneously any given beam/target spin configuration. It is therefore necessary to consider a large array of different observables simultaneously in order to extract the contribution of each individual GPD, even before addressing the issues of their flavor composition and of the sensitivity of observables to quark/antiquark components (for a detailed analysis of the DVCS cross section we refer the reader to {{cite:0cb37b37a60ef83b3e34dbc743e9d818b5f729d5}}, {{cite:3d7ac193e70e2d71815c2b14363cae9156338336}}, {{cite:943eb30f712cf3968ff907732fa4d483ed7d77b3}}).
m
25bbd551368d84f9fa798deb57b157a0
On the other hand, quantum computer hardware technologies are maturing, making quantum algorithms a possible solution to reduce the training cost of neural networks. When this paper was written, a quantum computer with 127 qubits was revealed {{cite:6337399f07ac6b0227a58e061e767734e6798a42}}. However, the actual capability of Quantum Machine Learning (QML) still remains to be established. Multiple reports {{cite:a3f81c49de42e473afeb5a6cda663af45a10dcbb}}, {{cite:382ba736bd7d8726dd49486f0dfaf420eb990c0d}} show that it indeed outperforms classical machine learning in certain situations. Most QML models are built on top of Variational Quantum Circuits (VQC), quantum algorithms that depend on free parameters and are also known as Parametrized Quantum Circuits. Based on VQC, multiple quantum feedforward neural networks capable of universal quantum computation have been proposed {{cite:a99442529018004ca3e352564945937a4109b6ed}}, {{cite:2dac3d4d307a0d68a9c3540e5755c741d5f13ef4}}, {{cite:9c90ed5821a76591cdbd4047e4e322dcaa100b55}}, {{cite:06434f7bf79c2ddd9a54093cad4cb8cdcbd502c4}}, {{cite:07241ffb39745ba0fb972217694c2630d807e9c9}} to exploit quantum computers' potential in machine learning and attempt to train networks more effectively.
i
55e9ccf43120839e1cbb398440f3d9a3
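The excerpt above describes QML models built from variational (parametrized) quantum circuits. Below is a minimal, hedged sketch of such a circuit using the PennyLane library on a simulator; the qubit count, layer templates, and cost function are arbitrary choices for illustration, not the architectures of the cited proposals.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode classical features, then apply a trainable entangling ansatz.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def cost(weights, x, y):
    # Simple squared error between the circuit output and a target label.
    return (circuit(weights, x) - y) ** 2

shape = qml.BasicEntanglerLayers.shape(n_layers=3, n_wires=n_qubits)
weights = np.random.uniform(0, np.pi, size=shape, requires_grad=True)
x, y = np.array([0.1, 0.4]), 1.0

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(20):
    weights = opt.step(lambda w: cost(w, x, y), weights)
print("final cost:", cost(weights, x, y))
```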
We evaluated each method using the source-to-distortion ratio (SDR) {{cite:34fa41ac60f5e81b0c404bb6c3a2db205be2a778}} of the directional target speech, which measures the separation quality and the absence of distortion. For each of the methods, the SDR behaviors with respect to the elapsed time are shown in Fig. REF . Note that all methods except for FastMNMF are initialized using ILRMA. Multichannel Wiener filtering using ILRMA estimates improves the SDR value to some extent, and FastMNMF or rank-constrained SCM estimation further improves the separation quality. From Fig. REF , we can confirm that the second-stage acceleration achieves the fastest and most efficient target speech extraction.
r
e5b2cca8a1e3bdf67f2ca4b30fa46cad
From the measurements of the cosmic microwave background (CMB), we know that ordinary matter (photons, electrons, etc.) was part of a thermal bath at a temperature of {{formula:3619596c-c9ed-41c7-bcec-3617f872c9a1}} eV. In the context of {{formula:93f6f0ec-453d-4830-b15e-bb45e6bc1e37}}CDM, the universe is at first dominated by the vacuum energy of the field driving inflation, the inflaton. After inflation, the out-of-equilibrium decay of the inflaton produces SM fields, and therefore the cosmic entropy. At the so-defined reheat temperature {{formula:f466555b-dd31-4928-915a-d75e44b835bf}}, a thermal bath of ultra-relativistic species (or radiation) is established, and the universe becomes radiation-dominated (RD). The scale of the inflationary reheating {{formula:925d4417-1178-4347-9b3a-a4d93d389c10}} is not known, but it must be above the MeV scale in order not to spoil the BBN predictions {{cite:85a3880ac96dad5e1d54bd7de3ce655c76455601}}, {{cite:03c3b7e63aa1a31b786cf368876e378b05270aa1}}, {{cite:b5bcd34b7f4af62e4a2bff603c05139f551ea9fa}}. Therefore, within {{formula:66599f8b-5a6b-4054-819a-5f68c83f8e1d}}CDM the universe was RD from {{formula:f32ff5a8-be5b-42bc-aaeb-df99bbd21fc0}} up to {{formula:cb1ddbe9-20e3-4a8a-87cc-7bf7d726fae5}} eV, when it became matter-dominated (MD) due to the CDM component. Nowadays, the {{formula:ad0a1f43-940e-473c-8823-c65bcdee4427}} component dominates the total energy density and drives the accelerated cosmic expansion.
i
79bd6058187d260e9a2a7646eb1a811a
the limiting distributions under the null and alternative can be derived but are hard to simulate from. To alleviate this, it is standard to use a bootstrap procedure {{cite:ef5a7049dfbd48d6ec55561a201c67e114f8e36c}} {{formula:5ff6b7ed-d62d-4cd7-a027-3325d89783a1}}
m
46a902b796cf2df110444e16438e9b23
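The fragment above notes that the limiting null distribution is hard to simulate from, so a bootstrap is used instead. The sketch below is only the generic nonparametric recipe (resampling from data transformed to satisfy the null), not the specific procedure of the cited work; the statistic and null transformation are toy assumptions.

```python
import numpy as np

def bootstrap_null_pvalue(sample, statistic, recenter, n_boot=2000, seed=0):
    """Approximate a p-value by resampling from data transformed to satisfy the null."""
    rng = np.random.default_rng(seed)
    observed = statistic(sample)
    null_sample = recenter(sample)
    n = len(sample)
    boot = np.array([
        statistic(null_sample[rng.integers(0, n, size=n)]) for _ in range(n_boot)
    ])
    # Fraction of bootstrap statistics at least as extreme as the observed one.
    return float(np.mean(boot >= observed))

# Toy usage: H0 "mean = 0", statistic = |standardized sample mean|.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.2, size=100)
stat = lambda x: abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))
print(bootstrap_null_pvalue(data, stat, recenter=lambda x: x - x.mean()))
```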
Boundedness is trivial, while continuity comes from the same property of the trace operator, for which we refer to {{cite:342ec19b3637de177e8421054bddc4dd92d4fcd8}}. Since strict monotonicity is a straightforward consequence of the vector inequalities in {{cite:8d9e6850e55911b42608f189930884430ea22b53}}, it remains to verify that {{formula:5a45e007-153f-44dd-be85-1e04bd08d4d9}} fulfills condition {{formula:0d0b8a6f-3d0d-4ed6-bb11-d210fe58bf26}} . We evidently have {{formula:250ac42e-0cd9-4166-b2f4-9b368c5b5059}} , where {{formula:e8e2136f-8bb7-4c2e-935a-3ce0508c7e00}} By {{cite:342ec19b3637de177e8421054bddc4dd92d4fcd8}} the operator {{formula:93a4bdf1-79eb-4314-86a5-2316500c24a4}} is of type {{formula:802da5a9-7bb2-4174-9e74-e84dc5239049}} , while {{formula:c4965999-0361-42e7-9211-0fc9a089a0a3}} turns out monotone. Thus, the assertion follows from {{cite:cb3f9cac8d96d16fa6285d6b1c324bf016cb48d8}}. This shows {{formula:277ffe4d-af02-45da-a491-946ed4fd93ce}} . After noting that {{formula:2d6361f4-f526-40db-ac2a-60ec16887c8d}} Theorem REF and {{formula:2996e55e-0abd-4464-a798-30ccd430e44c}} directly yield the bijectivity of {{formula:6187a837-9425-4b95-937e-cef5b7aa59ba}} ; see also {{cite:d7fab67b05af319eb0298cd3f7fc239b71eceae8}}. Now, pick a sequence {{formula:964aaa79-4a9e-4bc9-b812-4ea69612e2d0}} such that {{formula:997d94d3-4b89-4b90-b8a5-3e21a35964ef}} in {{formula:a47f8b09-81c3-4951-aa45-a8768c611811}} and write {{formula:283d1880-e645-477f-a5ef-74111f270474}} , {{formula:24922342-06e5-416f-9b9b-2f6ebefb8e2b}} . One evidently has: {{formula:8dd8829a-bf2d-4ac3-a772-a5518508ee30}} {{formula:300fc943-245b-478d-b457-beab7bc1cfdd}} Through (REF ), besides (REF ), we obtain {{formula:c5598a6d-07ff-41e6-8939-d611e4d49d1c}} because {{formula:80211536-3b09-4e21-9d70-63e02437d242}} turns out bounded. Therefore, {{formula:5f34e4a7-fbb7-4e7b-b25b-440cf308495d}} enjoys the same property and, taking a sub-sequence if necessary, {{formula:1e035b39-14fb-4e61-a12a-3b8fa36888ff}} in {{formula:f8a500d4-1449-4a57-b889-2fd444d03b64}} . Accordingly, by (REF ) again, {{formula:4e634a1d-4ebe-4e3b-b9b2-a66501741a40}} Since {{formula:ad92eb8f-7aa9-4231-a218-a0ebd8fd2e5e}} is of type {{formula:a9056fd0-5192-4c93-afa0-a9db4879a321}} , this entails {{formula:859c0b3c-3bb1-4b86-98a8-bd4fa45f4b65}} in {{formula:027b0cea-0fe9-4ecb-85b5-ab5b2d50d54c}} and, a fortiori, {{formula:e93613d9-fb60-45a9-bf6a-3cd6b9916884}} (recall that the trace operator is continuous). Thanks to {{cite:342ec19b3637de177e8421054bddc4dd92d4fcd8}} we thus arrive at {{formula:0e5534de-4371-40be-bdbe-fbac61987228}} whence {{formula:943bd65f-c0ab-4921-9178-f02786f2c22a}} On the other hand, from {{formula:c48a4eab-7b7a-4685-92f6-cff7adaf98be}} in {{formula:edb170dd-c590-45e1-80e6-c8856d7cc500}} it follows {{formula:ff461aaa-2c36-45b5-b73c-7fde82416e29}} whatever {{formula:346f3464-f7d3-49db-a17e-5f96980be2bb}} . Thus, (REF )–(REF ) easily lead to {{formula:a1849e5f-e397-40e5-b910-7f209f24bec6}} By the strict monotonicity of {{formula:f2b682ca-81f2-4f55-b030-d87bcd748210}} , we finally have {{formula:64ce61ad-5bf9-40dc-a189-15267b50e0d5}} . Summing up, {{formula:7a9da925-9f6f-4044-9b1b-8a736b5f3047}} , namely {{formula:61f6f03d-a0ab-453c-8fa7-96fce0aeff2a}} , in {{formula:75b3e974-648a-4e21-89a0-206de61a5c0e}} , as desired.
r
a3b65bd08f22b415b5d87db580aa234c
The Page curve has recently been postulated to develop from the action of islands {{cite:878b9ccd90b5ca918eb486ff9bbe4e963ee25045}}, {{cite:4c2d353e30cf95e14c02dec31bc0c86e595f03dc}}, {{cite:609cd83667986d1397b44306f363e5d99af882c0}}, {{cite:033686cba4259777b21a490f16e8ed49cc9416cf}}. When the state of the Hawking radiation is considered as that in a region R outside the black hole, the density matrix of R is generally determined by taking the partial trace over the states in R{{formula:6f693f74-b5cb-4cdb-83a0-320e597d17c6}}, which is the complementary region of R. Recent research {{cite:a48e6e1fe578cf3666d956dddd273a9cb58f5c7a}}, {{cite:7739af72594e5d7e63c0d11b2f9605ff015dac95}}, {{cite:b0b4d1bad8d05739bc929ea5641f1f3b4f91146f}}, {{cite:56af5c925eaecc0d95065317956b21d9411f864b}}, {{cite:a9fffbeb5766ffb8f5f2584ff38185daaf876e32}}, {{cite:1e3313927cf2bb5147d6ab9572db47c5b7f28b15}}, {{cite:107e6de9b096ca0e37251312aa46be2f54aefc6e}}, {{cite:5ffed4fec7cdbe337abd709380f88e25ec03c157}}, {{cite:3c1e7ae325085963c2ebe0340b3166bd920ec0d0}}, {{cite:bf778fe9844541434cc503353ece4bec5a1c4130}}, {{cite:b970c6febf7c4b0c8a9aabfcd33f11085bf16940}}, {{cite:263b6112d88ab5da51459091525f2e5dfef676be}}, {{cite:935fbe7c62ab18f4dbdae64ede9d9ae26ad56b68}}, {{cite:2293d436462e7f9ec2389c7d59d581fa62a1226b}}, {{cite:bf0af5b5df058e1472ba1641daa35ab64b05a5ad}}, {{cite:9fc61a67629fe3bb0b46c983f9de5faeb40351e1}} has revealed that specific supplementary regions known as islands contribute to the entropy of Hawking radiation, with their borders defined as surfaces known as quantum extremal surfaces (QES). This indicates that some non-trivial QES arises in spacetime at the Page time, cancelling out the time dependence of S(R) and resulting in a saturated fine-grained entropy of Hawking radiation. The fine-grained entropy of Hawking radiation, including the island contributions, is {{cite:4c2d353e30cf95e14c02dec31bc0c86e595f03dc}} {{formula:5d3c5237-746a-4f7b-ba17-f3d411fddd33}}
i
51199e6c84e8e3adfd94e6d7c6f84ba5
In this section, we discuss our experiments and results. We evaluate our model on logical inference {{cite:f574f1af8a218ae3b3bb201b8c694c10a87b62f3}}, list operations (ListOps) {{cite:4160e05d81dbe11ad93e0daad800edc6b0ee0693}}, sentiment analysis—two datasets, SST2 and SST5 {{cite:3980d2e8582bd514340ab745ce6ccc472996f62e}}, and natural language inference—two datasets, SNLI {{cite:7d0df3855889c778affc1566ce4e298da0f566a6}} and MNLI {{cite:2a769bdb9992665f05fda840fe099ffa94ea94e2}}. For implementation details, refer to the appendix. {{table:4902869b-dd45-4fed-84f7-8055529974ed}}
r
7e7d04f07d0b66645e3392555ec77be1
The relationship between the speed-up and the number of processors for the OpenMP approach is shown in Fig. REF for a small model size, where the speed-up is defined as the ratio of the computational time with eight processors to that with more than eight processors. Between 8 and 16 processors, the maximum speed-up increases with the model size, apart from the 130-body case, which shows a different trend from the other cases. It is clear that using an excessive number of processors does not yield a significant speed-up and sometimes even increases the computational time. The reason is the overhead of starting up new threads, which takes more time than is saved by the parallelisation. This is also a consequence of Amdahl's law {{cite:02a52fd0af170480014863e90f44bccdc80791ac}}, which states that execution can only be sped up to a limit even if more processors are used. {{figure:e03eef44-717f-4a9c-bfda-135506ec364d}}
r
93ff0468bdd192656cc7a149e58dfa86
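Amdahl's law, cited above as a limit on the achievable speed-up, is easy to state numerically: with a parallel fraction p and n processors, the speed-up is bounded by 1 / ((1 - p) + p / n). The short sketch below tabulates this bound; the parallel fractions are illustrative, not measured from the solver in the excerpt.

```python
def amdahl_speedup(p, n):
    """Maximum speed-up for parallel fraction p on n processors (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.95, 0.99):
    row = ", ".join(f"n={n}: {amdahl_speedup(p, n):5.2f}" for n in (8, 16, 32, 64))
    print(f"parallel fraction {p:0.2f} -> {row}")
# Even with p = 0.95 the speed-up saturates near 1 / (1 - p) = 20,
# so adding processors beyond a point yields diminishing (or no) returns.
```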
We discuss several directions for future research. First, the proposed estimators cannot adapt to the contemporaneous correlation among elements of the random error {{formula:412f0d62-2396-4ed0-8ad2-7e0018848c28}} , which may lead to efficiency loss. This issue can be addressed by considering the generalized least squares loss function, {{formula:2d84d499-ae57-4631-885b-365a0da2a9eb}} , where {{formula:0fb9d272-4ab5-4dbb-9577-7131b2087556}} . Then {{formula:dc5daa20-b80e-4647-9e39-918ad8e4c92e}} may be estimated jointly with {{formula:d536a701-74e6-4412-8d71-d67dff8602f5}} or by a two-step approach based on a consistent estimator {{formula:5e2564bf-af99-4569-81e3-0e53c677f1c2}} {{cite:1f1a30581c26b7681c4a7380f035bab982e52395}}, {{cite:c3f7d8081940b2dd28fe86e92d11b3334d223b46}}.
d
2c674ff17d7ebde93f698d81c53dfcbf
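For reference, the generalized least squares idea invoked above can be written in a few lines: given a (known or consistently estimated) error covariance, GLS whitens the residuals before minimizing. The snippet below is a generic textbook GLS estimator on simulated data; the paper's setting (contemporaneous covariance of a multivariate error, estimated jointly or in two steps) differs in its details.

```python
import numpy as np

def gls(X, Y, Sigma):
    """Generalized least squares: beta = (X' S^-1 X)^-1 X' S^-1 Y."""
    S_inv = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ S_inv @ X, X.T @ S_inv @ Y)

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
Sigma = 0.5 * np.eye(n) + 0.5              # equicorrelated errors (illustrative)
errors = rng.multivariate_normal(np.zeros(n), Sigma)
Y = X @ beta_true + errors
print("GLS estimate:", gls(X, Y, Sigma))
```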
The trade-off between fairness and privacy is of great concern to the machine learning community, but previous work has not suggested promising research directions for how best to resolve this dilemma. Our results go beyond previous work on the trade-offs between fairness and privacy {{cite:baba36405bcea1e6acbeb0a250e2831d24ff7348}}, {{cite:62d8b1119fdc9075f72f4b375f9709a477902dde}} in evaluating this trade-off across multiple machine learning settings and in establishing logarithmic correlations between fairness and privacy across two different settings: linear classification and robust deep learning. Finally, we also provide an example of a successful attack against differentially private dimensionality reduction by training a three-layer feed-forward network on the learned representations of DP-PCA.
d
4857fe195d69a9c0f5799c91e0f682f9
Similar to the base model comparison in Section REF, we assess how the proposed M4M method affects RoBERTa large models in this section. We use the training data of BooksCorpus {{cite:91471700a3057af17ae60307d173fcff018ed74b}} plus English Wikipedia {{cite:cf92530c5afe45acc66624e978cdbc92a017b1e2}}, {{cite:aec855d237a5e4ff07ddec03c8eb7c117a8eccca}} (16G) and OpenWeb Text {{cite:8b7cd2075fabbf4476225a9d00c49b9713874212}} (38G) to train a model denoted as RoBERTa-M4M Large, which is initialized from RoBERTa Large (roberta-large from the PyTorch transformers library, https://github.com/huggingface/transformers). We use 4 AWS P4DN instances (each with 8 A100-SXM4-40GB GPUs) for model pre-training. We use a batch size of 192, the Adam optimizer with a learning rate starting from 1e-4, and a maximum of 500,000 steps.
m
bae7a9a8682967ac74fccacc9d2d3baa
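The continued pre-training setup described above (initializing from roberta-large and optimizing with Adam at a 1e-4 starting learning rate) can be sketched with the Hugging Face transformers library as below. This is only a schematic of the initialization and optimizer configuration on a toy batch; the M4M objective, masking strategy, data pipeline, and distributed training details are not reproduced here.

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

# Initialize from the public roberta-large checkpoint, as described above.
model = RobertaForMaskedLM.from_pretrained("roberta-large")
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")

# Optimizer configuration loosely following the excerpt (Adam, lr starting at 1e-4).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One schematic masked-language-modeling step on a toy batch (no real masking here).
batch = tokenizer(["A toy sentence for continued pre-training."],
                  return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
print("loss:", outputs.loss.item())
```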
As briefed in Section , the optimal parameters can be realized using a modular architecture and modular units. The parameters are reduced to {{formula:a597b58d-50eb-46e5-b6fb-7f996abdf739}} using the set of rules given in Section REF, as shown in Fig. REF (b). For the validation of the 3-DoF modular configuration, a Unified Robot Description Format (URDF) model of the configuration is generated automatically and used for motion planning of the robot between the working locations while avoiding obstacles. This has been implemented in the Robot Operating System (ROS) platform using MoveIt {{cite:b9cc2c05982896b38c22312a058dbb542405317a}}. MoveIt uses a joint-limited Jacobian-based numerical inverse kinematics solver called TracIK {{cite:c789266aaefee3da22a0486a733ce9b8bef45fad}} and the Open Motion Planning Library (OMPL) {{cite:ea8108be6ee3a6e3ab8193ff243f9000cd43ed8e}} for motion planning of the manipulator's end-effector in the constrained environment. The motion of the end-link of the 3-DoF configuration from working location 2 to 3 is implemented using this, as shown in Fig. REF (c). This validates that the modular configuration is not only able to reach the working locations but is also able to plan motions between the task-space locations.
r
cc40d8625721b2fd2a3b5457a50f91fe
Figures REF and REF show results of image classification on CUB-200-2011 {{cite:d38f071a8899ae9b1c772b37acda5775628d9d4c}} and CARS {{cite:d9bbf3db02b9811214b3b6da09eddc89dbb1d74c}} dataset {{formula:1607fcf3-9af8-46d7-ab26-ab85077b2253}} test images using Photon Net. The probability output of the incorrect class is highlighted in red and that of the correct class in green. Even in the case of extremely low light (PPP{{formula:b5f1f81d-21db-4778-9c8b-ba7eac394e3c}} 0.1), Photon Net is able to recover the correct output label. Figure REF shows a few failure cases where the Photon Net architecture fails to produce the correct prediction. As we can observe, these cases are extremely challenging.
r
d02ae81377acab330f6e9416b791c71c
The validation accuracy of training from scratch and Full KD learning on the GENEactiv dataset is presented in Fig. REF . Training from scratch with the original data shows higher accuracy than KD with the original data in the very early stages, before 25 epochs. However, KD shows better accuracy than the models trained from scratch after 40 epochs. KD with augmentation tends to achieve higher accuracy than models trained from scratch and KD learning with the original data alone. That is, data augmentation can help to boost the generalization ability of student models for KD. Mix1 shows the highest accuracy among the results. The highest accuracies are seen in early stages, at fewer than 120 epochs for all methods, where 120 epochs is less than three-fourths of the total number of epochs. On closer inspection, we find that the best accuracies are actually seen in fewer than 20 epochs for training from scratch and Full KD, fewer than 60 epochs for shifting, Mix1, and Mix2, and fewer than 120 epochs for adding noise, respectively. This implies that not only early stopped teachers but also early stopped students are able to perform better than fully iterated models. In training based on KD with augmentation methods, the accuracy rises in the early stages; however, it suffers towards the end of training. These trends in KD are similar to those in the previous ESKD study {{cite:759bbfdfa55b2e6b5ce5ca5026ddec38cf2f854b}}. For the following experiments, we restrict our analyses to ESKD.
m
0a59356bb436cab80ac9d207f8e757f6
EmbDTA: Instead of a specific feature extractor, it uses only an embedding layer. We add this to see how much the proposed method improves the performance of the task using only the interaction. It uses SMILES and FASTA to represent the drugs and targets, respectively. DeepDTA {{cite:705ea3fdcdfafed3e5901620045b0134083993c2}}: It uses CNNs as feature extraction layers to encode locally salient features from drugs and targets. It uses the same representations for drugs and targets as EmbDTA. GraphDTA {{cite:50dc7f75c8f15c294acfda2c19dfe9f86c4c0f7f}}: It uses GCNs to exploit the properties of the molecular graph and utilizes CNNs for proteins as in DeepDTA. For the input representation, it uses atoms in drugs and connects them through graph information (i.e., an adjacency matrix). To preserve sequential information, we use a sequence-wise convolution instead of a feature-wise convolution, which is similar to {{cite:3906ebf69ba5196f7b2511897f311c9d7d38dcfd}}; we empirically found that this version produces nearly identical results to the feature-wise convolution.
m
0149b69e84d48cc082a2fadf43d49663
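To make the "sequence-wise convolution" mentioned above concrete, the sketch below shows a 1D convolutional encoder whose kernel slides along the token/residue axis of an embedded SMILES or FASTA sequence (rather than across the embedding features). It is a generic illustration with arbitrary sizes, not the exact architecture of the cited models.

```python
import torch
import torch.nn as nn

class SequenceConvEncoder(nn.Module):
    """Embed a token sequence and convolve along the sequence dimension."""
    def __init__(self, vocab_size=64, emb_dim=128, n_filters=32, kernel_size=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Conv1d expects (batch, channels, length); channels = embedding features,
        # so the kernel slides over sequence positions (sequence-wise convolution).
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)
        self.pool = nn.AdaptiveMaxPool1d(1)

    def forward(self, tokens):                 # tokens: (batch, seq_len) integer ids
        x = self.embedding(tokens)             # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                  # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))           # (batch, n_filters, seq_len - k + 1)
        return self.pool(x).squeeze(-1)        # (batch, n_filters)

encoder = SequenceConvEncoder()
smiles_ids = torch.randint(0, 64, (4, 100))    # a toy batch of encoded SMILES
print(encoder(smiles_ids).shape)               # torch.Size([4, 32])
```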
We implemented the enumeration methods in Magma {{cite:801bc247bc6cc19158cd55b6d3cd7a4bdf5bab4f}}, and the source code is available on the web page of the second author; see Subsection REF for the URL. Computational results obtained by our implementation will be described in Subsection REF .
i
00e0773dbfc312da0a277eddc02353c1
Instance normalization proved quite influential. Normalizing the representation extracted from the content allowed training networks which could produce outputs in multiple predefined styles (typically 30-50). Chen et al. {{cite:04133238e324b03d378a366c1c572f3a17427403}} proposed training an architecture where the parameters of most layers were shared, but a few were learned separately for each style and were swapped in and out depending on the target style. Dumoulin et al. {{cite:40cfb02ca0a0bed9b2f8af250354d2e34097de00}} took a similar but simpler approach, learning a per-style scaling and bias to apply after instance normalization. {{figure:f7449459-9076-4174-9ffc-4377ca327bee}}
m
90b9a24ab67e5f44f0efc457cebb299e
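The per-style scaling and bias applied after instance normalization, as summarized above, can be written compactly. The sketch below is a minimal conditional instance normalization layer in PyTorch; the channel and style counts are arbitrary.

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    """Instance normalization followed by a learned per-style scale and bias."""
    def __init__(self, num_channels, num_styles):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        # One (gamma, beta) pair per style, per channel.
        self.gamma = nn.Embedding(num_styles, num_channels)
        self.beta = nn.Embedding(num_styles, num_channels)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, style_id):
        # x: (batch, C, H, W); style_id: (batch,) long tensor selecting the style.
        h = self.norm(x)
        g = self.gamma(style_id).unsqueeze(-1).unsqueeze(-1)  # (batch, C, 1, 1)
        b = self.beta(style_id).unsqueeze(-1).unsqueeze(-1)
        return g * h + b

layer = ConditionalInstanceNorm2d(num_channels=64, num_styles=32)
features = torch.randn(2, 64, 56, 56)
styles = torch.tensor([3, 17])
print(layer(features, styles).shape)  # torch.Size([2, 64, 56, 56])
```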
On the other hand, as the Mach number vanishes, the flow converges to the incompressible limit. For the full Euler equations, at the incompressible limit, the density remains constant along the fluid particle trajectories, and the pressure waves propagate with infinite speed. The mass conservation equation reduces to the incompressibility condition {{formula:e9d4ada5-9922-4391-a921-592c6ca658cc}} on the velocity field {{cite:745164fe8e31cf6f335963aee0c7e0945229b62f}}, so that the pressure and the density are decoupled. The pressure turns out to act as a Lagrange multiplier enforcing incompressibility of the flow {{cite:751ab7cbfe83debc94526f3cd0ae6f9d3179f139}}. A rigorous proof that the compressible flow converges to the incompressible one as the Mach number goes to zero is given in {{cite:0736d836dabcf8548b6e50de0dde153428e49fdf}}. An effective approach to deal with low Mach flows is given by pressure-based algorithms, such as, for example, the one by Casulli and Greenspan {{cite:2639d03d45a99b25bb83fbfa956cf2ec06b8262f}}, in which a semi-implicit treatment of the pressure is incorporated in a scheme for compressible flow. The authors use an upwind discretization of the material wave and an implicit equation for the pressure, which is solved by a SOR-type method. Several authors have subsequently worked on the development of semi-implicit methods {{cite:751ab7cbfe83debc94526f3cd0ae6f9d3179f139}}, {{cite:9d3219f264ec40da3a1344375ced0569188d7368}} based on low-Mach asymptotics {{cite:0736d836dabcf8548b6e50de0dde153428e49fdf}}. However, many such schemes are specifically designed to deal with low Mach flows. When the fluid flow is compressible at large speeds, shock discontinuities may form and propagate. In these cases, it is necessary to resort to conservative (density-based) schemes which correctly capture possible shocks.
i
cf66c9f59586bfa1b13f1a0eed146b4d
Given the dependence of subhalo concentration on mass, our detailed quantitative results depend on the exact {{formula:3768bc65-e4e2-48e7-bae7-eae7dccf055d}}–{{formula:11be3e99-7ef6-44e5-bafd-11265232cc55}} relation assumed. When we use the empirical {{formula:cfe0e5a8-5282-4966-b779-b6f7b02c7c67}}–{{formula:cc0e7fda-acec-48dc-8e80-1cb083cbca93}} relation found in simulations by {{cite:98d4c9a4d45d6bc07268cad6fdc45b5dc6a0e7a3}}, our predicted scaled number density profile of stripped subhaloes fully reproduces the empirical one found by HCFJ in the Aquarius simulation halo A, which is substantially shallower than the density profile of the host halo. This is so despite the fact that the predicted mean truncated-to-original subhalo mass ratio is substantially lower than the corresponding empirical median profile derived by HCFJ. This result, which is contrary to the expectations for the lognormal distribution of truncated-to-original mass ratios, is likely due to the fact that the median profile obtained by HCFJ has been derived for subhaloes of all levels, while our predictions are for first-level ones only. Since the number of subhaloes of any mass at all levels is twice that of first-level subhaloes (Paper I), the only ones undergoing stripping, it is not surprising that the truncated-to-original profile for subhaloes of the former population is notably higher than for the latter one, while they both have the same scaled number density profile. When the unbiased {{formula:817b30d2-f5f0-4478-bf69-1472324c1465}}–{{formula:05928875-5813-46aa-9584-da1d6a2646d7}} relation predicted by CUSP is used, the predicted mean truncated-to-original subhalo mass ratio changes somewhat, but the general trend is similar. In particular, the corresponding scaled number density profile becomes substantially steeper, but it remains less steep than the mass density profile of the halo. This robust result is a consequence of the higher concentration of dDM towards the halo centre. On the other hand, the predicted subhalo MF reproduces the subhalo MF and its dependence on halo mass found in simulations, regardless of the particular {{formula:5f0bd99a-71e1-49bf-9200-ca66660d44d9}}–{{formula:f83c6ff6-05a7-4e11-b8f7-ecb7e63a8166}} relation used.
d
1bdb734d71dc5218a16611095581b67c
The vacuum is an elusive concept. From just a featureless ground state in classical field theory and quantum mechanics, it acquires a rich structure in quantum field theory. The pioneering efforts of Dirac, Zeldovich, Hawking, Coleman, Kibble and others have shown that the vacuum gravitates, can give rise to thermal radiation, can undergo phase transitions and can give rise to topological defects, just as in material systems. The quantum vacuum can also be at the core of the spontaneous breaking of symmetries, a crucial feature of the Standard Model of Fundamental Interactions of Nature. However, this endowed protagonism poses an embarrassing fine-tuning problem, the cosmological constant problem, whose solution has been the object of countless attempts from a wide range of points of view. Given the need to bring gravity within the context of a quantum framework, it is often argued that the cosmological constant problem cannot be properly addressed without a suitable fundamental quantum gravity theory. That might very well be the case; however, attempts in the context of, for instance, string theory, one of the most accomplished proposals to quantise gravity and unify all interactions of Nature, have not proven successful in this respect {{cite:18065b538a5cd8eaefc25b0317bd0186151ab3e2}}, despite the interesting ideas that spring from the vacua landscape of string theory {{cite:c799493511b1d77061fe1a1f1f7c66ff092dad1e}}, {{cite:c24d3bb21047e9c6da6002034256fac02034a82c}}, from M-theory compactifications {{cite:2859e8d8901129d7aea18e863d855bfa53bba1a3}} and from other multiverse considerations {{cite:cd0b3c2cad1290ffb9c515a40850f1e1cd6b2637}}. The same can be stated about other proposals to quantise gravity, such as loop quantum gravity, notwithstanding recent hopeful and interesting results {{cite:2716f0f10b1c7d6e546bb14f9ad14d792fe0c718}}.
d
fe7f39c02e377609bf7481d6ea4096b5
This can be seen as a classical collocation scheme for Volterra integral equations {{cite:2be5ee216e1f9a9f533f515c636da81b6032c2c9}}, {{cite:c603428693a8a4c523605cf8c7e7f90a6e92cadf}} and is equivalent to the L1 scheme of {{cite:e7b66541c63bbb31218b5d669575965c45fc7c49}} and {{cite:abac149bf7d3ecc14d8360b25a52f8dae9a35638}}. A fast and memory efficient implementation of the solution of such a discretization is developed in {{cite:0364c69dcefd53dbac2a26a2a3adf80cc32b73ae}}.
m
425147bfc733abbd776294c9f8af88f3
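For context on the L1 scheme mentioned above: in its usual textbook form for a Caputo fractional derivative of order 0 < alpha < 1 on a uniform grid, the derivative at t_n is approximated by a weighted sum of backward differences. The sketch below implements that standard form as background on what "the L1 scheme" typically denotes; it is not claimed to be the exact discretization of the cited works.

```python
import math
import numpy as np

def caputo_l1(u, dt, alpha):
    """Standard L1 approximation of the Caputo derivative of order alpha in (0, 1).

    D^alpha u(t_n) ~ dt^(-alpha) / Gamma(2 - alpha)
                     * sum_{j=0}^{n-1} b_j * (u_{n-j} - u_{n-j-1}),
    with weights b_j = (j + 1)^(1 - alpha) - j^(1 - alpha).
    """
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)
    diffs = u[1:][::-1] - u[:-1][::-1]          # u_{n-j} - u_{n-j-1}, j = 0..n-1
    return dt ** (-alpha) / math.gamma(2.0 - alpha) * np.dot(b, diffs)

# Sanity check against the exact Caputo derivative of u(t) = t at t = 1:
# D^alpha t = t^(1-alpha) / Gamma(2 - alpha); L1 is exact for linear functions.
alpha = 0.5
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
print(caputo_l1(t, dt, alpha), 1.0 / math.gamma(2.0 - alpha))
```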
To describe microscopic states of the population we use so called marked configurations {{formula:55ad2ccc-5b60-4b25-9b71-5cc7a54c2054}} , cf. {{cite:b9419fb3ca34be39c18266cb2b7e01b0e9aa983d}}. Set {{formula:0f314d4a-4b3f-4355-ae60-e01600a54776}}
i
5386d0dbe45ff97a864576f0597a7896
To address S1, our detection framework involves BP reverse-engineering using a small, clean dataset independently collected by the defender, like existing REDs. To address S3, unlike existing REDs that perform anomaly detection involving statistics from all classes, we inspect each class independently using a novel detection statistic called expected transferability (ET), which can be empirically estimated for each class independently. To address S2, we show that ET possesses a theoretically-grounded detection threshold value for distinguishing BA target classes from non-target classes, one which depends neither on the domain nor on the attack configuration. This is very different from existing REDs, for which a suitable detection threshold for their proposed statistics may be both domain and attack-dependent. For example, the range of the {{formula:261b9734-6d21-440e-a58d-ea72ffe61207}} norm of the estimated mask used by {{cite:2dfd7c0b9fd0deb9d886decf75dc13ecde440075}} depends on the image size; the range of the cosine similarity statistic used by {{cite:7c5316cfc24e3e0519c9bd57b89ce8e71b3ce452}} depends on the architecture of the classifier (see Apdx. REF for more details). The practical import here is that the detection threshold is a hyperparameter, but setting this threshold in a supervised fashion (e.g. to achieve a specified false positive rate on a group of clean classifiers) is generally infeasible due to S2. Use of ET thus obviates the need for such hyperparameter setting.
m
42088d09a8baa2d8c8f8b8d791820544
In this paper, we propose LightCap, a lightweight yet high-performance image captioning method for mobile devices. Our core design is largely inspired by the recent CLIP method {{cite:14fce0bb31ab8c7ba696093161613e854747e18f}}. CLIP is an impressive image-text retrieval model, which readily tells what objects exist in an image but fails to generate a description for the given image. In this work, we investigate how to transfer such a strong cross-modal retrieval model to an image captioner and, meanwhile, remove the main obstacles that hinder image captioners from being deployed on mobile devices: their cross-modal fusion and image feature extraction models. For visual representations, we leverage the efficient yet compact grid features from CLIP without relying on time-consuming Region of Interest (ROI) features from sophisticated object detectors. To unveil the potential of a capacity-limited model, we propose the following designs. (1) Visual concept extractor. To take advantage of the cross-modal retrieval capability of CLIP, we train a region-based alignment model to retrieve visual concepts from an off-the-shelf dictionary. These visual concepts serve as description hints of the image to facilitate caption generation. (2) Cross-modal modulator. Before being fed to the fusion model, the feature dimension of the CLIP feature is highly compressed (i.e., from 2048 to 312), which inevitably loses semantic representations. To retain the valuable semantics, we propose a cross-modal modulator that takes the textual concepts as inputs to activate the informative feature channels of the CLIP model. (3) Ensemble head. We jointly optimize and distill an ensemble of head networks for collaborative prediction. We disentangle the key parameters and share the remaining weights of the different heads for a lightweight design. Last but not least, for the cross-modal fusion model, instead of the widely used {{formula:6baf0d29-0440-4890-92fa-5b20767d304a}} {{cite:2de0f47c4727f2b3263afa0eae93a48f03dffbd1}}, we choose the efficient TinyBERT {{cite:19424026b9dc65c1d009ea2a9ec3414bee13f0ed}} to fuse cross-modal features. By virtue of our sequential knowledge distillation in both the pre-training and fine-tuning stages and the ensemble distillation from multiple teachers, TinyBERT almost matches the performance of the standard {{formula:371fe9b0-7388-4cd8-9e81-129fcc76813c}}.
i
f9f8789fb35e508603b8e9f16c4870e5
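The "cross-modal modulator" described above (textual concepts used to re-weight the channels of a heavily compressed visual feature) can be pictured as a simple gating module. The sketch below is our own schematic of such a channel-gating design with arbitrary dimensions; it is not the actual LightCap implementation.

```python
import torch
import torch.nn as nn

class CrossModalChannelModulator(nn.Module):
    """Gate the channels of a compressed visual feature using a concept embedding."""
    def __init__(self, concept_dim=312, visual_in=2048, visual_out=312):
        super().__init__()
        self.compress = nn.Linear(visual_in, visual_out)     # e.g. 2048 -> 312
        self.gate = nn.Sequential(
            nn.Linear(concept_dim, visual_out),
            nn.Sigmoid(),                                    # per-channel weights in (0, 1)
        )

    def forward(self, visual_feat, concept_emb):
        # visual_feat: (batch, num_grids, visual_in); concept_emb: (batch, concept_dim)
        v = self.compress(visual_feat)                       # (batch, num_grids, visual_out)
        g = self.gate(concept_emb).unsqueeze(1)              # (batch, 1, visual_out)
        return v * g                                         # re-weighted grid features

modulator = CrossModalChannelModulator()
grids = torch.randn(2, 49, 2048)         # toy CLIP-like grid features
concepts = torch.randn(2, 312)           # toy pooled embedding of retrieved concepts
print(modulator(grids, concepts).shape)  # torch.Size([2, 49, 312])
```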
Our style-transfer based data transformation tackles this problem by applying appropriate low-level transformations to existing vanilla car image datasets {{cite:81f59750fcc8e8ae0cbce04c0179f4c3e5db6f17}}. As the degree and type of style transfer are controlled by user-defined weights and reference images, respectively, various kinds of synthetic composite outputs can be generated to match a particular feature domain of interest. To the best of our knowledge, we are the first to use neural style transfer for data augmentation to improve cross-domain performance using domain-specific noise.
i
953a56a5234de7ecc3b16f4438a643ee
We draw inspiration from situated cognition in the context of grounded language understanding, and show that intermediate states in the external world can be leveraged to help incrementally guide models to a correct solution. In so doing, much of the bookkeeping that traditional models have to do (i.e., “imagining” an agent's current position based on its history of predicted actions) is offloaded onto the external environment. As observed in Section REF, this has the additional advantage of eliminating reliance on biased surface cues, such as token count in the length task, that otherwise deter models from generating OOD output sequences. These benefits reaffirm recent demonstrations that disentangling seq2seq models' underlying task representations from the bookkeeping necessary to track their progress in relation to a final solution makes them more amenable to compositional generalization {{cite:b321616d2d8c441c1e5fc42e5bbbb3b22c624811}}, {{cite:2323ff065f1ad173b555a89d38cbcfc23bb0d260}}.
d
aacda625b47e514a7bb6a75cc1a02500
We analyse the complexity of decidable fragments of first-order temporal logic on finite traces. To start with, we show that {{formula:a7b10dfc-3ef2-4c2e-a107-43a4293d697d}} -hardness holds already for the constant-free one-variable monadic fragment {{formula:0e1cc8b6-bef6-4049-9915-cd8ff5aefd89}} . This fragment can be considered as a notational variant of the propositional language of the two-dimensional product {{formula:760342ef-cb5a-4a2d-ab42-0edd561c6b1e}} , defined similarly to the product {{formula:19d5fc28-45af-4358-a472-09d551ab1ee2}}  {{cite:f6683a74750471a0eb961926b11aa17997f64578}}, where {{formula:1d0dc06e-1e7e-432e-a1f5-e572c8f6c431}} denotes {{formula:8b002122-35be-4e21-a46a-adb56403ce45}} interpreted on finite traces. In particular, the {{formula:4a5eff59-d79c-4f9a-8408-db877e0cbc82}} -modality is replaced by the universal quantifier {{formula:7e6f2779-e6c7-45f4-82eb-b3d29f923957}} , and propositional letters {{formula:9ce78c95-30c4-47fd-81e8-7da216cc6cb1}} are substituted by unary predicates {{formula:8b92f57a-4d4d-42ee-b3bc-d872fbfa5791}} , with free variable {{formula:6c63de8f-bd4f-4bc5-ac22-73f052caa801}} . The lower bound can be proved by applying similar ideas as those used to show hardness of {{formula:b0d3ad6d-f87c-4cf8-aec9-2e5d9c29d66c}} satisfiability. {{formula:bb7493b5-c126-48c0-9b8d-49251c968dbe}} formula satisfiability on finite traces is {{formula:a3b87555-7fa4-47a1-afaf-deef91c8d5eb}} -hard. The proof is an adaptation of {{cite:f6683a74750471a0eb961926b11aa17997f64578}} to the case of {{formula:84c38f0a-40cb-4ae7-8f79-7dfd69b9b8c3}} on finite traces. A tile type is a 4-tuple {{formula:5c012bcc-a16e-467e-94a3-1d6199b7bfc6}} of colours (from a set that we assume to include the colour white). Let {{formula:e7013bf0-a4c2-4efe-a404-0588adc53923}} be a finite set of tile types, with {{formula:8dfd2295-f2ab-4073-8ede-9bc00db19c39}} , and let {{formula:2e7a3174-d75d-45df-bbe1-9b14edc106fb}} , given in binary. The {{formula:c21bed90-77cb-4b29-b8bb-6e59ff2b3223}} corridor tiling problem is the problem of deciding whether there exist {{formula:8314d91c-8cce-4ff6-9872-bddbf708977a}} and a function, called tiling, {{formula:4f2d951c-46de-4105-b29c-5446208de4a4}} (where {{formula:5b832f51-f2f6-48c2-8e8e-7b00808ae41a}} denotes the set of all pairs {{formula:b05dd185-7237-481b-b8d1-2c49f371ebef}} with {{formula:cd25a54d-38f9-46bd-a4fc-cb95886a8343}} and {{formula:1b841b44-69bf-47b4-98df-1f615899df1c}} ) such that:
r
64954a6692b09030feca556efeb6c4ec
The first assumption requires all units to have a positive probability of being treated or untreated. The second assumption states that there is a well-defined population distribution for treatment, outcomes and covariates and that all units are drawn independently from it {{cite:e920acd90440ee5c3106cd10a44be2ddf776e0d3}}. Note that, formulated in this way, this assumption also indirectly requires the common SUTVA {{cite:b40625bc22d3717a0f4809222d79299ae23bd39c}}, i.e., the absence of hidden treatments and the independence of each unit's outcomes from other units' treatment assignments. The last assumption is the common requirement of observational inference, namely that all potential confounders of the causal relationship between {{formula:72154a56-663b-4ac9-92d8-0e85bc747afb}} and {{formula:af5e292c-92a6-4121-befd-10682a05cb82}} are measured {{cite:8397ae7f9a01e03f3a0926185922f8c0e7b39399}}, {{cite:3151905bc9dbc64f8f8060ca58f2c77979102d3b}}. Throughout most of the paper, we will be concerned with the problem of estimating the Sample Average Treatment Effect on the Treated (SATT) under the assumptions above. This quantity is defined as follows: {{formula:ab91c288-7619-4ba1-832f-da734bf8366e}}
m
73fc3f7afa6c6211e7621deba043244c
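To make the SATT target concrete, the toy simulation below computes it directly from known potential outcomes: the average of Y(1) - Y(0) over the treated units in the sample. This is only a definition check on synthetic data (where both potential outcomes are observable by construction), not an estimator under the assumptions discussed above; the data-generating process is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # a single observed covariate
propensity = 1.0 / (1.0 + np.exp(-x))         # overlap: strictly between 0 and 1
t = rng.binomial(1, propensity)               # treatment assignment
y0 = x + rng.normal(size=n)                   # potential outcome under control
y1 = y0 + 2.0 + 0.5 * x                       # potential outcome under treatment

# SATT: mean treatment effect over the treated units in this sample.
satt = np.mean((y1 - y0)[t == 1])
print(f"SATT = {satt:.3f}")

# For contrast, the naive difference in observed means is confounded by x.
y_obs = np.where(t == 1, y1, y0)
print(f"naive diff-in-means = {y_obs[t == 1].mean() - y_obs[t == 0].mean():.3f}")
```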
The symmetrical chirp effects on the momentum spectrum and the reduced particle number can also be discussed in terms of the turning-point structure of the potential in the semiclassical regime by employing the complex Wentzel-Kramers-Brillouin (WKB) scattering approach. The scattering potential is determined by solving the equation {{formula:fceae895-dbd5-4ae2-82bc-538a8c16bea0}} for the turning points, which appear as complex conjugate pairs when the potential is real {{cite:421193c05de5418f1d31a1a6348df3b9f0fbe6f7}}. When a single complex turning point dominates, the WKB result for the particle creation rate is given as {{cite:290bfa03ba7b3dcb7840dbd4590848d3907d6b01}}, {{cite:a15912891470cedfcfca47853b8642153bd5a762}} {{formula:75721635-1463-4bdb-9148-0ea37fb62898}}
d
321f20856fe851c46b721aa99822d017
To evaluate on the test set, we upload predictions to the Cityscapes benchmark server. We use the HS3-Fuse architecture trained using an HRNetv2-w48 {{cite:ef40d2475dc9159bd3c8edb3811feee3939248a2}} with OCR {{cite:8292175739b2ccb674bded40eaa5b7b6e9fa989a}} and Hierarchical Multi-scale attention (HMS) {{cite:447961b54775333523d6e1d4071902e7b1588757}} model as the backbone. As seen in Table REF, we achieve a gain of 0.3 mIoU and 1.7 iIoU over this baseline. We also outperform the previous state-of-the-art model (InverseForm {{cite:8249272a28183d1fb68c4b59e29f5c7ed05ffc69}}) by a margin of 0.1 mIoU and 0.4 iIoU. Our model ranks top in both categories among published results. We also show visual results comparing our approach to these methods in Figure REF . Details on the predictions obtained from other methods are given in the supplementary file. {{table:06957c08-fd19-42e5-bca5-0a4f9e37788f}}{{figure:3e19ebcd-6b81-4957-a72c-5e3dbb84a26b}}
r
e6ba7d7a890b935c55c3895236a87c51
Quantum computation is believed to be much more powerful than classical computation, especially for problems that are intractable for a conventional computer, such as prime factorization {{cite:e0af2eb5b012fd52cb92295bd5a2ec5dd5d73020}}, {{cite:b4b78d2c5e9182aa730ccc0a6146e84efd2e8dfd}}. However, full-scale quantum computing {{cite:57683ef914d443c3502f63eb751ba0747396c47c}} is still in its infancy. The challenge lies in its building blocks, quantum bits (qubits). Qubits are error-prone and difficult to control during any computation or interaction with the environment, which limits further increasing the number of qubits in a single processor. The question is then how to build a fault-tolerant qubit at the physical level, just as the classical bit is realized in magnetic memory. Topological quantum computation (TQC) is expected to address this issue {{cite:2ac4f093e517aa59fff5a3feb1d055631e3bde50}}, {{cite:b3fa2fb2d6ad9bc7cce58ae273b5757bd05205c9}}, {{cite:5ff08b79c6d7785bad725203fb8a1dbc7d31352d}}. TQC utilizes anyons, which have non-Abelian braiding statistics, to perform quantum computation {{cite:2ac4f093e517aa59fff5a3feb1d055631e3bde50}}, {{cite:b3fa2fb2d6ad9bc7cce58ae273b5757bd05205c9}}, {{cite:5ff08b79c6d7785bad725203fb8a1dbc7d31352d}}. Qubits in TQC are built non-locally into the quasiparticle states and are hence naturally immune to errors caused by local perturbations {{cite:2ac4f093e517aa59fff5a3feb1d055631e3bde50}}. The corresponding novel particle in high-energy physics is called the Majorana fermion (MF). In condensed-matter physics, one can alternatively use the Majorana zero mode (MZM), which shares similar statistical properties with the MF, to build TQC {{cite:2ac4f093e517aa59fff5a3feb1d055631e3bde50}}, {{cite:110d97fdb65fd22a00b6427878996a295900c55f}}, {{cite:1b3c772c4691610be6f5eb31ec02e2320f85bb96}}, {{cite:97713889aaa28f03865530d2b9fbcdcfaf002c2f}}. Up to now, MZMs have been experimentally observed in the vortex cores of artificial topological superconductors (TSC) {{cite:7f71c6973bd49db237bd42f10f48c4f741fd40b4}}, {{cite:d2edb5e3077bcce66dbecfe8116c4a4959aa11cf}}, {{cite:5430a0f2e7d3a121e7addf1077c43b24f5ce3223}}, intrinsic TSC {{cite:58c07be2568b6069a8e7767a8bef04e85b18260f}}, {{cite:1646d1f6e5c404b3905f77551ad7a3b681a75c51}}, {{cite:1020a657348894a512ce6e6e893594e8c949fc69}} and other systems {{cite:050979375f3f3a68fcd75a64215a93c705bfab06}}, which accomplished the first step towards TQC, i.e., the initialization of MZMs. The next step is to perform unitary gate operations for quantum computation, which can be realized with the braiding operations of MZMs {{cite:5ff08b79c6d7785bad725203fb8a1dbc7d31352d}} (i.e., moving one vortex around one or more others).
i
25196e05ac564c4a1f8af575c6ef71f4
CIFAR-10. For the CIFAR-10 dataset, we follow an analogous procedure. The comparative effectiveness of our approach is depicted in Table REF . Therein, we also compare to the best-performing models in {{cite:8594ce02528346e190529a2a52acc9a0ca4837f6}}, namely the Madry model {{cite:ccc39453ecb3668c0bd70b4e257b0a01a6070aaf}} and TanhEns64. In this case, the differences in computational burden are more evident, since the trained networks are now based on the VGG-like architecture. Specifically, our approach requires one to two orders of magnitude fewer parameters than the best-performing alternatives in {{cite:8594ce02528346e190529a2a52acc9a0ca4837f6}}. At the same time, it yields substantially superior accuracy under the considered attacks, while retaining comparable performance in the benign case.
r
4d06934e827b77dc9444d7f7acd3cd73
A typical set of eigenvalues of the linearisation about a fixed point of (REF )-(REF ) is shown in Fig. REF . Recall that we have discretised {{formula:690cc2e8-512e-4146-b78c-0f36e51010ee}} with 50 points. We see 49 points on the negative real axis, each corresponding to a perturbation localised at one or two neighbouring {{formula:2bc0ccf2-1dea-4f4b-88e1-6ce891edff28}} values. There are also 49 complex conjugate pairs with zero real part, each corresponding to a perturbation localised at one or two neighbouring {{formula:6bb2872c-3936-44c3-b8de-27ebb00f313f}} values. These 147 eigenvalues are associated with discretising the continuous parameter {{formula:68d0fee2-6352-4bba-9f27-f7dd9e364c51}} , and are presumably discretisations of the continuous spectrum associated with fixed points of (REF )-(REF ). There is also a complex conjugate pair with negative real part which, upon varying the relative sizes of {{formula:1a37dfff-13d3-4c76-92cb-c53ef50b18c3}} and {{formula:d31f095e-12cc-489f-b53b-da609f3a5d9f}} , could cross the imaginary axis resulting in a Hopf bifurcation {{cite:9b23646c0b3df1a35b3f21dc5a020ff840b6acb0}}, and the single zero eigenvalue corresponding to the invariance of the system under a global phase shift. As {{formula:c9f937f0-57e2-4690-93c9-3ebc27b5bbd4}} , the 49 negative real eigenvalues collapse to a single negative real value with multiplicity 49, and the 49 complex conjugate pairs on the imaginary axis collapse to a single complex conjugate pair on the imaginary axis, again with multiplicity 49.
r
9a6a44263e93323d690c73c7d2cc7942
In a previous work, we explored the use of a PCA-Net as a surrogate constitutive model at the macroscale {{cite:d61d054d40241360fcb177ba922d862e187e647f}}. The PCA-Net does not have a causal architecture (a trained PCA-Net, however, does learn causality from the data) and seeks to approximate the response over the entire map. The proposed RNO requires significantly less data to train and has a higher accuracy in both the two- and three-dimensional settings. Further, the current RNO can be evaluated locally while the PCA-Net cannot. For this and other reasons, its computational cost is smaller than that of the PCA-Net. Finally, the PCA-Net is restricted to computation on the fixed time interval on which the PCA is conducted; in contrast, the RNO can be used on arbitrarily long time intervals as long as the stretch trajectory remains within the realm of the training data. Therefore, the RNO is a better surrogate for such history-dependent materials. We anticipate that the RNO will similarly outperform other architectures such as graph neural operators {{cite:bcb7e671bdbf13d430843b40f8239f191ebbd343}} and the Fourier neural operator {{cite:dd01f808de54cdbc67f2451a7a95953b920f9035}} in this context.
d
15db3f59b44d74f74abea4edb08effae
Analogous results hold for the sample average approximation {{formula:5b0705be-8189-4fae-b20f-a6964b43a42f}} of {{formula:560b4cd1-8b56-4a1d-8146-0127269542b6}} and the sample average approximation {{formula:39951751-9126-40c2-b878-ef01bfa5c7a6}} of {{formula:13c67efa-3946-4203-b046-0aaec8bb1da0}} {{cite:e6e18a04ac5b1f9637025a009d2d1a86cdaf4fe7}}, {{cite:624f1bf1c8c2074cdace308a86f9b04b42b20657}}. In particular, note that if {{formula:c5c07918-ff8a-4f31-bf30-80ac3244070d}} is Lipschitz continuous and {{formula:c2b0463f-840b-4320-9bb4-d049e026975e}} then (REF ) holds.
m
aba0bdf56db9c12684a8faedbd32a5fc
Once the sparse graph is created, the goal is fed to the robot as a goal image, {{formula:6179d769-e5fe-40e9-be41-eebfd9d86613}} . The goal image has associated with it a set of frames based on the environment in which it is located. These are the frames the robot “needs to view” in order to associate the goal with a room ID {{formula:10426ebf-a006-45cc-ab72-7d13d6b7c52a}} where the scene is known to be available. During initialization, the network RoomNet is fed the queue, {{formula:37bfac53-9090-43cb-af77-6018679b7b80}} , consisting of the latest image frames. This provides a location that is used to initialize the additional module. It associates the room ID with an initial frame {{formula:2f6a69cc-be85-438c-b5e0-eac51107a9d6}} acting as the source node in the graph. Simultaneously, the user-defined goal image {{formula:95165b99-7ff5-4a05-b9e5-a5d519dabb46}} , is passed. Using the {{formula:988cf79f-bc36-4636-9dbe-55c9a613f1ec}} algorithm {{cite:e0d9ca874d4936d326f66d5f1c3ae3be90e5db6b}}, a hierarchy is then generated from the sparse graph, giving the room IDs, {{formula:78fc7e49-4bef-4b1a-881e-011617e5fa15}} , to visit and the transition images, {{formula:56a89e11-a81f-4c3a-b912-11c2d79b5e84}} , the robot needs to look for. Thus, the output of RoomNet is a {{formula:3c9ab69e-9da3-4649-97bb-c0b4a693a21e}} tensor with an associated probability {{formula:f9afcc2c-91fc-4593-964e-8acc4d291fce}} corresponding to the prediction. The tensor is multiplied with {{formula:5a47ea90-8651-497b-a020-7d8a76f0da4c}} to get the final output matrix. {{formula:1b1a2ec9-a918-4f0d-a493-519fdf7bf3d0}}
m
f9655093d00734823bf19c83f23e301d
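As a rough illustration of the planning step in the excerpt above, the sketch below runs a plain Dijkstra search over a small, hypothetical room graph whose edges carry transition images; the graph contents, edge costs, and image names are all invented for illustration and are not taken from the paper.

```python
# Hypothetical planning sketch: Dijkstra over a sparse room graph whose
# edges carry "transition images"; returns the room IDs to visit and the
# transition images to look for. All graph data here are illustrative.
import heapq

# adjacency: room -> list of (neighbour, cost, transition_image)
GRAPH = {
    "kitchen": [("hall", 1.0, "img_kitchen_to_hall.png")],
    "hall":    [("kitchen", 1.0, "img_hall_to_kitchen.png"),
                ("office", 2.0, "img_hall_to_office.png")],
    "office":  [("hall", 2.0, "img_office_to_hall.png")],
}

def plan(start_room, goal_room, graph=GRAPH):
    """Return (room_id_sequence, transition_image_sequence)."""
    dist = {start_room: 0.0}
    prev = {}                               # room -> (previous_room, transition_image)
    heap = [(0.0, start_room)]
    while heap:
        d, room = heapq.heappop(heap)
        if room == goal_room:
            break
        if d > dist.get(room, float("inf")):
            continue
        for nxt, cost, image in graph.get(room, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = (room, image)
                heapq.heappush(heap, (nd, nxt))
    rooms, images = [goal_room], []         # backtrack from the goal
    while rooms[-1] != start_room:
        parent, image = prev[rooms[-1]]
        images.append(image)
        rooms.append(parent)
    return rooms[::-1], images[::-1]

print(plan("kitchen", "office"))
# (['kitchen', 'hall', 'office'],
#  ['img_kitchen_to_hall.png', 'img_hall_to_office.png'])
```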
Reading someone else's handwriting is often a challenging task; some of the characters are unclear, the text is cursive, there is background clutter and the image quality can be low. When deciphering each character, we often rely on the surrounding area to compensate for the occasional obscurity of local areas in the text. The automation of reading text images has been a thriving field of research in computer vision for decades. Recent deep learning methods have significantly improved recognition results {{cite:fbb369e57bc1262ed69b423224118c1a8347abcf}}, {{cite:4c0a14abf443a03a9ec5e15b1f347a3fde73a376}}, {{cite:3d87ec1686007e2ffc747d2695c48b71a6af6164}}, {{cite:3936a733072631aab2789cedc41e91ddf45d2fc0}}, {{cite:66b05d9115e1d1b40424c61ff4cb0c0c0563bac0}}. Nevertheless, a close investigation reveals that state-of-the-art text recognizers are prone to overly rely on local image statistics (in this work, statistics are defined as the mean and standard deviation calculated from the corresponding distribution), ignoring cues from the surrounding areas.
i
203b7683e40e58ee46c14272dbab767c
We evaluate the performance of Scene Designer for both compositional sketch search (SBIR) and synthesis, contrasting performance against contemporary baselines for both tasks. We explore the efficacy of our model for both search and synthesis from sketch, image and mixed domain compositions. For SBIR we compare against four baselines: a scene-level technique (SceneSketcher {{cite:fbe8acfec5da1fc0e64e3c1ff191b2185d17d2dc}}) and three single-object techniques. Of these, two are fine-grained SBIR models (Sketchy {{cite:c81c651b363829d079c668b3a1a5c1ae03781f06}}, and Sketch-me-that-shoe {{cite:389b4b4bc023501219938e71c0075f8a763575e6}}), and one is coarse-grained (Multi-stage Learning or MSL {{cite:7cfebd451280c6098f8c85298c5b4110faa5fee8}}). For scene synthesis we compare against the sketch driven method of Gao {{cite:4c9d1f7502905d0d954a7007f298ce175534dc22}} (proposed alongside their SketchyCOCO dataset) and the method of Ashual {{cite:9db7ba150e0943125c71d5ade086c574f8605bec}} that accepts semantic scene graphs (spatial arrangements of keywords) as input.
d
9b2ef83ee8bd4fe012eb5c983ba5f0fc
agrees with the asymptotic estimates in {{cite:751d350a49fbd07e4c67591abb0e8226140969d2}}, {{cite:a195a20e457f10519a6bec407e385b76869c296a}} (recall from Remark REF that {{formula:48d43c85-5bb7-490e-b524-860092f52ad9}} , where {{formula:07c25efd-b9a5-43f6-bca2-6533b488480c}} denotes the small parameter in {{cite:751d350a49fbd07e4c67591abb0e8226140969d2}}). Similarly, the strong contraction property described in Assertion (b) does not depend on {{formula:87bf204b-fe9b-4c83-8947-b157300460f3}} . This explains the similarity between the dynamics observed in the {{formula:11261f62-001c-4d47-91c7-64329ce6ce40}} -plane and the dynamics near a stationary regular fold point; c.f. Figure REF and {{cite:751d350a49fbd07e4c67591abb0e8226140969d2}}. The dynamics in different cases, i.e. for differing values of {{formula:3b5a37c8-ce1e-4162-a9cd-26e532140436}} , are primarily distinguished via the angular dynamics and in particular, the number of complete rotations about the {{formula:c76b4a29-0973-484a-8205-6fa8032569f6}} -axis during the transition from {{formula:4a9c18ec-0296-42a3-9ee3-70ed35639788}} to {{formula:5bf62fa4-90eb-46a4-bbc5-f9507701d8ad}} . Since {{formula:f0a1eb3f-759a-4515-a080-3eeab0f3f47f}} , a solution with initial condition {{formula:2bf15a23-5487-4d48-b18a-30364f4bf03a}} undergoes a total of {{formula:b119d4c8-b47b-4c71-8555-50773df3593c}}
r
87326e3e59ac0f0c1b53e72f48c580e9
The granular dynamics does not have the same characteristics in different granulator geometries and may favor more or less collisional or frictional contacts between primary particles, influencing the redistribution and transport of the binding liquid. The granulation process is easier to model and control when the agglomeration is governed by binary collisions between particles, as in granulators based on fluidized beds or high shearing by impellers. Such processes have been extensively investigated in application to the pharmaceutical industry {{cite:0dc76b31478dcc4fbead816df123f10c7a8d77d5}}. In contrast, in drum granulators the particles agglomerate in a dense granular flow down an inclined rotating drum. Drum agglomeration has the advantage of being a continuous and robust process, but since the rheology of dense granular flows is a matter of current research {{cite:99bef001671be86c2190748258c3d92fc75cdc5b}}, {{cite:b66453144a1f0552eb1d5f48f721ad0ec6b48f8b}}, {{cite:3bb6ae74b49b8b60b1da7596612b4a4bd460c0ae}}, the agglomeration mechanisms in this geometry remain rather poorly understood {{cite:b0ba688fac2c9ee298a8b699e796ed2fa46eb22c}}. Granular flows in an inclined rotating drum may show several flow regimes {{cite:ed30796a8c16a01f06551ba277db82d461ca95a6}}, {{cite:6c5583dbcf40a659a325f4e4d30673e4b9dcb809}}, {{cite:f77bd3ccfc77c5e0109b617156e16029df81cdf1}} with the common feature of being dense and inhomogeneous, and involving inertial effects {{cite:7f860906fc29cf31c7849717f14a474b31f6e26a}}, {{cite:0ec1cd589a382d5bd2a8e8b5ea82c821a37df637}}, {{cite:85ae9058d88be79fc4a0c8d2e4d20c6c228ec6be}}, {{cite:5b9611e340830c9c9f7eb3d7b4688a9520fe2ffc}}, {{cite:00ad95ed719eee54f589a0685c88606a404ef18f}}, {{cite:f0596bbefcb24c372abfe65dfff3c5f6ab3c57d8}}, {{cite:4dc501a70784ef20c3500e1194e447f019858023}}. A practical difficulty with drum granulation is the in-line monitoring of the kinetics, which makes it less amenable to theoretical understanding; such understanding is required to improve drum granulation plants, which often suffer from a significant recycle of undersize and crushed oversize granules {{cite:b0ba688fac2c9ee298a8b699e796ed2fa46eb22c}}.
i
39287e8152bb2292f9411424abe7df55
Security is another important topic; it is a common issue in autonomous and cooperative distributed systems, especially open systems. In wireless ad hoc networking, a friend mechanism {{cite:802b6fe83443b2501982f499dfced0c1441976f3}} can be helpful for providing basic security. Anomaly detection, adversarial detection, or trust mechanisms should also be studied, as in {{cite:a3f40c9039e9ff7e64e6e3fef34e68e8f9e3ca73}}, {{cite:0e6313609e70bde3caf63a47aa90c4fab366e069}}.
d
080989ecd631289020bfa8719bec8a89
subject to positivity constraints for each index 1, ..., L, which leads to a square system when {{formula:ac0e4adc-e775-459d-a7f6-963116e2277b}} and an overdetermined system when {{formula:b2301130-b8cd-472f-9292-121c411c19f6}} . We would like to point out that there is no theoretical result for the choice of the interval {{formula:42a61db2-8b6b-4832-b64e-3f0b9d8ef24c}} and the log-spaced sample points. In the implementation, we use the program SolvOpt {{cite:f373951a1388882994c310ecf0421c170fb5cbfc}}, {{cite:67372b576e15b0998113e3f0640986904873c367}} to solve the above nonlinear constrained optimization problem. The modified Gauss-Jacobi quadrature formula is used to initialize the iterative process {{cite:447788fc8dca317fbe0d3a9388f5091add8df196}}, with which the positivity of the starting points is satisfied. More precisely, by introducing {{formula:080ec263-4fa3-4194-9f19-f7850ec50bcf}} we derive {{formula:2fe0cd40-18d9-41d3-959a-b4bbc50c1b4b}}
m
e9754123d75080e97fdd9cd119ffec5d
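A minimal sketch of the kind of constrained fit described above, using SciPy's bound-constrained optimizer as a stand-in for SolvOpt. The sum-of-exponentials ansatz, the target function, and all numerical choices are assumptions made only to show the mechanics: log-spaced sample points, an overdetermined system, positivity of all unknowns, and positive starting values.

```python
# Illustrative constrained least-squares fit at log-spaced sample points.
import numpy as np
from scipy.optimize import minimize

L = 5                                    # number of (weight, node) pairs -> 2L unknowns
t = np.logspace(-2, 2, 4 * L)            # log-spaced sample points (overdetermined: 4L > 2L)
f = t ** -0.5                            # toy target to be approximated

def residual(x):
    w, s = x[:L], x[L:]                  # weights and nodes
    model = (w[None, :] * np.exp(-s[None, :] * t[:, None])).sum(axis=1)
    return np.sum((model - f) ** 2)

# positive starting values (in the text these come from a modified
# Gauss-Jacobi quadrature rule; here we just use log-spaced guesses)
x0 = np.concatenate([np.ones(L), np.logspace(-1, 1, L)])
bounds = [(1e-12, None)] * (2 * L)       # enforce positivity of all unknowns
res = minimize(residual, x0, bounds=bounds, method="L-BFGS-B")

w_fit, s_fit = res.x[:L], res.x[L:]
approx = (w_fit * np.exp(-np.outer(t, s_fit))).sum(axis=1)
print("max pointwise error:", np.max(np.abs(approx - f)))
```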
To handle problem (), we formulate a semi-definite program by using the Gram matrix of the unknown vectors in the problem. Indeed, we form the Gram matrices {{formula:322ecd8a-dbd2-4a58-ba04-d2f181552823}} and {{formula:ac9232cc-c9fd-4773-b7a0-bf034cc79899}} corresponding to {{formula:96e30ae7-dd1f-462b-94e3-3b435720f8f5}} and {{formula:e88b5335-0a23-43de-b847-e26c56bf9a18}} ({{formula:450029dd-e22e-444d-8154-96bf9767d02d}} ), respectively. The interested reader can refer to {{cite:649ea61b54f092dc090c751d648f1dc601b01e75}}, {{cite:c4008477d6882c359f0732d39613c541d58e2a58}} for more details concerning the Gram matrix formulation.
m
6f528f77fe7b62cb3640928824a71a6f
In recent years, various methods utilizing deep learning architectures for extracting embedding vectors have been proposed and have shown better performance than the i-vector framework when a large amount of training data is available {{cite:968a51e8221169a313987f7763b85fc6e4722c6b}}. In {{cite:18f14988c226b0609572561df2afaf257c190520}}, a deep neural network (DNN) for frame-level speaker identification was trained and the averaged activation from the last hidden layer, namely the d-vector, was taken as the embedding vector for text-dependent speaker verification. In {{cite:968a51e8221169a313987f7763b85fc6e4722c6b}}, {{cite:60cc9217fdea7c9359640a7b1b3a4ffa737ee5ca}}, a speaker identification model consisting of a frame-level network and a segment-level network was trained and the hidden layer activation of the segment-level network (i.e., the x-vector) was extracted as the embedding vector. In {{cite:e44e7c4587a99b362d9de56adf26e47b0adbe028}}, long short-term memory (LSTM) layers were adopted to capture the contextual information within the d-vector, and the embedding network was trained to directly optimize the verification score (e.g., cosine similarity) in an end-to-end fashion. The end-to-end d-vector framework was further enhanced in {{cite:f0495f307a627b99c317944712379374261af5cc}} by applying a different weight (i.e., attention) to each frame-level activation while obtaining the d-vector, which enables the embedding network to attend more to the frames with a relatively higher amount of speaker-dependent information. In {{cite:0f85255d603cdb5b0d0b67e8de3e05c91c636381}}, a generalized end-to-end loss function, which optimizes the embedding vector to move towards the centroid of the true speaker while moving away from the centroid of the most confusing speaker, was introduced to train the end-to-end d-vector system more efficiently. In {{cite:89af0e898f804fe5bccc24e1e46b98632f0c6eed}} and {{cite:504fe274193033656e9751674a3d6a0041175170}}, a variational autoencoder (VAE)-based architecture was trained in an unsupervised manner to extract an embedding vector for short-duration speaker verification. Despite their success in well-matched conditions, deep learning-based embedding methods are vulnerable to performance degradation caused by mismatched conditions (e.g., channel, noise) {{cite:7e4d33d319d8c76ff809465a7d9f23683bbff63d}}.
i
d316ace32c8d4630057088501ce6e1e0
A promising direction, NOTEARS {{cite:8984fafa3a34fb760c9924cac49479a35ff4c45d}}, formulates a smooth characterization of acyclicity that can be incorporated into a continuous optimization and solved using well-known numerical methods. NOTEARS was later extended to parametric nonlinear models and nonparametric models {{cite:91f75c460c3cca99dae13be46001c6f4ffcc9fd8}}. GOLEM {{cite:543042310745fa2fffa668ef6ce9e5fbf14ac8ad}} also adopts a continuous optimization framework; however, it uses a linear DAG learning model and does not capture non-linear relationships. A different approach looks at identifying cause-effect pairs from observational data using statistical techniques {{cite:56d68ec2e01ade3df53d068169e47bae6db31adc}}, {{cite:acabb264b5ef274e4654351e2dd0dcf54020c6e9}}. Singh et al. {{cite:d69abeb79a95fa4f891a05c755a98939d73ae810}} use deep convolutional neural network (CNN) models to determine the directions of pairwise causal edges from observational data. Hassanzadeh et al. {{cite:d2edc38123db5c7e699a500453c01aaa9cedb7ab}} formulate pairwise causal discovery as a binary causal problem, asking whether any causal relation exists between two variables in the context of Natural Language Processing (NLP). Nevertheless, they have not studied how the predicted edge directions can be used to provide a solution to causal graph identification.
i
c76c7f8eb00a8188e589f07e406905e0
The Tsfresh set, on the other hand, included a `relevant subset' of 63 different time series characteristics such as absolute entropy, kurtosis, skewness, etc. (the interested reader may refer to https://tsfresh.readthedocs.io/en/latest/text/list_of_features.html for a complete list of time series characterization techniques) and a total of 794 characterizations, or aggregates (tsfresh computes 794 features by default), for each time series feature at the window level. Each aggregate is a time series characteristic with a specific configuration of its parameters. For instance, approximate_entropy is a time series characteristic, while approximate_entropy(m, r) is an aggregate for an integer m (length of compared run of data) and a positive float r (filtering level). Since too many irrelevant features may impair the quality of models, tsfresh also selects relevant features from the exhaustive set of aggregates it creates, based on statistical hypothesis tests {{cite:adf4cbe3cb58b562e1efc18425f69232c6674ef9}}. The final set of `relevant' tsfresh features included 496 features in total for the LDP task and only 6 features for the BEP task.
r
7bf5c75fa420c5e630160f795d09d8b8
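A hedged sketch of the tsfresh workflow described above: extract the exhaustive set of per-window aggregates and then keep only the statistically relevant ones. The long-format column names, the toy data, and the binary target are assumptions for illustration; extract_features and select_features are the library's actual entry points.

```python
# Toy tsfresh pipeline: exhaustive aggregates -> hypothesis-test filtering.
import numpy as np
import pandas as pd
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# long-format frame: one row per time step of each window (assumed layout)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "window_id": np.repeat(np.arange(20), 50),
    "time": np.tile(np.arange(50), 20),
    "value": rng.normal(size=20 * 50),
})
y = pd.Series(rng.integers(0, 2, size=20), index=np.arange(20))  # toy labels

X = extract_features(df, column_id="window_id", column_sort="time",
                     column_value="value")         # ~794 aggregates per window
impute(X)                                           # replace NaN/inf in place
X_relevant = select_features(X, y)                  # keep statistically relevant ones
print(X.shape, X_relevant.shape)
```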
Although the flux phase or {{formula:974c0b7a-b0d8-47bb-888b-6ad3e575a360}} CDW is a candidate for describing the pseudogap, its existence in the {{formula:76ae9827-165d-4ffd-bd83-6dbf9815e1df}} -{{formula:3665acf9-d0a7-42d6-a0bb-4c85f760d227}} and Hubbard models is controversial. While some reports show the presence of the flux instability or its fluctuations{{cite:71caa2015c0ba777040151664b44ee6fe3b6a213}}, {{cite:0437e7339b2120cce6c3c3d7c854f3095e813cd8}}, others do not{{cite:270333445ef4887132d96b1f89efe432e7a8a270}}. The FP is also controversial from the experimental point of view. While the authors of Refs. [chakravarty01,honerkamp04,lee06] show that a series of experiments in the pseudogap phase can be described in the context of the FP, angle-resolved photoemission spectroscopy (ARPES) experiments do not show pockets but Fermi arcs{{cite:52a4b406e23837940a8c72d81e651a2a982e2f2d}}, {{cite:a045bf19559d96543f14d0388386066183afaf33}}, which are considered an indication that translational symmetry is not broken in the pseudogap. In Refs. [greco09,greco11,greco14] the interaction between the flux-phase fluctuations and carriers in proximity to the flux-phase instability leads to a reasonable description of the Fermi arcs and Raman scattering without the need for translational-symmetry breaking. Recently{{cite:8537760adf627d73dde6ebf7f2ecab68822bb4a3}}, it was proposed that the FP is a good candidate for describing the pseudogap.
d
7f96d876e09cf4e6be0502dff85a8c31
The material 1T-TiSe{{formula:80d66d9d-6de8-42bf-b881-582383fa0b76}} has long been known to have a low-temperature CDW phase. The origin of the CDW is still controversial but may be caused by a Coulomb-interaction-induced excitonic condensate {{cite:4ce989573f09c7937f817cc2a4c1d9a74f7c45dd}}, {{cite:e9f161c2f7c8c0c200b459712b528a3c09edd1b0}}, {{cite:ff4b425b5e1474138d2a79dbad80532b8ec12138}}, {{cite:ff1832a123957cb3c4f8c11f659bec888343f3c8}}, {{cite:9df7fe6800f55bf23b2ac993cbbb00b64ea80167}} or electron-phonon coupling {{cite:63d351f58948983a7f1cc4fd163454f702b5a886}}, {{cite:696e074dace2c76365e999deb441e2149e32af95}}. Interestingly, a signature of inversion symmetry breaking was observed in (3D) bulk 1T-TiSe{{formula:f41581ee-b4c4-46f3-ae33-dfa5ec52e3bb}} with CDW {{cite:6f1f86e906956178353dd362edfa0f7326a801a4}}. Our previous work has thoroughly examined the symmetry as well as the topology of bulk 1T-TiSe{{formula:7d83f18d-28ea-4b16-acdb-47891d52c950}} {{cite:fb48f51cf5bddec7c4f24b5f54520cbf380992d3}}. Since the combined operation of inversion with translation along the vertical {{formula:94761692-9d0f-4b2e-8936-ebca90b9b5be}} axis remains a generalized inversion, which also corresponds to an inversion with respect to a shifted center at {{formula:c4c7c184-e6af-4ad8-a3c2-40c9cc761dbe}} , the technique of the {{formula:4f273f35-ea78-49ef-8a8a-db898bdab557}} calculation using products of parity eigenvalues at time-reversal invariant momentum (TRIM) points applies {{cite:7d402b65f2403119086c1cd6468eda4a55cdf16c}}.
i
bd3b42c2a814a6c22cb732e318232b20
where {{formula:f916cbc8-62cf-4514-aaa3-ed2bd8095241}} are the fitting parameters. Our results for the form factors are presented in Table REF and compared with the results from the light-cone sum rule (LCSR) {{cite:2d9b0fea2c6e8edf5b2fadbb941958e1e2543677}}, where the form factors are parameterized by {{formula:d9a8ab0d-f035-4f4d-933c-8ba24b1892ac}}
r
3fa2c03b1d78e61354b812fc9ef9268a
Super-class Based Performance on COD10K. The challenging dataset COD10K {{cite:2f113267c36256bd0cf9035abdbd0509566f5d09}} contains five super-classes: “Amphibians”, “Aquatic Animals”, “Flying Animals”, “Terrestrial Animals”, and “Others”. Due to limited space, we only report the quantitative results of each SOTA method on the four dominant super-classes, as shown in Table REF . We observe that our model significantly surpasses the best competing method, SINet, by an impressive gap, e.g., a 5.4% increase on “Aquatic Animals” and 6.4% on “Flying Animals” in terms of {{formula:05664a2b-2451-47c0-8546-0152444a387f}} .
m
c737cdbfdc2222a6f7ef74120d76073c
where {{formula:8bc63823-16d7-4ee9-b5b6-4cef0b58163e}} is the {{formula:787d0e18-2cce-4750-b92c-4834c84d9142}} matrix of the parameter differences, {{formula:51c451b5-0dd1-48a6-a98d-6db8318afe4d}} , {{formula:319f9190-0363-4366-a2ea-2a3616f142cc}} are the covariance matrices for the corresponding parameters in the Hipparcos catalogue and the (propagated) Gaia data, and {{formula:3864f71c-3b73-437e-ac3a-af6b366d1ab3}} is the transpose of {{formula:47fe7e34-f8a4-4b28-b639-6ec2f75c79e9}} . Confidence intervals of the estimated {{formula:688d8d64-2744-430e-a334-52c981ad92bd}} were obtained from the increase in {{formula:8c4586bb-2a8f-4501-b481-655240b90010}} around the minimum value; in particular, the 68% confidence interval ({{formula:a3b5da5e-e853-4736-8e11-efead9227a1d}} ) is where {{formula:3beb1f3e-a12a-421f-b03b-45010fbad95b}} {{cite:34f80101d16e97a6805c2b25396764ea34a666cf}}. For the Hipparcos data, we used the re-reduction by 2007ASSL..350.....V, with covariances computed as described in Appendix B of 2014AA...571A..85M.
m
8b61dbcdcd6d5b0b6def41f8b5e0aa75
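A minimal numpy sketch of the quadratic-form comparison described above. The covariance matrices, the parameter-difference vector, and the single scanned parameter g are toy assumptions; the point is only to show the statistic d^T (C_H + C_G)^(-1) d being scanned and the 68% interval being read off where it rises by 1 above its minimum.

```python
# Toy chi-square scan over one trial parameter with combined covariances.
import numpy as np

C_H = np.array([[0.8, 0.1], [0.1, 0.9]])     # toy Hipparcos covariance
C_G = np.array([[0.05, 0.0], [0.0, 0.05]])   # toy propagated Gaia covariance
d_obs = np.array([1.3, -0.4])                # toy parameter differences
u = np.array([1.0, 0.0])                     # assumed direction of the fitted effect

C_inv = np.linalg.inv(C_H + C_G)
g_grid = np.linspace(-5, 5, 2001)
Q = np.array([(d_obs - g * u) @ C_inv @ (d_obs - g * u) for g in g_grid])

g_best = g_grid[np.argmin(Q)]
inside = g_grid[Q <= Q.min() + 1.0]          # 68% interval for one parameter
print(f"g = {g_best:.3f}  (68% interval {inside.min():.3f} .. {inside.max():.3f})")
```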
These results suggest that Rb and Cs masers are unlikely to occur frequently in M- and L-dwarf atmospheres. A survey of 10 giant stars and two globular clusters for the 6.8 GHz {{formula:1019787f-7f1c-4a54-8ce0-f9a6b2c9886d}} Rb maser by {{cite:7d9afb73f5573c5b24874a285d26bfdb32c4cec4}} likewise made no detections. I suggest that the search should continue, perhaps toward other types of stars and in the interstellar medium.
r
d5ad1d53c0452b638e4fa09e3d2f5d13
Feature extraction for each modality is separate. This allows the fusion process to be independent of point-wise or sequential operations. This design is motivated by three reasons. First, independent encoding allows for retaining the complementary information from both modalities. For example, image features are not simply appended to sparse and narrow point cloud slices {{cite:8a4c8255619c3a4bb0f5e7a317e1e4342ad2ae03}}, which would sacrifice their richness and density; this allows image features to provide needed context that is wider than the slice. Second, independent encoding boosts robustness to small camera-LiDAR miscalibration, missing points, or missing camera images, as discussed in Sec. . Third, parallel encoding allows for better utilization of parallelization hardware. {{figure:f07272c0-5207-4266-a419-9e0db2c37751}}
d
a2289465793fee69e9205aedd3d95942
We describe the results on the UCF-101 {{cite:c8d0afc02f93fa569fc0a435c15989c4bd9e9768}} and HMDB-51 {{cite:c6683c2043473c59b49f24958875315c236e335c}} datasets using ImageNet or Kinetics pre-training in Table REF . From the results, we can examine the effect of the augmentations under model pre-training. Although the performance improvements are smaller with pre-training than with training from scratch, DA still increases performance. {{table:f4b7fae5-7b71-4851-a237-5d4005a14341}}
r
e10325be0201f73e68f86b4c8192ea85
Within both the EfficientNet and XCiT results, we see the desired model scaling effects where accuracy improves as the network increases in size. This signifies that the Sig53 dataset is sufficiently challenging for this ML network research. We can also see that XCiT significantly outperforms EfficientNets in terms of parameter efficiency. An important point of note is that within the vision domain, to match the accuracies of EfficientNets, XCiT networks require both a heavily regularized training schedule and a convolutional teacher {{cite:d5b7fe474940165153661d6e6edb9ad8d1ade8a6}}. Here, we do not augment at all and do not use a convolutional teacher, but we are able to outperform EfficientNets by a large margin. We hypothesize this difference comes from the fact that chunking is not required here, as our signal consists of 4096 elements, compared to the tens or hundreds of thousands typical in the vision domain. This chunking introduces dependencies between samples that are likely very difficult to recover. Specifically, samples at the edges of chunk boundaries are originally locally close but become far apart in both time and channel space. ConvNets do not suffer from this due to their natural locality bias and their ability to examine the image directly in two dimensions. {{figure:a5d31fa0-2774-4086-922a-fbb5c18938bc}}{{table:4cb6d718-e83b-4f6a-b334-f6bebd5dbc4f}}{{figure:5e1c0982-2168-4d26-8ccc-96f0cf381e5d}}
r
5f7bbaac2e14e7d0d9b1a49eb6b6ab91
Estimation of uncertainty represents a first step in understanding the imperfect information in our natural surroundings. Future directions of interest include: 1) improving the post-processing of the uncertainty by implementing algorithms that result in better models (e.g. the unscented Kalman filter or particle filter), or 2) modifying the way these networks learn from uncertain information {{cite:09e3cd1e2b9db6c263af375c834da23479c72307}}. Such architectures could weight the training samples according to their uncertainty while rejecting out-of-distribution data. In our view, as the field of tactile robotics matures in its use of deep learning, the estimation of uncertainty will become a key component in the control of physically interactive robots in complex environments.
d
44f81b67dbd2a9e59320ee129e40081b
We compiled the model with XLA {{cite:681b1992a18115c89cdc649d0700e91ec9f54585}} and ran inference for the 15th frame in scene 8907419590259234067_1960_000_1980_000, which has 68 vehicles and 69 pedestrians, on an Nvidia T4 GPU. The latency is 43 ms, more efficient than the popular real-time detector PointPillars {{cite:3e559592854d80be84c2611fa13e1d84e1864fa8}}, which takes about 100 ms on the same GPU with our own implementation. With fused transformer GPU kernels and optimized GPU sparse window partition operations, the latency can be further reduced to 20 ms. {{table:22ca0e18-aea6-4d03-83ae-02989cfe051d}}
r
0361aa9e79490b499960bf9167fb7bc9
Let {{formula:2e280c36-4bd6-4991-9784-fe1be64fc15a}} denote the {{formula:c738026b-f3a5-4b35-8e2e-4dfa985080c2}} -dimensional area measure on {{formula:4c7e5f31-bf10-4c69-a4ba-63cf51b36b68}} normalized to be a probability measure; that is, {{formula:228d6e50-36a3-48f8-a192-afd39a5b4424}} , where {{formula:ddfcd91a-982b-4aed-aa18-0683c64bfe94}} is the {{formula:fd906a11-ef4e-4212-8393-0cabfe2344eb}} -dimensional Hausdorff measure in {{formula:9b563ca7-fed7-48a9-927d-9901d6f7de57}} . Following {{cite:bf8d4247f0febd77a52b624fbf35e68c7d01ba3b}}, we call a point configuration {{formula:678cd962-7aca-4b8b-b565-4226b185b06b}} a spherical {{formula:1731c121-bdc8-45ec-8a7a-c6d5db6d6db4}} -design if, for every polynomial {{formula:b8875712-82a9-40b4-adcf-3d04f24ba183}} on {{formula:9d334ce5-b3cb-4719-bdd2-09fc43096320}} of degree at most {{formula:cb8ee005-65d2-45b8-b0ae-5aab4e1555d1}} , {{formula:eda7a991-d313-4b18-bde1-416c61094426}}
r
011c1922c19fe41029ba78a2e9d46f4d
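A small numerical sanity check of the spherical-design property stated above, on the simplest case of the circle: N equally spaced points on S^1 average every polynomial of degree at most N-1 to its normalized integral. The dense reference grid and the monomial test set are illustrative choices.

```python
# Verify the t-design property for equally spaced points on the circle.
import numpy as np
from itertools import product

N = 8                                            # number of design points, t = N - 1
theta = 2 * np.pi * np.arange(N) / N
design = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# dense reference grid approximating the normalized arc-length measure
phi = 2 * np.pi * (np.arange(200000) + 0.5) / 200000
ref = np.stack([np.cos(phi), np.sin(phi)], axis=1)

max_err = 0.0
for a, b in product(range(N), repeat=2):
    if a + b > N - 1:                            # only test degrees <= t
        continue
    design_avg = np.mean(design[:, 0] ** a * design[:, 1] ** b)
    integral = np.mean(ref[:, 0] ** a * ref[:, 1] ** b)
    max_err = max(max_err, abs(design_avg - integral))
print("worst deviation over monomials of degree <= N-1:", max_err)
```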
We evaluate clustering algorithms by four widely used metrics, unsupervised clustering accuracy (ACC) {{cite:59e7b5396ff42e592a68ddac462c9d991b8ca3a0}}, normalized mutual information (NMI) {{cite:af824d248a2f05d2facd21570ad5aec8c98292f9}}, adjusted rand index (ARI) {{cite:af824d248a2f05d2facd21570ad5aec8c98292f9}}, and Silhouette {{cite:e24a16e1136d9c600fc9b834dde02acdd2d2da39}}. Note that the values of ACC and NMI are in the range of 0 to 1, with 1 indicating the best clustering and 0 indicating the worst clustering. The values of ARI and Silhouette are in the range of -1 to 1, where -1 indicates the worst clustering and 1 indicates the best clustering. There is no standard method for setting the two hyperparameters, the block size {{formula:b0c0adf6-182a-46d8-9e8c-d47302c2d513}} and the threshold {{formula:e63c752f-ac54-4743-84ed-1f26ebbcf582}} , so we set {{formula:fe1c9e29-9cd2-45d2-b795-2f9eb80bbbe8}} by grid search and set {{formula:5a6ecf44-51e3-4a0b-858e-080188907d10}} adaptively. Specifically, we first set the hyperparameter {{formula:28520346-3a5b-4114-8af5-1be5e66dcca2}} to indicate the percentage of excess for all samples. Then we sort {{formula:397a5cfd-35d9-433d-8d24-0792436c34df}} and set {{formula:aa91ead4-bf32-4d42-b84a-742d3c9c8867}} to the {{formula:c4f6b1c5-6769-429c-a797-e43911b31683}} -th upper percentile of the sorted {{formula:6573a684-8f86-46a0-983e-e2577274b8be}} . Furthermore, we set the percentage of excess {{formula:184a5b50-5c85-44c9-a5f3-5f8268ee68db}} to 0.2.
r
cadde96006d6c84ba24e4dea2bfe4334
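A small sketch of the adaptive threshold rule described above: the threshold is set to the upper percentile of the sorted per-sample values so that a fraction alpha of the samples exceeds it. The variable names and the toy scores are assumptions.

```python
# Adaptive threshold as the alpha-th upper percentile of the sample values.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.gamma(shape=2.0, scale=1.0, size=1000)   # toy per-sample values

alpha = 0.2                                            # percentage of excess
epsilon = np.percentile(scores, 100 * (1 - alpha))     # upper alpha-percentile

print(f"epsilon = {epsilon:.3f}")
print("fraction above threshold:", np.mean(scores > epsilon))  # close to alpha
```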
where {{formula:5315c7fc-9b37-432b-99a6-6821d6fa52d5}} {{cite:a6de100eb60d6d773e9064400b205906be7a447c}}. In addition, {{formula:0c4e7a30-ef94-4866-9c00-952090718832}} and {{formula:3f6abda1-9f17-4c94-99e5-e6b0f34bc868}} are fixed by fitting the corresponding decay widths. In particular, since the {{formula:31944c13-7738-46c4-834d-03ee812b4ef8}} dominantly decays into di-pion, the {{formula:fb1f1a48-e36a-4bdf-931e-56d728ee70cd}} {{cite:f4c04e65893155852297615b6e816c56dde58c39}} can be used to determine the coupling constant {{formula:75e2e8b1-44ec-45b2-982f-bc840d504d93}} . For the {{formula:5270a404-9d18-486a-986e-1f0787d38319}} , it dominantly decays into a pair of pions or kaons, so the ratio {{formula:d1be868c-d107-43b4-a058-3805c1e7fcc0}} {{cite:f4c04e65893155852297615b6e816c56dde58c39}} is used to determine {{formula:31694599-a6fd-4332-8c9e-35c06c842b72}} and the corresponding coupling constant. Meanwhile, the relation {{formula:945734e2-39f1-40c2-8313-06065c93d2b7}} is adopted. {{figure:63e56edf-dc8d-4fed-8b3a-bd8c84028ff7}}
r
9302b750b065854a5758049423c6dd60
We use ResNet-18 {{cite:94c4707b3757a0295b1af3a30b20a4392fcf2994}} as the backbone network, a two-layer nonlinear MLP as the projector, and a linear predictor. Unless specified otherwise, SGD is used as the optimizer with weight decay {{formula:71f5c135-e84b-418c-8928-676761d4a055}} . To evaluate the quality of the pre-trained representations, we follow the linear evaluation protocol. Each setting is repeated 5 times to compute the mean and standard deviation. The accuracy is reported as “mean{{formula:3d20e37c-aee2-405e-a63b-082be154250f}} std”. Unless explicitly specified, we use learning rate {{formula:a9373bef-6869-48cb-8660-a32af99a4c44}} and regularization {{formula:51878a88-61fc-49cc-9da5-78a7d7ae0dc6}} on STL-10; {{formula:594a7483-6095-4c4a-a1f3-e2180d1a9992}} on CIFAR-10 and {{formula:5e0772bd-3a42-4b8b-9966-1856e290f1e8}} on CIFAR-100. See more detailed experiment settings in Appendix . {{table:ae8bafb7-93cd-4d39-a522-7a420143f5dd}}
r
403ce6848064fe7b2898308cd9fb7ed8
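A hedged PyTorch sketch of the components named above (ResNet-18 backbone, two-layer nonlinear MLP projector, linear predictor, SGD with weight decay). The hidden and output dimensions, the learning rate, and the weight-decay value are placeholder assumptions, not the paper's settings.

```python
# Backbone + projector + predictor wired together with an SGD optimizer.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(num_classes=512)         # final fc acts as a 512-d head

projector = nn.Sequential(                   # two-layer nonlinear MLP
    nn.Linear(512, 1024),
    nn.BatchNorm1d(1024),
    nn.ReLU(inplace=True),
    nn.Linear(1024, 256),
)
predictor = nn.Linear(256, 256)              # linear predictor

params = list(backbone.parameters()) + list(projector.parameters()) \
         + list(predictor.parameters())
optimizer = torch.optim.SGD(params, lr=0.03, momentum=0.9, weight_decay=1e-4)

x = torch.randn(8, 3, 32, 32)                # toy CIFAR-sized batch
z = projector(backbone(x))                   # representation -> projection
p = predictor(z)                             # projection -> prediction
print(z.shape, p.shape)
```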
Finally, FedTriNet will retrain the client model using both the real labeled data and the pseudo-labeled data. Note that since there are three models in each client, we choose to retrain the finetuned combined model, which is significantly different from FedAvg {{cite:c87c2e45b107882bc54536fc03cb600aa9888774}} and FedSem {{cite:aaa270592e7bb97de8ac02d5649e2f4902aa7ebe}}. We evaluate the proposed FedTriNet on three benchmark image datasets under both IID and Non-IID data distribution settings, comparing against state-of-the-art baselines. Experimental results show the effectiveness of the proposed FedTriNet framework.
i
2ecd0f26c53b3cbd1619047ea15a76d5
Programming languages are similar to natural languages in many ways, and natural language translation has been studied extensively. Sequence-to-sequence translation models, which map input sequences to output sequences, have achieved great performance {{cite:02e10252638c9e844885e3211752a173ced0124b}}, {{cite:638caf5ee9121bef43db03cfe4156173109736f8}}, {{cite:af93cb8a8e200521ffe40cce3b17329c793809e8}}, {{cite:d9306ddb445de3c1d311a044684e2f1442093bdb}}. While similar to natural languages, programming languages have a distinct structure which makes it harder to use the same tools for translation. For instance, the RNN-based sequence generator, which easily generates phrases in a natural language, finds it difficult to generate long syntactically correct programs {{cite:4dc44630d715b23ea9dc7919a012d6f227415620}}.
i
882dc5e77f47145135b146c3017381ec
We would like to address some related topics for further consideration. First, are there any algebras characterizing this type of vertex operators? In other words, are these vertex operators representations of some algebras? Date, Kashiwara and Miwa {{cite:6399571f36834cb27858c0f1c0a18b6aa87719f2}} found that the vertex operator related to the affine Lie algebra {{formula:eb560d53-e188-4c70-b9d4-5499bafd4354}} {{cite:07af33e2733411f5b0faced5416bf1f25d50f08b}} can be used to define a symmetry group of the KdV {{formula:2515d060-d507-4dbb-9db3-f19009710de5}} function. This built up a beautiful connection between integrable systems and affine Lie algebras via vertex operators {{cite:6399571f36834cb27858c0f1c0a18b6aa87719f2}}, {{cite:dff830d12f562a1ca36bb03cae745bd959514642}}, {{cite:b834030ea234e5b18a6d947961ca1870dae67939}}, {{cite:01ff052bafc207ca53fad262dcab3f5d8236bd2e}}. However, so far we have not found any similar algebraic structures behind our vertex operators (excluding the rational case). The vertex operators (REF ) and (REF ) can be considered as elliptic deformations of the usual vertex operators of the KdV equation and the KP equation. Without an algebraic structure, one can still investigate such deformations on vertex operators of other integrable systems (e.g. {{cite:dff830d12f562a1ca36bb03cae745bd959514642}}, {{cite:b834030ea234e5b18a6d947961ca1870dae67939}}), and in particular, of discrete integrable systems (e.g. {{cite:d36f570d4402a73621731a5a0ceb5955a4d573d5}}, {{cite:eaa019d8e91c8b190fdcc4ecc72fd80b30afc196}}, {{cite:01990ae540cb781cd22a837e39e1319ca10b2f33}}, {{cite:124bb1599235db9289566241b6a9d84a22a55c19}}, {{cite:bfac480f7faa17309516fa4973ba6e9d2e51cb36}}). In addition, note that {{formula:3acb775d-aec4-4886-9218-71191e69da69}} is an initial solution in our scheme, and meanwhile it is the 1-gap and 1-genus solution in light of the finite-gap integration approach {{cite:dd25673e2842d9c5b892fca4649dfcbc3b5c6f07}}, {{cite:b88dcb25e3443e619a8a3b9a35152b6097ab1c1e}}. It would be interesting to clarify the eigenvalue distribution of the corresponding spectral problem where the potential is an elliptic multi-soliton, and to recover these elliptic soliton solutions from some analytic approach, e.g. the inverse scattering transform. Finally, there are vertex representations for quantum affine algebras {{cite:c373ff25888a2ffb1d41763256753b3e0d9e840b}}. It would also be interesting if such elliptic deformations could be extended to quantum vertex operators.
d
dadd46ce0ad6d0b2d04a0c32dfc01cd9
Table REF lists the results of each method across the datasets that we propose. Our new measure and new datasets reveal several important properties of non-homophilous node classification. Firstly, both methods that only use node features and methods that only use graph topology appear to perform better than random, thus demonstrating the quality of our datasets. Secondly, the stability of performance across runs is better for our datasets than those of {{cite:25d0fdf82b83ee0b5dd6803a46892da5900cc48f}} (see {{cite:0968006db138223535749a229b513de57d368717}} results). Moreover, as suggested by prior theory and experiments {{cite:0968006db138223535749a229b513de57d368717}}, {{cite:6fda67ceb8d69ea3603b8c64d2350808624f8ddc}}, {{cite:a5bda46dbf86d8a1517f053c1db621197d99406e}}, the non-homophilous GNNs usually do well — though not necessarily on every dataset.
r
8b752bebb9ce5699a78a8acbecf991f4
Figure REF is an extension of Figure REF in the main document, and is produced under the same experimental conditions described in Section REF . The top row of Figure REF subplots shows the targeted success rates (tSuc) versus query counts ({{formula:f8d9e1e4-8966-43e8-9285-8ee738972a77}} ) for the (snake, lizard), (big-cat, house-cat), (large-vehicle, small-vehicle) and (beetle, insect) scenarios from DINS Test 1. The bottom row of subplots shows tSuc vs {{formula:1bd6d0f9-3693-4c9a-957d-4f2bbfc654b1}} for the (spider, insect), (mustelids, monkey), (house-cat, dog-spaniel) and (large-vehicle, train) scenarios from DINS Test 2. RGF represents the query-only Random Gradient Free {{cite:a0d14281e1daa2b2e1d161d6b1ec32c83ad4fc25}} baseline attack; TMIM+RGF represents the RGF attack warm-started with the TMIM transfer attack direction; and FDA+RGF represents the RGF attack warm-started with the FDA transfer attack direction. Finally, all results are averaged over the six individual blackbox models in the corresponding test environment.
r
06e989a595522d8c3f0cdc1134ffbd81
This paper demonstrates (1) what makes Vanilla-SC fail and (2) how Regularized-SC fixes that problem. One key motivation for Vanilla-SC is that it relaxes a discrete optimization problem of minimizing graph conductance {{cite:273da8d47be518bef70a01698ec51d0368c1582f}}. Yet, this graph conductance problem is fragile to small cuts in the graph. The fundamental fragility of graph conductance that is studied in this paper comes from the type of subgraph illustrated in Figure REF and defined here.
i
de4c5914759a0bbc9ba0c958d4641d23
To proceed, we must first consider the mass of a debris disc. Defining {{formula:ed9138f6-1256-4da0-9284-8321f47a88ea}} as the number of particles of radius {{formula:fda321f6-91ac-45fc-b2e8-e33d568986a7}} in the range {{formula:f8b085ca-99de-4fb6-b221-dedeff0ea8e0}} , the debris disc size distribution is often modelled as a single powerlaw {{formula:58500455-c020-4695-b9b4-0925bbc72910}} , with {{formula:83ad5985-5b57-4995-97f5-3ac7854a48d5}} expected from destructive collisions {{cite:3921eb83671a6e2255eb72bee097d8943f84e585}}. However, {{cite:45ec97a6ff8e38fa5182c5b9fe34c85f1c827de8}} argue that debris is unlikely to follow a single power law all the way from dust up to the largest planetesimals, and that a 3-slope model is more realistic; in this case there exist transitional sizes {{formula:bfbec50d-81a5-444f-bc88-029e372f4dfa}} and {{formula:cbea7671-a56f-47ea-8bed-f697d163d093}} , where the size distribution slope changes due to different physics affecting debris. For grains smaller than {{formula:919444dc-4657-4371-9132-5bc2dce3bec6}} , radiation effects are important and the size distribution is defined to be {{formula:bcf3b7e9-4c57-4367-81d0-74098191c7ca}} . Larger bodies are unaffected by radiation forces, so debris larger than {{formula:25f25a6a-4113-422a-8827-65327d294e5f}} but smaller than the largest colliding body {{formula:c95f6c46-83a5-44b6-8cfd-1efd875c3076}} have their size distribution set by destructive collisions, with {{formula:7b44a71a-d4fb-4efb-8efe-dac1105d0d04}} . Finally, bodies larger than {{formula:05b044d5-af52-43e0-81da-115ee72f6e04}} are not yet involved in this collisional cascade; it takes time to stir large bodies such that they undergo destructive collisions, so bodies larger than (time-dependent) {{formula:796f0fba-ffa3-4d5b-929c-c15cca370775}} are primordial and not yet colliding. Hence the size distribution between the largest fragmenting body {{formula:021f85bd-cc2b-48d7-aed5-d2acb4b5e06d}} and the largest body {{formula:aa8fa38e-65ad-4b1b-b0f2-40f8fb5f8759}} goes as {{formula:c2ee2eea-8e29-4c09-9995-2d88c73c9850}} . As in {{cite:45ec97a6ff8e38fa5182c5b9fe34c85f1c827de8}}, we fix {{formula:1f1aa9b4-4145-4821-89b2-0fe769e6e027}} , {{formula:c3dc7ed1-ad01-4a6a-b8d3-3978b9a04111}} , and {{formula:4ba5e864-1e85-42b9-999a-b42998991990}} , noting that these values describe indices and should not be confused with the disc pericentre locations {{formula:dbd47076-dbfe-4910-a6ae-16534152ba38}} and {{formula:96ca2f48-029b-470f-9e27-433e3f380ee2}} . Integrating over the size distribution yields the total disc mass in the 3-slope model as {{formula:f2aa2900-655c-4156-905e-e6c209605d80}}
m
7230d1c03b5d68a60ce3aacd57f88482
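An illustrative numerical version of the mass integral described above: a three-slope size distribution, matched continuously at the two break sizes, integrated against the particle mass. The slopes, break sizes, grain density, and normalization are placeholder assumptions rather than the fixed values used in the text.

```python
# Integrate a three-slope size distribution against the particle mass.
import numpy as np
from scipy.integrate import quad

rho = 2700.0                                       # kg m^-3, assumed grain density
D_min, D_bl, D_pk, D_max = 1e-6, 1e-3, 1e2, 1e5    # m: smallest grain, two breaks, largest body
alphas = (3.0, 3.5, 2.8)                           # assumed slopes on the three regimes

def dNdD(D, A=1.0):
    """Piecewise power law dN/dD, continuous at the break sizes."""
    if D < D_bl:
        return A * (D_bl / D) ** alphas[0]
    if D < D_pk:
        return A * (D_bl / D) ** alphas[1]
    return A * (D_bl / D_pk) ** alphas[1] * (D_pk / D) ** alphas[2]

def mass_integrand(D):
    return (np.pi / 6.0) * rho * D ** 3 * dNdD(D)

segments = [(D_min, D_bl), (D_bl, D_pk), (D_pk, D_max)]
M_disc = sum(quad(mass_integrand, lo, hi)[0] for lo, hi in segments)
print(f"total disc mass (arbitrary normalization): {M_disc:.3e} kg")
```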
By now, much research effort has been dedicated to providing security through physical layer methods. A power control scheme is proposed in {{cite:37df540b608e440d2722b6233d419d421f8df761}} to ensure that an eavesdropper can never reach its desired signal-to-noise-plus-interference ratio (SINR). However, such a scheme is not effective when the eavesdropper has a better channel than the receiver. The technique of artificial noise generation has also been widely explored to jam eavesdroppers and provide secure transmission in relay communications {{cite:0e305c830e8551f0d30a85f3ba487f9a99d19856}}{{cite:326085e5c0466582bd2175b2c7376775f3810468}}{{cite:4f060a5af5cb9b3f4ba66935df95bd23e4b6d869}}{{cite:e621061608897a037ac4f1da11b29a2df462090f}}. Recently, cooperative jamming through node cooperation has been demonstrated to be efficient in ensuring physical layer security {{cite:3ce69b7a5120b062ef2ed90385c0b5c2e164d0ba}}{{cite:0aaae35952f4be2603714718cb281b2ac260e7a3}}{{cite:79aa891a4ddbde0d93eb5b81cbede683a662b4d7}}. It is notable that these schemes generally rely on knowledge of eavesdropper channels and locations to jam eavesdroppers. In practice, however, it is difficult to obtain such information, particularly in untrusted network environments. To address this constraint, a cooperative protocol based on artificial noise generation and multi-user diversity has recently been proposed in {{cite:73737372d3a87ae9344408dcea00098fcaef99aa}} to achieve secure transmission in two-hop wireless networks without knowledge of eavesdropper channels and locations. In particular, the asymptotic behavior of this cooperative protocol has been reported there to illustrate how the number of eavesdroppers the network can tolerate scales as the number of system nodes tends to infinity.
i
50edc6ccb67e01db3a2973a90176cfca
From a learning perspective, generating the distribution of successful 6-DoF grasps is quite challenging, because the distribution is multi-modal, discontinuous, imbalanced and ambiguous due to (self-) occlusions. Furthermore, direct regression in high dimensional output spaces like {{formula:7815d9c8-abf2-4694-b283-63b213401b9c}} has been shown to be difficult in grasping {{cite:7ab4c898fc4c60bdfd333f73eed39f49922e8ffc}} and also in related fields such as object pose estimation {{cite:5b96c7f0991bb5d5b00a4d9dd7e35164c800f61e}}.
m
a9a80d24fa7cb5b125eb671c5e555f20
Our results for the branching ratios from the LFQM and {{formula:410ec1c4-8040-49e5-bc1a-0045bdc46946}} are given in Table REF , where we also show the {{formula:4de26c6c-769b-4a91-a697-c39e9070e8a8}} evaluations in Ref. {{cite:4a644c7e8cb52112f6f70897aa60c3c3506cb253}} and some other theoretical predictions in the literature, such as LCSR {{cite:b955b57498dd6dca0649d066db0c134f985f041b}}, {{cite:348101159804a01057fd7c412d57b4f8ec446577}}, BSE {{cite:613ab497d58844dc6d5f3278cac70f7e806c3a9f}} and QM {{cite:7b71fa26d669bd7279c5f75fa34b1f9513c71b8c}}, {{cite:4a170aa1c02622132f942aa5561d97440dc0726b}}, as well as the current experimental data {{cite:92cb6e9da8f5c80711538194c6e7dbc044f5e7a3}}, {{cite:a18e09bcc7fc6988715c22c13ca624c9ee7324a3}}. In particular, for {{formula:c596094e-d383-4f89-839c-93011c1bbd72}} in the LFQM approach, we find that {{formula:2e114b35-3af7-4508-9dab-c5543b07b656}} , which agrees well with the experimentally measured value. In addition, we obtain that {{formula:a7cc195f-c685-4355-9aab-40cb464bccd7}}
r
e9c19e77ec652920f0d0faba36701af6
The contribution {{formula:48af5549-eefd-4314-82e9-f4c7b78d14ec}} from the host galaxy is an unknown factor that enters FRB observations for most of the unlocalized FRBs. The value of {{formula:07ed3b3e-3dc3-4cf3-a8a5-4b4674b04bd5}} is expected to vary from FRB to FRB depending on the host galaxy and the location of the FRB within it. However, the FRB detections suggest that {{formula:a8879a2d-6fb9-4c38-ad0b-d4b3d4c76de0}} may not exceed the value {{formula:60c93dc4-60a5-4415-b3a4-fab41fb0090d}} {{cite:f808511acc438fbc0ecd7dd3587df9205ab0932f}}. In addition to this, the observed DM will have another contribution {{formula:dfe1c407-646b-4fe6-80fa-974025efc1ae}} from the Galactic halo {{cite:f05c55017b20d86fe65a5e3056ec2dbeaa3fbbc7}}. Here we have absorbed the {{formula:51a1dc8c-a5d7-4bee-a7c3-62195ba42419}} contribution in {{formula:fede6bb4-2976-41aa-8afe-d10c6f76c4cb}} . In this work we consider two scenarios for {{formula:e1516b21-2526-438c-94be-5e487bd31c5b}} , namely (a) DM120, where all FRBs have fixed {{formula:9d1514b6-a414-452d-ba92-3a03982cd3c6}} , and (b) DMRand, where {{formula:233947a7-b020-4a53-b3d1-32536acf7cb1}} values are randomly drawn from a Gaussian distribution with mean {{formula:c23d87a1-9eb7-46da-90f6-a7f766fddd86}} and root mean square value {{formula:5570a4c4-ce36-47fb-8792-7e2d2bb62c7c}} . The {{formula:4756c9f4-be2c-41d3-9bbf-a75486364906}} distribution is truncated at 0.
m
fc59c07d0e017c7192a4e90fe51195b9
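A short sketch of the two host-galaxy DM scenarios described above. The fixed value suggested by the scenario name and the mean and rms of the Gaussian are assumptions for illustration; the truncation at 0 is imposed as in the text.

```python
# Draw host-galaxy DM values under the two scenarios (fixed vs truncated Gaussian).
import numpy as np
from scipy.stats import truncnorm

n_frb = 10000
mu, sigma = 120.0, 50.0                 # assumed mean and rms (pc cm^-3)

# (a) DM120: every FRB gets the same fixed host contribution (assumed 120)
dm_host_fixed = np.full(n_frb, 120.0)

# (b) DMRand: Gaussian draws truncated at 0
a = (0.0 - mu) / sigma                  # lower bound in standard-normal units
dm_host_rand = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                             size=n_frb, random_state=1)

print(dm_host_fixed[:3], dm_host_rand[:3].round(1), dm_host_rand.min() >= 0.0)
```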
r2v-config performs the best or nearly the best for all graphs (Figs. REF A–F). It consistently outperforms other random walk-based methods in all cases despite the fact that node2vec and r2v-config train the same model. The two methods have two key differences. First, r2v-config uses baseline {{formula:9ba7d43e-e589-4153-815f-851e902b182b}} , whereas node2vec uses {{formula:9f3d26da-85ab-481b-9986-15f205f68bf2}} that does not exactly fit to the degree bias. Second, r2v-config optimizes the model based on a matrix factorization, which often yields a better embedding than the stochastic gradient descent algorithm used in node2vec  {{cite:e98765157aaec60e1b1ad1a976612510f2d9e04d}}, {{cite:5e458494e56295ea7c518077ea29c6ba00139141}}. The performance of residual2vec is substantially improved when incorporating offset {{formula:b713b396-cbfa-4bf6-920d-6b258b9c8575}} , which itself is a strong predictor as indicated by the high AUC-ROC.
r
34a7c8648acf555719e37a47cc2f5330
The exponent depends in general on the interaction strength {{formula:338e3f46-c982-4063-b587-2f856c1c62d8}} , vanishing for {{formula:4a8d227e-3c48-4eb9-ac5e-e7c8d332b62b}} . The self-similar form (REF ) is borrowed from the one-particle quantum walk, for which the ballistic law {{formula:4fa77931-50f8-4b5c-84fb-9f8a713b3711}} applies {{cite:86aa30e3803af65ef0e01e436f9689f082203ac4}}, {{cite:531b1db769c96a6fc2fd9c4ddb9b785c71946c93}}; this is certainly appropriate when the probability distribution is dominated by the motion of the bound state of the two particles, for then we have an effective one-particle walk, or in the other limit, when there is repulsion (as in the antisymmetric case) and the two particles are almost independent. The fact that the characteristic exponent varies with the interaction can be related to the leakage of one-particle position probability into correlations with the other degrees of freedom: the effective motion of one particle is no longer ballistic.
r
c2319307507988e7120444055d7d75bd
The intrinsics and extrinsics of the camera and LiDAR are vastly different from each other. Data in both modalities need to be re-organized under a new coordinate system. Traditional early and deep fusion methods utilize an extrinsic calibration matrix to project all the LiDAR points directly to the corresponding pixels or vice versa {{cite:816968e742a793aa06edcf23c4a66ccabefc3286}}, {{cite:2f9b50d1c3570070dafd17fb3c5111eefaa65a2a}}, {{cite:0be266779cfe48ae947662b63414f7e5e3aa8ae6}}. However, this point-by-pixel alignment is not accurate enough because of sensor noise. Therefore, in addition to such strict correspondence, some works {{cite:214376182eca77b33fb0adcd99d08093927ede9a}} that utilize the surrounding information as a supplement achieve better performance.
m
b8b3b442a0b73919de5fece510e9e598
Several previous studies have defined analogs to high-{{formula:1b540fc3-f761-460c-9e71-6e68e3068a12}} galaxies using a variety of approaches. For instance, {{cite:20d90a600a4269111d8c5935c3ed8e19102faaa8}} and {{cite:8f7d08a7f684dc8310f02f9ce18fbd49bdb170a1}} found that ultra-violet luminous galaxies (UVLGs) with high surface brightness have characteristics that are remarkably similar to Lyman break galaxies (LBGs) at high-{{formula:1ca7f7b4-2214-4d8e-908d-3f3a5525c4ea}} . UVLGs are rare in the local universe, but a significant sample can be defined when extending the search to {{formula:55ca6a14-68a0-49b2-899c-2d95d69a6953}}{{formula:94d0ca1e-c2e9-4baf-b303-e9d035cb3b41}} 0.2. These Lyman Break Analogs have {{formula:fb472843-30bf-421b-a038-9db439b17fb3}}  L{{formula:b1da7137-1a5a-44e7-bb98-c3613afbfd31}} , star formation rates (SFR) of 3-30 M{{formula:3fa490b0-bb2e-434d-95b9-e2685c27658e}}  yr{{formula:c5fe001b-f628-4669-878c-f993f8d7610d}} , sizes of a few kpc, and sub-Solar metallicities. In a similar manner, {{cite:2eed8875c65e85baed9a9149a5e775aee304f36d}} defines analogs of high-{{formula:2771e505-ed48-4856-ba32-bd982e6ef6bd}} LBGs based on their UV luminosity. {{cite:adfdbf3aac9e5f8a9fb76c25aa9765272374fd62}} uses H{{formula:ef5e4ed0-d2af-410a-9b14-8af1a20e88dd}} equivalent widths as a requirement to identify local galaxies that have Ly{{formula:cca2b2ba-9819-4a60-bad7-4028e48f6dcb}} luminosities comparable to high redshift Ly{{formula:92ce2512-772e-41ae-aecc-8b1e6656a76a}} Emitters (LAEs) and LBGs (the Ly{{formula:b15bebb2-4a9f-4948-8603-07f8b3d5bc52}} Reference Sample, or LARS). The galaxies comprising LARS are small, metal-poor, {{formula:855b9347-3a52-4f1f-97f5-75a13e9fcb3c}} 0.2-0.6 Z{{formula:4906d39f-35a2-47bc-b231-c8bef6c6ec24}} , gas-rich, with an average gas mass fraction {{formula:9c8363e6-dbfd-427d-90d8-cf7bd73fc46c}} 0.4, and SFRs ranging from 0.6 to {{formula:dcf3154e-d978-46f4-a1c4-472f37342ed0}} 20 M{{formula:f912b350-cd80-4bf9-abf7-d4169f232271}}  yr{{formula:4f77fd70-61f4-46c0-b2b4-6127459ed37f}} . They are morphologically identified as dwarf irregular galaxies ({{cite:4395b6cb5aaaba3d2b780f9d58611acf5531429c}}). Interestingly, most of the LAE and LBG analogs defined by {{cite:adfdbf3aac9e5f8a9fb76c25aa9765272374fd62}} are also far-infrared bright, showing the presence of significant amounts of dust in their interstellar medium, including one galaxy with a Ly{{formula:19a9d7f1-a06d-45e5-80d9-229776d70a62}} escape fraction of {{formula:fc1aa5dd-4f7d-4442-823e-5bd4b26f15cc}} 12%.
i
c9ea8d547bafde0f9043e0736f63c5dd
CVT Algorithm and Optimality. In general, it is difficult to prove guarantees for CVTs {{cite:36f6b98393f1a80f211729bec331f2cfdb95d681}}. Liu et al. provide theory on CVT convergence {{cite:6adb6827f3d8aff2b03f3055408ec631b107ee93}}. Despite theoretical challenges, CVT algorithms adaptively minimize energy and are argued to have practical expectations of convergence. For our algorithm this depends on the definition of {{formula:04e107cf-c23e-4f8b-9b95-c0d11c1b6fc5}} , the new target site position in the update step of Lloyd's algorithm. As site density increases and/or the restriction complexity decreases, {{formula:62022b1d-da3b-477a-a2fb-14699f7f0171}} approaches the Euclidean centroid. While the tessellation will evolve from less optimal to more optimal distributions, we are not able to prove theoretical guarantees. A more optimal definition is also possible, but it is unclear what that should be in order to balance efficiency and quality. Our implementation can be extended or improved in the future by redefining {{formula:a45d2c72-b2db-4ca2-9979-0d7c00070daa}} and modifying the update step. As an alternative to Lloyd's algorithm, one can also try an approach based on the limited-memory BFGS method for large-scale optimization (L-BFGS) {{cite:8b0a7c8966c322e92a7b4c2f574436cb160adec2}}.
d
d90e84d1432be1646a26c59fee54bda6
We first assess whether sentence encoders yield coherent representations of meaning by computing the Spearman correlation between the human ratings present in the SICK benchmark {{cite:dc4124a06c6d9ef51f22e2dc68b15c562eaf2aaf}} and the cosine and Euclidean distances between the two corresponding sentence embeddings; as with word embedding spaces in Section , we expect a significant anti-correlation between the two. Figure REF summarizes correlation scores for Skip-Thought {{cite:aa0e4841919fc2aa53eb474a902a8edb6a914da2}}, InferSent {{cite:909500eeb1d6161b4dc846082d57d8868f06edea}} and the Universal Sentence Encoder (USE) {{cite:63f969c9fb77f3e2fbbd53f9bd8793b13ef5bea3}}, along with a randomly initialized (and untrained) Transformer {{cite:bf1c81a03a1e57882ba7fed553d9a06ee7cfdb7d}} for perspective. We observe that USE yields the most consistent semantic representations and thus decide to focus on this particular model in the following.
m
b3c9c636e68f8128a96c4a1c7a5d9d6d
Although current methods for generalized MORL are sample efficient, they combine preference values with expected values in a linear manner to derive decisions or to learn expected value functions. When observed in the space of per-objective expected returns, linear methods can identify policies whose expected returns belong to the convex coverage set, which is a subset of the Pareto coverage set {{cite:19fad13033f88fc3fda1554b8d48611d3b9fd640}}, {{cite:495f67499bdf09ff9cc32c6be6548d54d679455b}}. In practice, this limitation can lead to (i) agent behavior that is sub-optimal concerning the preference at hand, (ii) situations where balanced solutions are not found {{cite:f212a053e17d81da5ae5962fa938f6922ce986f4}} and (iii) situations where small changes in the preference values cause huge changes in the policy {{cite:a64f6059160c6eee9d88df535ca3f75a0c8c3705}}. Non-linear MORL methods aim to overcome this problem. Basic approaches for non-linear MORL include non-linear scalarization {{cite:af8fdd02a1f8e86908ba5d59154dbb45f727411a}}, the explicit storing and pruning of sets of non-dominated Q-vectors {{cite:7a1386a468f8f276d081e59bd6a16c6e7ba5801d}}, {{cite:cddf8138f27c5cff614fb814e78f884c2f927375}} and objective thresholding {{cite:4fc1d96f49e41a5c8d2ff4f06fd64a1f54bb3373}}, {{cite:a64f6059160c6eee9d88df535ca3f75a0c8c3705}}, {{cite:75bed84b9291d822e049408eac5294f967bc7f60}}. While early algorithms for non-linear MORL {{cite:af8fdd02a1f8e86908ba5d59154dbb45f727411a}}, {{cite:f398b3bdb4bc9827cf19ebe190d4b7cb87c69207}}, {{cite:cddf8138f27c5cff614fb814e78f884c2f927375}}, {{cite:4fc1d96f49e41a5c8d2ff4f06fd64a1f54bb3373}}, {{cite:a64f6059160c6eee9d88df535ca3f75a0c8c3705}} are based on tabular reinforcement learning, more recent methods {{cite:222e84618bb5f03143675ebf1fe3d20c0af720dd}}, {{cite:75bed84b9291d822e049408eac5294f967bc7f60}}, {{cite:c061bc3fbb84470930901ba51fbc341126ee39de}} are based on deep reinforcement learning and are therefore also applicable in real-world applications with high-dimensional and often continuous state descriptions. The Pareto DQN {{cite:222e84618bb5f03143675ebf1fe3d20c0af720dd}} is a deep MORL version of the Pareto Q-Learning (PQL) algorithm {{cite:7a1386a468f8f276d081e59bd6a16c6e7ba5801d}}. It aims to directly approximate the Pareto front and, to the best of our knowledge, is the only published approach that aims to solve multi-policy MORL problems in a non-linear, inner-loop fashion.
i
ac9ed016fdf0696ddb4c4d827f4c1e15
Challenge 1: Scalable self-backhauling design. The majority of prior works rely on point-to-point links, e.g., {{cite:bff7897ec9e57eaaa3f67058810cd3ad59d9d801}}, {{cite:a80360b7c9903ebc841932facf88f7882b7b8f56}}, between the MBS and SBSs, which is unscalable in dense SBS deployments. The scalability issue is addressed in a handful of works; e.g., {{cite:81aed1e2ccbc1cf1fa70aa0a66d4343c750fe98b}}, {{cite:ae3025e63aeafb89ea556e52976899736e759d2e}} assume that SBSs are capable of multi-layer successive interference cancellation (SIC). While this assumption simplifies traffic transport, it involves heavy computational tasks (i.e., SIC) not suited for SBSs. Thus, to keep SBSs economical for the operators, it is necessary to reduce the computational burden of SBSs by developing practical backhauling mechanisms.
i
917215289851fe1fd190f595ceb3e7ed
Computation Time (CompT). Fig. REF compares CompT for different numbers of participants {{formula:4ddce9d7-c57b-41e5-8416-5da407ceef20}} and different numbers of training passes {{formula:7a71b4be-f8f3-434c-8c89-0b3834e210f3}} . In the experiments, we use ResNet-18 and normalize the overheads. As we can see, more participants lead to smaller CompT, i.e., it takes a shorter time to converge. However, the difference is insignificant among 10, 20, and 50 participants, especially when the number of training passes is large. In addition, we can see that larger {{formula:cfc64e71-f743-4c86-9169-415b4e0de825}} has worse CompT, though there is no apparent difference between {{formula:cfe1518e-c357-487a-a2d4-0873ab29a98e}} and {{formula:3b693045-2b20-4216-a8f4-0b1f49e81e7f}} . In a nutshell, the common belief that more participants make FL model training faster is valid. However, the gain from more participants is insignificant when the number of participants is moderate. In addition, it is preferable to adopt a small number of training passes to achieve good time efficiency.

Transmission Time (TransT). Fig. REF plots TransT, which clearly shows that TransT favors larger {{formula:a30058ca-1517-4c4d-8414-453744deb288}} and {{formula:3948ce87-6a29-4f3e-b95e-a5c811a9fd27}} . Since TransT is dependent on the number of training rounds {{formula:d21238b0-d581-43ec-ae6d-28e2754ae8b3}} (Eq. (REF )), it is equivalent to the metric of round-to-accuracy. Our measurement result is consistent with common knowledge (e.g., {{cite:26561746d3957e616d3f99d4354df805e76d14a8}}) that more participants and more training passes lead to better round-to-accuracy performance. We can also observe that when {{formula:2f2e2f2b-989c-461e-9bca-d675a64ed8b9}} is small, e.g., 1, TransL is much worse than in the other cases.

Computation Load (CompL). Fig. REF shows CompL. We make the following observations: (1) More participants result in worse CompL. The results indicate that the gain of faster model convergence from more participants does not compensate for the higher computation costs introduced by more participants. (2) CompL increases when more training passes are used. This is probably because a larger {{formula:f47ac19e-335d-427b-8b4d-2bca436d6cd4}} causes the model training to diverge {{cite:3f33411b99204cb1b297d87a64ff10829fdf0f02}}, and thus the data utility per unit of computation cost is reduced.

Transmission Load (TransL). Fig. REF illustrates TransL. As shown, more participants greatly increase TransL. This is because more participants can only weakly reduce the number of training rounds {{formula:1268fa71-4f93-44dc-b0c3-57423af70361}} {{cite:63d8019be83cbd68111ff19637533c8ea9c505dd}}; however, in each round, the number of transmissions increases linearly with the number of participants. Regarding the number of training passes, larger {{formula:e7521e1e-0cd6-4d06-b9fa-12ca5077aab5}} reduces the total number of training rounds {{formula:01b406bc-86c4-47af-b6d1-91038441bd3a}} and thus has better TransL. On the other hand, the gain of larger {{formula:e7eeed3a-38d4-4f2c-91e2-a3dff8288456}} diminishes. The results are consistent with the analysis of {{cite:63d8019be83cbd68111ff19637533c8ea9c505dd}} that {{formula:1f7b61e2-4d7a-4b6a-9a8d-7b69079a8e1c}} is hyperbolic with {{formula:6c7f0dbd-6701-47a4-8803-c026e8f8b863}} (the turning point happens around 100-1000 in their experiments).

Model Complexity. Table REF tabulates the models used for comparing training overheads versus model complexity.
In this experiment, we select one participant ({{formula:19d306d9-288f-47ff-831f-054a0430ea96}}) to train for one pass ({{formula:a39125d4-2b38-4346-b31e-656f5f6f0767}}) in each training round. Fig. REF shows the normalized CompT, TransT, CompL, and TransL for different models. The x-axis is the target model accuracy, and the y-axis is the corresponding overhead to reach that accuracy. Since only one client and one training pass are used in each round, CompT and CompL have the same normalized comparison, as do TransT and TransL. The results show that smaller models are better in terms of all training aspects. In addition, it is interesting to note that the overheads of heavier models increase faster with the target model accuracy. This means that model selection is especially important for applications that require high model accuracy.
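To make the interplay between these four metrics concrete, the following is an illustrative sketch, not taken from the paper, of how CompT, TransT, CompL, and TransL can be derived from per-round quantities; the function name, the (K, R) pairs, and all numeric values are hypothetical.

```python
# Hypothetical illustration (not from the paper): how the four overhead metrics
# relate to the number of participants K, local passes per round, and rounds R.
# All numeric values below are made up for demonstration.

def fl_overheads(R, K, t_local, model_mb, link_mbps):
    """Return (CompT, TransT, CompL, TransL) for one configuration."""
    comp_t = R * t_local                         # wall-clock compute time: rounds run sequentially
    trans_t = R * 2 * model_mb * 8 / link_mbps   # per-round up/down-link time (parallel uploads assumed)
    comp_l = R * K * t_local                     # total compute load: every participant trains every round
    trans_l = R * K * 2 * model_mb               # total traffic: every participant exchanges the model every round
    return comp_t, trans_t, comp_l, trans_l

# More participants shrink R only weakly, so CompL and TransL grow with K.
for K, R in [(10, 120), (20, 105), (50, 95)]:    # hypothetical (K, R) pairs
    print(K, fl_overheads(R, K, t_local=30.0, model_mb=45.0, link_mbps=100.0))
```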
r
b81fcb0c144fe1d42e9386e8e5d0e789
Lemma 2.4 (Equation 2.19 in {{cite:f271412465ad057c58daa10909fa563638d5688d}}) Let {{formula:8b477669-efdd-444b-ac4c-0873aefd3679}} , where each {{formula:1550ab37-5f78-4083-91db-c0996cbdf66e}} . Then, {{formula:2c259ef9-28df-4df2-862b-de186942d3c5}} and for any {{formula:fc352bc3-72cd-44ce-a1bb-fe16d254cd7d}} , we have {{formula:f191add8-1315-4f13-beba-10386865a8a0}} .
r
bcab53968b67c837c3ac095110d2d498
General sequential approaches such as GRU4Rec {{cite:54e6b9b0b2ff2cac4377bd81d3a276c1e8b0f627}} and SASRec {{cite:e6c2672e218aebd28ecb7987b10ede6fac5a62bc}} rely on explicit item IDs to construct the sequential model, assuming that item IDs are given in advance. These approaches cannot perform well under the cold-start setting with new items. In comparison, our approach aims to construct an ID-agnostic recommendation model that captures sequential patterns in the more general form of natural language.
d
7aac4b9ad57d158b1912122a79d624e4
Our work shows that while most of the research effort in meta-RL has focused on model-free methods {{cite:96fc1c9e3ce2db85b6f79fc089e5ff707ae76a1b}}, learning to learn might require explicit modeling followed by planning. Also, the Transformer architecture can be critical for modeling the complex relationship between histories of observations and beliefs about the system transitions.
d
d268590674f826e3ebae60003fef67de
The previous approach can be easily extended to the case of having several observed data instances {{formula:92a058fc-8b78-4437-8efb-3a2a4487cff4}} . In that case the objective is simply the sum of {{formula:fd561ed9-0e4e-416d-ab0d-c3bf0c6b75e7}} , for {{formula:ec4a8767-c3e7-4b6e-b974-38057489d3ec}} , where {{formula:d9d4a7c1-24d3-4a5c-b8f3-33c3af6a36fe}} is the lower bound corresponding to {{formula:fdfb2dc6-06f1-4476-86b7-e12f44b4f22d}} , i.e., the {{formula:36e8ff85-d4db-49e2-872c-117693acd21a}} -th data instance. This sum can be approximated using mini-batches and optimized using stochastic optimization techniques such as the ADAM algorithm {{cite:d026fe52d8b3bb21fd48da28b41a3d16c01c505f}}. In this study, we use a mini-batch size of 1 for all experiments. For a proof of convergence of stochastic optimization see {{cite:c2a64dd60488f461d0e7e0db820ad3d7096ffe67}}. The variational approach is expected to find reasonable values for the prior parameters {{formula:2c0cb445-8e1b-42b6-9805-dcf27676ffe3}} , using approximate maximum likelihood estimation, and to provide a recognition model {{formula:24cc56bf-1a37-4c04-9aa5-0370664597ed}} that can be used to infer the potential values of {{formula:ef36b294-d66f-4a49-adfe-2032eadf808c}} given {{formula:2f9b0fd0-c26a-48c8-99cc-9d3b46a95375}} .
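A minimal sketch of this optimization loop, assuming a hypothetical `lower_bound` callable and PyTorch-style parameters, might look as follows; it is illustrative only and not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the described optimization: the
# objective is the sum of per-instance lower bounds, approximated with
# mini-batches of size 1 and maximized with Adam. `lower_bound` is a
# hypothetical callable returning the bound L_i for one data instance.
import torch

def fit(data, prior_params, recognition_net, lower_bound, epochs=10, lr=1e-3):
    params = list(prior_params) + list(recognition_net.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x_i in data:                 # mini-batch size of 1, as in the experiments
            opt.zero_grad()
            loss = -lower_bound(x_i, prior_params, recognition_net)
            loss.backward()              # stochastic gradient of the summed objective
            opt.step()
    return prior_params, recognition_net
```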
m
df93e57175489ca5ccdd39cc8c5238ef
and updating the prior model by incorporating the simulated and observed LWD measurements (analysis). The output of the analysis component is the posterior model. In the following, we explain the method in more detail. The prior ensemble The prior ensemble contains all our prior knowledge, obtained from geological interpretation of the available information. In step 0, we generate 40–100 realizations comprising the prior ensemble {{formula:c7f18202-e10c-4fd1-b981-3858047298de}}. Each realization is a member of the ensemble. The prior ensemble, {{formula:6ce55458-f286-4fc5-8db2-5768fb67f956}}, is represented by the selected model parameters. For our study, we chose to include petrophysical properties such as density and resistivity, as well as layer boundaries, at every position along the well path; these parameters are combined with the given well path to predict the logs. If the prior ensemble is not specified at the well-planning stage, it can be approximated as a Gaussian or a uniform distribution using the given single geomodel (deterministic base interpretation) as the ensemble mean. The Gaussian distribution is generated with the given mean model and a specified variance, where the variance represents uncertainties in the model parameters. The prior ensemble serves as regularization for the subsequent probabilistic inversion. {{figure:7517e676-1415-4fbe-98f1-ab5e55560a93}} Incorporating simulated and observed measurements In order to update the prior model, the forward-simulated measurements, obtained from the forecast step, are compared with the realizations of the LWD measurements: Forecast step and simulated measurements In the forecast step, we use the forward simulator {{formula:31e9d6e5-5c9d-4edc-b8b7-5884ad24cab0}} on each of the prior ensemble members to generate the simulated measurements (theoretical data), which are presented in Box 2a in Figure REF . We present the forward model used in this study in the model input and output section. Observed LWD measurements The observed LWD measurements (Box 2b) are represented by a vector of real measurements, {{formula:17b62d1c-7dcc-4cb5-bfdc-feb91e92491c}}, perturbed with a vector of additive measurement noise {{formula:77cb3f7f-0fc7-4649-99ed-73225e35f5c1}} (see Equation REF ), where {{formula:c3ef90cc-4a8c-47ac-b6af-2d2d79011198}} indicates each realization. The measurement noise is assumed to be uncorrelated Gaussian with zero mean and a specified variance. We define the variance relative to the value of the observed data. More specifically, we let the standard deviation equal a percentage of the observed data. In the numerical study, the standard deviation is defined as 1–5% of the measured data, which in turn is squared to give the variance of the measurement noise. {{formula:3b185134-ac77-4d1a-a933-7fda44767ecf}} Analysis and update of the prior model At the analysis step (step 3 in Figure REF ), the prior ensemble {{formula:47d5622a-eb1a-4b38-ba1a-96af4e663818}} is updated to reduce the statistical misfit with the observed measurements. The output of the analysis step is the posterior ensemble, {{formula:b8d56f7d-022f-44fa-9a99-2306de8af5be}}, which is represented in Box 3 in the figure. The details of the analysis step are as follows. Here, the model variables are conditioned to realizations of the measurements, {{formula:f993a92f-4b0f-40c8-a6ee-47343381cab1}}, by minimizing the following objective function.
{{formula:b5df839e-bdec-4d91-b7dd-a6d6759cfb90}} where {{formula:a9cf2412-53d5-4c9b-ad07-6d918ed0e1a4}} Formulating the minimization using the Gauss-Newton approach with the Levenberg-Marquardt modification of the Hessian, the iterative update is given by {{cite:baa294262b88a55201b76b216272a6f7d9031d1f}} {{formula:c905f342-015e-4068-8311-293429bb5d67}} where {{formula:812c0979-b9f1-4dc3-b10b-e4a384bd4f37}} is the sensitivity matrix. The EnRML approximates the sensitivity matrix, and the state covariance matrix {{formula:08775b0f-dc83-41c3-a43b-ec899f7a52b7}}, using information from the ensemble. To this end, we define the scaled and centred ensemble of states {{formula:c5469628-e170-4612-ac42-f1395fba1b2f}} and the scaled and centred ensemble of forecast models {{formula:994ab8d0-c1f4-46ab-8b27-5cc332771a7f}} The ensemble approximation to the sensitivity matrix is then {{formula:909233e4-19a2-4018-98d7-0ede4f4b829f}} and the ensemble approximation to the prior-state covariance matrix is {{formula:06414ee4-6a5e-45c5-b99d-36015a72cc25}} Following {{cite:7a1c896e63553daf9845ac477730c7be780007c2}}, we replace {{formula:8a307bf5-a79a-4aba-b1ce-9b0ed41a0042}} in the Hessian term of REF with {{formula:f3c7a9ee-e4ad-418d-921d-93b66ee67821}}, and we neglect the terms containing the state mismatch. This gives the approximate LM-EnRML update equation {{formula:e2df8746-5137-486f-b46f-c5d6c1ceba5e}} Using the Sherman-Woodbury-Morrison matrix inversion formulas, this is rewritten as {{formula:5f7b15ab-3487-4aaf-a683-c95132be689c}} Inserting (REF )-(REF ) and simplifying gives {{formula:241481d3-e5b3-40ac-b729-1d45280f8ef3}} To stabilize the method, we perform a truncated Singular Value Decomposition (SVD) of {{formula:0282b86e-cd0f-4ae1-b69e-f0846f9f8f60}} {{formula:bc41ffdc-31e6-4c87-b4a1-0597853c071e}} where the subscript {{formula:5aa33a1e-5102-48b6-9ac4-48b75e124113}} denotes the truncation. We retain the number of singular values such that their partial sum represents a certain percentage of the sum of all singular values. In this numerical study, the singular values are kept such that their sum corresponds to 99% of the sum of all singular values. Inserting the SVD gives the iterative scheme utilized in this work {{formula:52551759-d3dc-4e52-8bdd-bae7095ba9f8}} Equation REF is repeated until convergence, which here is defined as the point where the relative improvement in the data misfit falls below a threshold. For each iteration that reduces the objective function, we let {{formula:c0fa0170-5c25-4467-89f0-2b183437157f}}. In addition, we set the initial value, {{formula:b14fa71b-3dbe-4493-8849-2183947e5ccc}}, close to the value of the initial data misfit. (An illustrative numerical sketch of this update is included at the end of this section.) Model input and output As we described earlier, the core of our interpretation method is the analysis step, whose inputs are the forward-simulated and observed measurements and whose output is the updated geomodel obtained from the inversion formulas. We explain the forward and inverse models in the following. As we explained in the forward model section, the input for the forward simulator is the geomodel, defined by petrophysical properties and formation boundaries, and the well path: The geomodel realizations in this work are 1D horizontally layered geomodels extended in the horizontal direction, where all layer boundaries have the same dip. The number of layers is kept constant. Layers are characterized by initial petrophysical properties (resistivity or density) and their boundaries.
In this work we assume no anisotropy. The well path is defined by inclination. The log was sampled every meter along the measured depth (MD) of the well. All sampling points are incorporated simultaneously into the ensemble-based model. Forward model In this work we apply two separate forward simulators: one for the density log and one for the Deep EM logs. Their output is in the same format as the output of the real logging instruments. Density model The density simulations are conducted using the forward simulator developed by {{cite:5a6c1d6c8e7ac17e23fd342129b990cee9a6c149}}, which relies on flux sensitivity functions. A detailed explanation of the method can be found in the work by {{cite:2ca7ba8ef2830c0d68929f8c022407c3548fcfb2}}. Deep EM model The Deep EM forward simulator is a deep neural network trained on the output of commercial simulator software, as explained in {{cite:f6cd4c1f467079d7e1d8c505ee6758347e6bf233}}. Our forward simulator is trained to respond to up to seven layers (three above and three below the logging tool). The model inputs are the six boundaries of the layers: three above and three below the measuring instrument; the seven resistivities of the layers; and the relative dip between the layers and the instrument. The schematics of the Deep and Extra Deep EM instruments can be found in {{cite:5eb6d667f487dd8b3b6dc4fab12b7240739f90d7}}. The DNN model outputs 13 log traces (22 including maximum sensitivity angles) that are typically transmitted in real time {{cite:f6cd4c1f467079d7e1d8c505ee6758347e6bf233}}. Numerical results We shall demonstrate that the proposed ensemble-based method is reliable, robust and computationally efficient for interpreting both shallow and deep logs, as well as for quantifying interpretation uncertainties. For this purpose, we first construct synthetic examples where we estimate layer bulk densities and resistivities using the shallow density and Deep EM logs, respectively. To further study the performance of the proposed ensemble method, we consider a case, inspired by the Goliat field, where the method uses the Deep EM logs to estimate the layer resistivities and boundaries. We evaluate the computational efficiency of our proposed ensemble method by performing the same example with the Metropolis-Hastings Monte Carlo method {{cite:dca2c6de644f09966f41125dfaec4e59211053cd}}. Bulk density estimation In this example we apply the proposed ensemble-based method to interpret density logs in thinly laminated formations by constructing a synthetic case with layer thicknesses varying between 0.5 m and 5.5 m; we assume each layer has uniform density. Layers are horizontal (zero dip angle). In the prior ensemble, the density of each layer is modelled by a Gaussian distribution, as explained in the prior-ensemble section. We describe one case with an ensemble of 40 realizations and another with an ensemble of 100 realizations; the posterior ensemble mean conveys the estimated density and the posterior standard deviation expresses its uncertainty. Realizations of the observed measurements are generated from the synthetic true measurements by adding Gaussian relative noise of 1.5%. The grey lines in Figure REF are the posterior realizations. The figure compares the estimated density (posterior ensemble mean) from the ensemble with 40 realizations, the density from the ensemble with 100 realizations, the estimated density from the MCMC method, and the true density.
It shows that for all the layers, the true density (red dotted lines) lies inside the posterior ensemble (grey lines). The results also agree well with those from MCMC (dotted blue lines) with 10000 sampling steps. In this example, three iterations of Equation REF of the LM-EnRML method are sufficient. The posterior ensemble means are very close in the cases with 40 and 100 realizations, thus we conclude that an ensemble of 40 realizations is sufficient. This means that the total number of forward runs is as low as {{formula:71785cfe-24e0-4ce4-ac58-a61acca4633c}}, compared to the 10000 needed for MCMC. Figure REF shows that the posterior standard deviation increases as the layer thickness decreases, as expected because of the higher relative impact of shoulder-bed effects {{cite:50f9ea8ab1bb439e780b528c9a3602f78f4b216a}}. {{figure:f5ef650e-d8d2-4b35-ac82-7cb305388d39}}{{figure:38d1ac1c-8d36-42ff-9572-ea938fd9ba2c}} Resistivity estimation In this example, we verify the speed and robustness of the proposed method for interpreting the Deep EM logs to estimate layer resistivities and boundaries of a layer-cake geomodel as described above. For this reason, we study the sensitivity of the method to the distance from the logging tool to the layer boundaries, the property contrast across the layer boundaries, and the layer thicknesses by constructing the following synthetic scenarios: The distances from the logging tool to the layer boundaries vary. The well angle is 80{{formula:2a1a8c1b-8900-4611-bdd0-e4e3d8b1b90b}}–82.5{{formula:69c83a41-2c18-481e-a3cf-a5514e7571bb}}. The tool is in the layer with low resistivity (1.4 ohm-m) and is approaching the layer with the highest resistivity (99 ohm-m). We focus on quantifying the uncertainty of boundary locations that are farther away from the logging tool. Layers with higher resistivity contrast (3 ohm-m and 50 ohm-m) and equal thicknesses (20 m). The well path lands at 80{{formula:ab8e2ec6-1299-4425-95a1-22d71766dc5f}}, and its inclination increases with depth and becomes near-horizontal. We focus on quantifying uncertainty in layers with high resistivity. Variable layer thicknesses (0.7 m to 10 m), a constant 80{{formula:63db1237-6ea5-4e1d-88cd-5644230a7941}} well angle and no resistivity contrast. We focus on quantifying uncertainty in thinner layers. Furthermore, in the Goliat field example section we consider an example inspired by the Goliat field in the Barents Sea. In this example we compare the estimated resistivity recovered from the ensemble-based method with the one from the Metropolis-Hastings Monte Carlo method. The prior models in all examples are constructed with 40 or 100 realizations with a Gaussian distribution. Logs (observed data) are perturbed with 1–5% relative noise and sampled every meter along the measured depth of the well. Case 1: Decreasing distance from the logging tool to the target layer In Figure REF we drill towards the high-resistivity target layer (99 ohm-m, 10 m thick), passing through the low-resistivity layer above it (1.4 ohm-m, 20 m thick). We let 'upper boundary' always denote the top boundary of the target layer, and 'lower boundary' always denote the bottom boundary of the target layer. The tool distance to the upper boundary decreases gradually from 17 m to 1 m: the starting position is at TVD 1522 m and the ending position at TVD 1539 m. The standard deviation of the measurement ensemble is 1%. The number of realizations in the ensemble is 100.
The standard deviation of the prior ensemble for the boundaries is 2 m. {{figure:8d856081-aa30-41fc-88b5-904e5533ae06}}Figure REF shows the posterior distribution of the estimated upper and lower boundaries versus the distance from the boundary to the logging tool. We observe that as the tool approaches the target layer, the estimate of the boundary position becomes more certain. The results of the proposed ensemble-based method show that the estimates of the distance to the nearest boundary (the upper boundary) are robust and accurate, with very low uncertainty, at distances below 2 m, and the true boundary is recovered by the data. At distances of 3–11 m, the boundary is estimated with less uncertainty than at farther distances of 12–17 m. Note that the well angle increases from 80{{formula:bd3cdcd9-6f5e-48a1-ba1b-6cdc28110c57}} to 82.5 {{formula:b36883cc-649d-42aa-99ea-948aa7c3d5b0}} as the tool approaches the target layer; the higher uncertainty at some sampling points closer to the target layer, compared with neighbouring farther points, may be due to the effect of the varying well angle. The lower boundary is estimated at distances from 29 m down to 11 m from the tool despite the noise. The uncertainty decreases as the tool distance becomes less than 17 m. Note that at about 17–29 m from the lower boundary, the prediction stays robust but with increased uncertainty due to the weaker signal. The true boundaries are covered by the data (Figure REF ). Figure REF shows the standard deviation of the estimated target-layer boundaries versus the tool distance to the boundaries. We observe the expected high uncertainty in the position when the logging tool is farther away from the boundaries. The higher uncertainty in the boundary location is preserved for the boundary farther away from the tool. This is in agreement with the result presented by {{cite:f46cb672e4feb72a65e3710864990c82434746ba}}, where the tool detects the position of the upper boundary with low uncertainty at distances of around 13.5–14.8 m and the position of the lower boundary with higher uncertainty at around 16.2–18.2 m, and the tool is able to detect the boundary, with uncertainty, when approaching the layer at around 33–40 m. {{figure:ac6ea4d4-f772-4642-80d0-8505d9c36507}}{{figure:ace96544-075f-4e71-9340-ec1da5d71438}} Case 2: Formations with high resistivity contrast between neighbouring layers The geomodel for this case, Figure REF , contains 6 horizontal layers with equal layer thicknesses and a high resistivity contrast between neighbouring layers (3 ohm-m and 50 ohm-m). The top five layers are drilled at a high to near-horizontal angle. The last layer is not penetrated. {{figure:1e8f2c39-841c-4c5d-8fd2-3b7559c3931c}}Figure REF displays the prior and estimated posterior resistivity for each layer after 5 iterations of the LM-EnRML method. The number of realizations is 40; hence the number of forward runs is 200 for this case. The true resistivity is covered by the posterior ensemble in the first layer. Figure REF shows that the posterior ensemble is more spread in the layers with higher resistivity, indicating higher uncertainty in these layers. The highest uncertainty is preserved in the last layer, which is farther away from the tool. {{figure:8906919a-c6c9-48d7-b6a4-d18c9f83ebc3}}Figure REF compares the standard deviation of the prior ensemble with that of the posterior ensemble.
It shows that the uncertainty decreases after the update. Figure REF indicates that layers with higher resistivity (50 ohm-m in this case) have a higher posterior standard deviation, and thus higher uncertainty, than layers with lower resistivity (3 ohm-m in this case). The very low standard deviation in the layer at 1540–1560 m is due to the high number of measurements along the well path in this layer. The last layer, farther away from the logging tool, shows a visibly higher standard deviation. There are no measurements for the last bottom layer (1560–1580 m TVD). This demonstrates the ability of the ensemble method to preserve the uncertainty in the regions of the model where the amount of data is insufficient. {{figure:300115ac-cbac-413f-beec-464193e63a89}} Case 3: Formations with varying thicknesses In order to verify the applicability of the method for estimating resistivities of thinner layers, and to study the effect of layer thicknesses on uncertainty, we construct a synthetic case with 100 realizations and layer thicknesses varying from 0.7 m to 10 m. In order to focus only on the effect of layer thicknesses on sensitivity, we assume no resistivity contrast between neighbouring layers and a true resistivity of 50 ohm-m. The mean resistivity of the initial prior model is 45 ohm-m. The measurement error is 3%, applied as a standard deviation to the true measurements. The well path inclination is 80{{formula:748e499a-ef34-415f-9b50-cb38354d56af}}. Figure REF shows the distribution of the posterior resistivity (grey lines) and its mean (solid blue lines). We see that the thinner layers have noticeably higher uncertainty in the posterior ensemble. Inspecting the standard deviations of the layer resistivities (Figure REF ) we see the same relation: the standard deviation of the resistivities is higher for thinner layers. This is expected because of the higher relative impact of shoulder-bed effects. These effects also make the problem more non-linear, and its solution requires 4 to 6 LM-EnRML iterations. {{figure:0a6054b1-c707-4a8f-b0a5-42ede559b751}}{{figure:5de81115-d294-4b23-b23d-29dcb9b92923}} The Goliat field example We further demonstrate the speed and applicability of the proposed method by testing it on a case inspired by a real geosteering operation in the Goliat field in the Barents Sea, described in {{cite:31ab74d97f77efc9c1c44319045d53a70924f986}}. The geomodel covers a small unfaulted section just before the well ('Well A') lands in the drilling target in the Upper Kobbe formation. This formation is characterized by thin layers and high resistivity contrasts. The aim of this example is to get an indication of the behaviour of the method in a realistic geosteering situation. Figure REF shows a simplified 2D section inspired by {{cite:31ab74d97f77efc9c1c44319045d53a70924f986}}, which was used for testing the forward model in {{cite:f6cd4c1f467079d7e1d8c505ee6758347e6bf233}}. The model variables are the layer resistivities and boundaries, which are estimated simultaneously. We first demonstrate the resistivity estimation results and compare them with the resistivity estimates from Metropolis-Hastings Monte Carlo; afterwards we show the results of the joint estimation of the boundaries. {{figure:801fec19-166c-459a-a59e-091ce4121c00}}Resistivity estimation The well path is from MD 1000 to MD 1105, and a total of 106 positions along the well path are sampled.
As in the synthetic examples in this work, we derive the theoretical measurements for each sample point from the DNN approximation presented in {{cite:f6cd4c1f467079d7e1d8c505ee6758347e6bf233}}. We construct the ensemble of the measurements with a 1% standard deviation. The prior ensemble is set up with 100 realizations. We define the initial prior ensemble with the mean value and the standard deviation. In this model the fractional standard deviation of the prior ensemble is 0.03 (3% error) for all layers' resistivities, and 0.0025 (0.25% error) for the boundary locations. Figure REF shows the standard deviation of the posterior resistivities. This shows that the uncertainties decrease after the update; however, the model preserves the higher uncertainty of the last two bottom layers, which are farther away from the tool. The first layer also shows higher uncertainty; this is because in this model the well path starts at a local depth of 103 m, while we chose the top of the first layer at a local depth of 100 m, so there are fewer measurement points for this layer. The box-plots in Figure REF visualize the posterior resistivity distribution of each layer separately. The measurement means of layers 3, 4 and 6 lie inside the interquartile range of the box-plot. The estimated resistivity at layer 5, which is the first layer below the tool, is between the prior and the measurements. The uncertainty in this layer, which has the lowest resistivity among the layers, is the lowest; as we observed in Case 2, the prediction of resistivities in layers with low resistivity is more certain. This shows that the model can estimate the parameters ahead of the logging tool with better accuracy than the initial estimate. We also perform the simulation with several priors farther from and closer to the true value and observe a similar trend. According to our observations from the previous section, the high error in the resistivity estimate for layer 2 can be explained by layer 2 being thin and having the highest resistivity among the layers. Figure REF compares the estimated resistivity from the ensemble method with the resistivity obtained using the Metropolis-Hastings Monte Carlo method {{cite:dca2c6de644f09966f41125dfaec4e59211053cd}} and with the true resistivity. The resistivities estimated using Metropolis-Hastings Monte Carlo show higher accuracy for the two top layers; however, for layers 3 and 4, where the data are sufficient, both methods show comparable accuracy. Both methods estimate the resistivity of the layer farthest from the tool with high error. The error in the last layers is due to low logging-tool sensitivity and can potentially be mitigated by increasing the size of the ensemble or by introducing localization to prevent updates due to the noise. The ensemble-based method requires 3–5 iterations, yielding 300–500 calls of the forward simulators, while Metropolis-Hastings Monte Carlo requires at least 10000 states to be evaluated to minimize the misfit and estimate the resistivity with comparable accuracy. {{figure:b5d57ad4-bbd7-4e4b-8296-17d6a0791389}}{{figure:590202c6-064e-4ed5-ac34-aa12c6de92a3}}{{figure:c99942d3-85a3-451c-a076-1aac29264ded}} Boundary estimation The ensemble method can update multiple variables simultaneously. In this example we estimate the boundary locations for the Goliat field example. We presented its geomodel earlier in Figure REF .
The layer thicknesses are not constant, which limits the choice of a prior model with higher variance for the boundary locations. For this example we use a Gaussian prior model for the layers, with standard deviations of 25 cm to 32 cm (0.0025 fractional standard deviation) increasing from the top boundary to the last bottom boundary. According to the posterior standard deviation in Figure REF , apart from the boundaries far from the logging tool (the first and the last three boundaries), the uncertainties of the estimated boundaries are less than 0.05%, resulting in a near-deterministic inversion despite the modelled uncertainty in the resistivities. However, higher uncertainties are preserved for the layers farther away from the logging tool. {{figure:ba8475d4-c70b-45d5-b1c2-bdf5ef048d04}} Conclusion With this study we demonstrated the capabilities of the iterative ensemble-based method to estimate petrophysical properties. We estimated layer boundaries as well as formation properties such as density and resistivity using nuclear density and extra-deep logs, respectively. The method reduces the statistical misfit between the observed LWD measurements and the theoretical LWD measurements obtained from forward simulators, and thus estimates the mean and quantifies the uncertainty with posterior distributions. Even a small improvement in the estimation of layer boundaries may lead to a significant risk reduction when placing the well in an optimal position close to the boundary of a target layer. First and foremost we verify the method on deep EM measurements. When the positions of the boundaries are assumed known, our interpretation is qualitatively similar to earlier work: estimation is less certain in geological configurations with thin layers and high contrast between each layer's petrophysical properties {{cite:50f9ea8ab1bb439e780b528c9a3602f78f4b216a}}. Furthermore, the presented results indicate that the method is capable of reducing boundary and property uncertainties within the expected range of sensitivities of the modelled deep EM tool: with low uncertainties when the boundary is within 15 m of the tool {{cite:f46cb672e4feb72a65e3710864990c82434746ba}}. Finally we verify our method on a case mimicking a historical operation in the Goliat field in the Barents Sea {{cite:31ab74d97f77efc9c1c44319045d53a70924f986}}. The proposed method not only recovers the mean position of the boundary from noisy measurements, but also provides uncertainty estimates at no extra computational cost. In absolute terms, the number of forward simulations needed is at least twenty times lower compared to an MCMC method that gives similar uncertainty quantification. Moreover, for our method most of the simulations can be executed in parallel. These qualities make it attractive for real-time interpretation. Even though the ensemble method might not be necessary for the inversion of simpler logs, such as density, our implementation of the method for nuclear density opens the future possibility of joint inversion of multiple logs in the same framework. Thereafter, the framework can be used as part of a future highly automatic real-time workflow for geosteering decision support. Acknowledgements Funding: This work was supported by the research project 'Geosteering for IOR' (NFR-Petromaks2 project no. 268122), which is funded by the Research Council of Norway, Aker BP, Equinor, Vår Energi and Baker Hughes Norway.
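As a concrete illustration of the analysis update described earlier in this section (the LM-EnRML scheme with truncated SVD), the following is a minimal NumPy sketch. It is not the authors' implementation: array names, shapes and the forward model are placeholders, and the observations are assumed to be pre-scaled by the measurement-error standard deviation so that the error covariance is approximately the identity.

```python
# Illustrative sketch only: one LM-EnRML analysis update with truncated SVD.
# Assumes data are pre-scaled by the noise standard deviation (C_D ~ I).
import numpy as np

def lm_enrml_update(M, D_obs, forward, lam=1.0, energy=0.99):
    """M: (n_param, n_ens) current ensemble; D_obs: (n_data, n_ens) perturbed observations."""
    n_ens = M.shape[1]
    D = np.column_stack([forward(M[:, j]) for j in range(n_ens)])   # forecast step
    dM = (M - M.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)   # centred, scaled states
    dD = (D - D.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)   # centred, scaled forecasts
    U, s, Vt = np.linalg.svd(dD, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1    # keep ~99% of the singular-value sum
    U, s, Vt = U[:, :k], s[:k], Vt[:k, :]
    gain = dM @ Vt.T @ np.diag(s / ((1.0 + lam) + s**2)) @ U.T      # LM-damped, truncated gain
    return M + gain @ (D_obs - D)                                   # analysis update
```

In practice the update would be iterated, decreasing the damping parameter after each iteration that reduces the data misfit, as described above.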
m
fa286100fa3a0b2f654a4e656722709c
The presented augmentation leads to more balanced image classes and also increases the rotation invariance of the CNN. Bicubic interpolation is used when the angle is not a multiple of 90°. As was shown by Goodfellow et al. {{cite:da4536f2b066d45045baa1e3dc3a2478cc2dcf28}}, such data augmentation by flipping and rotation compensates for variations between the training and test sets and has a positive impact on CNN performance. The total number of training images after the balancing step is 96,274. {{table:722c5530-cc38-4f61-b99c-a109ee319956}}
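A minimal sketch of such flip-and-rotation augmentation, assuming the Pillow library and hypothetical file names and angles, could look as follows; it is illustrative rather than the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline) of the described augmentation:
# horizontal flips plus rotations, with bicubic interpolation for angles that
# are not multiples of 90 degrees. File names and angles are hypothetical.
from PIL import Image

def augment(img: Image.Image, angle_deg: float, flip: bool) -> Image.Image:
    out = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT) if flip else img
    if angle_deg % 90 == 0:
        out = out.rotate(angle_deg)                                      # exact 90-degree steps
    else:
        out = out.rotate(angle_deg, resample=Image.Resampling.BICUBIC)   # bicubic otherwise
    return out

# Example: a few augmented variants of one (hypothetical) training image.
# variants = [augment(Image.open("sample.png"), a, f)
#             for a in (0, 15, 90, 180) for f in (False, True)]
```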
m
7e54ac1931164eae3e4f56b98305339b
More generally, the phylogenetic networks that we have considered in this paper did not have arc weights. It would be interesting to understand how arc weights might affect our results, especially when we want to make a straight-line drawing where the length of an arc is proportional to its weight. A special case that could be considered first is temporal phylogenetic networks, which incorporate a natural vertical time axis that provides timings for past evolutionary events (see e.g. {{cite:62613848eb39938118ff60031a76bec6bce633e5}}). This concept appears to be related to upward planarity, and it would be interesting to further investigate this relationship. Moreover, as the theory of phylogenetic networks continues to grow, it could be worthwhile to develop new algorithms to produce planar phylogenetic networks from biological data.
d
d07d3ec8735034a433ed8764c2eca662
Let {{formula:7830a9ba-23c0-4aef-87f1-bef582993125}} be the miniversal family for the deformation of {{formula:14f85047-121a-497d-9843-8bd25a14b094}} . Since the Milnor number of hypersurface singularities is upper semicontinuous under deformations, the Milnor number of the isolated Du Val surface singularity {{formula:fa18b177-6656-4a1b-aec4-526361bfa7d6}} is greater than or equal to the sum of the Milnor numbers at all singularities of the Du Val surface {{formula:9fd59402-61d8-4d85-870c-10ec598a26d6}} for {{formula:1190b5e2-65c0-4b99-8739-c3d9aa46f86b}} . Recall that the isolated Du Val surface singularity {{formula:3be3a792-07dd-461d-bd6f-1e091c8aaf87}} is simple (and therefore of type {{formula:7d377744-3e71-4ea5-9f93-93b7d6172691}} , {{formula:9a4e6e35-ed3c-410f-ba48-da3f275b7a33}} , {{formula:df55ddf6-3e18-4a69-8d2d-471ca9ba2a12}} , {{formula:08b3de1c-db5d-477c-aaeb-7029c90396c6}} , {{formula:f0c62b22-0445-4c9d-a7de-4fcabed65943}} ). Hence, for sufficiently small {{formula:8c7cac3c-c69c-488f-9168-b905a6d5ae47}} , the Milnor number at a singularity of {{formula:5d818de5-4848-418f-958b-bf256eade03a}} is less than the Milnor number of the isolated Du Val surface singularity {{formula:8ee7dbec-56e2-4f54-a61f-ee9f5a0c99f9}} (cf. {{cite:9a24f23591961dc3f0cdc5780a04493856d8f0bb}} or {{cite:6a71983669490097ca5f9dadc35761e69256132c}}) and then we replace the original small transition {{formula:5ad42a0e-9dca-4f8e-b2a2-062c898cf65d}} with {{formula:941b785c-25aa-41c1-b459-7f8aeb60fdab}} . Repeating this process finitely many times leads to a primitive small transition.
d
58f30a37cdaf69cfb40fd453a7653afd
As discussed in {{cite:087d1645dd55cebe5d88fbeb64d25c1c7ecd9375}}, the random geometry description of the {{formula:91de312a-4193-4a9a-a96a-c76a4880250d}}-deformed CFTs can be straightforwardly translated into the AdS/CFT framework. The gravity dual is an ensemble of AdS{{formula:bf33d2e1-a7ec-413d-b4b9-47e92f609f02}} with a “Gaussian” average over boundary metrics. This, we believe, is equivalent to the nonlinear mixed boundary proposal of Guica and Monten {{cite:ef2be46b6816b814e967f7e0a8364ef657da5f13}} but differs from the cutoff AdS proposal of McGough, Mezei and Verlinde {{cite:029bc5a66e852daddd8f9b65c3f31d6649c6cc6b}}. In a similar way to {{cite:ef2be46b6816b814e967f7e0a8364ef657da5f13}}, by reinterpreting the gravity dual of the {{formula:bc81005b-8839-470e-acf3-1f4ec731e65e}}-deformed BTZ black holes obtained in {{cite:087d1645dd55cebe5d88fbeb64d25c1c7ecd9375}}, we can explicitly show that a “cutoff” surface emerges as a kind of mirage. In this sense, there is a relation between our gravity dual and the cutoff AdS. However, it is hard to regard this surface as a real rigid cutoff in a literal sense. In our gravity dual description, since the conformal anomaly can be derived via holographic renormalization in AdS/CFT {{cite:1e1a622afd268eebfaa83824e120f46034502652}}, {{cite:e645b9116e5cc257cce7b3f4225a1d829c318793}}, by averaging over the boundary metric with the Hubbard-Stratonovich “Gaussian” weight, we can obtain, holographically, the deformed Liouville action. Thus, taking variations of the deformed anomaly action so obtained with respect to the boundary background metric, we can calculate the {{formula:b38b4ad0-7aa0-498e-ac53-88cdc28ddfef}}-deformed stress-tensor correlators in the gravity dual and will find exactly the same answer as we found in the field theory. In contrast, in the cutoff AdS, the logarithmic correction (REF ), in particular, seems very hard to account for. This may suggest the necessity of refining a simple cutoff picture.
d
9e60bc504f33fd6d8bdc443a9894b8ed
CP alternatives. Algorithms of aggregated CP {{cite:0bfde49c4bdb4333a1d5d1ac1079c5421486a65d}}, cross CP {{cite:49ac49eb3c8ff0c45144487394fe89beb86ab2f3}}, CV+ and jackknife+ {{cite:2eb45f62694ccb01367757030ad81b6f3fe25605}} can be found in the respective references. Note that CV+ and jackknife+, albeit originally designed for regression, can be extended to classification tasks (Appendix D in {{cite:f2205b50ea59385b5e65d9573d9c78e075dbd3f8}}).
m
201d1948c38b085d9b4dc9ad3c1731e1
The spreading distance {{formula:862ece12-1f77-4994-9e35-174beb22c081}} of failures in the two peaks shows the nonlocality characteristics of cascades. The nonlocality of the cascade suggests that the damage is distributed globally, which is hard to predict and to recover from. Indeed, figuring out the formation of the bimodal distribution is the key step in predicting and preventing large-scale cascades. Next, we aim to investigate how the bimodal distribution forms. Firstly, we consider intuitively whether the failure size at the first step of a cascade determines the final cascade size. We denote the failures (failed nodes) at the first step of a cascade as First Failures, and those at the second and later steps are called Second Failures. In Figs. REF (a), (b) and (c), we show the distributions of First Failures sizes and Second Failures sizes in ER networks, SW networks and BA networks, respectively. Interestingly, the distribution of First Failures sizes is unimodal and narrow. The result is totally different from the original bimodal distribution, which implies that First Failures cannot represent the final cascading failures. However, the distribution of Second Failures sizes shows weak bimodality, consistent with the original bimodal distribution. In addition, we quantify the dependence of the final cascade sizes on First Failures by Kendall’s tau rank correlation coefficient {{cite:81a91d022600a97e11728cc9628b4445ea7aea6e}} {{formula:6b23a809-f000-49d4-8fa1-7f10c620f14c}}. In Figs. REF (a), (b), we do not observe an obvious linear relation between final cascade sizes and First Failures sizes, and the Kendall’s tau rank correlation coefficients are also small. In Fig. REF (c), the correlation coefficient is slightly higher. This may be because of the high heterogeneity of BA networks, considering that initially failed nodes with high heterogeneity would play decisive roles in their cascading outcomes. To sum up, the final bimodal distribution of cascade sizes is mainly caused by the Second Failures and has no significant correlation with First Failures in ER networks and SW networks. This is significantly different from the propagation of disease and information, where a wider spread probably emerges from more infected agents at the first step of propagation {{cite:8e69e4f1d74159ad3259466a8d1d6f3fd0ea5eca}}. {{figure:058e4725-0b6f-4c4c-b46e-e766b6abb3bb}}{{table:94b9708f-91a7-43f2-8da7-07b38e7809d1}}{{figure:77e8a368-39f5-49b0-a3b1-514d86bac992}}
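As an illustration of how the dependence of final cascade sizes on First Failures can be quantified with Kendall's tau, here is a small sketch using SciPy; the input arrays are hypothetical placeholders for simulation output.

```python
# Illustrative sketch: quantifying the dependence of final cascade sizes on
# First Failures sizes with Kendall's tau. The two arrays are hypothetical
# placeholders for the per-realization simulation output described above.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
first_failure_sizes = rng.integers(1, 20, size=1000)    # size of step-1 failures per realization
final_cascade_sizes = rng.integers(1, 500, size=1000)   # final cascade size per realization

tau, p_value = kendalltau(first_failure_sizes, final_cascade_sizes)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3g})")
# A small |tau|, as reported for the ER and SW networks, indicates that the
# final cascade size is only weakly determined by the First Failures.
```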
r
88d54fbdbdbac7ea27414216009e961b