id (string, 9-16 chars) | abstract (string, 67-2.61k chars) | cats (sequence) | primary (string, 5-18 chars) | secondary (string, 0-18 chars) | strlabel (string, 5-315 chars) | stratlabel (class label, 7.27k classes)
---|---|---|---|---|---|---
2109.01696 | A recent work from Bello shows that training and scaling strategies may be
more significant than model architectures for visual recognition. This short
note studies effective training and scaling strategies for video recognition
models. We propose a simple scaling strategy for 3D ResNets, in combination
with improved training strategies and minor architectural changes. The
resulting models, termed 3D ResNet-RS, attain competitive performance of 81.0
on Kinetics-400 and 83.8 on Kinetics-600 without pre-training. When pre-trained
on a large Web Video Text dataset, our best model achieves 83.5 and 84.3 on
Kinetics-400 and Kinetics-600. The proposed scaling rule is further evaluated
in a self-supervised setup using contrastive learning, demonstrating improved
performance. Code is available at:
https://github.com/tensorflow/models/tree/master/official.
| [
"cs.CV",
"cs.LG",
"eess.IV"
] | cs.CV | cs.LG | Computer Vision and Pattern Recognition;Machine Learning;Image and Video Processing | 1,596Computer Vision and Pattern Recognition;Machine Learning;Image and Video Processing
|
2306.16007 | The integration of Language Models (LMs) has proven to be an effective way to
address domain shifts in speech recognition. However, these approaches usually
require a significant amount of target domain text data for the training of
LMs. Different from these methods, in this work, with only a domain-specific
text prompt, we propose two zero-shot ASR domain adaptation methods using
LLaMA, a 7-billion-parameter large language model (LLM). The LLM is used in two
ways: 1) second-pass rescoring: reranking N-best hypotheses of a given ASR
system with LLaMA; 2) deep LLM-fusion: incorporating the LLM into the decoder of an
encoder-decoder based ASR system. Experiments show that, with only one domain
prompt, both methods can effectively reduce word error rates (WER) on
out-of-domain TedLium-2 and SPGISpeech datasets. In particular, the deep
LLM-fusion has the advantage of better recall of entity and out-of-vocabulary
words.
| [
"cs.CL",
"eess.AS",
"eess.SP"
] | cs.CL | eess.AS | Computation and Language;Audio and Speech Processing;Signal Processing | 7,267longtail
|
1301.2472 | We investigate dynamic properties of inhomogeneous nano-materials, which
appear in analytical descriptions typically as series of $\delta$-functions
with corresponding Gibbs weights. We focus on observables relevant for
transport theories of Josephson junction arrays and granular systems near the
superconductor -- insulator transition. Furthermore, our description applies to
the theory of tunnel junctions exchanging energy with a "bath", the latter
having a discrete spectrum. Using the matrix theta-function formalism we find
an analytical expression for the transport characteristics capturing the
complete temperature driven transition from the quantum to the classical
regime.
| [
"cond-mat.mes-hall",
"cond-mat.dis-nn",
"cond-mat.str-el"
] | cond-mat.mes-hall | cond-mat.dis-nn | Mesoscale and Nanoscale Physics;Disordered Systems and Neural Networks;Strongly Correlated Electrons | 4,480Mesoscale and Nanoscale Physics;Disordered Systems and Neural Networks;Strongly Correlated Electrons
|
1604.03980 | We consider topological constraints that must be satisfied by formulations of
gravitation as a gauge theory. To facilitate the analysis we review and further
justify the composite bundle formalism of Tresguerres as a consistent
underlying structure capable of incorporating both the local Lorentz and
translational degrees of freedom. Identifying an important global structure
required by the composite construction, we translate this into conditions on
the underlying manifold. We find that in addition to admitting the expected
orientability, causality and spin structures, the underlying manifold must also
admit a string structure. We take this to imply that even before considerations
of quantum consistency, topological considerations of gauge gravity provide a
classical motivation for extended degrees of freedom.
| [
"gr-qc",
"hep-th"
] | gr-qc | hep-th | General Relativity and Quantum Cosmology;High Energy Physics - Theory | 2,746General Relativity and Quantum Cosmology;High Energy Physics - Theory
|
1112.0077 | The basic idea of many effective immunization strategies is first to rank the
importance of vertices according to the degrees of vertices and then remove the
vertices from highest importance to lowest until the network becomes
disconnected. Here we define the effective degree of a vertex, i.e., the number
of its connections linking to un-immunized nodes in the current network during the
immunization procedure, to rank the importance of vertices, and modify these
strategies by using the effective degrees of vertices. Simulations on both the
scale-free network models with various degree correlations and two real
networks have revealed that the immunization strategies based on the effective
degrees are often more effective than those based on the degrees in the initial
network.
| [
"physics.soc-ph",
"cs.SI"
] | physics.soc-ph | cs.SI | Physics and Society;Social and Information Networks | 5,527Physics and Society;Social and Information Networks
|
2109.08774 | High Dynamic Range (HDR) images are the ones that contain a greater range of
luminosity as compared to the standard images. HDR images have a higher detail
and clarity of structure, objects, and color, which the standard images lack.
HDR images are useful for capturing scenes that contain high brightness, dark
areas, and shadows. An HDR image comprises multiple narrow-range-exposure
images combined into one high-quality image. As these HDR images cannot be
displayed on standard display devices, the real challenge comes while
converting these HDR images to Low Dynamic Range (LDR) images. The conversion
of an HDR image to an LDR image is performed using tone-mapping operators (TMOs). This
conversion results in the loss of much valuable information in structure,
color, naturalness, and exposures. The loss of information in the LDR image may
not directly be visible to the human eye. To calculate how good an LDR image is
after conversion, various metrics have been proposed previously. Some are not
noise resilient, some work on separate color channels (Red, Green, and Blue one
by one), and some lack the capacity to identify the structure. To deal with this
problem, we propose a metric in this paper called the Tone Mapping Quality
Index (TMQI-3), which evaluates the quality of the LDR image based on its
objective score. TMQI-3 is noise resilient, takes account of structure and
naturalness, and works on all three color channels combined into one luminosity
component. This eliminates the need to use multiple metrics at the same time.
We compute results for several HDR and LDR images from the literature and show
that our quality index metric performs better than the baseline models.
| [
"eess.IV",
"cs.CV"
] | eess.IV | cs.CV | Image and Video Processing;Computer Vision and Pattern Recognition | 3,532Image and Video Processing;Computer Vision and Pattern Recognition
|
1805.10520 | Networks are everywhere and their many types, including social networks, the
Internet, food webs etc., have been studied for the last few decades. However,
in real-world networks, it's hard to find examples that can be easily
comparable, i.e. have the same density or even number of nodes and edges. We
propose a flexible and extensible NetSim framework to understand how properties
in different types of networks change with varying number of edges and
vertices. Our approach enables the simulation of three classical network models
(random, small-world and scale-free) with easily adjustable model parameters
and network size. To be able to compare different networks, for a single
experimental setup we kept the number of edges and vertices fixed across the
models. To understand how they change depending on the number of nodes and
edges we ran over 30,000 simulations and analysed different network
characteristics that cannot be derived analytically. Two of the main findings
from the analysis are that the average shortest path does not change with the
density of the scale-free network but changes for small-world and random
networks; and that the mean betweenness centrality of the scale-free network
differs markedly from that of random and small-world networks.
| [
"cs.SI"
] | cs.SI | Social and Information Networks | 6,467Social and Information Networks
|
|
1003.2194 | In nonuniform Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) superconductors, both
the gauge symmetry and the continuous translational symmetry of the normal
state are spontaneously broken. This leads to additional bosonic excitations,
or Goldstone modes, corresponding to the deformations of the order parameter
amplitude modulation in real space. We derive general expressions for the
energy of the phase and elastic Goldstone modes. As an example, the superfluid
density and the elastic modulus of a one-dimensional LOFF superconductor are
calculated at low temperatures.
| [
"cond-mat.supr-con"
] | cond-mat.supr-con | Superconductivity | 7,066Superconductivity
|
|
1903.08755 | A network effect is said to take place when a new feature not only impacts
the people who receive it, but also other users of the platform, like their
connections or the people who follow them. This very common phenomenon violates
the fundamental assumption underpinning nearly all enterprise experimentation
systems, the stable unit treatment value assumption (SUTVA). When this
assumption is broken, a typical experimentation platform, which relies on
Bernoulli randomization for assignment and two-sample t-test for assessment of
significance, will not only fail to account for the network effect, but
potentially give highly biased results.
This paper outlines a simple and scalable solution to measuring network
effects, using ego-network randomization, where a cluster is comprised of an
"ego" (a focal individual), and her "alters" (the individuals she is
immediately connected to). Our approach aims at maintaining representativity of
clusters, avoiding strong modeling assumptions, and significantly increasing
power compared to traditional cluster-based randomization. In particular, it
does not require product-specific experiment design, or high levels of
investment from engineering teams, and does not require any changes to
experimentation and analysis platforms, as it only requires assigning treatment
at the individual level. Each user either has the feature or does not, and no
complex manipulation of interactions between users is needed. It focuses on
measuring the one-out network effect (i.e. the effect of my immediate
connection's treatment on me), and gives reasonable estimates at a very low
setup cost, allowing us to run such experiments dozens of times a year.
| [
"cs.SI",
"stat.AP"
] | cs.SI | stat.AP | Social and Information Networks;Applications | 6,469Social and Information Networks;Applications
|
1011.0746 | The formulation of quantum mechanics within the framework of entropic
dynamics includes several new elements. In this paper we concentrate on one of
them: the implications for the theory of time. Entropic time is introduced as a
book-keeping device to keep track of the accumulation of changes. One new
feature is that, unlike other concepts of time appearing in the so-called
fundamental laws of physics, entropic time incorporates a natural distinction
between past and future.
| [
"quant-ph",
"cond-mat.stat-mech",
"gr-qc"
] | quant-ph | cond-mat.stat-mech | Quantum Physics;Statistical Mechanics;General Relativity and Quantum Cosmology | 6,191Quantum Physics;Statistical Mechanics;General Relativity and Quantum Cosmology
|
0905.4871 | We report on a new electromagnetic phenomenon that emerges in Mott
insulators, i.e., materials that do not conduct electricity because of strong
electronic Coulomb repulsion. The phenomenon manifests as antiferromagnetic
ordering due to orbital electric currents which are spontaneously generated
from the coupling between spin currents and an external homogeneous magnetic
field. This novel spin-charge current effect provides the mechanism to detect
the so far elusive spin currents by means of unpolarized neutron scattering,
nuclear magnetic resonance or muon spectroscopy. We illustrate this mechanism
by solving a half-filled Hubbard model on a frustrated ladder, a simple but
nontrivial case of strongly interacting electrons.
| [
"cond-mat.str-el"
] | cond-mat.str-el | Strongly Correlated Electrons | 6,979Strongly Correlated Electrons
|
|
1907.01788 | The quest for practical cryptographic primitives that are robust against
quantum computers is of vital importance for the field of cryptography. Among
the abundance of different cryptographic primitives one may consider, one-way
functions stand out as fundamental building blocks of more complex
cryptographic protocols, and they play a central role in modern asymmetric
cryptography. We propose a mathematical one-way function, which relies on
coarse-grained boson sampling. The evaluation and the inversion of the function
are discussed in the context of classical and quantum computers. The present
results suggest that the scope and power of boson sampling may go beyond the
proof of quantum supremacy, and pave the way towards cryptographic
applications.
| [
"quant-ph",
"cs.CR"
] | quant-ph | cs.CR | Quantum Physics;Cryptography and Security | 6,032Quantum Physics;Cryptography and Security
|
1210.3005 | We present an overview of the EXoplanetary Circumstellar Environments and
Disk Explorer (EXCEDE), selected by NASA for technology development and
maturation. EXCEDE will study the formation, evolution and architectures of
exoplanetary systems, and characterize circumstellar environments into stellar
habitable zones. EXCEDE provides contrast-limited scattered-light detection
sensitivities ~ 1000x greater than HST or JWST coronagraphs at a much smaller
effective inner working angle (IWA), thus enabling the exploration and
characterization of exoplanetary circumstellar disks in currently inaccessible
domains. EXCEDE will utilize a laboratory demonstrated high-performance Phase
Induced Amplitude Apodized Coronagraph (PIAA-C) integrated with a 70 cm
diameter unobscured aperture visible light telescope. The EXCEDE PIAA-C will
deliver star-to-disk augmented image contrasts of < 10E-8 and a 1.2 L/D IWA or
140 mas with a wavefront control system utilizing a 2000-element MEMS DM and
fast steering mirror. EXCEDE will provide 120 mas spatial resolution at 0.4
microns with dust detection sensitivity to levels of a few tens of zodis with
two-band imaging polarimetry. EXCEDE is a science-driven technology pathfinder
that will advance our understanding of the formation and evolution of
exoplanetary systems, placing our solar system in broader astrophysical
context, and will demonstrate the high contrast technologies required for
larger-scale follow-on and multi-wavelength investigations on the road to
finding and characterizing exo-Earths in the years ahead.
| [
"astro-ph.IM"
] | astro-ph.IM | Instrumentation and Methods for Astrophysics | 3,689Instrumentation and Methods for Astrophysics
|
|
1708.00285 | In this paper, the central BMO spaces with variable exponent are introduced.
As an application, we characterize these spaces by the boundedness of
commutators of Hardy operator and its dual operator on variable Lebesgue
spaces. The boundedness of vector-valued commutators on Herz spaces with
variable exponent is also considered.
| [
"math.FA"
] | math.FA | Functional Analysis | 2,549Functional Analysis
|
|
0910.2905 | We complete our high-accuracy studies of the lattice ghost propagator in
Landau gauge in Numerical Stochastic Perturbation Theory up to three loops. We
present a systematic strategy which allows us to extract with sufficient precision
the non-logarithmic parts of logarithmically divergent quantities as a function
of the propagator momentum squared in the infinite-volume and $a\to 0$ limits.
We find accurate coincidence with the one-loop result for the ghost self-energy
known from standard Lattice Perturbation Theory and improve our previous
estimate for the two-loop constant contribution to the ghost self-energy in
Landau gauge. Our results for the perturbative ghost propagator are compared
with Monte Carlo measurements of the ghost propagator performed by the Berlin
Humboldt university group which has used the exponential relation between
potentials and gauge links.
| [
"hep-lat"
] | hep-lat | High Energy Physics - Lattice | 3,092High Energy Physics - Lattice
|
|
1212.6367 | We discuss the photograph procured from the archives of the V. Stefanyk Lviv
National Scientific Library of Ukraine, dated 1904, which shows Marian
Smoluchowski together with professors and graduate students of the Philosophy
department of the Lviv University. The personalia includes both the professors
and the graduates depicted in the photograph, with the emphasis on the graduates
as being much less known and studied. The photograph originates from the
collection of the Shevchenko Scientific Society, therefore a brief historical
background on the activities of physicists in this society around that period
of time is provided as well.
| [
"physics.hist-ph"
] | physics.hist-ph | History and Philosophy of Physics | 3,447History and Philosophy of Physics
|
|
2309.17328 | We investigate whether the Babcock-Leighton flux-transport dynamo model
remains in agreement with observations if the meridional flow profile is taken
from helioseismic inversions. Additionally, we investigate the effect of the
loss of toroidal flux through the solar surface. We employ the 2D
flux-transport BL dynamo framework. We use the helioseismically-inferred
meridional flow profile, and include toroidal flux loss in a way that is
consistent with the amount of poloidal flux generated by Joy's law. Our model
does not impose a preference for emergences at low latitudes, but we do require
that the model produces such a preference. We can find solutions in general
agreement with observations, including the equatorward drift of the butterfly
wings and the cycle's 11-year period. The most important free parameters in the
model are the depth to which the radial turbulent pumping extends and the
turbulent diffusivity in the lower half of the convection zone. We find that
the pumping needs to extend to depths of about $0.80R_{\odot}$ and the bulk
turbulent diffusivity needs to be around 10 km$^2$/s or less. We find that the
emergences are restricted to low latitudes without the need to impose such a
preference. The flux-transport BL model, incorporating the helioseismically
inferred meridional flow and toroidal field loss term, is compatible with the
properties of the observed butterfly diagram and with the observed toroidal
loss rate. Reasonably tight constraints are placed on the remaining free
parameters. The pumping needs to extend to just below the depth corresponding to
the location where the meridional flow changes direction. Our linear model does
not however reproduce the observed "rush to the poles" of the diffuse surface
radial field resulting from the decay of sunspots -- reproducing this might
require the imposition of a preference for flux to emerge near the equator.
| [
"astro-ph.SR",
"physics.space-ph"
] | astro-ph.SR | physics.space-ph | Solar and Stellar Astrophysics;Space Physics | 6,723Solar and Stellar Astrophysics;Space Physics
|
1911.07491 | The paper deals with spectral order isomorphisms in the framework of
AW*-algebras. We establish that every spectral order isomorphism between sets
of all self-adjoint operators (or between sets of all effects, or between sets
of all positive operators) in AW*-factors of Type I has a canonical form
induced by a continuous function calculus and an isomorphism between projection
lattices. In particular, this solves an open question about spectral order
automorphisms of the set of all (bounded) self-adjoint operators on an
infinite-dimensional Hilbert space. We also discuss spectral order isomorphisms
preserving, in addition, orthogonality in both directions.
| [
"math.OA"
] | math.OA | Operator Algebras | 5,107Operator Algebras
|
|
1808.09151 | Context. Although the Gaia catalogue on its own is a very powerful tool, it
is the combination of this high-accuracy archive with other archives that will
truly open up amazing possibilities for astronomical research. The advanced
interoperation of archives is based on cross-matching, leaving the user with
the feeling of working with one single data archive. The data retrieval should
work not only across data archives but also across wavelength domains. The
first step for a seamless access to the data is the computation of the
cross-match between Gaia and external surveys.
Aims. We describe the adopted algorithms and results of the pre-computed
cross-match of the Gaia Data Release 2 (DR2) catalogue with dense surveys
(Pan-STARRS1 DR1, 2MASS, SDSS DR9, GSC 2.3, URAT-1, allWISE, PPMXL, and APASS
DR9) and sparse catalogues (Hipparcos2, Tycho-2, and RAVE 5).
Methods. A new algorithm is developed specifically for sparse catalogues.
Improvements and changes with respect to the algorithm adopted for DR1 are
described in detail.
Results. The outputs of the cross-match are part of the official Gaia DR2
catalogue. The global analysis of the cross-match results is also presented.
| [
"astro-ph.SR",
"astro-ph.GA",
"astro-ph.IM"
] | astro-ph.SR | astro-ph.GA | Solar and Stellar Astrophysics;Astrophysics of Galaxies;Instrumentation and Methods for Astrophysics | 6,672Solar and Stellar Astrophysics;Astrophysics of Galaxies;Instrumentation and Methods for Astrophysics
|
2111.08546 | Transformer-based language models trained on large text corpora have enjoyed
immense popularity in the natural language processing community and are
commonly used as a starting point for downstream tasks. While these models are
undeniably useful, it is a challenge to quantify their performance beyond
traditional accuracy metrics. In this paper, we compare BERT-based language
models through snapshots of acquired knowledge at sequential stages of the
training process. Structured relationships from training corpora may be
uncovered through querying a masked language model with probing tasks. We
present a methodology to unveil a knowledge acquisition timeline by generating
knowledge graph extracts from cloze "fill-in-the-blank" statements at various
stages of RoBERTa's early training. We extend this analysis to a comparison of
pretrained variations of BERT models (DistilBERT, BERT-base, RoBERTa). This
work proposes a quantitative framework to compare language models through
knowledge graph extraction (GED, Graph2Vec) and showcases a part-of-speech
analysis (POSOR) to identify the linguistic strengths of each model variant.
Using these metrics, machine learning practitioners can compare models,
diagnose their models' behavioral strengths and weaknesses, and identify new
targeted datasets to improve model performance.
| [
"cs.LG",
"cs.CL"
] | cs.LG | cs.CL | Machine Learning;Computation and Language | 4,009Machine Learning;Computation and Language
|
0909.5217 | A new kind of tripartite coherent-entangled state (CES) $\ket{\beta,\gamma,
x}_{\mu\nu\tau}$ is proposed, which exhibits the properties of both coherence
and entanglement. We investigate its completeness and orthogonality, and find
it can form a representation of the tripartite CES. A protocol for generating
the tripartite CES is proposed using an asymmetric beam splitter. Applications of
the tripartite CES in quantum optics are also presented.
| [
"quant-ph"
] | quant-ph | Quantum Physics | 5,985Quantum Physics
|
|
0706.2139 | We show individual high resolution spectra of components A, B, and C of the
nearby late-M type multiple system LHS 1070. Component A is a mid-M star, B and
C are known to have masses at the threshold to brown dwarfs. From our spectra
we measure rotation velocities and the mean magnetic field for all three
components individually. We find magnetic flux on the order of several
kilo-Gauss in all components. The rotation velocities of the two late-M objects
B and C are similar (vsini = 16km/s), the earlier A component is spinning only
at about half that rate. This suggests weakening of net rotational braking at
late-M spectral type, and that the lack of slowly rotating late-M and L dwarfs
is real. Furthermore, we found that magnetic flux in the B component is about
twice as strong as in component C at similar rotation rate. This indicates that
rotational braking is not proportional to magnetic field strength in fully
convective objects, and that a different field topology is the reason for the
weak braking in low mass objects.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1702.02063 | In this study, a new position control scheme for the tendon-sheath mechanism
(TSM), which is used in flexible medical devices, is presented. The TSM is widely
used in dexterous robotic applications because it can work flexibly in limited
space and constrained environments, and provides efficient power transmission
from the external actuator to the distal joint. However, nonlinearities from
friction and backlash hysteresis between the tendon and the sheath pose
challenges in achieving precise position controls of the end effector. Previous
studies on the TSM only address the control problem under the assumptions of
known tendon-sheath configuration and known model parameters of the backlash
hysteresis nonlinearity. These approaches can have adverse impacts and
limitations on overall system performance and practical implementation.
This paper presents a new approach to model and control the TSM-driven flexible
robotic systems. The designed controller does not require exact knowledge of
nonlinear friction and backlash hysteresis parameters; only their bounds are
estimated online. Simulation and experimental validation results show that the
proposed control scheme can significantly improve the tracking performance
without exact knowledge of the model parameters and the
sheath configuration.
| [
"cs.RO",
"math.DS"
] | cs.RO | math.DS | Robotics;Dynamical Systems | 6,370Robotics;Dynamical Systems
|
2111.04706 | Federated learning is an established method for training machine learning
models without sharing training data. However, recent work has shown that it
cannot guarantee data privacy as shared gradients can still leak sensitive
information. To formalize the problem of gradient leakage, we propose a
theoretical framework that enables, for the first time, analysis of the Bayes
optimal adversary phrased as an optimization problem. We demonstrate that
existing leakage attacks can be seen as approximations of this optimal
adversary with different assumptions on the probability distributions of the
input data and gradients. Our experiments confirm the effectiveness of the
Bayes optimal adversary when it has knowledge of the underlying distribution.
Further, our experimental evaluation shows that several existing heuristic
defenses are not effective against stronger attacks, especially early in the
training process. Thus, our findings indicate that the construction of more
effective defenses and their evaluation remains an open problem.
| [
"cs.LG",
"cs.CR"
] | cs.LG | cs.CR | Machine Learning;Cryptography and Security | 4,077Machine Learning;Cryptography and Security
|
astro-ph/0102090 | The traditional use of fixed apertures in determining the well known
color-magnitude (CM) relation of early type galaxies, coupled with the presence
of radial color gradients within these systems, introduces a bias in the CM
relation itself. The effect of this bias is studied here deriving a CM relation
which is based on color measurements carried out homogeneously within an
aperture of radius equal to that of the galaxy effective radius. A sample of 48
giant early-type galaxies in the Coma cluster, with CCD observations in the U-
and V-band, is used for this derivation. It is found that internal radial color
gradients in early-type galaxies cannot be neglected when discussing the colors
of these systems, and that the CM relation derived using color measurements
within the effective radius is significantly flatter than those based on
fixed-aperture color measurements. With the presently available data it is
impossible to determine whether the relation is completely flat, or whether a
small correlation is still present between galaxy color and luminosity.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
0906.4608 | In this letter, we report on an all-optical fiber approach to the generation of
ultra-low noise microwave signals. We make use of two erbium fiber mode-locked
lasers phase locked to a common ultra-stable laser source to generate an 11.55
GHz signal with an unprecedented relative phase noise of -111 dBc/Hz at 1 Hz
from the carrier. The residual frequency instability of the microwave signals
derived from the two optical frequency combs is below 2.3 × 10^(-16) at 1 s and
about 4 × 10^(-19) at 6.5 × 10^(4) s (in 5 Hz bandwidth, three days continuous
operation).
| [
"physics.optics",
"physics.ao-ph"
] | physics.optics | physics.ao-ph | Optics;Atmospheric and Oceanic Physics | 5,157Optics;Atmospheric and Oceanic Physics
|
2008.00305 | Point clouds provide a compact and efficient representation of 3D shapes.
While deep neural networks have achieved impressive results on point cloud
learning tasks, they require massive amounts of manually labeled data, which
can be costly and time-consuming to collect. In this paper, we leverage 3D
self-supervision for learning downstream tasks on point clouds with fewer
labels. A point cloud can be rotated in infinitely many ways, which provides a
rich label-free source for self-supervision. We consider the auxiliary task of
predicting rotations that in turn leads to useful features for other tasks such
as shape classification and 3D keypoint prediction. Using experiments on
ShapeNet and ModelNet, we demonstrate that our approach outperforms the
state-of-the-art. Moreover, features learned by our model are complementary to
other self-supervised methods and combining them leads to further performance
improvement.
| [
"cs.CV",
"cs.GR",
"cs.LG"
] | cs.CV | cs.GR | Computer Vision and Pattern Recognition;Graphics;Machine Learning | 1,571Computer Vision and Pattern Recognition;Graphics;Machine Learning
|
astro-ph/0602297 | (Abridged) We present a detailed analysis of the morphology, isophotal
parameters and surface brightness profiles for 100 early-type members of the
Virgo Cluster, from dwarfs (M_B = -15.1 mag) to giants (M_B = -21.8 mag). Each
galaxy has been imaged in two filters, closely resembling the Sloan g and z
passbands, using the Advanced Camera for Surveys on board the Hubble Space
Telescope.
Dust and complex morphological structures are common, with kiloparsec-scale
stellar disks, bars, and nuclear stellar disks seen in 60% of galaxies with
intermediate luminosity (-20 < M_B < -17), and dust seen in 42% of galaxies
brighter than M_B = -18.9 mag. Dust morphologies range from faint wisps and
patches on tens of parsec scales, to regular, highly organized kpc-scale dust
disks, often showing evidence of recent star formation.
Surface brightness profiles and isophotal parameters are derived typically
within 8 kpc from the center for the brightest galaxies, and 1.5 kpc for the
faintest systems, with a resolution (FWHM) of 7 pc. Based on a parametrization
of the surface brightness profiles in terms of a Sersic or core-Sersic model,
we find that 1) there is no evidence of a bimodal behavior of the slope, gamma,
of the profile in the innermost regions; 2) although the brightest galaxies
have shallow inner profiles, the shallowest profiles (lowest gamma values) are
found in faint dwarf systems; 3) the widely adopted separation of early-type
galaxies between "core" and "power-law" types, which had originally been
prompted by the claim of a clear bimodal distribution of gamma values, is
untenable; and 4) there is no evidence of a structural dichotomy between dwarf
and regular ellipticals.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1906.08922 | For the rank regularized minimization problem, we introduce several kinds of
stationary points by the problem itself and its equivalent reformulations
including the mathematical program with an equilibrium constraint (MPEC), the
global exact penalty of the MPEC, and the surrogate yielded by eliminating the dual
part in the exact penalty. A clear relation chart is established for these
stationary points, which guides the user to choose an appropriate reformulation
for seeking a low-rank solution. As a byproduct, we also provide a weaker
condition for a local minimizer of the MPEC to be the M-stationary point by
characterizing the directional limiting normal cone to the graph of the normal
cone mapping of the positive semidefinite (PSD) cone.
| [
"math.OC"
] | math.OC | Optimization and Control | 5,234Optimization and Control
|
|
0808.3517 | I review the status of theoretical predictions for events containing a W or Z
boson and jets, one or more of which may include heavy quarks. Special
attention is paid to comparisons between different theoretical approaches and
with the latest experimental data.
| [
"hep-ph"
] | hep-ph | High Energy Physics - Phenomenology | 3,129High Energy Physics - Phenomenology
|
|
2207.07750 | Spectral purity of any millimeter wave (mmW) source is of the utmost interest
in low-noise applications. Optical synthesis via photomixing is an attractive
source for such mmWs, which usually involves expensive spectrally pure lasers
with narrow linewidths approaching monochromaticity due to their inherent
fabrication costs or specifications. Here, we report an alternative option for
enhancing the spectral purity of inexpensive semiconductor diode lasers via a
self-injection locking technique through corresponding Stokes waves from a
fiber Brillouin cavity exhibiting greatly improved phase noise levels and large
wavelength tunability of ~1.8 nm. We implement a system with two self-injected
diode lasers on a common Brillouin cavity aimed at difference frequency
generation in the mmW and THz region. We generate tunable sub-mmW (0.3 and 0.5
THz) waves by beating the self-injected two wavelength Stokes light on a
uni-travelling carrier photodiode and characterize the noise performance. The
sub-mmW features minuscule timing noise levels in the zepto-second (zs.Hz^-0.5)
scale, outperforming state-of-the-art dissipative Kerr soliton based
micro-resonator setups while offering broader frequency tunability. These
results suggest a viable inexpensive alternative for mmW sources aimed at
low-noise applications featuring lab-scale footprints and rack-mounted
portability while paving the way for chip-scale photonic integration.
| [
"physics.optics",
"physics.app-ph"
] | physics.optics | physics.app-ph | Optics;Applied Physics | 5,150Optics;Applied Physics
|
gr-qc/9604026 | We consider the possibility of discriminating different theories of gravity
using a recently proposed gravitational wave detector of spherical shape. We
argue that the spin content of different theories can be extracted by relating the
measurements of the excited spheroidal vibrational eigenmodes to the
Newman-Penrose parameters. The sphere toroidal modes cannot be excited by any
metric GW and can thus be used as a veto.
| [
"gr-qc",
"hep-th"
] | gr-qc | hep-th | General Relativity and Quantum Cosmology;High Energy Physics - Theory | 2,746General Relativity and Quantum Cosmology;High Energy Physics - Theory
|
2103.10338 | Single-phase multiferroic materials that allow the coexistence of
ferroelectric and magnetic ordering above room temperature are highly
desirable, motivating an ongoing search for mechanisms for unconventional
ferroelectricity in magnetic oxides. Here, we report an antisite defect
mechanism for room temperature ferroelectricity in epitaxial thin films of
yttrium orthoferrite, YFeO3, a perovskite-structured canted antiferromagnet. A
combination of piezoresponse force microscopy, atomically resolved elemental
mapping with aberration corrected scanning transmission electron microscopy and
density functional theory calculations reveals that the presence of YFe
antisite defects facilitates a non-centrosymmetric distortion promoting
ferroelectricity. This mechanism is predicted to work analogously for other
rare earth orthoferrites, with a dependence of the polarization on the radius
of the rare earth cation. Furthermore, a vertically aligned nanocomposite
consisting of pillars of a magnetoelastic oxide CoFe2O4 embedded epitaxially in
the YFeO3 matrix exhibits both robust ferroelectricity and ferrimagnetism at
room temperature, as well as a noticeable strain-mediated magnetoelectric
coupling effect. Our work uncovers the distinctive role of antisite defects in
providing a novel mechanism for ferroelectricity in a range of magnetic
orthoferrites and further augments the functionality of this family of complex
oxides for multiferroic applications.
| [
"cond-mat.mtrl-sci"
] | cond-mat.mtrl-sci | Materials Science | 4,287Materials Science
|
|
physics/0606081 | Far field radiation pattern under tight focusing condition is investigated in
Coherent Anti-stokes Raman Scattering (CARS) microscopy both in the forward
(F-CARS) and backward (E-CARS) directions. While we assume no refraction index
mismatch between the sample and the environing medium, our rigorous numerical
electromagnetic computation takes into account the exact polarizations of the
excitation laser beams and of the induced nonlinear dipoles. F-CARS and E-CARS
radiation patterns, as well as their divergence, are studied as a function of
the size of the sample object and compared to the excitation beams.
| [
"physics.optics",
"physics.bio-ph"
] | physics.optics | physics.bio-ph | Optics;Biological Physics | 5,163Optics;Biological Physics
|
1709.00164 | We present kleuren, a novel assembly-free method to reconstruct phylogenetic
trees using the Colored de Bruijn Graph. kleuren works by constructing the
Colored de Bruijn Graph and then traversing it, finding bubble structures in
the graph that provide phylogenetic signal. The bubbles are then aligned and
concatenated to form a supermatrix, from which a phylogenetic tree is inferred.
We introduce the algorithms that kleuren uses to accomplish this task, and show
its performance on reconstructing the phylogenetic tree of 12 Drosophila
species. kleuren reconstructed the established phylogenetic tree accurately,
and is a viable tool for phylogenetic tree reconstruction using whole genome
sequences. Software package available at: https://github.com/Colelyman/kleuren
| [
"q-bio.PE",
"cs.DS"
] | q-bio.PE | cs.DS | Populations and Evolution;Data Structures and Algorithms | 5,646Populations and Evolution;Data Structures and Algorithms
|
2005.10065 | The notions of time and causality are revisited, as well as the A- and
B-theory of time, in order to determine which theory of time is most compatible
with relativistic spacetimes. By considering orientable spacetimes and defining
a time-orientation, we formalize the concepts of a time-series in relativistic
spacetimes; A-theory and B-theory are given mathematical descriptions within
the formalism of General Relativity. As a result, in time-orientable
spacetimes, the notions of events being in the future and in the past, which
are notions of A-theory, are found to be more fundamental than the notions of
events being earlier than or later than other events, which are notions of
B-theory. Furthermore, we find that B-theory notions are incompatible with some
structures encountered in globally hyperbolic spacetimes, namely past and
future inextendible curves. Hence, GR is favorable to A-theory and the notions
of past, present and future.
| [
"gr-qc",
"physics.hist-ph"
] | gr-qc | physics.hist-ph | General Relativity and Quantum Cosmology;History and Philosophy of Physics | 2,757General Relativity and Quantum Cosmology;History and Philosophy of Physics
|
2305.07056 | Massive neutrinos modify the expansion history of the universe and suppress
the structure formation below their free streaming scale. Cosmic microwave
background (CMB) observations at small angular scales can be used to constrain
the total mass $\Sigma m_\nu$ of the three neutrino flavors. However, at these
scales, the CMB-measured $\Sigma m_\nu$ is degenerate with $\tau$, the optical
depth to reionization, which quantifies the damping of CMB anisotropies due to
the scattering of CMB photons with free electrons along the line of sight. Here
we revisit the idea to use 21-cm power spectrum observations to provide direct
estimates for $\tau$. A joint analysis of CMB and 21-cm data can alleviate the
$\tau-\Sigma m_\nu$ degeneracy, making it possible to measure $\Sigma m_\nu$
with unprecedented precision. Forecasting for the upcoming Hydrogen Epoch of
Reionization Array (HERA), we find that a $\lesssim\mathcal{O}(10\%)$
measurement of $\tau$ is achievable, which would enable a $\gtrsim 5\sigma$
measurement of $\Sigma m_\nu=60\,[{\rm meV}]$, for any astrophysics model that
we considered. Precise estimates of $\tau$ also help reduce uncertainties in
other cosmological parameters, such as $A_s$, the amplitude of the primordial
scalar fluctuations power spectrum.
| [
"astro-ph.CO",
"astro-ph.IM"
] | astro-ph.CO | astro-ph.IM | Cosmology and Nongalactic Astrophysics;Instrumentation and Methods for Astrophysics | 1,767Cosmology and Nongalactic Astrophysics;Instrumentation and Methods for Astrophysics
|
hep-ph/9908417 | The mass term for Majorana neutrinos explicitly violates lepton number.
Several authors have used this fact to create a lepton asymmetry in the
universe by considering CP violating effects in the one loop self-energy
correction for the decaying heavy Majorana neutrino. We compare and comment on
the different approaches used to calculate the lepton asymmetry including those
using an effective Hamiltonian and resummed propagators. We also recalculate
the asymmetry in the small mass difference limit.
| [
"hep-ph",
"astro-ph"
] | hep-ph | astro-ph | High Energy Physics - Phenomenology;Astrophysics | 3,131High Energy Physics - Phenomenology;Astrophysics
|
1909.05710 | Superfluid dark matter postulates that the centers of galaxies contain
superfluid condensates. An important quantity regarding these superfluids is
their chemical potential $ \mu $. Here, we discuss two issues related to this
chemical potential. First, there is no exactly conserved quantity associated
with this chemical potential due to the symmetry-breaking baryon-phonon
coupling. Second, $ \mu $ is sometimes introduced by shifting the phonon field
by $ \mu \cdot t $ which -- again due to the symmetry-breaking baryon-phonon
coupling -- introduces an explicit time dependence in the Lagrangian. We
investigate under which conditions introducing a chemical potential is
nevertheless justified and show how to correctly introduce it when these
conditions are met. We further propose a model that recovers superfluid dark
matter's zero-temperature equations of motion including a chemical potential
even if the aforementioned conditions for justifying a chemical potential are
not met.
| [
"astro-ph.GA",
"hep-ph",
"hep-th"
] | astro-ph.GA | hep-ph | Astrophysics of Galaxies;High Energy Physics - Phenomenology;High Energy Physics - Theory | 7,267longtail
|
hep-ph/0307149 | We discuss recent developments in neutrino physics and focus, in particular,
on neutrino oscillations and matter effects of three light active neutrinos.
Moreover, we discuss the difference between Dirac and Majorana neutrinos,
neutrinoless $\beta\beta$-decay, absolute neutrino masses and electromagnetic
moments. Basic mechanisms and a few models for neutrino masses and mixing are
also presented.
| [
"hep-ph"
] | hep-ph | High Energy Physics - Phenomenology | 3,129High Energy Physics - Phenomenology
|
|
2004.06932 | We prove that the implicit time Euler scheme coupled with finite elements
space discretization for the 2D Navier-Stokes equations on the torus subject to
a random perturbation converges in $L^2(\Omega)$, and describe the rate of
convergence for an $H^1$-valued initial condition. This refines previous
results which only established the convergence in probability of these
numerical approximations. Using exponential moment estimates of the solution of
the stochastic Navier-Stokes equations and convergence of a localized scheme,
we can prove strong convergence of this space-time approximation. The speed of
the $L^2(\Omega)$-convergence depends on the diffusion coefficient and on the
viscosity parameter. In case of Scott-Vogelius mixed elements and for an
additive noise, the convergence is polynomial.
| [
"math.PR",
"cs.NA",
"math.NA"
] | math.PR | cs.NA | Probability;Numerical Analysis;Numerical Analysis | 5,770Probability;Numerical Analysis;Numerical Analysis
|
1006.0528 | This paper has been withdrawn by the author. In this short paper I will put
in evidence a problem nested in Ozawa's effort to block von Neumann's chains
and in his attributing the wave-collapse to an interaction between systems. This
suggests distinguishing sharply the mathematical world from the
phenomenological one.
| [
"quant-ph"
] | quant-ph | Quantum Physics | 5,985Quantum Physics
|
|
1301.6736 | In this article we propose a qualitative (ordinal) counterpart for the
Partially Observable Markov Decision Processes model (POMDP) in which the
uncertainty, as well as the preferences of the agent, are modeled by
possibility distributions. This qualitative counterpart of the POMDP model
relies on a possibilistic theory of decision under uncertainty, recently
developed. One advantage of such a qualitative framework is its ability to
escape from the classical obstacle of stochastic POMDPs, in which even with a
finite state space, the obtained belief state space of the POMDP is infinite.
Instead, in the possibilistic framework, the belief state space remains finite,
even if it is exponentially larger than the state space.
| [
"cs.AI"
] | cs.AI | Artificial Intelligence | 361Artificial Intelligence
|
|
1509.06365 | Mixture models have found uses in many areas. To list a few: unsupervised
learning, empirical Bayes, latent class and trait models. The current
applications of mixture models to empirical data is limited to computing a
mixture model from the same parametric family, e.g. Gaussians or Poissons. In
this paper it is shown that by using Hermite polynomials and ideals, the
modeling of a mixture process can be extended to include different families in
terms of their cumulative distribution functions (cdfs).
| [
"stat.CO"
] | stat.CO | Computation | 1,167Computation
|
|
astro-ph/0311357 | Spectroscopic observations of distant quasars have resulted in the detection
of molecular hydrogen in intervening damped Lyman-alpha absorption clouds
(DLAs). We use observations compiled from different experimental groups to show
that the molecular hydrogen abundance exhibits a dramatic increase over a
cosmological time period corresponding to 13% to 24% of the age of the
universe. We also tentatively show that the heavy element abundances in the
same gas clouds exhibit a faster and more well-defined cosmological evolution
compared to the general DLA population over the same time baseline. We argue
that this latter point is unsurprising, because the general DLA population
arises in a wide variety of galaxy types and environments, and thus spans a
broad range of ISM gas-phases and abundances at the same cosmic time. DLAs
exhibiting H2 absorption may therefore circumvent this problem, efficiently
identifying a narrower class of objects, and provide a more sensitive probe of
cosmological chemical evolution.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1808.08928 | The recent application of electrosprays to characterize the air-water
interface, along with the reports on dramatically accelerated chemical
reactions in aqueous electrosprays, has sparked broad interest. Herein, we
report on complementary laboratory and in silico experiments tracking the
oligomerization of isoprene, an important biogenic gas, in electrosprays and
isoprene-water emulsions to differentiate the contributions of interfacial
effects from those of high voltages leading to charge-separation and
concentration of reactants in the electrosprays. To this end, we employed
electrospray ionization mass spectrometry, proton nuclear magnetic resonance,
and quantum mechanical simulations. We found that the oligomerization of
isoprene in aqueous electrosprays involved minimally hydrated and highly
reactive hydronium ions. Those conditions, however, are non-existent at
pristine air-water interfaces and oil-water emulsions under normal temperature
and pressure. Thus, electrosprays should be complemented with surface-specific
platforms and theoretical methods to reliably investigate chemistries at the
pristine air-water interface.
| [
"physics.chem-ph"
] | physics.chem-ph | Chemical Physics | 859Chemical Physics
|
|
0907.2979 | We study the role of electron correlations among Co 3d electrons contributing
to the conduction band of a Kondo lattice compound, Ce2CoSi3, using high
resolution photoemission spectroscopy and ab initio band structure
calculations. Experimental results reveal the signature of a Ce 4$f$-derived
Kondo resonance feature at the Fermi level and the dominance of Co 3d contributions
at higher binding energies in the valence band. The line shape of the
experimental Co 3$d$ band is found to be significantly different from that
obtained from the band structure calculations within the local density
approximations. Consideration of electron-electron Coulomb repulsion among Co
3d electrons leads to a better representation of experimental results. The
correlation strength among Co 3$d$ electrons is found to be about 3 eV.
A signature of an electron-correlation-induced satellite feature is also observed in
the Co 2p core level spectrum. Thus, these results demonstrate the importance
of the electron correlation among conduction electrons to derive the
microscopic description of such Kondo systems.
| [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] | cond-mat.str-el | cond-mat.mtrl-sci | Strongly Correlated Electrons;Materials Science | 7,006Strongly Correlated Electrons;Materials Science
|
2308.04778 | By combining related objects, unsupervised machine learning techniques aim to
reveal the underlying patterns in a data set. Non-negative Matrix Factorization
(NMF) is a data mining technique that splits data matrices, by imposing
non-negativity restrictions on their elements, into two matrices: one
representing the data partitions and the other representing the cluster
prototypes of the data set. This method has attracted a lot of attention and is
used in a wide range of applications, including text mining, clustering,
language modeling, music transcription, and neuroscience (gene separation). The
interpretation of the generated matrices is made simpler by the absence of
negative values. In this article, we propose a study on multi-modal clustering
algorithms and present a novel method called multi-modal multi-view
non-negative matrix factorization, in which we analyze the collaboration of
several local NMF models. The experimental results show the value of the
proposed approach, which was evaluated using a variety of data sets, and the
obtained results are very promising compared to state-of-the-art methods.
| [
"cs.AI"
] | cs.AI | Artificial Intelligence | 361Artificial Intelligence
|
|
0901.3553 | We use the publicly available subhalo catalogs from the Via Lactea simulation
along with a Gpc-scale N-body simulation to understand the impact of
inhomogeneous reionization on the satellite galaxy population of the Milky Way.
The large-volume simulation is combined with a model for reionization that
allows us to predict the distribution of reionization times for Milky Way mass
halos. Motivated by this distribution, we identify candidate satellite galaxies
in the simulation by requiring that any subhalo must grow above a specified
mass threshold before it is reionized; after this time the photoionizing
background will suppress both the formation of stars and the accretion of gas.
We show that varying the reionization time over the range expected for Milky
Way mass halos can change the number of satellite galaxies by roughly two
orders of magnitude. This conclusion is in contradiction with a number of
studies in the literature, and we conclude that this is a result of
inconsistent application of the results of Gnedin (2000). We compare our
satellite galaxies to observations using both abundance matching and stellar
population synthesis methods to assign luminosities to our subhalos and account
for observational completeness effects. Additionally, if we assume that the
mass threshold is set by the virial temperature Tvir = 8e3 K, we find that our
model accurately matches the vmax distribution, radial distribution, and
luminosity function of observed Milky Way satellites for a reionization time
zreion = 9.6^{+1.0}_{-2.1}, assuming that the Via Lactea subhalo distribution is
representative of the Milky Way. This results in the presence of
119^{+202}_{-50} satellite galaxies.
| [
"astro-ph.CO",
"astro-ph.GA"
] | astro-ph.CO | astro-ph.GA | Cosmology and Nongalactic Astrophysics;Astrophysics of Galaxies | 1,727Cosmology and Nongalactic Astrophysics;Astrophysics of Galaxies
|
2306.14086 | Accommodating long-running deep learning (DL) training and inference jobs is
challenging on GPU clusters that use traditional batch schedulers, such as
Slurm. Given fixed wall clock time limits, DL researchers usually need to run a
sequence of batch jobs and experience long interruptions on overloaded
machines. Such interruptions significantly lower the research productivity and
QoS for services that are deployed in production. To mitigate the issues from
interruption, we investigate a set of statistical learning and reinforcement
learning (RL) techniques, including random forest, xgboost, Deep Q-Network, and
policy gradient to design a proactive provisioner using production job traces
from three GPU clusters. We follow the standard machine learning practice by
partitioning each job trace into training and validation subsets, then train
each model using the training subset and evaluate the generality using the
validation subset. We introduce Mirage, a Slurm-compatible resource provisioner
that integrates the candidate RL methods. Our experiments show that Mirage
can reduce interruptions by 17-100% and safeguard 23-76% of jobs with zero
interruption across varying load levels on the three clusters.
| [
"cs.DC"
] | cs.DC | Distributed, Parallel, and Cluster Computing | 2,194Distributed, Parallel, and Cluster Computing
|
|
astro-ph/9705004 | Recent ASCA and ROSAT X-ray observations of active galaxies have revealed a
host of new data on the fundamental properties of active galaxies. Amongst
these are the discovery and characterization of absorption by ionized gas in
Seyfert-I galaxies (the "warm absorber") , the discovery and parameterization
of broad Fe K lines which originate in the central 100 Schwarzschild radii, a
substantial modification in the form of the ionization continuum from previous
models and the absence of X-ray emission from broad absorption line quasars. We
briefly summarize the present observational situation and indicate where this
field might progress in the next few years with the enhanced capabilities of
AXAF, XMM and Astro-E.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
2202.03382 | We introduce Corrupted Image Modeling (CIM) for self-supervised visual
pre-training. CIM uses an auxiliary generator with a small trainable BEiT to
corrupt the input image instead of using artificial [MASK] tokens, where some
patches are randomly selected and replaced with plausible alternatives sampled
from the BEiT output distribution. Given this corrupted image, an enhancer
network learns to either recover all the original image pixels, or predict
whether each visual token is replaced by a generator sample or not. The
generator and the enhancer are simultaneously trained and synergistically
updated. After pre-training, the enhancer can be used as a high-capacity visual
encoder for downstream tasks. CIM is a general and flexible visual pre-training
framework that is suitable for various network architectures. For the first
time, CIM demonstrates that both ViT and CNN can learn rich visual
representations using a unified, non-Siamese framework. Experimental results
show that our approach achieves compelling results in vision benchmarks, such
as ImageNet classification and ADE20K semantic segmentation.
| [
"cs.CV",
"cs.AI",
"cs.LG"
] | cs.CV | cs.AI | Computer Vision and Pattern Recognition;Artificial Intelligence;Machine Learning | 1,521Computer Vision and Pattern Recognition;Artificial Intelligence;Machine Learning
|
1710.00513 | Shape reconstruction techniques using structured light have been widely
researched and developed due to their robustness, high precision, and density.
Because the techniques are based on decoding a pattern to find correspondences,
they implicitly require that the projected patterns be clearly captured by an
image sensor, i.e., to avoid defocus and motion blur of the projected pattern.
Although intensive research has been conducted on defocus blur, little work has
addressed motion blur, and the only solution is to capture with an extremely
fast shutter speed. In this paper, unlike the previous approaches, we actively
utilize motion blur, which we refer to as a light flow, to estimate depth.
Analysis reveals that a minimum of two light flows, which are retrieved from two
projected patterns on the object, are required for depth estimation. To
retrieve two light flows at the same time, two sets of parallel line patterns
are illuminated from two video projectors and the size of motion blur of each
line is precisely measured. By analyzing the light flows, i.e. lengths of the
blurs, scene depth information is estimated. In the experiments, 3D shapes of
fast moving objects, which are inevitably captured with motion blur, are
successfully reconstructed by our technique.
| [
"cs.CV"
] | cs.CV | Computer Vision and Pattern Recognition | 1,498Computer Vision and Pattern Recognition
|
|
2012.12241 | Hydroxyl ($\rm OH$) is known to form efficiently in cold gas ($T\sim 100$K)
along with the molecule $\rm H_2$ and can be used as an efficient tracer of the
diffuse molecular gas in the interstellar medium (ISM). Using a simple
formalism describing the $\rm H\,I/H_2$ transition and a reduced network of
major chemical reactions, we present a semi-analytical prescription to estimate
the abundances of O-bearing molecules in the diffuse ISM. We show that
predictions based on our prescription are in good agreement with the estimates
obtained using the MEUDON PDR code which utilizes the full reaction network. We
investigate the dependence of the relative abundances of $\rm OH/H\,I$ and $\rm
OH/H_2$ on the variations of physical conditions i.e., the metallicity, number
density ($n$), cosmic ray ionization rate ($\zeta$) and strength of UV field
($\chi$) in the medium. We find that the $\rm OH/H\,I$ abundances observed in
the Galactic ISM can be reproduced by models with $n\sim 50$cm$^{-3}$,
$\chi\sim 1$ (Mathis field) and $\zeta\sim3\times10^{-17}$s$^{-1}$, with a
variation of about one dex allowed around these values. Using the constrained
$\rm H_2$ column density distribution function at $z\sim3$, we estimate the
$\rm OH$ column density distribution function and discuss future prospects with
the upcoming large radio absorption line surveys.
| [
"astro-ph.GA",
"astro-ph.CO"
] | astro-ph.GA | astro-ph.CO | Astrophysics of Galaxies;Cosmology and Nongalactic Astrophysics | 470Astrophysics of Galaxies;Cosmology and Nongalactic Astrophysics
|
1509.00905 | The matched interface and boundary (MIB) method has a proven ability for
delivering the second order accuracy in handling elliptic interface problems
with arbitrarily complex interface geometries. However, its collocation
formulation requires relatively high solution regularity. Finite volume method
(FVM) has its merit in dealing with conservation law problems and its integral
formulation works well with relatively low solution regularity. We propose an
MIB-FVM to take the advantages of both MIB and FVM for solving elliptic
interface problems. We construct the proposed method on Cartesian meshes with
vertex-centered control volumes. A large number of numerical experiments are
designed to validate the present method in both two dimensional (2D) and three
dimensional (3D) domains. It is found that the proposed MIB-FVM achieves the
second order convergence for elliptic interface problems with complex interface
geometries in both $L_{\infty}$ and $L_2$ norms.
| [
"math.NA"
] | math.NA | Numerical Analysis | 5,002Numerical Analysis
|
|
2204.03473 | We study the roots of a random polynomial over the field of p-adic numbers.
For a random monic polynomial with coefficients in $\mathbb{Z}_p$, we obtain an
asymptotic formula for the factorial moments of the number of roots of this
polynomial. In addition, we show the probability that a random polynomial of
degree $n$ has more than $\log n$ roots is $O\big(n^{-K}\big)$ for some $K >
0$.
| [
"math.NT"
] | math.NT | Number Theory | 4,945Number Theory
|
|
1303.5071 | We describe the detection, interpretation, and removal of the signal
resulting from interactions of high energy particles with the \Planck\ High
Frequency Instrument (HFI). There are two types of interactions: heating of the
0.1\,K bolometer plate; and glitches in each detector time stream. The
transient responses to detector glitch shapes are not simple single-pole
exponential decays and fall into three families. The glitch shape for each
family has been characterized empirically in flight data and these shapes have
been used to remove glitches from the detector time streams. The spectrum of
the count rate per unit energy is computed for each family and a correspondence
is made to the location on the detector of the particle hit. Most of the
detected glitches are from Galactic protons incident on the die frame
supporting the micro-machined bolometric detectors. In the \Planck\ orbit at
L2, the particle flux is around $5\,{\rm cm}^{-2}\,{\rm s}^{-1}$ and is
dominated by protons incident on the spacecraft with energy $>$39\,MeV, at a
rate of typically one event per second per detector. Different categories of
glitches have different signatures in the time stream. Two of the glitch types
have a low amplitude component that decays over nearly 1\,s. This component
produces excess noise if not properly removed from the time-ordered data. We
have used a glitch detection and subtraction method based on the joint fit of
population templates. The application of this novel glitch subtraction method
removes excess noise from the time streams. Using realistic simulations, we
find that this method does not introduce signal bias into the \Planck\ data.
| [
"astro-ph.CO",
"astro-ph.IM"
] | astro-ph.CO | astro-ph.IM | Cosmology and Nongalactic Astrophysics;Instrumentation and Methods for Astrophysics | 1,767Cosmology and Nongalactic Astrophysics;Instrumentation and Methods for Astrophysics
|
2106.12474 | In this paper, we enable automated property verification of deliberative
components in robot control architectures. We focus on formalizing the
execution context of Behavior Trees (BTs) to provide a scalable, yet formally
grounded, methodology to enable runtime verification and prevent unexpected
robot behaviors. To this end, we consider a message-passing model that
accommodates both synchronous and asynchronous composition of parallel
components, in which BTs and other components execute and interact according to
the communication patterns commonly adopted in robotic software architectures.
We introduce a formal property specification language to encode requirements
and build runtime monitors. We performed a set of experiments, both on
simulations and on the real robot, demonstrating the feasibility of our
approach in a realistic application and its integration in a typical robot
software architecture. We also provide an OS-level virtualization environment
to reproduce the experiments in the simulated scenario.
| [
"cs.RO",
"cs.FL"
] | cs.RO | cs.FL | Robotics;Formal Languages and Automata Theory | 7,267longtail
|
1812.03389 | We present both an overview and a perspective of recent experimental advances
and proposed new approaches to performing computation using memristors. A
memristor is a 2-terminal passive component with a dynamic resistance depending
on an internal parameter. We provide a brief historical introduction, as well
as an overview of the physical mechanisms that lead to memristive behavior.
This review is meant to guide nonpractitioners in the field of memristive
circuits and their connection to machine learning and neural computation.
| [
"cs.ET",
"cond-mat.dis-nn"
] | cs.ET | cond-mat.dis-nn | Emerging Technologies;Disordered Systems and Neural Networks | 2,418Emerging Technologies;Disordered Systems and Neural Networks
|
astro-ph/0508592 | In this paper, we discuss improvements of the Suto et al. (2000) model, in
the light of recent theoretical developments (new theoretical mass functions, a
more accurate mass-temperature relation and an improved bias model) to predict
the clustering properties of galaxy clusters and to obtain constraints on
cosmological parameters. We re-derive the two-point correlation function of
clusters of galaxies for OCDM and LambdaCDM cosmological models, and we compare
these results with the observed spatial correlation function for clusters in
RASS1 (ROSAT All-Sky Survey 1), and in XBACs (X-Ray Brightest Abell-Type)
samples. The comparison shows that the best agreement is obtained for the
LambdaCDM model with Omega=0.3. The values of the correlation length obtained
($r \simeq 28.2 \pm 5.2 \, h^{-1}$ Mpc for LambdaCDM) are larger than those
found in the literature and comparable with the results found in Borgani,
Plionis & Kolokotronis (1999). (REST IN THE PAPER ABSTRACT)
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1212.0163 | This note contains a complete proof of the Abhyankar-Moh-Suzuki theorem (in
the characteristic zero case).
| [
"math.AC"
] | math.AC | Commutative Algebra | 1,107Commutative Algebra
|
|
astro-ph/0501679 | Eighteen days of MERLIN data and 42 hours of A-array VLA data at 1.4 GHz have
been combined to image a 10-arcmin field centred on the Hubble Deep and
Flanking Fields (HDF and HFF). A complete sample of 92 radio sources with
1.4-GHz flux densities above 40 microJy has been imaged using MERLIN+VLA. The
images are amongst the most sensitive yet made at 1.4 GHz, with rms noise
levels of 3.3 microJy/beam in the 0.2-arcsec images. Virtually all the sources
are resolved, with angular sizes in the range 0.2 to 3 arcsec. No additional
sources were detected down to 23 microJy in the central 3 arcmin, indicating
that sources fainter than 40 microJy are heavily resolved with MERLIN and must
have typical angular sizes greater than 0.5 arcsec. Compact radio sources were
used to align the optical data to the ICRF, to <50 mas in the HDF. We find a
statistical association of very faint (2 microJy and above) radio sources with
optically bright HDF galaxies down to about 23 mag. Of the 92 radio sources
above 40 microJy, about 85 percent are identified with galaxies brighter than
about I = 25 mag; the remaining 15 percent are associated with optically faint
systems. We identify several very red, optically faint systems including the
strongest sub-mm source in the HDF, HDF850.1. 72 percent of the radio
sources are starburst or AGN-type systems; the remainder are unclassified. The
proportion of starburst systems increases with decreasing flux density; below
100 microJy 70 percent of the sources are starburst-type systems in the
redshift range 0.3 -- 1.3. Chandra detections are associated with 55 of the 92
radio sources but their X-ray flux densities do not appear to be correlated
with the radio flux densities or morphologies.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1203.5509 | Two new carbon allotropes (H-carbon and S-carbon) are proposed, as possible
candidates for the intermediate superhard phases between graphite and diamond
obtained in the process of cold compressing graphite, based on the results of
first-principles calculations. Both H-carbon and S-carbon are more stable than
previously proposed M-carbon and W-carbon, and their bulk moduli are comparable
to that of diamond. H-carbon is an indirect-band-gap semiconductor with a gap
of 4.459 eV and S-carbon is a direct-band-gap semiconductor with a gap of 4.343
eV. The transition pressures from cold compressing graphite are 10.08 GPa and
5.93 GPa for H-carbon and S-carbon, respectively, which is consistent with
the recent experimental report.
| [
"cond-mat.mtrl-sci"
] | cond-mat.mtrl-sci | Materials Science | 4,287Materials Science
|
|
1803.09743 | We consider a scenario in which the inflaton $\phi$ is a pseudoscalar field
non-minimally coupled to gravity through a term of the form ${\cal X} R
\phi^2$. The pseudoscalar is also coupled to a $U(1)$ gauge field (or an
ensemble of ${\cal N}$ gauge fields) through an axial coupling of the form
$\phi F \tilde{F}$. Following M. M. Anber and L. Sorbo, Phys. Rev. D 81, 043534
(2010), Ref. [1], it is well known that this axial coupling leads to a
production of gauge particles which acts as a friction term in the dynamics of
the inflaton, producing a slow-roll regime even in the presence of a steep
potential. A remarkable result in this scenario is that the spectrum of the
chiral gravitational waves sourced by the scalar-gauge field interplay can be
enhanced due to the non-minimal coupling with gravity, leading to measurable
signatures, while maintaining agreement with current observational constraints
on $n_s$ and $r$. The inclusion of non-minimal coupling could be helpful to
alleviate tensions with non-Gaussianity bounds in models including axial
couplings.
| [
"astro-ph.CO",
"gr-qc",
"hep-ph"
] | astro-ph.CO | gr-qc | Cosmology and Nongalactic Astrophysics;General Relativity and Quantum Cosmology;High Energy Physics - Phenomenology | 1,746Cosmology and Nongalactic Astrophysics;General Relativity and Quantum Cosmology;High Energy Physics - Phenomenology
|
astro-ph/0311446 | We have analyzed two long BeppoSAX observations of the bright Seyfert galaxy
NGC 4151, searching for short timescale (10-200 ksec) X-ray spectral
variability. The light curve of a softness ratio, chosen as most sensitive to
pinpoint changes of the column density of the absorbing gas along the line of
sight, shows significant variations. We try to model these variations by
performing a detailed, time resolved, spectral analysis. We find significant,
large (factors of 1.5-6) variations of the absorber column densities on time
scales of 40-200 ksec. These values are 10-100 times shorter than those found
by Risaliti et al. 2002 in a sample of Seyfert 2 galaxies, and provide strong
constraints on the geometry of the obscuring medium.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1902.06746 | The asteroseismic modelling of period spacing patterns from gravito-inertial
modes in stars with a convective core is a high-dimensional problem. We utilise
the measured period spacing pattern of prograde dipole gravity modes (acquiring
$\Pi_0$), in combination with the effective temperature ($T_{\rm eff}$) and
surface gravity ($\log g$) derived from spectroscopy, to estimate the
fundamental stellar parameters and core properties of 37 $\gamma~$Doradus
($\gamma~$Dor) stars whose rotation frequency has been derived from
$\textit{Kepler}$ photometry. We make use of two 6D grids of stellar models,
one with step core overshooting and one with exponential core overshooting, to
evaluate correlations between the three observables $\Pi_0$, $T_{\rm eff}$, and
$\log g$ and the mass, age, core overshooting, metallicity, initial hydrogen
mass fraction and envelope mixing. We provide multivariate linear model recipes
relating the stellar parameters to be estimated to the three observables
($\Pi_0$, $T_{\rm eff}$, $\log g$). We estimate the (core) mass, age, core
overshooting and metallicity of $\gamma~$Dor stars from an ensemble analysis
and achieve relative uncertainties of $\sim\!10$ per cent for the parameters.
The asteroseismic age determination allows us to conclude that efficient
angular momentum transport occurs already early on during the main sequence. We
find that the nine stars with observed Rossby modes occur across almost the
entire main-sequence phase, except close to core-hydrogen exhaustion. Future
improvements of our work will come from the inclusion of more types of detected
modes per star, larger samples, and modelling of individual mode frequencies.
| [
"astro-ph.SR"
] | astro-ph.SR | Solar and Stellar Astrophysics | 6,668Solar and Stellar Astrophysics
|
|
2011.12145 | The phenomenon of life is discussed within a framework of its origin as
defined by four hypotheses. The first hypothesis says: Life, as we know it, is
(H-C-N-O) based and relies on a number of bulk (Na-Mg-P-S-Cl-K-Ca) and trace
elements (Cr-Mn-Fe-Co-Ni-Cu-Zn-Se-Mo-I-W, and possibly Li-B-F-Si-V-As). It
originated when the element abundance curves of living matter and of the
Universe coincided. The second hypothesis is: Life originated in an interstellar
molecular cloud, with dust particles playing a critical role. The third
hypothesis arises from the first and states: Because of the ageing of the
Universe, life originated only once. The dust forming planetary systems and
stars already contained an excess of L-type amino acids and D-type sugars;
therefore, the emerging life on any planet had to be chiral. Consequently, the
fourth hypothesis has been formed: Chirality is a sine qua non condition for the
emergence of life. The arguments supporting these hypotheses are put forward
based on numerous astrophysical observations and the laws of physics.
| [
"astro-ph.GA",
"physics.chem-ph"
] | astro-ph.GA | physics.chem-ph | Astrophysics of Galaxies;Chemical Physics | 468Astrophysics of Galaxies;Chemical Physics
|
1807.09585 | We have calculated a quantitative measure of information of experimentally
determined temporal dominance of sensations (TDS) frequencies of texture
attributes, for a set of diverse samples throughout the mastication cycle. The
samples were emulsion filled gels, two-layered emulsion filled gels, and
sausages. For the majority of the samples we find one master curve, where
swallowing takes place after the information increases from its minimum. The
master curve may indicate a simplifying principle during mastication and
subsequent swallowing. We have also calculated a particular complexity measure.
This measure displays an increase just before swallowing.
| [
"eess.SP"
] | eess.SP | Signal Processing | 6,402Signal Processing
|
|
2305.16145 | Many recent works have turned to multi-agent reinforcement learning (MARL)
for adaptive traffic signal control to optimize the travel time of vehicles
over large urban networks. However, achieving effective and scalable
cooperation among junctions (agents) remains an open challenge, as existing
methods often rely on extensive, non-generalizable reward shaping or on
non-scalable centralized learning. To address these problems, we propose a new
MARL method for traffic signal control, SocialLight, which learns cooperative
traffic control policies by estimating, in a distributed manner, the individual
marginal contribution of agents within their local neighborhood. SocialLight
relies on the
Asynchronous Actor Critic (A3C) framework, and makes learning scalable by
learning a locally-centralized critic conditioned over the states and actions
of neighboring agents, used by agents to estimate individual contributions by
counterfactual reasoning. We further introduce important modifications to the
advantage calculation that help stabilize policy updates. These modifications
decouple the impact of the neighbors' actions on the computed advantages,
thereby reducing the variance in the gradient updates. We benchmark our trained
network against state-of-the-art traffic signal control methods on standard
benchmarks in two traffic simulators, SUMO and CityFlow. Our results show that
SocialLight exhibits improved scalability to larger road networks and better
performance across usual traffic metrics.
| [
"cs.LG"
] | cs.LG | Machine Learning | 3,882Machine Learning
|
|
2308.13021 | PPE (Personal Protective Equipment) has allowed firefighters to perform their
everyday tasks without getting harmed since the mid-1800s. Now, the advancement
of technology has given rise to improvements in PPE. PPE can now include
sensors to detect any number of environmental hazards (chemical, biological,
temperature, etc.). As the GT class of CS3750, we have decided to create a
version of an interface design sensor that will help firefighters in two ways:
navigation and communication. In order to augment a firefighter's display when
they are within a building, we chose to augment their SCBA (self-contained
breathing apparatus). The gas mask will include a small screen that displays
vital information directly to the firefighter without the need for any other
support. We used the Google Glass to display vital information directly to
the eye in a minimalistic manner, while also augmenting that by adding LED
lights to simulate someone calling their name or other auditory signals. While
our prototype focuses on two main components of a firefighter's search and
rescue in a building, both of them combine to augment a firefighter's display
when searching throughout a building, helping to improve accuracy, speed, and
the overall experience.
| [
"cs.HC"
] | cs.HC | Human-Computer Interaction | 3,474Human-Computer Interaction
|
|
1101.4319 | P Cygni is a prototype of the Luminous Blue Variables (or S Doradus
variables), and the star displays photometric and emission line variability on
a timescale of years (known as the "short S Doradus phase" variations). Here we
present new high resolution H-alpha spectroscopy of P Cyg that we combine with
earlier spectra and concurrent V-band photometry to document the emission and
continuum flux variations over a 24 y time span. We show that the emission and
continuum fluxes vary in concert on timescales of 1.6 y and longer, but differ
on shorter timescales. The H-alpha profile shape also varies on the photometric
timescales, and we describe the observed co-variations of the emission peak and
absorption trough properties. We argue that the episodes of photometric and
emission brightening are caused by increases in the size of the emission region
that are related to variations in wind mass loss rate and outflow speed. We
find evidence of blueward accelerating, Discrete Absorption Components (DACs)
in the absorption trough of the H-alpha profile, and these features have slower
accelerations and longer durations than those observed in other lines. The DAC
strengths also appear to vary on the photometric timescales, and we suggest
that the propagation of the DAC-related wind structures is closely related to
changes in the overall wind mass loss rate and velocity.
| [
"astro-ph.SR"
] | astro-ph.SR | Solar and Stellar Astrophysics | 6,668Solar and Stellar Astrophysics
|
|
1204.4735 | The "textbook" phonon mean free path (MFP) of heat carrying phonons in
silicon at room temperature is ~40 nm. However, a large contribution to the
thermal conductivity comes from low-frequency phonons with much longer MFPs. We
present a simple experiment demonstrating that room temperature thermal
transport in Si significantly deviates from the diffusion model already at
micron distances. Absorption of crossed laser pulses in a freestanding silicon
membrane sets up a sinusoidal temperature profile that is monitored via
diffraction of a probe laser beam. By changing the period of the thermal
grating we vary the heat transport distance within the range ~1-10 {\mu}m. At
small distances, we observe a reduction in the effective thermal conductivity
indicating a transition from the diffusive to the ballistic transport regime
for the low-frequency part of the phonon spectrum.
| [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] | cond-mat.mtrl-sci | cond-mat.mes-hall | Materials Science;Mesoscale and Nanoscale Physics | 4,330Materials Science;Mesoscale and Nanoscale Physics
|
1807.02814 | Errors-in-variables is a long-standing, difficult issue in linear regression,
and progress depends in part on new identifying assumptions. I characterize
measurement error as bad-leverage points and assume that fewer than half the
sample observations are heavily contaminated, in which case a high-breakdown
robust estimator may be able to isolate and down-weight or discard the
problematic data. In simulations of simple and multiple regression where eiv
affects 25% of the data and R-squared is mediocre, certain high-breakdown
estimators have small bias and reliable confidence intervals.
| [
"econ.EM",
"stat.AP",
"stat.ME"
] | econ.EM | stat.AP | Econometrics;Applications;Methodology | 2,400Econometrics;Applications;Methodology
|
2204.08203 | We give a discussion of the classical Bowen–Series coding
and, in particular, its application to the study of zeta functions and their
zeros. In the case of compact surfaces of constant negative curvature $-1$ the
analytic extension of the Selberg zeta function to the entire complex plane is
classical, and can be achieved using the Selberg trace formula. However, an
alternative dynamical approach is to use the Bowen–Series
coding on the boundary at infinity to obtain a piecewise analytic expanding map
from which the extension of the zeta function can be obtained using properties
of the associated transfer operator. This latter method has the advantage that
it also applies in the case of infinite area surfaces provided they do not have
cusps. For such examples the location of the zeros is somewhat more mysterious.
However, in particularly simple examples there is a striking structure to the
zeros when we take appropriate limits. We will try to give some insight into
this phenomenon.
The survey is based on lectures given by the first author during the Workshop
on Statistical Properties of Nonequilibrium Dynamical Systems which took place
in July 2016 at South University of Science and Technology of China in
ShenZhen.
| [
"math.DS",
"math.SP"
] | math.DS | math.SP | Dynamical Systems;Spectral Theory | 2,342Dynamical Systems;Spectral Theory
|
1406.2352 | In Efroimsky & Makarov (2014), we derived from first principles a formula
for the tidal heating rate in a tidally perturbed homogeneous sphere. We
compared it with the formulae used in the literature, and pointed out the
differences. Using this result, we now present three case studies - Mercury,
Kepler-10b, and a triaxial Io. A very sharp frequency-dependence of k2/Q near
spin-orbit resonances yields a similarly sharp dependence of k2/Q on the spin
rate. This indicates that physical libration may play a major role in tidal
heating of synchronously rotating bodies. The magnitude of libration in the
spin rate being defined by the planet's triaxiality, the latter should be a
factor determining the dissipation rate. Other parameters equal, a
synchronously rotating body with a stronger triaxiality should generate more
heat than a similar body of a more symmetrical shape. Further in the paper, we
discuss scenarios where initially triaxial objects melt and lose their
triaxiality. Thereafter, dissipation in them becomes less intensive; so the
bodies freeze. The tidal bulge becomes a new permanent figure, with a new
triaxiality lower than the original. In the paper, we also derive simplified,
approximate expressions for dissipation rate in a rocky planet of the Maxwell
rheology, with a not too small Maxwell time. The three expressions derived
pertain to the cases of a synchronous spin, a 3:2 resonance, and a nonresonant
rotation; so they can be applied to most close-in super-Earth exoplanets
detected thus far. In such bodies, the rate of tidal heating outside of
synchronous rotation is weakly dependent on the eccentricity and obliquity,
provided both these parameters are small or moderate. According to our
calculation, Kepler-10b could hardly survive the great amount of tidal heating
without being synchronised, circularised and also reshaped through a complete
or partial melt-down.
| [
"astro-ph.EP",
"physics.geo-ph"
] | astro-ph.EP | physics.geo-ph | Earth and Planetary Astrophysics;Geophysics | 2,372Earth and Planetary Astrophysics;Geophysics
|
astro-ph/0404003 | We present velocity dispersion measurements of 14 globular clusters in NGC
5128 (Centaurus A) obtained with the MIKE echelle spectrograph on the 6.5m
Magellan Clay telescope. These clusters are among the most luminous globular
clusters in NGC 5128 and have velocity dispersions comparable to the most
massive clusters known in the Local Group, ranging from 10 - 30 km/s. We
describe in detail our cross-correlation measurements, as well as simulations
to quantify the uncertainties. These 14 globular clusters are the brightest NGC
5128 globular clusters with surface photometry and structural parameters
measured from the Hubble Space Telescope. We have used these measurements to
derive masses and mass-to-light ratios for all of these clusters and establish
that the fundamental plane relations for globular clusters extend to an order
of magnitude higher mass than in the Local Group. The mean mass-to-light ratio
for the NGC 5128 clusters is ~3+/-1, higher than measurements for all but the
most massive Local Group clusters. These massive clusters begin to bridge the
mass gap between the most massive star clusters and the lowest-mass galaxies.
We find that the properties of NGC 5128 globular clusters overlap quite well
with the central properties of nucleated dwarf galaxies and ultracompact dwarf
galaxies. As six of these clusters also show evidence for extratidal light, we
hypothesize that at least some of these massive clusters are the nuclei of
tidally stripped dwarfs.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1910.14294 | We study the synthesis problem for systems with a parameterized number of
processes. As in the classical case due to Church, the system selects actions
depending on the program run so far, with the aim of fulfilling a given
specification. The difficulty is that, at the same time, the environment
executes actions that the system cannot control. In contrast to the case of
fixed, finite alphabets, here we consider the case of parameterized alphabets.
An alphabet reflects the number of processes that are static but unknown. The
synthesis problem then asks whether there is a finite number of processes for
which the system can satisfy the specification. This variant is already
undecidable for very limited logics. Therefore, we consider a first-order logic
without the order on word positions. We show that even in this restricted case
synthesis is undecidable if both the system and the environment have access to
all processes. On the other hand, we prove that the problem is decidable if the
environment only has access to a bounded number of processes. In that case,
there is even a cutoff meaning that it is enough to examine a bounded number of
process architectures to solve the synthesis problem.
| [
"cs.LO",
"cs.FL"
] | cs.LO | cs.FL | Logic in Computer Science;Formal Languages and Automata Theory | 3,827Logic in Computer Science;Formal Languages and Automata Theory
|
2112.00383 | The orbital multiplicity in multiband superconductors yields orbital
differentiation in normal-state properties, and can lead to orbital-selective
spin-fluctuation Cooper pairing. This phenomenon has become increasingly
pivotal in clarifying the pairing 'enigma' particularly for multiband
high-temperature superconductors. In one-unit-cell (1-UC) FeSe/SrTiO3, the
thinnest and highest-Tc member of iron-based superconductors, the standard
electron-hole Fermi pocket nesting scenario is apparently not applicable since
the Gamma-centered hole pockets are absent, so the actual pairing mechanism is
the subject of intense debate. Here, by measuring high-resolution Bogoliubov
quasiparticle interference, we report observations of highly anisotropic
magnetic Cooper pairing in 1-UC FeSe. From a theoretical point of view, it is
important to incorporate effects of electronic correlations within a
spin-fluctuation pairing calculation, where the dxy orbital becomes
coherence-suppressed. The resulting pairing gap is compatible with the
experimental findings, which suggests that high-Tc Cooper pairing with orbital
selectivity applies to 1-UC FeSe. Our findings imply the general existence of
orbital selectivity in iron-based superconductors and the universal importance
of electron correlations in high-Tc superconductors.
| [
"cond-mat.supr-con",
"cond-mat.mes-hall",
"cond-mat.mtrl-sci",
"cond-mat.str-el"
] | cond-mat.supr-con | cond-mat.mes-hall | Superconductivity;Mesoscale and Nanoscale Physics;Materials Science;Strongly Correlated Electrons | 7,086Superconductivity;Mesoscale and Nanoscale Physics;Materials Science;Strongly Correlated Electrons
|
astro-ph/0609385 | We present a Chandra-LETGS observation of the Seyfert 1 galaxy Mrk 279. This
observation was carried out simultaneously with HST-STIS and FUSE, in the
context of a multiwavelength study of this source. The Chandra pointings were
spread over ten days for a total exposure time of ~360 ks. The spectrum of
Mrk 279 shows evidence of broad emission features, especially at the wavelength
of the OVII triplet. We quantitatively explore the possibility that this
emission is produced in the broad line region (BLR). We modeled the broad UV
emission lines seen in the FUSE and HST-STIS spectra following the "locally
optimally emitting cloud" approach. We find that the X-ray line luminosities
derived from the best-fit BLR model can match the X-ray features, suggesting
that the gas producing the UV lines is sufficient to account also for the X-ray
emission. The spectrum is absorbed by ionized gas whose total column density is
~5x10^{20} cm^{-2}. The absorption spectrum can be modeled by two distinct gas
components (log xi ~ 0.47 and 2.49, respectively) both showing a significant
outflow velocity. However, the data allow also the presence of intermediate
ionization components. The distribution of the column densities of such extra
components as a function of the ionization parameter is not consistent with a
continuous, power law-like, absorber, suggesting a complex structure for the
gas outflow for Mrk 279 (abridged).
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
hep-ph/0411090 | It is shown how grand unification can occur in models which are partly
supersymmetric. The particle states which are composite do not contribute to
the running of gauge couplings above the compositeness scale, while the
elementary states contribute the usual large logarithmns. This introduces a new
differential running contribution to the gauge couplings from partly composite
SU(5) matter multiplets. In particular, for partly supersymmetric models, the
incomplete SU(5) elementary matter multiplets restore gauge coupling
unification even though the usual elementary gaugino and Higgsino contributions
need not be present.
| [
"hep-ph",
"hep-th"
] | hep-ph | hep-th | High Energy Physics - Phenomenology;High Energy Physics - Theory | 3,223High Energy Physics - Phenomenology;High Energy Physics - Theory
|
1703.07826 | We construct labeling homomorphisms on the cubical homology of
higher-dimensional automata and show that they are natural with respect to
cubical dimaps and compatible with the tensor product of HDAs. We also indicate
two possible applications of labeled homology in concurrency theory.
| [
"math.AT",
"cs.FL"
] | math.AT | cs.FL | Algebraic Topology;Formal Languages and Automata Theory | 7,267longtail
|
1708.06338 | A stress is applied at the flat face and the apex of a prismatic
piezoelectric crystal. The voltage generated at these points differs in order
of magnitude. The result may be used to nondestructively test the uniformity of
surfaces of piezoelectric crystals.
| [
"physics.ins-det"
] | physics.ins-det | Instrumentation and Detectors | 3,624Instrumentation and Detectors
|
|
1908.04925 | Let $D$ be a division ring with center $F$, and $G$ an almost subnormal
subgroup of $D^*$. In this paper, we show that if $G$ contains a non-abelian
locally solvable maximal subgroup, then $D$ must be a cyclic algebra of prime
degree over $F$. Moreover, it is proved that every locally nilpotent maximal
subgroup of $G$ is abelian.
| [
"math.RA",
"math.GR"
] | math.RA | math.GR | Rings and Algebras;Group Theory | 6,294Rings and Algebras;Group Theory
|
1103.5934 | Epileptic seizures are one of the most well-known dysfunctions of the nervous
system. During a seizure, a highly synchronized behavior of neural activity is
observed that can cause symptoms ranging from mild sensual malfunctions to the
complete loss of body control. In this paper, we aim to contribute towards a
better understanding of the dynamical systems phenomena that cause seizures.
Based on data analysis and modelling, seizure dynamics can be identified as
possessing multiple spatial scales and, on each spatial scale, also multiple time
scales. At each scale, we reach several novel insights. On the smallest spatial
scale we consider single model neurons and investigate early-warning signs of
spiking. This introduces the theory of critical transitions to excitable
systems. For clusters of neurons (or neuronal regions) we use patient data and
find oscillatory behavior and new scaling laws near the seizure onset. These
scalings substantiate the conjecture, obtained from mean-field models,
that a Hopf bifurcation could be involved near seizure onset. On the largest
spatial scale we introduce a measure based on phase-locking intervals and
wavelets into seizure modelling. It is used to resolve synchronization between
different regions in the brain and identifies time-shifted scaling laws at
different wavelet scales. We also compare our wavelet-based multiscale approach
with maximum linear cross-correlation and mean-phase coherence measures.
| [
"q-bio.NC",
"math.DS",
"nlin.CD",
"nlin.PS",
"physics.med-ph"
] | q-bio.NC | math.DS | Neurons and Cognition;Dynamical Systems;Chaotic Dynamics;Pattern Formation and Solitons;Medical Physics | 7,267longtail
|
nlin/0310009 | We study a d-dimensional coupled map lattice consisting of hyperbolic toral
automorphisms (Arnold cat maps) that are weakly coupled by an analytic coupling
map. We construct the Sinai-Ruelle-Bowen measure for this system and study its
marginals on the tori. We prove they are absolutely continuous with respect to
the Lebesgue measure if and only if the coupling satisfies a nondegeneracy
condition.
| [
"nlin.CD"
] | nlin.CD | Chaotic Dynamics | 810Chaotic Dynamics
|
|
1905.00976 | Deep reinforcement learning algorithms have been successfully applied to a
range of challenging control tasks. However, these methods typically struggle
with achieving effective exploration and are extremely sensitive to the choice
of hyperparameters. One reason is that most approaches use a noisy version of
their operating policy to explore - thereby limiting the range of exploration.
In this paper, we introduce Collaborative Evolutionary Reinforcement Learning
(CERL), a scalable framework that comprises a portfolio of policies that
simultaneously explore and exploit diverse regions of the solution space. A
collection of learners - typically proven algorithms like TD3 - optimize over
varying time-horizons leading to this diverse portfolio. All learners
contribute to and use a shared replay buffer to achieve greater sample
efficiency. Computational resources are dynamically distributed to favor the
best learners as a form of online algorithm selection. Neuroevolution binds
this entire process to generate a single emergent learner that exceeds the
capabilities of any individual learner. Experiments in a range of continuous
control benchmarks demonstrate that the emergent learner significantly
outperforms its composite learners while remaining overall more
sample-efficient - notably solving the Mujoco Humanoid benchmark where all of
its composite learners (TD3) fail entirely in isolation.
| [
"cs.LG",
"cs.AI",
"stat.ML"
] | cs.LG | cs.AI | Machine Learning;Artificial Intelligence;Machine Learning | 3,951Machine Learning;Artificial Intelligence;Machine Learning
|
0811.3386 | Nonrenormalizable scalar fields, such as \varphi^4_n, n\ge5, require
infinitely many distinct counter terms when perturbed about the free theory,
and lead to free theories when defined as the continuum limit of a lattice
regularized theory restricted only to arbitrary mass and coupling constant
renormalization. Based on the proposal that functional integrals for
interacting nonrenormalizable models do not reduce to the expression for the
free field functional integral as the coupling constant vanishes -- a proposal
supported by the fact that even the set of classical solutions for such models
does not reduce to the set of free field solutions as the coupling constant
vanishes -- it has been conjectured that for nonrenormalizable models the
interaction term acts partially as a hard core eliminating certain fields
otherwise allowed by the free theory. As a consequence, interacting models are
continuously connected to a pseudofree theory that takes into account the hard
core as the coupling constant vanishes, and this general view is supported not
only by simple quantum mechanical examples but also by soluble, albeit
nonrelativistic, nonrenormalizable models. The present article proposes a
pseudofree model for relativistic nonrenormalizable models about which it is
argued that a perturbation expansion of the interaction is term-by-term
divergence free.
| [
"hep-th"
] | hep-th | High Energy Physics - Theory | 3,266High Energy Physics - Theory
|
|
2001.10942 | Matrix weighted rational B\'{e}zier curves can represent complex curve shapes
using small numbers of control points and clear geometric definitions of matrix
weights. Explicit formulae are derived to convert matrix weighted rational
B\'{e}zier curves in 2D or 3D space to rational B\'{e}zier curves. A method for
computing the convex hulls of matrix weighted rational B\'{e}zier curves is
given as a conjecture.
| [
"math.NA",
"cs.NA"
] | math.NA | cs.NA | Numerical Analysis;Numerical Analysis | 5,059Numerical Analysis;Numerical Analysis
|
0803.1329 | We present results from a time dependent gas phase chemical model of a hot
core based on the physical conditions of G305.2+0.2. While the cyanopolyyne
HC_3N has been observed in hot cores, the longer chained species, HC_5N, HC_7N,
and HC_9N have not been considered typical hot core species. We present results
which show that these species can be formed under hot core conditions. We
discuss the important chemical reactions in this process and, in particular,
show that their abundances are linked to the parent species acetylene which is
evaporated from icy grain mantles. The cyanopolyynes show promise as `chemical
clocks' which may aid future observations in determining the age of hot core
sources. The abundances of the larger cyanopolyynes increase and decrease over
relatively short time scales, ~10^2.5 years. We also discuss several sulphur
bearing species. We present results from a non-LTE statistical equilibrium
excitation model as a series of density, temperature and column density
dependent contour plots which show both the line intensities and several line
ratios. These aid in the interpretation of spectral line data, even when there
is limited line information available.
| [
"astro-ph"
] | astro-ph | Astrophysics | 463Astrophysics
|
|
1606.05345 | I examine differences in non-linear structure formation between cosmological
models that share a $z=0$ linear power spectrum in both shape and amplitude,
but that differ via their growth history. $N$-body simulations of these models
display an approximately identical large-scale-structure skeleton, but reveal
deeply non-linear differences in the demographics and properties of haloes. I
investigate to what extent the spherical-collapse model can help in
understanding these differences, in both real and redshift space. I discuss how
this is difficult to do if one attempts to identify haloes directly, because in
that case one is subject to the vagaries of halo finding algorithms. However, I
demonstrate that the halo model of structure formation provides an accurate
non-linear response in the power spectrum, but only if results from spherical
collapse that include formation hysteresis are properly incorporated. I comment
on how this fact can be used to provide per cent level accurate matter power
spectrum predictions for dark energy models for $k\leq5\,h\mathrm{Mpc}^{-1}$ by
using the halo model as a correction to accurate $\Lambda$CDM simulations. In
the appendix I provide some fitting functions for the linear-collapse threshold
($\delta_\mathrm{c}$) and virialized overdensity ($\Delta_\mathrm{v}$) that are
valid for a wide range of dark energy models. I also make my spherical-collapse
code available at https://github.com/alexander-mead/collapse
| [
"astro-ph.CO"
] | astro-ph.CO | Cosmology and Nongalactic Astrophysics | 1,725Cosmology and Nongalactic Astrophysics
|
|
hep-ph/0611012 | We study an example of a Grand Unified Theory (GUT), known as trinification,
which was first introduced in 1984 by S. Glashow. This model has the GUT gauge
group as $[SU(3)]^3$ with a discrete $\mathbb{Z}_3$ to ensure the couplings are
unified at the GUT scale. In this letter we consider this trinification model
in its minimal formulation and investigate its robustness in the context of
cosmology. In particular we show that for a large set of the parameter space
the model doesn't seem to provide a Dark Matter candidate compatible with
cosmological data.
| [
"hep-ph"
] | hep-ph | High Energy Physics - Phenomenology | 3,129High Energy Physics - Phenomenology
|
|
0810.1354 | Theorem 1: Let $F:\mathbb{N}\to\mathbb{R}$ stand for any function such that a)
$F$ is monotonically weakly increasing; b) $F$ tends to infinity; and c) $q/F(q)$
tends to infinity.
  Let $Z_F(q)$ equal the number of divisors of $q$ less than $\sqrt{F(q)}$ minus
the number of divisors of $q$ between $\sqrt{F(q)}$ and $F(q)$.
  Then, on the average, $Z_F(q)$ equals Euler's constant.
  Theorem 2: Fix $a \in (0,1)$. Write $A$ for the average number of divisors of $n$
that lie in $(0,\sqrt{an})$ minus the number that lie in $(\sqrt{an}, an)$.
  Then $A = \left(\sum_{i=1}^{\lceil (1-a)/a \rceil} \frac{1}{i}\right) - \ln(1/a)$.
| [
"math.NT"
] | math.NT | Number Theory | 4,945Number Theory
|
|
1405.4463 | Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges.
| [
"cs.NI",
"cs.LG"
] | cs.NI | cs.LG | Networking and Internet Architecture;Machine Learning | 4,736Networking and Internet Architecture;Machine Learning
|
1109.3511 | In supernova cores and neutron star crusts, nuclei with exotic shapes such as
rod-like and slab-like nuclei are expected to exist. These nuclei are
collectively called nuclear "pasta". For the past decades, existence of the
pasta phases in the equilibrium state has been studied using various methods.
Recently, the formation process of the pasta phases, which has been a
long-standing problem, has been unveiled using molecular dynamics simulations.
In this review, we first provide the astrophysical background of supernovae and
neutron stars and overview the history of the study of the pasta phases. We
then focus on the recent study on the formation process of the pasta phases.
Finally, we discuss future important issues related to the pasta phases: their
astrophysical evidence and consequences.
| [
"nucl-th",
"astro-ph.SR",
"cond-mat.other"
] | nucl-th | astro-ph.SR | Nuclear Theory;Solar and Stellar Astrophysics;Other Condensed Matter | 7,267longtail
|
cond-mat/0405569 | The aim of this paper is two-fold. First, via a phenomenological
consideration, I show that, alongside the conventional phases (body-centred
cubic, hexagonal planar, and lamellar), such non-conventional phases as simple
cubic, face-centered cubic, the well-known double gyroid, as well as some other
phases could be stable in the vicinity of the critical point in systems
undergoing the order-disorder and order-order transition. A general phase
diagram indicating the strength of the so-called angle dependence of the fourth
vertex necessary for the existence of these non-conventional phases is presented.
Next, I demonstrate via a direct Leibler-like microscopic consideration of the
ternary ABC block and graft copolymers that these real systems do reveal these
nonconventional phases even close to the critical point. In particular, the
ternary ABC block copolymers with a long middle block non-selective with
respect to both side blocks are especially inclined to form the gyroid phase. A
new cubic non-centrosymmetric phase and some other cubic phases are also first
predicted to exist as the most stable low temperature phase instead of the
lamellar one. Such a phase behavior is suggested to be common for a new class
of materials we propose to call amphiphobic since their (macro)molecules
consist of at least three mutually incompatible types of monomers.
| [
"cond-mat.soft"
] | cond-mat.soft | Soft Condensed Matter | 6,537Soft Condensed Matter
|
|
1112.2272 | We observe that the dominant one loop contribution to the graviton propagator
in the theory of N (N>>1) light scalar fields \phi_a (with masses smaller than
M_{pl}/\sqrt{N}) minimally coupled to Einstein gravity is proportional to N
while that of graviton-scalar-scalar interaction vertex is N independent. We
use this to argue that the coefficient of the R\phi_a^2 term appearing at one
loop level is 1/N suppressed. This observation provides a resolution to the
\eta-problem, that the slow-roll parameter \eta receives order one quantum loop
corrections for inflationary models built within the framework of scalar fields
minimally coupled to Einstein gravity, for models involving large number of
fields. As particular examples, we employ this to argue in favor of the absence
of \eta-problem in M-flation and N-flation scenarios.
| [
"hep-th",
"astro-ph.CO",
"gr-qc",
"hep-ph"
] | hep-th | astro-ph.CO | High Energy Physics - Theory;Cosmology and Nongalactic Astrophysics;General Relativity and Quantum Cosmology;High Energy Physics - Phenomenology | 3,307High Energy Physics - Theory;Cosmology and Nongalactic Astrophysics;General Relativity and Quantum Cosmology;High Energy Physics - Phenomenology
|
2302.02089 | Contrastive Learning and Masked Image Modelling have demonstrated exceptional
performance on self-supervised representation learning, where Momentum Contrast
(i.e., MoCo) and Masked AutoEncoder (i.e., MAE) are the state-of-the-art,
respectively. In this work, we propose MOMA to distill from pre-trained MoCo
and MAE in a self-supervised manner to collaborate the knowledge from both
paradigms. We introduce three different mechanisms of knowledge transfer in the
propsoed MOMA framework. : (1) Distill pre-trained MoCo to MAE. (2) Distill
pre-trained MAE to MoCo (3) Distill pre-trained MoCo and MAE to a random
initialized student. During the distillation, the teacher and the student are
fed with original inputs and masked inputs, respectively. The learning is
enabled by aligning the normalized representations from the teacher and the
projected representations from the student. This simple design leads to
efficient computation with extremely high mask ratio and dramatically reduced
training epochs, and does not require extra considerations on the distillation
target. The experiments show MOMA delivers compact student models with
comparable performance to existing state-of-the-art methods, combining the
power of both self-supervised learning paradigms. It presents competitive
results against different benchmarks in computer vision. We hope our method
provides an insight on transferring and adapting the knowledge from large-scale
pre-trained models in a computationally efficient way.
| [
"cs.CV",
"cs.AI",
"cs.LG"
] | cs.CV | cs.AI | Computer Vision and Pattern Recognition;Artificial Intelligence;Machine Learning | 1,521Computer Vision and Pattern Recognition;Artificial Intelligence;Machine Learning
|
cond-mat/0501173 | Resonant x-ray reflectivity of the surface of the liquid phase of the
Bi$_{43}$Sn$_{57}$ eutectic alloy reveals atomic-scale demixing extending over
three near-surface atomic layers. Due to the absence of an underlying atomic
lattice which typically defines adsorption in crystalline alloys, studies of
adsorption in liquid alloys provide unique insight on interatomic interactions
at the surface. The observed composition modulation could be accounted for
quantitatively by the Defay-Prigogine and Strohl-King multilayer extensions of
the single-layer Gibbs model, revealing a near-surface domination of the
attractive Bi-Sn interaction over the entropy.
| [
"cond-mat.stat-mech",
"cond-mat.dis-nn",
"cond-mat.mtrl-sci"
] | cond-mat.stat-mech | cond-mat.dis-nn | Statistical Mechanics;Disordered Systems and Neural Networks;Materials Science | 6,869Statistical Mechanics;Disordered Systems and Neural Networks;Materials Science
|
1910.08248 | In this paper, we evaluate and compare the performance of two approaches,
namely self-stabilization and rollback, to handling consistency violation
faults (cvf) that occur when a distributed program is executed on an eventually
consistent key-value store. We observe that self-stabilization is usually
better than rollbacks in our experiments. Moreover, when we aggressively allow
more cvf in exchange for eliminating mechanisms for guaranteeing the atomicity
requirements of actions, we observe that the programs in our case studies achieve a
speedup between 2--15 times compared with the standard implementation. We also
analyze different factors that contribute to the results. Our results and
analysis are useful in helping a system designer choose proper design options
for their program.
| [
"cs.DC"
] | cs.DC | Distributed, Parallel, and Cluster Computing | 2,194Distributed, Parallel, and Cluster Computing
|
|
1406.5493 | Network traffic modeling is a critical problem for urban applications, mainly
because of their diversity and node density. As wireless sensor networks are
closely involved in the development of smart cities, careful consideration of
the traffic model helps in choosing appropriate protocols and adapting network
parameters to reach the best energy-latency tradeoffs. In this paper, we
compare the performance of two off-the-shelf medium access control protocols on
two different kinds of traffic models, and then evaluate their application-end
information delay and energy consumption while varying traffic parameters and
network density. From the simulation results, we highlight some limits induced
by network density and the occurrence frequency of event-driven applications.
When it comes to real-time urban services, protocol selection should be taken
into account - even dynamically - with special attention to the energy-delay tradeoff.
To this end, we provide several insights on parking sensor networks.
| [
"cs.NI"
] | cs.NI | Networking and Internet Architecture | 4,711Networking and Internet Architecture
|