id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2102.11069 | Paul Viallard | Paul Viallard (LHC), Guillaume Vidot (IRIT-ARGOS), Amaury Habrard
(LHC), Emilie Morvant (LHC) | A PAC-Bayes Analysis of Adversarial Robustness | null | NeurIPS 2021, Dec 2021, Sydney, Australia | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the first general PAC-Bayesian generalization bounds for
adversarial robustness, which estimate, at test time, how much a model will be
invariant to imperceptible perturbations in the input. Instead of deriving a
worst-case analysis of the risk of a hypothesis over all the possible
perturbations, we leverage the PAC-Bayesian framework to bound the averaged
risk on the perturbations for majority votes (over the whole class of
hypotheses). Our theoretically founded analysis has the advantage of providing
general bounds (i) that are valid for any kind of attack (i.e., any adversarial
attack), (ii) that are tight thanks to the PAC-Bayesian framework, and
(iii) that can be directly minimized during the learning phase to obtain a
robust model against different attacks at test time.
| [
{
"created": "Fri, 19 Feb 2021 10:23:48 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Oct 2021 09:15:05 GMT",
"version": "v2"
}
] | 2021-10-28 | [
[
"Viallard",
"Paul",
"",
"LHC"
],
[
"Vidot",
"Guillaume",
"",
"IRIT-ARGOS"
],
[
"Habrard",
"Amaury",
"",
"LHC"
],
[
"Morvant",
"Emilie",
"",
"LHC"
]
] |
2102.11085 | Serkan Budak | Serkan Budak and Bahadir Akbal | Comparative Fault Location Estimation by Using Image Processing in Mixed
Transmission Lines | arXiv admin note: substantial text overlap with arXiv:2011.03238 | Konya Journal of Engineering Sciences v. 8, Special Issue, pp.
62-75, (2020) | 10.36306/konjes.821726 | null | eess.IV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The distance protection relays are used to determine the impedance based
fault location according to the current and voltage magnitudes in the
transmission lines. However, the fault location cannot be correctly detected in
mixed transmission lines due to different characteristic impedance per unit
length because the characteristic impedance of high voltage cable line is
significantly different from overhead line. Thus, determinations of the fault
section and location with the distance protection relays are difficult in the
mixed transmission lines. In this study, 154 kV overhead transmission line and
underground cable line are examined as the mixed transmission line for the
distance protection relays. Phase to ground faults are created in the mixed
transmission line. The overhead line section and underground cable section are
simulated by using PSCAD-EMTDC. The short circuit fault images are generated in
the distance protection relay for the overhead transmission line and
underground cable transmission line faults. The images include the R-X
impedance diagram of the fault, and the R-X impedance diagram has been
detected by applying image processing steps. Artificial neural network (ANN)
and the regression methods are used for prediction of the fault location, and
the results of image processing are used as the input parameters for the
training process of the ANN and the regression methods. The results of the ANN
and regression methods are compared at the end of this study to select the most
suitable method for forecasting the fault location in transmission lines.
| [
{
"created": "Mon, 22 Feb 2021 14:57:36 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Budak",
"Serkan",
""
],
[
"Akbal",
"Bahadir",
""
]
] |
2102.11137 | Yichen Yang | Yichen David Yang, Jeevana Priya Inala, Osbert Bastani, Yewen Pu,
Armando Solar-Lezama, Martin Rinard | Program Synthesis Guided Reinforcement Learning for Partially Observed
Environments | null | NeurIPS 2021 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key challenge for reinforcement learning is solving long-horizon planning
problems. Recent work has leveraged programs to guide reinforcement learning in
these settings. However, these approaches impose a high manual burden on the
user since they must provide a guiding program for every new task. Partially
observed environments further complicate the programming task because the
program must implement a strategy that correctly, and ideally optimally,
handles every possible configuration of the hidden regions of the environment.
We propose a new approach, model predictive program synthesis (MPPS), that uses
program synthesis to automatically generate the guiding programs. It trains a
generative model to predict the unobserved portions of the world, and then
synthesizes a program based on samples from this model in a way that is robust
to its uncertainty. In our experiments, we show that our approach significantly
outperforms non-program-guided approaches on a set of challenging benchmarks,
including a 2D Minecraft-inspired environment where the agent must complete a
complex sequence of subtasks to achieve its goal, and achieves a similar
performance as using handcrafted programs to guide the agent. Our results
demonstrate that our approach can obtain the benefits of program-guided
reinforcement learning without requiring the user to provide a new guiding
program for every new task.
| [
{
"created": "Mon, 22 Feb 2021 16:05:32 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Nov 2021 18:04:02 GMT",
"version": "v2"
}
] | 2021-11-03 | [
[
"Yang",
"Yichen David",
""
],
[
"Inala",
"Jeevana Priya",
""
],
[
"Bastani",
"Osbert",
""
],
[
"Pu",
"Yewen",
""
],
[
"Solar-Lezama",
"Armando",
""
],
[
"Rinard",
"Martin",
""
]
] |
2102.11271 | Denis Yarats | Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto | Reinforcement Learning with Prototypical Representations | null | ICML 2021 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning effective representations in image-based environments is crucial for
sample efficient Reinforcement Learning (RL). Unfortunately, in RL,
representation learning is confounded with the exploratory experience of the
agent -- learning a useful representation requires diverse data, while
effective exploration is only possible with coherent representations.
Furthermore, we would like to learn representations that not only generalize
across tasks but also accelerate downstream exploration for efficient
task-specific training. To address these challenges we propose Proto-RL, a
self-supervised framework that ties representation learning with exploration
through prototypical representations. These prototypes simultaneously serve as
a summarization of the exploratory experience of an agent as well as a basis
for representing observations. We pre-train these task-agnostic representations
and prototypes on environments without downstream task information. This
enables state-of-the-art downstream policy learning on a set of difficult
continuous control tasks.
| [
{
"created": "Mon, 22 Feb 2021 18:56:34 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Jul 2021 17:36:06 GMT",
"version": "v2"
}
] | 2021-07-21 | [
[
"Yarats",
"Denis",
""
],
[
"Fergus",
"Rob",
""
],
[
"Lazaric",
"Alessandro",
""
],
[
"Pinto",
"Lerrel",
""
]
] |
2102.11327 | Guy Tennenholtz | Guy Tennenholtz and Shie Mannor | Uncertainty Estimation Using Riemannian Model Dynamics for Offline
Reinforcement Learning | null | NeurIPS 2022 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model-based offline reinforcement learning approaches generally rely on
bounds of model error. Estimating these bounds is usually achieved through
uncertainty estimation methods. In this work, we combine parametric and
nonparametric methods for uncertainty estimation through a novel latent space
based metric. In particular, we build upon recent advances in Riemannian
geometry of generative models to construct a pullback metric of an
encoder-decoder based forward model. Our proposed metric measures both the
quality of out-of-distribution samples as well as the discrepancy of examples
in the data. We leverage our method for uncertainty estimation in a pessimistic
model-based framework, showing a significant improvement upon contemporary
model-based offline approaches on continuous control and autonomous driving
benchmarks.
| [
{
"created": "Mon, 22 Feb 2021 19:42:40 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Oct 2022 04:33:52 GMT",
"version": "v2"
}
] | 2022-11-07 | [
[
"Tennenholtz",
"Guy",
""
],
[
"Mannor",
"Shie",
""
]
] |
2102.11352 | Julie Jiang | Julie Jiang, Kristina Lerman, Emilio Ferrara | Individualized Context-Aware Tensor Factorization for Online Games
Predictions | null | 2020 International Conference on Data Mining Workshops (ICDMW) | 10.1109/ICDMW51313.2020.00048 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individual behavior and decisions are substantially influenced by their
contexts, such as location, environment, and time. Changes along these
dimensions can be readily observed in Multiplayer Online Battle Arena games
(MOBA), where players face different in-game settings for each match and are
subject to frequent game patches. Existing methods utilizing contextual
information generalize the effect of a context over the entire population, but
contextual information tailored to each individual can be more effective. To
achieve this, we present the Neural Individualized Context-aware Embeddings
(NICE) model for predicting user performance and game outcomes. Our proposed
method identifies individual behavioral differences in different contexts by
learning latent representations of users and contexts through non-negative
tensor factorization. Using a dataset from the MOBA game League of Legends, we
demonstrate that our model substantially improves the prediction of winning
outcome, individual user performance, and user engagement.
| [
{
"created": "Mon, 22 Feb 2021 20:46:02 GMT",
"version": "v1"
}
] | 2021-02-24 | [
[
"Jiang",
"Julie",
""
],
[
"Lerman",
"Kristina",
""
],
[
"Ferrara",
"Emilio",
""
]
] |
2102.11395 | Ghani Lawal Mr. | Ghani O. Lawal and Michael Greenspan | Procam Calibration from a Single Pose of a Planar Target | 11 pages, 9 figures, 10 tables. Submitted to the VISAPP Conference.
Stored in the SciTepress Digital Library:
https://www.scitepress.org/PublicationsDetail.aspx?ID=rGG70YCQyOs=&t=1 | In Proceedings of the 16th International Joint Conference on
Computer Vision, Imaging and Computer Graphics Theory and Applications
(VISIGRAPP 2021) - Volume 5: VISAPP, pages 817-827 | 10.5220/0010327708170827 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel user friendly method is proposed for calibrating a procam system from
a single pose of a planar chessboard target. The user simply needs to orient
the chessboard in a single appropriate pose. A sequence of Gray Code patterns
is projected onto the chessboard, which allows correspondences between the
camera, projector and the chessboard to be automatically extracted. These
correspondences are fed as input to a nonlinear optimization method that models
the projection of the principal points onto the chessboard, and accurately
calculates the intrinsic and extrinsic parameters of both the camera and the
projector, as well as the camera's distortion coefficients. The method is
experimentally validated on the procam system, which is shown to be comparable
in accuracy with existing multi-pose approaches. The impact of the orientation
of the chessboard with respect to the procam imaging planes is also explored
through extensive simulation.
| [
{
"created": "Mon, 22 Feb 2021 22:53:29 GMT",
"version": "v1"
}
] | 2021-02-24 | [
[
"Lawal",
"Ghani O.",
""
],
[
"Greenspan",
"Michael",
""
]
] |
2102.11480 | Mario Campos Soberanis | Rafael Viana-C\'amara, Diego Campos-Sobrino, Mario Campos-Soberanis | Evolutionary optimization of contexts for phonetic correction in speech
recognition systems | 13 pages, 4 figures, This article is a translation of the paper
"Optimizaci\'on evolutiva de contextos para la correcci\'on fon\'etica en
sistemas de reconocimiento del habla" presented in COMIA 2019 | Research in Computing Science Issue 148(8), 2019, pp. 293-306.
ISSN 1870-4069 | null | null | eess.AS cs.CL cs.SD | http://creativecommons.org/licenses/by/4.0/ | Automatic Speech Recognition (ASR) is an area of growing academic and
commercial interest due to the high demand for applications that use it to
provide a natural communication method. It is common for general purpose ASR
systems to fail in applications that use a domain-specific language. Various
strategies have been used to reduce the error, such as providing a context that
modifies the language model and post-processing correction methods. This
article explores the use of an evolutionary process to generate an optimized
context for a specific application domain, as well as different correction
techniques based on phonetic distance metrics. The results show the viability
of a genetic algorithm as a tool for context optimization, which, added to a
post-processing correction based on phonetic representations, can reduce the
errors on the recognized speech.
| [
{
"created": "Tue, 23 Feb 2021 04:14:51 GMT",
"version": "v1"
}
] | 2021-02-24 | [
[
"Viana-Cámara",
"Rafael",
""
],
[
"Campos-Sobrino",
"Diego",
""
],
[
"Campos-Soberanis",
"Mario",
""
]
] |
2102.11485 | Zeyu Sun | Zeyu Sun, Wenjie Zhang, Lili Mou, Qihao Zhu, Yingfei Xiong, Lu Zhang | Generalized Equivariance and Preferential Labeling for GNN Node
Classification | null | AAAI 2022 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Existing graph neural networks (GNNs) largely rely on node embeddings, which
represent a node as a vector by its identity, type, or content. However, graphs
with unattributed nodes widely exist in real-world applications (e.g.,
anonymized social networks). Previous GNNs either assign random labels to nodes
(which introduces artefacts to the GNN) or assign one embedding to all nodes
(which fails to explicitly distinguish one node from another). Further, when
these GNNs are applied to unattributed node classification problems, they have
an undesired equivariance property, which makes them fundamentally unable to
address data with multiple possible outputs. In this paper, we analyze the
limitation of existing approaches to node classification problems. Inspired by
our analysis, we propose a generalized equivariance property and a Preferential
Labeling technique that satisfies the desired property asymptotically.
Experimental results show that we achieve high performance in several
unattributed node classification tasks.
| [
{
"created": "Tue, 23 Feb 2021 04:30:35 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Dec 2021 10:21:04 GMT",
"version": "v2"
},
{
"created": "Sat, 26 Feb 2022 08:35:21 GMT",
"version": "v3"
}
] | 2022-03-01 | [
[
"Sun",
"Zeyu",
""
],
[
"Zhang",
"Wenjie",
""
],
[
"Mou",
"Lili",
""
],
[
"Zhu",
"Qihao",
""
],
[
"Xiong",
"Yingfei",
""
],
[
"Zhang",
"Lu",
""
]
] |
2102.11492 | Xianyuan Zhan | Xianyuan Zhan, Haoran Xu, Yue Zhang, Xiangyu Zhu, Honglei Yin, Yu
Zheng | DeepThermal: Combustion Optimization for Thermal Power Generating Units
Using Offline Reinforcement Learning | null | Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI2022) | null | null | cs.LG cs.AI cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimizing the combustion efficiency of a thermal power generating unit
(TPGU) is a highly challenging and critical task in the energy industry. We
develop a new data-driven AI system, namely DeepThermal, to optimize the
combustion control strategy for TPGUs. At its core is a new model-based
offline reinforcement learning (RL) framework, called MORE, which leverages
historical operational data of a TPGU to solve a highly complex constrained
Markov decision process problem via purely offline training. In DeepThermal, we
first learn a data-driven combustion process simulator from the offline
dataset. The RL agent of MORE is then trained by combining real historical data
as well as carefully filtered and processed simulation data through a novel
restrictive exploration scheme. DeepThermal has been successfully deployed in
four large coal-fired thermal power plants in China. Real-world experiments
show that DeepThermal effectively improves the combustion efficiency of TPGUs.
We also report the superior performance of MORE by comparing with the
state-of-the-art algorithms on the standard offline RL benchmarks.
| [
{
"created": "Tue, 23 Feb 2021 04:55:12 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 04:05:07 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Apr 2022 09:05:30 GMT",
"version": "v3"
}
] | 2022-04-06 | [
[
"Zhan",
"Xianyuan",
""
],
[
"Xu",
"Haoran",
""
],
[
"Zhang",
"Yue",
""
],
[
"Zhu",
"Xiangyu",
""
],
[
"Yin",
"Honglei",
""
],
[
"Zheng",
"Yu",
""
]
] |
2102.11506 | Sulabh Katiyar | Sulabh Katiyar, Samir Kumar Borgohain | Comparative evaluation of CNN architectures for Image Caption Generation | Article Published in International Journal of Advanced Computer
Science and Applications(IJACSA), Volume 11 Issue 12, 2020 | in International Journal of Advanced Computer Science and
Applications, 11(12), 2020 | 10.14569/IJACSA.2020.0111291 | null | cs.CV cs.AI cs.LG cs.MM cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Aided by recent advances in Deep Learning, Image Caption Generation has seen
tremendous progress over the last few years. Most methods use transfer learning
to extract visual information, in the form of image features, with the help of
pre-trained Convolutional Neural Network models followed by transformation of
the visual information using a Caption Generator module to generate the output
sentences. Different methods have used different Convolutional Neural Network
Architectures and, to the best of our knowledge, there is no systematic study
which compares the relative efficacy of different Convolutional Neural Network
architectures for extracting the visual information. In this work, we have
evaluated 17 different Convolutional Neural Networks on two popular Image
Caption Generation frameworks: the first based on Neural Image Caption (NIC)
generation model and the second based on Soft-Attention framework. We observe
that the model complexity of a Convolutional Neural Network, as measured by the
number of parameters, and its accuracy on the Object Recognition task do not
necessarily correlate with its efficacy in feature extraction for the Image
Caption Generation task.
| [
{
"created": "Tue, 23 Feb 2021 05:43:54 GMT",
"version": "v1"
}
] | 2021-02-24 | [
[
"Katiyar",
"Sulabh",
""
],
[
"Borgohain",
"Samir Kumar",
""
]
] |
2102.11531 | Ganesh Venkatesh | Ganesh Venkatesh, Alagappan Valliappan, Jay Mahadeokar, Yuan
Shangguan, Christian Fuegen, Michael L. Seltzer, Vikas Chandra | Memory-efficient Speech Recognition on Smart Devices | null | ICASSP 2021 | null | null | cs.SD cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent transducer models have emerged as a promising solution for speech
recognition on the current and next generation smart devices. The transducer
models provide competitive accuracy within a reasonable memory footprint
alleviating the memory capacity constraints in these devices. However, these
models access parameters from off-chip memory for every input time step which
adversely affects device battery life and limits their usability on low-power
devices.
We address the transducer model's memory access concerns by optimizing its
model architecture and designing novel recurrent cells. We demonstrate
that i) model's energy cost is dominated by accessing model weights from
off-chip memory, ii) transducer model architecture is pivotal in determining
the number of accesses to off-chip memory and just model size is not a good
proxy, iii) our transducer model optimizations and novel recurrent cell reduce
off-chip memory accesses by 4.5x and model size by 2x with minimal accuracy
impact.
| [
{
"created": "Tue, 23 Feb 2021 07:43:45 GMT",
"version": "v1"
}
] | 2021-02-24 | [
[
"Venkatesh",
"Ganesh",
""
],
[
"Valliappan",
"Alagappan",
""
],
[
"Mahadeokar",
"Jay",
""
],
[
"Shangguan",
"Yuan",
""
],
[
"Fuegen",
"Christian",
""
],
[
"Seltzer",
"Michael L.",
""
],
[
"Chandra",
"Vikas",
""
]
] |
2102.11585 | Gurkirt Singh | Gurkirt Singh, Stephen Akrigg, Manuele Di Maio, Valentina Fontana,
Reza Javanmard Alitappeh, Suman Saha, Kossar Jeddisaravi, Farzad Yousefi,
Jacob Culley, Tom Nicholson, Jordan Omokeowa, Salman Khan, Stanislao
Grazioso, Andrew Bradley, Giuseppe Di Gironimo, Fabio Cuzzolin | ROAD: The ROad event Awareness Dataset for Autonomous Driving | 29 pages, accepted at TPAMI | TPAMI.2022.3150906 | 10.1109/TPAMI.2022.3150906 | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Humans drive in a holistic fashion which entails, in particular,
understanding dynamic road events and their evolution. Injecting these
capabilities in autonomous vehicles can thus take situational awareness and
decision making closer to human-level performance. To this purpose, we
introduce the ROad event Awareness Dataset (ROAD) for Autonomous Driving, to
our knowledge the first of its kind. ROAD is designed to test an autonomous
vehicle's ability to detect road events, defined as triplets composed by an
active agent, the action(s) it performs and the corresponding scene locations.
ROAD comprises videos originally from the Oxford RobotCar Dataset annotated
with bounding boxes showing the location in the image plane of each road event.
We benchmark various detection tasks, proposing as a baseline a new incremental
algorithm for online road event awareness termed 3D-RetinaNet. We also report
the performance on the ROAD tasks of Slowfast and YOLOv5 detectors, as well as
that of the winners of the ICCV2021 ROAD challenge, which highlight the
challenges faced by situation awareness in autonomous driving. ROAD is designed
to allow scholars to investigate exciting tasks such as complex (road) activity
detection, future event anticipation and continual learning. The dataset is
available at https://github.com/gurkirt/road-dataset; the baseline can be found
at https://github.com/gurkirt/3D-RetinaNet.
| [
{
"created": "Tue, 23 Feb 2021 09:48:56 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Feb 2021 10:07:31 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Apr 2022 12:19:51 GMT",
"version": "v3"
}
] | 2022-04-04 | [
[
"Singh",
"Gurkirt",
""
],
[
"Akrigg",
"Stephen",
""
],
[
"Di Maio",
"Manuele",
""
],
[
"Fontana",
"Valentina",
""
],
[
"Alitappeh",
"Reza Javanmard",
""
],
[
"Saha",
"Suman",
""
],
[
"Jeddisaravi",
"Kossar",
""
],
[
"Yousefi",
"Farzad",
""
],
[
"Culley",
"Jacob",
""
],
[
"Nicholson",
"Tom",
""
],
[
"Omokeowa",
"Jordan",
""
],
[
"Khan",
"Salman",
""
],
[
"Grazioso",
"Stanislao",
""
],
[
"Bradley",
"Andrew",
""
],
[
"Di Gironimo",
"Giuseppe",
""
],
[
"Cuzzolin",
"Fabio",
""
]
] |
2102.11730 | Marco Wallner | Marco Wallner, Daniel Steininger, Verena Widhalm, Matthias
Sch\"orghuber, Csaba Beleznai | RGB-D Railway Platform Monitoring and Scene Understanding for Enhanced
Passenger Safety | The final authenticated version is available online at
https://doi.org/10.1007/978-3-030-68787-8_47 | Pattern Recognition. ICPR International Workshops and Challenges.
ICPR 2021. Lecture Notes in Computer Science, vol 12667. Springer, Cham | 10.1007/978-3-030-68787-8_47 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated monitoring and analysis of passenger movement in safety-critical
parts of transport infrastructures represent a relevant visual surveillance
task. Recent breakthroughs in visual representation learning and spatial
sensing opened up new possibilities for detecting and tracking humans and
objects within a 3D spatial context. This paper proposes a flexible analysis
scheme and a thorough evaluation of various processing pipelines to detect and
track humans on a ground plane, calibrated automatically via stereo depth and
pedestrian detection. We consider multiple combinations within a set of RGB-
and depth-based detection and tracking modalities. We exploit the modular
concepts of Meshroom [2] and demonstrate its use as a generic vision processing
pipeline and scalable evaluation framework. Furthermore, we introduce a novel
open RGB-D railway platform dataset with annotations to support research
activities in automated RGB-D surveillance. We present quantitative results for
multiple object detection and tracking for various algorithmic combinations on
our dataset. Results indicate that the combined use of depth-based spatial
information and learned representations yields substantially enhanced detection
and tracking accuracies. As demonstrated, these enhancements are especially
pronounced in adverse situations when occlusions and objects not captured by
learned representations are present.
| [
{
"created": "Tue, 23 Feb 2021 14:44:34 GMT",
"version": "v1"
}
] | 2021-03-25 | [
[
"Wallner",
"Marco",
""
],
[
"Steininger",
"Daniel",
""
],
[
"Widhalm",
"Verena",
""
],
[
"Schörghuber",
"Matthias",
""
],
[
"Beleznai",
"Csaba",
""
]
] |
2102.11762 | Hardik Meisheri | Omkar Shelke, Hardik Meisheri, Harshad Khadilkar | School of hard knocks: Curriculum analysis for Pommerman with a fixed
computational budget | 8 pages, Submitted to ALA workshop 2021 | CODS-COMAD 2022: 5th Joint International Conference on Data
Science & Management of Data (9th ACM IKDD CODS and 27th COMAD) | 10.1145/3493700.3493709 | null | cs.AI cs.LG cs.MA | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Pommerman is a hybrid cooperative/adversarial multi-agent environment, with
challenging characteristics in terms of partial observability, limited or no
communication, sparse and delayed rewards, and restrictive computational time
limits. This makes it a challenging environment for reinforcement learning (RL)
approaches. In this paper, we focus on developing a curriculum for learning a
robust and promising policy in a constrained computational budget of 100,000
games, starting from a fixed base policy (which is itself trained to imitate a
noisy expert policy). All RL algorithms starting from the base policy use
vanilla proximal-policy optimization (PPO) with the same reward function, and
the only difference between their training is the mix and sequence of opponent
policies. One expects that beginning training with simpler opponents and then
gradually increasing the opponent difficulty will facilitate faster learning,
leading to more robust policies compared against a baseline where all available
opponent policies are introduced from the start. We test this hypothesis and
show that within constrained computational budgets, it is in fact better to
"learn in the school of hard knocks", i.e., against all available opponent
policies nearly from the start. We also include ablation studies where we study
the effect of modifying the base environment properties of ammo and bomb blast
strength on the agent performance.
| [
{
"created": "Tue, 23 Feb 2021 15:43:09 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 07:54:32 GMT",
"version": "v2"
}
] | 2022-01-11 | [
[
"Shelke",
"Omkar",
""
],
[
"Meisheri",
"Hardik",
""
],
[
"Khadilkar",
"Harshad",
""
]
] |
2102.12096 | Jianzhun Shao | Jianzhun Shao, Yuhang Jiang, Gu Wang, Zhigang Li, Xiangyang Ji | PFRL: Pose-Free Reinforcement Learning for 6D Pose Estimation | null | In Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pp. 11454-11463. 2020 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 6D pose estimation from a single RGB image is a challenging and vital task in
computer vision. The current mainstream deep model methods resort to 2D images
annotated with real-world ground-truth 6D object poses, whose collection is
fairly cumbersome and expensive, even unavailable in many cases. In this work,
to get rid of the burden of 6D annotations, we formulate the 6D pose refinement
as a Markov Decision Process and impose on the reinforcement learning approach
with only 2D image annotations as weakly-supervised 6D pose information, via a
delicate reward definition and a composite reinforced optimization method for
efficient and effective policy training. Experiments on LINEMOD and T-LESS
datasets demonstrate that our Pose-Free approach is able to achieve
state-of-the-art performance compared with the methods without using real-world
ground-truth 6D pose labels.
| [
{
"created": "Wed, 24 Feb 2021 06:49:41 GMT",
"version": "v1"
}
] | 2021-02-25 | [
[
"Shao",
"Jianzhun",
""
],
[
"Jiang",
"Yuhang",
""
],
[
"Wang",
"Gu",
""
],
[
"Li",
"Zhigang",
""
],
[
"Ji",
"Xiangyang",
""
]
] |
2102.12127 | Ngoc Tran | Toan Pham Van, Son Trung Nguyen, Linh Bao Doan, Ngoc N. Tran and Ta
Minh Thanh | Efficient Palm-Line Segmentation with U-Net Context Fusion Module | Published in 2020 International Conference on Advanced Computing and
Applications (ACOMP) | 2020 International Conference on Advanced Computing and
Applications (ACOMP), Quy Nhon, Vietnam, 2020, pp. 23-28 | 10.1109/ACOMP50827.2020.00011 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Many cultures around the world believe that palm reading can be used to
predict the future life of a person. Palmistry uses features of the hand such
as palm lines, hand shape, or fingertip position. However, the research on
palm-line detection is still scarce, many of them applied traditional image
processing techniques. In most real-world scenarios, images are usually not
well-conditioned, causing these methods to severely under-perform. In this
paper, we propose an algorithm to extract principal palm lines from an image of
a person's hand. Our method applies deep learning networks (DNNs) to improve
performance. Another challenge of this problem is the lack of training data. To
deal with this issue, we handcrafted a dataset from scratch. From this dataset,
we compare the performance of readily available methods with ours. Furthermore,
based on the UNet segmentation neural network architecture and the knowledge of
attention mechanism, we propose a highly efficient architecture to detect
palm-lines. We proposed the Context Fusion Module to capture the most important
context feature, which aims to improve segmentation accuracy. The experimental
results show that it outperforms the other methods, with the highest F1 score
of about 99.42% and an mIoU of 0.584 on the same dataset.
| [
{
"created": "Wed, 24 Feb 2021 08:42:52 GMT",
"version": "v1"
}
] | 2021-02-25 | [
[
"Van",
"Toan Pham",
""
],
[
"Nguyen",
"Son Trung",
""
],
[
"Doan",
"Linh Bao",
""
],
[
"Tran",
"Ngoc N.",
""
],
[
"Thanh",
"Ta Minh",
""
]
] |
2102.12139 | Ngoc Tran | Toan Pham Van, Tam Minh Nguyen, Ngoc N. Tran, Hoai Viet Nguyen, Linh
Bao Doan, Huy Quang Dao and Thanh Ta Minh | Interpreting the Latent Space of Generative Adversarial Networks using
Supervised Learning | Published in 2020 International Conference on Advanced Computing and
Applications (ACOMP) | 2020 International Conference on Advanced Computing and
Applications (ACOMP), Quy Nhon, Vietnam, 2020, pp. 49-54 | 10.1109/ACOMP50827.2020.00015 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | With great progress in the development of Generative Adversarial Networks
(GANs), in recent years, the quest for insights in understanding and
manipulating the latent space of GAN has gained more and more attention due to
its wide range of applications. While most research on this task has
focused on unsupervised learning methods, which induce difficulties in training
and limitations in results, our work approaches another direction, encoding
human prior knowledge to discover more about the hidden space of GANs. With
this supervised manner, we produce promising results, demonstrated by accurate
manipulation of generated images. Even though our model is more suitable for
task-specific problems, we hope that its ease in implementation, preciseness,
robustness, and the allowance of richer set of properties (compared to other
approaches) for image manipulation can enhance the result of many current
applications.
| [
{
"created": "Wed, 24 Feb 2021 09:00:18 GMT",
"version": "v1"
}
] | 2021-02-25 | [
[
"Van",
"Toan Pham",
""
],
[
"Nguyen",
"Tam Minh",
""
],
[
"Tran",
"Ngoc N.",
""
],
[
"Nguyen",
"Hoai Viet",
""
],
[
"Doan",
"Linh Bao",
""
],
[
"Dao",
"Huy Quang",
""
],
[
"Minh",
"Thanh Ta",
""
]
] |
2102.12152 | Tung-I Chen | Tung-I Chen, Yueh-Cheng Liu, Hung-Ting Su, Yu-Cheng Chang, Yu-Hsiang
Lin, Jia-Fong Yeh, Wen-Chin Chen, Winston H. Hsu | Dual-Awareness Attention for Few-Shot Object Detection | null | IEEE Transactions on Multimedia 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While recent progress has significantly boosted few-shot classification (FSC)
performance, few-shot object detection (FSOD) remains challenging for modern
learning systems. Existing FSOD systems follow FSC approaches, ignoring
critical issues such as spatial variability and uncertain representations, and
consequently result in low performance. Observing this, we propose a novel
\textbf{Dual-Awareness Attention (DAnA)} mechanism that enables networks to
adaptively interpret the given support images. DAnA transforms support images
into \textbf{query-position-aware} (QPA) features, guiding detection networks
precisely by assigning customized support information to each local region of
the query. In addition, the proposed DAnA component is flexible and adaptable
to multiple existing object detection frameworks. By adopting DAnA,
conventional object detection networks, Faster R-CNN and RetinaNet, which are
not designed explicitly for few-shot learning, reach state-of-the-art
performance in FSOD tasks. In comparison with previous methods, our model
significantly increases the performance by 47\% (+6.9 AP), showing remarkable
ability under various evaluation settings.
| [
{
"created": "Wed, 24 Feb 2021 09:17:27 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jul 2021 08:40:00 GMT",
"version": "v2"
},
{
"created": "Thu, 16 Sep 2021 03:02:15 GMT",
"version": "v3"
}
] | 2021-09-17 | [
[
"Chen",
"Tung-I",
""
],
[
"Liu",
"Yueh-Cheng",
""
],
[
"Su",
"Hung-Ting",
""
],
[
"Chang",
"Yu-Cheng",
""
],
[
"Lin",
"Yu-Hsiang",
""
],
[
"Yeh",
"Jia-Fong",
""
],
[
"Chen",
"Wen-Chin",
""
],
[
"Hsu",
"Winston H.",
""
]
] |
2102.12162 | Ngoc Tran | Quang Huu Pham, Viet Anh Nguyen, Linh Bao Doan, Ngoc N. Tran and Ta
Minh Thanh | From Universal Language Model to Downstream Task: Improving
RoBERTa-Based Vietnamese Hate Speech Detection | Published in 2020 12th International Conference on Knowledge and
Systems Engineering (KSE) | 2020 12th International Conference on Knowledge and Systems
Engineering (KSE), Can Tho, Vietnam, 2020, pp. 37-42 | 10.1109/KSE50997.2020.9287406 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Natural language processing is a fast-growing field of artificial
intelligence. Since the Transformer was introduced by Google in 2017, a large
number of language models such as BERT, GPT, and ELMo have been inspired by
this architecture. These models were trained on huge datasets and achieved
state-of-the-art results on natural language understanding. However,
fine-tuning a pre-trained language model on much smaller datasets for
downstream tasks requires a carefully-designed pipeline to mitigate problems of
the datasets such as lack of training data and imbalanced data. In this paper,
we propose a pipeline to adapt the general-purpose RoBERTa language model to a
specific text classification task: Vietnamese Hate Speech Detection. We first
tune the PhoBERT on our dataset by re-training the model on the Masked Language
Model task; then, we employ its encoder for text classification. In order to
preserve pre-trained weights while learning new feature representations, we
further utilize different training techniques: layer freezing, block-wise
learning rate, and label smoothing. Our experiments proved that our proposed
pipeline boosts the performance significantly, achieving a new state-of-the-art
on Vietnamese Hate Speech Detection campaign with 0.7221 F1 score.
| [
{
"created": "Wed, 24 Feb 2021 09:30:55 GMT",
"version": "v1"
}
] | 2021-02-25 | [
[
"Pham",
"Quang Huu",
""
],
[
"Nguyen",
"Viet Anh",
""
],
[
"Doan",
"Linh Bao",
""
],
[
"Tran",
"Ngoc N.",
""
],
[
"Thanh",
"Ta Minh",
""
]
] |
2102.12191 | Md Mamunur Rahaman | Md Mamunur Rahaman, Chen Li, Yudong Yao, Frank Kulwa, Xiangchen Wu,
Xiaoyan Li, Qian Wang | DeepCervix: A Deep Learning-based Framework for the Classification of
Cervical Cells Using Hybrid Deep Feature Fusion Techniques | 12 pages, 8 figures, Published in Computers in Biology and Medicine | Computers in Biology and Medicine, 136, 104649 (2021) | 10.1016/j.compbiomed.2021.104649 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cervical cancer, one of the most common fatal cancers among women, can be
prevented by regular screening to detect any precancerous lesions at early
stages and treat them. Pap smear test is a widely performed screening technique
for early detection of cervical cancer, whereas this manual screening method
suffers from high false-positive results because of human errors. To improve
the manual screening practice, machine learning (ML) and deep learning (DL)
based computer-aided diagnostic (CAD) systems have been investigated widely to
classify cervical Pap cells. Most of the existing studies require
pre-segmented images to obtain good classification results, whereas accurate
cervical cell segmentation is challenging because of cell clustering. Some
studies rely on handcrafted features, which cannot guarantee the classification
stage's optimality. Moreover, DL provides poor performance for a multiclass
classification task when there is an uneven distribution of data, which is
prevalent in the cervical cell dataset. This investigation has addressed those
limitations by proposing DeepCervix, a hybrid deep feature fusion (HDFF)
technique based on DL to classify the cervical cells accurately. Our proposed
method uses various DL models to capture more potential information to enhance
classification performance. Our proposed HDFF method is tested on the publicly
available SIPAKMED dataset and compared the performance with base DL models and
the LF method. For the SIPAKMED dataset, we have obtained the state-of-the-art
classification accuracy of 99.85%, 99.38%, and 99.14% for 2-class, 3-class, and
5-class classification. Moreover, our method is tested on the Herlev dataset
and achieves an accuracy of 98.32% for binary class and 90.32% for 7-class
classification.
| [
{
"created": "Wed, 24 Feb 2021 10:34:51 GMT",
"version": "v1"
},
{
"created": "Sat, 28 Aug 2021 13:30:24 GMT",
"version": "v2"
}
] | 2021-08-31 | [
[
"Rahaman",
"Md Mamunur",
""
],
[
"Li",
"Chen",
""
],
[
"Yao",
"Yudong",
""
],
[
"Kulwa",
"Frank",
""
],
[
"Wu",
"Xiangchen",
""
],
[
"Li",
"Xiaoyan",
""
],
[
"Wang",
"Qian",
""
]
] |
2102.12227 | Andrea Galassi | Andrea Galassi, Marco Lippi, Paolo Torroni | Multi-Task Attentive Residual Networks for Argument Mining | 16 pages, 3 figures | IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol 31, pp 1877-1892, 2023 | 10.1109/TASLP.2023.3275040 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We explore the use of residual networks and neural attention for multiple
argument mining tasks. We propose a residual architecture that exploits
attention, multi-task learning, and ensembling, without any
assumption on document or argument structure. We present an extensive
experimental evaluation on five different corpora of user-generated comments,
scientific publications, and persuasive essays. Our results show that our
approach is a strong competitor against state-of-the-art architectures with a
higher computational footprint or corpus-specific design, representing an
interesting compromise between generality, performance accuracy and reduced
model size.
| [
{
"created": "Wed, 24 Feb 2021 11:35:28 GMT",
"version": "v1"
},
{
"created": "Mon, 15 May 2023 16:53:00 GMT",
"version": "v2"
},
{
"created": "Thu, 25 May 2023 22:46:54 GMT",
"version": "v3"
}
] | 2023-05-29 | [
[
"Galassi",
"Andrea",
""
],
[
"Lippi",
"Marco",
""
],
[
"Torroni",
"Paolo",
""
]
] |
2102.12255 | Abheesht Sharma | Abheesht Sharma, Harshit Pandey, Gunjan Chhablani, Yash Bhartia,
Tirtharaj Dash | LRG at SemEval-2021 Task 4: Improving Reading Comprehension with
Abstract Words using Augmentation, Linguistic Features and Voting | 10 pages, 4 figures, SemEval-2021 Workshop, ACL-IJCNLP 2021 | Proceedings of the 15th International Workshop on Semantic
Evaluation (SemEval-2021), 2021, Online | 10.18653/v1/2021.semeval-1.21 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we present our methodologies for SemEval-2021 Task-4:
Reading Comprehension of Abstract Meaning. Given a fill-in-the-blank-type
question and a corresponding context, the task is to predict the most suitable
word from a list of 5 options. There are three sub-tasks within this task:
Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection
(subtask-III). We use encoders of transformers-based models pre-trained on the
masked language modelling (MLM) task to build our Fill-in-the-blank (FitB)
models. Moreover, to model imperceptibility, we define certain linguistic
features, and to model non-specificity, we leverage information from hypernyms
and hyponyms provided by a lexical database. Specifically, for non-specificity,
we try out augmentation techniques, and other statistical techniques. We also
propose variants, namely Chunk Voting and Max Context, to take care of input
length restrictions for BERT, etc. Additionally, we perform a thorough ablation
study, and use Integrated Gradients to explain our predictions on a few
samples. Our best submissions achieve accuracies of 75.31% and 77.84%, on the
test sets for subtask-I and subtask-II, respectively. For subtask-III, we
achieve accuracies of 65.64% and 62.27%.
| [
{
"created": "Wed, 24 Feb 2021 12:33:12 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Jun 2021 14:02:41 GMT",
"version": "v2"
}
] | 2022-02-24 | [
[
"Sharma",
"Abheesht",
""
],
[
"Pandey",
"Harshit",
""
],
[
"Chhablani",
"Gunjan",
""
],
[
"Bhartia",
"Yash",
""
],
[
"Dash",
"Tirtharaj",
""
]
] |
2102.12281 | Aydogan Ozcan | Luzhe Huang, Tairan Liu, Xilin Yang, Yi Luo, Yair Rivenson, Aydogan
Ozcan | Holographic image reconstruction with phase recovery and autofocusing
using recurrent neural networks | 18 Pages, 7 Figures, 1 Table | ACS Photonics (2021) | 10.1021/acsphotonics.1c00337 | null | eess.IV cs.CV cs.LG physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital holography is one of the most widely used label-free microscopy
techniques in biomedical imaging. Recovery of the missing phase information of
a hologram is an important step in holographic image reconstruction. Here we
demonstrate a convolutional recurrent neural network (RNN) based phase recovery
approach that uses multiple holograms, captured at different sample-to-sensor
distances to rapidly reconstruct the phase and amplitude information of a
sample, while also performing autofocusing through the same network. We
demonstrated the success of this deep learning-enabled holography method by
imaging microscopic features of human tissue samples and Papanicolaou (Pap)
smears. These results constitute the first demonstration of the use of
recurrent neural networks for holographic imaging and phase recovery, and
compared with existing methods, the presented approach improves the
reconstructed image quality, while also increasing the depth-of-field and
inference speed.
| [
{
"created": "Fri, 12 Feb 2021 01:51:43 GMT",
"version": "v1"
}
] | 2021-05-28 | [
[
"Huang",
"Luzhe",
""
],
[
"Liu",
"Tairan",
""
],
[
"Yang",
"Xilin",
""
],
[
"Luo",
"Yi",
""
],
[
"Rivenson",
"Yair",
""
],
[
"Ozcan",
"Aydogan",
""
]
] |
2102.12354 | Fl\'avio Santos | Flavio Santos, Cleber Zanchettin, Leonardo Matos, and Paulo Novais | On the Impact of Interpretability Methods in Active Image Augmentation
Method | published in Logic Journal of the IGPL (2021) | Logic Journal of the IGPL, 2021, jzab006 | 10.1093/jigpal/jzab006 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robustness is a significant constraint in machine learning models. The
performance of the algorithms must not deteriorate when training and testing
with slightly different data. Deep neural network models achieve awe-inspiring
results in a wide range of applications of computer vision. Still, in the
presence of noise or region occlusion, some models exhibit inaccurate
performance even with data handled in training. Besides, some experiments
suggest deep learning models sometimes use incorrect parts of the input
information to perform inference. Active Image Augmentation (ADA) is an
augmentation method that uses interpretability methods to augment the training
data and improve its robustness to face the described problems. Although ADA
presented interesting results, its original version only used the Vanilla
Backpropagation interpretability to train the U-Net model. In this work, we
propose an extensive experimental analysis of the interpretability method's
impact on ADA. We use five interpretability methods: Vanilla Backpropagation,
Guided Backpropagation, GradCam, Guided GradCam, and InputXGradient. The
results show that all methods achieve similar performance at the ending of
training, but when combining ADA with GradCam, the U-Net model presented an
impressive fast convergence.
| [
{
"created": "Wed, 24 Feb 2021 15:40:54 GMT",
"version": "v1"
}
] | 2021-02-25 | [
[
"Santos",
"Flavio",
""
],
[
"Zanchettin",
"Cleber",
""
],
[
"Matos",
"Leonardo",
""
],
[
"Novais",
"Paulo",
""
]
] |
2102.12459 | Tao Lei | Tao Lei | When Attention Meets Fast Recurrence: Training Language Models with
Reduced Compute | null | EMNLP 2021 | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models have become increasingly difficult to train because of
the growing computation time and cost. In this work, we present SRU++, a
highly-efficient architecture that combines fast recurrence and attention for
sequence modeling. SRU++ exhibits strong modeling capacity and training
efficiency. On standard language modeling tasks such as Enwik8, Wiki-103 and
Billion Word datasets, our model obtains better bits-per-character and
perplexity while using 3x-10x less training cost compared to top-performing
Transformer models. For instance, our model achieves a state-of-the-art result
on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We
further demonstrate that SRU++ requires minimal attention for near
state-of-the-art performance. Our results suggest jointly leveraging fast
recurrence with little attention as a promising direction for accelerating
model training and inference.
| [
{
"created": "Wed, 24 Feb 2021 18:39:56 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Mar 2021 16:32:25 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Sep 2021 03:59:10 GMT",
"version": "v3"
}
] | 2021-09-16 | [
[
"Lei",
"Tao",
""
]
] |
2102.12505 | Utako Yamamoto | Utako Yamamoto, Megumi Nakao, Masayuki Ohzeki, Junko Tokuno, Toyofumi
Fengshi Chen-Yoshikawa, and Tetsuya Matsuda | Kernel-based framework to estimate deformations of pneumothorax lung
using relative position of anatomical landmarks | 10 pages, 6 figures | Expert Systems with Applications, 183(2021), 115288 | 10.1016/j.eswa.2021.115288 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In video-assisted thoracoscopic surgeries, successful procedures of nodule
resection are highly dependent on the precise estimation of lung deformation
between the inflated lung in the computed tomography (CT) images during
preoperative planning and the deflated lung in the treatment views during
surgery. Lungs in the pneumothorax state during surgery have a large volume
change from normal lungs, making it difficult to build a mechanical model. The
purpose of this study is to develop a deformation estimation method of the 3D
surface of a deflated lung from a few partial observations. To estimate
deformations for a largely deformed lung, a kernel regression-based solution
was introduced. The proposed method used a few landmarks to capture the partial
deformation between the 3D surface mesh obtained from preoperative CT and the
intraoperative anatomical positions. The deformation for each vertex of the
entire mesh model was estimated per-vertex as a relative position from the
landmarks. The landmarks were placed in the anatomical position of the lung's
outer contour. The method was applied on nine datasets of the left lungs of
live Beagle dogs. Contrast-enhanced CT images of the lungs were acquired. The
proposed method achieved a local positional error of vertices of 2.74 mm,
Hausdorff distance of 6.11 mm, and Dice similarity coefficient of 0.94.
Moreover, the proposed method could estimate lung deformations from a small
number of training cases and a small observation area. This study contributes
to the data-driven modeling of pneumothorax deformation of the lung.
| [
{
"created": "Wed, 24 Feb 2021 19:00:17 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Aug 2021 17:11:55 GMT",
"version": "v2"
}
] | 2021-08-20 | [
[
"Yamamoto",
"Utako",
""
],
[
"Nakao",
"Megumi",
""
],
[
"Ohzeki",
"Masayuki",
""
],
[
"Tokuno",
"Junko",
""
],
[
"Chen-Yoshikawa",
"Toyofumi Fengshi",
""
],
[
"Matsuda",
"Tetsuya",
""
]
] |
2102.12670 | Azarakhsh Keipour | Azarakhsh Keipour and Guilherme A. S. Pereira and Sebastian Scherer | Real-Time Ellipse Detection for Robotics Applications | Accepted to RA-L and IROS 2021 | IEEE Robotics and Automation Letters, vol. 6, no. 4, pp.
7009-7016, Oct. 2021 | 10.1109/LRA.2021.3097057 | null | cs.RO cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new algorithm for real-time detection and tracking of elliptic
patterns suitable for real-world robotics applications. The method fits
ellipses to each contour in the image frame and rejects ellipses that do not
yield a good fit. The resulting detection and tracking method is lightweight
enough to be used on robots' resource-limited onboard computers, can deal with
lighting variations and detect the pattern even when the view is partial. The
method is tested on an example application of an autonomous UAV landing on a
fast-moving vehicle to show its performance indoors, outdoors, and in
simulation on a real-world robotics task. The comparison with other well-known
ellipse detection methods shows that our proposed algorithm outperforms other
methods with the F1 score of 0.981 on a dataset with over 1500 frames. The
videos of experiments, the source codes, and the collected dataset are provided
with the paper at https://theairlab.org/landing-on-vehicle .
| [
{
"created": "Thu, 25 Feb 2021 03:53:59 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Jul 2021 06:17:41 GMT",
"version": "v2"
}
] | 2021-12-09 | [
[
"Keipour",
"Azarakhsh",
""
],
[
"Pereira",
"Guilherme A. S.",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
2102.12759 | Georg Muntingh PhD | Oliver J.D. Barrowclough, Georg Muntingh, Varatharajan Nainamalai,
Ivar Stangeby | Binary segmentation of medical images using implicit spline
representations and deep learning | 17 pages, 5 figures | Computer Aided Geometric Design, Volume 85, 2021 | 10.1016/j.cagd.2021.101972 | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach to image segmentation based on combining implicit
spline representations with deep convolutional neural networks. This is done by
predicting the control points of a bivariate spline function whose zero-set
represents the segmentation boundary. We adapt several existing neural network
architectures and design novel loss functions that are tailored towards
providing implicit spline curve approximations. The method is evaluated on a
congenital heart disease computed tomography medical imaging dataset.
Experiments are carried out by measuring performance in various standard
metrics for different networks and loss functions. We determine that splines of
bidegree $(1,1)$ with $128\times128$ coefficient resolution performed optimally
for $512\times 512$ resolution CT images. For our best network, we achieve an
average volumetric test Dice score of almost 92%, which reaches the state of
the art for this congenital heart disease dataset.
| [
{
"created": "Thu, 25 Feb 2021 10:04:25 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Mar 2021 08:50:53 GMT",
"version": "v2"
}
] | 2021-03-22 | [
[
"Barrowclough",
"Oliver J. D.",
""
],
[
"Muntingh",
"Georg",
""
],
[
"Nainamalai",
"Varatharajan",
""
],
[
"Stangeby",
"Ivar",
""
]
] |
2102.12773 | Fengshi Tian Clarence | Fengshi Tian, Jie Yang, Shiqi Zhao, Mohamad Sawan | A New Neuromorphic Computing Approach for Epileptic Seizure Prediction | Accepted to 2021 IEEE International Symposium on Circuits and Systems
(ISCAS) | 2021 IEEE International Symposium on Circuits and Systems (ISCAS) | 10.1109/ISCAS51556.2021.9401560 | null | cs.NE cs.AI cs.HC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several high specificity and sensitivity seizure prediction methods with
convolutional neural networks (CNNs) are reported. However, CNNs are
computationally expensive and power hungry. These inconveniences make CNN-based
methods hard to be implemented on wearable devices. Motivated by the
energy-efficient spiking neural networks (SNNs), a neuromorphic computing
approach for seizure prediction is proposed in this work. This approach uses a
designed Gaussian random discrete encoder to generate spike sequences from the
EEG samples and make predictions in a spiking convolutional neural network
(Spiking-CNN) which combines the advantages of CNNs and SNNs. The experimental
results show that the sensitivity, specificity and AUC can remain 95.1%, 99.2%
and 0.912 respectively while the computation complexity is reduced by 98.58%
compared to CNN, indicating that the proposed Spiking-CNN is hardware friendly
and of high precision.
| [
{
"created": "Thu, 25 Feb 2021 10:39:18 GMT",
"version": "v1"
}
] | 2022-08-25 | [
[
"Tian",
"Fengshi",
""
],
[
"Yang",
"Jie",
""
],
[
"Zhao",
"Shiqi",
""
],
[
"Sawan",
"Mohamad",
""
]
] |
2102.12846 | Dimitri Kartsaklis | Robin Lorenz, Anna Pearson, Konstantinos Meichanetzidis, Dimitri
Kartsaklis, Bob Coecke | QNLP in Practice: Running Compositional Models of Meaning on a Quantum
Computer | 38 pages | Journal of Artificial Intelligence Research Vol. 76 (2023),
1305-1342 | 10.1613/jair.1.14329 | null | cs.CL cs.AI cs.LG quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum Natural Language Processing (QNLP) deals with the design and
implementation of NLP models intended to be run on quantum hardware. In this
paper, we present results on the first NLP experiments conducted on Noisy
Intermediate-Scale Quantum (NISQ) computers for datasets of size greater than
100 sentences. Exploiting the formal similarity of the compositional model of
meaning by Coecke, Sadrzadeh and Clark (2010) with quantum theory, we create
representations for sentences that have a natural mapping to quantum circuits.
We use these representations to implement and successfully train NLP models
that solve simple sentence classification tasks on quantum hardware. We conduct
quantum simulations that compare the syntax-sensitive model of Coecke et al.
with two baselines that use less or no syntax; specifically, we implement the
quantum analogues of a "bag-of-words" model, where syntax is not taken into
account at all, and of a word-sequence model, where only word order is
respected. We demonstrate that all models converge smoothly both in simulations
and when run on quantum hardware, and that the results are the expected ones
based on the nature of the tasks and the datasets used. Another important goal
of this paper is to describe in a way accessible to AI and NLP researchers the
main principles, process and challenges of experiments on quantum hardware. Our
aim in doing this is to take the first small steps in this unexplored research
territory and pave the way for practical Quantum Natural Language Processing.
| [
{
"created": "Thu, 25 Feb 2021 13:37:33 GMT",
"version": "v1"
},
{
"created": "Thu, 4 May 2023 11:34:16 GMT",
"version": "v2"
}
] | 2023-05-05 | [
[
"Lorenz",
"Robin",
""
],
[
"Pearson",
"Anna",
""
],
[
"Meichanetzidis",
"Konstantinos",
""
],
[
"Kartsaklis",
"Dimitri",
""
],
[
"Coecke",
"Bob",
""
]
] |
2102.12853 | M. Alex O. Vasilescu | M. Alex O. Vasilescu, Eric Kim, and Xiao S. Zeng | CausalX: Causal Explanations and Block Multilinear Factor Analysis | arXiv admin note: text overlap with arXiv:1911.04180 | 2020 25th International Conference on Pattern Recognition (ICPR),
Milan, Italy, pp. 10736-10743 | 10.1109/ICPR48806.2021.9412780 | null | cs.CV cs.AI cs.LG math.DG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By adhering to the dictum, "No causation without manipulation (treatment,
intervention)", cause and effect data analysis represents changes in observed
data in terms of changes in the causal factors. When causal factors are not
amenable to active manipulation in the real world due to current technological
limitations or ethical considerations, a counterfactual approach performs an
intervention on the model of data formation. In the case of object
representation or activity (temporal object) representation, varying object
parts is generally infeasible, whether they are spatial and/or temporal.
Multilinear algebra, the algebra of higher-order tensors, is a suitable and
transparent framework for disentangling the causal factors of data formation.
Learning part-based intrinsic causal factor representations in a multilinear
framework requires applying a set of interventions on a part-based multilinear
model. We propose a unified multilinear model of wholes and parts. We derive a
hierarchical block multilinear factorization, the M-mode Block SVD, that
computes a disentangled representation of the causal factors by optimizing
simultaneously across the entire object hierarchy. Given computational
efficiency considerations, we introduce an incremental bottom-up computational
alternative, the Incremental M-mode Block SVD, that employs the lower-level
abstractions, the part representations, to represent the higher level of
abstractions, the parent wholes. This incremental computational approach may
also be employed to update the causal model parameters when data becomes
available incrementally. The resulting object representation is an
interpretable combinatorial choice of intrinsic causal factor representations
related to an object's recursive hierarchy of wholes and parts that renders
object recognition robust to occlusion and reduces training data requirements.
| [
{
"created": "Thu, 25 Feb 2021 13:49:01 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Feb 2021 12:03:44 GMT",
"version": "v2"
}
] | 2022-02-08 | [
[
"Vasilescu",
"M. Alex O.",
""
],
[
"Kim",
"Eric",
""
],
[
"Zeng",
"Xiao S.",
""
]
] |
2102.12855 | Mingyu Cai | Mingyu Cai, Mohammadhosein Hasanbeig, Shaoping Xiao, Alessandro Abate
and Zhen Kan | Modular Deep Reinforcement Learning for Continuous Motion Planning with
Temporal Logic | arXiv admin note: text overlap with arXiv:2010.06797 | IEEE Robotics and Automation Letters, 2021 | 10.1109/LRA.2021.3101544 | null | cs.LG cs.AI cs.FL cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the motion planning of autonomous dynamical systems
modeled by Markov decision processes (MDP) with unknown transition
probabilities over continuous state and action spaces. Linear temporal logic
(LTL) is used to specify high-level tasks over an infinite horizon, which can be
converted into a limit deterministic generalized B\"uchi automaton (LDGBA) with
several accepting sets. The novelty is to design an embedded product MDP
(EP-MDP) between the LDGBA and the MDP by incorporating a synchronous
tracking-frontier function to record unvisited accepting sets of the automaton,
and to facilitate the satisfaction of the accepting conditions. The proposed
LDGBA-based reward shaping and discounting schemes for the model-free
reinforcement learning (RL) only depend on the EP-MDP states and can overcome
the issues of sparse rewards. Rigorous analysis shows that any RL method that
optimizes the expected discounted return is guaranteed to find an optimal
policy whose traces maximize the satisfaction probability. A modular deep
deterministic policy gradient (DDPG) is then developed to generate such
policies over continuous state and action spaces. The performance of our
framework is evaluated via an array of OpenAI gym environments.
| [
{
"created": "Wed, 24 Feb 2021 01:11:25 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jun 2021 18:52:06 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Jul 2021 16:26:14 GMT",
"version": "v3"
},
{
"created": "Tue, 5 Oct 2021 13:55:55 GMT",
"version": "v4"
},
{
"created": "Wed, 6 Oct 2021 15:29:29 GMT",
"version": "v5"
},
{
"created": "Mon, 22 Nov 2021 23:45:50 GMT",
"version": "v6"
},
{
"created": "Sun, 23 Jan 2022 22:02:35 GMT",
"version": "v7"
}
] | 2022-01-25 | [
[
"Cai",
"Mingyu",
""
],
[
"Hasanbeig",
"Mohammadhosein",
""
],
[
"Xiao",
"Shaoping",
""
],
[
"Abate",
"Alessandro",
""
],
[
"Kan",
"Zhen",
""
]
] |
2102.13034 | Yuan Shen | Yuan Shen, Niviru Wijayaratne, Peter Du, Shanduojiao Jiang, Katherine
Driggs Campbell | AutoPreview: A Framework for Autopilot Behavior Understanding | 7 pages, 5 figures, CHI 2021 Late breaking Work | CHI Conference on Human Factors in Computing Systems Extended
Abstracts (CHI '21 Extended Abstracts), May 8 to 13, 2021, Yokohama, Japan | 10.1145/3411763.3451591 | null | cs.AI cs.HC cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The behavior of self-driving cars may differ from people's expectations (e.g.,
an autopilot may unexpectedly relinquish control). This expectation mismatch
can cause potential and existing users to distrust self-driving technology and
can increase the likelihood of accidents. We propose a simple but effective
framework, AutoPreview, to enable consumers to preview a target autopilot's
potential actions in a real-world driving context before deployment. For a
given target autopilot, we design a delegate policy that replicates the target
autopilot's behavior with explainable action representations, which can then be
queried online for comparison and to build an accurate mental model. To
demonstrate its practicality, we present a prototype of AutoPreview integrated
with the CARLA simulator along with two potential use cases of the framework.
We conduct a pilot study to investigate whether or not AutoPreview provides
a deeper understanding of autopilot behavior when experiencing a new autopilot
policy for the first time. Our results suggest that the AutoPreview method
helps users understand autopilot behavior in terms of driving style
comprehension, deployment preference, and exact action timing prediction.
| [
{
"created": "Thu, 25 Feb 2021 17:40:59 GMT",
"version": "v1"
}
] | 2021-02-26 | [
[
"Shen",
"Yuan",
""
],
[
"Wijayaratne",
"Niviru",
""
],
[
"Du",
"Peter",
""
],
[
"Jiang",
"Shanduojiao",
""
],
[
"Campbell",
"Katherine Driggs",
""
]
] |
2102.13139 | Milos Jovanovik | Nasi Jofche, Kostadin Mishev, Riste Stojanov, Milos Jovanovik, Dimitar
Trajanov | PharmKE: Knowledge Extraction Platform for Pharmaceutical Texts using
Transfer Learning | null | Computers. 2023; 12(1):17 | 10.3390/computers12010017 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenge of recognizing named entities in a given text has been a very
dynamic field in recent years. This is due to the advances in neural network
architectures, increase of computing power and the availability of diverse
labeled datasets, which deliver pre-trained, highly accurate models. These
tasks are generally focused on tagging common entities, but domain-specific
use-cases require tagging custom entities which are not part of the pre-trained
models. This can be solved by either fine-tuning the pre-trained models, or by
training custom models. The main challenge lies in obtaining reliable labeled
training and test datasets, and manual labeling would be a highly tedious task.
In this paper we present PharmKE, a text analysis platform focused on the
pharmaceutical domain, which applies deep learning through several stages for
thorough semantic analysis of pharmaceutical articles. It performs text
classification using state-of-the-art transfer learning models, and thoroughly
integrates the results obtained through a proposed methodology. The methodology
is used to create accurately labeled training and test datasets, which are then
used to train models for custom entity labeling tasks, centered on the
pharmaceutical domain. The obtained results are compared to the fine-tuned BERT
and BioBERT models trained on the same dataset. Additionally, the PharmKE
platform integrates the results obtained from named entity recognition tasks to
resolve co-references of entities and analyze the semantic relations in every
sentence, thus setting up a baseline for additional text analysis tasks, such
as question answering and fact extraction. The recognized entities are also
used to expand the knowledge graph generated by DBpedia Spotlight for a given
pharmaceutical text.
| [
{
"created": "Thu, 25 Feb 2021 19:36:35 GMT",
"version": "v1"
}
] | 2023-01-10 | [
[
"Jofche",
"Nasi",
""
],
[
"Mishev",
"Kostadin",
""
],
[
"Stojanov",
"Riste",
""
],
[
"Jovanovik",
"Milos",
""
],
[
"Trajanov",
"Dimitar",
""
]
] |
2102.13196 | Alexander M. Rush | David Chiang, Alexander M. Rush, Boaz Barak | Named Tensor Notation | null | TMLR, January 2023 | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | We propose a notation for tensors with named axes, which relieves the author,
reader, and future implementers of machine learning models from the burden of
keeping track of the order of axes and the purpose of each. The notation makes
it easy to lift operations on low-order tensors to higher order ones, for
example, from images to minibatches of images, or from an attention mechanism
to multiple attention heads.
After a brief overview and formal definition of the notation, we illustrate
it through several examples from modern machine learning, from building blocks
like attention and convolution to full models like Transformers and LeNet. We
then discuss differential calculus in our notation and compare with some
alternative notations. Our proposals build on ideas from many previous papers
and software libraries. We hope that our notation will encourage more authors
to use named tensors, resulting in clearer papers and more precise
implementations.
| [
{
"created": "Thu, 25 Feb 2021 22:21:30 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 03:00:53 GMT",
"version": "v2"
},
{
"created": "Tue, 17 Jan 2023 19:52:28 GMT",
"version": "v3"
}
] | 2023-01-19 | [
[
"Chiang",
"David",
""
],
[
"Rush",
"Alexander M.",
""
],
[
"Barak",
"Boaz",
""
]
] |
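The preceding record (arXiv:2102.13196) argues for naming tensor axes instead of tracking their order. The snippet below is not the paper's notation; it only illustrates the underlying concern with `np.einsum`, where the axis roles (batch `b`, query `q`, key `k`, feature `d`) are at least spelled out in the subscript strings and comments.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention with axis roles named in the einsum strings."""
    d = Q.shape[-1]
    scores = np.einsum("bqd,bkd->bqk", Q, K) / np.sqrt(d)   # contract over feature 'd'
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over key axis 'k'
    return np.einsum("bqk,bkd->bqd", weights, V)            # weighted sum over 'k'

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(2, 5, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (batch=2, query=5, feature=8)
```

Lifting this to multiple attention heads would only require one more named axis, which is the kind of extension the record's notation is meant to make painless.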
2102.13391 | Rajat Sharma | Rajat Sharma, Tobias Schwandt, Christian Kunert, Steffen Urban and
Wolfgang Broll | Point Cloud Upsampling and Normal Estimation using Deep Learning for
Robust Surface Reconstruction | null | In Proceedings of the 16th International Joint Conference on
Computer Vision, Imaging and Computer Graphics Theory and Applications
(VISIGRAPP 2021) - Volume 5: VISAPP, pages 70-79 | 10.5220/0010211600700079 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The reconstruction of real-world surfaces is in high demand in various
applications. Most existing reconstruction approaches apply 3D scanners for
creating point clouds which are generally sparse and of low density. These
point clouds are then triangulated and used for visualization in combination
with surface normals estimated by geometrical approaches. However, the quality
of the reconstruction depends on the density of the point cloud and the
estimation of the surface normals. In this paper, we present a novel deep
learning architecture for point cloud upsampling that enables subsequent stable
and smooth surface reconstruction. A noisy point cloud of low density with
corresponding point normals is used to estimate a point cloud with higher
density and associated point normals. To this end, we propose a compound loss
function that encourages the network to estimate points that lie on a surface
including normals accurately predicting the orientation of the surface. Our
results show the benefit of estimating normals together with point positions.
The resulting point cloud is smoother, more complete, and the final surface
reconstruction is much closer to ground truth.
| [
{
"created": "Fri, 26 Feb 2021 10:58:26 GMT",
"version": "v1"
}
] | 2021-03-01 | [
[
"Sharma",
"Rajat",
""
],
[
"Schwandt",
"Tobias",
""
],
[
"Kunert",
"Christian",
""
],
[
"Urban",
"Steffen",
""
],
[
"Broll",
"Wolfgang",
""
]
] |
2102.13493 | Dom Ginhac | Yu Liu, Fan Yang and Dominique Ginhac | ACDnet: An action detection network for real-time edge computing based
on flow-guided feature approximation and memory aggregation | Accepted for publication in Pattern Recognition Letters | Pattern Recognition Letters, 145 , 118-126, 2021 | 10.1016/j.patrec.2021.02.001 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Interpreting human actions requires understanding the spatial and temporal
context of the scenes. State-of-the-art action detectors based on Convolutional
Neural Networks (CNNs) have demonstrated remarkable results by adopting
two-stream or 3D CNN architectures. However, these methods typically operate in
a non-real-time, offline fashion due to the system complexity of reasoning about
spatio-temporal information. Consequently, their high computational cost is not
compliant with emerging real-world scenarios such as service robots or public
surveillance where detection needs to take place at resource-limited edge
devices. In this paper, we propose ACDnet, a compact action detection network
targeting real-time edge computing which addresses both efficiency and
accuracy. It intelligently exploits the temporal coherence between successive
video frames to approximate their CNN features rather than naively extracting
them. It also integrates memory feature aggregation from past video frames to
enhance current detection stability, implicitly modeling long temporal cues
over time. Experiments conducted on the public benchmark datasets UCF-24 and
JHMDB-21 demonstrate that ACDnet, when integrated with the SSD detector, can
robustly achieve detection well above real-time (75 FPS). At the same time, it
retains reasonable accuracy (70.92 and 49.53 frame mAP) compared to other
top-performing methods using far heavier configurations. Codes will be
available at https://github.com/dginhac/ACDnet.
| [
{
"created": "Fri, 26 Feb 2021 14:06:31 GMT",
"version": "v1"
}
] | 2021-03-01 | [
[
"Liu",
"Yu",
""
],
[
"Yang",
"Fan",
""
],
[
"Ginhac",
"Dominique",
""
]
] |
2102.13519 | Stefan Bl\"ucher | Stefan Bl\"ucher, Johanna Vielhaben and Nils Strodthoff | PredDiff: Explanations and Interactions from Conditional Expectations | 35 pages, 20 Figures, accepted journal version, code available at
https://github.com/AI4HealthUOL/preddiff-interactions | Artificial Intelligence 312 (2022) 103774 | 10.1016/j.artint.2022.103774 | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | PredDiff is a model-agnostic, local attribution method that is firmly rooted
in probability theory. Its simple intuition is to measure prediction changes
while marginalizing features. In this work, we clarify properties of PredDiff
and its close connection to Shapley values. We stress important differences
between classification and regression, which require a specific treatment
within both formalisms. We extend PredDiff by introducing a new, well-founded
measure for interaction effects between arbitrary feature subsets. The study of
interaction effects represents an inevitable step towards a comprehensive
understanding of black-box models and is particularly important for science
applications. Equipped with our novel interaction measure, PredDiff is a
promising model-agnostic approach for obtaining reliable, numerically
inexpensive and theoretically sound attributions.
| [
{
"created": "Fri, 26 Feb 2021 14:46:47 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Apr 2021 14:27:07 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Oct 2021 08:54:14 GMT",
"version": "v3"
},
{
"created": "Thu, 8 Sep 2022 14:18:50 GMT",
"version": "v4"
}
] | 2023-07-12 | [
[
"Blücher",
"Stefan",
""
],
[
"Vielhaben",
"Johanna",
""
],
[
"Strodthoff",
"Nils",
""
]
] |
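The preceding record (arXiv:2102.13519) describes PredDiff as measuring prediction changes while marginalizing features. The sketch below is only a rough illustration of that intuition: it uses an unconditional marginal (sampling replacement values from the data) rather than the conditional expectation the method is built on, and `model_fn`, the toy model, and the sample sizes are illustrative assumptions.

```python
import numpy as np

def relevance(model_fn, X, x, feature, n_samples=200, seed=0):
    """Prediction at x minus the mean prediction with one feature marginalized."""
    rng = np.random.default_rng(seed)
    baseline = model_fn(x[None, :])[0]
    perturbed = np.tile(x, (n_samples, 1))
    perturbed[:, feature] = rng.choice(X[:, feature], size=n_samples)  # replace feature
    return baseline - model_fn(perturbed).mean()

# Toy regression model y = 3*x0 + 0*x1: feature 0 should dominate the attribution.
model_fn = lambda A: 3.0 * A[:, 0]
X = np.random.default_rng(1).normal(size=(500, 2))
x = np.array([1.0, 1.0])
print(relevance(model_fn, X, x, 0), relevance(model_fn, X, x, 1))
```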
2102.13558 | Hao Zhang | Hao Zhang, Aixin Sun, Wei Jing, Liangli Zhen, Joey Tianyi Zhou, Rick
Siow Mong Goh | Natural Language Video Localization: A Revisit in Span-based Question
Answering Framework | 15 pages, 18 figures, and 10 tables. Accepted by IEEE Transactions on
Pattern Analysis and Machine Intelligence (TPAMI). arXiv admin note:
substantial text overlap with arXiv:2004.13931 | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2021 | 10.1109/TPAMI.2021.3060449 | TPAMI-2020-09-1337.R1 | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural Language Video Localization (NLVL) aims to locate a target moment
from an untrimmed video that semantically corresponds to a text query. Existing
approaches mainly solve the NLVL problem from the perspective of computer
vision by formulating it as ranking, anchor, or regression tasks. These methods
suffer from large performance degradation when localizing on long videos. In
this work, we address the NLVL from a new perspective, i.e., span-based
question answering (QA), by treating the input video as a text passage. We
propose a video span localizing network (VSLNet), on top of the standard
span-based QA framework (named VSLBase), to address NLVL. VSLNet tackles the
differences between NLVL and span-based QA through a simple yet effective
query-guided highlighting (QGH) strategy. QGH guides VSLNet to search for the
matching video span within a highlighted region. To address the performance
degradation on long videos, we further extend VSLNet to VSLNet-L by applying a
multi-scale split-and-concatenation strategy. VSLNet-L first splits the
untrimmed video into short clip segments; then, it predicts which clip segment
contains the target moment and suppresses the importance of other segments.
Finally, the clip segments are concatenated, with different confidences, to
locate the target moment accurately. Extensive experiments on three benchmark
datasets show that the proposed VSLNet and VSLNet-L outperform the
state-of-the-art methods; VSLNet-L addresses the issue of performance
degradation on long videos. Our study suggests that the span-based QA framework
is an effective strategy to solve the NLVL problem.
| [
{
"created": "Fri, 26 Feb 2021 15:57:59 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Mar 2021 07:58:49 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Mar 2021 09:42:19 GMT",
"version": "v3"
}
] | 2021-03-03 | [
[
"Zhang",
"Hao",
""
],
[
"Sun",
"Aixin",
""
],
[
"Jing",
"Wei",
""
],
[
"Zhen",
"Liangli",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Goh",
"Rick Siow Mong",
""
]
] |
2102.13640 | Jakob Heiss | Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef
Teichmann | NOMU: Neural Optimization-based Model Uncertainty | 9 pages + appendix | Proceedings of the 39th International Conference on Machine
Learning, PMLR 162:8708-8758, 2022 | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study methods for estimating model uncertainty for neural networks (NNs)
in regression. To isolate the effect of model uncertainty, we focus on a
noiseless setting with scarce training data. We introduce five important
desiderata regarding model uncertainty that any method should satisfy. However,
we find that established benchmarks often fail to reliably capture some of
these desiderata, even those that are required by Bayesian theory. To address
this, we introduce a new approach for capturing model uncertainty for NNs,
which we call Neural Optimization-based Model Uncertainty (NOMU). The main idea
of NOMU is to design a network architecture consisting of two connected
sub-NNs, one for model prediction and one for model uncertainty, and to train
it using a carefully-designed loss function. Importantly, our design enforces
that NOMU satisfies our five desiderata. Due to its modular architecture, NOMU
can provide model uncertainty for any given (previously trained) NN if given
access to its training data. We evaluate NOMU in various regression tasks and
noiseless Bayesian optimization (BO) with costly evaluations. In regression,
NOMU performs at least as well as state-of-the-art methods. In BO, NOMU even
outperforms all considered benchmarks.
| [
{
"created": "Fri, 26 Feb 2021 18:34:43 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 16:53:19 GMT",
"version": "v2"
},
{
"created": "Mon, 31 May 2021 22:00:03 GMT",
"version": "v3"
},
{
"created": "Sat, 23 Jul 2022 20:29:03 GMT",
"version": "v4"
},
{
"created": "Sat, 11 Mar 2023 21:27:41 GMT",
"version": "v5"
}
] | 2023-03-14 | [
[
"Heiss",
"Jakob",
""
],
[
"Weissteiner",
"Jakob",
""
],
[
"Wutte",
"Hanna",
""
],
[
"Seuken",
"Sven",
""
],
[
"Teichmann",
"Josef",
""
]
] |
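The preceding record (arXiv:2102.13640) describes an architecture with two connected sub-networks, one for the model prediction and one for the model uncertainty. The PyTorch sketch below shows only that two-headed layout; the way the heads are connected, the layer sizes, and especially the paper's carefully designed loss are not reproduced and should be treated as placeholders.

```python
import torch
import torch.nn as nn

class TwoHeadedNet(nn.Module):
    """Prediction sub-network plus a non-negative uncertainty sub-network (sketch)."""
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.f_hat = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))                 # prediction head
        self.sigma = nn.Sequential(nn.Linear(in_dim + 1, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1), nn.Softplus())  # uncertainty >= 0

    def forward(self, x):
        y_hat = self.f_hat(x)
        # Feeding the (detached) prediction into the uncertainty head is one simple
        # way to make the two sub-networks "connected"; it is an illustrative
        # choice, not the paper's design.
        u = self.sigma(torch.cat([x, y_hat.detach()], dim=-1))
        return y_hat, u

net = TwoHeadedNet()
y_hat, u = net(torch.linspace(-1.0, 1.0, 5).unsqueeze(-1))
print(y_hat.shape, u.shape)  # both (5, 1)
```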
2103.00053 | Reyhan Kevser Keser | Reyhan Kevser Keser, Aydin Ayanzadeh, Omid Abdollahi Aghdam, Caglar
Kilcioglu, Behcet Ugur Toreyin, Nazim Kemal Ure | PURSUhInT: In Search of Informative Hint Points Based on Layer
Clustering for Knowledge Distillation | Our codes are published on Code Ocean, where the link to our codes
is: https://codeocean.com/capsule/4245746/tree/v1 | Expert Systems with Applications, Volume 213, Part B, March 2023,
119040 | 10.1016/j.eswa.2022.119040 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | One of the most efficient methods for model compression is hint distillation,
where the student model is injected with information (hints) from several
different layers of the teacher model. Although the selection of hint points
can drastically alter the compression performance, conventional distillation
approaches overlook this fact and use the same hint points as in the early
studies. Therefore, we propose a clustering based hint selection methodology,
where the layers of the teacher model are clustered with respect to several metrics
and the cluster centers are used as the hint points. Our method is applicable
for any student network, once it is applied on a chosen teacher network. The
proposed approach is validated in CIFAR-100 and ImageNet datasets, using
various teacher-student pairs and numerous hint distillation methods. Our
results show that hint points selected by our algorithm result in superior
compression performance compared to state-of-the-art knowledge distillation
algorithms on the same student models and datasets.
| [
{
"created": "Fri, 26 Feb 2021 21:18:34 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Feb 2022 20:50:30 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Nov 2022 22:41:42 GMT",
"version": "v3"
}
] | 2022-11-07 | [
[
"Keser",
"Reyhan Kevser",
""
],
[
"Ayanzadeh",
"Aydin",
""
],
[
"Aghdam",
"Omid Abdollahi",
""
],
[
"Kilcioglu",
"Caglar",
""
],
[
"Toreyin",
"Behcet Ugur",
""
],
[
"Ure",
"Nazim Kemal",
""
]
] |
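The preceding record (arXiv:2103.00053) selects hint points by clustering the teacher's layers with respect to several metrics and taking the cluster centers. A hedged sketch of that selection step is given below; the per-layer statistics used as "metrics", the k-means setup, and the toy activations are illustrative assumptions rather than the paper's choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_hint_layers(layer_activations, n_hints=3, seed=0):
    """Cluster per-layer statistics and return the layers closest to each center."""
    stats = np.array([[a.mean(), a.std(), np.abs(a).mean()]   # crude per-layer metrics
                      for a in layer_activations])
    km = KMeans(n_clusters=n_hints, n_init=10, random_state=seed).fit(stats)
    hints = {int(np.argmin(np.linalg.norm(stats - c, axis=1)))
             for c in km.cluster_centers_}
    return sorted(hints)

layers = [np.random.default_rng(i).normal(scale=1.0 + 0.3 * i, size=(64, 32))
          for i in range(12)]                      # stand-in teacher activations
print(select_hint_layers(layers))                  # layer indices to use as hints
```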
2103.00086 | Moshiur R Farazi | Ce Wang, Moshiur Farazi, Nick Barnes | Recursive Training for Zero-Shot Semantic Segmentation | null | 2021 International Joint Conference on Neural Networks (IJCNN) | 10.1109/IJCNN52387.2021.9534049 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | General purpose semantic segmentation relies on a backbone CNN network to
extract discriminative features that help classify each image pixel into a
'seen' object class (i.e., the object classes available during training) or a
background class. Zero-shot semantic segmentation is a challenging task that
requires a computer vision model to identify image pixels belonging to an
object class which it has never seen before. Equipping a general purpose
semantic segmentation model to separate image pixels of 'unseen' classes from
the background remains an open challenge. Some recent models have approached
this problem by fine-tuning the final pixel classification layer of a semantic
segmentation model for a Zero-Shot setting, but struggle to learn
discriminative features due to the lack of supervision. We propose a recursive
training scheme to supervise the retraining of a semantic segmentation model
for a zero-shot setting using a pseudo-feature representation. To this end, we
propose a Zero-Shot Maximum Mean Discrepancy (ZS-MMD) loss that weighs high
confidence outputs of the pixel classification layer as a pseudo-feature
representation, and feeds it back to the generator. By closing-the-loop on the
generator end, we provide supervision during retraining that in turn helps the
model learn a more discriminative feature representation for 'unseen' classes.
We show that using our recursive training and ZS-MMD loss, our proposed model
achieves state-of-the-art performance on the Pascal-VOC 2012 dataset and
Pascal-Context dataset.
| [
{
"created": "Fri, 26 Feb 2021 23:44:16 GMT",
"version": "v1"
}
] | 2021-10-06 | [
[
"Wang",
"Ce",
""
],
[
"Farazi",
"Moshiur",
""
],
[
"Barnes",
"Nick",
""
]
] |
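The preceding record (arXiv:2103.00086) introduces a Zero-Shot Maximum Mean Discrepancy (ZS-MMD) loss that weighs high-confidence pseudo-features. The confidence weighting is not reproduced below; the sketch only shows a plain RBF-kernel MMD between two feature batches, with the kernel bandwidth and tensor shapes chosen arbitrarily for illustration.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Unweighted maximum mean discrepancy with an RBF kernel (illustrative)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

pseudo_features = torch.randn(64, 16)          # e.g. high-confidence classifier outputs
generated_features = torch.randn(64, 16) + 0.5 # e.g. generator features
print(mmd_rbf(pseudo_features, generated_features).item())
```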
2103.00119 | Ali Pourramezan Fard | Ali Pourramezan Fard, Hojjat Abdollahi, Mohammad Mahoor | ASMNet: a Lightweight Deep Neural Network for Face Alignment and Pose
Estimation | Accepted at CVPR 2021 Biometrics Workshop, jointly with the Workshop
on Analysis and Modeling of Faces and Gestures | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, 2021, pp. 1521-1530 | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Active Shape Model (ASM) is a statistical model of object shapes that
represents a target structure. ASM can guide machine learning algorithms to fit
a set of points representing an object (e.g., face) onto an image. This paper
presents a lightweight Convolutional Neural Network (CNN) architecture with a
loss function being assisted by ASM for face alignment and estimating head pose
in the wild. We use ASM to first guide the network towards learning a smoother
distribution of the facial landmark points. Inspired by transfer learning,
during the training process, we gradually harden the regression problem and
guide the network towards learning the original landmark points distribution.
We define multi-tasks in our loss function that are responsible for detecting
facial landmark points as well as estimating the face pose. Learning multiple
correlated tasks simultaneously builds synergy and improves the performance of
individual tasks. We compare the performance of our proposed model called
ASMNet with MobileNetV2 (which is about 2 times bigger than ASMNet) in both the
face alignment and pose estimation tasks. Experimental results on challenging
datasets show that by using the proposed ASM assisted loss function, the ASMNet
performance is comparable with MobileNetV2 in the face alignment task. In
addition, for face pose estimation, ASMNet performs much better than
MobileNetV2. ASMNet achieves an acceptable performance for facial landmark
points detection and pose estimation while having a significantly smaller
number of parameters and floating-point operations compared to many CNN-based
models.
| [
{
"created": "Sat, 27 Feb 2021 03:46:54 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 18:40:12 GMT",
"version": "v2"
},
{
"created": "Fri, 7 May 2021 17:44:58 GMT",
"version": "v3"
}
] | 2021-06-17 | [
[
"Fard",
"Ali Pourramezan",
""
],
[
"Abdollahi",
"Hojjat",
""
],
[
"Mahoor",
"Mohammad",
""
]
] |
2103.00145 | Fei Li | Fei Li, Shiwei Fan, Pengzhen Chen, and Xiangxu Li | Pedestrian Motion State Estimation From 2D Pose | null | 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 1682-1687 | 10.1109/IV47402.2020.9304784 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic violation and the flexible and changeable nature of pedestrians make
it more difficult to predict pedestrian behavior or intention, which might be a
potential safety hazard on the road. Pedestrian motion state (such as walking
and standing) directly affects or reflects its intention. In combination with
pedestrian motion state and other influencing factors, pedestrian intention can
be predicted to avoid unnecessary accidents. In this paper, the pedestrian is
treated as a non-rigid object, which can be represented by a set of
two-dimensional key points, and the movement of each key point relative to the torso
is introduced as micro motion. Static and dynamic micro motion features, such
as position, angle and distance, and their differential calculations in time
domain, are used to describe its motion pattern. A gated recurrent neural
network based seq2seq model is used to learn the dependence of motion state
transition on previous information; finally, the pedestrian motion state is estimated via a
softmax classifier. The proposed method only needs the previous hidden state of
the GRU and the current feature to evaluate the probability of the current
motion state, and it is computationally efficient to deploy on vehicles. This paper verifies the
proposed algorithm on the JAAD public dataset, and the accuracy is improved by
11.6% compared with the existing method.
| [
{
"created": "Sat, 27 Feb 2021 07:00:06 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Li",
"Fei",
""
],
[
"Fan",
"Shiwei",
""
],
[
"Chen",
"Pengzhen",
""
],
[
"Li",
"Xiangxu",
""
]
] |
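The preceding record (arXiv:2103.00145) estimates the motion state with a GRU-based model over per-frame micro-motion features and a softmax classifier. The PyTorch sketch below mirrors only that high-level pipeline; the feature dimension, hidden size, number of states, and the use of the last time step are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MotionStateGRU(nn.Module):
    """GRU over per-frame 2D-pose (micro-motion) features, softmax state classifier."""
    def __init__(self, feat_dim=36, hidden=64, n_states=2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)               # e.g. walking vs. standing

    def forward(self, feats, h0=None):
        out, h = self.gru(feats, h0)                           # feats: (batch, time, feat_dim)
        probs = torch.softmax(self.head(out[:, -1]), dim=-1)   # classify from the last step
        return probs, h                                        # reuse h for the next window

model = MotionStateGRU()
probs, h = model(torch.randn(4, 16, 36))
print(probs.shape)  # (4, 2) state probabilities per sequence
```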
2103.00167 | Dirk Fahland | Dirk Fahland, Vadim Denisov, Wil. M.P. van der Aalst | Inferring Unobserved Events in Systems With Shared Resources and Queues | Final formatted version at Fundamenta Informatica | Fundamenta Informaticae, Volume 183, Issues 3-4: Petri Nets 2020
(December 23, 2021) fi:7232 | null | null | cs.DC cs.AI cs.FL cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To identify the causes of performance problems or to predict process
behavior, it is essential to have correct and complete event data. This is
particularly important for distributed systems with shared resources, e.g., one
case can block another case competing for the same machine, leading to
inter-case dependencies in performance. However, due to a variety of reasons,
real-life systems often record only a subset of all events taking place. To
understand and analyze the behavior and performance of processes with shared
resources, we aim to reconstruct bounds for timestamps of events in a case that
must have happened but were not recorded by inference over events in other
cases in the system. We formulate and solve the problem by systematically
introducing multi-entity concepts in event logs and process models. We
introduce a partial-order based model of a multi-entity event log and a
corresponding compositional model for multi-entity processes. We define
PQR-systems as a special class of multi-entity processes with shared resources
and queues. We then study the problem of inferring from an incomplete event log
unobserved events and their timestamps that are globally consistent with a
PQR-system. We solve the problem by reconstructing unobserved traces of
resources and queues according to the PQR-model and derive bounds for their
timestamps using a linear program. While the problem is illustrated for
material handling systems like baggage handling systems in airports, the
approach can be applied to other settings where recording is incomplete. The
ideas have been implemented in ProM and were evaluated using both synthetic and
real-life event logs.
| [
{
"created": "Sat, 27 Feb 2021 09:34:01 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Oct 2021 08:30:23 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Dec 2021 15:24:15 GMT",
"version": "v3"
}
] | 2023-06-22 | [
[
"Fahland",
"Dirk",
""
],
[
"Denisov",
"Vadim",
""
],
[
"van der Aalst",
"Wil. M. P.",
""
]
] |
2103.00188 | Mengxi Liu | Mengxi Liu, Qian Shi, Andrea Marinoni, Da He, Xiaoping Liu, Liangpei
Zhang | Super-resolution-based Change Detection Network with Stacked Attention
Module for Images with Different Resolutions | null | IEEE Transactions on Geoscience and Remote Sensing. 2021 | 10.1109/TGRS.2021.3091758 | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Change detection, which aims to distinguish surface changes based on
bi-temporal images, plays a vital role in ecological protection and urban
planning. Since high resolution (HR) images cannot be typically acquired
continuously over time, bi-temporal images with different resolutions are often
adopted for change detection in practical applications. Traditional
subpixel-based methods for change detection using images with different
resolutions may lead to substantial error accumulation when HR images are
employed; this is because of intraclass heterogeneity and interclass
similarity. Therefore, it is necessary to develop a novel method for change
detection using images with different resolutions, that is more suitable for HR
images. To this end, we propose a super-resolution-based change detection
network (SRCDNet) with a stacked attention module. The SRCDNet employs a super
resolution (SR) module containing a generator and a discriminator to directly
learn SR images through adversarial learning and overcome the resolution
difference between bi-temporal images. To enhance the useful information in
multi-scale features, a stacked attention module consisting of five
convolutional block attention modules (CBAMs) is integrated to the feature
extractor. The final change map is obtained through a metric learning-based
change decision module, wherein a distance map between bi-temporal features is
calculated. The experimental results demonstrate the superiority of the
proposed method, which not only outperforms all baselines (with the highest F1
scores of 87.40% on the building change detection dataset and 92.94% on the
change detection dataset) but also obtains the best accuracies on experiments
performed with images having a 4x and 8x resolution difference. The source code
of SRCDNet will be available at https://github.com/liumency/SRCDNet.
| [
{
"created": "Sat, 27 Feb 2021 11:17:40 GMT",
"version": "v1"
}
] | 2021-06-24 | [
[
"Liu",
"Mengxi",
""
],
[
"Shi",
"Qian",
""
],
[
"Marinoni",
"Andrea",
""
],
[
"He",
"Da",
""
],
[
"Liu",
"Xiaoping",
""
],
[
"Zhang",
"Liangpei",
""
]
] |
2103.00232 | V\'it Novotn\'y | Eniafe Festus Ayetiran (1), Petr Sojka (1), V\'it Novotn\'y (1) ((1)
Faculty of Informatics Masaryk University) | EDS-MEMBED: Multi-sense embeddings based on enhanced distributional
semantic structures via a graph walk over word senses | null | Knowledge-Based Systems. 219 (2021) 106902 | 10.1016/j.knosys.2021.106902 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several language applications often require word semantics as a core part of
their processing pipeline, either as precise meaning inference or semantic
similarity. Multi-sense embeddings (M-SE) can be exploited for this important
requirement. M-SE seeks to represent each word by its distinct senses in
order to resolve the conflation of meanings of words as used in different
contexts. Previous works usually approach this task by training a model on a
large corpus and often ignore the effect and usefulness of the semantic
relations offered by lexical resources. However, even with large training data,
coverage of all possible word senses is still an issue. In addition, a
considerable percentage of contextual semantic knowledge is never learned
because a huge number of possible distributional semantic structures are never
explored. In this paper, we leverage the rich semantic structures in WordNet
using a graph-theoretic walk technique over word senses to enhance the quality
of multi-sense embeddings. This algorithm composes enriched texts from the
original texts. Furthermore, we derive new distributional semantic similarity
measures for M-SE from prior ones. We adapt these measures to word sense
disambiguation (WSD) aspect of our experiment. We report evaluation results on
11 benchmark datasets involving WSD and Word Similarity tasks and show that our
method for enhancing distributional semantic structures improves embeddings
quality on the baselines. Despite the small training data, it achieves
state-of-the-art performance on some of the datasets.
| [
{
"created": "Sat, 27 Feb 2021 14:36:55 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Ayetiran",
"Eniafe Festus",
""
],
[
"Sojka",
"Petr",
""
],
[
"Novotný",
"Vít",
""
]
] |
2103.00324 | Manuel Sam Ribeiro | Manuel Sam Ribeiro, Joanne Cleland, Aciel Eshky, Korin Richmond, Steve
Renals | Exploiting ultrasound tongue imaging for the automatic detection of
speech articulation errors | 15 pages, 9 figures, 6 tables | Speech Communication, Volume 128, April 2021, Pages 24-34 | 10.1016/j.specom.2021.02.001 | null | eess.AS cs.CL cs.SD q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Speech sound disorders are a common communication impairment in childhood.
Because speech disorders can negatively affect the lives and the development of
children, clinical intervention is often recommended. To help with diagnosis
and treatment, clinicians use instrumented methods such as spectrograms or
ultrasound tongue imaging to analyse speech articulations. Analysis with these
methods can be laborious for clinicians, therefore there is growing interest in
its automation. In this paper, we investigate the contribution of ultrasound
tongue imaging for the automatic detection of speech articulation errors. Our
systems are trained on typically developing child speech and augmented with a
database of adult speech using audio and ultrasound. Evaluation on typically
developing speech indicates that pre-training on adult speech and jointly using
ultrasound and audio gives the best results with an accuracy of 86.9%. To
evaluate on disordered speech, we collect pronunciation scores from experienced
speech and language therapists, focusing on cases of velar fronting and gliding
of /r/. The scores show good inter-annotator agreement for velar fronting, but
not for gliding errors. For automatic velar fronting error detection, the best
results are obtained when jointly using ultrasound and audio. The best system
correctly detects 86.6% of the errors identified by experienced clinicians. Out
of all the segments identified as errors by the best system, 73.2% match errors
identified by clinicians. Results on automatic gliding detection are harder to
interpret due to poor inter-annotator agreement, but appear promising. Overall
findings suggest that automatic detection of speech articulation errors has
potential to be integrated into ultrasound intervention software for
automatically quantifying progress during speech therapy.
| [
{
"created": "Sat, 27 Feb 2021 21:16:45 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Ribeiro",
"Manuel Sam",
""
],
[
"Cleland",
"Joanne",
""
],
[
"Eshky",
"Aciel",
""
],
[
"Richmond",
"Korin",
""
],
[
"Renals",
"Steve",
""
]
] |
2103.00355 | Weixiao Gao | Weixiao Gao, Liangliang Nan, Bas Boom, Hugo Ledoux | SUM: A Benchmark Dataset of Semantic Urban Meshes | 27 pages, 14 figures | ISPRS Journal of Photogrammetry and Remote Sensing, Volume 179,
September 2021, Pages 108-120 | 10.1016/j.isprsjprs.2021.07.008 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in data acquisition technology allow us to collect 3D
texture meshes quickly. Those can help us understand and analyse the urban
environment, and as a consequence are useful for several applications like
spatial analysis and urban planning. Semantic segmentation of texture meshes
through deep learning methods can enhance this understanding, but it requires a
lot of labelled data. The contributions of this work are threefold: (1) a new
benchmark dataset of semantic urban meshes, (2) a novel semi-automatic
annotation framework, and (3) an annotation tool for 3D meshes. In particular,
our dataset covers about 4 km2 in Helsinki (Finland), with six classes, and we
estimate that we save about 600 hours of labelling work using our annotation
framework, which includes initial segmentation and interactive refinement. We
also compare the performance of several state-of-the-art 3D semantic
segmentation methods on the new benchmark dataset. Other researchers can use
our results to train their networks: the dataset is publicly available, and the
annotation tool is released as open-source.
| [
{
"created": "Sat, 27 Feb 2021 23:26:21 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jul 2021 14:25:37 GMT",
"version": "v2"
}
] | 2022-02-08 | [
[
"Gao",
"Weixiao",
""
],
[
"Nan",
"Liangliang",
""
],
[
"Boom",
"Bas",
""
],
[
"Ledoux",
"Hugo",
""
]
] |
2103.00356 | Lei Gao | Lei Gao, Lin Qi, Ling Guan | Online Behavioral Analysis with Application to Emotion State
Identification | null | IEEE Intelligent Systems, 2016 | 10.1109/MIS.2016.26 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel discriminative model for online behavioral
analysis with application to emotion state identification. The proposed model
is able to extract more discriminative characteristics from behavioral data
effectively and find the direction of optimal projection efficiently to satisfy
requirements of online data analysis, leading to better utilization of the
behavioral information to produce more accurate recognition results.
| [
{
"created": "Sat, 27 Feb 2021 23:53:52 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Gao",
"Lei",
""
],
[
"Qi",
"Lin",
""
],
[
"Guan",
"Ling",
""
]
] |
2103.00359 | Lei Gao | Lei Gao, Rui Zhang, Lin Qi, Enqing Chen, and Ling Guan | The Labeled Multiple Canonical Correlation Analysis for Information
Fusion | null | IEEE Transactions on Multimedia, 2019 | 10.1109/TMM.2018.2859590 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of multimodal information fusion is to mathematically analyze
information carried in different sources and create a new representation which
will be more effectively utilized in pattern recognition and other multimedia
information processing tasks. In this paper, we introduce a new method for
multimodal information fusion and representation based on the Labeled Multiple
Canonical Correlation Analysis (LMCCA). By incorporating class label
information of the training samples, the proposed LMCCA ensures that the fused
features carry discriminative characteristics of the multimodal information
representations, and are capable of providing superior recognition performance.
We implement a prototype of LMCCA to demonstrate its effectiveness on
handwritten digit recognition, face recognition and object recognition utilizing
multiple features, bimodal human emotion recognition involving information from
both audio and visual domains. The generic nature of LMCCA allows it to take as
input features extracted by any means, including those by deep learning (DL)
methods. Experimental results show that the proposed method enhanced the
performance of both statistical machine learning (SML) methods, and methods
based on DL.
| [
{
"created": "Sun, 28 Feb 2021 00:13:36 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Gao",
"Lei",
""
],
[
"Zhang",
"Rui",
""
],
[
"Qi",
"Lin",
""
],
[
"Chen",
"Enqing",
""
],
[
"Guan",
"Ling",
""
]
] |
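The preceding record (arXiv:2103.00359) fuses multimodal features with a Labeled Multiple Canonical Correlation Analysis. The label-aware, multi-view formulation is not reproduced here; the sketch below only shows plain two-view CCA fusion with scikit-learn on synthetic "audio" and "visual" features, as a baseline illustration of correlation-based fusion, with all shapes and noise levels chosen arbitrarily.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=(n, 2))                                    # shared signal
audio = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(n, 10))
visual = latent @ rng.normal(size=(2, 15)) + 0.1 * rng.normal(size=(n, 15))

cca = CCA(n_components=2)                   # plain, unlabeled two-view CCA
z_audio, z_visual = cca.fit_transform(audio, visual)

fused = np.hstack([z_audio, z_visual])      # simple fused representation
print(fused.shape)                          # (200, 4), usable by a downstream classifier
```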
2103.00364 | Rohan Shad | Rohan Shad, Nicolas Quach, Robyn Fong, Patpilai Kasinpila, Cayley
Bowles, Miguel Castro, Ashrith Guha, Eddie Suarez, Stefan Jovinge, Sangjin
Lee, Theodore Boeve, Myriam Amsallem, Xiu Tang, Francois Haddad, Yasuhiro
Shudo, Y. Joseph Woo, Jeffrey Teuteberg, John P. Cunningham, Curt P.
Langlotz, William Hiesinger | Predicting post-operative right ventricular failure using video-based
deep learning | 12 pages, 3 figures | Nat Commun 12, 5192 (2021) | 10.1038/s41467-021-25503-9 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-invasive and cost effective in nature, the echocardiogram allows for a
comprehensive assessment of the cardiac musculature and valves. Despite
progressive improvements over the decades, the rich temporally resolved data in
echocardiography videos remain underutilized. Human reads of echocardiograms
reduce the complex patterns of cardiac wall motion, to a small list of
measurements of heart function. Furthermore, all modern echocardiography
artificial intelligence (AI) systems are similarly limited by design -
automating measurements of the same reductionist metrics rather than utilizing
the wealth of data embedded within each echo study. This underutilization is
most evident in situations where clinical decision making is guided by
subjective assessments of disease acuity, and tools that predict disease onset
within clinically actionable timeframes are unavailable. Predicting the
likelihood of developing post-operative right ventricular failure (RV failure)
in the setting of mechanical circulatory support is one such clinical example.
To address this, we developed a novel video AI system trained to predict
post-operative right ventricular failure (RV failure), using the full
spatiotemporal density of information from pre-operative echocardiography
scans. We achieve an AUC of 0.729, specificity of 52% at 80% sensitivity and
46% sensitivity at 80% specificity. Furthermore, we show that our ML system
significantly outperforms a team of human experts tasked with predicting RV
failure on independent clinical evaluation. Finally, the methods we describe
are generalizable to any cardiac clinical decision support application where
treatment or patient selection is guided by qualitative echocardiography
assessments.
| [
{
"created": "Sun, 28 Feb 2021 00:58:53 GMT",
"version": "v1"
}
] | 2021-09-02 | [
[
"Shad",
"Rohan",
""
],
[
"Quach",
"Nicolas",
""
],
[
"Fong",
"Robyn",
""
],
[
"Kasinpila",
"Patpilai",
""
],
[
"Bowles",
"Cayley",
""
],
[
"Castro",
"Miguel",
""
],
[
"Guha",
"Ashrith",
""
],
[
"Suarez",
"Eddie",
""
],
[
"Jovinge",
"Stefan",
""
],
[
"Lee",
"Sangjin",
""
],
[
"Boeve",
"Theodore",
""
],
[
"Amsallem",
"Myriam",
""
],
[
"Tang",
"Xiu",
""
],
[
"Haddad",
"Francois",
""
],
[
"Shudo",
"Yasuhiro",
""
],
[
"Woo",
"Y. Joseph",
""
],
[
"Teuteberg",
"Jeffrey",
""
],
[
"Cunningham",
"John P.",
""
],
[
"Langlotz",
"Curt P.",
""
],
[
"Hiesinger",
"William",
""
]
] |
2103.00380 | Abheesht Sharma | Abheesht Sharma and Harshit Pandey | LRG at TREC 2020: Document Ranking with XLNet-Based Models | Published at TREC 2020 | In Proceedings of the Twenty-Ninth Text REtrieval Conference (TREC
2020) | null | null | cs.IR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Establishing a good information retrieval system in popular mediums of
entertainment is a quickly growing area of investigation for companies and
researchers alike. We delve into the domain of information retrieval for
podcasts. In Spotify's Podcast Challenge, we are given a user's query with a
description to find the most relevant short segment from the given dataset
having all the podcasts. Previous techniques that rely solely on classical
Information Retrieval (IR) methods perform poorly when descriptive queries
are presented. On the other hand, models which exclusively rely on large neural
networks tend to perform better. The downside to this technique is that a
considerable amount of time and computing power are required to infer the
result. We experiment with two hybrid models which first filter out the best
podcasts based on the user's query with a classical IR technique, and then perform
re-ranking on the shortlisted documents based on the detailed description using
a transformer-based model.
| [
{
"created": "Sun, 28 Feb 2021 03:04:29 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Mar 2021 13:49:17 GMT",
"version": "v2"
}
] | 2021-03-09 | [
[
"Sharma",
"Abheesht",
""
],
[
"Pandey",
"Harshit",
""
]
] |
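The preceding record (arXiv:2103.00380) filters candidates with a classical IR technique and then re-ranks the shortlist with a transformer-based model. The sketch below keeps that two-stage shape but substitutes a TF-IDF first stage and a trivial overlap-based `rerank_fn` where the transformer scorer would sit; the stage choices, scores, and toy documents are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def hybrid_search(query, docs, shortlist_size=3, rerank_fn=None):
    """Stage 1: TF-IDF shortlist. Stage 2: re-rank the shortlist with rerank_fn."""
    vec = TfidfVectorizer().fit(docs)
    sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    shortlist = np.argsort(sims)[::-1][:shortlist_size]
    if rerank_fn is None:                        # stand-in for a transformer scorer
        rerank_fn = lambda q, d: len(set(q.lower().split()) & set(d.lower().split()))
    return sorted(shortlist, key=lambda i: rerank_fn(query, docs[i]), reverse=True)

docs = ["a podcast episode about deep learning for speech",
        "cooking tips for fresh pasta",
        "an interview on reinforcement learning",
        "deep learning podcast on neural retrieval"]
print(hybrid_search("deep learning podcast", docs))  # document indices, best match first
```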
2103.00483 | Chenyu Tian | Chenyu Tian, Yuchun Zhang, Zefeng Weng, Xiusen Gu, Wai Kin Victor Chan | Learning Large-scale Location Embedding From Human Mobility Trajectories
with Graphs | null | 2022 International Joint Conference on Neural Networks (IJCNN) | 10.1109/IJCNN55064.2022.9892698 | null | cs.SI cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An increasing amount of location-based service (LBS) data is being
accumulated and helps to study urban dynamics and human mobility. GPS
coordinates and other location indicators are normally low dimensional and only
represent spatial proximity, and are thus difficult to be effectively utilized by
machine learning models in Geo-aware applications. Existing location embedding
methods are mostly tailored for specific problems that are taken place within
areas of interest. When it comes to the scale of a city or even a country,
existing approaches always suffer from extensive computational cost and
significant data sparsity. Different from existing studies, we propose to learn
representations through a GCN-aided skip-gram model named GCN-L2V by
considering both spatial connection and human mobility. With a flow graph and a
spatial graph, it embeds context information into vector representations.
GCN-L2V is able to capture relationships among locations and provide a better
notion of similarity in a spatial environment. Across quantitative experiments
and case studies, we empirically demonstrate that representations learned by
GCN-L2V are effective. As far as we know, this is the first study that provides
a fine-grained location embedding at the city level using only LBS records.
GCN-L2V is a general-purpose embedding model with high flexibility and can be
applied in downstream Geo-aware applications.
| [
{
"created": "Tue, 23 Feb 2021 09:11:33 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 10:42:38 GMT",
"version": "v2"
}
] | 2022-10-11 | [
[
"Tian",
"Chenyu",
""
],
[
"Zhang",
"Yuchun",
""
],
[
"Weng",
"Zefeng",
""
],
[
"Gu",
"Xiusen",
""
],
[
"Chan",
"Wai Kin Victor",
""
]
] |
2103.00560 | Alexander Mathis | Maxime Vidal and Nathan Wolf and Beth Rosenberg and Bradley P. Harris
and Alexander Mathis | Perspectives on individual animal identification from biology and
computer vision | 12 pages, 1 figure, 2 boxes and 1 table | Integr Comp Biol . 2021 Oct 4;61(3):900-916 | 10.1093/icb/icab107 | null | cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying individual animals is crucial for many biological investigations.
In response to some of the limitations of current identification methods, new
automated computer vision approaches have emerged with strong performance.
Here, we review current advances of computer vision identification techniques
to provide both computer scientists and biologists with an overview of the
available tools and discuss their applications. We conclude by offering
recommendations for starting an animal identification project, illustrate
current limitations and propose how they might be addressed in the future.
| [
{
"created": "Sun, 28 Feb 2021 16:50:09 GMT",
"version": "v1"
}
] | 2021-12-22 | [
[
"Vidal",
"Maxime",
""
],
[
"Wolf",
"Nathan",
""
],
[
"Rosenberg",
"Beth",
""
],
[
"Harris",
"Bradley P.",
""
],
[
"Mathis",
"Alexander",
""
]
] |
2103.00686 | Divya Mahajan | Muhammad Adnan, Yassaman Ebrahimzadeh Maboud, Divya Mahajan, Prashant
J. Nair | Accelerating Recommendation System Training by Leveraging Popular
Choices | null | Proceedings of the VLDB Endowment, 2022 | 10.14778/3485450.3485462 | null | cs.IR cs.AI cs.AR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recommender models are commonly used to suggest relevant items to a user for
e-commerce and online advertisement-based applications. These models use
massive embedding tables to store numerical representation of items' and users'
categorical variables (memory intensive) and employ neural networks (compute
intensive) to generate final recommendations. Training these large-scale
recommendation models is evolving to require increasing data and compute
resources. The highly parallel neural network portion of these models can
benefit from GPU acceleration; however, large embedding tables often cannot fit
in the limited-capacity GPU device memory. Hence, this paper deep dives into
the semantics of training data and obtains insights about the feature access,
transfer, and usage patterns of these models. We observe that, due to the
popularity of certain inputs, the accesses to the embeddings are highly skewed
with a few embedding entries being accessed up to 10000x more. This paper
leverages this asymmetrical access pattern to offer a framework, called FAE,
and proposes a hot-embedding aware data layout for training recommender models.
This layout utilizes the scarce GPU memory for storing the highly accessed
embeddings, thus reduces the data transfers from CPU to GPU. At the same time,
FAE engages the GPU to accelerate the executions of these hot embedding
entries. Experiments on production-scale recommendation models with real
datasets show that FAE reduces the overall training time by 2.3x and 1.52x in
comparison to XDL CPU-only and XDL CPU-GPU execution while maintaining baseline
accuracy.
| [
{
"created": "Mon, 1 Mar 2021 01:43:26 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Mar 2021 19:16:36 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Sep 2021 19:08:26 GMT",
"version": "v3"
}
] | 2024-03-19 | [
[
"Adnan",
"Muhammad",
""
],
[
"Maboud",
"Yassaman Ebrahimzadeh",
""
],
[
"Mahajan",
"Divya",
""
],
[
"Nair",
"Prashant J.",
""
]
] |
2103.00718 | Keyu Li Miss | Keyu Li, Jian Wang, Yangxin Xu, Hao Qin, Dongsheng Liu, Li Liu, Max
Q.-H. Meng | Autonomous Navigation of an Ultrasound Probe Towards Standard Scan
Planes with Deep Reinforcement Learning | Accepted at ICRA 2021. Copyright may be transferred without notice,
after which this version may no longer be accessible | 2021 IEEE International Conference on Robotics and Automation
(ICRA), 2021, pp. 8302-8308 | 10.1109/ICRA48506.2021.9561295 | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous ultrasound (US) acquisition is an important yet challenging task,
as it involves interpretation of the highly complex and variable images and
their spatial relationships. In this work, we propose a deep reinforcement
learning framework to autonomously control the 6-D pose of a virtual US probe
based on real-time image feedback to navigate towards the standard scan planes
under the restrictions in real-world US scans. Furthermore, we propose a
confidence-based approach to encode the optimization of image quality in the
learning process. We validate our method in a simulation environment built with
real-world data collected in the US imaging of the spine. Experimental results
demonstrate that our method can perform reproducible US probe navigation
towards the standard scan plane with an accuracy of $4.91mm/4.65^\circ$ in the
intra-patient setting, and accomplish the task in the intra- and inter-patient
settings with a success rate of $92\%$ and $46\%$, respectively. The results
also show that the introduction of image quality optimization in our method can
effectively improve the navigation performance.
| [
{
"created": "Mon, 1 Mar 2021 03:09:17 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Aug 2021 01:42:18 GMT",
"version": "v2"
}
] | 2021-11-05 | [
[
"Li",
"Keyu",
""
],
[
"Wang",
"Jian",
""
],
[
"Xu",
"Yangxin",
""
],
[
"Qin",
"Hao",
""
],
[
"Liu",
"Dongsheng",
""
],
[
"Liu",
"Li",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
2103.00760 | Ukcheol Shin | Ukcheol Shin, Kyunghyun Lee, Seokju Lee, In So Kweon | Self-Supervised Depth and Ego-Motion Estimation for Monocular Thermal
Video Using Multi-Spectral Consistency Loss | 8 pages, Accepted by IEEE Robotics and Automation Letters (RA-L) with
ICRA 2022 option | IEEE Robotics and Automation Letters, vol. 7, no. 2, pp.
1103-1110, April 2022 | 10.1109/LRA.2021.3137895. | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A thermal camera can robustly capture thermal radiation images under harsh
light conditions such as night scenes, tunnels, and disaster scenarios.
However, despite this advantage, neither depth nor ego-motion estimation has
been actively explored for the thermal camera so far. In this
paper, we propose a self-supervised learning method for depth and ego-motion
estimation from thermal images. The proposed method exploits multi-spectral
consistency that consists of temperature and photometric consistency loss. The
temperature consistency loss provides a fundamental self-supervisory signal by
reconstructing clipped and colorized thermal images. Additionally, we design a
differentiable forward warping module that can transform the coordinate system
of the estimated depth map and relative pose from thermal camera to visible
camera. Based on the proposed module, the photometric consistency loss can
provide complementary self-supervision to networks. Networks trained with the
proposed method robustly estimate the depth and pose from monocular thermal
video under low-light and even zero-light conditions. To the best of our
knowledge, this is the first work to simultaneously estimate both depth and
ego-motion from monocular thermal video in a self-supervised manner.
| [
{
"created": "Mon, 1 Mar 2021 05:29:04 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 02:05:01 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Jul 2022 04:03:15 GMT",
"version": "v3"
}
] | 2022-07-08 | [
[
"Shin",
"Ukcheol",
""
],
[
"Lee",
"Kyunghyun",
""
],
[
"Lee",
"Seokju",
""
],
[
"Kweon",
"In So",
""
]
] |
2103.00778 | Mahsa Paknezhad | Mahsa Paknezhad, Cuong Phuc Ngo, Amadeus Aristo Winarto, Alistair
Cheong, Chuen Yang Beh, Jiayang Wu, Hwee Kuan Lee | Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis | null | Neurocomputing, 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite many proposed algorithms to provide robustness to deep learning (DL)
models, DL models remain susceptible to adversarial attacks. We hypothesize
that the adversarial vulnerability of DL models stems from two factors. The
first factor is data sparsity: in the high-dimensional input data
space, there exist large regions outside the support of the data distribution.
The second factor is the existence of many redundant parameters in the DL
models. Owing to these factors, different models are able to come up with
different decision boundaries with comparably high prediction accuracy. The
appearance of the decision boundaries in the space outside the support of the
data distribution does not affect the prediction accuracy of the model.
However, it makes an important difference in the adversarial robustness of the
model. We hypothesize that the ideal decision boundary is as far as possible
from the support of the data distribution. In this paper, we develop a training
framework to observe if DL models are able to learn such a decision boundary
spanning the space around the class distributions further from the data points
themselves. Semi-supervised learning was deployed during training by leveraging
unlabeled data generated in the space outside the support of the data
distribution. We measured adversarial robustness of the models trained using
this training framework against well-known adversarial attacks and by using
robustness metrics. We found that models trained using our framework, as well
as other regularization methods and adversarial training, support our hypothesis
of data sparsity and that models trained with these methods learn to have
decision boundaries more similar to the aforementioned ideal decision boundary.
The code for our training framework is available at
https://github.com/MahsaPaknezhad/AdversariallyRobustTraining.
| [
{
"created": "Mon, 1 Mar 2021 06:04:31 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Feb 2022 06:50:24 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Feb 2022 04:49:23 GMT",
"version": "v3"
}
] | 2022-02-21 | [
[
"Paknezhad",
"Mahsa",
""
],
[
"Ngo",
"Cuong Phuc",
""
],
[
"Winarto",
"Amadeus Aristo",
""
],
[
"Cheong",
"Alistair",
""
],
[
"Beh",
"Chuen Yang",
""
],
[
"Wu",
"Jiayang",
""
],
[
"Lee",
"Hwee Kuan",
""
]
] |
2103.00793 | Shuchang Lyu | Qi Zhao, Shuchang Lyu, Zhiwei Zhang, Ting-Bing Xu and Guangliang Cheng | Embedded Knowledge Distillation in Depth-Level Dynamic Neural Network | 4 pages, 3 figures; Accepted by CVPR2021 workshop: Dynamic Neural
Networks Meets Computer Vision | https://sites.google.com/view/cvpr2021-dnetcv | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real applications, different computation-resource devices need
different-depth networks (e.g., ResNet-18/34/50) with high-accuracy. Usually,
existing methods either design multiple networks and train them independently,
or construct depth-level/width-level dynamic neural networks for which it is
hard to guarantee the accuracy of each sub-net. In this article, we propose an elegant
Depth-Level Dynamic Neural Network (DDNN) that integrates different-depth sub-nets
of similar architectures. To improve the generalization of sub-nets, we design
the Embedded-Knowledge-Distillation (EKD) training mechanism for the DDNN to
implement knowledge transfer from the teacher (full-net) to multiple students
(sub-nets). Specifically, the Kullback-Leibler (KL) divergence is introduced to
constrain the posterior class probability consistency between full-net and
sub-nets, and self-attention distillation on the same resolution feature of
different depth is addressed to drive more abundant feature representations of
sub-nets. Thus, we can obtain multiple high-accuracy sub-nets simultaneously in
a DDNN via the online knowledge distillation in each training iteration without
extra computation cost. Extensive experiments on CIFAR-10/100, and ImageNet
datasets demonstrate that sub-nets in DDNN with EKD training achieve better
performance than individually training networks while preserving the original
performance of full-nets.
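As a rough illustration of the distillation term this abstract describes, the following PyTorch-style sketch adds a temperature-scaled KL divergence between each sub-net's class posterior and the full-net's. The temperature, loss weighting, and the self-attention distillation component are not reproduced here; all names and values are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def ekd_kl_loss(subnet_logits, fullnet_logits, T=4.0):
    # KL term pulling a sub-net's class posterior toward the full-net's;
    # the temperature T is an illustrative choice, not the paper's value.
    p_teacher = F.softmax(fullnet_logits.detach() / T, dim=1)
    log_p_student = F.log_softmax(subnet_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def ddnn_step(logits_per_depth, targets):
    # logits_per_depth: list of (N, C) logits, one per exit depth,
    # with the deepest exit (full-net) last, acting as the teacher.
    fullnet_logits = logits_per_depth[-1]
    loss = F.cross_entropy(fullnet_logits, targets)
    for sub_logits in logits_per_depth[:-1]:
        loss = loss + F.cross_entropy(sub_logits, targets)
        loss = loss + ekd_kl_loss(sub_logits, fullnet_logits)
    return loss

# Toy usage with random stand-in logits: three depths, 8 samples, 10 classes.
logits = [torch.randn(8, 10, requires_grad=True) for _ in range(3)]
targets = torch.randint(0, 10, (8,))
print(ddnn_step(logits, targets))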
| [
{
"created": "Mon, 1 Mar 2021 06:35:31 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 09:49:16 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Aug 2021 13:18:04 GMT",
"version": "v3"
}
] | 2021-08-11 | [
[
"Zhao",
"Qi",
""
],
[
"Lyu",
"Shuchang",
""
],
[
"Zhang",
"Zhiwei",
""
],
[
"Xu",
"Ting-Bing",
""
],
[
"Cheng",
"Guangliang",
""
]
] |
2103.00833 | Thomas Pellegrini | Thomas Pellegrini (IRIT-SAMoVA), Timoth\'ee Masquelier (CERCO) | Fast threshold optimization for multi-label audio tagging using
Surrogate gradient learning | null | IEEE International Conference on Acoustics, Speech and Signal
Processing, Jun 2021, Toronto, Canada | null | null | cs.AI cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label audio tagging consists of assigning sets of tags to audio
recordings. At inference time, thresholds are applied on the confidence scores
outputted by a probabilistic classifier, in order to decide which classes are
detected active. In this work, we consider having at disposal a trained
classifier and we seek to automatically optimize the decision thresholds
according to a performance metric of interest, in our case F-measure
(micro-F1). We propose a new method, called SGL-Thresh for Surrogate Gradient
Learning of Thresholds, that makes use of gradient descent. Since F1 is not
differentiable, we propose to approximate the thresholding operation gradients
with the gradients of a sigmoid function. We report experiments on three
datasets, using state-of-the-art pre-trained deep neural networks. In all
cases, SGL-Thresh outperformed three other approaches: a default threshold
value (defThresh), a heuristic search algorithm, and a method estimating F1
gradients numerically. It reached 54.9\% F1 on AudioSet eval, compared to 50.7\%
with defThresh. SGL-Thresh is very fast and scalable to a large number of tags.
To facilitate reproducibility, data and source code in Pytorch are available
online: https://github.com/topel/SGL-Thresh
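A minimal sketch of the idea described above, not the authors' released code: the hard thresholding step is replaced by a sigmoid surrogate so that micro-F1 becomes differentiable with respect to the per-class thresholds, which are then learned by gradient ascent. The temperature, optimizer, and step count are illustrative assumptions.

```python
import torch

def soft_f1(scores, labels, thresholds, tau=0.01):
    # Replace the hard step 1[score > threshold] with a sigmoid surrogate
    # so micro-F1 becomes differentiable w.r.t. the thresholds.
    preds = torch.sigmoid((scores - thresholds) / tau)      # (N, C) soft decisions
    tp = (preds * labels).sum()
    fp = (preds * (1 - labels)).sum()
    fn = ((1 - preds) * labels).sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-8)               # micro-F1

def learn_thresholds(scores, labels, n_classes, steps=500, lr=0.05):
    thresholds = torch.full((n_classes,), 0.5, requires_grad=True)
    opt = torch.optim.Adam([thresholds], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -soft_f1(scores, labels, thresholds)          # maximize F1
        loss.backward()
        opt.step()
    return thresholds.detach().clamp(0.0, 1.0)

# Usage with random stand-in data (scores would come from a trained tagger).
scores = torch.rand(1000, 10)
labels = (torch.rand(1000, 10) > 0.8).float()
print(learn_thresholds(scores, labels, n_classes=10))
```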
| [
{
"created": "Mon, 1 Mar 2021 08:05:07 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Pellegrini",
"Thomas",
"",
"IRIT-SAMoVA"
],
[
"Masquelier",
"Timothée",
"",
"CERCO"
]
] |
2103.00841 | Yixing Xu | Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang | Learning Frequency Domain Approximation for Binary Neural Networks | 12 pages | NeurIPS 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binary neural networks (BNNs) represent original full-precision weights and
activations into 1-bit with sign function. Since the gradient of the
conventional sign function is almost zero everywhere and cannot be used for
back-propagation, several attempts have been proposed to alleviate the
optimization difficulty by using approximate gradient. However, those
approximations corrupt the main direction of factual gradient. To this end, we
propose to estimate the gradient of sign function in the Fourier frequency
domain using the combination of sine functions for training BNNs, namely
frequency domain approximation (FDA). The proposed approach does not affect the
low-frequency information of the original sign function which occupies most of
the overall energy, and high-frequency coefficients will be ignored to avoid
the huge computational overhead. In addition, we embed a noise adaptation
module into the training phase to compensate for the approximation error. The
experiments on several benchmark datasets and neural architectures illustrate
that the binary network learned using our method achieves the state-of-the-art
accuracy. Code will be available at
\textit{https://gitee.com/mindspore/models/tree/master/research/cv/FDA-BNN}.
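A hedged sketch of the core idea, assuming PyTorch: the forward pass still binarizes with sign, while the backward pass uses the derivative of a truncated Fourier (sine) series of the sign function as the surrogate gradient. The number of retained sine terms is an assumption, and the noise adaptation module is omitted.

```python
import math
import torch

class FDASign(torch.autograd.Function):
    """Binarize in the forward pass; in the backward pass, approximate the
    gradient of sign(x) by the derivative of its truncated Fourier series
    (a combination of sine terms), keeping only low-frequency components."""

    n_terms = 4  # number of sine terms kept (assumption, not the paper's value)

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx of (4/pi) * sum_k sin((2k+1)x)/(2k+1) = (4/pi) * sum_k cos((2k+1)x)
        grad = torch.zeros_like(x)
        for k in range(FDASign.n_terms):
            grad = grad + torch.cos((2 * k + 1) * x)
        grad = grad * (4.0 / math.pi)
        return grad_output * grad

# Toy usage: binarize activations and backpropagate through the surrogate.
x = torch.randn(5, requires_grad=True)
FDASign.apply(x).sum().backward()
print(x.grad)
```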
| [
{
"created": "Mon, 1 Mar 2021 08:25:26 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Nov 2021 03:28:50 GMT",
"version": "v2"
}
] | 2021-11-23 | [
[
"Xu",
"Yixing",
""
],
[
"Han",
"Kai",
""
],
[
"Xu",
"Chang",
""
],
[
"Tang",
"Yehui",
""
],
[
"Xu",
"Chunjing",
""
],
[
"Wang",
"Yunhe",
""
]
] |
2103.00923 | Sarah Janboecke | Sarah Janboecke and Susanne Zajitschek | Anticipation Next -- System-sensitive technology development and
integration in work contexts | null | Information 2021, 12, 269 | 10.3390/info12070269 | null | cs.HC cs.AI cs.CY cs.RO | http://creativecommons.org/licenses/by/4.0/ | When discussing future concerns within socio-technical systems in work
contexts, we often find descriptions of missed technology development and
integration. The experience of technology that fails whilst being integrated is
often rooted in dysfunctional epistemological approaches within the research
and development process. This ultimately leads to sustained
technology-distrust in work contexts. This is true for organizations that
integrate new technologies and for organizations that invent them.
Organizations in which we find failed technology development and integrations
are, in their very nature, social systems. Nowadays, those complex social
systems act within an even more complex environment. This urges the development
of new anticipation methods for technology development and integration.
Gathering of and dealing with complex information in the described context is
what we call Anticipation Next. This explorative work uses existing literature
from the adjoining research fields of system theory, organizational theory, and
socio-technical research to combine various concepts. We deliberately aim at a
networked way of thinking in scientific contexts and thus combine
multidisciplinary subject areas in one paper to present an innovative way to
deal with multi-faceted problems in a human-centred way. We end with suggesting
a conceptual framework that should be used in the very early stages of
technology development and integration in work contexts.
| [
{
"created": "Mon, 1 Mar 2021 11:27:19 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Jul 2021 07:38:29 GMT",
"version": "v2"
}
] | 2021-07-09 | [
[
"Janboecke",
"Sarah",
""
],
[
"Zajitschek",
"Susanne",
""
]
] |
2103.00940 | Juan Marcos Ramirez Rond\'on | Juan Marcos Ram\'irez, Jos\'e Ignacio Mart\'inez Torre, Henry Arguello
Fuentes | LADMM-Net: An Unrolled Deep Network For Spectral Image Fusion From
Compressive Data | 29 pages, 15 figures, 4 tables | Juan Marcos Ramirez, Jose Ignacio Martinez-Torre, and Henry
Arguello, "LADMM-Net: An Unrolled Deep Network For Spectral Image Fusion From
Compressive Data", Signal Processing, vol. 189, Dec 2021, 108239 | 10.1016/j.sigpro.2021.108239 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image fusion aims at estimating a high-resolution spectral image from a
low-spatial-resolution hyperspectral image and a low-spectral-resolution
multispectral image. In this regard, compressive spectral imaging (CSI) has
emerged as an acquisition framework that captures the relevant information of
spectral images using a reduced number of measurements. Recently, various image
fusion methods from CSI measurements have been proposed. However, these methods
exhibit high running times and face the challenging task of choosing
sparsity-inducing bases. In this paper, a deep network under the algorithm
unrolling approach is proposed for fusing spectral images from compressive
measurements. This architecture, dubbed LADMM-Net, casts each iteration of a
linearized version of the alternating direction method of multipliers into a
processing layer whose concatenation deploys a deep network. The linearized
approach enables obtaining fusion estimates without resorting to costly matrix
inversions. Furthermore, this approach exploits the benefits of learnable
transforms to estimate the image details included in both the auxiliary
variable and the Lagrange multiplier. Finally, the performance of the proposed
technique is evaluated on two spectral image databases and one dataset captured
at the laboratory. Extensive simulations show that the proposed method
outperforms the state-of-the-art approaches that fuse spectral images from
compressive measurements.
| [
{
"created": "Mon, 1 Mar 2021 12:04:42 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Aug 2021 19:17:25 GMT",
"version": "v2"
}
] | 2021-08-04 | [
[
"Ramírez",
"Juan Marcos",
""
],
[
"Torre",
"José Ignacio Martínez",
""
],
[
"Fuentes",
"Henry Arguello",
""
]
] |
2103.00944 | Dengyu Wu | Dengyu Wu, Xinping Yi, Xiaowei Huang | A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate
Spiking Neural Network from Convolutional Neural Network | null | Frontiers in Neuroscience, 16 (2022) | 10.3389/fnins.2022.759900 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking neural networks (SNNs) offer an inherent ability to process
spatial-temporal data, or in other words, real-world sensory data, but suffer
from the difficulty of training high accuracy models. A major thread of
research on SNNs is on converting a pre-trained convolutional neural network
(CNN) to an SNN of the same structure. State-of-the-art conversion methods are
approaching the accuracy limit, i.e., the near-zero accuracy loss of SNN
against the original CNN. However, we note that this is made possible only when
significantly more energy is consumed to process an input. In this paper, we
argue that this trend of "energy for accuracy" is not necessary -- a little
energy can go a long way to achieve the near-zero accuracy loss. Specifically,
we propose a novel CNN-to-SNN conversion method that is able to use a
reasonably short spike train (e.g., 256 timesteps for CIFAR10 images) to
achieve the near-zero accuracy loss. The new conversion method, named
explicit current control (ECC), contains three techniques (current
normalisation, thresholding for residual elimination, and consistency
maintenance for batch-normalisation), in order to explicitly control the
currents flowing through the SNN when processing inputs. We implement ECC into
a tool nicknamed SpKeras, which can conveniently import Keras CNN models and
convert them into SNNs. We conduct an extensive set of experiments with the
tool -- working with VGG16 and various datasets such as CIFAR10 and CIFAR100 --
and compare with state-of-the-art conversion methods. Results show that ECC is
a promising method that can optimise over energy consumption and accuracy loss
simultaneously.
| [
{
"created": "Mon, 1 Mar 2021 12:15:29 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Mar 2021 12:24:59 GMT",
"version": "v2"
},
{
"created": "Thu, 26 May 2022 17:25:17 GMT",
"version": "v3"
}
] | 2022-05-27 | [
[
"Wu",
"Dengyu",
""
],
[
"Yi",
"Xinping",
""
],
[
"Huang",
"Xiaowei",
""
]
] |
2103.00953 | Guangyao Chen | Guangyao Chen and Peixi Peng and Xiangqian Wang and Yonghong Tian | Adversarial Reciprocal Points Learning for Open Set Recognition | IEEE-TPAMI,2021 | IEEE Transactions on Pattern Analysis and Machine Intelligence
2021 | 10.1109/TPAMI.2021.3106743 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open set recognition (OSR), aiming to simultaneously classify the seen
classes and identify the unseen classes as 'unknown', is essential for reliable
machine learning. The key challenge of OSR is how to reduce the empirical
classification risk on the labeled known data and the open space risk on the
potential unknown data simultaneously. To handle the challenge, we formulate
the open space risk problem from the perspective of multi-class integration,
and model the unexploited extra-class space with a novel concept, the Reciprocal
Point. Following this, a novel learning framework, termed Adversarial Reciprocal
Point Learning (ARPL), is proposed to minimize the overlap of known
distribution and unknown distributions without loss of known classification
accuracy. Specifically, each reciprocal point is learned by the extra-class
space with the corresponding known category, and the confrontation among
multiple known categories is employed to reduce the empirical classification
risk. Then, an adversarial margin constraint is proposed to reduce the open
space risk by limiting the latent open space constructed by reciprocal points.
To further estimate the unknown distribution from open space, an instantiated
adversarial enhancement method is designed to generate diverse and confusing
training samples, based on the adversarial mechanism between the reciprocal
points and known classes. This can effectively enhance the model
distinguishability to the unknown classes. Extensive experimental results on
various benchmark datasets indicate that the proposed method is significantly
superior to other existing approaches and achieves state-of-the-art
performance.
| [
{
"created": "Mon, 1 Mar 2021 12:25:45 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Mar 2021 02:04:04 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Aug 2021 11:12:53 GMT",
"version": "v3"
}
] | 2021-09-08 | [
[
"Chen",
"Guangyao",
""
],
[
"Peng",
"Peixi",
""
],
[
"Wang",
"Xiangqian",
""
],
[
"Tian",
"Yonghong",
""
]
] |
2103.01035 | Mark Keane | Mark T Keane, Eoin M Kenny, Eoin Delaney, Barry Smyth | If Only We Had Better Counterfactual Explanations: Five Key Deficits to
Rectify in the Evaluation of Counterfactual XAI Techniques | 13 pages, 2 figures | Proceedings of the 30th International Joint Conference on
Artificial Intelligence (IJCAI-21), August, 2021 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been an explosion of AI research on counterfactual
explanations as a solution to the problem of eXplainable AI (XAI). These
explanations seem to offer technical, psychological and legal benefits over
other explanation techniques. We survey 100 distinct counterfactual explanation
methods reported in the literature. This survey addresses the extent to which
these methods have been adequately evaluated, both psychologically and
computationally, and quantifies the shortfalls occurring. For instance, only
21% of these methods have been user tested. Five key deficits in the evaluation
of these methods are detailed and a roadmap, with standardised benchmark
evaluations, is proposed to resolve the issues arising; issues that currently,
in effect, block scientific progress in this field.
| [
{
"created": "Fri, 26 Feb 2021 09:57:33 GMT",
"version": "v1"
}
] | 2021-05-03 | [
[
"Keane",
"Mark T",
""
],
[
"Kenny",
"Eoin M",
""
],
[
"Delaney",
"Eoin",
""
],
[
"Smyth",
"Barry",
""
]
] |
2103.01039 | Elmira Amirloo Abolfathi | Elmira Amirloo, Mohsen Rohani, Ershad Banijamali, Jun Luo, Pascal
Poupart | Self-Supervised Simultaneous Multi-Step Prediction of Road Dynamics and
Cost Map | null | CVPR 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While supervised learning is widely used for perception modules in
conventional autonomous driving solutions, scalability is hindered by the huge
amount of data labeling needed. In contrast, while end-to-end architectures do
not require labeled data and are potentially more scalable, interpretability is
sacrificed. We introduce a novel architecture that is trained in a fully
self-supervised fashion for simultaneous multi-step prediction of space-time
cost map and road dynamics. Our solution replaces the manually designed cost
function for motion planning with a learned high dimensional cost map that is
naturally interpretable and allows diverse contextual information to be
integrated without manual data labeling. Experiments on real-world driving data
show that our solution leads to a lower number of collisions and road violations
in long planning horizons in comparison to baselines, demonstrating the
feasibility of fully self-supervised prediction without sacrificing either
scalability or interpretability.
| [
{
"created": "Mon, 1 Mar 2021 14:32:40 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 20:45:13 GMT",
"version": "v2"
}
] | 2021-03-31 | [
[
"Amirloo",
"Elmira",
""
],
[
"Rohani",
"Mohsen",
""
],
[
"Banijamali",
"Ershad",
""
],
[
"Luo",
"Jun",
""
],
[
"Poupart",
"Pascal",
""
]
] |
2103.01203 | Sydney Katz | Sydney M. Katz, Kyle D. Julian, Christopher A. Strong, Mykel J.
Kochenderfer | Generating Probabilistic Safety Guarantees for Neural Network
Controllers | 31 pages, 19 figures | Mach Learn (2021).
http://link.springer.com/article/10.1007/s10994-021-06065-9 | 10.1007/s10994-021-06065-9 | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks serve as effective controllers in a variety of complex
settings due to their ability to represent expressive policies. The complex
nature of neural networks, however, makes their output difficult to verify and
predict, which limits their use in safety-critical applications. While
simulations provide insight into the performance of neural network controllers,
they are not enough to guarantee that the controller will perform safely in all
scenarios. To address this problem, recent work has focused on formal methods
to verify properties of neural network outputs. For neural network controllers,
we can use a dynamics model to determine the output properties that must hold
for the controller to operate safely. In this work, we develop a method to use
the results from neural network verification tools to provide probabilistic
safety guarantees on a neural network controller. We develop an adaptive
verification approach to efficiently generate an overapproximation of the
neural network policy. Next, we modify the traditional formulation of Markov
decision process (MDP) model checking to provide guarantees on the
overapproximated policy given a stochastic dynamics model. Finally, we
incorporate techniques in state abstraction to reduce overapproximation error
during the model checking process. We show that our method is able to generate
meaningful probabilistic safety guarantees for aircraft collision avoidance
neural networks that are loosely inspired by Airborne Collision Avoidance
System X (ACAS X), a family of collision avoidance systems that formulates the
problem as a partially observable Markov decision process (POMDP).
| [
{
"created": "Mon, 1 Mar 2021 18:48:21 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Oct 2021 18:37:50 GMT",
"version": "v2"
}
] | 2021-10-22 | [
[
"Katz",
"Sydney M.",
""
],
[
"Julian",
"Kyle D.",
""
],
[
"Strong",
"Christopher A.",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
2103.01217 | Burak Pak | Gorsev Argin, Burak Pak, Handan Turkoglu | Between Post-Flaneur and Smartphone Zombie Smartphone Users Altering
Visual Attention and Walking Behavior in Public Space | null | 2020 ISPRS International Journal of Geo-Information 9, 12, 700 | 10.3390/ijgi9120700 | null | cs.HC cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | The extensive use of smartphones in our everyday lives has created new modes
of appropriation and behavior in public spaces. Recognition of these are
essential for urban design and planning practices which help us to improve the
relationship between humans, technologies, and urban environment. This study
aims to research smartphone users in public space by observing their altering
visual attention and walking behavior, and, in this way, to reveal the emergent
new figures. For this purpose, Korenmarkt square in Ghent, Belgium, was
observed for seven days in 10-min time intervals. The gaze and walking behavior
of smartphone users were encoded as geo-located and temporal data, analyzed and
mapped using statistical and spatial analysis methods. Developing and
implementing new methods for identifying the characteristics of smartphone
users, this study resulted in a nuanced characterization of novel spatial
appropriations. The findings led to a better understanding and knowledge of the
different behavior patterns of emergent figures such as post-flaneurs and
smartphone zombies while uncovering their altering visual interactions with and
movements in the public space. The results evoked questions on how researchers
and designers can make use of spatial analysis methods and rethink the public
space of the future as a hybrid construct integrating the virtual and the
physical.
| [
{
"created": "Fri, 26 Feb 2021 14:53:45 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Argin",
"Gorsev",
""
],
[
"Pak",
"Burak",
""
],
[
"Turkoglu",
"Handan",
""
]
] |
2103.01353 | Abhinav Valada | Francisco Rivera Valverde, Juana Valeria Hurtado, Abhinav Valada | There is More than Meets the Eye: Self-Supervised Multi-Object Detection
and Tracking with Sound by Distilling Multimodal Knowledge | Accepted at CVPR 2021. Dataset, code and models are available at
http://rl.uni-freiburg.de/research/multimodal-distill | IEEE/ CVF International Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 11612-11621, 2021 | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attributes of sound inherent to objects can provide valuable cues to learn
rich representations for object detection and tracking. Furthermore, the
co-occurrence of audiovisual events in videos can be exploited to localize
objects over the image field by solely monitoring the sound in the environment.
Thus far, this has only been feasible in scenarios where the camera is static
and for single object detection. Moreover, the robustness of these methods has
been limited as they primarily rely on RGB images which are highly susceptible
to illumination and weather changes. In this work, we present the novel
self-supervised MM-DistillNet framework consisting of multiple teachers that
leverage diverse modalities including RGB, depth and thermal images, to
simultaneously exploit complementary cues and distill knowledge into a single
audio student network. We propose the new MTA loss function that facilitates
the distillation of information from multimodal teachers in a self-supervised
manner. Additionally, we propose a novel self-supervised pretext task for the
audio student that enables us to not rely on labor-intensive manual
annotations. We introduce a large-scale multimodal dataset with over 113,000
time-synchronized frames of RGB, depth, thermal, and audio modalities.
Extensive experiments demonstrate that our approach outperforms
state-of-the-art methods while being able to detect multiple objects using only
sound during inference and even while moving.
| [
{
"created": "Mon, 1 Mar 2021 23:42:18 GMT",
"version": "v1"
}
] | 2021-11-05 | [
[
"Valverde",
"Francisco Rivera",
""
],
[
"Hurtado",
"Juana Valeria",
""
],
[
"Valada",
"Abhinav",
""
]
] |
2103.01359 | Gustavo Olague Dr. | Gerardo Ibarra-Vazquez, Gustavo Olague, Mariana Chan-Ley, Cesar
Puente, Carlos Soubervielle-Montalvo | Brain Programming is Immune to Adversarial Attacks: Towards Accurate and
Robust Image Classification using Symbolic Learning | 58 pages, 9 figures, 13 tables, 81 references | Swarm and Evolutionary Computation 2022 | 10.1016/j.swevo.2022.101059 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, the security concerns about the vulnerability of Deep
Convolutional Neural Networks (DCNN) to Adversarial Attacks (AA), i.e., small
modifications to the input image that are almost invisible to human vision yet
make the predictions untrustworthy, have grown. Therefore, it is necessary to provide
robustness to adversarial examples in addition to an accurate score when
developing a new classifier. In this work, we perform a comparative study of
the effects of AA on the complex problem of art media categorization, which
involves a sophisticated analysis of features to classify a fine collection of
artworks. We tested a prevailing bag of visual words approach from computer
vision, four state-of-the-art DCNN models (AlexNet, VGG, ResNet, ResNet101),
and the Brain Programming (BP) algorithm. In this study, we analyze the
algorithms' performance using accuracy. Besides, we use the accuracy ratio
between adversarial examples and clean images to measure robustness. Moreover,
we propose a statistical analysis of each classifier's predictions' confidence
to corroborate the results. We confirm that BP predictions' change was below
2\% using adversarial examples computed with the fast gradient sign method.
Also, considering the multiple pixel attack, BP obtained four out of seven
classes without changes and the rest with a maximum error of 4\% in the
predictions. Finally, BP also gets four categories using adversarial patches
without changes and for the remaining three classes with a variation of 1\%.
Additionally, the statistical analysis showed that the predictions' confidence
of BP were not significantly different for each pair of clean and perturbed
images in every experiment. These results prove BP's robustness against
adversarial examples compared to DCNN and handcrafted features methods, whose
performance on the art media classification was compromised with the proposed
perturbations.
| [
{
"created": "Mon, 1 Mar 2021 23:49:26 GMT",
"version": "v1"
}
] | 2022-04-06 | [
[
"Ibarra-Vazquez",
"Gerardo",
""
],
[
"Olague",
"Gustavo",
""
],
[
"Chan-Ley",
"Mariana",
""
],
[
"Puente",
"Cesar",
""
],
[
"Soubervielle-Montalvo",
"Carlos",
""
]
] |
2103.01373 | Aleksandra \'Ciprijanovi\'c | A. \'Ciprijanovi\'c, D. Kafkes, K. Downey, S. Jenkins, G. N. Perdue,
S. Madireddy, T. Johnston, G. F. Snyder, B. Nord | DeepMerge II: Building Robust Deep Learning Algorithms for Merging
Galaxy Identification Across Domains | Submitted to MNRAS; 21 pages, 9 figures, 9 tables | MNRAS, Volume 506, Issue 1, September 2021, Page 677 | 10.1093/mnras/stab1677 | FERMILAB-PUB-21-072-SCD | astro-ph.IM astro-ph.GA cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | In astronomy, neural networks are often trained on simulation data with the
prospect of being used on telescope observations. Unfortunately, training a
model on simulation data and then applying it to instrument data leads to a
substantial and potentially even detrimental decrease in model accuracy on the
new target dataset. Simulated and instrument data represent different data
domains, and for an algorithm to work in both, domain-invariant learning is
necessary. Here we employ domain adaptation techniques$-$ Maximum Mean
Discrepancy (MMD) as an additional transfer loss and Domain Adversarial Neural
Networks (DANNs)$-$ and demonstrate their viability to extract domain-invariant
features within the astronomical context of classifying merging and non-merging
galaxies. Additionally, we explore the use of Fisher loss and entropy
minimization to enforce better in-domain class discriminability. We show that
the addition of each domain adaptation technique improves the performance of a
classifier when compared to conventional deep learning algorithms. We
demonstrate this on two examples: between two Illustris-1 simulated datasets of
distant merging galaxies, and between Illustris-1 simulated data of nearby
merging galaxies and observed data from the Sloan Digital Sky Survey. The use
of domain adaptation techniques in our experiments leads to an increase of
target domain classification accuracy of up to ${\sim}20\%$. With further
development, these techniques will allow astronomers to successfully implement
neural network models trained on simulation data to efficiently detect and
study astrophysical objects in current and future large-scale astronomical
surveys.
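To make the MMD transfer loss mentioned above concrete, here is a small sketch with a single-bandwidth Gaussian kernel, a simplification of the multi-kernel estimators often used in practice; it would be added, suitably weighted, to the usual classification loss so the network learns domain-invariant features. The feature dimensions and bandwidth are assumptions.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel between two batches of feature vectors.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Squared Maximum Mean Discrepancy between source (simulation) and
    target (observation) feature batches."""
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

src = torch.randn(64, 128)   # features of simulated galaxies (stand-in)
tgt = torch.randn(64, 128)   # features of observed galaxies (stand-in)
print(mmd_loss(src, tgt))    # would be added, weighted, to the classifier loss
```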
| [
{
"created": "Tue, 2 Mar 2021 00:24:10 GMT",
"version": "v1"
}
] | 2021-07-16 | [
[
"Ćiprijanović",
"A.",
""
],
[
"Kafkes",
"D.",
""
],
[
"Downey",
"K.",
""
],
[
"Jenkins",
"S.",
""
],
[
"Perdue",
"G. N.",
""
],
[
"Madireddy",
"S.",
""
],
[
"Johnston",
"T.",
""
],
[
"Snyder",
"G. F.",
""
],
[
"Nord",
"B.",
""
]
] |
2103.01498 | Chenguo Lin | Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, In
So Kweon | A Survey On Universal Adversarial Attack | Accepted by IJCAI 2021, survey track:
https://www.ijcai.org/proceedings/2021/635 | International Joint Conferences on Artificial Intelligence (IJCAI)
2021, survey track | 10.24963/ijcai.2021/635 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The intriguing phenomenon of adversarial examples has attracted significant
attention in machine learning and what might be more surprising to the
community is the existence of universal adversarial perturbations (UAPs), i.e.
a single perturbation to fool the target DNN for most images. With the focus on
UAP against deep classifiers, this survey summarizes the recent progress on
universal adversarial attacks, discussing the challenges from both the attack
and defense sides, as well as the reason for the existence of UAP. We aim to
extend this work as a dynamic survey that will regularly update its content to
follow new works regarding UAP or universal attack in a wide range of domains,
such as image, audio, video, text, etc. Relevant updates will be discussed at:
https://bit.ly/2SbQlLG. We welcome authors of future works in this field to
contact us to have their new findings included.
| [
{
"created": "Tue, 2 Mar 2021 06:35:09 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jan 2022 09:52:21 GMT",
"version": "v2"
}
] | 2022-04-20 | [
[
"Zhang",
"Chaoning",
""
],
[
"Benz",
"Philipp",
""
],
[
"Lin",
"Chenguo",
""
],
[
"Karjauv",
"Adil",
""
],
[
"Wu",
"Jing",
""
],
[
"Kweon",
"In So",
""
]
] |
2103.01616 | Prashanth Vijayaraghavan | Prashanth Vijayaraghavan, Hugo Larochelle, Deb Roy | Interpretable Multi-Modal Hate Speech Detection | 5 pages, Accepted at the International Conference on Machine Learning
AI for Social Good Workshop, Long Beach, United States, 2019 | ICML Workshop on AI for Social Good, 2019 | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | With growing role of social media in shaping public opinions and beliefs
across the world, there has been an increased attention to identify and counter
the problem of hate speech on social media. Hate speech on online spaces has
serious manifestations, including social polarization and hate crimes. While
prior works have proposed automated techniques to detect hate speech online,
these techniques primarily fail to look beyond the textual content. Moreover,
few attempts have been made to focus on the aspects of interpretability of such
models given the social and legal implications of incorrect predictions. In
this work, we propose a deep neural multi-modal model that can: (a) detect hate
speech by effectively capturing the semantics of the text along with
socio-cultural context in which a particular hate expression is made, and (b)
provide interpretable insights into decisions of our model. By performing a
thorough evaluation of different modeling techniques, we demonstrate that our
model is able to outperform the existing state-of-the-art hate speech
classification approaches. Finally, we show the importance of social and
cultural context features towards unearthing clusters associated with different
categories of hate.
| [
{
"created": "Tue, 2 Mar 2021 10:12:26 GMT",
"version": "v1"
}
] | 2021-03-03 | [
[
"Vijayaraghavan",
"Prashanth",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Roy",
"Deb",
""
]
] |
2103.01620 | Charlotte Caucheteux | Charlotte Caucheteux, Alexandre Gramfort, Jean-Remi King | Disentangling Syntax and Semantics in the Brain with Deep Networks | Accepted to ICML 2021 | International Conference on Machine Learning (ICML), 2021 | null | null | cs.CL cs.LG q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The activations of language transformers like GPT-2 have been shown to
linearly map onto brain activity during speech comprehension. However, the
nature of these activations remains largely unknown and presumably conflate
distinct linguistic classes. Here, we propose a taxonomy to factorize the
high-dimensional activations of language models into four combinatorial
classes: lexical, compositional, syntactic, and semantic representations. We
then introduce a statistical method to decompose, through the lens of GPT-2's
activations, the brain activity of 345 subjects recorded with functional
magnetic resonance imaging (fMRI) during the listening of ~4.6 hours of
narrated text. The results highlight two findings. First, compositional
representations recruit a more widespread cortical network than lexical ones,
and encompass the bilateral temporal, parietal and prefrontal cortices. Second,
contrary to previous claims, syntax and semantics are not associated with
separated modules, but, instead, appear to share a common and distributed
neural substrate. Overall, this study introduces a versatile framework to
isolate, in the brain activity, the distributed representations of linguistic
constructs.
| [
{
"created": "Tue, 2 Mar 2021 10:24:05 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Jun 2021 09:59:36 GMT",
"version": "v2"
}
] | 2023-03-21 | [
[
"Caucheteux",
"Charlotte",
""
],
[
"Gramfort",
"Alexandre",
""
],
[
"King",
"Jean-Remi",
""
]
] |
2103.01636 | Decebal Constantin Mocanu | Decebal Constantin Mocanu, Elena Mocanu, Tiago Pinto, Selima Curci,
Phuong H. Nguyen, Madeleine Gibescu, Damien Ernst, Zita A. Vale | Sparse Training Theory for Scalable and Efficient Agents | null | 20th International Conference on Autonomous Agents and Multiagent
Systems (AAMAS 2021) | null | null | cs.AI cs.LG cs.MA cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental task for artificial intelligence is learning. Deep Neural
Networks have proven to cope perfectly with all learning paradigms, i.e.
supervised, unsupervised, and reinforcement learning. Nevertheless, traditional
deep learning approaches make use of cloud computing facilities and do not
scale well to autonomous agents with low computational resources. Even in the
cloud, they suffer from computational and memory limitations, and they cannot
be used to adequately model large physical worlds for agents, which would require
networks with billions of neurons. These issues are addressed in the last few
years by the emerging topic of sparse training, which trains sparse networks
from scratch. This paper discusses the state of the art in sparse training, its
challenges and limitations, while introducing a couple of new theoretical
research directions that have the potential to alleviate sparse training
limitations and push deep learning scalability well beyond its current
boundaries. Nevertheless, the impact of these theoretical advancements in complex
multi-agent settings is discussed from a real-world perspective, using the
smart grid case study.
| [
{
"created": "Tue, 2 Mar 2021 10:48:29 GMT",
"version": "v1"
}
] | 2021-03-03 | [
[
"Mocanu",
"Decebal Constantin",
""
],
[
"Mocanu",
"Elena",
""
],
[
"Pinto",
"Tiago",
""
],
[
"Curci",
"Selima",
""
],
[
"Nguyen",
"Phuong H.",
""
],
[
"Gibescu",
"Madeleine",
""
],
[
"Ernst",
"Damien",
""
],
[
"Vale",
"Zita A.",
""
]
] |
2103.01702 | Alexandros Papadopoulos | Alexandros Papadopoulos, Fotis Topouzis, Anastasios Delopoulos | An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images | 11 pages | Sci Rep 11, 14326 (2021) | 10.1038/s41598-021-93632-8 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diabetic Retinopathy (DR) is a leading cause of vision loss globally. Yet
despite its prevalence, the majority of affected people lack access to the
specialized ophthalmologists and equipment required for assessing their
condition. This can lead to delays in the start of treatment, thereby lowering
their chances for a successful outcome. Machine learning systems that
automatically detect the disease in eye fundus images have been proposed as a
means of facilitating access to DR severity estimates for patients in remote
regions or even for complementing the human expert's diagnosis. In this paper,
we propose a machine learning system for the detection of referable DR in
fundus images that is based on the paradigm of multiple-instance learning. By
extracting local information from image patches and combining it efficiently
through an attention mechanism, our system is able to achieve high
classification accuracy. Moreover, it can highlight potential image regions
where DR manifests through its characteristic lesions. We evaluate our approach
on publicly available retinal image datasets, in which it exhibits near
state-of-the-art performance, while also producing interpretable visualizations
of its predictions.
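The following sketch shows the generic attention-based multiple-instance pooling that the abstract describes: patch features are weighted by a learned attention module, the weighted average is classified, and the attention weights indicate which regions drove the decision. Layer sizes and the backbone producing the patch features are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Bag-level classifier over a set of patch features from one fundus image."""
    def __init__(self, feat_dim=512, attn_dim=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats):                              # (P, feat_dim)
        a = torch.softmax(self.attention(patch_feats), dim=0)    # (P, 1) weights
        bag = (a * patch_feats).sum(dim=0)                       # (feat_dim,)
        logit = self.classifier(bag)
        return logit, a.squeeze(-1)       # image-level prediction + patch importances

model = AttentionMIL()
logit, weights = model(torch.randn(36, 512))   # 36 patch features from one image
print(logit, weights.shape)
```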
| [
{
"created": "Tue, 2 Mar 2021 13:14:15 GMT",
"version": "v1"
}
] | 2021-10-22 | [
[
"Papadopoulos",
"Alexandros",
""
],
[
"Topouzis",
"Fotis",
""
],
[
"Delopoulos",
"Anastasios",
""
]
] |
2103.01819 | Matthias Gall\'e | Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina
Babazhanova, Matthias Gall\'e, Zhenisbek Assylbekov | The Rediscovery Hypothesis: Language Models Need to Meet Linguistics | null | Journal of Artificial Intelligence Vol. 72 (2021) 1343-1384 | 10.1613/jair.1.12788 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is an ongoing debate in the NLP community whether modern language
models contain linguistic knowledge, recovered through so-called probes. In
this paper, we study whether linguistic knowledge is a necessary condition for
the good performance of modern language models, which we call the
\textit{rediscovery hypothesis}. In the first place, we show that language
models that are significantly compressed but perform well on their pretraining
objectives retain good scores when probed for linguistic structures. This
result supports the rediscovery hypothesis and leads to the second contribution
of our paper: an information-theoretic framework that relates language modeling
objectives with linguistic information. This framework also provides a metric
to measure the impact of linguistic information on the word prediction task. We
reinforce our analytical results with various experiments, both on synthetic
and on real NLP tasks in English.
| [
{
"created": "Tue, 2 Mar 2021 15:57:39 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jan 2022 07:31:01 GMT",
"version": "v2"
}
] | 2022-01-04 | [
[
"Nikoulina",
"Vassilina",
""
],
[
"Tezekbayev",
"Maxat",
""
],
[
"Kozhakhmet",
"Nuradil",
""
],
[
"Babazhanova",
"Madina",
""
],
[
"Gallé",
"Matthias",
""
],
[
"Assylbekov",
"Zhenisbek",
""
]
] |
2103.01890 | Neil Jethani | Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, Rajesh
Ranganath | Have We Learned to Explain?: How Interpretability Methods Can Learn to
Encode Predictions in their Interpretations | 15 pages, 3 figures, Proceedings of the 24th International Conference
on Artificial Intelligence and Statistics (AISTATS) 2021 | Proceedings of the 24th International Conference on Artificial
Intelligence and Statistics (AISTATS) 2021 | null | null | stat.ML cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | While the need for interpretable machine learning has been established, many
common approaches are slow, lack fidelity, or hard to evaluate. Amortized
explanation methods reduce the cost of providing interpretations by learning a
global selector model that returns feature importances for a single instance of
data. The selector model is trained to optimize the fidelity of the
interpretations, as evaluated by a predictor model for the target. Popular
methods learn the selector and predictor model in concert, which we show allows
predictions to be encoded within interpretations. We introduce EVAL-X as a
method to quantitatively evaluate interpretations and REAL-X as an amortized
explanation method, which learn a predictor model that approximates the true
data generating distribution given any subset of the input. We show EVAL-X can
detect when predictions are encoded in interpretations and show the advantages
of REAL-X through quantitative and radiologist evaluation.
| [
{
"created": "Tue, 2 Mar 2021 17:42:33 GMT",
"version": "v1"
}
] | 2021-03-03 | [
[
"Jethani",
"Neil",
""
],
[
"Sudarshan",
"Mukund",
""
],
[
"Aphinyanaphongs",
"Yindalon",
""
],
[
"Ranganath",
"Rajesh",
""
]
] |
2103.01938 | Rohan Shad | Rohan Shad, John P. Cunningham, Euan A. Ashley, Curtis P. Langlotz,
William Hiesinger | Medical Imaging and Machine Learning | 9 pages, 4 figures | Nat Mach Intell 3, 929 - 935 (2021) | 10.1038/s42256-021-00399-8 | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in computing power, deep learning architectures, and expert labelled
datasets have spurred the development of medical imaging artificial
intelligence systems that rival clinical experts in a variety of scenarios. The
National Institutes of Health in 2018 identified key focus areas for the future
of artificial intelligence in medical imaging, creating a foundational roadmap
for research in image acquisition, algorithms, data standardization, and
translatable clinical decision support systems. Among the key issues raised in
the report, data availability and the need for novel computing architectures and
explainable AI algorithms are still relevant despite the tremendous progress
made over the past few years alone. Furthermore, translational goals of data
sharing, validation of performance for regulatory approval, generalizability
and mitigation of unintended bias must be accounted for early in the
development process. In this perspective paper we explore challenges unique to
high dimensional clinical imaging data, in addition to highlighting some of the
technical and ethical considerations in developing high-dimensional,
multi-modality, machine learning systems for clinical decision support.
| [
{
"created": "Tue, 2 Mar 2021 18:53:39 GMT",
"version": "v1"
}
] | 2021-11-18 | [
[
"Shad",
"Rohan",
""
],
[
"Cunningham",
"John P.",
""
],
[
"Ashley",
"Euan A.",
""
],
[
"Langlotz",
"Curtis P.",
""
],
[
"Hiesinger",
"William",
""
]
] |
2103.01997 | Federico Zocco | Federico Zocco, Se\'an McLoone and Beatrice Smyth | Material Measurement Units for a Circular Economy: Foundations through a
Review | Extension and overall improvement of previous version | Sustainable Production and Consumption, vol. 32, pp. 833-850, 2022 | 10.1016/j.spc.2022.05.022 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-term availability of minerals and industrial materials is a necessary
condition for sustainable development as they are the constituents of any
manufacturing product. To enhance the efficiency of material management, we
define a computer-vision-enabled material measurement system and provide a
review of works relevant to its development with particular emphasis on the
foundations. A network of such systems for wide-area material stock monitoring
is also covered. Finally, challenges and future research directions are
discussed. As the first article bridging industrial ecology and advanced
computer vision, this review is intended to support both research communities
towards more sustainable manufacturing.
| [
{
"created": "Tue, 2 Mar 2021 19:36:12 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Sep 2021 15:14:30 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Apr 2022 15:35:19 GMT",
"version": "v3"
}
] | 2023-03-07 | [
[
"Zocco",
"Federico",
""
],
[
"McLoone",
"Seán",
""
],
[
"Smyth",
"Beatrice",
""
]
] |
2103.02083 | Suman Sedai | Suman Sedai, Bhavna Antony, Ravneet Rai, Katie Jones, Hiroshi
Ishikawa, Joel Schuman, Wollstein Gadi and Rahil Garnavi | Uncertainty guided semi-supervised segmentation of retinal layers in OCT
images | MICCAI,19 | MICCAI 2019 pp 282-290 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks have shown outstanding performance in
medical image segmentation tasks. The usual problem when training supervised
deep learning methods is the lack of labeled data which is time-consuming and
costly to obtain. In this paper, we propose a novel uncertainty-guided
semi-supervised learning based on a student-teacher approach for training the
segmentation network using limited labeled samples and a large number of
unlabeled images. First, a teacher segmentation model is trained from the
labeled samples using Bayesian deep learning. The trained model is used to
generate soft segmentation labels and uncertainty maps for the unlabeled set.
The student model is then updated using the softly segmented samples and the
corresponding pixel-wise confidence of the segmentation quality estimated from
the uncertainty of the teacher model using a newly designed loss function.
Experimental results on a retinal layer segmentation task show that the
proposed method improves the segmentation performance in comparison to the
fully supervised approach and is on par with the expert annotator. The proposed
semi-supervised segmentation framework is a key contribution and applicable for
biomedical image segmentation across various imaging modalities where access to
annotated medical images is challenging.
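A simplified sketch of the student update described above, assuming PyTorch: each pixel's distillation loss is down-weighted where the Bayesian teacher is uncertain. The exponential confidence weighting and the use of hard teacher labels are illustrative simplifications, not the newly designed loss of the paper.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_student_loss(student_logits, teacher_soft_labels,
                                      teacher_uncertainty):
    # Per-pixel distillation on unlabeled images: the student is pulled toward
    # the teacher's segmentation, weighted by the teacher's confidence.
    confidence = torch.exp(-teacher_uncertainty)              # (N, H, W) in (0, 1]
    ce = F.cross_entropy(student_logits,
                         teacher_soft_labels.argmax(dim=1),   # hard pseudo-labels
                         reduction="none")                    # (N, H, W)
    return (confidence * ce).sum() / confidence.sum()

# Stand-in tensors: 2 images, 4 retinal-layer classes, 8x8 pixels.
student = torch.randn(2, 4, 8, 8, requires_grad=True)
teacher = torch.softmax(torch.randn(2, 4, 8, 8), dim=1)
uncert = torch.rand(2, 8, 8)
loss = uncertainty_weighted_student_loss(student, teacher, uncert)
loss.backward()
print(loss)
```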
| [
{
"created": "Tue, 2 Mar 2021 23:14:25 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Sedai",
"Suman",
""
],
[
"Antony",
"Bhavna",
""
],
[
"Rai",
"Ravneet",
""
],
[
"Jones",
"Katie",
""
],
[
"Ishikawa",
"Hiroshi",
""
],
[
"Schuman",
"Joel",
""
],
[
"Gadi",
"Wollstein",
""
],
[
"Garnavi",
"Rahil",
""
]
] |
2103.02084 | Cameron Voloshin | Cameron Voloshin, Nan Jiang, Yisong Yue | Minimax Model Learning | null | PMLR, Volume 130, 2021 | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present a novel off-policy loss function for learning a transition model
in model-based reinforcement learning. Notably, our loss is derived from the
off-policy policy evaluation objective with an emphasis on correcting
distribution shift. Compared to previous model-based techniques, our approach
allows for greater robustness under model misspecification or distribution
shift induced by learning/evaluating policies that are distinct from the
data-generating policy. We provide a theoretical analysis and show empirical
improvements over existing model-based off-policy evaluation methods. We
provide further analysis showing our loss can be used for off-policy
optimization (OPO) and demonstrate its integration with more recent
improvements in OPO.
| [
{
"created": "Tue, 2 Mar 2021 23:16:36 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Voloshin",
"Cameron",
""
],
[
"Jiang",
"Nan",
""
],
[
"Yue",
"Yisong",
""
]
] |
2103.02144 | Qingyang Xu | Qingyang Xu, Qingsong Wen, Liang Sun | Two-Stage Framework for Seasonal Time Series Forecasting | 5 pages, 2 figures, 3 tables, ICASSP 2021 | IEEE ICASSP 2021 | 10.1109/ICASSP39728.2021.9414118. | null | cs.LG cs.AI stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Seasonal time series Forecasting remains a challenging problem due to the
long-term dependency from seasonality. In this paper, we propose a two-stage
framework to forecast univariate seasonal time series. The first stage
explicitly learns the long-range time series structure in a time window beyond
the forecast horizon. By incorporating the learned long-range structure, the
second stage can enhance the prediction accuracy in the forecast horizon. In
both stages, we integrate the auto-regressive model with neural networks to
capture both linear and non-linear characteristics in time series. Our
framework achieves state-of-the-art performance on M4 Competition Hourly
datasets. In particular, we show that incorporating the intermediate results
generated in the first stage to existing forecast models can effectively
enhance their prediction performance.
| [
{
"created": "Wed, 3 Mar 2021 02:53:39 GMT",
"version": "v1"
}
] | 2021-06-08 | [
[
"Xu",
"Qingyang",
""
],
[
"Wen",
"Qingsong",
""
],
[
"Sun",
"Liang",
""
]
] |
2103.02205 | Haoran Xu | Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White,
Benjamin Van Durme and Kenton Murray | Gradual Fine-Tuning for Low-Resource Domain Adaptation | Adapt-NLP, EACL 2021 | Adapt-NLP EACL 2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-tuning is known to improve NLP models by adapting an initial model
trained on more plentiful but less domain-salient examples to data in a target
domain. Such domain adaptation is typically done using one stage of
fine-tuning. We demonstrate that gradually fine-tuning in a multi-stage process
can yield substantial further gains and can be applied without modifying the
model or learning objective.
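A toy sketch of a gradual, multi-stage fine-tuning schedule of the kind described above: each stage keeps all in-domain data and a shrinking slice of out-of-domain data, so the training distribution drifts toward the target domain without changing the model or objective. The stage ratios and the mixing scheme are illustrative assumptions, not the paper's exact recipe.

```python
import random

def gradual_finetune(model, out_domain, in_domain, train_one_epoch,
                     stages=(1.0, 0.5, 0.25, 0.0), epochs_per_stage=1):
    # Each stage mixes all in-domain examples with a shrinking random slice
    # of out-of-domain examples, then fine-tunes on that mixture.
    for frac in stages:
        n_out = int(frac * len(out_domain))
        stage_data = random.sample(out_domain, n_out) + list(in_domain)
        random.shuffle(stage_data)
        for _ in range(epochs_per_stage):
            train_one_epoch(model, stage_data)
    return model

# Toy usage: "training" just counts how often each example is seen.
counts = {}
def count_epoch(model, data):
    for x in data:
        counts[x] = counts.get(x, 0) + 1

gradual_finetune(model=None,
                 out_domain=[f"out{i}" for i in range(8)],
                 in_domain=["in0", "in1"],
                 train_one_epoch=count_epoch)
print(counts)   # in-domain examples are seen in every stage
```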
| [
{
"created": "Wed, 3 Mar 2021 06:24:54 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Sep 2021 19:03:37 GMT",
"version": "v2"
}
] | 2021-09-08 | [
[
"Xu",
"Haoran",
""
],
[
"Ebner",
"Seth",
""
],
[
"Yarmohammadi",
"Mahsa",
""
],
[
"White",
"Aaron Steven",
""
],
[
"Van Durme",
"Benjamin",
""
],
[
"Murray",
"Kenton",
""
]
] |
2103.02212 | Haoran Xu | Haoran Xu and Philipp Koehn | Zero-Shot Cross-Lingual Dependency Parsing through Contextual Embedding
Transformation | null | Adapt-NLP EACL 2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linear embedding transformation has been shown to be effective for zero-shot
cross-lingual transfer tasks and achieve surprisingly promising results.
However, cross-lingual embedding space mapping is usually studied in static
word-level embeddings, where a space transformation is derived by aligning
representations of translation pairs that are referred from dictionaries. We
move further from this line and investigate a contextual embedding alignment
approach which is sense-level and dictionary-free. To enhance the quality of
the mapping, we also provide a deep view of properties of contextual
embeddings, i.e., anisotropy problem and its solution. Experiments on zero-shot
dependency parsing through the concept-shared space built by our embedding
transformation substantially outperform state-of-the-art methods using
multilingual embeddings.
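For context, the word-level, dictionary-based mapping that this abstract contrasts with its sense-level, dictionary-free alignment is commonly an orthogonal (Procrustes) transformation fit on paired embeddings; a small sketch with random stand-in data follows, not the paper's contextual method.

```python
import numpy as np

def procrustes_map(X_src, Y_tgt):
    # Orthogonal W minimizing ||X_src @ W - Y_tgt||_F: SVD of X^T Y gives U, V,
    # and the optimal mapping is U @ V^T (classic embedding-space alignment).
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

# Paired (source, target) embeddings of aligned items; random stand-ins here.
X = np.random.randn(200, 64)
W_true = np.linalg.qr(np.random.randn(64, 64))[0]   # a hidden orthogonal map
Y = X @ W_true
W = procrustes_map(X, Y)
print(np.allclose(X @ W, Y, atol=1e-6))             # True: the map is recovered
```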
| [
{
"created": "Wed, 3 Mar 2021 06:50:43 GMT",
"version": "v1"
}
] | 2021-09-08 | [
[
"Xu",
"Haoran",
""
],
[
"Koehn",
"Philipp",
""
]
] |
2103.02227 | Lijie Wang | Kun Wu, Lijie Wang, Zhenghua Li, Ao Zhang, Xinyan Xiao, Hua Wu, Min
Zhang, Haifeng Wang | Data Augmentation with Hierarchical SQL-to-Question Generation for
Cross-domain Text-to-SQL Parsing | null | EMNLP 2021 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Data augmentation has attracted a lot of research attention in the deep
learning era for its ability in alleviating data sparseness. The lack of
labeled data for unseen evaluation databases is exactly the major challenge for
cross-domain text-to-SQL parsing. Previous works either require human
intervention to guarantee the quality of generated data, or fail to handle
complex SQL queries. This paper presents a simple yet effective data
augmentation framework. First, given a database, we automatically produce a
large number of SQL queries based on an abstract syntax tree grammar. For
better distribution matching, we require that at least 80% of SQL patterns in
the training data are covered by generated queries. Second, we propose a
hierarchical SQL-to-question generation model to obtain high-quality natural
language questions, which is the major contribution of this work. Finally, we
design a simple sampling strategy that can greatly improve training efficiency
given large amounts of generated data. Experiments on three cross-domain
datasets, i.e., WikiSQL and Spider in English, and DuSQL in Chinese, show that
our proposed data augmentation framework can consistently improve performance
over strong baselines, and the hierarchical generation component is the key for
the improvement.
| [
{
"created": "Wed, 3 Mar 2021 07:37:38 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Mar 2021 07:33:28 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Oct 2021 12:04:00 GMT",
"version": "v3"
},
{
"created": "Tue, 15 Nov 2022 02:12:31 GMT",
"version": "v4"
}
] | 2022-11-16 | [
[
"Wu",
"Kun",
""
],
[
"Wang",
"Lijie",
""
],
[
"Li",
"Zhenghua",
""
],
[
"Zhang",
"Ao",
""
],
[
"Xiao",
"Xinyan",
""
],
[
"Wu",
"Hua",
""
],
[
"Zhang",
"Min",
""
],
[
"Wang",
"Haifeng",
""
]
] |
2103.02263 | Fabian Duerr | Fabian Duerr, Mario Pfaller, Hendrik Weigel, Juergen Beyerer | LiDAR-based Recurrent 3D Semantic Segmentation with Temporal Memory
Alignment | null | International Conference on 3D Vision (3DV), pages 781-790, 2020 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding and interpreting a 3d environment is a key challenge for
autonomous vehicles. Semantic segmentation of 3d point clouds combines 3d
information with semantics and thereby provides a valuable contribution to this
task. In many real-world applications, point clouds are generated by lidar
sensors in a consecutive fashion. Working with a time series instead of single
and independent frames enables the exploitation of temporal information. We
therefore propose a recurrent segmentation architecture (RNN), which takes a
single range image frame as input and exploits recursively aggregated temporal
information. An alignment strategy, which we call Temporal Memory Alignment,
uses ego motion to temporally align the memory between consecutive frames in
feature space. A Residual Network and ConvGRU are investigated for the memory
update. We demonstrate the benefits of the presented approach on two
large-scale datasets and compare it to several state-of-the-art methods. Our
approach ranks first on the SemanticKITTI multiple scan benchmark and achieves
state-of-the-art performance on the single scan benchmark. In addition, the
evaluation shows that the exploitation of temporal information significantly
improves segmentation results compared to a single frame approach.
| [
{
"created": "Wed, 3 Mar 2021 09:01:45 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Duerr",
"Fabian",
""
],
[
"Pfaller",
"Mario",
""
],
[
"Weigel",
"Hendrik",
""
],
[
"Beyerer",
"Juergen",
""
]
] |
2103.02278 | Markus Horn | Markus Horn, Ole Schumann, Markus Hahn, J\"urgen Dickmann, Klaus
Dietmayer | Motion Classification and Height Estimation of Pedestrians Using Sparse
Radar Data | 6 pages, 6 figures, 1 table | 2018 Sensor Data Fusion: Trends, Solutions, Applications (SDF) | 10.1109/SDF.2018.8547092 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A complete overview of the surrounding vehicle environment is important for
driver assistance systems and highly autonomous driving. Fusing results of
multiple sensor types like camera, radar and lidar is crucial for increasing
the robustness. The detection and classification of objects like cars, bicycles
or pedestrians has been analyzed in the past for many sensor types. Beyond
that, it is also helpful to refine these classes and distinguish for example
between different pedestrian types or activities. This task is usually
performed on camera data, though recent developments are based on radar
spectrograms. However, for most automotive radar systems, it is only possible
to obtain radar targets instead of the original spectrograms. This work
demonstrates that it is possible to estimate the body height of walking
pedestrians using 2D radar targets. Furthermore, different pedestrian motion
types are classified.
| [
{
"created": "Wed, 3 Mar 2021 09:36:11 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Horn",
"Markus",
""
],
[
"Schumann",
"Ole",
""
],
[
"Hahn",
"Markus",
""
],
[
"Dickmann",
"Jürgen",
""
],
[
"Dietmayer",
"Klaus",
""
]
] |
2103.02288 | Shoffan Saifullah | Shoffan Saifullah, Rafal Drezewski, Alin Khaliduzzaman, Lean Karlo
Tolentino, Rabbimov Ilyos | K-means segmentation based-on lab color space for embryo detection in
incubated egg | 11 pages, 6 figures, ICoSiET Conference 2020, Jurnal Ilmiah Teknik
Elektro Komputer dan Informatika (JITEKI) | J. Ilm. Tek. Elektro Komput. dan Inform., 2022, Vol. 7, No. 2, p.
175-185 | 10.26555/jiteki.v8i2.23724 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | The quality of the hatching process influences the success of the hatch rate
besides the inherent egg factors. Eliminating infertile or dead eggs and
monitoring embryonic growth are very important factors in efficient hatchery
practices. This process aims to sort eggs that only have embryos to remain in
the incubator until the end of the hatching process. This process aims to sort
eggs with embryos to remain hatched until the end. Maximum checking is done the
first week in the hatching period. This study aims to detect the presence of
embryos in eggs and processed by segmentation. Egg images are segmented using
the K-means algorithm based on Lab color images. The results of the image
acquisition are converted into Lab color space images. The results of Lab color
space images are processed using K-means for each color. The K-means process
uses cluster k=3 and divides into three parts: background, eggs, and yolk. Egg
yolks are part of eggs that have embryonic characteristics. This study applies
the concept of color in the initial segmentation and grayscale in the final
stages. The initial phase results show that the image segmentation results
using k-means clustering based on Lab color space provide a grouping of three
parts. At the grayscale image processing stage, the results of color image
segmentation are processed with grayscaling, image enhancement, and morphology.
Thus, the segmented yolk clearly shows the presence of egg embryos.
Based on these results, the initial stages of the embryo detection process used
K-means segmentation based on the Lab color space. The evaluation uses MSE and
MSSIM, with values of 0.0486 and 0.9979, respectively, indicating that the
results obtained can detect embryos in the egg yolk. This protocol could be used in
a non-destructive quantitative study on embryos and their morphology in a
precision poultry production system in the future.
| [
{
"created": "Wed, 3 Mar 2021 10:03:36 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Aug 2022 00:19:55 GMT",
"version": "v2"
}
] | 2022-08-03 | [
[
"Saifullah",
"Shoffan",
""
],
[
"Drezewski",
"Rafal",
""
],
[
"Khaliduzzaman",
"Alin",
""
],
[
"Tolentino",
"Lean Karlo",
""
],
[
"Ilyos",
"Rabbimov",
""
]
] |
2103.02362 | Ting Wu | Ting Wu, Junjie Peng, Wenqiang Zhang, Huiran Zhang, Chuanshuai Ma,
Yansong Huang | Video Sentiment Analysis with Bimodal Information-augmented Multi-Head
Attention | 12 pages, 4 figures, content and journal information updated | Knowledge Based Systems 235 (2022) 107676 | 10.1016/j.knosys.2021.107676 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans express feelings or emotions via different channels. Take language as
an example: it entails different sentiments under different visual-acoustic
contexts. To precisely understand human intentions as well as reduce the
misunderstandings caused by ambiguity and sarcasm, we should consider
multimodal signals including textual, visual and acoustic signals. The crucial
challenge is to fuse different modalities of features for sentiment analysis.
To effectively fuse the information carried by different modalities and better
predict the sentiments, we design a novel multi-head attention based fusion
network, which is inspired by the observations that the interactions between
any two pair-wise modalities are different and they do not equally contribute
to the final sentiment prediction. By assigning the acoustic-visual,
acoustic-textual and visual-textual features with reasonable attention and
exploiting a residual structure, we aim to attain the most significant features.
We conduct extensive experiments on four public multimodal datasets including
one in Chinese and three in English. The results show that our approach
outperforms the existing methods and can explain the contributions of bimodal
interaction in multiple modalities.
| [
{
"created": "Wed, 3 Mar 2021 12:30:11 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Mar 2021 02:54:35 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Nov 2021 07:02:53 GMT",
"version": "v3"
}
] | 2021-11-17 | [
[
"Wu",
"Ting",
""
],
[
"Peng",
"Junjie",
""
],
[
"Zhang",
"Wenqiang",
""
],
[
"Zhang",
"Huiran",
""
],
[
"Ma",
"Chuanshuai",
""
],
[
"Huang",
"Yansong",
""
]
] |
2103.02372 | Thomas Hirsch | Thomas Hirsch, Birgit Hofer | Root cause prediction based on bug reports | 6 pages | Proceedings of the 2020 IEEE International Symposium on Software
Reliability Engineering Workshops (ISSREW), Coimbra, Portugal, 2020, pp.
171-176 | 10.1109/ISSREW51248.2020.00067 | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a supervised machine learning approach for predicting the
root cause of a given bug report. Knowing the root cause of a bug can help
developers in the debugging process - either directly or indirectly by choosing
proper tool support for the debugging task. We mined 54755 closed bug reports
from the issue trackers of 103 GitHub projects and applied a set of heuristics
to create a benchmark consisting of 10459 reports. A subset was manually
classified into three groups (semantic, memory, and concurrency) based on the
bugs' root causes. Since the types of root cause are not equally distributed, a
combination of keyword search and random selection was applied. Our data set
for the machine learning approach consists of 369 bug reports (122 concurrency,
121 memory, and 126 semantic bugs). The bug reports are used as input to a
natural language processing algorithm. We evaluated the performance of several
classifiers for predicting the root causes for the given bug reports. Linear
Support Vector Machines achieved the highest mean precision (0.74) and recall
(0.72) scores. The created bug data set and classification are publicly
available.
| [
{
"created": "Wed, 3 Mar 2021 12:47:15 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Hirsch",
"Thomas",
""
],
[
"Hofer",
"Birgit",
""
]
] |
2103.02380 | Bin Chen | Ruizhen Hu, Bin Chen, Juzhan Xu, Oliver van Kaick, Oliver Deussen, Hui
Huang | Shape-driven Coordinate Ordering for Star Glyph Sets via Reinforcement
Learning | null | IEEE Transactions on Visualization and Computer Graphics 2021 | 10.1109/TVCG.2021.3052167 | null | cs.CV cs.GR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a neural optimization model trained with reinforcement learning to
solve the coordinate ordering problem for sets of star glyphs. Given a set of
star glyphs associated to multiple class labels, we propose to use shape
context descriptors to measure the perceptual distance between pairs of glyphs,
and use the derived silhouette coefficient to measure the perception of class
separability within the entire set. To find the optimal coordinate order for
the given set, we train a neural network using reinforcement learning to reward
orderings with high silhouette coefficients. The network consists of an encoder
and a decoder with an attention mechanism. The encoder employs a recurrent
neural network (RNN) to encode input shape and class information, while the
decoder together with the attention mechanism employs another RNN to output a
sequence with the new coordinate order. In addition, we introduce a neural
network to efficiently estimate the similarity between shape context
descriptors, which allows us to speed up the computation of silhouette
coefficients and thus the training of the axis ordering network. Two user
studies demonstrate that the orders provided by our method are preferred by
users for perceiving class separation. We tested our model on different
settings to show its robustness and generalization abilities and demonstrate
that it allows ordering input sets with unseen data size, data dimension, or
number of classes. We also demonstrate that our model can be adapted to
coordinate ordering of other types of plots such as RadViz by replacing the
proposed shape-aware silhouette coefficient with the corresponding quality
metric to guide network training.
| [
{
"created": "Wed, 3 Mar 2021 13:05:10 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Hu",
"Ruizhen",
""
],
[
"Chen",
"Bin",
""
],
[
"Xu",
"Juzhan",
""
],
[
"van Kaick",
"Oliver",
""
],
[
"Deussen",
"Oliver",
""
],
[
"Huang",
"Hui",
""
]
] |
2103.02386 | Thomas Hirsch | Thomas Hirsch | A Fault Localization and Debugging Support Framework driven by Bug
Tracking Data | 4 pages | Proceedings of the 2020 IEEE International Symposium on Software
Reliability Engineering Workshops (ISSREW), Coimbra, Portugal, 2020, pp.
139-142 | 10.1109/ISSREW51248.2020.00053 | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fault localization has been identified as a major resource factor in the
software development life cycle. Academic fault localization techniques are
mostly unknown and unused in professional environments. Although manual
debugging approaches can vary significantly depending on bug type (e.g. memory
bugs or semantic bugs), these differences are not reflected in most existing
fault localization tools. Little research has gone into automated
identification of bug types to optimize the fault localization process.
Further, existing fault localization techniques leverage historical data
only to augment suspiciousness rankings. This thesis aims to provide a
fault localization framework by combining data from various sources to help
developers in the fault localization process. To achieve this, a bug
classification schema is introduced, benchmarks are created, and a novel fault
localization method based on historical data is proposed.
| [
{
"created": "Wed, 3 Mar 2021 13:23:13 GMT",
"version": "v1"
}
] | 2021-03-04 | [
[
"Hirsch",
"Thomas",
""
]
] |
2103.02410 | Xiao Liu | Xiao Liu, Da Yin, Jingnan Zheng, Xingjian Zhang, Peng Zhang, Hongxia
Yang, Yuxiao Dong, Jie Tang | OAG-BERT: Towards A Unified Backbone Language Model For Academic
Knowledge Services | Accepted to KDD 2022 | In Proceedings of the 28th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD 2022). Association for Computing Machinery,
New York, NY, USA, 3418-3428 | 10.1145/3534678.3539210 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Academic knowledge services have substantially facilitated the development of
the science enterprise by providing a plenitude of efficient research tools.
However, many applications highly depend on ad-hoc models and expensive human
labeling to understand scientific contents, hindering deployments into real
products. To build a unified backbone language model for different
knowledge-intensive academic applications, we pre-train an academic language
model OAG-BERT that integrates both the heterogeneous entity knowledge and
scientific corpora in the Open Academic Graph (OAG) -- the largest public
academic graph to date. In OAG-BERT, we develop strategies for pre-training
text and entity data along with zero-shot inference techniques. Its zero-shot
capability furthers the path to mitigating the need for expensive annotations.
OAG-BERT has been deployed for real-world applications, such as the reviewer
recommendation function for the National Natural
Science Foundation of China (NSFC) -- one of the largest funding agencies in
China -- and paper tagging in AMiner. All codes and pre-trained models are
available via the CogDL toolkit.
| [
{
"created": "Wed, 3 Mar 2021 14:00:57 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Mar 2021 09:40:33 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Oct 2022 04:41:17 GMT",
"version": "v3"
}
] | 2022-10-04 | [
[
"Liu",
"Xiao",
""
],
[
"Yin",
"Da",
""
],
[
"Zheng",
"Jingnan",
""
],
[
"Zhang",
"Xingjian",
""
],
[
"Zhang",
"Peng",
""
],
[
"Yang",
"Hongxia",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Tang",
"Jie",
""
]
] |
2103.02484 | Javier Hernandez | Javier Hernandez, Daniel McDuff, Ognjen (Oggi) Rudovic, Alberto Fung,
Mary Czerwinski | DeepFN: Towards Generalizable Facial Action Unit Recognition with Deep
Face Normalization | null | 2022 10th International Conference on Affective Computing and
Intelligent Interaction (ACII) | 10.1109/ACII55700.2022.9953868 | null | cs.CV cs.AI cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial action unit recognition has many applications from market research to
psychotherapy and from image captioning to entertainment. Despite its recent
progress, deployment of these models has been impeded due to their limited
generalization to unseen people and demographics. This work conducts an
in-depth analysis of performance across several dimensions: individuals (40
subjects), genders (male and female), skin types (darker and lighter), and
databases (BP4D and DISFA). To help suppress the variance in data, we use the
notion of self-supervised denoising autoencoders to design a method for deep
face normalization (DeepFN) that transfers facial expressions of different
people onto a common facial template which is then used to train and evaluate
facial action recognition models. We show that person-independent models yield
significantly lower performance (55% average F1 and accuracy across 40
subjects) than person-dependent models (60.3%), leading to a generalization gap
of 5.3%. However, normalizing the data with the newly introduced DeepFN
significantly increased the performance of person-independent models (59.6%),
effectively reducing the gap. Similarly, we observed generalization gaps when
considering gender (2.4%), skin type (5.3%), and dataset (9.4%), which were
significantly reduced with the use of DeepFN. These findings represent an
important step towards the creation of more generalizable facial action unit
recognition systems.
| [
{
"created": "Wed, 3 Mar 2021 15:50:51 GMT",
"version": "v1"
}
] | 2023-10-20 | [
[
"Hernandez",
"Javier",
"",
"Oggi"
],
[
"McDuff",
"Daniel",
"",
"Oggi"
],
[
"Ognjen",
"",
"",
"Oggi"
],
[
"Rudovic",
"",
""
],
[
"Fung",
"Alberto",
""
],
[
"Czerwinski",
"Mary",
""
]
] |
2103.02654 | Yudi Dong | Yudi Dong and Huaxia Wang and Yu-Dong Yao | A Robust Adversarial Network-Based End-to-End Communications System With
Strong Generalization Ability Against Adversarial Attacks | 5 pages letter | ICC 2022 - IEEE International Conference on Communications | 10.1109/ICC45855.2022.9838452 | null | cs.LG cs.AI eess.SP | http://creativecommons.org/licenses/by/4.0/ | We propose a novel defensive mechanism based on a generative adversarial
network (GAN) framework to defend against adversarial attacks in end-to-end
communications systems. Specifically, we utilize a generative network to model
a powerful adversary and enable the end-to-end communications system to combat
the generative attack network via a minimax game. We show that the proposed
system not only works well against white-box and black-box adversarial attacks
but also possesses excellent generalization capabilities to maintain good
performance under no attacks. We also show that our GAN-based end-to-end system
outperforms the conventional communications system and the end-to-end
communications system with/without adversarial training.
| [
{
"created": "Wed, 3 Mar 2021 20:04:42 GMT",
"version": "v1"
}
] | 2022-08-16 | [
[
"Dong",
"Yudi",
""
],
[
"Wang",
"Huaxia",
""
],
[
"Yao",
"Yu-Dong",
""
]
] |
2103.02691 | Waheed Ahmed Abro | Waheed Ahmed Abro, Annalena Aicher, Niklas Rach, Stefan Ultes,
Wolfgang Minker, Guilin Qi | Natural Language Understanding for Argumentative Dialogue Systems in the
Opinion Building Domain | null | Knowledge-Based Systems (2022): 108318 | 10.1016/j.knosys.2022.108318 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a natural language understanding (NLU) framework for
argumentative dialogue systems in the information-seeking and opinion building
domain. The proposed framework consists of two sub-models, namely intent
classifier and argument similarity. The intent classifier model stacks a BiLSTM
with an attention mechanism on top of the pre-trained BERT model and fine-tunes
the model to recognize the user intent, whereas the argument similarity model
employs BERT+BiLSTM for identifying system arguments the user refers to in his
or her natural language utterances. Our model is evaluated in an argumentative
dialogue system that engages the user to inform him-/herself about a
controversial topic by exploring pro and con arguments and build his/her
opinion towards the topic. In order to evaluate the proposed approach, we
collect user utterances for the interaction with the respective system labeling
intent and referenced argument in an extensive online study. The data
collection includes multiple topics and two different user types (native
English speakers from the UK and non-native English speakers from China).
Additionally, we evaluate the proposed intent classifier and argument
similarity models separately on the publicly available Banking77 and STS
benchmark datasets. The evaluation indicates a clear advantage of the utilized
techniques over baseline approaches on several datasets, as well as the
robustness of the proposed approach against new topics and different language
proficiency as well as the cultural background of the user. Furthermore,
results show that our intent classifier model outperforms DIET, DistillBERT,
and BERT fine-tuned models in few-shot setups (i.e., with 10, 20, or 30 labeled
examples per intent) and full data setup.
| [
{
"created": "Wed, 3 Mar 2021 21:17:24 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Feb 2022 14:32:16 GMT",
"version": "v2"
}
] | 2022-02-22 | [
[
"Abro",
"Waheed Ahmed",
""
],
[
"Aicher",
"Annalena",
""
],
[
"Rach",
"Niklas",
""
],
[
"Ultes",
"Stefan",
""
],
[
"Minker",
"Wolfgang",
""
],
[
"Qi",
"Guilin",
""
]
] |