id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2103.13520 | Amit Sheth | Amit Sheth and Krishnaprasad Thirunarayan | The Duality of Data and Knowledge Across the Three Waves of AI | A version of this will appear as (cite as): IT Professional Magazine
(special section to commemorate the 75th Anniversary of IEEE Computer
Society), 23 (3) April-May 2021 | IT Professional, 23 (3), April-May 2021 | 10.1109/MITP.2021.3070985 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss how over the last 30 to 50 years, Artificial Intelligence (AI)
systems that focused only on data have been handicapped, and how knowledge has
been critical in developing smarter, intelligent, and more effective systems.
In fact, the vast progress in AI can be viewed in terms of the three waves of
AI as identified by DARPA. During the first wave, handcrafted knowledge was the
centerpiece, while during the second wave, data-driven
approaches supplanted knowledge. Now we see a strong role and resurgence of
knowledge fueling major breakthroughs in the third wave of AI underpinning
future intelligent systems as they attempt human-like decision making, and seek
to become trusted assistants and companions for humans. We find a wider
availability of knowledge created from diverse sources, using manual to
automated means, both by repurposing and by extraction. Using knowledge
with statistical learning is becoming increasingly indispensable to help make
AI systems more transparent and auditable. We will draw a parallel with the
role of knowledge and experience in human intelligence based on cognitive
science, and discuss emerging neuro-symbolic or hybrid AI systems in which
knowledge is the critical enabler for combining capabilities of the
data-intensive statistical AI systems with those of symbolic AI systems,
resulting in more capable AI systems that support more human-like intelligence.
| [
{
"created": "Wed, 24 Mar 2021 23:07:47 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Apr 2021 19:57:57 GMT",
"version": "v2"
}
] | 2021-04-16 | [
[
"Sheth",
"Amit",
""
],
[
"Thirunarayan",
"Krishnaprasad",
""
]
] |
2103.13544 | Zheng Tong | Zheng Tong, Philippe Xu, Thierry Den{\oe}ux | Evidential fully convolutional network for semantic segmentation | 34 pages, 21 figures | Applied Intelligence, volume 51, pages 6376-6399 (2021) | 10.1007/s10489-021-02327-0 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a hybrid architecture composed of a fully convolutional network
(FCN) and a Dempster-Shafer layer for image semantic segmentation. In the
so-called evidential FCN (E-FCN), an encoder-decoder architecture first
extracts pixel-wise feature maps from an input image. A Dempster-Shafer layer
then computes mass functions at each pixel location based on distances to
prototypes. Finally, a utility layer performs semantic segmentation from mass
functions and allows for imprecise classification of ambiguous pixels and
outliers. We propose an end-to-end learning strategy for jointly updating the
network parameters, which can make use of soft (imprecise) labels. Experiments
using three databases (Pascal VOC 2011, MIT-scene Parsing and SIFT Flow) show
that the proposed combination improves the accuracy and calibration of semantic
segmentation by assigning confusing pixels to multi-class sets.
| [
{
"created": "Thu, 25 Mar 2021 01:21:22 GMT",
"version": "v1"
}
] | 2022-02-17 | [
[
"Tong",
"Zheng",
""
],
[
"Xu",
"Philippe",
""
],
[
"Denœux",
"Thierry",
""
]
] |
2103.13549 | Zheng Tong | Zheng Tong, Philippe Xu, Thierry Den{\oe}ux | An evidential classifier based on Dempster-Shafer theory and deep
learning | null | Neurocomputing, Vol. 450, pages 275-293, 2021 | 10.1016/j.neucom.2021.03.066 | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new classifier based on Dempster-Shafer (DS) theory and a
convolutional neural network (CNN) architecture for set-valued classification.
In this classifier, called the evidential deep-learning classifier,
convolutional and pooling layers first extract high-dimensional features from
input data. The features are then converted into mass functions and aggregated
by Dempster's rule in a DS layer. Finally, an expected utility layer performs
set-valued classification based on mass functions. We propose an end-to-end
learning strategy for jointly updating the network parameters. Additionally, an
approach for selecting partial multi-class acts is proposed. Experiments on
image recognition, signal processing, and semantic-relationship classification
tasks demonstrate that the proposed combination of deep CNN, DS layer, and
expected utility layer makes it possible to improve classification accuracy and
to make cautious decisions by assigning confusing patterns to multi-class sets.
| [
{
"created": "Thu, 25 Mar 2021 01:29:05 GMT",
"version": "v1"
}
] | 2021-05-07 | [
[
"Tong",
"Zheng",
""
],
[
"Xu",
"Philippe",
""
],
[
"Denœux",
"Thierry",
""
]
] |
2103.13550 | Andreas Hamm | Andreas Hamm and Simon Odrowski (German Aerospace Center DLR) | Term-community-based topic detection with variable resolution | 31 pages, 6 figures | Information. 2021; 12(6):221 | 10.3390/info12060221 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Network-based procedures for topic detection in huge text collections offer
an intuitive alternative to probabilistic topic models. We present in detail a
method that is especially designed with the requirements of domain experts in
mind. Like similar methods, it employs community detection in term
co-occurrence graphs, but it is enhanced by including a resolution parameter
that can be used for changing the targeted topic granularity. We also establish
a term ranking and use semantic word-embedding for presenting term communities
in a way that facilitates their interpretation. We demonstrate the application
of our method with a widely used corpus of general news articles and show the
results of detailed social-sciences expert evaluations of detected topics at
various resolutions. A comparison with topics detected by Latent Dirichlet
Allocation is also included. Finally, we discuss factors that influence topic
interpretation.
| [
{
"created": "Thu, 25 Mar 2021 01:29:39 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Jul 2021 23:26:58 GMT",
"version": "v2"
}
] | 2021-07-27 | [
[
"Hamm",
"Andreas",
"",
"German Aerospace Center DLR"
],
[
"Odrowski",
"Simon",
"",
"German Aerospace Center DLR"
]
] |
2103.13565 | Haobing Liu | Haobing Liu, Yanmin Zhu, Tianzi Zang, Yanan Xu, Jiadi Yu, Feilong Tang | Jointly Modeling Heterogeneous Student Behaviors and Interactions Among
Multiple Prediction Tasks | null | ACM TKDD 2022 | 10.1145/3458023 | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction tasks about students have practical significance for both students
and colleges. Making multiple predictions about students is an important part of
a smart campus. For instance, predicting whether a student will fail to
graduate can alert the student affairs office to take predictive measures to
help the student improve his/her academic performance. With the development of
information technology in colleges, we can collect digital footprints which
encode heterogeneous behaviors continuously. In this paper, we focus on
modeling heterogeneous behaviors and making multiple predictions together,
since some prediction tasks are related and learning the model for a specific
task may have the data sparsity problem. To this end, we propose a variant of
LSTM and a soft-attention mechanism. The proposed LSTM is able to learn the
student profile-aware representation from heterogeneous behavior sequences. The
proposed soft-attention mechanism can dynamically learn different importance
degrees of different days for every student. In this way, heterogeneous
behaviors can be well modeled. In order to model interactions among multiple
prediction tasks, we propose a co-attention mechanism based unit. With the help
of the stacked units, we can explicitly control the knowledge transfer among
multiple tasks. We design three motivating behavior prediction tasks based on a
real-world dataset collected from a college. Qualitative and quantitative
experiments on the three prediction tasks have demonstrated the effectiveness
of our model.
| [
{
"created": "Thu, 25 Mar 2021 02:01:58 GMT",
"version": "v1"
}
] | 2023-09-27 | [
[
"Liu",
"Haobing",
""
],
[
"Zhu",
"Yanmin",
""
],
[
"Zang",
"Tianzi",
""
],
[
"Xu",
"Yanan",
""
],
[
"Yu",
"Jiadi",
""
],
[
"Tang",
"Feilong",
""
]
] |
2103.13578 | Wentao Zhu | Wentao Zhu and Yufang Huang and Daguang Xu and Zhen Qian and Wei Fan
and Xiaohui Xie | Test-Time Training for Deformable Multi-Scale Image Registration | ICRA 2021; 8 pages, 4 figures, 2 big tables | ICRA 2021 | null | null | cs.CV cs.LG cs.NE cs.RO eess.IV | http://creativecommons.org/licenses/by/4.0/ | Registration is a fundamental task in medical robotics and is often a crucial
step for many downstream tasks such as motion analysis, intra-operative
tracking and image segmentation. Popular registration methods such as ANTs and
NiftyReg optimize objective functions for each pair of images from scratch,
which are time-consuming for 3D and sequential images with complex
deformations. Recently, deep learning-based registration approaches such as
VoxelMorph have been emerging and achieve competitive performance. In this
work, we construct a test-time training for deep deformable image registration
to improve the generalization ability of conventional learning-based
registration models. We design multi-scale deep networks to consecutively model
the residual deformations, which is effective for high variational
deformations. Extensive experiments validate the effectiveness of multi-scale
deep registration with test-time training based on Dice coefficient for image
segmentation and mean square error (MSE), normalized local cross-correlation
(NLCC) for tissue dense tracking tasks. Two videos are in
https://www.youtube.com/watch?v=NvLrCaqCiAE and
https://www.youtube.com/watch?v=pEA6ZmtTNuQ
| [
{
"created": "Thu, 25 Mar 2021 03:22:59 GMT",
"version": "v1"
}
] | 2021-03-26 | [
[
"Zhu",
"Wentao",
""
],
[
"Huang",
"Yufang",
""
],
[
"Xu",
"Daguang",
""
],
[
"Qian",
"Zhen",
""
],
[
"Fan",
"Wei",
""
],
[
"Xie",
"Xiaohui",
""
]
] |
2103.13580 | Feng Lu | Feng Lu, Baifan Chen, Xiang-Dong Zhou and Dezhen Song | STA-VPR: Spatio-temporal Alignment for Visual Place Recognition | Accepted for publication in IEEE RA-L 2021 | IEEE Robotics and Automation Letters, 2021 | 10.1109/LRA.2021.3067623 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the methods based on Convolutional Neural Networks (CNNs) have
gained popularity in the field of visual place recognition (VPR). In
particular, the features from the middle layers of CNNs are more robust to
drastic appearance changes than handcrafted features and high-layer features.
Unfortunately, the holistic mid-layer features lack robustness to large
viewpoint changes. Here we split the holistic mid-layer features into local
features, and propose an adaptive dynamic time warping (DTW) algorithm to align
local features from the spatial domain while measuring the distance between two
images. This realizes viewpoint-invariant and condition-invariant place
recognition. Meanwhile, a local matching DTW (LM-DTW) algorithm is applied to
perform image sequence matching based on temporal alignment, which achieves
further improvements and ensures linear time complexity. We perform extensive
experiments on five representative VPR datasets. The results show that the
proposed method significantly improves the CNN-based methods. Moreover, our
method outperforms several state-of-the-art methods while maintaining good
run-time performance. This work provides a novel way to boost the performance
of CNN methods without any re-training for VPR. The code is available at
https://github.com/Lu-Feng/STA-VPR.
| [
{
"created": "Thu, 25 Mar 2021 03:27:42 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Apr 2021 09:00:03 GMT",
"version": "v2"
}
] | 2021-04-12 | [
[
"Lu",
"Feng",
""
],
[
"Chen",
"Baifan",
""
],
[
"Zhou",
"Xiang-Dong",
""
],
[
"Song",
"Dezhen",
""
]
] |
2103.13686 | Hugo Manuel Proen\c{c}a | Hugo Manuel Proen\c{c}a, Peter Gr\"unwald, Thomas B\"ack, Matthijs van
Leeuwen | Robust subgroup discovery | For associated code, see https://github.com/HMProenca/RuleList ;
submitted to Data Mining and Knowledge Discovery Journal | Data Mining and Knowledge Discovery 36 (2022)1885-1970 | 10.1007/s10618-022-00856-x | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the problem of robust subgroup discovery, i.e., finding a set of
interpretable descriptions of subsets that 1) stand out with respect to one or
more target attributes, 2) are statistically robust, and 3) non-redundant. Many
attempts have been made to mine either locally robust subgroups or to tackle
the pattern explosion, but we are the first to address both challenges at the
same time from a global modelling perspective. First, we formulate the broad
model class of subgroup lists, i.e., ordered sets of subgroups, for univariate
and multivariate targets that can consist of nominal or numeric variables,
including traditional top-1 subgroup discovery in its definition. This novel
model class allows us to formalise the problem of optimal robust subgroup
discovery using the Minimum Description Length (MDL) principle, where we resort
to optimal Normalised Maximum Likelihood and Bayesian encodings for nominal and
numeric targets, respectively. Second, finding optimal subgroup lists is
NP-hard. Therefore, we propose SSD++, a greedy heuristic that finds good
subgroup lists and guarantees that the most significant subgroup found
according to the MDL criterion is added in each iteration. In fact, the greedy
gain is shown to be equivalent to a Bayesian one-sample proportion,
multinomial, or t-test between the subgroup and dataset marginal target
distributions plus a multiple hypothesis testing penalty. Furthermore, we
empirically show on 54 datasets that SSD++ outperforms previous subgroup
discovery methods in terms of quality, generalisation on unseen data, and
subgroup list size.
| [
{
"created": "Thu, 25 Mar 2021 09:04:13 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Nov 2021 17:46:19 GMT",
"version": "v2"
},
{
"created": "Fri, 13 May 2022 20:39:47 GMT",
"version": "v3"
},
{
"created": "Thu, 30 Jun 2022 20:24:20 GMT",
"version": "v4"
}
] | 2022-10-11 | [
[
"Proença",
"Hugo Manuel",
""
],
[
"Grünwald",
"Peter",
""
],
[
"Bäck",
"Thomas",
""
],
[
"van Leeuwen",
"Matthijs",
""
]
] |
2103.13725 | Haipeng Li | Haipeng Li and Kunming Luo and Shuaicheng Liu | GyroFlow: Gyroscope-Guided Unsupervised Optical Flow Learning | null | 2021 IEEE/CVF International Conference on Computer Vision (ICCV) | 10.1109/ICCV48922.2021.01263 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing optical flow methods are erroneous in challenging scenes, such as
fog, rain, and night because the basic optical flow assumptions such as
brightness and gradient constancy are broken. To address this problem, we
present an unsupervised learning approach that fuses gyroscope data into optical
flow learning. Specifically, we first convert gyroscope readings into motion
fields named gyro field. Second, we design a self-guided fusion module to fuse
the background motion extracted from the gyro field with the optical flow and
guide the network to focus on motion details. To the best of our knowledge,
this is the first deep learning-based framework that fuses gyroscope data and
image content for optical flow learning. To validate our method, we propose a
new dataset that covers regular and challenging scenes. Experiments show that
our method outperforms the state-of-the-art methods in both regular and challenging
scenes. Code and dataset are available at
https://github.com/megvii-research/GyroFlow.
| [
{
"created": "Thu, 25 Mar 2021 10:14:57 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Aug 2021 07:46:31 GMT",
"version": "v2"
}
] | 2023-06-13 | [
[
"Li",
"Haipeng",
""
],
[
"Luo",
"Kunming",
""
],
[
"Liu",
"Shuaicheng",
""
]
] |
2103.13799 | David Vilares | David Vilares and Marcos Garcia and Carlos G\'omez-Rodr\'iguez | Bertinho: Galician BERT Representations | Accepted in the journal Procesamiento del Lenguaje Natural | Procesamiento del Lenguaje Natural. 66 (2021) 13-26 | 10.26342/2021-66-1 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a monolingual BERT model for Galician. We follow the
recent trend that shows that it is feasible to build robust monolingual BERT
models even for relatively low-resource languages, while performing better than
the well-known official multilingual BERT (mBERT). More particularly, we
release two monolingual Galician BERT models, built using 6 and 12 transformer
layers, respectively; trained with limited resources (~45 million tokens on a
single GPU of 24GB). We then provide an exhaustive evaluation on a number of
tasks such as POS-tagging, dependency parsing and named entity recognition. For
this purpose, all these tasks are cast in a pure sequence labeling setup in
order to run BERT without the need to include any additional layers on top of
it (we only use an output classification layer to map the contextualized
representations into the predicted label). The experiments show that our
models, especially the 12-layer one, outperform the results of mBERT in most
tasks.
| [
{
"created": "Thu, 25 Mar 2021 12:51:34 GMT",
"version": "v1"
}
] | 2021-04-08 | [
[
"Vilares",
"David",
""
],
[
"Garcia",
"Marcos",
""
],
[
"Gómez-Rodríguez",
"Carlos",
""
]
] |
2103.13823 | Sunil Kumar Kopparapu Dr | Ayush Tripathi and Rupayan Chakraborty and Sunil Kumar Kopparapu | A Novel Adaptive Minority Oversampling Technique for Improved
Classification in Data Imbalanced Scenarios | 8 pages | ICPR 2020 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Imbalance in the proportion of training samples belonging to different
classes often poses performance degradation of conventional classifiers. This
is primarily due to the tendency of the classifier to be biased towards the
majority classes in the imbalanced dataset. In this paper, we propose a novel
three step technique to address imbalanced data. As a first step we
significantly oversample the minority class distribution by employing the
traditional Synthetic Minority OverSampling Technique (SMOTE) algorithm using
the neighborhood of the minority class samples and in the next step we
partition the generated samples using a Gaussian-Mixture Model based clustering
algorithm. In the final step synthetic data samples are chosen based on the
weight associated with the cluster, the weight itself being determined by the
distribution of the majority class samples. Extensive experiments on several
standard datasets from diverse domains show the usefulness of the proposed
technique in comparison with the original SMOTE and its state-of-the-art
variant algorithms.
| [
{
"created": "Wed, 24 Mar 2021 09:58:02 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Mar 2021 18:12:45 GMT",
"version": "v2"
}
] | 2021-03-30 | [
[
"Tripathi",
"Ayush",
""
],
[
"Chakraborty",
"Rupayan",
""
],
[
"Kopparapu",
"Sunil Kumar",
""
]
] |
2103.13922 | Daniel Martin | Daniel Martin, Ana Serrano, Alexander W. Bergman, Gordon Wetzstein,
Belen Masia | ScanGAN360: A Generative Model of Realistic Scanpaths for 360$^{\circ}$
Images | null | IEEE Transactions on Visualization and Computer Graphics 2022 | 10.1109/TVCG.2022.3150502 | null | cs.CV cs.GR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Understanding and modeling the dynamics of human gaze behavior in 360$^\circ$
environments is a key challenge in computer vision and virtual reality.
Generative adversarial approaches could alleviate this challenge by generating
a large number of possible scanpaths for unseen images. Existing methods for
scanpath generation, however, do not adequately predict realistic scanpaths for
360$^\circ$ images. We present ScanGAN360, a new generative adversarial
approach to address this challenging problem. Our network generator is tailored
to the specifics of 360$^\circ$ images representing immersive environments.
Specifically, we accomplish this by leveraging the use of a spherical
adaptation of dynamic-time warping as a loss function and proposing a novel
parameterization of 360$^\circ$ scanpaths. The quality of our scanpaths
outperforms competing approaches by a large margin and is almost on par with
the human baseline. ScanGAN360 thus allows fast simulation of large numbers of
virtual observers, whose behavior mimics real users, enabling a better
understanding of gaze behavior and novel applications in virtual scene design.
| [
{
"created": "Thu, 25 Mar 2021 15:34:18 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Martin",
"Daniel",
""
],
[
"Serrano",
"Ana",
""
],
[
"Bergman",
"Alexander W.",
""
],
[
"Wetzstein",
"Gordon",
""
],
[
"Masia",
"Belen",
""
]
] |
2103.14015 | Agnieszka Szczotka | Agnieszka Barbara Szczotka, Dzhoshkun Ismail Shakir, Matthew J.
Clarkson, Stephen P. Pereira, Tom Vercauteren | Zero-shot super-resolution with a physically-motivated downsampling
kernel for endomicroscopy | null | IEEE Transactions on Medical Imaging, 2021 | 10.1109/TMI.2021.3067512 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Super-resolution (SR) methods have seen significant advances thanks to the
development of convolutional neural networks (CNNs). CNNs have been
successfully employed to improve the quality of endomicroscopy imaging. Yet,
the inherent limitation of research on SR in endomicroscopy remains the lack of
ground truth high-resolution (HR) images, commonly used for both supervised
training and reference-based image quality assessment (IQA). Therefore,
alternative methods, such as unsupervised SR are being explored. To address the
need for non-reference image quality improvement, we designed a novel zero-shot
super-resolution (ZSSR) approach that relies only on the endomicroscopy data to
be processed in a self-supervised manner without the need for ground-truth HR
images. We tailored the proposed pipeline to the idiosyncrasies of
endomicroscopy by introducing both: a physically-motivated Voronoi downscaling
kernel accounting for the endomicroscope's irregular fibre-based sampling
pattern, and realistic noise patterns. We also took advantage of video
sequences to exploit a sequence of images for self-supervised zero-shot image
quality improvement. We run ablation studies to assess our contribution with
regard to the downscaling kernel and noise simulation. We validate our
methodology on both synthetic and original data. Synthetic experiments were
assessed with reference-based IQA, while our results for original images were
evaluated in a user study conducted with both expert and non-expert observers.
The results demonstrated superior performance in image quality of ZSSR
reconstructions in comparison to the baseline method. The ZSSR is also
competitive when compared to supervised single-image SR, especially being the
preferred reconstruction technique by experts.
| [
{
"created": "Thu, 25 Mar 2021 17:47:02 GMT",
"version": "v1"
}
] | 2021-03-26 | [
[
"Szczotka",
"Agnieszka Barbara",
""
],
[
"Shakir",
"Dzhoshkun Ismail",
""
],
[
"Clarkson",
"Matthew J.",
""
],
[
"Pereira",
"Stephen P.",
""
],
[
"Vercauteren",
"Tom",
""
]
] |
2103.14107 | Chuhua Wang | Chuhua Wang, Yuchen Wang, Mingze Xu, David J. Crandall | Stepwise Goal-Driven Networks for Trajectory Prediction | Accepted By RA-L and ICRA2022 | in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp.
2716-2723, April 2022 | 10.1109/LRA.2022.3145090 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to predict the future trajectories of observed agents (e.g.,
pedestrians or vehicles) by estimating and using their goals at multiple time
scales. We argue that the goal of a moving agent may change over time, and
modeling goals continuously provides more accurate and detailed information for
future trajectory estimation. To this end, we present a recurrent network for
trajectory prediction, called Stepwise Goal-Driven Network (SGNet). Unlike
prior work that models only a single, long-term goal, SGNet estimates and uses
goals at multiple temporal scales. In particular, it incorporates an encoder
that captures historical information, a stepwise goal estimator that predicts
successive goals into the future, and a decoder that predicts future
trajectory. We evaluate our model on three first-person traffic datasets
(HEV-I, JAAD, and PIE) as well as on three bird's eye view datasets (NuScenes,
ETH, and UCY), and show that our model achieves state-of-the-art results on all
datasets. Code has been made available at:
https://github.com/ChuhuaW/SGNet.pytorch.
| [
{
"created": "Thu, 25 Mar 2021 19:51:54 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jan 2022 05:56:54 GMT",
"version": "v2"
},
{
"created": "Sun, 27 Mar 2022 08:22:45 GMT",
"version": "v3"
}
] | 2022-03-29 | [
[
"Wang",
"Chuhua",
""
],
[
"Wang",
"Yuchen",
""
],
[
"Xu",
"Mingze",
""
],
[
"Crandall",
"David J.",
""
]
] |
2103.14161 | Thanh Nguyen | Thanh Nguyen-Duc, Natasha Mulligan, Gurdeep S. Mannu, Joao H.
Bettencourt-Silva | Deep EHR Spotlight: a Framework and Mechanism to Highlight Events in
Electronic Health Records for Explainable Predictions | AMIA 2021 Virtual Informatics Summit | AMIA 2021 Virtual Informatics Summit | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The wide adoption of Electronic Health Records (EHR) has resulted in large
amounts of clinical data becoming available, which promises to support service
delivery and advance clinical and informatics research. Deep learning
techniques have demonstrated performance in predictive analytic tasks using
EHRs yet they typically lack model result transparency or explainability
functionalities and require cumbersome pre-processing tasks. Moreover, EHRs
contain heterogeneous and multi-modal data points such as text, numbers and
time series which further hinder visualisation and interpretability. This paper
proposes a deep learning framework to: 1) encode patient pathways from EHRs
into images, 2) highlight important events within pathway images, and 3) enable
more complex predictions with additional intelligibility. The proposed method
relies on a deep attention mechanism for visualisation of the predictions and
allows predicting multiple sequential outcomes.
| [
{
"created": "Thu, 25 Mar 2021 22:30:14 GMT",
"version": "v1"
}
] | 2022-02-14 | [
[
"Nguyen-Duc",
"Thanh",
""
],
[
"Mulligan",
"Natasha",
""
],
[
"Mannu",
"Gurdeep S.",
""
],
[
"Bettencourt-Silva",
"Joao H.",
""
]
] |
2103.14250 | Rohitash Chandra | Rohitash Chandra, Shaurya Goyal, Rishabh Gupta | Evaluation of deep learning models for multi-step ahead time series
prediction | null | IEEE Access, 2021 | 10.1109/ACCESS.2021.3085085 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Time series prediction with neural networks has been the focus of much
research in the past few decades. Given the recent deep learning revolution,
there has been much attention in using deep learning models for time series
prediction, and hence it is important to evaluate their strengths and
weaknesses. In this paper, we present an evaluation study that compares the
performance of deep learning models for multi-step ahead time series
prediction. The deep learning methods comprise simple recurrent neural
networks, long short-term memory (LSTM) networks, bidirectional LSTM networks,
encoder-decoder LSTM networks, and convolutional neural networks. We provide a
further comparison with simple neural networks that use stochastic gradient
descent and adaptive moment estimation (Adam) for training. We focus on
univariate time series for multi-step-ahead prediction from benchmark
time-series datasets and provide a further comparison of the results with
related methods from the literature. The results show that the bidirectional
and encoder-decoder LSTM network provides the best performance in accuracy for
the given time series problems.
| [
{
"created": "Fri, 26 Mar 2021 04:07:11 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jun 2021 10:43:11 GMT",
"version": "v2"
}
] | 2021-06-08 | [
[
"Chandra",
"Rohitash",
""
],
[
"Goyal",
"Shaurya",
""
],
[
"Gupta",
"Rishabh",
""
]
] |
2103.14326 | Wenbo Hu | Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, Tien-Tsin Wong | Bidirectional Projection Network for Cross Dimension Scene Understanding | CVPR 2021 (Oral) | CVPR 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 2D image representations are in regular grids and can be processed
efficiently, whereas 3D point clouds are unordered and scattered in 3D space.
The information inside these two visual domains is well complementary, e.g., 2D
images have fine-grained texture while 3D point clouds contain plentiful
geometry information. However, most current visual recognition systems process
them individually. In this paper, we present a \emph{bidirectional projection
network (BPNet)} for joint 2D and 3D reasoning in an end-to-end manner. It
contains 2D and 3D sub-networks with symmetric architectures that are
connected by our proposed \emph{bidirectional projection module (BPM)}. Via the
\emph{BPM}, complementary 2D and 3D information can interact with each other in
multiple architectural levels, such that advantages in these two visual domains
can be combined for better scene recognition. Extensive quantitative and
qualitative experimental evaluations show that joint reasoning over 2D and 3D
visual domains can benefit both 2D and 3D scene understanding simultaneously.
Our \emph{BPNet} achieves top performance on the ScanNetV2 benchmark for both
2D and 3D semantic segmentation. Code is available at
\url{https://github.com/wbhu/BPNet}.
| [
{
"created": "Fri, 26 Mar 2021 08:31:39 GMT",
"version": "v1"
}
] | 2021-03-29 | [
[
"Hu",
"Wenbo",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Jiang",
"Li",
""
],
[
"Jia",
"Jiaya",
""
],
[
"Wong",
"Tien-Tsin",
""
]
] |
2103.14441 | Youngeun Kim | Youngeun Kim, Priyadarshini Panda | Visual Explanations from Spiking Neural Networks using Interspike
Intervals | null | Scientific Reports 11, 2021 | 10.1038/S41598-021-98448 | 19037 | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spiking Neural Networks (SNNs) compute and communicate with asynchronous
binary temporal events that can lead to significant energy savings with
neuromorphic hardware. Recent algorithmic efforts on training SNNs have shown
competitive performance on a variety of classification tasks. However, a
visualization tool for analysing and explaining the internal spike behavior of
such temporal deep SNNs has not been explored. In this paper, we propose a new
concept of bio-plausible visualization for SNNs, called Spike Activation Map
(SAM). The proposed SAM circumvents the non-differentiable characteristic of
spiking neurons by eliminating the need for calculating gradients to obtain
visual explanations. Instead, SAM calculates a temporal visualization map by
forward propagating input spikes over different time-steps. SAM yields an
attention map corresponding to each time-step of input data by highlighting
neurons with short inter-spike interval activity. Interestingly, without both
the backpropagation process and the class label, SAM highlights the
discriminative region of the image while capturing fine-grained details. With
SAM, for the first time, we provide a comprehensive analysis on how internal
spikes work in various SNN training configurations depending on optimization
types, leak behavior, as well as when faced with adversarial examples.
| [
{
"created": "Fri, 26 Mar 2021 12:49:46 GMT",
"version": "v1"
}
] | 2021-10-18 | [
[
"Kim",
"Youngeun",
""
],
[
"Panda",
"Priyadarshini",
""
]
] |
2103.14453 | Markus Bayer | Markus Bayer, Marc-Andr\'e Kaufhold, Bj\"orn Buchhold, Marcel Keller,
J\"org Dallmeyer and Christian Reuter | Data Augmentation in Natural Language Processing: A Novel Text
Generation Approach for Long and Short Text Classifiers | 17 pages, 3 figure, 5 tables | International Journal of Machine Learning and Cybernetics (2022) | 10.1007/s13042-022-01553-3 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many cases of machine learning, research suggests that the development of
training data might have a higher relevance than the choice and modelling of
classifiers themselves. Thus, data augmentation methods have been developed to
improve classifiers by artificially created training data. In NLP, there is the
challenge of establishing universal rules for text transformations which
provide new linguistic patterns. In this paper, we present and evaluate a text
generation method suitable to increase the performance of classifiers for long
and short texts. We achieved promising improvements when evaluating short as
well as long text tasks with the enhancement by our text generation method.
Especially with regard to small data analytics, additive accuracy gains of up
to 15.53% and 3.56% are achieved within a constructed low data regime, compared
to the no augmentation baseline and another data augmentation technique. As the
current track of these constructed regimes is not universally applicable, we
also show major improvements in several real world low data tasks (up to +4.84
F1-score). Since we are evaluating the method from many perspectives (in total
11 datasets), we also observe situations where the method might not be
suitable. We discuss implications and patterns for the successful application
of our approach on different types of datasets.
| [
{
"created": "Fri, 26 Mar 2021 13:16:07 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jul 2022 13:10:00 GMT",
"version": "v2"
}
] | 2022-07-25 | [
[
"Bayer",
"Markus",
""
],
[
"Kaufhold",
"Marc-André",
""
],
[
"Buchhold",
"Björn",
""
],
[
"Keller",
"Marcel",
""
],
[
"Dallmeyer",
"Jörg",
""
],
[
"Reuter",
"Christian",
""
]
] |
2103.14529 | Xinggang Wang | Xinggang Wang, Zhaojin Huang, Bencheng Liao, Lichao Huang, Yongchao
Gong, Chang Huang | Real-Time and Accurate Object Detection in Compressed Video by Long
Short-term Feature Aggregation | null | Computer Vision and Image Understanding,Volume 206, May 2021 | 10.1016/j.cviu.2021.103188 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Video object detection is a fundamental problem in computer vision and has a
wide spectrum of applications. Based on deep networks, video object detection
is actively studied for pushing the limits of detection speed and accuracy. To
reduce the computation cost, we sparsely sample key frames in video and treat
the remaining frames as non-key frames; a large and deep network is used to extract
features for key frames and a tiny network is used for non-key frames. To
enhance the features of non-key frames, we propose a novel short-term feature
aggregation method to propagate the rich information in key frame features to
non-key frame features in a fast way. The fast feature aggregation is enabled
by the freely available motion cues in compressed videos. Further, key frame
features are also aggregated based on optical flow. The propagated deep
features are then integrated with the directly extracted features for object
detection. The feature extraction and feature integration parameters are
optimized in an end-to-end manner. The proposed video object detection network
is evaluated on the large-scale ImageNet VID benchmark and achieves 77.2\% mAP,
which is on-par with state-of-the-art accuracy, at the speed of 30 FPS using a
Titan X GPU. The source codes are available at
\url{https://github.com/hustvl/LSFA}.
| [
{
"created": "Thu, 25 Mar 2021 01:38:31 GMT",
"version": "v1"
}
] | 2021-03-29 | [
[
"Wang",
"Xinggang",
""
],
[
"Huang",
"Zhaojin",
""
],
[
"Liao",
"Bencheng",
""
],
[
"Huang",
"Lichao",
""
],
[
"Gong",
"Yongchao",
""
],
[
"Huang",
"Chang",
""
]
] |
2103.14620 | Irene Li | Irene Li, Aosong Feng, Hao Wu, Tianxiao Li, Toyotaro Suzumura and
Ruihai Dong | LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label
Text Classification | 8 tables, 3 figures | DLG4NLP Workshop, NAACL 2022 | null | null | cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | Multi-label text classification (MLTC) is an attractive and challenging task
in natural language processing (NLP). Compared with single-label text
classification, MLTC has a wider range of applications in practice. In this
paper, we propose a label-interpretable graph convolutional network model to
solve the MLTC problem by modeling tokens and labels as nodes in a
heterogeneous graph. In this way, we are able to take into account multiple
relationships including token-level relationships. Besides, the model allows
better interpretability for predicted labels as the token-label edges are
exposed. We evaluate our method on four real-world datasets and it achieves
competitive scores against selected baseline methods. Specifically, this model
achieves a gain of 0.14 on the F1 score in the small label set MLTC, and 0.07
in the large label set scenario.
| [
{
"created": "Fri, 26 Mar 2021 17:33:31 GMT",
"version": "v1"
},
{
"created": "Sun, 22 May 2022 18:42:50 GMT",
"version": "v2"
}
] | 2022-05-24 | [
[
"Li",
"Irene",
""
],
[
"Feng",
"Aosong",
""
],
[
"Wu",
"Hao",
""
],
[
"Li",
"Tianxiao",
""
],
[
"Suzumura",
"Toyotaro",
""
],
[
"Dong",
"Ruihai",
""
]
] |
2103.14633 | Michael S. Ryoo | Iretiayo Akinola, Anelia Angelova, Yao Lu, Yevgen Chebotar, Dmitry
Kalashnikov, Jacob Varley, Julian Ibarz, Michael S. Ryoo | Visionary: Vision architecture discovery for robot learning | null | ICRA 2021 | null | null | cs.RO cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a vision-based architecture search algorithm for robot
manipulation learning, which discovers interactions between low dimension
action inputs and high dimensional visual inputs. Our approach automatically
designs architectures while training on the task - discovering novel ways of
combining and attending image feature representations with actions as well as
features from previous layers. The obtained new architectures demonstrate
better task success rates, in some cases with a large margin, compared to a
recent high performing baseline. Our real robot experiments also confirm that
it improves grasping performance by 6%. This is the first approach to
demonstrate a successful neural architecture search and attention connectivity
search for a real-robot task.
| [
{
"created": "Fri, 26 Mar 2021 17:51:43 GMT",
"version": "v1"
}
] | 2021-03-29 | [
[
"Akinola",
"Iretiayo",
""
],
[
"Angelova",
"Anelia",
""
],
[
"Lu",
"Yao",
""
],
[
"Chebotar",
"Yevgen",
""
],
[
"Kalashnikov",
"Dmitry",
""
],
[
"Varley",
"Jacob",
""
],
[
"Ibarz",
"Julian",
""
],
[
"Ryoo",
"Michael S.",
""
]
] |
2103.14651 | Limor Gultchin | David Watson, Limor Gultchin, Ankur Taly, Luciano Floridi | Local Explanations via Necessity and Sufficiency: Unifying Theory and
Practice | null | 37th Conference on Uncertainty in Artificial Intelligence (UAI
2021) | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Necessity and sufficiency are the building blocks of all successful
explanations. Yet despite their importance, these notions have been
conceptually underdeveloped and inconsistently applied in explainable
artificial intelligence (XAI), a fast-growing research area that is so far
lacking in firm theoretical foundations. Building on work in logic,
probability, and causality, we establish the central role of necessity and
sufficiency in XAI, unifying seemingly disparate methods in a single formal
framework. We provide a sound and complete algorithm for computing explanatory
factors with respect to a given context, and demonstrate its flexibility and
competitive performance against state of the art alternatives on various tasks.
| [
{
"created": "Sat, 27 Mar 2021 01:58:53 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 17:53:48 GMT",
"version": "v2"
}
] | 2021-06-11 | [
[
"Watson",
"David",
""
],
[
"Gultchin",
"Limor",
""
],
[
"Taly",
"Ankur",
""
],
[
"Floridi",
"Luciano",
""
]
] |
2103.14749 | Anish Athalye | Curtis G. Northcutt, Anish Athalye, Jonas Mueller | Pervasive Label Errors in Test Sets Destabilize Machine Learning
Benchmarks | Demo available at https://labelerrors.com/ and source code available
at https://github.com/cleanlab/label-errors | 35th Conference on Neural Information Processing Systems (NeurIPS
2021) Track on Datasets and Benchmarks | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We identify label errors in the test sets of 10 of the most commonly-used
computer vision, natural language, and audio datasets, and subsequently study
the potential for these label errors to affect benchmark results. Errors in
test sets are numerous and widespread: we estimate an average of at least 3.3%
errors across the 10 datasets, where for example label errors comprise at least
6% of the ImageNet validation set. Putative label errors are identified using
confident learning algorithms and then human-validated via crowdsourcing (51%
of the algorithmically-flagged candidates are indeed erroneously labeled, on
average across the datasets). Traditionally, machine learning practitioners
choose which model to deploy based on test accuracy - our findings advise
caution here, proposing that judging models over correctly labeled test sets
may be more useful, especially for noisy real-world datasets. Surprisingly, we
find that lower capacity models may be practically more useful than higher
capacity models in real-world datasets with high proportions of erroneously
labeled data. For example, on ImageNet with corrected labels: ResNet-18
outperforms ResNet-50 if the prevalence of originally mislabeled test examples
increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms
VGG-19 if the prevalence of originally mislabeled test examples increases by
just 5%. Test set errors across the 10 datasets can be viewed at
https://labelerrors.com and all label errors can be reproduced by
https://github.com/cleanlab/label-errors.
| [
{
"created": "Fri, 26 Mar 2021 21:54:36 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Apr 2021 02:32:02 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Apr 2021 19:41:55 GMT",
"version": "v3"
},
{
"created": "Sun, 7 Nov 2021 13:04:04 GMT",
"version": "v4"
}
] | 2021-11-09 | [
[
"Northcutt",
"Curtis G.",
""
],
[
"Athalye",
"Anish",
""
],
[
"Mueller",
"Jonas",
""
]
] |
2103.14757 | Ikechukwu Onyenwe | Chidinma A. Nwafor and Ikechukwu E. Onyenwe | An Automated Multiple-Choice Question Generation Using Natural Language
Processing Techniques | Recently accepted by the International Journal on Natural Language
Computing (IJNLC) awaiting publication, 11 pages, 4 figures, 5 tables | International Journal on Natural Language Computing(IJNLC), April
2021 | 10.5121/ijnlc.2021.10201 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic multiple-choice question generation (MCQG) is a useful yet
challenging task in Natural Language Processing (NLP). It is the task of
automatic generation of correct and relevant questions from textual data.
Despite its usefulness, manually creating sizeable, meaningful and relevant
questions is a time-consuming and challenging task for teachers. In this paper,
we present an NLP-based system for automatic MCQG for Computer-Based Testing
Examination (CBTE). We used an NLP technique to extract keywords that are important
words in a given lesson material. To validate that the system is not perverse,
five lesson materials were used to check the effectiveness and efficiency of
the system. The manually extracted keywords by the teacher were compared to the
auto-generated keywords and the result shows that the system was capable of
extracting keywords from lesson materials in setting examinable questions. This
outcome is presented in a user-friendly interface for easy accessibility.
| [
{
"created": "Fri, 26 Mar 2021 22:39:59 GMT",
"version": "v1"
}
] | 2021-05-04 | [
[
"Nwafor",
"Chidinma A.",
""
],
[
"Onyenwe",
"Ikechukwu E.",
""
]
] |
2103.14770 | Artan Sheshmani | Artan Sheshmani and Yizhuang You | Categorical Representation Learning: Morphism is All You Need | Fixed some typos. 16 pages. Comments are welcome | Machine Learning: Science and Technology, 3, 2021 | 10.1088/2632-2153/ac2c5d | 015016 | cs.LG cond-mat.dis-nn cs.AI math.CT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide a construction for categorical representation learning and
introduce the foundations of "$\textit{categorifier}$". The central theme in
representation learning is the idea of $\textbf{everything to vector}$. Every
object in a dataset $\mathcal{S}$ can be represented as a vector in
$\mathbb{R}^n$ by an $\textit{encoding map}$ $E:
\mathcal{O}bj(\mathcal{S})\to\mathbb{R}^n$. More importantly, every morphism
can be represented as a matrix $E:
\mathcal{H}om(\mathcal{S})\to\mathbb{R}^{n}_{n}$. The encoding map $E$ is
generally modeled by a $\textit{deep neural network}$. The goal of
representation learning is to design appropriate tasks on the dataset to train
the encoding map (assuming that an encoding is optimal if it universally
optimizes the performance on various tasks). However, the latter is still a
$\textit{set-theoretic}$ approach. The goal of the current article is to
promote the representation learning to a new level via a
$\textit{category-theoretic}$ approach. As a proof of concept, we provide an
example of a text translator equipped with our technology, showing that our
categorical learning model outperforms the current deep learning models by 17
times. The content of the current article is part of the recent US patent
proposal (patent application number: 63110906).
| [
{
"created": "Fri, 26 Mar 2021 23:47:15 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Mar 2021 17:34:05 GMT",
"version": "v2"
}
] | 2023-01-25 | [
[
"Sheshmani",
"Artan",
""
],
[
"You",
"Yizhuang",
""
]
] |
2103.14950 | Michael Green | Christoph Salge, Michael Cerny Green, Rodrigo Canaan, Filip Skwarski,
Rafael Fritsch, Adrian Brightmoore, Shaofang Ye, Changxing Cao and Julian
Togelius | The AI Settlement Generation Challenge in Minecraft: First Year Report | 14 pages, 9 figures, published in KI-K\"unstliche Intelligenz | KI-K\"unstliche Intelligenz 2020 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article outlines what we learned from the first year of the AI
Settlement Generation Competition in Minecraft, a competition about producing
AI programs that can generate interesting settlements in Minecraft for an
unseen map. This challenge seeks to focus research into adaptive and holistic
procedural content generation. Generating Minecraft towns and villages given
existing maps is a suitable task for this, as it requires the generated content
to be adaptive, functional, evocative and aesthetic at the same time. Here, we
present the results from the first iteration of the competition. We discuss the
evaluation methodology, present the different technical approaches by the
competitors, and outline the open problems.
| [
{
"created": "Sat, 27 Mar 2021 17:27:05 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Salge",
"Christoph",
""
],
[
"Green",
"Michael Cerny",
""
],
[
"Canaan",
"Rodrigo",
""
],
[
"Skwarski",
"Filip",
""
],
[
"Fritsch",
"Rafael",
""
],
[
"Brightmoore",
"Adrian",
""
],
[
"Ye",
"Shaofang",
""
],
[
"Cao",
"Changxing",
""
],
[
"Togelius",
"Julian",
""
]
] |
2103.14968 | Rameen Abdal | Rameen Abdal, Peihao Zhu, Niloy Mitra, Peter Wonka | Labels4Free: Unsupervised Segmentation using StyleGAN | "Project Page: https://rameenabdal.github.io/Labels4Free/" | ICCV 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose an unsupervised segmentation framework for StyleGAN generated
objects. We build on two main observations. First, the features generated by
StyleGAN hold valuable information that can be utilized towards training
segmentation networks. Second, the foreground and background can often be
treated to be largely independent and be composited in different ways. For our
solution, we propose to augment the StyleGAN2 generator architecture with a
segmentation branch and to split the generator into a foreground and background
network. This enables us to generate soft segmentation masks for the foreground
object in an unsupervised fashion. On multiple object classes, we report
comparable results against state-of-the-art supervised segmentation networks,
while against the best unsupervised segmentation approach we demonstrate a
clear improvement, both in qualitative and quantitative metrics.
| [
{
"created": "Sat, 27 Mar 2021 18:59:22 GMT",
"version": "v1"
}
] | 2021-09-28 | [
[
"Abdal",
"Rameen",
""
],
[
"Zhu",
"Peihao",
""
],
[
"Mitra",
"Niloy",
""
],
[
"Wonka",
"Peter",
""
]
] |
2103.14972 | Francielle Alves Vargas | Francielle Alves Vargas, Isabelle Carvalho, Fabiana Rodrigues de
G\'oes, Fabr\'icio Benevenuto, Thiago Alexandre Salgueiro Pardo | HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments
for Offensive Language and Hate Speech Detection | Published at LREC 2022 Proceedings | https://aclanthology.org/2022.lrec-1.777/ | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Due to the severity of the social media offensive and hateful comments in
Brazil, and the lack of research in Portuguese, this paper provides the first
large-scale expert annotated corpus of Brazilian Instagram comments for hate
speech and offensive language detection. The HateBR corpus was collected from
the comment section of Brazilian politicians' accounts on Instagram and
manually annotated by specialists, reaching a high inter-annotator agreement.
The corpus consists of 7,000 documents annotated according to three different
layers: a binary classification (offensive versus non-offensive comments),
offensiveness-level classification (highly, moderately, and slightly
offensive), and nine hate speech groups (xenophobia, racism, homophobia,
sexism, religious intolerance, partyism, apology for the dictatorship,
antisemitism, and fatphobia). We also implemented baseline experiments for
offensive language and hate speech detection and compared them with a
literature baseline. Results show that the baseline experiments on our corpus
outperform the current state-of-the-art for the Portuguese language.
| [
{
"created": "Sat, 27 Mar 2021 19:43:16 GMT",
"version": "v1"
},
{
"created": "Sat, 3 Apr 2021 22:15:40 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Apr 2021 10:02:52 GMT",
"version": "v3"
},
{
"created": "Sun, 2 May 2021 20:58:41 GMT",
"version": "v4"
},
{
"created": "Sun, 9 May 2021 16:41:18 GMT",
"version": "v5"
},
{
"created": "Tue, 27 Dec 2022 12:24:13 GMT",
"version": "v6"
}
] | 2022-12-29 | [
[
"Vargas",
"Francielle Alves",
""
],
[
"Carvalho",
"Isabelle",
""
],
[
"de Góes",
"Fabiana Rodrigues",
""
],
[
"Benevenuto",
"Fabrício",
""
],
[
"Pardo",
"Thiago Alexandre Salgueiro",
""
]
] |
2103.15004 | Carolin Wienrich Prof. Dr. | Carolin Wienrich and Marc Erich Latoschik | eXtended Artificial Intelligence: New Prospects of Human-AI Interaction
Research | null | Front. Virtual Real., 06 September 2021, Sec. Virtual Reality and
Human Behaviour | 10.3389/frvir.2021.686783 | null | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI) covers a broad spectrum of computational
problems and use cases. Many of those implicate profound and sometimes
intricate questions of how humans interact or should interact with AIs.
Moreover, many users or future users do have abstract ideas of what AI is,
significantly depending on the specific embodiment of AI applications.
Human-centered-design approaches would suggest evaluating the impact of
different embodiments on human perception of and interaction with AI, an
approach that is difficult to realize due to the sheer complexity of
application fields and embodiments in reality. However, here XR opens new
possibilities to research human-AI interactions. The article's contribution is
twofold: First, it provides a theoretical treatment and model of human-AI
interaction based on an XR-AI continuum as a framework for and a perspective of
different approaches of XR-AI combinations. It motivates XR-AI combinations as
a method to learn about the effects of prospective human-AI interfaces and
shows why the combination of XR and AI fruitfully contributes to a valid and
systematic investigation of human-AI interactions and interfaces. Second, the
article provides two exemplary experiments investigating the aforementioned
approach for two distinct AI-systems. The first experiment reveals an
interesting gender effect in human-robot interaction, while the second
experiment reveals an Eliza effect of a recommender system. Here the article
introduces two paradigmatic implementations of the proposed XR testbed for
human-AI interactions and interfaces and shows how a valid and systematic
investigation can be conducted. In sum, the article opens new perspectives on
how XR benefits human-centered AI design and development.
| [
{
"created": "Sat, 27 Mar 2021 22:12:06 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 11:18:14 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Apr 2021 16:10:19 GMT",
"version": "v3"
}
] | 2022-09-19 | [
[
"Wienrich",
"Carolin",
""
],
[
"Latoschik",
"Marc Erich",
""
]
] |
2103.15076 | Huan Lei | Huan Lei, Naveed Akhtar, Ajmal Mian | Picasso: A CUDA-based Library for Deep Learning over 3D Meshes | Accepted to CVPR2021 | CVPR,2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Picasso, a CUDA-based library comprising novel modules for deep
learning over complex real-world 3D meshes. Hierarchical neural architectures
have proved effective in multi-scale feature extraction which signifies the
need for fast mesh decimation. However, existing methods rely on CPU-based
implementations to obtain multi-resolution meshes. We design GPU-accelerated
mesh decimation to facilitate network resolution reduction efficiently
on-the-fly. Pooling and unpooling modules are defined on the vertex clusters
gathered during decimation. For feature learning over meshes, Picasso contains
three types of novel convolutions namely, facet2vertex, vertex2facet, and
facet2facet convolution. Hence, it treats a mesh as a geometric structure
comprising vertices and facets, rather than a spatial graph with edges as
previous methods do. Picasso also incorporates a fuzzy mechanism in its filters
for robustness to mesh sampling (vertex density). It exploits Gaussian mixtures
to define fuzzy coefficients for the facet2vertex convolution, and barycentric
interpolation to define the coefficients for the remaining two convolutions. In
this release, we demonstrate the effectiveness of the proposed modules with
competitive segmentation results on S3DIS. The library will be made public
through https://github.com/hlei-ziyan/Picasso.
| [
{
"created": "Sun, 28 Mar 2021 08:04:50 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Lei",
"Huan",
""
],
[
"Akhtar",
"Naveed",
""
],
[
"Mian",
"Ajmal",
""
]
] |
2103.15206 | Sultan Mahmud | Sultan Mahmud, Md. Mohsin, Ijaz Ahmed Khan, Ashraf Uddin Mian, Miah
Akib Zaman | Knowledge, beliefs, attitudes and perceived risk about COVID-19 vaccine
and determinants of COVID-19 vaccine acceptance in Bangladesh | Accepted by PLOS ONE: https://doi.org/10.1371/journal.pone.0257096 | Plos One 16(9):e0257096 (2021) | 10.1371/journal.pone.0257096 | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | A total of 605 eligible respondents took part in this survey (population size
1630046161 and required sample size 591) with an age range of 18 to 100. A
large proportion of the respondents are aged less than 50 (82%) and male
(62.15%). The majority of the respondents live in urban areas (60.83%). A total
of 61.16% (370/605) of the respondents were willing to accept/take the COVID-19
vaccine. Among the accepted group, only 35.14% showed the willingness to take
the COVID-19 vaccine immediately, while 64.86% would delay the vaccination
until they are confirmed about the vaccine's efficacy and safety or COVID-19
becomes deadlier in Bangladesh. The regression results showed age, gender,
location (urban/rural), level of education, income, perceived risk of being
infected with COVID-19 in the future, perceived severity of infection, having
previous vaccination experience after age 18, having higher knowledge about
COVID-19 and vaccination were significantly associated with the acceptance of
COVID-19 vaccines. The research reported a high prevalence of COVID-19 vaccine
refusal and hesitancy in Bangladesh.
| [
{
"created": "Sun, 28 Mar 2021 19:37:47 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Nov 2022 19:11:20 GMT",
"version": "v2"
}
] | 2022-11-11 | [
[
"Mahmud",
"Sultan",
""
],
[
"Mohsin",
"Md.",
""
],
[
"Khan",
"Ijaz Ahmed",
""
],
[
"Mian",
"Ashraf Uddin",
""
],
[
"Zaman",
"Miah Akib",
""
]
] |
2103.15307 | Dingwen Zhang | Dingwen Zhang, Bo Wang, Gerong Wang, Qiang Zhang, Jiajia Zhang,
Jungong Han, Zheng You | Onfocus Detection: Identifying Individual-Camera Eye Contact from
Unconstrained Images | null | SCIENCE CHINA Information Sciences, 2021 | null | null | cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | Onfocus detection aims at identifying whether the focus of the individual
captured by a camera is on the camera or not. Based on the behavioral research,
the focus of an individual during face-to-camera communication leads to a
special type of eye contact, i.e., the individual-camera eye contact, which is
a powerful signal in social communication and plays a crucial role in
recognizing irregular individual status (e.g., lying or suffering mental
disease) and special purposes (e.g., seeking help or attracting fans). Thus,
developing effective onfocus detection algorithms is of significance for
assisting criminal investigation, disease discovery, and social behavior
analysis. However, a review of the literature shows that very few efforts
have been made toward the development of onfocus detectors due to the lack of
large-scale publicly available datasets as well as the challenging nature of this
task. To this end, this paper engages in onfocus detection research by
addressing the above two issues. Firstly, we build a large-scale onfocus
detection dataset, named as the OnFocus Detection In the Wild (OFDIW). It
consists of 20,623 images in unconstrained capture conditions (thus called ``in
the wild'') and contains individuals with diverse emotions, ages, facial
characteristics, and rich interactions with surrounding objects and background
scenes. On top of that, we propose a novel end-to-end deep model, i.e., the
eye-context interaction inferring network (ECIIN), for onfocus detection, which
explores eye-context interaction via dynamic capsule routing. Finally,
comprehensive experiments are conducted on the proposed OFDIW dataset to
benchmark the existing learning models and demonstrate the effectiveness of the
proposed ECIIN. The project (containing both datasets and codes) is at
https://github.com/wintercho/focus.
| [
{
"created": "Mon, 29 Mar 2021 03:29:09 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Zhang",
"Dingwen",
""
],
[
"Wang",
"Bo",
""
],
[
"Wang",
"Gerong",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Zhang",
"Jiajia",
""
],
[
"Han",
"Jungong",
""
],
[
"You",
"Zheng",
""
]
] |
2103.15361 | Chen Lyu | Chen Lyu, Ruyun Wang, Hongyu Zhang, Hanwen Zhang, Songlin Hu | Embedding API Dependency Graph for Neural Code Generation | null | Empir Software Eng 26, 61 (2021) | 10.1007/s10664-021-09968-2 | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of code generation from textual program descriptions has long
been viewed as a grand challenge in software engineering. In recent years, many
deep learning based approaches have been proposed, which can generate a
sequence of code from a sequence of textual program description. However, the
existing approaches ignore the global relationships among API methods, which
are important for understanding the usage of APIs. In this paper, we propose to
model the dependencies among API methods as an API dependency graph (ADG) and
incorporate the graph embedding into a sequence-to-sequence (Seq2Seq) model. In
addition to the existing encoder-decoder structure, a new module named
``embedder'' is introduced. In this way, the decoder can utilize both global
structural dependencies and textual program description to predict the target
code. We conduct extensive code generation experiments on three public datasets
and in two programming languages (Python and Java). Our proposed approach,
called ADG-Seq2Seq, yields significant improvements over existing
state-of-the-art methods and maintains its performance as the length of the
target code increases. Extensive ablation tests show that the proposed ADG
embedding is effective and outperforms the baselines.
| [
{
"created": "Mon, 29 Mar 2021 06:26:38 GMT",
"version": "v1"
}
] | 2021-04-23 | [
[
"Lyu",
"Chen",
""
],
[
"Wang",
"Ruyun",
""
],
[
"Zhang",
"Hongyu",
""
],
[
"Zhang",
"Hanwen",
""
],
[
"Hu",
"Songlin",
""
]
] |
2103.15409 | Xudong Chen | Xudong Chen, Shugong Xu, Qiaobin Ji, Shan Cao | A Dataset and Benchmark Towards Multi-Modal Face Anti-Spoofing Under
Surveillance Scenarios | Published in: IEEE Access | IEEE Access, vol. 9, pp. 28140-28155, 2021 | 10.1109/ACCESS.2021.3052728 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face Anti-spoofing (FAS) is a challenging problem due to complex serving
scenarios and diverse face presentation attack patterns. Especially when
captured images are low-resolution, blurry, and coming from different domains,
the performance of FAS will degrade significantly. The existing multi-modal FAS
datasets rarely pay attention to the cross-domain problems under deployment
scenarios, which is not conducive to the study of model performance. To solve
these problems, we explore the fine-grained differences between multi-modal
cameras and construct a cross-domain multi-modal FAS dataset under surveillance
scenarios called GREAT-FASD-S. Besides, we propose an Attention based Face
Anti-spoofing network with Feature Augment (AFA) to solve the FAS towards
low-quality face images. It consists of the depthwise separable attention
module (DAM) and the multi-modal based feature augment module (MFAM). Our model
can achieve state-of-the-art performance on the CASIA-SURF dataset and our
proposed GREAT-FASD-S dataset.
| [
{
"created": "Mon, 29 Mar 2021 08:14:14 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Chen",
"Xudong",
""
],
[
"Xu",
"Shugong",
""
],
[
"Ji",
"Qiaobin",
""
],
[
"Cao",
"Shan",
""
]
] |
2103.15446 | Soham Mazumder | Shivangi Aneja and Soham Mazumder | Deep Image Compositing | ESSE 2020: Proceedings of the 2020 European Symposium on Software
Engineering | In Proceedings of the 2020 European Symposium on Software
Engineering (pp. 101-104) 2020 | 10.1145/3393822.3432314 | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | In image editing, the most common task is pasting objects from one image to
another and then adjusting the appearance of the foreground
object so that it matches the background. This task is called image compositing. But
image compositing is a challenging problem that requires professional editing
skills and a considerable amount of time. Not only are these professionals
expensive to hire, but the tools (like Adobe Photoshop) used for such
tasks are also expensive to purchase, making the overall task of image
compositing difficult for people without this skillset. In this work, we aim to
cater to this problem by making composite images look realistic. To achieve
this, we use Generative Adversarial Networks (GANs). By training the
network with a diverse range of filters applied to the images and special loss
functions, the model is able to decode the color histogram of the foreground
and background part of the image and also learns to blend the foreground object
with the background. The hue and saturation values of the image play an
important role as discussed in this paper. To the best of our knowledge, this
is the first work that uses GANs for the task of image compositing. Currently,
there is no benchmark dataset available for image compositing. So we created
the dataset and will also make the dataset publicly available for benchmarking.
Experimental results on this dataset show that our method outperforms all
current state-of-the-art methods.
| [
{
"created": "Mon, 29 Mar 2021 09:23:37 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Aneja",
"Shivangi",
""
],
[
"Mazumder",
"Soham",
""
]
] |
2103.15449 | Benjamin Filtjens | Benjamin Filtjens, Pieter Ginis, Alice Nieuwboer, Peter Slaets, and
Bart Vanrumste | Automated freezing of gait assessment with marker-based motion capture
and multi-stage spatial-temporal graph convolutional neural networks | null | J NeuroEngineering Rehabil 19, 48 (2022) | 10.1186/s12984-022-01025-3 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Freezing of gait (FOG) is a common and debilitating gait impairment in
Parkinson's disease. Further insight into this phenomenon is hampered by the
difficulty of objectively assessing FOG. To meet this clinical need, this paper
proposes an automated motion-capture-based FOG assessment method driven by a
novel deep neural network. Automated FOG assessment can be formulated as an
action segmentation problem, where temporal models are tasked to recognize and
temporally localize the FOG segments in untrimmed motion capture trials. This
paper takes a closer look at the performance of state-of-the-art action
segmentation models when tasked to automatically assess FOG. Furthermore, a
novel deep neural network architecture is proposed that aims to better capture
the spatial and temporal dependencies than the state-of-the-art baselines. The
proposed network, termed multi-stage spatial-temporal graph convolutional
network (MS-GCN), combines the spatial-temporal graph convolutional network
(ST-GCN) and the multi-stage temporal convolutional network (MS-TCN). The
ST-GCN captures the hierarchical spatial-temporal motion among the joints
inherent to motion capture, while the multi-stage component reduces
over-segmentation errors by refining the predictions over multiple stages. The
experiments indicate that the proposed model outperforms four state-of-the-art
baselines. Moreover, FOG outcomes derived from MS-GCN predictions had an
excellent (r=0.93 [0.87, 0.97]) and moderately strong (r=0.75 [0.55, 0.87])
linear relationship with FOG outcomes derived from manual annotations. The
proposed MS-GCN may provide an automated and objective alternative to
labor-intensive clinician-based FOG assessment. Future work is now possible
that aims to assess the generalization of MS-GCN to a larger and more varied
verification cohort.
| [
{
"created": "Mon, 29 Mar 2021 09:32:45 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Apr 2021 19:24:52 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Feb 2022 16:40:53 GMT",
"version": "v3"
}
] | 2022-08-15 | [
[
"Filtjens",
"Benjamin",
""
],
[
"Ginis",
"Pieter",
""
],
[
"Nieuwboer",
"Alice",
""
],
[
"Slaets",
"Peter",
""
],
[
"Vanrumste",
"Bart",
""
]
] |
2103.15459 | Jindong Gu | Jindong Gu, Volker Tresp, Han Hu | Capsule Network is Not More Robust than Convolutional Network | null | IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Capsule Network is widely believed to be more robust than Convolutional
Networks. However, there are no comprehensive comparisons between these two
networks, and it is also unknown which components in the CapsNet affect its
robustness. In this paper, we first carefully examine the special designs in
CapsNet that differ from those of a ConvNet commonly used for image
classification. The examination reveals five major new/different components in
CapsNet: a transformation process, a dynamic routing layer, a squashing
function, a marginal loss other than cross-entropy loss, and an additional
class-conditional reconstruction loss for regularization. Along with these
major differences, we conduct comprehensive ablation studies on three kinds of
robustness, including affine transformation, overlapping digits, and semantic
representation. The study reveals that some designs, which are thought critical
to CapsNet, actually can harm its robustness, i.e., the dynamic routing layer
and the transformation process, while others are beneficial for the robustness.
Based on these findings, we propose enhanced ConvNets simply by introducing the
essential components behind the CapsNet's success. The proposed simple ConvNets
can achieve better robustness than the CapsNet.
| [
{
"created": "Mon, 29 Mar 2021 09:47:00 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Gu",
"Jindong",
""
],
[
"Tresp",
"Volker",
""
],
[
"Hu",
"Han",
""
]
] |
2103.15469 | Jihyong Oh | Jihyong Oh, Munchurl Kim | PeaceGAN: A GAN-based Multi-Task Learning Method for SAR Target Image
Generation with a Pose Estimator and an Auxiliary Classifier | 14 pages, 10 figures, 6 tables | Remote Sensing, 13(19):3939, 2021 | 10.3390/rs13193939 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although Generative Adversarial Networks (GANs) are successfully applied to
diverse fields, training GANs on synthetic aperture radar (SAR) data is a
challenging task, mostly due to speckle noise. On the one hand, from the
perspective of human perception, it is natural to learn a task by using
various information from multiple sources. However, in previous GAN works
on SAR target image generation, the information on target classes has only been
used. Due to the backscattering characteristics of SAR image signals, the
shapes and structures of SAR target images are strongly dependent on their pose
angles. Nevertheless, the pose angle information has not been incorporated into
such generative models for SAR target images. In this paper, we firstly propose
a novel GAN-based multi-task learning (MTL) method for SAR target image
generation, called PeaceGAN that uses both pose angle and target class
information, which makes it possible to produce SAR target images of desired
target classes at intended pose angles. For this, the PeaceGAN has two
additional structures, a pose estimator and an auxiliary classifier, at the
side of its discriminator to combine the pose and class information more
efficiently. In addition, the PeaceGAN is jointly learned in an end-to-end
manner as MTL with both pose angle and target class information, thus enhancing
the diversity and quality of generated SAR target images. The extensive
experiments show that taking an advantage of both pose angle and target class
learning by the proposed pose estimator and auxiliary classifier can help the
PeaceGAN's generator effectively learn the distributions of SAR target images
in the MTL framework, so that it can generate SAR target images more
flexibly and faithfully at intended pose angles for desired target classes
than the recent state-of-the-art methods.
| [
{
"created": "Mon, 29 Mar 2021 10:03:09 GMT",
"version": "v1"
}
] | 2021-10-04 | [
[
"Oh",
"Jihyong",
""
],
[
"Kim",
"Munchurl",
""
]
] |
2103.15510 | Melanie Schellenberg | Melanie Schellenberg, Janek Gr\"ohl, Kris K. Dreher, Jan-Hinrich
N\"olke, Niklas Holzwarth, Minu D. Tizabi, Alexander Seitel, Lena Maier-Hein | Photoacoustic image synthesis with generative adversarial networks | 10 pages, 6 figures, 2 tables, update with paper published at
Photoacoustics | Photoacoustics 28 (2022): 100402 | 10.1016/j.pacs.2022.100402 | null | eess.IV cs.CV cs.LG physics.med-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Photoacoustic tomography (PAT) has the potential to recover morphological and
functional tissue properties with high spatial resolution. However, previous
attempts to solve the optical inverse problem with supervised machine learning
were hampered by the absence of labeled reference data. While this bottleneck
has been tackled by simulating training data, the domain gap between real and
simulated images remains an unsolved challenge. We propose a novel approach to
PAT image synthesis that involves subdividing the challenge of generating
plausible simulations into two disjoint problems: (1) Probabilistic generation
of realistic tissue morphology, and (2) pixel-wise assignment of corresponding
optical and acoustic properties. The former is achieved with Generative
Adversarial Networks (GANs) trained on semantically annotated medical imaging
data. According to a validation study on a downstream task our approach yields
more realistic synthetic images than the traditional model-based approach and
could therefore become a fundamental step for deep learning-based quantitative
PAT (qPAT).
| [
{
"created": "Mon, 29 Mar 2021 11:30:18 GMT",
"version": "v1"
},
{
"created": "Tue, 11 May 2021 14:50:48 GMT",
"version": "v2"
},
{
"created": "Tue, 25 Oct 2022 13:10:43 GMT",
"version": "v3"
}
] | 2022-10-26 | [
[
"Schellenberg",
"Melanie",
""
],
[
"Gröhl",
"Janek",
""
],
[
"Dreher",
"Kris K.",
""
],
[
"Nölke",
"Jan-Hinrich",
""
],
[
"Holzwarth",
"Niklas",
""
],
[
"Tizabi",
"Minu D.",
""
],
[
"Seitel",
"Alexander",
""
],
[
"Maier-Hein",
"Lena",
""
]
] |
2103.15555 | Cm Pintea | Oliviu Matei, Erdei Rudolf, Camelia-M. Pintea | Selective Survey: Most Efficient Models and Solvers for Integrative
Multimodal Transport | 12 pages; Accepted: Informatica (ISSN 0868-4952) | Informatica, vol. 32, no. 2, pp. 371-396, 2021 | 10.15388/21-INFOR449 | null | cs.AI cs.CY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the family of Intelligent Transportation Systems (ITS), Multimodal
Transport Systems (MMTS) have established themselves as a mainstream
transportation means of our time and as a feasible integrative transportation
process. The global economy has progressed with the help of transportation. The
volume of goods and the distances covered have doubled in the last ten years,
so there is a high demand for optimized transportation that is fast yet
low-cost, resource-saving, and safe, with low or zero emissions. Thus, it is important to have an
overview of existing research in this field, to know what was already done and
what is to be studied next. The main objective is to explore a beneficent
selection of the existing research, methods and information in the field of
multimodal transportation research, to identify industry needs and gaps in
research and provide context for future research. The selective survey covers
multimodal transport design and optimization in terms of: cost, time, and
network topology. The theoretical aspects, context, and resources of multimodal
transport are also covered. The survey's selection includes today's best
methods and solvers for Intelligent Transportation Systems (ITS).
The gap between theory and real-world applications should be further closed in
order to optimize the global multimodal transportation system.
| [
{
"created": "Tue, 16 Mar 2021 08:31:44 GMT",
"version": "v1"
}
] | 2021-07-26 | [
[
"Matei",
"Oliviu",
""
],
[
"Rudolf",
"Erdei",
""
],
[
"Pintea",
"Camelia-M.",
""
]
] |
2103.15558 | Huansheng Ning Prof | Wenxi Wang, Huansheng Ning, Feifei Shi, Sahraoui Dhelim, Weishan
Zhang, Liming Chen | A Survey of Hybrid Human-Artificial Intelligence for Social Computing | null | IEEE Transactions on Human-Machine Systems 2021 | 10.1109/THMS.2021.3131683 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Along with the development of modern computing technology and social
sciences, both theoretical research and practical applications of social
computing have been continuously extended. In particular with the boom of
artificial intelligence (AI), social computing is significantly influenced by
AI. However, the conventional technologies of AI have drawbacks in dealing with
more complicated and dynamic problems. Such deficiency can be rectified by
hybrid human-artificial intelligence (H-AI) which integrates both human
intelligence and AI into one unity, forming a new, enhanced intelligence. In
dealing with social problems, H-AI shows advantages that AI alone cannot surpass.
This paper firstly introduces the concept of H-AI. AI is the intelligence in
the transition stage of H-AI, so the latest research progress of AI in social
computing is reviewed. Secondly, it summarizes typical challenges faced by AI
in social computing, making it possible to introduce H-AI to solve these
challenges. Finally, the paper proposes a holistic framework of social
computing combined with H-AI, which consists of four layers: object layer,
base layer, analysis layer, and application layer. It shows that H-AI has
significant advantages over AI in solving social problems.
| [
{
"created": "Wed, 17 Mar 2021 08:39:44 GMT",
"version": "v1"
}
] | 2022-02-28 | [
[
"Wang",
"Wenxi",
""
],
[
"Ning",
"Huansheng",
""
],
[
"Shi",
"Feifei",
""
],
[
"Dhelim",
"Sahraoui",
""
],
[
"Zhang",
"Weishan",
""
],
[
"Chen",
"Liming",
""
]
] |
2103.15566 | Georgios Leontidis | Mamatha Thota and Georgios Leontidis | Contrastive Domain Adaptation | 10 pages, 6 figures, 5 tables | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, 2021, pp. 2209-2218 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, contrastive self-supervised learning has become a key component for
learning visual representations across many computer vision tasks and
benchmarks. However, contrastive learning in the context of domain adaptation
remains largely underexplored. In this paper, we propose to extend contrastive
learning to a new domain adaptation setting, a particular situation occurring
where the similarity is learned and deployed on samples following different
probability distributions without access to labels. Contrastive learning learns
by comparing and contrasting positive and negative pairs of samples in an
unsupervised setting without access to source and target labels. We have
developed a variation of a recently proposed contrastive learning framework
that helps tackle the domain adaptation problem, further identifying and
removing possible negatives similar to the anchor to mitigate the effects of
false negatives. Extensive experiments demonstrate that the proposed method
adapts well, and improves the performance on the downstream domain adaptation
task.
| [
{
"created": "Fri, 26 Mar 2021 13:55:19 GMT",
"version": "v1"
}
] | 2021-06-25 | [
[
"Thota",
"Mamatha",
""
],
[
"Leontidis",
"Georgios",
""
]
] |
2103.15632 | Federico Pernici | Federico Pernici and Matteo Bruni and Claudio Baecchi and Alberto Del
Bimbo | Regular Polytope Networks | arXiv admin note: substantial text overlap with arXiv:1902.10441 | IEEE Transactions on Neural Networks and Learning Systems, 2021 | 10.1109/TNNLS.2021.3056762 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks are widely used as a model for classification in a large
variety of tasks. Typically, a learnable transformation (i.e. the classifier)
is placed at the end of such models returning a value for each class used for
classification. This transformation plays an important role in determining how
the generated features change during the learning process. In this work, we
argue that this transformation not only can be fixed (i.e. set as
non-trainable) with no loss of accuracy and with a reduction in memory usage,
but it can also be used to learn stationary and maximally separated embeddings.
We show that the stationarity of the embedding and its maximal separated
representation can be theoretically justified by setting the weights of the
fixed classifier to values taken from the coordinate vertices of the three
regular polytopes available in $\mathbb{R}^d$, namely: the $d$-Simplex, the
$d$-Cube and the $d$-Orthoplex. These regular polytopes have the maximal amount
of symmetry that can be exploited to generate stationary features angularly
centered around their corresponding fixed weights. Our approach improves and
broadens the concept of a fixed classifier, recently proposed in
\cite{hoffer2018fix}, to a larger class of fixed classifier models.
Experimental results confirm the theoretical analysis, the generalization
capability, the faster convergence and the improved performance of the proposed
method. Code will be publicly available.
| [
{
"created": "Mon, 29 Mar 2021 14:11:32 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Pernici",
"Federico",
""
],
[
"Bruni",
"Matteo",
""
],
[
"Baecchi",
"Claudio",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] |
2103.15684 | Anouk van Diepen | A. van Diepen, T. H. G. F. Bakkes, A. J. R. De Bie, S. Turco, R. A.
Bouwman, P. H. Woerlee, M. Mischi | A Model-Based Approach to Synthetic Data Set Generation for
Patient-Ventilator Waveforms for Machine Learning and Educational Use | null | J Clin Monit Comput (2022) | 10.1007/s10877-022-00822-4 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Although mechanical ventilation is a lifesaving intervention in the ICU, it
has harmful side-effects, such as barotrauma and volutrauma. These harms can
occur due to asynchronies. Asynchronies are defined as a mismatch between the
ventilator timing and patient respiratory effort. Automatic detection of these
asynchronies, and subsequent feedback, would improve lung ventilation and
reduce the probability of lung damage. Neural networks to detect asynchronies
provide a promising new approach but require large annotated data sets, which
are difficult to obtain and require complex monitoring of inspiratory effort.
In this work, we propose a model-based approach to generate a synthetic data
set for machine learning and educational use by extending an existing lung
model with a first-order ventilator model. The physiological nature of the
derived lung model allows adaptation to various disease archetypes, resulting
in a diverse data set. We generated a synthetic data set using 9 different
patient archetypes, which are derived from measurements in the literature. The
model and synthetic data quality have been verified by comparison with clinical
data, review by a clinical expert, and an artificial intelligence model that
was trained on experimental data. The evaluation showed it was possible to
generate patient-ventilator waveforms including asynchronies that have the most
important features of experimental patient-ventilator waveforms.
| [
{
"created": "Mon, 29 Mar 2021 15:10:17 GMT",
"version": "v1"
},
{
"created": "Fri, 7 May 2021 12:05:08 GMT",
"version": "v2"
}
] | 2022-02-11 | [
[
"van Diepen",
"A.",
""
],
[
"Bakkes",
"T. H. G. F.",
""
],
[
"De Bie",
"A. J. R.",
""
],
[
"Turco",
"S.",
""
],
[
"Bouwman",
"R. A.",
""
],
[
"Woerlee",
"P. H.",
""
],
[
"Mischi",
"M.",
""
]
] |
2103.15685 | Zhedong Zheng | Zhedong Zheng and Yi Yang | Adaptive Boosting for Domain Adaptation: Towards Robust Predictions in
Scene Segmentation | 11 pages, 9 tables, 5 figures | IEEE Transactions on Image Processing (2022) | 10.1109/TIP.2022.3195642 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain adaptation is to transfer the shared knowledge learned from the source
domain to a new environment, i.e., the target domain. One common practice is to
train the model on both labeled source-domain data and unlabeled target-domain
data. Yet the learned models are usually biased due to the strong supervision
of the source domain. Most researchers adopt the early-stopping strategy to
prevent over-fitting, but when to stop training remains a challenging problem
due to the lack of a target-domain validation set. In this paper, we propose
one efficient bootstrapping method, called Adaboost Student, explicitly
learning complementary models during training and liberating users from
empirical early stopping. Adaboost Student combines the deep model learning
with the conventional training strategy, i.e., adaptive boosting, and enables
interactions between learned models and the data sampler. We adopt one adaptive
data sampler to progressively facilitate learning on hard samples and aggregate
"weak" models to prevent over-fitting. Extensive experiments show that (1)
Without the need to worry about the stopping time, AdaBoost Student provides
one robust solution by efficient complementary model learning during training.
(2) AdaBoost Student is orthogonal to most domain adaptation methods, which can
be combined with existing approaches to further improve the state-of-the-art
performance. We have achieved competitive results on three widely-used scene
segmentation domain adaptation benchmarks.
| [
{
"created": "Mon, 29 Mar 2021 15:12:58 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Jul 2021 04:03:28 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Sep 2022 06:05:22 GMT",
"version": "v3"
}
] | 2022-12-13 | [
[
"Zheng",
"Zhedong",
""
],
[
"Yang",
"Yi",
""
]
] |
2103.15812 | Xingzhe He | Xingzhe He, Bastian Wandt, Helge Rhodin | LatentKeypointGAN: Controlling Images via Latent Keypoints | null | Conference on Robots and Vision 2023 | 10.1109/CRV60082.2023.00009 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative adversarial networks (GANs) have attained photo-realistic quality
in image generation. However, how to best control the image content remains an
open challenge. We introduce LatentKeypointGAN, a two-stage GAN which is
trained end-to-end on the classical GAN objective with internal conditioning on
a set of space keypoints. These keypoints have associated appearance embeddings
that respectively control the position and style of the generated objects and
their parts. A major difficulty that we address with suitable network
architectures and training schemes is disentangling the image into spatial and
appearance factors without domain knowledge and supervision signals. We
demonstrate that LatentKeypointGAN provides an interpretable latent space that
can be used to re-arrange the generated images by re-positioning and exchanging
keypoint embeddings, such as generating portraits by combining the eyes, nose,
and mouth from different images. In addition, the explicit generation of
keypoints and matching images enables a new, GAN-based method for unsupervised
keypoint detection.
| [
{
"created": "Mon, 29 Mar 2021 17:59:10 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2021 19:40:55 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Dec 2021 02:18:05 GMT",
"version": "v3"
},
{
"created": "Thu, 8 Jun 2023 21:43:08 GMT",
"version": "v4"
},
{
"created": "Sun, 13 Oct 2024 19:57:19 GMT",
"version": "v5"
}
] | 2024-10-15 | [
[
"He",
"Xingzhe",
""
],
[
"Wandt",
"Bastian",
""
],
[
"Rhodin",
"Helge",
""
]
] |
2103.15819 | Linus Gissl\'en | Joakim Bergdahl, Camilo Gordillo, Konrad Tollmar, Linus Gissl\'en | Augmenting Automated Game Testing with Deep Reinforcement Learning | 4 pages, 6 figures, 2020 IEEE Conference on Games (CoG), 600-603 | 2020 IEEE Conference on Games (CoG), 600-603 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | General game testing relies on the use of human play testers, play test
scripting, and prior knowledge of areas of interest to produce relevant test
data. Using deep reinforcement learning (DRL), we introduce a self-learning
mechanism to the game testing framework. With DRL, the framework is capable of
exploring and/or exploiting the game mechanics based on a user-defined,
reinforcing reward signal. As a result, test coverage is increased and
unintended game play mechanics, exploits and bugs are discovered in a multitude
of game types. In this paper, we show that DRL can be used to increase test
coverage, find exploits, test map difficulty, and to detect common problems
that arise in the testing of first-person shooter (FPS) games.
| [
{
"created": "Mon, 29 Mar 2021 11:55:15 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Bergdahl",
"Joakim",
""
],
[
"Gordillo",
"Camilo",
""
],
[
"Tollmar",
"Konrad",
""
],
[
"Gisslén",
"Linus",
""
]
] |
2103.15953 | Jussi Karlgren | Rosie Jones, Ben Carterette, Ann Clifton, Maria Eskevich, Gareth J. F.
Jones, Jussi Karlgren, Aasish Pappu, Sravana Reddy, Yongze Yu | TREC 2020 Podcasts Track Overview | null | The Proceedings of the Twenty-Ninth Text REtrieval Conference
Proceedings (TREC 2020) | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Podcast Track is new at the Text Retrieval Conference (TREC) in 2020. The
podcast track was designed to encourage research into podcasts in the
information retrieval and NLP research communities. The track consisted of two
shared tasks: segment retrieval and summarization, both based on a dataset of
over 100,000 podcast episodes (metadata, audio, and automatic transcripts)
which was released concurrently with the track. The track generated
considerable interest, attracted hundreds of new registrations to TREC and
fifteen teams, mostly disjoint between search and summarization, made final
submissions for assessment. Deep learning was the dominant experimental
approach for both search experiments and summarization. This paper gives an
overview of the tasks and the results of the participants' experiments. The
track will return to TREC 2021 with the same two tasks, incorporating slight
modifications in response to participant feedback.
| [
{
"created": "Mon, 29 Mar 2021 20:58:10 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Jones",
"Rosie",
""
],
[
"Carterette",
"Ben",
""
],
[
"Clifton",
"Ann",
""
],
[
"Eskevich",
"Maria",
""
],
[
"Jones",
"Gareth J. F.",
""
],
[
"Karlgren",
"Jussi",
""
],
[
"Pappu",
"Aasish",
""
],
[
"Reddy",
"Sravana",
""
],
[
"Yu",
"Yongze",
""
]
] |
2103.16019 | Weihong Deng | Yuke Fang, Jiani Hu, Weihong Deng | Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition | 36 pages, 11 figures | Pattern Recognition, vol.102, pp.107249, 2020 | 10.1016/j.patcog.2020.107249 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face photo-sketch synthesis and recognition has many applications in digital
entertainment and law enforcement. Recently, generative adversarial networks
(GANs) based methods have significantly improved the quality of image
synthesis, but they have not explicitly considered the purpose of recognition.
In this paper, we first propose an Identity-Aware CycleGAN (IACycleGAN) model
that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the
synthesis of key facial regions, such as eyes and nose, which are important for
identity recognition. Furthermore, we develop a mutual optimization procedure
between the synthesis model and the recognition model, which iteratively
synthesizes better images by IACycleGAN and enhances the recognition model by
the triplet loss of the generated and real samples. Extensive experiments are
performed on both photo-to-sketch and sketch-to-photo tasks using the widely
used CUFS and CUFSF databases. The results show that the proposed method
performs better than several state-of-the-art methods in terms of both
synthetic image quality and photo-sketch recognition accuracy.
| [
{
"created": "Tue, 30 Mar 2021 01:30:08 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Fang",
"Yuke",
""
],
[
"Hu",
"Jiani",
""
],
[
"Deng",
"Weihong",
""
]
] |
2103.16215 | Enrique Fernandez-Blanco | Enrique Fernandez-Blanco, Daniel Rivero, Alejandro Pazos | Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel
EEG Signal | 20 pages, 4 figures, 4 tables | Soft Computing 24, 4067-4079 (2020) | 10.1007/s00500-019-04174-1 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Sleeping problems have become one of the major diseases all over the world.
To tackle this issue, the basic tool used by specialists is the Polysomnogram,
which is a collection of different signals recorded during sleep. After its
recording, the specialists have to score the different signals according to one
of the standard guidelines. This process is carried out manually, which can be
highly time-consuming and very prone to annotation errors. Therefore, over the
years, many approaches have been explored in an attempt to support the
specialists in this task. In this paper, an approach based on convolutional
neural networks is presented, where an in-depth comparison is performed in
order to determine whether it is worthwhile to use more than one signal simultaneously
as input. Additionally, the models were also used as parts of an ensemble model
to check whether any useful information can be extracted by processing
a single signal at a time that the dual-signal model cannot identify. Tests
have been performed by using a well-known dataset called expanded sleep-EDF,
which is the most commonly used dataset as the benchmark for this problem. The
tests were carried out with a leave-one-out cross-validation over the patients,
which ensures that there is no possible contamination between training and
testing. The resulting proposal is a network smaller than previously published
ones, but which surpasses the results of any previous model on the same
dataset. The best result shows an accuracy of 92.67\% and a Cohen's Kappa value
over 0.84 compared to human experts.
| [
{
"created": "Tue, 30 Mar 2021 09:59:56 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Fernandez-Blanco",
"Enrique",
""
],
[
"Rivero",
"Daniel",
""
],
[
"Pazos",
"Alejandro",
""
]
] |
2103.16440 | Chen Qiu | Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, Maja Rudolph | Neural Transformation Learning for Deep Anomaly Detection Beyond Images | null | Proceedings of the 38th International Conference on Machine
Learning, 2021, volume:139, pages:8703--8714 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data transformations (e.g. rotations, reflections, and cropping) play an
important role in self-supervised learning. Typically, images are transformed
into different views, and neural networks trained on tasks involving these
views produce useful feature representations for downstream tasks, including
anomaly detection. However, for anomaly detection beyond image data, it is
often unclear which transformations to use. Here we present a simple end-to-end
procedure for anomaly detection with learnable transformations. The key idea is
to embed the transformed data into a semantic space such that the transformed
data still resemble their untransformed form, while different transformations
are easily distinguishable. Extensive experiments on time series demonstrate
that our proposed method outperforms existing approaches in the one-vs.-rest
setting and is competitive in the more challenging n-vs.-rest anomaly detection
task. On tabular datasets from the medical and cyber-security domains, our
method learns domain-specific transformations and detects anomalies more
accurately than previous work.
| [
{
"created": "Tue, 30 Mar 2021 15:38:18 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 15:09:56 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Jul 2021 13:25:36 GMT",
"version": "v3"
},
{
"created": "Thu, 3 Feb 2022 16:55:59 GMT",
"version": "v4"
}
] | 2022-02-04 | [
[
"Qiu",
"Chen",
""
],
[
"Pfrommer",
"Timo",
""
],
[
"Kloft",
"Marius",
""
],
[
"Mandt",
"Stephan",
""
],
[
"Rudolph",
"Maja",
""
]
] |
2103.16442 | Zoe Landgraf | Zoe Landgraf, Raluca Scona, Tristan Laidlow, Stephen James, Stefan
Leutenegger, Andrew J. Davison | SIMstack: A Generative Shape and Instance Model for Unordered Object
Stacks | null | ICCV 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | By estimating 3D shape and instances from a single view, we can capture
information about an environment quickly, without the need for comprehensive
scanning and multi-view fusion. Solving this task for composite scenes (such as
object stacks) is challenging: occluded areas are not only ambiguous in shape
but also in instance segmentation; multiple decompositions could be valid. We
observe that physics constrains decomposition as well as shape in occluded
regions and hypothesise that a latent space learned from scenes built under
physics simulation can serve as a prior to better predict shape and instances
in occluded regions. To this end we propose SIMstack, a depth-conditioned
Variational Auto-Encoder (VAE), trained on a dataset of objects stacked under
physics simulation. We formulate instance segmentation as a centre voting task
which allows for class-agnostic detection and doesn't require setting the
maximum number of objects in the scene. At test time, our model can generate 3D
shape and instance segmentation from a single depth view, probabilistically
sampling proposals for the occluded region from the learned latent space. Our
method has practical applications in providing robots some of the ability
humans have to make rapid intuitive inferences of partially observed scenes. We
demonstrate an application for precise (non-disruptive) object grasping of
unknown objects from a single depth view.
| [
{
"created": "Tue, 30 Mar 2021 15:42:43 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Sep 2021 07:34:55 GMT",
"version": "v2"
}
] | 2021-09-28 | [
[
"Landgraf",
"Zoe",
""
],
[
"Scona",
"Raluca",
""
],
[
"Laidlow",
"Tristan",
""
],
[
"James",
"Stephen",
""
],
[
"Leutenegger",
"Stefan",
""
],
[
"Davison",
"Andrew J.",
""
]
] |
2103.16510 | Cagatay Basdogan | Senem Ezgi Emgin, Amirreza Aghakhani, T. Metin Sezgin, and Cagatay
Basdogan | HapTable: An Interactive Tabletop Providing Online Haptic Feedback for
Touch Gestures | null | IEEE Transactions on Visualization and Computer Graphics, 2019,
Vol. 25, No. 9, pp. 2749-2762 | 10.1109/TVCG.2018.2855154 | null | cs.HC cs.CV cs.GR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present HapTable; a multimodal interactive tabletop that allows users to
interact with digital images and objects through natural touch gestures, and
receive visual and haptic feedback accordingly. In our system, hand pose is
registered by an infrared camera and hand gestures are classified using a
Support Vector Machine (SVM) classifier. To display a rich set of haptic
effects for both static and dynamic gestures, we integrated electromechanical
and electrostatic actuation techniques effectively on the tabletop surface of
HapTable, which is a surface capacitive touch screen. We attached four piezo
patches to the edges of the tabletop to display vibrotactile feedback for static
gestures. For this purpose, the vibration response of the tabletop, in the form
of frequency response functions (FRFs), was obtained by a laser Doppler
vibrometer for 84 grid points on its surface. Using these FRFs, it is possible
to display localized vibrotactile feedback on the surface for static gestures.
For dynamic gestures, we utilize the electrostatic actuation technique to
modulate the frictional forces between finger skin and tabletop surface by
applying voltage to its conductive layer. Here, we present two examples of such
applications, one for static and one for dynamic gestures, along with detailed
user studies. In the first one, the user detects the direction of a virtual flow,
such as that of wind or water, by putting their hand on the tabletop surface
and feeling a vibrotactile stimulus traveling underneath it. In the second
example, the user rotates a virtual knob on the tabletop surface to select an item
from a menu while feeling the knob's detents and resistance to rotation in the
form of frictional haptic feedback.
| [
{
"created": "Tue, 30 Mar 2021 17:12:10 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Emgin",
"Senem Ezgi",
""
],
[
"Aghakhani",
"Amirreza",
""
],
[
"Sezgin",
"T. Metin",
""
],
[
"Basdogan",
"Cagatay",
""
]
] |
2103.16516 | Aj Piergiovanni | AJ Piergiovanni and Michael S. Ryoo | Recognizing Actions in Videos from Unseen Viewpoints | null | CVPR 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard methods for video recognition use large CNNs designed to capture
spatio-temporal data. However, training these models requires a large amount of
labeled training data, containing a wide variety of actions, scenes, settings
and camera viewpoints. In this paper, we show that current convolutional neural
network models are unable to recognize actions from camera viewpoints not
present in their training data (i.e., unseen view action recognition). To
address this, we develop approaches based on 3D representations and introduce a
new geometric convolutional layer that can learn viewpoint invariant
representations. Further, we introduce a new, challenging dataset for unseen
view recognition and show the approaches ability to learn viewpoint invariant
representations.
| [
{
"created": "Tue, 30 Mar 2021 17:17:54 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Piergiovanni",
"AJ",
""
],
[
"Ryoo",
"Michael S.",
""
]
] |
2103.16624 | Ikechukwu Onyenwe | D.C. Asogwa, S.O. Anigbogu, I.E. Onyenwe, F.A. Sani | Text Classification Using Hybrid Machine Learning Algorithms on Big Data | 8 pages, 2 figures, 8 tables, Journal | International Journal of Trend in Research and Development, Volume
6(5), ISSN: 2394-9333, 2019 | null | null | cs.IR cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there are unprecedented data growth originating from different
online platforms which contribute to big data in terms of volume, velocity,
variety and veracity (4Vs). Given this nature of big data which is
unstructured, performing analytics to extract meaningful information is
currently a great challenge to big data analytics. Collecting and analyzing
unstructured textual data allows decision makers to study the escalation of
comments/posts on our social media platforms. Hence, there is a need for
automatic big data analysis to overcome the noise and the non-reliability of
these unstructured datasets from the digital media platforms. However, the current
machine learning algorithms used are performance driven, focusing on the
classification/prediction accuracy based on known properties learned from the
training samples. With the learning task in a large dataset, most machine
learning models are known to require high computational cost which eventually
leads to computational complexity. In this work, two supervised machine
learning algorithms are combined with text mining techniques to produce a
hybrid model which consists of Na\"ive Bayes and support vector machines (SVM).
This is to increase the efficiency and accuracy of the results obtained and
also to reduce the computational cost and complexity. The system also provides
an open platform where a group of persons with a common interest can share
their comments/messages, and these comments are classified automatically as legal or
illegal. This improves the quality of conversation among users. The hybrid
model was developed using WEKA tools and Java programming language. The result
shows that the hybrid model gave 96.76% accuracy as against the 61.45% and
69.21% of the Na\"ive Bayes and SVM models respectively.
| [
{
"created": "Tue, 30 Mar 2021 19:02:48 GMT",
"version": "v1"
}
] | 2021-04-01 | [
[
"Asogwa",
"D. C.",
""
],
[
"Anigbogu",
"S. O.",
""
],
[
"Onyenwe",
"I. E.",
""
],
[
"Sani",
"F. A.",
""
]
] |
2103.16652 | Tobias Lorenz | Tobias Lorenz, Anian Ruoss, Mislav Balunovi\'c, Gagandeep Singh,
Martin Vechev | Robustness Certification for Point Cloud Models | International Conference on Computer Vision (ICCV) 2021 | Proceedings of the IEEE/CVF International Conference on Computer
Vision (ICCV) 2021, pp. 7608-7618 | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of deep 3D point cloud models in safety-critical applications, such
as autonomous driving, dictates the need to certify the robustness of these
models to real-world transformations. This is technically challenging, as it
requires a scalable verifier tailored to point cloud models that handles a wide
range of semantic 3D transformations. In this work, we address this challenge
and introduce 3DCertify, the first verifier able to certify the robustness of
point cloud models. 3DCertify is based on two key insights: (i) a generic
relaxation based on first-order Taylor approximations, applicable to any
differentiable transformation, and (ii) a precise relaxation for global feature
pooling, which is more complex than pointwise activations (e.g., ReLU or
sigmoid) but commonly employed in point cloud models. We demonstrate the
effectiveness of 3DCertify by performing an extensive evaluation on a wide
range of 3D transformations (e.g., rotation, twisting) for both classification
and part segmentation tasks. For example, we can certify robustness against
rotations by $\pm$60{\deg} for 95.7% of point clouds, and our max pool
relaxation increases certification by up to 15.6%.
| [
{
"created": "Tue, 30 Mar 2021 19:52:07 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Aug 2021 09:37:54 GMT",
"version": "v2"
}
] | 2021-10-14 | [
[
"Lorenz",
"Tobias",
""
],
[
"Ruoss",
"Anian",
""
],
[
"Balunović",
"Mislav",
""
],
[
"Singh",
"Gagandeep",
""
],
[
"Vechev",
"Martin",
""
]
] |
2103.16670 | Alexis Perakis | Alexis Perakis, Ali Gorji, Samriddhi Jain, Krishna Chaitanya, Simone
Rizza, Ender Konukoglu | Contrastive Learning of Single-Cell Phenotypic Representations for
Treatment Classification | 12 pages, 2 figures, 7 tables. This article is a pre-print and is
currently under review at a conference | In: Lian C., Cao X., Rekik I., Xu X., Yan P. (eds) Machine
Learning in Medical Imaging. MLMI 2021. Lecture Notes in Computer Science,
vol 12966. Springer, Cham | 10.1007/978-3-030-87589-3_58 | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning robust representations to discriminate cell phenotypes based on
microscopy images is important for drug discovery. Drug development efforts
typically analyse thousands of cell images to screen for potential treatments.
Early works focus on creating hand-engineered features from these images or
learn such features with deep neural networks in a fully or weakly-supervised
framework. Both require prior knowledge or labelled datasets. Therefore,
subsequent works propose unsupervised approaches based on generative models to
learn these representations. Recently, representations learned with
self-supervised contrastive loss-based methods have yielded state-of-the-art
results on various imaging tasks compared to earlier unsupervised approaches.
In this work, we leverage a contrastive learning framework to learn appropriate
representations from single-cell fluorescent microscopy images for the task of
Mechanism-of-Action classification. The proposed work is evaluated on the
annotated BBBC021 dataset, and we obtain state-of-the-art results in NSC, NCSB
and drop metrics for an unsupervised approach. We observe an improvement of 10%
in NCSB accuracy and 11% in NSC-NSCB drop over the previously best unsupervised
method. Moreover, the performance of our unsupervised approach ties with the
best supervised approach. Additionally, we observe that our framework performs
well even without post-processing, unlike earlier methods. With this, we
conclude that one can learn robust cell representations with contrastive
learning.
| [
{
"created": "Tue, 30 Mar 2021 20:29:04 GMT",
"version": "v1"
}
] | 2021-12-28 | [
[
"Perakis",
"Alexis",
""
],
[
"Gorji",
"Ali",
""
],
[
"Jain",
"Samriddhi",
""
],
[
"Chaitanya",
"Krishna",
""
],
[
"Rizza",
"Simone",
""
],
[
"Konukoglu",
"Ender",
""
]
] |
2103.16827 | Sehoon Kim | Sehoon Kim, Amir Gholami, Zhewei Yao, Nicholas Lee, Patrick Wang,
Aniruddha Nrusimha, Bohan Zhai, Tianren Gao, Michael W. Mahoney, Kurt Keutzer | Integer-only Zero-shot Quantization for Efficient Speech Recognition | null | ICASSP 2022 | null | null | eess.AS cs.CL cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end neural network models achieve improved performance on various
automatic speech recognition (ASR) tasks. However, these models perform poorly
on edge hardware due to large memory and computation requirements. While
quantizing model weights and/or activations to low-precision can be a promising
solution, previous research on quantizing ASR models is limited. In particular,
the previous approaches use floating-point arithmetic during inference and thus
they cannot fully exploit efficient integer processing units. Moreover, they
require training and/or validation data during quantization, which may not be
available due to security or privacy concerns. To address these limitations, we
propose an integer-only, zero-shot quantization scheme for ASR models. In
particular, we generate synthetic data whose runtime statistics resemble the
real data, and we use it to calibrate models during quantization. We apply our
method to quantize QuartzNet, Jasper, and Conformer and show negligible WER
degradation as compared to the full-precision baseline models, even without
using any data. Moreover, we achieve up to 2.35x speedup on a T4 GPU and 4x
compression rate, with a modest WER degradation of <1% with INT8 quantization.
| [
{
"created": "Wed, 31 Mar 2021 06:05:40 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Oct 2021 22:10:39 GMT",
"version": "v2"
},
{
"created": "Sun, 30 Jan 2022 22:10:56 GMT",
"version": "v3"
}
] | 2022-02-01 | [
[
"Kim",
"Sehoon",
""
],
[
"Gholami",
"Amir",
""
],
[
"Yao",
"Zhewei",
""
],
[
"Lee",
"Nicholas",
""
],
[
"Wang",
"Patrick",
""
],
[
"Nrusimha",
"Aniruddha",
""
],
[
"Zhai",
"Bohan",
""
],
[
"Gao",
"Tianren",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Keutzer",
"Kurt",
""
]
] |
2103.16836 | Hermann Courteille | Hermann Courteille (LISTIC), A. Beno\^it (LISTIC), N M\'eger (LISTIC),
A Atto (LISTIC), D. Ienco (UMR TETIS) | Channel-Based Attention for LCC Using Sentinel-2 Time Series | null | International Geoscience and Remote Sensing Symposium (IGARSS),
Jul 2021, Brussels, Belgium | null | null | cs.CV cs.LG cs.NE eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Neural Networks (DNNs) are getting increasing attention to deal with
Land Cover Classification (LCC) relying on Satellite Image Time Series (SITS).
Though high performances can be achieved, the rationale of a prediction yielded
by a DNN often remains unclear. An architecture expressing predictions with
respect to input channels is thus proposed in this paper. It relies on
convolutional layers and an attention mechanism weighting the importance of
each channel in the final classification decision. The correlation between
channels is taken into account to set up shared kernels and lower model
complexity. Experiments based on a Sentinel-2 SITS show promising results.
| [
{
"created": "Wed, 31 Mar 2021 06:24:15 GMT",
"version": "v1"
}
] | 2021-04-01 | [
[
"Courteille",
"Hermann",
"",
"LISTIC"
],
[
"Benoît",
"A.",
"",
"LISTIC"
],
[
"Méger",
"N",
"",
"LISTIC"
],
[
"Atto",
"A",
"",
"LISTIC"
],
[
"Ienco",
"D.",
"",
"UMR TETIS"
]
] |
2103.16854 | Fuyan Ma | Fuyan Ma, Bin Sun and Shutao Li | Facial Expression Recognition with Visual Transformers and Attentional
Selective Fusion | null | IEEE Trans. Affective Comput. 1(2021)1-1 | 10.1109/TAFFC.2021.3122146 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial Expression Recognition (FER) in the wild is extremely challenging due
to occlusions, variant head poses, face deformation and motion blur under
unconstrained conditions. Although substantial progress has been made in
automatic FER in the past few decades, previous studies were mainly designed
for lab-controlled FER. Real-world occlusions, variant head poses and other
issues definitely increase the difficulty of FER on account of these
information-deficient regions and complex backgrounds. Different from previous
purely CNN-based methods, we argue that it is feasible and practical to
translate facial images into sequences of visual words and perform expression
recognition from a global perspective. Therefore, we propose the Visual
Transformers with Feature Fusion (VTFF) to tackle FER in the wild by two main
steps. First, we propose the attentional selective fusion (ASF) for leveraging
two kinds of feature maps generated by two-branch CNNs. The ASF captures
discriminative information by fusing multiple features with the global-local
attention. The fused feature maps are then flattened and projected into
sequences of visual words. Second, inspired by the success of Transformers in
natural language processing, we propose to model relationships between these
visual words with the global self-attention. The proposed method is evaluated
on three public in-the-wild facial expression datasets (RAF-DB, FERPlus and
AffectNet). Under the same settings, extensive experiments demonstrate that our
method shows superior performance over other methods, setting new state of the
art on RAF-DB with 88.14%, FERPlus with 88.81% and AffectNet with 61.85%. The
cross-dataset evaluation on CK+ shows the promising generalization capability
of the proposed method.
| [
{
"created": "Wed, 31 Mar 2021 07:07:56 GMT",
"version": "v1"
},
{
"created": "Sun, 23 May 2021 03:41:03 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Feb 2022 01:51:36 GMT",
"version": "v3"
}
] | 2022-05-12 | [
[
"Ma",
"Fuyan",
""
],
[
"Sun",
"Bin",
""
],
[
"Li",
"Shutao",
""
]
] |
2103.16898 | Wojciech Ozga | Wojciech Ozga, Do Le Quoc, Christof Fetzer | Perun: Secure Multi-Stakeholder Machine Learning Framework with GPU
Support | null | The 35th Annual IFIP Conference on Data and Applications Security
and Privacy (DBSec 2021) | null | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Confidential multi-stakeholder machine learning (ML) allows multiple parties
to perform collaborative data analytics while not revealing their intellectual
property, such as ML source code, model, or datasets. State-of-the-art
solutions based on homomorphic encryption incur a large performance overhead.
Hardware-based solutions, such as trusted execution environments (TEEs),
significantly improve the performance in inference computations but still
suffer from low performance in training computations, e.g., deep neural
network model training, because of the limited availability of protected memory
and lack of GPU support.
To address this problem, we designed and implemented Perun, a framework for
confidential multi-stakeholder machine learning that allows users to make a
trade-off between security and performance. Perun executes ML training on
hardware accelerators (e.g., GPU) while providing security guarantees using
trusted computing technologies, such as trusted platform module and integrity
measurement architecture. Less compute-intensive workloads, such as inference,
execute only inside TEE, thus at a lower trusted computing base. The evaluation
shows that during the ML training on CIFAR-10 and real-world medical datasets,
Perun achieved a 161x to 1560x speedup compared to a pure TEE-based approach.
| [
{
"created": "Wed, 31 Mar 2021 08:31:07 GMT",
"version": "v1"
}
] | 2021-06-04 | [
[
"Ozga",
"Wojciech",
""
],
[
"Quoc",
"Do Le",
""
],
[
"Fetzer",
"Christof",
""
]
] |
2103.17007 | Vyacheslav Yukalov | V.I. Yukalov | Tossing Quantum Coins and Dice | 26 pages | Laser Physics 31 (2021) 055201 | 10.1088/1555-6611/abee8f | null | quant-ph cs.AI cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | The procedure of tossing quantum coins and dice is described. This case is an
important example of a quantum procedure because it presents a typical
framework employed in quantum information processing and quantum computing. The
emphasis is on the clarification of the difference between quantum and
classical conditional probabilities. These probabilities are designed for
characterizing different systems, either quantum or classical, and they,
generally, cannot be reduced to each other. Thus the L\"{u}ders probability
cannot be treated as a generalization of the classical conditional probability.
The analogies between quantum theory of measurements and quantum decision
theory are elucidated.
| [
{
"created": "Wed, 31 Mar 2021 11:39:56 GMT",
"version": "v1"
}
] | 2021-05-26 | [
[
"Yukalov",
"V. I.",
""
]
] |
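Editorial aside on the record above: the Lüders conditional probability it contrasts with the classical one can be written in its standard textbook form (this is not quoted from the paper):

    \hat{\rho}_A = \frac{\hat{P}_A \hat{\rho}\, \hat{P}_A}{\operatorname{Tr}(\hat{P}_A \hat{\rho})},
    \qquad
    p(B \mid A) = \operatorname{Tr}\big(\hat{P}_B \hat{\rho}_A\big)
                = \frac{\operatorname{Tr}\big(\hat{P}_B \hat{P}_A \hat{\rho}\, \hat{P}_A\big)}
                       {\operatorname{Tr}\big(\hat{P}_A \hat{\rho}\big)}
    \quad\text{vs.}\quad
    p_{\mathrm{cl}}(B \mid A) = \frac{p(A \cap B)}{p(A)}.

The two generally differ because \hat{P}_A and \hat{P}_B need not commute, which is the sense in which the Lüders probability is not simply a generalization of the classical conditional probability.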
2103.17111 | Ezequiel de la Rosa | Ezequiel de la Rosa, David Robben, Diana M. Sima, Jan S. Kirschke,
Bjoern Menze | Differentiable Deconvolution for Improved Stroke Perfusion Analysis | Accepted at MICCAI 2020 | International Conference on Medical Image Computing and
Computer-Assisted Intervention 2020 Oct 4 (pp. 593-602) | 10.1007/978-3-030-59728-3_58 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Perfusion imaging is the current gold standard for acute ischemic stroke
analysis. It allows quantification of the salvageable and non-salvageable
tissue regions (penumbra and core areas respectively). In clinical settings,
the singular value decomposition (SVD) deconvolution is one of the most
accepted and used approaches for generating interpretable and physically
meaningful maps. Though this method has been widely validated in experimental
and clinical settings, it might produce suboptimal results because the chosen
inputs to the model cannot guarantee optimal performance. For the most critical
input, the arterial input function (AIF), it is still controversial how and
where it should be chosen even though the method is very sensitive to this
input. In this work we propose an AIF selection approach that is optimized for
maximal core lesion segmentation performance. The AIF is regressed by a neural
network optimized through a differentiable SVD deconvolution, aiming to
maximize core lesion segmentation agreement with ground truth data. To our
knowledge, this is the first work exploiting a differentiable deconvolution
model with neural networks. We show that our approach is able to generate AIFs
without any manual annotation, and hence avoiding manual rater's influences.
The method achieves manual expert performance in the ISLES18 dataset. We
conclude that the methodology opens new possibilities for improving perfusion
imaging quantification with deep neural networks.
| [
{
"created": "Wed, 31 Mar 2021 14:29:36 GMT",
"version": "v1"
}
] | 2021-04-01 | [
[
"de la Rosa",
"Ezequiel",
""
],
[
"Robben",
"David",
""
],
[
"Sima",
"Diana M.",
""
],
[
"Kirschke",
"Jan S.",
""
],
[
"Menze",
"Bjoern",
""
]
] |
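For orientation on the record above: a minimal truncated-SVD deconvolution of a tissue curve with an AIF, the classical building block that the paper makes differentiable. The function name, the 0.2 singular-value threshold and the plain-NumPy interface are assumptions for illustration; the paper's AIF-regressing network is not reproduced here.

    import numpy as np

    def svd_deconvolve(tissue_curve, aif, dt, sv_threshold=0.2):
        # Lower-triangular Toeplitz convolution matrix built from the AIF:
        # tissue[i] ~ dt * sum_j aif[i - j] * residue[j].
        n = len(aif)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, : i + 1] = aif[i::-1]
        A *= dt
        # Truncated-SVD pseudo-inverse regularises the ill-posed inversion.
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > sv_threshold * s.max(), 1.0 / s, 0.0)
        residue = Vt.T @ (s_inv * (U.T @ np.asarray(tissue_curve, float)))
        return residue          # flow-scaled residue function; CBF ~ residue.max()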
2103.17118 | Zhenhua Xu | Zhenhua Xu, Yuxiang Sun, Ming Liu | iCurb: Imitation Learning-based Detection of Road Curbs using Aerial
Images for Autonomous Driving | null | IEEE Robotics and Automation Letters,6,(2021),1097-1104 | 10.1109/LRA.2021.3056344 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detection of road curbs is an essential capability for autonomous driving. It
can be used for autonomous vehicles to determine drivable areas on roads.
Usually, road curbs are detected on-line using vehicle-mounted sensors, such as
video cameras and 3-D Lidars. However, on-line detection using video cameras
may suffer from challenging illumination conditions, and Lidar-based approaches
may have difficulty detecting far-away road curbs due to the sparsity of
point clouds. In recent years, aerial images have become more and more widely
available worldwide. We find that the visual appearances of road areas and
off-road areas are usually different in aerial images, so we propose a novel
solution to detect road curbs off-line using aerial images. The input to our
method is an aerial image, and the output is directly a graph (i.e., vertices
and edges) representing road curbs. To this end, we formulate the problem as an
imitation learning problem, and design a novel network and an innovative
training strategy to train an agent to iteratively find the road-curb graph.
The experimental results on a public dataset confirm the effectiveness and
superiority of our method. This work is accompanied with a demonstration video
and a supplementary document at https://tonyxuqaq.github.io/iCurb/.
| [
{
"created": "Wed, 31 Mar 2021 14:40:31 GMT",
"version": "v1"
}
] | 2021-04-01 | [
[
"Xu",
"Zhenhua",
""
],
[
"Sun",
"Yuxiang",
""
],
[
"Liu",
"Ming",
""
]
] |
2103.17123 | Trung-Nghia Le | Trung-Nghia Le, Yubo Cao, Tan-Cong Nguyen, Minh-Quan Le, Khanh-Duy
Nguyen, Thanh-Toan Do, Minh-Triet Tran, Tam V. Nguyen | Camouflaged Instance Segmentation In-The-Wild: Dataset, Method, and
Benchmark Suite | TIP acceptance. Project page:
https://sites.google.com/view/ltnghia/research/camo_plus_plus | IEEE Transactions on Image Processing 2021 | 10.1109/TIP.2021.3130490 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper pushes the envelope on decomposing camouflaged regions in an image
into meaningful components, namely, camouflaged instances. To promote the new
task of camouflaged instance segmentation of in-the-wild images, we introduce a
dataset, dubbed CAMO++, that extends our preliminary CAMO dataset (camouflaged
object segmentation) in terms of quantity and diversity. The new dataset
substantially increases the number of images with hierarchical pixel-wise
ground truths. We also provide a benchmark suite for the task of camouflaged
instance segmentation. In particular, we present an extensive evaluation of
state-of-the-art instance segmentation methods on our newly constructed CAMO++
dataset in various scenarios. We also present a camouflage fusion learning
(CFL) framework for camouflaged instance segmentation to further improve the
performance of state-of-the-art methods. The dataset, model, evaluation suite,
and benchmark will be made publicly available on our project page:
https://sites.google.com/view/ltnghia/research/camo_plus_plus
| [
{
"created": "Wed, 31 Mar 2021 14:46:12 GMT",
"version": "v1"
},
{
"created": "Thu, 20 May 2021 01:25:37 GMT",
"version": "v2"
},
{
"created": "Fri, 21 May 2021 01:22:30 GMT",
"version": "v3"
},
{
"created": "Sun, 12 Dec 2021 01:46:26 GMT",
"version": "v4"
}
] | 2021-12-14 | [
[
"Le",
"Trung-Nghia",
""
],
[
"Cao",
"Yubo",
""
],
[
"Nguyen",
"Tan-Cong",
""
],
[
"Le",
"Minh-Quan",
""
],
[
"Nguyen",
"Khanh-Duy",
""
],
[
"Do",
"Thanh-Toan",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Nguyen",
"Tam V.",
""
]
] |
2103.17235 | Debesh Jha | Nikhil Kumar Tomar, Debesh Jha, Michael A. Riegler, H{\aa}vard D.
Johansen, Dag Johansen, Jens Rittscher, P{\aa}l Halvorsen, and Sharib Ali | FANet: A Feedback Attention Network for Improved Biomedical Image
Segmentation | null | IEEE Transactions on Neural Networks and Learning Systems, 2022 | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | The increase of available large clinical and experimental datasets has
contributed to a substantial amount of important contributions in the area of
biomedical image analysis. Image segmentation, which is crucial for any
quantitative analysis, has especially attracted attention. Recent hardware
advancement has led to the success of deep learning approaches. However,
although deep learning models are being trained on large datasets, existing
methods do not use the information from different learning epochs effectively.
In this work, we leverage the information of each training epoch to prune the
prediction maps of the subsequent epochs. We propose a novel architecture
called feedback attention network (FANet) that unifies the previous epoch mask
with the feature map of the current training epoch. The previous epoch mask is
then used to provide a hard attention to the learned feature maps at different
convolutional layers. The network also allows the predictions to be rectified in
an iterative fashion at test time. We show that our proposed
\textit{feedback attention} model provides a substantial improvement on most
segmentation metrics tested on seven publicly available biomedical imaging
datasets demonstrating the effectiveness of FANet. The source code is available
at \url{https://github.com/nikhilroxtomar/FANet}.
| [
{
"created": "Wed, 31 Mar 2021 17:34:20 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jan 2022 03:25:47 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Mar 2022 18:17:11 GMT",
"version": "v3"
}
] | 2022-03-29 | [
[
"Tomar",
"Nikhil Kumar",
""
],
[
"Jha",
"Debesh",
""
],
[
"Riegler",
"Michael A.",
""
],
[
"Johansen",
"Håvard D.",
""
],
[
"Johansen",
"Dag",
""
],
[
"Rittscher",
"Jens",
""
],
[
"Halvorsen",
"Pål",
""
],
[
"Ali",
"Sharib",
""
]
] |
2103.17245 | Enis Karaarslan Dr. | \"Ozg\"ur Dogan, Oguzhan Sahin, Enis Karaarslan | Digital Twin Based Disaster Management System Proposal: DT-DMS | 5 pages, 6 figures | Journal of Emerging Computer Technologies (JECT), 2021, Vol:1 (2),
25-30 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The damage and the impact of natural disasters are becoming more destructive
with the increase of urbanization. Today's metropolitan cities are not
sufficiently prepared for pre- and post-disaster situations. Digital Twin
technology can provide a solution. A virtual copy of the physical city could be
created by collecting data from sensors of the Internet of Things (IoT) devices
and stored on the cloud infrastructure. This virtual copy is kept current and
up to date with the continuous flow of the data coming from the sensors. We
propose a disaster management system utilizing machine learning, called DT-DMS,
to support decision-making mechanisms. This study aims to show how to
educate and prepare emergency center staff by simulating potential disaster
situations on the virtual copy. The event of a disaster will be simulated
allowing emergency center staff to make decisions and depicting the potential
outcomes of these decisions. A rescue operation after an earthquake is
simulated. Test results are promising and the simulation scope is planned to be
extended.
| [
{
"created": "Wed, 31 Mar 2021 17:47:15 GMT",
"version": "v1"
}
] | 2021-04-01 | [
[
"Dogan",
"Özgür",
""
],
[
"Sahin",
"Oguzhan",
""
],
[
"Karaarslan",
"Enis",
""
]
] |
2104.00085 | Hudson Bruno | Hudson M. S. Bruno and Esther L. Colombini | A comparative evaluation of learned feature descriptors on hybrid
monocular visual SLAM methods | 6 pages, Published in 2020 Latin American Robotics Symposium (LARS) | 2020 Latin American Robotics Symposium (LARS), Natal, Brazil,
2020, pp. 1-6 | 10.1109/LARS/SBR/WRE51543.2020.9307033 | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Classical Visual Simultaneous Localization and Mapping (VSLAM) algorithms can
be easily induced to fail when either the robot's motion or the environment is
too challenging. The use of Deep Neural Networks to enhance VSLAM algorithms
has recently achieved promising results, which we call hybrid methods. In this
paper, we compare the performance of hybrid monocular VSLAM methods with
different learned feature descriptors. To this end, we propose a set of
experiments to evaluate the robustness of the algorithms under different
environments, camera motion, and camera sensor noise. Experiments conducted on
KITTI and Euroc MAV datasets confirm that learned feature descriptors can
create more robust VSLAM systems.
| [
{
"created": "Wed, 31 Mar 2021 19:56:32 GMT",
"version": "v1"
}
] | 2021-04-02 | [
[
"Bruno",
"Hudson M. S.",
""
],
[
"Colombini",
"Esther L.",
""
]
] |
2104.00185 | Jurandy Almeida | Samuel Felipe dos Santos and Jurandy Almeida | Less is More: Accelerating Faster Neural Networks Straight from JPEG | arXiv admin note: text overlap with arXiv:2012.14426 | in 2021 25th Iberoamerican Congress on Pattern Recognition
(CIARP), 2021, pp. 237-247 | 10.1007/978-3-030-93420-0_23 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most image data available are often stored in a compressed format, from which
JPEG is the most widespread. To feed this data on a convolutional neural
network (CNN), a preliminary decoding process is required to obtain RGB pixels,
demanding a high computational load and memory usage. For this reason, the
design of CNNs for processing JPEG compressed data has gained attention in
recent years. In most existing works, typical CNN architectures are adapted to
facilitate the learning with the DCT coefficients rather than RGB pixels.
Although they are effective, their architectural changes either raise the
computational costs or neglect relevant information from DCT inputs. In this
paper, we examine different ways of speeding up CNNs designed for DCT inputs,
exploiting learning strategies to reduce the computational complexity by taking
full advantage of DCT inputs. Our experiments were conducted on the ImageNet
dataset. Results show that learning how to combine all DCT inputs in a
data-driven fashion is better than discarding them by hand, and its combination
with a reduction of layers has proven to be effective for reducing the
computational costs while retaining accuracy.
| [
{
"created": "Thu, 1 Apr 2021 01:21:24 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Aug 2022 14:25:39 GMT",
"version": "v2"
}
] | 2022-08-25 | [
[
"Santos",
"Samuel Felipe dos",
""
],
[
"Almeida",
"Jurandy",
""
]
] |
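A minimal sketch of the data-driven combination of DCT inputs discussed in the record above: a learnable 1x1 convolution mixes all DCT coefficient channels instead of discarding some by hand. The channel counts and the toy module are assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class DCTChannelMixer(nn.Module):
        # Learns how to combine *all* DCT coefficient channels (e.g. 64 per
        # Y/Cb/Cr plane) instead of dropping part of them by hand.
        def __init__(self, in_channels=192, out_channels=64):
            super().__init__()
            self.mix = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
            self.bn = nn.BatchNorm2d(out_channels)
            self.act = nn.ReLU(inplace=True)

        def forward(self, dct_blocks):        # (batch, in_channels, H/8, W/8)
            return self.act(self.bn(self.mix(dct_blocks)))

    y = DCTChannelMixer()(torch.randn(2, 192, 28, 28))   # -> (2, 64, 28, 28)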
2104.00190 | Wail Gueaieb | Mohammed Abouheaf, Wail Gueaieb, Md. Suruz Miah, Davide Spinello | Trajectory Tracking of Underactuated Sea Vessels With Uncertain
Dynamics: An Integral Reinforcement Learning Approach | null | IEEE International Conference on Systems, Man, and Cybernetics
(SMC), Toronto, ON, Canada, 2020, pp. 1866-1871 | 10.1109/SMC42975.2020.9283399 | null | eess.SY cs.AI cs.LG cs.RO cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Underactuated systems like sea vessels have degrees of motion that are
insufficiently matched by a set of independent actuation forces. In addition,
the underlying trajectory-tracking control problems grow in complexity in order
to decide the optimal rudder and thrust control signals. This enforces several
difficult-to-solve constraints that are associated with the error dynamical
equations using classical optimal tracking and adaptive control approaches. An
online machine learning mechanism based on integral reinforcement learning is
proposed to find a solution for a class of nonlinear tracking problems with
partial prior knowledge of the system dynamics. The actuation forces are
decided using innovative forms of temporal difference equations relevant to the
vessel's surge and angular velocities. The solution is implemented using an
online value iteration process which is realized by employing means of the
adaptive critics and gradient descent approaches. The adaptive learning
mechanism exhibited well-functioning and interactive features in reacting to
different desired reference-tracking scenarios.
| [
{
"created": "Thu, 1 Apr 2021 01:41:49 GMT",
"version": "v1"
}
] | 2021-04-02 | [
[
"Abouheaf",
"Mohammed",
""
],
[
"Gueaieb",
"Wail",
""
],
[
"Miah",
"Md. Suruz",
""
],
[
"Spinello",
"Davide",
""
]
] |
2104.00199 | Wail Gueaieb | Ning Wang, Mohammed Abouheaf, Wail Gueaieb | Data-Driven Optimized Tracking Control Heuristic for MIMO Structures: A
Balance System Case Study | null | IEEE International Conference on Systems, Man, and Cybernetics
(SMC), Toronto, ON, Canada, 2020, pp. 2365-2370 | 10.1109/SMC42975.2020.9283038 | null | eess.SY cs.AI cs.LG cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A data-driven computational heuristic is proposed to control MIMO systems
without prior knowledge of their dynamics. The heuristic is illustrated on a
two-input two-output balance system. It integrates a self-adjusting nonlinear
threshold accepting heuristic with a neural network to compromise between the
desired transient and steady state characteristics of the system while
optimizing a dynamic cost function. The heuristic decides on the control gains
of multiple interacting PID control loops. The neural network is trained upon
optimizing a weighted-derivative like objective cost function. The performance
of the developed mechanism is compared with another controller that employs a
combined PID-Riccati approach. One of the salient features of the proposed
control schemes is that they do not require prior knowledge of the system
dynamics. However, they depend on a known region of stability for the control
gains to be used as a search space by the optimization algorithm. The control
mechanism is validated using different optimization criteria which address
different design requirements.
| [
{
"created": "Thu, 1 Apr 2021 02:00:20 GMT",
"version": "v1"
}
] | 2021-04-02 | [
[
"Wang",
"Ning",
""
],
[
"Abouheaf",
"Mohammed",
""
],
[
"Gueaieb",
"Wail",
""
]
] |
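A rough sketch of the self-adjusting threshold-accepting search over controller gains described in the record above. The cost interface, stability-box bounds, step size and threshold decay are assumptions; the neural-network component the paper couples with the search is not reproduced.

    import numpy as np

    def threshold_accepting(cost, low, high, iters=500, step=0.1, thresh0=1.0, seed=0):
        # Search PID-like gains inside a known stability box [low, high],
        # accepting mild deteriorations so the search can escape local minima.
        rng = np.random.default_rng(seed)
        low, high = np.asarray(low, float), np.asarray(high, float)
        gains = rng.uniform(low, high)
        cur = cost(gains)
        best, best_cost = gains.copy(), cur
        thresh = thresh0
        for _ in range(iters):
            cand = np.clip(gains + step * (high - low) * rng.standard_normal(low.size),
                           low, high)
            c = cost(cand)
            if c - cur < thresh:              # threshold-accepting rule
                gains, cur = cand, c
                if c < best_cost:
                    best, best_cost = cand.copy(), c
            thresh *= 0.99                    # self-adjusting (shrinking) threshold
        return best, best_cost

    # `cost` would simulate the closed loop and return a weighted tracking error.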
2104.00298 | Mingxing Tan | Mingxing Tan, Quoc V. Le | EfficientNetV2: Smaller Models and Faster Training | ICML 2021 | International Conference on Machine Learning, 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces EfficientNetV2, a new family of convolutional networks
that have faster training speed and better parameter efficiency than previous
models. To develop this family of models, we use a combination of
training-aware neural architecture search and scaling, to jointly optimize
training speed and parameter efficiency. The models were searched from the
search space enriched with new ops such as Fused-MBConv. Our experiments show
that EfficientNetV2 models train much faster than state-of-the-art models while
being up to 6.8x smaller.
Our training can be further sped up by progressively increasing the image
size during training, but it often causes a drop in accuracy. To compensate for
this accuracy drop, we propose to adaptively adjust regularization (e.g.,
dropout and data augmentation) as well, such that we can achieve both fast
training and good accuracy.
With progressive learning, our EfficientNetV2 significantly outperforms
previous models on ImageNet and CIFAR/Cars/Flowers datasets. By pretraining on
the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on
ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while
training 5x-11x faster using the same computing resources. Code will be
available at https://github.com/google/automl/tree/master/efficientnetv2.
| [
{
"created": "Thu, 1 Apr 2021 07:08:36 GMT",
"version": "v1"
},
{
"created": "Thu, 13 May 2021 01:51:01 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Jun 2021 22:04:56 GMT",
"version": "v3"
}
] | 2021-06-25 | [
[
"Tan",
"Mingxing",
""
],
[
"Le",
"Quoc V.",
""
]
] |
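A tiny sketch of the progressive-learning idea from the record above: image size and regularization strength are increased together over training stages. The concrete ranges (128-300 pixels, 0.1-0.3 dropout) are assumptions for illustration, not the paper's published schedule.

    def progressive_stage(stage, num_stages=4,
                          min_size=128, max_size=300,
                          min_dropout=0.1, max_dropout=0.3):
        # Small images + weak regularization early; large images + strong
        # regularization late, interpolated linearly across stages.
        t = stage / max(num_stages - 1, 1)
        image_size = int(round(min_size + t * (max_size - min_size)))
        dropout = min_dropout + t * (max_dropout - min_dropout)
        return image_size, dropout

    for s in range(4):
        print(progressive_stage(s))   # (128, 0.1) ... (300, 0.3)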
2104.00322 | Matan Levi | Matan Levi, Idan Attias, Aryeh Kontorovich | Domain Invariant Adversarial Learning | null | Transactions of Machine Learning Research (2022) | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The phenomenon of adversarial examples illustrates one of the most basic
vulnerabilities of deep neural networks. Among the variety of techniques
introduced to surmount this inherent weakness, adversarial training has emerged
as the most effective strategy for learning robust models. Typically, this is
achieved by balancing robust and natural objectives. In this work, we aim to
further optimize the trade-off between robust and standard accuracy by
enforcing a domain-invariant feature representation. We present a new
adversarial training method, Domain Invariant Adversarial Learning (DIAL),
which learns a feature representation that is both robust and domain invariant.
DIAL uses a variant of Domain Adversarial Neural Network (DANN) on the natural
domain and its corresponding adversarial domain. In the case where the source
domain consists of natural examples and the target domain is the adversarially
perturbed examples, our method learns a feature representation constrained not
to discriminate between the natural and adversarial examples, and can therefore
achieve a more robust representation. DIAL is a generic and modular technique
that can be easily incorporated into any adversarial training method. Our
experiments indicate that incorporating DIAL in the adversarial training
process improves both robustness and standard accuracy.
| [
{
"created": "Thu, 1 Apr 2021 08:04:10 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Jun 2021 14:23:20 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Oct 2021 21:35:42 GMT",
"version": "v3"
},
{
"created": "Tue, 13 Sep 2022 10:03:47 GMT",
"version": "v4"
}
] | 2022-09-14 | [
[
"Levi",
"Matan",
""
],
[
"Attias",
"Idan",
""
],
[
"Kontorovich",
"Aryeh",
""
]
] |
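A minimal sketch of the gradient-reversal building block that DANN-style training, as used by DIAL in the record above, relies on; the exact weighting between natural, adversarial and domain losses is not reproduced.

    import torch
    from torch.autograd import Function

    class GradReverse(Function):
        # Identity in the forward pass; gradient multiplied by -lambda in the
        # backward pass, so the encoder learns domain-invariant features.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    def grad_reverse(x, lam=1.0):
        return GradReverse.apply(x, lam)

    # features = encoder(torch.cat([natural, adversarial]))
    # domain_logits = domain_head(grad_reverse(features))
    # a cross-entropy on domain_logits (natural vs adversarial) is then added
    # to the usual robust classification objective.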
2104.00424 | Jussi Karlgren | Jussi Karlgren and Pentti Kanerva | High-dimensional distributed semantic spaces for utterances | null | Natural Language Engineering 25, no. 4 (2019): 503-517 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-dimensional distributed semantic spaces have proven useful and effective
for aggregating and processing visual, auditory, and lexical information for
many tasks related to human-generated data. Human language makes use of a large
and varying number of features, lexical and constructional items as well as
contextual and discourse-specific data of various types, which all interact to
represent various aspects of communicative information. Some of these features
are mostly local and useful for the organisation of e.g. argument structure of
a predication; others are persistent over the course of a discourse and
necessary for achieving a reasonable level of understanding of the content.
This paper describes a model for high-dimensional representation for utterance
and text level data including features such as constructions or contextual
data, based on a mathematically principled and behaviourally plausible approach
to representing linguistic information. The implementation of the
representation is a straightforward extension of Random Indexing models
previously used for lexical linguistic items. The paper shows how the
implemented model is able to represent a broad range of linguistic features in
a common integral framework of fixed dimensionality, which is computationally
habitable, and which is suitable as a bridge between symbolic representations
such as dependency analysis and continuous representations used e.g. in
classifiers or further machine-learning approaches. This is achieved with
operations on vectors that constitute a powerful computational algebra,
accompanied with an associative memory for the vectors. The paper provides a
technical overview of the framework and a worked through implemented example of
how it can be applied to various types of linguistic features.
| [
{
"created": "Thu, 1 Apr 2021 12:09:47 GMT",
"version": "v1"
}
] | 2021-04-02 | [
[
"Karlgren",
"Jussi",
""
],
[
"Kanerva",
"Pentti",
""
]
] |
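A small sketch of the Random Indexing scheme that the record above extends: sparse ternary index vectors accumulate into context vectors, with a permutation (here a circular shift) encoding relative position. The dimensionality, sparsity and shift-as-permutation choice are assumptions.

    import numpy as np

    DIM, NNZ = 10_000, 20                     # dimensionality, non-zeros per vector
    rng = np.random.default_rng(0)
    index, context = {}, {}                   # word -> index / context vector

    def index_vector():
        # Sparse ternary random label, near-orthogonal to all the others.
        v = np.zeros(DIM)
        pos = rng.choice(DIM, size=NNZ, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=NNZ)
        return v

    def observe(window, focus_pos):
        # Accumulate neighbours' index vectors into the focus word's context
        # vector; a circular shift acts as the permutation encoding position.
        ctx = context.setdefault(window[focus_pos], np.zeros(DIM))
        for i, w in enumerate(window):
            if i != focus_pos:
                ctx += np.roll(index.setdefault(w, index_vector()), i - focus_pos)

    observe(["high", "dimensional", "semantic", "spaces"], 2)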
2104.00431 | Guangming Wang | Guangming Wang, Hesheng Wang, Yiling Liu and Weidong Chen | Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple
Masks | Accepted to ICRA 2019 | 2019 International Conference on Robotics and Automation (ICRA).
IEEE, 2019, pp. 4724-4730 | 10.1109/ICRA.2019.8793622 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new unsupervised learning method of depth and ego-motion using multiple
masks from monocular video is proposed in this paper. The depth estimation
network and the ego-motion estimation network are trained according to the
constraints of depth and ego-motion without truth values. The main contribution
of our method is to carefully consider the occlusion of the pixels generated
when the adjacent frames are projected to each other, and the blank problem
generated in the projection target imaging plane. Two fine masks are designed
to solve most of the image pixel mismatch caused by the movement of the camera.
In addition, some relatively rare circumstances are considered, and repeated
masking is proposed. In essence, the method uses geometric relationships to
filter out mismatched pixels during training, making unsupervised learning more
efficient and accurate. The experiments on the KITTI dataset show our
method achieves good performance in terms of depth and ego-motion. The
generalization capability of our method is demonstrated by training on the
low-quality uncalibrated bike video dataset and evaluating on KITTI dataset,
and the results are still good.
| [
{
"created": "Thu, 1 Apr 2021 12:29:23 GMT",
"version": "v1"
}
] | 2021-04-02 | [
[
"Wang",
"Guangming",
""
],
[
"Wang",
"Hesheng",
""
],
[
"Liu",
"Yiling",
""
],
[
"Chen",
"Weidong",
""
]
] |
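A compact sketch of the masked photometric objective underlying the record above; the occlusion and blank-region masks are represented abstractly as a list of binary maps multiplied into the per-pixel error, and the paper's exact mask construction is not reproduced.

    import torch

    def masked_photometric_loss(warped, target, masks):
        # Mean absolute photometric error over pixels that every mask marks
        # as valid; mismatched (occluded / blank) pixels are filtered out.
        valid = torch.ones_like(target[:, :1])
        for m in masks:
            valid = valid * m
        err = (warped - target).abs().mean(dim=1, keepdim=True)
        return (valid * err).sum() / valid.sum().clamp(min=1.0)

    # warped/target: (B, 3, H, W) images; masks: list of (B, 1, H, W) binary maps.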
2104.00527 | Yusuf Nasir | Yusuf Nasir, Jincong He, Chaoshun Hu, Shusei Tanaka, Kainan Wang and
XianHuan Wen | Deep Reinforcement Learning for Constrained Field Development
Optimization in Subsurface Two-phase Flow | Journal paper | Front. Appl. Math. Stat. 7 (2021) | 10.3389/fams.2021.689934 | null | cs.LG cs.AI math.OC physics.comp-ph physics.geo-ph | http://creativecommons.org/licenses/by/4.0/ | We present a deep reinforcement learning-based artificial intelligence agent
that could provide optimized development plans given a basic description of the
reservoir and rock/fluid properties with minimal computational cost. This
artificial intelligence agent, comprising of a convolutional neural network,
provides a mapping from a given state of the reservoir model, constraints, and
economic condition to the optimal decision (drill/do not drill and well
location) to be taken in the next stage of the defined sequential field
development planning process. The state of the reservoir model is defined using
parameters that appear in the governing equations of the two-phase flow. A
feedback loop training process referred to as deep reinforcement learning is
used to train an artificial intelligence agent with such a capability. The
training entails millions of flow simulations with varying reservoir model
descriptions (structural, rock and fluid properties), operational constraints,
and economic conditions. The parameters that define the reservoir model,
operational constraints, and economic conditions are randomly sampled from a
defined range of applicability. Several algorithmic treatments are introduced
to enhance the training of the artificial intelligence agent. After appropriate
training, the artificial intelligence agent provides an optimized field
development plan instantly for new scenarios within the defined range of
applicability. This approach has advantages over traditional optimization
algorithms (e.g., particle swarm optimization, genetic algorithm) that are
generally used to find a solution for a specific field development scenario and
typically not generalizable to different scenarios.
| [
{
"created": "Wed, 31 Mar 2021 07:08:24 GMT",
"version": "v1"
}
] | 2022-07-22 | [
[
"Nasir",
"Yusuf",
""
],
[
"He",
"Jincong",
""
],
[
"Hu",
"Chaoshun",
""
],
[
"Tanaka",
"Shusei",
""
],
[
"Wang",
"Kainan",
""
],
[
"Wen",
"XianHuan",
""
]
] |
2104.00564 | Mauro Martini | Mauro Martini, Vittorio Mazzia, Aleem Khaliq, Marcello Chiaberge | Domain-Adversarial Training of Self-Attention Based Networks for Land
Cover Classification using Multi-temporal Sentinel-2 Satellite Imagery | null | Remote Sensing 13.13 (2021): 2564 | 10.3390/rs13132564 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The increasing availability of large-scale remote sensing labeled data has
prompted researchers to develop increasingly precise and accurate data-driven
models for land cover and crop classification (LC&CC). Moreover, with the
introduction of self-attention and introspection mechanisms, deep learning
approaches have shown promising results in processing long temporal sequences
in the multi-spectral domain at a contained computational cost.
Nevertheless, most practical applications cannot rely on labeled data, and in
the field, surveys are a time-consuming solution that poses strict limitations
to the number of collected samples. Moreover, atmospheric conditions and
specific geographical region characteristics constitute a relevant domain gap
that does not allow direct applicability of a trained model on the available
dataset to the area of interest. In this paper, we investigate adversarial
training of deep neural networks to bridge the domain discrepancy between
distinct geographical zones. In particular, we perform a thorough analysis of
domain adaptation applied to challenging multi-spectral, multi-temporal data,
accurately highlighting the advantages of adapting state-of-the-art
self-attention based models for LC&CC to different target zones where labeled
data are not available. Extensive experimentation demonstrated significant
performance and generalization gain in applying domain-adversarial training to
source and target regions with marked dissimilarities between the distribution
of extracted features.
| [
{
"created": "Thu, 1 Apr 2021 15:45:17 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Jun 2021 14:30:42 GMT",
"version": "v2"
}
] | 2022-11-03 | [
[
"Martini",
"Mauro",
""
],
[
"Mazzia",
"Vittorio",
""
],
[
"Khaliq",
"Aleem",
""
],
[
"Chiaberge",
"Marcello",
""
]
] |
2104.00615 | Roman Popovych | Alex Bihlo and Roman O. Popovych | Physics-informed neural networks for the shallow-water equations on the
sphere | 24 pages, 9 figures, 1 tables, minor extensions | J. Comp. Phys. 456 (2022), 111024 | 10.1016/j.jcp.2022.111024 | null | physics.comp-ph cs.AI cs.LG cs.NA math.NA physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the use of physics-informed neural networks for solving the
shallow-water equations on the sphere in the meteorological context.
Physics-informed neural networks are trained to satisfy the differential
equations along with the prescribed initial and boundary data, and thus can be
seen as an alternative approach to solving differential equations compared to
traditional numerical approaches such as finite difference, finite volume or
spectral methods. We discuss the training difficulties of physics-informed
neural networks for the shallow-water equations on the sphere and propose a
simple multi-model approach to tackle test cases of comparatively long time
intervals. Here we train a sequence of neural networks instead of a single
neural network for the entire integration interval. We also avoid the use of a
boundary value loss by encoding the boundary conditions in a custom neural
network layer. We illustrate the abilities of the method by solving the most
prominent test cases proposed by Williamson et al. [J. Comput. Phys. 102
(1992), 211-224].
| [
{
"created": "Thu, 1 Apr 2021 16:47:40 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 07:31:22 GMT",
"version": "v2"
},
{
"created": "Sat, 12 Feb 2022 19:15:01 GMT",
"version": "v3"
}
] | 2024-09-19 | [
[
"Bihlo",
"Alex",
""
],
[
"Popovych",
"Roman O.",
""
]
] |
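A minimal physics-informed residual in the spirit of the record above, written for the 1-D advection equation u_t + a u_x = 0 as a stand-in for the much richer shallow-water system on the sphere; the multi-model time splitting and the boundary-encoding layer are only hinted at in the trailing comment.

    import torch

    def advection_residual(model, t, x, a=1.0):
        # Residual of u_t + a * u_x = 0 evaluated at collocation points (t, x).
        t = t.clone().requires_grad_(True)
        x = x.clone().requires_grad_(True)
        u = model(torch.stack([t, x], dim=-1)).squeeze(-1)
        ones = torch.ones_like(u)
        u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
        u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
        return u_t + a * u_x

    model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, 1))
    r = advection_residual(model, torch.rand(64), torch.rand(64))
    # Training minimises r.pow(2).mean() plus initial-condition terms; in the
    # multi-model setting one such network is trained per time sub-interval.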
2104.00639 | Rafel Palliser Sans | Rafel Palliser-Sans, Albert Rial-Farr\`as | HLE-UPC at SemEval-2021 Task 5: Multi-Depth DistilBERT for Toxic Spans
Detection | 7 pages, SemEval-2021 Workshop, ACL-IJCNLP 2021 | In Proceedings of ACL-IJCNLP 2021 | 10.18653/v1/2021.semeval-1.131 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents our submission to SemEval-2021 Task 5: Toxic Spans
Detection. The purpose of this task is to detect the spans that make a text
toxic, which is a complex labour for several reasons. Firstly, because of the
intrinsic subjectivity of toxicity, and secondly, due to toxicity not always
coming from single words like insults or offensive terms, but sometimes from whole
expressions formed by words that may not be toxic individually. Following this
idea of focusing on both single words and multi-word expressions, we study the
impact of using a multi-depth DistilBERT model, which uses embeddings from
different layers to estimate the final per-token toxicity. Our quantitative
results show that using information from multiple depths boosts the performance
of the model. Finally, we also analyze our best model qualitatively.
| [
{
"created": "Thu, 1 Apr 2021 17:37:38 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Apr 2021 11:05:54 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Aug 2021 10:24:19 GMT",
"version": "v3"
}
] | 2021-08-03 | [
[
"Palliser-Sans",
"Rafel",
""
],
[
"Rial-Farràs",
"Albert",
""
]
] |
2104.00742 | Yunye Gong | Yunye Gong, Xiao Lin, Yi Yao, Thomas G. Dietterich, Ajay Divakaran,
Melinda Gervasio | Confidence Calibration for Domain Generalization under Covariate Shift | null | Proceedings of the IEEE/CVF International Conference on Computer
Vision (ICCV), 2021, pp. 8958-8967 | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing calibration algorithms address the problem of covariate shift via
unsupervised domain adaptation. However, these methods suffer from the
following limitations: 1) they require unlabeled data from the target domain,
which may not be available at the stage of calibration in real-world
applications and 2) their performance depends heavily on the disparity between
the distributions of the source and target domains. To address these two
limitations, we present novel calibration solutions via domain generalization.
Our core idea is to leverage multiple calibration domains to reduce the
effective distribution disparity between the target and calibration domains for
improved calibration transfer without needing any data from the target domain.
We provide theoretical justification and empirical experimental results to
demonstrate the effectiveness of our proposed algorithms. Compared against
state-of-the-art calibration methods designed for domain adaptation, we observe
a decrease of 8.86 percentage points in expected calibration error or,
equivalently, an increase of 35 percentage points in improvement ratio for
multi-class classification on the Office-Home dataset.
| [
{
"created": "Thu, 1 Apr 2021 19:31:54 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Aug 2021 20:22:14 GMT",
"version": "v2"
}
] | 2021-10-19 | [
[
"Gong",
"Yunye",
""
],
[
"Lin",
"Xiao",
""
],
[
"Yao",
"Yi",
""
],
[
"Dietterich",
"Thomas G.",
""
],
[
"Divakaran",
"Ajay",
""
],
[
"Gervasio",
"Melinda",
""
]
] |
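For reference on the record above, the standard expected-calibration-error metric it reports, in plain NumPy form; the binning choice is an assumption, and the paper's calibration-domain procedure itself is not reproduced.

    import numpy as np

    def expected_calibration_error(confidences, predictions, labels, n_bins=15):
        # Confidence-weighted gap between accuracy and mean confidence,
        # computed inside equal-width confidence bins.
        conf = np.asarray(confidences, float)
        correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (conf > lo) & (conf <= hi)
            if in_bin.any():
                ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
        return ece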
2104.00769 | Axel Berg | Axel Berg, Mark O'Connor, Miguel Tairum Cruz | Keyword Transformer: A Self-Attention Model for Keyword Spotting | Proceedings of INTERSPEECH | Proc. Interspeech 2021, 4249-4253 | 10.21437/Interspeech.2021-1286 | null | eess.AS cs.CL cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Transformer architecture has been successful across many domains,
including natural language processing, computer vision and speech recognition.
In keyword spotting, self-attention has primarily been used on top of
convolutional or recurrent encoders. We investigate a range of ways to adapt
the Transformer architecture to keyword spotting and introduce the Keyword
Transformer (KWT), a fully self-attentional architecture that exceeds
state-of-the-art performance across multiple tasks without any pre-training or
additional data. Surprisingly, this simple architecture outperforms more
complex models that mix convolutional, recurrent and attentive layers. KWT can
be used as a drop-in replacement for these models, setting two new benchmark
records on the Google Speech Commands dataset with 98.6% and 97.7% accuracy on
the 12 and 35-command tasks respectively.
| [
{
"created": "Thu, 1 Apr 2021 21:15:30 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Apr 2021 14:28:41 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Jun 2021 13:06:01 GMT",
"version": "v3"
}
] | 2022-04-11 | [
[
"Berg",
"Axel",
""
],
[
"O'Connor",
"Mark",
""
],
[
"Cruz",
"Miguel Tairum",
""
]
] |
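A toy sketch of the fully self-attentional keyword-spotting recipe in the record above: spectrogram frames as tokens plus a class token feeding a Transformer encoder. The model sizes are assumptions and positional embeddings are omitted for brevity.

    import torch
    import torch.nn as nn

    class TinyKeywordTransformer(nn.Module):
        # Each spectrogram time frame becomes a token; a class token is
        # prepended and its encoded state is mapped to command logits.
        def __init__(self, n_mels=40, d_model=192, n_heads=3, n_layers=2, n_classes=12):
            super().__init__()
            self.embed = nn.Linear(n_mels, d_model)
            self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, spec):                       # spec: (batch, time, n_mels)
            tok = self.embed(spec)
            tok = torch.cat([self.cls.expand(tok.size(0), -1, -1), tok], dim=1)
            tok = self.encoder(tok.transpose(0, 1))    # (seq, batch, d_model)
            return self.head(tok[0])                   # read out the class token

    logits = TinyKeywordTransformer()(torch.randn(8, 98, 40))   # -> (8, 12)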
2104.00842 | Aviral Joshi | A Vinay, Aviral Joshi, Hardik Mahipal Surana, Harsh Garg, K N
BalasubramanyaMurthy, S Natarajan | Unconstrained Face Recognition using ASURF and Cloud-Forest Classifier
optimized with VLAD | 8 Pages, 3 Figures | Procedia computer science, 143, 570-578 (2018) | 10.1016/j.procs.2018.10.433 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The paper posits a computationally-efficient algorithm for multi-class facial
image classification in which images are constrained with translation,
rotation, scale, color, illumination and affine distortion. The proposed method
is divided into five main building blocks including Haar-Cascade for face
detection, Bilateral Filter for image preprocessing to remove unwanted noise,
Affine Speeded-Up Robust Features (ASURF) for keypoint detection and
description, Vector of Locally Aggregated Descriptors (VLAD) for feature
quantization and Cloud Forest for image classification. The proposed method
aims at improving the accuracy and the time taken for face recognition systems.
The usage of the Cloud Forest algorithm as a classifier on three benchmark
datasets, namely the FACES95, FACES96 and ORL facial datasets, showed promising
results. The proposed methodology using the Cloud Forest algorithm successfully
improves the recognition model by 2-12\% when compared against other
ensemble techniques like the Random Forest classifier, depending upon the
dataset used.
| [
{
"created": "Fri, 2 Apr 2021 01:26:26 GMT",
"version": "v1"
}
] | 2021-04-05 | [
[
"Vinay",
"A",
""
],
[
"Joshi",
"Aviral",
""
],
[
"Surana",
"Hardik Mahipal",
""
],
[
"Garg",
"Harsh",
""
],
[
"BalasubramanyaMurthy",
"K N",
""
],
[
"Natarajan",
"S",
""
]
] |
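A short sketch of the VLAD feature-quantization step mentioned in the record above; the signed-square-root normalisation and descriptor dimensionality are assumptions, and the ASURF extraction and Cloud Forest classifier are not reproduced.

    import numpy as np

    def vlad(descriptors, centroids):
        # Sum of residuals of each local descriptor to its nearest visual word,
        # followed by signed square-root and global L2 normalisation.
        d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        v = np.zeros_like(centroids)
        for k in range(len(centroids)):
            members = descriptors[assign == k]
            if len(members):
                v[k] = (members - centroids[k]).sum(axis=0)
        v = np.sign(v) * np.sqrt(np.abs(v))
        flat = v.ravel()
        return flat / (np.linalg.norm(flat) + 1e-12)

    # descriptors: (n, 64) ASURF-like features of one face; centroids: (K, 64)
    # k-means visual words learned on the training descriptors.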
2104.00912 | Frederic Le Mouel | Michael Puentes (UIS), Diana Novoa, John Delgado Nivia (UTS), Carlos
Barrios Hern\'andez (UIS), Oscar Carrillo (DYNAMID, CPE), Fr\'ed\'eric Le
Mou\"el (DYNAMID) | Datacentric analysis to reduce pedestrians accidents: A case study in
Colombia | null | International Conference on Sustainable Smart Cities and
Territories (SSCt2021), Apr 2021, Doha, Qatar | null | null | cs.AI cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since 2012, in a case-study in Bucaramanga-Colombia, 179 pedestrians died in
car accidents, and another 2873 pedestrians were injured. Each day, at least
one passerby is involved in a tragedy. Knowing the causes is crucial to
decreasing accidents, and using system dynamics to reproduce collision events is
critical to preventing further accidents. This work implements simulations to
save lives by reducing the city's accident rate and suggesting new safety
policies to implement. The simulation's inputs are video recordings from some
areas of the city. Deep Learning analysis of the images results in the
segmentation of the different objects in the scene, and an interaction model
identifies the primary factors that drive pedestrian and vehicle behaviours. The
first and most efficient safety policy to implement, as validated by our
simulations, would be to build speed bumps at specific places before the
crossings, reducing the
accident rate by 80%.
| [
{
"created": "Fri, 2 Apr 2021 06:59:50 GMT",
"version": "v1"
}
] | 2021-04-05 | [
[
"Puentes",
"Michael",
"",
"UIS"
],
[
"Novoa",
"Diana",
"",
"UTS"
],
[
"Nivia",
"John Delgado",
"",
"UTS"
],
[
"Hernández",
"Carlos Barrios",
"",
"UIS"
],
[
"Carrillo",
"Oscar",
"",
"DYNAMID, CPE"
],
[
"Mouël",
"Frédéric Le",
"",
"DYNAMID"
]
] |
2104.00925 | Dmitry V. Dylov | Iaroslav Bespalov, Nazar Buzun, Oleg Kachan and Dmitry V. Dylov | Landmarks Augmentation with Manifold-Barycentric Oversampling | 11 pages, 4 figures, 3 tables. I.B. and N.B. contributed equally.
D.V.D. is the corresponding author | IEEE Access 2022 | 10.1109/ACCESS.2022.3219934 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The training of Generative Adversarial Networks (GANs) requires a large
amount of data, stimulating the development of new augmentation methods to
alleviate the challenge. Oftentimes, these methods either fail to produce
enough new data or expand the dataset beyond the original manifold. In this
paper, we propose a new augmentation method that guarantees to keep the new
data within the original data manifold thanks to the optimal transport theory.
The proposed algorithm finds cliques in the nearest-neighbors graph and, at
each sampling iteration, randomly draws one clique to compute the Wasserstein
barycenter with random uniform weights. These barycenters then become the new
natural-looking elements that one could add to the dataset. We apply this
approach to the problem of landmarks detection and augment the available
annotation in both unpaired and in semi-supervised scenarios. Additionally, the
idea is validated on cardiac data for the task of medical segmentation. Our
approach reduces the overfitting and improves the quality metrics beyond the
original data outcome and beyond the result obtained with popular modern
augmentation methods.
| [
{
"created": "Fri, 2 Apr 2021 08:07:21 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Dec 2021 14:03:35 GMT",
"version": "v2"
}
] | 2022-11-15 | [
[
"Bespalov",
"Iaroslav",
""
],
[
"Buzun",
"Nazar",
""
],
[
"Kachan",
"Oleg",
""
],
[
"Dylov",
"Dmitry V.",
""
]
] |
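A simplified sketch of the clique-based oversampling in the record above: mutual nearest neighbours form a graph, random triangles (3-cliques) are drawn, and their members are averaged with random Dirichlet weights. The plain weighted mean used here stands in for the Wasserstein barycenter actually computed in the paper.

    import numpy as np

    def triangle_barycenters(landmarks, k=5, n_new=100, seed=0):
        # landmarks: (N, P, 2) array of P-point annotations.
        rng = np.random.default_rng(seed)
        flat = landmarks.reshape(len(landmarks), -1)
        d = ((flat[:, None] - flat[None, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)
        nbrs = np.argsort(d, axis=1)[:, :k]
        adj = np.zeros(d.shape, dtype=bool)
        for i, row in enumerate(nbrs):
            adj[i, row] = True
        adj &= adj.T                              # keep mutual nearest neighbours
        n = len(flat)
        triangles = [(i, j, m) for i in range(n) for j in range(i + 1, n) if adj[i, j]
                     for m in range(j + 1, n) if adj[i, m] and adj[j, m]]
        if not triangles:
            return np.empty((0,) + landmarks.shape[1:])
        out = []
        for _ in range(n_new):
            i, j, m = triangles[rng.integers(len(triangles))]
            w = rng.dirichlet(np.ones(3))          # random uniform clique weights
            out.append(w[0] * landmarks[i] + w[1] * landmarks[j] + w[2] * landmarks[m])
        return np.stack(out)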
2104.00948 | Angelo Salatino | Angelo A. Salatino, Francesco Osborne, Thiviyan Thanapalasingam,
Enrico Motta | The CSO Classifier: Ontology-Driven Detection of Research Topics in
Scholarly Articles | Conference paper at TPDL 2019 | In Digital Libraries for Open Knowledge. LNCS, vol 11799.
Springer, Cham (2019) | 10.1007/978-3-030-30760-8_26 | null | cs.IR cs.AI cs.DL | http://creativecommons.org/licenses/by/4.0/ | Classifying research papers according to their research topics is an
important task to improve their retrievability, assist the creation of smart
analytics, and support a variety of approaches for analysing and making sense
of the research environment. In this paper, we present the CSO Classifier, a
new unsupervised approach for automatically classifying research papers
according to the Computer Science Ontology (CSO), a comprehensive ontology of
re-search areas in the field of Computer Science. The CSO Classifier takes as
input the metadata associated with a research paper (title, abstract, keywords)
and returns a selection of research concepts drawn from the ontology. The
approach was evaluated on a gold standard of manually annotated articles
yielding a significant improvement over alternative methods.
| [
{
"created": "Fri, 2 Apr 2021 09:02:32 GMT",
"version": "v1"
}
] | 2021-04-05 | [
[
"Salatino",
"Angelo A.",
""
],
[
"Osborne",
"Francesco",
""
],
[
"Thanapalasingam",
"Thiviyan",
""
],
[
"Motta",
"Enrico",
""
]
] |
2104.00975 | Gabriella Tognola | Emma Chiaramello, Francesco Pinciroli, Alberico Bonalumi, Angelo
Caroli, Gabriella Tognola | Use of 'off-the-shelf' information extraction algorithms in clinical
informatics: a feasibility study of MetaMap annotation of Italian medical
notes | This paper has been published in the Journal of biomedical
informatics, Volume 63, October 2016, Pages 22-32 | Journal of biomedical informatics, Volume 63, October 2016, Pages
22-32 | 10.1016/j.jbi.2016.07.017 | null | cs.CL cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Information extraction from narrative clinical notes is useful for patient
care, as well as for secondary use of medical data, for research or clinical
purposes. Many studies focused on information extraction from English clinical
texts, but less dealt with clinical notes in languages other than English. This
study tested the feasibility of using 'off the shelf' information extraction
algorithms to identify medical concepts from Italian clinical notes. We used
MetaMap to map medical concepts to the Unified Medical Language System (UMLS).
The study addressed two questions: (Q1) to understand if it would be possible
to properly map medical terms found in clinical notes and related to the
semantic group of 'Disorders' to the Italian UMLS resources; (Q2) to
investigate if it would be feasible to use MetaMap as it is to extract these
medical concepts from Italian clinical notes. Results in EXP1 showed that the
Italian UMLS Metathesaurus sources covered 91% of the medical terms of the
'Disorders' semantic group, as found in the studied dataset. Even if MetaMap
was built to analyze texts written in English, it worked properly also with
texts written in Italian. MetaMap identified correctly about half of the
concepts in the Italian clinical notes. Using MetaMap's annotation on Italian
clinical notes instead of a simple text search improved our results of about 15
percentage points. MetaMap showed recall, precision and F-measure of 0.53, 0.98
and 0.69, respectively. Most of the failures were due to the impossibility for
MetaMap to generate meaningful Italian variants. MetaMap's performance in
annotating automatically translated English clinical notes was in line with
findings in the literature, with similar recall (0.75), F-measure (0.83) and
even higher precision (0.95).
| [
{
"created": "Fri, 2 Apr 2021 10:28:50 GMT",
"version": "v1"
}
] | 2021-04-05 | [
[
"Chiaramello",
"Emma",
""
],
[
"Pinciroli",
"Francesco",
""
],
[
"Bonalumi",
"Alberico",
""
],
[
"Caroli",
"Angelo",
""
],
[
"Tognola",
"Gabriella",
""
]
] |
2104.01008 | Hugo Cisneros | Hugo Cisneros, Josef Sivic, Tomas Mikolov | Visualizing computation in large-scale cellular automata | null | Artificial Life Conference Proceedings 2020 (pp. 239-247). MIT
Press | 10.1162/isal_a_00277 | null | nlin.CG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emergent processes in complex systems such as cellular automata can perform
computations of increasing complexity, and could possibly lead to artificial
evolution. Such a feat would require scaling up current simulation sizes to
allow for enough computational capacity. Understanding complex computations
happening in cellular automata and other systems capable of emergence poses
many challenges, especially in large-scale systems. We propose methods for
coarse-graining cellular automata based on frequency analysis of cell states,
clustering and autoencoders. These innovative techniques facilitate the
discovery of large-scale structure formation and complexity analysis in those
systems. They emphasize interesting behaviors in elementary cellular automata
while filtering out background patterns. Moreover, our methods reduce large 2D
automata to smaller sizes and enable identifying systems that behave
interestingly at multiple scales.
| [
{
"created": "Thu, 1 Apr 2021 08:14:15 GMT",
"version": "v1"
}
] | 2021-04-05 | [
[
"Cisneros",
"Hugo",
""
],
[
"Sivic",
"Josef",
""
],
[
"Mikolov",
"Tomas",
""
]
] |
2104.01103 | Octave Mariotti | Octave Mariotti, Hakan Bilen | Semi-supervised Viewpoint Estimation with Geometry-aware Conditional
Generation | null | ECCV 2020: Computer Vision - ECCV 2020 Workshops pp 631-647 | 10.1007/978-3-030-66096-3_42 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | There is a growing interest in developing computer vision methods that can
learn from limited supervision. In this paper, we consider the problem of
learning to predict camera viewpoints, where obtaining ground-truth annotations
are expensive and require special equipment, from a limited number of labeled
images. We propose a semi-supervised viewpoint estimation method that can learn
to infer viewpoint information from unlabeled image pairs, where two images
differ by a viewpoint change. In particular our method learns to synthesize the
second image by combining the appearance from the first one and viewpoint from
the second one. We demonstrate that our method significantly improves the
supervised techniques, especially in the low-label regime and outperforms the
state-of-the-art semi-supervised methods.
| [
{
"created": "Fri, 2 Apr 2021 15:55:27 GMT",
"version": "v1"
}
] | 2021-04-05 | [
[
"Mariotti",
"Octave",
""
],
[
"Bilen",
"Hakan",
""
]
] |
2104.01106 | Vlad Atanasiu | Vlad Atanasiu, Isabelle Marthot-Santaniello | Personalizing image enhancement for critical visual tasks: improved
legibility of papyri using color processing and visual illusions | Article accepted for publication by the International Journal on
Document Analysis and Recognition (IJDAR) on 2021.08.27. Open Source software
accessible at https://hierax.ch. Comments to version 2: Extendend Sections
3.2 Machine learning, 5.3.5 Comparisons and 6 Paradim; added supplemental
material; other improvements throughout the article | nternational Journal on Document Analysis and Recognition (IJDAR)
(2021) | 10.1007/s10032-021-00386-0 | null | cs.CV cs.DL cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Purpose: This article develops theoretical, algorithmic, perceptual, and
interaction aspects of script legibility enhancement in the visible light
spectrum for the purpose of scholarly editing of papyri texts. - Methods: Novel
legibility enhancement algorithms based on color processing and visual
illusions are compared to classic methods in a user experience experiment. -
Results: (1) The proposed methods outperformed the comparison methods. (2)
Users exhibited a broad behavioral spectrum, under the influence of factors
such as personality and social conditioning, tasks and application domains,
expertise level and image quality, and affordances of software, hardware, and
interfaces. No single enhancement method satisfied all factor configurations.
Therefore, it is suggested to offer users a broad choice of methods to
facilitate personalization, contextualization, and complementarity. (3) A
distinction is made between casual and critical vision on the basis of signal
ambiguity and error consequences. The criteria of a paradigm for enhancing
images for critical applications comprise: interpreting images skeptically;
approaching enhancement as a system problem; considering all image structures
as potential information; and making uncertainty and alternative
interpretations explicit, both visually and numerically.
| [
{
"created": "Thu, 11 Mar 2021 23:48:17 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Aug 2021 21:28:00 GMT",
"version": "v2"
}
] | 2022-02-21 | [
[
"Atanasiu",
"Vlad",
""
],
[
"Marthot-Santaniello",
"Isabelle",
""
]
] |
2104.01111 | Xiaojun Chang | Xiaojun Chang, Pengzhen Ren, Pengfei Xu, Zhihui Li, Xiaojiang Chen,
and Alex Hauptmann | A Comprehensive Survey of Scene Graphs: Generation and Application | 25 pages | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2021 | 10.1109/TPAMI.2021.3137605 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Scene graph is a structured representation of a scene that can clearly
express the objects, attributes, and relationships between objects in the
scene. As computer vision technology continues to develop, people are no longer
satisfied with simply detecting and recognizing objects in images; instead,
people look forward to a higher level of understanding and reasoning about
visual scenes. For example, given an image, we want to not only detect and
recognize objects in the image, but also know the relationship between objects
(visual relationship detection), and generate a text description (image
captioning) based on the image content. Alternatively, we might want the
machine to tell us what the little girl in the image is doing (Visual Question
Answering (VQA)), or even remove the dog from the image and find similar images
(image editing and retrieval), etc. These tasks require a higher level of
understanding and reasoning for image vision tasks. The scene graph is just
such a powerful tool for scene understanding. Therefore, scene graphs have
attracted the attention of a large number of researchers, and related research
is often cross-modal, complex, and rapidly developing. However, no relatively
systematic survey of scene graphs exists at present. To this end, this survey
conducts a comprehensive investigation of the current scene graph research.
More specifically, we first summarized the general definition of the scene
graph, then conducted a comprehensive and systematic discussion on the
generation method of the scene graph (SGG) and the SGG with the aid of prior
knowledge. We then investigated the main applications of scene graphs and
summarized the most commonly used datasets. Finally, we provide some insights
into the future development of scene graphs. We believe this will be a very
helpful foundation for future research on scene graphs.
| [
{
"created": "Wed, 17 Mar 2021 04:24:20 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Sep 2021 04:07:08 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Oct 2021 23:27:54 GMT",
"version": "v3"
},
{
"created": "Tue, 21 Dec 2021 02:24:22 GMT",
"version": "v4"
},
{
"created": "Fri, 7 Jan 2022 01:35:21 GMT",
"version": "v5"
}
] | 2022-01-10 | [
[
"Chang",
"Xiaojun",
""
],
[
"Ren",
"Pengzhen",
""
],
[
"Xu",
"Pengfei",
""
],
[
"Li",
"Zhihui",
""
],
[
"Chen",
"Xiaojiang",
""
],
[
"Hauptmann",
"Alex",
""
]
] |
2104.01193 | Ana Ozaki | Ana Ozaki | Learning Description Logic Ontologies. Five Approaches. Where Do They
Stand? | null | KI Kunstliche Intelligenz (2020) 34 317-327 | 10.1007/s13218-020-00656-9 | null | cs.AI cs.LG cs.LO | http://creativecommons.org/licenses/by/4.0/ | The quest for acquiring a formal representation of the knowledge of a domain
of interest has attracted researchers with various backgrounds into a diverse
field called ontology learning. We highlight classical machine learning and
data mining approaches that have been proposed for (semi-)automating the
creation of description logic (DL) ontologies. These are based on association
rule mining, formal concept analysis, inductive logic programming,
computational learning theory, and neural networks. We provide an overview of
each approach and how it has been adapted for dealing with DL ontologies.
Finally, we discuss the benefits and limitations of each of them for learning
DL ontologies.
| [
{
"created": "Fri, 2 Apr 2021 18:36:45 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Ozaki",
"Ana",
""
]
] |
2104.01215 | Lynnette Hui Xian Ng | Lynnette Hui Xian Ng and Kathleen M. Carley | The Coronavirus is a Bioweapon: Analysing Coronavirus Fact-Checked
Stories | null | SBP-Brims 2020 COVID Special Track | null | null | cs.SI cs.CL | http://creativecommons.org/licenses/by/4.0/ | The 2020 coronavirus pandemic has heightened the need to flag
coronavirus-related misinformation, and fact-checking groups have taken to
verifying misinformation on the Internet. We explore stories reported by
fact-checking groups PolitiFact, Poynter and Snopes from January to June 2020,
characterising them into six story clusters, then analysing time-series and
story validity trends and the level of agreement across sites. We further break
down the story clusters into more granular story types by proposing a unique
automated method with a BERT classifier, which can be used to classify diverse
story sources, in both fact-checked stories and tweets.
| [
{
"created": "Fri, 2 Apr 2021 19:27:53 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Ng",
"Lynnette Hui Xian",
""
],
[
"Carley",
"Kathleen M.",
""
]
] |
2104.01271 | C.-H. Huck Yang | Chao-Han Huck Yang, Sabato Marco Siniscalchi, Chin-Hui Lee | PATE-AAE: Incorporating Adversarial Autoencoder into Private Aggregation
of Teacher Ensembles for Spoken Command Classification | Accepted to Interspeech 2021 | Proc. Interspeech 2021 | 10.21437/Interspeech.2021-640 | null | cs.SD cs.AI cs.LG cs.NE eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose using an adversarial autoencoder (AAE) to replace generative
adversarial network (GAN) in the private aggregation of teacher ensembles
(PATE), a solution for ensuring differential privacy in speech applications.
The AAE architecture allows us to obtain good synthetic speech leveraging upon
a discriminative training of latent vectors. Such synthetic speech is used to
build a privacy-preserving classifier when non-sensitive data is not
sufficiently available in the public domain. This classifier follows the PATE
scheme that uses an ensemble of noisy outputs to label the synthetic samples
and guarantee $\varepsilon$-differential privacy (DP) on its derived
classifiers. Our proposed framework thus consists of an AAE-based generator and
a PATE-based classifier (PATE-AAE). Evaluated on the Google Speech Commands
Dataset Version II, the proposed PATE-AAE improves the average classification
accuracy by +$2.11\%$ and +$6.60\%$, respectively, when compared with
alternative privacy-preserving solutions, namely PATE-GAN and DP-GAN, while
maintaining a strong privacy target of $\varepsilon$=0.01 with a fixed
$\delta$=10$^{-5}$.
| [
{
"created": "Fri, 2 Apr 2021 23:10:57 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Jun 2021 06:09:42 GMT",
"version": "v2"
}
] | 2021-10-11 | [
[
"Yang",
"Chao-Han Huck",
""
],
[
"Siniscalchi",
"Sabato Marco",
""
],
[
"Lee",
"Chin-Hui",
""
]
] |
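Editorial note: the PATE scheme referenced in the record above labels synthetic samples by aggregating noisy teacher votes. The sketch below is a minimal version of that noisy-max aggregation step only; the teacher votes, noise scale, and class count are illustrative assumptions, not values from the paper.

    # PATE-style noisy aggregation: each teacher votes for a class, Laplace
    # noise is added to the per-class vote counts, and the noisy argmax
    # becomes the privacy-preserving label for a synthetic sample.
    import numpy as np

    def pate_noisy_label(teacher_votes, num_classes, noise_scale, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
        counts += rng.laplace(loc=0.0, scale=noise_scale, size=num_classes)
        return int(np.argmax(counts))

    # Hypothetical example: 10 teachers vote over 3 spoken-command classes.
    votes = np.array([0, 0, 1, 0, 2, 0, 0, 1, 0, 0])
    label = pate_noisy_label(votes, num_classes=3, noise_scale=2.0,
                             rng=np.random.default_rng(0))
    print(label)

The noise scale trades label accuracy against the strength of the differential-privacy guarantee.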
2104.01290 | Jonathan Dunn | Jonathan Dunn and Tom Coupe and Benjamin Adams | Measuring Linguistic Diversity During COVID-19 | null | Proceedings of the 4th Workshop on NLP and Computational Social
Science (2020) | 10.18653/v1/P17 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Computational measures of linguistic diversity help us understand the
linguistic landscape using digital language data. The contribution of this
paper is to calibrate measures of linguistic diversity using restrictions on
international travel resulting from the COVID-19 pandemic. Previous work has
mapped the distribution of languages using geo-referenced social media and web
data. The goal, however, has been to describe these corpora themselves rather
than to make inferences about underlying populations. This paper shows that a
difference-in-differences method based on the Herfindahl-Hirschman Index can
identify the bias in digital corpora that is introduced by non-local
populations. These methods tell us where significant changes have taken place
and whether this leads to increased or decreased diversity. This is an
important step in aligning digital corpora like social media with the
real-world populations that have produced them.
| [
{
"created": "Sat, 3 Apr 2021 02:09:37 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Dunn",
"Jonathan",
""
],
[
"Coupe",
"Tom",
""
],
[
"Adams",
"Benjamin",
""
]
] |
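Editorial note: the record above rests on the Herfindahl-Hirschman Index (HHI), the sum of squared proportions, combined with a difference-in-differences contrast between periods. The sketch below shows both calculations on made-up per-language counts; the numbers and region names are purely illustrative.

    # Herfindahl-Hirschman Index over language shares, plus a simple
    # difference-in-differences contrast between two periods and regions.
    from collections import Counter

    def hhi(counts):
        total = sum(counts.values())
        return sum((c / total) ** 2 for c in counts.values())

    # Hypothetical post counts per language, before and during travel restrictions.
    region_before = Counter({"en": 800, "mi": 120, "zh": 80})
    region_during = Counter({"en": 850, "mi": 130, "zh": 20})
    baseline_before = Counter({"en": 900, "mi": 60, "zh": 40})
    baseline_during = Counter({"en": 910, "mi": 55, "zh": 35})

    # Difference-in-differences: change in HHI for the region of interest
    # minus the change in a comparison (baseline) region.
    did = (hhi(region_during) - hhi(region_before)) - \
          (hhi(baseline_during) - hhi(baseline_before))
    print(round(did, 4))

A positive estimate indicates that concentration (lower diversity) rose more in the region of interest than in the baseline over the same period.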
2104.01294 | Jonathan Dunn | Jonathan Dunn | Representations of Language Varieties Are Reliable Given Corpus
Similarity Measures | null | Proceedings of the Eighth Workshop on NLP for Similar Languages,
Varieties, and Dialects (2021) | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper measures similarity both within and between 84 language varieties
across nine languages. These corpora are drawn from digital sources (the web
and tweets), allowing us to evaluate whether such geo-referenced corpora are
reliable for modelling linguistic variation. The basic idea is that, if each
source adequately represents a single underlying language variety, then the
similarity between these sources should be stable across all languages and
countries. The paper shows that there is a consistent agreement between these
sources using frequency-based corpus similarity measures. This provides further
evidence that digital geo-referenced corpora consistently represent local
language varieties.
| [
{
"created": "Sat, 3 Apr 2021 02:19:46 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Dunn",
"Jonathan",
""
]
] |
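Editorial note: the record above evaluates geo-referenced corpora with frequency-based corpus similarity measures. One common measure of this kind, not necessarily the exact one used in the paper, is the Spearman correlation between the frequencies of the most frequent shared words in two corpora:

    # Frequency-based corpus similarity: Spearman correlation between the
    # frequencies of the most frequent shared words in two corpora.
    from collections import Counter
    from scipy.stats import spearmanr

    def corpus_similarity(tokens_a, tokens_b, top_n=100):
        freq_a, freq_b = Counter(tokens_a), Counter(tokens_b)
        # Use the top_n most frequent words of corpus A that also occur in B.
        vocab = [w for w, _ in freq_a.most_common(top_n) if w in freq_b]
        rho, _ = spearmanr([freq_a[w] for w in vocab],
                           [freq_b[w] for w in vocab])
        return rho

    # Tiny hypothetical corpora (in practice these would be web or tweet samples).
    a = "the cat sat on the mat the dog ran".split()
    b = "the dog sat on the mat the cat slept".split()
    print(corpus_similarity(a, b, top_n=5))

Values near 1 indicate that the two samples rank their common vocabulary similarly, which is the kind of stability the paper uses as evidence of reliability.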
2104.01297 | Jonathan Dunn | Jonathan Dunn | Multi-Unit Directional Measures of Association: Moving Beyond Pairs of
Words | null | International Journal of Corpus Linguistics (2018) | 10.1075/ijcl.16098.dun | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper formulates and evaluates a series of multi-unit measures of
directional association, building on the pairwise {\Delta}P measure, that are
able to quantify association in sequences of varying length and type of
representation. Multi-unit measures face an additional segmentation problem:
once the implicit length constraint of pairwise measures is abandoned,
association measures must also identify the borders of meaningful sequences.
This paper takes a vector-based approach to the segmentation problem by using
18 unique measures to describe different aspects of multi-unit association. An
examination of these measures across eight languages shows that they are stable
across languages and that each provides a unique rank of associated sequences.
Taken together, these measures expand corpus-based approaches to association by
generalizing across varying lengths and types of representation.
| [
{
"created": "Sat, 3 Apr 2021 02:43:24 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Dunn",
"Jonathan",
""
]
] |
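Editorial note: the record above builds on the pairwise {\Delta}P measure of directional association. As a reference point, pairwise {\Delta}P for a cue-outcome pair is P(outcome | cue) - P(outcome | no cue); the sketch below computes it from a 2x2 contingency table with made-up counts (the multi-unit extensions in the paper go beyond this).

    # Pairwise directional association:
    #   Delta-P(outcome | cue) = P(outcome | cue present) - P(outcome | cue absent)
    def delta_p(a, b, c, d):
        # a: cue & outcome co-occur, b: cue without outcome,
        # c: outcome without cue,   d: neither occurs.
        return a / (a + b) - c / (c + d)

    # Hypothetical counts for the word pair ("strong", "tea") in a corpus.
    print(delta_p(a=30, b=70, c=20, d=880))   # association from "strong" to "tea"
    print(delta_p(a=30, b=20, c=70, d=880))   # reverse direction: swap the off-diagonal counts

The asymmetry between the two calls is what makes the measure directional.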
2104.01299 | Jonathan Dunn | Jonathan Dunn | Finding Variants for Construction-Based Dialectometry: A Corpus-Based
Approach to Regional CxGs | null | Cognitive Linguistics (2018) | 10.1515/cog-2017-0029 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper develops a construction-based dialectometry capable of identifying
previously unknown constructions and measuring the degree to which a given
construction is subject to regional variation. The central idea is to learn a
grammar of constructions (a CxG) using construction grammar induction and then
to use these constructions as features for dialectometry. This offers a method
for measuring the aggregate similarity between regional CxGs without limiting
in advance the set of constructions subject to variation. The learned CxG is
evaluated on how well it describes held-out test corpora while dialectometry is
evaluated on how well it can model regional varieties of English. The method is
tested using two distinct datasets: First, the International Corpus of English
representing eight outer circle varieties; Second, a web-crawled corpus
representing five inner circle varieties. Results show that the method (1)
produces a grammar with stable quality across sub-sets of a single corpus that
is (2) capable of distinguishing between regional varieties of English with a
high degree of accuracy, thus (3) supporting dialectometric methods for measuring
the similarity between varieties of English and (4) measuring the degree to
which each construction is subject to regional variation. This is important for
cognitive sociolinguistics because it operationalizes the idea that competition
between constructions is organized at the functional level so that
dialectometry needs to represent as much of the available functional space as
possible.
| [
{
"created": "Sat, 3 Apr 2021 02:52:14 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Dunn",
"Jonathan",
""
]
] |
2104.01306 | Jonathan Dunn | Jonathan Dunn | Global Syntactic Variation in Seven Languages: Towards a Computational
Dialectology | null | Frontiers in Artificial Intelligence: Language and Computation
(2019) | 10.3389/frai.2019.00015 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The goal of this paper is to provide a complete representation of regional
linguistic variation on a global scale. To this end, the paper focuses on
removing three constraints that have previously limited work within
dialectology/dialectometry. First, rather than assuming a fixed and incomplete
set of variants, we use Computational Construction Grammar to provide a
replicable and falsifiable set of syntactic features. Second, rather than
assuming a specific area of interest, we use global language mapping based on
web-crawled and social media datasets to determine the selection of national
varieties. Third, rather than looking at a single language in isolation, we
model seven major languages together using the same methods: Arabic, English,
French, German, Portuguese, Russian, and Spanish. Results show that models for
each language are able to robustly predict the region-of-origin of held-out
samples better using Construction Grammars than using simpler syntactic
features. These global-scale experiments are used to argue that new methods in
computational sociolinguistics are able to provide more generalized models of
regional variation that are essential for understanding language variation and
change at scale.
| [
{
"created": "Sat, 3 Apr 2021 03:40:21 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Dunn",
"Jonathan",
""
]
] |
2104.01328 | Niko S\"underhauf | Dimity Miller, Niko S\"underhauf, Michael Milford and Feras Dayoub | Uncertainty for Identifying Open-Set Errors in Visual Object Detection | null | IEEE Robotics and Automation Letters (January 2022), Volume 7,
Issue 1, pages 215-222, ISSN 2377-3766 | 10.1109/LRA.2021.3123374 | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deployed into an open world, object detectors are prone to open-set errors,
false positive detections of object classes not present in the training
dataset. We propose GMM-Det, a real-time method for extracting epistemic
uncertainty from object detectors to identify and reject open-set errors.
GMM-Det trains the detector to produce a structured logit space that is
modelled with class-specific Gaussian Mixture Models. At test time, open-set
errors are identified by their low log-probability under all Gaussian Mixture
Models. We test two common detector architectures, Faster R-CNN and RetinaNet,
across three varied datasets spanning robotics and computer vision. Our results
show that GMM-Det consistently outperforms existing uncertainty techniques for
identifying and rejecting open-set detections, especially at the low-error-rate
operating point required for safety-critical applications. GMM-Det maintains
object detection performance, and introduces only minimal computational
overhead. We also introduce a methodology for converting existing object
detection datasets into specific open-set datasets to evaluate open-set
performance in object detection.
| [
{
"created": "Sat, 3 Apr 2021 07:12:31 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Nov 2021 04:18:05 GMT",
"version": "v2"
}
] | 2021-11-15 | [
[
"Miller",
"Dimity",
""
],
[
"Sünderhauf",
"Niko",
""
],
[
"Milford",
"Michael",
""
],
[
"Dayoub",
"Feras",
""
]
] |
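Editorial note: the record above identifies open-set errors by fitting class-specific Gaussian Mixture Models to detector logits and rejecting detections whose log-likelihood is low under every class model. The sketch below shows that rejection rule with scikit-learn on synthetic logit vectors; the dimensionality, component count, threshold, and data are illustrative assumptions, not details from the paper.

    # Class-wise GMMs over logit vectors; a test logit is flagged as a
    # possible open-set error if its best log-likelihood across all
    # class models falls below a threshold.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    num_classes, dim = 3, 8

    # Hypothetical training logits per known class (would come from the detector).
    train_logits = {c: rng.normal(loc=3.0 * c, scale=0.5, size=(200, dim))
                    for c in range(num_classes)}

    # Fit one GMM per class on that class's logits.
    gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(x)
            for c, x in train_logits.items()}

    def is_open_set(logit, threshold):
        # score_samples returns the per-sample log-likelihood under the mixture.
        best = max(g.score_samples(logit[None, :])[0] for g in gmms.values())
        return best < threshold

    known = rng.normal(loc=3.0, scale=0.5, size=dim)      # resembles a known class
    unknown = rng.normal(loc=-20.0, scale=0.5, size=dim)  # unlike any known class
    print(is_open_set(known, threshold=-50.0), is_open_set(unknown, threshold=-50.0))

In practice the threshold would be chosen on held-out data to hit the low error rate required for safety-critical operation.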