id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2103.07769 | Preslav Nakov | Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer
Elsayed, Alberto Barr\'on-Cede\~no, Paolo Papotti, Shaden Shaar, Giovanni Da
San Martino | Automated Fact-Checking for Assisting Human Fact-Checkers | fact-checking, fact-checkers, check-worthiness, detecting previously
fact-checked claims, evidence retrieval | IJCAI-2021 | null | null | cs.AI cs.CL cs.CR cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reporting and the analysis of current events around the globe have
expanded from professional, editor-led journalism all the way to citizen
journalism. Nowadays, politicians and other key players enjoy direct access to
their audiences through social media, bypassing the filters of official cables
or traditional media. However, the multiple advantages of free speech and
direct communication are dimmed by the misuse of media to spread inaccurate or
misleading claims. These phenomena have led to the modern incarnation of the
fact-checker -- a professional whose main aim is to examine claims using
available evidence and to assess their veracity. As in other text forensics
tasks, the amount of information available makes the work of the fact-checker
more difficult. With this in mind, starting from the perspective of the
professional fact-checker, we survey the available intelligent technologies
that can support the human expert in the different steps of her fact-checking
endeavor. These include identifying claims worth fact-checking, detecting
relevant previously fact-checked claims, retrieving relevant evidence to
fact-check a claim, and actually verifying a claim. In each case, we pay
attention to the challenges in future work and the potential impact on
real-world fact-checking.
| [
{
"created": "Sat, 13 Mar 2021 18:29:14 GMT",
"version": "v1"
},
{
"created": "Sat, 22 May 2021 12:27:05 GMT",
"version": "v2"
}
] | 2021-05-25 | [
[
"Nakov",
"Preslav",
""
],
[
"Corney",
"David",
""
],
[
"Hasanain",
"Maram",
""
],
[
"Alam",
"Firoj",
""
],
[
"Elsayed",
"Tamer",
""
],
[
"Barrón-Cedeño",
"Alberto",
""
],
[
"Papotti",
"Paolo",
""
],
[
"Shaar",
"Shaden",
""
],
[
"Martino",
"Giovanni Da San",
""
]
] |
2103.07779 | Robin Swezey | Robin Swezey, Young-joo Chung | Recommending Short-lived Dynamic Packages for Golf Booking Services | null | In Proceedings of the 24th ACM International on Conference on
Information and Knowledge Management (CIKM 2015). Association for Computing
Machinery, New York, NY, USA, 1779-1782 | 10.1145/2806416.2806608 | null | cs.IR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an approach to recommending short-lived dynamic packages for
golf booking services. Two challenges are addressed in this work. The first is
the short life of the items, which puts the system in a state of a permanent
cold start. The second is the uninformative nature of the package attributes,
which makes clustering or inferring latent packages challenging. Although such
settings are fairly pervasive, they have not been studied in traditional
recommendation research, and there is thus a call for original approaches for
recommender systems. In this paper, we introduce a hybrid method that leverages
user analysis and its relation to the packages, as well as package pricing and
environmental analysis, and traditional collaborative filtering. The proposed
approach achieved appreciable improvement in precision compared with baselines.
| [
{
"created": "Sat, 13 Mar 2021 19:48:04 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Swezey",
"Robin",
""
],
[
"Chung",
"Young-joo",
""
]
] |
2103.07780 | Le Cong Dinh | Le Cong Dinh, Yaodong Yang, Stephen McAleer, Zheng Tian, Nicolas Perez
Nieves, Oliver Slumbers, David Henry Mguni, Haitham Bou Ammar, Jun Wang | Online Double Oracle | Accepted at Transactions on Machine Learning Research (TMLR) | Transactions on Machine Learning Research 2022 | null | null | cs.AI cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving strategic games with huge action space is a critical yet
under-explored topic in economics, operations research and artificial
intelligence. This paper proposes new learning algorithms for solving
two-player zero-sum normal-form games where the number of pure strategies is
prohibitively large. Specifically, we combine no-regret analysis from online
learning with Double Oracle (DO) methods from game theory. Our method --
\emph{Online Double Oracle (ODO)} -- is provably convergent to a Nash
equilibrium (NE). Most importantly, unlike normal DO methods, ODO is
\emph{rational} in the sense that each agent in ODO can exploit a strategic
adversary with a regret bound of $\mathcal{O}(\sqrt{T k \log(k)})$, where $k$ is
not the total number of pure strategies, but rather the size of the \emph{effective
strategy set}, which is linearly dependent on the support size of the NE. On tens
of different real-world games, ODO outperforms DO, PSRO methods, and no-regret
algorithms such as Multiplicative Weight Update by a significant margin, both
in terms of convergence rate to an NE and average payoff against strategic
adversaries.
| [
{
"created": "Sat, 13 Mar 2021 19:48:27 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Mar 2021 14:34:47 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Jun 2021 22:50:56 GMT",
"version": "v3"
},
{
"created": "Mon, 16 May 2022 16:43:15 GMT",
"version": "v4"
},
{
"created": "Wed, 15 Feb 2023 09:58:59 GMT",
"version": "v5"
}
] | 2023-02-16 | [
[
"Dinh",
"Le Cong",
""
],
[
"Yang",
"Yaodong",
""
],
[
"McAleer",
"Stephen",
""
],
[
"Tian",
"Zheng",
""
],
[
"Nieves",
"Nicolas Perez",
""
],
[
"Slumbers",
"Oliver",
""
],
[
"Mguni",
"David Henry",
""
],
[
"Ammar",
"Haitham Bou",
""
],
[
"Wang",
"Jun",
""
]
] |
2103.07825 | Xu Dong | Xu Dong, Binnan Zhuang, Yunxiang Mao, Langechuan Liu | Radar Camera Fusion via Representation Learning in Autonomous Driving | null | In Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pp. 1672-1681. 2021 | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Radars and cameras are mature, cost-effective, and robust sensors and have
been widely used in the perception stack of mass-produced autonomous driving
systems. Due to their complementary properties, outputs from radar detection
(radar pins) and camera perception (2D bounding boxes) are usually fused to
generate the best perception results. The key to successful radar-camera fusion
is accurate data association. The challenges in radar-camera
association can be attributed to the complexity of driving scenes, the noisy
and sparse nature of radar measurements, and the depth ambiguity from 2D
bounding boxes. Traditional rule-based association methods are susceptible to
performance degradation in challenging scenarios and failure in corner cases.
In this study, we propose to address radar-camera association via deep
representation learning, to explore feature-level interaction and global
reasoning. Additionally, we design a loss sampling mechanism and an innovative
ordinal loss to overcome the difficulty of imperfect labeling and to enforce
critical human-like reasoning. Despite being trained with noisy labels
generated by a rule-based algorithm, our proposed method achieves a performance
of 92.2% F1 score, which is 11.6% higher than the rule-based teacher. Moreover,
this data-driven method also lends itself to continuous improvement via corner
case mining.
| [
{
"created": "Sun, 14 Mar 2021 01:32:03 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Apr 2021 21:02:47 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Jun 2021 15:48:11 GMT",
"version": "v3"
}
] | 2021-06-21 | [
[
"Dong",
"Xu",
""
],
[
"Zhuang",
"Binnan",
""
],
[
"Mao",
"Yunxiang",
""
],
[
"Liu",
"Langechuan",
""
]
] |
2103.07986 | Arthur Venter Mr | Arthur E. W. Venter and Marthinus W. Theunissen and Marelie H. Davel | Pre-interpolation loss behaviour in neural networks | 11 pages, 8 figures. Presented at the 2021 SACAIR online conference
in February 2021 | Communications in Computer and Information Science, volume 1342,
year 2021, pages 296-309 | 10.1007/978-3-030-66151-9_19 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | When training neural networks as classifiers, it is common to observe an
increase in average test loss while still maintaining or improving the overall
classification accuracy on the same dataset. In spite of the ubiquity of this
phenomenon, it has not been well studied and is often dismissively attributed
to an increase in borderline correct classifications. We present an empirical
investigation that shows how this phenomenon is actually a result of the
differential manner by which test samples are processed. In essence: test loss
does not increase overall, but only for a small minority of samples. Large
representational capacities allow losses to decrease for the vast majority of
test samples at the cost of extreme increases for others. This effect seems to
be mainly caused by increased parameter values relating to the correctly
processed sample features. Our findings contribute to the practical
understanding of a common behaviour of deep neural networks. We also discuss
the implications of this work for network optimisation and generalisation.
| [
{
"created": "Sun, 14 Mar 2021 18:08:59 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Venter",
"Arthur E. W.",
""
],
[
"Theunissen",
"Marthinus W.",
""
],
[
"Davel",
"Marelie H.",
""
]
] |
2103.08052 | Bonaventure F. P. Dossou | Bonaventure F. P. Dossou and Chris C. Emezue | Crowdsourced Phrase-Based Tokenization for Low-Resourced Neural Machine
Translation: The Case of Fon Language | null | African NLP, EACL 2021 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Building effective neural machine translation (NMT) models for very
low-resourced and morphologically rich African indigenous languages is an open
challenge. Besides the issue of finding available resources for them, a lot of
work is put into preprocessing and tokenization. Recent studies have shown that
standard tokenization methods do not always adequately deal with the
grammatical, diacritical, and tonal properties of some African languages. That,
coupled with the extremely low availability of training samples, hinders the
production of reliable NMT models. In this paper, using Fon language as a case
study, we revisit standard tokenization methods and introduce
Word-Expressions-Based (WEB) tokenization, a human-involved super-words
tokenization strategy to create a better representative vocabulary for
training. Furthermore, we compare our tokenization strategy to others on the
Fon-French and French-Fon translation tasks.
| [
{
"created": "Sun, 14 Mar 2021 22:12:14 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Mar 2021 13:00:28 GMT",
"version": "v2"
}
] | 2021-03-18 | [
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Emezue",
"Chris C.",
""
]
] |
2103.08105 | Murilo Marques Marinho | Masakazu Yoshimura and Murilo Marques Marinho and Kanako Harada and
Mamoru Mitsuishi | MBAPose: Mask and Bounding-Box Aware Pose Estimation of Surgical
Instruments with Photorealistic Domain Randomization | Accepted on IROS 2021, 8 pages | 2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2021, pp. 9445-9452 | 10.1109/IROS51168.2021.9636404 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Surgical robots are usually controlled using a priori models based on the
robots' geometric parameters, which are calibrated before the surgical
procedure. One of the challenges in using robots in real surgical settings is
that those parameters can change over time, consequently deteriorating control
accuracy. In this context, our group has been investigating online calibration
strategies without added sensors. In one step toward that goal, we have
developed an algorithm to estimate the pose of the instruments' shafts in
endoscopic images. In this study, we build upon that earlier work and propose a
new framework to more precisely estimate the pose of a rigid surgical
instrument. Our strategy is based on a novel pose estimation model called
MBAPose and the use of synthetic training data. Our experiments demonstrated an
improvement of 21 % for translation error and 26 % for orientation error on
synthetic test data with respect to our previous work. Results with real test
data provide a baseline for further research.
| [
{
"created": "Mon, 15 Mar 2021 02:53:41 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jan 2022 07:23:56 GMT",
"version": "v2"
}
] | 2022-06-03 | [
[
"Yoshimura",
"Masakazu",
""
],
[
"Marinho",
"Murilo Marques",
""
],
[
"Harada",
"Kanako",
""
],
[
"Mitsuishi",
"Mamoru",
""
]
] |
2103.08129 | Pranav Kadam | Pranav Kadam, Min Zhang, Shan Liu, C.-C. Jay Kuo | R-PointHop: A Green, Accurate, and Unsupervised Point Cloud Registration
Method | 16 pages, 12 figures. Accepted by IEEE Transactions on Image
Processing | IEEE Transactions on Image Processing, vol. 31, pp. 2710-2725,
2022 | 10.1109/TIP.2022.3160609 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by the recent PointHop classification method, an unsupervised 3D
point cloud registration method, called R-PointHop, is proposed in this work.
R-PointHop first determines a local reference frame (LRF) for every point using
its nearest neighbors and finds local attributes. Next, R-PointHop obtains
local-to-global hierarchical features by point downsampling, neighborhood
expansion, attribute construction and dimensionality reduction steps. Thus,
point correspondences are built in hierarchical feature space using the nearest
neighbor rule. Afterwards, a subset of salient points with good correspondence
is selected to estimate the 3D transformation. The use of the LRF allows for
invariance of the hierarchical features of points with respect to rotation and
translation, thus making R-PointHop more robust at building point
correspondence, even when the rotation angles are large. Experiments are
conducted on the 3DMatch, ModelNet40, and Stanford Bunny datasets, which
demonstrate the effectiveness of R-PointHop for 3D point cloud registration.
R-PointHop's model size and training time are an order of magnitude smaller
than those of deep learning methods, and its registration errors are smaller,
making it a green and accurate solution. Our codes are available on GitHub.
| [
{
"created": "Mon, 15 Mar 2021 04:12:44 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Oct 2021 20:56:08 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Mar 2022 04:20:44 GMT",
"version": "v3"
}
] | 2022-04-01 | [
[
"Kadam",
"Pranav",
""
],
[
"Zhang",
"Min",
""
],
[
"Liu",
"Shan",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
2103.08183 | Tadahiro Taniguchi | Tadahiro Taniguchi, Hiroshi Yamakawa, Takayuki Nagai, Kenji Doya,
Masamichi Sakagami, Masahiro Suzuki, Tomoaki Nakamura, Akira Taniguchi | A Whole Brain Probabilistic Generative Model: Toward Realizing Cognitive
Architectures for Developmental Robots | 62 pages, 9 figures, submitted to Neural Networks | Neural Networks, 2022, Volume 150, 293-312 | 10.1016/j.neunet.2022.02.026 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building a humanlike integrative artificial cognitive system, that is, an
artificial general intelligence (AGI), is the holy grail of the artificial
intelligence (AI) field. Furthermore, a computational model that enables an
artificial system to achieve cognitive development will be an excellent
reference for brain and cognitive science. This paper describes an approach to
develop a cognitive architecture by integrating elemental cognitive modules to
enable the training of the modules as a whole. This approach is based on two
ideas: (1) brain-inspired AI, learning human brain architecture to build
human-level intelligence, and (2) a probabilistic generative model (PGM)-based
cognitive system to develop a cognitive system for developmental robots by
integrating PGMs. The development framework is called a whole brain PGM
(WB-PGM), which differs fundamentally from existing cognitive architectures in
that it can learn continuously through a system based on sensory-motor
information. In this study, we describe the rationale of WB-PGM, the current
status of PGM-based elemental cognitive modules, their relationship with the
human brain, the approach to the integration of the cognitive modules, and
future challenges. Our findings can serve as a reference for brain studies. As
PGMs describe explicit informational relationships between variables, this
description provides interpretable guidance from computational sciences to
brain science. By providing such information, researchers in neuroscience can
provide feedback to researchers in AI and robotics on what the current models
lack with reference to the brain. Further, it can facilitate collaboration
among researchers in neuro-cognitive sciences as well as AI and robotics.
| [
{
"created": "Mon, 15 Mar 2021 07:42:04 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Jan 2022 23:38:27 GMT",
"version": "v2"
}
] | 2023-01-18 | [
[
"Taniguchi",
"Tadahiro",
""
],
[
"Yamakawa",
"Hiroshi",
""
],
[
"Nagai",
"Takayuki",
""
],
[
"Doya",
"Kenji",
""
],
[
"Sakagami",
"Masamichi",
""
],
[
"Suzuki",
"Masahiro",
""
],
[
"Nakamura",
"Tomoaki",
""
],
[
"Taniguchi",
"Akira",
""
]
] |
2103.08199 | Tadahiro Taniguchi | Yasuaki Okuda, Ryo Ozaki, and Tadahiro Taniguchi | Double Articulation Analyzer with Prosody for Unsupervised Word and
Phoneme Discovery | 11 pages, Submitted to IEEE Transactions on Cognitive and
Developmental Systems | IEEE Transactions on Cognitive and Developmental Systems, 2022 | 10.1109/TCDS.2022.3210751 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Infants acquire words and phonemes from unsegmented speech signals using
segmentation cues, such as distributional, prosodic, and co-occurrence cues.
Many pre-existing computational models that represent the process tend to focus
on distributional or prosodic cues. This paper proposes a nonparametric
Bayesian probabilistic generative model called the prosodic hierarchical
Dirichlet process-hidden language model (Prosodic HDP-HLM). Prosodic HDP-HLM,
an extension of HDP-HLM, considers both prosodic and distributional cues within
a single integrative generative model. We conducted three experiments on
different types of datasets, and demonstrate the validity of the proposed
method. The results show that the prosodic double articulation analyzer
(Prosodic DAA) successfully uses prosodic cues
and outperforms a method that solely uses distributional cues. The main
contributions of this study are as follows: 1) We develop a probabilistic
generative model for time series data including prosody that potentially has a
double articulation structure; 2) We propose the Prosodic DAA by deriving the
inference procedure for Prosodic HDP-HLM and show that Prosodic DAA can
discover words directly from continuous human speech signals using statistical
information and prosodic information in an unsupervised manner; 3) We show that
prosodic cues contribute more to word segmentation when word frequencies are
naturally distributed, i.e., follow Zipf's law.
| [
{
"created": "Mon, 15 Mar 2021 08:17:44 GMT",
"version": "v1"
}
] | 2023-01-18 | [
[
"Okuda",
"Yasuaki",
""
],
[
"Ozaki",
"Ryo",
""
],
[
"Taniguchi",
"Tadahiro",
""
]
] |
2103.08233 | Thanh Nguyen Xuan | Thanh Nguyen, Tung Luu, Trung Pham, Sanzhar Rakhimkul, Chang D. Yoo | Robust MAML: Prioritization task buffer with adaptive learning process
for model-agnostic meta-learning | null | ICASSP 2021 - 2021 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP) | 10.1109/ICASSP39728.2021.9413446 | null | cs.LG cs.AI cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Model agnostic meta-learning (MAML) is a popular state-of-the-art
meta-learning algorithm that provides good weight initialization of a model
given a variety of learning tasks. The model initialized by provided weight can
be fine-tuned to an unseen task despite only using a small amount of samples
and within a few adaptation steps. MAML is simple and versatile but requires
costly learning rate tuning and careful design of the task distribution which
affects its scalability and generalization. This paper proposes a more robust
MAML based on an adaptive learning scheme and a prioritization task buffer (PTB)
referred to as Robust MAML (RMAML) for improving scalability of training
process and alleviating the problem of distribution mismatch. RMAML uses
gradient-based hyper-parameter optimization to automatically find the optimal
learning rate and uses the PTB to gradually adjust training task distribution
toward testing task distribution over the course of training. Experimental
results on meta reinforcement learning environments demonstrate a substantial
performance gain as well as being less sensitive to hyper-parameter choice and
robust to distribution mismatch.
| [
{
"created": "Mon, 15 Mar 2021 09:34:34 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 13:56:07 GMT",
"version": "v2"
}
] | 2021-06-11 | [
[
"Nguyen",
"Thanh",
""
],
[
"Luu",
"Tung",
""
],
[
"Pham",
"Trung",
""
],
[
"Rakhimkul",
"Sanzhar",
""
],
[
"Yoo",
"Chang D.",
""
]
] |
2103.08255 | Thanh Nguyen Xuan | Thanh Nguyen, Tung M. Luu, Thang Vu and Chang D. Yoo | Sample-efficient Reinforcement Learning Representation Learning with
Curiosity Contrastive Forward Dynamics Model | null | 2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS) | 10.1109/IROS51168.2021.9636536 | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Developing an agent in reinforcement learning (RL) that is capable of
performing complex control tasks directly from high-dimensional observations
such as raw pixels remains a challenge as efforts are made towards improving
sample efficiency and generalization. This paper considers a learning framework
for Curiosity Contrastive Forward Dynamics Model (CCFDM) in achieving a more
sample-efficient RL based directly on raw pixels. CCFDM incorporates a forward
dynamics model (FDM) and performs contrastive learning to train its deep
convolutional neural network-based image encoder (IE) to extract conducive
spatial and temporal information, achieving greater sample efficiency for RL.
In addition, during training, CCFDM provides intrinsic rewards, produced based
on the FDM prediction error, that encourage the curiosity of the RL agent and
improve exploration. The diverse and less-repetitive observations provided by our
exploration strategy and the data augmentation available in contrastive learning
improve not only the sample efficiency but also the generalization. Performance
of existing model-free RL methods such as Soft Actor-Critic built on top of
CCFDM outperforms prior state-of-the-art pixel-based RL methods on the DeepMind
Control Suite benchmark.
| [
{
"created": "Mon, 15 Mar 2021 10:08:52 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Oct 2021 13:19:41 GMT",
"version": "v2"
}
] | 2023-01-13 | [
[
"Nguyen",
"Thanh",
""
],
[
"Luu",
"Tung M.",
""
],
[
"Vu",
"Thang",
""
],
[
"Yoo",
"Chang D.",
""
]
] |
2103.08286 | Marcus Valtonen \"Ornhag | Marcus Valtonen \"Ornhag and Patrik Persson and M{\aa}rten Wadenb\"ack
and Kalle {\AA}str\"om and Anders Heyden | Trust Your IMU: Consequences of Ignoring the IMU Drift | null | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops 2022 | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we argue that modern pre-integration methods for inertial
measurement units (IMUs) are accurate enough to ignore the drift for short time
intervals. This allows us to consider a simplified camera model, which in turn
admits further intrinsic calibration. We develop the first-ever solver to
jointly solve the relative pose problem with unknown and equal focal length and
radial distortion profile while utilizing the IMU data. Furthermore, we show
significant speed-up compared to state-of-the-art algorithms, with small or
negligible loss in accuracy for partially calibrated setups. The proposed
algorithms are tested on both synthetic and real data, where the latter is
focused on navigation using unmanned aerial vehicles (UAVs). We evaluate the
proposed solvers on different commercially available low-cost UAVs, and
demonstrate that the novel assumption on IMU drift is feasible in real-life
applications. The extended intrinsic auto-calibration enables us to use
distorted input images, making tedious calibration processes obsolete, compared
to current state-of-the-art methods.
| [
{
"created": "Mon, 15 Mar 2021 11:24:54 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Mar 2021 20:25:39 GMT",
"version": "v2"
}
] | 2022-08-18 | [
[
"Örnhag",
"Marcus Valtonen",
""
],
[
"Persson",
"Patrik",
""
],
[
"Wadenbäck",
"Mårten",
""
],
[
"Åström",
"Kalle",
""
],
[
"Heyden",
"Anders",
""
]
] |
2103.08391 | Blai Bonet | Ivan D. Rodriguez and Blai Bonet and Sebastian Sardina and Hector
Geffner | Flexible FOND Planning with Explicit Fairness Assumptions | Extended version of ICAPS-21 paper | Journal of Artificial Intelligence Research 2022 | 10.1613/jair.1.13599 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of reaching a propositional goal condition in
fully-observable non-deterministic (FOND) planning under a general class of
fairness assumptions that are given explicitly. The fairness assumptions are of
the form A/B and say that state trajectories that contain infinite occurrences
of an action a from A in a state s and finite occurrence of actions from B,
must also contain infinite occurrences of action a in s followed by each one of
its possible outcomes. The infinite trajectories that violate this condition
are deemed as unfair, and the solutions are policies for which all the fair
trajectories reach a goal state. We show that strong and strong-cyclic FOND
planning, as well as QNP planning, a planning model introduced recently for
generalized planning, are all special cases of FOND planning with fairness
assumptions of this form which can also be combined. FOND+ planning, as this
form of planning is called, combines the syntax of FOND planning with some of
the versatility of LTL for expressing fairness constraints. A new planner is
implemented by reducing FOND+ planning to answer set programs, and the
performance of the planner is evaluated in comparison with FOND and QNP
planners, and LTL synthesis tools.
| [
{
"created": "Mon, 15 Mar 2021 13:57:07 GMT",
"version": "v1"
}
] | 2022-06-29 | [
[
"Rodriguez",
"Ivan D.",
""
],
[
"Bonet",
"Blai",
""
],
[
"Sardina",
"Sebastian",
""
],
[
"Geffner",
"Hector",
""
]
] |
2103.08533 | Miguel Sim\~oes | Miguel Sim\~oes, Andreas Themelis, Panagiotis Patrinos | Lasry-Lions Envelopes and Nonconvex Optimization: A Homotopy Approach | 29th Eur. Signal Process. Conf. (EUSIPCO 2021), accepted. 5 pages, 2
figures, 2 tables | Eur Sig Proc Conf (EUSIPCO), 2021, pp 2089-2093 | 10.23919/EUSIPCO54536.2021.9616167 | null | math.OC cs.CV eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large-scale optimization, the presence of nonsmooth and nonconvex terms in
a given problem typically makes it hard to solve. A popular approach to address
nonsmooth terms in convex optimization is to approximate them with their
respective Moreau envelopes. In this work, we study the use of Lasry-Lions
double envelopes to approximate nonsmooth terms that are also not convex. These
envelopes are an extension of the Moreau ones but exhibit an additional
smoothness property that makes them amenable to fast optimization algorithms.
Lasry-Lions envelopes can also be seen as an "intermediate" between a given
function and its convex envelope, and we make use of this property to develop a
method that builds a sequence of approximate subproblems that are easier to
solve than the original problem. We discuss convergence properties of this
method when used to address composite minimization problems; additionally,
based on a number of experiments, we discuss settings where it may be more
useful than classical alternatives in two domains: signal decoding and spectral
unmixing.
| [
{
"created": "Mon, 15 Mar 2021 16:55:11 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Jun 2021 09:21:35 GMT",
"version": "v2"
}
] | 2024-04-17 | [
[
"Simões",
"Miguel",
""
],
[
"Themelis",
"Andreas",
""
],
[
"Patrinos",
"Panagiotis",
""
]
] |
2103.08562 | Kai Packh\"auser | Kai Packh\"auser, Sebastian G\"undel, Nicolas M\"unster, Christopher
Syben, Vincent Christlein, Andreas Maier | Deep Learning-based Patient Re-identification Is able to Exploit the
Biometric Nature of Medical Chest X-ray Data | Published in Scientific Reports | Scientific Reports, 12, Article number: 14851 (2022) | 10.1038/s41598-022-19045-3 | null | cs.CV cs.AI cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | With the rise and ever-increasing potential of deep learning techniques in
recent years, publicly available medical datasets became a key factor to enable
reproducible development of diagnostic algorithms in the medical domain.
Medical data contains sensitive patient-related information and is therefore
usually anonymized by removing patient identifiers, e.g., patient names, before
publication. To the best of our knowledge, we are the first to show that a
well-trained deep learning system is able to recover the patient identity from
chest X-ray data. We demonstrate this using the publicly available large-scale
ChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images
from 30,805 unique patients. Our verification system is able to identify
whether two frontal chest X-ray images are from the same person with an AUC of
0.9940 and a classification accuracy of 95.55%. We further highlight that the
proposed system is able to reveal the same person even ten and more years after
the initial scan. When pursuing a retrieval approach, we observe an mAP@R of
0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to
0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks
on external datasets such as CheXpert and the COVID-19 Image Data Collection.
Based on this high identification rate, a potential attacker may leak
patient-related information and additionally cross-reference images to obtain
more information. Thus, there is a great risk of sensitive content falling into
unauthorized hands or being disseminated against the will of the concerned
patients. Especially during the COVID-19 pandemic, numerous chest X-ray
datasets have been published to advance research. Therefore, such data may be
vulnerable to potential attacks by deep learning-based re-identification
algorithms.
| [
{
"created": "Mon, 15 Mar 2021 17:26:43 GMT",
"version": "v1"
},
{
"created": "Mon, 31 May 2021 17:22:04 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Jun 2021 10:36:57 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Sep 2022 12:45:01 GMT",
"version": "v4"
}
] | 2022-09-05 | [
[
"Packhäuser",
"Kai",
""
],
[
"Gündel",
"Sebastian",
""
],
[
"Münster",
"Nicolas",
""
],
[
"Syben",
"Christopher",
""
],
[
"Christlein",
"Vincent",
""
],
[
"Maier",
"Andreas",
""
]
] |
2103.08733 | Nikolaos Kondylidis | Nikolaos Kondylidis, Jie Zou and Evangelos Kanoulas | Category Aware Explainable Conversational Recommendation | Workshop on Mixed-Initiative ConveRsatiOnal Systems (MICROS) @ECIR,
2021 | Workshop on Mixed-Initiative ConveRsatiOnal Systems (MICROS)
@ECIR, 2021 | null | null | cs.AI cs.HC cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Most conversational recommendation approaches are not explainable, require
external knowledge about the user for their explanations, or cannot produce
explanations in real time due to computational limitations. In this work,
we present a real-time, category-based conversational recommendation approach
that can provide concise explanations without requiring prior user knowledge.
We first build an explainable user model in the form of preferences
over the items' categories, and then use these category preferences to recommend
items. The user model is constructed by applying a BERT-based neural architecture
to the conversation. Then, we translate the user model into item recommendation
scores using a feed-forward network. User preferences during the conversation
in our approach are represented by category vectors which are directly
interpretable. The experimental results on the real conversational
recommendation dataset ReDial demonstrate comparable performance to the
state-of-the-art, while our approach is explainable. We also show the potential
power of our framework by involving an oracle setting of category preference
prediction.
| [
{
"created": "Mon, 15 Mar 2021 21:45:13 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Kondylidis",
"Nikolaos",
""
],
[
"Zou",
"Jie",
""
],
[
"Kanoulas",
"Evangelos",
""
]
] |
2103.08773 | Fevziye Irem Eyiokur | Fevziye Irem Eyiokur, Haz{\i}m Kemal Ekenel, Alexander Waibel | Unconstrained Face-Mask & Face-Hand Datasets: Building a Computer Vision
System to Help Prevent the Transmission of COVID-19 | 9 pages, 4 figures | SIViP (2022) | 10.1007/s11760-022-02308-x | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Health organizations advise social distancing, wearing face masks, and
avoiding touching the face to prevent the spread of coronavirus. Based on these
protective measures, we developed a computer vision system to help prevent the
transmission of COVID-19. Specifically, the developed system performs face mask
detection, face-hand interaction detection, and measures social distance. To
train and evaluate the developed system, we collected and annotated images that
represent face mask usage and face-hand interaction in the real world. Besides
assessing the performance of the developed system on our own datasets, we also
tested it on existing datasets in the literature without performing any
adaptation on them. In addition, we proposed a module to track social distance
between people. Experimental results indicate that our datasets represent the
real-world's diversity well. The proposed system achieved very high performance
and generalization capacity for face mask usage detection, face-hand
interaction detection, and measuring social distance in a real-world scenario
on unseen data. The datasets will be available at
https://github.com/iremeyiokur/COVID-19-Preventions-Control-System.
| [
{
"created": "Tue, 16 Mar 2021 00:00:04 GMT",
"version": "v1"
},
{
"created": "Tue, 4 May 2021 13:53:06 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Dec 2021 12:54:18 GMT",
"version": "v3"
}
] | 2022-12-16 | [
[
"Eyiokur",
"Fevziye Irem",
""
],
[
"Ekenel",
"Hazım Kemal",
""
],
[
"Waibel",
"Alexander",
""
]
] |
2103.08796 | Xiaojun Li | Xiaojun Li, Jianwei Li, Ali Abdollahi and Trevor Jones | Data-driven Thermal Anomaly Detection for Batteries using Unsupervised
Shape Clustering | 6 pages | 2021 IEEE 30th International Symposium on Industrial Electronics
(ISIE), 2021, pp. 1-6 | 10.1109/ISIE45552.2021.9576348 | null | eess.SY cs.AI cs.LG cs.SY | http://creativecommons.org/licenses/by/4.0/ | For electric vehicles (EV) and energy storage (ES) batteries, thermal runaway
is a critical issue as it can lead to uncontrollable fires or even explosions.
Thermal anomaly detection can identify problematic battery packs that may
eventually undergo thermal runaway. However, there are common challenges like
data unavailability, environment and configuration variations, and battery
aging. We propose a data-driven method to detect battery thermal anomaly based
on comparing shape-similarity between thermal measurements. Based on their
shapes, the measurements are continuously being grouped into different
clusters. Anomaly is detected by monitoring deviations within the clusters.
Unlike model-based or other data-driven methods, the proposed method is robust
to data loss and requires minimal reference data for different pack
configurations. As the initial experimental results show, the method not only
can be more accurate than the onboard BMS but also can detect unforeseen
anomalies at an early stage.
| [
{
"created": "Tue, 16 Mar 2021 01:29:41 GMT",
"version": "v1"
},
{
"created": "Wed, 19 May 2021 23:56:30 GMT",
"version": "v2"
}
] | 2022-01-10 | [
[
"Li",
"Xiaojun",
""
],
[
"Li",
"Jianwei",
""
],
[
"Abdollahi",
"Ali",
""
],
[
"Jones",
"Trevor",
""
]
] |
2103.08877 | Djordje Miladinovic | {\DJ}or{\dj}e Miladinovi\'c, Aleksandar Stani\'c, Stefan Bauer,
J\"urgen Schmidhuber, Joachim M. Buhmann | Spatial Dependency Networks: Neural Layers for Improved Generative Image
Modeling | null | International Conference on Learning Representations (2021); | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to improve generative modeling by better exploiting spatial regularities
and coherence in images? We introduce a novel neural network for building image
generators (decoders) and apply it to variational autoencoders (VAEs). In our
spatial dependency networks (SDNs), feature maps at each level of a deep neural
net are computed in a spatially coherent way, using a sequential gating-based
mechanism that distributes contextual information across 2-D space. We show
that augmenting the decoder of a hierarchical VAE by spatial dependency layers
considerably improves density estimation over baseline convolutional
architectures and the state-of-the-art among the models within the same class.
Furthermore, we demonstrate that SDN can be applied to large images by
synthesizing samples of high quality and coherence. In a vanilla VAE setting,
we find that a powerful SDN decoder also improves learning disentangled
representations, indicating that neural architectures play an important role in
this task. Our results suggest favoring spatial dependency over convolutional
layers in various VAE settings. The accompanying source code is given at
https://github.com/djordjemila/sdn.
| [
{
"created": "Tue, 16 Mar 2021 07:01:08 GMT",
"version": "v1"
}
] | 2021-03-17 | [
[
"Miladinović",
"Đorđe",
""
],
[
"Stanić",
"Aleksandar",
""
],
[
"Bauer",
"Stefan",
""
],
[
"Schmidhuber",
"Jürgen",
""
],
[
"Buhmann",
"Joachim M.",
""
]
] |
2103.08894 | Medha Atre | Medha Atre and Birendra Jha and Ashwini Rao | Distributed Deep Learning Using Volunteer Computing-Like Paradigm | null | ScaDL workshop at IEEE International Parallel and Distributed
Processing Symposium 2021 | 10.1109/IPDPSW52791.2021.00144 | null | cs.DC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Use of Deep Learning (DL) in commercial applications such as image
classification, sentiment analysis and speech recognition is increasing. When
training DL models with a large number of parameters and/or large datasets, cost
and speed of training can become prohibitive. Distributed DL training solutions
that split a training job into subtasks and execute them over multiple nodes
can decrease training time. However, the cost of current solutions, built
predominantly for cluster computing systems, can still be an issue. In contrast
to cluster computing systems, Volunteer Computing (VC) systems can lower the
cost of computing, but applications running on VC systems have to handle fault
tolerance, variable network latency and heterogeneity of compute nodes, and the
current solutions are not designed to do so. We design a distributed solution
that can run DL training on a VC system by using a data parallel approach. We
implement a novel asynchronous SGD scheme called VC-ASGD suited for VC systems.
In contrast to traditional VC systems that lower cost by using untrustworthy
volunteer devices, we lower cost by leveraging preemptible computing instances
on commercial cloud platforms. By using preemptible instances that require
applications to be fault tolerant, we lower cost by 70-90% and improve data
security.
| [
{
"created": "Tue, 16 Mar 2021 07:32:58 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Apr 2021 12:50:05 GMT",
"version": "v2"
},
{
"created": "Thu, 27 May 2021 06:41:45 GMT",
"version": "v3"
}
] | 2021-05-28 | [
[
"Atre",
"Medha",
""
],
[
"Jha",
"Birendra",
""
],
[
"Rao",
"Ashwini",
""
]
] |
2103.08922 | Pit Schneider | Pit Schneider | Combining Morphological and Histogram based Text Line Segmentation in
the OCR Context | Journal of Data Mining and Digital Humanities; Small adjustments | Journal of Data Mining & Digital Humanities, 2021,
HistoInformatics (November 4, 2021) jdmdh:7277 | 10.46298/jdmdh.7277 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text line segmentation is one of the pre-stages of modern optical character
recognition systems. The algorithmic approach proposed by this paper has been
designed for this exact purpose. Its main characteristic is the combination of
two different techniques, morphological image operations and horizontal
histogram projections. The method was developed to be applied on a historic
data collection that commonly features quality issues, such as degraded paper,
blurred text, or presence of noise. For that reason, the segmenter in question
could be of particular interest for cultural institutions that want access to
robust line bounding boxes for a given historic document. Because of the
promising segmentation results that are joined by low computational cost, the
algorithm was incorporated into the OCR pipeline of the National Library of
Luxembourg, in the context of the initiative of reprocessing their historic
newspaper collection. The general contribution of this paper is to outline the
approach and to evaluate the gains in terms of accuracy and speed, comparing it
to the segmentation algorithm bundled with the used open source OCR software.
| [
{
"created": "Tue, 16 Mar 2021 09:06:25 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Sep 2021 10:26:56 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Oct 2021 13:14:35 GMT",
"version": "v3"
},
{
"created": "Mon, 1 Nov 2021 12:56:57 GMT",
"version": "v4"
}
] | 2023-06-22 | [
[
"Schneider",
"Pit",
""
]
] |
2103.08952 | Philipp Wicke | Philipp Wicke and Marianna M. Bolognesi | Covid-19 Discourse on Twitter: How the Topics, Sentiments, Subjectivity,
and Figurative Frames Changed Over Time | null | Frontiers in Communication, Volume: 6, Pages: 45, Year: 2021 | 10.3389/fcomm.2021.651997 | null | cs.CL cs.SI | http://creativecommons.org/licenses/by/4.0/ | The words we use to talk about the current epidemiological crisis on social
media can inform us on how we are conceptualizing the pandemic and how we are
reacting to its development. This paper provides an extensive explorative
analysis of how the discourse about Covid-19 reported on Twitter changes
through time, focusing on the first wave of this pandemic. Based on an
extensive corpus of tweets (produced between 20th March and 1st July 2020)
first we show how the topics associated with the development of the pandemic
changed through time, using topic modeling. Second, we show how the sentiment
polarity of the language used in the tweets changed from a relatively positive
valence during the first lockdown, toward a more negative valence in
correspondence with the reopening. Third we show how the average subjectivity
of the tweets increased linearly and fourth, how the popular and frequently
used figurative frame of WAR changed when real riots and fights entered the
discourse.
| [
{
"created": "Tue, 16 Mar 2021 10:22:39 GMT",
"version": "v1"
}
] | 2021-03-17 | [
[
"Wicke",
"Philipp",
""
],
[
"Bolognesi",
"Marianna M.",
""
]
] |
2103.08971 | Tsing Zhang | Jianqing Zhang (1), Dongjing Wang (1), Dongjin Yu (1) ((1) School of
Computer Science and Technology, Hangzhou Dianzi University, China) | TLSAN: Time-aware Long- and Short-term Attention Network for Next-item
Recommendation | null | Neurocomputing, Volume 441, 21 June 2021, Pages 179-191 | 10.1016/j.neucom.2021.02.015 | null | cs.IR cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recently, deep neural networks are widely applied in recommender systems for
their effectiveness in capturing/modeling users' preferences. Especially, the
attention mechanism in deep learning enables recommender systems to incorporate
various features in an adaptive way. Specifically, as for the next item
recommendation task, we have the following three observations: 1) users'
sequential behavior records aggregate at time positions ("time-aggregation"),
2) users have personalized taste that is related to the "time-aggregation"
phenomenon ("personalized time-aggregation"), and 3) users' short-term
interests play an important role in the next item prediction/recommendation. In
this paper, we propose a new Time-aware Long- and Short-term Attention Network
(TLSAN) to address those observations mentioned above. Specifically, TLSAN
consists of two main components. Firstly, TLSAN models "personalized
time-aggregation" and learn user-specific temporal taste via trainable
personalized time position embeddings with category-aware correlations in
long-term behaviors. Secondly, long- and short-term feature-wise attention
layers are proposed to effectively capture users' long- and short-term
preferences for accurate recommendation. Especially, the attention mechanism
enables TLSAN to utilize users' preferences in an adaptive way, and its usage
in long- and short-term layers enhances TLSAN's ability to deal with sparse
interaction data. Extensive experiments are conducted on Amazon datasets from
different fields (also with different size), and the results show that TLSAN
outperforms state-of-the-art baselines in both capturing users' preferences and
performing time-sensitive next-item recommendation.
| [
{
"created": "Tue, 16 Mar 2021 10:51:57 GMT",
"version": "v1"
}
] | 2021-03-17 | [
[
"Zhang",
"Jianqing",
""
],
[
"Wang",
"Dongjing",
""
],
[
"Yu",
"Dongjin",
""
]
] |
2103.09002 | Gabriele Lagani | Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato | Hebbian Semi-Supervised Learning in a Sample Efficiency Setting | 18 pages, 9 figures, 3 tables, accepted by Elsevier Neural Networks | Neural Networks, Volume 143, November 2021, Pages 719-731,
Elsevier | 10.1016/j.neunet.2021.08.003 | null | cs.NE cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose to address the issue of sample efficiency, in Deep Convolutional
Neural Networks (DCNN), with a semi-supervised training strategy that combines
Hebbian learning with gradient descent: all internal layers (both convolutional
and fully connected) are pre-trained using an unsupervised approach based on
Hebbian learning, and the last fully connected layer (the classification layer)
is trained using Stochastic Gradient Descent (SGD). In fact, as Hebbian
learning is an unsupervised learning method, its potential lies in the
possibility of training the internal layers of a DCNN without labels. Only the
final fully connected layer has to be trained with labeled examples.
We performed experiments on various object recognition datasets, in different
regimes of sample efficiency, comparing our semi-supervised (Hebbian for
internal layers + SGD for the final fully connected layer) approach with
end-to-end supervised backprop training, and with semi-supervised learning
based on Variational Auto-Encoder (VAE). The results show that, in regimes
where the number of available labeled samples is low, our semi-supervised
approach outperforms the other approaches in almost all the cases.
| [
{
"created": "Tue, 16 Mar 2021 11:57:52 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Sep 2021 08:29:06 GMT",
"version": "v2"
}
] | 2021-09-21 | [
[
"Lagani",
"Gabriele",
""
],
[
"Falchi",
"Fabrizio",
""
],
[
"Gennaro",
"Claudio",
""
],
[
"Amato",
"Giuseppe",
""
]
] |
2103.09108 | Lukas Tuggener | Lukas Tuggener, J\"urgen Schmidhuber, Thilo Stadelmann | Is it enough to optimize CNN architectures on ImageNet? | null | Frontiers in Computer Science, Volume 4, 2022 | 10.3389/fcomp.2022.1041703 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Classification performance based on ImageNet is the de-facto standard metric
for CNN development. In this work we challenge the notion that CNN architecture
design solely based on ImageNet leads to generally effective convolutional
neural network (CNN) architectures that perform well on a diverse set of
datasets and application domains. To this end, we investigate and ultimately
improve ImageNet as a basis for deriving such architectures. We conduct an
extensive empirical study for which we train $500$ CNN architectures, sampled
from the broad AnyNetX design space, on ImageNet as well as $8$ additional well
known image classification benchmark datasets from a diverse array of
application domains. We observe that the performances of the architectures are
highly dataset dependent. Some datasets even exhibit a negative error
correlation with ImageNet across all architectures. We show how to
significantly increase these correlations by utilizing ImageNet subsets
restricted to fewer classes. These contributions can have a profound impact on
the way we design future CNN architectures and help alleviate the tilt we see
currently in our community with respect to over-reliance on one dataset.
| [
{
"created": "Tue, 16 Mar 2021 14:42:01 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Jun 2021 15:23:38 GMT",
"version": "v2"
},
{
"created": "Thu, 17 Mar 2022 19:17:25 GMT",
"version": "v3"
},
{
"created": "Mon, 6 Mar 2023 14:50:44 GMT",
"version": "v4"
}
] | 2023-03-07 | [
[
"Tuggener",
"Lukas",
""
],
[
"Schmidhuber",
"Jürgen",
""
],
[
"Stadelmann",
"Thilo",
""
]
] |
2103.09151 | Han Wu | Han Wu, Syed Yunas, Sareh Rowlands, Wenjie Ruan, and Johan Wahlstrom | Adversarial Driving: Attacking End-to-End Autonomous Driving | Accepted by IEEE Intelligent Vehicle Symposium, 2023 | IEEE Intelligent Vehicle Symposium, 2023 | 10.1109/IV55152.2023.10186386 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As research in deep neural networks advances, deep convolutional networks
become promising for autonomous driving tasks. In particular, there is an
emerging trend of employing end-to-end neural network models for autonomous
driving. However, previous research has shown that deep neural network
classifiers are vulnerable to adversarial attacks. While for regression tasks,
the effect of adversarial attacks is not as well understood. In this research,
we devise two white-box targeted attacks against end-to-end autonomous driving
models. Our attacks manipulate the behavior of the autonomous driving system by
perturbing the input image. Averaged over 800 attacks with the same attack
strength (epsilon=1), the image-specific and image-agnostic attacks deviate the
steering angle from the original output by 0.478 and 0.111, respectively, which
is much stronger than random noise, which only perturbs the steering angle by
0.002 (the steering angle ranges over [-1, 1]). Both attacks can be initiated
in real-time on CPUs without employing GPUs. Demo video:
https://youtu.be/I0i8uN2oOP0.
| [
{
"created": "Tue, 16 Mar 2021 15:47:34 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Mar 2021 14:04:36 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Aug 2022 16:42:49 GMT",
"version": "v3"
},
{
"created": "Fri, 16 Sep 2022 17:44:13 GMT",
"version": "v4"
},
{
"created": "Wed, 1 Feb 2023 10:12:11 GMT",
"version": "v5"
},
{
"created": "Tue, 4 Apr 2023 14:53:04 GMT",
"version": "v6"
},
{
"created": "Wed, 31 May 2023 10:51:04 GMT",
"version": "v7"
},
{
"created": "Tue, 12 Dec 2023 11:27:44 GMT",
"version": "v8"
}
] | 2023-12-13 | [
[
"Wu",
"Han",
""
],
[
"Yunas",
"Syed",
""
],
[
"Rowlands",
"Sareh",
""
],
[
"Ruan",
"Wenjie",
""
],
[
"Wahlstrom",
"Johan",
""
]
] |
2103.09160 | Jingdao Chen | Jingdao Chen, Zsolt Kira, and Yong K. Cho | LRGNet: Learnable Region Growing for Class-Agnostic Point Cloud
Segmentation | null | IEEE Robotics and Automation Letters 2021 | 10.1109/LRA.2021.3062607 | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | 3D point cloud segmentation is an important function that helps robots
understand the layout of their surrounding environment and perform tasks such
as grasping objects, avoiding obstacles, and finding landmarks. Current
segmentation methods are mostly class-specific, many of which are tuned to work
with specific object categories and may not be generalizable to different types
of scenes. This research proposes a learnable region growing method for
class-agnostic point cloud segmentation, specifically for the task of instance
label prediction. The proposed method is able to segment any class of objects
using a single deep neural network without any assumptions about their shapes
and sizes. The deep neural network is trained to predict how to add or remove
points from a point cloud region to morph it into incrementally more complete
regions of an object instance. Segmentation results on the S3DIS and ScanNet
datasets show that the proposed method outperforms competing methods by 1%-9%
on 6 different evaluation metrics.
| [
{
"created": "Tue, 16 Mar 2021 15:58:01 GMT",
"version": "v1"
}
] | 2021-03-17 | [
[
"Chen",
"Jingdao",
""
],
[
"Kira",
"Zsolt",
""
],
[
"Cho",
"Yong K.",
""
]
] |
2103.09311 | Arash Shaban-Nejad | Nariman Ammar, James E Bailey, Robert L Davis, Arash Shaban-Nejad | Using a Personal Health Library-Enabled mHealth Recommender System for
Self-Management of Diabetes Among Underserved Populations: Use Case for
Knowledge Graphs and Linked Data | 21 Pages, 13 Figures | JMIR Form Res. 2021 March 16;5(3):e24738 | 10.2196/24738 | null | cs.AI cs.DL | http://creativecommons.org/licenses/by/4.0/ | Personal health libraries (PHLs) provide a single point of secure access to
patients' digital health data and enable the integration of knowledge stored in
their digital health profiles with other sources of global knowledge. PHLs can
help empower caregivers and health care providers to make informed decisions
about patients' health by understanding medical events in the context of their
lives. This paper reports the implementation of a mobile health digital
intervention that incorporates both digital health data stored in patients' PHLs
and other sources of contextual knowledge to deliver tailored recommendations
for improving self-care behaviors in diabetic adults. We conducted a thematic
assessment of patient functional and nonfunctional requirements that are
missing from current EHRs based on evidence from the literature. We used the
results to identify the technologies needed to address those requirements. We
describe the technological infrastructures used to construct, manage, and
integrate the types of knowledge stored in the PHL. We leverage the Social
Linked Data (Solid) platform to design a fully decentralized and privacy-aware
platform that supports interoperability and care integration. We provided an
initial prototype design of a PHL and drafted a use case scenario that involves
four actors to demonstrate how the proposed prototype can be used to address
user requirements, including the construction and management of the PHL and its
utilization for developing a mobile app that queries the knowledge stored and
integrated into the PHL in a private and fully decentralized manner to provide
better recommendations. The proposed PHL helps patients and their caregivers
take a central role in making decisions regarding their health and equips their
health care providers with informatics tools that support the collection and
interpretation of the collected knowledge.
| [
{
"created": "Tue, 16 Mar 2021 20:43:17 GMT",
"version": "v1"
}
] | 2021-03-18 | [
[
"Ammar",
"Nariman",
""
],
[
"Bailey",
"James E",
""
],
[
"Davis",
"Robert L",
""
],
[
"Shaban-Nejad",
"Arash",
""
]
] |
2103.09382 | Chuang Niu | Chuang Niu and Hongming Shan and Ge Wang | SPICE: Semantic Pseudo-labeling for Image Clustering | null | IEEE Transactions on Image Processing, 2022 | 10.1109/TIP.2022.3221290 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The similarity among samples and the discrepancy between clusters are two
crucial aspects of image clustering. However, current deep clustering methods
suffer from the inaccurate estimation of either feature similarity or semantic
discrepancy. In this paper, we present a Semantic Pseudo-labeling-based Image
ClustEring (SPICE) framework, which divides the clustering network into a
feature model for measuring the instance-level similarity and a clustering head
for identifying the cluster-level discrepancy. We design two semantics-aware
pseudo-labeling algorithms, prototype pseudo-labeling, and reliable
pseudo-labeling, which enable accurate and reliable self-supervision over
clustering. Without using any ground-truth label, we optimize the clustering
network in three stages: 1) train the feature model through contrastive
learning to measure the instance similarity, 2) train the clustering head with
the prototype pseudo-labeling algorithm to identify cluster semantics, and 3)
jointly train the feature model and clustering head with the reliable
pseudo-labeling algorithm to improve the clustering performance. Extensive
experimental results demonstrate that SPICE achieves significant improvements
(~10%) over existing methods and establishes the new state-of-the-art
clustering results on six image benchmark datasets in terms of three popular
metrics. Importantly, SPICE significantly reduces the gap between unsupervised
and fully-supervised classification; e.g., there is only a 2% (91.8% vs 93.8%)
accuracy difference on CIFAR-10. Our code has been made publicly available at
https://github.com/niuchuangnn/SPICE.
| [
{
"created": "Wed, 17 Mar 2021 00:52:27 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Oct 2021 14:11:41 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jan 2022 14:18:19 GMT",
"version": "v3"
}
] | 2022-11-23 | [
[
"Niu",
"Chuang",
""
],
[
"Shan",
"Hongming",
""
],
[
"Wang",
"Ge",
""
]
] |
2103.09384 | Aditya Challa Dr | Aditya Challa, Sravan Danda, B.S.Daya Sagar and Laurent Najman | Triplet-Watershed for Hyperspectral Image Classification | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp.
1-14, 2022 | 10.1109/TGRS.2021.3113721 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Hyperspectral images (HSI) consist of rich spatial and spectral information,
which can potentially be used for several applications. However, noise, band
correlations and high dimensionality restrict the applicability of such data.
This is recently addressed using creative deep learning network architectures
such as ResNet, SSRN, and A2S2K. However, the last layer, i.e the
classification layer, remains unchanged and is taken to be the softmax
classifier. In this article, we propose to use a watershed classifier.
Watershed classifier extends the watershed operator from Mathematical
Morphology for classification. In its vanilla form, the watershed classifier
does not have any trainable parameters. In this article, we propose a novel
approach to train deep learning networks to obtain representations suitable for
the watershed classifier. The watershed classifier exploits the connectivity
patterns, a characteristic of HSI datasets, for better inference. We show that
exploiting such characteristics allows the Triplet-Watershed to achieve
state-of-the-art results in supervised and semi-supervised contexts. These results
are validated on Indianpines (IP), University of Pavia (UP), Kennedy Space
Center (KSC) and University of Houston (UH) datasets, relying on simple convnet
architecture using a quarter of parameters compared to previous
state-of-the-art networks. The source code for reproducing the experiments and
supplementary material (high resolution images) is available at
https://github.com/ac20/TripletWatershed Code.
| [
{
"created": "Wed, 17 Mar 2021 01:06:49 GMT",
"version": "v1"
},
{
"created": "Sat, 22 May 2021 06:02:17 GMT",
"version": "v2"
},
{
"created": "Sun, 5 Sep 2021 09:11:27 GMT",
"version": "v3"
}
] | 2023-02-23 | [
[
"Challa",
"Aditya",
""
],
[
"Danda",
"Sravan",
""
],
[
"Sagar",
"B. S. Daya",
""
],
[
"Najman",
"Laurent",
""
]
] |
2103.09564 | Dominik Drees | Dominik Drees, Florian Eilers and Xiaoyi Jiang | Hierarchical Random Walker Segmentation for Large Volumetric Biomedical
Images | null | IEEE Trans. Image Process. 31: pp. 4431-4446 (2022) | 10.1109/TIP.2022.3185551 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The random walker method for image segmentation is a popular tool for
semi-automatic image segmentation, especially in the biomedical field. However,
its linear asymptotic run time and memory requirements make application to 3D
datasets of increasing sizes impractical. We propose a hierarchical framework
that, to the best of our knowledge, is the first attempt to overcome these
restrictions for the random walker algorithm and achieves sublinear run time
and constant memory complexity. The goal of this framework is -- rather than
improving the segmentation quality compared to the baseline method -- to make
interactive segmentation on out-of-core datasets possible. The method is
evaluated quantitatively on synthetic data and the CT-ORG dataset, where the
expected improvements in algorithm run time while maintaining high segmentation
quality are confirmed. The incremental (i.e., interaction update) run time is
demonstrated to be in seconds on a standard PC even for volumes of hundreds of
gigabytes in size. In a small case study, the applicability to large real-world
data from current biomedical research is demonstrated. An implementation of the
presented method is publicly available in version 5.2 of the widely used volume
rendering and processing software Voreen (https://www.uni-muenster.de/Voreen/).
| [
{
"created": "Wed, 17 Mar 2021 11:02:44 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Aug 2021 11:56:25 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Aug 2022 13:38:13 GMT",
"version": "v3"
}
] | 2022-08-24 | [
[
"Drees",
"Dominik",
""
],
[
"Eilers",
"Florian",
""
],
[
"Jiang",
"Xiaoyi",
""
]
] |
2103.09568 | Roxana R\u{a}dulescu | Conor F. Hayes, Roxana R\u{a}dulescu, Eugenio Bargiacchi, Johan
K\"allstr\"om, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten,
Luisa M. Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A.
Irissappane, Patrick Mannion, Ann Now\'e, Gabriel Ramos, Marcello Restelli,
Peter Vamplew, Diederik M. Roijers | A Practical Guide to Multi-Objective Reinforcement Learning and Planning | null | Auton Agent Multi-Agent Syst 36, 26 (2022) | 10.1007/s10458-022-09552-y | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world decision-making tasks are generally complex, requiring trade-offs
between multiple, often conflicting, objectives. Despite this, the majority of
research in reinforcement learning and decision-theoretic planning either
assumes only a single objective, or that multiple objectives can be adequately
handled via a simple linear combination. Such approaches may oversimplify the
underlying problem and hence produce suboptimal results. This paper serves as a
guide to the application of multi-objective methods to difficult problems, and
is aimed at researchers who are already familiar with single-objective
reinforcement learning and planning methods and who wish to adopt a multi-objective
perspective on their research, as well as practitioners who encounter
multi-objective decision problems in practice. It identifies the factors that
may influence the nature of the desired solution, and illustrates by example
how these influence the design of multi-objective decision-making systems for
complex problems.
| [
{
"created": "Wed, 17 Mar 2021 11:07:28 GMT",
"version": "v1"
}
] | 2022-04-22 | [
[
"Hayes",
"Conor F.",
""
],
[
"Rădulescu",
"Roxana",
""
],
[
"Bargiacchi",
"Eugenio",
""
],
[
"Källström",
"Johan",
""
],
[
"Macfarlane",
"Matthew",
""
],
[
"Reymond",
"Mathieu",
""
],
[
"Verstraeten",
"Timothy",
""
],
[
"Zintgraf",
"Luisa M.",
""
],
[
"Dazeley",
"Richard",
""
],
[
"Heintz",
"Fredrik",
""
],
[
"Howley",
"Enda",
""
],
[
"Irissappane",
"Athirai A.",
""
],
[
"Mannion",
"Patrick",
""
],
[
"Nowé",
"Ann",
""
],
[
"Ramos",
"Gabriel",
""
],
[
"Restelli",
"Marcello",
""
],
[
"Vamplew",
"Peter",
""
],
[
"Roijers",
"Diederik M.",
""
]
] |
2103.09577 | Justyna P. Zwolak | Brian J. Weber, Sandesh S. Kalantre, Thomas McJunkin, Jacob M. Taylor,
Justyna P. Zwolak | Theoretical bounds on data requirements for the ray-based classification | 10 pages, 5 figures | SN Comput. Sci. 3, 57 (2022) | 10.1007/s42979-021-00921-0 | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of classifying high-dimensional shapes in real-world data grows
in complexity as the dimension of the space increases. For the case of
identifying convex shapes of different geometries, a new classification
framework has recently been proposed in which the intersections of a set of
one-dimensional representations, called rays, with the boundaries of the shape
are used to identify the specific geometry. This ray-based classification (RBC)
has been empirically verified using a synthetic dataset of two- and
three-dimensional shapes (Zwolak et al. in Proceedings of Third Workshop on
Machine Learning and the Physical Sciences (NeurIPS 2020), Vancouver, Canada
[December 11, 2020], arXiv:2010.00500, 2020) and, more recently, has also been
validated experimentally (Zwolak et al., PRX Quantum 2:020335, 2021). Here, we
establish a bound on the number of rays necessary for shape classification,
defined by key angular metrics, for arbitrary convex shapes. For two
dimensions, we derive a lower bound on the number of rays in terms of the
shape's length, diameter, and exterior angles. For convex polytopes in
$\mathbb{R}^N$, we generalize this result to a similar bound given as a
function of the dihedral angle and the geometrical parameters of polygonal
faces. This result enables a different approach for estimating high-dimensional
shapes using substantially fewer data elements than volumetric or surface-based
approaches.
| [
{
"created": "Wed, 17 Mar 2021 11:38:45 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Nov 2021 20:23:36 GMT",
"version": "v2"
},
{
"created": "Sat, 26 Feb 2022 15:56:24 GMT",
"version": "v3"
}
] | 2022-03-01 | [
[
"Weber",
"Brian J.",
""
],
[
"Kalantre",
"Sandesh S.",
""
],
[
"McJunkin",
"Thomas",
""
],
[
"Taylor",
"Jacob M.",
""
],
[
"Zwolak",
"Justyna P.",
""
]
] |
2103.09593 | Samson Tan | Samson Tan, Shafiq Joty | Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots | To be presented at NAACL-HLT 2021. Abstract also published in the
Rising Stars Track of the Workshop on Computational Approaches to Linguistic
Code-Switching (CALCS 2021) | 2021.naacl-main.282 | null | null | cs.CL cs.AI cs.CY cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multilingual models have demonstrated impressive cross-lingual transfer
performance. However, test sets like XNLI are monolingual at the example level.
In multilingual communities, it is common for polyglots to code-mix when
conversing with each other. Inspired by this phenomenon, we present two strong
black-box adversarial attacks (one word-level, one phrase-level) for
multilingual models that push their ability to handle code-mixed sentences to
the limit. The former uses bilingual dictionaries to propose perturbations and
translations of the clean example for sense disambiguation. The latter directly
aligns the clean example with its translations before extracting phrases as
perturbations. Our phrase-level attack has a success rate of 89.75% against
XLM-R-large, bringing its average accuracy of 79.85 down to 8.18 on XNLI.
Finally, we propose an efficient adversarial training scheme that trains in the
same number of steps as the original model and show that it improves model
accuracy.
| [
{
"created": "Wed, 17 Mar 2021 12:20:53 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Apr 2021 09:30:27 GMT",
"version": "v2"
},
{
"created": "Sat, 5 Jun 2021 02:02:07 GMT",
"version": "v3"
}
] | 2021-06-08 | [
[
"Tan",
"Samson",
""
],
[
"Joty",
"Shafiq",
""
]
] |
2103.09627 | Keisuke Fujii | Kosuke Toda, Masakiyo Teranishi, Keisuke Kushiro, Keisuke Fujii | Evaluation of soccer team defense based on prediction models of ball
recovery and being attacked: A pilot study | 15 pages, 5 figures | PLoS One, 17(1) e0263051, 2022 | 10.1371/journal.pone.0263051 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the development of measurement technology, data on the movements of
actual games in various sports can be obtained and used for planning and
evaluating the tactics and strategy. Defense in team sports is generally
difficult to evaluate because of the lack of statistical data. Conventional
evaluation methods based on predictions of scores are considered unreliable
because they predict rare events throughout the game. Besides, it is difficult
to evaluate various plays leading up to a score. In this study, we propose a
method to evaluate team defense from a comprehensive perspective related to
team performance by predicting ball recovery and being attacked, which occur
more frequently than goals, using player actions and positional data of all
players and the ball. Using data from 45 soccer matches, we examined the
relationship between the proposed index and team performance in actual matches
and throughout a season. Results show that the proposed classifiers predicted
the true events (mean F1 score $>$ 0.483) better than the existing classifiers
which were based on rare events or goals (mean F1 score $<$ 0.201). Also, the
proposed index had a moderate correlation with the long-term outcomes of the
season ($r =$ 0.397). These results suggest that the proposed index might be a
more reliable indicator than winning or losing, which includes
accidental factors.
| [
{
"created": "Wed, 17 Mar 2021 13:15:41 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Mar 2021 00:42:56 GMT",
"version": "v2"
},
{
"created": "Sat, 7 May 2022 06:27:09 GMT",
"version": "v3"
}
] | 2022-05-10 | [
[
"Toda",
"Kosuke",
""
],
[
"Teranishi",
"Masakiyo",
""
],
[
"Kushiro",
"Keisuke",
""
],
[
"Fujii",
"Keisuke",
""
]
] |
2103.09656 | Mateusz Jurewicz | Mateusz Jurewicz, Leon Str{\o}mberg-Derczynski | Set-to-Sequence Methods in Machine Learning: a Review | 46 pages of text, with 10 pages of references. Contains 2 tables and
4 figures. Updated version includes expanded notes on method comparison | Journal of Artificial Intelligence Research 71 (2021): 885 - 924 | 10.1613/jair.1.12839 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine learning on sets towards sequential output is an important and
ubiquitous task, with applications ranging from language modeling and
meta-learning to multi-agent strategy games and power grid optimization.
Combining elements of representation learning and structured prediction, its
two primary challenges include obtaining a meaningful, permutation invariant
set representation and subsequently utilizing this representation to output a
complex target permutation. This paper provides a comprehensive introduction to
the field as well as an overview of important machine learning methods tackling
both of these key challenges, with a detailed qualitative comparison of
selected model architectures.
| [
{
"created": "Wed, 17 Mar 2021 13:52:33 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Aug 2021 12:32:05 GMT",
"version": "v2"
}
] | 2021-09-10 | [
[
"Jurewicz",
"Mateusz",
""
],
[
"Strømberg-Derczynski",
"Leon",
""
]
] |
2103.09704 | Jiaye Li | Shichao Zhang, Jiaye Li and Yangding Li | Reachable Distance Function for KNN Classification | null | IEEE Transactions on Knowledge and Data Engineering, 2022 | 10.1109/TKDE.2022.3185149 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A distance function is a main metric for measuring the affinity between two
data points in machine learning. Extant distance functions often provide
unreachable distance values in real applications. This can lead to incorrect
measure of the affinity between data points. This paper proposes a reachable
distance function for KNN classification. The reachable distance function is
not a geometric direct-line distance between two data points. It gives a
consideration to the class attribute of a training dataset when measuring the
affinity between data points. Concretely speaking, the reachable distance
between data points includes their class center distance and real distance. Its
shape looks like "Z", and we also call it a Z distance function. In this way,
the affinity between data points in the same class is always stronger than that
in different classes. In other words, intraclass data points are always closer than
interclass data points. We evaluated the reachable distance with
experiments, and demonstrated that the proposed distance function achieved
better performance in KNN classification.
| [
{
"created": "Wed, 17 Mar 2021 15:01:17 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Jun 2022 06:02:07 GMT",
"version": "v2"
}
] | 2022-07-14 | [
[
"Zhang",
"Shichao",
""
],
[
"Li",
"Jiaye",
""
],
[
"Li",
"Yangding",
""
]
] |
2103.09762 | Gobinda Saha | Gobinda Saha, Isha Garg, Kaushik Roy | Gradient Projection Memory for Continual Learning | Accepted for Oral Presentation at ICLR 2021
https://openreview.net/forum?id=3AOj0RCNC2 | International Conference on Learning Representations (ICLR), 2021 | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to learn continually without forgetting the past tasks is a
desired attribute for artificial learning systems. Existing approaches to
enable such learning in artificial neural networks usually rely on network
growth, importance based weight update or replay of old data from the memory.
In contrast, we propose a novel approach where a neural network learns new
tasks by taking gradient steps in the orthogonal direction to the gradient
subspaces deemed important for the past tasks. We find the bases of these
subspaces by analyzing network representations (activations) after learning
each task with Singular Value Decomposition (SVD) in a single shot manner and
store them in the memory as Gradient Projection Memory (GPM). With qualitative
and quantitative analyses, we show that such orthogonal gradient descent
induces minimum to no interference with the past tasks, thereby mitigates
forgetting. We evaluate our algorithm on diverse image classification datasets
with short and long sequences of tasks and report better or on-par performance
compared to the state-of-the-art approaches.
| [
{
"created": "Wed, 17 Mar 2021 16:31:29 GMT",
"version": "v1"
}
] | 2021-03-18 | [
[
"Saha",
"Gobinda",
""
],
[
"Garg",
"Isha",
""
],
[
"Roy",
"Kaushik",
""
]
] |
2103.09996 | Tajwar Abrar Aleef | Tajwar Abrar Aleef, Ingrid T. Spadinger, Michael D. Peacock, Septimiu
E. Salcudean, S. Sara Mahdavi | Rapid treatment planning for low-dose-rate prostate brachytherapy with
TP-GAN | 10 pages, 2 figures, 2 tables | Medical Image Computing and Computer Assisted Intervention MICCAI
2021, vol 12904. Springer, Cham | 10.1007/978-3-030-87202-1_56 | null | cs.CV physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Treatment planning in low-dose-rate prostate brachytherapy (LDR-PB) aims to
produce arrangement of implantable radioactive seeds that deliver a minimum
prescribed dose to the prostate whilst minimizing toxicity to healthy tissues.
There can be multiple seed arrangements that satisfy this dosimetric criterion,
not all deemed 'acceptable' for implant from a physician's perspective. This
leads to plans that are subjective to the physician's/centre's preference,
planning style, and expertise. We propose a method that aims to reduce this
variability by training a model to learn from a large pool of successful
retrospective LDR-PB data (961 patients) and create consistent plans that mimic
the high-quality manual plans. Our model is based on conditional generative
adversarial networks that use a novel loss function for penalizing the model on
spatial constraints of the seeds. An optional optimizer based on a simulated
annealing (SA) algorithm can be used to further fine-tune the plans if
necessary (determined by the treating physician). Performance analysis was
conducted on 150 test cases demonstrating comparable results to that of the
manual prehistorical plans. On average, the clinical target volume covering
100% of the prescribed dose was 98.9% for our method compared to 99.4% for
manual plans. Moreover, using our model, the planning time was significantly
reduced to an average of 2.5 mins/plan with SA, and less than 3 seconds without
SA. Compared to this, manual planning at our centre takes around 20 mins/plan.
| [
{
"created": "Thu, 18 Mar 2021 03:02:45 GMT",
"version": "v1"
}
] | 2022-05-10 | [
[
"Aleef",
"Tajwar Abrar",
""
],
[
"Spadinger",
"Ingrid T.",
""
],
[
"Peacock",
"Michael D.",
""
],
[
"Salcudean",
"Septimiu E.",
""
],
[
"Mahdavi",
"S. Sara",
""
]
] |
2103.10003 | Ashkan Ebadi | Ashkan Ebadi, Pengcheng Xi, Alexander MacLean, St\'ephane Tremblay,
Sonny Kohli, Alexander Wong | COVIDx-US -- An open-access benchmark dataset of ultrasound imaging data
for AI-driven COVID-19 analytics | 12 pages, 5 figures, to be submitted to Nature Scientific Data | Front. Biosci. (Landmark Ed) 2022, 27(7), 198 | 10.31083/j.fbl2707198 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic continues to have a devastating effect on the health
and well-being of the global population. Apart from the global health crises,
the pandemic has also caused significant economic and financial difficulties
and socio-physiological implications. Effective screening, triage, treatment
planning, and prognostication of outcome plays a key role in controlling the
pandemic. Recent studies have highlighted the role of point-of-care ultrasound
imaging for COVID-19 screening and prognosis, particularly given that it is
non-invasive, globally available, and easy-to-sanitize. Motivated by these
attributes and the promise of artificial intelligence tools to aid clinicians,
we introduce COVIDx-US, an open-access benchmark dataset of COVID-19 related
ultrasound imaging data. The COVIDx-US dataset was curated from multiple
sources and its current version, i.e., v1.2., consists of 150 lung ultrasound
videos and 12,943 processed images of patients with COVID-19 infection,
non-COVID-19 infection, other lung diseases/conditions, as well as
normal control cases. COVIDx-US is the largest open-access fully-curated
dataset of its kind that has been systematically curated, processed, and
validated specifically for the purpose of building and evaluating artificial
intelligence algorithms and models.
| [
{
"created": "Thu, 18 Mar 2021 03:31:33 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 13:51:52 GMT",
"version": "v2"
}
] | 2023-02-08 | [
[
"Ebadi",
"Ashkan",
""
],
[
"Xi",
"Pengcheng",
""
],
[
"MacLean",
"Alexander",
""
],
[
"Tremblay",
"Stéphane",
""
],
[
"Kohli",
"Sonny",
""
],
[
"Wong",
"Alexander",
""
]
] |
2103.10051 | Donghyun Lee | Donghyun Lee, Minkyoung Cho, Seungwon Lee, Joonho Song and Changkyu
Choi | Data-free mixed-precision quantization using novel sensitivity metric | Submission to ICIP2021 | 2021 IEEE International Conference on Image Processing (ICIP),
2021, pp. 1294-1298 | 10.1109/ICIP42928.2021.9506527 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Post-training quantization is a representative technique for compressing
neural networks, making them smaller and more efficient for deployment on edge
devices. However, an inaccessible user dataset often makes it difficult to
ensure the quality of the quantized neural network in practice. In addition,
existing approaches may use a single uniform bit-width across the network,
resulting in significant accuracy degradation at extremely low bit-widths. To
utilize multiple bit-widths, the sensitivity metric plays a key role in balancing
accuracy and compression. In this paper, we propose a novel sensitivity metric
that considers the effect of quantization error on task loss and interaction
with other layers. Moreover, we develop labeled data generation methods that
are not dependent on a specific operation of the neural network. Our
experiments show that the proposed metric better represents quantization
sensitivity, and generated data are more feasible to be applied to
mixed-precision quantization.
| [
{
"created": "Thu, 18 Mar 2021 07:23:21 GMT",
"version": "v1"
}
] | 2022-01-05 | [
[
"Lee",
"Donghyun",
""
],
[
"Cho",
"Minkyoung",
""
],
[
"Lee",
"Seungwon",
""
],
[
"Song",
"Joonho",
""
],
[
"Choi",
"Changkyu",
""
]
] |
2103.10142 | Florian Rehm | Florian Rehm, Sofia Vallecorsa, Vikram Saletore, Hans Pabst, Adel
Chaibi, Valeriu Codreanu, Kerstin Borras, Dirk Kr\"ucker | Reduced Precision Strategies for Deep Learning: A High Energy Physics
Generative Adversarial Network Use Case | Submitted at ICPRAM 2021; from CERN openlab - Intel collaboration | ICPRAM 2021 | 10.5220/0010245002510258 | null | physics.data-an cs.AI hep-ex | http://creativecommons.org/licenses/by/4.0/ | Deep learning is finding its way into high energy physics by replacing
traditional Monte Carlo simulations. However, deep learning still requires an
excessive amount of computational resources. A promising approach to make deep
learning more efficient is to quantize the parameters of the neural networks to
reduced precision. Reduced precision computing is extensively used in modern
deep learning and results in lower inference execution time, a smaller memory
footprint and lower memory bandwidth usage. In this paper we analyse the effects of
low precision inference on a complex deep generative adversarial network model.
The use case which we are addressing is calorimeter detector simulations of
subatomic particle interactions in accelerator based high energy physics. We
employ the novel Intel low precision optimization tool (iLoT) for quantization
and compare the results to the quantized model from TensorFlow Lite. In the
performance benchmark we gain a speed-up of 1.73x on Intel hardware for the
quantized iLoT model compared to the initial, non-quantized model. With
different physics-inspired self-developed metrics, we validate that the
quantized iLoT model shows a lower loss of physical accuracy in comparison to
the TensorFlow Lite model.
| [
{
"created": "Thu, 18 Mar 2021 10:20:23 GMT",
"version": "v1"
}
] | 2021-03-19 | [
[
"Rehm",
"Florian",
""
],
[
"Vallecorsa",
"Sofia",
""
],
[
"Saletore",
"Vikram",
""
],
[
"Pabst",
"Hans",
""
],
[
"Chaibi",
"Adel",
""
],
[
"Codreanu",
"Valeriu",
""
],
[
"Borras",
"Kerstin",
""
],
[
"Krücker",
"Dirk",
""
]
] |
2103.10292 | Veronika Cheplygina | Ga\"el Varoquaux and Veronika Cheplygina | How I failed machine learning in medical imaging -- shortcomings and
recommendations | null | npj Digit. Med. 5, 48 (2022).
https://doi.org/10.1038/s41746-022-00592-y | 10.1038/s41746-022-00592-y | null | eess.IV cs.CV cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Medical imaging is an important research field with many opportunities for
improving patients' health. However, there are a number of challenges that are
slowing down the progress of the field as a whole, such as optimizing for
publication. In this paper we reviewed several problems related to choosing
datasets, methods, evaluation metrics, and publication strategies. With a
review of literature and our own analysis, we show that at every step,
potential biases can creep in. On a positive note, we also see that initiatives
to counteract these problems are already being started. Finally we provide a
broad range of recommendations on how to further address these problems in the
future. For reproducibility, data and code for our analyses are available on
\url{https://github.com/GaelVaroquaux/ml_med_imaging_failures}
| [
{
"created": "Thu, 18 Mar 2021 14:46:35 GMT",
"version": "v1"
},
{
"created": "Thu, 12 May 2022 15:03:28 GMT",
"version": "v2"
}
] | 2022-05-14 | [
[
"Varoquaux",
"Gaël",
""
],
[
"Cheplygina",
"Veronika",
""
]
] |
2103.10390 | Olivier Rukundo | Olivier Rukundo | Challenges of 3D Surface Reconstruction in Capsule Endoscopy | 7 pages, 2 figures | Journal of Clinical Medicine, 2023 | 10.3390/jcm12154955 | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | Essential for improving the accuracy and reliability of bowel cancer
screening, three-dimensional (3D) surface reconstruction using capsule
endoscopy (CE) images remains challenging due to CE hardware and software
limitations. This report generally focuses on challenges associated with 3D
visualization and specifically investigates the impact of the indeterminate
selection of the angle of the line of sight on 3D surfaces. Furthermore, it
demonstrates that impact through 3D surfaces viewed at the same azimuth angles
and different elevation angles of the line of sight. The report concludes that
3D printing of reconstructed 3D surfaces can potentially overcome line of sight
indeterminate selection and 2D screen visual restriction-related errors.
| [
{
"created": "Thu, 18 Mar 2021 17:18:48 GMT",
"version": "v1"
},
{
"created": "Sat, 3 Sep 2022 21:38:31 GMT",
"version": "v2"
},
{
"created": "Sat, 13 May 2023 10:32:05 GMT",
"version": "v3"
},
{
"created": "Thu, 27 Jul 2023 19:21:56 GMT",
"version": "v4"
}
] | 2023-07-31 | [
[
"Rukundo",
"Olivier",
""
]
] |
2103.10489 | Ivan Srba | Ivan Srba, Gabriele Lenzini, Matus Pikuliak, Samuel Pecar | Addressing Hate Speech with Data Science: An Overview from Computer
Science Perspective | null | Wachs S., Koch-Priewe B., Zick A. (eds) Hate Speech -
Multidisziplinare Analysen und Handlungsoptionen. Springer VS, Wiesbaden.
2021 | 10.1007/978-3-658-31793-5_14 | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | From a computer science perspective, addressing on-line hate speech is a
challenging task that is attracting the attention of both industry (mainly
social media platform owners) and academia. In this chapter, we provide an
overview of state-of-the-art data-science approaches - how they define hate
speech, which tasks they solve to mitigate the phenomenon, and how they address
these tasks. We limit our investigation mostly to (semi-)automatic detection of
hate speech, which is the task that the majority of existing computer science
works focus on. Finally, we summarize the challenges and the open problems in
the current data-science research and the future directions in this field. Our
aim is to prepare an easily understandable report, capable of promoting the
multidisciplinary character of hate speech research. Researchers from other
domains (e.g., psychology and sociology) can thus take advantage of the
knowledge achieved in the computer science domain but also contribute back and
help improve how computer science is addressing that urgent and socially
relevant issue which is the prevalence of hate speech in social media.
| [
{
"created": "Thu, 18 Mar 2021 19:19:44 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"Srba",
"Ivan",
""
],
[
"Lenzini",
"Gabriele",
""
],
[
"Pikuliak",
"Matus",
""
],
[
"Pecar",
"Samuel",
""
]
] |
2103.10492 | Jakaria Rabbi | Md. Tahmid Hasan Fuad, Awal Ahmed Fime, Delowar Sikder, Md. Akil
Raihan Iftee, Jakaria Rabbi, Mabrook S. Al-rakhami, Abdu Gumae, Ovishake Sen,
Mohtasim Fuad, and Md. Nazrul Islam | Recent Advances in Deep Learning Techniques for Face Recognition | 32 pages and citation: M. T. H. Fuad et al., "Recent Advances in Deep
Learning Techniques for Face Recognition," in IEEE Access, vol. 9, pp.
99112-99142, 2021, doi: 10.1109/ACCESS.2021.3096136 | in IEEE Access, vol. 9, pp. 99112-99142, 2021 | 10.1109/ACCESS.2021.3096136 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, researchers have proposed many deep learning (DL) methods
for various tasks, and particularly face recognition (FR) made an enormous leap
using these techniques. Deep FR systems benefit from the hierarchical
architecture of the DL methods to learn discriminative face representation.
Therefore, DL techniques significantly improve state-of-the-art performance on
FR systems and encourage diverse and efficient real-world applications. In this
paper, we present a comprehensive analysis of various FR systems that leverage
the different types of DL techniques, and for the study, we summarize 168
recent contributions from this area. We discuss the papers related to different
algorithms, architectures, loss functions, activation functions, datasets,
challenges, improvement ideas, current and future trends of DL-based FR
systems. We provide a detailed discussion of various DL methods to understand
the current state-of-the-art, and then we discuss various activation and loss
functions for the methods. Additionally, we summarize different datasets used
widely for FR tasks and discuss challenges related to illumination, expression,
pose variations, and occlusion. Finally, we discuss improvement ideas, current
and future trends of FR tasks.
| [
{
"created": "Thu, 18 Mar 2021 19:39:12 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jul 2021 16:31:53 GMT",
"version": "v2"
}
] | 2021-07-22 | [
[
"Fuad",
"Md. Tahmid Hasan",
""
],
[
"Fime",
"Awal Ahmed",
""
],
[
"Sikder",
"Delowar",
""
],
[
"Iftee",
"Md. Akil Raihan",
""
],
[
"Rabbi",
"Jakaria",
""
],
[
"Al-rakhami",
"Mabrook S.",
""
],
[
"Gumae",
"Abdu",
""
],
[
"Sen",
"Ovishake",
""
],
[
"Fuad",
"Mohtasim",
""
],
[
"Islam",
"Md. Nazrul",
""
]
] |
2103.10599 | Pratik K. Biswas | Pratik K. Biswas and Aleksandr Iakubovich | Extractive Summarization of Call Transcripts | Journal paper | IEEE Access, 2022 | 10.1109/ACCESS.2022.3221404 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text summarization is the process of extracting the most important
information from the text and presenting it concisely in fewer sentences. A call
transcript is a textual record of a phone conversation
between a customer (caller) and agent(s) (customer representatives). This paper
presents an indigenously developed method that combines topic modeling and
sentence selection with punctuation restoration in condensing ill-punctuated or
un-punctuated call transcripts to produce summaries that are more readable.
Extensive testing, evaluation and comparisons have demonstrated the efficacy of
this summarizer for call transcript summarization.
| [
{
"created": "Fri, 19 Mar 2021 02:40:59 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Apr 2021 18:48:02 GMT",
"version": "v2"
}
] | 2022-11-16 | [
[
"Biswas",
"Pratik K.",
""
],
[
"Iakubovich",
"Aleksandr",
""
]
] |
2103.10642 | Sergio A. Serrano | Sergio A. Serrano, Elizabeth Santiago, Jose Martinez-Carranza, Eduardo
Morales, L. Enrique Sucar | Knowledge-Based Hierarchical POMDPs for Task Planning | null | Journal of Intelligent & Robotic Systems 101 (2021) 1-30 | 10.1007/s10846-021-01348-8 | null | cs.AI cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The main goal in task planning is to build a sequence of actions that takes
an agent from an initial state to a goal state. In robotics, this is
particularly difficult because actions usually have several possible results,
and sensors are prone to produce measurements with error. Partially observable
Markov decision processes (POMDPs) are commonly employed, thanks to their
capacity to model the uncertainty of actions that modify and monitor the state
of a system. However, since solving a POMDP is computationally expensive, their
usage becomes prohibitive for most robotic applications. In this paper, we
propose a task planning architecture for service robotics. In the context of
service robot design, we present a scheme to encode knowledge about the robot
and its environment, that promotes the modularity and reuse of information.
Also, we introduce a new recursive definition of a POMDP that enables our
architecture to autonomously build a hierarchy of POMDPs, so that it can be
used to generate and execute plans that solve the task at hand. Experimental
results show that, in comparison to baseline methods, by following a recursive
hierarchical approach the architecture is able to significantly reduce the
planning time, while maintaining (or even improving) the robustness under
several scenarios that vary in uncertainty and size.
| [
{
"created": "Fri, 19 Mar 2021 05:45:05 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Apr 2021 17:33:30 GMT",
"version": "v2"
}
] | 2021-04-12 | [
[
"Serrano",
"Sergio A.",
""
],
[
"Santiago",
"Elizabeth",
""
],
[
"Martinez-Carranza",
"Jose",
""
],
[
"Morales",
"Eduardo",
""
],
[
"Sucar",
"L. Enrique",
""
]
] |
2103.10656 | Nicolas Gillis | Maryam Abdolali, Nicolas Gillis | Beyond Linear Subspace Clustering: A Comparative Study of Nonlinear
Manifold Clustering Algorithms | 55 pages | Computer Science Review 42, 100435, 2021 | 10.1016/j.cosrev.2021.100435 | null | cs.LG cs.AI cs.CV eess.SP | http://creativecommons.org/licenses/by/4.0/ | Subspace clustering is an important unsupervised clustering approach. It is
based on the assumption that the high-dimensional data points are approximately
distributed around several low-dimensional linear subspaces. The majority of
the prominent subspace clustering algorithms rely on the representation of the
data points as linear combinations of other data points, which is known as a
self-expressive representation. To overcome the restrictive linearity
assumption, numerous nonlinear approaches were proposed to extend successful
subspace clustering approaches to data on a union of nonlinear manifolds. In
this comparative study, we provide a comprehensive overview of nonlinear
subspace clustering approaches proposed in the last decade. We introduce a new
taxonomy to classify the state-of-the-art approaches into three categories,
namely locality preserving, kernel based, and neural network based. The major
representative algorithms within each category are extensively compared on
carefully designed synthetic and real-world data sets. The detailed analysis of
these approaches unfolds potential research directions and unsolved challenges
in this field.
| [
{
"created": "Fri, 19 Mar 2021 06:34:34 GMT",
"version": "v1"
}
] | 2021-12-20 | [
[
"Abdolali",
"Maryam",
""
],
[
"Gillis",
"Nicolas",
""
]
] |
2103.10699 | Aleksandr Petiushko | Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr
Petiushko | MDMMT: Multidomain Multimodal Transformer for Video Retrieval | null | CVPR Workshops 2021: 3354-3363 | 10.1109/CVPRW53098.2021.00374 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new state-of-the-art on the text to video retrieval task on
MSRVTT and LSMDC benchmarks where our model outperforms all previous solutions
by a large margin. Moreover, state-of-the-art results are achieved with a
single model on two datasets without finetuning. This multidomain
generalisation is achieved by a proper combination of different video caption
datasets. We show that training on different datasets can improve test results
of each other. Additionally, we checked the intersection between many popular
datasets and found that MSRVTT has a significant overlap between the test and
the train parts; the same situation is observed for ActivityNet.
| [
{
"created": "Fri, 19 Mar 2021 09:16:39 GMT",
"version": "v1"
}
] | 2021-11-09 | [
[
"Dzabraev",
"Maksim",
""
],
[
"Kalashnikov",
"Maksim",
""
],
[
"Komkov",
"Stepan",
""
],
[
"Petiushko",
"Aleksandr",
""
]
] |
2103.11024 | Andres Karjus | Andres Karjus, Richard A. Blythe, Simon Kirby, Tianyu Wang, Kenny
Smith | Conceptual similarity and communicative need shape colexification: an
experimental study | null | Cognitive Science (2021) 45 e1303 | 10.1111/cogs.13035 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Colexification refers to the phenomenon of multiple meanings sharing one word
in a language. Cross-linguistic colexification patterns have been shown to be
largely predictable, as similar concepts are often colexified. We test a recent
claim that, beyond this general tendency, communicative needs play an important
role in shaping colexification patterns. We approach this question by means of
a series of human experiments, using an artificial language communication game
paradigm. Our results across four experiments match the previous
cross-linguistic findings: all other things being equal, speakers do prefer to
colexify similar concepts. However, we also find evidence supporting the
communicative need hypothesis: when faced with a frequent need to distinguish
similar pairs of meanings, speakers adjust their colexification preferences to
maintain communicative efficiency, and avoid colexifying those similar meanings
which need to be distinguished in communication. This research provides further
evidence to support the argument that languages are shaped by the needs and
preferences of their speakers.
| [
{
"created": "Fri, 19 Mar 2021 21:18:16 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Sep 2021 18:59:56 GMT",
"version": "v2"
}
] | 2021-09-28 | [
[
"Karjus",
"Andres",
""
],
[
"Blythe",
"Richard A.",
""
],
[
"Kirby",
"Simon",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Smith",
"Kenny",
""
]
] |
2103.11059 | Abu Md Niamul Taufique | Abu Md Niamul Taufique, Andreas Savakis, Jonathan Leckenby | Automatic Quantification of Facial Asymmetry using Facial Landmarks | 5 pages, 4 figures | 2019 IEEE Western New York Image and Signal Processing Workshop
(WNYISPW) | 10.1109/WNYIPW.2019.8923078 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One-sided facial paralysis causes uneven movements of facial muscles on the
sides of the face. Physicians currently assess facial asymmetry in a subjective
manner based on their clinical experience. This paper proposes a novel method
to provide an objective and quantitative asymmetry score for frontal faces. Our
metric has the potential to help physicians with diagnosis as well as monitoring
the rehabilitation of patients with one-sided facial paralysis. A deep learning
based landmark detection technique is used to estimate style invariant facial
landmark points and dense optical flow is used to generate motion maps from a
short sequence of frames. Six face regions are considered corresponding to the
left and right parts of the forehead, eyes, and mouth. Motion is computed and
compared between the left and the right parts of each region of interest to
estimate the symmetry score. For testing, asymmetric sequences are
synthetically generated from a facial expression dataset. A score equation is
developed to quantify symmetry in both symmetric and asymmetric face sequences.
| [
{
"created": "Sat, 20 Mar 2021 00:08:37 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Taufique",
"Abu Md Niamul",
""
],
[
"Savakis",
"Andreas",
""
],
[
"Leckenby",
"Jonathan",
""
]
] |
2103.11061 | Abu Md Niamul Taufique | Abu Md Niamul Taufique, Navya Nagananda, Andreas Savakis | Visualization of Deep Transfer Learning In SAR Imagery | 4 pages, 5 figures | IGARSS 2020 - 2020 IEEE International Geoscience and Remote
Sensing Symposium | 10.1109/IGARSS39084.2020.9324490 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic Aperture Radar (SAR) imagery has diverse applications in land and
marine surveillance. Unlike electro-optical (EO) systems, these systems are not
affected by weather conditions and can be used both day and night. With
the growing importance of SAR imagery, it would be desirable if models trained
on widely available EO datasets can also be used for SAR images. In this work,
we consider transfer learning to leverage deep features from a network trained
on an EO ships dataset and generate predictions on SAR imagery. Furthermore, by
exploring the network activations in the form of class-activation maps (CAMs),
we visualize the transfer learning process to SAR imagery and gain insight on
how a deep network interprets a new modality.
| [
{
"created": "Sat, 20 Mar 2021 00:16:15 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Taufique",
"Abu Md Niamul",
""
],
[
"Nagananda",
"Navya",
""
],
[
"Savakis",
"Andreas",
""
]
] |
2103.11070 | Dian Yu | Dian Yu, Zhou Yu, and Kenji Sagae | Attribute Alignment: Controlling Text Generation from Pre-trained
Language Models | null | EMNLP 2021 Findings | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models benefit from training with a large amount of unlabeled
text, which gives them increasingly fluent and diverse generation capabilities.
However, using these models for text generation that takes into account target
attributes, such as sentiment polarity or specific topics, remains a challenge.
We propose a simple and flexible method for controlling text generation by
aligning disentangled attribute representations. In contrast to recent efforts
on training a discriminator to perturb the token level distribution for an
attribute, we use the same data to learn an alignment function to guide the
pre-trained, non-controlled language model to generate texts with the target
attribute without changing the original language model parameters. We evaluate
our method on sentiment- and topic-controlled generation, and show large
performance gains over previous methods while retaining fluency and diversity.
| [
{
"created": "Sat, 20 Mar 2021 01:51:32 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Sep 2021 20:10:29 GMT",
"version": "v2"
}
] | 2021-09-16 | [
[
"Yu",
"Dian",
""
],
[
"Yu",
"Zhou",
""
],
[
"Sagae",
"Kenji",
""
]
] |
2103.11071 | Yuguang Shi | Yuguang Shi, Yu Guo, Zhenqiang Mi, Xinjie Li | Stereo CenterNet based 3D Object Detection for Autonomous Driving | null | Published by Neurocomputing, Volume 471, 30 January 2022, Pages
219-229 | 10.1016/j.neucom.2021.11.048 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, three-dimensional (3D) detection based on stereo images has
progressed remarkably; however, most advanced methods adopt anchor-based
two-dimensional (2D) detection or depth estimation to address this problem.
Nevertheless, high computational cost inhibits these methods from achieving
real-time performance. In this study, we propose a 3D object detection method,
Stereo CenterNet (SC), using geometric information in stereo imagery. SC
predicts the four semantic key points of the 3D bounding box of the object in
space and utilizes 2D left and right boxes, 3D dimension, orientation, and key
points to restore the bounding box of the object in the 3D space. Subsequently,
we adopt an improved photometric alignment module to further optimize the
position of the 3D bounding box. Experiments conducted on the KITTI dataset
indicate that the proposed SC exhibits the best speed-accuracy trade-off among
advanced methods without using extra data.
| [
{
"created": "Sat, 20 Mar 2021 02:18:49 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Apr 2021 16:16:14 GMT",
"version": "v2"
},
{
"created": "Thu, 23 Sep 2021 08:50:58 GMT",
"version": "v3"
}
] | 2021-12-03 | [
[
"Shi",
"Yuguang",
""
],
[
"Guo",
"Yu",
""
],
[
"Mi",
"Zhenqiang",
""
],
[
"Li",
"Xinjie",
""
]
] |
2103.11083 | Hong-Ning Dai Prof. | Ke Zhang, Hanbo Ying, Hong-Ning Dai, Lin Li, Yuangyuang Peng, Keyi
Guo, Hongfang Yu | Compacting Deep Neural Networks for Internet of Things: Methods and
Applications | 25 pages, 11 figures | IEEE Internet of Things Journal, 2021 | 10.1109/JIOT.2021.3063497 | null | cs.LG cs.AI cs.NI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep Neural Networks (DNNs) have shown great success in completing complex
tasks. However, DNNs inevitably bring high computational cost and storage
consumption due to the complexity of hierarchical structures, thereby hindering
their wide deployment in Internet-of-Things (IoT) devices, which have limited
computational capability and storage capacity. Therefore, it is a necessity to
investigate the technologies to compact DNNs. Despite tremendous advances in
compacting DNNs, few surveys summarize compacting-DNNs technologies, especially
for IoT applications. Hence, this paper presents a comprehensive study on
compacting-DNNs technologies. We categorize compacting-DNNs technologies into
three major types: 1) network model compression, 2) Knowledge Distillation
(KD), and 3) modification of network structures. We also elaborate on the diversity
of these approaches and make side-by-side comparisons. Moreover, we discuss the
applications of compacted DNNs in various IoT applications and outline future
directions.
| [
{
"created": "Sat, 20 Mar 2021 03:18:42 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Zhang",
"Ke",
""
],
[
"Ying",
"Hanbo",
""
],
[
"Dai",
"Hong-Ning",
""
],
[
"Li",
"Lin",
""
],
[
"Peng",
"Yuangyuang",
""
],
[
"Guo",
"Keyi",
""
],
[
"Yu",
"Hongfang",
""
]
] |
2103.11110 | Khwaja Monib Sediqi | Khwaja Monib Sediqi, and Hyo Jong Lee | A Novel Upsampling and Context Convolution for Image Semantic
Segmentation | 11 pages, published in sensors journal | Sensors 2021, 21, 2170 | 10.3390/s21062170 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Semantic segmentation, which refers to pixel-wise classification of an image,
is a fundamental topic in computer vision owing to its growing importance in
robot vision and autonomous driving industries. It provides rich information
about objects in the scene such as object boundary, category, and location.
Recent methods for semantic segmentation often employ an encoder-decoder
structure using deep convolutional neural networks. The encoder part extracts
feature of the image using several filters and pooling operations, whereas the
decoder part gradually recovers the low-resolution feature maps of the encoder
into a full input resolution feature map for pixel-wise prediction. However,
the encoder-decoder variants for semantic segmentation suffer from severe
spatial information loss, caused by pooling operations or convolutions with
stride, and do not consider the context in the scene. In this paper, we
propose a dense upsampling convolution method based on guided filtering to
effectively preserve the spatial information of the image in the network. We
further propose a novel local context convolution method that not only covers
larger-scale objects in the scene but covers them densely for precise object
boundary delineation. Theoretical analyses and experimental results on several
benchmark datasets verify the effectiveness of our method. Qualitatively, our
approach delineates object boundaries at a level of accuracy that is beyond the
current excellent methods. Quantitatively, we report a new record of 82.86% and
81.62% of pixel accuracy on ADE20K and Pascal-Context benchmark datasets,
respectively. In comparison with the state-of-the-art methods, the proposed
method offers promising improvements.
| [
{
"created": "Sat, 20 Mar 2021 06:16:42 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Sediqi",
"Khwaja Monib",
""
],
[
"Lee",
"Hyo Jong",
""
]
] |
2103.11189 | Jonne S\"alev\"a | Jonne S\"alev\"a and Constantine Lignos | The Effectiveness of Morphology-aware Segmentation in Low-Resource
Neural Machine Translation | EACL 2021 Student Research Workshop | https://aclanthology.org/2021.eacl-srw.22/ | 10.18653/v1/2021.eacl-srw.22 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper evaluates the performance of several modern subword segmentation
methods in a low-resource neural machine translation setting. We compare
segmentations produced by applying BPE at the token or sentence level with
morphologically-based segmentations from LMVR and MORSEL. We evaluate
translation tasks between English and each of Nepali, Sinhala, and Kazakh, and
predict that using morphologically-based segmentation methods would lead to
better performance in this setting. However, compared to BPE, we find that no
consistent and reliable differences emerge between the segmentation methods.
While morphologically-based methods outperform BPE in a few cases, what
performs best tends to vary across tasks, and the performance of segmentation
methods is often statistically indistinguishable.
| [
{
"created": "Sat, 20 Mar 2021 14:39:25 GMT",
"version": "v1"
}
] | 2024-05-17 | [
[
"Sälevä",
"Jonne",
""
],
[
"Lignos",
"Constantine",
""
]
] |
2103.11271 | Vuong M. Ngo | Vuong M. Ngo and Sven Helmer and Nhien-An Le-Khac and M-Tahar Kechadi | Structural Textile Pattern Recognition and Processing Based on
Hypergraphs | 38 pages, 23 figures | Information Retrieval Journal, Springer, 2021 | 10.1007/s10791-020-09384-y | null | cs.IR cs.CC cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The humanities, like many other areas of society, are currently undergoing
major changes in the wake of digital transformation. However, in order to make
collections of digitised material in this area easily accessible, we often still
lack adequate search functionality. For instance, digital archives for textiles
offer keyword search, which is fairly well understood, and arrange their
content following a certain taxonomy, but search functionality at the level of
thread structure is still missing. To facilitate the clustering and search, we
introduce an approach for recognising similar weaving patterns based on their
structures for textile archives. We first represent textile structures using
hypergraphs and extract multisets of k-neighbourhoods describing weaving
patterns from these graphs. Then, the resulting multisets are clustered using
various distance measures and various clustering algorithms (K-Means for
simplicity and hierarchical agglomerative algorithms for precision). We
evaluate the different variants of our approach experimentally, showing that
this can be implemented efficiently (meaning it has linear complexity), and
demonstrate its quality to query and cluster datasets containing large textile
samples. As, to the best of our knowledge, this is the first practical approach
for explicitly modelling complex and irregular weaving patterns usable for
retrieval, we aim at establishing a solid baseline.
| [
{
"created": "Sun, 21 Mar 2021 00:44:40 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Ngo",
"Vuong M.",
""
],
[
"Helmer",
"Sven",
""
],
[
"Le-Khac",
"Nhien-An",
""
],
[
"Kechadi",
"M-Tahar",
""
]
] |
2103.11276 | Erkan Kayacan | Zhongzhong Zhang, Erkan Kayacan, Benjamin Thompson and Girish
Chowdhary | High precision control and deep learning-based corn stand counting
algorithms for agricultural robot | 14 pages, 9 figures | Autonomous Robots, volume 44, pages 1289-1302, 2020 | 10.1007/s10514-020-09915-y | null | cs.RO cs.AI cs.CV cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | This paper presents high precision control and deep learning-based corn stand
counting algorithms for a low-cost, ultra-compact 3D printed and autonomous
field robot for agricultural operations. Currently, plant traits, such as
emergence rate, biomass, vigor, and stand counting, are measured manually. This
is highly labor-intensive and prone to errors. The robot, termed TerraSentia,
is designed to automate the measurement of plant traits for efficient
phenotyping as an alternative to manual measurements. In this paper, we
formulate a Nonlinear Moving Horizon Estimator (NMHE) that identifies key
terrain parameters using onboard robot sensors and a learning-based Nonlinear
Model Predictive Control (NMPC) that ensures high precision path tracking in
the presence of unknown wheel-terrain interaction. Moreover, we develop a
machine vision algorithm designed to enable an ultra-compact ground robot to
count corn stands by driving through the fields autonomously. The algorithm
leverages a deep network to detect corn plants in images, and a visual tracking
model to re-identify detected objects at different time steps. We collected
data from 53 corn plots in various fields for corn plants around 14 days after
emergence (stage V3 - V4). The robot predictions have agreed well with the
ground truth with $C_{robot}=1.02 \times C_{human}-0.86$ and a correlation
coefficient $R=0.96$. The mean relative error given by the algorithm is
$-3.78\%$, and the standard deviation is $6.76\%$. These results indicate a
first and significant step towards autonomous robot-based real-time phenotyping
using low-cost, ultra-compact ground robots for corn and potentially other
crops.
| [
{
"created": "Sun, 21 Mar 2021 01:13:38 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Zhang",
"Zhongzhong",
""
],
[
"Kayacan",
"Erkan",
""
],
[
"Thompson",
"Benjamin",
""
],
[
"Chowdhary",
"Girish",
""
]
] |
2103.11285 | Charles (A.) Kantor | Charles A. Kantor, Marta Skreta, Brice Rauby, L\'eonard Boussioux,
Emmanuel Jehanno, Alexandra Luccioni, David Rolnick, Hugues Talbot | Geo-Spatiotemporal Features and Shape-Based Prior Knowledge for
Fine-grained Imbalanced Data Classification | Copyright by the authors. All rights reserved to authors only.
Correspondence to: ckantor (at) stanford [dot] edu | Proc. IJCAI 2021, Workshop on AI for Social Good, Harvard
University (2021) | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fine-grained classification aims at distinguishing between items with similar
global perception and patterns, but that differ by minute details. Our primary
challenges come from both small inter-class variations and large intra-class
variations. In this article, we propose to combine several innovations to
improve fine-grained classification within the use-case of wildlife, which is
of practical interest for experts. We utilize geo-spatiotemporal data to enrich
the picture information and further improve the performance. We also
investigate state-of-the-art methods for handling the imbalanced data issue.
| [
{
"created": "Sun, 21 Mar 2021 02:01:38 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Kantor",
"Charles A.",
""
],
[
"Skreta",
"Marta",
""
],
[
"Rauby",
"Brice",
""
],
[
"Boussioux",
"Léonard",
""
],
[
"Jehanno",
"Emmanuel",
""
],
[
"Luccioni",
"Alexandra",
""
],
[
"Rolnick",
"David",
""
],
[
"Talbot",
"Hugues",
""
]
] |
2103.11313 | Bo Pang | Bo Pang, Gao Peng, Yizhuo Li, Cewu Lu | PGT: A Progressive Method for Training Models on Long Videos | CVPR21, Oral | CVPR2021 oral | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Convolutional video models have an order of magnitude larger computational
complexity than their counterpart image-level models. Constrained by
computational resources, there is no model or training method that can train
long video sequences end-to-end. Currently, the mainstream method is to split
a raw video into clips, leading to incomplete fragmentary temporal information
flow. Inspired by natural language processing techniques dealing with long
sentences, we propose to treat videos as serial fragments satisfying the Markov
property, and train them as a whole by progressively propagating information
through the temporal dimension in multiple steps. This progressive training
(PGT) method is able to train long videos end-to-end with limited resources and
ensures the effective transmission of information. As a general and robust
training method, we empirically demonstrate that it yields significant
performance improvements on different models and datasets. As an illustrative
example, the proposed method improves SlowOnly network by 3.7 mAP on Charades
and 1.9 top-1 accuracy on Kinetics with negligible parameter and computation
overhead. Code is available at https://github.com/BoPang1996/PGT.
| [
{
"created": "Sun, 21 Mar 2021 06:15:20 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Pang",
"Bo",
""
],
[
"Peng",
"Gao",
""
],
[
"Li",
"Yizhuo",
""
],
[
"Lu",
"Cewu",
""
]
] |
2103.11338 | Aparna Varde | Anita Pampoore-Thampi, Aparna S. Varde, Danlin Yu | Mining GIS Data to Predict Urban Sprawl | 8 Pages, 13 figures, KDD 2014 conference Bloomberg track | ACM KDD 2014 Conference (Bloomberg Track) | null | null | cs.AI cs.DB stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | This paper addresses the interesting problem of processing and analyzing data
in geographic information systems (GIS) to achieve a clear perspective on urban
sprawl. The term urban sprawl refers to overgrowth and expansion of low-density
areas with issues such as car dependency and segregation between residential
versus commercial use. Sprawl has impacts on the environment and public health.
In our work, spatiotemporal features related to real GIS data on urban sprawl
such as population growth and demographics are mined to discover knowledge for
decision support. We adapt data mining algorithms, Apriori for association rule
mining and J4.8 for decision tree classification to geospatial analysis,
deploying the ArcGIS tool for mapping. Knowledge discovered by mining this
spatiotemporal data is used to implement a prototype spatial decision support
system (SDSS). This SDSS predicts whether urban sprawl is likely to occur.
Further, it estimates the values of pertinent variables to understand how the
variables impact each other. The SDSS can help decision-makers identify
problems and create solutions for avoiding future sprawl occurrence and
conducting urban planning where sprawl already occurs, thus aiding sustainable
development. This work falls in the broad realm of geospatial intelligence and
sets the stage for designing a large scale SDSS to process big data in complex
environments, which constitutes part of our future work.
| [
{
"created": "Sun, 21 Mar 2021 08:41:35 GMT",
"version": "v1"
}
] | 2024-09-30 | [
[
"Pampoore-Thampi",
"Anita",
""
],
[
"Varde",
"Aparna S.",
""
],
[
"Yu",
"Danlin",
""
]
] |
2103.11357 | Andreas Holzinger | Andr\'e M. Carrington, Douglas G. Manuel, Paul W. Fieguth, Tim Ramsay,
Venet Osmani, Bernhard Wernly, Carol Bennett, Steven Hawken, Matthew McInnes,
Olivia Magwood, Yusuf Sheikh, Andreas Holzinger | Deep ROC Analysis and AUC as Balanced Average Accuracy to Improve Model
Selection, Understanding and Interpretation | 14 pages, 6 Figures, submitted to IEEE Transactions on Pattern
Analysis and Machine Intelligence (TPAMI), currently under review | IEEE Transactions on Pattern Analysis and Machine Intelligence
2022 | 10.1109/TPAMI.2022.3145392 | null | stat.ME cs.AI cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Optimal performance is critical for decision-making tasks from medicine to
autonomous driving, however common performance measures may be too general or
too specific. For binary classifiers, diagnostic tests or prognosis at a
timepoint, measures such as the area under the receiver operating
characteristic curve, or the area under the precision recall curve, are too
general because they include unrealistic decision thresholds. On the other
hand, measures such as accuracy, sensitivity, or the F1 score are measures at a
single threshold that reflect a single probability or predicted risk for an
individual, rather than a range of individuals or risks. We propose a method in
between, deep ROC analysis, that examines groups of probabilities or predicted
risks for more insightful analysis. We translate esoteric measures into
familiar terms: AUC and the normalized concordant partial AUC are balanced
average accuracy (a new finding); the normalized partial AUC is average
sensitivity; and the normalized horizontal partial AUC is average specificity.
Along with post-test measures, we provide a method that can improve model
selection in some cases and provide interpretation and assurance for patients
in each risk group. We demonstrate deep ROC analysis in two case studies and
provide a toolkit in Python.
| [
{
"created": "Sun, 21 Mar 2021 10:27:35 GMT",
"version": "v1"
}
] | 2022-01-28 | [
[
"Carrington",
"André M.",
""
],
[
"Manuel",
"Douglas G.",
""
],
[
"Fieguth",
"Paul W.",
""
],
[
"Ramsay",
"Tim",
""
],
[
"Osmani",
"Venet",
""
],
[
"Wernly",
"Bernhard",
""
],
[
"Bennett",
"Carol",
""
],
[
"Hawken",
"Steven",
""
],
[
"McInnes",
"Matthew",
""
],
[
"Magwood",
"Olivia",
""
],
[
"Sheikh",
"Yusuf",
""
],
[
"Holzinger",
"Andreas",
""
]
] |
2103.11388 | Antonios Liapis | Konstantinos Sfikas and Antonios Liapis | Collaborative Agent Gameplay in the Pandemic Board Game | 11 pages | Proceedings of the Foundations of Digital Games Conference, 2020 | 10.1145/3402942.3402943 | null | cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While artificial intelligence has been applied to control players' decisions
in board games for over half a century, little attention is given to games with
no player competition. Pandemic is an exemplar collaborative board game where
all players coordinate to overcome challenges posed by events occurring during
the game's progression. This paper proposes an artificial agent which controls
all players' actions and balances chances of winning versus risk of losing in
this highly stochastic environment. The agent applies a Rolling Horizon
Evolutionary Algorithm on an abstraction of the game-state that lowers the
branching factor and simulates the game's stochasticity. Results show that the
proposed algorithm can find winning strategies more consistently in different
games of varying difficulty. The impact of a number of state evaluation metrics
is explored, balancing between optimistic strategies that favor winning and
pessimistic strategies that guard against losing.
| [
{
"created": "Sun, 21 Mar 2021 13:18:20 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Sfikas",
"Konstantinos",
""
],
[
"Liapis",
"Antonios",
""
]
] |
2103.11390 | Gijs Van Tulder | Gijs van Tulder, Yao Tong, Elena Marchiori | Multi-view analysis of unregistered medical images using cross-view
transformers | Conference paper presented at MICCAI 2021. Code available via
https://vantulder.net/code/2021/miccai-transformers/ | In: M. de Bruijne et al. (Eds.): MICCAI 2021, LNCS 12903, pp.
104-113, Springer Nature Switzerland, 2021 | 10.1007/978-3-030-87199-4_10 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view medical image analysis often depends on the combination of
information from multiple views. However, differences in perspective or other
forms of misalignment can make it difficult to combine views effectively, as
registration is not always possible. Without registration, views can only be
combined at a global feature level, by joining feature vectors after global
pooling. We present a novel cross-view transformer method to transfer
information between unregistered views at the level of spatial feature maps. We
demonstrate this method on multi-view mammography and chest X-ray datasets. On
both datasets, we find that a cross-view transformer that links spatial feature
maps can outperform a baseline model that joins feature vectors after global
pooling.
| [
{
"created": "Sun, 21 Mar 2021 13:29:51 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Sep 2021 17:14:21 GMT",
"version": "v2"
}
] | 2021-09-24 | [
[
"van Tulder",
"Gijs",
""
],
[
"Tong",
"Yao",
""
],
[
"Marchiori",
"Elena",
""
]
] |
2103.11408 | Raviraj Joshi | Atharva Kulkarni, Meet Mandhane, Manali Likhitkar, Gayatri Kshirsagar,
Raviraj Joshi | L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset | Accepted at WASSA@EACL 2021 | https://www.aclweb.org/anthology/2021.wassa-1.23/ | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentiment analysis is one of the most fundamental tasks in Natural Language
Processing. Popular languages like English, Arabic, Russian, Mandarin, and also
Indian languages such as Hindi, Bengali, Tamil have seen a significant amount
of work in this area. However, the Marathi language, which is the third most
popular language in India, still lags behind due to the absence of proper
datasets. In this paper, we present the first major publicly available Marathi
Sentiment Analysis Dataset - L3CubeMahaSent. It is curated using tweets
extracted from various Maharashtrian personalities' Twitter accounts. Our
dataset consists of ~16,000 distinct tweets classified in three broad classes
viz. positive, negative, and neutral. We also present the guidelines using
which we annotated the tweets. Finally, we present the statistics of our
dataset and baseline classification results using CNN, LSTM, ULMFiT, and
BERT-based deep learning models.
| [
{
"created": "Sun, 21 Mar 2021 14:22:13 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Apr 2021 07:15:12 GMT",
"version": "v2"
}
] | 2021-06-29 | [
[
"Kulkarni",
"Atharva",
""
],
[
"Mandhane",
"Meet",
""
],
[
"Likhitkar",
"Manali",
""
],
[
"Kshirsagar",
"Gayatri",
""
],
[
"Joshi",
"Raviraj",
""
]
] |
2103.11575 | Jonathan Francis | James Herman, Jonathan Francis, Siddha Ganju, Bingqing Chen, Anirudh
Koul, Abhinav Gupta, Alexey Skabelkin, Ivan Zhukov, Max Kumskoy, Eric Nyberg | Learn-to-Race: A Multimodal Control Environment for Autonomous Racing | Accepted to the International Conference on Computer Vision (ICCV
2021); equal contribution - JH and JF; 15 pages, 4 figures | International Conference on Computer Vision (ICCV), 2021 | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing research on autonomous driving primarily focuses on urban driving,
which is insufficient for characterising the complex driving behaviour
underlying high-speed racing. At the same time, existing racing simulation
frameworks struggle to capture realism with respect to visual rendering,
vehicular dynamics, and task objectives, inhibiting the transfer of learning
agents to real-world contexts. We introduce a new environment, where agents
Learn-to-Race (L2R) in simulated competition-style racing, using multimodal
information--from virtual cameras to a comprehensive array of inertial
measurement sensors. Our environment, which includes a simulator and an
interfacing training framework, accurately models vehicle dynamics and racing
conditions. In this paper, we release the Arrival simulator for autonomous
racing. Next, we propose the L2R task with challenging metrics, inspired by
learning-to-drive challenges, Formula-style racing, and multimodal trajectory
prediction for autonomous driving. Additionally, we provide the L2R framework
suite, facilitating simulated racing on high-precision models of real-world
tracks. Finally, we provide an official L2R task dataset of expert
demonstrations, as well as a series of baseline experiments and reference
implementations. We make all code available:
https://github.com/learn-to-race/l2r.
| [
{
"created": "Mon, 22 Mar 2021 04:03:06 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 19:52:52 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Aug 2021 13:35:14 GMT",
"version": "v3"
}
] | 2021-11-04 | [
[
"Herman",
"James",
""
],
[
"Francis",
"Jonathan",
""
],
[
"Ganju",
"Siddha",
""
],
[
"Chen",
"Bingqing",
""
],
[
"Koul",
"Anirudh",
""
],
[
"Gupta",
"Abhinav",
""
],
[
"Skabelkin",
"Alexey",
""
],
[
"Zhukov",
"Ivan",
""
],
[
"Kumskoy",
"Max",
""
],
[
"Nyberg",
"Eric",
""
]
] |
2103.11652 | Sijia Wen | Sijia Wen, Yingqiang Zheng, Feng Lu | Polarization Guided Specular Reflection Separation | null | IEEE Transactions on Image Processing 2021 | 10.1109/TIP.2021.3104188 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since specular reflection often exists in the real captured images and causes
deviation between the recorded color and intrinsic color, specular reflection
separation can bring advantages to multiple applications that require
consistent object surface appearance. However, because the color of an object is
significantly influenced by the color of the illumination, existing
research still suffers from the near-duplicate challenge, that is, the
separation becomes unstable when the illumination color is close to the surface
color. In this paper, we derive a polarization guided model to incorporate the
polarization information into a designed iteration optimization separation
strategy to separate the specular reflection. Based on the analysis of
polarization, we propose a polarization guided model to generate a polarization
chromaticity image, which is able to reveal the geometrical profile of the
input image in complex scenarios, such as diverse illumination. The
polarization chromaticity image can accurately cluster the pixels with similar
diffuse color. We further use the specular separation of all these clusters as
an implicit prior to ensure that the diffuse components will not be mistakenly
separated as the specular components. With the polarization guided model, we
reformulate the specular reflection separation into a unified optimization
function which can be solved by the ADMM strategy. The specular reflection will
be detected and separated jointly by RGB and polarimetric information. Both
qualitative and quantitative experimental results have shown that our method
can faithfully separate the specular reflection, especially in some challenging
scenarios.
| [
{
"created": "Mon, 22 Mar 2021 08:22:28 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jan 2022 07:26:13 GMT",
"version": "v2"
}
] | 2022-01-26 | [
[
"Wen",
"Sijia",
""
],
[
"Zheng",
"Yingqiang",
""
],
[
"Lu",
"Feng",
""
]
] |
2103.11715 | Antonios Liapis | Antonios Liapis, Hector P. Martinez, Julian Togelius and Georgios N.
Yannakakis | Transforming Exploratory Creativity with DeLeNoX | 8 pages | Proceedings of the Fourth International Conference on
Computational Creativity, 2013, pages 56-63 | null | null | cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce DeLeNoX (Deep Learning Novelty Explorer), a system that
autonomously creates artifacts in constrained spaces according to its own
evolving interestingness criterion. DeLeNoX proceeds in alternating phases of
exploration and transformation. In the exploration phases, a version of novelty
search augmented with constraint handling searches for maximally diverse
artifacts using a given distance function. In the transformation phases, a deep
learning autoencoder learns to compress the variation between the found
artifacts into a lower-dimensional space. The newly trained encoder is then
used as the basis for a new distance function, transforming the criteria for
the next exploration phase. In the current paper, we apply DeLeNoX to the
creation of spaceships suitable for use in two-dimensional arcade-style
computer games, a representative problem in procedural content generation in
games. We also situate DeLeNoX in relation to the distinction between
exploratory and transformational creativity, and in relation to Schmidhuber's
theory of creativity through the drive for compression progress.
| [
{
"created": "Mon, 22 Mar 2021 10:39:29 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Liapis",
"Antonios",
""
],
[
"Martinez",
"Hector P.",
""
],
[
"Togelius",
"Julian",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] |
2103.11775 | Alexander Mathis | S\'ebastien B. Hausmann and Alessandro Marin Vargas and Alexander
Mathis and Mackenzie W. Mathis | Measuring and modeling the motor system with machine learning | null | Current Opinion in Neurobiology 2021 | 10.1016/j.conb.2021.04.004 | null | q-bio.QM cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The utility of machine learning in understanding the motor system is
promising a revolution in how to collect, measure, and analyze data. The field
of movement science already elegantly incorporates theory and engineering
principles to guide experimental work, and in this review we discuss the
growing use of machine learning: from pose estimation, kinematic analyses,
dimensionality reduction, and closed-loop feedback, to its use in understanding
neural correlates and untangling sensorimotor systems. We also give our
perspective on new avenues where markerless motion capture combined with
biomechanical modeling and neural networks could be a new platform for
hypothesis-driven research.
| [
{
"created": "Mon, 22 Mar 2021 12:42:16 GMT",
"version": "v1"
}
] | 2021-09-16 | [
[
"Hausmann",
"Sébastien B.",
""
],
[
"Vargas",
"Alessandro Marin",
""
],
[
"Mathis",
"Alexander",
""
],
[
"Mathis",
"Mackenzie W.",
""
]
] |
2103.11863 | Zahra Nili Ahmadabadi | Karan Sridharan, Patrick McNamee, Zahra Nili Ahmadabadi, Jeffrey
Hudack | Online search of unknown terrains using a dynamical system-based path
planning approach | null | J Intell Robot Syst 106, 21 (2022) | 10.1007/s10846-022-01707-z | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Surveillance and exploration of large environments is a tedious task. In
spaces with limited environmental cues, random-like search is an effective
approach as it allows the robot to perform online coverage of environments
using simple algorithm designs. One way to generate random-like scanning search
is to use nonlinear dynamical systems to impart chaos into the searching
robot's controller. This will result in the generation of unpredictable yet
deterministic trajectories, allowing designers to control the system and
achieve a high scanning coverage of an area. However, the unpredictability
comes at the cost of increased coverage time and a lack of scalability, both of
which have been ignored by the state-of-the-art chaotic path planners. This
work introduces a new, scalable technique that helps a robot to steer away from
the obstacles and cover the entire search space in a short period of time. The
technique involves coupling and manipulating two chaotic systems to reduce the
coverage time and enable scanning of unknown environments with different online
properties. Using this new technique resulted in an average 49% boost in the
robot's performance compared to the state-of-the-art planners. The overall
search performance of the chaotic planner remained comparable to optimal
systems while still ensuring unpredictable paths.
| [
{
"created": "Mon, 22 Mar 2021 14:00:04 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Nov 2022 21:13:32 GMT",
"version": "v2"
}
] | 2022-11-15 | [
[
"Sridharan",
"Karan",
""
],
[
"McNamee",
"Patrick",
""
],
[
"Ahmadabadi",
"Zahra Nili",
""
],
[
"Hudack",
"Jeffrey",
""
]
] |
2103.11909 | Jan Philip Wahle | Jan Philip Wahle, Terry Ruas, Tom\'a\v{s} Folt\'ynek, Norman Meuschke,
Bela Gipp | Identifying Machine-Paraphrased Plagiarism | null | iConference 2022 | 10.1007/978-3-030-96957-8_34 | null | cs.CL cs.AI cs.DL | http://creativecommons.org/licenses/by-sa/4.0/ | Employing paraphrasing tools to conceal plagiarized text is a severe threat
to academic integrity. To enable the detection of machine-paraphrased text, we
evaluate the effectiveness of five pre-trained word embedding models combined
with machine-learning classifiers and eight state-of-the-art neural language
models. We analyzed preprints of research papers, graduation theses, and
Wikipedia articles, which we paraphrased using different configurations of the
tools SpinBot and SpinnerChief. The best-performing technique, Longformer,
achieved an average F1 score of 81.0% (F1=99.7% for SpinBot and F1=71.6% for
SpinnerChief cases), while human evaluators achieved F1=78.4% for SpinBot and
F1=65.6% for SpinnerChief cases. We show that the automated classification
alleviates shortcomings of widely-used text-matching systems, such as Turnitin
and PlagScan. To facilitate future research, all data, code, and two web
applications showcasing our contributions are openly available at
https://github.com/jpwahle/iconf22-paraphrase.
| [
{
"created": "Mon, 22 Mar 2021 14:54:54 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Nov 2021 15:31:34 GMT",
"version": "v2"
},
{
"created": "Mon, 29 Nov 2021 15:34:34 GMT",
"version": "v3"
},
{
"created": "Fri, 29 Apr 2022 12:31:15 GMT",
"version": "v4"
},
{
"created": "Thu, 3 Nov 2022 11:20:10 GMT",
"version": "v5"
},
{
"created": "Thu, 10 Nov 2022 10:53:22 GMT",
"version": "v6"
},
{
"created": "Sat, 25 Feb 2023 12:52:21 GMT",
"version": "v7"
}
] | 2023-10-24 | [
[
"Wahle",
"Jan Philip",
""
],
[
"Ruas",
"Terry",
""
],
[
"Foltýnek",
"Tomáš",
""
],
[
"Meuschke",
"Norman",
""
],
[
"Gipp",
"Bela",
""
]
] |
2103.11921 | Darsh Shah | Darsh J Shah, Lili Yu, Tao Lei and Regina Barzilay | Nutri-bullets: Summarizing Health Studies by Composing Segments | 12 pages | AAAI 2021 Camera Ready | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We introduce \emph{Nutri-bullets}, a multi-document summarization task for
health and nutrition. First, we present two datasets of food and health
summaries from multiple scientific studies. Furthermore, we propose a novel
\emph{extract-compose} model to solve the problem in the regime of limited
parallel data. We explicitly select key spans from several abstracts using a
policy network, followed by composing the selected spans to present a summary
via a task specific language model. Compared to state-of-the-art methods, our
approach leads to more faithful, relevant and diverse summarization --
properties imperative to this application. For instance, on the BreastCancer
dataset our approach gets a more than 50\% improvement on relevance and
faithfulness.\footnote{Our code and data is available at
\url{https://github.com/darsh10/Nutribullets.}}
| [
{
"created": "Mon, 22 Mar 2021 15:08:46 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Shah",
"Darsh J",
""
],
[
"Yu",
"Lili",
""
],
[
"Lei",
"Tao",
""
],
[
"Barzilay",
"Regina",
""
]
] |
2103.12021 | Paria Rashidinejad | Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, Stuart Russell | Bridging Offline Reinforcement Learning and Imitation Learning: A Tale
of Pessimism | null | Published at NeurIPS 2021 and IEEE Transactions on Information
Theory | null | null | cs.LG cs.AI math.OC math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline (or batch) reinforcement learning (RL) algorithms seek to learn an
optimal policy from a fixed dataset without active data collection. Based on
the composition of the offline dataset, two main categories of methods are
used: imitation learning which is suitable for expert datasets and vanilla
offline RL which often requires uniform coverage datasets. From a practical
standpoint, datasets often deviate from these two extremes and the exact data
composition is usually unknown a priori. To bridge this gap, we present a new
offline RL framework that smoothly interpolates between the two extremes of
data composition, hence unifying imitation learning and vanilla offline RL. The
new framework is centered around a weak version of the concentrability
coefficient that measures the deviation from the behavior policy to the expert
policy alone.
Under this new framework, we further investigate the question on algorithm
design: can one develop an algorithm that achieves a minimax optimal rate and
also adapts to unknown data composition? To address this question, we consider
a lower confidence bound (LCB) algorithm developed based on pessimism in the
face of uncertainty in offline RL. We study finite-sample properties of LCB as
well as information-theoretic limits in multi-armed bandits, contextual
bandits, and Markov decision processes (MDPs). Our analysis reveals surprising
facts about optimality rates. In particular, in all three settings, LCB
achieves a faster rate of $1/N$ for nearly-expert datasets compared to the
usual rate of $1/\sqrt{N}$ in offline RL, where $N$ is the number of samples in
the batch dataset. In the case of contextual bandits with at least two
contexts, we prove that LCB is adaptively optimal for the entire data
composition range, achieving a smooth transition from imitation learning to
offline RL. We further show that LCB is almost adaptively optimal in MDPs.
| [
{
"created": "Mon, 22 Mar 2021 17:27:08 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jul 2023 04:47:42 GMT",
"version": "v2"
}
] | 2023-07-04 | [
[
"Rashidinejad",
"Paria",
""
],
[
"Zhu",
"Banghua",
""
],
[
"Ma",
"Cong",
""
],
[
"Jiao",
"Jiantao",
""
],
[
"Russell",
"Stuart",
""
]
] |
2103.12028 | Pedro Ortiz Suarez | Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch,
Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov,
Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb,
Beno\^it Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey
Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo
Rubungo, Toan Q. Nguyen, Mathias M\"uller, Andr\'e M\"uller, Shamsuddeen
Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov,
Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine
Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile
Dlamini, Nisansa de Silva, Sakine \c{C}abuk Ball{\i}, Stella Biderman,
Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe
Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia,
Sweta Agrawal, Mofetoluwa Adeyemi | Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets | Accepted at TACL; pre-MIT Press publication version | Transactions of the Association for Computational Linguistics
(2022) 10: 50-72 | 10.1162/tacl_a_00447 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the success of large-scale pre-training and multilingual modeling in
Natural Language Processing (NLP), recent years have seen a proliferation of
large, web-mined text datasets covering hundreds of languages. We manually
audit the quality of 205 language-specific corpora released with five major
public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource
corpora have systematic issues: At least 15 corpora have no usable text, and a
significant fraction contains less than 50% of sentences of acceptable quality. In
addition, many are mislabeled or use nonstandard/ambiguous language codes. We
demonstrate that these issues are easy to detect even for non-proficient
speakers, and supplement the human audit with automatic analyses. Finally, we
recommend techniques to evaluate and improve multilingual corpora and discuss
potential risks that come with low-quality data releases.
| [
{
"created": "Mon, 22 Mar 2021 17:30:33 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Apr 2021 19:38:25 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Oct 2021 21:15:29 GMT",
"version": "v3"
},
{
"created": "Mon, 21 Feb 2022 16:41:38 GMT",
"version": "v4"
}
] | 2022-02-22 | [
[
"Kreutzer",
"Julia",
""
],
[
"Caswell",
"Isaac",
""
],
[
"Wang",
"Lisa",
""
],
[
"Wahab",
"Ahsan",
""
],
[
"van Esch",
"Daan",
""
],
[
"Ulzii-Orshikh",
"Nasanbayar",
""
],
[
"Tapo",
"Allahsera",
""
],
[
"Subramani",
"Nishant",
""
],
[
"Sokolov",
"Artem",
""
],
[
"Sikasote",
"Claytone",
""
],
[
"Setyawan",
"Monang",
""
],
[
"Sarin",
"Supheakmungkol",
""
],
[
"Samb",
"Sokhar",
""
],
[
"Sagot",
"Benoît",
""
],
[
"Rivera",
"Clara",
""
],
[
"Rios",
"Annette",
""
],
[
"Papadimitriou",
"Isabel",
""
],
[
"Osei",
"Salomey",
""
],
[
"Suarez",
"Pedro Ortiz",
""
],
[
"Orife",
"Iroro",
""
],
[
"Ogueji",
"Kelechi",
""
],
[
"Rubungo",
"Andre Niyongabo",
""
],
[
"Nguyen",
"Toan Q.",
""
],
[
"Müller",
"Mathias",
""
],
[
"Müller",
"André",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Muhammad",
"Nanda",
""
],
[
"Mnyakeni",
"Ayanda",
""
],
[
"Mirzakhalov",
"Jamshidbek",
""
],
[
"Matangira",
"Tapiwanashe",
""
],
[
"Leong",
"Colin",
""
],
[
"Lawson",
"Nze",
""
],
[
"Kudugunta",
"Sneha",
""
],
[
"Jernite",
"Yacine",
""
],
[
"Jenny",
"Mathias",
""
],
[
"Firat",
"Orhan",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Dlamini",
"Sakhile",
""
],
[
"de Silva",
"Nisansa",
""
],
[
"Ballı",
"Sakine Çabuk",
""
],
[
"Biderman",
"Stella",
""
],
[
"Battisti",
"Alessia",
""
],
[
"Baruwa",
"Ahmed",
""
],
[
"Bapna",
"Ankur",
""
],
[
"Baljekar",
"Pallavi",
""
],
[
"Azime",
"Israel Abebe",
""
],
[
"Awokoya",
"Ayodele",
""
],
[
"Ataman",
"Duygu",
""
],
[
"Ahia",
"Orevaoghene",
""
],
[
"Ahia",
"Oghenefego",
""
],
[
"Agrawal",
"Sweta",
""
],
[
"Adeyemi",
"Mofetoluwa",
""
]
] |
2103.12057 | Pedro Lara-Ben\'itez | Pedro Lara-Ben\'itez, Manuel Carranza-Garc\'ia and Jos\'e C. Riquelme | An Experimental Review on Deep Learning Architectures for Time Series
Forecasting | null | International Journal of Neural Systems, Vol. 31, No. 3 (2021)
2130001 | 10.1142/S0129065721300011 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, deep learning techniques have outperformed traditional
models in many machine learning tasks. Deep neural networks have successfully
been applied to address time series forecasting problems, which is a very
important topic in data mining. They have proved to be an effective solution
given their capacity to automatically learn the temporal dependencies present
in time series. However, selecting the most convenient type of deep neural
network and its parametrization is a complex task that requires considerable
expertise. Therefore, there is a need for deeper studies on the suitability of
all existing architectures for different forecasting tasks. In this work, we
face two main challenges: a comprehensive review of the latest works using deep
learning for time series forecasting; and an experimental study comparing the
performance of the most popular architectures. The comparison involves a
thorough analysis of seven types of deep learning models in terms of accuracy
and efficiency. We evaluate the rankings and distribution of results obtained
with the proposed models under many different architecture configurations and
training hyperparameters. The datasets used comprise more than 50000 time
series divided into 12 different forecasting problems. By training more than
38000 models on these data, we provide the most extensive deep learning study
for time series forecasting. Among all studied models, the results show that
long short-term memory (LSTM) and convolutional networks (CNN) are the best
alternatives, with LSTMs obtaining the most accurate forecasts. CNNs achieve
comparable performance with less variability of results under different
parameter configurations, while also being more efficient.
| [
{
"created": "Mon, 22 Mar 2021 17:58:36 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Apr 2021 16:59:09 GMT",
"version": "v2"
}
] | 2021-04-09 | [
[
"Lara-Benítez",
"Pedro",
""
],
[
"Carranza-García",
"Manuel",
""
],
[
"Riquelme",
"José C.",
""
]
] |
2103.12069 | Kieran Greer Dr | Kieran Greer | Exemplars can Reciprocate Principal Components | null | WSEAS Transactions on Computers, ISSN / E-ISSN: 1109-2750 /
2224-2872, Volume 20, 2021, Art. #4, pp. 30-38 | 10.37394/23205.2021.20.4 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a clustering algorithm that is an extension of the
Category Trees algorithm. Category Trees is a clustering method that creates
tree structures that branch on category type and not feature. The development
in this paper is to consider a secondary order of clustering that is not the
category to which the data row belongs, but the tree, representing a single
classifier, that it is eventually clustered with. Each tree branches to store
subsets of other categories, but the rows in those subsets may also be related.
This paper is therefore concerned with looking at that second level of
clustering between the other category subsets, to try to determine if there is
any consistency over it. It is argued that Principal Components may be a
related and reciprocal type of structure, and there is an even bigger question
about the relation between exemplars and principal components, in general. The
theory is demonstrated using the Portugal Forest Fires dataset as a case study.
The Category Trees are then combined with other Self-Organising algorithms from
the author and it is suggested that they all belong to the same family type,
which is an Entropy-style of classifier.
| [
{
"created": "Mon, 22 Mar 2021 12:46:29 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Apr 2021 13:19:12 GMT",
"version": "v2"
}
] | 2021-04-23 | [
[
"Greer",
"Kieran",
""
]
] |
2103.12169 | Stefan Kuhn | Stefan Kuhn, Eda Tumer, Simon Colreavy-Donnelly, Ricardo Moreira
Borges | A Pilot Study For Fragment Identification Using 2D NMR and Deep Learning | 11 pages, 3 figures, 3 tables | Magn Reson Chem 2021, 1 | 10.1002/MRC.5212 | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents a method to identify substructures in NMR spectra of
mixtures, specifically 2D spectra, using a bespoke image-based Convolutional
Neural Network application. This is done using HSQC and HMBC spectra separately
and in combination. The application can reliably detect substructures in pure
compounds, using a simple network. It can work for mixtures when trained on
pure compounds only. HMBC data and the combination of HMBC and HSQC show better
results than HSQC alone.
| [
{
"created": "Thu, 18 Mar 2021 20:25:41 GMT",
"version": "v1"
}
] | 2021-11-01 | [
[
"Kuhn",
"Stefan",
""
],
[
"Tumer",
"Eda",
""
],
[
"Colreavy-Donnelly",
"Simon",
""
],
[
"Borges",
"Ricardo Moreira",
""
]
] |
2103.12201 | Shlok Mishra | Shlok Kumar Mishra and Kuntal Sengupta and Max Horowitz-Gelb and
Wen-Sheng Chu and Sofien Bouaziz and David Jacobs | Improved Detection of Face Presentation Attacks Using Image
Decomposition | Conference - IJCB | 2022 IEEE international joint conference on biometrics (IJCB)
(ORAL) | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Presentation attack detection (PAD) is a critical component in secure face
authentication. We present a PAD algorithm to distinguish face spoofs generated
by a photograph of a subject from live images. Our method uses an image
decomposition network to extract albedo and normal. The domain gap between the
real and spoof face images leads to easily identifiable differences, especially
between the recovered albedo maps. We enhance this domain gap by retraining
existing methods using supervised contrastive loss. We present empirical and
theoretical analysis that demonstrates that contrast and lighting effects can
play a significant role in PAD; these effects show up particularly in the recovered
albedo. Finally, we demonstrate that by combining all of these methods we
achieve state-of-the-art results on both intra-dataset testing for
CelebA-Spoof, OULU, CASIA-SURF datasets and inter-dataset setting on SiW,
CASIA-MFSD, Replay-Attack and MSU-MFSD datasets.
| [
{
"created": "Mon, 22 Mar 2021 22:15:17 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Dec 2022 06:44:05 GMT",
"version": "v2"
}
] | 2022-12-02 | [
[
"Mishra",
"Shlok Kumar",
""
],
[
"Sengupta",
"Kuntal",
""
],
[
"Horowitz-Gelb",
"Max",
""
],
[
"Chu",
"Wen-Sheng",
""
],
[
"Bouaziz",
"Sofien",
""
],
[
"Jacobs",
"David",
""
]
] |
2103.12242 | Ali Ayub | Ali Ayub, Alan R. Wagner | F-SIOL-310: A Robotic Dataset and Benchmark for Few-Shot Incremental
Object Learning | Fixed the link to dataset | IEEE International Conference on Robotics and Automation (ICRA)
2021 | 10.1109/ICRA48506.2021.9561509 | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning has achieved remarkable success in object recognition tasks
through the availability of large scale datasets like ImageNet. However, deep
learning systems suffer from catastrophic forgetting when learning
incrementally without replaying old data. For real-world applications, robots
also need to incrementally learn new objects. Further, since robots have
limited human assistance available, they must learn from only a few examples.
However, very few object recognition datasets and benchmarks exist to test
incremental learning capability for robotic vision. Further, there is no
dataset or benchmark specifically designed for incremental object learning from
a few examples. To fill this gap, we present a new dataset termed F-SIOL-310
(Few-Shot Incremental Object Learning) which is specifically captured for
testing few-shot incremental object learning capability for robotic vision. We
also provide benchmarks and evaluations of 8 incremental learning algorithms on
F-SIOL-310 for future comparisons. Our results demonstrate that the few-shot
incremental object learning problem for robotic vision is far from being
solved.
| [
{
"created": "Tue, 23 Mar 2021 00:25:50 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Nov 2021 05:55:53 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Apr 2022 20:54:22 GMT",
"version": "v3"
}
] | 2022-04-22 | [
[
"Ayub",
"Ali",
""
],
[
"Wagner",
"Alan R.",
""
]
] |
2103.12311 | Hanwen Cao | Hanwen Cao, Hao-Shu Fang, Wenhai Liu, Cewu Lu | SuctionNet-1Billion: A Large-Scale Benchmark for Suction Grasping | null | IEEE ROBOTICS AND AUTOMATION LETTERS, VOL. 6, NO. 4, 2021 | 10.1109/LRA.2021.3115406 | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Suction is an important solution for the longstanding robotic grasping
problem. Compared with other kinds of grasping, suction grasping is easier to
represent and often more reliable in practice. Though preferred in many
scenarios, it is not fully investigated and lacks sufficient training data and
evaluation benchmarks. To address that, firstly, we propose a new physical
model to analytically evaluate seal formation and wrench resistance of a
suction grasping, which are two key aspects of grasp success. Secondly, a
two-step methodology is adopted to generate annotations on a large-scale
dataset collected in real-world cluttered scenarios. Thirdly, a standard online
evaluation system is proposed to evaluate suction poses in continuous operation
space, which can benchmark different algorithms fairly without the need of
exhaustive labeling. Real-robot experiments are conducted to show that our
annotations align well with the real world. Meanwhile, we propose a method to
predict numerous suction poses from an RGB-D image of a cluttered scene and
demonstrate our superiority against several previous methods. Result analyses
are further provided to help readers better understand the challenges in this
area. Data and source code are publicly available at www.graspnet.net.
| [
{
"created": "Tue, 23 Mar 2021 05:02:52 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Oct 2021 06:19:54 GMT",
"version": "v2"
}
] | 2021-11-01 | [
[
"Cao",
"Hanwen",
""
],
[
"Fang",
"Hao-Shu",
""
],
[
"Liu",
"Wenhai",
""
],
[
"Lu",
"Cewu",
""
]
] |
2103.12450 | Jan Philip Wahle | Jan Philip Wahle, Terry Ruas, Norman Meuschke, Bela Gipp | Are Neural Language Models Good Plagiarists? A Benchmark for Neural
Paraphrase Detection | null | JCDL 2021 | 10.1109/JCDL52503.2021.00065 | null | cs.CL cs.AI cs.DL | http://creativecommons.org/licenses/by-sa/4.0/ | The rise of language models such as BERT allows for high-quality text
paraphrasing. This poses a problem for academic integrity, as it is difficult to
differentiate between original and machine-generated content. We propose a
benchmark consisting of paraphrased articles using recent language models
relying on the Transformer architecture. Our contribution fosters future
research of paraphrase detection systems as it offers a large collection of
aligned original and paraphrased documents, a study regarding its structure,
classification experiments with state-of-the-art systems, and we make our
findings publicly available.
| [
{
"created": "Tue, 23 Mar 2021 11:01:35 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Nov 2021 14:23:29 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Apr 2022 12:30:35 GMT",
"version": "v3"
},
{
"created": "Thu, 3 Nov 2022 11:43:41 GMT",
"version": "v4"
},
{
"created": "Thu, 10 Nov 2022 10:54:09 GMT",
"version": "v5"
}
] | 2023-10-24 | [
[
"Wahle",
"Jan Philip",
""
],
[
"Ruas",
"Terry",
""
],
[
"Meuschke",
"Norman",
""
],
[
"Gipp",
"Bela",
""
]
] |
2103.12474 | Osama Makansi | Osama Makansi, \"Ozg\"un Cicek, Yassine Marrakchi, and Thomas Brox | On Exposing the Challenging Long Tail in Future Prediction of Traffic
Actors | null | ICCV 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Predicting the states of dynamic traffic actors into the future is important
for autonomous systems to operate safely and efficiently. Remarkably, the most
critical scenarios are much less frequent and more complex than the
uncritical ones. Therefore, uncritical cases dominate the prediction. In this
paper, we address specifically the challenging scenarios at the long tail of
the dataset distribution. Our analysis shows that the common losses tend to
place challenging cases suboptimally in the embedding space. As a consequence,
we propose to supplement the usual loss with a loss that places challenging
cases closer to each other. This triggers sharing information among challenging
cases and learning specific predictive features. We show on four public datasets
that this leads to improved performance on the challenging scenarios while the
overall performance stays stable. The approach is agnostic w.r.t. the used
network architecture, input modality or viewpoint, and can be integrated into
existing solutions easily. Code is available at
https://github.com/lmb-freiburg/Contrastive-Future-Trajectory-Prediction
| [
{
"created": "Tue, 23 Mar 2021 11:56:15 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Mar 2021 10:29:42 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Aug 2021 12:58:37 GMT",
"version": "v3"
}
] | 2022-01-19 | [
[
"Makansi",
"Osama",
""
],
[
"Cicek",
"Özgün",
""
],
[
"Marrakchi",
"Yassine",
""
],
[
"Brox",
"Thomas",
""
]
] |
2103.12489 | Chenguo Lin | Ruowei Wang, Chenguo Lin, Qijun Zhao, Feiyu Zhu | Watermark Faker: Towards Forgery of Digital Image Watermarking | 6 pages; accepted by ICME2021 | International Conference on Multimedia and Expo (ICME) 2021 | 10.1109/ICME51207.2021.9428410 | null | cs.CR cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital watermarking has been widely used to protect the copyright and
integrity of multimedia data. Previous studies mainly focus on designing
watermarking techniques that are robust to attacks that destroy the embedded
watermarks. However, the emerging deep learning based image generation
technology raises the new open issue of whether it is possible to generate fake
watermarked images for circumvention. In this paper, we make the first attempt
to develop digital image watermark fakers by using generative adversarial
learning. Assuming that a set of paired original and watermarked images
generated by the targeted watermarker is available, we use them to
train a watermark faker with U-Net as the backbone, whose input is an original
image, and after a domain-specific preprocessing, it outputs a fake watermarked
image. Our experiments show that the proposed watermark faker can effectively
crack digital image watermarkers in both spatial and frequency domains,
suggesting the risk of such forgery attacks.
| [
{
"created": "Tue, 23 Mar 2021 12:28:00 GMT",
"version": "v1"
}
] | 2022-04-20 | [
[
"Wang",
"Ruowei",
""
],
[
"Lin",
"Chenguo",
""
],
[
"Zhao",
"Qijun",
""
],
[
"Zhu",
"Feiyu",
""
]
] |
2103.12576 | Negar Safinianaini | Negar Safinianaini and Henrik Bostr\"om | Towards interpretability of Mixtures of Hidden Markov Models | null | AAAI Workshop XAI (2021) 4-10 | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixtures of Hidden Markov Models (MHMMs) are frequently used for clustering
of sequential data. An important aspect of MHMMs, as of any clustering
approach, is that they can be interpretable, allowing for novel insights to be
gained from the data. However, without a proper way of measuring
interpretability, the evaluation of novel contributions is difficult and it
becomes practically impossible to devise techniques that directly optimize this
property. In this work, an information-theoretic measure (entropy) is proposed
for interpretability of MHMMs, and based on that, a novel approach to improve
model interpretability is proposed, i.e., an entropy-regularized Expectation
Maximization (EM) algorithm. The new approach aims for reducing the entropy of
the Markov chains (involving state transition matrices) within an MHMM, i.e.,
assigning higher weights to common state transitions during clustering. It is
argued that this entropy reduction, in general, leads to improved
interpretability since the most influential and important state transitions of
the clusters can be more easily identified. An empirical investigation shows
that it is possible to improve the interpretability of MHMMs, as measured by
entropy, without sacrificing (but rather improving) clustering performance and
computational costs, as measured by the v-measure and number of EM iterations,
respectively.
| [
{
"created": "Tue, 23 Mar 2021 14:25:03 GMT",
"version": "v1"
}
] | 2021-03-24 | [
[
"Safinianaini",
"Negar",
""
],
[
"Boström",
"Henrik",
""
]
] |
2103.12622 | Julio Marco | Julio Marco, Adrian Jarabo, Ji Hyun Nam, Xiaochun Liu, Miguel \'Angel
Cosculluela, Andreas Velten, Diego Gutierrez | Virtual Light Transport Matrices for Non-Line-Of-Sight Imaging | ICCV 2021 (Oral) | Proceedings of the IEEE/CVF International Conference on Computer
Vision (ICCV), 2021, pp. 2440-2449 | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The light transport matrix (LTM) is an instrumental tool in line-of-sight
(LOS) imaging, describing how light interacts with the scene and enabling
applications such as relighting or separation of illumination components. We
introduce a framework to estimate the LTM of non-line-of-sight (NLOS)
scenarios, coupling recent virtual forward light propagation models for NLOS
imaging with the LOS light transport equation. We design computational
projector-camera setups, and use these virtual imaging systems to estimate the
transport matrix of hidden scenes. We introduce the specific illumination
functions to compute the different elements of the matrix, overcoming the
challenging wide-aperture conditions of NLOS setups. Our NLOS light transport
matrix allows us to (re)illuminate specific locations of a hidden scene, and
separate direct, first-order indirect, and higher-order indirect illumination
of complex cluttered hidden scenes, similar to existing LOS techniques.
| [
{
"created": "Tue, 23 Mar 2021 15:17:45 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Oct 2021 15:59:03 GMT",
"version": "v2"
}
] | 2021-10-06 | [
[
"Marco",
"Julio",
""
],
[
"Jarabo",
"Adrian",
""
],
[
"Nam",
"Ji Hyun",
""
],
[
"Liu",
"Xiaochun",
""
],
[
"Cosculluela",
"Miguel Ángel",
""
],
[
"Velten",
"Andreas",
""
],
[
"Gutierrez",
"Diego",
""
]
] |
2103.12715 | Andr\'e Cruz | Andr\'e F. Cruz, Pedro Saleiro, Catarina Bel\'em, Carlos Soares, Pedro
Bizarro | Promoting Fairness through Hyperparameter Optimization | arXiv admin note: substantial text overlap with arXiv:2010.03665 | 2021 IEEE International Conference on Data Mining (ICDM) | 10.1109/ICDM51629.2021.00119 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considerable research effort has been guided towards algorithmic fairness but
real-world adoption of bias reduction techniques is still scarce. Existing
methods are either metric- or model-specific, require access to sensitive
attributes at inference time, or carry high development or deployment costs.
This work explores the unfairness that emerges when optimizing ML models solely
for predictive performance, and how to mitigate it with a simple and easily
deployed intervention: fairness-aware hyperparameter optimization (HO). We
propose and evaluate fairness-aware variants of three popular HO algorithms:
Fair Random Search, Fair TPE, and Fairband. We validate our approach on a
real-world bank account opening fraud case-study, as well as on three datasets
from the fairness literature. Results show that, without extra training cost,
it is feasible to find models with 111% mean fairness increase and just 6%
decrease in performance when compared with fairness-blind HO.
| [
{
"created": "Tue, 23 Mar 2021 17:36:22 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Oct 2021 14:08:24 GMT",
"version": "v2"
}
] | 2022-07-13 | [
[
"Cruz",
"André F.",
""
],
[
"Saleiro",
"Pedro",
""
],
[
"Belém",
"Catarina",
""
],
[
"Soares",
"Carlos",
""
],
[
"Bizarro",
"Pedro",
""
]
] |
2103.12924 | Abu Md Niamul Taufique | Abu Md Niamul Taufique, Breton Minnehan, Andreas Savakis | Benchmarking Deep Trackers on Aerial Videos | 25 pages, 10 figures, 7 tables | Sensors 2020, 20(2), 547 | 10.3390/s20020547 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, deep learning-based visual object trackers have achieved
state-of-the-art performance on several visual object tracking benchmarks.
However, most tracking benchmarks are focused on ground level videos, whereas
aerial tracking presents a new set of challenges. In this paper, we compare ten
trackers based on deep learning techniques on four aerial datasets. We choose
top performing trackers utilizing different approaches, specifically tracking
by detection, discriminative correlation filters, Siamese networks and
reinforcement learning. In our experiments, we use a subset of OTB2015 dataset
with aerial style videos; the UAV123 dataset without synthetic sequences; the
UAV20L dataset, which contains 20 long sequences; and DTB70 dataset as our
benchmark datasets. We compare the advantages and disadvantages of different
trackers in different tracking situations encountered in aerial data. Our
findings indicate that the trackers perform significantly worse in aerial
datasets compared to standard ground level videos. We attribute this effect to
smaller target size, camera motion, significant camera rotation with respect to
the target, out of view movement, and clutter in the form of occlusions or
similar-looking distractors near the tracked object.
| [
{
"created": "Wed, 24 Mar 2021 01:45:19 GMT",
"version": "v1"
}
] | 2021-03-25 | [
[
"Taufique",
"Abu Md Niamul",
""
],
[
"Minnehan",
"Breton",
""
],
[
"Savakis",
"Andreas",
""
]
] |
2103.12995 | Yi Zhang | Zexin Lu, Wenjun Xia, Yongqiang Huang, Hongming Shan, Hu Chen, Jiliu
Zhou, Yi Zhang | MANAS: Multi-Scale and Multi-Level Neural Architecture Search for
Low-Dose CT Denoising | null | IEEE Transactions on Medical Imaging, 42(3), 850-863, 2023 | 10.1109/TMI.2022.3219286 | null | physics.med-ph cs.CV | http://creativecommons.org/licenses/by/4.0/ | Lowering the radiation dose in computed tomography (CT) can greatly reduce
the potential risk to public health. However, the reconstructed images from the
dose-reduced CT or low-dose CT (LDCT) suffer from severe noise, compromising
the subsequent diagnosis and analysis. Recently, convolutional neural networks
have achieved promising results in removing noise from LDCT images; the network
architectures used are either handcrafted or built on top of conventional
networks such as ResNet and U-Net. Recent advances in neural network
architecture search (NAS) have proved that the network architecture has a
dramatic effect on the model performance, which indicates that current network
architectures for LDCT may be sub-optimal. Therefore, in this paper, we make
the first attempt to apply NAS to LDCT and propose a multi-scale and
multi-level NAS for LDCT denoising, termed MANAS. On the one hand, the proposed
MANAS fuses features extracted by different scale cells to capture multi-scale
image structural details. On the other hand, the proposed MANAS can search a
hybrid cell- and network-level structure for better performance. Extensive
experimental results on three different dose levels demonstrate that the
proposed MANAS can achieve better performance in terms of preserving image
structural details than several state-of-the-art methods. In addition, we also
validate the effectiveness of the multi-scale and multi-level architecture for
LDCT denoising.
| [
{
"created": "Wed, 24 Mar 2021 05:41:01 GMT",
"version": "v1"
}
] | 2023-03-07 | [
[
"Lu",
"Zexin",
""
],
[
"Xia",
"Wenjun",
""
],
[
"Huang",
"Yongqiang",
""
],
[
"Shan",
"Hongming",
""
],
[
"Chen",
"Hu",
""
],
[
"Zhou",
"Jiliu",
""
],
[
"Zhang",
"Yi",
""
]
] |
2103.12996 | Zhanghao Sun | Zhanghao Sun, Ronald Quan, Olav Solgaard | Resonant Scanning Design and Control for Fast Spatial Sampling | 16 pages, 11 figures | Sci Rep 11, 20011 (2021) | 10.1038/s41598-021-99373-y | null | eess.IV cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two-dimensional, resonant scanners have been utilized in a large variety of
imaging modules due to their compact form, low power consumption, large angular
range, and high speed. However, resonant scanners have problems with
non-optimal and inflexible scanning patterns and inherent phase uncertainty,
which limit practical applications. Here we propose methods for optimized
design and control of the scanning trajectory of two-dimensional resonant
scanners under various physical constraints, including high frame-rate and
limited actuation amplitude. First, we propose an analytical design rule for
uniform spatial sampling. We demonstrate theoretically and experimentally that
by including non-repeating scanning patterns, the proposed designs outperform
previous designs in terms of scanning range and fill factor. Second, we show
that we can create flexible scanning patterns that allow focusing on
user-defined Regions-of-Interest (RoI) by modulation of the scanning
parameters. The scanning parameters are found by an optimization algorithm. In
simulations, we demonstrate the benefits of these designs with standard metrics
and higher-level computer vision tasks (LiDAR odometry and 3D object
detection). Finally, we experimentally implement and verify both unmodulated
and modulated scanning modes using a two-dimensional, resonant MEMS scanner.
Central to the implementations is high bandwidth monitoring of the phase of the
angular scans in both dimensions. This task is carried out with a
position-sensitive photodetector combined with high-bandwidth electronics,
enabling fast spatial sampling at ~ 100Hz frame-rate.
| [
{
"created": "Wed, 24 Mar 2021 05:44:48 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Aug 2021 07:07:06 GMT",
"version": "v2"
}
] | 2023-05-05 | [
[
"Sun",
"Zhanghao",
""
],
[
"Quan",
"Ronald",
""
],
[
"Solgaard",
"Olav",
""
]
] |
2103.13003 | Tobias Schlagenhauf | Tobias Schlagenhauf, Magnus Landwehr, Juergen Fleischer | Industrial Machine Tool Component Surface Defect Dataset | 7 pages, 13 figures | Data in Brief, 39, 107643 (2021) | 10.1016/j.dib.2021.107643 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | Using machine learning (ML) techniques in general and deep learning
techniques in particular requires a certain amount of data, often not available in
large quantities in technical domains. The manual inspection of machine tool
components and the manual end-of-line check of products are labor-intensive
tasks in industrial applications that companies often want to automate. To
automate classification processes and develop reliable and robust machine
learning-based classification and wear prognostics models, one needs real-world
datasets to train and test the models. The dataset is available under
https://doi.org/10.5445/IR/1000129520.
| [
{
"created": "Wed, 24 Mar 2021 06:17:21 GMT",
"version": "v1"
}
] | 2022-02-22 | [
[
"Schlagenhauf",
"Tobias",
""
],
[
"Landwehr",
"Magnus",
""
],
[
"Fleischer",
"Juergen",
""
]
] |
2103.13019 | Christof Sch\"och | Christof Sch\"och | Topic Modeling Genre: An Exploration of French Classical and
Enlightenment Drama | 11 figures | Digital Humanities Quarterly, 11.2, 2017 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The concept of literary genre is a highly complex one: not only are different
genres frequently defined on several, but not necessarily the same levels of
description, but consideration of genres as cognitive, social, or scholarly
constructs with a rich history further complicates the matter. This contribution
focuses on thematic aspects of genre with a quantitative approach, namely Topic
Modeling. Topic Modeling has proven to be useful to discover thematic patterns
and trends in large collections of texts, with a view to class or browse them
on the basis of their dominant themes. It has rarely, if ever, been
applied to collections of dramatic texts.
In this contribution, Topic Modeling is used to analyze a collection of
French Drama of the Classical Age and the Enlightenment. The general aim of
this contribution is to discover what semantic types of topics are found in
this collection, whether different dramatic subgenres have distinctive dominant
topics and plot-related topic patterns, and inversely, to what extent
clustering methods based on topic scores per play produce groupings of texts
which agree with more conventional genre distinctions. This contribution shows
that interesting topic patterns can be detected which provide new insights into
the thematic, subgenre-related structure of French drama as well as into the
history of French drama of the Classical Age and the Enlightenment.
| [
{
"created": "Wed, 24 Mar 2021 06:57:00 GMT",
"version": "v1"
}
] | 2021-03-25 | [
[
"Schöch",
"Christof",
""
]
] |
2103.13043 | Gaochang Wu | Gaochang Wu, Yebin Liu, Lu Fang, Qionghai Dai, Tianyou Chai | Light Field Reconstruction Using Convolutional Network on EPI and
Extended Applications | Published in IEEE TPAMI, 2019 | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2019 | 10.1109/TPAMI.2018.2845393 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel convolutional neural network (CNN)-based framework is
developed for light field reconstruction from a sparse set of views. We
indicate that the reconstruction can be efficiently modeled as angular
restoration on an epipolar plane image (EPI). The main problem in direct
reconstruction on the EPI involves an information asymmetry between the spatial
and angular dimensions, where the detailed portion in the angular dimensions is
damaged by undersampling. Directly upsampling or super-resolving the light
field in the angular dimensions causes ghosting effects. To suppress these
ghosting effects, we contribute a novel "blur-restoration-deblur" framework.
First, the "blur" step is applied to extract the low-frequency components of
the light field in the spatial dimensions by convolving each EPI slice with a
selected blur kernel. Then, the "restoration" step is implemented by a CNN,
which is trained to restore the angular details of the EPI. Finally, we use a
non-blind "deblur" operation to recover the spatial high frequencies suppressed
by the EPI blur. We evaluate our approach on several datasets, including
synthetic scenes, real-world scenes and challenging microscope light field
data. We demonstrate the high performance and robustness of the proposed
framework compared with state-of-the-art algorithms. We further show extended
applications, including depth enhancement and interpolation for unstructured
input. More importantly, a novel rendering approach is presented by combining
the proposed framework and depth information to handle large disparities.
| [
{
"created": "Wed, 24 Mar 2021 08:16:32 GMT",
"version": "v1"
}
] | 2021-03-25 | [
[
"Wu",
"Gaochang",
""
],
[
"Liu",
"Yebin",
""
],
[
"Fang",
"Lu",
""
],
[
"Dai",
"Qionghai",
""
],
[
"Chai",
"Tianyou",
""
]
] |
2103.13275 | Khalid Alnajjar | Khalid Alnajjar | When Word Embeddings Become Endangered | null | In M. H\"am\"al\"ainen, N. Partanen, & K. Alnajjar (Eds.),
Multilingual Facilitation (pp. 275-288). University of Helsinki (2021) | 10.31885/9789515150257.24 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Big languages such as English and Finnish have many natural language
processing (NLP) resources and models, but this is not the case for
low-resourced and endangered languages as such resources are so scarce despite
the great advantages they would provide for the language communities. The most
common types of resources available for low-resourced and endangered languages
are translation dictionaries and universal dependencies. In this paper, we
present a method for constructing word embeddings for endangered languages
using existing word embeddings of different resource-rich languages and the
translation dictionaries of resource-poor languages. Thereafter, the embeddings
are fine-tuned using the sentences in the universal dependencies and aligned to
match the semantic spaces of the big languages; resulting in cross-lingual
embeddings. The endangered languages we work with here are Erzya, Moksha,
Komi-Zyrian and Skolt Sami. Furthermore, we build a universal sentiment
analysis model for all the languages that are part of this study, whether
endangered or not, by utilizing cross-lingual word embeddings. The evaluation
conducted shows that our word embeddings for endangered languages are
well-aligned with the resource-rich languages, and they are suitable for
training task-specific models as demonstrated by our sentiment analysis model
which achieved a high accuracy. All our cross-lingual word embeddings and the
sentiment analysis model have been released openly via an easy-to-use Python
library.
| [
{
"created": "Wed, 24 Mar 2021 15:42:53 GMT",
"version": "v1"
}
] | 2021-03-25 | [
[
"Alnajjar",
"Khalid",
""
]
] |
2103.13282 | Alexander Mathis | Daniel Joska and Liam Clark and Naoya Muramatsu and Ricardo Jericevich
and Fred Nicolls and Alexander Mathis and Mackenzie W. Mathis and Amir Patel | AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs
in the Wild | Code and data can be found at:
https://github.com/African-Robotics-Unit/AcinoSet | 2021 IEEE International Conference on Robotics and Automation
(ICRA), 2021, pp. 13901-13908 | 10.1109/ICRA48506.2021.9561338 | null | cs.CV cs.SY eess.SY q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Animals are capable of extreme agility, yet understanding their complex
dynamics, which have ecological, biomechanical and evolutionary implications,
remains challenging. Being able to study this incredible agility will be
critical for the development of next-generation autonomous legged robots. In
particular, the cheetah (Acinonyx jubatus) is supremely fast and maneuverable,
yet quantifying its whole-body 3D kinematic data during locomotion in the wild
remains a challenge, even with new deep learning-based methods. In this work we
present an extensive dataset of free-running cheetahs in the wild, called
AcinoSet, that contains 119,490 frames of multi-view synchronized high-speed
video footage, camera calibration files and 7,588 human-annotated frames. We
utilize markerless animal pose estimation to provide 2D keypoints. Then, we use
three methods that serve as strong baselines for 3D pose estimation tool
development: traditional sparse bundle adjustment, an Extended Kalman Filter,
and a trajectory optimization-based method we call Full Trajectory Estimation.
The resulting 3D trajectories, human-checked 3D ground truth, and an
interactive tool to inspect the data are also provided. We believe this dataset
will be useful for a diverse range of fields such as ecology, neuroscience,
robotics, biomechanics as well as computer vision.
| [
{
"created": "Wed, 24 Mar 2021 15:54:11 GMT",
"version": "v1"
}
] | 2021-12-22 | [
[
"Joska",
"Daniel",
""
],
[
"Clark",
"Liam",
""
],
[
"Muramatsu",
"Naoya",
""
],
[
"Jericevich",
"Ricardo",
""
],
[
"Nicolls",
"Fred",
""
],
[
"Mathis",
"Alexander",
""
],
[
"Mathis",
"Mackenzie W.",
""
],
[
"Patel",
"Amir",
""
]
] |
2103.13339 | Faraz Lotfi Dr | Faraz Lotfi, Farnoosh Faraji, Hamid D. Taghirad | Object Localization Through a Single Multiple-Model Convolutional Neural
Network with a Specific Training Approach | null | Applied Soft Computing, Volume 115, January 2022, 108166 | 10.1016/j.asoc.2021.108166 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object localization has a vital role in any object detector, and therefore,
has been the focus of attention by many researchers. In this article, a special
training approach is proposed for a light convolutional neural network (CNN) to
determine the region of interest (ROI) in an image while effectively reducing
the number of probable anchor boxes. Almost all CNN-based detectors utilize a
fixed input size image, which may yield poor performance when dealing with
various object sizes. In this paper, a different CNN structure is proposed
taking three different input sizes, to enhance the performance. In order to
demonstrate the effectiveness of the proposed method, two common data sets are
used for training, while a tracking-by-localization application is considered to
demonstrate its final performance. The promising results indicate the
applicability of the presented structure and the training method in practice.
| [
{
"created": "Wed, 24 Mar 2021 16:52:01 GMT",
"version": "v1"
}
] | 2023-05-18 | [
[
"Lotfi",
"Faraz",
""
],
[
"Faraji",
"Farnoosh",
""
],
[
"Taghirad",
"Hamid D.",
""
]
] |
2103.13427 | Eleonora Giunchiglia | Eleonora Giunchiglia and Thomas Lukasiewicz | Multi-Label Classification Neural Networks with Hard Logical Constraints | arXiv admin note: text overlap with arXiv:2010.10151 | J. Artif. Intell. Res. 72 (2021) 759--818 | 10.1613/jair.1.12850 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label classification (MC) is a standard machine learning problem in
which a data point can be associated with a set of classes. A more challenging
scenario is given by hierarchical multi-label classification (HMC) problems, in
which every prediction must satisfy a given set of hard constraints expressing
subclass relationships between classes. In this paper, we propose C-HMCNN(h), a
novel approach for solving HMC problems, which, given a network h for the
underlying MC problem, exploits the hierarchy information in order to produce
predictions coherent with the constraints and to improve performance.
Furthermore, we extend the logic used to express HMC constraints in order to be
able to specify more complex relations among the classes and propose a new
model CCN(h), which extends C-HMCNN(h) and is again able to satisfy and exploit
the constraints to improve performance. We conduct an extensive experimental
analysis showing the superior performance of both C-HMCNN(h) and CCN(h) when
compared to state-of-the-art models in both the HMC and the general MC setting
with hard logical constraints.
| [
{
"created": "Wed, 24 Mar 2021 18:13:56 GMT",
"version": "v1"
}
] | 2022-10-05 | [
[
"Giunchiglia",
"Eleonora",
""
],
[
"Lukasiewicz",
"Thomas",
""
]
] |
2103.13452 | Anh Tuan Nguyen | Anh Tuan Nguyen, Markus W. Drealan, Diu Khue Luu, Ming Jiang, Jian Xu,
Jonathan Cheng, Qi Zhao, Edward W. Keefer, Zhi Yang | A Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based
Finger Control | null | Journal of Neural Engineering 18 (2021) 056051 | 10.1088/1741-2552/ac2a8d | null | cs.RO cs.AI cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Objective: Deep learning-based neural decoders have emerged as the prominent
approach to enable dexterous and intuitive control of neuroprosthetic hands.
Yet few studies have materialized the use of deep learning in clinical settings
due to its high computational requirements. Methods: Recent advancements of
edge computing devices bring the potential to alleviate this problem. Here we
present the implementation of a neuroprosthetic hand with embedded deep
learning-based control. The neural decoder is designed based on the recurrent
neural network (RNN) architecture and deployed on the NVIDIA Jetson Nano - a
compacted yet powerful edge computing platform for deep learning inference.
This enables the implementation of the neuroprosthetic hand as a portable and
self-contained unit with real-time control of individual finger movements.
Results: The proposed system is evaluated on a transradial amputee using
peripheral nerve signals (ENG) with implanted intrafascicular microelectrodes.
The experiment results demonstrate the system's capabilities of providing
robust, high-accuracy (95-99%) and low-latency (50-120 msec) control of
individual finger movements in various laboratory and real-world environments.
Conclusion: Modern edge computing platforms enable the effective use of deep
learning-based neural decoders for neuroprosthesis control as an autonomous
system. Significance: This work helps pioneer the deployment of deep neural
networks in clinical applications underlying a new class of wearable biomedical
devices with embedded artificial intelligence.
| [
{
"created": "Wed, 24 Mar 2021 19:11:58 GMT",
"version": "v1"
}
] | 2021-10-19 | [
[
"Nguyen",
"Anh Tuan",
""
],
[
"Drealan",
"Markus W.",
""
],
[
"Luu",
"Diu Khue",
""
],
[
"Jiang",
"Ming",
""
],
[
"Xu",
"Jian",
""
],
[
"Cheng",
"Jonathan",
""
],
[
"Zhao",
"Qi",
""
],
[
"Keefer",
"Edward W.",
""
],
[
"Yang",
"Zhi",
""
]
] |