id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2104.01329 | Leonardo Rossi | Leonardo Rossi, Akbar Karimi, Andrea Prati | Recursively Refined R-CNN: Instance Segmentation with Self-RoI
Rebalancing | null | International Conference on Computer Analysis of Images and
Patterns. Springer, Cham, 2021 | 10.1007/978-3-030-89128-2_46 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within the field of instance segmentation, most state-of-the-art deep
learning networks nowadays rely on cascade architectures, where multiple object
detectors are trained sequentially, re-sampling the ground truth at each step.
This offers a solution to the problem of exponentially vanishing positive
samples. However, it also translates into an increase in network complexity in
terms of the number of parameters. To address this issue, we propose
Recursively Refined R-CNN (R^3-CNN), which avoids duplicates by introducing a
loop mechanism instead. At the same time, it achieves a quality boost using a
recursive re-sampling technique, where a specific IoU quality is utilized in
each recursion to eventually cover the positive spectrum equally. Our
experiments show that the loop mechanism is encoded in the model weights
themselves, so it must also be used at inference time. The R^3-CNN architecture
is able to surpass the recently proposed HTC model while significantly reducing
the number of parameters. Experiments on the COCO minival 2017 dataset show a
performance boost independent of the baseline model used. The code is
available online at https://github.com/IMPLabUniPr/mmdetection/tree/r3_cnn.
| [
{
"created": "Sat, 3 Apr 2021 07:25:33 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Aug 2021 09:36:09 GMT",
"version": "v2"
}
] | 2022-06-22 | [
[
"Rossi",
"Leonardo",
""
],
[
"Karimi",
"Akbar",
""
],
[
"Prati",
"Andrea",
""
]
] |
2104.01375 | Ioannis Kakogeorgiou | Ioannis Kakogeorgiou and Konstantinos Karantzalos | Evaluating explainable artificial intelligence methods for multi-label
deep learning classification tasks in remote sensing | null | International Journal of Applied Earth Observation and
Geoinformation 103 (2021) 102520 | 10.1016/j.jag.2021.102520 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Although deep neural networks hold the state-of-the-art in several remote
sensing tasks, their black-box operation hinders the understanding of their
decisions, concealing any bias and other shortcomings in datasets and model
performance. To this end, we have applied explainable artificial intelligence
(XAI) methods in remote sensing multi-label classification tasks towards
producing human-interpretable explanations and improving transparency. In
particular, we utilized and trained deep learning models with state-of-the-art
performance on the benchmark BigEarthNet and SEN12MS datasets. Ten XAI methods
were employed towards understanding and interpreting the models' predictions,
along with quantitative metrics to assess and compare their performance.
Numerous experiments were performed to assess the overall performance of XAI
methods for straightforward prediction cases, cases with multiple competing
labels, and misclassification cases. According to our findings, Occlusion,
Grad-CAM and Lime were the most interpretable and reliable XAI methods.
However, none of them delivers high-resolution outputs, and, with the
exception of Grad-CAM, both Lime and Occlusion are computationally expensive.
We also highlight different aspects of XAI performance and offer insights into
black-box decisions in order to improve transparency, understand model
behavior, and reveal dataset particularities.
| [
{
"created": "Sat, 3 Apr 2021 11:13:14 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Sep 2021 11:04:15 GMT",
"version": "v2"
}
] | 2021-09-21 | [
[
"Kakogeorgiou",
"Ioannis",
""
],
[
"Karantzalos",
"Konstantinos",
""
]
] |
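The Occlusion method singled out above lends itself to a compact illustration: slide a masking patch over the input and record how much the target-class score drops. The sketch below is a generic, minimal version with a toy stand-in for the trained classifier (`toy_predict` is hypothetical), not the study's actual setup.

```python
import numpy as np

def occlusion_map(predict, image, target_class, patch=16, stride=8, fill=0.0):
    """Slide a patch over the image and record the drop in the target-class
    score; large drops mark regions the model relies on."""
    h, w = image.shape[:2]
    base = predict(image)[target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - predict(occluded)[target_class]
    return heat

# Toy stand-in for a trained classifier: class scores = means of two regions.
def toy_predict(img):
    return np.array([img[:16, :16].mean(), img[16:, 16:].mean()])

heat = occlusion_map(toy_predict, np.random.rand(64, 64), target_class=0)
print(heat.shape)  # (7, 7) coarse sensitivity map
```

The map's resolution is set by `patch` and `stride`, which is exactly why the abstract notes that Occlusion does not deliver high-resolution outputs and is computationally expensive: one forward pass per patch position.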
2104.01454 | Mark Mazumder | Mark Mazumder, Colby Banbury, Josh Meyer, Pete Warden, Vijay Janapa
Reddi | Few-Shot Keyword Spotting in Any Language | null | Proc. Interspeech 2021 | 10.21437/Interspeech.2021-1966 | null | cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a few-shot transfer learning method for keyword spotting in any
language. Leveraging open speech corpora in nine languages, we automate the
extraction of a large multilingual keyword bank and use it to train an
embedding model. With just five training examples, we fine-tune the embedding
model for keyword spotting and achieve an average F1 score of 0.75 on keyword
classification for 180 new keywords unseen by the embedding model in these nine
languages. This embedding model also generalizes to new languages. We achieve
an average F1 score of 0.65 on 5-shot models for 260 keywords sampled across 13
new languages unseen by the embedding model. We investigate streaming accuracy
for our 5-shot models in two contexts: keyword spotting and keyword search.
Across 440 keywords in 22 languages, we achieve an average streaming keyword
spotting accuracy of 87.4% with a false acceptance rate of 4.3%, and observe
promising initial results on keyword search.
| [
{
"created": "Sat, 3 Apr 2021 17:27:37 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Apr 2021 15:48:01 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Apr 2021 18:58:44 GMT",
"version": "v3"
},
{
"created": "Thu, 9 Sep 2021 20:36:28 GMT",
"version": "v4"
}
] | 2021-09-13 | [
[
"Mazumder",
"Mark",
""
],
[
"Banbury",
"Colby",
""
],
[
"Meyer",
"Josh",
""
],
[
"Warden",
"Pete",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
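The abstract's 5-shot setup invites a small sketch. The paper fine-tunes the embedding model; below is a simpler prototype-style variant, assuming a frozen embedding (here a fixed random projection standing in for the multilingual keyword embedding model) and classifying by cosine similarity to per-keyword prototypes.

```python
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.standard_normal((40, 32))  # stand-in for the trained embedding model

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def embed(clips):
    # Hypothetical embedding: real features would come from the keyword model.
    return l2_normalize(clips @ PROJ)

# 5-shot enrollment: one prototype per keyword = mean of its 5 embeddings.
support = {kw: embed(rng.standard_normal((5, 40))) for kw in ["yes", "no"]}
prototypes = {kw: l2_normalize(e.mean(axis=0)) for kw, e in support.items()}

def classify(clip):
    e = embed(clip[None])[0]
    return max(prototypes, key=lambda kw: float(e @ prototypes[kw]))

print(classify(rng.standard_normal(40)))
```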
2104.01526 | Xinggang Wang | Xinggang Wang and Jiapei Feng and Bin Hu and Qi Ding and Longjin Ran
and Xiaoxin Chen and Wenyu Liu | Weakly-supervised Instance Segmentation via Class-agnostic Learning with
Salient Images | null | CVPR 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Humans have a strong class-agnostic object segmentation ability and can
outline boundaries of unknown objects precisely, which motivates us to propose
a box-supervised class-agnostic object segmentation (BoxCaseg) based solution
for weakly-supervised instance segmentation. The BoxCaseg model is jointly
trained using box-supervised images and salient images in a multi-task learning
manner. The fine-annotated salient images provide class-agnostic and precise
object localization guidance for box-supervised images. The object masks
predicted by a pretrained BoxCaseg model are refined via a novel merge-and-drop
strategy and used as proxy ground truth to train a Mask R-CNN for
weakly-supervised instance segmentation. Using only $7991$ salient images, the
weakly-supervised Mask R-CNN is on par with the fully-supervised Mask R-CNN on
PASCAL VOC and significantly outperforms previous state-of-the-art
box-supervised instance segmentation methods on COCO. The source code,
pretrained models and datasets are available at
\url{https://github.com/hustvl/BoxCaseg}.
| [
{
"created": "Sun, 4 Apr 2021 03:01:52 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Wang",
"Xinggang",
""
],
[
"Feng",
"Jiapei",
""
],
[
"Hu",
"Bin",
""
],
[
"Ding",
"Qi",
""
],
[
"Ran",
"Longjin",
""
],
[
"Chen",
"Xiaoxin",
""
],
[
"Liu",
"Wenyu",
""
]
] |
2104.01642 | Martin Weyssow | Martin Weyssow, Houari Sahraoui, Eugene Syriani | Recommending Metamodel Concepts during Modeling Activities with
Pre-Trained Language Models | 18+2 pages | Software and Systems Modeling, 2022 | 10.1007/s10270-022-00975-5 | null | cs.SE cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The design of conceptually sound metamodels that embody proper semantics in
relation to the application domain is particularly tedious in Model-Driven
Engineering. As metamodels define complex relationships between domain
concepts, it is crucial for a modeler to define these concepts thoroughly while
being consistent with respect to the application domain. We propose an approach
to assist a modeler in the design of a metamodel by recommending relevant
domain concepts in several modeling scenarios. Our approach requires neither
extracting knowledge from the domain nor hand-designing completion rules. Instead,
we design a fully data-driven approach using a deep learning model that is able
to abstract domain concepts by learning from both structural and lexical
metamodel properties in a corpus of thousands of independent metamodels. We
evaluate our approach on a test set containing 166 metamodels, unseen during
the model training, with more than 5000 test samples. Our preliminary results
show that the trained model is able to provide accurate top-$5$ lists of
relevant recommendations for concept renaming scenarios. Although promising,
the results are less compelling for the scenario of the iterative construction
of the metamodel, in part because of the conservative strategy we use to
evaluate the recommendations.
| [
{
"created": "Sun, 4 Apr 2021 16:29:10 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jan 2022 14:49:40 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Feb 2022 02:56:06 GMT",
"version": "v3"
}
] | 2022-02-22 | [
[
"Weyssow",
"Martin",
""
],
[
"Sahraoui",
"Houari",
""
],
[
"Syriani",
"Eugene",
""
]
] |
2104.01687 | Roman Solovyev A | Roman Solovyev, Alexandr A. Kalinin, Tatiana Gabruseva | 3D Convolutional Neural Networks for Stalled Brain Capillary Detection | null | Computers in biology and medicine. 2022 | 10.1016/j.compbiomed.2021.105089 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adequate blood supply is critical for normal brain function. Brain
vasculature dysfunctions such as stalled blood flow in cerebral capillaries are
associated with cognitive decline and pathogenesis in Alzheimer's disease.
Recent advances in imaging technology enabled generation of high-quality 3D
images that can be used to visualize stalled blood vessels. However,
localization of stalled vessels in 3D images is often required as the first
step for downstream analysis, which can be tedious, time-consuming and
error-prone, when done manually. Here, we describe a deep learning-based
approach for automatic detection of stalled capillaries in brain images based
on 3D convolutional neural networks. Our networks employed custom 3D data
augmentations and used weight transfer from pre-trained 2D models for
initialization. We used an ensemble of several 3D models to produce the winning
submission to the Clog Loss: Advance Alzheimer's Research with Stall Catchers
machine learning competition that challenged the participants with classifying
blood vessels in 3D image stacks as stalled or flowing. In this setting, our
approach outperformed other methods and demonstrated state-of-the-art results,
achieving 0.85 Matthews correlation coefficient, 85% sensitivity, and 99.3%
specificity. The source code for our solution is made publicly available.
| [
{
"created": "Sun, 4 Apr 2021 20:30:14 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Feb 2022 14:55:01 GMT",
"version": "v2"
}
] | 2022-02-15 | [
[
"Solovyev",
"Roman",
""
],
[
"Kalinin",
"Alexandr A.",
""
],
[
"Gabruseva",
"Tatiana",
""
]
] |
2104.01732 | Chuhua Wang | Zhenhua Chen, Chuhua Wang, David J. Crandall | Semantically Stealthy Adversarial Attacks against Segmentation Models | null | Proceedings of the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV), 2022, pp. 4080-4089 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Segmentation models have been found to be vulnerable to targeted and
non-targeted adversarial attacks. However, the resulting segmentation outputs
are often so damaged that it is easy to spot an attack. In this paper, we
propose semantically stealthy adversarial attacks which can manipulate targeted
labels while preserving non-targeted labels at the same time. One challenge is
making semantically meaningful manipulations across datasets and models.
Another challenge is avoiding damaging non-targeted labels. To solve these
challenges, we consider each input image as prior knowledge to generate
perturbations. We also design a special regularizer to help extract features.
To evaluate our model's performance, we design three basic attack types, namely
`vanishing into the context,' `embedding fake labels,' and `displacing target
objects.' Our experiments show that our stealthy adversarial model can attack
segmentation models with a relatively high success rate on Cityscapes,
Mapillary, and BDD100K. Our framework shows good empirical generalization
across datasets and models.
| [
{
"created": "Mon, 5 Apr 2021 00:56:45 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2021 00:43:47 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Jan 2022 07:29:04 GMT",
"version": "v3"
}
] | 2022-01-21 | [
[
"Chen",
"Zhenhua",
""
],
[
"Wang",
"Chuhua",
""
],
[
"Crandall",
"David J.",
""
]
] |
2104.01762 | Yanhong Zeng | Yanhong Zeng, Jianlong Fu, Hongyang Chao | 3D Human Body Reshaping with Anthropometric Modeling | ICIMCS 2017(oral). The final publication is available at Springer via
https://doi.org/10.1007/978-981-10-8530-7_10 | In International Conference on Internet Multimedia Computing and
Service (pp. 96-107). Springer, Singapore (2017) | 10.1007/978-981-10-8530-7_10 | null | cs.CV cs.GR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Reshaping accurate and realistic 3D human bodies from anthropometric
parameters (e.g., height, chest size, etc.) poses a fundamental challenge for
person identification, online shopping and virtual reality. Existing approaches
for creating such 3D shapes often suffer from complex measurement by range
cameras or high-end scanners, which either involve heavy expense or result
in low quality. Moreover, such high-end equipment limits existing
approaches in real applications, because it is not easily
accessible to common users. In this paper, we have designed a 3D human body
reshaping system by proposing a novel feature-selection-based local mapping
technique, which enables automatic anthropometric parameter modeling for each
body facet. Note that the proposed approach can leverage limited anthropometric
parameters (i.e., 3-5 measurements) as input, which avoids complex measurement,
and thus a more user-friendly experience can be achieved in real scenarios.
Specifically, the proposed reshaping model consists of three steps. First, we
calculate full-body anthropometric parameters from limited user inputs by
imputation technique, and thus essential anthropometric parameters for 3D body
reshaping can be obtained. Second, we select the most relevant anthropometric
parameters for each facet by adopting relevance masks, which are learned
offline by the proposed local mapping technique. Third, we generate the 3D body
meshes by mapping matrices, which are learned by linear regression from the
selected parameters to mesh-based body representation. We conduct experiments
by anthropomorphic evaluation and a user study from 68 volunteers. Experiments
show the superior results of the proposed system in terms of mean
reconstruction error against the state-of-the-art approaches.
| [
{
"created": "Mon, 5 Apr 2021 04:09:39 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Zeng",
"Yanhong",
""
],
[
"Fu",
"Jianlong",
""
],
[
"Chao",
"Hongyang",
""
]
] |
2104.01865 | Thimal Kempitiya | Thimal Kempitiya, Seppo Sierla, Daswin De Silva, Matti Yli-Ojanpera,
Damminda Alahakoon, Valeriy Vyatkin | An Artificial Intelligence Framework for Bidding Optimization with
Uncertainty in Multiple Frequency Reserve Markets | null | Applied Energy, Volume 280, 15 December 2020, 115918 | 10.1016/j.apenergy.2020.115918 | null | cs.AI cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The global ambitions of a carbon-neutral society necessitate a stable and
robust smart grid that capitalises on frequency reserves of renewable energy.
Frequency reserves are resources that adjust power production or consumption in
real time to react to a power grid frequency deviation. Revenue generation
motivates the availability of these resources for managing such deviations.
However, limited research has been conducted on data-driven decisions and
optimal bidding strategies for trading such capacities in multiple frequency
reserves markets. We address this limitation by making the following research
contributions. Firstly, a generalised model is designed based on an extensive
study of critical characteristics of global frequency reserves markets.
Secondly, three bidding strategies are proposed, based on this market model, to
capitalise on price peaks in multi-stage markets. Two strategies are proposed
for non-reschedulable loads, in which case the bidding strategy aims to select
the market with the highest anticipated price, and the third bidding strategy
focuses on rescheduling loads to the hours in which the highest reserve market prices
are anticipated. The third research contribution is an Artificial Intelligence
(AI) based bidding optimization framework that implements these three
strategies, with novel uncertainty metrics that supplement data-driven price
prediction. Finally, the framework is evaluated empirically using a case study
of multiple frequency reserves markets in Finland. The results from this
evaluation confirm the effectiveness of the proposed bidding strategies and the
AI-based bidding optimization framework in terms of cumulative revenue
generation, leading to an increased availability of frequency reserves.
| [
{
"created": "Mon, 5 Apr 2021 12:04:29 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Kempitiya",
"Thimal",
""
],
[
"Sierla",
"Seppo",
""
],
[
"De Silva",
"Daswin",
""
],
[
"Yli-Ojanpera",
"Matti",
""
],
[
"Alahakoon",
"Damminda",
""
],
[
"Vyatkin",
"Valeriy",
""
]
] |
2104.01928 | Dingwen Zhang | Dingwen Zhang, Haibin Tian, and Jungong Han | Few-Cost Salient Object Detection with Adversarial-Paced Learning | null | 34th Conference on Neural Information Processing Systems (NeurIPS
2020) | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Detecting and segmenting salient objects from given image scenes has received
great attention in recent years. A fundamental challenge in training the
existing deep saliency detection models is the requirement of large amounts of
annotated data. While gathering large quantities of training data becomes cheap
and easy, annotating the data is an expensive process in terms of time, labor
and human expertise. To address this problem, this paper proposes to learn the
effective salient object detection model based on the manual annotation on a
few training images only, thus dramatically alleviating human labor in training
models. To this end, we name this task the few-cost salient object detection
and propose an adversarial-paced learning (APL)-based framework to facilitate
the few-cost learning scenario. Essentially, APL is derived from the self-paced
learning (SPL) regime but it infers the robust learning pace through the
data-driven adversarial learning mechanism rather than the heuristic design of
the learning regularizer. Comprehensive experiments on four widely-used
benchmark datasets demonstrate that the proposed method can effectively
approach the performance of existing fully supervised deep salient object detection models with
only 1k human-annotated training images. The project page is available at
https://github.com/hb-stone/FC-SOD.
| [
{
"created": "Mon, 5 Apr 2021 14:15:49 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Zhang",
"Dingwen",
""
],
[
"Tian",
"Haibin",
""
],
[
"Han",
"Jungong",
""
]
] |
2104.01948 | Dmitrii Marin | Dmitrii Marin and Yuri Boykov | Robust Trust Region for Weakly Supervised Segmentation | Accepted to ICCV 2021 | Proceedings of the IEEE/CVF International Conference on Computer
Vision (ICCV), 2021, pp. 6608-6618 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Acquisition of training data for the standard semantic segmentation is
expensive if requiring that each pixel is labeled. Yet, current methods
significantly deteriorate in weakly supervised settings, e.g. where a fraction
of pixels is labeled or when only image-level tags are available. It has been
shown that regularized losses - originally developed for unsupervised low-level
segmentation and representing geometric priors on pixel labels - can
considerably improve the quality of weakly supervised training. However, many
common priors require optimization stronger than gradient descent. Thus, such
regularizers have limited applicability in deep learning. We propose a new
robust trust region approach for regularized losses improving the
state-of-the-art results. Our approach can be seen as a higher-order
generalization of the classic chain rule. It allows neural network optimization
to use strong low-level solvers for the corresponding regularizers, including
discrete ones.
| [
{
"created": "Mon, 5 Apr 2021 15:11:29 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Sep 2021 04:54:16 GMT",
"version": "v2"
}
] | 2021-10-13 | [
[
"Marin",
"Dmitrii",
""
],
[
"Boykov",
"Yuri",
""
]
] |
2104.01955 | Dhivya Chandrasekaran | Dhivya Chandrasekaran and Vijay Mago | Automating Transfer Credit Assessment in Student Mobility -- A Natural
Language Processing-based Approach | 13 pages and 5 figures | CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 2257-2274,
2022 | 10.32604/cmc.2022.027236 | null | cs.CL cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Student mobility or academic mobility involves students moving between
institutions during their post-secondary education, and one of the challenging
tasks in this process is to assess the transfer credits to be offered to the
incoming student. In general, this process involves domain experts comparing
the learning outcomes of the courses, to decide on offering transfer credits to
the incoming students. This manual implementation is not only labor-intensive
but also influenced by undue bias and administrative complexity. The proposed
research article focuses on identifying a model that exploits the advancements
in the field of Natural Language Processing (NLP) to effectively automate this
process. Given the unique structure, domain specificity, and complexity of
learning outcomes (LOs), a need for designing a tailor-made model arises. The
proposed model uses a clustering-inspired methodology based on knowledge-based
semantic similarity measures to assess the taxonomic similarity of LOs and a
transformer-based semantic similarity model to assess the semantic similarity
of the LOs. The similarity between LOs is further aggregated to form course to
course similarity. Due to the lack of quality benchmark datasets, a new
benchmark dataset containing seven course-to-course similarity measures is
proposed. Understanding the inherent need for flexibility in the
decision-making process the aggregation part of the model offers tunable
parameters to accommodate different scenarios. While providing an efficient
model to assess the similarity between courses with existing resources, this
research work steers future research attempts to apply NLP in the field of
articulation in an ideal direction by highlighting the persisting research
gaps.
| [
{
"created": "Mon, 5 Apr 2021 15:14:59 GMT",
"version": "v1"
}
] | 2022-06-24 | [
[
"Chandrasekaran",
"Dhivya",
""
],
[
"Mago",
"Vijay",
""
]
] |
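The aggregation step described above (LO-to-LO similarities rolled up into a course-to-course score) can be sketched briefly. This is a much-simplified stand-in: TF-IDF cosine similarity replaces the paper's knowledge-based and transformer-based measures, and the learning outcomes are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_a = ["Explain the principles of supervised learning.",
            "Apply regression models to real datasets."]
course_b = ["Describe supervised learning methods.",
            "Evaluate classification and regression models."]

vec = TfidfVectorizer().fit(course_a + course_b)
sim = cosine_similarity(vec.transform(course_a), vec.transform(course_b))

# Aggregate LO-to-LO similarities into one course-to-course score by
# matching each LO of course A with its best counterpart in course B.
course_similarity = sim.max(axis=1).mean()
print(round(float(course_similarity), 3))
```

The max-then-mean aggregation here is one of several plausible choices; the paper instead exposes tunable parameters at this stage to accommodate different decision-making scenarios.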
2104.01966 | Martin Garriga | Damian Andrew Tamburri, Willem-Jan Van den Heuvel, Martin Garriga | DataOps for Societal Intelligence: a Data Pipeline for Labor Market
Skills Extraction and Matching | null | 2020 IEEE 21st International Conference on Information Reuse and
Integration for Data Science (IRI), Las Vegas, NV, USA, 2020, pp. 391-394 | 10.1109/IRI49571.2020.00063 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Big Data analytics supported by AI algorithms can support skills localization
and retrieval in the context of a labor market intelligence problem. We
formulate and solve this problem through specific DataOps models, blending data
sources from administrative and technical partners in several countries into
cooperation, creating shared knowledge to support policy and decision-making.
We then focus on the critical task of skills extraction from resumes and
vacancies featuring state-of-the-art machine learning models. We showcase
preliminary results with applied machine learning on real data from the
employment agencies of the Netherlands and the Flemish region in Belgium. The
final goal is to match these skills to standard ontologies of skills, jobs and
occupations.
| [
{
"created": "Mon, 5 Apr 2021 15:37:25 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Tamburri",
"Damian Andrew",
""
],
[
"Heuvel",
"Willem-Jan Van den",
""
],
[
"Garriga",
"Martin",
""
]
] |
2104.02066 | Jun-En Ding | Jun-En Ding, Chi-Hsiang Chu, Mong-Na Lo Huang, Chien-Ching Hsu | Dopamine Transporter SPECT Image Classification for Neurodegenerative
Parkinsonism via Diffusion Maps and Machine Learning Classifiers | null | 24th Annual Conference, MIUA 2021, Oxford, UK, July 12-14, 2021,
Proceedings | null | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Neurodegenerative parkinsonism can be assessed by dopamine transporter single
photon emission computed tomography (DaT-SPECT). To date, these images have
been visually interpreted by nuclear medicine physicians, a process that is
time-consuming and subject to interobserver variability. Accordingly,
this study aims to provide an automatic and robust method based on Diffusion
Maps and machine learning classifiers to classify the SPECT images into two
types, namely Normal and Abnormal DaT-SPECT image groups. In the proposed
method, the 3D images of N patients are mapped to an N by N pairwise distance
matrix and are visualized in Diffusion Maps coordinates. The images of the
training set are embedded into a low-dimensional space by using diffusion maps.
Moreover, we use Nystr\"om's out-of-sample extension, which embeds new sample
points as the testing set in the reduced space. Testing samples in the embedded
space are then classified into two types through the ensemble classifier with
Linear Discriminant Analysis (LDA) and voting procedure through
twenty-five-fold cross-validation results. The feasibility of the method is
demonstrated via Parkinsonism Progression Markers Initiative (PPMI) dataset of
1097 subjects and a clinical cohort from Kaohsiung Chang Gung Memorial Hospital
(KCGMH-TW) of 630 patients. We compare performances using Diffusion Maps with
those of three alternative manifold methods for dimension reduction, namely
Locally Linear Embedding (LLE), Isometric Feature Mapping (Isomap), and
Kernel Principal Component Analysis (Kernel PCA). We also compare results using
2D and 3D CNN methods. The diffusion maps method has an average accuracy of 98%
for the PPMI and 90% for the KCGMH-TW dataset with twenty-five-fold
cross-validation results. It outperforms the other three methods concerning the
overall accuracy and the robustness in the training and testing samples.
| [
{
"created": "Tue, 6 Apr 2021 06:30:15 GMT",
"version": "v1"
},
{
"created": "Fri, 7 May 2021 15:47:56 GMT",
"version": "v2"
}
] | 2021-05-10 | [
[
"Ding",
"Jun-En",
""
],
[
"Chu",
"Chi-Hsiang",
""
],
[
"Huang",
"Mong-Na Lo",
""
],
[
"Hsu",
"Chien-Ching",
""
]
] |
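Two ingredients named in the abstract, the diffusion-map embedding and Nyström's out-of-sample extension, can be sketched in plain numpy. This is a minimal illustration on random data; it does not reproduce the SPECT preprocessing, the LDA ensemble, or the voting procedure.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map(X, n_components=2, eps=None):
    D = cdist(X, X, "sqeuclidean")
    eps = eps or np.median(D)                 # kernel bandwidth heuristic
    W = np.exp(-D / eps)                      # pairwise affinity matrix
    P = W / W.sum(axis=1, keepdims=True)      # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:n_components + 1]  # skip trivial eigenpair
    return vals.real[order], vecs.real[:, order], eps

def nystrom_extend(X_train, X_new, vals, vecs, eps):
    w = np.exp(-cdist(X_new, X_train, "sqeuclidean") / eps)
    p = w / w.sum(axis=1, keepdims=True)
    return (p @ vecs) / vals                  # embed new samples without refitting

X = np.random.rand(100, 5)                    # stand-ins for vectorized 3D images
vals, vecs, eps = diffusion_map(X)
print(nystrom_extend(X, np.random.rand(3, 5), vals, vecs, eps).shape)  # (3, 2)
```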
2104.02173 | AKM Bahalul Haque | A K M Bahalul Haque, Tahmid Hasan Pranto, Abdulla All Noman and Atik
Mahmood | Insight about Detection, Prediction and Weather Impact of Coronavirus
(Covid-19) using Neural Network | 15 Pages, 13 Figures and 4 Tables | International Journal of Artificial Intelligence & Applications
11(4):67-81, July. 2020 | 10.5121/ijaia.2020.11406 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The world is facing a tough situation due to the catastrophic pandemic caused
by novel coronavirus (COVID-19). The number of people affected by this virus is
increasing exponentially day by day and has already crossed 6.4
million. As no vaccine has been discovered yet, early detection and isolation
of patients is the most effective way to reduce the spread of the
virus. Detecting infected persons from chest X-rays using deep neural
networks can serve as a time- and labor-saving solution. In this study, we
tried to detect Covid-19 by classifying Covid-19, pneumonia and normal
chest X-rays. We used five different convolutional pre-trained neural network
models (VGG16, VGG19, Xception, InceptionV3 and ResNet50) and compared their
performance. VGG16 and VGG19 show precise performance in classification. Both
models can classify between three kinds of X-rays with an accuracy over 92%.
Another part of our study was to find the impact of weather factors
(temperature, humidity, sun hour and wind speed) on this pandemic using a
Decision Tree Regressor. We found that temperature, humidity and sun hour
jointly account for 85.88% of the impact on the escalation of Covid-19 and
91.89% of the impact on deaths due to Covid-19, with humidity alone accounting
for 8.09% of the impact on deaths. We also tried to predict the death of an
individual based on age, gender, country, and location
due to COVID-19 using logistic regression, which can predict the death of an
individual with a model accuracy of 94.40%.
| [
{
"created": "Mon, 5 Apr 2021 22:18:57 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Haque",
"A K M Bahalul",
""
],
[
"Pranto",
"Tahmid Hasan",
""
],
[
"Noman",
"Abdulla All",
""
],
[
"Mahmood",
"Atik",
""
]
] |
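As a hedged illustration of the weather-impact analysis above, a Decision Tree Regressor's feature importances yield percentage-style attributions of the kind reported. The data and target below are synthetic; the printed numbers are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
features = ["temperature", "humidity", "sun hour", "wind speed"]
X = rng.random((500, 4))                      # synthetic stand-in weather data
y = 3 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + rng.normal(0, 0.1, 500)  # toy target

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
for name, imp in zip(features, tree.feature_importances_):
    print(f"{name}: {imp:.2%}")              # importances sum to 100%
```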
2104.02242 | Olawale Onabola | Olawale Onabola, Zhuang Ma, Yang Xie, Benjamin Akera, Abdulrahman
Ibraheem, Jia Xue, Dianbo Liu, Yoshua Bengio | HBert + BiasCorp -- Fighting Racism on the Web | null | ltedi-1.4 (2021) 26-33 | null | 2021.ltedi-1.4 | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Subtle and overt racism is still present both in physical and online
communities today and has impacted many lives in different segments of the
society. In this short piece of work, we present how we're tackling this
societal issue with Natural Language Processing. We are releasing BiasCorp, a
dataset containing 139,090 comments and news segments from three specific
sources - Fox News, BreitbartNews and YouTube. The first batch (45,000 manually
annotated) is ready for publication. We are currently in the final phase of
manually labeling the remaining dataset using Amazon Mechanical Turk. BERT has
been used widely in several downstream tasks. In this work, we present hBERT,
where we modify certain layers of the pretrained BERT model with the new
Hopfield Layer. hBERT generalizes well across different distributions with the
added advantage of a reduced model complexity. We are also releasing a
JavaScript library and a Chrome Extension Application, to help developers make
use of our trained model in web applications (say chat application) and for
users to identify and report racially biased contents on the web respectively.
| [
{
"created": "Tue, 6 Apr 2021 02:17:20 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jun 2021 14:23:24 GMT",
"version": "v2"
},
{
"created": "Sat, 30 Oct 2021 22:35:01 GMT",
"version": "v3"
}
] | 2021-11-02 | [
[
"Onabola",
"Olawale",
""
],
[
"Ma",
"Zhuang",
""
],
[
"Xie",
"Yang",
""
],
[
"Akera",
"Benjamin",
""
],
[
"Ibraheem",
"Abdulrahman",
""
],
[
"Xue",
"Jia",
""
],
[
"Liu",
"Dianbo",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
2104.02245 | Wang Xin | Xin Wang, Yang Zhao, Tangwen Yang, Qiuqi Ruan | Multi-Scale Context Aggregation Network with Attention-Guided for Crowd
Counting | null | ICSP2020 | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Crowd counting aims to predict the number of people and generate the density
map in the image. There are many challenges, including varying head scales, the
diversity of crowd distribution across images and cluttered backgrounds. In
this paper, we propose a multi-scale context aggregation network (MSCANet)
based on single-column encoder-decoder architecture for crowd counting, which
consists of an encoder based on a dense context-aware module (DCAM) and a
hierarchical attention-guided decoder. To handle the issue of scale variation,
we construct the DCAM to aggregate multi-scale contextual information by
densely connecting the dilated convolution with varying receptive fields. The
proposed DCAM can capture rich contextual information of crowd areas due to its
long-range receptive fields and dense scale sampling. Moreover, to suppress the
background noise and generate a high-quality density map, we adopt a
hierarchical attention-guided mechanism in the decoder. This helps to integrate
more useful spatial information from shallow feature maps of the encoder by
introducing multiple supervision based on a semantic attention module (SAM).
Extensive experiments demonstrate that the proposed approach achieves better
performance than other similar state-of-the-art methods on three challenging
benchmark datasets for crowd counting. The code is available at
https://github.com/KingMV/MSCANet
| [
{
"created": "Tue, 6 Apr 2021 02:24:06 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Wang",
"Xin",
""
],
[
"Zhao",
"Yang",
""
],
[
"Yang",
"Tangwen",
""
],
[
"Ruan",
"Qiuqi",
""
]
] |
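One plausible form of the densely connected dilated convolutions behind the DCAM is sketched below in PyTorch. Channel widths and dilation rates are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Each branch sees the concatenation of the input and all previous
    branch outputs, sampling context at several receptive-field sizes."""
    def __init__(self, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=d, dilation=d),
                nn.ReLU(inplace=True)))
            in_ch += channels                 # dense connectivity grows the input
    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return torch.cat(feats[1:], dim=1)

out = DenseDilatedBlock()(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 256, 32, 32]), spatial size preserved
```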
2104.02287 | Wesley Holliday | Matthew Harrison-Trainor, Wesley H. Holliday, and Thomas F. Icard III | Preferential Structures for Comparative Probabilistic Reasoning | Postprint of AAAI 2017 paper, corrected to include a distinguished
set of states in Definitions 2-3 and 5 (resp. before Theorem 3) to match the
appropriate special case of the semantics of Holliday and Icard 2013 (resp.
van der Hoek 1996) | AAAI Conference on Artificial Intelligence, 2017, pp. 1135-1141 | null | null | cs.AI cs.LO math.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Qualitative and quantitative approaches to reasoning about uncertainty can
lead to different logical systems for formalizing such reasoning, even when the
language for expressing uncertainty is the same. In the case of reasoning about
relative likelihood, with statements of the form $\varphi\succsim\psi$
expressing that $\varphi$ is at least as likely as $\psi$, a standard
qualitative approach using preordered preferential structures yields a
dramatically different logical system than a quantitative approach using
probability measures. In fact, the standard preferential approach validates
principles of reasoning that are incorrect from a probabilistic point of view.
However, in this paper we show that a natural modification of the preferential
approach yields exactly the same logical system as a probabilistic
approach--not using single probability measures, but rather sets of probability
measures. Thus, the same preferential structures used in the study of
non-monotonic logics and belief revision may be used in the study of
comparative probabilistic reasoning based on imprecise probabilities.
| [
{
"created": "Tue, 6 Apr 2021 05:00:20 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Harrison-Trainor",
"Matthew",
""
],
[
"Holliday",
"Wesley H.",
""
],
[
"Icard",
"Thomas F.",
"III"
]
] |
2104.02391 | Jing Zhang | Wangbo Zhao and Jing Zhang and Long Li and Nick Barnes and Nian Liu
and Junwei Han | Weakly Supervised Video Salient Object Detection | null | 2021 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR) | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Significant performance improvement has been achieved for fully-supervised
video salient object detection using pixel-wise labeled training datasets,
which are time-consuming and expensive to obtain. To relieve the burden of data
annotation, we present the first weakly supervised video salient object
detection model based on relabeled "fixation guided scribble annotations".
Specifically, an "Appearance-motion fusion module" and bidirectional ConvLSTM
based framework are proposed to achieve effective multi-modal learning and
long-term temporal context modeling based on our new weak annotations. Further,
we design a novel foreground-background similarity loss to further explore the
labeling similarity across frames. A weak annotation boosting strategy is also
introduced to boost our model performance with a new pseudo-label generation
technique. Extensive experimental results on six benchmark video saliency
detection datasets illustrate the effectiveness of our solution.
| [
{
"created": "Tue, 6 Apr 2021 09:48:38 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Zhao",
"Wangbo",
""
],
[
"Zhang",
"Jing",
""
],
[
"Li",
"Long",
""
],
[
"Barnes",
"Nick",
""
],
[
"Liu",
"Nian",
""
],
[
"Han",
"Junwei",
""
]
] |
2104.02395 | M Tanveer PhD | M.A. Ganaie and Minghui Hu and A.K. Malik and M. Tanveer and P.N.
Suganthan | Ensemble deep learning: A review | null | Engineering Applications of Artificial Intelligence, 2022 | 10.1016/j.engappai.2022.105151 | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble learning combines several individual models to obtain better
generalization performance. Currently, deep learning architectures are showing
better performance compared to the shallow or traditional models. Deep ensemble
learning models combine the advantages of both the deep learning models as well
as the ensemble learning such that the final model has better generalization
performance. This paper reviews the state-of-the-art deep ensemble models and
hence serves as an extensive summary for researchers. The ensemble models are
broadly categorised into bagging, boosting, stacking, negative-correlation-based
deep ensemble models, explicit/implicit ensembles,
homogeneous/heterogeneous ensembles, and decision-fusion-strategy-based deep
ensemble models. Applications of deep ensemble models in different domains are
also briefly discussed. Finally, we conclude this paper with some potential
future research directions.
| [
{
"created": "Tue, 6 Apr 2021 09:56:29 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Mar 2022 04:44:41 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Aug 2022 17:50:53 GMT",
"version": "v3"
}
] | 2022-08-09 | [
[
"Ganaie",
"M. A.",
""
],
[
"Hu",
"Minghui",
""
],
[
"Malik",
"A. K.",
""
],
[
"Tanveer",
"M.",
""
],
[
"Suganthan",
"P. N.",
""
]
] |
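The bagging and stacking categories in this taxonomy can be illustrated with their classical shallow analogues in scikit-learn; deep ensembles follow the same pattern with neural base learners. A minimal sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)

# Bagging: many trees on bootstrap resamples, predictions averaged.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                            random_state=0)
# Stacking: a meta-learner combines heterogeneous base models.
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression())

for name, model in [("bagging", bagging), ("stacking", stacking)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```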
2104.02429 | Zhe Ma | Jianfeng Dong, Zhe Ma, Xiaofeng Mao, Xun Yang, Yuan He, Richang Hong,
Shouling Ji | Fine-Grained Fashion Similarity Prediction by Attribute-Specific
Embedding Learning | Conference paper: arXiv:2002.02814 | IEEE Transactions on Image Processing, vol. 30, pp. 8410-8425,
2021 | 10.1109/TIP.2021.3115658 | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper strives to predict fine-grained fashion similarity. In this
similarity paradigm, one should pay more attention to the similarity in terms
of a specific design/attribute between fashion items. For example, whether the
collar designs of the two clothes are similar. It has potential value in many
fashion related applications, such as fashion copyright protection. To this
end, we propose an Attribute-Specific Embedding Network (ASEN) to jointly learn
multiple attribute-specific embeddings and thus measure the fine-grained
similarity in the corresponding space. The proposed ASEN is comprised of a
global branch and a local branch. The global branch takes the whole image as
input to extract features from a global perspective, while the local branch
takes as input the zoomed-in region-of-interest (RoI) w.r.t. the specified
attribute and is thus able to extract more fine-grained features. As the global branch
and the local branch extract the features from different perspectives, they are
complementary to each other. Additionally, in each branch, two attention
modules, i.e., Attribute-aware Spatial Attention and Attribute-aware Channel
Attention, are integrated to enable ASEN to locate the related regions
and capture the essential patterns under the guidance of the specified
attribute, thus making the learned attribute-specific embeddings better reflect
the fine-grained similarity. Extensive experiments on three fashion-related
datasets, i.e., FashionAI, DARN, and DeepFashion, show the effectiveness of
ASEN for fine-grained fashion similarity prediction and its potential for
fashion reranking. Code and data are available at
https://github.com/maryeon/asenpp.
| [
{
"created": "Tue, 6 Apr 2021 11:26:38 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Oct 2021 08:36:29 GMT",
"version": "v2"
}
] | 2021-10-12 | [
[
"Dong",
"Jianfeng",
""
],
[
"Ma",
"Zhe",
""
],
[
"Mao",
"Xiaofeng",
""
],
[
"Yang",
"Xun",
""
],
[
"He",
"Yuan",
""
],
[
"Hong",
"Richang",
""
],
[
"Ji",
"Shouling",
""
]
] |
2104.02471 | Khalil Khan | Khalil Khan, Jehad Ali, Irfan Uddin, Sahib Khan, and Byeong-hee Roh | A Facial Feature Discovery Framework for Race Classification Using Deep
Learning | Number of pages in the paper are 15 | Under review in Computer, Material, and Continua, 2021 | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Race classification is a long-standing challenge in the field of face image
analysis. The investigation of salient facial features is an important task to
avoid processing all face parts. Face segmentation strongly benefits several
face analysis tasks, including ethnicity and race classification. We propose a
race classification algorithm using a prior face segmentation framework. A deep
convolutional neural network (DCNN) was used to construct a face segmentation
model. For training the DCNN, we label face images according to seven different
classes, that is, nose, skin, hair, eyes, brows, back, and mouth. The DCNN
model developed in the first phase was used to create segmentation results. The
probabilistic classification method is used, and probability maps (PMs) are
created for each semantic class. We investigated five salient facial features
from among seven that help in race classification. Features are extracted from
the PMs of five classes, and a new model is trained based on the DCNN. We
assessed the performance of the proposed race classification method on four
standard face datasets, reporting superior results compared with previous
studies.
| [
{
"created": "Mon, 29 Mar 2021 06:33:04 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Khan",
"Khalil",
""
],
[
"Ali",
"Jehad",
""
],
[
"Uddin",
"Irfan",
""
],
[
"Khan",
"Sahib",
""
],
[
"Roh",
"Byeong-hee",
""
]
] |
2104.02542 | Rosanna Turrisi | Rosanna Turrisi, Arianna Braccia, Marco Emanuele, Simone Giulietti,
Maura Pugliatti, Mariachiara Sensi, Luciano Fadiga, Leonardo Badino | EasyCall corpus: a dysarthric speech dataset | null | Interspeech 2021 | 10.21437/Interspeech.2021-549 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a new dysarthric speech command dataset in Italian,
called EasyCall corpus. The dataset consists of 21386 audio recordings from 24
healthy and 31 dysarthric speakers, whose individual degree of speech
impairment was assessed by neurologists through the Therapy Outcome Measure.
The corpus aims at providing a resource for the development of ASR-based
assistive technologies for patients with dysarthria. In particular, it may be
exploited to develop a voice-controlled contact application for commercial
smartphones, aiming at improving dysarthric patients' ability to communicate
with their family and caregivers. Before recording the dataset, participants
were administered a survey to evaluate which commands are more likely to be
employed by dysarthric individuals in a voice-controlled contact application.
In addition, the dataset includes a list of non-commands (i.e., words
near/inside commands or phonetically close to commands) that can be leveraged
to build a more robust command recognition system. At present, commercial ASR
systems perform poorly on the EasyCall corpus, as we report in this paper. This
result corroborates the need for dysarthric speech corpora for developing
effective assistive technologies. To the best of our knowledge, this database
represents the richest corpus of dysarthric speech to date.
| [
{
"created": "Tue, 6 Apr 2021 14:32:47 GMT",
"version": "v1"
}
] | 2022-03-15 | [
[
"Turrisi",
"Rosanna",
""
],
[
"Braccia",
"Arianna",
""
],
[
"Emanuele",
"Marco",
""
],
[
"Giulietti",
"Simone",
""
],
[
"Pugliatti",
"Maura",
""
],
[
"Sensi",
"Mariachiara",
""
],
[
"Fadiga",
"Luciano",
""
],
[
"Badino",
"Leonardo",
""
]
] |
2104.02573 | AKM Bahalul Haque | Shahriar Rahman, Shazzadur Rahman and A K M Bahalul Haque | Prediction of Solar Radiation Using Artificial Neural Network | Published as open access, 12 pages, 13 images and 2 tables | Journal of Physics: Conference Series , 2021 | 10.1088/1742-6596/1767/1/012041 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Most solar applications and systems can be reliably used to generate
electricity and power in many homes and offices. Recently, there has been an
increase in solar-powered systems, which can be found not only in electricity
generation but also in other applications such as solar distillation, water
heating, heating of buildings, meteorology and solar energy conversion.
Prediction of solar radiation is very significant in order to accomplish the
previously mentioned objectives. In this paper, the main target is to present
an algorithm that can be used to predict an hourly activity of solar radiation.
Using a dataset that consists of temperature of air, time, humidity, wind
speed, atmospheric pressure, direction of wind and solar radiation data, an
Artificial Neural Network (ANN) model is constructed to effectively forecast
solar radiation using the available weather forecast data. Two models are
created to efficiently create a system capable of interpreting patterns through
supervised learning data and predict the correct amount of radiation present in
the atmosphere. Two statistical indicators, Mean Absolute
Error (MAE) and Mean Squared Error (MSE), are computed to compare the
observed and predicted data. These two models were able to generate efficient
predictions with sufficient performance accuracy.
| [
{
"created": "Thu, 1 Apr 2021 20:41:27 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Rahman",
"Shahriar",
""
],
[
"Rahman",
"Shazzadur",
""
],
[
"Haque",
"A K M Bahalul",
""
]
] |
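A minimal sketch of the described setup using scikit-learn's MLPRegressor, with synthetic stand-ins for the six weather inputs named in the abstract and the two reported indicators. The data and target are invented for illustration.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: air temperature, time, humidity, wind speed, pressure, wind direction.
X = rng.random((1000, 6))
y = 800 * X[:, 0] * (1 - X[:, 2]) + rng.normal(0, 20, 1000)  # toy radiation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(scaler.transform(X_tr), y_tr)
pred = model.predict(scaler.transform(X_te))
print("MAE:", mean_absolute_error(y_te, pred))
print("MSE:", mean_squared_error(y_te, pred))
```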
2104.02576 | Jiaolong Xu | Chen Min and Jiaolong Xu and Liang Xiao and Dawei Zhao and Yiming Nie
and Bin Dai | Attentional Graph Neural Network for Parking-slot Detection | Accepted by RAL | IEEE Robotics and Automation Letters, vol.6, pp. 3445-3450, 2021 | 10.1109/LRA.2021.3064270 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has recently demonstrated its promising performance for
vision-based parking-slot detection. However, very few existing methods
explicitly take into account learning the link information of the
marking-points, resulting in complex post-processing and erroneous detection.
In this paper, we propose an attentional graph neural network based
parking-slot detection method, which treats the marking-points in an
around-view image as graph-structured data and utilizes a graph neural network to
aggregate the neighboring information between marking-points. Without any
manually designed post-processing, the proposed method is end-to-end trainable.
Extensive experiments have been conducted on a public benchmark dataset, where
the proposed method achieves state-of-the-art accuracy. Code is publicly
available at \url{https://github.com/Jiaolong/gcn-parking-slot}.
| [
{
"created": "Tue, 6 Apr 2021 15:14:39 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Min",
"Chen",
""
],
[
"Xu",
"Jiaolong",
""
],
[
"Xiao",
"Liang",
""
],
[
"Zhao",
"Dawei",
""
],
[
"Nie",
"Yiming",
""
],
[
"Dai",
"Bin",
""
]
] |
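How marking-points might aggregate neighboring information can be sketched with a generic single-head attention layer over their descriptors; this illustrates the attentional-aggregation idea, not the published architecture, and the feature dimension is an assumption.

```python
import torch
import torch.nn as nn

class MarkingPointAttention(nn.Module):
    """Dot-product attention over a fully connected graph of marking-point
    descriptors, so each point aggregates features from all others."""
    def __init__(self, dim=64):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.scale = dim ** -0.5
    def forward(self, x):                      # x: (num_points, dim)
        attn = torch.softmax(self.q(x) @ self.k(x).T * self.scale, dim=-1)
        return x + attn @ self.v(x)            # residual aggregation

points = torch.randn(6, 64)                    # descriptors of 6 marking-points
print(MarkingPointAttention()(points).shape)   # torch.Size([6, 64])
```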
2104.02606 | Tanzila Rahman | Tanzila Rahman, Leonid Sigal | Weakly-supervised Audio-visual Sound Source Detection and Separation | 4 figures, 6 pages | IEEE International Conference on Multimedia and Expo (ICME) 2021 | null | null | cs.CV cs.SD eess.AS eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning how to localize and separate individual object sounds in the audio
channel of the video is a difficult task. Current state-of-the-art methods
predict audio masks from artificially mixed spectrograms, known as
Mix-and-Separate framework. We propose an audio-visual co-segmentation, where
the network learns both what individual objects look and sound like, from
videos labeled with only object labels. Unlike other recent visually-guided
audio source separation frameworks, our architecture can be learned in an
end-to-end manner and requires no additional supervision or bounding box
proposals. Specifically, we introduce weakly-supervised object segmentation in
the context of sound separation. We also formulate spectrogram mask prediction
using a set of learned mask bases, which are combined using coefficients
conditioned on the output of object segmentation, a design that facilitates separation.
Extensive experiments on the MUSIC dataset show that our proposed approach
outperforms state-of-the-art methods on visually guided sound source separation
and sound denoising.
| [
{
"created": "Thu, 25 Mar 2021 10:17:55 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Rahman",
"Tanzila",
""
],
[
"Sigal",
"Leonid",
""
]
] |
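The Mix-and-Separate framework mentioned above reduces to a few lines: artificially mix two sources' spectrograms and form the ratio mask that a separation network would be trained to predict. The sketch below uses synthetic tones in place of real object sounds.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
src1 = np.sin(2 * np.pi * 440 * t)            # stand-ins for two object sounds
src2 = np.sign(np.sin(2 * np.pi * 220 * t))

_, _, S1 = stft(src1, fs)
_, _, S2 = stft(src2, fs)
mix = S1 + S2                                 # artificially mixed spectrogram

# Ideal ratio mask for source 1: the regression target a separation network
# learns to predict from the mixture (guided here by the visual stream).
mask1 = np.abs(S1) / (np.abs(S1) + np.abs(S2) + 1e-8)
_, est1 = istft(mask1 * mix, fs)              # masked mixture -> estimated source
print(est1.shape, float(mask1.max()) <= 1.0)
```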
2104.02640 | TrungTin Nguyen | TrungTin Nguyen, Hien Duy Nguyen, Faicel Chamroukhi and Florence
Forbes | A non-asymptotic approach for model selection via penalization in
high-dimensional mixture of experts models | To appear, Electronic Journal of Statistics, 2022 | Electronic Journal of Statistics 2022 | 10.1214/22-EJS2057 | 16 (2) 4742 - 4822 | math.ST cs.AI cs.LG stat.ME stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixtures of experts (MoE) are a popular class of statistical and machine
learning models that have gained attention over the years due to their
flexibility and efficiency. In this work, we consider Gaussian-gated localized
MoE (GLoME) and block-diagonal covariance localized MoE (BLoME) regression
models to present nonlinear relationships in heterogeneous data with potential
hidden graph-structured interactions between high-dimensional predictors. These
models pose difficult statistical estimation and model selection questions,
both from a computational and theoretical perspective. This paper is devoted to
the study of the problem of model selection among a collection of GLoME or
BLoME models characterized by the number of mixture components, the complexity
of Gaussian mean experts, and the hidden block-diagonal structures of the
covariance matrices, in a penalized maximum likelihood estimation framework. In
particular, we establish non-asymptotic risk bounds that take the form of weak
oracle inequalities, provided that lower bounds for the penalties hold. The
good empirical behavior of our models is then demonstrated on synthetic and
real datasets.
| [
{
"created": "Tue, 6 Apr 2021 16:24:55 GMT",
"version": "v1"
},
{
"created": "Wed, 11 May 2022 20:38:29 GMT",
"version": "v2"
},
{
"created": "Sun, 4 Sep 2022 14:45:19 GMT",
"version": "v3"
}
] | 2022-09-29 | [
[
"Nguyen",
"TrungTin",
""
],
[
"Nguyen",
"Hien Duy",
""
],
[
"Chamroukhi",
"Faicel",
""
],
[
"Forbes",
"Florence",
""
]
] |
2104.02653 | Ad\'in Ram\'irez Rivera | Miguel Rodr\'iguez Santander, Juan Hern\'andez Albarrac\'in, Ad\'in
Ram\'irez Rivera | On the Pitfalls of Learning with Limited Data: A Facial Expression
Recognition Case Study | To appear in Expert Systems with Applications | Expert Syst. Appl. 2021, 18 (1) 114991 | 10.1016/j.eswa.2021.114991 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning models need large amounts of data for training. In video
recognition and classification, significant advances were achieved with the
introduction of new large databases. However, the creation of large databases
for training is infeasible in several scenarios. Thus, existing or small
collected databases are typically joined and amplified to train these models.
Nevertheless, training neural networks on limited data is not straightforward
and comes with a set of problems. In this paper, we explore the effects of
stacking databases, model initialization, and data amplification techniques
when training with limited data on deep learning models' performance. We
focused on the problem of Facial Expression Recognition from videos. We
performed an extensive study with four databases of different complexity and
nine deep-learning architectures for video classification. We found that (i)
complex training sets translate better to more stable test sets when trained
with transfer learning and synthetically generated data, but their performance
yields a high variance; (ii) training with more detailed data translates to
more stable performance on novel scenarios (albeit with lower performance);
(iii) merging heterogeneous data is not a straightforward improvement, as the
type of augmentation and initialization is crucial; (iv) classical data
augmentation cannot fill the holes created by joining largely separated
datasets; and (v) inductive biases help to bridge the gap when paired with
synthetic data, but this data is not enough when working with standard
initialization techniques.
| [
{
"created": "Fri, 2 Apr 2021 18:53:41 GMT",
"version": "v1"
}
] | 2021-07-05 | [
[
"Santander",
"Miguel Rodríguez",
""
],
[
"Albarracín",
"Juan Hernández",
""
],
[
"Rivera",
"Adín Ramírez",
""
]
] |
Fran\c{c}ois Mercier | Efficient transfer learning for NLP with ELECTRA | Submission for ML Reproducibility Challenge 2020 | Machine Learning Reproducibility Challenge 2020 | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Clark et al. [2020] claim that the ELECTRA approach is highly efficient in
NLP performance relative to computation budget. As such, this reproducibility
study focuses on this claim, summarized by the following question: can we use
ELECTRA to achieve close-to-SOTA performance for NLP in low-resource settings,
in terms of compute cost?
| [
{
"created": "Tue, 6 Apr 2021 19:34:36 GMT",
"version": "v1"
}
] | 2021-04-09 | [
[
"Mercier",
"François",
""
]
] |
2104.02874 | XingJiao Wu | Xingjiao Wu, Ziling Hu, Xiangcheng Du, Jing Yang, Liang He | Document Layout Analysis via Dynamic Residual Feature Fusion | 7 pages, 6 figures | IEEE ICME 2021 ORAL | 10.1109/ICME51207.2021.9428465 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The document layout analysis (DLA) aims to split the document image into
different regions of interest and understand the role of each region, which has
wide applications such as optical character recognition (OCR) systems and
document retrieval. However, it is challenging to build a DLA system because
training data is very limited and efficient models are lacking. In this paper,
we propose an end-to-end united network named Dynamic Residual Fusion Network
(DRFN) for the DLA task. Specifically, we design a dynamic residual feature
fusion module which can fully utilize low-dimensional information and maintain
high-dimensional category information. Besides, to deal with the model
overfitting problem that is caused by lacking enough data, we propose the
dynamic select mechanism for efficient fine-tuning with limited training data. We
experiment with two challenging datasets and demonstrate the effectiveness of
the proposed module.
| [
{
"created": "Wed, 7 Apr 2021 02:57:09 GMT",
"version": "v1"
}
] | 2022-02-15 | [
[
"Wu",
"Xingjiao",
""
],
[
"Hu",
"Ziling",
""
],
[
"Du",
"Xiangcheng",
""
],
[
"Yang",
"Jing",
""
],
[
"He",
"Liang",
""
]
] |
2104.03042 | Akhil Mathur | Akhil Mathur, Daniel J. Beutel, Pedro Porto Buarque de Gusm\~ao,
Javier Fernandez-Marques, Taner Topal, Xinchi Qiu, Titouan Parcollet, Yan
Gao, Nicholas D. Lane | On-device Federated Learning with Flower | Accepted at the 2nd On-device Intelligence Workshop @ MLSys 2021.
arXiv admin note: substantial text overlap with arXiv:2007.14390 | On-device Intelligence Workshop at the Fourth Conference on
Machine Learning and Systems (MLSys), April 9, 2021 | null | null | cs.LG cs.AI cs.DC | http://creativecommons.org/licenses/by/4.0/ | Federated Learning (FL) allows edge devices to collaboratively learn a shared
prediction model while keeping their training data on the device, thereby
decoupling the ability to do machine learning from the need to store data in
the cloud. Despite the algorithmic advancements in FL, the support for
on-device training of FL algorithms on edge devices remains poor. In this
paper, we present an exploration of on-device FL on various smartphones and
embedded devices using the Flower framework. We also evaluate the system costs
of on-device FL and discuss how this quantification could be used to design
more efficient FL algorithms.
| [
{
"created": "Wed, 7 Apr 2021 10:42:14 GMT",
"version": "v1"
}
] | 2021-04-08 | [
[
"Mathur",
"Akhil",
""
],
[
"Beutel",
"Daniel J.",
""
],
[
"de Gusmão",
"Pedro Porto Buarque",
""
],
[
"Fernandez-Marques",
"Javier",
""
],
[
"Topal",
"Taner",
""
],
[
"Qiu",
"Xinchi",
""
],
[
"Parcollet",
"Titouan",
""
],
[
"Gao",
"Yan",
""
],
[
"Lane",
"Nicholas D.",
""
]
] |
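For the Flower record above, a minimal client sketch. fl.client.NumPyClient and start_numpy_client are real Flower APIs, though their exact signatures have shifted across versions; the single numpy weight vector below is a stand-in for a real on-device model.

    import numpy as np
    import flwr as fl

    class ToyClient(fl.client.NumPyClient):
        def __init__(self):
            self.weights = np.zeros(10, dtype=np.float32)

        def get_parameters(self, config=None):
            return [self.weights]

        def fit(self, parameters, config):
            # Stand-in for local training on the device's private data.
            self.weights = parameters[0] - 0.1
            return [self.weights], 100, {}   # updated params, num examples, metrics

        def evaluate(self, parameters, config):
            loss = float(np.mean(parameters[0] ** 2))
            return loss, 100, {"loss": loss}

    # On a device, connect to the federation server (address is illustrative):
    # fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=ToyClient())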
2104.03054 | Immanuel Weber | Immanuel Weber, Jens Bongartz, Ribana Roscher | Artificial and beneficial -- Exploiting artificial images for aerial
vehicle detection | 14 pages, 13 figures, 4 tables | ISPRS Journal of Photogrammetry and Remote Sensing, Volume 175,
May 2021, Pages 158-170 | 10.1016/j.isprsjprs.2021.02.015 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Object detection in aerial images is an important task in environmental,
economic, and infrastructure-related applications. One of the most prominent
applications is the detection of vehicles, for which deep learning approaches
are increasingly used. A major challenge in such approaches is the limited
amount of data that arises, for example, when more specialized and rarer
vehicles such as agricultural machinery or construction vehicles are to be
detected. This lack of data contrasts with the enormous data hunger of deep
learning methods in general and object recognition in particular. In this
article, we address this issue in the context of the detection of road vehicles
in aerial images. To overcome the lack of annotated data, we propose a
generative approach that generates top-down images by overlaying artificial
vehicles created from 2D CAD drawings on artificial or real backgrounds. Our
experiments with a modified RetinaNet object detection network show that adding
these images to small real-world datasets significantly improves detection
performance. In cases of very limited or even no real-world images, we observe
an improvement in average precision of up to 0.70 points. We address the
remaining performance gap to real-world datasets by analyzing the effect of the
image composition of background and objects and give insights into the
importance of background.
| [
{
"created": "Wed, 7 Apr 2021 11:06:15 GMT",
"version": "v1"
}
] | 2021-04-08 | [
[
"Weber",
"Immanuel",
""
],
[
"Bongartz",
"Jens",
""
],
[
"Roscher",
"Ribana",
""
]
] |
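A hedged sketch of the compositing idea in the record above: overlay an RGBA vehicle cut-out (e.g. rendered from a 2D CAD drawing) onto a background at a random position and keep the box as a label. File names are placeholders; the paper's generator is considerably more elaborate.

    import random
    from PIL import Image

    background = Image.open("background.jpg").convert("RGB")     # placeholder file
    vehicle = Image.open("vehicle_render.png").convert("RGBA")   # CAD-based cut-out

    x = random.randint(0, background.width - vehicle.width)
    y = random.randint(0, background.height - vehicle.height)
    background.paste(vehicle, (x, y), mask=vehicle)  # alpha channel as paste mask

    bbox = (x, y, x + vehicle.width, y + vehicle.height)  # detection ground truth
    background.save("composite.jpg")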
2104.03068 | Moi Hoon Yap | Moi Hoon Yap and Bill Cassidy and Joseph M. Pappachan and Claire
O'Shea and David Gillespie and Neil Reeves | Analysis Towards Classification of Infection and Ischaemia of Diabetic
Foot Ulcers | 4 pages, 6 figures and 3 tables | Conference: 2021 IEEE EMBS International Conference on Biomedical
and Health Informatics (BHI) | 10.1109/BHI50953.2021.9508563 | July 2021 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces the Diabetic Foot Ulcers dataset (DFUC2021) for
analysis of pathology, focusing on infection and ischaemia. We describe the
data preparation of DFUC2021 for ground truth annotation, data curation and
data analysis. The final release of DFUC2021 consists of 15,683 DFU patches,
with 5,955 for training, 5,734 for testing, and 3,994 unlabeled DFU patches. The
ground truth labels are four classes, i.e. control, infection, ischaemia and
both conditions. We curate the dataset using image hashing techniques and
analyse the separability using UMAP projection. We benchmark the performance of
five key backbones of deep learning, i.e. VGG16, ResNet101, InceptionV3,
DenseNet121 and EfficientNet on DFUC2021. We report the optimised results of
these key backbones with different strategies. Based on our observations, we
conclude that EfficientNetB0 with data augmentation and transfer learning
provided the best results for multi-class (4-class) classification with
macro-average Precision, Recall and F1-score of 0.57, 0.62 and 0.55,
respectively. In ischaemia and infection recognition, when trained on
one-versus-all, EfficientNetB0 achieved comparable results with the state of
the art. Finally, we interpret the results with statistical analysis and
Grad-CAM visualisation.
| [
{
"created": "Wed, 7 Apr 2021 11:38:57 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Jun 2021 06:49:30 GMT",
"version": "v2"
}
] | 2021-08-16 | [
[
"Yap",
"Moi Hoon",
""
],
[
"Cassidy",
"Bill",
""
],
[
"Pappachan",
"Joseph M.",
""
],
[
"O'Shea",
"Claire",
""
],
[
"Gillespie",
"David",
""
],
[
"Reeves",
"Neil",
""
]
] |
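One plausible reading of the "image hashing" curation step in the record above, sketched with the imagehash library: near-duplicate patches fall within a small Hamming distance of an existing perceptual hash and are dropped. The threshold and directory name are assumptions.

    from pathlib import Path
    from PIL import Image
    import imagehash

    seen, kept = [], []
    for path in sorted(Path("dfu_patches").glob("*.jpg")):   # placeholder folder
        h = imagehash.phash(Image.open(path))
        if all(h - prev > 4 for prev in seen):  # Hamming-distance threshold (assumed)
            seen.append(h)
            kept.append(path)
    print(f"kept {len(kept)} near-duplicate-free patches")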
2104.03109 | Yilin Liu | Yilin Liu, Ke Xie, and Hui Huang | VGF-Net: Visual-Geometric Fusion Learning for Simultaneous Drone
Navigation and Height Mapping | Accepted by CVM 2021 | Graphical Models 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drone navigation requires a comprehensive understanding of both visual
and geometric information in the 3D world. In this paper, we present a
Visual-Geometric Fusion Network (VGF-Net), a deep network for the fusion
analysis of visual/geometric data and the construction of 2.5D height maps for
simultaneous drone navigation in novel environments. Given an initial rough
height map and a sequence of RGB images, our VGF-Net extracts the visual
information of the scene, along with a sparse set of 3D keypoints that capture
the geometric relationship between objects in the scene. Driven by the data,
VGF-Net adaptively fuses visual and geometric information, forming a unified
Visual-Geometric Representation. This representation is fed to a new
Directional Attention Model (DAM), which helps enhance the visual-geometric
object relationship and propagates the informative data to dynamically refine
the height map and the corresponding keypoints. An entire end-to-end
information fusion and mapping system is formed, demonstrating remarkable
robustness and high accuracy on the autonomous drone navigation across complex
indoor and large-scale outdoor scenes. The dataset can be found in
http://vcc.szu.edu.cn/research/2021/VGFNet.
| [
{
"created": "Wed, 7 Apr 2021 13:18:40 GMT",
"version": "v1"
}
] | 2021-04-08 | [
[
"Liu",
"Yilin",
""
],
[
"Xie",
"Ke",
""
],
[
"Huang",
"Hui",
""
]
] |
2104.03154 | Lucas Schott | Lucas Schott, Hatem Hajri, Sylvain Lamprier | Improving Robustness of Deep Reinforcement Learning Agents: Environment
Attack based on the Critic Network | 8 pages, 8 figures | 2022 International Joint Conference on Neural Networks (IJCNN),
2022, pp. 1-8 | 10.1109/IJCNN55064.2022.9892901 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To improve the policy robustness of deep reinforcement learning agents, a line
of recent works focuses on producing disturbances of the environment. The
existing approaches in the literature for generating meaningful disturbances
of the environment are adversarial reinforcement learning methods. These
methods set the problem as a two-player game between the protagonist agent,
which learns to perform a task in an environment, and the adversary agent,
which learns to disturb the protagonist via modifications of the considered
environment. Both protagonist and adversary are trained with deep
reinforcement learning algorithms. Alternatively, we propose in this paper to
build on gradient-based adversarial attacks, typically used for classification
tasks, which we apply to the critic network of the protagonist to identify
efficient disturbances of the environment. Rather than learning an attacker
policy, which usually proves very complex and unstable, we leverage the
knowledge of the critic network of the protagonist to dynamically complexify
the task at each step of the learning process. We show that our method, while
being faster and lighter, leads to significantly better improvements in policy
robustness than existing methods in the literature.
| [
{
"created": "Wed, 7 Apr 2021 14:37:23 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Feb 2022 09:52:41 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Oct 2022 14:33:54 GMT",
"version": "v3"
}
] | 2022-10-04 | [
[
"Schott",
"Lucas",
""
],
[
"Hajri",
"Hatem",
""
],
[
"Lamprier",
"Sylvain",
""
]
] |
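A hedged sketch of the core mechanism in the record above: differentiate the protagonist's critic with respect to the observation and take an FGSM-style step against the value gradient, i.e. towards a lower-value, harder state. How the perturbed state is written back into the environment is paper-specific and omitted here.

    import torch

    def critic_attack(critic, state, epsilon=0.05):
        """Perturb `state` to reduce the critic's predicted value V(s)."""
        state = state.clone().detach().requires_grad_(True)
        value = critic(state).sum()      # scalar so backward() is well defined
        value.backward()
        return (state - epsilon * state.grad.sign()).detach()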
2104.03189 | Tunazzina Islam | Tunazzina Islam, Dan Goldwasser | Analysis of Twitter Users' Lifestyle Choices using Joint Embedding Model | accepted at 15th International AAAI Conference on Web and Social
Media (ICWSM-2021), 12 pages. Minor changes for camera-ready version | Proceedings of the International AAAI Conference on Web and Social
Media. 15, 1 (May 2021), 242-253 | 10.1609/icwsm.v15i1.18057 | null | cs.CL cs.AI cs.CY cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiview representation learning of data can help construct coherent and
contextualized users' representations on social media. This paper suggests a
joint embedding model, incorporating users' social and textual information to
learn contextualized user representations used for understanding their
lifestyle choices. We apply our model to tweets related to two lifestyle
activities, `Yoga' and `Keto diet' and use it to analyze users' activity type
and motivation. We explain the data collection and annotation process in detail
and provide an in-depth analysis of users from different classes based on their
Twitter content. Our experiments show that our model results in performance
improvements in both domains.
| [
{
"created": "Wed, 7 Apr 2021 15:29:36 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Apr 2021 15:36:19 GMT",
"version": "v2"
},
{
"created": "Tue, 4 May 2021 18:14:32 GMT",
"version": "v3"
}
] | 2023-07-04 | [
[
"Islam",
"Tunazzina",
""
],
[
"Goldwasser",
"Dan",
""
]
] |
2104.03236 | Herv\'e Le Borgne | Omar Adjali and Romaric Besan\c{c}on and Olivier Ferret and Herve Le
Borgne and Brigitte Grau | Multimodal Entity Linking for Tweets | null | In: Jose J. et al. (eds) Advances in Information Retrieval. ECIR
2020. Lecture Notes in Computer Science, vol 12035. Springer, Cham | 10.1007/978-3-030-45439-5_31 | null | cs.IR cs.CL cs.MM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In many information extraction applications, entity linking (EL) has emerged
as a crucial task that allows leveraging information about named entities from
a knowledge base. In this paper, we address the task of multimodal entity
linking (MEL), an emerging research field in which textual and visual
information is used to map an ambiguous mention to an entity in a knowledge
base (KB). First, we propose a method for building a fully annotated Twitter
dataset for MEL, where entities are defined in a Twitter KB. Then, we propose a
model for jointly learning a representation of both mentions and entities from
their textual and visual contexts. We demonstrate the effectiveness of the
proposed model by evaluating it on the proposed dataset and highlight the
importance of leveraging visual information when it is available.
| [
{
"created": "Wed, 7 Apr 2021 16:40:23 GMT",
"version": "v1"
}
] | 2021-04-08 | [
[
"Adjali",
"Omar",
""
],
[
"Besançon",
"Romaric",
""
],
[
"Ferret",
"Olivier",
""
],
[
"Borgne",
"Herve Le",
""
],
[
"Grau",
"Brigitte",
""
]
] |
2104.03252 | Maaike Van Roy | Maaike Van Roy, Pieter Robberechts, Wen-Chi Yang, Luc De Raedt, Jesse
Davis | Leaving Goals on the Pitch: Evaluating Decision Making in Soccer | Add missing funding | 2021 MIT Sloan Sports Analytics Conference | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Analysis of the popular expected goals (xG) metric in soccer has determined
that a (slightly) smaller number of high-quality attempts will likely yield
more goals than a slew of low-quality ones. This observation has driven a
change in shooting behavior. Teams are passing up on shots from outside the
penalty box, in the hopes of generating a better shot closer to goal later on.
This paper evaluates whether this decrease in long-distance shots is warranted.
Therefore, we propose a novel generic framework to reason about decision-making
in soccer by combining techniques from machine learning and artificial
intelligence (AI). First, we model how a team has behaved offensively over the
course of two seasons by learning a Markov Decision Process (MDP) from event
stream data. Second, we apply reasoning techniques from the AI literature
on verification to each team's MDP. This allows us to reason about the efficacy
of certain potential decisions by posing counterfactual questions to the MDP.
Our key conclusion is that teams would score more goals if they shot more often
from outside the penalty box in a small number of team-specific locations. The
proposed framework can easily be extended and applied to analyze other aspects
of the game.
| [
{
"created": "Wed, 7 Apr 2021 16:56:31 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Feb 2023 10:31:20 GMT",
"version": "v2"
}
] | 2023-02-17 | [
[
"Van Roy",
"Maaike",
""
],
[
"Robberechts",
"Pieter",
""
],
[
"Yang",
"Wen-Chi",
""
],
[
"De Raedt",
"Luc",
""
],
[
"Davis",
"Jesse",
""
]
] |
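A toy sketch of the framework in the record above, with made-up numbers: a tiny MDP over pitch zones, policy evaluation by fixed-point iteration, and a counterfactual query comparing shot policies. Real event-stream estimation is far richer.

    import numpy as np

    # Zones 0 (far) to 2 (near goal); a shot ends the possession.
    P_shot = {0: 0.03, 1: 0.08, 2: 0.15}                  # goal probability per zone
    P_move = np.array([[0.0, 0.7, 0.3],                   # zone-to-zone move model
                       [0.0, 0.0, 1.0],
                       [0.0, 0.0, 0.0]])

    def goal_prob(shoot_from, start=0, iters=100):
        v = np.zeros(3)                                   # value = P(goal) per zone
        for _ in range(iters):                            # iterate to a fixed point
            for s in range(3):
                v[s] = P_shot[s] if s in shoot_from else P_move[s] @ v
        return v[start]

    print(goal_prob({2}), goal_prob({0, 2}))  # "what if we also shot from zone 0?"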
2104.03308 | Prune Truong | Prune Truong and Martin Danelljan and Fisher Yu and Luc Van Gool | Warp Consistency for Unsupervised Learning of Dense Correspondences | Accepted to ICCV 2021 as an ORAL! | 2021 IEEE/CVF International Conference on Computer Vision (ICCV) | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The key challenge in learning dense correspondences lies in the lack of
ground-truth matches for real image pairs. While photometric consistency losses
provide unsupervised alternatives, they struggle with large appearance changes,
which are ubiquitous in geometric and semantic matching tasks. Moreover,
methods relying on synthetic training pairs often suffer from poor
generalisation to real data.
We propose Warp Consistency, an unsupervised learning objective for dense
correspondence regression. Our objective is effective even in settings with
large appearance and view-point changes. Given a pair of real images, we first
construct an image triplet by applying a randomly sampled warp to one of the
original images. We derive and analyze all flow-consistency constraints arising
between the triplet. From our observations and empirical results, we design a
general unsupervised objective employing two of the derived constraints. We
validate our warp consistency loss by training three recent dense
correspondence networks for the geometric and semantic matching tasks. Our
approach sets a new state-of-the-art on several challenging benchmarks,
including MegaDepth, RobotCar and TSS. Code and models are at
github.com/PruneTruong/DenseMatching.
| [
{
"created": "Wed, 7 Apr 2021 17:58:22 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Apr 2021 13:06:59 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Aug 2021 14:08:18 GMT",
"version": "v3"
}
] | 2021-08-19 | [
[
"Truong",
"Prune",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Yu",
"Fisher",
""
],
[
"Van Gool",
"Luc",
""
]
] |
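A sketch of the triplet construction from the record above: sample a random warp W (here a small random homography via OpenCV), apply it to one image of a real pair, and keep W so that flow constraints between the warped and original image are known. The jitter magnitude is an assumption.

    import cv2
    import numpy as np

    def random_warp_triplet(img1, img2, jitter=30):
        h, w = img2.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        dst = src + np.random.uniform(-jitter, jitter, src.shape).astype(np.float32)
        H = cv2.getPerspectiveTransform(src, dst)          # the known warp W
        img2_warped = cv2.warpPerspective(img2, H, (w, h))
        return img1, img2, img2_warped, H                  # (I, I', W(I'), W)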
2104.03372 | Benjamin Doerr | Benjamin Doerr and Timo K\"otzing | Lower Bounds from Fitness Levels Made Easy | Extended version of a paper appearing in the proceedings of GECCO
2021 | Algorithmica 86(2): 367-395 (2024) | 10.1007/s00453-022-00952-w | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the first and easy to use techniques for proving run time bounds for
evolutionary algorithms is the so-called method of fitness levels by Wegener.
It uses a partition of the search space into a sequence of levels which are
traversed by the algorithm in increasing order, possibly skipping levels. An
easy, but often strong upper bound for the run time can then be derived by
adding the reciprocals of the probabilities to leave the levels (or upper
bounds for these). Unfortunately, a similarly effective method for proving
lower bounds has not yet been established. The strongest such method, proposed
by Sudholt (2013), requires a careful choice of the viscosity parameters
$\gamma_{i,j}$, $0 \le i < j \le n$.
In this paper we present two new variants of the method, one for upper and
one for lower bounds. Besides the level leaving probabilities, they only rely
on the probabilities that levels are visited at all. We show that these can be
computed or estimated without greater difficulties and apply our method to
reprove the following known results in an easy and natural way. (i) The precise
run time of the (1+1) EA on \textsc{LeadingOnes}. (ii) A lower bound for the
run time of the (1+1) EA on \textsc{OneMax}, tight apart from an $O(n)$ term.
(iii) A lower bound for the run time of the (1+1) EA on long $k$-paths. We also
prove a tighter lower bound for the run time of the (1+1) EA on jump functions
by showing that, regardless of the jump size, only with probability $O(2^{-n})$
the algorithm can avoid to jump over the valley of low fitness.
| [
{
"created": "Wed, 7 Apr 2021 19:50:53 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Apr 2021 09:54:13 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Apr 2021 08:18:18 GMT",
"version": "v3"
}
] | 2024-08-29 | [
[
"Doerr",
"Benjamin",
""
],
[
"Kötzing",
"Timo",
""
]
] |
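For reference alongside the record above, the classical fitness-level upper bound in LaTeX; the shape of the visit-probability refinement is paraphrased, and the precise conditions are in the paper.

    % Partition the search space into levels $A_1 <_f \dots <_f A_m$ and let
    % $p_i$ lower-bound the probability of leaving level $A_i$ upwards. Then
    \[
      E[T] \;\le\; \sum_{i=1}^{m-1} \frac{1}{p_i}.
    \]
    % The paper's variants additionally use the probability $v_i$ that level
    % $A_i$ is visited at all, yielding bounds built from terms of the form
    % $v_i / p_i$ instead of the uniform $1 / p_i$.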
2104.03419 | Ali Almadan | Ali Almadan and Ajita Rattani | Towards On-Device Face Recognition in Body-worn Cameras | 6 pages | IEEE International Workshop on Biometrics and Forensics (IWBF)
2021 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Face recognition technology related to recognizing identities is widely
adopted in intelligence gathering, law enforcement, surveillance, and consumer
applications. Recently, this technology has been ported to smartphones and
body-worn cameras (BWC). Face recognition technology in body-worn cameras is
used for surveillance, situational awareness, and keeping the officer safe.
Only a handful of academic studies exist in face recognition using the
body-worn camera. A recent study has assembled BWCFace facial image dataset
acquired using a body-worn camera and evaluated the ResNet-50 model for face
identification. However, for real-time inference in resource constraint
body-worn cameras and privacy concerns involving facial images, on-device face
recognition is required. To this end, this study evaluates lightweight
MobileNet-V2, EfficientNet-B0, LightCNN-9 and LightCNN-29 models for face
identification using a body-worn camera. Experiments are performed on the
publicly available BWCFace dataset. The real-time inference is evaluated on
three mobile
devices. The comparative analysis is done with heavy-weight VGG-16 and
ResNet-50 models along with six hand-crafted features to evaluate the trade-off
between the performance and model size. Experimental results suggest the
difference in maximum rank-1 accuracy of lightweight LightCNN-29 over
best-performing ResNet-50 is \textbf{1.85\%} and the reduction in model
parameters is \textbf{23.49M}. Most of the deep models obtained similar
performances at rank-5 and rank-10. The inference time of LightCNNs is 2.1x
faster than other models on mobile devices. The least performance difference of
\textbf{14\%} is noted between LightCNN-29 and Local Phase Quantization (LPQ)
descriptor at rank-1. In most of the experimental settings, lightweight
LightCNN models offered the best trade-off between accuracy and the model size
in comparison to most of the models.
| [
{
"created": "Wed, 7 Apr 2021 22:24:57 GMT",
"version": "v1"
}
] | 2021-04-09 | [
[
"Almadan",
"Ali",
""
],
[
"Rattani",
"Ajita",
""
]
] |
2104.03531 | Zhao Kang | Juncheng Lv and Zhao Kang and Xiao Lu and Zenglin Xu | Pseudo-supervised Deep Subspace Clustering | null | IEEE Transactions on Image Processing 2021 | 10.1109/TIP.2021.3079800 | null | cs.CV cs.AI cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved
impressive performance due to the powerful representation extracted using deep
neural networks while prioritizing categorical separability. However,
self-reconstruction loss of an AE ignores rich useful relation information and
might lead to indiscriminative representation, which inevitably degrades the
clustering performance. It is also challenging to learn high-level similarity
without feeding semantic labels. Another unsolved problem facing DSC is the
huge memory cost due to $n\times n$ similarity matrix, which is incurred by the
self-expression layer between an encoder and decoder. To tackle these problems,
we use pairwise similarity to weigh the reconstruction loss to capture local
structure information, while a similarity is learned by the self-expression
layer. Pseudo-graphs and pseudo-labels, which allow benefiting from uncertain
knowledge acquired during network training, are further employed to supervise
similarity learning. Joint learning and iterative training facilitate
obtaining an overall optimal solution. Extensive experiments on benchmark
datasets
demonstrate the superiority of our approach. By combining with the $k$-nearest
neighbors algorithm, we further show that our method can address the
large-scale and out-of-sample problems.
| [
{
"created": "Thu, 8 Apr 2021 06:25:47 GMT",
"version": "v1"
}
] | 2021-05-17 | [
[
"Lv",
"Juncheng",
""
],
[
"Kang",
"Zhao",
""
],
[
"Lu",
"Xiao",
""
],
[
"Xu",
"Zenglin",
""
]
] |
2104.03668 | Olivier Rukundo | Olivier Rukundo, Marius Pedersen, {\O}istein Hovde | Advanced Image Enhancement Method for Distant Vessels and Structures in
Capsule Endoscopy | 8 pages, 12 figures, 4 tables | Computational and Mathematical Methods in Medicine (CMMM), Volume
2017, Article ID 9813165 | 10.1155/2017/9813165 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper proposes an advanced method for contrast enhancement of capsule
endoscopic images, with the main objective of obtaining sufficient information
about the vessels and structures in more distant (or darker) parts of capsule
endoscopic images. The proposed method (PM) combines two algorithms for the
enhancement of darker and brighter areas of capsule endoscopic images,
respectively. The half-unit weighted bilinear algorithm (HWB) proposed in our
previous work is used to enhance darker areas according to the darker map
content of its HSV's component V. Enhancement of brighter areas is achieved
thanks to the novel thresholded weighted-bilinear algorithm (TWB) developed to
avoid overexposure and enlargement of specular highlight spots while preserving
the hue, in such areas. The TWB performs enhancement operations following a
gradual increment of the brightness of the brighter map content of its HSV's
component V. In other words, the TWB decreases its averaged-weights as the
intensity content of the component V increases. Extensive experimental
demonstrations were conducted, and based on evaluation of the reference and PM
enhanced images, a gastroenterologist ({\O}H) concluded that the PM enhanced
images were the best ones based on the information about the vessels, contrast
in the images, and the view or visibility of the structures in more distant
parts of the capsule endoscopy images.
| [
{
"created": "Thu, 8 Apr 2021 10:37:36 GMT",
"version": "v1"
}
] | 2021-04-09 | [
[
"Rukundo",
"Olivier",
""
],
[
"Pedersen",
"Marius",
""
],
[
"Hovde",
"Øistein",
""
]
] |
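A generic, hedged sketch of the shared mechanism in the record above: enhance the V channel of an HSV image with brightness-dependent weights, lifting darker regions while leaving bright (potentially specular) regions nearly untouched. The actual HWB/TWB weighting schemes are more involved; the gain curve below is an assumption.

    import cv2
    import numpy as np

    img = cv2.imread("capsule_frame.png")                  # placeholder file
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0                                # brightness in [0, 1]

    gain = 1.0 + 0.6 * (1.0 - v)        # weight decreases as intensity grows
    hsv[..., 2] = np.clip(v * gain * 255.0, 0, 255)

    out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    cv2.imwrite("enhanced.png", out)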
2104.03765 | Yonghao Xu | Yonghao Xu, Bo Du, and Liangpei Zhang | Robust Self-Ensembling Network for Hyperspectral Image Classification | null | IEEE Trans. Neural Netw. Learn. Syst., 2022 | 10.1109/TNNLS.2022.3198142 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research has shown the great potential of deep learning algorithms in
the hyperspectral image (HSI) classification task. Nevertheless, training these
models usually requires a large amount of labeled data. Since the collection of
pixel-level annotations for HSI is laborious and time-consuming, developing
algorithms that can yield good performance in the small sample size situation
is of great significance. In this study, we propose a robust self-ensembling
network (RSEN) to address this problem. The proposed RSEN consists of two
subnetworks including a base network and an ensemble network. With the
constraint of both the supervised loss from the labeled data and the
unsupervised loss from the unlabeled data, the base network and the ensemble
network can learn from each other, achieving the self-ensembling mechanism. To
the best of our knowledge, the proposed method is the first attempt to
introduce the self-ensembling technique into the HSI classification task, which
provides a different view on how to utilize the unlabeled data in HSI to assist
the network training. We further propose a novel consistency filter to increase
the robustness of self-ensembling learning. Extensive experiments on three
benchmark HSI datasets demonstrate that the proposed algorithm can yield
competitive performance compared with the state-of-the-art methods. Code is
available online (\url{https://github.com/YonghaoXu/RSEN}).
| [
{
"created": "Thu, 8 Apr 2021 13:33:14 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Sep 2022 14:14:22 GMT",
"version": "v2"
}
] | 2023-08-09 | [
[
"Xu",
"Yonghao",
""
],
[
"Du",
"Bo",
""
],
[
"Zhang",
"Liangpei",
""
]
] |
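A hedged sketch of a self-ensembling mechanism of the kind the record above describes: the ensemble network tracks the base network by exponential moving average, and a consistency loss ties their predictions on unlabeled pixels. Whether RSEN uses exactly this EMA form is an assumption.

    import torch

    @torch.no_grad()
    def ema_update(ensemble_net, base_net, decay=0.99):
        # Ensemble weights = running average of base-network weights.
        for e, b in zip(ensemble_net.parameters(), base_net.parameters()):
            e.mul_(decay).add_(b, alpha=1.0 - decay)

    def consistency_loss(base_logits, ensemble_logits):
        # Unsupervised loss: agreement of class posteriors on unlabeled data.
        p = torch.softmax(base_logits, dim=1)
        q = torch.softmax(ensemble_logits, dim=1)
        return torch.mean((p - q) ** 2)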
2104.03821 | Wei Wang | Wei Wang, Zheng Dang, Yinlin Hu, Pascal Fua and Mathieu Salzmann | Robust Differentiable SVD | IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI) PREPRINT 2021 | IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI) 2021 | 10.1109/TPAMI.2021.3072422 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eigendecomposition of symmetric matrices is at the heart of many computer
vision algorithms. However, the derivatives of the eigenvectors tend to be
numerically unstable, whether using the SVD to compute them analytically or
using the Power Iteration (PI) method to approximate them. This instability
arises in the presence of eigenvalues that are close to each other. This makes
integrating eigendecomposition into deep networks difficult and often results
in poor convergence, particularly when dealing with large matrices.
While this can be mitigated by partitioning the data into small arbitrary
groups, doing so has no theoretical basis and makes it impossible to exploit
the full power of eigendecomposition. In previous work, we mitigated this using
SVD during the forward pass and PI to compute the gradients during the backward
pass. However, the iterative deflation procedure required to compute multiple
eigenvectors using PI tends to accumulate errors and yield inaccurate
gradients. Here, we show that the Taylor expansion of the SVD gradient is
theoretically equivalent to the gradient obtained using PI without relying in
practice on an iterative process and thus yields more accurate gradients. We
demonstrate the benefits of this increased accuracy for image classification
and style transfer.
| [
{
"created": "Thu, 8 Apr 2021 15:04:15 GMT",
"version": "v1"
}
] | 2021-04-09 | [
[
"Wang",
"Wei",
""
],
[
"Dang",
"Zheng",
""
],
[
"Hu",
"Yinlin",
""
],
[
"Fua",
"Pascal",
""
],
[
"Salzmann",
"Mathieu",
""
]
] |
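The instability named in the record above can be made concrete. In standard matrix-backpropagation notation (stated from memory, so to be checked against the paper), the eigendecomposition backward pass for a symmetric $M = U \Lambda U^\top$ contains

    \[
      \frac{\partial L}{\partial M}
      \;=\; U \left( \widetilde{K}^{\top} \circ
            \left( U^{\top} \frac{\partial L}{\partial U} \right) \right) U^{\top}
            \;+\; \text{(a diagonal term from } \partial L / \partial \Lambda \text{)},
      \qquad
      \widetilde{K}_{ij} =
      \begin{cases}
        \dfrac{1}{\lambda_i - \lambda_j}, & i \neq j,\\[6pt]
        0, & i = j,
      \end{cases}
    \]
    % so the gradient blows up whenever two eigenvalues nearly coincide,
    % which is exactly the regime the paper's Taylor-expanded gradient targets.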
2104.03829 | Abhijit Guha Roy | Abhijit Guha Roy, Jie Ren, Shekoofeh Azizi, Aaron Loh, Vivek
Natarajan, Basil Mustafa, Nick Pawlowski, Jan Freyberg, Yuan Liu, Zach
Beaver, Nam Vo, Peggy Bui, Samantha Winter, Patricia MacWilliams, Greg S.
Corrado, Umesh Telang, Yun Liu, Taylan Cemgil, Alan Karthikesalingam, Balaji
Lakshminarayanan, Jim Winkens | Does Your Dermatology Classifier Know What It Doesn't Know? Detecting
the Long-Tail of Unseen Conditions | Under Review, 19 Pages | Medical Image Analysis (2022) | 10.1016/j.media.2021.102274 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We develop and rigorously evaluate a deep learning based system that can
accurately classify skin conditions while detecting rare conditions for which
there is not enough data available for training a confident classifier. We
frame this task as an out-of-distribution (OOD) detection problem. Our novel
approach, hierarchical outlier detection (HOD), assigns multiple abstention
classes for each training outlier class and jointly performs a coarse
classification of inliers vs. outliers, along with fine-grained classification
of the individual classes. We demonstrate the effectiveness of the HOD loss in
conjunction with modern representation learning approaches (BiT, SimCLR, MICLe)
and explore different ensembling strategies for further improving the results.
We perform an extensive subgroup analysis over conditions of varying risk
levels and different skin types to investigate how the OOD detection
performance changes over each subgroup and demonstrate the gains of our
framework in comparison to baselines. Finally, we introduce a cost metric to
approximate downstream clinical impact. We use this cost metric to compare the
proposed method against a baseline system, thereby making a stronger case for
the overall system effectiveness in a real-world deployment scenario.
| [
{
"created": "Thu, 8 Apr 2021 15:15:22 GMT",
"version": "v1"
}
] | 2022-03-31 | [
[
"Roy",
"Abhijit Guha",
""
],
[
"Ren",
"Jie",
""
],
[
"Azizi",
"Shekoofeh",
""
],
[
"Loh",
"Aaron",
""
],
[
"Natarajan",
"Vivek",
""
],
[
"Mustafa",
"Basil",
""
],
[
"Pawlowski",
"Nick",
""
],
[
"Freyberg",
"Jan",
""
],
[
"Liu",
"Yuan",
""
],
[
"Beaver",
"Zach",
""
],
[
"Vo",
"Nam",
""
],
[
"Bui",
"Peggy",
""
],
[
"Winter",
"Samantha",
""
],
[
"MacWilliams",
"Patricia",
""
],
[
"Corrado",
"Greg S.",
""
],
[
"Telang",
"Umesh",
""
],
[
"Liu",
"Yun",
""
],
[
"Cemgil",
"Taylan",
""
],
[
"Karthikesalingam",
"Alan",
""
],
[
"Lakshminarayanan",
"Balaji",
""
],
[
"Winkens",
"Jim",
""
]
] |
2104.03888 | Manuel Carranza-Garc\'ia | Manuel Carranza-Garc\'ia, Pedro Lara-Ben\'itez, Jorge
Garc\'ia-Guti\'errez, Jos\'e C. Riquelme | Enhancing Object Detection for Autonomous Driving by Optimizing Anchor
Generation and Addressing Class Imbalance | null | Neurocomputing, 2021 | 10.1016/j.neucom.2021.04.001 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Object detection has been one of the most active topics in computer vision
for the past years. Recent works have mainly focused on pushing the
state-of-the-art in the general-purpose COCO benchmark. However, the use of
such detection frameworks in specific applications such as autonomous driving
is yet an area to be addressed. This study presents an enhanced 2D object
detector based on Faster R-CNN that is better suited for the context of
autonomous vehicles. Two main aspects are improved: the anchor generation
procedure and the performance drop in minority classes. The default uniform
anchor configuration is not suitable in this scenario due to the perspective
projection of the vehicle cameras. Therefore, we propose a perspective-aware
methodology that divides the image into key regions via clustering and uses
evolutionary algorithms to optimize the base anchors for each of them.
Furthermore, we add a module that enhances the precision of the second-stage
header network by including the spatial information of the candidate regions
proposed in the first stage. We also explore different re-weighting strategies
to address the foreground-foreground class imbalance, showing that the use of a
reduced version of focal loss can significantly improve the detection of
difficult and underrepresented objects in two-stage detectors. Finally, we
design an ensemble model to combine the strengths of the different learning
strategies. Our proposal is evaluated with the Waymo Open Dataset, which is the
most extensive and diverse up to date. The results demonstrate an average
accuracy improvement of 6.13% mAP when using the best single model, and of
9.69% mAP with the ensemble. The proposed modifications over the Faster R-CNN
do not increase computational cost and can easily be extended to optimize other
anchor-based detection frameworks.
| [
{
"created": "Thu, 8 Apr 2021 16:58:31 GMT",
"version": "v1"
}
] | 2021-04-09 | [
[
"Carranza-García",
"Manuel",
""
],
[
"Lara-Benítez",
"Pedro",
""
],
[
"García-Gutiérrez",
"Jorge",
""
],
[
"Riquelme",
"José C.",
""
]
] |
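A hedged sketch of the first ingredient in the record above: cluster ground-truth box centres by vertical image position (a proxy for perspective) to define key regions, then read off per-region anchor statistics. The synthetic boxes and k=3 are illustrative; the paper refines the anchors further with an evolutionary algorithm.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    x1 = rng.uniform(0, 1000, 500)
    y1 = rng.uniform(0, 800, 500)
    w = rng.uniform(20, 60, 500) * (0.5 + y1 / 800)   # boxes grow lower in the image
    boxes = np.stack([x1, y1, x1 + w, y1 + 0.6 * w], axis=1)

    centers_y = ((boxes[:, 1] + boxes[:, 3]) / 2).reshape(-1, 1)
    regions = KMeans(n_clusters=3, n_init=10).fit_predict(centers_y)

    for r in range(3):
        wh = boxes[regions == r, 2:4] - boxes[regions == r, 0:2]
        print(f"region {r}: median anchor (w, h) = {np.median(wh, axis=0)}")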
2104.03893 | Mehrshad Zandigohar | Mehrshad Zandigohar, Mo Han, Mohammadreza Sharif, Sezen Yagmur Gunay,
Mariusz P. Furmanek, Mathew Yarossi, Paolo Bonato, Cagdas Onal, Taskin Padir,
Deniz Erdogmus, Gunar Schirner | Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in
Prosthetic Hand Control | null | Front. Robot. AI 11 (2024) Sec. Biomedical Robotics | 10.3389/frobt.2024.1312554 | null | cs.RO cs.AI cs.CV cs.HC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: For transradial amputees, robotic prosthetic hands promise to
regain the capability to perform daily living activities. Current control
methods based on physiological signals such as electromyography (EMG) are prone
to yielding poor inference outcomes due to motion artifacts, muscle fatigue,
and many more. Vision sensors are a major source of information about the
environment state and can play a vital role in inferring feasible and intended
gestures. However, visual evidence is also susceptible to its own artifacts,
most often due to object occlusion, lighting changes, etc. Multimodal evidence
fusion using physiological and vision sensor measurements is a natural approach
due to the complementary strengths of these modalities. Methods: In this paper,
we present a Bayesian evidence fusion framework for grasp intent inference
using eye-view video, eye-gaze, and EMG from the forearm processed by neural
network models. We analyze individual and fused performance as a function of
time as the hand approaches the object to grasp it. For this purpose, we have
also developed novel data processing and augmentation techniques to train
neural network components. Results: Our results indicate that, on average,
fusion improves the instantaneous upcoming grasp type classification accuracy
while in the reaching phase by 13.66% and 14.8%, relative to EMG (81.64%
non-fused) and visual evidence (80.5% non-fused) individually, resulting in an
overall fusion accuracy of 95.3%. Conclusion: Our experimental data analyses
demonstrate that EMG and visual evidence show complementary strengths, and as a
consequence, fusion of multimodal evidence can outperform each individual
evidence modality at any given time.
| [
{
"created": "Thu, 8 Apr 2021 17:01:19 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Apr 2022 14:52:27 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Jul 2022 19:50:52 GMT",
"version": "v3"
},
{
"created": "Thu, 5 Oct 2023 21:26:48 GMT",
"version": "v4"
},
{
"created": "Tue, 27 Feb 2024 22:49:26 GMT",
"version": "v5"
}
] | 2024-02-29 | [
[
"Zandigohar",
"Mehrshad",
""
],
[
"Han",
"Mo",
""
],
[
"Sharif",
"Mohammadreza",
""
],
[
"Gunay",
"Sezen Yagmur",
""
],
[
"Furmanek",
"Mariusz P.",
""
],
[
"Yarossi",
"Mathew",
""
],
[
"Bonato",
"Paolo",
""
],
[
"Onal",
"Cagdas",
""
],
[
"Padir",
"Taskin",
""
],
[
"Erdogmus",
"Deniz",
""
],
[
"Schirner",
"Gunar",
""
]
] |
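A hedged sketch of Bayesian evidence fusion as the record above uses the term: under a conditional-independence assumption, per-modality class posteriors combine by adding log-probabilities (dividing once by the prior) and renormalising. The paper's neural models and time-dependent analysis are omitted.

    import numpy as np

    def fuse_posteriors(p_emg, p_vision, prior=None):
        n = len(p_emg)
        prior = np.full(n, 1.0 / n) if prior is None else prior
        log_post = np.log(p_emg) + np.log(p_vision) - np.log(prior)
        post = np.exp(log_post - log_post.max())    # stabilised renormalisation
        return post / post.sum()

    # Three grasp types; each modality outputs its own posterior.
    print(fuse_posteriors(np.array([0.6, 0.3, 0.1]),
                          np.array([0.5, 0.1, 0.4])))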
2104.03928 | Vinodkumar Prabhakaran | Vinodkumar Prabhakaran, Marek Rei, Ekaterina Shutova | How Metaphors Impact Political Discourse: A Large-Scale Topic-Agnostic
Study Using Neural Metaphor Detection | Published at ICWSM 2021. Please cite that version for academic
publications | The International AAAI Conference on Web and Social Media (ICWSM)
2021 | null | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metaphors are widely used in political rhetoric as an effective framing
device. While the efficacy of specific metaphors such as the war metaphor in
political discourse has been documented before, those studies often rely on
small number of hand-coded instances of metaphor use. Larger-scale
topic-agnostic studies are required to establish the general persuasiveness of
metaphors as a device, and to shed light on the broader patterns that guide
their persuasiveness. In this paper, we present a large-scale data-driven study
of metaphors used in political discourse. We conduct this study on a publicly
available dataset of over 85K posts made by 412 US politicians in their
Facebook public pages, up until Feb 2017. Our contributions are threefold: we
show evidence that metaphor use correlates with ideological leanings in complex
ways that depend on concurrent political events such as winning or losing
elections; we show that posts with metaphors elicit more engagement from their
audience overall even after controlling for various socio-political factors
such as gender and political party affiliation; and finally, we demonstrate
that metaphoricity is indeed the reason for increased engagement of posts,
through a fine-grained linguistic analysis of metaphorical vs. literal usages
of 513 words across 70K posts.
| [
{
"created": "Thu, 8 Apr 2021 17:16:31 GMT",
"version": "v1"
}
] | 2021-04-09 | [
[
"Prabhakaran",
"Vinodkumar",
""
],
[
"Rei",
"Marek",
""
],
[
"Shutova",
"Ekaterina",
""
]
] |
2104.03964 | Ankan Kumar Bhunia | Ankan Kumar Bhunia, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer,
Fahad Shahbaz Khan, Mubarak Shah | Handwriting Transformers | null | ICCV 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose a novel transformer-based styled handwritten text image generation
approach, HWT, that strives to learn both style-content entanglement as well as
global and local writing style patterns. The proposed HWT captures the long and
short range relationships within the style examples through a self-attention
mechanism, thereby encoding both global and local style patterns. Further, the
proposed transformer-based HWT comprises an encoder-decoder attention that
enables style-content entanglement by gathering the style representation of
each query character. To the best of our knowledge, we are the first to
introduce a transformer-based generative network for styled handwritten text
generation. Our proposed HWT generates realistic styled handwritten text images
and significantly outperforms the state-of-the-art demonstrated through
extensive qualitative, quantitative and human-based evaluations. The proposed
HWT can handle arbitrary length of text and any desired writing style in a
few-shot setting. Further, our HWT generalizes well to the challenging scenario
where both words and writing style are unseen during training, generating
realistic styled handwritten text images.
| [
{
"created": "Thu, 8 Apr 2021 17:59:43 GMT",
"version": "v1"
}
] | 2021-08-06 | [
[
"Bhunia",
"Ankan Kumar",
""
],
[
"Khan",
"Salman",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Shah",
"Mubarak",
""
]
] |
2104.04006 | Michail Mamalakis Mr | Michail Mamalakis, Andrew J. Swift, Bart Vorselaars, Surajit Ray,
Simonne Weeks, Weiping Ding, Richard H. Clayton, Louise S. Mackenzie, Abhirup
Banerjee | DenResCov-19: A deep transfer learning network for robust automatic
classification of COVID-19, pneumonia, and tuberculosis from X-rays | null | 2021, Computerized Medical Imaging and Graphics | 10.1016/j.compmedimag.2021.102008 | 102008, 0895-6111 | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | The global pandemic of COVID-19 is continuing to have a significant effect on
the well-being of global population, increasing the demand for rapid testing,
diagnosis, and treatment. Along with COVID-19, other etiologies of pneumonia
and tuberculosis constitute additional challenges to the medical system. In
this regard, the objective of this work is to develop a new deep transfer
learning pipeline to diagnose patients with COVID-19, pneumonia, and
tuberculosis, based on chest X-ray images. We observed that in some instances
DenseNet and ResNet have orthogonal performances. In our proposed model, we
have created an extra layer with convolutional neural network blocks to combine
these two models to establish superior performance over either model. The same
strategy can be useful in other applications where two competing networks with
complementary performance are observed. We have tested the performance of our
proposed network on two-class (pneumonia vs healthy), three-class (including
COVID-19), and four-class (including tuberculosis) classification problems. The
proposed network has been able to successfully classify these lung diseases in
all four datasets and has provided significant improvement over the benchmark
networks of DenseNet, ResNet, and Inception-V3. These novel findings can
deliver a state-of-the-art pre-screening fast-track decision network to detect
COVID-19 and other lung pathologies.
| [
{
"created": "Thu, 8 Apr 2021 18:49:22 GMT",
"version": "v1"
}
] | 2021-11-04 | [
[
"Mamalakis",
"Michail",
""
],
[
"Swift",
"Andrew J.",
""
],
[
"Vorselaars",
"Bart",
""
],
[
"Ray",
"Surajit",
""
],
[
"Weeks",
"Simonne",
""
],
[
"Ding",
"Weiping",
""
],
[
"Clayton",
"Richard H.",
""
],
[
"Mackenzie",
"Louise S.",
""
],
[
"Banerjee",
"Abhirup",
""
]
] |
2104.04029 | Vida Adeli | Vida Adeli, Mahsa Ehsanpour, Ian Reid, Juan Carlos Niebles, Silvio
Savarese, Ehsan Adeli, Hamid Rezatofighi | TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild | null | IEEE/CVF International Conference on Computer Vision, pp.
13390-13400. 2021 | 10.1109/ICCV48922.2021.01314 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Joint forecasting of human trajectory and pose dynamics is a fundamental
building block of various applications ranging from robotics and autonomous
driving to surveillance systems. Predicting body dynamics requires capturing
subtle information embedded in the humans' interactions with each other and
with the objects present in the scene. In this paper, we propose a novel
TRajectory and POse Dynamics (nicknamed TRiPOD) method based on graph
attentional networks to model the human-human and human-object interactions
both in the input space and the output space (decoded future output). The model
is supplemented by a message passing interface over the graphs to fuse these
different levels of interactions efficiently. Furthermore, to incorporate a
real-world challenge, we propose to learn an indicator representing whether an
estimated body joint is visible/invisible at each frame, e.g. due to occlusion
or being outside the sensor field of view. Finally, we introduce a new
benchmark for this joint task based on two challenging datasets (PoseTrack and
3DPW) and propose evaluation metrics to measure the effectiveness of
predictions in the global space, even when there are invisible cases of joints.
Our evaluation shows that TRiPOD outperforms all prior work and
state-of-the-art specifically designed for each of the trajectory and pose
forecasting tasks.
| [
{
"created": "Thu, 8 Apr 2021 20:01:00 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Aug 2021 11:13:18 GMT",
"version": "v2"
}
] | 2022-07-07 | [
[
"Adeli",
"Vida",
""
],
[
"Ehsanpour",
"Mahsa",
""
],
[
"Reid",
"Ian",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Adeli",
"Ehsan",
""
],
[
"Rezatofighi",
"Hamid",
""
]
] |
2104.04076 | Omer Aydin | \"Omer Aydin, Cem Ali Kandemir, Umut Kira\c{c}, Feri\c{s}tah
Dalkili\c{c} | An artificial intelligence and Internet of things based automated
irrigation system | null | International Conference on Computer Technologies and Applications
in Food and Agriculture, 11-12 July 2019, Konya, Turkey. Pages:95-106 | null | null | cs.CY cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Considering the day-by-day decline of water sources around the world, it is
not hard to see that the need for clean water is growing. Potable fresh water
is also used for irrigation, so irrigation should be planned to reduce
freshwater wastage. With the development of technology and the availability of
cheaper and more effective solutions, the efficiency of irrigation has
increased and water loss can be reduced. In particular, Internet of Things
(IoT) devices have begun to be used in all areas. With IoT devices and
sensors, we can easily and precisely collect temperature, humidity, and
mineral values from the irrigation field. Most of the operations and decisions
about irrigation are carried out by people, yet it is hard for people to keep
all the real-time data, such as temperature, moisture, and mineral levels, in
view during the decision-making process; they usually decide based on
experience. In this study, a wide range of information was obtained from the
irrigation field using IoT devices and sensors. The collected data were sent
via communication channels and stored in MongoDB. With the help of the Weka
software, the data were normalized and the normalized data were used as a
learning set. As a result of the examinations, the decision tree (J48)
algorithm with the highest accuracy was chosen and an artificial intelligence
model was created. Its decisions are used to manage operations such as
starting, maintaining, and stopping the irrigation. The accuracy of the
decisions was evaluated and the irrigation system was tested with the results.
The accompanying mobile application provides options to manage and view the
system remotely and manually, and to see the system's decisions.
| [
{
"created": "Thu, 1 Apr 2021 21:05:26 GMT",
"version": "v1"
}
] | 2021-04-12 | [
[
"Aydin",
"Ömer",
""
],
[
"Kandemir",
"Cem Ali",
""
],
[
"Kiraç",
"Umut",
""
],
[
"Dalkiliç",
"Feriştah",
""
]
] |
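For the record above, a sketch of the decision step with scikit-learn's CART tree standing in for Weka's J48 (J48 is C4.5-based, so this is an analogue, not the same algorithm). Sensor values and labels are made up.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Columns: temperature (C), soil moisture (%), mineral level (index).
    X = np.array([[30, 20, 5], [22, 55, 6], [35, 15, 4], [18, 70, 7]])
    y = np.array([1, 0, 1, 0])          # 1 = start irrigation, 0 = do not irrigate

    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
    print(tree.predict([[28, 25, 5]]))  # decision for a new sensor reading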
2104.04123 | Erkan Kayacan | Erdal Kayacan, Erkan Kayacan, Herman Ramon, Okyay Kaynak and Wouter
Saeys | Towards Agrobots: Trajectory Control of an Autonomous Tractor Using
Type-2 Fuzzy Logic Controllers | null | IEEE/ASME Transactions on Mechatronics, vol. 20, no. 1, pp.
287-298, Feb. 2015 | 10.1109/TMECH.2013.2291874 | null | cs.RO cs.AI cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Providing an agricultural vehicle with some autonomous functions would
lighten the operator's job, but in doing so, accuracy should not be lost if an
optimal yield is still to be obtained. Autonomous navigation of an agricultural
vehicle involves the control of different dynamic subsystems, such as the yaw
angle dynamics and the longitudinal speed dynamics. In this study, a
proportional-integral-derivative controller is used to control the longitudinal
velocity of the tractor. For the control of the yaw angle dynamics, a
proportional-derivative controller works in parallel with a type-2 fuzzy neural
network. In such an arrangement, the former ensures the stability of the
related subsystem, while the latter learns the system dynamics and becomes the
leading controller. In this way, instead of modeling the interactions between
the subsystems prior to the design of model-based control, we develop a control
algorithm which learns the interactions online from the measured feedback
error. In addition to the control of the stated subsystems, a kinematic
controller is needed to correct the errors in both the x- and y-axes for
the trajectory tracking problem of the tractor. To demonstrate the real-time
abilities of the proposed control scheme, an autonomous tractor is equipped
with the use of reasonably priced sensors and actuators. Experimental results
show the efficacy and efficiency of the proposed learning algorithm.
| [
{
"created": "Fri, 9 Apr 2021 00:46:23 GMT",
"version": "v1"
}
] | 2021-04-12 | [
[
"Kayacan",
"Erdal",
""
],
[
"Kayacan",
"Erkan",
""
],
[
"Ramon",
"Herman",
""
],
[
"Kaynak",
"Okyay",
""
],
[
"Saeys",
"Wouter",
""
]
] |
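A minimal discrete PID controller of the kind the record above uses for the longitudinal speed loop; gains, sample time, and setpoint are illustrative, and the paper's type-2 fuzzy neural network component is not sketched.

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    speed_pid = PID(kp=1.2, ki=0.4, kd=0.05, dt=0.05)
    throttle = speed_pid.step(setpoint=2.0, measurement=1.6)   # target vs measured m/s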
2104.04517 | Vaibhav Bhat | Vaibhav Bhat, Anita Yadav, Sonal Yadav, Dhivya Chandrasekaran, Vijay
Mago | AdCOFE: Advanced Contextual Feature Extraction in Conversations for
emotion classification | 12 pages, to be published in PeerJ Computer Science Journal | PeerJ Computer Science, 2021 | 10.7717/peerj-cs.786 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Emotion recognition in conversations is an important step in various virtual
chat bots which require opinion-based feedback, like in social media threads,
online support and many more applications. Current Emotion recognition in
conversations models face issues like (a) loss of contextual information in
between two dialogues of a conversation, (b) failure to give appropriate
importance to significant tokens in each utterance and (c) inability to pass on
the emotional information from previous utterances.The proposed model of
Advanced Contextual Feature Extraction (AdCOFE) addresses these issues by
performing unique feature extraction using knowledge graphs, sentiment lexicons
and phrases of natural language at all levels (word and position embedding) of
the utterances. Experiments on the Emotion recognition in conversations dataset
show that AdCOFE is beneficial in capturing emotions in conversations.
| [
{
"created": "Fri, 9 Apr 2021 17:58:19 GMT",
"version": "v1"
}
] | 2021-12-16 | [
[
"Bhat",
"Vaibhav",
""
],
[
"Yadav",
"Anita",
""
],
[
"Yadav",
"Sonal",
""
],
[
"Chandrasekaran",
"Dhivya",
""
],
[
"Mago",
"Vijay",
""
]
] |
2104.04676 | Xutan Peng | Xutan Peng, Guanyi Chen, Chenghua Lin, Mark Stevenson | Highly Efficient Knowledge Graph Embedding Learning with Orthogonal
Procrustes Analysis | To appear at NAACL 2021 | NAACL-HLT 2021 | 10.18653/v1/2021.naacl-main.187 | null | cs.LG cs.AI cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Graph Embeddings (KGEs) have been intensively explored in recent
years due to their promise for a wide range of applications. However, existing
studies focus on improving the final model performance without acknowledging
the computational cost of the proposed approaches, in terms of execution time
and environmental impact. This paper proposes a simple yet effective KGE
framework which can reduce the training time and carbon footprint by orders of
magnitudes compared with state-of-the-art approaches, while producing
competitive performance. We highlight three technical innovations: full batch
learning via relational matrices, closed-form Orthogonal Procrustes Analysis
for KGEs, and non-negative-sampling training. In addition, as the first KGE
method whose entity embeddings also store full relation information, our
trained models encode rich semantics and are highly interpretable.
Comprehensive experiments and ablation studies involving 13 strong baselines
and two standard datasets verify the effectiveness and efficiency of our
algorithm.
| [
{
"created": "Sat, 10 Apr 2021 03:55:45 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Apr 2021 12:17:05 GMT",
"version": "v2"
}
] | 2022-01-25 | [
[
"Peng",
"Xutan",
""
],
[
"Chen",
"Guanyi",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Stevenson",
"Mark",
""
]
] |
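The closed-form Orthogonal Procrustes solution the record above refers to, in NumPy: the orthogonal map W minimising ||XW - Y||_F is W = UV^T, where U S V^T is the SVD of X^T Y. Shapes and random inputs are illustrative.

    import numpy as np

    X = np.random.randn(1000, 300)        # e.g. source-side embeddings
    Y = np.random.randn(1000, 300)        # e.g. target-side embeddings

    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt                            # closed form; no gradient descent needed

    print(np.allclose(W.T @ W, np.eye(300), atol=1e-8))   # W is orthogonal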
2104.04733 | Nadeem Yousaf | Nadeem Yousaf, Sarfaraz Hussein, Waqas Sultani | Estimation of BMI from Facial Images using Semantic Segmentation based
Region-Aware Pooling | Accepted for publication in computers in biology and medicine | Computers in Biology and Medicine Volume 133, June 2021, Pages
104392 | 10.1016/j.compbiomed.2021.104392 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Body-Mass-Index (BMI) conveys important information about one's life such as
health and socio-economic conditions. Large-scale automatic estimation of BMIs
can help predict several societal behaviors such as health, job opportunities,
friendships, and popularity. The recent works have either employed hand-crafted
geometrical face features or face-level deep convolutional neural network
features for face to BMI prediction. The hand-crafted geometrical face feature
lack generalizability and face-level deep features don't have detailed local
information. Although useful, these methods missed the detailed local
information which is essential for exact BMI prediction. In this paper, we
propose to use deep features that are pooled from different face regions (eye,
nose, eyebrow, lips, etc.,) and demonstrate that this explicit pooling from
face regions can significantly boost the performance of BMI prediction. To
address the problem of accurate and pixel-level face regions localization, we
propose to use face semantic segmentation in our framework. Extensive
experiments are performed using different Convolutional Neural Network (CNN)
backbones including FaceNet and VGG-face on three publicly available datasets:
VisualBMI, Bollywood and VIP attributes. Experimental results demonstrate that,
as compared to the recent works, the proposed Reg-GAP gives a percentage
improvement of 22.4\% on VIP-attribute, 3.3\% on VisualBMI, and 63.09\% on the
Bollywood dataset.
| [
{
"created": "Sat, 10 Apr 2021 10:53:21 GMT",
"version": "v1"
}
] | 2021-04-26 | [
[
"Yousaf",
"Nadeem",
""
],
[
"Hussein",
"Sarfaraz",
""
],
[
"Sultani",
"Waqas",
""
]
] |
2104.04739 | Anna Glazkova | Mikhail Kotyushev, Anna Glazkova, Dmitry Morozov | MIPT-NSU-UTMN at SemEval-2021 Task 5: Ensembling Learning with
Pre-trained Language Models for Toxic Spans Detection | Accepted at SemEval-2021 Workshop, ACL-IJCNLP 2021 | Proceedings of the 15th International Workshop on Semantic
Evaluation (SemEval-2021)", pp. 913-918, 2021 | 10.18653/v1/2021.semeval-1.124 | null | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper describes our system for SemEval-2021 Task 5 on Toxic Spans
Detection. We developed ensemble models using BERT-based neural architectures
and post-processing to combine tokens into spans. We evaluated several
pre-trained language models using various ensemble techniques for toxic span
identification and achieved sizable improvements over our baseline fine-tuned
BERT models. Finally, our system obtained an F1-score of 67.55% on test data.
| [
{
"created": "Sat, 10 Apr 2021 11:27:32 GMT",
"version": "v1"
}
] | 2021-08-30 | [
[
"Kotyushev",
"Mikhail",
""
],
[
"Glazkova",
"Anna",
""
],
[
"Morozov",
"Dmitry",
""
]
] |
2104.04748 | Zhengxu Hou | Zhengxu Hou, Bang Liu, Ruihui Zhao, Zijing Ou, Yafei Liu, Xi Chen,
Yefeng Zheng | Imperfect also Deserves Reward: Multi-Level and Sequential Reward
Modeling for Better Dialog Management | 9 pages | NAACL 2021 | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For task-oriented dialog systems, training a Reinforcement Learning (RL)
based Dialog Management module suffers from low sample efficiency and slow
convergence speed due to the sparse rewards in RL. To solve this problem, many
strategies have been proposed to give proper rewards when training RL, but
their rewards lack interpretability and cannot accurately estimate the
distribution of state-action pairs in real dialogs. In this paper, we propose a
multi-level reward modeling approach that factorizes a reward into a
three-level hierarchy: domain, act, and slot. Based on inverse adversarial
reinforcement learning, our designed reward model can provide more accurate and
explainable reward signals for state-action pairs. Extensive evaluations show
that our approach can be applied to a wide range of reinforcement
learning-based dialog systems and significantly improves both the performance
and the speed of convergence.
| [
{
"created": "Sat, 10 Apr 2021 12:20:23 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Hou",
"Zhengxu",
""
],
[
"Liu",
"Bang",
""
],
[
"Zhao",
"Ruihui",
""
],
[
"Ou",
"Zijing",
""
],
[
"Liu",
"Yafei",
""
],
[
"Chen",
"Xi",
""
],
[
"Zheng",
"Yefeng",
""
]
] |
2104.04805 | Kuan-Yu Chen | Fu-Hao Yu and Kuan-Yu Chen | Non-autoregressive Transformer-based End-to-end ASR using BERT | null | in IEEE/ACM Transactions on Audio, Speech, and Language
Processing, vol. 30, pp. 1474-1482, 2022 | 10.1109/TASLP.2022.3166400 | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer-based models have led to significant innovation in classical and
practical subjects as varied as speech processing, natural language processing,
and computer vision. On top of the Transformer, attention-based end-to-end
automatic speech recognition (ASR) models have recently become popular.
Specifically, non-autoregressive modeling, which boasts fast inference and
performance comparable to conventional autoregressive methods, is an emerging
research topic. In the context of natural language processing, the
bidirectional encoder representations from Transformers (BERT) model has
received widespread attention, partially due to its ability to infer
contextualized word representations and to enable superior performance for
downstream tasks while needing only simple fine-tuning. Motivated by the
success, we intend to view speech recognition as a downstream task of BERT,
thus an ASR system is expected to be deduced by performing fine-tuning.
Consequently, to not only inherit the advantages of non-autoregressive ASR
models but also enjoy the benefits of a pre-trained language model (e.g.,
BERT), we propose a non-autoregressive Transformer-based end-to-end ASR model
based on BERT. We conduct a series of experiments on the AISHELL-1 dataset that
demonstrate competitive or superior results for the model when compared to
state-of-the-art ASR systems.
| [
{
"created": "Sat, 10 Apr 2021 16:22:17 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Apr 2022 01:55:24 GMT",
"version": "v2"
},
{
"created": "Wed, 18 May 2022 01:17:16 GMT",
"version": "v3"
}
] | 2022-05-19 | [
[
"Yu",
"Fu-Hao",
""
],
[
"Chen",
"Kuan-Yu",
""
]
] |
2104.04884 | Abu Md Niamul Taufique | Abu Md Niamul Taufique, David W. Messinger | Hyperspectral Pigment Analysis of Cultural Heritage Artifacts Using the
Opaque Form of Kubelka-Munk Theory | 11 pages, 9 figures | Proc. SPIE 10986, Algorithms, Technologies, and Applications for
Multispectral and Hyperspectral Imagery XXV, 1098611, 2019 | 10.1117/12.2518451 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kubelka-Munk (K-M) theory has been successfully used to estimate pigment
concentrations in the pigment mixtures of modern paintings in spectral imagery.
In this study the single-constant K-M theory has been utilized for the
classification of green pigments in the Selden Map of China, a navigational map
of the South China Sea likely created in the early seventeenth century.
Hyperspectral data of the map was collected at the Bodleian Library, University
of Oxford, and can be used to estimate the pigment diversity and spatial
distribution within the map. This work seeks to assess the utility of
analyzing the data in the K/S space from Kubelka-Munk theory, as opposed to the
traditional reflectance domain. We estimate the dimensionality of the data and
extract endmembers in the reflectance domain. Then we perform linear unmixing
to estimate abundances in the K/S space, and following Bai et al. (2017), we
perform a classification in the abundance space. Finally, due to the lack of
ground truth labels, the classification accuracy was estimated by computing the
mean spectrum of each class as the representative signature of that class, and
calculating the root mean squared error with all the pixels in that class to
create a spatial representation of the error. This highlights both the
magnitude of, and any spatial pattern in, the errors, indicating if a
particular pigment is not well modeled in this approach.
| [
{
"created": "Sun, 11 Apr 2021 00:22:37 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Taufique",
"Abu Md Niamul",
""
],
[
"Messinger",
"David W.",
""
]
] |
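As a companion to the Kubelka-Munk record above, here is a small Python sketch of the opaque-form K-M transform K/S = (1 - R)^2 / (2R) and an unconstrained least-squares unmixing step in K/S space. The synthetic data, the clipping epsilon, and the lack of abundance constraints are simplifications, not choices taken from the paper.

import numpy as np

def ks_transform(R, eps=1e-6):
    """Opaque-form Kubelka-Munk transform for reflectance R in (0, 1]."""
    R = np.clip(R, eps, 1.0)          # guard against division by zero
    return (1.0 - R) ** 2 / (2.0 * R)

def unmix_abundances(pixels, endmembers):
    """pixels: (n_pixels, n_bands); endmembers: (n_end, n_bands)."""
    ks_pix = ks_transform(pixels)     # linear mixing is assumed in K/S space
    ks_end = ks_transform(endmembers)
    # least-squares abundances per pixel (no sum-to-one constraint here)
    A, *_ = np.linalg.lstsq(ks_end.T, ks_pix.T, rcond=None)
    return A.T

pix = np.random.uniform(0.05, 0.9, size=(4, 10))
ends = np.random.uniform(0.05, 0.9, size=(2, 10))
print(unmix_abundances(pix, ends).shape)   # (4, 2)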
2104.04916 | Xutan Peng | Xutan Peng, Chenghua Lin, Mark Stevenson | Cross-Lingual Word Embedding Refinement by $\ell_{1}$ Norm Optimisation | To appear at NAACL 2021 | NAACL-HLT 2021 | 10.18653/v1/2021.naacl-main.214 | null | cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages
in a shared high-dimensional space in which vectors representing words with
similar meaning (regardless of language) are closely located. Existing methods
for building high-quality CLWEs learn mappings that minimise the $\ell_{2}$
norm loss function. However, this optimisation objective has been demonstrated
to be sensitive to outliers. Based on the more robust Manhattan norm (aka.
$\ell_{1}$ norm) goodness-of-fit criterion, this paper proposes a simple
post-processing step to improve CLWEs. An advantage of this approach is that it
is fully agnostic to the training process of the original CLWEs and can
therefore be applied widely. Extensive experiments are performed involving ten
diverse languages and embeddings trained on different corpora. Evaluation
results based on bilingual lexicon induction and cross-lingual transfer for
natural language inference tasks show that the $\ell_{1}$ refinement
substantially outperforms four state-of-the-art baselines in both supervised
and unsupervised settings. It is therefore recommended that this strategy be
adopted as a standard for CLWE methods.
| [
{
"created": "Sun, 11 Apr 2021 04:37:54 GMT",
"version": "v1"
}
] | 2022-01-25 | [
[
"Peng",
"Xutan",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Stevenson",
"Mark",
""
]
] |
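A sketch, under stated assumptions, of the l1-norm refinement idea in the CLWE record above: starting from an l2 (least-squares) mapping W0, the Manhattan loss ||XW - Y||_1 is reduced by subgradient steps. The learning rate, step count, and synthetic data are illustrative; the authors' actual optimisation procedure may differ.

import numpy as np

def l1_refine(X, Y, W0, lr=1e-3, steps=500):
    """X, Y: aligned source/target embeddings (n, d); W0: initial mapping."""
    W = W0.copy()
    for _ in range(steps):
        R = X @ W - Y                         # residuals
        grad = X.T @ np.sign(R) / len(X)      # subgradient of the l1 loss
        W -= lr * grad
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
W_true = rng.normal(size=(50, 50))
Y = X @ W_true + rng.laplace(scale=0.1, size=(1000, 50))  # heavy-tailed noise
W = l1_refine(X, Y, W0=np.linalg.lstsq(X, Y, rcond=None)[0])
print(np.abs(X @ W - Y).mean())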
2104.04945 | Tomasz Szandala | Tomasz Szandala | Enhancing Deep Neural Network Saliency Visualizations with Gradual
Extrapolation | Published in IEEE Access:
https://ieeexplore.ieee.org/document/9468713 | IEEE Access, 2021 | 10.1109/ACCESS.2021.3093824 | null | cs.CV cs.AI cs.NE | http://creativecommons.org/licenses/by/4.0/ | In this paper, an enhancement technique for the class activation mapping
methods such as gradient-weighted class activation maps or excitation
backpropagation is proposed to present the visual explanations of decisions
from convolutional neural network-based models. The proposed idea, called
Gradual Extrapolation, can supplement any method that generates a heatmap
picture by sharpening the output. Instead of producing a coarse localization
map that highlights the important predictive regions in the image, the proposed
method outputs the specific shape that most contributes to the model output.
Thus, the proposed method improves the accuracy of saliency maps. The effect
has been achieved by the gradual propagation of the crude map obtained in the
deep layer through all preceding layers with respect to their activations. In
validation tests conducted on a selected set of images, the faithfulness,
interpretability, and applicability of the method are evaluated. The proposed
technique significantly improves the localization of the neural
network's attention at low additional computational cost. Furthermore, the
proposed method is applicable to a variety of deep neural network models. The code
for the method can be found at
https://github.com/szandala/gradual-extrapolation
| [
{
"created": "Sun, 11 Apr 2021 07:39:35 GMT",
"version": "v1"
},
{
"created": "Sun, 27 Jun 2021 21:37:11 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Jul 2021 15:30:26 GMT",
"version": "v3"
}
] | 2021-07-08 | [
[
"Szandala",
"Tomasz",
""
]
] |
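The following Python sketch illustrates one plausible reading of the Gradual Extrapolation record above: a crude deep-layer heatmap is propagated toward the input by repeated upsampling and modulation with each preceding layer's activations. Nearest-neighbour upsampling, channel averaging, and the normalisation are assumptions made for brevity, not the paper's exact procedure (the linked repository holds that).

import numpy as np

def upsample(h, shape):
    """Nearest-neighbour upsampling by integer factors."""
    ry, rx = shape[0] // h.shape[0], shape[1] // h.shape[1]
    return np.kron(h, np.ones((ry, rx)))

def gradual_extrapolation(heatmap, activations):
    """heatmap: (h, w) crude map; activations: list of (H, W, C) feature maps
    ordered from deep to shallow (each spatially larger than the last)."""
    h = heatmap
    for act in activations:
        a = act.mean(axis=-1)            # channel-averaged activation
        h = upsample(h, a.shape) * a     # sharpen using layer activations
        h = h / (h.max() + 1e-8)         # renormalise to [0, 1]
    return h

crude = np.random.rand(7, 7)
acts = [np.random.rand(14, 14, 64), np.random.rand(28, 28, 32)]
print(gradual_extrapolation(crude, acts).shape)    # (28, 28)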
2104.04958 | Mario Di Mauro | Mario Di Mauro, Giovanni Galatro, Giancarlo Fortino, Antonio Liotta | Supervised Feature Selection Techniques in Network Intrusion Detection:
a Critical Review | null | Engineering Applications of Artificial Intelligence Volume 101,
May 2021, 104216 | 10.1016/j.engappai.2021.104216 | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine Learning (ML) techniques are becoming an invaluable support for
network intrusion detection, especially in revealing anomalous flows, which
often hide cyber-threats. Typically, ML algorithms are exploited to
classify/recognize data traffic on the basis of statistical features such as
inter-arrival times, packets length distribution, mean number of flows, etc.
Dealing with the vast diversity and number of features that typically
characterize data traffic is a hard problem. This results in the following
issues: i) the presence of so many features leads to lengthy training processes
(particularly when features are highly correlated), while prediction accuracy
does not proportionally improve; ii) some of the features may introduce bias
during the classification process, particularly those that have little relation
to the data traffic to be classified. To this end, by reducing the feature
space and retaining only the most significant features, Feature Selection (FS)
becomes a crucial pre-processing step in network management and, specifically,
for the purposes of network intrusion detection. In this review paper, we
complement other surveys in multiple ways: i) evaluating more recent datasets
(updated w.r.t. obsolete KDD 99) by means of a designed-from-scratch
Python-based procedure; ii) providing a synopsis of most credited FS approaches
in the field of intrusion detection, including Multi-Objective Evolutionary
techniques; iii) assessing various experimental analyses such as feature
correlation, time complexity, and performance. Our comparisons offer useful
guidelines to network/security managers who are considering the incorporation
of ML concepts into network intrusion detection, where trade-offs between
performance and resource consumption are crucial.
| [
{
"created": "Sun, 11 Apr 2021 08:42:01 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Di Mauro",
"Mario",
""
],
[
"Galatro",
"Giovanni",
""
],
[
"Fortino",
"Giancarlo",
""
],
[
"Liotta",
"Antonio",
""
]
] |
2104.05125 | Evgeny Toropov | Evgeny Toropov, Paola A. Buitrago, Jose M. F. Moura | Shuffler: A Large Scale Data Management Tool for ML in Computer Vision | null | PEARC 2019 Article No 23 | 10.1145/3332186.3333046 | null | cs.CV cs.DC | http://creativecommons.org/licenses/by/4.0/ | Datasets in the computer vision academic research community are primarily
static. Once a dataset is accepted as a benchmark for a computer vision task,
researchers working on this task will not alter it in order to make their
results reproducible. At the same time, when exploring new tasks and new
applications, datasets tend to be an ever changing entity. A practitioner may
combine existing public datasets, filter images or objects in them, change
annotations or add new ones to fit a task at hand, visualize sample images, or
perhaps output statistics in the form of text or plots. In fact, datasets
change as practitioners experiment with data as much as with algorithms, trying
to make the most out of machine learning models. Given that ML and deep
learning call for large volumes of data to produce satisfactory results, it is
no surprise that the resulting data and software management associated with
dealing with live datasets can be quite complex. As far as we know, there is no
flexible, publicly available instrument to facilitate manipulating image data
and their annotations throughout a ML pipeline. In this work, we present
Shuffler, an open source tool that makes it easy to manage large computer
vision datasets. It stores annotations in a relational, human-readable
database. Shuffler defines over 40 data handling operations with annotations
that are commonly useful in supervised learning applied to computer vision and
supports some of the most well-known computer vision datasets. Finally, it is
easily extensible, making the addition of new operations and datasets a task
that is fast and easy to accomplish.
| [
{
"created": "Sun, 11 Apr 2021 22:27:28 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Toropov",
"Evgeny",
""
],
[
"Buitrago",
"Paola A.",
""
],
[
"Moura",
"Jose M. F.",
""
]
] |
2104.05154 | Wenjun Tang | Wenjun Tang, Hao Wang, Xian-Long Lee, Hong-Tzer Yang | Machine Learning Approach to Uncovering Residential Energy Consumption
Patterns Based on Socioeconomic and Smart Meter Data | null | Energy 2021 | null | null | cs.LG cs.AI cs.NA math.NA | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The smart meter data analysis contributes to better planning and operations
for the power system. This study aims to identify the drivers of residential
energy consumption patterns from the socioeconomic perspective based on the
consumption and demographic data using machine learning. We model consumption
patterns by representative loads and reveal the relationship between load
patterns and socioeconomic characteristics. Specifically, we analyze the
real-world smart meter data and extract load patterns by clustering in a robust
way. We further identify the socioeconomic attributes influencing load
patterns to improve our method's interpretability. The relationship between
consumers' load patterns and selected socioeconomic features is characterized
via machine learning models. The findings are as follows. (1) Twelve load
clusters, consisting of six for weekdays and six for weekends, exhibit a
diverse pattern of lifestyle and a difference between weekdays and weekends.
(2) Among various socioeconomic features, age and education level are suggested
to influence the load patterns. (3) Our proposed analytical model using feature
selection and machine learning proves to be more effective than XGBoost and
a conventional neural network model in mapping the relationship between load
patterns and socioeconomic features.
| [
{
"created": "Mon, 12 Apr 2021 01:57:14 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Nov 2021 03:54:46 GMT",
"version": "v2"
}
] | 2021-11-03 | [
[
"Tang",
"Wenjun",
""
],
[
"Wang",
"Hao",
""
],
[
"Lee",
"Xian-Long",
""
],
[
"Yang",
"Hong-Tzer",
""
]
] |
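A minimal sketch of the load-pattern clustering step described in the smart-meter record above, using k-means from scikit-learn on synthetic daily profiles. The six-cluster choice mirrors the paper's finding of six weekday patterns; the data generation and normalisation here are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# fake smart-meter days: one row per household-day, 48 half-hour readings
profiles = rng.gamma(shape=2.0, scale=0.5, size=(500, 48))
profiles /= profiles.sum(axis=1, keepdims=True)  # keep only the daily shape

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(profiles)
print(km.cluster_centers_.shape)   # (6, 48) representative load patterns
print(np.bincount(km.labels_))     # cluster sizes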
2104.05345 | Chunmei Feng | Chun-Mei Feng, Zhanyuan Yang, Geng Chen, Yong Xu, Ling Shao | Dual-Octave Convolution for Accelerated Parallel MR Image Reconstruction | Proceedings of the 35th AAAI Conference on Artificial Intelligence
(AAAI) 2021 | Proceedings of the 35th AAAI Conference on Artificial Intelligence
(AAAI) 2021 | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Magnetic resonance (MR) image acquisition is an inherently prolonged process,
whose acceleration by obtaining multiple undersampled images simultaneously
through parallel imaging has always been the subject of research. In this
paper, we propose the Dual-Octave Convolution (Dual-OctConv), which is capable
of learning multi-scale spatial-frequency features from both real and imaginary
components, for fast parallel MR image reconstruction. By reformulating the
complex operations using octave convolutions, our model shows a strong ability
to capture richer representations of MR images, while at the same time greatly
reducing the spatial redundancy. More specifically, the input feature maps and
convolutional kernels are first split into two components (i.e., real and
imaginary), which are then divided into four groups according to their spatial
frequencies. Then, our Dual-OctConv conducts intra-group information updating
and inter-group information exchange to aggregate the contextual information
across different groups. Our framework provides two appealing benefits: (i) it
encourages interactions between real and imaginary components at various
spatial frequencies to achieve richer representational capacity, and (ii) it
enlarges the receptive field by learning multiple spatial-frequency features of
both the real and imaginary components. We evaluate the performance of the
proposed model on the acceleration of multi-coil MR image reconstruction.
Extensive experiments are conducted on an in vivo knee dataset under
different undersampling patterns and acceleration factors. The experimental
results demonstrate the superiority of our model in accelerated parallel MR
image reconstruction. Our code is available at:
github.com/chunmeifeng/Dual-OctConv.
| [
{
"created": "Mon, 12 Apr 2021 10:51:05 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Feng",
"Chun-Mei",
""
],
[
"Yang",
"Zhanyuan",
""
],
[
"Chen",
"Geng",
""
],
[
"Xu",
"Yong",
""
],
[
"Shao",
"Ling",
""
]
] |
2104.05407 | Vladimir Ivanov | V. K. Ivanov, I. V. Obraztsov, B. V. Palyukh | Implementing an expert system to evaluate technical solutions
innovativeness | 12 pages, in Russian | Software & Systems. 2019. T. 4 (32) | 10.15827/0236-235X.128.696-707 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The paper presents a possible solution to the problem of algorithmization for
quantifying innovativeness indicators of technical products, inventions and
technologies. The concepts of technological novelty, relevance and
implementability as components of a product innovation criterion are introduced.
The authors propose a model and an algorithm to calculate each of these indicators of
innovativeness under conditions of incompleteness and inaccuracy, and sometimes
inconsistency of the initial information. The paper describes the developed
specialized software that is a promising methodological tool for using interval
estimations in accordance with the theory of evidence. These estimations are
used in the analysis of complex multicomponent systems, aggregations of large
volumes of fuzzy and incomplete data of various structures. Composition and
structure of a multi-agent expert system are presented. The purpose of such
system is to process groups of measurement results and to estimate indicators
values of objects innovativeness. The paper defines active elements of the
system, their functionality, roles, interaction order, input and output
interfaces, as well as the general software functioning algorithm. It
describes implementation of software modules and gives an example of solving a
specific problem to determine the level of innovation of technical products.
| [
{
"created": "Fri, 26 Mar 2021 10:11:44 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Ivanov",
"V. K.",
""
],
[
"Obraztsov",
"I. V.",
""
],
[
"Palyukh",
"B. V.",
""
]
] |
2104.05507 | Yun Zhao | Yun Zhao, Xuerui Yang, Jinchao Wang, Yongyu Gao, Chao Yan, Yuanfu Zhou | BART based semantic correction for Mandarin automatic speech recognition
system | submitted to INTERSPEECH2021 | Interspeech 2021 | 10.21437/Interspeech.2021-739 | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although automatic speech recognition (ASR) systems have achieved significant
improvements in recent years, spoken language recognition errors still occur and
can easily be spotted by human beings. Various language modeling techniques
have been developed for post-recognition tasks such as semantic correction. In this
paper, we propose a Transformer-based semantic correction method with
pretrained BART initialization. Experiments on a 10,000-hour Mandarin speech
dataset show that the character error rate (CER) can be effectively reduced by
21.7% relative compared to our baseline ASR system. Expert evaluation
demonstrates that the actual improvement of our model surpasses what CER indicates.
| [
{
"created": "Fri, 26 Mar 2021 06:41:16 GMT",
"version": "v1"
}
] | 2021-12-21 | [
[
"Zhao",
"Yun",
""
],
[
"Yang",
"Xuerui",
""
],
[
"Wang",
"Jinchao",
""
],
[
"Gao",
"Yongyu",
""
],
[
"Yan",
"Chao",
""
],
[
"Zhou",
"Yuanfu",
""
]
] |
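A small usage sketch of BART-style semantic correction as in the record above, framed as sequence-to-sequence inference with the Hugging Face transformers library. The facebook/bart-base checkpoint is an English placeholder; the paper's model is a Mandarin BART fine-tuned on ASR-hypothesis/reference pairs, which is assumed to have been done already here.

from transformers import BartForConditionalGeneration, BartTokenizer

# Placeholder checkpoint; a fine-tuned (Mandarin) correction model is assumed.
name = "facebook/bart-base"
tokenizer = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name)

hypothesis = "the whether is nice today"            # noisy ASR output
inputs = tokenizer(hypothesis, return_tensors="pt")
ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))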
2104.05522 | Kin Gutierrez Olivares | Kin G. Olivares and Cristian Challu and Grzegorz Marcjasz and Rafa{\l}
Weron and Artur Dubrawski | Neural basis expansion analysis with exogenous variables: Forecasting
electricity prices with NBEATSx | 30 pages, 7 figures, 4 tables | International Journal of Forecasting 2022 | 10.1016/j.ijforecast.2022.03.001 | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend the neural basis expansion analysis (NBEATS) to incorporate
exogenous factors. The resulting method, called NBEATSx, improves on a
well-performing deep learning model, extending its capabilities by including
exogenous variables and allowing it to integrate multiple sources of useful
information. To showcase the utility of the NBEATSx model, we conduct a
comprehensive study of its application to electricity price forecasting (EPF)
tasks across a broad range of years and markets. We observe state-of-the-art
performance, significantly improving the forecast accuracy by nearly 20% over
the original NBEATS model, and by up to 5% over other well established
statistical and machine learning methods specialized for these tasks.
Additionally, the proposed neural network has an interpretable configuration
that can structurally decompose time series, visualizing the relative impact of
trend and seasonal components and revealing the modeled processes' interactions
with exogenous factors. To assist related work we made the code available in
https://github.com/cchallu/nbeatsx.
| [
{
"created": "Mon, 12 Apr 2021 14:47:55 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Apr 2021 14:36:36 GMT",
"version": "v2"
},
{
"created": "Wed, 21 Apr 2021 20:38:24 GMT",
"version": "v3"
},
{
"created": "Fri, 23 Apr 2021 12:48:00 GMT",
"version": "v4"
},
{
"created": "Thu, 27 Jan 2022 17:12:11 GMT",
"version": "v5"
},
{
"created": "Mon, 4 Apr 2022 14:13:29 GMT",
"version": "v6"
}
] | 2022-08-10 | [
[
"Olivares",
"Kin G.",
""
],
[
"Challu",
"Cristian",
""
],
[
"Marcjasz",
"Grzegorz",
""
],
[
"Weron",
"Rafał",
""
],
[
"Dubrawski",
"Artur",
""
]
] |
2104.05565 | Victor Uc-Cetina | Victor Uc-Cetina, Nicolas Navarro-Guerrero, Anabel Martin-Gonzalez,
Cornelius Weber, Stefan Wermter | Survey on reinforcement learning for language processing | null | Artificial Intelligence Review 2022 | 10.1007/s10462-022-10205-5 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years some researchers have explored the use of reinforcement
learning (RL) algorithms as key components in the solution of various natural
language processing tasks. For instance, some of these algorithms leveraging
deep neural learning have found their way into conversational systems. This
paper reviews the state of the art of RL methods for their possible use for
different problems of natural language processing, focusing primarily on
conversational systems, mainly due to their growing relevance. We provide
detailed descriptions of the problems as well as discussions of why RL is
well-suited to solve them. Also, we analyze the advantages and limitations of
these methods. Finally, we elaborate on promising research directions in
natural language processing that might benefit from reinforcement learning.
| [
{
"created": "Mon, 12 Apr 2021 15:33:11 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Mar 2022 17:00:00 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Mar 2022 21:02:38 GMT",
"version": "v3"
}
] | 2022-06-09 | [
[
"Uc-Cetina",
"Victor",
""
],
[
"Navarro-Guerrero",
"Nicolas",
""
],
[
"Martin-Gonzalez",
"Anabel",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Wermter",
"Stefan",
""
]
] |
2104.05606 | Minghan Li | Minghan Li, Shuai Li, Lida Li and Lei Zhang | Spatial Feature Calibration and Temporal Fusion for Effective One-stage
Video Instance Segmentation | null | CVPR2021 | null | null | cs.CV eess.IV | http://creativecommons.org/publicdomain/zero/1.0/ | Modern one-stage video instance segmentation networks suffer from two
limitations. First, convolutional features are neither aligned with anchor
boxes nor with ground-truth bounding boxes, reducing the mask sensitivity to
spatial location. Second, a video is directly divided into individual frames
for frame-level instance segmentation, ignoring the temporal correlation
between adjacent frames. To address these issues, we propose a simple yet
effective one-stage video instance segmentation framework by spatial
calibration and temporal fusion, namely STMask. To ensure spatial feature
calibration with ground-truth bounding boxes, we first predict regressed
bounding boxes around ground-truth bounding boxes, and extract features from
them for frame-level instance segmentation. To further explore temporal
correlation among video frames, we aggregate a temporal fusion module to infer
instance masks from each frame to its adjacent frames, which helps our
framework to handle challenging videos such as motion blur, partial occlusion
and unusual object-to-camera poses. Experiments on the YouTube-VIS valid set
show that the proposed STMask with ResNet-50/-101 backbone obtains 33.5% /
36.8% mask AP, while achieving 28.6 / 23.4 FPS on video instance segmentation.
The code is released online https://github.com/MinghanLi/STMask.
| [
{
"created": "Tue, 6 Apr 2021 09:26:58 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Li",
"Minghan",
""
],
[
"Li",
"Shuai",
""
],
[
"Li",
"Lida",
""
],
[
"Zhang",
"Lei",
""
]
] |
2104.05700 | Thamme Gowda | Thamme Gowda, Weiqiu You, Constantine Lignos, Jonathan May | Macro-Average: Rare Types Are Important Too | null | https://aclanthology.org/2021.naacl-main.90 | 10.18653/v1/2021.naacl-main.90 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | While traditional corpus-level evaluation metrics for machine translation
(MT) correlate well with fluency, they struggle to reflect adequacy.
Model-based MT metrics trained on segment-level human judgments have emerged as
an attractive replacement due to strong correlation results. These models,
however, require potentially expensive re-training for new domains and
languages. Furthermore, their decisions are inherently non-transparent and
appear to reflect unwelcome biases. We explore the simple type-based classifier
metric, MacroF1, and study its applicability to MT evaluation. We find that
MacroF1 is competitive on direct assessment, and outperforms others in
indicating downstream cross-lingual information retrieval task performance.
Further, we show that MacroF1 can be used to effectively compare supervised and
unsupervised neural machine translation, and reveal significant qualitative
differences in the methods' outputs.
| [
{
"created": "Mon, 12 Apr 2021 17:57:42 GMT",
"version": "v1"
}
] | 2022-09-16 | [
[
"Gowda",
"Thamme",
""
],
[
"You",
"Weiqiu",
""
],
[
"Lignos",
"Constantine",
""
],
[
"May",
"Jonathan",
""
]
] |
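A sketch of a type-level MacroF1 in the spirit of the record above: every vocabulary type is scored with its own F1 and the scores are averaged with equal weight, so rare types matter as much as frequent ones. The clipped-count matching used here is one plausible formulation, not necessarily the paper's exact definition.

from collections import Counter

def macro_f1(hyps, refs):
    """hyps, refs: parallel lists of whitespace-tokenised sentences."""
    hyp_cnt, ref_cnt, match = Counter(), Counter(), Counter()
    for h, r in zip(hyps, refs):
        hc, rc = Counter(h.split()), Counter(r.split())
        hyp_cnt += hc
        ref_cnt += rc
        match += hc & rc                      # per-sentence clipped matches
    types = set(hyp_cnt) | set(ref_cnt)
    f1s = []
    for t in types:
        p = match[t] / hyp_cnt[t] if hyp_cnt[t] else 0.0
        r = match[t] / ref_cnt[t] if ref_cnt[t] else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s)

print(macro_f1(["the cat sat"], ["the cat sat down"]))   # 0.75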
2104.05710 | Bereket Abera Yilma Mr. | Bereket Abera Yilma, Herv\'e Panetto, Yannick Naudet | Systemic formalisation of Cyber-Physical-Social System (CPSS): A
systematic literature review | null | Computers in Industry, Volume 129, 2021, 103458, ISSN 0166-3615 | 10.1016/j.compind.2021.103458 | Volume 129 | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | The notion of Cyber-Physical-Social System (CPSS) is an emerging concept
developed as a result of the need to understand the impact of Cyber-Physical
Systems (CPS) on humans and vice versa. This paradigm shift from CPS to CPSS
was mainly attributed to the increasing use of sensor-enabled smart devices and
the tight link with the users. The concept of CPSS has been around for over a
decade and it has gained increasing attention over the past few years. The
evolution to incorporate human aspects in the CPS research has unlocked a
number of research challenges. In particular, human dynamics brings additional
complexity that is yet to be explored. The effort to conceptualise the
notion of CPSS has been partially addressed in a few scientific works,
although its conceptualisation has always been use-case dependent. Thus, there
is a lack of a generic view, as most works focus on specific domains. Furthermore,
the systemic core and design principles linking it with the theory of systems
are loose. This work aims at addressing these issues by first exploring and
analysing scientific literature to understand the complete spectrum of CPSS
through a Systematic Literature Review (SLR). Thereby identifying the
state-of-the-art perspectives on CPSS regarding definitions, underlining
principles and application areas. Subsequently, based on the findings of the
SLR, we propose a domain-independent definition and a meta-model for CPSS,
grounded in the Theory of Systems. Finally, a discussion on feasible future
research directions is presented based on the systemic notion and the proposed
meta-models.
| [
{
"created": "Sun, 11 Apr 2021 22:31:57 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Yilma",
"Bereket Abera",
""
],
[
"Panetto",
"Hervé",
""
],
[
"Naudet",
"Yannick",
""
]
] |
2104.05742 | Tarik A. Rashid | Nitish Maharjan, Abeer Alsadoon, P.W.C. Prasad, Salma Abdullah, Tarik
A. Rashid | A Novel Visualization System of Using Augmented Reality in Knee
Replacement Surgery: Enhanced Bidirectional Maximum Correntropy Algorithm | 27 pages | The International Journal of Medical Robotics and Computer
Assisted Surgery, 2020 | 10.1002/rcs.2154 | null | cs.CV cs.AI cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background and aim: Image registration and alignment are the main limitations
of augmented reality-based knee replacement surgery. This research aims to
decrease the registration error, eliminate outcomes that are trapped in local
minima to improve the alignment problems, handle the occlusion, and maximize
the overlapping parts. Methodology: markerless image registration method was
used for Augmented reality-based knee replacement surgery to guide and
visualize the surgical operation. While weight least square algorithm was used
to enhance stereo camera-based tracking by filling border occlusion in right to
left direction and non-border occlusion from left to right direction. Results:
This study improved video precision to an alignment error of 0.57 mm to 0.61 mm.
Furthermore, with the use of bidirectional points, for example, forward and
backward directional point clouds, the number of iterations in image registration was
decreased, which also improved the processing time. The processing
rate of video frames was improved to 7.4 to 11.74 fps. Conclusions: It seems clear
that the proposed system focuses on overcoming the misalignment difficulty
caused by patient movement and on enhancing the AR visualization during knee
replacement surgery. The proposed system is reliable and favorable, as it helps
eliminate alignment error by ascertaining the optimal rigid transformation
between two point clouds and removing outliers and non-Gaussian noise. The
proposed augmented reality system helps in accurate visualization and
navigation of anatomy of knee such as femur, tibia, cartilage, blood vessels,
etc.
| [
{
"created": "Sat, 13 Mar 2021 19:18:16 GMT",
"version": "v1"
}
] | 2021-04-14 | [
[
"Maharjan",
"Nitish",
""
],
[
"Alsadoon",
"Abeer",
""
],
[
"Prasad",
"P. W. C.",
""
],
[
"Abdullah",
"Salma",
""
],
[
"Rashid",
"Tarik A.",
""
]
] |
2104.05744 | Tarik A. Rashid | Sagar Chhetri, Abeer Alsadoon, Thair Al Dala in, P. W. C. Prasad,
Tarik A. Rashid, Angelika Maag | Deep Learning for Vision-Based Fall Detection System: Enhanced Optical
Dynamic Flow | 16 pages | Computational Intelligence, 2020 | 10.1111/coin.12428 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate fall detection for the assistance of older people is crucial to
reduce incidents of deaths or injuries due to falls. Meanwhile, a vision-based
fall detection system has shown some significant results to detect falls.
Still, numerous challenges need to be resolved. The impact of deep learning has
changed the landscape of vision-based systems for tasks such as action recognition.
Deep learning techniques have not been successfully implemented in
vision-based fall detection systems due to their requirements for large amounts of
computational power and sample training
data. This research aims to propose a vision-based fall detection system that
improves the accuracy of fall detection in some complex environments such as
the change of light condition in the room. Also, this research aims to increase
the performance of the pre-processing of video images. The proposed system
consists of the Enhanced Dynamic Optical Flow technique that encodes the
temporal data of optical flow videos by the method of rank pooling, which
thereby improves the processing time of fall detection and improves the
classification accuracy in dynamic lighting conditions. The experimental
results showed that the classification accuracy of fall detection improved
by around 3% and the processing time decreased by 40 to 50 ms. The proposed system
concentrates on decreasing the processing time of fall detection and improving
classification accuracy. Meanwhile, it provides a mechanism for summarizing a
video into a single image by using a dynamic optical flow technique, which
helps to increase the performance of image pre-processing steps.
| [
{
"created": "Thu, 18 Mar 2021 08:14:25 GMT",
"version": "v1"
}
] | 2021-04-14 | [
[
"Chhetri",
"Sagar",
""
],
[
"Alsadoon",
"Abeer",
""
],
[
"in",
"Thair Al Dala",
""
],
[
"Prasad",
"P. W. C.",
""
],
[
"Rashid",
"Tarik A.",
""
],
[
"Maag",
"Angelika",
""
]
] |
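To make the rank-pooling step in the fall-detection record above concrete, here is an illustrative Python sketch that summarises a video into a single "dynamic image": a linear ranker is fit so that its score increases with time, and its weights are reshaped into an image. The least-squares surrogate for the RankSVM objective and the smoothing scheme are simplifying assumptions, not the paper's exact pipeline.

import numpy as np

def rank_pool(frames):
    """frames: (T, H, W) optical-flow magnitudes (or any per-frame features)."""
    T = frames.shape[0]
    V = frames.reshape(T, -1)
    V = np.cumsum(V, axis=0) / np.arange(1, T + 1)[:, None]  # smoothed features
    t = np.arange(1, T + 1, dtype=float)                     # target ordering
    w, *_ = np.linalg.lstsq(V, t, rcond=None)                # fit the ranker
    return w.reshape(frames.shape[1:])                       # dynamic image

video = np.random.rand(30, 32, 32)
print(rank_pool(video).shape)   # (32, 32)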
2104.05848 | Zhong Zhou | Zhong Zhou, Alex Waibel | Family of Origin and Family of Choice: Massively Parallel Lexiconized
Iterative Pretraining for Severely Low Resource Machine Translation | null | In Proceedings of the 3rd Workshop on Research in Computational
Typology and Multilingual NLP of the 20th Conference of the North American
Chapter of the Association for Computational Linguistics on Human Language
Technologies in 2021 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We translate a closed text that is known in advance into a severely low
resource language by leveraging massive source parallelism. In other words,
given a text in 124 source languages, we translate it into a severely low
resource language using only ~1,000 lines of low resource data without any
external help. Firstly, we propose a systematic method to rank and choose
source languages that are close to the low resource language. We call the
linguistic definition of language family Family of Origin (FAMO), and we call
the empirical definition of higher-ranked languages using our metrics Family of
Choice (FAMC). Secondly, we build an Iteratively Pretrained Multilingual
Order-preserving Lexiconized Transformer (IPML) to train on ~1,000 lines
(~3.5%) of low resource data. To translate named entities correctly, we build a
massive lexicon table for 2,939 Bible named entities in 124 source languages,
and include many that occur only once, covering more than 66 severely low resource
languages. Moreover, we also build a novel method of combining translations
from different source languages into one. Using English as a hypothetical low
resource language, we get a +23.9 BLEU increase over a multilingual baseline,
and a +10.3 BLEU increase over our asymmetric baseline in the Bible dataset. We
get a 42.8 BLEU score for Portuguese-English translation on the medical EMEA
dataset. We also have good results for a real severely low resource Mayan
language, Eastern Pokomchi.
| [
{
"created": "Mon, 12 Apr 2021 22:32:58 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Apr 2021 19:54:42 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Apr 2021 14:12:27 GMT",
"version": "v3"
},
{
"created": "Wed, 19 May 2021 17:48:05 GMT",
"version": "v4"
},
{
"created": "Mon, 24 May 2021 12:56:39 GMT",
"version": "v5"
},
{
"created": "Thu, 30 Sep 2021 21:41:43 GMT",
"version": "v6"
},
{
"created": "Sat, 16 Oct 2021 02:27:52 GMT",
"version": "v7"
}
] | 2021-10-19 | [
[
"Zhou",
"Zhong",
""
],
[
"Waibel",
"Alex",
""
]
] |
2104.05892 | Jong Chul Ye | Yujin Oh and Jong Chul Ye | CXR Segmentation by AdaIN-based Domain Adaptation and Knowledge
Distillation | Accepted to ECCV 2022 | ECCV 2022, Part XXI, LNCS 13681 | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As segmentation labels are scarce, extensive research has been conducted
to train segmentation networks with domain adaptation, semi-supervised or
self-supervised learning techniques to utilize abundant unlabeled datasets.
However, these approaches appear different from each other, so it is not clear
how they can be combined for better performance. Inspired by recent
multi-domain image translation approaches, here we propose a novel segmentation
framework using adaptive instance normalization (AdaIN), so that a single
generator is trained to perform both domain adaptation and semi-supervised
segmentation tasks via knowledge distillation by simply changing task-specific
AdaIN codes. Specifically, our framework is designed to deal with difficult
situations in chest X-ray radiograph (CXR) segmentation, where labels are only
available for normal data, but the trained model should be applied to both
normal and abnormal data. The proposed network demonstrates great
generalizability under domain shift and achieves the state-of-the-art
performance for abnormal CXR segmentation.
| [
{
"created": "Tue, 13 Apr 2021 01:53:04 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Apr 2021 14:54:50 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Jul 2022 14:15:11 GMT",
"version": "v3"
},
{
"created": "Tue, 11 Oct 2022 10:40:24 GMT",
"version": "v4"
}
] | 2022-10-12 | [
[
"Oh",
"Yujin",
""
],
[
"Ye",
"Jong Chul",
""
]
] |
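A minimal sketch of adaptive instance normalization (AdaIN), the mechanism the record above uses to switch a single generator between tasks: features are normalised per channel and re-styled with task-specific scale and shift codes. Tensor shapes and the source of the codes are illustrative assumptions.

import torch

def adain(x, gamma, beta, eps=1e-5):
    """x: (N, C, H, W) features; gamma, beta: (N, C) task-specific codes."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True)
    x_hat = (x - mu) / (sigma + eps)          # per-channel normalisation
    return gamma[..., None, None] * x_hat + beta[..., None, None]

x = torch.randn(2, 8, 16, 16)
code = torch.randn(2, 8), torch.randn(2, 8)   # e.g., produced per task
print(adain(x, *code).shape)                  # torch.Size([2, 8, 16, 16])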
2104.05915 | Rohitash Chandra | Rohitash Chandra, Mahir Jain, Manavendra Maharana, Pavel N. Krivitsky | Revisiting Bayesian Autoencoders with MCMC | null | R. Chandra, M. Jain, M. Maharana and P. N. Krivitsky, "Revisiting
Bayesian Autoencoders With MCMC," in IEEE Access, vol. 10, pp. 40482-40495,
2022, doi: 10.1109/ACCESS.2022.3163270 | 10.1109/ACCESS.2022.3163270 | null | cs.LG cs.AI stat.AP | http://creativecommons.org/licenses/by/4.0/ | Autoencoders gained popularity in the deep learning revolution given their
ability to compress data and provide dimensionality reduction. Although
prominent deep learning methods have been used to enhance autoencoders, the
need to provide robust uncertainty quantification remains a challenge. This has
been addressed with variational autoencoders so far. Bayesian inference via
Markov Chain Monte Carlo (MCMC) sampling has faced several limitations for
large models; however, recent advances in parallel computing and advanced
proposal schemes have opened routes less traveled. This paper presents Bayesian
autoencoders powered by MCMC sampling implemented using parallel computing and
Langevin-gradient proposal distribution. The results indicate that the proposed
Bayesian autoencoder provides similar performance accuracy when compared to
related methods in the literature. Furthermore, it provides uncertainty
quantification in the reduced data representation. This motivates further
applications of the Bayesian autoencoder framework for other deep learning
models.
| [
{
"created": "Tue, 13 Apr 2021 03:23:07 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Apr 2022 12:58:39 GMT",
"version": "v2"
}
] | 2022-04-29 | [
[
"Chandra",
"Rohitash",
""
],
[
"Jain",
"Mahir",
""
],
[
"Maharana",
"Manavendra",
""
],
[
"Krivitsky",
"Pavel N.",
""
]
] |
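A skeleton sketch of a Langevin-gradient proposal inside Metropolis-Hastings, the sampling scheme named in the record above, shown on a toy linear-regression posterior rather than an autoencoder so the gradient stays one line. The simplified (symmetric-proposal) acceptance test and all hyperparameters are assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

def log_post(w):          # Gaussian likelihood + Gaussian prior (up to const)
    r = y - X @ w
    return -0.5 * (r @ r) - 0.5 * (w @ w)

def grad_log_post(w):
    return X.T @ (y - X @ w) - w

w, lr = np.zeros(5), 1e-4
samples = []
for _ in range(2000):
    # Langevin proposal: gradient step plus Gaussian noise
    prop = w + 0.5 * lr * grad_log_post(w) + np.sqrt(lr) * rng.normal(size=5)
    # symmetric-proposal approximation of the Metropolis-Hastings ratio
    if np.log(rng.uniform()) < log_post(prop) - log_post(w):
        w = prop
    samples.append(w)
print(np.mean(samples[1000:], axis=0))   # posterior mean estimate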
2104.05930 | Brenden Petersen | Joanne T. Kim, Mikel Landajuela, Brenden K. Petersen | Distilling Wikipedia mathematical knowledge into neural network models | 6 pages, 4 figures | 1st Mathematical Reasoning in General Artificial Intelligence
Workshop, ICLR 2021 | null | LLNL-CONF-820039 | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning applications to symbolic mathematics are becoming
increasingly popular, yet the field lacks a centralized source of real-world
symbolic expressions to be used as training data. In contrast, the field of
natural language processing leverages resources like Wikipedia that provide
enormous amounts of real-world textual data. Adopting the philosophy of
"mathematics as language," we bridge this gap by introducing a pipeline for
distilling mathematical expressions embedded in Wikipedia into symbolic
encodings to be used in downstream machine learning tasks. We demonstrate that
a $\textit{mathematical}$ $\textit{language}$ $\textit{model}$ trained on this
"corpus" of expressions can be used as a prior to improve the performance of
neural-guided search for the task of symbolic regression.
| [
{
"created": "Tue, 13 Apr 2021 04:16:50 GMT",
"version": "v1"
}
] | 2022-07-06 | [
[
"Kim",
"Joanne T.",
""
],
[
"Landajuela",
"Mikel",
""
],
[
"Petersen",
"Brenden K.",
""
]
] |
2104.06048 | Emanuela Boros | Emanuela Boros and Antoine Doucet | Transformer-based Methods for Recognizing Ultra Fine-grained Entities
(RUFES) | null | https://tac.nist.gov/2020/KBP/RUFES/index.html | null | null | cs.CL cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper summarizes the participation of the Laboratoire Informatique,
Image et Interaction (L3i laboratory) of the University of La Rochelle in the
Recognizing Ultra Fine-grained Entities (RUFES) track within the Text Analysis
Conference (TAC) series of evaluation workshops. Our participation relies on
two neural-based models, one based on a pre-trained and fine-tuned language
model with a stack of Transformer layers for fine-grained entity extraction and
one out-of-the-box model for within-document entity coreference. We observe
that our approach has great potential in increasing the performance of
fine-grained entity recognition. Thus, the future work envisioned is to enhance
the ability of the models following additional experiments and a deeper
analysis of the results.
| [
{
"created": "Tue, 13 Apr 2021 09:23:16 GMT",
"version": "v1"
}
] | 2021-04-14 | [
[
"Boros",
"Emanuela",
""
],
[
"Doucet",
"Antoine",
""
]
] |
2104.06142 | Pramod Chunduri | Pramod Chunduri, Jaeho Bang, Yao Lu, Joy Arulraj | Zeus: Efficiently Localizing Actions in Videos using Reinforcement
Learning | null | In Proceedings of the 2022 International Conference on Management
of Data (SIGMOD '22). Philadelphia, PA, USA, 545-558 | 10.1145/3514221.3526181 | null | cs.CV cs.DB | http://creativecommons.org/licenses/by/4.0/ | Detection and localization of actions in videos is an important problem in
practice. State-of-the-art video analytics systems are unable to efficiently
and effectively answer such action queries because actions often involve a
complex interaction between objects and are spread across a sequence of frames;
detecting and localizing them requires computationally expensive deep neural
networks. It is also important to consider the entire sequence of frames to
answer the query effectively.
In this paper, we present ZEUS, a video analytics system tailored for
answering action queries. We present a novel technique for efficiently
answering these queries using deep reinforcement learning. ZEUS trains a
reinforcement learning agent that learns to adaptively modify the input video
segments that are subsequently sent to an action classification network. The
agent alters the input segments along three dimensions - sampling rate, segment
length, and resolution. To meet the user-specified accuracy target, ZEUS's
query optimizer trains the agent based on an accuracy-aware, aggregate reward
function. Evaluation on three diverse video datasets shows that ZEUS
outperforms state-of-the-art frame- and window-based filtering techniques by up
to 22.1x and 4.7x, respectively. It also consistently meets the user-specified
accuracy target across all queries.
| [
{
"created": "Tue, 6 Apr 2021 16:38:31 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Apr 2021 03:20:48 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Sep 2022 19:07:41 GMT",
"version": "v3"
}
] | 2022-09-29 | [
[
"Chunduri",
"Pramod",
""
],
[
"Bang",
"Jaeho",
""
],
[
"Lu",
"Yao",
""
],
[
"Arulraj",
"Joy",
""
]
] |
2104.06176 | Pedro Ricardo Ariel Salvador Bassi M.Sc. | Pedro R. A. S. Bassi, Romis Attux | COVID-19 detection using chest X-rays: is lung segmentation important
for generalization? | Text and figure improvements. Results did not change. Included DOI
and reference to the published article (Research on Biomedical Engineering,
Springer). Link for the published paper:
https://trebuchet.public.springernature.app/get_content/1ab346c8-06ea-49ed-92f3-deaec80f6988 | Research on Biomedical Engineering, Springer (2022) | 10.1007/s42600-022-00242-y | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Purpose: we evaluated the generalization capability of deep neural networks
(DNNs), trained to classify chest X-rays as Covid-19, normal or pneumonia,
using a relatively small and mixed dataset. Methods: we proposed a DNN to
perform lung segmentation and classification, stacking a segmentation module
(U-Net), an original intermediate module and a classification module
(DenseNet201). To evaluate generalization, we tested the DNN with an external
dataset (from distinct localities) and used Bayesian inference to estimate
probability distributions of performance metrics. Results: our DNN achieved
0.917 AUC on the external test dataset, and a DenseNet without segmentation,
0.906. Bayesian inference indicated mean accuracy of 76.1% and [0.695, 0.826]
95% HDI (highest density interval, which concentrates 95% of the metric's
probability mass) with segmentation and, without segmentation, 71.7% and
[0.646, 0.786]. Conclusion: employing a novel DNN evaluation technique, which
uses LRP and Brixia scores, we discovered that areas where radiologists found
strong Covid-19 symptoms are the most important for the stacked DNN
classification. External validation showed smaller accuracies than internal,
indicating difficulty in generalization, which is positively affected by
segmentation. Finally, the performance in the external dataset and the analysis
with LRP suggest that DNNs can be trained in small and mixed datasets and still
successfully detect Covid-19.
| [
{
"created": "Mon, 12 Apr 2021 09:06:28 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Nov 2021 03:29:34 GMT",
"version": "v2"
},
{
"created": "Wed, 2 Nov 2022 14:52:02 GMT",
"version": "v3"
}
] | 2022-11-03 | [
[
"Bassi",
"Pedro R. A. S.",
""
],
[
"Attux",
"Romis",
""
]
] |
2104.06191 | Bruno Lecouat | Bruno Lecouat, Jean Ponce, Julien Mairal | Lucas-Kanade Reloaded: End-to-End Super-Resolution from Raw Image Bursts | null | ICCV 2021 | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This presentation addresses the problem of reconstructing a high-resolution
image from multiple lower-resolution snapshots captured from slightly different
viewpoints in space and time. Key challenges for solving this problem include
(i) aligning the input pictures with sub-pixel accuracy, (ii) handling raw
(noisy) images for maximal faithfulness to native camera data, and (iii)
designing/learning an image prior (regularizer) well suited to the task. We
address these three challenges with a hybrid algorithm building on the insight
from Wronski et al. that aliasing is an ally in this setting, with parameters
that can be learned end to end, while retaining the interpretability of
classical approaches to inverse problems. The effectiveness of our approach is
demonstrated on synthetic and real image bursts, setting a new state of the art
on several benchmarks and delivering excellent qualitative results on real raw
bursts captured by smartphones and prosumer cameras.
| [
{
"created": "Tue, 13 Apr 2021 13:39:43 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Aug 2021 08:57:19 GMT",
"version": "v2"
}
] | 2021-08-24 | [
[
"Lecouat",
"Bruno",
""
],
[
"Ponce",
"Jean",
""
],
[
"Mairal",
"Julien",
""
]
] |
2104.06231 | Tongxue Zhou | Tongxue Zhou, St\'ephane Canu, Pierre Vera, Su Ruan | Latent Correlation Representation Learning for Brain Tumor Segmentation
with Missing MRI Modalities | 12 pages, 10 figures, accepted by IEEE Transactions on Image
Processing (8 April 2021). arXiv admin note: text overlap with
arXiv:2003.08870, arXiv:2102.03111 | IEEE Transactions on Image Processing On page(s): 4263-4274 Print
ISSN: 1057-7149 Online ISSN: 1941-0042 | 10.1109/TIP.2021.3070752 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic Resonance Imaging (MRI) is a widely used imaging technique to assess
brain tumor. Accurately segmenting brain tumor from MR images is the key to
clinical diagnostics and treatment planning. In addition, multi-modal MR images
can provide complementary information for accurate brain tumor segmentation.
However, it's common to miss some imaging modalities in clinical practice. In
this paper, we present a novel brain tumor segmentation algorithm with missing
modalities. Since it exists a strong correlation between multi-modalities, a
correlation model is proposed to specially represent the latent multi-source
correlation. Thanks to the obtained correlation representation, the
segmentation becomes more robust in the case of missing modality. First, the
individual representation produced by each encoder is used to estimate the
modality independent parameter. Then, the correlation model transforms all the
individual representations to the latent multi-source correlation
representations. Finally, the correlation representations across modalities are
fused via attention mechanism into a shared representation to emphasize the
most important features for segmentation. We evaluate our model on BraTS 2018
and BraTS 2019 dataset, it outperforms the current state-of-the-art methods and
produces robust results when one or more modalities are missing.
| [
{
"created": "Tue, 13 Apr 2021 14:21:09 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 13:51:09 GMT",
"version": "v2"
}
] | 2021-04-21 | [
[
"Zhou",
"Tongxue",
""
],
[
"Canu",
"Stéphane",
""
],
[
"Vera",
"Pierre",
""
],
[
"Ruan",
"Su",
""
]
] |
2104.06309 | Hadi Sarieddeen Dr. | Sara Helal, Hadi Sarieddeen, Hayssam Dahrouj, Tareq Y. Al-Naffouri,
Mohamed Slim Alouini | Signal Processing and Machine Learning Techniques for Terahertz Sensing:
An Overview | null | IEEE Signal Processing Magazine, vol. 39, no. 5, pp. 42-62, Sept.
2022 | 10.1109/MSP.2022.3183808 | null | eess.SP cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following the recent progress in Terahertz (THz) signal generation and
radiation methods, joint THz communications and sensing applications are
shaping the future of wireless systems. Towards this end, THz spectroscopy is
expected to be carried out on user equipment devices to identify material and
gaseous components of interest. THz-specific signal processing techniques
should complement this resurgent interest in THz sensing for efficient
utilization of the THz band. In this paper, we present an overview of these
techniques, with an emphasis on signal pre-processing (standard normal variate
normalization, min-max normalization, and Savitzky-Golay filtering), feature
extraction (principal component analysis, partial least squares, t-distributed
stochastic neighbor embedding, and nonnegative matrix factorization), and
classification techniques (support vector machines, k-nearest neighbor,
discriminant analysis, and naive Bayes). We also address the effectiveness of
deep learning techniques by exploring their promising sensing capabilities at
the THz band. Lastly, we investigate the performance and complexity trade-offs
of the studied methods in the context of joint communications and sensing; we
motivate the corresponding use-cases, and we present a few future research
directions in the field.
| [
{
"created": "Fri, 9 Apr 2021 01:38:34 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Sep 2022 22:58:58 GMT",
"version": "v2"
}
] | 2022-09-07 | [
[
"Helal",
"Sara",
""
],
[
"Sarieddeen",
"Hadi",
""
],
[
"Dahrouj",
"Hayssam",
""
],
[
"Al-Naffouri",
"Tareq Y.",
""
],
[
"Alouini",
"Mohamed Slim",
""
]
] |
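Two of the pre-processing steps surveyed in the THz record above are easy to sketch in Python: standard normal variate (SNV) normalisation per spectrum, then Savitzky-Golay smoothing via scipy. The window length and polynomial order are illustrative choices, not values from the paper.

import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """spectra: (n_samples, n_bands); zero mean, unit variance per row."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / (sd + 1e-12)

spectra = np.random.rand(10, 301)               # fake THz absorption spectra
clean = savgol_filter(snv(spectra), window_length=11, polyorder=3, axis=1)
print(clean.shape)                              # (10, 301)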
2104.06316 | Tarik A. Rashid | Arjina Maharjan, Abeer Alsadoon, P.W.C. Prasad, Nada AlSallami, Tarik
A. Rashid, Ahmad Alrubaie, Sami Haddad | A Novel Solution of Using Mixed Reality in Bowel and Oral and
Maxillofacial Surgical Telepresence: 3D Mean Value Cloning algorithm | 27 pages | International Journal of Medical Robotics and Computer Assisted
Surgery, 2020 | 10.1002/rcs.2161 | null | physics.med-ph cs.CV cs.GR cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background and aim: Most of the Mixed Reality models used in surgical
telepresence suffer from discrepancies in the boundary area and
spatial-temporal inconsistency due to the illumination variation in the video
frames. The aim behind this work is to propose a new solution that helps
produce the composite video by merging the augmented video of the surgery site
and the virtual hand of the remote expert surgeon. The purpose of the
proposed solution is to decrease the processing time and enhance the accuracy
of merged video by decreasing the overlay and visualization error and removing
occlusion and artefacts. Methodology: The proposed system enhanced the mean
value cloning algorithm that helps to maintain the spatial-temporal consistency
of the final composite video. The enhanced algorithm includes the 3D mean value
coordinates and improvised mean value interpolant in the image cloning process,
which helps to reduce the sawtooth, smudging and discolouration artefacts
around the blending region. Results: As compared to the state of the art
solution, the accuracy in terms of overlay error of the proposed solution is
improved from 1.01mm to 0.80mm whereas the accuracy in terms of visualization
error is improved from 98.8% to 99.4%. The processing time is reduced to 0.173
seconds from 0.211 seconds. Conclusion: Our solution helps make the object of
interest consistent with the light intensity of the target image by adding the
space distance that helps maintain the spatial consistency in the final merged
video.
| [
{
"created": "Wed, 17 Mar 2021 10:01:06 GMT",
"version": "v1"
}
] | 2021-04-14 | [
[
"Maharjan",
"Arjina",
""
],
[
"Alsadoon",
"Abeer",
""
],
[
"Prasad",
"P. W. C.",
""
],
[
"AlSallami",
"Nada",
""
],
[
"Rashid",
"Tarik A.",
""
],
[
"Alrubaie",
"Ahmad",
""
],
[
"Haddad",
"Sami",
""
]
] |
2104.06324 | Maciej Eder | Rafa{\l} L. G\'orski and Maciej Eder | Modeling the dynamics of language change: logistic regression,
Piotrowski's law, and a handful of examples in Polish | null | Journal of Quantitative Linguistics, 30 (2023): 86-103 | 10.1080/09296174.2022.2122751 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study discusses modeling diachronic processes by logistic regression. The
phenomenon of nonlinear changes in language was first observed by Raimund
Piotrowski (hence labelled as Piotrowski's law), even if actual linguistic
evidence usually speaks against using the notion of a "law" in this context. In
our study, we apply logistic regression models to 9 changes which occurred
between the 15th and 18th centuries in the Polish language. The attested course of
the majority of these changes closely follow the expected values, which proves
that the language change might indeed resemble a nonlinear phase change
scenario. We also extend the original Piotrowski's approach by proposing
polynomial logistic regression for these cases which can hardly be described by
its standard version. Also, we propose to consider individual language change
cases jointly, in order to inspect their possible collinearity or, more likely,
their different dynamics in the function of time. Last but not least, we
evaluate our results by testing the influence of the subcorpus size on the
model's goodness-of-fit.
| [
{
"created": "Tue, 13 Apr 2021 16:03:36 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Apr 2021 08:53:54 GMT",
"version": "v2"
},
{
"created": "Sat, 28 May 2022 21:54:55 GMT",
"version": "v3"
}
] | 2023-03-30 | [
[
"Górski",
"Rafał L.",
""
],
[
"Eder",
"Maciej",
""
]
] |
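A minimal sketch of fitting the logistic model behind Piotrowski's law, as used in the record above: the share p(t) of the innovative variant is modelled as p(t) = 1 / (1 + exp(-k(t - t0))) and fitted with scipy's curve_fit. The yearly proportions below are synthetic, not the paper's Polish data.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t0):
    """Logistic course of change: k is the growth rate, t0 the midpoint."""
    return 1.0 / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(0)
years = np.arange(1400, 1801, 10, dtype=float)
true = logistic(years, 0.03, 1600)
observed = np.clip(true + rng.normal(0, 0.03, len(years)), 0, 1)

(k, t0), _ = curve_fit(logistic, years, observed, p0=(0.01, 1600))
print(f"growth rate k = {k:.3f}, midpoint year t0 = {t0:.1f}")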
2104.06402 | Esther Robb | Ting-I Hsieh, Esther Robb, Hwann-Tzong Chen, Jia-Bin Huang | DropLoss for Long-Tail Instance Segmentation | Code at https://github.com/timy90022/DropLoss | AAAI 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-tailed class distributions are prevalent among the practical
applications of object detection and instance segmentation. Prior work in
long-tail instance segmentation addresses the imbalance of losses between rare
and frequent categories by reducing the penalty for a model incorrectly
predicting a rare class label. We demonstrate that the rare categories are
heavily suppressed by correct background predictions, which reduces the
probability for all foreground categories with equal weight. Due to the
relative infrequency of rare categories, this leads to an imbalance that biases
towards predicting more frequent categories. Based on this insight, we develop
DropLoss -- a novel adaptive loss to compensate for this imbalance without a
trade-off between rare and frequent categories. With this loss, we show
state-of-the-art mAP across rare, common, and frequent categories on the LVIS
dataset.
| [
{
"created": "Tue, 13 Apr 2021 17:59:22 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Apr 2021 15:52:56 GMT",
"version": "v2"
}
] | 2021-04-20 | [
[
"Hsieh",
"Ting-I",
""
],
[
"Robb",
"Esther",
""
],
[
"Chen",
"Hwann-Tzong",
""
],
[
"Huang",
"Jia-Bin",
""
]
] |
2104.06439 | Maria Ponomareva | Boris Zhestiankin and Maria Ponomareva | Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT
Fine-tuning | Accepted to SemEval-2021 at ACL-IJCNLP | Proceedings of the 15th International Workshop on Semantic
Evaluation (SemEval-2021), pp. 163-168, 2021 | 10.18653/v1/2021.semeval-1.17 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents our contribution to SemEval-2021 Task 2: Multilingual and
Cross-lingual Word-in-Context Disambiguation (MCL-WiC). Our experiments cover
English (EN-EN) sub-track from the multilingual setting of the task. We
experiment with several pre-trained language models and investigate an impact
of different top-layers on fine-tuning. We find the combination of Cosine
Similarity and ReLU activation leading to the most effective fine-tuning
procedure. Our best model results in accuracy 92.7%, which is the fourth-best
score in EN-EN sub-track.
| [
{
"created": "Tue, 13 Apr 2021 18:28:58 GMT",
"version": "v1"
}
] | 2022-01-17 | [
[
"Zhestiankin",
"Boris",
""
],
[
"Ponomareva",
"Maria",
""
]
] |
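
A minimal sketch of the "ReLU over cosine similarity" top layer the record above describes: pool the target word's BERT vectors from each sentence, take the cosine similarity, apply ReLU, and classify. The mean-pooling choice, the head sizes, and the hand-picked subtoken spans in the demo are my assumptions, not the authors' exact setup.

```python
# Sketch of a Word-in-Context head: cosine similarity of the two target-word
# representations, passed through ReLU, then a small classifier.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class WiCHead(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.relu = nn.ReLU()
        self.classifier = nn.Linear(1, 2)  # same sense / different sense

    def target_vec(self, enc, span):
        # Mean-pool the hidden states of the target word's subtokens.
        hidden = self.encoder(**enc).last_hidden_state  # (1, T, H)
        return hidden[0, span[0]:span[1]].mean(dim=0)

    def forward(self, enc1, span1, enc2, span2):
        v1 = self.target_vec(enc1, span1)
        v2 = self.target_vec(enc2, span2)
        sim = torch.cosine_similarity(v1, v2, dim=0)
        return self.classifier(self.relu(sim).view(1, 1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = WiCHead()
e1 = tok("He sat on the river bank.", return_tensors="pt")
e2 = tok("She went to the bank to deposit money.", return_tensors="pt")
# Subtoken spans of "bank", found by inspecting this tokenization.
print(model(e1, (6, 7), e2, (5, 6)))
```
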
2104.06510 | Pedro Henrique Suruagy Perrusi | Pedro Henrique Suruagy Perrusi, Anna Cazzaniga, Paul Baksic, Eleonora
Tagliabue, Elena de Momi, Hadrien Courtecuisse | Robotic needle steering in deformable tissues with extreme learning
machines | null | AUTOMED 2021, Jun 2021, Basel, Switzerland | null | null | cs.RO cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Control strategies for robotic needle steering in soft tissues must account
for complex interactions between the needle and the tissue to achieve accurate
needle tip positioning. Recent findings show that a faster robotic command rate
can improve control stability in realistic scenarios. This study proposes the
use of Extreme Learning Machines to provide fast commands for robotic needle
steering. A synthetic dataset based on the inverse finite element simulation
control framework is used to train the model. Results show that the model can
infer commands 66% faster than the inverse simulation and reaches acceptable
precision even on previously unseen trajectories.
| [
{
"created": "Fri, 2 Apr 2021 07:04:29 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Perrusi",
"Pedro Henrique Suruagy",
""
],
[
"Cazzaniga",
"Anna",
""
],
[
"Baksic",
"Paul",
""
],
[
"Tagliabue",
"Eleonora",
""
],
[
"de Momi",
"Elena",
""
],
[
"Courtecuisse",
"Hadrien",
""
]
] |
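
The record above relies on Extreme Learning Machines, i.e. a random fixed hidden layer plus an analytic least-squares read-out, which is what makes inference so much faster than an inverse finite-element simulation. The sketch below implements that generic recipe; since the needle-steering dataset is not public here, the demo fits a toy nonlinear function instead.

```python
# Generic Extreme Learning Machine regressor: random hidden weights stay
# fixed, and only the output weights are solved analytically.
import numpy as np

class ELM:
    def __init__(self, n_hidden, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, y):
        n_in = X.shape[1]
        self.W = self.rng.normal(size=(n_in, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random hidden features
        self.beta = np.linalg.pinv(H) @ y  # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy demo: learn y = sin(x0) + 0.5*x1 from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.01, 500)
model = ELM(n_hidden=200).fit(X, y)
print("train RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```

Inference here is just two matrix multiplications, which is consistent with the speed-up over the inverse simulation that the abstract reports.
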
2104.06517 | Eunjeong Koh | Eunjeong Koh and Shlomo Dubnov | Comparison and Analysis of Deep Audio Embeddings for Music Emotion
Recognition | AAAI Workshop on Affective Content Analysis 2021 Camera Ready Version | AAAI 2021 | null | null | cs.SD cs.AI cs.LG cs.MM eess.AS | http://creativecommons.org/licenses/by/4.0/ | Emotion is a complicated notion present in music that is hard to capture even
with fine-tuned feature engineering. In this paper, we investigate the utility
of state-of-the-art pre-trained deep audio embedding methods for the
Music Emotion Recognition (MER) task. Deep audio embedding methods allow us to
efficiently capture the high dimensional features into a compact
representation. We implement several multi-class classifiers with deep audio
embeddings to predict emotion semantics in music. We investigate the
effectiveness of L3-Net and VGGish deep audio embedding methods for music
emotion inference over four music datasets. The experiments with several
classifiers on the task show that the deep audio embedding solutions can
improve the performance of previous baseline MER models. We conclude that
deep audio embeddings represent musical emotion semantics for the MER task
without expert human engineering.
| [
{
"created": "Tue, 13 Apr 2021 21:09:54 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Koh",
"Eunjeong",
""
],
[
"Dubnov",
"Shlomo",
""
]
] |
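
A sketch of the embedding-plus-classifier pipeline the record above evaluates, using OpenL3 (an open implementation of L3-Net embeddings) and a simple scikit-learn classifier. The file paths and labels are placeholders; the four music emotion datasets from the paper are not bundled here, and mean-pooling frame embeddings into one clip vector is my assumption.

```python
# Embedding-plus-classifier pipeline: OpenL3 clip embeddings fed to a
# multi-class classifier for emotion labels.
import numpy as np
import openl3
import soundfile as sf
from sklearn.linear_model import LogisticRegression

def clip_embedding(path):
    audio, sr = sf.read(path)
    # Frame-level (n_frames, 512) embedding; mean-pool to one clip vector.
    emb, _ = openl3.get_audio_embedding(audio, sr, content_type="music",
                                        embedding_size=512)
    return emb.mean(axis=0)

paths = ["clip_000.wav", "clip_001.wav"]   # placeholder file list
labels = np.array([0, 1])                  # placeholder emotion class ids

X = np.stack([clip_embedding(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```
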
2104.06534 | Rakhil Immidisetti | Rakhil Immidisetti, Shuowen Hu, Vishal M. Patel | Simultaneous Face Hallucination and Translation for Thermal to Visible
Face Verification using Axial-GAN | International Joint Conference on Biometrics (IJCB) | 2021 IEEE International Joint Conference on Biometrics (IJCB) | 10.1109/IJCB52358.2021.9484353 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Existing thermal-to-visible face verification approaches expect the thermal
and visible face images to be of similar resolution. This is unlikely in
real-world long-range surveillance systems, since humans are distant from the
cameras. To address this issue, we introduce the task of thermal-to-visible
face verification from low-resolution thermal images. Furthermore, we propose
Axial-Generative Adversarial Network (Axial-GAN) to synthesize high-resolution
visible images for matching. In the proposed approach we augment the GAN
framework with axial-attention layers which leverage the recent advances in
transformers for modelling long-range dependencies. We demonstrate the
effectiveness of the proposed method by evaluating on two different
thermal-visible face datasets. When compared to related state-of-the-art works,
our results show significant improvements in both image quality and face
verification performance, and our method is also much more efficient.
| [
{
"created": "Tue, 13 Apr 2021 22:34:28 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Aug 2021 22:57:59 GMT",
"version": "v2"
}
] | 2021-08-10 | [
[
"Immidisetti",
"Rakhil",
""
],
[
"Hu",
"Shuowen",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
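
The axial-attention layers the record above leans on factorize full 2D self-attention into a pass along image rows followed by a pass along columns, dropping the cost from O((HW)^2) to roughly O(HW(H+W)). The sketch below shows that factorization in isolation, with illustrative dimensions; it is not the authors' exact GAN block.

```python
# Axial attention: row-wise attention followed by column-wise attention.
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Attend along each row: sequences of length W.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        # Attend along each column: sequences of length H.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)  # (B, C, H, W)

x = torch.randn(2, 32, 16, 16)
print(AxialAttention(32)(x).shape)  # torch.Size([2, 32, 16, 16])
```
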
2104.06557 | Sreya Francis | Sreya Francis, Irene Tenison, Irina Rish | Towards Causal Federated Learning For Enhanced Robustness and Privacy | null | ICLR 2021 Distributed and Private Machine Learning(DPML) Workshop | null | null | cs.LG cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning is an emerging privacy-preserving distributed machine
learning approach to building a shared model by performing distributed training
locally on participating devices (clients) and aggregating the local models
into a global one. As this approach prevents data collection and aggregation,
it helps in reducing associated privacy risks to a great extent. However, the
data samples across all participating clients are usually not independent and
identically distributed (non-iid), and Out of Distribution (OOD) generalization
for the learned models can be poor. Besides this challenge, federated learning
also remains vulnerable to various security attacks, wherein a few malicious
participating entities work towards inserting backdoors, degrading the
aggregated model, or inferring the data owned by participating
entities. In this paper, we propose an approach for learning invariant (causal)
features common to all participating clients in a federated learning setup and
analyze empirically how it enhances the Out of Distribution (OOD) accuracy as
well as the privacy of the final learned model.
| [
{
"created": "Wed, 14 Apr 2021 00:08:45 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Francis",
"Sreya",
""
],
[
"Tenison",
"Irene",
""
],
[
"Rish",
"Irina",
""
]
] |
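
One way to make the idea in the record above concrete is to add an IRM-style invariance penalty (Arjovsky et al.'s IRMv1) to each client's local objective inside a standard FedAvg loop. That pairing is my assumption about how invariant-feature learning could be federated, not necessarily the authors' exact algorithm.

```python
# FedAvg loop where each client's local loss carries an IRMv1 penalty,
# encouraging features whose optimal classifier is shared across clients.
import copy
import torch
import torch.nn as nn

def irm_penalty(logits, y):
    # Gradient of the risk w.r.t. a dummy scale on the classifier output;
    # a small norm means the same classifier is optimal on this client.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = nn.functional.binary_cross_entropy_with_logits(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, [scale], create_graph=True)
    return grad.pow(2)

def local_update(model, X, y, lam=1.0, lr=0.1, steps=10):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(X).squeeze(-1)
        loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
        loss = loss + lam * irm_penalty(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k] for s in states]).mean(dim=0)
    return avg

# Two toy non-iid clients, one global linear model, three rounds.
global_model = nn.Linear(5, 1)
clients = [(torch.randn(64, 5), torch.randint(0, 2, (64,)).float())
           for _ in range(2)]
for _ in range(3):
    states = [local_update(global_model, X, y) for X, y in clients]
    global_model.load_state_dict(fedavg(states))
print("done:", sum(p.numel() for p in global_model.parameters()), "params")
```
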
2104.06601 | Ye Zheng | Ye Zheng, Jiahong Wu, Yongqiang Qin, Faen Zhang, Li Cui | Zero-Shot Instance Segmentation | 8 pages, 6 figures | CVPR2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has significantly improved the precision of instance
segmentation with abundant labeled data. However, in many areas such as
medicine and manufacturing, collecting sufficient data is extremely hard and
labeling it requires highly specialized skills. We follow this motivation and
propose a new task set named zero-shot instance segmentation (ZSI). In the
training phase of ZSI, the model is trained with seen data, while in the
testing phase, it is used to segment all seen and unseen instances. We first
formulate the ZSI task and propose a method to tackle the challenge, which
consists of a Zero-shot Detector, a Semantic Mask Head, a Background Aware RPN,
and a Synchronized Background Strategy. We present a new benchmark for zero-shot
instance segmentation based on the MS-COCO dataset. The extensive empirical
results in this benchmark show that our method not only surpasses the
state-of-the-art results in zero-shot object detection task but also achieves
promising performance on ZSI. Our approach will serve as a solid baseline and
facilitate future research in zero-shot instance segmentation.
| [
{
"created": "Wed, 14 Apr 2021 03:02:48 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Jun 2021 03:05:23 GMT",
"version": "v2"
}
] | 2021-06-02 | [
[
"Zheng",
"Ye",
""
],
[
"Wu",
"Jiahong",
""
],
[
"Qin",
"Yongqiang",
""
],
[
"Zhang",
"Faen",
""
],
[
"Cui",
"Li",
""
]
] |
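
The zero-shot classification step that a ZSI-style detector relies on can be sketched as projecting RoI features into a word-embedding space and scoring them against class-name vectors, so unseen classes need no visual training data; a learned background vector echoes the record's "Background Aware" component. The embeddings below are random stand-ins (e.g. for word2vec), and all sizes are illustrative.

```python
# Zero-shot RoI classification via a visual-to-semantic projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_seen, n_unseen, d_vis, d_sem = 48, 17, 1024, 300
# Class-name word vectors; random stand-ins for real word embeddings.
word_vecs = F.normalize(torch.randn(n_seen + n_unseen, d_sem), dim=1)
bg_vec = F.normalize(torch.randn(1, d_sem), dim=1)  # background-aware vector

proj = nn.Linear(d_vis, d_sem)  # trained on seen classes only

def classify(roi_feats, include_unseen=True):
    z = F.normalize(proj(roi_feats), dim=1)
    names = word_vecs if include_unseen else word_vecs[:n_seen]
    classes = torch.cat([names, bg_vec])
    return (z @ classes.T).argmax(dim=1)  # cosine scores on unit vectors

rois = torch.randn(5, d_vis)
print(classify(rois))  # indices 0..64 are classes, 65 is background
```
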
2104.06714 | Benjamin Doerr | Denis Antipov, Maxim Buzdalov, Benjamin Doerr | Lazy Parameter Tuning and Control: Choosing All Parameters Randomly From
a Power-Law Distribution | Extended version of the paper that appeared at GECCO 2021. To appear
in Algorithmica | Algorithmica 86(2): 442-484 (2024) | 10.1007/s00453-023-01098-z | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most evolutionary algorithms have multiple parameters and their values
drastically affect the performance. Due to the often complicated interplay of
the parameters, setting these values right for a particular problem (parameter
tuning) is a challenging task. This task becomes even more complicated when the
optimal parameter values change significantly during the run of the algorithm
since then a dynamic parameter choice (parameter control) is necessary.
In this work, we propose a lazy but effective solution, namely choosing all
parameter values (where this makes sense) in each iteration randomly from a
suitably scaled power-law distribution. To demonstrate the effectiveness of
this approach, we perform runtime analyses of the $(1+(\lambda,\lambda))$
genetic algorithm with all three parameters chosen in this manner. We show that
this algorithm on the one hand can imitate simple hill-climbers like the
$(1+1)$ EA, giving the same asymptotic runtime on problems like OneMax,
LeadingOnes, or Minimum Spanning Tree. On the other hand, this algorithm is
also very efficient on jump functions, where the best static parameters are
very different from those necessary to optimize simple problems. We prove a
performance guarantee that is comparable to the best performance known for
static parameters. For the most interesting case that the jump size $k$ is
constant, we prove that our performance is asymptotically better than what can
be obtained with any static parameter choice. We complement our theoretical
results with a rigorous empirical study confirming what the asymptotic runtime
results suggest.
| [
{
"created": "Wed, 14 Apr 2021 09:17:18 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Oct 2021 17:33:37 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Nov 2021 15:45:17 GMT",
"version": "v3"
},
{
"created": "Fri, 24 Feb 2023 01:31:55 GMT",
"version": "v4"
},
{
"created": "Fri, 10 Mar 2023 12:18:38 GMT",
"version": "v5"
}
] | 2024-10-08 | [
[
"Antipov",
"Denis",
""
],
[
"Buzdalov",
"Maxim",
""
],
[
"Doerr",
"Benjamin",
""
]
] |
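
A bare-bones illustration of the "lazy" scheme in the record above: each iteration of the $(1+(\lambda,\lambda))$ GA draws $\lambda$ from a power-law distribution and derives the mutation rate $p = \lambda/n$ and crossover bias $c = 1/\lambda$ from it. The OneMax loop, the exponent $\beta = 2.5$, and the sampling range are illustrative choices, not the paper's exact experimental setup.

```python
# (1+(lambda,lambda)) GA on OneMax with lambda sampled each iteration
# from a power-law distribution.
import numpy as np

rng = np.random.default_rng(0)

def power_law_sample(n_max, beta=2.5):
    """Draw k in {1, ..., n_max} with P(k) proportional to k^(-beta)."""
    ks = np.arange(1, n_max + 1)
    p = ks.astype(float) ** -beta
    return int(rng.choice(ks, p=p / p.sum()))

def onemax(x):
    return int(x.sum())

def flip(x, ell):
    """Flip exactly ell distinct, uniformly chosen bits."""
    y = x.copy()
    idx = rng.choice(len(x), size=ell, replace=False)
    y[idx] = 1 - y[idx]
    return y

n = 80
x = rng.integers(0, 2, n)
iterations = 0
while onemax(x) < n:
    iterations += 1
    lam = power_law_sample(32)        # lazy choice of lambda
    p, c = lam / n, 1.0 / lam         # standard coupling of p and c to lambda
    ell = rng.binomial(n, p)
    # Mutation phase: best of lam offspring with the same radius ell.
    xp = max((flip(x, ell) for _ in range(lam)), key=onemax)
    # Crossover phase: take each bit from xp with probability c.
    offs = (np.where(rng.random(n) < c, xp, x) for _ in range(lam))
    y = max(offs, key=onemax)
    if onemax(y) >= onemax(x):
        x = y
print(f"OneMax(n={n}) solved in {iterations} iterations")
```
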
2104.06797 | Gaochang Wu | Gaochang Wu, Yebin Liu, Lu Fang, Tianyou Chai | Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network | 15 pages, 12 figures. Accepted by IEEE TPAMI | IEEE TPAMI, 2021 | 10.1109/TPAMI.2021.3073739 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The light field (LF) reconstruction is mainly confronted with two challenges,
large disparity and the non-Lambertian effect. Typical approaches either
address the large disparity challenge using depth estimation followed by view
synthesis or eschew explicit depth information to enable non-Lambertian
rendering, but rarely solve both challenges in a unified framework. In this
paper, we revisit the classic LF rendering framework to address both challenges
by incorporating it with advanced deep learning techniques. First, we
analytically show that the essential issue behind the large disparity and
non-Lambertian challenges is the aliasing problem. Classic LF rendering
approaches typically mitigate the aliasing with a reconstruction filter in the
Fourier domain, which is, however, intractable to implement within a deep
learning pipeline. Instead, we introduce an alternative framework to perform
anti-aliasing reconstruction in the image domain and analytically show
comparable efficacy on the aliasing issue. To explore the full potential, we
then embed the anti-aliasing framework into a deep neural network through the
design of an integrated architecture and trainable parameters. The network is
trained through end-to-end optimization using a specially designed training set,
including regular LFs and unstructured LFs. The proposed deep learning pipeline
shows a substantial superiority in solving both the large disparity and the
non-Lambertian challenges compared with other state-of-the-art approaches. In
addition to view interpolation for an LF, we show that the proposed
pipeline also benefits light field view extrapolation.
| [
{
"created": "Wed, 14 Apr 2021 12:03:25 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Apr 2021 02:38:30 GMT",
"version": "v2"
}
] | 2021-04-29 | [
[
"Wu",
"Gaochang",
""
],
[
"Liu",
"Yebin",
""
],
[
"Fang",
"Lu",
""
],
[
"Chai",
"Tianyou",
""
]
] |
2104.06815 | Guo-Wang Xie | Guo-Wang Xie, Fei Yin, Xu-Yao Zhang, and Cheng-Lin Liu | Dewarping Document Image By Displacement Flow Estimation with Fully
Convolutional Network | null | International Workshop on Document Analysis Systems. Springer,
Cham, 2020: 131-144 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As camera-based documents are increasingly used, the rectification of
distorted document images becomes a need to improve the recognition
performance. In this paper, we propose a novel framework for both rectifying
distorted document image and removing background finely, by estimating
pixel-wise displacements using a fully convolutional network (FCN). The
document image is rectified by transformation according to the displacements of
pixels. The FCN is trained by regressing displacements of synthesized distorted
documents, and to control the smoothness of displacements, we propose a Local
Smooth Constraint (LSC) in regularization. Our approach is easy to implement
and consumes moderate computing resources. Experiments show that our approach
can dewarp document images effectively under various geometric distortions, and
achieves state-of-the-art performance in terms of local details and
overall effect.
| [
{
"created": "Wed, 14 Apr 2021 12:32:36 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Xie",
"Guo-Wang",
""
],
[
"Yin",
"Fei",
""
],
[
"Zhang",
"Xu-Yao",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] |
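
The rectification step in the record above can be sketched as a network predicting a per-pixel displacement field and the distorted image being resampled along it. The tiny convolutional net below stands in for the paper's FCN, and the Local Smooth Constraint is approximated as a total-variation penalty on the flow; both simplifications are mine.

```python
# Displacement-flow rectification: predict (dx, dy) per pixel, then
# resample the distorted image with grid_sample.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    """Stand-in for the paper's FCN; predicts per-pixel (dx, dy)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))

    def forward(self, img):
        return self.net(img)

def warp(img, flow):
    """Resample img along the predicted pixel displacements."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float()   # (H, W, 2) in (x, y) order
    grid = base + flow.permute(0, 2, 3, 1)         # add displacements
    gx = 2 * grid[..., 0] / (w - 1) - 1            # normalize to [-1, 1]
    gy = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

def smoothness(flow):
    """Total-variation stand-in for the Local Smooth Constraint."""
    return (flow[..., 1:, :] - flow[..., :-1, :]).abs().mean() + \
           (flow[..., :, 1:] - flow[..., :, :-1]).abs().mean()

img = torch.rand(1, 3, 64, 64)
flow = TinyFlowNet()(img)
print(warp(img, flow).shape, smoothness(flow).item())
```
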
2104.06924 | Jiaying Lu | Jiaying Lu, Jinho D. Choi | Evaluation of Unsupervised Entity and Event Salience Estimation | null | Proceedings of the 34rd International Florida Artificial
Intelligence Research Society Conference, 2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Salience Estimation aims to predict term importance in documents. Due to few
existing human-annotated datasets and the subjective notion of salience,
previous studies typically generate pseudo-ground truth for evaluation.
However, our investigation reveals that the evaluation protocol proposed by
prior work is difficult to replicate, which has led to few follow-up
studies. Moreover, the evaluation process is problematic: the entity linking
tool used for entity matching is very noisy, while ignoring event arguments
in event evaluation leads to artificially boosted performance. In this work, we
propose a lightweight yet practical evaluation protocol for entity and event
salience estimation, which incorporates a more reliable syntactic dependency
parser. Furthermore, we conduct a comprehensive analysis of popular entity and event
definition standards, and present our own definition for the Salience
Estimation task to reduce noise during the pseudo-ground truth generation
process. Finally, we construct dependency-based heterogeneous graphs to
capture the interactions of entities and events. The empirical results show
that both baseline methods and the novel GNN method utilizing the heterogeneous
graph consistently outperform the previous SOTA model in all proposed metrics.
| [
{
"created": "Wed, 14 Apr 2021 15:23:08 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Lu",
"Jiaying",
""
],
[
"Choi",
"Jinho D.",
""
]
] |
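
A sketch of the dependency-based matching the record above proposes for evaluation: rather than a noisy entity linker, a predicted mention counts as matching a gold salient entity when their syntactic head lemmas agree. The exact matching rule below is my reading of the protocol, kept deliberately simple, and it assumes the spaCy `en_core_web_sm` model is installed.

```python
# Dependency-based salience matching: compare syntactic head lemmas
# instead of running an entity linker.
import spacy

nlp = spacy.load("en_core_web_sm")

def head_lemma(mention):
    doc = nlp(mention)
    # The syntactic root of the mention serves as its head word.
    root = next(tok for tok in doc if tok.head == tok)
    return root.lemma_.lower()

def salience_match(predicted, gold):
    gold_heads = {head_lemma(g) for g in gold}
    hits = [p for p in predicted if head_lemma(p) in gold_heads]
    return len(hits) / max(len(predicted), 1)  # precision-style score

pred = ["the federal reserve", "interest rates", "a spokesperson"]
gold = ["Federal Reserve", "rate"]
print(salience_match(pred, gold))  # 2 of 3 predictions match by head lemma
```
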
2104.06935 | Julian Chibane | Julian Chibane, Aayush Bansal, Verica Lazova, Gerard Pons-Moll | Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views
of Novel Scenes | IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
2021 | IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
2021 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent neural view synthesis methods have achieved impressive quality and
realism, surpassing classical pipelines which rely on multi-view
reconstruction. State-of-the-Art methods, such as NeRF, are designed to learn a
single scene with a neural network and require dense multi-view inputs. Testing
on a new scene requires re-training from scratch, which takes 2-3 days. In this
work, we introduce Stereo Radiance Fields (SRF), a neural view synthesis
approach that is trained end-to-end, generalizes to new scenes, and requires
only sparse views at test time. The core idea is a neural architecture inspired
by classical multi-view stereo methods, which estimates surface points by
finding similar image regions in stereo images. In SRF, we predict color and
density for each 3D point given an encoding of its stereo correspondence in the
input images. The encoding is implicitly learned by an ensemble of pair-wise
similarities -- emulating classical stereo. Experiments show that SRF learns
structure instead of overfitting on a scene. We train on multiple scenes of the
DTU dataset and generalize to new ones without re-training, requiring only 10
sparse and spread-out views as input. We show that 10-15 minutes of fine-tuning
further improve the results, achieving significantly sharper, more detailed
results than scene-specific models. The code, model, and videos are available
at https://virtualhumans.mpi-inf.mpg.de/srf/.
| [
{
"created": "Wed, 14 Apr 2021 15:38:57 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Chibane",
"Julian",
""
],
[
"Bansal",
"Aayush",
""
],
[
"Lazova",
"Verica",
""
],
[
"Pons-Moll",
"Gerard",
""
]
] |
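
The core SRF encoding the record above describes can be sketched as follows: the image features at a 3D point's projections into the sparse input views are compared pair-wise, and the stack of similarities is decoded to color and density. Feature extraction and camera projection are stubbed out with random tensors here, and all sizes are illustrative.

```python
# Pair-wise similarity encoding decoded to color and density, in the
# spirit of SRF's stereo-inspired architecture.
import itertools
import torch
import torch.nn as nn

class SRFHead(nn.Module):
    def __init__(self, feat_dim=32, n_views=10):
        super().__init__()
        n_pairs = n_views * (n_views - 1) // 2
        self.decoder = nn.Sequential(
            nn.Linear(n_pairs, 128), nn.ReLU(),
            nn.Linear(128, 4))  # RGB + density

    def forward(self, view_feats):
        # view_feats: (n_views, n_points, feat_dim) -- image features sampled
        # at each 3D point's projection into every input view.
        sims = [(view_feats[i] * view_feats[j]).sum(-1)  # dot-product similarity
                for i, j in itertools.combinations(range(len(view_feats)), 2)]
        enc = torch.stack(sims, dim=-1)                  # (n_points, n_pairs)
        out = self.decoder(enc)
        rgb, sigma = torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])
        return rgb, sigma

feats = torch.randn(10, 2048, 32)  # 10 input views, 2048 sampled 3D points
rgb, sigma = SRFHead()(feats)
print(rgb.shape, sigma.shape)      # (2048, 3), (2048,)
```
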