id (string, len 10) | submitter (string, len 3–52) | authors (string, len 6–7.24k) | title (string, len 12–217) | comments (string, len 1–446, nullable) | journal-ref (string, len 4–297) | doi (string, len 12–118, nullable) | report-no (string, 237 classes) | categories (string, len 5–71) | license (string, 6 classes) | abstract (string, len 90–3.26k) | versions (list, len 1–17) | update_date (string, 969 classes) | authors_parsed (list, len 1–451)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2404.03754
|
Nuno Fachada
|
Afonso Oliveira, Nuno Fachada, João P. Matos-Carvalho
|
Data Science for Geographic Information Systems
|
The peer-reviewed version of this paper is published in IEEE Xplore
at https://doi.org/10.1109/YEF-ECE62614.2024.10624902. This version is
typeset by the author and differs only in pagination and typographical detail
|
2024 8th International Young Engineers Forum on Electrical and
Computer Engineering (YEF-ECE), 1-7, IEEE, 2024
|
10.1109/YEF-ECE62614.2024.10624902
| null |
eess.IV cs.CV physics.geo-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The integration of data science into Geographic Information Systems (GIS) has
facilitated the evolution of these tools into complete spatial analysis
platforms. The adoption of machine learning and big data techniques has
equipped these platforms with the capacity to handle larger amounts of
increasingly complex data, transcending the limitations of more traditional
approaches. This work traces the historical and technical evolution of data
science and GIS as fields of study, highlighting the critical points of
convergence between domains, and underlining the many sectors that rely on this
integration. A GIS application is presented as a case study in the disaster
management sector where we utilize aerial data from Tróia, Portugal, to
emphasize the process of insight extraction from raw data. We conclude by
outlining prospects for future research in the integration of these fields in
general, and the developed application in particular.
|
[
{
"created": "Thu, 4 Apr 2024 18:50:58 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2024 17:14:33 GMT",
"version": "v2"
}
] |
2024-08-15
|
[
[
"Oliveira",
"Afonso",
""
],
[
"Fachada",
"Nuno",
""
],
[
"Matos-Carvalho",
"João P.",
""
]
] |
2404.03838
|
Frank Neumann
|
Benjamin Doerr, Joshua Knowles, Aneta Neumann, Frank Neumann
|
A Block-Coordinate Descent EMO Algorithm: Theoretical and Empirical
Analysis
|
Accepted at GECCO 2024
|
GECCO '24: Proceedings of the Genetic and Evolutionary Computation
Conference, 493 - 501, 2024. ACM
|
10.1145/3638529.3654169
| null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider whether conditions exist under which block-coordinate descent is
asymptotically efficient in evolutionary multi-objective optimization,
addressing an open problem. Block-coordinate descent, where an optimization
problem is decomposed into $k$ blocks of decision variables and each of the
blocks is optimized (with the others fixed) in a sequence, is a technique used
in some large-scale optimization problems such as airline scheduling, however
its use in multi-objective optimization is less studied. We propose a
block-coordinate version of GSEMO and compare its running time to the standard
GSEMO algorithm. Theoretical and empirical results on a bi-objective test
function, a variant of LOTZ, serve to demonstrate the existence of cases where
block-coordinate descent is faster. The result may yield wider insights into
this class of algorithms.
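As a minimal sketch of the decomposition the paper studies (not of its block-coordinate GSEMO variant, which evolves a Pareto archive), the toy below optimizes one block of decision variables at a time while the others stay fixed; the objective and all names are ours.

```python
import numpy as np

def block_coordinate_descent(f, x0, k, sweeps=200, lr=0.1, eps=1e-6):
    """Cycle over k blocks of coordinates, improving each block in turn
    while the remaining coordinates stay fixed."""
    x = x0.astype(float).copy()
    blocks = np.array_split(np.arange(x.size), k)
    for _ in range(sweeps):
        for idx in blocks:
            g = np.zeros_like(x)
            for i in idx:  # finite-difference gradient w.r.t. this block only
                e = np.zeros_like(x)
                e[i] = eps
                g[i] = (f(x + e) - f(x - e)) / (2 * eps)
            x[idx] -= lr * g[idx]
    return x

# Toy usage on a separable quadratic, the friendliest case for block updates.
f = lambda x: np.sum((x - np.arange(x.size)) ** 2)
print(block_coordinate_descent(f, np.zeros(8), k=4))  # ~[0, 1, ..., 7]
```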
|
[
{
"created": "Thu, 4 Apr 2024 23:50:18 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Apr 2024 00:13:05 GMT",
"version": "v2"
}
] |
2024-07-17
|
[
[
"Doerr",
"Benjamin",
""
],
[
"Knowles",
"Joshua",
""
],
[
"Neumann",
"Aneta",
""
],
[
"Neumann",
"Frank",
""
]
] |
2404.03883
|
JudyX Yang
|
Judy X Yang, Jun Zhou, Jing Wang, Hui Tian, and Alan Wee-Chung Liew
|
LiDAR-Guided Cross-Attention Fusion for Hyperspectral Band Selection and
Image Classification
|
15 pages, 13 figures
|
IEEE - TGRS-2024-00264.R1 Final Files Received
| null | null |
eess.IV cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The fusion of hyperspectral and LiDAR data has been an active research topic.
Existing fusion methods have ignored the high-dimensionality and redundancy
challenges in hyperspectral images, even though band selection methods have
been intensively studied for hyperspectral image (HSI) processing. This paper
addresses this significant gap by introducing a cross-attention mechanism from
the transformer architecture for the selection of HSI bands guided by LiDAR
data. LiDAR provides high-resolution vertical structural information, which can
be useful in distinguishing different types of land cover that may have similar
spectral signatures but different structural profiles. In our approach, the
LiDAR data are used as the "query" to search and identify the "key" from the
HSI to choose the most pertinent bands for LiDAR. This method ensures that the
selected HSI bands drastically reduce redundancy and computational requirements
while working optimally with the LiDAR data. Extensive experiments have been
undertaken on three paired HSI and LiDAR data sets: Houston 2013, Trento and
MUUFL. The results highlight the superiority of the cross-attention mechanism,
underlining the enhanced classification accuracy of the identified HSI bands
when fused with the LiDAR features. The results also show that the use of fewer
bands combined with LiDAR surpasses the performance of state-of-the-art fusion
models.
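A schematic numpy sketch of the query/key scoring described above: LiDAR features act as queries, per-band HSI embeddings as keys, and bands are ranked by the attention mass they receive. All names and shapes are illustrative; the paper's mechanism operates on learned transformer features.

```python
import numpy as np

def select_bands_by_cross_attention(lidar_feats, hsi_band_feats, n_bands):
    """Score HSI bands with scaled dot-product attention (LiDAR = query,
    band embeddings = key), then keep the highest-scoring bands."""
    d = lidar_feats.shape[1]
    scores = lidar_feats @ hsi_band_feats.T / np.sqrt(d)  # (n_query, n_bands)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)               # softmax over bands
    band_importance = attn.sum(axis=0)                    # total mass per band
    return np.argsort(band_importance)[::-1][:n_bands]

rng = np.random.default_rng(0)
keep = select_bands_by_cross_attention(rng.normal(size=(64, 32)),
                                       rng.normal(size=(144, 32)), n_bands=20)
```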
|
[
{
"created": "Fri, 5 Apr 2024 04:11:31 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Apr 2024 06:34:52 GMT",
"version": "v2"
}
] |
2024-04-16
|
[
[
"Yang",
"Judy X",
""
],
[
"Zhou",
"Jun",
""
],
[
"Wang",
"Jing",
""
],
[
"Tian",
"Hui",
""
],
[
"Liew",
"Alan Wee-Chung",
""
]
] |
2404.03938
|
Gulsum Yigit
|
Gulsum Yigit and Mehmet Fatih Amasyali
|
Data Augmentation with In-Context Learning and Comparative Evaluation in
Math Word Problem Solving
|
Accepted in SN Computer Science
|
SN Computer Science, 5, 506 (2024)
|
10.1007/s42979-024-02853-x
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Math Word Problem (MWP) solving presents a challenging task in Natural
Language Processing (NLP). This study aims to provide MWP solvers with a more
diverse training set, ultimately improving their ability to solve various math
problems. We propose several methods for data augmentation that modify the
problem texts and equations, such as synonym replacement, rule-based question
replacement, and rule-based question reversal, over two English MWP datasets.
This study goes further by introducing a new in-context learning
augmentation method, employing the Llama-7b language model. This approach
involves instruction-based prompting for rephrasing the math problem texts.
Performance evaluations are conducted on 9 baseline models, revealing that
augmentation methods outperform baseline models. Moreover, concatenating
examples generated by various augmentation methods further improves
performance.
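A toy sketch of the simplest augmentation listed above, synonym replacement; the synonym table and function name are invented, and the rule-based and Llama-7b rephrasing methods are not reproduced.

```python
import random

# Hypothetical synonym table; a real system would use a thesaurus or LM.
SYNONYMS = {"bought": ["purchased"], "gave": ["handed"], "total": ["sum"]}

def synonym_replace(problem_text, p=0.3, rng=random.Random(0)):
    """Swap known content words for synonyms with probability p, leaving
    numbers and the underlying equation untouched."""
    out = [rng.choice(SYNONYMS[w]) if w in SYNONYMS and rng.random() < p else w
           for w in problem_text.split()]
    return " ".join(out)

print(synonym_replace("Tom bought 3 apples and gave 1 away. Find the total."))
```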
|
[
{
"created": "Fri, 5 Apr 2024 07:57:03 GMT",
"version": "v1"
}
] |
2024-05-02
|
[
[
"Yigit",
"Gulsum",
""
],
[
"Amasyali",
"Mehmet Fatih",
""
]
] |
2404.03978
|
Jiefeng Zhou
|
Jiefeng Zhou, Zhen Li, Yong Deng
|
Random Walk in Random Permutation Set Theory
|
27 pages, 8 figures; references added
|
Chaos: An Interdisciplinary Journal of Nonlinear Science(2024)
|
10.1063/5.0220154
|
34,9
|
cs.AI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Random walk is an explainable approach for modeling natural processes at the
molecular level. The Random Permutation Set Theory (RPST) serves as a framework
for uncertainty reasoning, extending the applicability of Dempster-Shafer
Theory. Recent explorations indicate a promising link between RPST and random
walk. In this study, we conduct an analysis and construct a random walk model
based on the properties of RPST, with Monte Carlo simulations of such random
walk. Our findings reveal that the random walk generated through RPST exhibits
characteristics similar to those of a Gaussian random walk and can be
transformed into a Wiener process through a specific limiting scaling
procedure. This investigation establishes a novel connection between RPST and
random walk theory, thereby not only expanding the applicability of RPST, but
also demonstrating the potential for combining the strengths of both approaches
to improve problem-solving abilities.
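A plain Monte Carlo sketch of the kind of experiment described: simulate many i.i.d.-step walks and check the diffusive scaling under which endpoints become Gaussian (the RPST-specific step distribution from the paper is not reproduced here).

```python
import numpy as np

def simulate_walks(n_walks=10_000, n_steps=1_000, seed=0):
    """Monte Carlo simulation of i.i.d.-step random walks; by the CLT the
    rescaled endpoint S_n / sqrt(n) approaches a Gaussian, mirroring the
    Gaussian-like behaviour reported for the RPST-generated walk."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1.0, 1.0], size=(n_walks, n_steps))
    endpoints = steps.cumsum(axis=1)[:, -1] / np.sqrt(n_steps)  # diffusive scaling
    return endpoints.mean(), endpoints.var()  # ~0 and ~1 for +/-1 steps

print(simulate_walks())
```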
|
[
{
"created": "Fri, 5 Apr 2024 09:19:55 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2024 15:18:14 GMT",
"version": "v2"
}
] |
2024-09-27
|
[
[
"Zhou",
"Jiefeng",
""
],
[
"Li",
"Zhen",
""
],
[
"Deng",
"Yong",
""
]
] |
2404.03992
|
Mohammed Ghaith Altarabichi
|
Mohammed Ghaith Altarabichi, Sławomir Nowaczyk, Sepideh Pashami,
Peyman Sheikholharam Mashhadi, Julia Handl
|
Rolling the dice for better deep learning performance: A study of
randomness techniques in deep neural networks
| null |
Information Sciences, p.120500 (2024)
|
10.1016/j.ins.2024.120500
| null |
cs.LG cs.AI cs.CV cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper investigates how various randomization techniques impact Deep
Neural Networks (DNNs). Randomization, like weight noise and dropout, aids in
reducing overfitting and enhancing generalization, but their interactions are
poorly understood. The study categorizes randomness techniques into four types
and proposes new methods: adding noise to the loss function and random masking
of gradient updates. Using Particle Swarm Optimizer (PSO) for hyperparameter
optimization, it explores optimal configurations across MNIST, FASHION-MNIST,
CIFAR10, and CIFAR100 datasets. Over 30,000 configurations are evaluated,
revealing data augmentation and weight initialization randomness as main
performance contributors. Correlation analysis shows different optimizers
prefer distinct randomization types. The complete implementation and dataset
are available on GitHub.
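A toy single-step sketch of the two techniques proposed above, loss-function noise and random masking of gradient updates; in this simplification the loss noise is applied to the gradient surrogate, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_masked_sgd_step(w, grad_fn, lr=0.01, loss_noise=0.01, mask_p=0.2):
    """One SGD step with Gaussian noise on the objective (realized here as
    gradient noise) and random masking of ~mask_p of the update entries."""
    g = grad_fn(w) + rng.normal(scale=loss_noise, size=w.shape)
    mask = rng.random(w.shape) >= mask_p      # drop a random fraction of updates
    return w - lr * g * mask

# Toy usage on the quadratic loss 0.5*||w||^2, whose gradient is w itself.
w = rng.normal(size=10)
for _ in range(500):
    w = noisy_masked_sgd_step(w, grad_fn=lambda w: w)
```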
|
[
{
"created": "Fri, 5 Apr 2024 10:02:32 GMT",
"version": "v1"
}
] |
2024-04-08
|
[
[
"Altarabichi",
"Mohammed Ghaith",
""
],
[
"Nowaczyk",
"Sławomir",
""
],
[
"Pashami",
"Sepideh",
""
],
[
"Mashhadi",
"Peyman Sheikholharam",
""
],
[
"Handl",
"Julia",
""
]
] |
2404.03996
|
Mohammed Ghaith Altarabichi
|
Mohammed Ghaith Altarabichi, Sławomir Nowaczyk, Sepideh Pashami,
Peyman Sheikholharam Mashhadi
|
Fast Genetic Algorithm for feature selection -- A qualitative
approximation approach
| null |
Expert Systems with Applications, 211, p.118528 (2023)
|
10.1016/j.eswa.2022.118528
| null |
cs.NE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Evolutionary Algorithms (EAs) are often challenging to apply in real-world
settings since evolutionary computations involve a large number of evaluations
of a typically expensive fitness function. For example, an evaluation could
involve training a new machine learning model. An approximation (also known as
meta-model or a surrogate) of the true function can be used in such
applications to alleviate the computation cost. In this paper, we propose a
two-stage surrogate-assisted evolutionary approach to address the computational
issues arising from using Genetic Algorithm (GA) for feature selection in a
wrapper setting for large datasets. We define 'Approximation Usefulness' to
capture the necessary conditions to ensure correctness of the EA computations
when an approximation is used. Based on this definition, we propose a procedure
to construct a lightweight qualitative meta-model by the active selection of
data instances. We then use a meta-model to carry out the feature selection
task. We apply this procedure to the GA-based algorithm CHC (Cross generational
elitist selection, Heterogeneous recombination and Cataclysmic mutation) to
create a Qualitative approXimations variant, CHCQX. We show that CHCQX
converges faster to feature subset solutions of significantly higher accuracy
(as compared to CHC), particularly for large datasets with over 100K instances.
We also demonstrate the applicability of the thinking behind our approach more
broadly to Swarm Intelligence (SI), another branch of the Evolutionary
Computation (EC) paradigm, with results from PSOQX, a qualitative approximation
adaptation of the Particle Swarm Optimization (PSO) method. A GitHub repository
with the complete implementation is available.
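A hedged sketch of the idea of a lightweight qualitative meta-model: evaluate a candidate feature subset on a small sample instead of the full data, so the GA gets a cheap fitness signal. The random subsample below merely stands in for the paper's active instance-selection procedure, and the API is invented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def surrogate_fitness(X_small, y_small, mask):
    """Cheap fitness for a feature subset: accuracy of a light model on a
    small sample. Choosing the sample so subset *rankings* are preserved is
    the role of the paper's 'Approximation Usefulness' condition."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X_small[:, mask], y_small, cv=3).mean()

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100_000, 30)), rng.integers(0, 2, 100_000)
idx = rng.choice(len(X), size=500, replace=False)  # stand-in for active selection
mask = rng.random(30) < 0.5                        # one GA individual
print(surrogate_fitness(X[idx], y[idx], mask))
```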
|
[
{
"created": "Fri, 5 Apr 2024 10:15:24 GMT",
"version": "v1"
}
] |
2024-04-08
|
[
[
"Altarabichi",
"Mohammed Ghaith",
""
],
[
"Nowaczyk",
"Sławomir",
""
],
[
"Pashami",
"Sepideh",
""
],
[
"Mashhadi",
"Peyman Sheikholharam",
""
]
] |
2404.04040
|
Paola Natalia Cañas Rodriguez
|
Paola Natalia Cañas, Mikel García, Nerea Aranjuelo, Marcos Nieto,
Aitor Iglesias and Igor Rodríguez
|
Dynamic Risk Assessment Methodology with an LDM-based System for Parking
Scenarios
| null |
2023 IEEE 26th International Conference on Intelligent
Transportation Systems (ITSC), Bilbao, Spain, 2023, pp. 5034-5039
|
10.1109/ITSC57777.2023.10422385
| null |
cs.CV cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper describes the methodology for building a dynamic risk assessment
for ADAS (Advanced Driving Assistance Systems) algorithms in parking scenarios,
fusing exterior and interior perception for a better understanding of the scene
and a more comprehensive risk estimation. This includes the definition of a
dynamic risk methodology that depends on the situation from inside and outside
the vehicle, the creation of a multi-sensor dataset of risk assessment for ADAS
benchmarking purposes, and a Local Dynamic Map (LDM) that fuses data from the
exterior and interior of the car to build an LDM-based Dynamic Risk Assessment
System (DRAS).
|
[
{
"created": "Fri, 5 Apr 2024 11:49:29 GMT",
"version": "v1"
}
] |
2024-04-08
|
[
[
"Cañas",
"Paola Natalia",
""
],
[
"García",
"Mikel",
""
],
[
"Aranjuelo",
"Nerea",
""
],
[
"Nieto",
"Marcos",
""
],
[
"Iglesias",
"Aitor",
""
],
[
"Rodríguez",
"Igor",
""
]
] |
2404.04042
|
Hele-Andra Kuulmets
|
Hele-Andra Kuulmets, Taido Purason, Agnes Luhtaru, Mark Fishel
|
Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer
| null |
Findings of the Association for Computational Linguistics: NAACL
2024, pages 3309-3325
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores cost-efficient methods to adapt pretrained Large Language
Models (LLMs) to new lower-resource languages, with a specific focus on
Estonian. Leveraging the Llama 2 model, we investigate the impact of combining
cross-lingual instruction-tuning with additional monolingual pretraining. Our
results demonstrate that even a relatively small amount of additional
monolingual pretraining followed by cross-lingual instruction-tuning
significantly enhances results on Estonian. Furthermore, we showcase
cross-lingual knowledge transfer from high-quality English instructions to
Estonian, resulting in improvements in commonsense reasoning and multi-turn
conversation capabilities. Our best model, named \textsc{Llammas}, represents
the first open-source instruction-following LLM for Estonian. Additionally, we
publish Alpaca-est, the first general task instruction dataset for Estonian.
These contributions mark the initial progress in the direction of developing
open-source LLMs for Estonian.
|
[
{
"created": "Fri, 5 Apr 2024 11:52:02 GMT",
"version": "v1"
}
] |
2024-07-03
|
[
[
"Kuulmets",
"Hele-Andra",
""
],
[
"Purason",
"Taido",
""
],
[
"Luhtaru",
"Agnes",
""
],
[
"Fishel",
"Mark",
""
]
] |
2404.04279
|
Lacour Philippe
|
Aurélien Bénel (Tech-CICO), Joris Falip (Tech-CICO), Philippe
Lacour (UnB)
|
When Abel Kills Cain: What Machine Translation Cannot Capture
|
in French
|
Ce qui échappe à l'Intelligence Artificielle, Hermann,
pp.111-129, 2024, 9791037038449
|
10.3166/lcn.10.4.103-132
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The article aims at identifying what, from a structural point of view, AI
based automatic translators cannot fully capture. It focuses on the machine's
mistakes, in order to try to explain their causes. The biblical story of Caïn
and Abel has been chosen because of its rich interpretive and critical
tradition, but also because of its semantic difficulty. The investigation
begins with the observation, for the translation of this text, of the language
pairs and interfaces offered by the best known machine translation services
(Google Translate, DeepL). A typology of the most frequent translation errors
is then established. Finally, contemporary translations are compared, in order
to underline the unique contribution of each. In conclusion, the article
suggests a revision of translation theory and, correlatively, a reformulation
of its technology concerning cultural texts.
Keywords: Artificial Intelligence, Translation, Limitations, Interpretation,
Comparison, Unicity
|
[
{
"created": "Tue, 2 Apr 2024 12:46:00 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Bénel",
"Aurélien",
"",
"Tech-CICO"
],
[
"Falip",
"Joris",
"",
"Tech-CICO"
],
[
"Lacour",
"Philippe",
"",
"UnB"
]
] |
2404.04310
|
Dmitry V. Dylov
|
Nikolay Kalmykov, Rishat Zagidullin, Oleg Rogov, Sergey Rykovanov,
Dmitry V. Dylov
|
Suppressing Modulation Instability with Reinforcement Learning
| null |
Chaos, Solitons & Fractals, 115197, Volume 186, 2024
|
10.1016/j.chaos.2024.115197
| null |
nlin.PS cs.AI cs.LG cs.SY eess.SY physics.app-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Modulation instability is a phenomenon of spontaneous pattern formation in
nonlinear media, oftentimes leading to an unpredictable behaviour and a
degradation of a signal of interest. We propose an approach based on
reinforcement learning to suppress the unstable modes by optimizing the
parameters for the time modulation of the potential in the nonlinear system. We
test our approach in 1D and 2D cases and propose a new class of
physically-meaningful reward functions to guarantee tamed instability.
|
[
{
"created": "Fri, 5 Apr 2024 10:29:18 GMT",
"version": "v1"
}
] |
2024-07-24
|
[
[
"Kalmykov",
"Nikolay",
""
],
[
"Zagidullin",
"Rishat",
""
],
[
"Rogov",
"Oleg",
""
],
[
"Rykovanov",
"Sergey",
""
],
[
"Dylov",
"Dmitry V.",
""
]
] |
2404.04446
|
David Watson
|
David S. Watson, Jordan Penn, Lee M. Gunderson, Gecia
Bravo-Hermsdorff, Afsaneh Mastouri, Ricardo Silva
|
Bounding Causal Effects with Leaky Instruments
|
Camera ready version (UAI 2024)
|
40th Conference on Uncertainty in Artificial Intelligence (UAI
2024)
| null | null |
stat.ME cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Instrumental variables (IVs) are a popular and powerful tool for estimating
causal effects in the presence of unobserved confounding. However, classical
approaches rely on strong assumptions such as the $\textit{exclusion
criterion}$, which states that instrumental effects must be entirely mediated
by treatments. This assumption often fails in practice. When IV methods are
improperly applied to data that do not meet the exclusion criterion, estimated
causal effects may be badly biased. In this work, we propose a novel solution
that provides $\textit{partial}$ identification in linear systems given a set
of $\textit{leaky instruments}$, which are allowed to violate the exclusion
criterion to some limited degree. We derive a convex optimization objective
that provides provably sharp bounds on the average treatment effect under some
common forms of information leakage, and implement inference procedures to
quantify the uncertainty of resulting estimates. We demonstrate our method in a
set of experiments with simulated data, where it performs favorably against the
state of the art. An accompanying $\texttt{R}$ package, $\texttt{leakyIV}$, is
available from $\texttt{CRAN}$.
|
[
{
"created": "Fri, 5 Apr 2024 23:17:25 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2024 09:59:09 GMT",
"version": "v2"
}
] |
2024-05-09
|
[
[
"Watson",
"David S.",
""
],
[
"Penn",
"Jordan",
""
],
[
"Gunderson",
"Lee M.",
""
],
[
"Bravo-Hermsdorff",
"Gecia",
""
],
[
"Mastouri",
"Afsaneh",
""
],
[
"Silva",
"Ricardo",
""
]
] |
2404.04526
|
Sara Rojas
|
Sara Rojas, Julien Philip, Kai Zhang, Sai Bi, Fujun Luan, Bernard
Ghanem, Kalyan Sunkavall
|
DATENeRF: Depth-Aware Text-based Editing of NeRFs
|
3D Scene Editing, Neural Rendering, Diffusion Models, Accepted to
ECCV24
|
ECCV 2024
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent advancements in diffusion models have shown remarkable proficiency in
editing 2D images based on text prompts. However, extending these techniques to
edit scenes in Neural Radiance Fields (NeRF) is complex, as editing individual
2D frames can result in inconsistencies across multiple views. Our crucial
insight is that a NeRF scene's geometry can serve as a bridge to integrate
these 2D edits. Utilizing this geometry, we employ a depth-conditioned
ControlNet to enhance the coherence of each 2D image modification. Moreover, we
introduce an inpainting approach that leverages the depth information of NeRF
scenes to distribute 2D edits across different images, ensuring robustness
against errors and resampling challenges. Our results reveal that this
methodology achieves more consistent, lifelike, and detailed edits than
existing leading methods for text-driven NeRF scene editing.
|
[
{
"created": "Sat, 6 Apr 2024 06:48:16 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Aug 2024 11:17:28 GMT",
"version": "v2"
}
] |
2024-08-02
|
[
[
"Rojas",
"Sara",
""
],
[
"Philip",
"Julien",
""
],
[
"Zhang",
"Kai",
""
],
[
"Bi",
"Sai",
""
],
[
"Luan",
"Fujun",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Sunkavall",
"Kalyan",
""
]
] |
2404.04561
|
Jingyi Pan
|
Jingyi Pan, Zipeng Wang, Lin Wang
|
Co-Occ: Coupling Explicit Feature Fusion with Volume Rendering
Regularization for Multi-Modal 3D Semantic Occupancy Prediction
|
Accepted by IEEE Robotics and Automation Letters (RA-L)
|
IEEE Robotics and Automation Letters, Volume 9 Issue 6, 5687 -
5694, June 2024
|
10.1109/LRA.2024.3396092
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D semantic occupancy prediction is a pivotal task in the field of autonomous
driving. Recent approaches have made great advances in 3D semantic occupancy
predictions on a single modality. However, multi-modal semantic occupancy
prediction approaches have encountered difficulties in dealing with the
modality heterogeneity, modality misalignment, and insufficient modality
interactions that arise when fusing data from different modalities, which
may result in the loss of important geometric and semantic information. This
letter presents a novel multi-modal, i.e., LiDAR-camera 3D semantic occupancy
prediction framework, dubbed Co-Occ, which couples explicit LiDAR-camera
feature fusion with implicit volume rendering regularization. The key insight
is that volume rendering in the feature space can proficiently bridge the gap
between 3D LiDAR sweeps and 2D images while serving as a physical
regularization to enhance LiDAR-camera fused volumetric representation.
Specifically, we first propose a Geometric- and Semantic-aware Fusion
(GSFusion) module to explicitly enhance LiDAR features by incorporating
neighboring camera features through a K-nearest neighbors (KNN) search. Then,
we employ volume rendering to project the fused feature back to the image
planes for reconstructing color and depth maps. These maps are then supervised
by input images from the camera and depth estimations derived from LiDAR,
respectively. Extensive experiments on the popular nuScenes and SemanticKITTI
benchmarks verify the effectiveness of our Co-Occ for 3D semantic occupancy
prediction. The project page is available at
https://rorisis.github.io/Co-Occ_project-page/.
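A simplified sketch of the KNN gather at the heart of the GSFusion step as described: each LiDAR point pulls in the features of its nearest camera-feature points. The paper fuses learned features inside a network; names and shapes here are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def gs_fusion_knn(lidar_xyz, lidar_feat, cam_xyz, cam_feat, k=3):
    """For each LiDAR point, average the features of its k nearest
    camera-feature points and concatenate them with the LiDAR features."""
    tree = cKDTree(cam_xyz)
    _, idx = tree.query(lidar_xyz, k=k)     # (N, k) neighbour indices
    neigh = cam_feat[idx].mean(axis=1)      # (N, C) averaged camera features
    return np.concatenate([lidar_feat, neigh], axis=1)

rng = np.random.default_rng(0)
fused = gs_fusion_knn(rng.normal(size=(1000, 3)), rng.normal(size=(1000, 16)),
                      rng.normal(size=(5000, 3)), rng.normal(size=(5000, 16)))
```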
|
[
{
"created": "Sat, 6 Apr 2024 09:01:19 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Apr 2024 12:50:16 GMT",
"version": "v2"
},
{
"created": "Wed, 22 May 2024 03:43:29 GMT",
"version": "v3"
}
] |
2024-05-24
|
[
[
"Pan",
"Jingyi",
""
],
[
"Wang",
"Zipeng",
""
],
[
"Wang",
"Lin",
""
]
] |
2404.04578
|
Roy Rudolf Huizen
|
Florentina Tatrin Kurniati, Daniel HF Manongga, Eko Sediyono, Sri
Yulianto Joko Prasetyo, Roy Rudolf Huizen
|
GLCM-Based Feature Combination for Extraction Model Optimization in
Object Detection Using Machine Learning
| null |
JITEKI, December 2023,
http://journal.uad.ac.id/index.php/JITEKI/article/view/27842
|
10.26555/jiteki.v9i4.27842
|
Vol. 9, No. 4, pp. 1196-1205
|
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In the era of modern technology, object detection using the Gray Level
Co-occurrence Matrix (GLCM) extraction method plays a crucial role in object
recognition processes. It finds applications in real-time scenarios such as
security surveillance and autonomous vehicle navigation, among others.
Computational efficiency becomes a critical factor in achieving real-time
object detection. Hence, there is a need for a detection model with low
complexity and satisfactory accuracy. This research aims to enhance
computational efficiency by selecting appropriate features within the GLCM
framework. Two classification models, namely K-Nearest Neighbours (K-NN) and
Support Vector Machine (SVM), were employed, with the results indicating that
K-Nearest Neighbours (K-NN) outperforms SVM in terms of computational
complexity. Specifically, K-NN, when utilizing a combination of Correlation,
Energy, and Homogeneity features, achieves a 100% accuracy rate with low
complexity. Moreover, when using a combination of Energy and Homogeneity
features, K-NN attains an almost perfect accuracy level of 99.9889%, while
maintaining low complexity. On the other hand, despite SVM achieving 100%
accuracy in certain feature combinations, its high or very high complexity can
pose challenges, particularly in real-time applications. Therefore, based on
the trade-off between accuracy and complexity, the K-NN model with a
combination of Correlation, Energy, and Homogeneity features emerges as a more
suitable choice for real-time applications that demand high accuracy and low
complexity. This research provides valuable insights for optimizing object
detection in various applications requiring both high accuracy and rapid
responsiveness.
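A small sketch of the winning pipeline described above, using scikit-image's GLCM utilities and a K-NN classifier with the Correlation, Energy, and Homogeneity features; the random images below stand in for real detection patches.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(img, props=("correlation", "energy", "homogeneity")):
    """Extract the GLCM feature combination the study found best for K-NN
    (8-bit grayscale input assumed)."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
X = np.stack([glcm_features(im) for im in imgs])
y = rng.integers(0, 2, 40)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.score(X, y))
```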
|
[
{
"created": "Sat, 6 Apr 2024 10:16:33 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Kurniati",
"Florentina Tatrin",
""
],
[
"Manongga",
"Daniel HF",
""
],
[
"Sediyono",
"Eko",
""
],
[
"Prasetyo",
"Sri Yulianto Joko",
""
],
[
"Huizen",
"Roy Rudolf",
""
]
] |
2404.04608
|
Bo Yuan
|
Danpei Zhao, Bo Yuan, Ziqiang Chen, Tian Li, Zhuoran Liu, Wentao Li,
Yue Gao
|
Panoptic Perception: A Novel Task and Fine-grained Dataset for Universal
Remote Sensing Image Interpretation
| null |
IEEE Transactions on Geoscience and Remote Sensing, 2024
|
10.1109/TGRS.2024.3392778
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current remote-sensing interpretation models often focus on a single task
such as detection, segmentation, or captioning. However, such task-specific
models cannot achieve comprehensive, multi-level interpretation of images.
The field also lacks support for multi-task joint
interpretation datasets. In this paper, we propose Panoptic Perception, a novel
task and a new fine-grained dataset (FineGrip) to achieve a more thorough and
universal interpretation for RSIs. The new task, 1) integrates pixel-level,
instance-level, and image-level information for universal image perception, 2)
captures image information from coarse to fine granularity, achieving deeper
scene understanding and description, and 3) enables various independent tasks
to complement and enhance each other through multi-task learning. By
emphasizing multi-task interactions and the consistency of perception results,
this task enables the simultaneous processing of fine-grained foreground
instance segmentation, background semantic segmentation, and global
fine-grained image captioning. Concretely, the FineGrip dataset includes 2,649
remote sensing images, 12,054 fine-grained instance segmentation masks
belonging to 20 foreground things categories, 7,599 background semantic masks
for 5 stuff classes and 13,245 captioning sentences. Furthermore, we propose a
joint optimization-based panoptic perception model. Experimental results on
FineGrip demonstrate the feasibility of the panoptic perception task and the
beneficial effect of multi-task joint optimization on individual tasks. The
dataset will be publicly available.
|
[
{
"created": "Sat, 6 Apr 2024 12:27:21 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2024 01:07:26 GMT",
"version": "v2"
}
] |
2024-04-29
|
[
[
"Zhao",
"Danpei",
""
],
[
"Yuan",
"Bo",
""
],
[
"Chen",
"Ziqiang",
""
],
[
"Li",
"Tian",
""
],
[
"Liu",
"Zhuoran",
""
],
[
"Li",
"Wentao",
""
],
[
"Gao",
"Yue",
""
]
] |
2404.04693
|
Guoyang Zhao
|
Bonan Liu, Guoyang Zhao, Jianhao Jiao, Guang Cai, Chengyang Li, Handi
Yin, Yuyang Wang, Ming Liu and Pan Hui
|
OmniColor: A Global Camera Pose Optimization Approach of LiDAR-360Camera
Fusion for Colorizing Point Clouds
|
2024 IEEE International Conference on Robotics and Automation (ICRA)
|
2024 IEEE International Conference on Robotics and Automation
(ICRA)
|
10.1109/ICRA57147.2024.10610292
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A colored point cloud, as a simple and efficient 3D representation, has many
advantages in various fields, including robotic navigation and scene
reconstruction. This representation is now commonly used in 3D reconstruction
tasks relying on cameras and LiDARs. However, many existing frameworks fuse
data from these two types of sensors poorly, leading to
unsatisfactory mapping results, mainly due to inaccurate camera poses. This
paper presents OmniColor, a novel and efficient algorithm to colorize point
clouds using an independent 360-degree camera. Given a LiDAR-based point cloud
and a sequence of panorama images with initial coarse camera poses, our
objective is to jointly optimize the poses of all frames for mapping images
onto geometric reconstructions. Our pipeline works in an off-the-shelf manner
that does not require any feature extraction or matching process. Instead, we
find optimal poses by directly maximizing the photometric consistency of LiDAR
maps. In experiments, we show that our method can overcome the severe visual
distortion of omnidirectional images and greatly benefit from the wide field of
view (FOV) of 360-degree cameras to reconstruct various scenarios with accuracy
and stability. The code will be released at
https://github.com/liubonan123/OmniColor/.
|
[
{
"created": "Sat, 6 Apr 2024 17:41:36 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Sep 2024 13:53:33 GMT",
"version": "v2"
}
] |
2024-09-27
|
[
[
"Liu",
"Bonan",
""
],
[
"Zhao",
"Guoyang",
""
],
[
"Jiao",
"Jianhao",
""
],
[
"Cai",
"Guang",
""
],
[
"Li",
"Chengyang",
""
],
[
"Yin",
"Handi",
""
],
[
"Wang",
"Yuyang",
""
],
[
"Liu",
"Ming",
""
],
[
"Hui",
"Pan",
""
]
] |
2404.04824
|
Mahardhika Pratama Assoc Prof
|
Muhammad Tanzil Furqon, Mahardhika Pratama, Lin Liu, Habibullah,
Kutluyil Dogancay
|
Mixup Domain Adaptations for Dynamic Remaining Useful Life Predictions
|
accepted for publication in Knowledge-based Systems
|
Knowledge-based Systems, 2024
| null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Remaining Useful Life (RUL) predictions play a vital role in asset planning
and maintenance, bringing industries many benefits such as reduced downtime
and lower maintenance costs. Although various efforts have been devoted to
studying this topic, most existing works are restricted to i.i.d. conditions,
assuming the same conditions in the training and deployment phases. This
paper proposes a solution to this problem where a mix-up domain
adaptation (MDAN) is put forward. MDAN encompasses a three-staged mechanism
where the mix-up strategy is not only performed to regularize the source and
target domains but also applied to establish an intermediate mix-up domain
where the source and target domains are aligned. The self-supervised learning
strategy is implemented to prevent the supervision collapse problem. Rigorous
evaluations have been performed where MDAN is compared to recently published
works for dynamic RUL predictions. MDAN outperforms its counterparts with
substantial margins in 12 out of 12 cases. In addition, MDAN is evaluated with
the bearing machine dataset where it beats prior art with significant gaps in 8
of 12 cases. Source codes of MDAN are made publicly available in
\url{https://github.com/furqon3009/MDAN}.
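A minimal sketch of the mix-up step used to build the intermediate domain described above: convex combinations of source and target samples with Beta-distributed weights. Label handling and MDAN's self-supervised stage are omitted, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_domains(x_src, x_tgt, alpha=0.2):
    """Build an intermediate mix-up domain from paired source/target
    batches: x_mix = lam * x_src + (1 - lam) * x_tgt, lam ~ Beta(a, a)."""
    lam = rng.beta(alpha, alpha, size=(len(x_src), 1))
    return lam * x_src + (1.0 - lam) * x_tgt

x_src = rng.normal(loc=0.0, size=(128, 64))  # e.g. run-to-failure features
x_tgt = rng.normal(loc=1.0, size=(128, 64))  # shifted deployment condition
x_mix = mixup_domains(x_src, x_tgt)
```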
|
[
{
"created": "Sun, 7 Apr 2024 06:23:18 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Furqon",
"Muhammad Tanzil",
""
],
[
"Pratama",
"Mahardhika",
""
],
[
"Liu",
"Lin",
""
],
[
"Habibullah",
"",
""
],
[
"Dogancay",
"Kutluyil",
""
]
] |
2404.04869
|
Yiqun Duan
|
Yiqun Duan, Qiang Zhang, Renjing Xu
|
Prompting Multi-Modal Tokens to Enhance End-to-End Autonomous Driving
Imitation Learning with LLMs
| null |
Published as an oral presentation paper at the 2024 IEEE International
Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan
| null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The utilization of Large Language Models (LLMs) within the realm of
reinforcement learning, particularly as planners, has garnered a significant
degree of attention in recent scholarly literature. However, a substantial
proportion of existing research predominantly focuses on planning models for
robotics that transmute the outputs derived from perception models into
linguistic forms, thus adopting a `pure-language' strategy. In this research,
we propose a hybrid End-to-End learning framework for autonomous driving by
combining basic driving imitation learning with LLMs based on multi-modality
prompt tokens. Instead of simply converting perception results from the
separated train model into pure language input, our novelty lies in two
aspects. 1) The end-to-end integration of visual and LiDAR sensory input into
learnable multi-modality tokens, thereby intrinsically alleviating description
bias by separated pre-trained perception models. 2) Instead of directly letting
LLMs drive, this paper explores a hybrid setting of letting LLMs help the
driving model correct mistakes and complicated scenarios. The results of our
experiments suggest that the proposed methodology can attain driving scores of
49.21%, coupled with an impressive route completion rate of 91.34% in the
offline evaluation conducted via CARLA. These performance metrics are
comparable to the most advanced driving models.
|
[
{
"created": "Sun, 7 Apr 2024 08:31:12 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2024 11:43:31 GMT",
"version": "v2"
}
] |
2024-07-30
|
[
[
"Duan",
"Yiqun",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Xu",
"Renjing",
""
]
] |
2404.04983
|
Nora Ouzir
|
Aurélie Beaufrère, Nora Ouzir, Paul Emile Zafar, Astrid
Laurent-Bellue, Miguel Albuquerque, Gwladys Lubuela, Jules Grégory,
Catherine Guettier, Kévin Mondet, Jean-Christophe Pesquet, Valérie
Paradis
|
Primary liver cancer classification from routine tumour biopsy using
weakly supervised deep learning
|
https://www.sciencedirect.com/science/article/pii/S2589555924000090
|
JHEP Reports, Volume 6, Issue 3, 2024
|
10.1016/j.jhepr.2024.101008
| null |
q-bio.TO cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The diagnosis of primary liver cancers (PLCs) can be challenging, especially
on biopsies and for combined hepatocellular-cholangiocarcinoma (cHCC-CCA). We
automatically classified PLCs on routine-stained biopsies using a weakly
supervised learning method. Weak tumour/non-tumour annotations served as labels
for training a Resnet18 neural network, and the network's last convolutional
layer was used to extract new tumour tile features. Without knowledge of the
precise labels of the malignancies, we then applied an unsupervised clustering
algorithm. Our model identified specific features of hepatocellular carcinoma
(HCC) and intrahepatic cholangiocarcinoma (iCCA). Despite no specific features
of cHCC-CCA being recognized, the identification of HCC and iCCA tiles within a
slide could facilitate the diagnosis of primary liver cancers, particularly
cHCC-CCA.
Method and results: 166 PLC biopsies were divided into training, internal and
external validation sets: 90, 29 and 47 samples. Two liver pathologists
reviewed each whole-slide hematein eosin saffron (HES)-stained image (WSI).
After annotating the tumour/non-tumour areas, 256x256 pixel tiles were
extracted from the WSIs and used to train a ResNet18. The network was used to
extract new tile features. An unsupervised clustering algorithm was then
applied to the new tile features. In a two-cluster model, Clusters 0 and 1
contained mainly HCC and iCCA histological features. The diagnostic agreement
between the pathological diagnosis and the model predictions in the internal
and external validation sets was 100% (11/11) and 96% (25/26) for HCC and 78%
(7/9) and 87% (13/15) for iCCA, respectively. For cHCC-CCA, we observed a
highly variable proportion of tiles from each cluster (Cluster 0: 5-97%;
Cluster 1: 2-94%).
|
[
{
"created": "Sun, 7 Apr 2024 15:03:46 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Beaufrère",
"Aurélie",
""
],
[
"Ouzir",
"Nora",
""
],
[
"Zafar",
"Paul Emile",
""
],
[
"Laurent-Bellue",
"Astrid",
""
],
[
"Albuquerque",
"Miguel",
""
],
[
"Lubuela",
"Gwladys",
""
],
[
"Grégory",
"Jules",
""
],
[
"Guettier",
"Catherine",
""
],
[
"Mondet",
"Kévin",
""
],
[
"Pesquet",
"Jean-Christophe",
""
],
[
"Paradis",
"Valérie",
""
]
] |
2404.05073
|
Stefano Scanzio
|
Stefano Scanzio, Gianluca Cena, Adriano Valenzano
|
QRscript: Embedding a Programming Language in QR codes to support
Decision and Management
|
preprint, 8 pages
|
27th IEEE International Conference on Emerging Technologies and
Factory Automation (ETFA 2022)
|
10.1109/ETFA52439.2022.9921530
| null |
cs.NI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Embedding a programming language in a QR code is a new and extremely
promising opportunity, as it makes devices and objects smarter without
necessarily requiring an Internet connection. In this paper, all the steps
needed to translate a program written in a high-level programming language to
its binary representation encoded in a QR code, and the opposite process that,
starting from the QR code, executes it by means of a virtual machine, have been
carefully detailed. The proposed programming language was named QRscript, and
can be easily extended so as to integrate new features. One of the main design
goals was to produce a very compact target binary code. In particular, in this
work we propose a specific sub-language (a dialect) that is aimed at encoding
decision trees. Besides industrial scenarios, this is useful in many other
application fields. The reported example, related to the configuration of an
industrial networked device, highlights the potential of the proposed
technology, and makes all the translation steps easier to understand.
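A hedged sketch of the round trip described above, using the off-the-shelf qrcode library as a stand-in: a tiny rule of the decision-tree flavour is serialized as text and embedded in a QR code. QRscript's actual binary encoding is far more compact, and the program string here is invented.

```python
import qrcode

# Hypothetical QRscript-flavoured rule, kept as plain text for the sketch;
# the paper compiles such programs to a dense binary representation.
program = "IF temp>75 THEN SET fan=high ELSE SET fan=low"

img = qrcode.make(program)      # encode the program in a QR code
img.save("qrscript_demo.png")

# A reader device would scan the code, parse the rule back into a tree,
# and execute it on a small virtual machine, with no Internet connection.
```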
|
[
{
"created": "Sun, 7 Apr 2024 21:02:55 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Scanzio",
"Stefano",
""
],
[
"Cena",
"Gianluca",
""
],
[
"Valenzano",
"Adriano",
""
]
] |
2404.05107
|
Yujian Xiong
|
Yujian Xiong, Wenhui Zhu, Zhong-Lin Lu, Yalin Wang
|
Reconstructing Retinal Visual Images from 3T fMRI Data Enhanced by
Unsupervised Learning
|
Accepted by ISBI 2024
|
2024 IEEE International Symposium on Biomedical Imaging
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The reconstruction of human visual inputs from brain activity, particularly
through functional Magnetic Resonance Imaging (fMRI), holds promising avenues
for unraveling the mechanisms of the human visual system. Despite the
significant strides made by deep learning methods in improving the quality and
interpretability of visual reconstruction, there remains a substantial demand
for high-quality, long-duration, subject-specific 7-Tesla fMRI experiments. The
challenge arises in integrating diverse smaller 3-Tesla datasets or
accommodating new subjects with brief and low-quality fMRI scans. In response
to these constraints, we propose a novel framework that generates enhanced 3T
fMRI data through an unsupervised Generative Adversarial Network (GAN),
leveraging unpaired training across two distinct fMRI datasets in 7T and 3T,
respectively. This approach aims to overcome the scarcity of high-quality
7-Tesla data and the challenges associated with brief and
low-quality scans in 3-Tesla experiments. In this paper, we demonstrate the
reconstruction capabilities of the enhanced 3T fMRI data, highlighting its
proficiency in generating superior input visual images compared to
data-intensive methods trained and tested on a single subject.
|
[
{
"created": "Sun, 7 Apr 2024 23:31:37 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Xiong",
"Yujian",
""
],
[
"Zhu",
"Wenhui",
""
],
[
"Lu",
"Zhong-Lin",
""
],
[
"Wang",
"Yalin",
""
]
] |
2404.05143
|
Rohan Deepak Ajwani
|
Rohan Deepak Ajwani, Zining Zhu, Jonathan Rose, Frank Rudzicz
|
Plug and Play with Prompts: A Prompt Tuning Approach for Controlling
Text Generation
|
9 pages, 3 figures, Presented at Deployable AI Workshop at AAAI-2024
|
Presented at Deployable AI Workshop at AAAI-2024
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformer-based Large Language Models (LLMs) have shown exceptional
language generation capabilities in response to text-based prompts. However,
controlling the direction of generation via textual prompts has been
challenging, especially with smaller models. In this work, we explore the use
of Prompt Tuning to achieve controlled language generation. Generated text is
steered using prompt embeddings, which are trained using a small language
model, used as a discriminator. Moreover, we demonstrate that these prompt
embeddings can be trained with a very small dataset, with as few as a few
hundred training examples. Our method thus offers a data- and
parameter-efficient solution for controlling language model outputs. We carry out
extensive evaluation on four datasets: SST-5 and Yelp (sentiment analysis),
GYAFC (formality) and JIGSAW (toxic language). Finally, we demonstrate the
efficacy of our method towards mitigating harmful, toxic, and biased text
generated by language models.
|
[
{
"created": "Mon, 8 Apr 2024 01:54:28 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Ajwani",
"Rohan Deepak",
""
],
[
"Zhu",
"Zining",
""
],
[
"Rose",
"Jonathan",
""
],
[
"Rudzicz",
"Frank",
""
]
] |
2404.05341
|
Shoffan Saifullah
|
Shoffan Saifullah, Andri Pranolo, and Rafał Dreżewski
|
Comparative Analysis of Image Enhancement Techniques for Brain Tumor
Segmentation: Contrast, Histogram, and Hybrid Approaches
|
9 Pages, & Figures, 2 Tables, International Conference on Computer
Science Electronics and Information (ICCSEI 2023)
|
E3S Web Conf., Volume 501, 2024
|
10.1051/e3sconf/202450101020
| null |
eess.IV cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This study systematically investigates the impact of image enhancement
techniques on Convolutional Neural Network (CNN)-based Brain Tumor
Segmentation, focusing on Histogram Equalization (HE), Contrast Limited
Adaptive Histogram Equalization (CLAHE), and their hybrid variations. Employing
the U-Net architecture on a dataset of 3064 Brain MRI images, the research
delves into preprocessing steps, including resizing and enhancement, to
optimize segmentation accuracy. A detailed analysis of the CNN-based U-Net
architecture, training, and validation processes is provided. The comparative
analysis, utilizing metrics such as Accuracy, Loss, MSE, IoU, and DSC, reveals
that the hybrid approach CLAHE-HE consistently outperforms others. Results
highlight its superior accuracy (0.9982, 0.9939, 0.9936 for training, testing,
and validation, respectively) and robust segmentation overlap, with Jaccard
values of 0.9862, 0.9847, and 0.9864, and Dice values of 0.993, 0.9923, and
0.9932 for the same phases, emphasizing its potential in neuro-oncological
applications. The study concludes with a call for refinement in segmentation
methodologies to further enhance diagnostic precision and treatment planning in
neuro-oncology.
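A short OpenCV sketch of the hybrid preprocessing, assumed here to be CLAHE followed by global histogram equalization; the exact composition of the study's CLAHE-HE variant, and the file path below, are assumptions.

```python
import cv2

def clahe_he(img_gray):
    """Hybrid enhancement sketch: local CLAHE, then global HE."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    local = clahe.apply(img_gray)       # local contrast enhancement
    return cv2.equalizeHist(local)      # global histogram equalization

img = cv2.imread("brain_mri_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
enhanced = clahe_he(img)               # fed to the U-Net after resizing
```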
|
[
{
"created": "Mon, 8 Apr 2024 09:27:42 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Saifullah",
"Shoffan",
""
],
[
"Pranolo",
"Andri",
""
],
[
"Dreżewski",
"Rafał",
""
]
] |
2404.05447
|
Giulio Poggi
|
Gregory Sech, Giulio Poggi, Marina Ljubenovic, Marco Fiorucci, Arianna
Traviglia
|
Pansharpening of PRISMA products for archaeological prospection
| null |
IGARSS 2024 - 2024 IEEE International Geoscience and Remote
Sensing Symposium
|
10.1109/IGARSS53475.2024.10642261
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Hyperspectral data recorded from satellite platforms are often ill-suited for
geo-archaeological prospection due to low spatial resolution. The established
potential of hyperspectral data from airborne sensors in identifying
archaeological features has, on the other side, generated increased interest in
enhancing hyperspectral data to achieve higher spatial resolution. This
improvement is crucial for detecting traces linked to sub-surface
geo-archaeological features and can make satellite hyperspectral acquisitions
more suitable for archaeological research. This research assesses the usability
of pansharpened PRISMA satellite products in geo-archaeological prospections.
Three pan-sharpening methods (GSA, MTF-GLP and HySure) are compared
quantitatively and qualitatively and tested over the archaeological landscape
of Aquileia (Italy). The results suggest that the application of pansharpening
techniques makes hyperspectral satellite imagery highly suitable, under certain
conditions, to the identification of sub-surface archaeological features of
small and large size.
|
[
{
"created": "Mon, 8 Apr 2024 12:29:46 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2024 11:06:44 GMT",
"version": "v2"
}
] |
2024-09-23
|
[
[
"Sech",
"Gregory",
""
],
[
"Poggi",
"Giulio",
""
],
[
"Ljubenovic",
"Marina",
""
],
[
"Fiorucci",
"Marco",
""
],
[
"Traviglia",
"Arianna",
""
]
] |
2404.05458
|
EPTCS
|
Simon Tobias Lund (Technical University of Denmark), J{\o}rgen
Villadsen (Technical University of Denmark)
|
Teaching Higher-Order Logic Using Isabelle
|
In Proceedings ThEdu'23, arXiv:2404.03709
|
EPTCS 400, 2024, pp. 59-78
|
10.4204/EPTCS.400.5
| null |
cs.LO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present a formalization of higher-order logic in the Isabelle proof
assistant, building directly on the foundational framework Isabelle/Pure and
developed to be as small and readable as possible. It should therefore serve as
a good introduction for someone looking into learning about higher-order logic
and proof assistants, without having to study the much more complex
Isabelle/HOL with heavier automation. To showcase our development and approach
we explain a sample proof, describe the axioms and rules of our higher-order
logic, and discuss our experience with teaching the subject in a classroom
setting.
|
[
{
"created": "Mon, 8 Apr 2024 12:40:27 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Lund",
"Simon Tobias",
"",
"Technical University of Denmark"
],
[
"Villadsen",
"Jørgen",
"",
"Technical University of Denmark"
]
] |
2404.05512
|
Giulio Poggi
|
Raveerat Jaturapitpornchai, Giulio Poggi, Gregory Sech, Ziga Kokalj,
Marco Fiorucci, Arianna Traviglia
|
Impact of LiDAR visualisations on semantic segmentation of
archaeological objects
| null |
IGARSS 2024 - 2024 IEEE International Geoscience and Remote
Sensing Symposium
|
10.1109/IGARSS53475.2024.10641182
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning methods in LiDAR-based archaeological research often leverage
visualisation techniques derived from Digital Elevation Models to enhance
characteristics of archaeological objects present in the images. This paper
investigates the impact of visualisations on deep learning performance through
a comprehensive testing framework. The study involves the use of eight semantic
segmentation models to evaluate seven diverse visualisations across two study
areas, encompassing five archaeological classes. Experimental results reveal
that the choice of appropriate visualisations can influence performance by up
to 8%. Yet, pinpointing one visualisation that outperforms the others in
segmenting all archaeological classes proves challenging. The observed
performance variation, reaching up to 25% across different model
configurations, underscores the importance of thoughtfully selecting model
configurations and LiDAR visualisations for successfully segmenting
archaeological objects.
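For concreteness, a numpy sketch of one classic DEM-derived visualisation of the kind compared in the study, analytical hillshading; azimuth/aspect conventions vary between tools, and whether this exact variant is among the seven tested is not stated here.

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Analytical hillshading of a DEM from sun azimuth/altitude."""
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    gy, gx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0, 1)

dem = np.random.default_rng(0).normal(size=(128, 128)).cumsum(axis=0)
img = (hillshade(dem) * 255).astype(np.uint8)  # one input channel for the CNN
```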
|
[
{
"created": "Mon, 8 Apr 2024 13:35:14 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2024 11:05:49 GMT",
"version": "v2"
}
] |
2024-09-23
|
[
[
"Jaturapitpornchai",
"Raveerat",
""
],
[
"Poggi",
"Giulio",
""
],
[
"Sech",
"Gregory",
""
],
[
"Kokalj",
"Ziga",
""
],
[
"Fiorucci",
"Marco",
""
],
[
"Traviglia",
"Arianna",
""
]
] |
2404.05555
|
Seungyub Han
|
Seungyub Han, Yeongmo Kim, Taehyun Cho, Jungwoo Lee
|
On the Convergence of Continual Learning with Adaptive Methods
|
Proceedings of the Thirty-Ninth Conference on Uncertainty in
Artificial Intelligence (UAI 2023), see
https://proceedings.mlr.press/v216/han23a.html
|
PMLR 216:809-818, 2023
| null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
One of the objectives of continual learning is to prevent catastrophic
forgetting in learning multiple tasks sequentially, and the existing solutions
have been driven by the conceptualization of the plasticity-stability dilemma.
However, the convergence of continual learning for each sequential task is less
studied so far. In this paper, we provide a convergence analysis of
memory-based continual learning with stochastic gradient descent and empirical
evidence that training current tasks causes the cumulative degradation of
previous tasks. We propose an adaptive method for nonconvex continual learning
(NCCL), which adjusts step sizes of both previous and current tasks with the
gradients. The proposed method can achieve the same convergence rate as the SGD
method when the catastrophic forgetting term which we define in the paper is
suppressed at each iteration. Further, we demonstrate that the proposed
algorithm improves the performance of continual learning over existing methods
for several image classification tasks.
|
[
{
"created": "Mon, 8 Apr 2024 14:28:27 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Apr 2024 08:44:13 GMT",
"version": "v2"
}
] |
2024-04-16
|
[
[
"Han",
"Seungyub",
""
],
[
"Kim",
"Yeongmo",
""
],
[
"Cho",
"Taehyun",
""
],
[
"Lee",
"Jungwoo",
""
]
] |
2404.05623
|
Pietro Lesci
|
Pietro Lesci and Andreas Vlachos
|
AnchorAL: Computationally Efficient Active Learning for Large and
Imbalanced Datasets
|
Published at the NAACL 2024 Conference (main)
|
Proceedings of the 2024 Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language Technologies
(Volume 1: Long Papers) (2024)
|
10.18653/v1/2024.naacl-long.467
| null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Active learning for imbalanced classification tasks is challenging as the
minority classes naturally occur rarely. Gathering a large pool of unlabelled
data is thus essential to capture minority instances. Standard pool-based
active learning is computationally expensive on large pools and often reaches
low accuracy by overfitting the initial decision boundary, thus failing to
explore the input space and find minority instances. To address these issues we
propose AnchorAL. At each iteration, AnchorAL chooses class-specific instances
from the labelled set, or anchors, and retrieves the most similar unlabelled
instances from the pool. The resulting subpool is then used for active
learning. Using a small, fixed-sized subpool, AnchorAL allows scaling any active
learning strategy to large pools. By dynamically selecting different anchors at
each iteration it promotes class balance and prevents overfitting the initial
decision boundary, thus promoting the discovery of new clusters of minority
instances. In experiments across different classification tasks, active
learning strategies, and model architectures, AnchorAL (i) is faster, often
reducing runtime from hours to minutes, (ii) trains more performant models,
and (iii) returns more balanced datasets than competing methods.
|
[
{
"created": "Mon, 8 Apr 2024 15:53:46 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2024 19:46:14 GMT",
"version": "v2"
}
] |
2024-10-17
|
[
[
"Lesci",
"Pietro",
""
],
[
"Vlachos",
"Andreas",
""
]
] |
2404.05667
|
Jiannan Ge
|
Jiannan Ge, Lingxi Xie, Hongtao Xie, Pandeng Li, Xiaopeng Zhang,
Yongdong Zhang, Qi Tian
|
AlignZeg: Mitigating Objective Misalignment for Zero-shot Semantic
Segmentation
| null |
ECCV 2024
|
10.1007/978-3-031-72775-7_9
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A serious issue that harms the performance of zero-shot visual recognition is
objective misalignment, i.e., the learning objective prioritizes
improving the recognition accuracy of seen classes rather than unseen classes,
while the latter is the true target to pursue. This issue becomes more
significant in zero-shot image segmentation because the stronger (i.e.,
pixel-level) supervision brings a larger gap between seen and unseen classes.
To mitigate it, we propose a novel architecture named AlignZeg, which embodies
a comprehensive improvement of the segmentation pipeline, including proposal
extraction, classification, and correction, to better fit the goal of zero-shot
segmentation. (1) Mutually-Refined Proposal Extraction. AlignZeg harnesses a
mutual interaction between mask queries and visual features, facilitating
detailed class-agnostic mask proposal extraction. (2) Generalization-Enhanced
Proposal Classification. AlignZeg introduces synthetic data and incorporates
multiple background prototypes to allocate a more generalizable feature space.
(3) Predictive Bias Correction. During the inference stage, AlignZeg uses a
class indicator to find potential unseen class proposals followed by a
prediction postprocess to correct the prediction bias. Experiments demonstrate
that AlignZeg markedly enhances zero-shot semantic segmentation, as shown by an
average 3.8% increase in hIoU, primarily attributed to a 7.1% improvement in
identifying unseen classes, and we further validate that the improvement comes
from alleviating the objective misalignment issue.
|
[
{
"created": "Mon, 8 Apr 2024 16:51:33 GMT",
"version": "v1"
}
] |
2024-10-14
|
[
[
"Ge",
"Jiannan",
""
],
[
"Xie",
"Lingxi",
""
],
[
"Xie",
"Hongtao",
""
],
[
"Li",
"Pandeng",
""
],
[
"Zhang",
"Xiaopeng",
""
],
[
"Zhang",
"Yongdong",
""
],
[
"Tian",
"Qi",
""
]
] |
2404.05695
|
Yen-Jen Wang
|
Xinyang Gu, Yen-Jen Wang, Jianyu Chen
|
Humanoid-Gym: Reinforcement Learning for Humanoid Robot with Zero-Shot
Sim2Real Transfer
| null |
ICRA 2024 Workshop on Agile Robotics
| null | null |
cs.RO cs.AI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Humanoid-Gym is an easy-to-use reinforcement learning (RL) framework based on
Nvidia Isaac Gym, designed to train locomotion skills for humanoid robots,
emphasizing zero-shot transfer from simulation to the real-world environment.
Humanoid-Gym also integrates a sim-to-sim framework from Isaac Gym to Mujoco
that allows users to verify the trained policies in different physical
simulations to ensure the robustness and generalization of the policies. This
framework is verified by RobotEra's XBot-S (1.2-meter tall humanoid robot) and
XBot-L (1.65-meter tall humanoid robot) in a real-world environment with
zero-shot sim-to-real transfer. The project website and source code can be
found at: https://sites.google.com/view/humanoid-gym/.
|
[
{
"created": "Mon, 8 Apr 2024 17:26:28 GMT",
"version": "v1"
},
{
"created": "Sat, 18 May 2024 10:00:30 GMT",
"version": "v2"
}
] |
2024-05-21
|
[
[
"Gu",
"Xinyang",
""
],
[
"Wang",
"Yen-Jen",
""
],
[
"Chen",
"Jianyu",
""
]
] |
2404.05735
|
Giorgio Nordo
|
Giorgio Nordo, Saeid Jafari, Arif Mehmood, Bhimraj Basumatary
|
A Python Framework for Neutrosophic Sets and Mappings
|
38 pages
|
Neutrosophic Sets and Systems 65, 2024
| null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present an open source framework developed in Python and
consisting of three distinct classes designed to manipulate in a simple and
intuitive way both symbolic representations of neutrosophic sets over
universes of various types and mappings between them. The capabilities offered by
this framework extend and generalize previous attempts to provide software
solutions to the manipulation of neutrosophic sets such as those proposed by
Salama et al., Saranya et al., El-Ghareeb, Topal et al. and Sleem. The code is
described in detail and many examples and use cases are also provided.
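
For readers unfamiliar with the structure being manipulated, a toy sketch of a
single-valued neutrosophic set follows. This is an assumption-laden
illustration, not the API of the framework described above: each element
carries a (truth, indeterminacy, falsity) triple in [0, 1], and union follows
one common max/max/min convention that the actual library may generalize.

    class SimpleNeutrosophicSet:
        def __init__(self, memberships):
            # memberships: dict mapping element -> (T, I, F) triple in [0, 1]
            self.m = dict(memberships)

        def union(self, other):
            keys = set(self.m) | set(other.m)
            out = {}
            for k in keys:
                # Missing elements default to full falsity (0, 0, 1).
                t1, i1, f1 = self.m.get(k, (0.0, 0.0, 1.0))
                t2, i2, f2 = other.m.get(k, (0.0, 0.0, 1.0))
                out[k] = (max(t1, t2), max(i1, i2), min(f1, f2))
            return SimpleNeutrosophicSet(out)

    a = SimpleNeutrosophicSet({"x": (0.7, 0.2, 0.1)})
    b = SimpleNeutrosophicSet({"x": (0.4, 0.5, 0.3), "y": (0.9, 0.1, 0.0)})
    print(a.union(b).m)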
|
[
{
"created": "Sun, 24 Mar 2024 16:00:16 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Nordo",
"Giorgio",
""
],
[
"Jafari",
"Saeid",
""
],
[
"Mehmood",
"Arif",
""
],
[
"Basumatary",
"Bhimraj",
""
]
] |
2404.05908
|
Guilherme Seidyo Imai Aldeia
|
Guilherme Seidyo Imai Aldeia and Fabricio Olivetti de Franca (Federal
University of ABC)
|
Interpretability in Symbolic Regression: a benchmark of Explanatory
Methods using the Feynman data set
|
47 pages, 10 figures. This is a post peer-review, pre-copyedit
version of an article published in Genetic Programming and Evolvable Machines
Volume 23, pages 309-349, (2022). The final version is available on
https://link.springer.com/article/10.1007/s10710-022-09435-x
|
Aldeia, G.S.I., de Franca, F.O. Interpretability in symbolic
regression: a benchmark of explanatory methods using the Feynman data set.
Genet Program Evolvable Mach 23, 309-349 (2022)
|
10.1007/s10710-022-09435-x
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In some situations, the interpretability of machine learning models plays
a role as important as the model accuracy. Interpretability comes from the need
to trust the prediction model, verify some of its properties, or even enforce
them to improve fairness. Many model-agnostic explanatory methods exist to
provide explanations for black-box models. In the regression task, the
practitioner can use white-box or gray-box models to achieve more
interpretable results, which is the case of symbolic regression. When using an
explanatory method, and since interpretability lacks a rigorous definition,
there is a need to evaluate and compare the quality of different explainers.
This paper proposes a benchmark scheme to evaluate explanatory methods to
explain regression models, mainly symbolic regression models. Experiments were
performed using 100 physics equations with different interpretable and
non-interpretable regression methods and popular explanation methods,
evaluating the explainers' performance with several
explanation measures. In addition, we further analyzed four benchmarks from the
GP community. The results have shown that Symbolic Regression models can be an
interesting alternative to white-box and black-box models that is capable of
returning accurate models with appropriate explanations. Regarding the
explainers, we observed that Partial Effects and SHAP were the most robust
explanation models, with Integrated Gradients being unstable only with
tree-based models. This benchmark is publicly available for further
experiments.
|
[
{
"created": "Mon, 8 Apr 2024 23:46:59 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Aldeia",
"Guilherme Seidyo Imai",
"",
"Federal\n University of ABC"
],
[
"de Franca",
"Fabricio Olivetti",
"",
"Federal\n University of ABC"
]
] |
2404.06012
|
Kai Luan
|
Kai Luan and Chenghao Shi and Neng Wang and Yuwei Cheng and Huimin Lu
and Xieyuanli Chen
|
Diffusion-Based Point Cloud Super-Resolution for mmWave Radar Data
| null |
Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA),
2024
| null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The millimeter-wave radar sensor maintains stable performance under adverse
environmental conditions, making it a promising solution for all-weather
perception tasks, such as outdoor mobile robotics. However, the radar point
clouds are relatively sparse and contain massive ghost points, which greatly
limits the development of mmWave radar technology. In this paper, we propose a
novel point cloud super-resolution approach for 3D mmWave radar data, named
Radar-diffusion. Our approach employs the diffusion model defined by
mean-reverting stochastic differential equations (SDEs). Using our proposed new
objective function with supervision from corresponding LiDAR point clouds, our
approach efficiently handles radar ghost points and enhances the sparse mmWave
radar point clouds to dense LiDAR-like point clouds. We evaluate our approach
on two different datasets, and the experimental results show that our method
outperforms the state-of-the-art baseline methods in 3D radar super-resolution
tasks. Furthermore, we demonstrate that our enhanced radar point cloud is
capable of downstream radar point-based registration tasks.
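
As background (not the paper's trained model), a mean-reverting SDE of the
Ornstein-Uhlenbeck family, dx = theta*(mu - x) dt + sigma dW, can be simulated
with Euler-Maruyama in a few lines; all parameter values below are
illustrative:

    import numpy as np

    def simulate_ou(x0=2.0, mu=0.0, theta=1.0, sigma=0.5,
                    dt=0.01, steps=1000, seed=0):
        # Euler-Maruyama: deterministic pull toward mu plus scaled noise.
        rng = np.random.default_rng(seed)
        x, path = x0, [x0]
        for _ in range(steps):
            x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            path.append(x)
        return np.array(path)

    path = simulate_ou()
    print(path[0], path[-1])  # drifts from 2.0 toward the mean 0.0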
|
[
{
"created": "Tue, 9 Apr 2024 04:41:05 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Luan",
"Kai",
""
],
[
"Shi",
"Chenghao",
""
],
[
"Wang",
"Neng",
""
],
[
"Cheng",
"Yuwei",
""
],
[
"Lu",
"Huimin",
""
],
[
"Chen",
"Xieyuanli",
""
]
] |
2404.06033
|
Du Zhiying
|
Pan Mu, Zhiying Du, Jinyuan Liu, Cong Bai
|
Little Strokes Fell Great Oaks: Boosting the Hierarchical Features for
Multi-exposure Image Fusion
| null |
Proceedings of the 31st ACM International Conference on
Multimedia, October 2023, Pages 2985-2993
|
10.1145/3581783.3612561
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, deep learning networks have made remarkable strides in the
domain of multi-exposure image fusion. Nonetheless, prevailing approaches often
involve directly feeding over-exposed and under-exposed images into the
network, which leads to the under-utilization of inherent information present
in the source images. Additionally, unsupervised techniques predominantly
employ rudimentary weighted summation for color channel processing, culminating
in an overall desaturated final image tone. To partially mitigate these issues,
this study proposes a gamma correction module specifically designed to fully
leverage latent information embedded within source images. Furthermore, a
modified transformer block, equipped with self-attention mechanisms, is
introduced to optimize the fusion process. Ultimately, a novel color
enhancement algorithm is presented to augment image saturation while preserving
intricate details. The source code is available at
https://github.com/ZhiyingDu/BHFMEF.
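
As a minimal sketch of the fixed gamma-correction operation the proposed
learned module builds on (this is only the classical baseline, not the
paper's module): out = in**gamma on normalized intensities, where gamma < 1
brightens under-exposed content and gamma > 1 darkens over-exposed content.

    import numpy as np

    def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
        # img: float array with intensities in [0, 1].
        return np.clip(img, 0.0, 1.0) ** gamma

    under_exposed = np.array([[0.05, 0.10], [0.15, 0.20]])
    print(gamma_correct(under_exposed, 0.5))  # brightened version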
|
[
{
"created": "Tue, 9 Apr 2024 05:44:00 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2024 12:55:49 GMT",
"version": "v2"
}
] |
2024-04-11
|
[
[
"Mu",
"Pan",
""
],
[
"Du",
"Zhiying",
""
],
[
"Liu",
"Jinyuan",
""
],
[
"Bai",
"Cong",
""
]
] |
2404.06170
|
Lakshmi Nair
|
Lakshmi Nair
|
CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using
Embeddings as Teachers
|
Short paper - 5 pages; 5 figures
|
Extended abstract: 28th IEEE High Performance Extreme Computing
Conference (HPEC) 2024 - Outstanding short paper award
| null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Contrastive Language-Image Pre-training (CLIP) has been shown to improve
zero-shot generalization capabilities of language and vision models. In this
paper, we extend CLIP for efficient knowledge distillation, by utilizing
embeddings as teachers. Typical knowledge distillation frameworks require
running forward passes through a teacher model, which is often prohibitive in
the case of billion or trillion parameter teachers. In these cases, using only
the embeddings of the teacher models to guide the distillation can yield
significant computational savings. Our preliminary findings show that
CLIP-based knowledge distillation with embeddings can outperform full scale
knowledge distillation using $9\times$ less memory and $8\times$ less training
time. Code available at: https://github.com/lnairGT/CLIP-Distillation/
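
A hedged sketch of the core idea, distilling from precomputed teacher
embeddings rather than a live teacher forward pass; the paper's exact
CLIP-based objective may differ, and the cosine-distance loss and matching
feature dimensions below are assumptions:

    import torch
    import torch.nn.functional as F

    def embedding_distill_loss(student_feats, teacher_embeds):
        # Teacher embeddings are computed once and cached, so training only
        # ever runs the small student; a projection head could map student
        # features into the teacher space if dimensions differ.
        s = F.normalize(student_feats, dim=-1)
        t = F.normalize(teacher_embeds, dim=-1)
        return (1 - (s * t).sum(dim=-1)).mean()  # cosine-distance loss

    student = torch.randn(8, 512, requires_grad=True)
    cached_teacher = torch.randn(8, 512)  # loaded from disk in practice
    loss = embedding_distill_loss(student, cached_teacher)
    loss.backward()
    print(float(loss))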
|
[
{
"created": "Tue, 9 Apr 2024 09:49:57 GMT",
"version": "v1"
}
] |
2024-09-02
|
[
[
"Nair",
"Lakshmi",
""
]
] |
2404.06219
|
Bach Ha
|
Bach Ha, Birgit Schalter, Laura White, Joachim Koehler
|
Automatic Defect Detection in Sewer Network Using Deep Learning Based
Object Detector
| null |
(2023) In Proceedings of the 3rd International Conference on Image
Processing and Vision Engineering - IMPROVE; ISBN 978-989-758-642-2; ISSN
2795-4943, SciTePress, pages 188-198
|
10.5220/0011986300003497
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Maintaining sewer systems in large cities is important, but also time- and
effort-consuming, because visual inspections are currently done manually. To
reduce the amount of aforementioned manual work, defects within sewer pipes
should be located and classified automatically. In the past, multiple works
have attempted solving this problem using classical image processing, machine
learning, or a combination of those. However, each provided solution only
focuses on detecting a limited set of defect/structure types, such as fissure,
root, and/or connection. Furthermore, due to the use of hand-crafted features
and small training datasets, generalization is also problematic. In order to
overcome these deficits, a sizable dataset with 14.7 km of various sewer pipes
was annotated by sewer maintenance experts in the scope of this work. On top
of that, an object detector (EfficientDet-D0) was trained for automatic defect
detection. From the results of several experiments, peculiar natures of defects
in the context of object detection, which greatly affect the annotation and
training process, are found and discussed. In the end, the final detector was
able to detect 83% of defects in the test set; out of the missing 17%, only
0.77% are very severe defects. This work provides an example of applying deep
learning-based object detection into an important but quiet engineering field.
It also gives some practical pointers on how to annotate peculiar "object",
such as defects.
|
[
{
"created": "Tue, 9 Apr 2024 11:13:36 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Ha",
"Bach",
""
],
[
"Schalter",
"Birgit",
""
],
[
"White",
"Laura",
""
],
[
"Koehler",
"Joachim",
""
]
] |
2404.06279
|
Ehsan Pajouheshgar
|
Ehsan Pajouheshgar, Yitao Xu, Sabine S\"usstrunk
|
NoiseNCA: Noisy Seed Improves Spatio-Temporal Continuity of Neural
Cellular Automata
|
9 pages, 12 figures
|
Artificial Life (ALife) 2024
| null | null |
cs.CV cs.AI cs.GR cs.MA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Neural Cellular Automata (NCA) is a class of Cellular Automata where the
update rule is parameterized by a neural network that can be trained using
gradient descent. In this paper, we focus on NCA models used for texture
synthesis, where the update rule is inspired by partial differential equations
(PDEs) describing reaction-diffusion systems. To train the NCA model, the
spatio-temporal domain is discretized, and Euler integration is used to
numerically simulate the PDE. However, whether a trained NCA truly learns the
continuous dynamic described by the corresponding PDE or merely overfits the
discretization used in training remains an open question. We study NCA models
at the limit where space-time discretization approaches continuity. We find
that existing NCA models tend to overfit the training discretization,
especially in the proximity of the initial condition, also called "seed". To
address this, we propose a solution that utilizes uniform noise as the initial
condition. We demonstrate the effectiveness of our approach in preserving the
consistency of NCA dynamics across a wide range of spatio-temporal
granularities. Our improved NCA model enables two new test-time interactions by
allowing continuous control over the speed of pattern formation and the scale
of the synthesized patterns. We demonstrate this new NCA feature in our
interactive online demo. Our work reveals that NCA models can learn continuous
dynamics and opens new venues for NCA research from a dynamical system's
perspective.
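
A conceptual sketch only: an NCA-style Euler update x <- x + dt * f(x), seeded
from uniform noise instead of a fixed constant seed, as proposed above. The
real update rule f is a trained neural network; the stand-in diffusion term
here just keeps the sketch self-contained and runnable.

    import numpy as np

    def laplacian(x):
        # Discrete 2D Laplacian with periodic boundaries.
        return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=(64, 64))  # noisy seed, not a point seed
    dt = 0.5                                  # the discretization an NCA
    for _ in range(100):                      # should not overfit to
        x = x + dt * 0.1 * laplacian(x)
    print(x.mean(), x.std())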
|
[
{
"created": "Tue, 9 Apr 2024 13:02:33 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 14:15:27 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jun 2024 11:48:51 GMT",
"version": "v3"
}
] |
2024-06-17
|
[
[
"Pajouheshgar",
"Ehsan",
""
],
[
"Xu",
"Yitao",
""
],
[
"Süsstrunk",
"Sabine",
""
]
] |
2404.06337
|
Axel Barroso Laguna
|
Axel Barroso-Laguna, Sowmya Munukutla, Victor Adrian Prisacariu, Eric
Brachmann
|
Matching 2D Images in 3D: Metric Relative Pose from Metric
Correspondences
| null |
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2024
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Given two images, we can estimate the relative camera pose between them by
establishing image-to-image correspondences. Usually, correspondences are
2D-to-2D and the pose we estimate is defined only up to scale. Some
applications, aiming at instant augmented reality anywhere, require
scale-metric pose estimates, and hence, they rely on external depth estimators
to recover the scale. We present MicKey, a keypoint matching pipeline that is
able to predict metric correspondences in 3D camera space. By learning to match
3D coordinates across images, we are able to infer the metric relative pose
without depth measurements. Depth measurements are also not required for
training, nor are scene reconstructions or image overlap information. MicKey is
supervised only by pairs of images and their relative poses. MicKey achieves
state-of-the-art performance on the Map-Free Relocalisation benchmark while
requiring less supervision than competing approaches.
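
A hedged sketch of the final geometric step such a pipeline enables: once
metric 3D-3D correspondences are available, the relative pose (R, t) follows
from the classical Kabsch algorithm. MicKey itself additionally relies on
learned matching and robust estimation, which this fragment omits.

    import numpy as np

    def kabsch(P, Q):
        # P, Q: (N, 3) matched 3D points in two camera frames; returns R, t
        # with Q ~= R @ P + t.
        cp, cq = P.mean(0), Q.mean(0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0, 1.0, d])      # guard against reflections
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t

    rng = np.random.default_rng(1)
    P = rng.normal(size=(50, 3))
    t_true = np.array([0.2, -0.1, 1.5])  # a metric (scaled) translation
    Q = P + t_true
    R, t = kabsch(P, Q)
    print(np.allclose(t, t_true, atol=1e-8))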
|
[
{
"created": "Tue, 9 Apr 2024 14:22:50 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Barroso-Laguna",
"Axel",
""
],
[
"Munukutla",
"Sowmya",
""
],
[
"Prisacariu",
"Victor Adrian",
""
],
[
"Brachmann",
"Eric",
""
]
] |
2404.06389
|
Nuno Fachada
|
Afonso Oliveira, Nuno Fachada, Jo\~ao P. Matos-Carvalho
|
Raster Forge: Interactive Raster Manipulation Library and GUI for Python
| null |
Software Impacts, 20, 100657, 2024
|
10.1016/j.simpa.2024.100657
| null |
eess.IV cs.CV cs.CY cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
Raster Forge is a Python library and graphical user interface for raster data
manipulation and analysis. The tool is focused on remote sensing applications,
particularly in wildfire management. It allows users to import, visualize, and
process raster layers for tasks such as image compositing or topographical
analysis. For wildfire management, it generates fuel maps using predefined
models. Its impact extends from disaster management to hydrological modeling,
agriculture, and environmental monitoring. Raster Forge can be a valuable asset
for geoscientists and researchers who rely on raster data analysis, enhancing
geospatial data processing and visualization across various disciplines.
|
[
{
"created": "Tue, 9 Apr 2024 15:31:48 GMT",
"version": "v1"
},
{
"created": "Sun, 19 May 2024 16:52:01 GMT",
"version": "v2"
}
] |
2024-05-21
|
[
[
"Oliveira",
"Afonso",
""
],
[
"Fachada",
"Nuno",
""
],
[
"Matos-Carvalho",
"João P.",
""
]
] |
2404.06455
|
Weronika Hryniewska-Guzik
|
Weronika Hryniewska-Guzik, Jakub Bilski, Bartosz Chrostowski, Jakub
Drak Sbahi, Przemys{\l}aw Biecek
|
A comparative analysis of deep learning models for lung segmentation on
X-ray images
|
published at the Polish Conference on Artificial Intelligence
(PP-RAI), 2024
|
Progress in Polish Artificial Intelligence Research 5 (2024) 65-72
|
10.17388/WUT.2024.0002.MiNI
| null |
eess.IV cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust and highly accurate lung segmentation in X-rays is crucial in medical
imaging. This study evaluates deep learning solutions for this task, ranking
existing methods and analyzing their performance under diverse image
modifications. Out of 61 analyzed papers, only nine offered implementation or
pre-trained models, enabling assessment of three prominent methods: Lung VAE,
TransResUNet, and CE-Net. The analysis revealed that CE-Net performs best,
demonstrating the highest values in the Dice similarity coefficient and the
intersection-over-union metric.
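
For concreteness, the two reported metrics on binary masks can be computed as
follows (a standard-definition sketch, with a small epsilon assumed for
numerical safety):

    import numpy as np

    def dice(pred: np.ndarray, gt: np.ndarray) -> float:
        # Dice = 2|A n B| / (|A| + |B|)
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

    def iou(pred: np.ndarray, gt: np.ndarray) -> float:
        # IoU = |A n B| / |A u B|
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return inter / (union + 1e-8)

    pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
    gt   = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
    print(dice(pred, gt), iou(pred, gt))  # ~0.667, 0.5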
|
[
{
"created": "Tue, 9 Apr 2024 16:55:23 GMT",
"version": "v1"
}
] |
2024-09-09
|
[
[
"Hryniewska-Guzik",
"Weronika",
""
],
[
"Bilski",
"Jakub",
""
],
[
"Chrostowski",
"Bartosz",
""
],
[
"Sbahi",
"Jakub Drak",
""
],
[
"Biecek",
"Przemysław",
""
]
] |
2404.06657
|
Irving Rondon
|
Carlos Osorio Quero, Daniel Leykam, and Irving Rondon Ojeda
|
Res-U2Net: Untrained Deep Learning for Phase Retrieval and Image
Reconstruction
|
16 pages, 8 figures, 4 Tables
|
Journal of the Optical Society of America A, Vol. 41, Issue 5, pp.
766-773 (2024)
|
10.1364/JOSAA.511074
| null |
eess.IV cs.CV physics.app-ph physics.optics
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Conventional deep learning-based image reconstruction methods require a large
amount of training data which can be hard to obtain in practice. Untrained deep
learning methods overcome this limitation by training a network to invert a
physical model of the image formation process. Here we present a novel
untrained Res-U2Net model for phase retrieval. We use the extracted phase
information to determine changes in an object's surface and generate a mesh
representation of its 3D structure. We compare the performance of Res-U2Net
phase retrieval against UNet and U2Net using images from the GDXRAY dataset.
|
[
{
"created": "Tue, 9 Apr 2024 23:47:53 GMT",
"version": "v1"
}
] |
2024-07-09
|
[
[
"Quero",
"Carlos Osorio",
""
],
[
"Leykam",
"Daniel",
""
],
[
"Ojeda",
"Irving Rondon",
""
]
] |
2404.06842
|
Ziyang Chen
|
Ziyang Chen and Wei Long and He Yao and Yongjun Zhang and Bingshu Wang
and Yongbin Qin and Jia Wu
|
MoCha-Stereo: Motif Channel Attention Network for Stereo Matching
|
Accepted to CVPR 2024
|
The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2024
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning-based stereo matching techniques have made significant progress.
However, existing methods inevitably lose geometrical structure information
during the feature channel generation process, resulting in edge detail
mismatches. In this paper, the Motif Channel Attention Stereo Matching Network
(MoCha-Stereo) is designed to address this problem. We provide the Motif
Channel Correlation Volume (MCCV) to determine more accurate edge matching
costs. MCCV is achieved by projecting motif channels, which capture common
geometric structures in feature channels, onto feature maps and cost volumes.
In addition, since edge variations in potential feature channels of the
reconstruction error map also affect detail matching, we propose the
Reconstruction Error Motif Penalty (REMP) module to further refine the
full-resolution disparity estimation. REMP integrates the frequency information
of typical channel features from the reconstruction error. MoCha-Stereo ranks
1st on the KITTI-2015 and KITTI-2012 Reflective leaderboards. Our structure
also shows excellent performance in Multi-View Stereo. Code is available at
https://github.com/ZYangChen/MoCha-Stereo.
|
[
{
"created": "Wed, 10 Apr 2024 09:14:28 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Apr 2024 15:28:36 GMT",
"version": "v2"
}
] |
2024-04-12
|
[
[
"Chen",
"Ziyang",
""
],
[
"Long",
"Wei",
""
],
[
"Yao",
"He",
""
],
[
"Zhang",
"Yongjun",
""
],
[
"Wang",
"Bingshu",
""
],
[
"Qin",
"Yongbin",
""
],
[
"Wu",
"Jia",
""
]
] |
2404.07103
|
Bowen Jin
|
Bowen Jin, Chulin Xie, Jiawei Zhang, Kashob Kumar Roy, Yu Zhang, Zheng
Li, Ruirui Li, Xianfeng Tang, Suhang Wang, Yu Meng, Jiawei Han
|
Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on
Graphs
|
21 pages. Code: https://github.com/PeterGriffinJin/Graph-CoT
|
ACL 2024
| null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs), while exhibiting exceptional performance,
suffer from hallucinations, especially on knowledge-intensive tasks. Existing
works propose to augment LLMs with individual text units retrieved from
external knowledge corpora to alleviate the issue. However, in many domains,
texts are interconnected (e.g., academic papers in a bibliographic graph are
linked by citations and co-authorships) which form a (text-attributed) graph.
The knowledge in such graphs is encoded not only in single texts/nodes but also
in their associated connections. To facilitate the research of augmenting LLMs
with graphs, we manually construct a Graph Reasoning Benchmark dataset called
GRBench, containing 1,740 questions that can be answered with the knowledge
from 10 domain graphs. Then, we propose a simple and effective framework called
Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging
LLMs to reason on the graph iteratively. Each Graph-CoT iteration consists of
three sub-steps: LLM reasoning, LLM-graph interaction, and graph execution. We
conduct systematic experiments with three LLM backbones on GRBench, where
Graph-CoT outperforms the baselines consistently. The code is available at
https://github.com/PeterGriffinJin/Graph-CoT.
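
A schematic sketch of the iteration loop described above. Everything here is
a stand-in: call_llm is a hypothetical stub for any LLM backend, and the graph
is a plain adjacency dict rather than one of the ten domain graphs. Each
iteration performs the three sub-steps: LLM reasoning, LLM-graph interaction,
and graph execution.

    def call_llm(prompt: str) -> str:
        # Hypothetical stub; a real system would query an LLM API here.
        return "FINISH" if "neighbors of B" in prompt else "NEIGHBORS B"

    def graph_cot(question, graph, max_iters=5):
        context = [f"Question: {question}"]
        for _ in range(max_iters):
            action = call_llm("\n".join(context))   # (1) LLM reasoning
            if action == "FINISH":
                return context
            op, node = action.split()               # (2) LLM-graph interaction
            if op == "NEIGHBORS":
                result = graph.get(node, [])        # (3) graph execution
                context.append(f"neighbors of {node}: {result}")
        return context

    g = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
    print(graph_cot("Who is linked to B?", g))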
|
[
{
"created": "Wed, 10 Apr 2024 15:41:53 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2024 23:36:18 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Oct 2024 13:55:08 GMT",
"version": "v3"
}
] |
2024-10-04
|
[
[
"Jin",
"Bowen",
""
],
[
"Xie",
"Chulin",
""
],
[
"Zhang",
"Jiawei",
""
],
[
"Roy",
"Kashob Kumar",
""
],
[
"Zhang",
"Yu",
""
],
[
"Li",
"Zheng",
""
],
[
"Li",
"Ruirui",
""
],
[
"Tang",
"Xianfeng",
""
],
[
"Wang",
"Suhang",
""
],
[
"Meng",
"Yu",
""
],
[
"Han",
"Jiawei",
""
]
] |
2404.07185
|
Zohre Karimi
|
Zohre Karimi, Shing-Hei Ho, Bao Thach, Alan Kuntz, Daniel S. Brown
|
Reward Learning from Suboptimal Demonstrations with Applications in
Surgical Electrocautery
|
In proceedings of the International Symposium on Medical Robotics
(ISMR) 2024. Equal contribution from two first authors
|
2024 International Symposium on Medical Robotics (ISMR), pp. 1-7,
2024
|
10.1109/ISMR63436.2024.10585785
| null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automating robotic surgery via learning from demonstration (LfD) techniques
is extremely challenging. This is because surgical tasks often involve
sequential decision-making processes with complex interactions of physical
objects and have low tolerance for mistakes. Prior works assume that all
demonstrations are fully observable and optimal, which might not be practical
in the real world. This paper introduces a sample-efficient method that learns
a robust reward function from a limited amount of ranked suboptimal
demonstrations consisting of partial-view point cloud observations. The method
then learns a policy by optimizing the learned reward function using
reinforcement learning (RL). We show that using a learned reward function to
obtain a policy is more robust than pure imitation learning. We apply our
approach on a physical surgical electrocautery task and demonstrate that our
method can perform well even when the provided demonstrations are suboptimal
and the observations are high-dimensional point clouds. Code and videos
available here: https://sites.google.com/view/lfdinelectrocautery
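
A sketch of one standard way to learn a reward from ranked demonstrations, a
Bradley-Terry (T-REX-style) pairwise objective. This names the general
technique, not the paper's exact loss, and the toy MLP on flat features below
ignores the partial-view point-cloud encoder the paper uses.

    import torch
    import torch.nn as nn

    reward_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

    def ranking_loss(traj_worse, traj_better):
        # Sum predicted per-state rewards over each trajectory, then ask the
        # better-ranked trajectory to win the softmax comparison.
        r_w = reward_net(traj_worse).sum()
        r_b = reward_net(traj_better).sum()
        return -torch.log_softmax(torch.stack([r_w, r_b]), dim=0)[1]

    worse = torch.randn(30, 16)    # states of a lower-ranked demonstration
    better = torch.randn(40, 16)   # states of a higher-ranked demonstration
    loss = ranking_loss(worse, better)
    loss.backward()
    opt.step()
    print(float(loss))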
|
[
{
"created": "Wed, 10 Apr 2024 17:40:27 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2024 00:23:03 GMT",
"version": "v2"
}
] |
2024-10-11
|
[
[
"Karimi",
"Zohre",
""
],
[
"Ho",
"Shing-Hei",
""
],
[
"Thach",
"Bao",
""
],
[
"Kuntz",
"Alan",
""
],
[
"Brown",
"Daniel S.",
""
]
] |
2404.07212
|
Raphael Achdou
|
Rapha\"el Achddou, Yann Gousseau, Sa\"id Ladjal
|
Hybrid Training of Denoising Networks to Improve the Texture Acutance of
Digital Cameras
| null |
Scale Space and Variational Methods in Computer Vision, May 2023,
Santa Margherita di Pula, Italy. pp.314-325
|
10.1007/978-3-031-31975-4_24
| null |
eess.IV cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to evaluate the capacity of a camera to render textures properly,
the standard practice, used by classical scoring protocols, is to compute the
frequency response to a dead leaves image target, from which a texture
acutance metric is built. In this work, we propose a mixed training procedure
for image restoration neural networks, relying on both natural and synthetic
images, that yields a strong improvement of this acutance metric without
impairing fidelity terms. The feasibility of the approach is demonstrated both
on the denoising of RGB images and the full development of RAW images, opening
the path to a systematic improvement of the texture acutance of real imaging
devices.
|
[
{
"created": "Tue, 20 Feb 2024 10:47:06 GMT",
"version": "v1"
}
] |
2024-04-19
|
[
[
"Achddou",
"Raphaël",
""
],
[
"Gousseau",
"Yann",
""
],
[
"Ladjal",
"Saïd",
""
]
] |
2404.07227
|
Michael Timothy Bennett
|
Michael Timothy Bennett
|
Is Complexity an Illusion?
|
Accepted for publication in the Proceedings of the 17th Conference on
Artificial General Intelligence, 2024. Definitions shared with
arXiv:2302.00843
|
Proceedings of the 17th International Conference on Artificial
General Intelligence. 2024. Lecture Notes in Computer Science, vol 14951.
Springer. pp. 11-21
|
10.1007/978-3-031-65572-2_2
| null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Simplicity is held by many to be the key to general intelligence. Simpler
models tend to "generalise", identifying the cause or generator of data with
greater sample efficiency. The implications of the correlation between
simplicity and generalisation extend far beyond computer science, addressing
questions of physics and even biology. Yet simplicity is a property of form,
while generalisation is of function. In interactive settings, any correlation
between the two depends on interpretation. In theory there could be no
correlation and yet in practice, there is. Previous theoretical work showed
generalisation to be a consequence of "weak" constraints implied by function,
not form. Experiments demonstrated choosing weak constraints over simple forms
yielded a 110-500% improvement in generalisation rate. Here we show that all
constraints can take equally simple forms, regardless of weakness. However if
forms are spatially extended, then function is represented using a finite
subset of forms. If function is represented using a finite subset of forms,
then we can force a correlation between simplicity and generalisation by making
weak constraints take simple forms. If function is determined by a goal
directed process that favours versatility (e.g. natural selection), then
efficiency demands weak constraints take simple forms. Complexity has no causal
influence on generalisation, but appears to, due to confounding.
|
[
{
"created": "Sun, 31 Mar 2024 13:36:55 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2024 09:08:35 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Apr 2024 10:44:36 GMT",
"version": "v3"
},
{
"created": "Thu, 30 May 2024 13:38:42 GMT",
"version": "v4"
}
] |
2024-07-19
|
[
[
"Bennett",
"Michael Timothy",
""
]
] |
2404.07673
|
Andr\'es Lou
|
Andr\'es Lou, Juan Antonio P\'erez-Ortiz, Felipe S\'anchez-Mart\'inez,
V\'ictor M. S\'anchez-Cartagena
|
Curated Datasets and Neural Models for Machine Translation of Informal
Registers between Mayan and Spanish Vernaculars
|
13 pages, 3 figures, 8 tables, Submitted to NAACL 2024
|
2024.naacl-long.156
| null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The Mayan languages comprise a language family with an ancient history,
millions of speakers, and immense cultural value, that, nevertheless, remains
severely underrepresented in terms of resources and global exposure. In this
paper we develop, curate, and publicly release a set of corpora in several
Mayan languages spoken in Guatemala and Southern Mexico, which we call MayanV.
The datasets are parallel with Spanish, the dominant language of the region,
and are taken from official native sources focused on representing informal,
day-to-day, and non-domain-specific language. As such, and according to our
dialectometric analysis, they differ in register from most other available
resources. Additionally, we present neural machine translation models, trained
on as many resources and Mayan languages as possible, and evaluated exclusively
on our datasets. We observe lexical divergences between the dialects of Spanish
in our resources and the more widespread written standard of Spanish, and that
resources other than the ones we present do not seem to improve translation
performance, indicating that many such resources may not accurately capture
common, real-life language usage. The MayanV dataset is available at
https://github.com/transducens/mayanv.
|
[
{
"created": "Thu, 11 Apr 2024 12:09:47 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Lou",
"Andrés",
""
],
[
"Pérez-Ortiz",
"Juan Antonio",
""
],
[
"Sánchez-Martínez",
"Felipe",
""
],
[
"Sánchez-Cartagena",
"Víctor M.",
""
]
] |
2404.07732
|
Michael Painter
|
Michael Painter, Mohamed Baioumy, Nick Hawes, Bruno Lacerda
|
Monte Carlo Tree Search with Boltzmann Exploration
|
Camera ready version of NeurIPS2023 paper
|
Advances in Neural Information Processing Systems 36 (2024)
| null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Monte-Carlo Tree Search (MCTS) methods, such as Upper Confidence Bound
applied to Trees (UCT), are instrumental to automated planning techniques.
However, UCT can be slow to explore an optimal action when it initially appears
inferior to other actions. Maximum ENtropy Tree-Search (MENTS) incorporates the
maximum entropy principle into an MCTS approach, utilising Boltzmann policies
to sample actions, naturally encouraging more exploration. In this paper, we
highlight a major limitation of MENTS: optimal actions for the maximum entropy
objective do not necessarily correspond to optimal actions for the original
objective. We introduce two algorithms, Boltzmann Tree Search (BTS) and
Decaying ENtropy Tree-Search (DENTS), that address these limitations and
preserve the benefits of Boltzmann policies, such as allowing actions to be
sampled faster by using the Alias method. Our empirical analysis shows that our
algorithms show consistent high performance across several benchmark domains,
including the game of Go.
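
For concreteness, Boltzmann (softmax) action sampling over Q-value estimates
looks as follows; the temperatures shown are illustrative, and a production
tree search would draw samples with the Alias method mentioned above rather
than np.random.choice.

    import numpy as np

    def boltzmann_sample(q_values, temperature, rng):
        z = np.asarray(q_values) / temperature
        z -= z.max()                      # numerical stability
        p = np.exp(z)
        p /= p.sum()
        return rng.choice(len(p), p=p), p

    rng = np.random.default_rng(0)
    q = [1.0, 1.2, 0.4]
    for temp in (10.0, 1.0, 0.05):        # decaying temperature -> greedier
        action, probs = boltzmann_sample(q, temp, rng)
        print(temp, np.round(probs, 3), action)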
|
[
{
"created": "Thu, 11 Apr 2024 13:25:35 GMT",
"version": "v1"
}
] |
2024-04-12
|
[
[
"Painter",
"Michael",
""
],
[
"Baioumy",
"Mohamed",
""
],
[
"Hawes",
"Nick",
""
],
[
"Lacerda",
"Bruno",
""
]
] |
2404.07754
|
Felix Biessmann
|
Tuong Vy Nguyen and Alexander Glaser and Felix Biessmann
|
Generating Synthetic Satellite Imagery With Deep-Learning Text-to-Image
Models -- Technical Challenges and Implications for Monitoring and
Verification
|
https://resources.inmm.org/annual-meeting-proceedings/generating-synthetic-satellite-imagery-deep-learning-text-image-models
|
Presented at the Annual Meeting of the Institute of Nuclear
Materials Management (INMM), Vienna, 2023
| null | null |
cs.CV cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Novel deep-learning (DL) architectures have reached a level where they can
generate digital media, including photorealistic images, that are difficult to
distinguish from real data. These technologies have already been used to
generate training data for Machine Learning (ML) models, and large
text-to-image models like DALL-E 2, Imagen, and Stable Diffusion are achieving
remarkable results in realistic high-resolution image generation. Given these
developments, issues of data authentication in monitoring and verification
deserve a careful and systematic analysis: How realistic are synthetic images?
How easily can they be generated? How useful are they for ML researchers, and
what is their potential for Open Science? In this work, we use novel DL models
to explore how synthetic satellite images can be created using conditioning
mechanisms. We investigate the challenges of synthetic satellite image
generation and evaluate the results based on authenticity and state-of-the-art
metrics. Furthermore, we investigate how synthetic data can alleviate the lack
of data in the context of ML methods for remote-sensing. Finally we discuss
implications of synthetic satellite imagery in the context of monitoring and
verification.
|
[
{
"created": "Thu, 11 Apr 2024 14:00:20 GMT",
"version": "v1"
}
] |
2024-04-12
|
[
[
"Nguyen",
"Tuong Vy",
""
],
[
"Glaser",
"Alexander",
""
],
[
"Biessmann",
"Felix",
""
]
] |
2404.07766
|
Kai Luo
|
Kai Luo, Yakun Ju, Lin Qi, Kaixuan Wang and Junyu Dong
|
RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric
Stereo Network
|
17 pages,12 figures
|
Photonics 2023,10(5),548
|
10.3390/photonics10050548
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting accurate normal maps of objects from two-dimensional images in
regions of complex structure and spatial material variations is challenging
using photometric stereo methods due to the influence of surface reflection
properties caused by variations in object geometry and surface materials. To
address this issue, we propose a photometric stereo network called RMAFF-PSN
that uses residual multiscale attentional feature fusion to handle the
``difficult'' regions of the object. Unlike previous approaches that only use
stacked convolutional layers to extract deep features from the input image, our
method integrates feature information from different resolution stages and
scales of the image. This approach preserves more physical information, such as
texture and geometry of the object in complex regions, through shallow-deep
stage feature extraction, double branching enhancement, and attention
optimization. To test the network structure under real-world conditions, we
propose a new real dataset called Simple PS data, which contains multiple
objects with varying structures and materials. Experimental results on a
publicly available benchmark dataset demonstrate that our method outperforms
most existing calibrated photometric stereo methods for the same number of
input images, especially in the case of highly non-convex object structures.
Our method also obtains good results under sparse lighting conditions.
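
For context, the classical calibrated Lambertian baseline that learning-based
networks like RMAFF-PSN improve on solves I = L n per pixel by least squares;
the light directions and surface normal below are illustrative values, not
data from the paper.

    import numpy as np

    L = np.array([[0.0, 0.0, 1.0],
                  [0.7, 0.0, 0.7],
                  [0.0, 0.7, 0.7]])          # calibrated light directions
    n_true = np.array([0.2, -0.1, 0.97])
    n_true /= np.linalg.norm(n_true)
    I = L @ n_true                            # observed pixel intensities

    # Recover the normal by least squares and renormalize.
    n_hat, *_ = np.linalg.lstsq(L, I, rcond=None)
    n_hat /= np.linalg.norm(n_hat)
    print(np.allclose(n_hat, n_true))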
|
[
{
"created": "Thu, 11 Apr 2024 14:05:37 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Apr 2024 13:14:54 GMT",
"version": "v2"
}
] |
2024-04-16
|
[
[
"Luo",
"Kai",
""
],
[
"Ju",
"Yakun",
""
],
[
"Qi",
"Lin",
""
],
[
"Wang",
"Kaixuan",
""
],
[
"Dong",
"Junyu",
""
]
] |
2404.07851
|
Dayeon Ki
|
Dayeon Ki, Marine Carpuat
|
Guiding Large Language Models to Post-Edit Machine Translation with
Error Annotations
|
21 pages, 8 figures
|
NAACL 2024 Findings
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Machine Translation (MT) remains one of the last NLP tasks where large
language models (LLMs) have not yet replaced dedicated supervised systems. This
work exploits the complementary strengths of LLMs and supervised MT by guiding
LLMs to automatically post-edit MT with external feedback on its quality,
derived from Multidimensional Quality Metric (MQM) annotations. Working with
LLaMA-2 models, we consider prompting strategies varying the nature of feedback
provided and then fine-tune the LLM to improve its ability to exploit the
provided guidance. Through experiments on Chinese-English, English-German, and
English-Russian MQM data, we demonstrate that prompting LLMs to post-edit MT
improves TER, BLEU and COMET scores, although the benefits of fine-grained
feedback are not clear. Fine-tuning helps integrate fine-grained feedback more
effectively and further improves translation quality based on both automatic
and human evaluation.
|
[
{
"created": "Thu, 11 Apr 2024 15:47:10 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Ki",
"Dayeon",
""
],
[
"Carpuat",
"Marine",
""
]
] |
2404.07960
|
Kaiqi Yang
|
Kaiqi Yang, Yucheng Chu, Taylor Darwin, Ahreum Han, Hang Li, Hongzhi
Wen, Yasemin Copur-Gencturk, Jiliang Tang, Hui Liu
|
Content Knowledge Identification with Multi-Agent Large Language Models
(LLMs)
| null |
AIED 2024. Lecture Notes in Computer Science(), vol 14830.
Springer, Cham
|
10.1007/978-3-031-64299-9_23
| null |
cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Teachers' mathematical content knowledge (CK) is of vital importance
in teacher professional development (PD) programs. Computer-aided asynchronous
PD systems are the most recent proposed PD techniques, which aim to help
teachers improve their PD equally with fewer concerns about costs and
limitations of time or location. However, current automatic CK identification
methods, which serve as one of the core techniques of asynchronous PD systems,
face challenges such as diversity of user responses, scarcity of high-quality
annotated data, and low interpretability of the predictions. To tackle these
challenges, we propose a Multi-Agent LLMs-based framework, LLMAgent-CK, to
assess the user responses' coverage of identified CK learning goals without
human annotations. By taking advantage of multi-agent LLMs in strong
generalization ability and human-like discussions, our proposed LLMAgent-CK
presents promising CK identifying performance on a real-world mathematical CK
dataset MaCKT. Moreover, our case studies further demonstrate the working of
the multi-agent framework.
|
[
{
"created": "Fri, 22 Mar 2024 02:37:33 GMT",
"version": "v1"
}
] |
2024-09-06
|
[
[
"Yang",
"Kaiqi",
""
],
[
"Chu",
"Yucheng",
""
],
[
"Darwin",
"Taylor",
""
],
[
"Han",
"Ahreum",
""
],
[
"Li",
"Hang",
""
],
[
"Wen",
"Hongzhi",
""
],
[
"Copur-Gencturk",
"Yasemin",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Liu",
"Hui",
""
]
] |
2404.08064
|
Soroosh Tayebi Arasteh
|
Soroosh Tayebi Arasteh, Tomas Arias-Vergara, Paula Andrea Perez-Toro,
Tobias Weise, Kai Packhaeuser, Maria Schuster, Elmar Noeth, Andreas Maier,
Seung Hee Yang
|
The Impact of Speech Anonymization on Pathology and Its Limits
|
Published in Communications Medicine
|
Commun Med 4, (2024)
|
10.1038/s43856-024-00609-5
| null |
eess.AS cs.AI cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Integration of speech into healthcare has intensified privacy concerns due to
its potential as a non-invasive biomarker containing individual biometric
information. In response, speaker anonymization aims to conceal personally
identifiable information while retaining crucial linguistic content. However,
the application of anonymization techniques to pathological speech, a critical
area where privacy is especially vital, has not been extensively examined. This
study investigates anonymization's impact on pathological speech across over
2,700 speakers from multiple German institutions, focusing on privacy,
pathological utility, and demographic fairness. We explore both
deep-learning-based and signal processing-based anonymization methods. We
document substantial privacy improvements across disorders-evidenced by equal
error rate increases up to 1933%, with minimal overall impact on utility.
Specific disorders such as Dysarthria, Dysphonia, and Cleft Lip and Palate
experience minimal utility changes, while Dysglossia shows slight improvements.
Our findings underscore that the impact of anonymization varies substantially
across different disorders. This necessitates disorder-specific anonymization
strategies to optimally balance privacy with diagnostic utility. Additionally,
our fairness analysis reveals consistent anonymization effects across most of
the demographics. This study demonstrates the effectiveness of anonymization in
pathological speech for enhancing privacy, while also highlighting the
importance of customized and disorder-specific approaches to account for
inversion attacks.
|
[
{
"created": "Thu, 11 Apr 2024 18:06:35 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Jun 2024 09:47:05 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Sep 2024 15:10:40 GMT",
"version": "v3"
},
{
"created": "Fri, 20 Sep 2024 13:23:49 GMT",
"version": "v4"
}
] |
2024-09-23
|
[
[
"Arasteh",
"Soroosh Tayebi",
""
],
[
"Arias-Vergara",
"Tomas",
""
],
[
"Perez-Toro",
"Paula Andrea",
""
],
[
"Weise",
"Tobias",
""
],
[
"Packhaeuser",
"Kai",
""
],
[
"Schuster",
"Maria",
""
],
[
"Noeth",
"Elmar",
""
],
[
"Maier",
"Andreas",
""
],
[
"Yang",
"Seung Hee",
""
]
] |
2404.08322
|
Yuqing Cheng
|
Yuqing Cheng, Bo Chen, Fanjin Zhang, Jie Tang
|
BOND: Bootstrapping From-Scratch Name Disambiguation with Multi-task
Promoting
|
TheWebConf 2024 (WWW '24)
|
Proceedings of TheWebConf 2024 (WWW '24), May 13--17, 2024,
Singapore
|
10.1145/3589334.3645580.
| null |
cs.SI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
From-scratch name disambiguation is an essential task for establishing a
reliable foundation for academic platforms. It involves partitioning documents
authored by identically named individuals into groups representing distinct
real-life experts. Canonically, the process is divided into two decoupled
tasks: locally estimating the pairwise similarities between documents followed
by globally grouping these documents into appropriate clusters. However, such a
decoupled approach often inhibits optimal information exchange between these
intertwined tasks. Therefore, we present BOND, which bootstraps the local and
global informative signals to promote each other in an end-to-end regime.
Specifically, BOND harnesses local pairwise similarities to drive global
clustering, subsequently generating pseudo-clustering labels. These global
signals further refine local pairwise characterizations. The experimental
results establish BOND's superiority, outperforming other advanced baselines by
a substantial margin. Moreover, an enhanced version, BOND+, incorporating
ensemble and post-match techniques, rivals the top methods in the WhoIsWho
competition.
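
A sketch of the decoupled baseline described above: a local pairwise signal
feeding a global agglomerative clustering step. BOND's actual contribution,
coupling the two end-to-end, is not captured by this fragment, and the toy
feature vectors stand in for learned document embeddings.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.metrics import pairwise_distances

    rng = np.random.default_rng(0)
    docs = np.vstack([rng.normal(0, 0.1, (5, 8)),    # papers by expert 1
                      rng.normal(3, 0.1, (4, 8))])   # same-name expert 2
    dist = pairwise_distances(docs)   # distances = inverted pairwise similarity
    labels = AgglomerativeClustering(
        n_clusters=2, metric="precomputed", linkage="average"
    ).fit_predict(dist)
    print(labels)   # first five docs in one cluster, last four in the other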
|
[
{
"created": "Fri, 12 Apr 2024 08:28:52 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Cheng",
"Yuqing",
""
],
[
"Chen",
"Bo",
""
],
[
"Zhang",
"Fanjin",
""
],
[
"Tang",
"Jie",
""
]
] |
2404.08351
|
Guillaume Astruc
|
Guillaume Astruc, Nicolas Gonthier, Clement Mallet, Loic Landrieu
|
OmniSat: Self-Supervised Modality Fusion for Earth Observation
| null |
ECCV 2024
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The diversity and complementarity of sensors available for Earth Observations
(EO) call for developing bespoke self-supervised multimodal learning
approaches. However, current multimodal EO datasets and models typically focus
on a single data type, either mono-date images or time series, which limits
their impact. To address this issue, we introduce OmniSat, a novel architecture
able to merge diverse EO modalities into expressive features without labels by
exploiting their alignment. To demonstrate the advantages of our approach, we
create two new multimodal datasets by augmenting existing ones with new
modalities. As demonstrated for three downstream tasks -- forestry, land cover
classification, and crop mapping -- OmniSat can learn rich representations
without supervision, leading to state-of-the-art performances in semi- and
fully supervised settings. Furthermore, our multimodal pretraining scheme
improves performance even when only one modality is available for inference.
The code and dataset are available at https://github.com/gastruc/OmniSat.
|
[
{
"created": "Fri, 12 Apr 2024 09:31:55 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jul 2024 16:45:46 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Jul 2024 08:16:14 GMT",
"version": "v3"
}
] |
2024-07-18
|
[
[
"Astruc",
"Guillaume",
""
],
[
"Gonthier",
"Nicolas",
""
],
[
"Mallet",
"Clement",
""
],
[
"Landrieu",
"Loic",
""
]
] |
2404.08353
|
Shiwei Lian
|
Shiwei Lian and Feitian Zhang
|
TDANet: Target-Directed Attention Network For Object-Goal Visual
Navigation With Zero-Shot Ability
| null |
IEEE Robotics and Automation Letters,2024
|
10.1109/LRA.2024.3440100
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The generalization of the end-to-end deep reinforcement learning (DRL) for
object-goal visual navigation is a long-standing challenge since object classes
and placements vary in new test environments. Learning domain-independent
visual representation is critical for enabling the trained DRL agent with the
ability to generalize to unseen scenes and objects. In this letter, a
target-directed attention network (TDANet) is proposed to learn the end-to-end
object-goal visual navigation policy with zero-shot ability. TDANet features a
novel target attention (TA) module that learns both the spatial and semantic
relationships among objects to help TDANet focus on the most relevant observed
objects to the target. With the Siamese architecture (SA) design, TDANet
distinguishes the difference between the current and target states and
generates the domain-independent visual representation. To evaluate the
navigation performance of TDANet, extensive experiments are conducted in the
AI2-THOR embodied AI environment. The simulation results demonstrate a strong
generalization ability of TDANet to unseen scenes and target objects, with
higher navigation success rate (SR) and success weighted by length (SPL) than
other state-of-the-art models. TDANet is finally deployed on a wheeled robot in
real scenes, demonstrating satisfactory generalization of TDANet to the real
world.
|
[
{
"created": "Fri, 12 Apr 2024 09:44:18 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Aug 2024 07:20:43 GMT",
"version": "v2"
}
] |
2024-08-13
|
[
[
"Lian",
"Shiwei",
""
],
[
"Zhang",
"Feitian",
""
]
] |
2404.08403
|
Rita Gonz\'alez-M\'arquez
|
Rita Gonz\'alez-M\'arquez and Dmitry Kobak
|
Learning representations of learning representations
| null |
DMLR workshop at ICLR 2024
| null | null |
cs.CL cs.DL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The ICLR conference is unique among the top machine learning conferences in
that all submitted papers are openly available. Here we present the ICLR
dataset consisting of abstracts of all 24 thousand ICLR submissions from
2017-2024 with meta-data, decision scores, and custom keyword-based labels. We
find that on this dataset, bag-of-words representation outperforms most
dedicated sentence transformer models in terms of $k$NN classification
accuracy, and the top performing language models barely outperform TF-IDF. We
see this as a challenge for the NLP community. Furthermore, we use the ICLR
dataset to study how the field of machine learning has changed over the last
seven years, finding some improvement in gender balance. Using a 2D embedding
of the abstracts' texts, we describe a shift in research topics from 2017 to
2024 and identify hedgehogs and foxes among the authors with the highest number
of ICLR submissions.
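
A minimal sketch of the bag-of-words baseline reported above, TF-IDF features
with a kNN classifier; the toy abstracts and labels are stand-ins, not the
ICLR dataset itself.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    abstracts = ["transformers for language modeling",
                 "graph neural networks for molecules",
                 "language models and attention",
                 "molecular graphs and message passing"]
    labels = ["nlp", "graphs", "nlp", "graphs"]

    clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
    clf.fit(abstracts, labels)
    print(clf.predict(["attention-based language understanding"]))  # ['nlp']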
|
[
{
"created": "Fri, 12 Apr 2024 11:30:16 GMT",
"version": "v1"
}
] |
2024-06-06
|
[
[
"González-Márquez",
"Rita",
""
],
[
"Kobak",
"Dmitry",
""
]
] |
2404.08433
|
Linhuang Wang
|
Linhuang Wang, Xin Kang, Fei Ding, Satoshi Nakagawa and Fuji Ren
|
MSSTNet: A Multi-Scale Spatio-Temporal CNN-Transformer Network for
Dynamic Facial Expression Recognition
|
Accepted to 2024 IEEE International Conference on Acoustics, Speech,
and Signal Processing (ICASSP 2024)
|
ICASSP 2024-2024 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP). IEEE, 2024: 3015-3019
|
10.1109/ICASSP48485.2024.10446699
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unlike typical video action recognition, Dynamic Facial Expression
Recognition (DFER) does not involve distinct moving targets but relies on
localized changes in facial muscles. Addressing this distinctive attribute, we
propose a Multi-Scale Spatio-temporal CNN-Transformer network (MSSTNet). Our
approach takes spatial features of different scales extracted by CNN and feeds
them into a Multi-scale Embedding Layer (MELayer). The MELayer extracts
multi-scale spatial information and encodes these features before sending them
into a Temporal Transformer (T-Former). The T-Former simultaneously extracts
temporal information while continually integrating multi-scale spatial
information. This process culminates in the generation of multi-scale
spatio-temporal features that are utilized for the final classification. Our
method achieves state-of-the-art results on two in-the-wild datasets.
Furthermore, a series of ablation experiments and visualizations provide
further validation of our approach's proficiency in leveraging spatio-temporal
information within DFER.
|
[
{
"created": "Fri, 12 Apr 2024 12:30:48 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Wang",
"Linhuang",
""
],
[
"Kang",
"Xin",
""
],
[
"Ding",
"Fei",
""
],
[
"Nakagawa",
"Satoshi",
""
],
[
"Ren",
"Fuji",
""
]
] |
2404.08504
|
Kai Kohyama
|
Kai Kohyama, Shintaro Shiba, Yoshimitsu Aoki
|
3D Human Scan With A Moving Event Camera
| null |
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Workshop On Computer Vision For Mixed Reality (CV4MR), Seattle, 2024
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Capturing a 3D human body is one of the important tasks in computer vision
with a wide range of applications such as virtual reality and sports analysis.
However, conventional frame cameras are limited by their temporal resolution
and dynamic range, which imposes constraints in real-world application setups.
Event cameras have the advantages of high temporal resolution and high dynamic
range (HDR), but the development of event-based methods is necessary to handle
data with different characteristics. This paper proposes a novel event-based
method for 3D pose estimation and human mesh recovery. Prior work on
event-based human mesh recovery requires frames (images) as well as event data.
The proposed method solely relies on events; it carves 3D voxels by moving the
event camera around a stationary body, reconstructs the human pose and mesh by
attenuated rays, and fits statistical body models, preserving high-frequency
details. The experimental results show that the proposed method outperforms
conventional frame-based methods in the estimation accuracy of both pose and
body mesh. We also demonstrate results in challenging situations where a
conventional camera has motion blur. This is the first work to demonstrate
event-only human mesh recovery, and we hope that it is the first step toward
achieving robust and accurate 3D human body scanning from vision sensors.
https://florpeng.github.io/event-based-human-scan/
|
[
{
"created": "Fri, 12 Apr 2024 14:34:24 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2024 10:18:56 GMT",
"version": "v2"
}
] |
2024-04-17
|
[
[
"Kohyama",
"Kai",
""
],
[
"Shiba",
"Shintaro",
""
],
[
"Aoki",
"Yoshimitsu",
""
]
] |
2404.08584
|
Ah Arnob
|
Abu Bakor Hayat Arnob, Xiangxue Wang, Yiping Jiao, Xiao Gan, Wenlong
Ming, and Jun Xu
|
Pathological Primitive Segmentation Based on Visual Foundation Model
with Zero-Shot Mask Generation
|
2024 IEEE International Symposium on Biomedical Imaging
|
10.1109/ISBI56570.2024
|
10.1109/ISBI56570.2024.10635539
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Medical image processing usually requires a model trained with carefully
crafted datasets due to unique image characteristics and domain-specific
challenges, especially in pathology. Primitive detection and segmentation in
digitized tissue samples are essential for objective and automated diagnosis
and prognosis of cancer. SAM (Segment Anything Model) has recently been
developed to segment general objects from natural images with high accuracy,
but it requires human prompts to generate masks. In this work, we present a
novel approach that adapts pre-trained natural image encoders of SAM for
detection-based region proposals. Regions proposed by a pre-trained encoder are
sent to cascaded feature propagation layers for projection. Then, local
semantic and global context are aggregated across multiple scales for bounding box
localization and classification. Finally, the SAM decoder uses the identified
bounding boxes as essential prompts to generate a comprehensive primitive
segmentation map. The entire base framework, SAM, requires no additional
training or fine-tuning but could produce an end-to-end result for two
fundamental segmentation tasks in pathology. Our method compares with
state-of-the-art models in F1 score for nuclei detection, and in
binary/multiclass panoptic quality (bPQ/mPQ) and mask quality (Dice) for
segmentation, on the PanNuke dataset, while offering end-to-end efficiency.
Our model also achieves
remarkable Average Precision (+4.5%) on the secondary dataset (HuBMAP Kidney)
compared to Faster RCNN. The code is publicly available at
https://github.com/learner-codec/autoprom_sam.
|
[
{
"created": "Fri, 12 Apr 2024 16:29:49 GMT",
"version": "v1"
}
] |
2024-10-10
|
[
[
"Arnob",
"Abu Bakor Hayat",
""
],
[
"Wang",
"Xiangxue",
""
],
[
"Jiao",
"Yiping",
""
],
[
"Gan",
"Xiao",
""
],
[
"Ming",
"Wenlong",
""
],
[
"Xu",
"Jun",
""
]
] |
2404.08630
|
Leif Azzopardi
|
Leif Azzopardi, Mateusz Dubiel, Martin Halvey, Jeffery Dalton
|
A Conceptual Framework for Conversational Search and Recommendation:
Conceptualizing Agent-Human Interactions During the Conversational Search
Process
| null |
The Second International Workshop on Conversational Approaches to
Information Retrieval (CAIR 2018) at ACM SIGIR
| null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The conversational search task aims to enable a user to resolve information
needs via natural language dialogue with an agent. In this paper, we aim to
develop a conceptual framework of the actions and intents of users and agents
explaining how these actions enable the user to explore the search space and
resolve their information need. We outline the different actions and intents,
before discussing key decision points in the conversation where the agent needs
to decide how to steer the conversational search process to a successful and/or
satisfactory conclusion. Essentially, this paper provides a conceptualization
of the conversational search process between an agent and user, which provides
a framework and a starting point for research, development and evaluation of
conversational search agents.
|
[
{
"created": "Fri, 12 Apr 2024 17:48:18 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Azzopardi",
"Leif",
""
],
[
"Dubiel",
"Mateusz",
""
],
[
"Halvey",
"Martin",
""
],
[
"Dalton",
"Jeffery",
""
]
] |
2404.08654
|
Hyunkyung Han
|
Hyunkyung Han, Jaesik Choi
|
Optimal path for Biomedical Text Summarization Using Pointer GPT
|
3 pages, 3 figures
|
KSC2023
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Biomedical text summarization is a critical tool that enables clinicians to
effectively ascertain patient status. Traditionally, text summarization has
been accomplished with transformer models, which are capable of compressing
long documents into brief summaries. However, text summarization remains among
the most challenging natural language processing (NLP) tasks.
Specifically, GPT models have a tendency to generate factual errors, lack
context, and oversimplify words. To address these limitations, we replaced the
attention mechanism in the GPT model with a pointer network. This modification
was designed to preserve the core values of the original text during the
summarization process. The effectiveness of the Pointer-GPT model was evaluated
using the ROUGE score. The results demonstrated that Pointer-GPT outperformed
the original GPT model. These findings suggest that pointer networks can be a
valuable addition to EMR systems and can provide clinicians with more accurate
and informative summaries of patient medical records. This research has the
potential to usher in a new paradigm in EMR systems and to revolutionize the
way that clinicians interact with patient medical records.
|
[
{
"created": "Fri, 22 Mar 2024 02:13:23 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Han",
"Hyunkyung",
""
],
[
"Choi",
"Jaesik",
""
]
] |
2404.08684
|
Renato P. dos Santos
|
Gian Alexandre Michaelsen, Renato P. dos Santos
|
Is English the New Programming Language? How About Pseudo-code
Engineering?
| null |
Acta Sci. (Canoas), 26(1), 157-204, Jan./Feb. 2024
| null | null |
cs.CL cs.AI cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Background: The integration of artificial intelligence (AI) into daily life,
particularly through chatbots utilizing natural language processing (NLP),
presents both revolutionary potential and unique challenges. This study
intended to investigate how different input forms impact the performance of
ChatGPT, a leading language model by OpenAI, in understanding and executing
complex, multi-intention
tasks. Design: Employing a case study methodology supplemented by discourse
analysis, the research analyzes ChatGPT's responses to inputs varying from
natural language to pseudo-code engineering. The study specifically examines
the model's proficiency across four categories: understanding of intentions,
interpretability, completeness, and creativity. Setting and Participants: As a
theoretical exploration of AI interaction, this study focuses on the analysis
of structured and unstructured inputs processed by ChatGPT, without direct
human participants. Data collection and analysis: The research utilizes
synthetic case scenarios, including the organization of a "weekly meal plan"
and a "shopping list," to assess ChatGPT's response to prompts in both natural
language and pseudo-code engineering. The analysis is grounded in the
identification of patterns, contradictions, and unique response elements across
different input formats. Results: Findings reveal that pseudo-code engineering
inputs significantly enhance the clarity and determinism of ChatGPT's
responses, reducing ambiguity inherent in natural language. Enhanced natural
language, structured through prompt engineering techniques, similarly improves
the model's interpretability and creativity. Conclusions: The study underscores
the potential of pseudo-code engineering in refining human-AI interaction and
achieving more deterministic, concise, and direct outcomes, advocating for its
broader application across disciplines requiring precise AI responses.
|
[
{
"created": "Mon, 8 Apr 2024 16:28:52 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Michaelsen",
"Gian Alexandre",
""
],
[
"Santos",
"Renato P. dos",
""
]
] |
2404.08685
|
Bhavith Chandra Challagundla
|
Bhavith Chandra Challagundla, Chakradhar Peddavenkatagari
|
Neural Sequence-to-Sequence Modeling with Attention by Leveraging Deep
Learning Architectures for Enhanced Contextual Understanding in Abstractive
Text Summarization
| null |
International Journal of Machine Learning and Cybernetics ( 2024 )
| null |
IJMLC_02_01_002
|
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic text summarization (TS) plays a pivotal role in condensing large
volumes of information into concise, coherent summaries, facilitating efficient
information retrieval and comprehension. This paper presents a novel framework
for abstractive TS of single documents, which integrates three dominant
aspects: structural, semantic, and neural-based approaches. The proposed
framework merges machine learning and knowledge-based techniques to achieve a
unified methodology. The framework consists of three main phases:
pre-processing, machine learning, and post-processing. In the pre-processing
phase, a knowledge-based Word Sense Disambiguation (WSD) technique is employed
to generalize ambiguous words, enhancing content generalization. Semantic
content generalization is then performed to address out-of-vocabulary (OOV) or
rare words, ensuring comprehensive coverage of the input document.
Subsequently, the generalized text is transformed into a continuous vector
space using neural language processing techniques. A deep sequence-to-sequence
(seq2seq) model with an attention mechanism is employed to predict a
generalized summary based on the vector representation. In the post-processing
phase, heuristic algorithms and text similarity metrics are utilized to refine
the generated summary further. Concepts from the generalized summary are
matched with specific entities, enhancing coherence and readability.
Experimental evaluations conducted on prominent datasets, including Gigaword,
Duc 2004, and CNN/DailyMail, demonstrate the effectiveness of the proposed
framework. Results indicate significant improvements in handling rare and OOV
words, outperforming existing state-of-the-art deep learning techniques. The
proposed framework presents a comprehensive and unified approach towards
abstractive TS, combining the strengths of structure, semantics, and
neural-based methodologies.
|
[
{
"created": "Mon, 8 Apr 2024 18:33:59 GMT",
"version": "v1"
}
] |
2024-04-22
|
[
[
"Challagundla",
"Bhavith Chandra",
""
],
[
"Peddavenkatagari",
"Chakradhar",
""
]
] |
2404.08760
|
Siyang Liu
|
Siyang Liu, Trish Maturi, Bowen Yi, Siqi Shen, Rada Mihalcea
|
The Generation Gap: Exploring Age Bias in the Value Systems of Large
Language Models
|
5 pages
|
The 2024 Conference on Empirical Methods in Natural Language
Processing
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the alignment of values in Large Language Models (LLMs) with
specific age groups, leveraging data from the World Value Survey across
thirteen categories. Through a diverse set of prompts tailored to ensure
response robustness, we find a general inclination of LLM values towards
younger demographics, especially when compared to the US population. Although a
general inclination can be observed, we also found that this inclination toward
younger groups varies across value categories.
Additionally, we explore the impact of incorporating age identity information
in prompts and observe challenges in mitigating value discrepancies with
different age cohorts. Our findings highlight the age bias in LLMs and provide
insights for future work. Materials for our analysis are available at
\url{https://github.com/MichiganNLP/Age-Bias-In-LLMs}
|
[
{
"created": "Fri, 12 Apr 2024 18:36:20 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2024 22:11:02 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Oct 2024 00:16:54 GMT",
"version": "v3"
},
{
"created": "Tue, 15 Oct 2024 09:10:09 GMT",
"version": "v4"
}
] |
2024-10-16
|
[
[
"Liu",
"Siyang",
""
],
[
"Maturi",
"Trish",
""
],
[
"Yi",
"Bowen",
""
],
[
"Shen",
"Siqi",
""
],
[
"Mihalcea",
"Rada",
""
]
] |
2404.08778
|
Xiaomeng Zhu
|
Xiaomeng Zhu, Talha Bilal, P\"ar M{\aa}rtensson, Lars Hanson,
M{\aa}rten Bj\"orkman, Atsuto Maki
|
Towards Sim-to-Real Industrial Parts Classification with Synthetic
Dataset
|
Published in 2023 IEEE/CVF Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW)
|
2023 IEEE/CVF CVPRW, pp. 4454-4463
|
10.1109/CVPRW59228.2023.00468
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper is about effectively utilizing synthetic data for training deep
neural networks for industrial parts classification, in particular, by taking
into account the domain gap against real-world images. To this end, we
introduce a synthetic dataset that may serve as a preliminary testbed for the
Sim-to-Real challenge; it contains 17 objects of six industrial use cases,
including isolated and assembled parts. A few subsets of objects exhibit large
similarities in shape and albedo to reflect challenging cases of industrial
parts. All the sample images come with and without random backgrounds and
post-processing for evaluating the importance of domain randomization. We call
it Synthetic Industrial Parts dataset (SIP-17). We study the usefulness of
SIP-17 through benchmarking the performance of five state-of-the-art deep
network models, supervised and self-supervised, trained only on the synthetic
data while testing them on real data. By analyzing the results, we deduce some
insights on the feasibility and challenges of using synthetic data for
industrial parts classification and for further developing larger-scale
synthetic datasets. Our dataset and code are publicly available.
|
[
{
"created": "Fri, 12 Apr 2024 19:04:59 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Zhu",
"Xiaomeng",
""
],
[
"Bilal",
"Talha",
""
],
[
"Mårtensson",
"Pär",
""
],
[
"Hanson",
"Lars",
""
],
[
"Björkman",
"Mårten",
""
],
[
"Maki",
"Atsuto",
""
]
] |
2404.08827
|
James Mullen Jr
|
James F. Mullen Jr, Prasoon Goyal, Robinson Piramuthu, Michael
Johnston, Dinesh Manocha, and Reza Ghanadan
|
"Don't forget to put the milk back!" Dataset for Enabling Embodied
Agents to Detect Anomalous Situations
| null |
IEEE Robotics and Automation Letters 9.10 (2024) 9087 - 9094
|
10.1109/LRA.2024.3430129
| null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Home robots intend to make their users' lives easier. Our work assists in this
goal by enabling robots to inform their users of dangerous or unsanitary
anomalies in their home. Some examples of these anomalies include the user
leaving their milk out, forgetting to turn off the stove, or leaving poison
accessible to children. To move towards enabling home robots with these
abilities, we have created a new dataset, which we call SafetyDetect. The
SafetyDetect dataset consists of 1000 anomalous home scenes, each of which
contains unsafe or unsanitary situations for an agent to detect. Our approach
utilizes large language models (LLMs) alongside both a graph representation of
the scene and the relationships between the objects in the scene. Our key
insight is that this connected scene graph and the object relationships it
encodes enables the LLM to better reason about the scene -- especially as it
relates to detecting dangerous or unsanitary situations. Our most promising
approach utilizes GPT-4 and pursues a categorization technique where object
relations from the scene graph are classified as normal, dangerous, unsanitary,
or dangerous for children. This method is able to correctly identify over 90%
of anomalous scenarios in the SafetyDetect Dataset. Additionally, we conduct
real world experiments on a ClearPath TurtleBot where we generate a scene graph
from visuals of the real world scene, and run our approach with no
modification. This setup resulted in little performance loss. The SafetyDetect
Dataset and code will be released to the public upon this paper's publication.
|
[
{
"created": "Fri, 12 Apr 2024 21:56:21 GMT",
"version": "v1"
}
] |
2024-10-16
|
[
[
"Mullen",
"James F.",
"Jr"
],
[
"Goyal",
"Prasoon",
""
],
[
"Piramuthu",
"Robinson",
""
],
[
"Johnston",
"Michael",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Ghanadan",
"Reza",
""
]
] |
2404.08858
|
Yan Ru Pei
|
Yan Ru Pei, Sasskia Br\"uers, S\'ebastien Crouzet, Douglas McLelland,
Olivier Coenen
|
A Lightweight Spatiotemporal Network for Online Eye Tracking with Event
Camera
|
8 pages, 3 figures
|
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, 2024, pp. 5780-5788
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Event-based data are commonly encountered in edge computing environments
where efficiency and low latency are critical. To interface with such data and
leverage their rich temporal features, we propose a causal spatiotemporal
convolutional network. This solution targets efficient implementation on
edge-appropriate hardware with limited resources in three ways: 1) it deliberately
targets a simple architecture and set of operations (convolutions, ReLU
activations); 2) it can be configured to perform online inference efficiently via
buffering of layer outputs; and 3) it can achieve more than 90% activation sparsity
through regularization during training, enabling very significant efficiency
gains on event-based processors. In addition, we propose a general affine
augmentation strategy acting directly on the events, which alleviates the
problem of dataset scarcity for event-based systems. We apply our model on the
AIS 2024 event-based eye tracking challenge, reaching a p10 accuracy score of
0.9916 on the Kaggle private test set.
|
[
{
"created": "Sat, 13 Apr 2024 00:13:20 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Pei",
"Yan Ru",
""
],
[
"Brüers",
"Sasskia",
""
],
[
"Crouzet",
"Sébastien",
""
],
[
"McLelland",
"Douglas",
""
],
[
"Coenen",
"Olivier",
""
]
] |
2404.08974
|
Tom\'a\v{s} Sourada
|
Tom\'a\v{s} Sourada, Jana Strakov\'a, Rudolf Rosa
|
OOVs in the Spotlight: How to Inflect them?
|
Published in the proceedings of LREC-COLING 2024. 12 pages, 3 figures
|
Proceedings of the 2024 Joint International Conference on
Computational Linguistics, Language Resources and Evaluation (LREC-COLING
2024), pp. 12455-12466
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We focus on morphological inflection in out-of-vocabulary (OOV) conditions,
an under-researched subtask in which state-of-the-art systems are usually less
effective. We developed three systems: a retrograde model and two
sequence-to-sequence (seq2seq) models based on LSTM and Transformer. For
testing in OOV conditions, we automatically extracted a large dataset of nouns
in the morphologically rich Czech language, with lemma-disjoint data splits,
and we further manually annotated a real-world OOV dataset of neologisms. In
the standard OOV conditions, the Transformer achieves the best results, with
performance increasing further in an ensemble with the LSTM, the retrograde
model, and the SIGMORPHON baselines. On the real-world OOV dataset of neologisms, the
retrograde model outperforms all neural models. Finally, our seq2seq models
achieve state-of-the-art results in 9 out of 16 languages from SIGMORPHON 2022
shared task data in the OOV evaluation (feature overlap) in the large data
condition. We release the Czech OOV Inflection Dataset for rigorous evaluation
in OOV conditions. Further, we release the inflection system with the seq2seq
models as a ready-to-use Python library.
|
[
{
"created": "Sat, 13 Apr 2024 11:40:06 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2024 10:21:38 GMT",
"version": "v2"
}
] |
2024-05-29
|
[
[
"Sourada",
"Tomáš",
""
],
[
"Straková",
"Jana",
""
],
[
"Rosa",
"Rudolf",
""
]
] |
2404.09016
|
Melike Nur Yegin
|
Melike Nur Ye\u{g}in and Mehmet Fatih Amasyal{\i}
|
Theoretical research on generative diffusion models: an overview
| null |
Neurocomputing Volume 608 , 1 December 2024, 128373
|
10.1016/j.neucom.2024.128373
| null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Generative diffusion models have shown great success in many fields, backed by
a powerful theoretical foundation. They convert the data distribution to noise
and then reverse the process to recover a similar distribution. Many existing
reviews focus on specific application areas without concentrating on research
into the algorithms themselves. Unlike them, we investigate the theoretical
developments of generative diffusion models. These approaches divide mainly
into two classes: training-based and sampling-based. Recognizing this allows a
clear and understandable categorization for researchers who will make new
developments in the future.
|
[
{
"created": "Sat, 13 Apr 2024 14:08:56 GMT",
"version": "v1"
}
] |
2024-09-19
|
[
[
"Yeğin",
"Melike Nur",
""
],
[
"Amasyalı",
"Mehmet Fatih",
""
]
] |
2404.09136
|
Shahriar Noroozizadeh
|
Spandan Das, Vinay Samuel, and Shahriar Noroozizadeh
|
TLDR at SemEval-2024 Task 2: T5-generated clinical-Language summaries
for DeBERTa Report Analysis
| null |
In Proceedings of the 18th International Workshop on Semantic
Evaluation (SemEval-2024), pages 507-516, Mexico City, Mexico. Association
for Computational Linguistics
| null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces novel methodologies for the Natural Language Inference
for Clinical Trials (NLI4CT) task. We present TLDR (T5-generated
clinical-Language summaries for DeBERTa Report Analysis) which incorporates
T5-model generated premise summaries for improved entailment and contradiction
analysis in clinical NLI tasks. This approach overcomes the challenges posed by
small context windows and lengthy premises, leading to a substantial
improvement in Macro F1 scores: a 0.184 increase over truncated premises. Our
comprehensive experimental evaluation, including detailed error analysis and
ablations, confirms the superiority of TLDR in achieving consistency and
faithfulness in predictions against semantically altered inputs.
|
[
{
"created": "Sun, 14 Apr 2024 04:14:30 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Das",
"Spandan",
""
],
[
"Samuel",
"Vinay",
""
],
[
"Noroozizadeh",
"Shahriar",
""
]
] |
2404.09275
|
Quang Minh Dinh
|
Quang Minh Dinh, Minh Khoi Ho, Anh Quan Dang, Hung Phong Tran
|
TrafficVLM: A Controllable Visual Language Model for Traffic Video
Captioning
| null |
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, 2024, pp. 7134-7143
| null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Traffic video description and analysis have received much attention recently
due to the growing demand for efficient and reliable urban surveillance
systems. Most existing methods focus only on locating traffic event segments
and severely lack descriptive details related to the behaviour and context of
all the subjects of interest in the events. In this paper, we present
TrafficVLM, a novel multi-modal dense video captioning model for vehicle ego
camera view. TrafficVLM models traffic video events at different levels of
analysis, both spatially and temporally, and generates long fine-grained
descriptions for the vehicle and pedestrian at different phases of the event.
We also propose a conditional component for TrafficVLM to control the
generation outputs and a multi-task fine-tuning paradigm to enhance
TrafficVLM's learning capability. Experiments show that TrafficVLM performs
well on both vehicle and overhead camera views. Our solution achieved
outstanding results in Track 2 of the AI City Challenge 2024, ranking us third
in the challenge standings. Our code is publicly available at
https://github.com/quangminhdinh/TrafficVLM.
|
[
{
"created": "Sun, 14 Apr 2024 14:51:44 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Dinh",
"Quang Minh",
""
],
[
"Ho",
"Minh Khoi",
""
],
[
"Dang",
"Anh Quan",
""
],
[
"Tran",
"Hung Phong",
""
]
] |
2404.09469
|
Dmitry Ignatov PhD
|
Dmitry Ignatov, Andrey Ignatov and Radu Timofte
|
Virtually Enriched NYU Depth V2 Dataset for Monocular Depth Estimation:
Do We Need Artificial Augmentation?
| null |
Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition Workshops, pages 6177-6186, 2024
| null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present ANYU, a new virtually augmented version of the NYU depth v2
dataset, designed for monocular depth estimation. In contrast to the well-known
approach where full 3D scenes of a virtual world are utilized to generate
artificial datasets, ANYU was created by incorporating RGB-D representations of
virtual reality objects into the original NYU depth v2 images. We specifically
did not match each generated virtual object with an appropriate texture and a
suitable location within the real-world image. Instead, an assignment of
texture, location, lighting, and other rendering parameters was randomized to
maximize the diversity of the training data, and to show that it is randomness
that can improve the generalizing ability of a dataset. By conducting extensive
experiments with our virtually modified dataset and validating on the original
NYU depth v2 and iBims-1 benchmarks, we show that ANYU improves the monocular
depth estimation performance and generalization of deep neural networks with
considerably different architectures, especially for the current
state-of-the-art VPD model. To the best of our knowledge, this is the first
work that augments a real-world dataset with randomly generated virtual 3D
objects for monocular depth estimation. We make our ANYU dataset publicly
available in two training configurations with 10% and 100% additional
synthetically enriched RGB-D pairs of training images, respectively, for
efficient training and empirical exploration of virtual augmentation at
https://github.com/ABrain-One/ANYU
|
[
{
"created": "Mon, 15 Apr 2024 05:44:03 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Ignatov",
"Dmitry",
""
],
[
"Ignatov",
"Andrey",
""
],
[
"Timofte",
"Radu",
""
]
] |
2404.09475
|
Byeongkeun Kang
|
Byeongkeun Kang and Sinhae Cha and Yeejin Lee
|
Improving Weakly-Supervised Object Localization Using Adversarial
Erasing and Pseudo Label
|
15 pages
|
Engineering Applications of Artificial Intelligence, 2024
| null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakly-supervised learning approaches have gained significant attention due
to their ability to reduce the effort required for human annotations in
training neural networks. This paper investigates a framework for
weakly-supervised object localization, which aims to train a neural network
capable of predicting both the object class and its location using only images
and their image-level class labels. The proposed framework consists of a shared
feature extractor, a classifier, and a localizer. The localizer predicts
pixel-level class probabilities, while the classifier predicts the object class
at the image level. Since image-level class labels are insufficient for
training the localizer, weakly-supervised object localization methods often
encounter challenges in accurately localizing the entire object region. To
address this issue, the proposed method incorporates adversarial erasing and
pseudo labels to improve localization accuracy. Specifically, novel losses are
designed to utilize adversarially erased foreground features and adversarially
erased feature maps, reducing dependence on the most discriminative region.
Additionally, the proposed method employs pseudo labels to suppress activation
values in the background while increasing them in the foreground. The proposed
method is applied to two backbone networks (MobileNetV1 and InceptionV3) and is
evaluated on three publicly available datasets (ILSVRC-2012, CUB-200-2011, and
PASCAL VOC 2012). The experimental results demonstrate that the proposed method
outperforms previous state-of-the-art methods across all evaluated metrics.
|
[
{
"created": "Mon, 15 Apr 2024 06:02:09 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Kang",
"Byeongkeun",
""
],
[
"Cha",
"Sinhae",
""
],
[
"Lee",
"Yeejin",
""
]
] |
2404.09502
|
Pin Tang
|
Pin Tang, Zhongdao Wang, Guoqing Wang, Jilai Zheng, Xiangxuan Ren,
Bailan Feng, Chao Ma
|
SparseOcc: Rethinking Sparse Latent Representation for Vision-Based
Semantic Occupancy Prediction
|
10 pages, 4 figures, accepted by CVPR 2024
|
IEEE Conference on Computer Vision and Pattern Recognition 2024
(CVPR 2024)
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-based perception for autonomous driving requires an explicit modeling
of a 3D space, where 2D latent representations are mapped and subsequent 3D
operators are applied. However, operating on dense latent spaces introduces a
cubic time and space complexity, which limits scalability in terms of
perception range or spatial resolution. Existing approaches compress the dense
representation using projections like Bird's Eye View (BEV) or Tri-Perspective
View (TPV). Although efficient, these projections result in information loss,
especially for tasks like semantic occupancy prediction. To address this, we
propose SparseOcc, an efficient occupancy network inspired by sparse point
cloud processing. It utilizes a lossless sparse latent representation with
three key innovations. Firstly, a 3D sparse diffuser performs latent completion
using spatially decomposed 3D sparse convolutional kernels. Secondly, a feature
pyramid and sparse interpolation enhance each scale with information from the others.
Finally, the transformer head is redesigned as a sparse variant. SparseOcc
achieves a remarkable 74.9% reduction on FLOPs over the dense baseline.
Interestingly, it also improves accuracy, from 12.8% to 14.1% mIOU, which in
part can be attributed to the sparse representation's ability to avoid
hallucinations on empty voxels.
|
[
{
"created": "Mon, 15 Apr 2024 06:45:06 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Tang",
"Pin",
""
],
[
"Wang",
"Zhongdao",
""
],
[
"Wang",
"Guoqing",
""
],
[
"Zheng",
"Jilai",
""
],
[
"Ren",
"Xiangxuan",
""
],
[
"Feng",
"Bailan",
""
],
[
"Ma",
"Chao",
""
]
] |
2404.09530
|
Mohit Gupta
|
Avinash Anand, Raj Jaiswal, Mohit Gupta, Siddhesh S Bangar, Pijush
Bhuyan, Naman Lal, Rajeev Singh, Ritika Jha, Rajiv Ratn Shah, Shin'ichi Satoh
|
RanLayNet: A Dataset for Document Layout Detection used for Domain
Adaptation and Generalization
|
8 pages, 6 figures, MMAsia 2023 Proceedings of the 5th ACM
International Conference on Multimedia in Asia
|
In Proceedings of the 5th ACM International Conference on
Multimedia in Asia 2023. Association for Computing Machinery, NY, USA,
Article 74, pp. 1-6
|
10.1145/3595916.3626448
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large ground-truth datasets and recent advances in deep learning techniques
have been useful for layout detection. However, because of the restricted
layout diversity of these datasets, training on them requires a sizable number
of annotated instances, which is both expensive and time-consuming. As a
result, differences between the source and target domains may significantly
impact how well these models function. To solve this problem, domain adaptation
approaches have been developed that use a small quantity of labeled data to
adjust the model to the target domain. In this research, we introduced a
synthetic document dataset called RanLayNet, enriched with automatically
assigned labels denoting spatial positions, ranges, and types of layout
elements. The primary aim of this endeavor is to develop a versatile dataset
capable of training models with robustness and adaptability to diverse document
formats. Through empirical experimentation, we demonstrate that a deep layout
identification model trained on our dataset exhibits enhanced performance
compared to a model trained solely on actual documents. Moreover, we conduct a
comparative analysis by fine-tuning inference models using both PubLayNet and
IIIT-AR-13K datasets on the Doclaynet dataset. Our findings emphasize that
models enriched with our dataset perform strongly, achieving mAP95 scores of
0.398 and 0.588 for the TABLE class in the scientific document domain.
|
[
{
"created": "Mon, 15 Apr 2024 07:50:15 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Apr 2024 06:44:18 GMT",
"version": "v2"
}
] |
2024-04-22
|
[
[
"Anand",
"Avinash",
""
],
[
"Jaiswal",
"Raj",
""
],
[
"Gupta",
"Mohit",
""
],
[
"Bangar",
"Siddhesh S",
""
],
[
"Bhuyan",
"Pijush",
""
],
[
"Lal",
"Naman",
""
],
[
"Singh",
"Rajeev",
""
],
[
"Jha",
"Ritika",
""
],
[
"Shah",
"Rajiv Ratn",
""
],
[
"Satoh",
"Shin'ichi",
""
]
] |
2404.09576
|
Jumbly Grindrod
|
Jumbly Grindrod
|
Large language models and linguistic intentionality
| null |
Synthese, Vol. 204: 71 (2024)
|
10.1007/s11229-024-04723-8
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Do large language models like Chat-GPT or LLaMa meaningfully use the words
they produce? Or are they merely clever prediction machines, simulating
language use by producing statistically plausible text? There have already been
some initial attempts to answer this question by showing that these models meet
the criteria for entering meaningful states according to metasemantic theories
of mental content. In this paper, I will argue for a different approach - that
we should instead consider whether language models meet the criteria given by
our best metasemantic theories of linguistic content. In that vein, I will
illustrate how this can be done by applying two such theories to the case of
language models: Gareth Evans' (1982) account of naming practices and Ruth
Millikan's (1984, 2004, 2005) teleosemantics. In doing so, I will argue that it
is a mistake to think that the failure of LLMs to meet plausible conditions for
mental intentionality thereby renders their outputs meaningless, and that a
distinguishing feature of linguistic intentionality - dependency on a
pre-existing linguistic system - allows for the plausible result that LLM
outputs are meaningful.
|
[
{
"created": "Mon, 15 Apr 2024 08:37:26 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Sep 2024 08:35:51 GMT",
"version": "v2"
}
] |
2024-09-17
|
[
[
"Grindrod",
"Jumbly",
""
]
] |
2404.09722
|
Xun Yuan
|
Xun Yuan and Yang Yang and Prosanta Gope and Aryan Pasikhani and
Biplab Sikdar
|
VFLGAN: Vertical Federated Learning-based Generative Adversarial Network
for Vertically Partitioned Data Publication
| null |
Proceedings on Privacy Enhancing Technologies Symposium 4 (2024)
840-858
|
10.56553/popets-2024-0144
| null |
cs.LG cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the current artificial intelligence (AI) era, the scale and quality of the
dataset play a crucial role in training a high-quality AI model. However, good
data is not a free lunch and is always hard to access due to privacy
regulations like the General Data Protection Regulation (GDPR). A potential
solution is to release a synthetic dataset with a similar distribution to that
of the private dataset. Nevertheless, in some scenarios, it has been found that
the attributes needed to train an AI model belong to different parties, and
they cannot share the raw data for synthetic data publication due to privacy
regulations. In PETS 2023, Xue et al. proposed the first generative adversarial
network-based model, VertiGAN, for vertically partitioned data publication.
However, after a thorough investigation, we found that VertiGAN is less
effective in preserving the correlation among the attributes of different
parties. This article proposes a Vertical Federated Learning-based Generative
Adversarial Network, VFLGAN, for vertically partitioned data publication to
address the above issues. Our experimental results show that compared with
VertiGAN, VFLGAN significantly improves the quality of synthetic data. Taking
the MNIST dataset as an example, the quality of the synthetic dataset generated
by VFLGAN is 3.2 times better than that generated by VertiGAN w.r.t. the
Fr\'echet Distance. We also designed a more efficient and effective Gaussian
mechanism for the proposed VFLGAN to provide the synthetic dataset with a
differential privacy guarantee. On the other hand, differential privacy only
gives the upper bound of the worst-case privacy guarantee. This article also
proposes a practical auditing scheme that applies membership inference attacks
to estimate privacy leakage through the synthetic dataset.
|
[
{
"created": "Mon, 15 Apr 2024 12:25:41 GMT",
"version": "v1"
}
] |
2024-08-12
|
[
[
"Yuan",
"Xun",
""
],
[
"Yang",
"Yang",
""
],
[
"Gope",
"Prosanta",
""
],
[
"Pasikhani",
"Aryan",
""
],
[
"Sikdar",
"Biplab",
""
]
] |
2404.09753
|
Dongyang Fan
|
Nicolas Wagner, Dongyang Fan, Martin Jaggi
|
Personalized Collaborative Fine-Tuning for On-Device Large Language
Models
| null |
COLM 2024
| null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore on-device self-supervised collaborative fine-tuning of large
language models with limited local data availability. Taking inspiration from
the collaborative learning community, we introduce three distinct
trust-weighted gradient aggregation schemes: weight similarity-based,
prediction similarity-based and validation performance-based. To minimize
communication overhead, we integrate Low-Rank Adaptation (LoRA) and only
exchange LoRA weight updates. Our protocols, driven by prediction and
performance metrics, surpass both FedAvg and local fine-tuning methods, which
is particularly evident in realistic scenarios with more diverse local data
distributions. The results underscore the effectiveness of our approach in
addressing heterogeneity and scarcity within local datasets.
|
[
{
"created": "Mon, 15 Apr 2024 12:54:31 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Aug 2024 21:54:20 GMT",
"version": "v2"
}
] |
2024-08-08
|
[
[
"Wagner",
"Nicolas",
""
],
[
"Fan",
"Dongyang",
""
],
[
"Jaggi",
"Martin",
""
]
] |
2404.10180
|
Zhong Meng
|
Zelin Wu, Gan Song, Christopher Li, Pat Rondon, Zhong Meng, Xavier
Velez, Weiran Wang, Diamantino Caseiro, Golan Pundak, Tsendsuren Munkhdalai,
Angad Chandorkar, Rohit Prabhavalkar
|
Deferred NAM: Low-latency Top-K Context Injection via Deferred Context
Encoding for Non-Streaming ASR
|
9 pages, 3 figures, accepted by NAACL 2024 - Industry Track
|
2024 Annual Conference of the North American Chapter of the
Association for Computational Linguistics - Industry Track
| null | null |
cs.CL cs.AI cs.LG cs.NE eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Contextual biasing enables speech recognizers to transcribe important phrases
in the speaker's context, such as contact names, even if they are rare in, or
absent from, the training data. Attention-based biasing is a leading approach
which allows for full end-to-end cotraining of the recognizer and biasing
system and requires no separate inference-time components. Such biasers
typically consist of a context encoder; followed by a context filter which
narrows down the context to apply, improving per-step inference time; and,
finally, context application via cross attention. Though much work has gone
into optimizing per-frame performance, the context encoder is at least as
important: recognition cannot begin before context encoding ends. Here, we show
that the lightweight phrase selection pass can be moved before context encoding,
resulting in a speedup of up to 16.1 times and enabling biasing to scale to 20K
phrases with a maximum pre-decoding delay under 33ms. With the addition of
phrase- and wordpiece-level cross-entropy losses, our technique also achieves
up to a 37.5% relative WER reduction over the baseline without the losses and
lightweight phrase selection pass.
|
[
{
"created": "Mon, 15 Apr 2024 23:28:13 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Apr 2024 13:43:26 GMT",
"version": "v2"
}
] |
2024-04-24
|
[
[
"Wu",
"Zelin",
""
],
[
"Song",
"Gan",
""
],
[
"Li",
"Christopher",
""
],
[
"Rondon",
"Pat",
""
],
[
"Meng",
"Zhong",
""
],
[
"Velez",
"Xavier",
""
],
[
"Wang",
"Weiran",
""
],
[
"Caseiro",
"Diamantino",
""
],
[
"Pundak",
"Golan",
""
],
[
"Munkhdalai",
"Tsendsuren",
""
],
[
"Chandorkar",
"Angad",
""
],
[
"Prabhavalkar",
"Rohit",
""
]
] |
2404.10218
|
Jing Zeng
|
Jing Zeng, Yanxu Li, Jiahao Sun, Qi Ye, Yunlong Ran, Jiming Chen
|
Autonomous Implicit Indoor Scene Reconstruction with Frontier
Exploration
|
7 pages
|
IEEE International Conference on Robotics and Automation (ICRA
2024)
| null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Implicit neural representations have demonstrated significant promise for 3D
scene reconstruction. Recent works have extended their applications to
autonomous implicit reconstruction through the Next Best View (NBV) based
method. However, the NBV method cannot guarantee complete scene coverage and
often necessitates extensive viewpoint sampling, particularly in complex
scenes. In this paper, we propose to 1) incorporate frontier-based exploration
tasks for global coverage with implicit surface uncertainty-based
reconstruction tasks to achieve high-quality reconstruction, and 2) introduce a
method to estimate implicit surface uncertainty using color uncertainty, which
reduces the time needed for view selection. Further, with these two tasks, we
propose an adaptive strategy for switching modes in view path planning, to
reduce time and maintain superior reconstruction quality. Our method exhibits
the highest reconstruction quality among all planning methods and superior
planning efficiency in methods involving reconstruction tasks. We deploy our
method on a UAV and the results show that our method can plan multi-task views
and reconstruct a scene with high quality.
|
[
{
"created": "Tue, 16 Apr 2024 01:59:03 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"Zeng",
"Jing",
""
],
[
"Li",
"Yanxu",
""
],
[
"Sun",
"Jiahao",
""
],
[
"Ye",
"Qi",
""
],
[
"Ran",
"Yunlong",
""
],
[
"Chen",
"Jiming",
""
]
] |
2404.10378
|
Iv\'an De Andr\'es Tame
|
Ivan DeAndres-Tame, Ruben Tolosana, Pietro Melzi, Ruben
Vera-Rodriguez, Minchul Kim, Christian Rathgeb, Xiaoming Liu, Aythami
Morales, Julian Fierrez, Javier Ortega-Garcia, Zhizhou Zhong, Yuge Huang,
Yuxi Mi, Shouhong Ding, Shuigeng Zhou, Shuai He, Lingzhi Fu, Heng Cong,
Rongyu Zhang, Zhihong Xiao, Evgeny Smirnov, Anton Pimenov, Aleksei Grigorev,
Denis Timoshenko, Kaleb Mesfin Asfaw, Cheng Yaw Low, Hao Liu, Chuyi Wang,
Qing Zuo, Zhixiang He, Hatef Otroshi Shahreza, Anjith George, Alexander
Unnervik, Parsa Rahimi, S\'ebastien Marcel, Pedro C. Neto, Marco Huber, Jan
Niklas Kolf, Naser Damer, Fadi Boutros, Jaime S. Cardoso, Ana F. Sequeira,
Andrea Atzori, Gianni Fenu, Mirko Marras, Vitomir \v{S}truc, Jiang Yu,
Zhangjie Li, Jichun Li, Weisong Zhao, Zhen Lei, Xiangyu Zhu, Xiao-Yu Zhang,
Bernardo Biesseck, Pedro Vidal, Luiz Coelho, Roger Granada and David Menotti
|
Second Edition FRCSyn Challenge at CVPR 2024: Face Recognition Challenge
in the Era of Synthetic Data
|
arXiv admin note: text overlap with arXiv:2311.10476
|
IEEE/CVF Conference on Computer Vision and Pattern Recognition
Workshops (CVPRw 2024)
| null | null |
cs.CV cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Synthetic data is gaining increasing relevance for training machine learning
models. This is mainly motivated by several factors such as the lack of
real data and intra-class variability, time and errors produced in manual
labeling, and in some cases privacy concerns, among others. This paper presents
an overview of the 2nd edition of the Face Recognition Challenge in the Era of
Synthetic Data (FRCSyn) organized at CVPR 2024. FRCSyn aims to investigate the
use of synthetic data in face recognition to address current technological
limitations, including data privacy concerns, demographic biases,
generalization to novel scenarios, and performance constraints in challenging
situations such as aging, pose variations, and occlusions. Unlike the 1st
edition, in which only synthetic data from the DCFace and GANDiffFace methods
was allowed for training face recognition systems, in this 2nd edition we
propose new sub-tasks that allow participants to explore novel face generative
methods. The outcomes of the 2nd FRCSyn Challenge, along with the proposed
experimental protocol and benchmarking, contribute significantly to the
application of synthetic data to face recognition.
|
[
{
"created": "Tue, 16 Apr 2024 08:15:10 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"DeAndres-Tame",
"Ivan",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Melzi",
"Pietro",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Kim",
"Minchul",
""
],
[
"Rathgeb",
"Christian",
""
],
[
"Liu",
"Xiaoming",
""
],
[
"Morales",
"Aythami",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Ortega-Garcia",
"Javier",
""
],
[
"Zhong",
"Zhizhou",
""
],
[
"Huang",
"Yuge",
""
],
[
"Mi",
"Yuxi",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Zhou",
"Shuigeng",
""
],
[
"He",
"Shuai",
""
],
[
"Fu",
"Lingzhi",
""
],
[
"Cong",
"Heng",
""
],
[
"Zhang",
"Rongyu",
""
],
[
"Xiao",
"Zhihong",
""
],
[
"Smirnov",
"Evgeny",
""
],
[
"Pimenov",
"Anton",
""
],
[
"Grigorev",
"Aleksei",
""
],
[
"Timoshenko",
"Denis",
""
],
[
"Asfaw",
"Kaleb Mesfin",
""
],
[
"Low",
"Cheng Yaw",
""
],
[
"Liu",
"Hao",
""
],
[
"Wang",
"Chuyi",
""
],
[
"Zuo",
"Qing",
""
],
[
"He",
"Zhixiang",
""
],
[
"Shahreza",
"Hatef Otroshi",
""
],
[
"George",
"Anjith",
""
],
[
"Unnervik",
"Alexander",
""
],
[
"Rahimi",
"Parsa",
""
],
[
"Marcel",
"Sébastien",
""
],
[
"Neto",
"Pedro C.",
""
],
[
"Huber",
"Marco",
""
],
[
"Kolf",
"Jan Niklas",
""
],
[
"Damer",
"Naser",
""
],
[
"Boutros",
"Fadi",
""
],
[
"Cardoso",
"Jaime S.",
""
],
[
"Sequeira",
"Ana F.",
""
],
[
"Atzori",
"Andrea",
""
],
[
"Fenu",
"Gianni",
""
],
[
"Marras",
"Mirko",
""
],
[
"Štruc",
"Vitomir",
""
],
[
"Yu",
"Jiang",
""
],
[
"Li",
"Zhangjie",
""
],
[
"Li",
"Jichun",
""
],
[
"Zhao",
"Weisong",
""
],
[
"Lei",
"Zhen",
""
],
[
"Zhu",
"Xiangyu",
""
],
[
"Zhang",
"Xiao-Yu",
""
],
[
"Biesseck",
"Bernardo",
""
],
[
"Vidal",
"Pedro",
""
],
[
"Coelho",
"Luiz",
""
],
[
"Granada",
"Roger",
""
],
[
"Menotti",
"David",
""
]
] |
2404.10407
|
Lisang Zhou
|
Feiyang Chen, Ziqian Luo, Lisang Zhou, Xueting Pan, Ying Jiang
|
Comprehensive Survey of Model Compression and Speed up for Vision
Transformers
| null |
Journal of Information, Technology and Policy (2024): 1-12
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vision Transformers (ViT) have marked a paradigm shift in computer vision,
outperforming state-of-the-art models across diverse tasks. However, their
practical deployment is hampered by high computational and memory demands. This
study addresses the challenge by evaluating four primary model compression
techniques: quantization, low-rank approximation, knowledge distillation, and
pruning. We methodically analyze and compare the efficacy of these techniques
and their combinations in optimizing ViTs for resource-constrained
environments. Our comprehensive experimental evaluation demonstrates that these
methods facilitate a balanced compromise between model accuracy and
computational efficiency, paving the way for wider application in edge
computing devices.
|
[
{
"created": "Tue, 16 Apr 2024 09:19:11 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"Chen",
"Feiyang",
""
],
[
"Luo",
"Ziqian",
""
],
[
"Zhou",
"Lisang",
""
],
[
"Pan",
"Xueting",
""
],
[
"Jiang",
"Ying",
""
]
] |
2404.10474
|
Luca Piano
|
Pietro Recalcati, Fabio Garcea, Luca Piano, Fabrizio Lamberti, Lia
Morra
|
Toward a Realistic Benchmark for Out-of-Distribution Detection
| null |
2023 IEEE 10th International Conference on Data Science and
Advanced Analytics (DSAA)
|
10.1109/DSAA60987.2023.10302486
| null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks are increasingly used in a wide range of technologies
and services, but remain highly susceptible to out-of-distribution (OOD)
samples, that is, samples drawn from a different distribution than the original
training set. A common approach to address this issue is to endow deep neural
networks with the ability to detect OOD samples. Several benchmarks have been
proposed to design and validate OOD detection techniques. However, many of them
are based on far-OOD samples drawn from very different distributions, and thus
lack the complexity needed to capture the nuances of real-world scenarios. In
this work, we introduce a comprehensive benchmark for OOD detection, based on
ImageNet and Places365, that assigns individual classes as in-distribution or
out-of-distribution depending on the semantic similarity with the training set.
Several techniques can be used to determine which classes should be considered
in-distribution, yielding benchmarks with varying properties. Experimental
results on different OOD detection techniques show how their measured efficacy
depends on the selected benchmark and how confidence-based techniques may
outperform classifier-based ones on near-OOD samples.
|
[
{
"created": "Tue, 16 Apr 2024 11:29:43 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"Recalcati",
"Pietro",
""
],
[
"Garcea",
"Fabio",
""
],
[
"Piano",
"Luca",
""
],
[
"Lamberti",
"Fabrizio",
""
],
[
"Morra",
"Lia",
""
]
] |
2404.10646
|
Niklas Strau{\ss}
|
Niklas Strau{\ss}, Lukas Rottkamp, Sebastian Schmoll, Matthias Schubert
|
Efficient Parking Search using Shared Fleet Data
|
Long Version; published at 2021 22nd IEEE International Conference on
Mobile Data Management (MDM)
|
2021 22nd IEEE International Conference on Mobile Data Management
(MDM)
|
10.1109/MDM52706.2021.00026
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding an available on-street parking spot is a relevant problem of
day-to-day life. In recent years, cities such as Melbourne and San Francisco
deployed sensors that provide real-time information about the occupation of
parking spots. Finding a free parking spot in such a smart environment can be
modeled and solved as a Markov decision process (MDP). The problem has to
consider uncertainty as available parking spots might not remain available
until arrival due to other vehicles also claiming spots in the meantime.
Knowing the parking intention of every vehicle in the environment would
eliminate this uncertainty. Unfortunately, it does not currently seem realistic
to have such data from all vehicles. In contrast, acquiring data from a subset
of vehicles or a vehicle fleet appears feasible and has the potential to reduce
uncertainty.
In this paper, we examine the question of how useful sharing data within a
vehicle fleet might be for the search times of particular drivers. We use fleet
data to better estimate the availability of parking spots at arrival. Since
optimal solutions for large scenarios are infeasible, we base our method on
approximate solutions, which have been shown to perform well in single-agent
settings. Our experiments are conducted on a simulation using real-world and
synthetic data from the city of Melbourne. The results indicate that fleet data
can significantly reduce search times for an available parking spot.
|
[
{
"created": "Tue, 16 Apr 2024 15:20:28 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"Strauß",
"Niklas",
""
],
[
"Rottkamp",
"Lukas",
""
],
[
"Schmoll",
"Sebatian",
""
],
[
"Schubert",
"Matthias",
""
]
] |
2404.10683
|
Niklas Strau{\ss}
|
David Winkel, Niklas Strau{\ss}, Matthias Schubert, Thomas Seidl
|
Simplex Decomposition for Portfolio Allocation Constraints in
Reinforcement Learning
| null |
ECAI 2023 - 26th European Conference on Artificial Intelligence,
September 30 - October 4, 2023, Krakow, Poland
|
10.3233/FAIA230573
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Portfolio optimization tasks describe sequential decision problems in which
the investor's wealth is distributed across a set of assets. Allocation
constraints are used to enforce minimal or maximal investments into particular
subsets of assets to control for objectives such as limiting the portfolio's
exposure to a certain sector due to environmental concerns. Although methods
for constrained Reinforcement Learning (CRL) can optimize policies while
considering allocation constraints, it can be observed that these general
methods yield suboptimal results. In this paper, we propose a novel approach to
handle allocation constraints based on a decomposition of the constraint action
space into a set of unconstrained allocation problems. In particular, we
examine this approach for the case of two constraints. For example, an investor
may wish to invest at least a certain percentage of the portfolio into green
technologies while limiting the investment in the fossil energy sector. We show
that the action space of the task is equivalent to the decomposed action space,
and introduce a new reinforcement learning (RL) approach CAOSD, which is built
on top of the decomposition. The experimental evaluation on real-world
Nasdaq-100 data demonstrates that our approach consistently outperforms
state-of-the-art CRL benchmarks for portfolio optimization.
|
[
{
"created": "Tue, 16 Apr 2024 16:00:59 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"Winkel",
"David",
""
],
[
"Strauß",
"Niklas",
""
],
[
"Schubert",
"Matthias",
""
],
[
"Seidl",
"Thomas",
""
]
] |
2404.10700
|
Georgy Perevozchikov
|
Georgy Perevozchikov, Nancy Mehta, Mahmoud Afifi and Radu Timofte
|
Rawformer: Unpaired Raw-to-Raw Translation for Learnable Camera ISPs
|
Accepted by ECCV 2024
|
https://eccv.ecva.net/Conferences/2024
| null | null |
eess.IV cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Modern smartphone camera quality heavily relies on the image signal processor
(ISP) to enhance captured raw images, utilizing carefully designed modules to
produce final output images encoded in a standard color space (e.g., sRGB).
Neural-based end-to-end learnable ISPs offer promising advancements,
potentially replacing traditional ISPs with their ability to adapt without
requiring extensive tuning for each new camera model, as is often the case for
nearly every module in traditional ISPs. However, the key challenge with the
recent learning-based ISPs is the need to collect large paired datasets for
each distinct camera model due to the influence of intrinsic camera
characteristics on the formation of input raw images. This paper tackles this
challenge by introducing a novel method for unpaired learning of raw-to-raw
translation across diverse cameras. Specifically, we propose Rawformer, an
unsupervised Transformer-based encoder-decoder method for raw-to-raw
translation. It accurately maps raw images captured by a certain camera to the
target camera, facilitating the generalization of learnable ISPs to new unseen
cameras. Our method demonstrates superior performance on real camera datasets,
achieving higher accuracy compared to previous state-of-the-art techniques, and
preserving a more robust correlation between the original and translated raw
images. The codes and the pretrained models are available at
https://github.com/gosha20777/rawformer.
|
[
{
"created": "Tue, 16 Apr 2024 16:17:48 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2024 14:09:28 GMT",
"version": "v2"
}
] |
2024-07-16
|
[
[
"Perevozchikov",
"Georgy",
""
],
[
"Mehta",
"Nancy",
""
],
[
"Afifi",
"Mahmoud",
""
],
[
"Timofte",
"Radu",
""
]
] |
2404.10719
|
Shusheng Xu
|
Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei,
Guangju Wang, Chao Yu, Yi Wu
|
Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study
|
16 pages, 2 figures, 14 tables
|
ICML 2024
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement Learning from Human Feedback (RLHF) is currently the most
widely used method to align large language models (LLMs) with human
preferences. Existing RLHF methods can be roughly categorized as either
reward-based or reward-free. Novel applications such as ChatGPT and Claude
leverage reward-based methods that first learn a reward model and apply
actor-critic algorithms, such as Proximal Policy Optimization (PPO). However,
in academic benchmarks, state-of-the-art results are often achieved via
reward-free methods, such as Direct Preference Optimization (DPO). Is DPO truly
superior to PPO? Why does PPO perform poorly on these benchmarks? In this
paper, we first conduct both theoretical and empirical studies on the
algorithmic properties of DPO and show that DPO may have fundamental
limitations. Moreover, we also comprehensively examine PPO and reveal the key
factors for the best performances of PPO in fine-tuning LLMs. Finally, we
benchmark DPO and PPO across a collection of RLHF testbeds, ranging from
dialogue to code generation. Experiment results demonstrate that PPO is able to
surpass other alignment methods in all cases and achieve state-of-the-art
results in challenging code competitions. Our code is publicly available at
https://github.com/openpsi-project/ReaLHF.
|
[
{
"created": "Tue, 16 Apr 2024 16:51:53 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Apr 2024 11:58:54 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Oct 2024 08:30:17 GMT",
"version": "v3"
}
] |
2024-10-11
|
[
[
"Xu",
"Shusheng",
""
],
[
"Fu",
"Wei",
""
],
[
"Gao",
"Jiaxuan",
""
],
[
"Ye",
"Wenjie",
""
],
[
"Liu",
"Weilin",
""
],
[
"Mei",
"Zhiyu",
""
],
[
"Wang",
"Guangju",
""
],
[
"Yu",
"Chao",
""
],
[
"Wu",
"Yi",
""
]
] |
2404.10786
|
Soumyendu Sarkar
|
Soumyendu Sarkar, Avisek Naug, Antonio Guillen, Ricardo Luna, Vineet
Gundecha, Ashwin Ramesh Babu, Sajad Mousavi
|
Sustainability of Data Center Digital Twins with Reinforcement Learning
|
2024 Proceedings of the AAAI Conference on Artificial Intelligence
|
Proceedings of the AAAI Conference on Artificial Intelligence,
vol. 38, no. 20, pp. 22322-22330, Mar. 2024
|
10.1609/aaai.v38i20.30238
| null |
cs.DC cs.AI cs.LG cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid growth of machine learning (ML) has led to an increased demand for
computational power, resulting in larger data centers (DCs) and higher energy
consumption. To address this issue and reduce carbon emissions, intelligent
design and control of DC components such as IT servers, cabinets, HVAC cooling,
flexible load shifting, and battery energy storage are essential. However, the
complexity of designing and controlling them in tandem presents a significant
challenge. While some individual components like CFD-based design and
Reinforcement Learning (RL) based HVAC control have been researched, there's a
gap in the holistic design and optimization covering all elements
simultaneously. To tackle this, we've developed DCRL-Green, a multi-agent RL
environment that empowers the ML community to design data centers and research,
develop, and refine RL controllers for carbon footprint reduction in DCs. It is
a flexible, modular, scalable, and configurable platform that can handle large
High Performance Computing (HPC) clusters. Furthermore, in its default setup,
DCRL-Green provides a benchmark for evaluating single as well as multi-agent RL
algorithms. It easily allows users to subclass the default implementations and
design their own control approaches, encouraging community development for
sustainable data centers. Open Source Link:
https://github.com/HewlettPackard/dc-rl
|
[
{
"created": "Tue, 16 Apr 2024 18:22:30 GMT",
"version": "v1"
}
] |
2024-04-18
|
[
[
"Sarkar",
"Soumyendu",
""
],
[
"Naug",
"Avisek",
""
],
[
"Guillen",
"Antonio",
""
],
[
"Luna",
"Ricardo",
""
],
[
"Gundecha",
"Vineet",
""
],
[
"Babu",
"Ashwin Ramesh",
""
],
[
"Mousavi",
"Sajad",
""
]
] |
2404.10904
|
Florian Blume
|
Marah Halawa and Florian Blume and Pia Bideau and Martin Maier and
Rasha Abdel Rahman and Olaf Hellwich
|
Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression
Recognition
|
The paper will appear in the CVPR 2024 workshops proceedings
|
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, 2024, pp. 4604-4614
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human communication is multi-modal; e.g., face-to-face interaction involves
auditory signals (speech) and visual signals (face movements and hand
gestures). Hence, it is essential to exploit multiple modalities when designing
machine learning-based facial expression recognition systems. In addition,
given the ever-growing quantities of video data that capture human facial
expressions, such systems should utilize raw unlabeled videos without requiring
expensive annotations. Therefore, in this work, we employ a multitask
multi-modal self-supervised learning method for facial expression recognition
from in-the-wild video data. Our model combines three self-supervised objective
functions: First, a multi-modal contrastive loss, that pulls diverse data
modalities of the same video together in the representation space. Second, a
multi-modal clustering loss that preserves the semantic structure of input data
in the representation space. Finally, a multi-modal data reconstruction loss.
We conduct a comprehensive study on this multimodal multi-task self-supervised
learning method on three facial expression recognition benchmarks. To that end,
we examine the performance of learning through different combinations of
self-supervised tasks on the facial expression recognition downstream task. Our
model ConCluGen outperforms several multi-modal self-supervised and fully
supervised baselines on the CMU-MOSEI dataset. Our results generally show that
multi-modal self-supervision tasks offer large performance gains for
challenging tasks such as facial expression recognition, while also reducing
the amount of manual annotations required. We release our pre-trained models as
well as source code publicly.
|
[
{
"created": "Tue, 16 Apr 2024 20:51:36 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Sep 2024 09:42:07 GMT",
"version": "v2"
}
] |
2024-09-05
|
[
[
"Halawa",
"Marah",
""
],
[
"Blume",
"Florian",
""
],
[
"Bideau",
"Pia",
""
],
[
"Maier",
"Martin",
""
],
[
"Rahman",
"Rasha Abdel",
""
],
[
"Hellwich",
"Olaf",
""
]
] |
2404.10991
|
Soumyendu Sarkar
|
Soumyendu Sarkar, Vineet Gundecha, Sahand Ghorbanpour, Alexander
Shmakov, Ashwin Ramesh Babu, Avisek Naug, Alexandre Pichard, Mathieu Cocho
|
Function Approximation for Reinforcement Learning Controller for Energy
from Spread Waves
|
IJCAI 2023, Proceedings of the Thirty-Second International Joint
Conference on Artificial Intelligence, August 2023
|
IJCAI 2023, Proceedings of the Thirty-Second International Joint
Conference on Artificial Intelligence, August 2023, Article No 688, Pages 6201
to 6209
|
10.24963/ijcai.2023/688
| null |
cs.AI cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The industrial multi-generator Wave Energy Converters (WEC) must handle
multiple simultaneous waves coming from different directions called spread
waves. These complex devices in challenging circumstances need controllers with
multiple objectives of energy capture efficiency, reduction of structural
stress to limit maintenance, and proactive protection against high waves. The
Multi-Agent Reinforcement Learning (MARL) controller trained with the Proximal
Policy Optimization (PPO) algorithm can handle these complexities. In this
paper, we explore different function approximations for the policy and critic
networks in modeling the sequential nature of the system dynamics and find that
they are key to better performance. We investigated the performance of a fully
connected neural network (FCN), LSTM, and Transformer model variants with
varying depths and gated residual connections. Our results show that the
transformer model of moderate depth with gated residual connections around the
multi-head attention, multi-layer perceptron, and the transformer block (STrXL)
proposed in this paper is optimal and boosts energy efficiency by an average of
22.1% for these complex spread waves over the existing spring damper (SD)
controller. Furthermore, unlike the default SD controller, the transformer
controller almost eliminated the mechanical stress from the rotational yaw
motion for angled waves. Demo: https://tinyurl.com/yueda3jh
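
As a hedged sketch of the gated residual idea described above, the module
below wraps an arbitrary sublayer with a learned sigmoid gate; the class name
and exact gating form are illustrative assumptions, not the paper's STrXL
implementation.

```python
import torch
import torch.nn as nn

class GatedResidual(nn.Module):
    """Wraps a sublayer (e.g., attention or an MLP) with a learned sigmoid
    gate that blends the residual input with the sublayer output."""
    def __init__(self, dim, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, x):
        y = self.sublayer(x)
        g = torch.sigmoid(self.gate(torch.cat([x, y], dim=-1)))
        return g * y + (1.0 - g) * x  # gated blend instead of plain x + y

block = GatedResidual(64, nn.Sequential(nn.Linear(64, 64), nn.ReLU()))
out = block(torch.randn(8, 10, 64))  # (batch, seq, dim) shape is preserved
```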
|
[
{
"created": "Wed, 17 Apr 2024 02:04:10 GMT",
"version": "v1"
}
] |
2024-04-18
|
[
[
"Sarkar",
"Soumyendu",
""
],
[
"Gundecha",
"Vineet",
""
],
[
"Ghorbanpour",
"Sahand",
""
],
[
"Shmakov",
"Alexander",
""
],
[
"Babu",
"Ashwin Ramesh",
""
],
[
"Naug",
"Avisek",
""
],
[
"Pichard",
"Alexandre",
""
],
[
"Cocho",
"Mathieu",
""
]
] |
2404.11015
|
Zhaorui Zhang
|
Haotian Xu, Zhaorui Zhang, Sheng Di, Benben Liu, Khalid Ayed Alharthi,
Jiannong Cao
|
FedFa: A Fully Asynchronous Training Paradigm for Federated Learning
| null |
IJCAI 2024: the 33rd International Joint Conference on Artificial
Intelligence
| null | null |
cs.LG cs.AI cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Federated learning has been identified as an efficient decentralized training
paradigm for scaling machine learning model training across a large number of
devices while guaranteeing the data privacy of the participants. FedAvg has
become a foundational parameter update strategy for federated learning, which
promises to mitigate the effect of heterogeneous data across clients and
guarantee convergence. However, the synchronous parameter-update barrier at
each communication round forces clients to spend significant time waiting,
slowing down the training procedure. Therefore, recent state-of-the-art
solutions propose semi-asynchronous approaches to mitigate the waiting-time
cost while guaranteeing convergence. Nevertheless, these emerging
semi-asynchronous approaches cannot eliminate the waiting time completely.
We propose a fully asynchronous training paradigm, called FedFa, which can
guarantee model convergence and eliminate the waiting time completely for
federated learning by using a few buffered results on the server for parameter
updating. Further, we provide theoretical proof of the convergence rate for our
proposed FedFa. Extensive experimental results indicate our approach
effectively improves the training performance of federated learning by up to 6x
and 4x speedup compared to the state-of-the-art synchronous and
semi-asynchronous strategies while retaining high accuracy in both IID and
Non-IID scenarios.
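
A minimal sketch of the buffered-update idea (applying each client update on
arrival, smoothed over a few recent buffered updates, with no synchronization
barrier) follows; the class name, buffer size, and update rule are
illustrative assumptions, not the paper's exact algorithm.

```python
from collections import deque

class BufferedAsyncServer:
    """Hypothetical server sketching a FedFa-style buffered update."""
    def __init__(self, global_params, buffer_size=4, lr=1.0):
        self.global_params = dict(global_params)
        self.buffer = deque(maxlen=buffer_size)  # most recent client deltas
        self.lr = lr

    def receive(self, client_delta):
        # No waiting: each arriving delta is applied immediately, averaged
        # with the buffered history to dampen staleness.
        self.buffer.append(client_delta)
        for k in self.global_params:
            avg = sum(d[k] for d in self.buffer) / len(self.buffer)
            self.global_params[k] += self.lr * avg
        return self.global_params  # stale-tolerant model sent straight back

server = BufferedAsyncServer({"w": 0.0})
print(server.receive({"w": 0.5}))  # {'w': 0.5}
```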
|
[
{
"created": "Wed, 17 Apr 2024 02:46:59 GMT",
"version": "v1"
},
{
"created": "Sat, 20 Apr 2024 14:26:07 GMT",
"version": "v2"
}
] |
2024-04-23
|
[
[
"Xu",
"Haotian",
""
],
[
"Zhang",
"Zhaorui",
""
],
[
"Di",
"Sheng",
""
],
[
"Liu",
"Benben",
""
],
[
"Alharthi",
"Khalid Ayed",
""
],
[
"Cao",
"Jiannong",
""
]
] |
2404.11122
|
Pierre Lepagnol
|
Pierre Lepagnol (LISN), Thomas Gerald (LISN), Sahar Ghannay (LISN),
Christophe Servan (STL, ILES), Sophie Rosset (LISN)
|
Small Language Models are Good Too: An Empirical Study of Zero-Shot
Classification
| null |
LREC-COLING 2024, May 2024, TURIN, Italy
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study is part of the debate on the efficiency of large versus small
language models for text classification by prompting. We assess the
performance of small language models in zero-shot text classification,
challenging the prevailing dominance of large models. Across 15 datasets, our
investigation benchmarks language models from 77M to 40B parameters using
different architectures and scoring functions. Our findings reveal that small
models can effectively classify texts, performing on par with or surpassing
their larger counterparts. We developed and shared a comprehensive open-source
repository
that encapsulates our methodologies. This research underscores the notion that
bigger isn't always better, suggesting that resource-efficient small models may
offer viable solutions for specific data classification challenges.
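
A hedged sketch of zero-shot classification by prompting is shown below: each
candidate label is scored by the causal LM's length-normalized log-likelihood
of the completed prompt. The model name and prompt template are illustrative,
not the paper's exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def classify(text, labels):
    scores = {}
    for label in labels:
        ids = tok(f"Text: {text}\nLabel: {label}", return_tensors="pt").input_ids
        with torch.no_grad():
            out = lm(ids, labels=ids)      # .loss is the mean token NLL
        scores[label] = -out.loss.item()   # higher score = more likely label
    return max(scores, key=scores.get)

print(classify("The movie was a delight.", ["positive", "negative"]))
```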
|
[
{
"created": "Wed, 17 Apr 2024 07:10:28 GMT",
"version": "v1"
}
] |
2024-04-18
|
[
[
"Lepagnol",
"Pierre",
"",
"LISN"
],
[
"Gerald",
"Thomas",
"",
"LISN"
],
[
"Ghannay",
"Sahar",
"",
"LISN"
],
[
"Servan",
"Christophe",
"",
"STL, ILES"
],
[
"Rosset",
"Sophie",
"",
"LISN"
]
] |
2404.11265
|
Zixuan Zhu
|
Zixuan Zhu, Rui Wang, Cong Zou, Lihua Jing
|
The Victim and The Beneficiary: Exploiting a Poisoned Model to Train a
Clean Model on Poisoned Data
|
13 pages, 6 figures, published to ICCV
|
Proceedings of the IEEE/CVF International Conference on Computer
Vision (ICCV). 2023: 155-164
|
10.1109/ICCV51070.2023.00021
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, backdoor attacks have posed a serious security threat to the
training process of deep neural networks (DNNs). The attacked model behaves
normally on benign samples but outputs a specific result when the trigger is
present. However, compared with the rapid progress of backdoor attacks,
existing defenses struggle to deal with these threats effectively or require
benign samples to work, which may be unavailable in real scenarios. In
this paper, we find that the poisoned samples and benign samples can be
distinguished with prediction entropy. This inspires us to propose a novel
dual-network training framework: The Victim and The Beneficiary (V&B), which
exploits a poisoned model to train a clean model without extra benign samples.
Firstly, we sacrifice the Victim network to be a powerful poisoned sample
detector by training on suspicious samples. Secondly, we train the Beneficiary
network on the credible samples selected by the Victim to inhibit backdoor
injection. Thirdly, a semi-supervised suppression strategy is adopted for
erasing potential backdoors and improving model performance. Furthermore, to
better inhibit missed poisoned samples, we propose a strong data augmentation
method, AttentionMix, which works well with our proposed V&B framework.
Extensive experiments on two widely used datasets against 6 state-of-the-art
attacks demonstrate that our framework is effective in preventing backdoor
injection and robust to various attacks while maintaining the performance on
benign samples. Our code is available at https://github.com/Zixuan-Zhu/VaB.
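
A minimal sketch of the prediction-entropy signal used to separate poisoned
from benign samples follows, assuming PyTorch; the threshold and function
names are illustrative, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    """Shannon entropy of the softmax prediction. Under the attacked model,
    poisoned samples tend to give low-entropy (over-confident) predictions."""
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def split_by_entropy(logits, threshold):
    h = prediction_entropy(logits)
    suspicious = h < threshold  # likely poisoned: trains the Victim detector
    credible = ~suspicious      # credible samples train the Beneficiary
    return suspicious, credible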
|
[
{
"created": "Wed, 17 Apr 2024 11:15:58 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2024 15:59:32 GMT",
"version": "v2"
}
] |
2024-06-03
|
[
[
"Zhu",
"Zixuan",
""
],
[
"Wang",
"Rui",
""
],
[
"Zou",
"Cong",
""
],
[
"Jing",
"Lihua",
""
]
] |
2404.11335
|
Vladimir Somers
|
Vladimir Somers, Victor Joos, Anthony Cioppa, Silvio Giancola, Seyed
Abolfazl Ghasemzadeh, Floriane Magera, Baptiste Standaert, Amir Mohammad
Mansourian, Xin Zhou, Shohreh Kasaei, Bernard Ghanem, Alexandre Alahi, Marc
Van Droogenbroeck, Christophe De Vleeschouwer
|
SoccerNet Game State Reconstruction: End-to-End Athlete Tracking and
Identification on a Minimap
| null |
2024 IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Work. (CVPRW)
| null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Tracking and identifying athletes on the pitch holds a central role in
collecting essential insights from the game, such as estimating the total
distance covered by players or understanding team tactics. This tracking and
identification process is crucial for reconstructing the game state, defined by
the athletes' positions and identities on a 2D top view of the pitch (i.e., a
minimap). However, reconstructing the game state from videos captured by a
single camera is challenging. It requires understanding the position of the
athletes and the viewpoint of the camera to localize and identify players
within the field. In this work, we formalize the task of Game State
Reconstruction and introduce SoccerNet-GSR, a novel Game State Reconstruction
dataset focusing on football videos. SoccerNet-GSR is composed of 200 video
sequences of 30 seconds, annotated with 9.37 million line points for pitch
localization and camera calibration, as well as over 2.36 million athlete
positions on the pitch with their respective role, team, and jersey number.
Furthermore, we introduce GS-HOTA, a novel metric to evaluate game state
reconstruction methods. Finally, we propose and release an end-to-end baseline
for game state reconstruction, bootstrapping the research on this task. Our
experiments show that GSR is a challenging novel task, which opens the field
for future research. Our dataset and codebase are publicly available at
https://github.com/SoccerNet/sn-gamestate.
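
As a hedged sketch of the minimap projection step, a homography fitted on
pitch line correspondences maps image detections to 2D pitch coordinates; the
point values below are illustrative placeholders, not part of the dataset.

```python
import numpy as np
import cv2

# Illustrative image-to-pitch correspondences from pitch line points.
image_pts = np.array([[100, 400], [800, 380], [450, 200], [450, 600]], float)
pitch_pts = np.array([[0, 0], [105, 0], [52.5, 34], [52.5, -34]], float)
H, _ = cv2.findHomography(image_pts, pitch_pts)

foot = np.array([300.0, 450.0, 1.0])  # a detected player's foot point
XY = H @ foot
print(XY[:2] / XY[2])                 # player position on the minimap
```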
|
[
{
"created": "Wed, 17 Apr 2024 12:53:45 GMT",
"version": "v1"
}
] |
2024-07-26
|
[
[
"Somers",
"Vladimir",
""
],
[
"Joos",
"Victor",
""
],
[
"Cioppa",
"Anthony",
""
],
[
"Giancola",
"Silvio",
""
],
[
"Ghasemzadeh",
"Seyed Abolfazl",
""
],
[
"Magera",
"Floriane",
""
],
[
"Standaert",
"Baptiste",
""
],
[
"Mansourian",
"Amir Mohammad",
""
],
[
"Zhou",
"Xin",
""
],
[
"Kasaei",
"Shohreh",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Alahi",
"Alexandre",
""
],
[
"Van Droogenbroeck",
"Marc",
""
],
[
"De Vleeschouwer",
"Christophe",
""
]
] |
2404.11691
|
Mohit Gupta
|
Vansh Gupta, Mohit Gupta, Jai Garg, Nitesh Garg
|
Improvement in Semantic Address Matching using Natural Language
Processing
|
5 pages, 7 tables, 2021 2nd International Conference for Emerging
Technology (INCET)
|
2021 2nd International Conference for Emerging Technology (INCET),
Belagavi, India, 2021, pp. 1-5
|
10.1109/INCET51464.2021.9456342
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Address matching is an important task for many businesses, especially
delivery and takeout companies, as it helps them retrieve a given address from
their data warehouse. Existing solutions use string similarity and edit
distance algorithms to find similar addresses in the address database, but
these algorithms do not work effectively with redundant, unstructured, or
incomplete address data. This paper discusses a semantic address matching
technique by which we can find a particular address from a list of possible
addresses. We also review existing practices and their shortcomings. Semantic
address matching is essentially an NLP task in the field of deep learning,
and through this technique we can overcome the drawbacks of existing methods,
such as redundant or abbreviated data. The solution uses OCR on invoices to
extract addresses and create a pool of address data. This data is then fed to
the BM-25 algorithm, which scores the best matching entries. The top
candidates are then passed through BERT to select the best possible result
from the similar queries. Our investigation shows that our methodology greatly
improves both the accuracy and recall of existing state-of-the-art
techniques.
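
The two-stage retrieval described above can be sketched as follows; the
libraries (rank_bm25, sentence-transformers) and the encoder name are
stand-ins chosen for illustration, not the paper's exact stack.

```python
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

addresses = ["12 Baker St, London", "12 Baker Street London UK",
             "221B Baker St., London"]
bm25 = BM25Okapi([a.lower().split() for a in addresses])

query = "12 baker street london"
scores = bm25.get_scores(query.split())           # stage 1: lexical scoring
top = sorted(range(len(addresses)), key=lambda i: scores[i], reverse=True)[:2]

enc = SentenceTransformer("all-MiniLM-L6-v2")     # stage 2: semantic rerank
q_emb = enc.encode(query, convert_to_tensor=True)
best = max(top, key=lambda i: util.cos_sim(
    q_emb, enc.encode(addresses[i], convert_to_tensor=True)).item())
print(addresses[best])
```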
|
[
{
"created": "Wed, 17 Apr 2024 18:42:36 GMT",
"version": "v1"
}
] |
2024-04-19
|
[
[
"Gupta",
"Vansh",
""
],
[
"Gupta",
"Mohit",
""
],
[
"Garg",
"Jai",
""
],
[
"Garg",
"Nitesh",
""
]
] |
2404.11875
|
Adrita Barua
|
Adrita Barua, Cara Widmer, Pascal Hitzler
|
Concept Induction using LLMs: a user experiment for assessment
| null |
Neural-Symbolic Learning and Reasoning, NeSy 2024, Lecture Notes
in Computer Science, vol. 14980, pp. 132-148, 2024
|
10.1007/978-3-031-71170-1
| null |
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Explainable Artificial Intelligence (XAI) poses a significant challenge in
providing transparent and understandable insights into complex AI models.
Traditional post-hoc algorithms, while useful, often struggle to deliver
interpretable explanations. Concept-based models offer a promising avenue by
incorporating explicit representations of concepts to enhance interpretability.
However, existing research on automatic concept discovery methods is often
limited by lower-level concepts, costly human annotation requirements, and a
restricted domain of background knowledge. In this study, we explore the
potential of a Large Language Model (LLM), specifically GPT-4, by leveraging
its domain knowledge and common-sense capability to generate high-level
concepts that are meaningful as explanations for humans, for a specific setting
of image classification. We use minimal textual object information available in
the data via prompting to facilitate this process. To evaluate the output, we
compare the concepts generated by the LLM with two other methods: concepts
generated by humans and the ECII heuristic concept induction system. Since
there is no established metric to determine the human understandability of
concepts, we conducted a human study to assess the effectiveness of the
LLM-generated concepts. Our findings indicate that while human-generated
explanations remain superior, concepts derived from GPT-4 are more
comprehensible to humans compared to those generated by ECII.
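
A hedged sketch of the prompting step follows: an LLM is asked for high-level
concepts given minimal object labels. The prompt wording and helper name are
illustrative, not the paper's exact prompt; it assumes OPENAI_API_KEY is set.

```python
from openai import OpenAI

client = OpenAI()

def induce_concepts(object_labels):
    # Minimal textual object information is passed via the prompt.
    prompt = ("These objects appear in images of one class: "
              + ", ".join(object_labels)
              + ". Suggest a few high-level concepts that explain what the "
                "images have in common.")
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(induce_concepts(["stethoscope", "scrubs", "syringe"]))
```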
|
[
{
"created": "Thu, 18 Apr 2024 03:22:02 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2024 20:26:55 GMT",
"version": "v2"
}
] |
2024-09-24
|
[
[
"Barua",
"Adrita",
""
],
[
"Widmer",
"Cara",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
2404.11917
|
Dawei Zhan
|
Dawei Zhan
|
Expected Coordinate Improvement for High-Dimensional Bayesian
Optimization
| null |
Swarm and Evolutionary Computation, 2024, 91, 101745
|
10.1016/j.swevo.2024.101745
| null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
The Bayesian optimization (BO) algorithm is very popular for solving
low-dimensional expensive optimization problems. Extending Bayesian
optimization to high dimensions is a meaningful but challenging task. One of
the
major challenges is that it is difficult to find good infill solutions as the
acquisition functions are also high-dimensional. In this work, we propose the
expected coordinate improvement (ECI) criterion for high-dimensional Bayesian
optimization. The proposed ECI criterion measures the potential improvement we
can get by moving the current best solution along one coordinate. The proposed
approach selects the coordinate with the highest ECI value to refine in each
iteration and covers all the coordinates gradually by iterating over the
coordinates. The greatest advantage of the proposed ECI-BO (expected coordinate
improvement based Bayesian optimization) algorithm over the standard BO
algorithm is that the infill selection problem of the proposed algorithm is
always a one-dimensional problem and thus can be easily solved. Numerical
experiments show that the proposed algorithm can achieve significantly better
results than the standard BO algorithm and competitive results when compared
with five state-of-the-art high-dimensional BOs. This work provides a simple
but efficient approach for high-dimensional Bayesian optimization.
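
A minimal sketch of the coordinate-wise expected improvement computation is
given below, assuming a scikit-learn-style GP surrogate exposing
predict(X, return_std=True); function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def eci(gp, x_best, f_best, coord, grid):
    """Expected improvement over a 1-D grid of values for one coordinate,
    with all other coordinates held at the incumbent x_best."""
    X = np.tile(x_best, (len(grid), 1))
    X[:, coord] = grid                    # vary only this coordinate
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (f_best - mu) / sigma             # minimization convention
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Per iteration: compute eci() for every coordinate, move the incumbent
# along the coordinate with the largest value, then refit the GP.
```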
|
[
{
"created": "Thu, 18 Apr 2024 05:48:15 GMT",
"version": "v1"
}
] |
2024-10-15
|
[
[
"Zhan",
"Dawei",
""
]
] |
2404.12062
|
Jinwu Wang
|
Jinwu Wang, Wei Mao, Miaomiao Liu
|
MIDGET: Music Conditioned 3D Dance Generation
|
12 pages, 6 figures. Published in AI 2023: Advances in Artificial
Intelligence
|
In Australasian Joint Conference on Artificial Intelligence (pp.
277-288). Singapore: Springer Nature Singapore, 2023
|
10.1007/978-981-99-8388-9_23
| null |
cs.SD cs.CV cs.GR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a MusIc conditioned 3D Dance GEneraTion model,
named MIDGET, based on a Dance motion Vector Quantised Variational AutoEncoder
(VQ-VAE) model and a Motion Generative Pre-Training (GPT) model, to generate
vibrant and high-quality dances that match the music rhythm. To tackle
challenges in the field, we introduce three new components: 1) a pre-trained
memory codebook based on the Motion VQ-VAE model to store different human pose
codes, 2) a Motion GPT model that generates pose codes from music and motion
encoders, and 3) a simple framework for music feature extraction. We compare
our model with existing state-of-the-art models and perform ablation
experiments on AIST++, the largest publicly available music-dance dataset.
Experiments
demonstrate that our proposed framework achieves state-of-the-art performance
on motion quality and its alignment with the music.
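
A minimal sketch, assuming PyTorch, of the codebook lookup at the heart of a
Motion VQ-VAE follows: encoder outputs are snapped to their nearest pose code,
yielding the discrete tokens a GPT-style model can predict. Names and shapes
are illustrative, not the paper's implementation.

```python
import torch

def quantize(z, codebook):
    """z: (batch, dim) encoder outputs; codebook: (K, dim) pose codes."""
    idx = torch.cdist(z, codebook).argmin(dim=1)  # nearest code per sample
    z_q = codebook[idx]
    # Straight-through estimator so gradients still reach the encoder.
    return z + (z_q - z).detach(), idx

z_q, tokens = quantize(torch.randn(4, 8), torch.randn(16, 8))
```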
|
[
{
"created": "Thu, 18 Apr 2024 10:20:37 GMT",
"version": "v1"
}
] |
2024-04-19
|
[
[
"Wang",
"Jinwu",
""
],
[
"Mao",
"Wei",
""
],
[
"Liu",
"Miaomiao",
""
]
] |