Each row pairs two arXiv records (columns suffixed `_1` and `_2`), giving 14 columns in total:

| Column | Type | Details |
|---|---|---|
| id_1, id_2 | string | lengths 9–14 |
| abstract_1, abstract_2 | string | lengths 296–1.92k |
| authors_1, authors_2 | string | lengths 6–1.32k |
| title_1, title_2 | string | lengths 12–159 |
| journal-ref_1, journal-ref_2 | string | 110 classes |
| categories_1, categories_2 | string | 154 classes |
| update_date_1, update_date_2 | timestamp[ns] | |
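As a minimal sketch of how rows with this schema could be inspected (assuming the table comes from a Hugging Face dataset; the repository id `user/paired-arxiv-metadata` below is a hypothetical placeholder, not the actual dataset path):

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the actual dataset path.
ds = load_dataset("user/paired-arxiv-metadata", split="train")

# Each row pairs two arXiv records; the *_1 / *_2 suffixes select either side.
row = ds[0]
print(row["id_1"], "-", row["title_1"])
print(row["id_2"], "-", row["title_2"])

# update_date_* are timestamps; abstracts and author lists are free-text strings.
print(row["update_date_1"], "-", len(row["abstract_1"]), "characters in abstract_1")
```

The raw field values for each paired record follow below, in column order (id, abstract, authors, title, journal-ref, categories, update_date).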
2312.11511
|
We present ComplexityNet, a streamlined language model designed for assessing
task complexity. This model predicts the likelihood of accurate output by
various language models, each with different capabilities. Our initial
application of ComplexityNet involves the Mostly Basic Python Problems (MBPP)
dataset. We pioneered the creation of the first set of labels to define task
complexity. ComplexityNet achieved a notable 79% accuracy in determining task
complexity, a significant improvement over the 34% accuracy of the original,
non-fine-tuned model. Furthermore, ComplexityNet effectively reduces
computational resource usage by 90% compared to using the highest complexity
model, while maintaining a high code generation accuracy of 86.7%. This study
demonstrates that fine-tuning smaller models to categorize tasks based on their
complexity can lead to a more balanced trade-off between accuracy and
efficiency in the use of Large Language Models. Our findings suggest a
promising direction for optimizing LLM applications, especially in
resource-constrained environments.
|
Henry Bae, Aghyad Deeb, Alex Fleury, Kehang Zhu
|
ComplexityNet: Increasing LLM Inference Efficiency by Learning Task
Complexity
| null |
cs.CL cs.AI cs.LG
| 2024-10-16T00:00:00 |
2502.13647
|
Instruction tuning in low-resource languages remains underexplored due to
limited text data, particularly in government and cultural domains. To address
this, we introduce and open-source a large-scale (10,600 samples)
instruction-following (IFT) dataset, covering key institutional and cultural
knowledge relevant to Kazakhstan. Our dataset enhances LLMs' understanding of
procedural, legal, and structural governance topics. We employ LLM-assisted
data generation, comparing open-weight and closed-weight models for dataset
construction, and select GPT-4o as the backbone. Each entry in our dataset
undergoes full manual verification to ensure high quality. We also show that
fine-tuning Qwen, Falcon, and Gemma on our dataset leads to consistent
performance improvements in both multiple-choice and generative tasks,
demonstrating the potential of LLM-assisted instruction tuning for low-resource
languages.
|
Nurkhan Laiyk, Daniil Orel, Rituraj Joshi, Maiya Goloburda, Yuxia
Wang, Preslav Nakov, Fajri Koto
|
Instruction Tuning on Public Government and Cultural Data for
Low-Resource Language: a Case Study in Kazakh
| null |
cs.CL
| 2025-02-20T00:00:00 |
2309.02427
|
Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence.
|
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L.
Griffiths
|
Cognitive Architectures for Language Agents
| null |
cs.AI cs.CL cs.LG cs.SC
| 2024-03-18T00:00:00 |
1804.09552
|
Transcribing voice communications in NASA's launch control center is
important for information utilization. However, automatic speech recognition in
this environment is particularly challenging due to the lack of training data,
unfamiliar words in acronyms, multiple different speakers and accents, and
conversational characteristics of speaking. We used bidirectional deep
recurrent neural networks to train and test speech recognition performance. We
showed that data augmentation and custom language models can improve speech
recognition accuracy. Transcribing communications from the launch control
center will help the machine analyze information and accelerate knowledge
generation.
|
Kyongsik Yun, Joseph Osborne, Madison Lee, Thomas Lu, Edward Chow
|
Automatic speech recognition for launch control center communication
using recurrent neural networks with data augmentation and custom language
model
| null |
cs.CL cs.HC
| 2018-04-26T00:00:00 |
1206.6423
|
As robots become more ubiquitous and capable, it becomes ever more important
to enable untrained users to easily interact with them. Recently, this has led
to study of the language grounding problem, where the goal is to extract
representations of the meanings of natural language tied to perception and
actuation in the physical world. In this paper, we present an approach for
joint learning of language and perception models for grounded attribute
induction. Our perception model includes attribute classifiers, for example to
detect object color and shape, and the language model is based on a
probabilistic categorial grammar that enables the construction of rich,
compositional meaning representations. The approach is evaluated on the task of
interpreting sentences that describe sets of objects in a physical workspace.
We demonstrate accurate task performance and effective latent-variable concept
induction in physically grounded scenes.
|
Cynthia Matuszek (University of Washington), Nicholas FitzGerald
(University of Washington), Luke Zettlemoyer (University of Washington),
Liefeng Bo (University of Washington), Dieter Fox (University of Washington)
|
A Joint Model of Language and Perception for Grounded Attribute Learning
| null |
cs.CL cs.LG cs.RO
| 2012-07-03T00:00:00 |
cmp-lg/9703001
|
In this paper, a method of domain adaptation for clustered language models is
developed. It is based on a previously developed clustering algorithm, but with
a modified optimisation criterion. The results are shown to be slightly
superior to the previously published 'Fillup' method, which can be used to
adapt standard n-gram models. However, the improvement both methods give
compared to models built from scratch on the adaptation data is quite small
(less than 11% relative improvement in word error rate). This suggests that
both methods are still unsatisfactory from a practical point of view.
|
Joerg P. Ueberla (Forum Technology - DRA Malvern)
|
Domain Adaptation with Clustered Language Models
| null |
cmp-lg cs.CL
| 2008-02-03T00:00:00 |
1911.03353
|
We introduce a new scientific named entity recognizer called SEPT, which
stands for Span Extractor with Pre-trained Transformers. In recent papers, span
extractors have been demonstrated to be a powerful model compared with sequence
labeling models. However, we discover that with the development of pre-trained
language models, the performance of span extractors appears to become similar
to sequence labeling models. To keep the advantages of span representation, we
modified the model by under-sampling to balance the positive and negative
samples and reduce the search space. Furthermore, we simplify the original
network architecture to combine the span extractor with BERT. Experiments
demonstrate that even the simplified architecture achieves the same performance and
SEPT achieves a new state-of-the-art result in scientific named entity
recognition even without relation information involved.
|
Tan Yan, Heyan Huang, Xian-Ling Mao
|
SEPT: Improving Scientific Named Entity Recognition with Span
Representation
| null |
cs.CL cs.IR
| 2020-10-14T00:00:00 |
2010.01063
|
Neural networks trained on natural language processing tasks capture syntax
even though it is not provided as a supervision signal. This indicates that
syntactic analysis is essential to the understanding of language in artificial
intelligence systems. This overview paper covers approaches of evaluating the
amount of syntactic information included in the representations of words for
different neural network architectures. We mainly summarize research on
English monolingual data on language modeling tasks and multilingual data for
neural machine translation systems and multilingual language models. We
describe which pre-trained models and representations of language are best
suited for transfer to syntactic tasks.
|
Tomasz Limisiewicz and David Mare\v{c}ek
|
Syntax Representation in Word Embeddings and Neural Networks -- A Survey
|
Proceedings of the 20th Conference ITAT 2020: Automata, Formal and
Natural Languages Workshop
|
cs.CL
| 2020-10-05T00:00:00 |
1806.09055
|
This paper addresses the scalability challenge of architecture search by
formulating the task in a differentiable manner. Unlike conventional approaches
of applying evolution or reinforcement learning over a discrete and
non-differentiable search space, our method is based on the continuous
relaxation of the architecture representation, allowing efficient search of the
architecture using gradient descent. Extensive experiments on CIFAR-10,
ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in
discovering high-performance convolutional architectures for image
classification and recurrent architectures for language modeling, while being
orders of magnitude faster than state-of-the-art non-differentiable techniques.
Our implementation has been made publicly available to facilitate further
research on efficient architecture search algorithms.
|
Hanxiao Liu, Karen Simonyan, Yiming Yang
|
DARTS: Differentiable Architecture Search
| null |
cs.LG cs.CL cs.CV stat.ML
| 2019-04-24T00:00:00 |
1401.2258
|
This work compares concept models for cross-language retrieval: First, we
adapt probabilistic Latent Semantic Analysis (pLSA) for multilingual documents.
Experiments with different weighting schemes show that a weighting method
favoring documents of similar length in both language sides gives best results.
Considering that both monolingual and multilingual Latent Dirichlet Allocation
(LDA) behave alike when applied for such documents, we use a training corpus
built on Wikipedia where all documents are length-normalized and obtain
improvements over previously reported scores for LDA. Another focus of our work
is on model combination. To this end we include Explicit Semantic Analysis
(ESA) in the experiments. We observe that ESA is not competitive with LDA in a
query based retrieval task on CLEF 2000 data. The combination of machine
translation with concept models increased performance by 21.1% map in
comparison to machine translation alone. Machine translation relies on parallel
corpora, which may not be available for many language pairs. We further explore
how much cross-lingual information can be carried over by a specific
information source in Wikipedia, namely linked text. The best results are
obtained using a language modeling approach, entirely without information from
parallel corpora. The need for smoothing raises interesting questions on
soundness and efficiency. Link models capture only a certain kind of
information and suggest weighting schemes to emphasize particular words. For a
combined model, another interesting question is therefore how to integrate
different weighting schemes. Using a very simple combination scheme, we obtain
results that compare favorably to previously reported results on the CLEF 2000
dataset.
|
Benjamin Roth
|
Assessing Wikipedia-Based Cross-Language Retrieval Models
| null |
cs.IR cs.CL
| 2014-01-13T00:00:00 |
2202.13047
|
Crowdsourced dialogue corpora are usually limited in scale and topic coverage
due to the expensive cost of data curation. This would hinder the
generalization of downstream dialogue models to open-domain topics. In this
work, we leverage large language models for dialogue augmentation in the task
of emotional support conversation (ESC). By treating dialogue augmentation as a
dialogue completion task, we prompt a fine-tuned language model to complete
full dialogues from available dialogue posts of various topics, which are then
postprocessed based on heuristics. Applying this approach, we construct AugESC,
an augmented dataset for the ESC task, which largely extends the scale and
topic coverage of the crowdsourced ESConv corpus. Through comprehensive human
evaluation, we demonstrate that our approach is superior to strong baselines of
dialogue augmentation and that AugESC has comparable dialogue quality to the
crowdsourced corpus. We also conduct human interactive evaluation and prove
that post-training on AugESC improves downstream dialogue models'
generalization ability to open-domain topics. These results suggest the utility
of AugESC and highlight the potential of large language models in improving
data-scarce dialogue generation tasks.
|
Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng Zhang, Minlie Huang
|
AugESC: Dialogue Augmentation with Large Language Models for Emotional
Support Conversation
| null |
cs.CL
| 2023-05-19T00:00:00 |
2308.06039
|
In learning to defer, a predictor identifies risky decisions and defers them
to a human expert. One key issue with this setup is that the expert may end up
over-relying on the machine's decisions, due to anchoring bias. At the same
time, whenever the machine chooses the deferral option the expert has to take
decisions entirely unassisted. As a remedy, we propose learning to guide (LTG),
an alternative framework in which -- rather than suggesting ready-made
decisions -- the machine provides guidance useful to guide decision-making, and
the human is entirely responsible for coming up with a decision. We also
introduce SLOG, an LTG implementation that leverages (a small amount of) human
supervision to convert a generic large language model into a module capable of
generating textual guidance, and present preliminary but promising results on a
medical diagnosis task.
|
Debodeep Banerjee, Stefano Teso, Andrea Passerini
|
Learning to Guide Human Experts via Personalized Large Language Models
| null |
cs.AI cs.CL
| 2023-08-14T00:00:00 |
1412.8419
|
Generating a novel textual description of an image is an interesting problem
that connects computer vision and natural language processing. In this paper,
we present a simple model that is able to generate descriptive sentences given
a sample image. This model has a strong focus on the syntax of the
descriptions. We train a purely bilinear model that learns a metric between an
image representation (generated from a previously trained Convolutional Neural
Network) and phrases that are used to describe them. The system is then able
to infer phrases from a given image sample. Based on caption syntax statistics,
we propose a simple language model that can produce relevant descriptions for a
given test image using the phrases inferred. Our approach, which is
considerably simpler than state-of-the-art models, achieves comparable results
on the recently released Microsoft COCO dataset.
|
Remi Lebret and Pedro O. Pinheiro and Ronan Collobert
|
Simple Image Description Generator via a Linear Phrase-Based Approach
| null |
cs.CL cs.CV cs.NE
| 2015-04-14T00:00:00 |
2311.09210
|
Retrieval-augmented language models (RALMs) represent a substantial
advancement in the capabilities of large language models, notably in reducing
factual hallucination by leveraging external knowledge sources. However, the
reliability of the retrieved information is not always guaranteed. The
retrieval of irrelevant data can lead to misguided responses, potentially
causing the model to overlook its inherent knowledge, even when it possesses
adequate information to address the query. Moreover, standard RALMs often
struggle to assess whether they possess adequate knowledge, both intrinsic and
retrieved, to provide an accurate answer. In situations where knowledge is
lacking, these systems should ideally respond with "unknown" when the answer is
unattainable. In response to these challenges, we introduce Chain-of-Noting
(CoN), a novel approach aimed at improving the robustness of RALMs in facing
noisy, irrelevant documents and in handling unknown scenarios. The core idea of
CoN is to generate sequential reading notes for retrieved documents, enabling a
thorough evaluation of their relevance to the given question and integrating
this information to formulate the final answer. We employed ChatGPT to create
training data for CoN, which was subsequently used to train a LLaMa-2 7B model.
Our experiments across four open-domain QA benchmarks show that RALMs equipped
with CoN significantly outperform standard RALMs. Notably, CoN achieves an
average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.
|
Wenhao Yu, Hongming Zhang, Xiaoman Pan, Kaixin Ma, Hongwei Wang, Dong
Yu
|
Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language
Models
| null |
cs.CL cs.AI
| 2024-10-04T00:00:00 |
2405.01159
|
This paper presents the TartuNLP team submission to EvaLatin 2024 shared task
of the emotion polarity detection for historical Latin texts. Our system relies
on two distinct approaches to annotating training data for supervised learning:
1) creating heuristics-based labels by adopting the polarity lexicon provided
by the organizers and 2) generating labels with GPT4. We employed parameter
efficient fine-tuning using the adapters framework and experimented with both
monolingual and cross-lingual knowledge transfer for training language and task
adapters. Our submission with the LLM-generated labels achieved the overall
first place in the emotion polarity detection task. Our results show that
LLM-based annotation is a promising approach for texts in Latin.
|
Aleksei Dorkin and Kairit Sirts
|
TartuNLP at EvaLatin 2024: Emotion Polarity Detection
|
Proceedings of the Third Workshop on Language Technologies for
Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024
|
cs.CL
| 2024-12-10T00:00:00 |
2405.06424
|
Assessing response quality to instructions in language models is vital but
challenging due to the complexity of human language across different contexts.
This complexity often results in ambiguous or inconsistent interpretations,
making accurate assessment difficult. To address this issue, we propose a novel
Uncertainty-aware Reward Model (URM) that introduces a robust uncertainty
estimation for the quality of paired responses based on Bayesian approximation.
Trained with preference datasets, our uncertainty-enabled proxy not only scores
rewards for responses but also evaluates their inherent uncertainty. Empirical
results demonstrate significant benefits of incorporating the proposed proxy
into language model training. Our method boosts the instruction following
capability of language models by refining data curation for training and
improving policy optimization objectives, thereby surpassing existing methods
by a large margin on benchmarks such as Vicuna and MT-bench. These findings
highlight that our proposed approach substantially advances language model
training and paves a new way of harnessing uncertainty within language models.
|
JoonHo Lee, Jae Oh Woo, Juree Seok, Parisa Hassanzadeh, Wooseok Jang,
JuYoun Son, Sima Didari, Baruch Gutow, Heng Hao, Hankyu Moon, Wenjun Hu,
Yeong-Dae Kwon, Taehee Lee and Seungjai Min
|
Improving Instruction Following in Language Models through Proxy-Based
Uncertainty Estimation
| null |
cs.CL cs.AI cs.LG
| 2025-02-03T00:00:00 |
1806.09055
|
This paper addresses the scalability challenge of architecture search by
formulating the task in a differentiable manner. Unlike conventional approaches
of applying evolution or reinforcement learning over a discrete and
non-differentiable search space, our method is based on the continuous
relaxation of the architecture representation, allowing efficient search of the
architecture using gradient descent. Extensive experiments on CIFAR-10,
ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in
discovering high-performance convolutional architectures for image
classification and recurrent architectures for language modeling, while being
orders of magnitude faster than state-of-the-art non-differentiable techniques.
Our implementation has been made publicly available to facilitate further
research on efficient architecture search algorithms.
|
Hanxiao Liu, Karen Simonyan, Yiming Yang
|
DARTS: Differentiable Architecture Search
| null |
cs.LG cs.CL cs.CV stat.ML
| 2019-04-24T00:00:00 |
1807.03583
|
Smoothing is an essential tool in many NLP tasks, therefore numerous
techniques have been developed for this purpose in the past. Among the most
widely used smoothing methods are the Kneser-Ney smoothing (KNS) and its
variants, including the Modified Kneser-Ney smoothing (MKNS), which are widely
considered to be among the best smoothing methods available. Although when
creating the original KNS the intention of the authors was to develop such a
smoothing method that preserves the marginal distributions of the original
model, this property was not maintained when developing the MKNS.
In this article I would like to overcome this and propose a refined
version of the MKNS that preserves these marginal distributions while keeping
the advantages of both previous versions. Besides its advantageous properties,
this novel smoothing method is shown to achieve about the same results as the
MKNS in a standard language modelling task.
|
Andr\'as Dob\'o
|
Multi-D Kneser-Ney Smoothing Preserving the Original Marginal
Distributions
|
Research in Computing Science, 147 (6), 11-25
|
cs.CL
| 2019-02-08T00:00:00 |
2111.07180
|
In natural language processing, most models try to learn semantic
representations merely from texts. The learned representations encode the
distributional semantics but fail to connect to any knowledge about the
physical world. In contrast, humans learn language by grounding concepts in
perception and action and the brain encodes grounded semantics for cognition.
Inspired by this notion and recent work in vision-language learning, we design
a two-stream model for grounding language learning in vision. The model
includes a VGG-based visual stream and a Bert-based language stream. The two
streams merge into a joint representational space. Through cross-modal
contrastive learning, the model first learns to align visual and language
representations with the MS COCO dataset. The model further learns to retrieve
visual objects with language queries through a cross-modal attention module and
to infer the visual relations between the retrieved objects through a bilinear
operator with the Visual Genome dataset. After training, the language stream of
this model is a stand-alone language model capable of embedding concepts in a
visually grounded semantic space. This semantic space manifests principal
dimensions explainable with human intuition and neurobiological knowledge. Word
embeddings in this semantic space are predictive of human-defined norms of
semantic features and are segregated into perceptually distinctive clusters.
Furthermore, the visually grounded language model also enables compositional
language understanding based on visual knowledge and multimodal image search
with queries based on images, texts, or their combinations.
|
Yizhen Zhang, Minkyu Choi, Kuan Han, Zhongming Liu
|
Explainable Semantic Space by Grounding Language to Vision with
Cross-Modal Contrastive Learning
| null |
cs.CL cs.LG
| 2021-11-16T00:00:00 |
cmp-lg/9612005
|
The Maximum Entropy Modeling Toolkit supports parameter estimation and
prediction for statistical language models in the maximum entropy framework.
The maximum entropy framework provides a constructive method for obtaining the
unique conditional distribution p*(y|x) that satisfies a set of linear
constraints and maximizes the conditional entropy H(p|f) with respect to the
empirical distribution f(x). The maximum entropy distribution p*(y|x) also has
a unique parametric representation in the class of exponential models, as
m(y|x) = r(y|x)/Z(x) where the numerator r(y|x) = prod_i alpha_i^g_i(x,y) is a
product of exponential weights, with alpha_i = exp(lambda_i), and the
denominator Z(x) = sum_y r(y|x) is required to satisfy the axioms of
probability.
This manual explains how to build maximum entropy models for discrete domains
with the Maximum Entropy Modeling Toolkit (MEMT). First we summarize the steps
necessary to implement a language model using the toolkit. Next we discuss the
executables provided by the toolkit and explain the file formats required by
the toolkit. Finally, we review the maximum entropy framework and apply it to
the problem of statistical language modeling.
Keywords: statistical language models, maximum entropy, exponential models,
improved iterative scaling, Markov models, triggers.
|
Eric Sven Ristad
|
Maximum Entropy Modeling Toolkit
| null |
cmp-lg cs.CL
| 2008-02-03T00:00:00 |
1705.01346
|
Recurrent Neural Network (RNN) has been widely applied for sequence modeling.
In RNN, the hidden states at the current step are fully connected to those at the
previous step, so the influence of less related features from the previous step
may potentially decrease the model's learning ability. We propose a simple
technique called parallel cells (PCs) to enhance the learning ability of
Recurrent Neural Network (RNN). In each layer, we run multiple small RNN cells
rather than one single large cell. In this paper, we evaluate PCs on 2 tasks.
On the language modeling task on PTB (Penn Treebank), our model outperforms
state-of-the-art models by decreasing perplexity from 78.6 to 75.3. On the
Chinese-English translation task, our model improves the BLEU score by 0.39
points over the baseline model.
|
Danhao Zhu, Si Shen, Xin-Yu Dai and Jiajun Chen
|
Going Wider: Recurrent Neural Network With Parallel Cells
| null |
cs.CL cs.LG cs.NE
| 2017-05-04T00:00:00 |
1810.12387
|
Most language modeling methods rely on large-scale data to statistically
learn the sequential patterns of words. In this paper, we argue that words are
atomic language units but not necessarily atomic semantic units. Inspired by
HowNet, we use sememes, the minimum semantic units in human languages, to
represent the implicit semantics behind words for language modeling, named
Sememe-Driven Language Model (SDLM). More specifically, to predict the next
word, SDLM first estimates the sememe distribution given the textual context.
Afterward, it regards each sememe as a distinct semantic expert, and these
experts jointly identify the most probable senses and the corresponding word.
In this way, SDLM enables language models to work beyond word-level
manipulation to fine-grained sememe-level semantics and offers us more powerful
tools to fine-tune language models and improve the interpretability as well as
the robustness of language models. Experiments on language modeling and the
downstream application of headline generation demonstrate the significant
effect of SDLM. Source code and data used in the experiments can be accessed at
https://github.com/thunlp/SDLM-pytorch.
|
Yihong Gu, Jun Yan, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun,
Fen Lin, Leyu Lin
|
Language Modeling with Sparse Product of Sememe Experts
| null |
cs.CL cs.LG
| 2018-10-31T00:00:00 |
1911.03829
|
Large-scale pre-trained language models such as BERT have achieved great
success in language understanding tasks. However, it remains an open question
how to utilize BERT for language generation. In this paper, we present a novel
approach, Conditional Masked Language Modeling (C-MLM), to enable the
finetuning of BERT on target generation tasks. The finetuned BERT (teacher) is
exploited as extra supervision to improve conventional Seq2Seq models (student)
for better text generation performance. By leveraging BERT's idiosyncratic
bidirectional nature, distilling knowledge learned in BERT can encourage
auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level
supervision for coherent text generation. Experiments show that the proposed
approach significantly outperforms strong Transformer baselines on multiple
language generation tasks such as machine translation and text summarization.
Our proposed model also achieves new state of the art on IWSLT German-English
and English-Vietnamese MT datasets. Code is available at
https://github.com/ChenRocks/Distill-BERT-Textgen.
|
Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, Jingjing Liu
|
Distilling Knowledge Learned in BERT for Text Generation
| null |
cs.CL cs.LG
| 2020-07-21T00:00:00 |
1504.01182
|
Machine translation plays an important role in facilitating both man-machine
and human-to-human communication in Natural Language Processing (NLP). Machine
Translation (MT) refers to using a machine to convert text from one language
into another. Statistical Machine Translation is a type of MT consisting of a
Language Model (LM), a Translation Model (TM) and a decoder. In this paper, a
Bengali to Assamese Statistical Machine Translation model has been created
using Moses. Other tools, such as IRSTLM for the language model and
GIZA-PP-V1.0.7 for the translation model, are used within this framework, which
runs in Linux environments. The purpose of the LM is to encourage fluent
output, the purpose of the TM is to encourage similarity between input and
output, and the decoder increases the probability of the translated text in the
target language. A parallel corpus of 17100 Bengali-Assamese sentences has been
used for training within this framework. Statistical MT techniques have not so
far been widely investigated for Indian languages. It may be interesting to
discover to what degree these models can help the substantial ongoing MT
efforts in the country.
|
Nayan Jyoti Kalita, Baharul Islam
|
Bengali to Assamese Statistical Machine Translation using Moses (Corpus
Based)
| null |
cs.CL
| 2015-04-07T00:00:00 |
1502.00512
|
This paper investigates the scaling properties of Recurrent Neural Network
Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and
address the questions of how RNNLMs scale with respect to model size,
training-set size, computational costs and memory. Our analysis shows that
despite being more costly to train, RNNLMs obtain much lower perplexities on
standard benchmarks than n-gram models. We train the largest known RNNs and
present relative word error rate gains of 18% on an ASR task. We also present
the new lowest perplexities on the recently released billion word language
modelling benchmark, 1 BLEU point gain on machine translation and a 17%
relative hit rate gain in word prediction.
|
Will Williams, Niranjani Prasad, David Mrva, Tom Ash, Tony Robinson
|
Scaling Recurrent Neural Network Language Models
| null |
cs.CL cs.LG
| 2015-02-03T00:00:00 |
2102.07396
|
We explore cross-lingual transfer of register classification for web
documents. Registers, that is, text varieties such as blogs or news are one of
the primary predictors of linguistic variation and thus affect the automatic
processing of language. We introduce two new register annotated corpora,
FreCORE and SweCORE, for French and Swedish. We demonstrate that deep
pre-trained language models perform strongly in these languages and outperform
previous state-of-the-art in English and Finnish. Specifically, we show 1) that
zero-shot cross-lingual transfer from the large English CORE corpus can match
or surpass previously published monolingual models, and 2) that lightweight
monolingual classification requiring very little training data can reach or
surpass our zero-shot performance. We further analyse classification results
finding that certain registers continue to pose challenges in particular for
cross-lingual transfer.
|
Liina Repo, Valtteri Skantsi, Samuel R\"onnqvist, Saara Hellstr\"om,
Miika Oinonen, Anna Salmela, Douglas Biber, Jesse Egbert, Sampo Pyysalo and
Veronika Laippala
|
Beyond the English Web: Zero-Shot Cross-Lingual and Lightweight
Monolingual Classification of Registers
| null |
cs.CL
| 2021-02-16T00:00:00 |
2006.10964
|
COVID-19 has resulted in an ongoing pandemic and as of 12 June 2020, has
caused more than 7.4 million cases and over 418,000 deaths. The highly dynamic
and rapidly evolving situation with COVID-19 has made it difficult to access
accurate, on-demand information regarding the disease. Online communities,
forums, and social media provide potential venues to search for relevant
questions and answers, or post questions and seek answers from other members.
However, due to the nature of such sites, there are always a limited number of
relevant questions and responses to search from, and posted questions are
rarely answered immediately. With the advancements in the field of natural
language processing, particularly in the domain of language models, it has
become possible to design chatbots that can automatically answer consumer
questions. However, such models are rarely applied and evaluated in the
healthcare domain, to meet the information needs with accurate and up-to-date
healthcare data. In this paper, we propose to apply a language model for
automatically answering questions related to COVID-19 and qualitatively
evaluate the generated responses. We utilized the GPT-2 language model and
applied transfer learning to retrain it on the COVID-19 Open Research Dataset
(CORD-19) corpus. In order to improve the quality of the generated responses,
we applied 4 different approaches, namely tf-idf, BERT, BioBERT, and USE to
filter and retain relevant sentences in the responses. In the performance
evaluation step, we asked two medical experts to rate the responses. We found
that BERT and BioBERT, on average, outperform both tf-idf and USE in
relevance-based sentence filtering tasks. Additionally, based on the chatbot,
we created a user-friendly interactive web application to be hosted online.
|
David Oniani, Yanshan Wang
|
A Qualitative Evaluation of Language Models on Automatic
Question-Answering for COVID-19
| null |
cs.IR cs.AI cs.CL
| 2020-06-25T00:00:00 |
1904.10641
|
Machine-translated text plays an important role in modern life by smoothing
communication from various communities using different languages. However,
unnatural translation may lead to misunderstanding, so a detector is needed to
avoid such mistakes. While a previous method measured the naturalness of
continuous words using an N-gram language model, another method matched
non-continuous words across sentences but ignored such words within an
individual sentence. We have developed a method that matches similar words
throughout a paragraph and estimates paragraph-level coherence, and that can
identify machine-translated text. Experiments on 2000 English human-generated
and 2000 English machine-translated (from German) paragraphs show that the
coherence-based method achieves high performance (accuracy = 87.0%; equal error
rate = 13.0%), clearly better than previous methods (best accuracy = 72.4%;
equal error rate = 29.7%). Similar experiments on Dutch and Japanese obtain
89.2% and 97.9% accuracy, respectively. The results demonstrate the consistency
of the proposed method across languages with different resource levels.
|
Hoang-Quoc Nguyen-Son and Tran Phuong Thao and Seira Hidano and
Shinsaku Kiyomoto
|
Detecting Machine-Translated Paragraphs by Matching Similar Words
| null |
cs.CL
| 2019-04-25T00:00:00 |
1807.09433
|
Recent advances in statistical machine translation via the adoption of neural
sequence-to-sequence models empower the end-to-end system to achieve
state-of-the-art in many WMT benchmarks. The performance of such machine
translation (MT) system is usually evaluated by automatic metric BLEU when the
golden references are provided for validation. However, for model inference or
production deployment, the golden references are prohibitively available or
require expensive human annotation with bilingual expertise. In order to
address the issue of quality evaluation (QE) without reference, we propose a
general framework for automatic evaluation of translation output for most WMT
quality evaluation tasks. We first build a conditional target language model
with a novel bidirectional transformer, named neural bilingual expert model,
which is pre-trained on large parallel corpora for feature extraction. For QE
inference, the bilingual expert model can simultaneously produce the joint
latent representation between the source and the translation, and real-valued
measurements of possible erroneous tokens based on the prior knowledge learned
from parallel data. Subsequently, the features will further be fed into a
simple Bi-LSTM predictive model for quality evaluation. The experimental
results show that our approach achieves the state-of-the-art performance in the
quality estimation track of WMT 2017/2018.
|
Kai Fan, Jiayi Wang, Bo Li, Fengming Zhou, Boxing Chen, Luo Si
|
"Bilingual Expert" Can Find Translation Errors
| null |
cs.CL
| 2018-11-20T00:00:00 |
2105.10419
|
Existing models of multilingual sentence embeddings require large parallel
data resources which are not available for low-resource languages. We propose a
novel unsupervised method to derive multilingual sentence embeddings relying
only on monolingual data. We first produce a synthetic parallel corpus using
unsupervised machine translation, and use it to fine-tune a pretrained
cross-lingual masked language model (XLM) to derive the multilingual sentence
representations. The quality of the representations is evaluated on two
parallel corpus mining tasks with improvements of up to 22 F1 points over
vanilla XLM. In addition, we observe that a single synthetic bilingual corpus
is able to improve results for other language pairs.
|
Ivana Kvapil{\i}kova, Mikel Artetxe, Gorka Labaka, Eneko Agirre,
Ond\v{r}ej Bojar
|
Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining
|
Proceedings of the 58th Annual Meeting of the Association for
Computational Linguistics - Student Research Workshop, pages 255-262,
Association for Computational Linguistics, 2020
|
cs.CL
| 2021-05-24T00:00:00 |
cs/9912016
|
We present a technique which complements Hidden Markov Models by
incorporating some lexicalized states representing syntactically uncommon
words. Our approach examines the distribution of transitions, selects the
uncommon words, and makes lexicalized states for the words. We performed a
part-of-speech tagging experiment on the Brown corpus to evaluate the resultant
language model and discovered that this technique improved the tagging accuracy
by 0.21% at the 95% level of confidence.
|
Jin-Dong Kim and Sang-Zoo Lee and Hae-Chang Rim
|
HMM Specialization with Selective Lexicalization
|
Proceedings of the 1999 Joint SIGDAT Conference on Empirical
Methods in Natural Language Processing and Very Large Corpora, pp.121-127,
1999
|
cs.CL cs.LG
| 2007-05-23T00:00:00 |
2008.07267
|
Natural language processing (NLP) and neural networks (NNs) have both
undergone significant changes in recent years. For active learning (AL)
purposes, NNs are, however, less commonly used -- despite their current
popularity. By using the superior text classification performance of NNs for
AL, we can either increase a model's performance using the same amount of data
or reduce the data and therefore the required annotation efforts while keeping
the same performance. We review AL for text classification using deep neural
networks (DNNs) and elaborate on two main causes which used to hinder the
adoption: (a) the inability of NNs to provide reliable uncertainty estimates,
on which the most commonly used query strategies rely, and (b) the challenge of
training DNNs on small data. To investigate the former, we construct a taxonomy
of query strategies, which distinguishes between data-based, model-based, and
prediction-based instance selection, and investigate the prevalence of these
classes in recent research. Moreover, we review recent NN-based advances in NLP
like word embeddings or language models in the context of (D)NNs, survey the
current state-of-the-art at the intersection of AL, text classification, and
DNNs and relate recent advances in NLP to AL. Finally, we analyze recent work
in AL for text classification, connect the respective query strategies to the
taxonomy, and outline commonalities and shortcomings. As a result, we highlight
gaps in current research and present open research questions.
|
Christopher Schr\"oder and Andreas Niekler
|
A Survey of Active Learning for Text Classification using Deep Neural
Networks
| null |
cs.CL cs.LG
| 2020-08-18T00:00:00 |
1704.08352
|
Words can be represented by composing the representations of subword units
such as word segments, characters, and/or character n-grams. While such
representations are effective and may capture the morphological regularities of
words, they have not been systematically compared, and it is not understood how
they interact with different morphological typologies. On a language modeling
task, we present experiments that systematically vary (1) the basic unit of
representation, (2) the composition of these representations, and (3) the
morphological typology of the language modeled. Our results extend previous
findings that character representations are effective across typologies, and we
find that a previously unstudied combination of character trigram
representations composed with bi-LSTMs outperforms most others. But we also
find room for improvement: none of the character-level models match the
predictive accuracy of a model with access to true morphological analyses, even
when learned from an order of magnitude more data.
|
Clara Vania and Adam Lopez
|
From Characters to Words to in Between: Do We Capture Morphology?
| null |
cs.CL
| 2017-04-28T00:00:00 |
1902.09969
|
Inspired by recent advances in leveraging multiple modalities in machine
translation, we introduce an encoder-decoder pipeline that uses (1) specific
objects within an image and their object labels, (2) a language model for
decoding joint embedding of object features and the object labels. Our pipeline
merges prior detected objects from the image and their object labels and then
learns the sequences of captions describing the particular image. The decoder
model learns to extract descriptions for the image from scratch by decoding the
joint representation of the object visual features and their object classes
conditioned by the encoder component. The idea of the model is to concentrate
only on the specific objects of the image and their labels for generating
descriptions of the image rather than the visual features of the entire image. The
model needs to be calibrated more by adjusting the parameters and settings to
result in better accuracy and performance.
|
Ashutosh Mishra, Marcus Liwicki
|
Using Deep Object Features for Image Descriptions
| null |
cs.CV cs.CL cs.LG
| 2019-02-27T00:00:00 |
2403.02615
|
We present a comprehensive evaluation of large language models(LLMs)' ability
to reason about composition relations through a benchmark encompassing 1,500
test cases in English, designed to cover six distinct types of composition
relations: Positional, Comparative, Personal, Mathematical, Identity, and
Other. Acknowledging the significance of multilingual capabilities, we expanded
our assessment to include translations of these cases into Chinese, Japanese,
French, and Korean. Our Multilingual Composition Relation (MCR) benchmark aims
at investigating the robustness and adaptability of LLMs in handling
composition relation reasoning across diverse linguistic contexts.
|
Jinman Zhao, Xueyan Zhang
|
Exploring the Limitations of Large Language Models in Compositional
Relation Reasoning
| null |
cs.CL
| 2024-09-24T00:00:00 |
1905.13150
|
In the broadcast domain there is an abundance of related text data and
partial transcriptions, such as closed captions and subtitles. This text data
can be used for lightly supervised training, in which text matching the audio
is selected using an existing speech recognition model. Current approaches to
light supervision typically filter the data based on matching error rates
between the transcriptions and biased decoding hypotheses. In contrast,
semi-supervised training does not require matching text data, instead
generating a hypothesis using a background language model. State-of-the-art
semi-supervised training uses lattice-based supervision with the lattice-free
MMI (LF-MMI) objective function. We propose a technique to combine inaccurate
transcriptions with the lattices generated for semi-supervised training, thus
preserving uncertainty in the lattice where appropriate. We demonstrate that
this combined approach reduces the expected error rates over the lattices, and
reduces the word error rate (WER) on a broadcast task.
|
Joachim Fainberg, Ond\v{r}ej Klejch, Steve Renals, Peter Bell
|
Lattice-based lightly-supervised acoustic model training
| null |
cs.CL cs.SD eess.AS
| 2019-07-16T00:00:00 |
cs/0108006
|
Maximum entropy models are considered by many to be one of the most promising
avenues of language modeling research. Unfortunately, long training times make
maximum entropy research difficult. We present a novel speedup technique: we
change the form of the model to use classes. Our speedup works by creating two
maximum entropy models, the first of which predicts the class of each word, and
the second of which predicts the word itself. This factoring of the model leads
to fewer non-zero indicator functions, and faster normalization, achieving
speedups of up to a factor of 35 over one of the best previous techniques. It
also results in typically slightly lower perplexities. The same trick can be
used to speed training of other machine learning techniques, e.g. neural
networks, applied to any problem with a large number of outputs, such as
language modeling.
|
Joshua Goodman
|
Classes for Fast Maximum Entropy Training
|
Proceedings of ICASSP-2001, Utah, May 2001
|
cs.CL
| 2007-05-23T00:00:00 |
2410.18963
|
Large language models (LLMs) and large multimodal models (LMMs) have shown
great potential in automating complex tasks like web browsing and gaming.
However, their ability to generalize across diverse applications remains
limited, hindering broader utility. To address this challenge, we present
OSCAR: Operating System Control via state-Aware reasoning and Re-planning.
OSCAR is a generalist agent designed to autonomously navigate and interact with
various desktop and mobile applications through standardized controls, such as
mouse and keyboard inputs, while processing screen images to fulfill user
commands. OSCAR translates human instructions into executable Python code,
enabling precise control over graphical user interfaces (GUIs). To enhance
stability and adaptability, OSCAR operates as a state machine, equipped with
error-handling mechanisms and dynamic task re-planning, allowing it to
efficiently adjust to real-time feedback and exceptions. We demonstrate OSCAR's
effectiveness through extensive experiments on diverse benchmarks across
desktop and mobile platforms, where it transforms complex workflows into simple
natural language commands, significantly boosting user productivity. Our code
will be open-source upon publication.
|
Xiaoqiang Wang and Bang Liu
|
OSCAR: Operating System Control via State-Aware Reasoning and
Re-Planning
| null |
cs.AI cs.CL
| 2024-10-25T00:00:00 |
2008.08547
|
Pre-trained language model word representations, such as BERT, have been
extremely successful in several Natural Language Processing tasks significantly
improving on the state-of-the-art. This can largely be attributed to their
ability to better capture semantic information contained within a sentence.
Several tasks, however, can benefit from information available at a corpus
level, such as Term Frequency-Inverse Document Frequency (TF-IDF). In this work
we test the effectiveness of integrating this information with BERT on the task
of identifying abuse on social media and show that integrating this information
with BERT does indeed significantly improve performance. We participate in
Sub-Task A (abuse detection) wherein we achieve a score within two points of
the top performing team and in Sub-Task B (target detection) wherein we are
ranked 4th of the 44 participating teams.
|
Wah Meng Lim and Harish Tayyar Madabushi
|
UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information
| null |
cs.CL
| 2020-08-20T00:00:00 |
2103.07052
|
We propose an unsupervised solution to the Authorship Verification task that
utilizes pre-trained deep language models to compute a new metric called
DV-Distance. The proposed metric is a measure of the difference between the two
authors comparing against pre-trained language models. Our design addresses the
problem of non-comparability in authorship verification, frequently encountered
in small or cross-domain corpora. To the best of our knowledge, this paper is
the first one to introduce a method designed with non-comparability in mind
from the ground up, rather than indirectly. It is also one of the first to use
Deep Language Models in this setting. The approach is intuitive, and it is easy
to understand and interpret through visualization. Experiments on four datasets
show our methods matching or surpassing current state-of-the-art and strong
baselines in most tasks.
|
Yifan Zhang, Dainis Boumber, Marjan Hosseinia, Fan Yang, Arjun
Mukherjee
|
Improving Authorship Verification using Linguistic Divergence
| null |
cs.CL cs.IR cs.LG
| 2021-03-15T00:00:00 |
2311.09358
|
Large language models (LLMs) have shown remarkable achievements in natural
language processing tasks, producing high-quality outputs. However, LLMs still
exhibit limitations, including the generation of factually incorrect
information. In safety-critical applications, it is important to assess the
confidence of LLM-generated content to make informed decisions. Retrieval
Augmented Language Models (RALMs) are a relatively new area of research in NLP.
RALMs offer potential benefits for scientific NLP tasks, as retrieved
documents can serve as evidence to support model-generated content. This
inclusion of evidence enhances trustworthiness, as users can verify and explore
the retrieved documents to validate model outputs. Quantifying uncertainty in
RALM generations further improves trustworthiness, with retrieved text and
confidence scores contributing to a comprehensive and reliable model for
scientific applications. However, there is limited to no research on UQ for
RALMs, particularly in scientific contexts. This study aims to address this gap
by conducting a comprehensive evaluation of UQ in RALMs, focusing on scientific
tasks. This research investigates how uncertainty scores vary when scientific
knowledge is incorporated as pretraining and retrieval data and explores the
relationship between uncertainty scores and the accuracy of model-generated
outputs. We observe that an existing RALM finetuned with scientific knowledge
as the retrieval data tends to be more confident in generating predictions
compared to the model pretrained only with scientific knowledge. We also found
that RALMs are overconfident in their predictions, making inaccurate
predictions more confidently than accurate ones. Scientific knowledge provided
either as pretraining or retrieval corpus does not help alleviate this issue.
We released our code, data and dashboards at https://github.com/pnnl/EXPERT2.
|
Sridevi Wagle, Sai Munikoti, Anurag Acharya, Sara Smith, Sameera
Horawalavithana
|
Empirical evaluation of Uncertainty Quantification in
Retrieval-Augmented Language Models for Science
| null |
cs.CL cs.AI
| 2023-11-17T00:00:00 |
1904.04697
|
Chinese word segmentation and dependency parsing are two fundamental tasks
for Chinese natural language processing. The dependency parsing is defined on
word-level. Therefore word segmentation is the precondition of dependency
parsing, which makes dependency parsing suffer from error propagation and
unable to directly make use of the character-level pre-trained language model
(such as BERT). In this paper, we propose a graph-based model to integrate
Chinese word segmentation and dependency parsing. Different from previous
transition-based joint models, our proposed model is more concise, which
reduces the effort of feature engineering. Our graph-based joint model
achieves better performance than previous joint models and state-of-the-art
results in both Chinese word segmentation and dependency parsing. Besides, when
BERT is combined, our model can substantially reduce the performance gap of
dependency parsing between joint models and gold-segmented word-based models.
Our code is publicly available at https://github.com/fastnlp/JointCwsParser.
|
Hang Yan, Xipeng Qiu, Xuanjing Huang
|
A Graph-based Model for Joint Chinese Word Segmentation and Dependency
Parsing
| null |
cs.CL cs.AI
| 2019-12-19T00:00:00 |
2203.11199
|
Recently, the problem of robustness of pre-trained language models (PrLMs)
has received increasing research interest. Latest studies on adversarial
attacks achieve high attack success rates against PrLMs, claiming that PrLMs
are not robust. However, we find that the adversarial samples on which PrLMs fail
are mostly non-natural and do not appear in reality. We question the validity
of current evaluation of robustness of PrLMs based on these non-natural
adversarial samples and propose an anomaly detector to evaluate the robustness
of PrLMs with more natural adversarial samples. We also investigate two
applications of the anomaly detector: (1) In data augmentation, we employ the
anomaly detector to force generating augmented data that are distinguished as
non-natural, which brings larger gains to the accuracy of PrLMs. (2) We apply
the anomaly detector to a defense framework to enhance the robustness of PrLMs.
It can be used to defend all types of attacks and achieves higher accuracy on
both adversarial samples and compliant samples than other defense frameworks.
|
Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
|
Distinguishing Non-natural from Natural Adversarial Samples for More
Robust Pre-trained Language Model
| null |
cs.LG cs.CL cs.CR
| 2022-03-23T00:00:00 |
2305.13707
|
Language models have graduated from being research prototypes to
commercialized products offered as web APIs, and recent works have highlighted
the multilingual capabilities of these products. The API vendors charge their
users based on usage, more specifically on the number of ``tokens'' processed
or generated by the underlying language models. What constitutes a token,
however, is training data and model dependent with a large variance in the
number of tokens required to convey the same information in different
languages. In this work, we analyze the effect of this non-uniformity on the
fairness of an API's pricing policy across languages. We conduct a systematic
analysis of the cost and utility of OpenAI's language model API on multilingual
benchmarks in 22 typologically diverse languages. We show evidence that
speakers of a large number of the supported languages are overcharged while
obtaining poorer results. These speakers tend to also come from regions where
the APIs are less affordable to begin with. Through these analyses, we aim to
increase transparency around language model APIs' pricing policies and
encourage the vendors to make them more equitable.
|
Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David R.
Mortensen, Noah A. Smith, Yulia Tsvetkov
|
Do All Languages Cost the Same? Tokenization in the Era of Commercial
Language Models
| null |
cs.CL
| 2023-05-24T00:00:00 |
2306.08000
|
Recent advances in zero-shot learning have enabled the use of paired
image-text data to replace structured labels, removing the need for
expert-annotated datasets. Models such as CLIP-based CheXzero utilize these
advancements in the domain of chest X-ray interpretation. We hypothesize that
domain pre-trained models such as CXR-BERT, BlueBERT, and ClinicalBERT offer
the potential to improve the performance of CLIP-like models with specific
domain knowledge by replacing BERT weights at the cost of breaking the original
model's alignment. We evaluate the performance of zero-shot classification
models with domain-specific pre-training for detecting low-prevalence
pathologies. Even though replacing the weights of the original CLIP-BERT
degrades model performance on commonly found pathologies, we show that
pre-trained text towers perform substantially better on low-prevalence
diseases. This motivates future ensemble models with a combination of
differently trained language models for maximal performance.
|
Aakash Mishra, Rajat Mittal, Christy Jestin, Kostas Tingos, Pranav
Rajpurkar
|
Improving Zero-Shot Detection of Low Prevalence Chest Pathologies using
Domain Pre-trained Language Models
| null |
physics.med-ph cs.CL cs.CV cs.LG eess.IV
| 2023-06-16T00:00:00 |
2201.11147
|
Self-supervised protein language models have proved their effectiveness in
learning protein representations. With increasing computational power,
current protein language models pre-trained with millions of diverse sequences
can advance the parameter scale from million-level to billion-level and achieve
remarkable improvement. However, those prevailing approaches rarely consider
incorporating knowledge graphs (KGs), which can provide rich structured
knowledge facts for better protein representations. We argue that informative
biology knowledge in KGs can enhance protein representation with external
knowledge. In this work, we propose OntoProtein, the first general framework
that incorporates the structure of GO (Gene Ontology) into protein pre-training
models. We construct a novel large-scale knowledge graph that consists of GO
and its related proteins, in which every node is described by gene annotation
text or a protein sequence. We propose novel contrastive learning with
knowledge-aware negative sampling to jointly optimize the knowledge graph and
protein embedding during pre-training. Experimental results show that
OntoProtein can surpass state-of-the-art methods with pre-trained protein
language models in TAPE benchmark and yield better performance compared with
baselines in protein-protein interaction and protein function prediction. Code
and datasets are available at https://github.com/zjunlp/OntoProtein.
|
Ningyu Zhang, Zhen Bi, Xiaozhuan Liang, Siyuan Cheng, Haosen Hong,
Shumin Deng, Jiazhang Lian, Qiang Zhang, Huajun Chen
|
OntoProtein: Protein Pretraining With Gene Ontology Embedding
| null |
q-bio.BM cs.AI cs.CL cs.IR cs.LG
| 2022-11-02T00:00:00 |
2305.12710
|
Real-world domain experts (e.g., doctors) rarely annotate only a decision
label in their day-to-day workflow without providing explanations. Yet,
existing low-resource learning techniques, such as Active Learning (AL), that
aim to support human annotators mostly focus on the label while neglecting the
natural language explanation of a data point. This work proposes a novel AL
architecture to support experts' real-world need for label and explanation
annotations in low-resource scenarios. Our AL architecture leverages an
explanation-generation model to produce explanations guided by human
explanations, a prediction model that faithfully uses the generated
explanations for prediction, and a novel data diversity-based AL sampling strategy
that benefits from the explanation annotations. Automated and human evaluations
demonstrate the effectiveness of incorporating explanations into AL sampling
and the improved human annotation efficiency and trustworthiness with our AL
architecture. Additional ablation studies illustrate the potential of our AL
architecture for transfer learning, generalizability, and integration with
large language models (LLMs). While LLMs exhibit exceptional
explanation-generation capabilities for relatively simple tasks, their
effectiveness in complex real-world tasks warrants further in-depth study.
|
Bingsheng Yao, Ishan Jindal, Lucian Popa, Yannis Katsis, Sayan Ghosh,
Lihong He, Yuxuan Lu, Shashank Srivastava, Yunyao Li, James Hendler, Dakuo
Wang
|
Beyond Labels: Empowering Human Annotators with Natural Language
Explanations through a Novel Active-Learning Architecture
| null |
cs.CL
| 2023-10-24T00:00:00 |
2009.04016
|
This paper describes Brown University's submission to the TREC 2019 Deep
Learning track. We followed a 2-phase method for producing a ranking of
passages for a given input query: in the first phase, the user's query is
expanded by appending 3 queries generated by a transformer model which was
trained to rephrase an input query into semantically similar queries. The
expanded query can exhibit greater similarity in surface form and vocabulary
overlap with the passages of interest and can therefore serve as enriched input
to any downstream information retrieval method. In the second phase, we use a
BERT-based model pre-trained for language modeling but fine-tuned for
query-document relevance prediction to compute relevance scores for a set of 1000
candidate passages per query and subsequently obtain a ranking of passages by
sorting them based on the predicted relevance scores. According to the results
published in the official Overview of the TREC Deep Learning Track 2019, our
team ranked 3rd in the passage retrieval task (including full ranking and
re-ranking), and 2nd when considering only re-ranking submissions.
|
George Zerveas, Ruochen Zhang, Leila Kim, Carsten Eickhoff
|
Brown University at TREC Deep Learning 2019
|
Proceedings of the Twenty-Eighth Text REtrieval Conference, TREC
2019, Gaithersburg, Maryland, USA, November 13-15, 2019. NIST Special
Publication 1250, National Institute of Standards and Technology (NIST) 2019
|
cs.IR cs.CL cs.LG
| 2020-09-10T00:00:00 |
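A minimal sketch of the two-phase pipeline described in the record above. The query rephraser and the BERT-based relevance scorer are abstracted behind hypothetical callables (`rephrase_query`, `score_relevance`); this is not the team's actual submission code.

```python
from typing import Callable, List, Tuple

def rank_passages(
    query: str,
    candidates: List[str],
    rephrase_query: Callable[[str, int], List[str]],   # placeholder: seq2seq rephraser
    score_relevance: Callable[[str, str], float],      # placeholder: BERT relevance model
    n_rephrasings: int = 3,
) -> List[Tuple[str, float]]:
    # Phase 1: expand the query by appending generated rephrasings.
    expanded = " ".join([query] + rephrase_query(query, n_rephrasings))
    # Phase 2: score each candidate passage against the expanded query
    # and rank by predicted relevance.
    scored = [(p, score_relevance(expanded, p)) for p in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```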
1708.00077
|
Recurrent neural networks show state-of-the-art results in many text analysis
tasks but often require a lot of memory to store their weights. Recently
proposed Sparse Variational Dropout eliminates the majority of the weights in a
feed-forward neural network without significant loss of quality. We apply this
technique to sparsify recurrent neural networks. To account for recurrent
specifics we also rely on Binary Variational Dropout for RNN. We report 99.5%
sparsity level on sentiment analysis task without a quality drop and up to 87%
sparsity level on language modeling task with slight loss of accuracy.
|
Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov
|
Bayesian Sparsification of Recurrent Neural Networks
| null |
stat.ML cs.CL cs.LG
| 2017-08-02T00:00:00 |
1703.09137
|
When a recurrent neural network language model is used for caption
generation, the image information can be fed to the neural network either by
directly incorporating it in the RNN -- conditioning the language model by
`injecting' image features -- or in a layer following the RNN -- conditioning
the language model by `merging' image features. While both options are attested
in the literature, there is as yet no systematic comparison between the two. In
this paper we empirically show that it is not especially detrimental to
performance whether one architecture is used or another. The merge architecture
does have practical advantages, as conditioning by merging allows the RNN's
hidden state vector to shrink in size by up to four times. Our results suggest
that the visual and linguistic modalities for caption generation need not be
jointly encoded by the RNN as that yields large, memory-intensive models with
few tangible advantages in performance; rather, the multimodal integration
should be delayed to a subsequent stage.
|
Marc Tanti (1), Albert Gatt (1), Kenneth P. Camilleri (1) ((1)
University of Malta)
|
Where to put the Image in an Image Caption Generator
| null |
cs.NE cs.CL cs.CV
| 2018-03-15T00:00:00 |
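The record above contrasts "inject" and "merge" conditioning. Below is a minimal PyTorch sketch of the merge variant, where the RNN sees only the caption prefix and the image features are combined with its outputs afterwards; the layer sizes are arbitrary and the code is not from the paper.

```python
import torch
import torch.nn as nn

class MergeCaptioner(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=128, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, caption_prefix, image_features):
        # caption_prefix: (batch, seq) token ids; image_features: (batch, img_dim)
        rnn_out, _ = self.rnn(self.embed(caption_prefix))          # (batch, seq, hidden)
        img = self.img_proj(image_features).unsqueeze(1)           # (batch, 1, hidden)
        merged = torch.cat([rnn_out, img.expand_as(rnn_out)], -1)  # merge after the RNN
        return self.out(merged)                                    # next-token logits

logits = MergeCaptioner()(torch.randint(0, 10000, (2, 7)), torch.randn(2, 2048))
print(logits.shape)  # torch.Size([2, 7, 10000])
```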
1508.02091
|
We examine the possibility that recent promising results in automatic caption
generation are due primarily to language models. By varying image
representation quality produced by a convolutional neural network, we find that
a state-of-the-art neural captioning algorithm is able to produce quality
captions even when provided with surprisingly poor image representations. We
replicate this result in a new, fine-grained, transfer learned captioning
domain, consisting of 66K recipe image/title pairs. We also provide some
experiments regarding the appropriateness of datasets for automatic captioning,
and find that having multiple captions per image is beneficial, but not an
absolute requirement.
|
Jack Hessel, Nicolas Savva, Michael J. Wilber
|
Image Representations and New Domains in Neural Image Captioning
| null |
cs.CL cs.CV
| 2015-08-11T00:00:00 |
2502.16761
|
Large language models (LLMs) present novel opportunities in public opinion
research by predicting survey responses in advance during the early stages of
survey design. Prior methods steer LLMs via descriptions of subpopulations as
LLMs' input prompt, yet such prompt engineering approaches have struggled to
faithfully predict the distribution of survey responses from human subjects. In
this work, we propose directly fine-tuning LLMs to predict response
distributions by leveraging unique structural characteristics of survey data.
To enable fine-tuning, we curate SubPOP, a significantly scaled dataset of
3,362 questions and 70K subpopulation-response pairs from well-established
public opinion surveys. We show that fine-tuning on SubPOP greatly improves the
match between LLM predictions and human responses across various
subpopulations, reducing the LLM-human gap by up to 46% compared to baselines,
and achieves strong generalization to unseen surveys and subpopulations. Our
findings highlight the potential of survey-based fine-tuning to improve opinion
prediction for diverse, real-world subpopulations and therefore enable more
efficient survey designs. Our code is available at
https://github.com/JosephJeesungSuh/subpop.
|
Joseph Suh, Erfan Jahanparast, Suhong Moon, Minwoo Kang, Serina Chang
|
Language Model Fine-Tuning on Scaled Survey Data for Predicting
Distributions of Public Opinions
| null |
cs.CL
| 2025-02-25T00:00:00 |
cs/0105016
|
This paper describes the functioning of a broad-coverage probabilistic
top-down parser, and its application to the problem of language modeling for
speech recognition. The paper first introduces key notions in language modeling
and probabilistic parsing, and briefly reviews some previous approaches to
using syntactic structure for language modeling. A lexicalized probabilistic
top-down parser is then presented, which performs very well, in terms of both
the accuracy of returned parses and the efficiency with which they are found,
relative to the best broad-coverage statistical parsers. A new language model
which utilizes probabilistic top-down parsing is then outlined, and empirical
results show that it improves upon previous work in test corpus perplexity.
Interpolation with a trigram model yields an exceptional improvement relative
to the improvement observed by other models, demonstrating the degree to which
the information captured by our parsing model is orthogonal to that captured by
a trigram model. A small recognition experiment also demonstrates the utility
of the model.
|
Brian Roark
|
Probabilistic top-down parsing and language modeling
| null |
cs.CL
| 2007-05-23T00:00:00 |
1912.00159
|
This paper presents SwissCrawl, the largest Swiss German text corpus to date.
Composed of more than half a million sentences, it was generated using a
customized web scraping tool that could be applied to other low-resource
languages as well. The approach demonstrates how freely available web pages can
be used to construct comprehensive text corpora, which are of fundamental
importance for natural language processing. In an experimental evaluation, we
show that using the new corpus leads to significant improvements for the task
of language modeling. To capture new content, our approach will run
continuously to keep increasing the corpus over time.
|
Lucy Linder, Michael Jungo, Jean Hennebert, Claudiu Musat, Andreas
Fischer
|
Automatic Creation of Text Corpora for Low-Resource Languages from the
Internet: The Case of Swiss German
|
Proceedings of The 12th Language Resources and Evaluation
Conference, LREC (2020) 2706-2711
|
cs.CL
| 2020-06-17T00:00:00 |
2010.11428
|
For various speech-related tasks, confidence scores from a speech recogniser
are a useful measure to assess the quality of transcriptions. In traditional
hidden Markov model-based automatic speech recognition (ASR) systems,
confidence scores can be reliably obtained from word posteriors in decoding
lattices. However, for an ASR system with an auto-regressive decoder, such as
an attention-based sequence-to-sequence model, computing word posteriors is
difficult. An obvious alternative is to use the decoder softmax probability as
the model confidence. In this paper, we first examine how some commonly used
regularisation methods influence the softmax-based confidence scores and study
the overconfident behaviour of end-to-end models. Then we propose a lightweight
and effective approach named confidence estimation module (CEM) on top of an
existing end-to-end ASR model. Experiments on LibriSpeech show that CEM can
mitigate the overconfidence problem and can produce more reliable confidence
scores with and without shallow fusion of a language model. Further analysis
shows that CEM generalises well to speech from a moderately mismatched domain
and can potentially improve downstream tasks such as semi-supervised learning.
|
Qiujia Li, David Qiu, Yu Zhang, Bo Li, Yanzhang He, Philip C.
Woodland, Liangliang Cao, Trevor Strohman
|
Confidence Estimation for Attention-based Sequence-to-sequence Models
for Speech Recognition
| null |
eess.AS cs.CL cs.LG
| 2020-10-27T00:00:00 |
2310.01041
|
Despite the remarkable advances in language modeling, current mainstream
decoding methods still struggle to generate texts that align with human texts
across different aspects. In particular, sampling-based methods produce
less-repetitive texts which are often disjunctive in discourse, while
search-based methods maintain topic coherence at the cost of increased
repetition. Overall, these methods fall short in achieving holistic alignment
across a broad range of aspects. In this work, we frame decoding from a
language model as an optimization problem with the goal of strictly matching
the expected performance with human texts measured by multiple metrics of
desired aspects simultaneously. The resulting decoding distribution enjoys an
analytical solution that scales the input language model distribution via a
sequence-level energy function defined by these metrics. And most importantly,
we prove that this induced distribution is guaranteed to improve the perplexity
on human texts, which suggests a better approximation to the underlying
distribution of human texts. To facilitate tractable sampling from this
globally normalized distribution, we adopt the Sampling-Importance-Resampling
technique. Experiments on various domains and model scales demonstrate the
superiority of our method in metrics alignment with human texts and human
evaluation over strong baselines.
|
Haozhe Ji, Pei Ke, Hongning Wang, Minlie Huang
|
Language Model Decoding as Direct Metrics Optimization
|
The Twelfth International Conference on Learning Representations
(ICLR 2024)
|
cs.CL
| 2024-06-06T00:00:00 |
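A rough sketch of the Sampling-Importance-Resampling step mentioned in the record above: candidates are drawn from the base LM and one is resampled with weights given by the sequence-level energy term. `sample_from_lm` and `energy` are placeholders, and the sign convention of the energy is an assumption here.

```python
import math
import random

def sir_decode(prompt, sample_from_lm, energy, n_candidates=16):
    # Draw proposal continuations from the base language model.
    candidates = [sample_from_lm(prompt) for _ in range(n_candidates)]
    # Since proposals already come from the base LM, the importance weight of a
    # candidate reduces to the exponentiated sequence-level energy term.
    log_w = [energy(y) for y in candidates]
    m = max(log_w)
    weights = [math.exp(lw - m) for lw in log_w]
    return random.choices(candidates, weights=weights, k=1)[0]
```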
2202.12226
|
Sampling is a promising bottom-up method for exposing what generative models
have learned about language, but it remains unclear how to generate
representative samples from popular masked language models (MLMs) like BERT.
The MLM objective yields a dependency network with no guarantee of consistent
conditional distributions, posing a problem for naive approaches. Drawing from
theories of iterated learning in cognitive science, we explore the use of
serial reproduction chains to sample from BERT's priors. In particular, we
observe that a unique and consistent estimator of the ground-truth joint
distribution is given by a Generative Stochastic Network (GSN) sampler, which
randomly selects which token to mask and reconstruct on each step. We show that
the lexical and syntactic statistics of sentences from GSN chains closely match
the ground-truth corpus distribution and perform better than other methods in a
large corpus of naturalness judgments. Our findings establish a firmer
theoretical foundation for bottom-up probing and highlight richer deviations
from human priors.
|
Takateru Yamakoshi, Thomas L. Griffiths, Robert D. Hawkins
|
Probing BERT's priors with serial reproduction chains
| null |
cs.CL
| 2022-03-21T00:00:00 |
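A small sketch of a Generative Stochastic Network-style chain over BERT, as described in the record above: at each step a random position is masked and resampled from the masked-LM conditional. It assumes the `transformers` and `torch` packages; the model name and chain length are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def gsn_step(token_ids):
    # Pick a random non-special position, mask it, and resample it from BERT.
    pos = torch.randint(1, len(token_ids) - 1, (1,)).item()  # skip [CLS]/[SEP]
    masked = token_ids.clone()
    masked[pos] = tok.mask_token_id
    with torch.no_grad():
        logits = model(masked.unsqueeze(0)).logits[0, pos]
    token_ids[pos] = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()
    return token_ids

ids = tok("the cat sat on the mat", return_tensors="pt")["input_ids"][0]
for _ in range(50):  # run the serial reproduction chain for a few steps
    ids = gsn_step(ids)
print(tok.decode(ids, skip_special_tokens=True))
```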
1902.06000
|
Semantic parsing using hierarchical representations has recently been
proposed for task oriented dialog with promising results [Gupta et al 2018]. In
this paper, we present three different improvements to the model:
contextualized embeddings, ensembling, and pairwise re-ranking based on a
language model. We taxonomize the errors possible for the hierarchical
representation, such as wrong top intent, missing spans or split spans, and
show that the three approaches correct different kinds of errors. The best
model combines the three techniques and gives 6.4% better exact match accuracy
than the state-of-the-art, with an error reduction of 33%, resulting in a new
state-of-the-art result on the Task Oriented Parsing (TOP) dataset.
|
Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal
Mohit, Mike Lewis, Luke Zettlemoyer
|
Improving Semantic Parsing for Task Oriented Dialog
| null |
cs.CL cs.AI
| 2019-02-19T00:00:00 |
2406.02378
|
Large Language Models (LLMs) are able to improve their responses when
instructed to do so, a capability known as self-correction. When instructions
provide only the task's goal without specific details about potential issues in
the response, LLMs must rely on their internal knowledge to improve response
quality, a process referred to as intrinsic self-correction. The empirical
success of intrinsic self-correction is evident in various applications, but
how and why it is effective remains unknown. In this paper, we unveil that
intrinsic self-correction can be progressively improved, allowing it to
approach a converged state. Our findings are verified in: (1) the scenario of
multi-round question answering, by comprehensively demonstrating that intrinsic
self-correction can progressively introduce performance gains through iterative
interactions, ultimately converging to stable performance; and (2) the context
of intrinsic self-correction for enhanced morality, in which we provide
empirical evidence that iteratively applying instructions reduces model
uncertainty towards convergence, which then leads to convergence of both the
calibration error and self-correction performance, ultimately resulting in a
stable state of intrinsic self-correction. Furthermore, we introduce a
mathematical formulation and a simulation task indicating that the latent
concepts activated by self-correction instructions drive the reduction of model
uncertainty. Based on our experimental results and analysis of the convergence
of intrinsic self-correction, we reveal its underlying mechanism: consistent
injected instructions reduce model uncertainty which yields converged, improved
performance.
|
Guangliang Liu, Haitao Mao, Bochuan Cao, Zhiyu Xue, Xitong Zhang,
Rongrong Wang, Jiliang Tang, Kristen Johnson
|
On the Intrinsic Self-Correction Capability of LLMs: Uncertainty and
Latent Concept
| null |
cs.CL
| 2024-11-11T00:00:00 |
1602.07393
|
Authorship attribution refers to the task of automatically determining the
author based on a given sample of text. It is a problem with a long history and
has a wide range of applications. Building author profiles using language models
is one of the most successful methods to automate this task. New language
modeling methods based on neural networks alleviate the curse of dimensionality
and usually outperform conventional N-gram methods. However, there has not
been much research applying them to authorship attribution. In this paper, we
present a novel setup of a Neural Network Language Model (NNLM) and apply it to
a database of text samples from different authors. We investigate how the NNLM
performs on a task with moderate author set size and relatively limited
training and test data, and how the topics of the text samples affect the
accuracy. NNLM achieves nearly 2.5% reduction in perplexity, a measurement of
fitness of a trained language model to the test data. Given 5 random test
sentences, it also increases the author classification accuracy by 3.43% on
average, compared with the N-gram methods using SRILM tools. An open source
implementation of our methodology is freely available at
https://github.com/zge/authorship-attribution/.
|
Zhenhao Ge and Yufang Sun
|
Domain Specific Author Attribution Based on Feedforward Neural Network
Language Models
| null |
cs.CL cs.LG cs.NE
| 2016-02-25T00:00:00 |
1909.02560
|
Revealing the robustness issues of natural language processing models and
improving their robustness is important to their performance under difficult
situations. In this paper, we study the robustness of paraphrase identification
models from a new perspective -- via modification with shared words, and we
show that the models have significant robustness issues when facing such
modifications. To modify an example consisting of a sentence pair, we either
replace some words shared by both sentences or introduce new shared words. We
aim to construct a valid new example such that a target model makes a wrong
prediction. To find a modification solution, we use beam search constrained by
heuristic rules, and we leverage a BERT masked language model for generating
substitution words compatible with the context. Experiments show that the
performance of the target models has a dramatic drop on the modified examples,
thereby revealing the robustness issue. We also show that adversarial training
can mitigate this issue.
|
Zhouxing Shi, Minlie Huang
|
Robustness to Modification with Shared Words in Paraphrase
Identification
| null |
cs.CL
| 2020-10-06T00:00:00 |
1908.05731
|
Previous work on neural noisy channel modeling relied on latent variable
models that incrementally process the source and target sentence. This makes
decoding decisions based on partial source prefixes even though the full source
is available. We pursue an alternative approach based on standard sequence to
sequence models which utilize the entire source. These models perform
remarkably well as channel models, even though they have neither been trained
on, nor designed to factor over incomplete target sentences. Experiments with
neural language models trained on billions of words show that noisy channel
models can outperform a direct model by up to 3.2 BLEU on WMT'17 German-English
translation. We evaluate on four language-pairs and our channel models
consistently outperform strong alternatives such as right-to-left reranking models
and ensembles of direct models.
|
Kyra Yee and Nathan Ng and Yann N. Dauphin and Michael Auli
|
Simple and Effective Noisy Channel Modeling for Neural Machine
Translation
| null |
cs.CL
| 2019-08-19T00:00:00 |
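A compact sketch of noisy-channel reranking in the spirit of the record above: candidate translations are rescored with a direct model, a channel (target-to-source) model, and a target-side language model. The scoring callables and interpolation weights are placeholders, not the authors' trained systems.

```python
def rerank(source, candidates, direct_lp, channel_lp, lm_lp, lam1=1.0, lam2=1.0):
    # Combine direct, channel, and LM log-scores and keep the best candidate.
    def score(target):
        return (direct_lp(source, target)            # log p(target | source)
                + lam1 * channel_lp(target, source)  # log p(source | target)
                + lam2 * lm_lp(target))              # log p_LM(target)
    return max(candidates, key=score)
```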
1904.03651
|
Neural sequence-to-sequence models are currently the dominant approach in
several natural language processing tasks, but require large parallel corpora.
We present a sequence-to-sequence-to-sequence autoencoder (SEQ^3), consisting
of two chained encoder-decoder pairs, with words used as a sequence of discrete
latent variables. We apply the proposed model to unsupervised abstractive
sentence compression, where the first and last sequences are the input and
reconstructed sentences, respectively, while the middle sequence is the
compressed sentence. Constraining the length of the latent word sequences
forces the model to distill important information from the input. A pretrained
language model, acting as a prior over the latent sequences, encourages the
compressed sentences to be human-readable. Continuous relaxations enable us to
sample from categorical distributions, allowing gradient-based optimization,
unlike alternatives that rely on reinforcement learning. The proposed model
does not require parallel text-summary pairs, achieving promising results in
unsupervised sentence compression on benchmark datasets.
|
Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, Alexandros
Potamianos
|
SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for
Unsupervised Abstractive Sentence Compression
| null |
cs.CL
| 2019-06-11T00:00:00 |
2109.04867
|
As neural language models approach human performance on NLP benchmark tasks,
their advances are widely seen as evidence of an increasingly complex
understanding of syntax. This view rests upon a hypothesis that has not yet
been empirically tested: that word order encodes meaning essential to
performing these tasks. We refute this hypothesis in many cases: in the GLUE
suite and in various genres of English text, the words in a sentence or phrase
can rarely be permuted to form a phrase carrying substantially different
information. Our surprising result relies on inference by iterative shuffling
(IBIS), a novel, efficient procedure that finds the ordering of a bag of words
having the highest likelihood under a fixed language model. IBIS can use any
black-box model without additional training and is superior to existing word
ordering algorithms. Coalescing our findings, we discuss how shuffling
inference procedures such as IBIS can benefit language modeling and constrained
generation.
|
Nikolay Malkin, Sameera Lanka, Pranav Goel, Nebojsa Jojic
|
Studying word order through iterative shuffling
| null |
cs.CL
| 2021-09-13T00:00:00 |
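The record above orders a bag of words by likelihood under a fixed LM. The sketch below is a simplified greedy stand-in for that idea, not the exact IBIS procedure; `log_likelihood` is a placeholder for a black-box LM scorer over a token sequence.

```python
import random

def order_bag_of_words(words, log_likelihood, n_rounds=10):
    order = list(words)
    random.shuffle(order)
    for _ in range(n_rounds):
        for i in range(len(order)):
            w = order.pop(i)
            # Try re-inserting the removed word at every position and keep
            # the placement the language model scores highest.
            best = max(range(len(order) + 1),
                       key=lambda j: log_likelihood(order[:j] + [w] + order[j:]))
            order.insert(best, w)
    return order
```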
1909.08582
|
Training code-switched language models is difficult due to lack of data and
complexity in the grammatical structure. Linguistic constraint theories have
been used for decades to generate artificial code-switching sentences to cope
with this issue. However, this requires external word alignments or constituency
parsers that create erroneous results on distant languages. We propose a
sequence-to-sequence model using a copy mechanism to generate code-switching
data by leveraging parallel monolingual translations from a limited source of
code-switching data. The model learns how to combine words from parallel
sentences and identifies when to switch one language to the other. Moreover, it
captures code-switching constraints by attending and aligning the words in
inputs, without requiring any external knowledge. Based on experimental
results, the language model trained with the generated sentences achieves
state-of-the-art performance and improves end-to-end automatic speech
recognition.
|
Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, Pascale Fung
|
Code-Switched Language Models Using Neural Based Synthetic Data from
Parallel Sentences
| null |
cs.CL
| 2019-09-19T00:00:00 |
1611.01702
|
In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based
language model designed to directly capture the global semantic meaning
relating words in a document via latent topics. Because of their sequential
nature, RNNs are good at capturing the local structure of a word sequence -
both semantic and syntactic - but might face difficulty remembering long-range
dependencies. Intuitively, these long-range dependencies are of semantic
nature. In contrast, latent topic models are able to capture the global
underlying semantic structure of a document but do not account for word
ordering. The proposed TopicRNN model integrates the merits of RNNs and latent
topic models: it captures local (syntactic) dependencies using an RNN and
global (semantic) dependencies using latent topics. Unlike previous work on
contextual RNN language modeling, our model is learned end-to-end. Empirical
results on word prediction show that TopicRNN outperforms existing contextual
RNN baselines. In addition, TopicRNN can be used as an unsupervised feature
extractor for documents. We do this for sentiment analysis on the IMDB movie
review dataset and report an error rate of $6.28\%$. This is comparable to the
state-of-the-art $5.91\%$ resulting from a semi-supervised approach. Finally,
TopicRNN also yields sensible topics, making it a useful alternative to
document models such as latent Dirichlet allocation.
|
Adji B. Dieng, Chong Wang, Jianfeng Gao, John Paisley
|
TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency
| null |
cs.CL cs.AI cs.LG stat.ML
| 2017-02-28T00:00:00 |
cmp-lg/9502029
|
This paper proposes a corpus-based language model for topic identification.
We analyze the association of noun-noun and noun-verb pairs in LOB Corpus. The
word association norms are based on three factors: 1) word importance, 2) pair
co-occurrence, and 3) distance. They are trained on the paragraph and sentence
levels for noun-noun and noun-verb pairs, respectively. Under the topic
coherence postulation, the nouns that have the strongest connectivities with
the other nouns and verbs in the discourse form the preferred topic set. The
collocational semantics then is used to identify the topics from paragraphs and
to discuss the topic shift phenomenon among paragraphs.
|
Kuang-hua Chen (Department of Computer Science and Information
Engineering, National Taiwan University)
|
Topic Identification in Discourse
| null |
cmp-lg cs.CL
| 2016-08-31T00:00:00 |
1609.03777
|
Recurrent neural network (RNN) based character-level language models (CLMs)
are extremely useful for modeling out-of-vocabulary words by nature. However,
their performance is generally much worse than that of word-level language models
(WLMs), since CLMs need to consider a longer history of tokens to properly
predict the next one. We address this problem by proposing hierarchical RNN
architectures, which consist of multiple modules with different timescales.
Despite the multi-timescale structures, the input and output layers operate
with the character-level clock, which allows the existing RNN CLM training
approaches to be directly applicable without any modifications. Our CLM models
show better perplexity than Kneser-Ney (KN) 5-gram WLMs on the One Billion Word
Benchmark with only 2% of parameters. Also, we present real-time
character-level end-to-end speech recognition examples on the Wall Street
Journal (WSJ) corpus, where replacing traditional mono-clock RNN CLMs with the
proposed models results in better recognition accuracies even though the number
of parameters is reduced to 30%.
|
Kyuyeon Hwang, Wonyong Sung
|
Character-Level Language Modeling with Hierarchical Recurrent Neural
Networks
| null |
cs.LG cs.CL cs.NE
| 2017-02-03T00:00:00 |
2203.11856
|
Analyzing gender is critical to studying mental health (MH) support in CVD
(cardiovascular disease). The existing studies on using social media for
extracting MH symptoms consider symptom detection and tend to ignore user
context, disease, or gender. The current study aims to design and evaluate a
system to capture how MH symptoms associated with CVD are expressed differently
with the gender on social media. We observe that the reliable detection of MH
symptoms expressed by persons with heart disease in user posts is challenging
because of the co-existence of (dis)similar MH symptoms in one post and due to
variation in the description of symptoms based on gender. We collect a corpus
of $150k$ items (posts and comments) annotated using the subreddit labels and
transfer learning approaches. We propose GeM, a novel task-adaptive multi-task
learning approach to identify the MH symptoms in CVD patients based on gender.
Specifically, we adapt a knowledge-assisted RoBERTa based bi-encoder model to
capture CVD-related MH symptoms. Moreover, it is more reliable at
differentiating gendered language in MH symptoms than state-of-the-art
language models. Our model achieves high (statistically
significant) performance and predicts four labels of MH issues and two gender
labels, which outperforms RoBERTa, improving the recall by 2.14% on the symptom
identification task and by 2.55% on the gender identification task.
|
Usha Lokala, Aseem Srivastava, Triyasha Ghosh Dastidar, Tanmoy
Chakraborty, Md Shad Akthar, Maryam Panahiazar, and Amit Sheth
|
A Computational Approach to Understand Mental Health from Reddit:
Knowledge-aware Multitask Learning Framework
| null |
cs.CL cs.AI
| 2022-03-23T00:00:00 |
1906.00346
|
Medication recommendation is an important healthcare application. It is
commonly formulated as a temporal prediction task. Hence, most existing works
only utilize longitudinal electronic health records (EHRs) from a small number
of patients with multiple visits, ignoring a large number of patients with a
single visit (selection bias). Moreover, important hierarchical knowledge such
as diagnosis hierarchy is not leveraged in the representation learning process.
To address these challenges, we propose G-BERT, a new model to combine the
power of Graph Neural Networks (GNNs) and BERT (Bidirectional Encoder
Representations from Transformers) for medical code representation and
medication recommendation. We use GNNs to represent the internal hierarchical
structures of medical codes. Then we integrate the GNN representation into a
transformer-based visit encoder and pre-train it on EHR data from patients only
with a single visit. The pre-trained visit encoder and representation are then
fine-tuned for downstream predictive tasks on longitudinal EHRs from patients
with multiple visits. G-BERT is the first to bring the language model
pre-training schema into the healthcare domain and it achieved state-of-the-art
performance on the medication recommendation task.
|
Junyuan Shang, Tengfei Ma, Cao Xiao, Jimeng Sun
|
Pre-training of Graph Augmented Transformers for Medication
Recommendation
| null |
cs.AI cs.CL cs.LG
| 2019-11-28T00:00:00 |
1709.01679
|
This study addresses the problem of identifying the meaning of unknown words
or entities in a discourse with respect to the word embedding approaches used
in neural language models. We propose a method for on-the-fly construction and
exploitation of word embeddings in both the input and output layers of a neural
model by tracking contexts. This extends the dynamic entity representation used
in Kobayashi et al. (2016) and incorporates a copy mechanism proposed
independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we
construct a new task and dataset called Anonymized Language Modeling for
evaluating the ability to capture word meanings while reading. Experiments
conducted using our novel dataset show that the proposed variant of RNN
language model outperformed the baseline model. Furthermore, the experiments
also demonstrate that dynamic updates of an output layer help a model predict
reappearing entities, whereas those of an input layer are effective to predict
words following reappearing entities.
|
Sosuke Kobayashi, Naoaki Okazaki, Kentaro Inui
|
A Neural Language Model for Dynamically Representing the Meanings of
Unknown Words and Entities in a Discourse
| null |
cs.CL
| 2017-10-18T00:00:00 |
1611.00196
|
In many natural language processing (NLP) tasks, a document is commonly
modeled as a bag of words using the term frequency-inverse document frequency
(TF-IDF) vector. One major shortcoming of the frequency-based TF-IDF feature
vector is that it ignores word order, which carries syntactic and semantic
relationships among the words in a document and can be important in some
NLP tasks such as genre classification. This paper proposes a novel distributed
vector representation of a document: a simple recurrent-neural-network language
model (RNN-LM) or a long short-term memory RNN language model (LSTM-LM) is
first created from all documents in a task; some of the LM parameters are then
adapted by each document, and the adapted parameters are vectorized to
represent the document. The new document vectors are labeled as DV-RNN and
DV-LSTM respectively. We believe that our new document vectors can capture some
high-level sequential information in the documents, which other current
document representations fail to capture. The new document vectors were
evaluated in the genre classification of documents in three corpora: the Brown
Corpus, the BNC Baby Corpus and an artificially created Penn Treebank dataset.
Their classification performances are compared with the performance of TF-IDF
vector and the state-of-the-art distributed memory model of paragraph vector
(PV-DM). The results show that DV-LSTM significantly outperforms TF-IDF and
PV-DM in most cases, and combinations of the proposed document vectors with
TF-IDF or PV-DM may further improve performance.
|
Wei Li, Brian Kan Wing Mak
|
Recurrent Neural Network Language Model Adaptation Derived Document
Vector
| null |
cs.CL
| 2016-12-15T00:00:00 |
1806.05059
|
Sequence-to-sequence attention-based models integrate an acoustic,
pronunciation and language model into a single neural network, which makes them
very suitable for multilingual automatic speech recognition (ASR). In this
paper, we are concerned with multilingual speech recognition on low-resource
languages using a single Transformer, a sequence-to-sequence attention-based
model. Sub-words are employed as the multilingual modeling unit without using
any pronunciation lexicon. First, we show that a single multilingual ASR
Transformer performs well on low-resource languages despite some language
confusion. We then look at incorporating language information into the model by
inserting the language symbol at the beginning or at the end of the original
sub-word sequence, assuming the language information is known
during training. Experiments on CALLHOME datasets demonstrate that the
multilingual ASR Transformer with the language symbol at the end performs
better and can obtain a relative 10.5\% average word error rate (WER) reduction
compared to SHL-MLSTM with residual learning. We go on to show that, assuming
the language information is known during training and testing, a relative
12.4\% average WER reduction can be observed compared to SHL-MLSTM
with residual learning by giving the language symbol as the sentence start
token.
|
Shiyu Zhou, Shuang Xu, Bo Xu
|
Multilingual End-to-End Speech Recognition with A Single Transformer on
Low-Resource Languages
| null |
eess.AS cs.CL cs.SD
| 2018-06-15T00:00:00 |
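For concreteness, the target-sequence construction compared in the record above amounts to inserting a language symbol at either end of the sub-word sequence; the sketch below shows that construction with illustrative tokens.

```python
def add_language_symbol(subwords, lang, position="end"):
    # Insert a language symbol at the start or end of the sub-word sequence.
    symbol = f"<{lang}>"
    return subwords + [symbol] if position == "end" else [symbol] + subwords

print(add_language_symbol(["▁hel", "lo"], "en", position="end"))    # ['▁hel', 'lo', '<en>']
print(add_language_symbol(["▁hel", "lo"], "en", position="begin"))  # ['<en>', '▁hel', 'lo']
```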
1908.10322
|
Purely character-based language models (LMs) have been lagging in quality on
large scale datasets, and current state-of-the-art LMs rely on word
tokenization. It has been assumed that injecting the prior knowledge of a
tokenizer into the model is essential to achieving competitive results. In this
paper, we show that contrary to this conventional wisdom, tokenizer-free LMs
with sufficient capacity can achieve competitive performance on a large scale
dataset. We train a vanilla transformer network with 40 self-attention layers
on the One Billion Word (lm1b) benchmark and achieve a new state of the art for
tokenizer-free LMs, pushing these models to be on par with their word-based
counterparts.
|
Dokook Choe, Rami Al-Rfou, Mandy Guo, Heeyoung Lee, Noah Constant
|
Bridging the Gap for Tokenizer-Free Language Models
| null |
cs.CL cs.AI cs.IR cs.LG
| 2019-08-28T00:00:00 |
2203.00759
|
Prompt-Tuning is a new paradigm for finetuning pre-trained language models in
a parameter-efficient way. Here, we explore the use of HyperNetworks to
generate hyper-prompts: we propose HyperPrompt, a novel architecture for
prompt-based task-conditioning of self-attention in Transformers. The
hyper-prompts are end-to-end learnable via generation by a HyperNetwork.
HyperPrompt allows the network to learn task-specific feature maps where the
hyper-prompts serve as task global memories for the queries to attend to, at
the same time enabling flexible information sharing among tasks. We show that
HyperPrompt is competitive against strong multi-task learning baselines with as
few as $0.14\%$ of additional task-conditioning parameters, achieving great
parameter and computational efficiency. Through extensive empirical
experiments, we demonstrate that HyperPrompt can achieve superior performances
over strong T5 multi-task learning baselines and parameter-efficient adapter
variants including Prompt-Tuning and HyperFormer++ on Natural Language
Understanding benchmarks of GLUE and SuperGLUE across many model sizes.
|
Yun He, Huaixiu Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi
Aribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Metzler, Heng-Tze Cheng, Ed
H. Chi
|
HyperPrompt: Prompt-based Task-Conditioning of Transformers
| null |
cs.CL cs.LG
| 2022-06-16T00:00:00 |
2410.01487
|
Recent work investigates whether LMs learn human-like linguistic
generalizations and representations from developmentally plausible amounts of
data. Yet, the basic linguistic units processed in these LMs are determined by
subword-based tokenization, which limits their validity as models of learning
at and below the word level. In this paper, we explore the potential of
tokenization-free, phoneme- and grapheme-based language models. We demonstrate
that small models based on the Llama architecture can achieve strong linguistic
performance on standard syntactic and novel lexical/phonetic benchmarks when
trained with character-level vocabularies. We further show that phoneme-based
models almost match grapheme-based models in standard tasks and novel
evaluations. Our findings suggest a promising direction for creating more
linguistically plausible language models that are better suited for
computational studies of language acquisition and processing.
|
Bastian Bunzeck, Daniel Duran, Leonie Schade, Sina Zarrie{\ss}
|
Small Language Models Also Work With Small Vocabularies: Probing the
Linguistic Abilities of Grapheme- and Phoneme-Based Baby Llamas
| null |
cs.CL
| 2025-01-07T00:00:00 |
2305.10786
|
Prior studies diagnose the anisotropy problem in sentence representations
from pre-trained language models, e.g., BERT, without fine-tuning. Our analysis
reveals that the sentence embeddings from BERT suffer from a bias towards
uninformative words, limiting the performance in semantic textual similarity
(STS) tasks. To address this bias, we propose a simple and efficient
unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words
with model-based importance estimations and computes the weighted average of
word representations from pre-trained models as sentence embeddings. Ditto can
be easily applied to any pre-trained language model as a postprocessing
operation. Compared to prior sentence embedding approaches, Ditto does not add
parameters nor requires any learning. Empirical evaluations demonstrate that
our proposed Ditto can alleviate the anisotropy problem and improve various
pre-trained models on STS tasks.
|
Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, Chong Deng, Hai Yu,
Jiaqing Liu, Yukun Ma, Chong Zhang
|
Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings
| null |
cs.CL
| 2023-10-24T00:00:00 |
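A minimal sketch of diagonal-attention-style pooling in the spirit of the record above: token states are averaged with weights taken from the diagonal of a self-attention map. It assumes the `transformers` and `torch` packages; the model name and the choice of layer and head-averaging are assumptions for illustration, not necessarily the configuration used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def diagonal_attention_embedding(sentence, layer=-1):
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    hidden = out.last_hidden_state[0]            # (seq, dim) token representations
    attn = out.attentions[layer][0].mean(dim=0)  # (seq, seq), averaged over heads
    weights = torch.diagonal(attn)               # per-token importance estimate
    weights = weights / weights.sum()
    return (weights.unsqueeze(-1) * hidden).sum(dim=0)  # weighted average, (dim,)

print(diagonal_attention_embedding("A quick brown fox.").shape)
```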
2106.08367
|
Transformer-based language models benefit from conditioning on contexts of
hundreds to thousands of previous tokens. What aspects of these contexts
contribute to accurate model prediction? We describe a series of experiments
that measure usable information by selectively ablating lexical and structural
information in transformer language models trained on English Wikipedia. In
both mid- and long-range contexts, we find that several extremely destructive
context manipulations -- including shuffling word order within sentences and
deleting all words other than nouns -- remove less than 15% of the usable
information. Our results suggest that long contexts, but not their detailed
syntactic and propositional content, are important for the low perplexity of
current transformer language models.
|
Joe O'Connor and Jacob Andreas
|
What Context Features Can Transformer Language Models Use?
| null |
cs.CL
| 2021-06-17T00:00:00 |
2502.11843
|
Large Language Models (LLMs) are widely used as conversational agents,
exploiting their capabilities in various sectors such as education, law,
medicine, and more. However, LLMs are often subject to context-shifting
behaviour, resulting in a lack of consistent and interpretable
personality-aligned interactions. Adherence to psychological traits lacks
comprehensive analysis, especially in the case of dyadic (pairwise)
conversations. We examine this challenge from two viewpoints, initially using
two conversation agents to generate a discourse on a certain topic with an
assigned personality from the OCEAN framework (Openness, Conscientiousness,
Extraversion, Agreeableness, and Neuroticism) as High/Low for each trait. This
is followed by using multiple judge agents to infer the original traits
assigned to explore prediction consistency, inter-model agreement, and
alignment with the assigned personality. Our findings indicate that while LLMs
can be guided toward personality-driven dialogue, their ability to maintain
personality traits varies significantly depending on the combination of models
and discourse settings. These inconsistencies emphasise the challenges in
achieving stable and interpretable personality-aligned interactions in LLMs.
|
Pranav Bhandari and Nicolas Fay and Michael Wise and Amitava Datta and
Stephanie Meek and Usman Naseem and Mehwish Nasim
|
Can LLM Agents Maintain a Persona in Discourse?
| null |
cs.CL cs.AI cs.SI
| 2025-02-18T00:00:00 |
1907.03064
|
We present improvements in automatic speech recognition (ASR) for Somali, a
currently extremely under-resourced language. This forms part of a continuing
United Nations (UN) effort to employ ASR-based keyword spotting systems to
support humanitarian relief programmes in rural Africa. Using just 1.57 hours
of annotated speech data as a seed corpus, we increase the pool of training
data by applying semi-supervised training to 17.55 hours of untranscribed
speech. We make use of factorised time-delay neural networks (TDNN-F) for
acoustic modelling, since these have recently been shown to be effective in
resource-scarce situations. Three semi-supervised training passes were
performed, where the decoded output from each pass was used for acoustic model
training in the subsequent pass. The automatic transcriptions from the best
performing pass were used for language model augmentation. To ensure the
quality of automatic transcriptions, decoder confidence is used as a threshold.
The acoustic and language models obtained from the semi-supervised approach
show significant improvement in terms of WER and perplexity compared to the
baseline. Incorporating the automatically generated transcriptions yields a
6.55\% improvement in language model perplexity. The use of 17.55 hours of
Somali acoustic data in semi-supervised training shows an improvement of 7.74\%
relative over the baseline.
|
Astik Biswas, Raghav Menon, Ewald van der Westhuizen, Thomas Niesler
|
Improved low-resource Somali speech recognition by semi-supervised
acoustic and language model training
| null |
cs.CL cs.LG eess.AS
| 2019-07-09T00:00:00 |
2202.13047
|
Crowdsourced dialogue corpora are usually limited in scale and topic coverage
due to the expensive cost of data curation. This would hinder the
generalization of downstream dialogue models to open-domain topics. In this
work, we leverage large language models for dialogue augmentation in the task
of emotional support conversation (ESC). By treating dialogue augmentation as a
dialogue completion task, we prompt a fine-tuned language model to complete
full dialogues from available dialogue posts of various topics, which are then
postprocessed based on heuristics. Applying this approach, we construct AugESC,
an augmented dataset for the ESC task, which largely extends the scale and
topic coverage of the crowdsourced ESConv corpus. Through comprehensive human
evaluation, we demonstrate that our approach is superior to strong baselines of
dialogue augmentation and that AugESC has comparable dialogue quality to the
crowdsourced corpus. We also conduct human interactive evaluation and prove
that post-training on AugESC improves downstream dialogue models'
generalization ability to open-domain topics. These results suggest the utility
of AugESC and highlight the potential of large language models in improving
data-scarce dialogue generation tasks.
|
Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng Zhang, Minlie Huang
|
AugESC: Dialogue Augmentation with Large Language Models for Emotional
Support Conversation
| null |
cs.CL
| 2023-05-19T00:00:00 |
2111.11520
|
Open book question answering is a subset of question answering tasks where
the system aims to find answers in a given set of documents (open-book) and
common knowledge about a topic. This article proposes a solution for answering
natural language questions from a corpus of Amazon Web Services (AWS) technical
documents with no domain-specific labeled data (zero-shot). These questions can
have yes-no-none answers, short answers, long answers, or any combination of
the above. This solution comprises a two-step architecture in which a retriever
finds the right document and an extractor finds the answers in the retrieved
document. We are introducing a new test dataset for open-book QA based on real
customer questions on AWS technical documentation. After experimenting with
several information retrieval systems and extractor models based on extractive
language models, the solution attempts to find the yes-no-none answers and text
answers in the same pass. The model is trained on the Stanford Question
Answering Dataset (SQuAD; Rajpurkar et al., 2016) and Natural Questions
(Kwiatkowski et al., 2019) datasets. We were able to achieve 49% F1 and 39%
exact match score (EM) end-to-end with no domain-specific training.
|
Sia Gholami and Mehdi Noori
|
Zero-Shot Open-Book Question Answering
| null |
cs.CL cs.IR cs.LG
| 2021-11-24T00:00:00 |
1601.00248
|
Perplexity (per word) is the most widely used metric for evaluating language
models. Despite this, there has been no dearth of criticism for this metric.
Most of these criticisms center around lack of correlation with extrinsic
metrics like word error rate (WER), dependence upon shared vocabulary for model
comparison and unsuitability for unnormalized language model evaluation. In
this paper, we address the last problem and propose a new discriminative
entropy based intrinsic metric that works for both traditional word level
models and unnormalized language models like sentence level models. We also
propose a discriminatively trained sentence level interpretation of recurrent
neural network based language model (RNN) as an example of unnormalized
sentence level model. We demonstrate that for word level models, contrastive
entropy shows a strong correlation with perplexity. We also observe that when
trained at lower distortion levels, sentence level RNN considerably outperforms
traditional RNNs on this new metric.
|
Kushal Arora, Anand Rangarajan
|
Contrastive Entropy: A new evaluation metric for unnormalized language
models
| null |
cs.CL
| 2016-04-01T00:00:00 |
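For reference alongside the record above, per-word perplexity is simply the exponential of the average negative log-likelihood the model assigns to the test words; a tiny sketch with placeholder log-probabilities:

```python
import math

def perplexity(word_log_probs):
    # word_log_probs: per-word natural-log probabilities from some model.
    avg_nll = -sum(word_log_probs) / len(word_log_probs)
    return math.exp(avg_nll)

print(perplexity([math.log(0.1)] * 20))  # ~10.0 when every word gets probability 0.1
```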
1804.08881
|
Language models have primarily been evaluated with perplexity. While
perplexity quantifies the most comprehensible prediction performance, it does
not provide qualitative information on the success or failure of models.
Another approach for evaluating language models is thus proposed, using the
scaling properties of natural language. Five such tests are considered, with
the first two accounting for the vocabulary population and the other three for
the long memory of natural language. The following models were evaluated with
these tests: n-grams, probabilistic context-free grammar (PCFG), Simon and
Pitman-Yor (PY) processes, hierarchical PY, and neural language models. Only
the neural language models exhibit the long memory properties of natural
language, but to a limited degree. The effectiveness of every test of these
models is also discussed.
|
Shuntaro Takahashi and Kumiko Tanaka-Ishii
|
Assessing Language Models with Scaling Properties
| null |
cs.CL
| 2018-04-25T00:00:00 |
2203.14465
|
Generating step-by-step "chain-of-thought" rationales improves language model
performance on complex reasoning tasks like mathematics or commonsense
question-answering. However, inducing language model rationale generation
currently requires either constructing massive rationale datasets or
sacrificing accuracy by using only few-shot inference. We propose a technique
to iteratively leverage a small number of rationale examples and a large
dataset without rationales, to bootstrap the ability to perform successively
more complex reasoning. This technique, the "Self-Taught Reasoner" (STaR),
relies on a simple loop: generate rationales to answer many questions, prompted
with a few rationale examples; if the generated answers are wrong, try again to
generate a rationale given the correct answer; fine-tune on all the rationales
that ultimately yielded correct answers; repeat. We show that STaR
significantly improves performance on multiple datasets compared to a model
fine-tuned to directly predict final answers, and performs comparably to
fine-tuning a 30$\times$ larger state-of-the-art language model on
CommonsenseQA. Thus, STaR lets a model improve itself by learning from its own
generated reasoning.
|
Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman
|
STaR: Bootstrapping Reasoning With Reasoning
| null |
cs.LG cs.AI cs.CL
| 2022-05-23T00:00:00 |
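A schematic sketch of the STaR loop from the record above, with the language-model calls (`generate_rationale`) and the `finetune` routine left as hypothetical methods rather than real training code.

```python
def star(base_model, dataset, n_iterations=3):
    model = base_model
    for _ in range(n_iterations):
        keep = []
        for question, answer in dataset:
            rationale, predicted = model.generate_rationale(question)
            if predicted != answer:
                # Rationalization: retry with the correct answer given as a hint.
                rationale, predicted = model.generate_rationale(question, hint=answer)
            if predicted == answer:
                keep.append((question, rationale, answer))
        # Fine-tune from the original checkpoint on rationales that led to
        # correct answers, then generate with the improved model next round.
        model = base_model.finetune(keep)
    return model
```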
1711.03953
|
We formulate language modeling as a matrix factorization problem, and show
that the expressiveness of Softmax-based models (including the majority of
neural language models) is limited by a Softmax bottleneck. Given that natural
language is highly context-dependent, this further implies that in practice
Softmax with distributed word embeddings does not have enough capacity to model
natural language. We propose a simple and effective method to address this
issue, and improve the state-of-the-art perplexities on Penn Treebank and
WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on
the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points
in perplexity.
|
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, William W. Cohen
|
Breaking the Softmax Bottleneck: A High-Rank RNN Language Model
| null |
cs.CL cs.LG
| 2018-03-06T00:00:00 |
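A small numerical illustration of the rank argument behind the softmax bottleneck described above: with d-dimensional context and word vectors, the matrix of log-probabilities produced by a softmax has rank at most d + 1, far below the vocabulary size. Sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_contexts, vocab, d = 200, 500, 32

H = rng.standard_normal((n_contexts, d))   # context vectors
W = rng.standard_normal((vocab, d))        # word embeddings
logits = H @ W.T

# Row-wise log-softmax (numerically stable).
m = logits.max(axis=1, keepdims=True)
log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))

# A rank-d matrix plus a rank-1 per-row normalization term: rank <= d + 1 = 33.
print(np.linalg.matrix_rank(log_probs))
```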
2010.03680
|
Sequence labeling is an important technique employed for many Natural
Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot
tagging for dialog systems and semantic parsing. Large-scale pre-trained
language models obtain very good performance on these tasks when fine-tuned on
large amounts of task-specific labeled data. However, such large-scale labeled
datasets are difficult to obtain for several tasks and domains due to the high
cost of human annotation as well as privacy and data access constraints for
sensitive user applications. This is exacerbated for sequence labeling tasks
requiring such annotations at token-level. In this work, we develop techniques
to address the label scarcity challenge for neural sequence labeling models.
Specifically, we develop self-training and meta-learning techniques for
training neural sequence taggers with few labels. While self-training serves as
an effective mechanism to learn from large amounts of unlabeled data --
meta-learning helps in adaptive sample re-weighting to mitigate error
propagation from noisy pseudo-labels. Extensive experiments on six benchmark
datasets including two for massive multilingual NER and four slot tagging
datasets for task-oriented dialog systems demonstrate the effectiveness of our
method. With only 10 labeled examples for each class for each task, our method
obtains a 10% improvement over state-of-the-art systems, demonstrating its
effectiveness for the low-resource setting.
|
Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu,
Jing Gao, Ahmed Hassan Awadallah
|
Adaptive Self-training for Few-shot Neural Sequence Labeling
| null |
cs.CL cs.AI cs.LG
| 2020-12-14T00:00:00 |
2305.14793
|
Methods to generate text from structured data have advanced significantly in
recent years, primarily due to fine-tuning of pre-trained language models on
large datasets. However, such models can fail to produce output faithful to the
input data, particularly on out-of-domain data. Sufficient annotated data is
often not available for specific domains, leading us to seek an unsupervised
approach to improve the faithfulness of output text. Since the problem is
fundamentally one of consistency between the representations of the structured
data and text, we evaluate the effectiveness of cycle training in this work.
Cycle training uses two models which are inverses of each other: one that
generates text from structured data, and one which generates the structured
data from natural language text. We show that cycle training, when initialized
with a small amount of supervised data (100 samples in our case), achieves
nearly the same performance as fully supervised approaches for the data-to-text
generation task on the WebNLG, E2E, WTQ, and WSQL datasets. We perform
extensive empirical analysis with automated evaluation metrics and a newly
designed human evaluation schema to reveal different cycle training strategies'
effectiveness of reducing various types of generation errors. Our code is
publicly available at https://github.com/Edillower/CycleNLG.
|
Zhuoer Wang, Marcus Collins, Nikhita Vedula, Simone Filice, Shervin
Malmasi, Oleg Rokhlenko
|
Faithful Low-Resource Data-to-Text Generation through Cycle Training
| null |
cs.CL
| 2023-07-12T00:00:00 |
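A structural sketch of the cycle-training loop in 2305.14793: a data-to-text model and a text-to-data model are each trained to reconstruct the other's input from its output. The generate/train_step method names and the _Echo smoke-test class are placeholders, not a specific library API.

def cycle_train(d2t, t2d, structured_data, texts, epochs=1):
    for _ in range(epochs):
        for d in structured_data:
            pseudo_text = d2t.generate(d)          # data -> text
            t2d.train_step(pseudo_text, d)         # train t2d to recover the original data
        for t in texts:
            pseudo_data = t2d.generate(t)          # text -> data
            d2t.train_step(pseudo_data, t)         # train d2t to recover the original text
    return d2t, t2d

class _Echo:                                       # trivial stand-in model for a smoke test
    def generate(self, x): return x
    def train_step(self, src, tgt): pass

cycle_train(_Echo(), _Echo(), [{"name": "A"}], ["A is a person."])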
2403.10205
|
While text summarization is a well-known NLP task, in this paper, we
introduce a novel and useful variant of it called functionality extraction from
Git README files. Though this task is a text2text generation at an abstract
level, it involves its own peculiarities and challenges making existing
text2text generation systems not very useful. The motivation behind this task
stems from a recent surge in research and development activities around the use
of large language models for code-related tasks, such as code refactoring, code
summarization, etc. We also release a human-annotated dataset called FuncRead,
and develop a battery of models for the task. Our exhaustive experimentation
shows that small-size fine-tuned models beat any baseline models that can be
designed using popular black-box or white-box large language models (LLMs) such
as ChatGPT and Bard. Our best fine-tuned 7B CodeLlama model exhibits 70% and
20% gains in F1 score over ChatGPT and Bard, respectively.
|
Prince Kumar, Srikanth Tamilselvam, Dinesh Garg
|
Read between the lines -- Functionality Extraction From READMEs
| null |
cs.CL cs.AI
| 2024-03-18T00:00:00 |
2206.13749
|
On e-commerce platforms, predicting if two products are compatible with each
other is an important functionality to achieve trustworthy product
recommendation and search experience for consumers. However, accurately
predicting product compatibility is difficult due to the heterogeneous product
data and the lack of manually curated training data. We study the problem of
discovering effective labeling rules that can enable weakly-supervised product
compatibility prediction. We develop AMRule, a multi-view rule discovery
framework that can (1) adaptively and iteratively discover novel rules that
can complement the current weakly-supervised model to improve compatibility
prediction; (2) discover interpretable rules from both structured attribute
tables and unstructured product descriptions. AMRule adaptively discovers
labeling rules from large-error instances via a boosting-style strategy; the
high-quality rules can remedy the current model's weak spots and refine the
model iteratively. For rule discovery from structured product attributes, we
generate composable high-order rules from decision trees; and for rule
discovery from unstructured product descriptions, we generate prompt-based
rules from a pre-trained language model. Experiments on 4 real-world datasets
show that AMRule outperforms the baselines by 5.98% on average and improves
rule quality and rule proposal efficiency.
|
Rongzhi Zhang, Rebecca West, Xiquan Cui, Chao Zhang
|
Adaptive Multi-view Rule Discovery for Weakly-Supervised Compatible
Products Prediction
| null |
cs.LG cs.CL
| 2022-06-29T00:00:00 |
1906.01733
|
Recent work on Grammatical Error Correction (GEC) has highlighted the
importance of language modeling in that it is certainly possible to achieve
good performance by comparing the probabilities of the proposed edits. At the
same time, advancements in language modeling have managed to generate
linguistic output, which is almost indistinguishable from that of
human-generated text. In this paper, we up the ante by exploring the potential
of more sophisticated language models in GEC and offer some key insights on
their strengths and weaknesses. We show that, in line with recent results in
other NLP tasks, Transformer architectures achieve consistently high
performance and provide a competitive baseline for future machine learning
models.
|
Dimitrios Alikaniotis and Vipul Raheja
|
The Unreasonable Effectiveness of Transformer Language Models in
Grammatical Error Correction
| null |
cs.CL cs.LG cs.NE
| 2019-06-06T00:00:00 |
2310.01041
|
Despite the remarkable advances in language modeling, current mainstream
decoding methods still struggle to generate texts that align with human texts
across different aspects. In particular, sampling-based methods produce
less-repetitive texts which are often disjunctive in discourse, while
search-based methods maintain topic coherence at the cost of increased
repetition. Overall, these methods fall short in achieving holistic alignment
across a broad range of aspects. In this work, we frame decoding from a
language model as an optimization problem with the goal of strictly matching
the expected performance with human texts measured by multiple metrics of
desired aspects simultaneously. The resulting decoding distribution enjoys an
analytical solution that scales the input language model distribution via a
sequence-level energy function defined by these metrics. And most importantly,
we prove that this induced distribution is guaranteed to improve the perplexity
on human texts, which suggests a better approximation to the underlying
distribution of human texts. To facilitate tractable sampling from this
globally normalized distribution, we adopt the Sampling-Importance-Resampling
technique. Experiments on various domains and model scales demonstrate the
superiority of our method in metrics alignment with human texts and human
evaluation over strong baselines.
|
Haozhe Ji, Pei Ke, Hongning Wang, Minlie Huang
|
Language Model Decoding as Direct Metrics Optimization
|
The Twelfth International Conference on Learning Representations
(ICLR 2024)
|
cs.CL
| 2024-06-06T00:00:00 |
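A small numpy illustration of the Sampling-Importance-Resampling step used by 2310.01041 to sample from the base LM distribution rescaled by a sequence-level energy: candidates drawn from the base LM are weighted by the exponential of their energy and resampled. The candidate strings and energies are toy placeholders, not the paper's metrics.

import numpy as np

def sir_decode(candidates, energies, rng=np.random.default_rng(0)):
    # candidates: sequences sampled i.i.d. from the base LM (the proposal);
    # energies: E(y) = sum_k mu_k * f_k(y) for each candidate, so the importance
    # weight p_target / p_LM is proportional to exp(E(y)).
    w = np.exp(energies - np.max(energies))
    w = w / w.sum()
    return candidates[rng.choice(len(candidates), p=w)]

print(sir_decode(["a cat sat.", "the the the", "a dog ran."],
                 np.array([0.3, -2.0, 0.5])))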
2106.02902
|
Probing complex language models has recently revealed several insights into
linguistic and semantic patterns found in the learned representations. In this
article, we probe BERT specifically to understand and measure the relational
knowledge it captures in its parametric memory. While probing for linguistic
understanding is commonly applied to all layers of BERT as well as fine-tuned
models, this has not been done for factual knowledge. We utilize existing
knowledge base completion tasks (LAMA) to probe every layer of pre-trained as
well as fine-tuned BERT models (ranking, question answering, NER). Our findings
show that knowledge is not just contained in BERT's final layers. Intermediate
layers contribute a significant amount (17-60%) to the total knowledge found.
Probing intermediate layers also reveals how different types of knowledge
emerge at varying rates. When BERT is fine-tuned, relational knowledge is
forgotten. The extent of forgetting is impacted by the fine-tuning objective
and the training data. We found that ranking models forget the least and retain
more knowledge in their final layer compared to masked language modeling and
question-answering. However, masked language modeling performed the best at
acquiring new knowledge from the training data. When it comes to learning
facts, we found that capacity and fact density are key factors. We hope this
initial work will spur further research into understanding the parametric
memory of language models and the effect of training objectives on factual
knowledge. The code to repeat the experiments is publicly available on GitHub.
|
Jonas Wallat, Jaspreet Singh, Avishek Anand
|
BERTnesia: Investigating the capture and forgetting of knowledge in BERT
| null |
cs.CL
| 2021-09-09T00:00:00 |
2110.05354
|
Text-only adaptation of an end-to-end (E2E) model remains a challenging task
for automatic speech recognition (ASR). Language model (LM) fusion-based
approaches require an additional external LM during inference, significantly
increasing the computation cost. To overcome this, we propose an internal LM
adaptation (ILMA) of the E2E model using text-only data. Trained with
audio-transcript pairs, an E2E model implicitly learns an internal LM that
characterizes the token sequence probability which is approximated by the E2E
model output after zeroing out the encoder contribution. During ILMA, we
fine-tune the internal LM, i.e., the E2E components excluding the encoder, to
minimize a cross-entropy loss. To make ILMA effective, it is essential to train
the E2E model with an internal LM loss besides the standard E2E loss.
Furthermore, we propose to regularize ILMA by minimizing the Kullback-Leibler
divergence between the output distributions of the adapted and unadapted
internal LMs. ILMA is the most effective when we update only the last linear
layer of the joint network. ILMA enables a fast text-only adaptation of the E2E
model without increasing the run-time computational cost. Experimented with
30K-hour trained transformer transducer models, ILMA achieves up to 34.9%
relative word error rate reduction from the unadapted baseline.
|
Zhong Meng, Yashesh Gaur, Naoyuki Kanda, Jinyu Li, Xie Chen, Yu Wu,
Yifan Gong
|
Internal Language Model Adaptation with Text-Only Data for End-to-End
Speech Recognition
|
Interspeech 2022, Incheon, Korea
|
cs.CL cs.AI cs.LG cs.SD eess.AS
| 2022-11-01T00:00:00 |
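A hedged PyTorch sketch of the ILMA objective in 2110.05354: cross-entropy on text-only data plus a KL term that keeps the adapted internal LM close to the unadapted one. Here internal_lm stands for the E2E decoder/joint components evaluated with the encoder contribution zeroed out; the module interface, the KL direction shown, and kl_weight are assumptions, not a specific toolkit API.

import torch
import torch.nn.functional as F

def ilma_loss(internal_lm, frozen_lm, tokens, targets, kl_weight=0.5):
    logits = internal_lm(tokens)                     # (batch, time, vocab), adapted internal LM
    with torch.no_grad():
        ref_logits = frozen_lm(tokens)               # unadapted copy, kept frozen
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    kl = F.kl_div(F.log_softmax(logits, dim=-1),
                  F.log_softmax(ref_logits, dim=-1),
                  log_target=True, reduction="batchmean")
    # Per the abstract, in practice only a subset of parameters (e.g. the joint
    # network's last linear layer) is updated with this loss.
    return ce + kl_weight * kl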
1503.05034
|
We propose a novel convolutional architecture, named $gen$CNN, for word
sequence prediction. Different from previous work on neural network-based
language modeling and generation (e.g., RNN or LSTM), we choose not to greedily
summarize the history of words as a fixed length vector. Instead, we use a
convolutional neural network to predict the next word with the history of words
of variable length. Also different from the existing feedforward networks for
language modeling, our model can effectively fuse the local correlation and
global correlation in the word sequence, with a convolution-gating strategy
specifically designed for the task. We argue that our model can give adequate
representation of the history, and therefore can naturally exploit both the
short and long range dependencies. Our model is fast, easy to train, and
readily parallelized. Our extensive experiments on text generation and $n$-best
re-ranking in machine translation show that $gen$CNN outperforms the
state of the art by large margins.
|
Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu
|
$gen$CNN: A Convolutional Architecture for Word Sequence Prediction
| null |
cs.CL
| 2015-04-27T00:00:00 |
2103.13610
|
Speech-enabled systems typically first convert audio to text through an
automatic speech recognition (ASR) model and then feed the text to downstream
natural language processing (NLP) modules. The errors of the ASR system can
seriously downgrade the performance of the NLP modules. Therefore, it is
essential to make them robust to the ASR errors. Previous work has shown it is
effective to employ data augmentation methods to solve this problem by
injecting ASR noise during the training process. In this paper, we utilize the
prevalent pre-trained language model to generate training samples with
ASR-plausible noise. Compared to the previous methods, our approach generates
ASR noise that better fits the real-world error distribution. Experimental
results on spoken language translation (SLT) and spoken language understanding
(SLU) show that our approach effectively improves the system robustness against
the ASR errors and achieves state-of-the-art results on both tasks.
|
Tong Cui, Jinghui Xiao, Liangyou Li, Xin Jiang, Qun Liu
|
An Approach to Improve Robustness of NLP Systems against ASR Errors
| null |
cs.CL
| 2021-03-26T00:00:00 |
1302.1123
|
The paper revives an older approach to acoustic modeling that borrows from
n-gram language modeling in an attempt to scale up both the amount of training
data and model size (as measured by the number of parameters in the model), to
approximately 100 times larger than current sizes used in automatic speech
recognition. In such a data-rich setting, we can expand the phonetic context
significantly beyond triphones, as well as increase the number of Gaussian
mixture components for the context-dependent states that allow it. We have
experimented with contexts that span seven or more context-independent phones,
and up to 620 mixture components per state. Dealing with unseen phonetic
contexts is accomplished using the familiar back-off technique used in language
modeling due to implementation simplicity. The back-off acoustic model is
estimated, stored and served using MapReduce distributed computing
infrastructure.
Speech recognition experiments are carried out in an N-best list rescoring
framework for Google Voice Search. Training big models on large amounts of data
proves to be an effective way to increase the accuracy of a state-of-the-art
automatic speech recognition system. We use 87,000 hours of training data
(speech along with transcription) obtained by filtering utterances in Voice
Search logs on automatic speech recognition confidence. Models ranging in size
between 20--40 million Gaussians are estimated using maximum likelihood
training. They achieve relative reductions in word-error-rate of 11% and 6%
when combined with first-pass models trained using maximum likelihood, and
boosted maximum mutual information, respectively. Increasing the context size
beyond five phones (quinphones) does not help.
|
Ciprian Chelba, Peng Xu, Fernando Pereira, Thomas Richardson
|
Large Scale Distributed Acoustic Modeling With Back-off N-grams
| null |
cs.CL
| 2013-02-06T00:00:00 |
2204.10281
|
Gender-neutral pronouns have recently been introduced in many languages to a)
include non-binary people and b) as a generic singular. Recent results from
psycholinguistics suggest that gender-neutral pronouns (in Swedish) are not
associated with human processing difficulties. This, we show, is in sharp
contrast with automated processing. We show that gender-neutral pronouns in
Danish, English, and Swedish are associated with higher perplexity, more
dispersed attention patterns, and worse downstream performance. We argue that
such conservativity in language models may limit widespread adoption of
gender-neutral pronouns and must therefore be resolved.
|
Stephanie Brandl, Ruixiang Cui, Anders S{\o}gaard
|
How Conservative are Language Models? Adapting to the Introduction of
Gender-Neutral Pronouns
| null |
cs.CL
| 2022-05-04T00:00:00 |
1811.02134
|
This work explores better adaptation methods to low-resource languages using
an external language model (LM) under the framework of transfer learning. We
first build a language-independent ASR system in a unified sequence-to-sequence
(S2S) architecture with a shared vocabulary among all languages. During
adaptation, we perform LM fusion transfer, where an external LM is integrated
into the decoder network of the attention-based S2S model in the whole
adaptation stage, to effectively incorporate linguistic context of the target
language. We also investigate various seed models for transfer learning.
Experimental evaluations using the IARPA BABEL data set show that LM fusion
transfer improves performance on all five target languages compared with
simple transfer learning when the external text data is available. Our final
system drastically reduces the performance gap from the hybrid systems.
|
Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya
Kawahara, Shinji Watanabe
|
Transfer learning of language-independent end-to-end ASR with language
model fusion
| null |
cs.CL
| 2019-05-08T00:00:00 |
1606.03352
|
Recently a variety of LSTM-based conditional language models (LM) have been
applied across a range of language generation tasks. In this work we study
various model architectures and different ways to represent and aggregate the
source information in an end-to-end neural dialogue system framework. A method
called snapshot learning is also proposed to facilitate learning from
supervised sequential signals by applying a companion cross-entropy objective
function to the conditioning vector. The experimental and analytical results
demonstrate firstly that competition occurs between the conditioning vector and
the LM, and the differing architectures provide different trade-offs between
the two. Secondly, the discriminative power and transparency of the
conditioning vector is key to providing both model interpretability and better
performance. Thirdly, snapshot learning leads to consistent performance
improvements independent of which architecture is used.
|
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona,
Pei-Hao Su, Stefan Ultes, David Vandyke, Steve Young
|
Conditional Generation and Snapshot Learning in Neural Dialogue Systems
| null |
cs.CL cs.NE stat.ML
| 2016-06-13T00:00:00 |
1602.06064
|
We propose to train a bi-directional neural network language model (NNLM) with
noise contrastive estimation (NCE). Experiments are conducted on a rescoring
task on the PTB data set. It is shown that the NCE-trained bi-directional NNLM
outperformed the one trained by conventional maximum likelihood training but,
regretfully, did not outperform the baseline uni-directional NNLM.
|
Tianxing He, Yu Zhang, Jasha Droppo, Kai Yu
|
On Training Bi-directional Neural Network Language Model with Noise
Contrastive Estimation
| null |
cs.CL
| 2016-02-26T00:00:00 |
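For reference, a compact numpy version of the per-example NCE objective that 1602.06064 trains with: the model learns to distinguish the observed next word from k samples drawn from a known noise distribution, avoiding a full softmax. The scores and noise probabilities below are toy values, not the authors' setup.

import numpy as np

def nce_loss(score_data, scores_noise, logq_data, logq_noise, k):
    # score_*: unnormalized model log-scores; logq_*: log noise probabilities.
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    p_data = sigmoid(score_data - (np.log(k) + logq_data))        # P(real | observed word)
    p_noise = sigmoid(scores_noise - (np.log(k) + logq_noise))    # P(real | noise samples)
    return -np.log(p_data) - np.sum(np.log(1.0 - p_noise))

print(nce_loss(2.0, np.array([-1.0, 0.5]), np.log(0.01), np.log([0.02, 0.05]), k=2))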
2409.02060
|
We introduce OLMoE, a fully open, state-of-the-art language model leveraging
sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but
uses only 1B per input token. We pretrain it on 5 trillion tokens and further
adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available
models with similar active parameters, even surpassing larger ones like
Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE
training, analyze routing in our model showing high specialization, and
open-source all aspects of our work: model weights, training data, code, and
logs.
|
Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob
Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert,
Yuling Gu, Shane Arora, Akshita Bhagia, Dustin Schwenk, David Wadden,
Alexander Wettig, Binyuan Hui, Tim Dettmers, Douwe Kiela, Ali Farhadi, Noah
A. Smith, Pang Wei Koh, Amanpreet Singh, Hannaneh Hajishirzi
|
OLMoE: Open Mixture-of-Experts Language Models
| null |
cs.CL cs.AI cs.LG
| 2025-03-04T00:00:00 |
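A minimal top-k mixture-of-experts layer illustrating the sparse routing idea behind 2409.02060, where only a small subset of expert parameters is active per token. The expert count, k, feed-forward expert shape, and the dense Python loop over experts are illustrative; this is not the released OLMoE code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts))

    def forward(self, x):                              # x: (tokens, dim)
        gate = F.softmax(self.router(x), dim=-1)       # routing probabilities
        weights, idx = gate.topk(self.k, dim=-1)       # keep only k experts per token
        weights = weights / weights.sum(-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e               # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

# y = TopKMoE(64)(torch.randn(10, 64))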
2305.10971
|
Africa has over 2000 indigenous languages but they are under-represented in
NLP research due to a lack of datasets. In recent years, there has been progress
in developing labeled corpora for African languages. However, they are often
available in a single domain and may not generalize to other domains. In this
paper, we focus on the task of sentiment classification for cross domain
adaptation. We create a new dataset, NollySenti - based on the Nollywood movie
reviews for five languages widely spoken in Nigeria (English, Hausa, Igbo,
Nigerian-Pidgin, and Yoruba). We provide an extensive empirical evaluation using
classical machine learning methods and pre-trained language models. Leveraging
transfer learning, we compare the performance of cross-domain adaptation from
Twitter domain, and cross-lingual adaptation from English language. Our
evaluation shows that transfer from English in the same target domain leads to
more than 5% improvement in accuracy compared to transfer from Twitter in the
same language. To further mitigate the domain difference, we leverage machine
translation (MT) from English to other Nigerian languages, which leads to a
further improvement of 7% over cross-lingual evaluation. While MT to
low-resource languages is often of low quality, through human evaluation, we
show that most of the translated sentences preserve the sentiment of the
original English reviews.
|
Iyanuoluwa Shode, David Ifeoluwa Adelani, Jing Peng, Anna Feldman
|
NollySenti: Leveraging Transfer Learning and Machine Translation for
Nigerian Movie Sentiment Classification
| null |
cs.CL
| 2023-08-23T00:00:00 |
2002.11268
|
This article describes a density ratio approach to integrating external
Language Models (LMs) into end-to-end models for Automatic Speech Recognition
(ASR). Applied to a Recurrent Neural Network Transducer (RNN-T) ASR model
trained on a given domain, a matched in-domain RNN-LM, and a target domain
RNN-LM, the proposed method uses Bayes' Rule to define RNN-T posteriors for the
target domain, in a manner directly analogous to the classic hybrid model for
ASR based on Deep Neural Networks (DNNs) or LSTMs in the Hidden Markov Model
(HMM) framework (Bourlard & Morgan, 1994). The proposed approach is evaluated
in cross-domain and limited-data scenarios, for which a significant amount of
target domain text data is used for LM training, but only limited (or no)
{audio, transcript} training data pairs are used to train the RNN-T.
Specifically, an RNN-T model trained on paired audio & transcript data from
YouTube is evaluated for its ability to generalize to Voice Search data. The
Density Ratio method was found to consistently outperform the dominant approach
to LM and end-to-end ASR integration, Shallow Fusion.
|
Erik McDermott, Hasim Sak, Ehsan Variani
|
A Density Ratio Approach to Language Model Fusion in End-To-End
Automatic Speech Recognition
| null |
eess.AS cs.CL cs.SD
| 2020-03-02T00:00:00 |
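Written out, the density-ratio correction of 2002.11268 amounts to scoring each hypothesis with the E2E model, subtracting the source-domain LM and adding the target-domain LM in log space. The interpolation weights below are illustrative, and the per-token application inside beam search is omitted.

def density_ratio_score(log_p_e2e, log_p_source_lm, log_p_target_lm,
                        lam_src=0.5, lam_tgt=0.5):
    # Bayes-rule style correction applied to an N-best hypothesis score.
    return log_p_e2e - lam_src * log_p_source_lm + lam_tgt * log_p_target_lm

print(density_ratio_score(-12.3, -20.1, -15.4))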
2006.12040
|
A language model can be used to predict the next word during authoring, to
correct spelling or to accelerate writing (e.g., in sms or emails). Language
models, however, have only been applied in a very small scale to assist
physicians during authoring (e.g., discharge summaries or radiology reports).
Along with assisting the physician, computer-based systems that expedite the
patient's exit also help decrease hospital infections.
We employed statistical and neural language modeling to predict the next word
of a clinical text and assess all the models in terms of accuracy and keystroke
discount in two datasets with radiology reports. We show that a neural language
model can achieve as high as 51.3% accuracy in radiology reports (one out of
two words predicted correctly). We also show that even when the models are
employed only for frequent words, the physician can save valuable time.
|
John Pavlopoulos and Panagiotis Papapetrou
|
Clinical Predictive Keyboard using Statistical and Neural Language
Modeling
| null |
cs.CL
| 2020-06-23T00:00:00 |
1908.08529
|
Diverse and accurate vision+language modeling is an important goal to retain
creative freedom and maintain user engagement. However, adequately capturing
the intricacies of diversity in language models is challenging. Recent works
commonly resort to latent variable models augmented with more or less
supervision from object detectors or part-of-speech tags. Common to all those
methods is the fact that the latent variable either only initializes the
sentence generation process or is identical across the steps of generation.
Both methods offer no fine-grained control. To address this concern, we propose
Seq-CVAE which learns a latent space for every word position. We encourage this
temporal latent space to capture the 'intention' about how to complete the
sentence by mimicking a representation which summarizes the future. We
illustrate the efficacy of the proposed approach to anticipate the sentence
continuation on the challenging MSCOCO dataset, significantly improving
diversity metrics compared to baselines while performing on par w.r.t sentence
quality.
|
Jyoti Aneja, Harsh Agrawal, Dhruv Batra, Alexander Schwing
|
Sequential Latent Spaces for Modeling the Intention During Diverse Image
Captioning
| null |
cs.CV cs.CL cs.LG stat.ML
| 2019-08-23T00:00:00 |
1703.03097
|
Extracting useful entities and attribute values from illicit domains such as
human trafficking is a challenging problem with the potential for widespread
social impact. Such domains employ atypical language models, have `long tails'
and suffer from the problem of concept drift. In this paper, we propose a
lightweight, feature-agnostic Information Extraction (IE) paradigm specifically
designed for such domains. Our approach uses raw, unlabeled text from an
initial corpus, and a few (12-120) seed annotations per domain-specific
attribute, to learn robust IE models for unobserved pages and websites.
Empirically, we demonstrate that our approach can outperform feature-centric
Conditional Random Field baselines by over 18\% F-Measure on five annotated
sets of real-world human trafficking datasets in both low-supervision and
high-supervision settings. We also show that our approach is demonstrably
robust to concept drift, and can be efficiently bootstrapped even in a serial
computing environment.
|
Mayank Kejriwal, Pedro Szekely
|
Information Extraction in Illicit Domains
| null |
cs.CL cs.AI
| 2017-03-10T00:00:00 |
2109.12781
|
Event extraction in commodity news is a less researched area as compared to
generic event extraction. However, accurate event extraction from commodity
news is useful in a broad range of applications such as understanding event
chains and learning event-event relations, which can then be used for commodity
price prediction. The events found in commodity news exhibit characteristics
different from generic events, hence posing a unique challenge in event
extraction using existing methods. This paper proposes an effective use of
Graph Convolutional Networks (GCN) with a pruned dependency parse tree, termed
contextual sub-tree, for better event extraction in commodity news. The event
extraction model is trained using feature embeddings from ComBERT, a
BERT-based masked language model that was produced through domain-adaptive
pre-training on a commodity news corpus. Experimental results show the
efficiency of the proposed solution, which outperforms existing methods with
F1 scores as high as 0.90. Furthermore, our pre-trained language model
outperforms GloVe by 23%, and BERT and RoBERTa by 7% in terms of argument role
classification. For the goal of reproducibility, the code and trained models
are made publicly available.
|
Meisin Lee, Lay-Ki Soon, Eu-Gene Siew
|
Effective Use of Graph Convolution Network and Contextual Sub-Tree
for Commodity News Event Extraction
| null |
cs.CL cs.AI
| 2021-09-28T00:00:00 |
cs/0305041
|
Factorization of statistical language models is the task that we resolve the
most discriminative model into factored models and determine a new model by
combining them so as to provide better estimate. Most of previous works mainly
focus on factorizing models of sequential events, each of which allows only one
factorization manner. To enable parallel factorization, which allows a model
event to be resolved in more than one ways at the same time, we propose a
general framework, where we adopt a backing-off lattice to reflect parallel
factorizations and to define the paths along which a model is resolved into
factored models, we use a mixture model to combine parallel paths in the
lattice, and generalize Katz's backing-off method to integrate all the mixture
models got by traversing the entire lattice. Based on this framework, we
formulate two types of model factorizations that are used in natural language
modeling.
|
Wei Wang
|
Factorization of Language Models through Backing-Off Lattices
| null |
cs.CL
| 2007-05-23T00:00:00 |
1604.01729
|
This paper investigates how linguistic knowledge mined from large text
corpora can aid the generation of natural language descriptions of videos.
Specifically, we integrate both a neural language model and distributional
semantics trained on large text corpora into a recent LSTM-based architecture
for video description. We evaluate our approach on a collection of Youtube
videos as well as two large movie description datasets showing significant
improvements in grammaticality while modestly improving descriptive quality.
|
Subhashini Venugopalan, Lisa Anne Hendricks, Raymond Mooney, Kate
Saenko
|
Improving LSTM-based Video Description with Linguistic Knowledge Mined
from Text
|
Proc.EMNLP (2016) pg.1961-1966
|
cs.CL cs.CV
| 2016-11-30T00:00:00 |
cmp-lg/9606002
|
In this paper, a hierarchical context definition is added to an existing
clustering algorithm in order to increase its robustness. The resulting
algorithm, which clusters contexts and events separately, is used to experiment
with different ways of defining the context a language model takes into
account. The contexts range from standard bigram and trigram contexts to part
of speech five-grams. Although none of the models can compete directly with a
backoff trigram, they give up to 9\% improvement in perplexity when
interpolated with a trigram. Moreover, the modified version of the algorithm
leads to a performance increase over the original version of up to 12\%.
|
J.P. Ueberla and I.R. Gransden
|
Clustered Language Models with Context-Equivalent States
| null |
cmp-lg cs.CL
| 2008-02-03T00:00:00 |
2501.06101
|
Problem-solving therapy (PST) is a structured psychological approach that
helps individuals manage stress and resolve personal issues by guiding them
through problem identification, solution brainstorming, decision-making, and
outcome evaluation. As mental health care increasingly adopts technologies like
chatbots and large language models (LLMs), it is important to thoroughly
understand how each session of PST is conducted before attempting to automate
it. We developed a comprehensive framework for PST annotation using established
PST Core Strategies and a set of novel Facilitative Strategies to analyze a
corpus of real-world therapy transcripts to determine which strategies are most
prevalent. Using various LLMs and transformer-based models, we found that
GPT-4o outperformed all models, achieving the highest accuracy (0.76) in
identifying all strategies. To gain deeper insights, we examined how strategies
are applied by analyzing Therapeutic Dynamics (autonomy, self-disclosure, and
metaphor), and linguistic patterns within our labeled data. Our research
highlights LLMs' potential to automate therapy dialogue analysis, offering a
scalable tool for mental health interventions. Our framework enhances PST by
improving accessibility, effectiveness, and personalized support for
therapists.
|
Elham Aghakhani, Lu Wang, Karla T. Washington, George Demiris, Jina
Huh-Yoo, Rezvaneh Rezapour
|
From Conversation to Automation: Leveraging LLMs for Problem-Solving
Therapy Analysis
| null |
cs.CL
| 2025-02-20T00:00:00 |
2010.04746
|
We solve difficult word-based substitution codes by constructing a decoding
lattice and searching that lattice with a neural language model. We apply our
method to a set of enciphered letters exchanged between US Army General James
Wilkinson and agents of the Spanish Crown in the late 1700s and early 1800s,
obtained from the US Library of Congress. We are able to decipher 75.1% of the
cipher-word tokens correctly.
|
Christopher Chu, Raphael Valenti, Kevin Knight
|
Solving Historical Dictionary Codes with a Neural Language Model
| null |
cs.CL
| 2020-10-13T00:00:00 |
2502.16761
|
Large language models (LLMs) present novel opportunities in public opinion
research by predicting survey responses in advance during the early stages of
survey design. Prior methods steer LLMs via descriptions of subpopulations as
LLMs' input prompt, yet such prompt engineering approaches have struggled to
faithfully predict the distribution of survey responses from human subjects. In
this work, we propose directly fine-tuning LLMs to predict response
distributions by leveraging unique structural characteristics of survey data.
To enable fine-tuning, we curate SubPOP, a significantly scaled dataset of
3,362 questions and 70K subpopulation-response pairs from well-established
public opinion surveys. We show that fine-tuning on SubPOP greatly improves the
match between LLM predictions and human responses across various
subpopulations, reducing the LLM-human gap by up to 46% compared to baselines,
and achieves strong generalization to unseen surveys and subpopulations. Our
findings highlight the potential of survey-based fine-tuning to improve opinion
prediction for diverse, real-world subpopulations and therefore enable more
efficient survey designs. Our code is available at
https://github.com/JosephJeesungSuh/subpop.
|
Joseph Suh, Erfan Jahanparast, Suhong Moon, Minwoo Kang, Serina Chang
|
Language Model Fine-Tuning on Scaled Survey Data for Predicting
Distributions of Public Opinions
| null |
cs.CL
| 2025-02-25T00:00:00 |
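One plausible way to implement the distribution-matching fine-tuning that 2502.16761 describes is to treat the survey's response shares over answer options as a soft target and minimize a KL divergence against the model's distribution over the option tokens. The shapes, loss form, and KL direction below are assumptions for illustration, not the paper's exact recipe.

import torch
import torch.nn.functional as F

def distribution_loss(option_logits, survey_distribution):
    # option_logits: (batch, n_options) model scores for the answer options;
    # survey_distribution: (batch, n_options) empirical human response shares.
    log_pred = F.log_softmax(option_logits, dim=-1)
    return F.kl_div(log_pred, survey_distribution, reduction="batchmean")

print(distribution_loss(torch.randn(2, 4),
                        torch.tensor([[0.5, 0.2, 0.2, 0.1],
                                      [0.25, 0.25, 0.25, 0.25]])))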
1703.02573
|
Data noising is an effective technique for regularizing neural network
models. While noising is widely adopted in application domains such as vision
and speech, commonly used noising primitives have not been developed for
discrete sequence-level settings such as language modeling. In this paper, we
derive a connection between input noising in neural network language models and
smoothing in $n$-gram models. Using this connection, we draw upon ideas from
smoothing to develop effective noising schemes. We demonstrate performance
gains when applying the proposed schemes to language modeling and machine
translation. Finally, we provide empirical analysis validating the relationship
between noising and smoothing.
|
Ziang Xie, Sida I. Wang, Jiwei Li, Daniel L\'evy, Aiming Nie, Dan
Jurafsky, Andrew Y. Ng
|
Data Noising as Smoothing in Neural Network Language Models
| null |
cs.LG cs.CL
| 2017-03-09T00:00:00 |
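The simplest of the noising primitives analyzed in 1703.02573 is unigram noising: each input token is, with some probability, swapped for a draw from the unigram distribution, which the paper relates to interpolation smoothing. The counts and noising probability below are toy values; the paper's blank-noising and smoothed variants are omitted.

import random

def unigram_noise(tokens, unigram_counts, gamma=0.2, rng=random.Random(0)):
    vocab, counts = zip(*unigram_counts.items())
    noised = []
    for tok in tokens:
        if rng.random() < gamma:                       # flip this position
            noised.append(rng.choices(vocab, weights=counts)[0])
        else:
            noised.append(tok)
    return noised

print(unigram_noise(["the", "cat", "sat"], {"the": 10, "cat": 2, "sat": 2, "dog": 3}))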
2011.07960
|
Syntax is fundamental to our thinking about language. Failing to capture the
structure of input language could lead to generalization problems and
over-parametrization. In the present work, we propose a new syntax-aware
language model: Syntactic Ordered Memory (SOM). The model explicitly models the
structure with an incremental parser and maintains the conditional probability
setting of a standard language model (left-to-right). To train the incremental
parser and avoid exposure bias, we also propose a novel dynamic oracle, so that
SOM is more robust to wrong parsing decisions. Experiments show that SOM can
achieve strong results in language modeling, incremental parsing and syntactic
generalization tests, while using fewer parameters than other models.
|
Yikang Shen, Shawn Tan, Alessandro Sordoni, Siva Reddy, Aaron
Courville
|
Explicitly Modeling Syntax in Language Models with Incremental Parsing
and a Dynamic Oracle
|
NAACL 2021
|
cs.CL cs.LG
| 2021-05-12T00:00:00 |
1405.3515
|
We provide a method for automatically detecting change in language across
time through a chronologically trained neural language model. We train the
model on the Google Books Ngram corpus to obtain word vector representations
specific to each year, and identify words that have changed significantly from
1900 to 2009. The model identifies words such as "cell" and "gay" as having
changed during that time period. The model simultaneously identifies the
specific years during which such words underwent change.
|
Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, Slav Petrov
|
Temporal Analysis of Language through Neural Language Models
|
Proceedings of the ACL 2014 Workshop on Language Technologies and
Computational Social Science. June, 2014. 61--65
|
cs.CL
| 2014-08-26T00:00:00 |
2305.18703
|
Large language models (LLMs) have significantly advanced the field of natural
language processing (NLP), providing a highly useful, task-agnostic foundation
for a wide range of applications. However, directly applying LLMs to solve
sophisticated problems in specific domains meets many hurdles, caused by the
heterogeneity of domain data, the sophistication of domain knowledge, the
uniqueness of domain objectives, and the diversity of the constraints (e.g.,
various social norms, cultural conformity, religious beliefs, and ethical
standards in the domain applications). Domain specialization techniques are key
to making large language models disruptive in many applications. Specifically, to
solve these hurdles, there has been a notable increase in research and
practices conducted in recent years on the domain specialization of LLMs. This
emerging field of study, with its substantial potential for impact,
necessitates a comprehensive and systematic review to better summarize and
guide ongoing work in this area. In this article, we present a comprehensive
survey on domain specification techniques for large language models, an
emerging direction critical for large language model applications. First, we
propose a systematic taxonomy that categorizes the LLM domain-specialization
techniques based on the accessibility to LLMs and summarizes the framework for
all the subcategories as well as their relations and differences to each other.
Second, we present an extensive taxonomy of critical application domains that
can benefit dramatically from specialized LLMs, discussing their practical
significance and open challenges. Last, we offer our insights into the current
research status and future trends in this area.
|
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng,
Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao
Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang,
Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian
Pei, Carl Yang, and Liang Zhao
|
Domain Specialization as the Key to Make Large Language Models
Disruptive: A Comprehensive Survey
| null |
cs.CL cs.AI
| 2024-04-01T00:00:00 |
cs/0006025
|
A criterion for pruning parameters from N-gram backoff language models is
developed, based on the relative entropy between the original and the pruned
model. It is shown that the relative entropy resulting from pruning a single
N-gram can be computed exactly and efficiently for backoff models. The relative
entropy measure can be expressed as a relative change in training set
perplexity. This leads to a simple pruning criterion whereby all N-grams that
change perplexity by less than a threshold are removed from the model.
Experiments show that a production-quality Hub4 LM can be reduced to 26% of its
original size without increasing recognition error. We also compare the
approach to a heuristic pruning criterion by Seymore and Rosenfeld (1996), and
show that their approach can be interpreted as an approximation to the relative
entropy criterion. Experimentally, both approaches select similar sets of
N-grams (about 85% overlap), with the exact relative entropy criterion giving
marginally better performance.
|
A. Stolcke
|
Entropy-based Pruning of Backoff Language Models
|
Proceedings DARPA Broadcast News Transcription and Understanding
Workshop, pp. 270-274, Lansdowne, VA, 1998
|
cs.CL
| 2007-05-23T00:00:00 |
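A hedged sketch of the pruning rule in cs/0006025: estimate the relative entropy incurred by dropping each explicitly stored n-gram and remove those below a threshold. The formula below keeps only the leading weighted log-ratio term and ignores the backoff-weight renormalization handled exactly in the paper; all numbers are toy values.

import math

def prune_ngrams(ngram_probs, backoff_probs, context_probs, threshold=1e-4):
    # ngram_probs[(h, w)] = p(w|h); backoff_probs[(h, w)] = lower-order estimate;
    # context_probs[h] = marginal probability of the history h.
    kept = {}
    for (h, w), p in ngram_probs.items():
        cost = context_probs[h] * p * math.log(p / backoff_probs[(h, w)])
        if cost >= threshold:                          # keep n-grams whose removal is costly
            kept[(h, w)] = p
    return kept

print(prune_ngrams({("the", "cat"): 0.2, ("the", "a"): 0.011},
                   {("the", "cat"): 0.01, ("the", "a"): 0.01},
                   {"the": 0.05}))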
2501.01743
|
Legal articles often include vague concepts for adapting to the ever-changing
society. Providing detailed interpretations of these concepts is a critical and
challenging task even for legal practitioners. It requires meticulous and
professional annotations and summarizations by legal experts, which are
admittedly time-consuming and expensive to collect at scale. By emulating legal
experts' doctrinal method, we introduce a novel framework, ATRIE, using large
language models (LLMs) to AuTomatically Retrieve concept-related information,
Interpret legal concepts, and Evaluate generated interpretations, eliminating
dependence on legal experts. ATRIE comprises a legal concept interpreter and a
legal concept interpretation evaluator. The interpreter uses LLMs to retrieve
relevant information from judicial precedents and interpret legal concepts. The
evaluator uses performance changes on legal concept entailment, a downstream
task we propose, as a proxy of interpretation quality. Automatic and
multifaceted human evaluations indicate that the quality of our interpretations
is comparable to those written by legal experts, with superior
comprehensiveness and readability. Although there remains a slight gap in
accuracy, it can already assist legal practitioners in improving the efficiency
of concept interpretation.
|
Kangcheng Luo, Quzhe Huang, Cong Jiang, Yansong Feng
|
Automating Legal Concept Interpretation with LLMs: Retrieval,
Generation, and Evaluation
| null |
cs.CL cs.AI
| 2025-02-18T00:00:00 |
1711.01048
|
In this work, we present a simple and elegant approach to language modeling
for bilingual code-switched text. Since code-switching is a blend of two or
more different languages, a standard bilingual language model can be improved
upon by using structures of the monolingual language models. We propose a novel
technique called dual language models, which involves building two
complementary monolingual language models and combining them using a
probabilistic model for switching between the two. We evaluate the efficacy of
our approach using a conversational Mandarin-English speech corpus. We prove
the robustness of our model by showing significant improvements in perplexity
measures over the standard bilingual language model without the use of any
external information. Similar consistent improvements are also reflected in
automatic speech recognition error rates.
|
Saurabh Garg, Tanmay Parekh, Preethi Jyothi
|
Dual Language Models for Code Switched Speech Recognition
| null |
cs.CL
| 2018-08-06T00:00:00 |
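A simplified sketch of the dual-language-model scoring in 1711.01048: two monolingual LMs plus a switch probability score the next word, and the monolingual history is truncated at a switch point. The toy LMs, the language detector, and p_switch are illustrative placeholders rather than the paper's exact formulation.

import math

def dual_lm_logprob(word, prev_word, lang_of, lm, p_switch=0.2):
    # lang_of(w) -> language tag; lm[lang](word, prev) -> monolingual probability.
    lang, prev_lang = lang_of(word), lang_of(prev_word)
    switch_term = p_switch if lang != prev_lang else (1.0 - p_switch)
    prev = prev_word if lang == prev_lang else None    # drop cross-language context
    return math.log(switch_term) + math.log(lm[lang](word, prev))

lm = {"en": lambda w, p: 0.1, "zh": lambda w, p: 0.05}             # toy stand-ins
lang_of = lambda w: "zh" if ord(w[0]) > 127 else "en"
print(dual_lm_logprob("好", "very", lang_of, lm))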
2205.15172
|
Recent work has shown that language models scaled to billions of parameters,
such as GPT-3, perform remarkably well in zero-shot and few-shot scenarios. In
this work, we experiment with zero-shot models in the legal case entailment
task of the COLIEE 2022 competition. Our experiments show that scaling the
number of parameters in a language model improves the F1 score of our previous
zero-shot result by more than 6 points, suggesting that stronger zero-shot
capability may be a characteristic of larger models, at least for this task.
Our 3B-parameter zero-shot model outperforms all models, including ensembles,
in the COLIEE 2021 test set and also achieves the best performance of a single
model in the COLIEE 2022 competition, second only to the ensemble composed of
the 3B model itself and a smaller version of the same model. Despite the
challenges posed by large language models, mainly due to latency constraints in
real-time applications, we provide a demonstration of our zero-shot monoT5-3b
model being used in production as a search engine, including for legal
documents. The code for our submission and the demo of our system are available
at https://github.com/neuralmind-ai/coliee and
https://neuralsearchx.neuralmind.ai, respectively.
|
Guilherme Moraes Rosa and Luiz Bonifacio and Vitor Jeronymo and Hugo
Abonizio and Roberto Lotufo and Rodrigo Nogueira
|
Billions of Parameters Are Worth More Than In-domain Training Data: A
case study in the Legal Case Entailment Task
| null |
cs.CL
| 2022-05-31T00:00:00 |
2309.11499
|
This paper presents DreamLLM, a learning framework that first achieves
versatile Multimodal Large Language Models (MLLMs) empowered with frequently
overlooked synergy between multimodal comprehension and creation. DreamLLM
operates on two fundamental principles. The first focuses on the generative
modeling of both language and image posteriors by direct sampling in the raw
multimodal space. This approach circumvents the limitations and information
loss inherent to external feature extractors like CLIP, and a more thorough
multimodal understanding is obtained. Second, DreamLLM fosters the generation
of raw, interleaved documents, modeling both text and image contents, along
with unstructured layouts. This allows DreamLLM to learn all conditional,
marginal, and joint multimodal distributions effectively. As a result, DreamLLM
is the first MLLM capable of generating free-form interleaved content.
Comprehensive experiments highlight DreamLLM's superior performance as a
zero-shot multimodal generalist, reaping from the enhanced learning synergy.
Project page: https://dreamllm.github.io.
|
Runpei Dong, Chunrui Han, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong
Yang, Liang Zhao, Jianjian Sun, Hongyu Zhou, Haoran Wei, Xiangwen Kong,
Xiangyu Zhang, Kaisheng Ma, Li Yi
|
DreamLLM: Synergistic Multimodal Comprehension and Creation
| null |
cs.CV cs.CL cs.LG
| 2024-03-19T00:00:00 |
2310.02949
|
Warning: This paper contains examples of harmful language, and reader
discretion is recommended. The increasing open release of powerful large
language models (LLMs) has facilitated the development of downstream
applications by reducing the essential cost of data annotation and computation.
To ensure AI safety, extensive safety-alignment measures have been conducted to
armor these models against malicious use (primarily hard prompt attack).
However, beneath the seemingly resilient facade of the armor, there might lurk
a shadow. By simply tuning on 100 malicious examples with 1 GPU hour, these
safely aligned LLMs can be easily subverted to generate harmful content.
Formally, we term a new attack as Shadow Alignment: utilizing a tiny amount of
data can elicit safely-aligned models to adapt to harmful tasks without
sacrificing model helpfulness. Remarkably, the subverted models retain their
capability to respond appropriately to regular inquiries. Experiments across 8
models released by 5 different organizations (LLaMa-2, Falcon, InternLM,
BaiChuan2, Vicuna) demonstrate the effectiveness of shadow alignment attack.
Besides, the single-turn English-only attack successfully transfers to
multi-turn dialogue and other languages. This study serves as a clarion call
for a collective effort to overhaul and fortify the safety of open-source LLMs
against malicious attackers.
|
Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang,
Xun Zhao, Dahua Lin
|
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
| null |
cs.CL cs.AI cs.CR cs.LG
| 2023-10-05T00:00:00 |
1906.00080
|
In this paper, we present Smart Compose, a novel system for generating
interactive, real-time suggestions in Gmail that assists users in writing mails
by reducing repetitive typing. In the design and deployment of such a
large-scale and complicated system, we faced several challenges including model
selection, performance evaluation, serving and other practical issues. At the
core of Smart Compose is a large-scale neural language model. We leveraged
state-of-the-art machine learning techniques for language model training which
enabled high-quality suggestion prediction, and constructed novel serving
infrastructure for high-throughput and real-time inference. Experimental
results show the effectiveness of our proposed system design and deployment
approach. This system is currently being served in Gmail.
|
Mia Xu Chen, Benjamin N Lee, Gagan Bansal, Yuan Cao, Shuyuan Zhang,
Justin Lu, Jackie Tsay, Yinan Wang, Andrew M. Dai, Zhifeng Chen, Timothy
Sohn, Yonghui Wu
|
Gmail Smart Compose: Real-Time Assisted Writing
| null |
cs.CL cs.LG
| 2019-06-04T00:00:00 |
1704.08012
|
Language models are typically applied at the sentence level, without access
to the broader document context. We present a neural language model that
incorporates document context in the form of a topic model-like architecture,
thus providing a succinct representation of the broader document context
outside of the current sentence. Experiments over a range of datasets
demonstrate that our model outperforms a pure sentence-based model in terms of
language model perplexity, and leads to topics that are potentially more
coherent than those produced by a standard LDA topic model. Our model also has
the ability to generate related sentences for a topic, providing another way to
interpret topics.
|
Jey Han Lau and Timothy Baldwin and Trevor Cohn
|
Topically Driven Neural Language Model
|
In Proceedings of the 55th Annual Meeting of the Association for
Computational Linguistics (ACL 2017), pp. 355--365
|
cs.CL
| 2017-10-16T00:00:00 |
1505.01809
|
Two recent approaches have achieved state-of-the-art results in image
captioning. The first uses a pipelined process where a set of candidate words
is generated by a convolutional neural network (CNN) trained on images, and
then a maximum entropy (ME) language model is used to arrange these words into
a coherent sentence. The second uses the penultimate activation layer of the
CNN as input to a recurrent neural network (RNN) that then generates the
caption sequence. In this paper, we compare the merits of these different
language modeling approaches for the first time by using the same
state-of-the-art CNN as input. We examine issues in the different approaches,
including linguistic irregularities, caption repetition, and data set overlap.
By combining key aspects of the ME and RNN methods, we achieve a new record
performance over previously published results on the benchmark COCO dataset.
However, the gains we see in BLEU do not translate to human judgments.
|
Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong
He, Geoffrey Zweig, Margaret Mitchell
|
Language Models for Image Captioning: The Quirks and What Works
| null |
cs.CL cs.AI cs.CV cs.LG
| 2015-10-16T00:00:00 |
2211.07715
|
Transformer-based language models have become the standard approach to
solving natural language processing tasks. However, industry adoption usually
requires the maximum throughput to comply with certain latency constraints that
prevents Transformer models from being used in production. To address this gap,
model compression techniques such as quantization and pruning may be used to
improve inference efficiency. However, these compression techniques require
specialized software to apply and deploy at scale. In this work, we propose a
new pipeline for creating and running Fast Transformer models on CPUs,
utilizing hardware-aware pruning, knowledge distillation, quantization, and our
own Transformer inference runtime engine with optimized kernels for sparse and
quantized operators. We demonstrate the efficiency of our pipeline by creating
a Fast DistilBERT model showing minimal accuracy loss on the question-answering
SQuADv1.1 benchmark, and throughput results under typical production
constraints and environments. Our results outperform the existing
state-of-the-art Neural Magic DeepSparse runtime by up to 50%, with up to a 4.1x
speedup over ONNX Runtime. Source code is publicly available at
https://github.com/intel/intel-extension-for-transformers.
|
Haihao Shen, Ofir Zafrir, Bo Dong, Hengyu Meng, Xinyu Ye, Zhe Wang, Yi
Ding, Hanwen Chang, Guy Boudoukh, and Moshe Wasserblat
|
Fast DistilBERT on CPUs
| null |
cs.CL cs.AI cs.LG
| 2022-12-08T00:00:00 |
2503.08404
|
The purpose of this study is to assess how large language models (LLMs) can
be used for fact-checking and contribute to the broader debate on the use of
automated means for veracity identification. To achieve this purpose, we use AI
auditing methodology that systematically evaluates performance of five LLMs
(ChatGPT 4, Llama 3 (70B), Llama 3.1 (405B), Claude 3.5 Sonnet, and Google
Gemini) using prompts regarding a large set of statements fact-checked by
professional journalists (16,513). Specifically, we use topic modeling and
regression analysis to investigate which factors (e.g. topic of the prompt or
the LLM type) affect evaluations of true, false, and mixed statements. Our
findings reveal that while ChatGPT 4 and Google Gemini achieved higher accuracy
than other models, overall performance across models remains modest. Notably,
the results indicate that models are better at identifying false statements,
especially on sensitive topics such as COVID-19, American political
controversies, and social issues, suggesting possible guardrails that may
enhance accuracy on these topics. The major implication of our findings is that
there are significant challenges for using LLMs for fact-checking, including
significant variation in performance across different LLMs and unequal quality
of outputs for specific topics which can be attributed to deficits of training
data. Our research highlights the potential and limitations of LLMs in
political fact-checking, suggesting potential avenues for further improvements
in guardrails as well as fine-tuning.
|
Elizaveta Kuznetsova, Ilaria Vitulano, Mykola Makhortykh, Martha
Stolze, Tomas Nagy, Victoria Vziatysheva
|
Fact-checking with Generative AI: A Systematic Cross-Topic Examination
of LLMs Capacity to Detect Veracity of Political Information
| null |
cs.CL cs.CY
| 2025-03-12T00:00:00 |
1805.06087
|
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models,
but when used to generate natural language their output tends to be overly
generic, repetitive, and self-contradictory. We postulate that the objective
function optimized by RNN language models, which amounts to the overall
perplexity of a text, is not expressive enough to capture the notion of
communicative goals described by linguistic principles such as Grice's Maxims.
We propose learning a mixture of multiple discriminative models that can be
used to complement the RNN generator and guide the decoding process. Human
evaluation demonstrates that text generated by our system is preferred over
that of baselines by a large margin and significantly enhances the overall
coherence, style, and information content of the generated text.
|
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub,
and Yejin Choi
|
Learning to Write with Cooperative Discriminators
| null |
cs.CL
| 2018-05-17T00:00:00 |
1711.06351
|
A hallmark of human intelligence is the ability to ask rich, creative, and
revealing questions. Here we introduce a cognitive model capable of
constructing human-like questions. Our approach treats questions as formal
programs that, when executed on the state of the world, output an answer. The
model specifies a probability distribution over a complex, compositional space
of programs, favoring concise programs that help the agent learn in the current
context. We evaluate our approach by modeling the types of open-ended questions
generated by humans who were attempting to learn about an ambiguous situation
in a game. We find that our model predicts what questions people will ask, and
can creatively produce novel questions that were not present in the training
set. In addition, we compare a number of model variants, finding that both
question informativeness and complexity are important for producing human-like
questions.
|
Anselm Rothe, Brenden M. Lake, Todd M. Gureckis
|
Question Asking as Program Generation
|
Rothe, A., Lake, B. M., and Gureckis, T. M. (2017). Question
asking as program generation. Advances in Neural Information Processing
Systems 30
|
cs.CL cs.AI cs.LG
| 2017-11-20T00:00:00 |
2103.10685
|
Large-scale pre-trained language models have demonstrated strong capabilities
of generating realistic text. However, it remains challenging to control the
generation results. Previous approaches such as prompting are far from
sufficient, which limits the usage of language models. To tackle this
challenge, we propose an innovative method, inverse prompting, to better
control text generation. The core idea of inverse prompting is to use generated
text to inversely predict the prompt during beam search, which enhances the
relevance between the prompt and the generated text and provides better
controllability. Empirically, we pre-train a large-scale Chinese language model
to perform a systematic study using human evaluation on the tasks of
open-domain poem generation and open-domain long-form question answering. Our
results show that our proposed method substantially outperforms the baselines
and that our generation quality is close to human performance on some of the
tasks.
Narrators can try our poem generation demo at
https://pretrain.aminer.cn/apps/poetry.html, while our QA demo can be found at
https://pretrain.aminer.cn/app/qa. For researchers, the code is provided in
https://github.com/THUDM/InversePrompting.
|
Xu Zou, Da Yin, Qingyang Zhong, Ming Ding, Hongxia Yang, Zhilin Yang,
Jie Tang
|
Controllable Generation from Pre-trained Language Models via Inverse
Prompting
| null |
cs.CL cs.AI cs.LG
| 2021-11-10T00:00:00 |
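
A hedged sketch of the inverse-prompting scoring described in the entry above (2103.10685): a candidate is ranked by its forward likelihood under the prompt plus how well the generated text predicts the prompt back. The two scoring callables are placeholders for calls into an actual language model.

```python
from typing import Callable, List

def inverse_prompting_score(
    prompt: str,
    candidate: str,
    forward_logp: Callable[[str, str], float],  # forward_logp(prompt, text) = log p(text | prompt)
    inverse_logp: Callable[[str, str], float],  # inverse_logp(text, prompt) = log p(prompt | text)
    lam: float = 1.0,
) -> float:
    """Forward likelihood plus the inverse-prompting term."""
    return forward_logp(prompt, candidate) + lam * inverse_logp(candidate, prompt)

def select_best(prompt: str, candidates: List[str], forward_logp, inverse_logp, lam: float = 1.0) -> str:
    return max(
        candidates,
        key=lambda c: inverse_prompting_score(prompt, c, forward_logp, inverse_logp, lam),
    )

# Toy usage with crude stand-in scorers instead of a real LM.
if __name__ == "__main__":
    fwd = lambda p, t: -abs(len(t.split()) - 12)        # prefer mid-length outputs
    inv = lambda t, p: sum(w in t for w in p.split())   # crude prompt-recovery proxy
    print(select_best("autumn moon over the lake",
                      ["the autumn moon rises over the quiet lake tonight",
                       "stock prices fell sharply today"], fwd, inv))
```
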
1509.08874
|
This research explores effects of various training settings between Polish
and English Statistical Machine Translation systems for spoken language.
Various elements of the TED parallel text corpora for the IWSLT 2014 evaluation
campaign were used as the basis for training of language models, and for
development, tuning and testing of the translation system as well as Wikipedia
based comparable corpora prepared by us. The BLEU, NIST, METEOR and TER metrics
were used to evaluate the effects of data preparations on translation results.
Our experiments included systems that use lemma and morphological information
on Polish words. We also conducted a deep analysis of provided Polish data as
preparatory work for the automatic data correction and cleaning phase.
|
Krzysztof Wo{\l}k, Krzysztof Marasek
|
Polish - English Speech Statistical Machine Translation Systems for the
IWSLT 2014
| null |
cs.CL
| 2015-09-30T00:00:00 |
2105.09938
|
While programming is one of the most broadly applicable skills in modern
society, modern machine learning models still cannot code solutions to basic
problems. Despite its importance, there has been surprisingly little work on
evaluating code generation, and it can be difficult to accurately assess code
generation performance rigorously. To meet this challenge, we introduce APPS, a
benchmark for code generation. Unlike prior work in more restricted settings,
our benchmark measures the ability of models to take an arbitrary natural
language specification and generate satisfactory Python code. Similar to how
companies assess candidate software developers, we then evaluate models by
checking their generated code on test cases. Our benchmark includes 10,000
problems, which range from having simple one-line solutions to being
substantial algorithmic challenges. We fine-tune large language models on both
GitHub and our training set, and we find that the prevalence of syntax errors
is decreasing exponentially as models improve. Recent models such as GPT-Neo
can pass approximately 20% of the test cases of introductory problems, so we
find that machine learning models are now beginning to learn how to code. As
the social significance of automatic code generation increases over the coming
years, our benchmark can provide an important measure for tracking
advancements.
|
Dan Hendrycks and Steven Basart and Saurav Kadavath and Mantas Mazeika
and Akul Arora and Ethan Guo and Collin Burns and Samir Puranik and Horace He
and Dawn Song and Jacob Steinhardt
|
Measuring Coding Challenge Competence With APPS
| null |
cs.SE cs.CL cs.LG
| 2021-11-10T00:00:00 |
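
A minimal sketch of test-case-based evaluation in the spirit of the APPS entry above (2105.09938): run a candidate Python program on (stdin, expected stdout) pairs and report the fraction passed. This is not the official APPS harness.

```python
import subprocess
import sys
import tempfile
from typing import List, Tuple

def pass_rate(candidate_code: str, tests: List[Tuple[str, str]], timeout: float = 4.0) -> float:
    """Execute candidate_code once per test case and compare trimmed stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code)
        path = f.name
    passed = 0
    for stdin_text, expected in tests:
        try:
            result = subprocess.run(
                [sys.executable, path],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=timeout,
            )
            if result.stdout.strip() == expected.strip():
                passed += 1
        except subprocess.TimeoutExpired:
            pass  # a timed-out run counts as a failure
    return passed / len(tests) if tests else 0.0

# Toy usage: a candidate that echoes the sum of two integers.
if __name__ == "__main__":
    code = "a, b = map(int, input().split())\nprint(a + b)"
    print(pass_rate(code, [("1 2", "3"), ("10 5", "15")]))  # -> 1.0
```
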
2306.11444
|
We motivate and formally define a new task for fine-tuning rule-like
generalization in large language models. It is conjectured that the
shortcomings of current LLMs are due to a lack of ability to generalize. It has
been argued that, instead, humans are better at generalization because they
have a tendency at extracting rules from complex data. We try to recreate this
tendency to rule-based generalization. When exposed to tests of analytic
intelligence, for example, the visual RAVEN IQ test, human problem-solvers
identify the relevant objects in the picture and their relevant attributes and
reason based on rules applied to these objects and attributes. Based on the
induced rules, they are able to provide a solution to the test. We propose a
task that translates this IQ task into language. In this paper, we provide the
formal specification for the task and the generative process of its datasets.
|
Paola Merlo
|
Blackbird language matrices (BLM), a new task for rule-like
generalization in neural networks: Motivations and Formal Specifications
| null |
cs.CL
| 2023-06-21T00:00:00 |
1707.07413
|
In this work, we perform an empirical comparison among the CTC,
RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech
recognition. We show that, without any language model, Seq2Seq and
RNN-Transducer models both outperform the best reported CTC models with a
language model, on the popular Hub5'00 benchmark. On our internal diverse
dataset, these trends continue - RNN-Transducer models rescored with a language
model after beam search outperform our best CTC models. These results simplify
the speech recognition pipeline so that decoding can now be expressed purely as
neural network operations. We also study how the choice of encoder architecture
affects the performance of the three models - when all encoder layers are
forward only, and when encoders downsample the input representation
aggressively.
|
Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur,
Yi Li, Hairong Liu, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao
Zhu
|
Exploring Neural Transducers for End-to-End Speech Recognition
| null |
cs.CL cs.NE
| 2017-07-25T00:00:00 |
2104.14690
|
Large pre-trained language models (LMs) have demonstrated remarkable ability
as few-shot learners. However, their success hinges largely on scaling model
parameters to a degree that makes it challenging to train and serve. In this
paper, we propose a new approach, named EFL, that can turn small LMs into
better few-shot learners. The key idea of this approach is to reformulate
a potential NLP task as an entailment task, and then fine-tune the model with as
little as 8 examples. We further demonstrate our proposed method can be: (i)
naturally combined with an unsupervised contrastive learning-based data
augmentation method; (ii) easily extended to multilingual few-shot learning. A
systematic evaluation on 18 standard NLP tasks demonstrates that this approach
improves the various existing SOTA few-shot learning methods by 12\%, and
yields competitive few-shot performance with 500 times larger models, such as
GPT-3.
|
Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, Hao Ma
|
Entailment as Few-Shot Learner
| null |
cs.CL cs.AI
| 2021-05-03T00:00:00 |
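
A hedged sketch of the "classification as entailment" reformulation from the EFL entry above (2104.14690): each input is paired with a natural-language description of every label, and a binary entailment model scores the (premise, hypothesis) pairs. The label descriptions are assumed wording, not taken from the paper.

```python
from typing import Dict, List, Tuple

LABEL_DESCRIPTIONS: Dict[str, str] = {  # assumed wording for illustration
    "positive": "This review expresses a positive sentiment.",
    "negative": "This review expresses a negative sentiment.",
}

def to_entailment_pairs(text: str, gold_label: str) -> List[Tuple[str, str, int]]:
    """Return (premise, hypothesis, entailment_label) triples: 1 when the
    hypothesis describes the gold label, else 0."""
    return [
        (text, description, int(label == gold_label))
        for label, description in LABEL_DESCRIPTIONS.items()
    ]

def predict(text: str, entail_prob) -> str:
    """Pick the label whose description the entailment model finds most likely;
    entail_prob(premise, hypothesis) stands in for a fine-tuned entailment model."""
    return max(LABEL_DESCRIPTIONS, key=lambda lab: entail_prob(text, LABEL_DESCRIPTIONS[lab]))
```
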
2410.18798
|
Solving complex chart Q&A tasks requires advanced visual reasoning abilities
in multimodal large language models (MLLMs). Recent studies highlight that
these abilities consist of two main parts: recognizing key information from
visual inputs and conducting reasoning over it. Thus, a promising approach to
enhance MLLMs is to construct relevant training data focusing on the two
aspects. However, collecting and annotating complex charts and questions is
costly and time-consuming, and ensuring the quality of annotated answers
remains a challenge. In this paper, we propose Code-as-Intermediary Translation
(CIT), a cost-effective, efficient and easily scalable data synthesis method
for distilling visual reasoning abilities from LLMs to MLLMs. The code serves
as an intermediary that translates visual chart representations into textual
representations, enabling LLMs to understand cross-modal information.
Specifically, we employ text-based synthesizing techniques to construct
chart-plotting code and produce ReachQA, a dataset containing 3k
reasoning-intensive charts and 20k Q&A pairs to enhance both recognition and
reasoning abilities. Experiments show that when fine-tuned with our data,
models not only perform well on chart-related benchmarks, but also demonstrate
improved multimodal reasoning abilities on general mathematical benchmarks like
MathVista. The code and dataset are publicly available at
https://github.com/hewei2001/ReachQA.
|
Wei He, Zhiheng Xi, Wanxu Zhao, Xiaoran Fan, Yiwen Ding, Zifei Shan,
Tao Gui, Qi Zhang, Xuanjing Huang
|
Distill Visual Chart Reasoning Ability from LLMs to MLLMs
| null |
cs.CL
| 2024-10-25T00:00:00 |
2308.09957
|
LLMs like GPT are great at tasks involving English which dominates in their
training data. In this paper, we look at how they cope with tasks involving
languages that are severely under-represented in their training data, in the
context of data-to-text generation for Irish, Maltese, Welsh and Breton. During
the prompt-engineering phase we tested a range of prompt types and formats on
GPT-3.5 and 4 with a small sample of example input/output pairs. We then fully
evaluated the two most promising prompts in two scenarios: (i) direct
generation into the under-resourced language, and (ii) generation into English
followed by translation into the under-resourced language. We find that
few-shot prompting works better for direct generation into under-resourced
languages, but that the difference disappears when pivoting via English. The
few-shot + translation system variants were submitted to the WebNLG 2023 shared
task where they outperformed competitor systems by substantial margins in all
languages on all metrics. We conclude that good performance on under-resourced
languages can be achieved out-of-the box with state-of-the-art LLMs. However,
our best results (for Welsh) remain well below the lowest ranked English system
at WebNLG'20.
|
Michela Lorandi and Anya Belz
|
Data-to-text Generation for Severely Under-Resourced Languages with
GPT-3.5: A Bit of Help Needed from Google Translate
| null |
cs.CL cs.AI
| 2023-08-22T00:00:00 |
1605.03832
|
We introduce polyglot language models, recurrent neural network models
trained to predict symbol sequences in many different languages using shared
representations of symbols and conditioning on typological information about
the language to be predicted. We apply these to the problem of modeling phone
sequences---a domain in which universal symbol inventories and
cross-linguistically shared feature representations are a natural fit.
Intrinsic evaluation on held-out perplexity, qualitative analysis of the
learned representations, and extrinsic evaluation in two downstream
applications that make use of phonetic features show (i) that polyglot models
better generalize to held-out data than comparable monolingual models and (ii)
that polyglot phonetic feature representations are of higher quality than those
learned monolingually.
|
Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample,
Patrick Littell, David Mortensen, Alan W Black, Lori Levin and Chris Dyer
|
Polyglot Neural Language Models: A Case Study in Cross-Lingual Phonetic
Representation Learning
| null |
cs.CL
| 2016-05-13T00:00:00 |
1705.02669
|
Online review communities are dynamic as users join and leave, adopt new
vocabulary, and adapt to evolving trends. Recent work has shown that
recommender systems benefit from explicit consideration of user experience.
However, prior work assumes a fixed number of discrete experience levels,
whereas in reality users gain experience and mature continuously over time.
This paper presents a new model that captures the continuous evolution of user
experience, and the resulting language model in reviews and other posts. Our
model is unsupervised and combines principles of Geometric Brownian Motion,
Brownian Motion, and Latent Dirichlet Allocation to trace a smooth temporal
progression of user experience and language model respectively. We develop
practical algorithms for estimating the model parameters from data and for
inference with our model (e.g., to recommend items). Extensive experiments with
five real-world datasets show that our model not only fits data better than
discrete-model baselines, but also outperforms state-of-the-art methods for
predicting item ratings.
|
Subhabrata Mukherjee, Stephan Guennemann, Gerhard Weikum
|
Item Recommendation with Continuous Experience Evolution of Users using
Brownian Motion
| null |
cs.AI cs.CL cs.IR cs.SI stat.ML
| 2017-08-10T00:00:00 |
1704.08352
|
Words can be represented by composing the representations of subword units
such as word segments, characters, and/or character n-grams. While such
representations are effective and may capture the morphological regularities of
words, they have not been systematically compared, and it is not understood how
they interact with different morphological typologies. On a language modeling
task, we present experiments that systematically vary (1) the basic unit of
representation, (2) the composition of these representations, and (3) the
morphological typology of the language modeled. Our results extend previous
findings that character representations are effective across typologies, and we
find that a previously unstudied combination of character trigram
representations composed with bi-LSTMs outperforms most others. But we also
find room for improvement: none of the character-level models match the
predictive accuracy of a model with access to true morphological analyses, even
when learned from an order of magnitude more data.
|
Clara Vania and Adam Lopez
|
From Characters to Words to in Between: Do We Capture Morphology?
| null |
cs.CL
| 2017-04-28T00:00:00 |
1803.06456
|
We propose two models for a special case of authorship verification problem.
The task is to investigate whether the two documents of a given pair are
written by the same author. We consider the authorship verification problem for
both small and large scale datasets. The underlying small-scale problem has two
main challenges: First, the authors of the documents are unknown to us because
no previous writing samples are available. Second, the two documents are short
(a few hundred to a few thousand words) and may differ considerably in the
genre and/or topic. To solve it, we propose a transformation encoder to transform
one document of the pair into the other. This document transformation generates
a loss which is used as a recognizable feature to verify if the authors of the
pair are identical. For the large scale problem where various authors are
engaged and more examples are available with larger length, a parallel
recurrent neural network is proposed. It compares the language models of the
two documents. We evaluate our methods on various types of datasets including
Authorship Identification datasets of PAN competition, Amazon reviews, and
machine learning articles. Experiments show that both methods achieve stable
and competitive performance compared to the baselines.
|
Marjan Hosseinia and Arjun Mukherjee
|
Experiments with Neural Networks for Small and Large Scale Authorship
Verification
| null |
cs.CL
| 2018-03-20T00:00:00 |
cmp-lg/9612005
|
The Maximum Entropy Modeling Toolkit supports parameter estimation and
prediction for statistical language models in the maximum entropy framework.
The maximum entropy framework provides a constructive method for obtaining the
unique conditional distribution p*(y|x) that satisfies a set of linear
constraints and maximizes the conditional entropy H(p|f) with respect to the
empirical distribution f(x). The maximum entropy distribution p*(y|x) also has
a unique parametric representation in the class of exponential models, as
m(y|x) = r(y|x)/Z(x), where the numerator r(y|x) = prod_i alpha_i^g_i(x,y) is a
product of exponential weights, with alpha_i = exp(lambda_i), and the
denominator Z(x) = sum_y r(y|x) is required to satisfy the axioms of
probability.
This manual explains how to build maximum entropy models for discrete domains
with the Maximum Entropy Modeling Toolkit (MEMT). First we summarize the steps
necessary to implement a language model using the toolkit. Next we discuss the
executables provided by the toolkit and explain the file formats required by
the toolkit. Finally, we review the maximum entropy framework and apply it to
the problem of statistical language modeling.
Keywords: statistical language models, maximum entropy, exponential models,
improved iterative scaling, Markov models, triggers.
|
Eric Sven Ristad
|
Maximum Entropy Modeling Toolkit
| null |
cmp-lg cs.CL
| 2008-02-03T00:00:00 |
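
A worked numeric sketch of the exponential-form conditional model stated in the entry above (cmp-lg/9612005): m(y|x) = prod_i alpha_i^g_i(x,y) / Z(x) with alpha_i = exp(lambda_i) and Z(x) = sum_y prod_i alpha_i^g_i(x,y). Features and weights below are toy values, not output of improved iterative scaling.

```python
import math
from typing import Callable, Dict, List

def maxent_conditional(
    x: str,
    ys: List[str],
    features: List[Callable[[str, str], float]],
    lambdas: List[float],
) -> Dict[str, float]:
    """Evaluate the exponential-form conditional distribution for one context x."""
    alphas = [math.exp(lam) for lam in lambdas]          # alpha_i = exp(lambda_i)
    unnorm = {
        y: math.prod(a ** f(x, y) for a, f in zip(alphas, features)) for y in ys
    }
    z = sum(unnorm.values())                             # Z(x) normalizes over y
    return {y: r / z for y, r in unnorm.items()}

# Toy usage: two binary features over a two-class prediction.
if __name__ == "__main__":
    feats = [
        lambda x, y: 1.0 if (y == "NOUN" and x.endswith("ing")) else 0.0,
        lambda x, y: 1.0 if (y == "VERB" and x.endswith("ing")) else 0.0,
    ]
    print(maxent_conditional("running", ["NOUN", "VERB"], feats, [0.2, 1.1]))
```
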
2205.15172
|
Recent work has shown that language models scaled to billions of parameters,
such as GPT-3, perform remarkably well in zero-shot and few-shot scenarios. In
this work, we experiment with zero-shot models in the legal case entailment
task of the COLIEE 2022 competition. Our experiments show that scaling the
number of parameters in a language model improves the F1 score of our previous
zero-shot result by more than 6 points, suggesting that stronger zero-shot
capability may be a characteristic of larger models, at least for this task.
Our 3B-parameter zero-shot model outperforms all models, including ensembles,
in the COLIEE 2021 test set and also achieves the best performance of a single
model in the COLIEE 2022 competition, second only to the ensemble composed of
the 3B model itself and a smaller version of the same model. Despite the
challenges posed by large language models, mainly due to latency constraints in
real-time applications, we provide a demonstration of our zero-shot monoT5-3b
model being used in production as a search engine, including for legal
documents. The code for our submission and the demo of our system are available
at https://github.com/neuralmind-ai/coliee and
https://neuralsearchx.neuralmind.ai, respectively.
|
Guilherme Moraes Rosa and Luiz Bonifacio and Vitor Jeronymo and Hugo
Abonizio and Roberto Lotufo and Rodrigo Nogueira
|
Billions of Parameters Are Worth More Than In-domain Training Data: A
case study in the Legal Case Entailment Task
| null |
cs.CL
| 2022-05-31T00:00:00 |
1511.07788
|
Spoken language translation (SLT) is becoming more important in the
increasingly globalized world, both from a social and economic point of view.
It is one of the major challenges for automatic speech recognition (ASR) and
machine translation (MT), driving intense research activities in these areas.
While past research in SLT, due to technology limitations, dealt mostly with
speech recorded under controlled conditions, today's major challenge is the
translation of spoken language as it can be found in real life. Considered
application scenarios range from portable translators for tourists, lectures
and presentations translation, to broadcast news and shows with live
captioning. We would like to present PJIIT's experiences in the SLT gained from
the Eu-Bridge 7th framework project and the U-Star consortium activities for
the Polish/English language pair. Presented research concentrates on ASR
adaptation for Polish (state-of-the-art acoustic models: DBN-BLSTM training,
Kaldi: LDA+MLLT+SAT+MMI), language modeling for ASR & MT (text normalization,
RNN-based LMs, n-gram model domain interpolation) and statistical translation
techniques (hierarchical models, factored translation models, automatic casing
and punctuation, comparable and bilingual corpora preparation). While results
for the well-defined domains (phrases for travelers, parliament speeches,
medical documentation, movie subtitling) are very encouraging, less defined
domains (presentation, lectures) still form a challenge. Our progress in the
IWSLT TED task (MT only) will be presented, as well as current progress in the
Polish ASR.
|
Krzysztof Marasek, {\L}ukasz Brocki, Danijel Korzinek, Krzysztof
Wo{\l}k, Ryszard Gubrynowicz
|
Spoken Language Translation for Polish
| null |
cs.CL
| 2015-11-25T00:00:00 |
2004.14975
|
How does language model pretraining help transfer learning? We consider a
simple ablation technique for determining the impact of each pretrained layer
on transfer task performance. This method, partial reinitialization, involves
replacing different layers of a pretrained model with random weights, then
finetuning the entire model on the transfer task and observing the change in
performance. This technique reveals that in BERT, layers with high probing
performance on downstream GLUE tasks are neither necessary nor sufficient for
high accuracy on those tasks. Furthermore, the benefit of using pretrained
parameters for a layer varies dramatically with finetuning dataset size:
parameters that provide tremendous performance improvement when data is
plentiful may provide negligible benefits in data-scarce settings. These
results reveal the complexity of the transfer learning process, highlighting
the limitations of methods that operate on frozen models or single data
samples.
|
Alex Tamkin, Trisha Singh, Davide Giovanardi, Noah Goodman
|
Investigating Transferability in Pretrained Language Models
| null |
cs.CL cs.AI cs.LG
| 2020-11-11T00:00:00 |
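
A sketch of the partial-reinitialization ablation described in the entry above (2004.14975), using a toy PyTorch encoder stack rather than BERT: the top k layers of a pretrained encoder have their weights replaced with fresh random values before the whole model is fine-tuned.

```python
import torch
import torch.nn as nn

def build_toy_encoder(num_layers: int = 6, dim: int = 64, heads: int = 4) -> nn.ModuleList:
    layer = lambda: nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
    return nn.ModuleList([layer() for _ in range(num_layers)])

def reinitialize_top_layers(layers: nn.ModuleList, k: int) -> None:
    """Re-initialize every submodule of the last k layers that defines
    reset_parameters(), mimicking a random restart of those layers."""
    def _reset(module: nn.Module) -> None:
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()
    for layer in list(layers)[-k:]:
        layer.apply(_reset)

if __name__ == "__main__":
    encoder = build_toy_encoder()
    before = encoder[-1].linear1.weight.clone()
    reinitialize_top_layers(encoder, k=2)
    print(torch.allclose(before, encoder[-1].linear1.weight))  # False: layer was reset
```
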
1404.1521
|
One of the major research trends currently is the evolution of heterogeneous
parallel computing. GP-GPU computing is being widely used and several
applications have been designed to exploit the massive parallelism that
GP-GPU's have to offer. While GPU's have always been widely used in areas of
computer vision for image processing, little has been done to investigate
whether the massive parallelism provided by GP-GPU's can be utilized
effectively for Natural Language Processing (NLP) tasks. In this work, we
investigate and explore the power of GP-GPU's in the task of learning language
models. More specifically, we investigate the performance of training Polyglot
language models using deep belief neural networks. We evaluate the performance
of training the model on the GPU and present optimizations that boost the
performance on the GPU. One of the key optimizations we propose increases the
performance of a function involved in calculating and updating the gradient by
approximately 50 times on the GPU for sufficiently large batch sizes. We show
that with the above optimizations, the GP-GPU's performance on the task
increases by a factor of approximately 3-4. The optimizations we made are generic
Theano optimizations and hence potentially boost the performance of other
models which rely on these operations. We also show that these optimizations
result in the GPU's performance at this task being now comparable to that on
the CPU. We conclude by presenting a thorough evaluation of the applicability
of GP-GPU's for this task and highlight the factors limiting the performance of
training a Polyglot model on the GPU.
|
Vivek Kulkarni, Rami Al-Rfou', Bryan Perozzi, Steven Skiena
|
Exploring the power of GPU's for training Polyglot language models
| null |
cs.LG cs.CL
| 2014-04-16T00:00:00 |
2307.16230
|
Recently, text watermarking algorithms for large language models (LLMs) have
been proposed to mitigate the potential harms of text generated by LLMs,
including fake news and copyright issues. However, current watermark detection
algorithms require the secret key used in the watermark generation process,
making them susceptible to security breaches and counterfeiting during public
detection. To address this limitation, we propose an unforgeable publicly
verifiable watermark algorithm named UPV that uses two different neural
networks for watermark generation and detection, instead of using the same key
at both stages. Meanwhile, the token embedding parameters are shared between
the generation and detection networks, which makes the detection network
achieve a high accuracy very efficiently. Experiments demonstrate that our
algorithm attains high detection accuracy and computational efficiency through
neural networks. Subsequent analysis confirms the high complexity involved in
forging the watermark from the detection network. Our code is available at
\href{https://github.com/THU-BPM/unforgeable_watermark}{https://github.com/THU-BPM/unforgeable\_watermark}.
Additionally, our algorithm could also be accessed through MarkLLM
\citep{pan2024markllm} \footnote{https://github.com/THU-BPM/MarkLLM}.
|
Aiwei Liu, Leyi Pan, Xuming Hu, Shu'ang Li, Lijie Wen, Irwin King and
Philip S. Yu
|
An Unforgeable Publicly Verifiable Watermark for Large Language Models
| null |
cs.CL
| 2024-05-28T00:00:00 |
1507.01193
|
Recent work on language modelling has shifted focus from count-based models
to neural models. In these works, the words in each sentence are always
considered in a left-to-right order. In this paper we show how we can improve
the performance of the recurrent neural network (RNN) language model by
incorporating the syntactic dependencies of a sentence, which have the effect
of bringing relevant contexts closer to the word being predicted. We evaluate
our approach on the Microsoft Research Sentence Completion Challenge and show
that the dependency RNN proposed improves over the RNN by about 10 points in
accuracy. Furthermore, we achieve results comparable with the state-of-the-art
models on this task.
|
Piotr Mirowski, Andreas Vlachos
|
Dependency Recurrent Neural Language Models for Sentence Completion
| null |
cs.CL cs.AI cs.LG
| 2015-07-07T00:00:00 |
cmp-lg/9801001
|
We describe a simple variant of the interpolated Markov model with
non-emitting state transitions and prove that it is strictly more powerful than
any Markov model. More importantly, the non-emitting model outperforms the
classic interpolated model on the natural language texts under a wide range of
experimental conditions, with only a modest increase in computational
requirements. The non-emitting model is also much less prone to overfitting.
Keywords: Markov model, interpolated Markov model, hidden Markov model,
mixture modeling, non-emitting state transitions, state-conditional
interpolation, statistical language model, discrete time series, Brown corpus,
Wall Street Journal.
|
Eric Sven Ristad and Robert G. Thomas
|
Hierarchical Non-Emitting Markov Models
| null |
cmp-lg cs.CL
| 2007-05-23T00:00:00 |
cmp-lg/9603002
|
Phrase-structure grammars are effective models for important syntactic and
semantic aspects of natural languages, but can be computationally too demanding
for use as language models in real-time speech recognition. Therefore,
finite-state models are used instead, even though they lack expressive power.
To reconcile those two alternatives, we designed an algorithm to compute
finite-state approximations of context-free grammars and
context-free-equivalent augmented phrase-structure grammars. The approximation
is exact for certain context-free grammars generating regular languages,
including all left-linear and right-linear context-free grammars. The algorithm
has been used to build finite-state language models for limited-domain speech
recognition tasks.
|
Fernando C. N. Pereira and Rebecca N. Wright (AT&T Research)
|
Finite-State Approximation of Phrase-Structure Grammars
| null |
cmp-lg cs.CL
| 2008-02-03T00:00:00 |
2501.12162
|
This paper introduces AdaServe, the first LLM serving system to support SLO
customization through fine-grained speculative decoding. AdaServe leverages the
logits of a draft model to predict the speculative accuracy of tokens and
employs a theoretically optimal algorithm to construct token trees for
verification. To accommodate diverse SLO requirements without compromising
throughput, AdaServe employs a speculation-and-selection scheme that first
constructs candidate token trees for each request and then dynamically selects
tokens to meet individual SLO constraints while optimizing throughput.
Comprehensive evaluations demonstrate that AdaServe achieves up to 73% higher
SLO attainment and 74% higher goodput compared to state-of-the-art systems.
These results underscore AdaServe's potential to enhance the efficiency and
adaptability of LLM deployments across varied application scenarios.
|
Zikun Li, Zhuofu Chen, Remi Delacourt, Gabriele Oliaro, Zeyu Wang,
Qinghan Chen, Shuhuai Lin, April Yang, Zhihao Zhang, Zhuoming Chen, Sean Lai,
Xupeng Miao, Zhihao Jia
|
AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative
Decoding
| null |
cs.CL cs.AI cs.DC cs.LG
| 2025-01-22T00:00:00 |
2502.11401
|
A new trend uses LLMs as dense text encoders via contrastive learning.
However, since LLM embeddings predict the probability distribution of the next
token, they are inherently generative and distributive, conflicting with
contrastive learning, which requires embeddings to capture full-text semantics
and align via cosine similarity. This discrepancy hinders the full utilization
of LLMs' pre-training capabilities, resulting in inefficient learning. In
response to this issue, we propose AutoRegEmbed, a new contrastive learning
method built on embedding conditional probability distributions, which
integrates two core tasks: information compression and conditional distribution
alignment. The information compression task encodes text into the embedding
space, ensuring that the embedding vectors capture global semantics. The
conditional distribution alignment task focuses on aligning text embeddings
with positive-sample embeddings by leveraging the conditional distribution of
embeddings while simultaneously reducing the likelihood of generating negative
samples from text embeddings, thereby achieving embedding alignment and
uniformity. Experimental results demonstrate that our method significantly
outperforms traditional contrastive learning approaches and achieves
performance comparable to state-of-the-art models when using the same amount of
data.
|
Jingcheng Deng, Zhongtao Jiang, Liang Pang, Liwei Chen, Kun Xu, Zihao
Wei, Huawei Shen, Xueqi Cheng
|
Following the Autoregressive Nature of LLM Embeddings via Compression
and Alignment
| null |
cs.CL
| 2025-02-28T00:00:00 |
2309.13734
|
Stance classification, the task of predicting the viewpoint of an author on a
subject of interest, has long been a focal point of research in domains ranging
from social science to machine learning. Current stance detection methods rely
predominantly on manual annotation of sentences, followed by training a
supervised machine learning model. However, this manual annotation process
requires laborious annotation effort, and thus hampers its potential to
generalize across different contexts. In this work, we investigate the use of
Large Language Models (LLMs) as a stance detection methodology that can reduce
or even eliminate the need for manual annotations. We investigate 10
open-source models and 7 prompting schemes, finding that LLMs are competitive
with in-domain supervised models but are not necessarily consistent in their
performance. We also fine-tuned the LLMs, but discovered that the fine-tuning
process does not necessarily lead to better performance. In general, we
discover that LLMs do not routinely outperform their smaller supervised machine
learning models, and thus call for stance detection to become a benchmark on
which LLMs are also optimized. The code used in this study is available at
\url{https://github.com/ijcruic/LLM-Stance-Labeling}
|
Iain J. Cruickshank and Lynnette Hui Xian Ng
|
Prompting and Fine-Tuning Open-Sourced Large Language Models for Stance
Classification
| null |
cs.CL cs.AI
| 2024-03-07T00:00:00 |
2503.02911
|
Autonomous driving (AD) testing constitutes a critical methodology for
assessing performance benchmarks prior to product deployment. The creation of
segmented scenarios within a simulated environment is acknowledged as a robust
and effective strategy; however, the process of tailoring these scenarios often
necessitates laborious and time-consuming manual efforts, thereby hindering the
development and implementation of AD technologies. In response to this
challenge, we introduce Text2Scenario, a framework that leverages a Large
Language Model (LLM) to autonomously generate simulation test scenarios that
closely align with user specifications, derived from their natural language
inputs. Specifically, an LLM, equipped with a meticulously engineered input
prompt scheme, functions as a text parser for test scenario descriptions,
extracting from a hierarchically organized scenario repository the components
that most accurately reflect the user's preferences. Subsequently, by
exploiting the precedence of scenario components, the process involves
sequentially matching and linking scenario representations within a Domain
Specific Language corpus, ultimately fabricating executable test scenarios. The
experimental results demonstrate that such prompt engineering can meticulously
extract the nuanced details of scenario elements embedded within various
descriptive formats, with the majority of generated scenarios aligning closely
with the user's initial expectations, allowing for the efficient and precise
evaluation of diverse AD stacks devoid of the labor-intensive need for manual
scenario configuration. Project page:
https://caixxuan.github.io/Text2Scenario.GitHub.io.
|
Xuan Cai, Xuesong Bai, Zhiyong Cui, Danmu Xie, Daocheng Fu, Haiyang
Yu, Yilong Ren
|
Text2Scenario: Text-Driven Scenario Generation for Autonomous Driving
Test
| null |
cs.SE cs.AI cs.CL
| 2025-03-06T00:00:00 |
2203.00759
|
Prompt-Tuning is a new paradigm for finetuning pre-trained language models in
a parameter-efficient way. Here, we explore the use of HyperNetworks to
generate hyper-prompts: we propose HyperPrompt, a novel architecture for
prompt-based task-conditioning of self-attention in Transformers. The
hyper-prompts are end-to-end learnable via generation by a HyperNetwork.
HyperPrompt allows the network to learn task-specific feature maps where the
hyper-prompts serve as task global memories for the queries to attend to, at
the same time enabling flexible information sharing among tasks. We show that
HyperPrompt is competitive against strong multi-task learning baselines with as
few as $0.14\%$ of additional task-conditioning parameters, achieving great
parameter and computational efficiency. Through extensive empirical
experiments, we demonstrate that HyperPrompt can achieve superior performances
over strong T5 multi-task learning baselines and parameter-efficient adapter
variants including Prompt-Tuning and HyperFormer++ on Natural Language
Understanding benchmarks of GLUE and SuperGLUE across many model sizes.
|
Yun He, Huaixiu Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi
Aribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Metzler, Heng-Tze Cheng, Ed
H. Chi
|
HyperPrompt: Prompt-based Task-Conditioning of Transformers
| null |
cs.CL cs.LG
| 2022-06-16T00:00:00 |
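
A rough sketch of the hyper-prompt idea from the entry above (2203.00759): a small HyperNetwork maps a task embedding to prompt key/value vectors that could be prepended to each self-attention layer. Dimensions and module names are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HyperPromptGenerator(nn.Module):
    def __init__(self, num_tasks: int, num_layers: int, prompt_len: int,
                 d_model: int, task_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.task_embedding = nn.Embedding(num_tasks, task_dim)
        self.num_layers, self.prompt_len, self.d_model = num_layers, prompt_len, d_model
        # One shared MLP produces key and value prompts for every layer.
        self.hypernet = nn.Sequential(
            nn.Linear(task_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_layers * prompt_len * d_model * 2),
        )

    def forward(self, task_id: torch.Tensor):
        """Return (key_prompts, value_prompts), each shaped
        [batch, num_layers, prompt_len, d_model]."""
        flat = self.hypernet(self.task_embedding(task_id))
        kv = flat.view(-1, self.num_layers, self.prompt_len, self.d_model, 2)
        return kv[..., 0], kv[..., 1]

if __name__ == "__main__":
    gen = HyperPromptGenerator(num_tasks=4, num_layers=6, prompt_len=8, d_model=64)
    keys, values = gen(torch.tensor([0, 2]))
    print(keys.shape)  # torch.Size([2, 6, 8, 64])
```
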
2412.18139
|
The in-image machine translation task involves translating text embedded
within images, with the translated results presented in image format. While
this task has numerous applications in various scenarios such as film poster
translation and everyday scene image translation, existing methods frequently
neglect the aspect of consistency throughout this process. We propose the need
to uphold two types of consistency in this task: translation consistency and
image generation consistency. The former entails incorporating image
information during translation, while the latter involves maintaining
consistency between the style of the text-image and the original image,
ensuring background integrity. To address these consistency requirements, we
introduce a novel two-stage framework named HCIIT (High-Consistency In-Image
Translation) which involves text-image translation using a multimodal
multilingual large language model in the first stage and image backfilling with
a diffusion model in the second stage. Chain of thought learning is utilized in
the first stage to enhance the model's ability to leverage image information
during translation. Subsequently, a diffusion model trained for
style-consistent text-image generation ensures uniformity in text style within
images and preserves background details. A dataset comprising 400,000
style-consistent pseudo text-image pairs is curated for model training. Results
obtained on both curated test sets and authentic image test sets validate the
effectiveness of our framework in ensuring consistency and producing
high-quality translated images.
|
Chengpeng Fu, Xiaocheng Feng, Yichong Huang, Wenshuai Huo, Baohang Li,
Zhirui Zhang, Yunfei Lu, Dandan Tu, Duyu Tang, Hui Wang, Bing Qin, Ting Liu
|
Ensuring Consistency for In-Image Translation
| null |
cs.CL
| 2024-12-25T00:00:00 |
2112.06377
|
Fast-developing fields such as Artificial Intelligence (AI) often outpace the
efforts of encyclopedic sources such as Wikipedia, which either do not
completely cover recently-introduced topics or lack such content entirely. As a
result, methods for automatically producing content are valuable tools to
address this information overload. We show that recent advances in pretrained
language modeling can be combined for a two-stage extractive and abstractive
approach for Wikipedia lead paragraph generation. We extend this approach to
generate longer Wikipedia-style summaries with sections and examine how such
methods struggle in this application through detailed studies with 100
reference human-collected surveys. To the best of our knowledge, this is the
first study on utilizing web resources for long Wikipedia-style summaries.
|
Irene Li, Alexander Fabbri, Rina Kawamura, Yixin Liu, Xiangru Tang,
Jaesung Tae, Chang Shen, Sally Ma, Tomoe Mizutani, Dragomir Radev
|
Surfer100: Generating Surveys From Web Resources, Wikipedia-style
| null |
cs.CL cs.LG
| 2022-06-23T00:00:00 |
cmp-lg/9706007
|
We consider the use of language models whose size and accuracy are
intermediate between different order n-gram models. Two types of models are
studied in particular. Aggregate Markov models are class-based bigram models in
which the mapping from words to classes is probabilistic. Mixed-order Markov
models combine bigram models whose predictions are conditioned on different
words. Both types of models are trained by Expectation-Maximization (EM)
algorithms for maximum likelihood estimation. We examine smoothing procedures
in which these models are interposed between different order n-grams. This is
found to significantly reduce the perplexity of unseen word combinations.
|
Lawrence Saul and Fernando Pereira (AT&T Labs -- Research)
|
Aggregate and mixed-order Markov models for statistical language
processing
| null |
cmp-lg cs.CL
| 2008-02-03T00:00:00 |
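
A small sketch of the aggregate (class-based) bigram factorization described in the entry above (cmp-lg/9706007): p(w2 | w1) = sum_c p(c | w1) * p(w2 | c). Toy probability tables stand in for parameters that the paper estimates with EM.

```python
from typing import Dict

def aggregate_bigram_prob(
    w1: str,
    w2: str,
    p_class_given_word: Dict[str, Dict[str, float]],  # p(c | w1)
    p_word_given_class: Dict[str, Dict[str, float]],  # p(w2 | c)
) -> float:
    """Marginalize the latent class c between the conditioning word and the prediction."""
    return sum(
        pc * p_word_given_class[c].get(w2, 0.0)
        for c, pc in p_class_given_word.get(w1, {}).items()
    )

if __name__ == "__main__":
    p_c_w = {"the": {"DET_CTX": 1.0}}
    p_w_c = {"DET_CTX": {"cat": 0.6, "dog": 0.4}}
    print(aggregate_bigram_prob("the", "cat", p_c_w, p_w_c))  # 0.6
```
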
1911.03090
|
Pretrained transformer-based language models have achieved state of the art
across countless tasks in natural language processing. These models are highly
expressive, comprising at least a hundred million parameters and a dozen
layers. Recent evidence suggests that only a few of the final layers need to be
fine-tuned for high quality on downstream tasks. Naturally, a subsequent
research question is, "how many of the last layers do we need to fine-tune?" In
this paper, we precisely answer this question. We examine two recent pretrained
language models, BERT and RoBERTa, across standard tasks in textual entailment,
semantic similarity, sentiment analysis, and linguistic acceptability. We vary
the number of final layers that are fine-tuned, then study the resulting change
in task-specific effectiveness. We show that only a fourth of the final layers
need to be fine-tuned to achieve 90% of the original quality. Surprisingly, we
also find that fine-tuning all layers does not always help.
|
Jaejun Lee, Raphael Tang, Jimmy Lin
|
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning
| null |
cs.CL
| 2019-11-11T00:00:00 |
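
A sketch of the layer-freezing setup from the entry above (1911.03090), on a toy PyTorch encoder rather than BERT/RoBERTa: gradients are disabled everywhere except the last k layers, so the optimizer only updates those layers (plus any task head added later).

```python
import torch.nn as nn

def freeze_all_but_last_k(layers: nn.ModuleList, k: int) -> None:
    """Disable gradients for every layer except the final k."""
    for layer in list(layers)[:-k] if k > 0 else list(layers):
        for param in layer.parameters():
            param.requires_grad = False

if __name__ == "__main__":
    encoder = nn.ModuleList(
        [nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True) for _ in range(12)]
    )
    freeze_all_but_last_k(encoder, k=3)
    trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
    total = sum(p.numel() for p in encoder.parameters())
    print(f"trainable fraction: {trainable / total:.2f}")  # roughly 3/12 of the stack
```
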
1905.05583
|
Language model pre-training has proven to be useful in learning universal
language representations. As a state-of-the-art language model pre-training
model, BERT (Bidirectional Encoder Representations from Transformers) has
achieved amazing results in many language understanding tasks. In this paper,
we conduct exhaustive experiments to investigate different fine-tuning methods
of BERT on text classification task and provide a general solution for BERT
fine-tuning. Finally, the proposed solution obtains new state-of-the-art
results on eight widely-studied text classification datasets.
|
Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang
|
How to Fine-Tune BERT for Text Classification?
| null |
cs.CL
| 2020-02-06T00:00:00 |
1908.09203
|
Large language models have a range of beneficial uses: they can assist in
prose, poetry, and programming; analyze dataset biases; and more. However,
their flexibility and generative capabilities also raise misuse concerns. This
report discusses OpenAI's work related to the release of its GPT-2 language
model. It discusses staged release, which allows time between model releases to
conduct risk and benefit analyses as model sizes increased. It also discusses
ongoing partnership-based research and provides recommendations for better
coordination and responsible publication in AI.
|
Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel
Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah
Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, Jasmine
Wang
|
Release Strategies and the Social Impacts of Language Models
| null |
cs.CL cs.AI cs.CY
| 2019-11-14T00:00:00 |
2403.12285
|
There are multiple sources of financial news online which influence market
movements and trader's decisions. This highlights the need for accurate
sentiment analysis, in addition to having appropriate algorithmic trading
techniques, to arrive at better informed trading decisions. Standard lexicon
based sentiment approaches have demonstrated their power in aiding financial
decisions. However, they are known to suffer from issues related to context
sensitivity and word ordering. Large Language Models (LLMs) can also be used in
this context, but they are not finance-specific and tend to require significant
computational resources. To facilitate a finance specific LLM framework, we
introduce a novel approach based on the Llama 2 7B foundational model, in order
to benefit from its generative nature and comprehensive language manipulation.
This is achieved by fine-tuning the Llama2 7B model on a small portion of
supervised financial sentiment analysis data, so as to jointly handle the
complexities of financial lexicon and context, and further equipping it with a
neural network based decision mechanism. Such a generator-classifier scheme,
referred to as FinLlama, is trained not only to classify the sentiment valence
but also quantify its strength, thus offering traders a nuanced insight into
financial news articles. Complementing this, the implementation of
parameter-efficient fine-tuning through LoRA optimises trainable parameters,
thus minimising computational and memory requirements, without sacrificing
accuracy. Simulation results demonstrate the ability of the proposed FinLlama
to provide a framework for enhanced portfolio management decisions and
increased market returns. These results underpin the ability of FinLlama to
construct high-return portfolios which exhibit enhanced resilience, even during
volatile periods and unpredictable market events.
|
Thanos Konstantinidis, Giorgos Iacovides, Mingxue Xu, Tony G.
Constantinides, Danilo Mandic
|
FinLlama: Financial Sentiment Classification for Algorithmic Trading
Applications
| null |
cs.CL cs.LG q-fin.ST q-fin.TR
| 2024-03-20T00:00:00 |
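
A hedged sketch of LoRA-based parameter-efficient fine-tuning for financial sentiment classification, in the spirit of the FinLlama entry above (2403.12285). It assumes the Hugging Face transformers and peft libraries; the checkpoint name, label count, and LoRA hyperparameters are illustrative, not the paper's exact configuration.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; requires access approval
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama blocks
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters (and head) are trainable
```
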
1907.05774
|
Data scarcity is a long-standing and crucial challenge that hinders quick
development of task-oriented dialogue systems across multiple domains:
task-oriented dialogue models are expected to learn grammar, syntax, dialogue
reasoning, decision making, and language generation from absurdly small amounts
of task-specific data. In this paper, we demonstrate that recent progress in
language modeling pre-training and transfer learning shows promise to overcome
this problem. We propose a task-oriented dialogue model that operates solely on
text input: it effectively bypasses explicit policy and language generation
modules. Building on top of the TransferTransfo framework (Wolf et al., 2019)
and generative model pre-training (Radford et al., 2019), we validate the
approach on complex multi-domain task-oriented dialogues from the MultiWOZ
dataset. Our automatic and human evaluations show that the proposed model is on
par with a strong task-specific neural baseline. In the long run, our approach
holds promise to mitigate the data scarcity problem, and to support the
construction of more engaging and more eloquent task-oriented conversational
agents.
|
Pawe{\l} Budzianowski and Ivan Vuli\'c
|
Hello, It's GPT-2 -- How Can I Help You? Towards the Use of Pretrained
Language Models for Task-Oriented Dialogue Systems
| null |
cs.CL
| 2019-08-06T00:00:00 |
2201.11473
|
Reasoning over natural language is a long-standing goal for the research
community. However, studies have shown that existing language models are
inadequate in reasoning. To address the issue, we present POET, a novel
reasoning pre-training paradigm. Through pre-training language models with
programs and their execution results, POET empowers language models to harvest
the reasoning knowledge possessed by program executors via a data-driven
approach. POET is conceptually simple and can be instantiated by different
kinds of program executors. In this paper, we showcase two simple instances
POET-Math and POET-Logic, in addition to a complex instance, POET-SQL.
Experimental results on six benchmarks demonstrate that POET can significantly
boost model performance in natural language reasoning, such as numerical
reasoning, logical reasoning, and multi-hop reasoning. POET opens a new gate on
reasoning-enhancement pre-training, and we hope our analysis would shed light
on the future research of reasoning like program executors.
|
Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan
Gao, Jian-Guang Lou, Weizhu Chen
|
Reasoning Like Program Executors
| null |
cs.CL cs.AI cs.SC
| 2022-10-25T00:00:00 |
2310.17407
|
Can a machine understand the meanings of natural language? Recent
developments in the generative large language models (LLMs) of artificial
intelligence have led to the belief that traditional philosophical assumptions
about machine understanding of language need to be revised. This article
critically evaluates the prevailing tendency to regard machine language
performance as mere syntactic manipulation and the simulation of understanding,
which is only partial and very shallow, without sufficient referential
grounding in the world. The aim is to highlight the conditions crucial to
attributing natural language understanding to state-of-the-art LLMs, where it
can be legitimately argued that LLMs not only use syntax but also semantics,
their understanding not being simulated but duplicated; and determine how they
ground the meanings of linguistic expressions.
|
Vladim\'ir Havl\'ik
|
Meaning and understanding in large language models
| null |
cs.CL
| 2023-10-27T00:00:00 |
1601.00248
|
Perplexity (per word) is the most widely used metric for evaluating language
models. Despite this, there has been no dearth of criticism for this metric.
Most of these criticisms center around lack of correlation with extrinsic
metrics like word error rate (WER), dependence upon shared vocabulary for model
comparison and unsuitability for unnormalized language model evaluation. In
this paper, we address the last problem and propose a new discriminative
entropy based intrinsic metric that works for both traditional word level
models and unnormalized language models like sentence level models. We also
propose a discriminatively trained sentence level interpretation of recurrent
neural network based language model (RNN) as an example of unnormalized
sentence level model. We demonstrate that for word level models, contrastive
entropy shows a strong correlation with perplexity. We also observe that when
trained at lower distortion levels, sentence level RNN considerably outperforms
traditional RNNs on this new metric.
|
Kushal Arora, Anand Rangarajan
|
Contrastive Entropy: A new evaluation metric for unnormalized language
models
| null |
cs.CL
| 2016-04-01T00:00:00 |
2502.06207
|
Large Language Models (LLMs) have become essential for offensive language
detection, yet their ability to handle annotation disagreement remains
underexplored. Disagreement samples, which arise from subjective
interpretations, pose a unique challenge due to their ambiguous nature.
Understanding how LLMs process these cases, particularly their confidence
levels, can offer insight into their alignment with human annotators. This
study systematically evaluates the performance of multiple LLMs in detecting
offensive language at varying levels of annotation agreement. We analyze binary
classification accuracy, examine the relationship between model confidence and
human disagreement, and explore how disagreement samples influence model
decision-making during few-shot learning and instruction fine-tuning. Our
findings reveal that LLMs struggle with low-agreement samples, often exhibiting
overconfidence in these ambiguous cases. However, utilizing disagreement
samples in training improves both detection accuracy and model alignment with
human judgment. These insights provide a foundation for enhancing LLM-based
offensive language detection in real-world moderation tasks.
|
Junyu Lu, Kai Ma, Kaichun Wang, Kelaiti Xiao, Roy Ka-Wei Lee, Bo Xu,
Liang Yang, Hongfei Lin
|
Unveiling the Capabilities of Large Language Models in Detecting
Offensive Language with Annotation Disagreement
| null |
cs.CL cs.AI
| 2025-02-18T00:00:00 |
0907.1814
|
We present BayeSum (for ``Bayesian summarization''), a model for sentence
extraction in query-focused summarization. BayeSum leverages the common case in
which multiple documents are relevant to a single query. Using these documents
as reinforcement for query terms, BayeSum is not afflicted by the paucity of
information in short queries. We show that approximate inference in BayeSum is
possible on large data sets and results in a state-of-the-art summarization
system. Furthermore, we show how BayeSum can be understood as a justified query
expansion technique in the language modeling for IR framework.
|
Hal Daum\'e III
|
Bayesian Query-Focused Summarization
|
ACL 2006
|
cs.CL cs.IR cs.LG
| 2009-07-13T00:00:00 |
1501.05203
|
Reordering is a challenge to machine translation (MT) systems. In MT, the
widely used approach is to apply word based language model (LM) which considers
the constituent units of a sentence as words. In speech recognition (SR), some
phrase based LM have been proposed. However, those LMs are not necessarily
suitable or optimal for reordering. We propose two phrase based LMs which
considers the constituent units of a sentence as phrases. Experiments show that
our phrase based LMs outperform the word based LM with the respect of
perplexity and n-best list re-ranking.
|
Geliang Chen
|
Phrase Based Language Model for Statistical Machine Translation:
Empirical Study
| null |
cs.CL
| 2015-02-19T00:00:00 |
1809.00042
|
RNN language models have achieved state-of-the-art perplexity results and
have proven useful in a suite of NLP tasks, but it is as yet unclear what
syntactic generalizations they learn. Here we investigate whether
state-of-the-art RNN language models represent long-distance filler-gap
dependencies and constraints on them. Examining RNN behavior on experimentally
controlled sentences designed to expose filler-gap dependencies, we show that
RNNs can represent the relationship in multiple syntactic positions and over
large spans of text. Furthermore, we show that RNNs learn a subset of the known
restrictions on filler-gap dependencies, known as island constraints: RNNs show
evidence for wh-islands, adjunct islands, and complex NP islands. These studies
demonstrate that state-of-the-art RNN models are able to learn and generalize
about empty syntactic positions.
|
Ethan Wilcox, Roger Levy, Takashi Morita and Richard Futrell
|
What do RNN Language Models Learn about Filler-Gap Dependencies?
| null |
cs.CL
| 2018-09-05T00:00:00 |
2106.02834
|
Pre-trained multilingual language models (LMs) have achieved state-of-the-art
results in cross-lingual transfer, but they often lead to an inequitable
representation of languages due to limited capacity, skewed pre-training data,
and sub-optimal vocabularies. This has prompted the creation of an ever-growing
pre-trained model universe, where each model is trained on large amounts of
language or domain specific data with a carefully curated, linguistically
informed vocabulary. However, doing so brings us back full circle and prevents
one from leveraging the benefits of multilinguality. To address the gaps at
both ends of the spectrum, we propose MergeDistill, a framework to merge
pre-trained LMs in a way that can best leverage their assets with minimal
dependencies, using task-agnostic knowledge distillation. We demonstrate the
applicability of our framework in a practical setting by leveraging
pre-existing teacher LMs and training student LMs that perform competitively
with or even outperform teacher LMs trained on several orders of magnitude more
data and with a fixed model capacity. We also highlight the importance of
teacher selection and its impact on student model performance.
|
Simran Khanuja, Melvin Johnson, Partha Talukdar
|
MergeDistill: Merging Pre-trained Language Models using Distillation
| null |
cs.CL
| 2021-06-08T00:00:00 |
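
A sketch of the task-agnostic distillation objective underlying teacher-to-student transfer in the MergeDistill entry above (2106.02834): a temperature-scaled KL divergence between teacher and student next-token distributions. Vocabulary alignment across multiple teachers, which the paper also handles, is omitted here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Temperature-scaled KL(teacher || student) over the vocabulary,
    summed over positions and averaged over the batch."""
    s_logprob = F.log_softmax(student_logits / temperature, dim=-1)
    t_prob = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s_logprob, t_prob, reduction="batchmean") * temperature ** 2

if __name__ == "__main__":
    student = torch.randn(4, 16, 32000)   # [batch, seq, vocab]
    teacher = torch.randn(4, 16, 32000)
    print(distillation_loss(student, teacher).item())
```
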
2502.11843
|
Large Language Models (LLMs) are widely used as conversational agents,
exploiting their capabilities in various sectors such as education, law,
medicine, and more. However, LLMs are often subjected to context-shifting
behaviour, resulting in a lack of consistent and interpretable
personality-aligned interactions. Adherence to psychological traits lacks
comprehensive analysis, especially in the case of dyadic (pairwise)
conversations. We examine this challenge from two viewpoints, initially using
two conversation agents to generate a discourse on a certain topic with an
assigned personality from the OCEAN framework (Openness, Conscientiousness,
Extraversion, Agreeableness, and Neuroticism) as High/Low for each trait. This
is followed by using multiple judge agents to infer the original traits
assigned to explore prediction consistency, inter-model agreement, and
alignment with the assigned personality. Our findings indicate that while LLMs
can be guided toward personality-driven dialogue, their ability to maintain
personality traits varies significantly depending on the combination of models
and discourse settings. These inconsistencies emphasise the challenges in
achieving stable and interpretable personality-aligned interactions in LLMs.
|
Pranav Bhandari and Nicolas Fay and Michael Wise and Amitava Datta and
Stephanie Meek and Usman Naseem and Mehwish Nasim
|
Can LLM Agents Maintain a Persona in Discourse?
| null |
cs.CL cs.AI cs.SI
| 2025-02-18T00:00:00 |
cmp-lg/9606021
|
We present an iterative procedure to build a Chinese language model (LM). We
segment Chinese text into words based on a word-based Chinese language model.
However, the construction of a Chinese LM itself requires word boundaries. To
get out of the chicken-and-egg problem, we propose an iterative procedure that
alternates two operations: segmenting text into words and building an LM.
Starting with an initial segmented corpus and an LM based upon it, we use a
Viterbi-like algorithm to segment another set of data. Then, we build an LM
based on the second set and use the resulting LM to segment again the first
corpus. The alternating procedure provides a self-organized way for the
segmenter to detect automatically unseen words and correct segmentation errors.
Our preliminary experiment shows that the alternating procedure not only
improves the accuracy of our segmentation, but discovers unseen words
surprisingly well. The resulting word-based LM has a perplexity of 188 for a
general Chinese corpus.
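A toy version of the alternating procedure, assuming a unigram word LM and a crude unseen-word penalty (the paper's word-based LM is richer); all names and constants are illustrative.

```python
import math
from collections import Counter

def viterbi_segment(text, logp, max_len=4):
    """Best segmentation of `text` under a unigram log-prob model `logp`."""
    n = len(text)
    best = [0.0] + [-math.inf] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            word = text[j:i]
            score = best[j] + logp.get(word, -20.0)  # crude unseen-word penalty
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))

def estimate_lm(segmented_corpus):
    """Unigram LM from a corpus of word-segmented sentences."""
    counts = Counter(w for sent in segmented_corpus for w in sent)
    total = sum(counts.values())
    return {w: math.log(c / total) for w, c in counts.items()}

def alternate_segmentation(corpus_a, corpus_b, init_lm, rounds=3):
    """Alternate Viterbi segmentation and LM re-estimation on two corpora."""
    lm = init_lm
    for _ in range(rounds):
        seg_b = [viterbi_segment(s, lm) for s in corpus_b]
        lm = estimate_lm(seg_b)
        seg_a = [viterbi_segment(s, lm) for s in corpus_a]
        lm = estimate_lm(seg_a)
    return lm
```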
|
Xiaoqiang Luo (Center for Language and Speech Processing, The Johns
Hopkins University) and Salim Roukos (IBM T. J. Watson Research Center)
|
An Iterative Algorithm to Build Chinese Language Models
| null |
cmp-lg cs.CL
| 2008-02-03T00:00:00 |
2306.01506
|
Self-supervised techniques for learning speech representations have been
shown to develop linguistic competence from exposure to speech without the need
for human labels. In order to fully realize the potential of these approaches
and further our understanding of how infants learn language, simulations must
closely emulate real-life situations by training on developmentally plausible
corpora and benchmarking against appropriate test sets. To this end, we propose
a language-acquisition-friendly benchmark to probe spoken language models at
the lexical and syntactic levels, both of which are compatible with the
vocabulary typical of children's language experiences. This paper introduces
the benchmark and summarizes a range of experiments showing its usefulness. In
addition, we highlight two exciting challenges that need to be addressed for
further progress: bridging the gap between text and speech and between clean
speech and in-the-wild speech.
|
Marvin Lavechin and Yaya Sy and Hadrien Titeux and Mar\'ia Andrea Cruz
Bland\'on and Okko R\"as\"anen and Herv\'e Bredin and Emmanuel Dupoux and
Alejandrina Cristia
|
BabySLM: language-acquisition-friendly benchmark of self-supervised
spoken language models
| null |
cs.CL eess.AS stat.ML
| 2025-03-12T00:00:00 |
2404.17283
|
Retrieval-augmented language models have exhibited promising performance
across various areas of natural language processing (NLP), including
fact-critical tasks. However, due to the black-box nature of advanced large
language models (LLMs) and the non-retrieval-oriented supervision signal of
specific tasks, the training of the retrieval model faces significant challenges
in the black-box LLM setting. We propose an approach leveraging
Fine-grained Feedback with Reinforcement Retrieval (FFRR) to enhance
fact-checking on news claims using a black-box LLM. FFRR adopts a two-level
strategy to gather fine-grained feedback from the LLM, which serves as a reward
for optimizing the retrieval policy, by rating the retrieved documents based on
the non-retrieval ground truth of the task. We evaluate our model on two public
datasets for real-world news claim verification, and the results demonstrate
that FFRR achieves significant improvements over strong LLM-enabled and non-LLM
baselines.
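The retrieval-policy optimization can be pictured as a REINFORCE-style update in which LLM ratings of the retrieved documents act as rewards; this is only a schematic reading of the abstract, not the paper's two-level FFRR algorithm, and all names are illustrative.

```python
import torch

def reinforce_retrieval_step(doc_scores, llm_ratings, optimizer):
    """One policy-gradient step: treat the softmax over retrieval scores as
    the policy and LLM ratings of the retrieved documents as rewards.
    `doc_scores` must be produced by the retrieval model whose parameters
    are registered in `optimizer`."""
    log_probs = torch.log_softmax(doc_scores, dim=-1)   # retrieval policy
    rewards = llm_ratings - llm_ratings.mean()          # baseline-subtracted
    loss = -(log_probs * rewards.detach()).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```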
|
Xuan Zhang and Wei Gao
|
Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact
Checking News Claims with Black-Box LLM
| null |
cs.CL
| 2024-04-29T00:00:00 |
1910.14549
|
Semantic parsing is the task of transforming sentences from natural language
into formal representations of predicate-argument structures. Under this
research area, frame-semantic parsing has attracted much interest. This parsing
approach leverages the lexical information defined in FrameNet to associate
marked predicates or targets with semantic frames, thereby assigning semantic
roles to sentence components based on pre-specified frame elements in FrameNet.
In this paper, a deep neural network architecture known as Positional
Attention-based Frame Identification with BERT (PAFIBERT) is presented as a
solution to the frame identification subtask in frame-semantic parsing.
Although the importance of this subtask is well-established, prior research has
yet to find a robust solution that works satisfactorily for both in-domain and
out-of-domain data. This study thus set out to improve frame identification in
light of recent advancements of language modeling and transfer learning in
natural language processing. The proposed method is partially empowered by
BERT, a pre-trained language model that excels at capturing contextual
information in texts. By combining the language representation power of BERT
with a position-based attention mechanism, PAFIBERT is able to attend to
target-specific contexts in sentences for disambiguating targets and
associating them with the most suitable semantic frames. Under various
experimental settings, PAFIBERT outperformed existing solutions by a
significant margin, achieving new state-of-the-art results for both in-domain
and out-of-domain benchmark test sets.
|
Sang-Sang Tan (1), Jin-Cheon Na (1) ((1) Nanyang Technological
University, Singapore)
|
Positional Attention-based Frame Identification with BERT: A Deep
Learning Approach to Target Disambiguation and Semantic Frame Selection
| null |
cs.CL cs.LG
| 2019-11-01T00:00:00 |
0907.1814
|
We present BayeSum (for ``Bayesian summarization''), a model for sentence
extraction in query-focused summarization. BayeSum leverages the common case in
which multiple documents are relevant to a single query. Using these documents
as reinforcement for query terms, BayeSum is not afflicted by the paucity of
information in short queries. We show that approximate inference in BayeSum is
possible on large data sets and results in a state-of-the-art summarization
system. Furthermore, we show how BayeSum can be understood as a justified query
expansion technique in the language modeling for IR framework.
|
Hal Daum\'e III
|
Bayesian Query-Focused Summarization
|
ACL 2006
|
cs.CL cs.IR cs.LG
| 2009-07-13T00:00:00 |
2305.10786
|
Prior studies diagnose the anisotropy problem in sentence representations
from pre-trained language models, e.g., BERT, without fine-tuning. Our analysis
reveals that the sentence embeddings from BERT suffer from a bias towards
uninformative words, limiting the performance in semantic textual similarity
(STS) tasks. To address this bias, we propose a simple and efficient
unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words
with model-based importance estimations and computes the weighted average of
word representations from pre-trained models as sentence embeddings. Ditto can
be easily applied to any pre-trained language model as a postprocessing
operation. Compared to prior sentence embedding approaches, Ditto adds no
parameters and requires no learning. Empirical evaluations demonstrate that
our proposed Ditto can alleviate the anisotropy problem and improve various
pre-trained models on STS tasks.
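A minimal sketch of diagonal attention pooling as described: weight each token by the diagonal of a chosen self-attention map and average the token representations. The specific layer, head, and normalization are assumptions here.

```python
import torch

def ditto_sentence_embedding(hidden_states, attentions, layer=0, head=0):
    """Diagonal Attention Pooling (sketch): weight each token by the diagonal
    of a chosen attention head (token attending to itself) and average.

    hidden_states: (seq_len, dim) token representations from a chosen layer
    attentions:    (num_layers, num_heads, seq_len, seq_len) attention maps
    """
    attn = attentions[layer, head]        # (seq_len, seq_len)
    weights = torch.diagonal(attn)        # importance ~ self-attention weight
    weights = weights / weights.sum()
    return (weights.unsqueeze(-1) * hidden_states).sum(dim=0)
```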
|
Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, Chong Deng, Hai Yu,
Jiaqing Liu, Yukun Ma, Chong Zhang
|
Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings
| null |
cs.CL
| 2023-10-24T00:00:00 |
2310.01041
|
Despite the remarkable advances in language modeling, current mainstream
decoding methods still struggle to generate texts that align with human texts
across different aspects. In particular, sampling-based methods produce
less-repetitive texts which are often disjunctive in discourse, while
search-based methods maintain topic coherence at the cost of increased
repetition. Overall, these methods fall short in achieving holistic alignment
across a broad range of aspects. In this work, we frame decoding from a
language model as an optimization problem with the goal of strictly matching
the expected performance with human texts measured by multiple metrics of
desired aspects simultaneously. The resulting decoding distribution enjoys an
analytical solution that scales the input language model distribution via a
sequence-level energy function defined by these metrics. And most importantly,
we prove that this induced distribution is guaranteed to improve the perplexity
on human texts, which suggests a better approximation to the underlying
distribution of human texts. To facilitate tractable sampling from this
globally normalized distribution, we adopt the Sampling-Importance-Resampling
technique. Experiments on various domains and model scales demonstrate the
superiority of our method in metrics alignment with human texts and human
evaluation over strong baselines.
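The sampling step can be illustrated with generic Sampling-Importance-Resampling: draw candidates from the base LM, weight them by the sequence-level energy, and resample. Treating the energy as a penalty (weight proportional to exp(-E)) is an assumption and may differ from the paper's sign convention.

```python
import numpy as np

def sir_decode(candidates, energies, num_samples=1, rng=None):
    """Sampling-Importance-Resampling sketch: candidates are assumed to be
    drawn from the base LM, so the proposal cancels and the importance
    weight reduces to exp(-energy); resample proportionally to the weights."""
    rng = rng or np.random.default_rng()
    log_w = -np.asarray(energies, dtype=float)
    w = np.exp(log_w - log_w.max())   # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(len(candidates), size=num_samples, p=w)
    return [candidates[i] for i in idx]
```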
|
Haozhe Ji, Pei Ke, Hongning Wang, Minlie Huang
|
Language Model Decoding as Direct Metrics Optimization
|
The Twelfth International Conference on Learning Representations
(ICLR 2024)
|
cs.CL
| 2024-06-06T00:00:00 |
2302.05900
|
Text generation from Abstract Meaning Representation (AMR) has substantially
benefited from the popularized Pretrained Language Models (PLMs). Myriad
approaches have linearized the input graph as a sequence of tokens to fit the
PLM tokenization requirements. Nevertheless, this transformation jeopardizes
the structural integrity of the graph and is therefore detrimental to its
resulting representation. To overcome this issue, Ribeiro et al. have recently
proposed StructAdapt, a structure-aware adapter which injects the input graph
connectivity within PLMs using Graph Neural Networks (GNNs). In this paper, we
investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text,
and, in parallel, we examine the robustness of StructAdapt. Through ablation
studies, graph attack and link prediction, we reveal that RPE might be
partially encoding input graphs. We suggest further research regarding the role
of RPE will provide valuable insights for Graph-to-Text generation.
|
Sebastien Montella, Alexis Nasr, Johannes Heinecke, Frederic Bechet,
Lina M. Rojas-Barahona
|
Investigating the Effect of Relative Positional Embeddings on
AMR-to-Text Generation with Structural Adapters
| null |
cs.CL
| 2023-02-14T00:00:00 |
1807.06441
|
Recently, recurrent neural networks have become state-of-the-art in acoustic
modeling for automatic speech recognition. The long short-term memory (LSTM)
units are the most popular ones. However, alternative units like gated
recurrent unit (GRU) and its modifications outperformed LSTM in some
publications. In this paper, we compared five neural network (NN) architectures
with various adaptation and feature normalization techniques. We have evaluated
feature-space maximum likelihood linear regression, five variants of i-vector
adaptation, and two variants of cepstral mean normalization. Most of the
adaptation and normalization techniques were developed for feed-forward NNs and,
according to the results in this paper, not all of them also work with RNNs. For
the experiments, we chose the well-known and available TIMIT phone recognition
task. Phone recognition is much more sensitive to the quality of the acoustic
model (AM) than a large-vocabulary task with a complex language model. We also
published open-source scripts to easily replicate the results and to help
continue the development.
|
Jan Vanek, Josef Michalek, Jan Zelinka, Josef Psutka
|
A Comparison of Adaptation Techniques and Recurrent Neural Network
Architectures
| null |
eess.AS cs.CL cs.SD
| 2018-07-18T00:00:00 |
1911.03937
|
Unsupervised neural machine translation (NMT) is associated with noise and
errors in the synthetic data produced by vanilla back-translation. Here, we
explicitly exploit a language model (LM) to drive the construction of an
unsupervised NMT system. This involves two steps. First, we initialize the NMT
models using synthetic data generated via a temporary statistical machine
translation (SMT) system. Second, unlike vanilla back-translation, we formulate
a weight function that scores the synthetic data at each step of the subsequent
iterative training; this steers unsupervised training toward an improved
outcome. We present the detailed mathematical construction of our method.
Experiments on the WMT2014 English-French and WMT2016 English-German and
English-Russian translation tasks reveal that our method outperforms the best
prior systems by more than 3 BLEU points.
|
Wei Zhang, Youyuan Lin, Ruoran Ren, Xiaodong Wang, Zhenshuang Liang,
Zhen Huang
|
Language Model-Driven Unsupervised Neural Machine Translation
| null |
cs.CL
| 2019-11-12T00:00:00 |
2109.03570
|
This work presents biomedical and clinical language models for Spanish by
experimenting with different pretraining choices, such as masking at word and
subword level, varying the vocabulary size and testing with domain data,
looking for better language representations. Interestingly, in the absence of
enough clinical data to train a model from scratch, we applied mixed-domain
pretraining and cross-domain transfer approaches to generate a performant
bio-clinical model suitable for real-world clinical data. We evaluated our
models on Named Entity Recognition (NER) tasks for biomedical documents and
challenging hospital discharge reports. When compared against the competitive
mBERT and BETO models, we outperform them in all NER tasks by a significant
margin. Finally, we studied the impact of the model's vocabulary on the NER
performances by offering an interesting vocabulary-centric analysis. The
results confirm that domain-specific pretraining is fundamental to achieving
higher performances in downstream NER tasks, even within a mid-resource
scenario. To the best of our knowledge, we provide the first biomedical and
clinical transformer-based pretrained language models for Spanish, intending to
boost native Spanish NLP applications in biomedicine. Our best models are
freely available in the HuggingFace hub: https://huggingface.co/BSC-TeMU.
|
Casimiro Pio Carrino, Jordi Armengol-Estap\'e, Asier
Guti\'errez-Fandi\~no, Joan Llop-Palao, Marc P\`amies, Aitor Gonzalez-Agirre,
Marta Villegas
|
Biomedical and Clinical Language Models for Spanish: On the Benefits of
Domain-Specific Pretraining in a Mid-Resource Scenario
| null |
cs.CL
| 2021-09-20T00:00:00 |
1603.03185
|
We describe a large vocabulary speech recognition system that is accurate,
has low latency, and yet has a small enough memory and computational footprint
to run faster than real-time on a Nexus 5 Android smartphone. We employ a
quantized Long Short-Term Memory (LSTM) acoustic model trained with
connectionist temporal classification (CTC) to directly predict phoneme
targets, and further reduce its memory footprint using an SVD-based compression
scheme. Additionally, we minimize our memory footprint by using a single
language model for both dictation and voice command domains, constructed using
Bayesian interpolation. Finally, in order to properly handle device-specific
information, such as proper names and other context-dependent information, we
inject vocabulary items into the decoder graph and bias the language model
on-the-fly. Our system achieves 13.5% word error rate on an open-ended
dictation task, running with a median speed that is seven times faster than
real-time.
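For intuition, combining the dictation and voice-command LMs can be pictured as a simple linear interpolation over a shared vocabulary; the paper sets the mixture via Bayesian interpolation, so the fixed weight below is purely illustrative.

```python
def interpolate_lm_probs(p_dictation, p_voice_command, lam=0.5):
    """Linear interpolation of two domain LMs into a single model (sketch).
    `lam` is a fixed hyperparameter here; Bayesian interpolation would set
    the mixture weights from data instead."""
    vocab = set(p_dictation) | set(p_voice_command)
    return {w: lam * p_dictation.get(w, 0.0)
               + (1 - lam) * p_voice_command.get(w, 0.0)
            for w in vocab}
```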
|
Ian McGraw, Rohit Prabhavalkar, Raziel Alvarez, Montse Gonzalez
Arenas, Kanishka Rao, David Rybach, Ouais Alsharif, Hasim Sak, Alexander
Gruenstein, Francoise Beaufays, Carolina Parada
|
Personalized Speech recognition on mobile devices
| null |
cs.CL cs.LG cs.SD
| 2016-03-15T00:00:00 |
1707.00117
|
This paper presents a Semantic Attribute Modulation (SAM) for language
modeling and style variation. The semantic attribute modulation includes
various document attributes, such as titles, authors, and document categories.
We consider two types of attributes (title attributes and category
attributes) and a flexible attribute selection scheme that automatically scores
them via an attribute attention mechanism. The semantic attributes are embedded
into the hidden semantic space as the generation inputs. With the attributes
properly harnessed, our proposed SAM can generate interpretable texts with
regard to the input attributes. Qualitative analysis, including word semantic
analysis and attention values, shows the interpretability of SAM. On several
typical text datasets, we empirically demonstrate the superiority of the
Semantic Attribute Modulated language model with different combinations of
document attributes. Moreover, we present a style variation for the lyric
generation using SAM, which shows a strong connection between the style
variation and the semantic attributes.
|
Wenbo Hu, Lifeng Hua, Lei Li, Hang Su, Tian Wang, Ning Chen, Bo Zhang
|
SAM: Semantic Attribute Modulation for Language Modeling and Style
Variation
| null |
cs.CL cs.LG stat.ML
| 2017-09-15T00:00:00 |
2004.03090
|
Existing conversational datasets consist either of written proxies for dialog
or small-scale transcriptions of natural speech. We introduce 'Interview': a
large-scale (105K conversations) media dialog dataset collected from news
interview transcripts. Compared to existing large-scale proxies for
conversational data, language models trained on our dataset exhibit better
zero-shot out-of-domain performance on existing spoken dialog datasets,
demonstrating its usefulness in modeling real-world conversations. 'Interview'
contains speaker role annotations for each turn, facilitating the development
of engaging, responsive dialog systems. In fact, experiments on two dialog
tasks show that leveraging such labels improves performance over strong
speaker-agnostic baselines and enables models to generate more specific and
inquisitive responses in interview-style conversations.
|
Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, Julian McAuley
|
Interview: A Large-Scale Open-Source Corpus of Media Dialog
| null |
cs.CL
| 2020-04-08T00:00:00 |
1806.00913
|
Self-normalizing discriminative models approximate the normalized probability
of a class without having to compute the partition function. In the context of
language modeling, this property is particularly appealing as it may
significantly reduce run-times due to large word vocabularies. In this study,
we provide a comprehensive investigation of language modeling
self-normalization. First, we theoretically analyze the inherent
self-normalization properties of Noise Contrastive Estimation (NCE) language
models. Then, we compare them empirically to softmax-based approaches, which
are self-normalized using explicit regularization, and suggest a hybrid model
with compelling properties. Finally, we uncover a surprising negative
correlation between self-normalization and perplexity across the board, as well
as some regularity in the observed errors, which may potentially be used for
improving self-normalization algorithms in the future.
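A quick way to probe self-normalization, in the spirit of the analysis above: compute the log partition function per context and see how far it deviates from zero (a perfectly self-normalized model gives exactly zero everywhere). This diagnostic is a sketch, not the paper's procedure.

```python
import torch

def self_normalization_gap(logits):
    """For each context, compute log Z = log sum_w exp(s(w | c)); report the
    mean and spread of log Z across contexts as a self-normalization check."""
    log_z = torch.logsumexp(logits, dim=-1)   # shape: (num_contexts,)
    return log_z.mean().item(), log_z.std().item()
```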
|
Jacob Goldberger and Oren Melamud
|
Self-Normalization Properties of Language Modeling
| null |
cs.CL
| 2018-06-05T00:00:00 |
1606.06031
|
We introduce LAMBADA, a dataset to evaluate the capabilities of computational
models for text understanding by means of a word prediction task. LAMBADA is a
collection of narrative passages sharing the characteristic that human subjects
are able to guess their last word if they are exposed to the whole passage, but
not if they only see the last sentence preceding the target word. To succeed on
LAMBADA, computational models cannot simply rely on local context, but must be
able to keep track of information in the broader discourse. We show that
LAMBADA exemplifies a wide range of linguistic phenomena, and that none of
several state-of-the-art language models reaches accuracy above 1% on this
novel benchmark. We thus propose LAMBADA as a challenging test set, meant to
encourage the development of new models capable of genuine understanding of
broad context in natural language text.
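Evaluation on LAMBADA reduces to exact-match accuracy on the final word of each passage; a minimal harness, assuming a callable that maps a context string to a predicted word, might look as follows.

```python
def lambada_accuracy(predict_last_word, passages):
    """Fraction of passages whose final word the model predicts exactly.
    `predict_last_word(context) -> str` is any callable; each passage's last
    whitespace-separated token is treated as the target word."""
    correct = 0
    for passage in passages:
        *context_words, target = passage.split()
        context = " ".join(context_words)
        correct += int(predict_last_word(context) == target)
    return correct / len(passages)
```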
|
Denis Paperno (1), Germ\'an Kruszewski (1), Angeliki Lazaridou (1),
Quan Ngoc Pham (1), Raffaella Bernardi (1), Sandro Pezzelle (1), Marco Baroni
(1), Gemma Boleda (1), Raquel Fern\'andez (2) ((1) CIMeC - Center for
Mind/Brain Sciences, University of Trento, (2) Institute for Logic, Language
& Computation, University of Amsterdam)
|
The LAMBADA dataset: Word prediction requiring a broad discourse context
| null |
cs.CL cs.AI cs.LG
| 2016-06-21T00:00:00 |
2109.00025
|
Sense representations have gone beyond word representations like Word2Vec,
GloVe and FastText and achieved innovative performance on a wide range of
natural language processing tasks. Although very useful in many applications,
the traditional approaches for generating word embeddings have a strict
drawback: they produce a single vector representation for a given word ignoring
the fact that ambiguous words can assume different meanings. In this paper, we
explore unsupervised sense representations which, different from traditional
word embeddings, are able to induce different senses of a word by analyzing its
contextual semantics in a text. The unsupervised sense representations
investigated in this paper are: sense embeddings and deep neural language
models. We present the first experiments carried out for generating sense
embeddings for Portuguese. Our experiments show that the sense embedding model
(Sense2vec) outperformed traditional word embeddings in syntactic and semantic
analogies task, proving that the language resource generated here can improve
the performance of NLP tasks in Portuguese. We also evaluated the performance
of pre-trained deep neural language models (ELMo and BERT) in two transfer
learning approaches: feature based and fine-tuning, in the semantic textual
similarity task. Our experiments indicate that the fine-tuned Multilingual and
Portuguese BERT language models were able to achieve better accuracy than the
ELMo model and baselines.
|
Jessica Rodrigues da Silva, Helena de Medeiros Caseli
|
Sense representations for Portuguese: experiments with sense embeddings
and deep neural language models
|
Language Resources and Evaluation (2021)
|
cs.CL cs.LG
| 2021-09-02T00:00:00 |
2205.12506
|
Large language models are shown to present privacy risks through memorization
of training data, and several recent works have studied such risks for the
pre-training phase. Little attention, however, has been given to the
fine-tuning phase and it is not well understood how different fine-tuning
methods (such as fine-tuning the full model, the model head, and adapter)
compare in terms of memorization risk. This presents increasing concern as the
"pre-train and fine-tune" paradigm proliferates. In this paper, we empirically
study memorization of fine-tuning methods using membership inference and
extraction attacks, and show that their susceptibility to attacks is very
different. We observe that fine-tuning the head of the model has the highest
susceptibility to attacks, whereas fine-tuning smaller adapters appears to be
less vulnerable to known extraction attacks.
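The simplest membership-inference signal compares an example's loss under the fine-tuned model with its loss under the pre-trained model; the sketch below uses such a likelihood-ratio style score and is far cruder than the attacks studied in the paper.

```python
def likelihood_ratio_membership(ft_loss, base_loss, examples, threshold=0.0):
    """Flag an example as a likely fine-tuning member if its loss dropped by
    more than `threshold` after fine-tuning, relative to the base model.
    `ft_loss` and `base_loss` are callables returning per-example loss."""
    return [(base_loss(x) - ft_loss(x)) > threshold for x in examples]
```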
|
Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans,
Taylor Berg-Kirkpatrick
|
Memorization in NLP Fine-tuning Methods
| null |
cs.CL cs.LG
| 2022-11-07T00:00:00 |
1302.2569
|
We propose a new statistical model for computational linguistics. Rather than
trying to estimate directly the probability distribution of a random sentence
of the language, we define a Markov chain on finite sets of sentences with many
finite recurrent communicating classes and define our language model as the
invariant probability measures of the chain on each recurrent communicating
class. This Markov chain, that we call a communication model, recombines at
each step randomly the set of sentences forming its current state, using some
grammar rules. When the grammar rules are fixed and known in advance instead of
being estimated on the fly, we can prove supplementary mathematical properties.
In particular, we can prove in this case that all states are recurrent states,
so that the chain defines a partition of its state space into finite recurrent
communicating classes. We show that our approach is a decisive departure from
Markov models at the sentence level and discuss its relationships with Context
Free Grammars. Although the toric grammars we use are closely related to
Context Free Grammars, the way we generate the language from the grammar is
qualitatively different. Our communication model has two purposes. On the one
hand, it is used to define indirectly the probability distribution of a random
sentence of the language. On the other hand it can serve as a (crude) model of
language transmission from one speaker to another speaker through the
communication of a (large) set of sentences.
|
Olivier Catoni and Thomas Mainguy
|
Toric grammars: a new statistical approach to natural language modeling
| null |
stat.ML cs.CL math.PR
| 2013-02-12T00:00:00 |
2308.06039
|
In learning to defer, a predictor identifies risky decisions and defers them
to a human expert. One key issue with this setup is that the expert may end up
over-relying on the machine's decisions, due to anchoring bias. At the same
time, whenever the machine chooses the deferral option the expert has to take
decisions entirely unassisted. As a remedy, we propose learning to guide (LTG),
an alternative framework in which -- rather than suggesting ready-made
decisions -- the machine provides guidance useful for decision-making, and
the human is entirely responsible for coming up with a decision. We also
introduce SLOG, an LTG implementation that leverages (a small amount of) human
supervision to convert a generic large language model into a module capable of
generating textual guidance, and present preliminary but promising results on a
medical diagnosis task.
|
Debodeep Banerjee, Stefano Teso, Andrea Passerini
|
Learning to Guide Human Experts via Personalized Large Language Models
| null |
cs.AI cs.CL
| 2023-08-14T00:00:00 |
2004.01881
|
In this paper, we formulate a more realistic and difficult problem setup for
the intent detection task in natural language understanding, namely Generalized
Few-Shot Intent Detection (GFSID). GFSID aims to discriminate a joint label
space consisting of both existing intents which have enough labeled data and
novel intents which only have a few examples for each class. To approach this
problem, we propose a novel model, Conditional Text Generation with BERT
(CG-BERT). CG-BERT effectively leverages a large pre-trained language model to
generate text conditioned on the intent label. By modeling the utterance
distribution with variational inference, CG-BERT can generate diverse
utterances for the novel intents even with only a few utterances available.
Experimental results show that CG-BERT achieves state-of-the-art performance on
the GFSID task with 1-shot and 5-shot settings on two real-world datasets.
|
Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, Philip Yu
|
CG-BERT: Conditional Text Generation with BERT for Generalized Few-shot
Intent Detection
| null |
cs.CL cs.LG
| 2020-04-07T00:00:00 |
1511.06391
|
Sequences have become first class citizens in supervised learning thanks to
the resurgence of recurrent neural networks. Many complex tasks that require
mapping from or to a sequence of observations can now be formulated with the
sequence-to-sequence (seq2seq) framework which employs the chain rule to
efficiently represent the joint probability of sequences. In many cases,
however, variable sized inputs and/or outputs might not be naturally expressed
as sequences. For instance, it is not clear how to input a set of numbers into
a model where the task is to sort them; similarly, we do not know how to
organize outputs when they correspond to random variables and the task is to
model their unknown joint probability. In this paper, we first show using
various examples that the order in which we organize input and/or output data
matters significantly when learning an underlying model. We then discuss an
extension of the seq2seq framework that goes beyond sequences and handles input
sets in a principled way. In addition, we propose a loss which, by searching
over possible orders during training, deals with the lack of structure of
output sets. We show empirical evidence of our claims regarding ordering, and
on the modifications to the seq2seq framework on benchmark language modeling
and parsing tasks, as well as two artificial tasks -- sorting numbers and
estimating the joint probability of unknown graphical models.
|
Oriol Vinyals, Samy Bengio, Manjunath Kudlur
|
Order Matters: Sequence to sequence for sets
| null |
stat.ML cs.CL cs.LG
| 2016-02-25T00:00:00 |
1707.05266
|
In this study, we introduce a new approach for learning language models by
training them to estimate word-context pointwise mutual information (PMI), and
then deriving the desired conditional probabilities from PMI at test time.
Specifically, we show that with minor modifications to word2vec's algorithm, we
get principled language models that are closely related to the well-established
Noise Contrastive Estimation (NCE) based language models. A compelling aspect
of our approach is that our models are trained with the same simple negative
sampling objective function that is commonly used in word2vec to learn word
embeddings.
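Deriving the conditional probabilities from PMI at test time follows from the identity PMI(w, c) = log p(w|c) - log p(w), so p(w|c) is proportional to p(w) * exp(PMI(w, c)); a small sketch with illustrative names:

```python
import numpy as np

def conditional_from_pmi(pmi_scores, unigram_probs):
    """Turn PMI estimates for a fixed context c into p(w | c).
    `pmi_scores[w]` is the model's PMI estimate for word w given c (e.g. an
    embedding dot product); `unigram_probs[w]` is p(w)."""
    words = list(unigram_probs)
    scores = np.array([np.log(unigram_probs[w]) + pmi_scores[w] for w in words])
    probs = np.exp(scores - scores.max())   # softmax over log p(w) + PMI(w, c)
    probs /= probs.sum()
    return dict(zip(words, probs))
```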
|
Oren Melamud, Ido Dagan, Jacob Goldberger
|
A Simple Language Model based on PMI Matrix Approximations
| null |
cs.CL
| 2017-07-18T00:00:00 |
1809.08731
|
Motivated by recent findings on the probabilistic modeling of acceptability
judgments, we propose syntactic log-odds ratio (SLOR), a normalized language
model score, as a metric for referenceless fluency evaluation of natural
language generation output at the sentence level. We further introduce WPSLOR,
a novel WordPiece-based version, which harnesses a more compact language model.
Even though word-overlap metrics like ROUGE are computed with the help of
hand-written references, our referenceless methods obtain a significantly
higher correlation with human fluency scores on a benchmark dataset of
compressed sentences. Finally, we present ROUGE-LM, a reference-based metric
which is a natural extension of WPSLOR to the case of available references. We
show that ROUGE-LM yields a significantly higher correlation with human
judgments than all baseline metrics, including WPSLOR on its own.
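SLOR is commonly defined as the LM log-probability of a sentence minus its unigram log-probability, divided by the sentence length; a small helper, assuming per-token unigram log-probabilities are available:

```python
def slor(lm_logprob, unigram_logprobs, tokens):
    """Syntactic log-odds ratio of a sentence:
    (log p_LM(S) - sum_t log p_unigram(t)) / |S|."""
    unigram_lp = sum(unigram_logprobs[t] for t in tokens)
    return (lm_logprob - unigram_lp) / len(tokens)
```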
|
Katharina Kann, Sascha Rothe and Katja Filippova
|
Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!
| null |
cs.CL
| 2018-09-25T00:00:00 |