matlok's Collections
Papers - Text - Bidirectional Encoders
Papers - Text - Bidirectional Encoders
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Paper • 1901.08746 • Published • 3
Pretraining-Based Natural Language Generation for Text Summarization
Paper • 1902.09243 • Published • 2
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Paper • 1907.11692 • Published • 7
DeBERTa: Decoding-enhanced BERT with Disentangled Attention
Paper • 2006.03654 • Published • 3
DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
Paper • 2111.09543 • Published • 2
Wave Network: An Ultra-Small Language Model
Paper • 2411.02674 • Published • 3
Geodesic Multi-Modal Mixup for Robust Fine-Tuning
Paper • 2203.03897 • Published • 1
BERTs are Generative In-Context Learners
Paper • 2406.04823 • Published • 1
Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference
Paper • 2412.13663 • Published • 103