Iterative Forward Tuning Boosts In-Context Learning in Language Models Paper • 2305.13016 • Published May 22, 2023
PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts Paper • 2305.14839 • Published May 24, 2023 • 1
One Shot Learning as Instruction Data Prospector for Large Language Models Paper • 2312.10302 • Published Dec 16, 2023 • 3
BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions Paper • 2406.15877 • Published Jun 22, 2024 • 45
Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning Paper • 2301.13808 • Published Jan 31, 2023
Graphix-T5: Mixing Pre-Trained Transformers with Graph-Aware Layers for Text-to-SQL Parsing Paper • 2301.07507 • Published Jan 18, 2023
Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement Paper • 2409.12122 • Published Sep 18, 2024 • 3
ExecRepoBench: Multi-level Executable Code Completion Evaluation Paper • 2412.11990 • Published Dec 16, 2024
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition Paper • 2310.05492 • Published Oct 9, 2023 • 2
ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training Paper • 2001.04063 • Published Jan 13, 2020
OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models Paper • 2310.16517 • Published Oct 25, 2023 • 1
Bridging Subword Gaps in Pretrain-Finetune Paradigm for Natural Language Generation Paper • 2106.06125 • Published Jun 11, 2021
PolyLM: An Open Source Polyglot Large Language Model Paper • 2307.06018 • Published Jul 12, 2023 • 25
LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback Paper • 2406.14024 • Published Jun 20, 2024