Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization Paper • 2311.09184 • Published Nov 15, 2023 • 1
Investigating Data Contamination in Modern Benchmarks for Large Language Models Paper • 2311.09783 • Published Nov 16, 2023 • 2
MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning Paper • 2311.10537 • Published Nov 16, 2023 • 3
Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science Paper • 2402.04247 • Published Feb 6, 2024 • 1
ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples Paper • 2210.12374 • Published Oct 22, 2022
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies Paper • 2305.12586 • Published May 21, 2023
Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications Paper • 2408.11878 • Published Aug 20, 2024 • 52
Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation Paper • 2212.07981 • Published Dec 15, 2022
TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models Paper • 2410.23266 • Published Oct 30, 2024 • 20
M3SciQA: A Multi-Modal Multi-Document Scientific QA Benchmark for Evaluating Foundation Models Paper • 2411.04075 • Published Nov 6, 2024 • 15
DocMath-Eval: Evaluating Numerical Reasoning Capabilities of LLMs in Understanding Long Documents with Tabular Data Paper • 2311.09805 • Published Nov 16, 2023 • 3
QTSumm: A New Benchmark for Query-Focused Table Summarization Paper • 2305.14303 • Published May 23, 2023
Large Language Models are Effective Table-to-Text Generators, Evaluators, and Feedback Providers Paper • 2305.14987 • Published May 24, 2023 • 1
ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks Paper • 2311.09835 • Published Nov 16, 2023 • 9
Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data? Paper • 2309.08963 • Published Sep 16, 2023 • 9