- Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models
  Paper • 2412.16247 • Published • 1
- Inferring Functionality of Attention Heads from their Parameters
  Paper • 2412.11965 • Published • 2
- LatentQA: Teaching LLMs to Decode Activations Into Natural Language
  Paper • 2412.08686 • Published • 1
- Training Large Language Models to Reason in a Continuous Latent Space
  Paper • 2412.06769 • Published • 71
Collections
Collections including paper arxiv:2401.06102
- AtP*: An efficient and scalable method for localizing LLM behaviour to components
  Paper • 2403.00745 • Published • 12
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 606
- MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT
  Paper • 2402.16840 • Published • 23
- LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
  Paper • 2402.13753 • Published • 114
- Understanding LLMs: A Comprehensive Overview from Training to Inference
  Paper • 2401.02038 • Published • 62
- DocLLM: A layout-aware generative language model for multimodal document understanding
  Paper • 2401.00908 • Published • 181
- LLaMA Beyond English: An Empirical Study on Language Capability Transfer
  Paper • 2401.01055 • Published • 54
- LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
  Paper • 2401.01325 • Published • 27
- The LLM Surgeon
  Paper • 2312.17244 • Published • 9
- TrustLLM: Trustworthiness in Large Language Models
  Paper • 2401.05561 • Published • 66
- Patchscope: A Unifying Framework for Inspecting Hidden Representations of Language Models
  Paper • 2401.06102 • Published • 20
- Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing
  Paper • 2407.08770 • Published • 20
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters
  Paper • 2311.03285 • Published • 28
- Tailoring Self-Rationalizers with Multi-Reward Distillation
  Paper • 2311.02805 • Published • 3
- Ultra-Long Sequence Distributed Transformer
  Paper • 2311.02382 • Published • 2
- OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
  Paper • 2309.11235 • Published • 16
- A technical note on bilinear layers for interpretability
  Paper • 2305.03452 • Published • 1
- Interpreting Transformer's Attention Dynamic Memory and Visualizing the Semantic Information Flow of GPT
  Paper • 2305.13417 • Published • 1
- Explainable AI for Pre-Trained Code Models: What Do They Learn? When They Do Not Work?
  Paper • 2211.12821 • Published • 1
- The Linear Representation Hypothesis and the Geometry of Large Language Models
  Paper • 2311.03658 • Published • 1