Papers
- Safer-Instruct: Aligning Language Models with Automated Preference Data (arXiv:2311.08685)
- CLIMB: A Benchmark of Clinical Bias in Large Language Models (arXiv:2407.05250)
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective (arXiv:2502.14296)
- WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback (arXiv:2408.15549)

Language, Intelligence, and Model Ethics Lab
non-profit
AI & ML interests: Natural Language Processing
Organization Card
LIME NLP is part of the USC NLP Group. Our team's primary focus is building trustworthy NLP models. We investigate the ethical consequences and broader societal effects of NLP models, working to ensure that language technologies are built and used in ways that follow ethical guidelines and uphold human values.
Collections: 1
Models: none public yet