---
library_name: transformers
tags:
  - LLaMAdelic
  - Conversational AI
  - Personality
  - Persona-dialogue
  - Dialogue-systems
  - Human-like assistant
  - LLaMA
  - LLaMA-8B
---

# LLaMAdelic: Conversational Personality Model 🌊✨

Welcome to LLaMAdelic, a conversational model fine-tuned from LLaMA 3 8B Instruct to capture the nuanced personality traits that make AI interactions feel more authentic and relatable. Whether it's balancing conscientious responses or tapping into empathetic reflections, LLaMAdelic is here to explore the depths of the human-like personality spectrum.

## Model Overview

- Model Name: LLaMAdelic
- Architecture: LLaMA 3 8B Instruct
- Training Objective: Personality-Enhanced Conversational AI
- Training Dataset: Fine-tuned on conversational data to reflect the Big Five (OCEAN) personality traits.
- Training Duration: 4-5 days on an A100 GPU (training parameters can be found in the appendix of the paper)
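
The card doesn't yet include a quickstart, so here is a minimal inference sketch with transformers. It assumes the checkpoint is hosted under `choco58/LLaMAdelic` (a placeholder repo ID) and that it keeps the standard LLaMA 3 Instruct chat template:

```python
# Minimal inference sketch. The repo ID below is an assumption; point it at
# wherever the weights are actually hosted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "choco58/LLaMAdelic"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# LLaMA 3 Instruct checkpoints ship a chat template, so we can apply it directly.
messages = [
    {"role": "system", "content": "You are a warm, curious conversational partner."},
    {"role": "user", "content": "I've been journaling a lot lately. Does that change how you'd talk to me?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```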

Why "LLaMAdelic"?

The name "LLaMAdelic" reflects our aim to bring a rich, nuanced personality to conversational AI. Just as the Big Five (OCEAN) traits encapsulate the subtle layers of human interaction, LLaMAdelic seeks to capture those five dimensions (openness, conscientiousness, extraversion, agreeableness, and neuroticism), making conversations with AI feel more genuinely human. It's not just another model; it's designed to add depth, authenticity, and a hint of human-like character to every interaction.


## Scope of Applications

LLaMAdelic is designed to add a splash of personality to various conversational tasks. Here's what it can handle:

- Conversational Agents: Engage users with relatable, personality-driven conversations.
- Text Generation: Generate human-like text for articles, chats, and creative writing with a personal touch.
- Question-Answering: Answer questions with a flair of personality, making responses more relatable.
- Educational and Therapy Bots: Assist in applications where personality-sensitive responses can improve user engagement and retention.

## Intended Use

LLaMAdelic is built for those aiming to inject personality into conversational systems, whether it’s for customer service bots, therapy support, or just plain fun AI companions. It’s particularly suited to applications where capturing nuances like openness, agreeableness, and neuroticism (yes, even those angsty replies!) can enhance user experience.

## Data and Training

The model was fine-tuned on the journal-grounded conversational dataset introduced in the paper cited below (around 400,000 dialogues generated from long-form Reddit journal entries). Our goal was to align model responses with intrinsic personality traits, enabling LLaMAdelic to tailor its tone and style to the conversational context. More information on the dataset will be shared soon.

## Results

### Personality Evaluation on EleutherAI/lm-evaluation-harness (OCEAN Personality Benchmark)

| Model | Description | Openness | Conscientiousness | Extraversion | Agreeableness | Neuroticism | AVG |
|---|---|---|---|---|---|---|---|
| LLaMA 3 8B Instruct | Zero-shot | 0.8760 | 0.7620 | 0.7170 | 0.9500 | 0.5220 | 0.7654 |
| LLaMAdelic | Fine-tuned on conversational data | 0.9150 | 0.7840 | 0.6680 | 0.9440 | 0.7040 | 0.8030 |
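
The card doesn't specify the exact harness invocation; as a hedged sketch, scores like these could be produced along the following lines, where both the task name `ocean_personality` and the repo ID `choco58/LLaMAdelic` are placeholders rather than confirmed identifiers:

```python
# Hypothetical evaluation sketch using EleutherAI/lm-evaluation-harness (>= 0.4).
# The task name below is a placeholder -- substitute the task actually
# registered for the OCEAN personality benchmark used in the paper.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=choco58/LLaMAdelic,dtype=bfloat16",  # placeholder repo ID
    tasks=["ocean_personality"],  # placeholder task name
    batch_size=8,
)
print(results["results"])
```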

## Performance and Limitations

While LLaMAdelic brings vibrant and personality-driven conversations to the table, it does have limitations:

- Personality Representation: LLaMAdelic is trained for personality alignment, so it may sacrifice some general knowledge capability in favor of personality-specific responses. A detailed evaluation will be shared soon.
- Sensitive Topics: Despite strong filtering, caution is advised when deploying in high-stakes environments.
- Computational Load: The LLaMA 8B backbone requires substantial resources, which may limit real-time deployment without sufficient hardware.

## Ethical Considerations

To reduce toxic or inappropriate content, any dialogue in which more than 25% of utterances were flagged as toxic was set aside for separate review (a rough sketch of this rule follows below). Ethical considerations are a priority, and LLaMAdelic was designed with responsible AI practices in mind. For details on ethical data practices, see the Appendix of the paper.
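
The filtering rule is described only at a high level, so the following is an illustration rather than the authors' actual pipeline. It assumes the open-source Detoxify classifier for per-utterance toxicity; both the classifier choice and the 0.5 per-utterance score cutoff are assumptions:

```python
# Illustrative sketch of the 25%-toxic-utterance flagging rule.
# Assumes Detoxify (pip install detoxify) as the per-utterance classifier;
# the paper does not specify which toxicity model was used.
from detoxify import Detoxify

classifier = Detoxify("original")

def flag_for_review(dialogue: list[str], threshold: float = 0.25) -> bool:
    """Return True if more than `threshold` of the utterances score as toxic."""
    scores = classifier.predict(dialogue)["toxicity"]  # one score per utterance
    toxic = sum(score > 0.5 for score in scores)       # 0.5 cutoff is an assumption
    return toxic / len(dialogue) > threshold

dialogue = ["Hey, how was your day?", "Pretty good, thanks for asking!"]
print(flag_for_review(dialogue))  # False for this benign example
```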


## Future Updates

Stay tuned for more information on LLaMAdelic!


## Citation

```bibtex
@inproceedings{pal-etal-2025-beyond,
    title = "Beyond Discrete Personas: Personality Modeling Through Journal Intensive Conversations",
    author = "Pal, Sayantan  and
      Das, Souvik  and
      Srihari, Rohini K.",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.470/",
    pages = "7055--7074",
    abstract = "Large Language Models (LLMs) have significantly improved personalized conversational capabilities. However, existing datasets like Persona Chat, Synthetic Persona Chat, and Blended Skill Talk rely on static, predefined personas. This approach often results in dialogues that fail to capture human personalities' fluid and evolving nature. To overcome these limitations, we introduce a novel dataset with around 400,000 dialogues and a framework for generating personalized conversations using long-form journal entries from Reddit. Our approach clusters journal entries for each author and filters them by selecting the most representative cluster, ensuring that the retained entries best reflect the author's personality. We further refine the data by capturing the Big Five personality traits{---}openness, conscientiousness, extraversion, agreeableness, and neuroticism{---}ensuring that dialogues authentically reflect an individual's personality. Using Llama 3 70B, we generate high-quality, personality-rich dialogues grounded in these journal entries. Fine-tuning models on this dataset leads to an 11{\%} improvement in capturing personality traits on average, outperforming existing approaches in generating more coherent and personality-driven dialogues."
}
```