I'm excited to announce that my internship paper at Parameter Lab was accepted to Findings of #NAACL2025! 🎉
TLDR: Determining whether an LLM was trained on a single sentence may not be possible, but it becomes possible with large enough amounts of tokens, such as a long document or a collection of documents!
Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models (2411.00154)
https://github.com/parameterlab/mia-scaling
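The core intuition, that a per-token membership signal is too noisy for a single sentence but becomes separable when aggregated over many tokens, can be illustrated with a toy simulation. This is not the paper's method or data, just a minimal sketch of a loss-based attack: per-token negative log-likelihoods are simulated with a small mean gap between member and non-member text, and the attack thresholds the mean NLL. All numbers (means, threshold, lengths) are invented for illustration.

```python
import random

def mia_score(token_nlls):
    """Aggregate per-token negative log-likelihoods into one score.
    Lower mean NLL suggests the text is more likely to have been
    seen in training (a simple loss-based attack)."""
    return sum(token_nlls) / len(token_nlls)

def predict_member(token_nlls, threshold):
    """Classify as 'member' if the aggregated score falls below the threshold."""
    return mia_score(token_nlls) < threshold

random.seed(0)
# Toy simulation: members get a slightly lower per-token NLL on average,
# but the per-token noise (sigma=0.5) dwarfs the mean gap (0.2).
member_doc = [random.gauss(2.9, 0.5) for _ in range(2000)]
nonmember_doc = [random.gauss(3.1, 0.5) for _ in range(2000)]

threshold = 3.0
# Over a long document the noise averages out and the gap is detectable:
print(predict_member(member_doc, threshold), predict_member(nonmember_doc, threshold))
# Over a 20-token sentence the same gap is buried in noise, so single
# predictions like predict_member(member_doc[:20], threshold) are unreliable.
```

With ~2000 tokens the standard error of the mean (~0.011) is far below the 0.2 gap, so the document-level decision is essentially always correct, while the 20-token case is close to a coin flip. That averaging effect is the scaling phenomenon the paper's title refers to.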