Mohammed Hamdy

mmhamdy

AI & ML interests

TechBio | AI4Sci | NLP | Reinforcement Learning

Organizations

Massive Text Embedding Benchmark, Blog-explorers, Hugging Face for Computer Vision, ASAS AI, ZeroGPU Explorers, Social Post Explorers, Cohere Labs Community, M4-ai, LLMem, Hugging Face Discord Community, open/ acc, Data Is Better Together Contributor, MOTH Lab

mmhamdy's activity

posted an update 22 days ago
What inspired the Transformer architecture in the "Attention Is All You Need" paper? And how were various ideas combined to create this groundbreaking model?

In this lengthy article, I trace the story and origins of some of the ideas introduced in the paper, covering everything from the fundamental attention mechanism that lies at its heart to the surprisingly simple explanation for its name, Transformer.

šŸ’” Examples of ideas explored in the article:

āœ… What was the inspiration for the attention mechanism?
āœ… How did we go from attention to self-attention?
āœ… Did the team have any other names in mind for the model?

and more...

I aim to tell the story of Transformers as I would have wanted to read it, and hopefully one that appeals to others interested in the details of this fascinating idea. The narrative draws on video interviews, lectures, articles, tweets/X posts, and some digging into the literature. I have done my best to be accurate, but errors are possible. If you find inaccuracies or have any additions, please reach out, and I will gladly make the necessary updates.

Read the article: https://huggingface.co/blog/mmhamdy/pandemonium-the-transformers-story
published an article 22 days ago
published an article 26 days ago

Osirian AI: A Call For The Resurrection And Reuse Of Deep Learning Models.

By mmhamdy
upvoted an article about 2 months ago

A Deepdive into Aya Vision: Advancing the Frontier of Multilingual Multimodality

posted an update about 2 months ago
šŸŽ‰ We're excited to introduce MemoryCode, a novel synthetic dataset designed to rigorously evaluate LLMs' ability to track and execute coding instructions across multiple sessions. MemoryCode simulates realistic workplace scenarios where a mentee (the LLM) receives coding instructions from a mentor amidst a stream of both relevant and irrelevant information.

šŸ’” But what makes MemoryCode unique? The combination of the following (sketched in code after this list):

āœ… Multi-Session Dialogue Histories: MemoryCode consists of chronological sequences of dialogues between a mentor and a mentee, mirroring real-world interactions between coworkers.

āœ… Interspersed Irrelevant Information: Critical instructions are deliberately interspersed with unrelated content, replicating the information overload common in office environments.

āœ… Instruction Updates: Coding rules and conventions can be updated multiple times throughout the dialogue history, requiring LLMs to track and apply the most recent information.

āœ… Prospective Memory: Unlike previous datasets that cue information retrieval, MemoryCode requires LLMs to spontaneously recall and apply relevant instructions without explicit prompts.

āœ… Practical Task Execution: LLMs are evaluated on their ability to use the retrieved information to perform practical coding tasks, bridging the gap between information recall and real-world application.
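
To make the setup concrete, here is a minimal sketch of what a single MemoryCode-style instance might look like. The class and field names (`Session`, `MemoryCodeInstance`, `active_instructions`, and so on) are illustrative assumptions, not the dataset's actual schema; see the repository linked below for the real format.

```python
# Hypothetical sketch of a MemoryCode-style instance. Field names are
# assumptions for illustration, not the dataset's actual schema.
from dataclasses import dataclass

@dataclass
class Session:
    """One mentor-mentee dialogue session."""
    text: str                # dialogue mixing relevant and irrelevant content
    instructions: list[str]  # coding rules introduced or updated here

@dataclass
class MemoryCodeInstance:
    sessions: list[Session]         # chronologically ordered dialogue history
    task: str                       # final coding task the mentee must perform
    active_instructions: list[str]  # most recent version of each rule

# A rule is introduced in session 1 and updated in session 3; session 2 is noise.
history = [
    Session("Mentor: Always prefix function names with 'x_'. By the way, "
            "the coffee machine is broken again.",
            ["prefix function names with 'x_'"]),
    Session("Mentor: Don't forget Friday's team lunch.", []),
    Session("Mentor: Scratch the earlier rule. Prefix function names "
            "with 'fn_' instead.",
            ["prefix function names with 'fn_'"]),
]

instance = MemoryCodeInstance(
    sessions=history,
    task="Write a function that reverses a string.",
    active_instructions=["prefix function names with 'fn_'"],  # only the update counts
)
```

Note that nothing in the final task statement cues the model to retrieve the prefix rule; that is the prospective-memory aspect described above.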

šŸ“Œ Our Findings

1ļøāƒ£ While even small models can handle isolated coding instructions, the performance of top-tier models like GPT-4o dramatically deteriorates when instructions are spread across multiple sessions.

2ļøāƒ£ This performance drop isn't simply due to the length of the context. Our analysis indicates that LLMs struggle to reason compositionally over sequences of instructions and updates. They have difficulty keeping track of which instructions are current and how to apply them.

šŸ”— Paper: From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions (2502.13791)
šŸ“¦ Code: https://github.com/for-ai/MemoryCode