Top Contributors: Dataset Downloads

AI & ML interests

πŸ“Š Creators of datasets with the most cumulative new downloads each month (users only, no orgs)

Recent Activity


SaylorTwiftΒ 
posted an update about 1 month ago
chansungΒ 
posted an update about 1 month ago
πŸŽ™οΈ Listen to the audio "Podcast" of every single Hugging Face Daily Papers.

Now, "AI Paper Reviewer" project can automatically generates audio podcasts on any papers published on arXiv, and this is integrated into the GitHub Action pipeline. I sounds pretty similar to hashtag#NotebookLM in my opinion.

πŸŽ™οΈ Try out yourself at https://deep-diver.github.io/ai-paper-reviewer/

This audio podcast is powered by Google technologies: 1) the Google DeepMind Gemini 1.5 Flash model generates the podcast script, then 2) Google Cloud Vertex AI's Text-to-Speech model synthesizes the voices, turning the script into natural-sounding speech (with the latest addition of the "Journey" voice style).
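
For reference, here is a minimal sketch of that two-step flow in Python, using the google-generativeai and google-cloud-texttospeech SDKs (the standalone Cloud Text-to-Speech client rather than Vertex AI); the prompt and the "en-US-Journey-D" voice name are illustrative assumptions, not the project's actual values:

```python
import google.generativeai as genai
from google.cloud import texttospeech

genai.configure(api_key="YOUR_GEMINI_API_KEY")

# Step 1: generate a podcast script from the paper text with Gemini 1.5 Flash.
paper_text = open("paper.txt").read()  # plain-text dump of the paper (assumed input)
model = genai.GenerativeModel("gemini-1.5-flash")
script = model.generate_content(
    "Write a two-host podcast script discussing this paper:\n" + paper_text
).text

# Step 2: synthesize the script into speech with Google Cloud Text-to-Speech.
tts = texttospeech.TextToSpeechClient()
response = tts.synthesize_speech(
    input=texttospeech.SynthesisInput(text=script),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Journey-D",  # one of the "Journey" voice styles (assumed choice)
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)
with open("podcast.mp3", "wb") as f:
    f.write(response.audio_content)
```

Note that Cloud Text-to-Speech caps each request at 5,000 bytes, so a full script would need to be chunked across multiple requests.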

"AI Paper Reviewer" is also an open source project. Anyone can use it to build and own a personal blog on any papers of your interests. Hence, checkout the project repository below if you are interested in!
: https://github.com/deep-diver/paper-reviewer

This project is going to support other models, including open-weights ones, soon for both text-based content generation and voice synthesis for the podcast. The only reason I chose the Gemini model is that it offers a free tier, which is enough to shape this project with non-realtime batch generation. I'm excited to see how others will use this tool to explore the world of AI research, so feel free to share your feedback and suggestions!
chansungΒ 
posted an update about 2 months ago
Effortlessly stay up-to-date with AI research trends using a new AI tool, "AI Paper Reviewer"!!

It analyzes the list of Hugging Face Daily Papers (w/ @akhaliq ) and turns them into insightful blog posts. This project leverages Gemini models (1.5 Pro, 1.5 Flash, and 1.5 Flash-8B) for content generation and Upstage Document Parse for parsing the layout and contents.
blog link: https://deep-diver.github.io/ai-paper-reviewer/
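
As a rough sketch of the per-paper flow (the Daily Papers endpoint and response fields here are assumptions about the public API, and the Upstage Document Parse step on the PDF is omitted):

```python
import requests
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Fetch the current Daily Papers list (assumed public endpoint).
papers = requests.get("https://huggingface.co/api/daily_papers").json()

for entry in papers[:3]:
    title = entry["paper"]["title"]
    abstract = entry["paper"]["summary"]
    post = model.generate_content(
        "Write an insightful blog post about this paper.\n"
        f"Title: {title}\nAbstract: {abstract}"
    ).text
    print(title, "=>", len(post), "characters generated")
```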

Also, here is the link to the GitHub repository for the parsing and generation pipeline. With it, you can easily build your own GitHub static pages from any arXiv papers you are interested in!
: https://github.com/deep-diver/paper-reviewer
chansungΒ 
posted an update 8 months ago
πŸ¦™πŸ¦™ LLaMA Duo project update

Last time, I gave a brief introduction to the LLaMA Duo project with @sayakpaul . It is a simple toolset for aligning an sLLM with a service LLM using a coverage dataset πŸ‘‰πŸ» (https://huggingface.co/posts/chansung/708646454991943).
- The coverage dataset is what we believe to be the most important/desired (instruction, response) pairs. In systems thinking, each instruction is analogous to a function in traditional programming: we write unit tests and measure the coverage % across all features/functions. Similarly, we need to check what % of the instructions in the coverage dataset our fine-tuned model can handle satisfactorily (hence "coverage dataset").

We have tested it with the "Coding" category of the HuggingFaceH4/no_robots dataset, which has about 300 SFT training data points under that category. After fine-tuning the Gemma 7B model on it, the result was very poor: LLaMA Duo's evaluation tool gave < 20% on the similarity and preciseness metrics on the test split.
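
For context, a minimal sketch of pulling those ~300 Coding examples with πŸ€— datasets (split and column names follow the current no_robots dataset card):

```python
from datasets import load_dataset

# no_robots carries a "category" column; the Coding slice is ~300 examples.
ds = load_dataset("HuggingFaceH4/no_robots", split="train")
coding = ds.filter(lambda x: x["category"] == "Coding")
print(len(coding))          # roughly 300 (instruction, response) pairs
print(coding[0]["prompt"])  # the instruction side of the first pair
```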

So, we used LLaMA Duo's synthetic data generation tool to generate 60k data points that look similar to the original dataset. We first created ~10k synthetic data points, then created 50k more based on the synthetic dataset itself.

After fine-tuning Gemma 7B on the 60k synthetic dataset, the evaluation results went up to 80~90%. Also, when testing the model in a UI, it tends to give good responses.

It is a good showcase of transitioning from a service LLM to an sLLM, or of keeping a backup sLLM for service LLM failure scenarios. I am going to expand these experiments to all categories of the no_robots dataset, which will generate roughly > 100k data points.

Here are some links:
- LLaMA Duo project repo: https://github.com/deep-diver/llamaduo
- 60k Coding synthetic dataset: chansung/merged_ds_coding
- Fine-tuned Gemma 7B model: chansung/coding_llamaduo_60k_v0.2
chansungΒ 
posted an update 8 months ago
πŸ’» Smoothing the Transition from Service LLM to Local LLM

Imagine your go-to LLM service is down, or you need to use it offline – yikes! This project is all about having that "Plan B" ready to go. Here's LLaMA Duo, which I've been building with @sayakpaul :

✨ Fine-tune a smaller LLM: We used Hugging Face's alignment-handbook to teach a smaller-sized LLM to mimic my favorite large language model. Think of it as that super-smart AI assistant getting a capable understudy.

πŸ€– Batch Inference: Let's get that fine-tuned LLM working! My scripts generate lots of text like a champ, and we've made sure things run smoothly even with bigger workloads.

🧐 Evaluation: How well is my small LLM doing? We integrated with the Gemini API to use it as an expert judge – it compares my model's work to the original (see the sketch after this list). Talk about a tough critic!

πŸͺ„ Synthetic Data Generation: Need to boost that model's performance? Using Gemini's feedback, we can create even more training data, custom-made to make the LLM better.

🧱 Building Blocks: This isn't just a one-time thing – it's a toolkit for all kinds of LLMOps work. Want to change your evaluation metrics? Bring in models trained differently? Absolutely, let's make it happen.
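
Here is a minimal sketch of that Gemini-as-judge evaluation step; the prompt wording and the JSON scoring schema are illustrative assumptions, not llamaduo's actual code:

```python
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
judge = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

def judge_response(instruction: str, reference: str, candidate: str) -> dict:
    # Ask the judge to compare the fine-tuned model's answer to the reference.
    prompt = (
        "You are an expert judge. Compare the candidate response to the "
        "reference response for the given instruction. Reply only with JSON "
        'like {"similarity": 0-100, "preciseness": 0-100}.\n'
        f"Instruction: {instruction}\n"
        f"Reference: {reference}\n"
        f"Candidate: {candidate}"
    )
    raw = judge.generate_content(prompt).text.strip()
    # Strip an optional markdown fence before parsing the scores.
    raw = raw.removeprefix("```json").removesuffix("```").strip()
    return json.loads(raw)

print(judge_response(
    "Reverse a string in Python.",
    "Use slicing: s[::-1].",
    "You can reverse a string s with s[::-1].",
))
```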

Why this project is awesome:

πŸ’ͺ Reliability: Keep things running no matter what happens to your main LLM source.
πŸ”’ Privacy: Process sensitive information on your own terms.
πŸ—ΊοΈ Offline capable: No internet connection? No problem!
πŸ•°οΈ Version Control: Lock in your favorite LLM's behavior, even if the service model changes.

We're excited to share the code on GitHub. Curious to see what you all think! πŸ‘‰πŸ» https://github.com/deep-diver/llamaduo
chansungΒ 
posted an update 9 months ago
Realize your LLM-powered idea on a Hugging Face Space.

I made a Space for you to duplicate; it comes with Gradio and an LLM served by Hugging Face's efficient Text Generation Inference (TGI) framework, packed into a single machine.

It provides a sample app code snippet with gr.ChatInterface. However, it is not limited to chat usage; you can leverage the efficiency of TGI for any sort of app built in Gradio.
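
For example, a minimal sketch of wiring gr.ChatInterface to the bundled TGI server via huggingface_hub's InferenceClient (the local endpoint URL is an assumption about the Space's setup):

```python
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")  # TGI running on the same machine

def respond(message, history):
    partial = ""
    # Stream tokens from TGI so the UI updates while text is generated.
    for token in client.text_generation(message, max_new_tokens=256, stream=True):
        partial += token
        yield partial

gr.ChatInterface(respond).launch()
```

Swap respond() into any other Gradio event handler to use TGI outside of chat.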

Have you ever enjoyed playing with HuggingChat? Then you will enjoy writing your own idea with this, because both are built on top of TGI!

Focus on your app code, and go beyond chat!

chansung/gradio_together_tgi
chansungΒ 
posted an update 10 months ago
πŸŽ₯ 🀾 Vid2Persona: talk to a person from a video clip

A fun project with @sayakpaul over the last week. It has a simple pipeline, from extracting the traits of video characters to chatting with them.

Under the hood, this project leverages the power of both commercial and open source models. We used Google's Gemini 1.0 Pro Vision model to understand the video content directly, then we used the HuggingFaceH4/zephyr-7b-beta model to hold the conversation!
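
The chat stage might look roughly like this with today's InferenceClient (the persona system prompt is illustrative, not the project's actual template):

```python
from huggingface_hub import InferenceClient

client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")

# The persona traits would come from the Gemini video-understanding stage.
persona = "a cheerful street performer who juggles and loves telling jokes"
messages = [
    {"role": "system", "content": f"You are {persona}. Stay in character."},
    {"role": "user", "content": "Hey! What were you doing in that video?"},
]
out = client.chat_completion(messages, max_tokens=256)
print(out.choices[0].message.content)
```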

Try it on the Hugging Face Space and let us know what you think.
: chansung/vid2persona

The Space application is a dedicated implementation for the ZeroGPU environment + the Hugging Face Inference API with a PRO account. If you wish to host it in your own environment, consider duplicating the Space or running it locally with the project repository
: https://github.com/deep-diver/Vid2Persona
mvaloattoΒ 
posted an update 10 months ago
chansungΒ 
posted an update 10 months ago
Updating the PaperQA Gradio app and Hugging Face Space.
: Link ➑️ chansung/paper_qa
: Standalone repo ➑️ https://github.com/deep-diver/paperqa-ui

The final goal is to let people have their own paper archive. In the end, you will be able to easily *clone* it locally or on a Hugging Face Space with a Google Gemini API key (which is free) and a Hugging Face access token. You can just drop arXiv IDs at the bottom, and all the papers are automatically analyzed and archived in a Hugging Face Dataset repo.

Here are a few of the updates included; dig into the source code if you want similar features for your use cases!
πŸ–₯️ making a complex UI that is fully responsive
+ making the UI update as quickly as possible (avoiding server-client round trips when possible)
πŸ’¬ permanent chat history management with in-browser local storage
+ chat history management *per* paper
+ chat history management in lazy mode (with too many papers, it is impossible to create a chat history for every single paper beforehand)

The current plan is to support Gemini and any open source models with a Hugging Face PRO account, but I will expand it to GPT-4 soon.

Any suggestions on this project are welcome! Possibly:
- hooking up a RAG system (open models' context lengths are small)
- hooking up an Internet search system
- image/figure analysis
....
chansungΒ 
posted an update 10 months ago
Understand research papers more easily with Q&As automatically generated by an LLM (Gemini 1.0 Pro). For this purpose, I have built two projects.

- [Auto Paper Analysis](https://github.com/deep-diver/auto-paper-analysis) lets you generate QAs on a list of papers. The paper list can be specified either from Hugging Face's Daily Papers or as a set of raw arXiv IDs. The generated QA dataset can then be pushed to a Hugging Face Dataset (see the sketch below). Refer to the attached image.

- [PaperQA Space application]( chansung/paper_qa) shows how to interact with the generated QA dataset. Search for a paper by keyword or date, then understand it with the QAs (in ELI5 and technical versions). Check out the attached video, or visit the Space directly.
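
For the push step mentioned above, a minimal sketch with πŸ€— datasets (the record fields and repo name are illustrative, not Auto Paper Analysis's actual schema):

```python
from datasets import Dataset

# One generated QA record per question; assumes `huggingface-cli login` was run.
qa_records = [
    {
        "arxiv_id": "2401.00001",  # placeholder ID
        "question": "What problem does the paper tackle?",
        "answer_eli5": "It teaches a small model to copy a bigger one.",
        "answer_technical": "It distills a service LLM into a smaller LLM via synthetic data.",
    },
]
Dataset.from_list(qa_records).push_to_hub("your-username/paper-qa-dataset")
```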

This is a baby step toward automated paper analysis (summarization) to help consume the exploding amount of information in the field of AI. In the next phase, I will need to spend time enhancing prompt engineering, UI/UX (such as a Like/Dislike system), ...

However, in the meantime, I hope this project can be helpful for anyone who struggles to understand papers (new papers come out before I have even finished reading yesterday's)!

Also, if you have any suggestions to improve this, please let me know :)

mvaloattoΒ 
posted an update 10 months ago
8 Spaces Of The Week is nice, but 840 is even better! πŸ”₯

Here is the complete library of ALL Spaces featured by Hugging Face since October 2021:

All Spaces Of The Week - mvaloatto/ASOTW

-
A special mention goes to @osanseviero , whose collection inspired me to design this dedicated Space. Another shoutout to @victor , whose intricately designed Spaces cards motivated me to step up my CSS game :) I plan to release additional features in the future. In the meantime, suggestions are welcome!
Β·
mvaloattoΒ 
posted an update 10 months ago
Want more β€œgood machine learning” in your X feed? Here is a new Space for you:
πŸ”” Top HF Users To Follow On X - https://huggingface.co/spaces/mvaloatto/HF2X

Ever since I fell down the AI rabbit hole, it hasn’t been super easy to spot and follow the most impactful Hugging Face contributors on X. So, inspired by @Weyaxi leaderboards, I decided to create a list just for this purpose.

Why, you ask?

First, it’s quite surprising how so many talented AI pioneers and independent contributors on X don't get the visibility/reach you might expect. Sad but true: follower count doesn't always match up with the value or innovation an individual brings to the table (just stating the obvious here).

Open source AI, in particular, thrives not just on innovation but also on the collective spirit of its believers and builders. With Hugging Face standing out as a prime hub for top AI engineers and contributors, compiling a directory of X profiles from influential figures on this platform felt like a natural step.

This Space aims to not only connect these top contributors but also guide open AI enthusiasts and newcomers towards the field's leading lights.

I put this modest page together using some web scraping and what I remember from my web dev class ages ago! Suggestions/likes are welcome - I’m hoping to keep tweaking/upgrading it, especially if you all find it useful.

Now, let’s follow each other! It’s time to accelerate the dissemination of our ideas, encourage collaboration within our community, and ensure that open AI developments receive the attention and recognition they deserve. πŸ”₯
Β·
chansungΒ 
posted an update 11 months ago
Update on the πŸ€— Daily Papers newsletter

Automatic Korean translation is now integrated. In the newsletter, "KO" links appear, which bring you to the translated version of the full paper. This is done with the following workflow.

1. Grab the list of arXiv IDs from the πŸ€— Daily Papers API
2. Distribute sub-lists of arXiv IDs to VMs (possibly spot instances, since the job ends shortly)
3. Commit & push the translated papers in HTML to the designated GitHub repository
4. The newsletter includes the links to the HTML of each paper

Job distribution across a number of VMs is super easily done with [dstack](https://dstack.ai/), and the translation sub-workflow goes through: 1) downloading the PDF of each paper with the arxiv-dl package, 2) converting PDF => text with the nougat-ocr package, 3) translating the English text into Korean line by line with a custom trained model ( nlp-with-deeplearning/enko-t5-small-v0 ) in πŸ€— transformers, and 4) reformatting the translation into HTML.
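
For step 3, a minimal sketch of the line-by-line translation with πŸ€— transformers (generation settings, and whether the model expects a task prefix, are assumptions):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "nlp-with-deeplearning/enko-t5-small-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def translate_en_to_ko(english_text: str) -> str:
    korean_lines = []
    for line in english_text.splitlines():
        if not line.strip():
            korean_lines.append("")  # keep paragraph breaks intact
            continue
        inputs = tokenizer(line, return_tensors="pt", truncation=True)
        outputs = model.generate(**inputs, max_new_tokens=256)
        korean_lines.append(tokenizer.decode(outputs[0], skip_special_tokens=True))
    return "\n".join(korean_lines)
```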

Many people in Korea are not fluent in English but want to learn about new things in AI, so they usually use Google Translate or other services. This is why I made this feature: easier, direct access to SOTA knowledge.

Are there other countries with similar needs? If so, it would be wonderful to cooperate to support more languages. Please reach out if anyone is interested in this.

PS: I always wanted to show the usefulness of open ML models by building a well-working end-to-end product, and this newsletter shows it by featuring T5ForConditionalGeneration (translation) and the SOLAR LLM (summarization).

If you want to subscribe to the newsletter
: https://groups.google.com/g/hf-daily-paper-newsletter

If you want to look into the source code
: https://github.com/deep-diver/hf-daily-paper-newsletter
Β·