lora concepts library

AI & ML interests

None defined yet.

Recent Activity

Yardenfren  updated a model 6 months ago
lora-library/B-LoRA-toy_storee
Yardenfren  updated a model 8 months ago
lora-library/B-LoRA-drawing2
Yardenfren  updated a model 8 months ago
lora-library/B-LoRA-painting

lora-library's activity

1aurent 
posted an update 12 days ago
ehristoforu 
posted an update 21 days ago
✒️ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset

❓ Ultraset is a comprehensive dataset for training Large Language Models (LLMs) using the SFT (Supervised Fine-Tuning) method. It contains over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.

🤯 Ultraset solves the problem faced by users when selecting an appropriate dataset for LLM training. It combines various types of data required to enhance the model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.

🤗 For effective use of the dataset, it is recommended to use only the "instruction," "input," and "output" columns and to train the model for 1-3 epochs. The dataset does not include DPO or Instruct data, making it suitable for training various types of LLMs.
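
As a rough illustration (not from the dataset card), here is how one might load Ultraset with the 🤗 datasets library and keep only the three recommended columns; the "train" split name is an assumption:

```python
from datasets import load_dataset

# Split name "train" is an assumption; check the dataset card if it differs.
ds = load_dataset("fluently-sets/ultraset", split="train")

# Keep only the columns recommended for SFT.
ds = ds.select_columns(["instruction", "input", "output"])

# Render each row as a simple Alpaca-style prompt/response string.
def to_alpaca(example):
    prompt = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n"
    )
    return {"text": prompt + example["output"]}

ds = ds.map(to_alpaca)
print(ds[0]["text"][:300])
```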

❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
victor 
posted an update about 1 month ago
Qwen/QwQ-32B-Preview shows us the future (and it's going to be exciting)...

I tested it against some really challenging reasoning prompts and the results are amazing 🤯.

Check this dataset for the results: victor/qwq-misguided-attention
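
If you want to browse the results programmatically, here is a minimal sketch with the 🤗 datasets library; it makes no assumptions about splits or columns and just inspects whatever is there:

```python
from datasets import load_dataset

# Inspect the dataset's splits and columns before relying on any particular schema.
results = load_dataset("victor/qwq-misguided-attention")
print(results)

first_split = next(iter(results.values()))
print(first_split.column_names)
print(first_split[0])
```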
victor 
posted an update about 2 months ago
A perfect example of why Qwen/Qwen2.5-Coder-32B-Instruct is insane:

Introducing: AI Video Composer 🔥
huggingface-projects/ai-video-composer

Drag and drop your assets (images/videos/audio) to create any video you want using natural language!

It works by asking the model to output a valid FFmpeg command, which can be quite complex, but most of the time Qwen2.5-Coder-32B gets it right (that thing is a beast). It's an update of an old project built with GPT-4; back then (~1.5 years ago) it was almost impossible to make it work with open models, but not anymore. Let's go open weights 🚀.
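
A minimal sketch of the general idea (not the Space's actual code): ask Qwen2.5-Coder-32B-Instruct for a single FFmpeg command over the Inference API, then review and run it. The prompt, file names, and system message are illustrative assumptions.

```python
import subprocess
from huggingface_hub import InferenceClient

client = InferenceClient("Qwen/Qwen2.5-Coder-32B-Instruct")

# Example assets and request; in the Space these come from the user's uploads.
assets = ["intro.png", "clip.mp4", "music.mp3"]
request = "Show intro.png for 3 seconds, then play clip.mp4, with music.mp3 as background audio."

response = client.chat_completion(
    messages=[
        {"role": "system", "content": "Reply with a single valid ffmpeg command and nothing else."},
        {"role": "user", "content": f"Assets: {', '.join(assets)}\nTask: {request}"},
    ],
    max_tokens=256,
)
command = response.choices[0].message.content.strip()
print(command)

# Review the generated command before running it on your own files.
# subprocess.run(command, shell=True, check=True)
```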
victor 
posted an update about 2 months ago
Qwen2.5-72B is now the default HuggingChat model.
This model is so good that you must try it! I often get better rephrasing results with it than with Sonnet or GPT-4!
victor 
posted an update 3 months ago
NEW - Inference Playground

Maybe, like me, you have always wanted a super easy way to compare llama3.2-1B vs. llama3.2-3B, or the same model with different temperatures?

Trying and comparing warm Inference API models has never been easier!
Just go to https://hf.co/playground, set your token and you're ready to go.
We'll keep improving, feedback welcome 😊
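
If you prefer scripting the same comparison, here is a minimal sketch with huggingface_hub; the model IDs, prompt, and temperature values are just examples, and any warm Inference API model should work:

```python
from huggingface_hub import InferenceClient

prompt = "Explain the difference between a list and a tuple in Python in one sentence."

# Compare two model sizes at two temperatures (example IDs, example settings).
for model in ("meta-llama/Llama-3.2-1B-Instruct", "meta-llama/Llama-3.2-3B-Instruct"):
    for temperature in (0.2, 0.9):
        client = InferenceClient(model)
        out = client.chat_completion(
            messages=[{"role": "user", "content": prompt}],
            max_tokens=128,
            temperature=temperature,
        )
        print(f"--- {model} @ temperature={temperature}")
        print(out.choices[0].message.content)
```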
1aurent 
posted an update 4 months ago
Hey everyone 🤗!
We (finegrain) have created some custom ComfyUI nodes to use our refiners micro-framework inside comfy! 🎉

We only support our new Box Segmenter at the moment, but we're thinking of adding more nodes since there seems to be a demand for it. We leverage the new (beta) Comfy Registry to host our nodes. They are available at: https://registry.comfy.org/publishers/finegrain/nodes/comfyui-refiners. You can install them by running:
comfy node registry-install comfyui-refiners

Or by downloading the archive via "Download Latest" and unzipping it into your ComfyUI custom_nodes folder.
We are eager to hear your feedback and suggestions for new nodes, and how you'll use them! 🙏
1aurent 
posted an update 4 months ago
Hey everyone 🤗!
Check out this awesome new model for object segmentation!
finegrain/finegrain-object-cutter.

We (finegrain) trained this new model in partnership with Nfinite, using some of their synthetic data, and the resulting model is incredibly accurate 🚀.
It’s all open source under the MIT license (finegrain/finegrain-box-segmenter), complete with a test set tailored for e-commerce (finegrain/finegrain-product-masks-lite). Have fun experimenting with it!
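
To try the Space programmatically, here is a rough sketch with gradio_client; the endpoint name and arguments in the commented call are assumptions, so run view_api() first to see the real signature:

```python
from gradio_client import Client, handle_file

client = Client("finegrain/finegrain-object-cutter")

# Print the Space's actual endpoints and parameters.
client.view_api()

# Hypothetical call for illustration only; adapt it to the printed API.
# result = client.predict(
#     handle_file("product_photo.jpg"),
#     "the red mug",           # object to cut out, named in plain language
#     api_name="/predict",
# )
# print(result)
```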
Niansuh 
posted an update 4 months ago
Plugins in NiansuhAI

Plugin Names:
1. WebSearch: Searches the web using search engines.
2. Calculator: Evaluates mathematical expressions, extending the base Tool class.
3. WebBrowser: Extracts and summarizes information from web pages.
4. Wikipedia: Retrieves information from Wikipedia using its API.
5. Arxiv: Searches and fetches article information from Arxiv.
6. WolframAlphaTool: Provides answers on math, science, technology, culture, society, and everyday life.

These plugins currently support the GPT-4o-2024-08-06 model, which also supports image analysis. (A rough sketch of the plugin pattern follows after this post.)

Try it now: https://huggingface.co/spaces/NiansuhAI/chat

Similar to: https://hf.co/chat
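
NiansuhAI's plugin code is not shown in the post, but the pattern it describes above (each plugin extending a base Tool class) might look roughly like this hypothetical Python sketch:

```python
class Tool:
    """Hypothetical base class: every plugin exposes a name, a description, and a run method."""
    name: str = "tool"
    description: str = ""

    def run(self, query: str) -> str:
        raise NotImplementedError


class Calculator(Tool):
    """Mirrors the Calculator plugin above: evaluates simple arithmetic expressions."""
    name = "Calculator"
    description = "Evaluates mathematical expressions."

    def run(self, query: str) -> str:
        # Restrict evaluation to plain arithmetic; a real plugin would use a proper math parser.
        if not set(query) <= set("0123456789+-*/(). "):
            return "Unsupported expression."
        return str(eval(query, {"__builtins__": {}}, {}))


print(Calculator().run("2 * (3 + 4)"))  # -> 14
```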
victor 
posted an update 5 months ago
🙋 Calling all Hugging Face users! We want to hear from YOU!

What feature or improvement would make the biggest impact on Hugging Face?

Whether it's the Hub, better documentation, new integrations, or something completely different – we're all ears!

Your feedback shapes the future of Hugging Face. Drop your ideas in the comments below! 👇
victor 
posted an update 5 months ago
How good are you at spotting AI-generated images?

Find out by playing Fake Insects 🐞, a game where you need to identify which insects are fake (AI-generated). Good luck & share your best score in the comments!

victor/fake-insects
JoseRFJunior 
posted an update 5 months ago
JoseRFJunior/TransNAR
https://github.com/JoseRFJuniorLLMs/TransNAR
https://arxiv.org/html/2406.09308v1
TransNAR hybrid architecture. Similar to Alayrac et al., we interleave existing Transformer layers with gated cross-attention layers that enable information to flow from the NAR to the Transformer. We generate queries from tokens, while keys and values are obtained from the nodes and edges of the graph. The node and edge embeddings are obtained by running the NAR on the graph version of the reasoning task to be solved. When experimenting with pre-trained Transformers, we initially close the cross-attention gate in order to fully preserve the language model’s internal knowledge at the beginning of training.
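
A minimal PyTorch sketch of the gated cross-attention idea described above (shapes and hyperparameters are illustrative, not the project's actual code): queries come from the token stream, keys and values come from the NAR's node/edge embeddings, and a tanh gate initialized at zero keeps the cross-attention path closed at the start of training.

```python
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Token stream attends to NAR node/edge embeddings through a zero-initialized gate."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Gate starts at 0 so tanh(gate) = 0: the pre-trained LM is untouched early in training.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, tokens: torch.Tensor, nar_embeddings: torch.Tensor) -> torch.Tensor:
        # Queries from tokens; keys and values from the NAR's node/edge embeddings.
        attended, _ = self.attn(self.norm(tokens), nar_embeddings, nar_embeddings)
        return tokens + torch.tanh(self.gate) * attended

# Example: batch of 2, 16 tokens, 12 graph embeddings, hidden size 256.
layer = GatedCrossAttention(d_model=256)
out = layer(torch.randn(2, 16, 256), torch.randn(2, 12, 256))
print(out.shape)  # torch.Size([2, 16, 256])
```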
victor 
posted an update 5 months ago
Activity of famous Hugging Face organisations. Guess which one has the word "Open" in it 😂
1aurent 
posted an update 5 months ago
Hey everyone 🤗!
Check out this cool new space from Finegrain: finegrain/finegrain-object-eraser

Under the hood, it's a pipeline of models (currently exposed via an API) that lets you erase any object from your image just by naming it or selecting it! Not only will the object disappear, but so will its effects on the scene, like shadows and reflections. It's built on top of Refiners, our micro-framework for simple foundation model adaptation (feel free to star it on GitHub if you like it: https://github.com/finegrain-ai/refiners).
ehristoforu 
posted an update 6 months ago
😏 Hello from Project Fluently Team!

✨ We can finally share some details about Supple Diffusion. We have been working on it for a long time and there is only a little left to do; we apologize for the extended timeline.

🛠️ Some technical information. The first version will be the Small version (there will also be Medium, Large, Huge, and possibly Tiny), and it will be based on the SD1 architecture: one text encoder, a U-Net, and a VAE. About each component: the text encoder will be a CLIP model (perhaps not CLIP-L-patch14); we specially retrained CLIP so that the model understands completely different styles and so that prompts can be kept as simple as possible. The U-Net was built in a rather involved way: we first trained different U-Nets on different types of data, then merged them using different methods, then trained with DPO and SPO, and finally examined the remaining shortcomings and trained the model further; details will come later. The VAE is the same as in the SD1 architecture.

🙌 Compatibility. Another goal of the Supple model series is full compatibility with Auto1111 and ComfyUI right from release: the model is fully supported by these interfaces and by the diffusers library and does not require adaptation. Your usual sampling methods are also compatible, such as DPM++ 2M Karras, DPM++ SDE, and others.
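
Since the model targets the plain SD1 architecture, loading it with diffusers should look like any other SD1 checkpoint once it ships. A minimal sketch; the repository ID below is a placeholder, since Supple Diffusion Small had not been released at the time of this post:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo ID for illustration; replace it with the real one once released.
pipe = StableDiffusionPipeline.from_pretrained(
    "fluently/supple-diffusion-small",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse at dusk",
    num_inference_steps=30,
).images[0]
image.save("lighthouse.png")
```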

🧐 No demo images today (there wasn’t much time). Final work on the model is underway and we are already preparing to develop the Medium version; the release of the Small version will most likely be in mid-August or earlier.

😻 Feel free to ask your questions in the comments below the post, we will be happy to answer them, have a nice day!
1aurent 
posted an update 6 months ago
multimodalart 
posted an update 6 months ago
Niansuh 
posted an update 6 months ago
Introducing Plugins in NiansuhAI (on July 20, 2024)

Plugin Names:
1. WebSearch: Tool for searching the web using search engines.
2. Calculator: Helps evaluate mathematical expressions; extends the base Tool class.
3. WebBrowser: Interacts with web pages to extract information or summarize content.
4. Wikipedia: Retrieves data from Wikipedia using its API.
5. Arxiv: Searches and fetches article information from Arxiv.
6. WolframAlphaTool: Answers questions on Math, Science, Technology, Culture, Society, and Everyday Life.

Similar to https://hf.co/chat