ZeroGPU Explorers

Recent Activity


Severian 
posted an update 5 days ago
Computational Model for Symbolic Representations: An Interaction Framework for Human-AI Collaboration

Hey everyone. I need your help to see whether this concept holds up — whether its logic, and some testing with prompts, can validate or invalidate it. My goal isn’t to make any bold claims about AI; I just really want to know if I’ve stumbled upon something that can be useful in AI interactions. Here’s my proposal in a nutshell:

The Computational Model for Symbolic Representations Framework introduces a method for enhancing human-AI collaboration by assigning user-defined symbolic representations (glyphs) to guide interactions with computational models. This style of interaction and its syntax are called Glyph Code-Prompting. Glyphs function as conceptual tags or anchors, representing abstract ideas, storytelling elements, or domains of focus (e.g., pacing, character development, thematic resonance). Users can steer the AI’s focus within specific conceptual domains by using these symbols, creating a shared framework for dynamic collaboration. Glyphs do not alter the underlying model itself.

The Core Point: Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI interactions by serving as contextual anchors that guide the AI's focus. This enhances the AI's ability to generate more nuanced and contextually appropriate responses. For instance, a symbol like ! can carry multidimensional semantic meaning and connections, demonstrating the practical value of glyphs in conveying complex intentions efficiently.
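
For a concrete picture, here is a minimal sketch (my own illustration, not taken from the framework write-up) of how glyph definitions could be declared once in a system prompt and then referenced inline in later turns. The specific glyph characters and their meanings are hypothetical examples:

```python
# Hypothetical glyph assignments; the framework lets users define their own.
glyph_definitions = {
    "🜁": "pacing: control the rhythm and tempo of the narrative",
    "🜂": "character development: deepen motivations and inner conflict",
    "🜃": "thematic resonance: tie events back to the story's central theme",
}

# Declare the glyphs once in the system prompt so the model can treat them as
# shared symbols, then reference them inline in later user turns.
system_prompt = (
    "You are a collaborative writing partner. We will use these glyphs as focus anchors:\n"
    + "\n".join(f"{glyph} = {meaning}" for glyph, meaning in glyph_definitions.items())
)

user_prompt = "Revise the opening scene. 🜁 slow the pacing, and 🜂 hint at the captain's guilt."

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
# `messages` can be sent to any chat-completion endpoint to test the idea.
```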

Link to my full initial overview and write-up: https://huggingface.co/blog/Severian/computational-model-for-symbolic-representations

Try out the HF Assistant Version: https://hf.co/chat/assistant/678cfe9655026c306f0a4dab
ariG23498 
posted an update 6 days ago
ariG23498 
posted an update 10 days ago
Severian 
posted an update 14 days ago
🌱 Potential Made Simple: Free Life System/Productivity App based on the Rhythm of Existence. No BS. No Catch. Just want to cut through the noise and help

The Origin Story

Inspired by Rob Dyrdek's "Rhythm of Existence" philosophy, this system has been expanded into a comprehensive life management tool featuring habit tracking, journaling, life statistics, and more. While I support entrepreneurs creating premium productivity apps, I believe self-improvement should never have financial barriers. That’s why this system is open source and free—no paywalls, premium features, or gatekeeping. Anyone can use it to start optimizing their life, ensuring accessibility for all.

How to Get Started

Two ways to access the system:

HuggingFace Version (Recommended)
- Visit Severian/Potential-Made-Simple
- Create a free HuggingFace account if needed.
- Duplicate the space to create your private version.
- Pro tip: Save it as a PWA for offline mobile use.

Google Sheets Version
- Ideal for spreadsheet users or those avoiding new accounts.
- Access it at https://docs.google.com/spreadsheets/d/1O2R0TCp0t27VZJuvkrz_gMJAl-nkwqeVyL3i6pN7aCo/edit?usp=sharing
- Save a copy and start tracking.

Features Beyond ROE

- Habit tracking
- Daily journaling with prompts
- Life statistics and visualizations
- Task management
- Meal tracking
- Progress metrics
- Historical data analysis
- And more!

Supporting the Project (Optional)

This system is free and always will be. If you find value in it, you can support my work at https://www.ko-fi.com/severian42. Contributions are entirely optional and don’t unlock extra features—they’re simply a way to say thanks.

My mission is to help as many people as possible optimize their lives and reach their full potential. Remember, self-improvement doesn’t have to come with a high price tag.
Severian 
posted an update 17 days ago
Interesting Solution to the Problem of Misguided Attention

So I've been fascinated by the problem of Misguided Attention for a few weeks. I am trying to build an inference algorithm to help LLMs address that issue, but in the process I found a cool short-term fix I call "Mindful Attention" that uses just prompt engineering.

Have you ever thought about how our brains filter reality through layers of past experiences, concepts, and mental images? For example, when you look at an oak tree, are you truly seeing that oak tree in all its unique details, or are you overlaying it with a generalized idea of "oak tree"? This phenomenon inspired the new approach.

LLMs often fall into a similar trap, hence the Misguided Attention problem. They process input not as it’s uniquely presented but through patterns and templates they’ve seen before. This leads to responses that can feel "off," like missing the point of a carefully crafted prompt or defaulting to familiar but irrelevant solutions.

I wanted to address this head-on by encouraging LLMs to slow down, focus, and engage directly with the input—free of assumptions. This is the core of the Mindful Attention Directive, a prompt designed to steer models away from over-generalization and back into the moment.
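
To make the idea concrete, here is a minimal sketch (my own, not the full prompt from the Gist linked below) of appending a short directive in this spirit to an existing system prompt before a chat call. The directive wording and the model name are placeholder assumptions:

```python
# Requires an HF token configured for the Inference API / providers.
from huggingface_hub import InferenceClient

# Placeholder directive in the spirit of "Mindful Attention"; the real,
# more verbose prompt lives in the Gist linked below.
MINDFUL_DIRECTIVE = (
    "Before answering, restate the prompt in your own words and note any detail "
    "that differs from the standard version of the problem. Answer only the "
    "prompt as actually written, not the familiar variant it resembles."
)

def with_mindful_attention(system_prompt: str) -> str:
    # Append the directive so it applies on top of the caller's own instructions.
    return f"{system_prompt}\n\n{MINDFUL_DIRECTIVE}"

client = InferenceClient()
response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",  # any chat model works here
    messages=[
        {"role": "system", "content": with_mindful_attention("You are a careful assistant.")},
        {"role": "user", "content": "A farmer needs to cross a river with a goat. The boat fits both. How many trips?"},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```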

You can read more about the broader issue here: https://github.com/cpldcpu/MisguidedAttention

And if you want to try this mindful approach in action, check out the LLM I’ve set up for testing: https://hf.co/chat/assistant/677e7ebcb0f26b87340f032e. It works about 80% of the time to counteract these issues, and the results are pretty cool.

I'll add the Gist with the full prompt. I admit it is quite verbose, but it's the most effective one I have landed on yet. I am working on a smaller version that can be appended to any system prompt to harness Mindful Attention. Feel free to experiment to find a better version for the community!

Here is the Gist: https://gist.github.com/severian42/6dd96a94e546a38642278aeb4537cfb3
victor 
updated a Space 30 days ago
ehristoforu 
posted an update about 1 month ago
✒️ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset

❓ Ultraset is a comprehensive dataset for training Large Language Models (LLMs) with SFT (Supervised Fine-Tuning). It consists of over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.

🤯 Ultraset solves the problem faced by users when selecting an appropriate dataset for LLM training. It combines various types of data required to enhance the model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.

🤗 For effective use of the dataset, it is recommended to use only the "instruction," "input," and "output" columns and to train the model for 1-3 epochs. The dataset does not include DPO or Instruct-style data, making it suitable for training various types of LLMs.
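
As an illustration, here is a minimal sketch (not from the original post) of loading the dataset with the Hugging Face datasets library and keeping only the three recommended columns; the split name and the Alpaca-style formatting are assumptions:

```python
from datasets import load_dataset

# Assumes the dataset exposes a "train" split with the "instruction",
# "input", and "output" columns described above.
ds = load_dataset("fluently-sets/ultraset", split="train")

# Keep only the recommended columns.
keep = {"instruction", "input", "output"}
ds = ds.remove_columns([c for c in ds.column_names if c not in keep])

# Format each row as an Alpaca-style prompt/completion pair for SFT
# (train for roughly 1-3 epochs, as recommended above).
def to_alpaca(example):
    if example["input"]:
        prompt = (f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Input:\n{example['input']}\n\n### Response:\n")
    else:
        prompt = f"### Instruction:\n{example['instruction']}\n\n### Response:\n"
    return {"prompt": prompt, "completion": example["output"]}

ds = ds.map(to_alpaca)
print(ds[0]["prompt"])
```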

❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
Abhaykoul 
posted an update about 1 month ago
🔥 BIG ANNOUNCEMENT: THE HELPINGAI API IS LIVE! 🔥

Yo, the moment you’ve all been waiting for is here! 🚀 The HelpingAI API is now LIVE and ready to level up your projects! 🔥 We’re bringing that next-level AI goodness straight to your fingertips. 💯

No more waiting—it’s time to build something epic! 🙌

From now on, you can integrate our cutting-edge AI models into your own applications, workflows, and everything in between. Whether you’re a developer, a creator, or just someone looking to make some serious moves, this is your chance to unlock the full potential of emotional intelligence and adaptive AI.

Check out the docs 🔥 and let’s get to work! 🚀

👉 Check out the docs and start building (https://helpingai.co/docs)
👉 Visit the HelpingAI website (https://helpingai.co/)
julien-c 
posted an update about 2 months ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free and (barring blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)

docs: https://huggingface.co/docs/hub/storage-limits

We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
ariG23498 
posted an update about 2 months ago