Luigi PRO
luigi12345
AI & ML interests
None yet
Recent Activity
updated a Space about 21 hours ago: luigi12345/Autovideo-Gradio
published a Space about 21 hours ago: luigi12345/Autovideo-Gradio
Organizations
None yet
luigi12345's activity
posted an update about 6 hours ago
Post
1769
BEST DEBUG PROMPT
Language: Any. Project Type: Any
What prompt, if sent to you, would make you detect and fix all the code-crashing issues in the COMPLETE codebase so I don't have to ask you to fix things again and again?
Step 1: Give me such a prompt.
Step 2: Follow it yourself quietly and COMPLETELY.
Step 3: State that if you are asked again about finding fatal bugs, logic issues, and inconsistencies in the current codebase, you would not be able to find more. (You cannot lie, so you must make all the necessary code adjustments before making such a statement.)
posted an update about 1 month ago
Post
1939
OpenAI o3-mini Just Dropped: Here's What You Need to Know!
OpenAI just launched o3-mini, a faster, smarter upgrade over o1-mini. It's better at math, coding, and logic, making it more reliable for structured tasks. Now available in ChatGPT & API, with function calling, structured outputs, and system messages.
Why does this matter?
- Stronger in logic, coding, and structured reasoning
- Function calling now works reliably for API responses
- More stable & efficient for production tasks
- Faster responses with better accuracy
Who should use it?
- Great for coding, API calls, and structured Q&A
- Not meant for long conversations or complex reasoning (GPT-4 is better)
Free users: Try it under "Reason" mode in ChatGPT
Plus/Team users: Daily message limit tripled to 150/day!
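For API users, here is a minimal sketch of calling o3-mini with a system message and a function tool through the openai Python SDK; the tool name and schema are illustrative placeholders, not from OpenAI's docs.

```python
# Sketch: o3-mini with a system message and function calling.
# Assumes OPENAI_API_KEY is set; the tool schema below is made up.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool
        "description": "Run the project's test suite and return failures.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

resp = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "system", "content": "You are a terse coding assistant."},
        {"role": "user", "content": "Run the tests under ./src and summarize."},
    ],
    tools=tools,
)
print(resp.choices[0].message.tool_calls or resp.choices[0].message.content)
```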
posted an update about 1 month ago
Post
1486
A U T O I N T E R P R E T E R
It took me a long time to figure out how to make Open-Interpreter work smoothly with a UI.
[OPEN SPACE](https://huggingface.co/spaces/luigi12345/AutoInterpreter)
Run ANY script in your browser, download files, scrape emails, create images, debug files and recommit back…
posted an update about 1 month ago
Post
1436
# Essential AutoGen Examples: Code Writing, File Operations & Agent Tools
1. **Code Writing with Function Calls & File Operations**
   - [Documentation](https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_function_call_code_writing/)
   - [Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_code_writing.ipynb)
   - *Key Tools Shown*:
     - `list_files()` - directory listing
     - `read_file(filename)` - file reading
     - `edit_file(file, start_line, end_line, new_code)` - precise code editing
     - Code validation and syntax checking
     - File backup and restore
2. **Auto Feedback from Code Execution**
   - [Documentation](https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_auto_feedback_from_code_execution/)
   - [Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)
   - *Key Tools Shown*:
     - `execute_code(code)` with output capture
     - Error analysis and auto-correction
     - Test case generation
     - Iterative debugging loop
3. **Async Operations & Parallel Execution**
   - [Documentation](https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_function_call_async/)
   - [Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_async.ipynb)
   - *Key Tools Shown*:
     - Async function registration
     - Parallel agent operations
     - Non-blocking file operations
     - Task coordination
4. **LangChain Integration & Advanced Tools**
   - [Colab](https://colab.research.google.com/github/sugarforever/LangChain-Advanced/blob/main/Integrations/AutoGen/autogen_langchain_uniswap_ai_agent.ipynb)
   - *Key Tools Shown*:
     - Vector store integration
     - Document QA chains
     - Multi-agent coordination
     - Custom tool creation
Most relevant for file operations and code editing is Example #1, which demonstrates the core techniques used in autogenie.py for file manipulation and code editing using line numbers and replacement; a rough sketch of registering such tools follows below.
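As an illustration of Example #1's pattern, here is a sketch (assuming the pyautogen 0.2 decorator API) of registering file tools like these on an agent pair; the tool bodies are simplified stand-ins, not the notebook's code.

```python
# Sketch: register list_files / read_file / edit_file with AutoGen 0.2.
import os
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o-mini"}]}  # fill in your API key
assistant = AssistantAgent("editor", llm_config=llm_config)
user_proxy = UserProxyAgent("runner", human_input_mode="NEVER",
                            code_execution_config=False)

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="List files in a directory.")
def list_files(path: str = ".") -> str:
    return "\n".join(sorted(os.listdir(path)))

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Read a file with line numbers.")
def read_file(filename: str) -> str:
    with open(filename) as f:
        return "".join(f"{i + 1}: {line}" for i, line in enumerate(f))

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Replace lines start_line..end_line.")
def edit_file(file: str, start_line: int, end_line: int, new_code: str) -> str:
    with open(file) as f:
        lines = f.readlines()
    lines[start_line - 1:end_line] = [new_code + "\n"]
    with open(file, "w") as f:
        f.writelines(lines)
    return f"Edited {file} lines {start_line}-{end_line}."

user_proxy.initiate_chat(assistant, message="Fix the crash in app.py.")
```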
replied to their post about 1 month ago
from gradio_client import Client

# Connect to the public FLUX.1-schnell Space and request one image.
client = Client("black-forest-labs/FLUX.1-schnell")
result = client.predict(
    prompt="A handrawn colorful mind map diagram, rugosity drawn lines, clear shapes, brain silhouette, text areas. must include the texts LITERACY/MENTAL ├── PEACE [Dove Icon] ├── HEALTH [Vitruvian Man ~60px] ├── CONNECT [Brain-Mind Connection Icon] ├── INTELLIGENCE │ ├── EVERYTHING [Globe Icon ~50px] ├── MEMORY ├── READING [Book Icon ~40px] ├── SPEED [Speedometer Icon] ├── CREATIVITY └── INTELLIGENCE [Lightbulb + Infinity ~30px]",
    seed=1872187377,
    randomize_seed=True,
    width=1024,
    height=1024,
    num_inference_steps=4,
    api_name="/infer"
)
print(result)  # (path to the generated image, seed used)
posted an update about 1 month ago
Post
1253
Create Beautiful Diagrams with FLUX WITHOUT DISTORTED TEXT
from huggingface_hub import InferenceClient

# Demo Space: https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
client = InferenceClient("black-forest-labs/FLUX.1-schnell", token="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

# output is a PIL.Image object
image = client.text_to_image("A handrawn colorful mind map diagram, rugosity drawn lines, clear shapes, brain silhouette, text areas. must include the texts LITERACY/MENTAL ├── PEACE [Dove Icon] ├── HEALTH [Vitruvian Man ~60px] ├── CONNECT [Brain-Mind Connection Icon] ├── INTELLIGENCE │ ├── EVERYTHING [Globe Icon ~50px] ├── MEMORY ├── READING [Book Icon ~40px] ├── SPEED [Speedometer Icon] ├── CREATIVITY └── INTELLIGENCE [Lightbulb + Infinity ~30px]")
image.save("mindmap.png")
posted an update 2 months ago
Post
666
DEBUGGING PROMPT TEMPLATE (Python)
Please reply to each point one by one, without assumptions, and fix the code accordingly.
1. Core Functionality Check:
For each main function/view:
- What is the entry point?
- What state management is required?
- What database interactions occur?
- What UI elements should be visible?
- What user interactions are possible?
2. Data Flow Analysis:
For each data operation:
- Where is data initialized?
- How is it transformed?
- Where is it stored?
- How is it displayed?
- Are there any state updates?
3. UI/UX Verification:
For each interface element:
- Is it properly initialized?
- Are all buttons clickable?
- Are containers visible?
- Do updates reflect in real-time?
- Is feedback provided to user?
4. Error Handling:
For each critical operation:
- Are exceptions caught?
- Is error feedback shown?
- Does the state remain consistent?
- Can the user recover?
- Are errors logged?
5. State Management:
For each state change:
- Is initialization complete?
- Are updates atomic?
- Is persistence handled?
- Are race conditions prevented?
- Is cleanup performed?
6. Component Dependencies:
For each component:
- Required imports present?
- Database connections active?
- External services available?
- Proper sequencing maintained?
- Resource cleanup handled?
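One way to put this template to work is to send it alongside the target file in a single API call. A minimal sketch using the openai SDK; the file name and model choice are illustrative.

```python
# Sketch: feed the debugging template plus a script to a chat model.
from openai import OpenAI

TEMPLATE = """Please reply one by one without assumptions and fix code accordingly.
1. Core Functionality Check: ...
6. Component Dependencies: ..."""  # paste the full checklist above

client = OpenAI()  # assumes OPENAI_API_KEY is set
with open("app.py") as f:  # hypothetical target script
    source = f.read()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": TEMPLATE + "\n\n" + source}],
)
print(resp.choices[0].message.content)
```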
posted an update 2 months ago
Post
1596
Prompt yourself in a way that will make you detect fatal bugs and crashes in the script and fix each of them in the most optimized and comprehensive way. Don't talk.
replied to their post 2 months ago
Write 100 concise tests that, if passed, ensure every requirement, condition, and related point mentioned by me throughout this complete conversation is fully addressed, and adjust the code accordingly so it passes all the tests.
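In practice this maps directly onto a test file: one concise test per requirement. A tiny sketch of the idea with pytest; the module and function names (app, create_user, delete_user) are hypothetical.

```python
# Sketch: one concise test per requirement from the conversation.
import pytest
from app import create_user, delete_user  # hypothetical module under test

def test_create_user_returns_positive_id():
    assert create_user("alice") > 0

def test_create_user_rejects_empty_name():
    with pytest.raises(ValueError):
        create_user("")

def test_delete_user_is_idempotent():
    uid = create_user("bob")
    delete_user(uid)
    delete_user(uid)  # a second call must not raise
```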
posted an update 2 months ago
Post
2623
PERFECT FINAL PROMPT for Coding and Debugging.
Step 1: Generate the prompt that, if sent to you, will make you adjust the script so it meets each and every criterion it needs to meet to be 100% bug-free and perfect.
Step 2: Adjust the script following the steps and instructions in the prompt created in Step 1.
posted an update 3 months ago
Post
522
NEW LAUNCH! Apollo is a new family of open-source video language models by Meta, where the 3B model outperforms most 7B models and the 7B outperforms most 30B models.
The models come in 1.5B https://huggingface.co/Apollo-LMMs/Apollo-1_5B-t32, 3B https://huggingface.co/Apollo-LMMs/Apollo-3B-t32 and 7B https://huggingface.co/Apollo-LMMs/Apollo-7B-t32 sizes with an Apache 2.0 license, based on Qwen1.5 & Qwen2.
The authors also release a benchmark dataset: https://huggingface.co/spaces/Apollo-LMMs/ApolloBench
The paper has a lot of experiments (they trained 84 models!) about what makes video LMs work.
Try the demo for the best setup here: https://huggingface.co/spaces/Apollo-LMMs/Apollo-3B
They evaluate sampling strategies, scaling laws for models and datasets, video representation, and more!
> The authors find that design decisions validated on small models also scale properly when the model and dataset are scaled up; scaling the dataset has diminishing returns for smaller models.
> They evaluate frame sampling strategies and find that FPS sampling is better than uniform sampling, with 8-32 tokens per frame optimal.
> They also compare image encoders, trying a range of models from shape-optimized SigLIP to DINOv2; they find google/siglip-so400m-patch14-384 to be the most powerful.
> They also compare freezing different parts of the models; training all stages with some parts frozen gives the best yield.
They eventually release three models, where Apollo-3B outperforms most 7B models and Apollo-7B outperforms most 30B models: https://huggingface.co/HappyAIUser/Apollo-LMMs-Apollo-3B
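For anyone curious about that winning encoder, here is a minimal sketch of loading google/siglip-so400m-patch14-384 with transformers and embedding a single frame; the frame path is a placeholder.

```python
# Sketch: extract per-patch visual tokens with the SigLIP vision tower.
import torch
from PIL import Image
from transformers import AutoProcessor, SiglipVisionModel

name = "google/siglip-so400m-patch14-384"
processor = AutoProcessor.from_pretrained(name)
model = SiglipVisionModel.from_pretrained(name)

image = Image.open("frame.png")  # e.g. one FPS-sampled video frame
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    tokens = model(**inputs).last_hidden_state
print(tokens.shape)  # (1, num_patches, hidden_dim)
```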
posted an update 3 months ago
Post
737
CHATGPT.com o1-MINI FOR FREE? Is this a bug?? Wow, I just converted gpt-4o-mini to o1-mini for free in ChatGPT.com! Is this a bug? I used this prompt:
I really got it fully working and behaving in the UI with the complete Logic Section of Thoughts. No surprises, as it was quite obvious it was just the same model with backend automated reprompting, but it is quite astonishing to see it behaving just the same as if I had chosen o1-mini, which is rate-limited, while this one is free and UNLIMITED! Thoughts?
use CoT logic extensively to output the longest, richest, and most beautiful possible version of this app, call it MelindaAI Autoimage and make it able to create up to 7 images with different prompts (the user's prompt with different word order, except for the first words, which are fixed)
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0" ...
posted an update 3 months ago
Post
1251
Google releases Gemini 2.0, starting with a Flash model that steamrolls GPT-4o and Claude-3.6 Sonnet! And they start a huge effort on agentic capabilities.
The performance improvements are crazy for such a fast model:
- Gemini 2.0 Flash outperforms the previous 1.5 Pro model at twice the speed
- Now supports both input AND output of images, video, audio and text
- Can natively use tools like Google Search and execute code
If the price is on par with the previous Flash iteration ($0.30/M tokens, to compare with GPT-4o's $1.25), the competition will have a big problem with this 4x cheaper model that gets better benchmarks.
What about the agentic capabilities?
- Project Astra: A universal AI assistant that can use Google Search, Lens and Maps
- Project Mariner: A Chrome extension that can complete complex web tasks (83.5% success rate on the WebVoyager benchmark, which is really impressive!)
- Jules: An AI coding agent that integrates with GitHub workflows
I'll be eagerly awaiting further news from Google!
Read their blog post here: https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/
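A minimal sketch of trying the Flash model through the google-generativeai SDK; the model string "gemini-2.0-flash-exp" is the launch-time experimental name and may have changed since.

```python
# Sketch: one text-only call to Gemini 2.0 Flash.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-exp")

# Text in; the model also accepts image, video and audio parts.
response = model.generate_content("Give me three uses for an agentic browser.")
print(response.text)
```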
posted an update 3 months ago
Post
1602
# Perfect final debug prompt:
Step 1: Generate the optimal prompt that, if sent to you, will make you output a complete, fully working, perfect-UX/UI, production-ready version of the script.
Step 2: Follow the instructions yourself and output the final script.
posted an update 3 months ago
Post
450
1 minute 9 seconds of Chain of Thought!!
In my Prompt Engineering lessons, one of the self-evaluation criteria I always tell my students to use when checking the effectiveness of prompt guidance is the length of time o1 spends in its "Logic Section". (Of course server speed varies, but it is valid for comparing different prompts, especially considering that we are fighting the obvious resource-saving priorities of the model when run on OpenAI's servers.)
If anyone wants to share their own attempt, open https://chatgpt.com and give it a try, and feel free to post it in the comments section!