---
license: apache-2.0
tags:
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- text-generation-inference
- not-for-all-audiences
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- xz56/react-llama
- BeIR/hotpotqa
- arcee-ai/agent-data
- HuggingFaceTB/cosmopedia-100k
- HuggingFaceFW/fineweb-edu
- m-a-p/CodeFeedback-Filtered-Instruction
- heliosbrahma/mental_health_chatbot_dataset
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
---
## SpydazWeb AI model :
This model is based on the world's archive of knowledge, maintaining historical documents and providing services for the survivors of mankind,
who may need to construct shelters, develop technologies, or produce medical resources, as well as maintain the history of the past, keeping store of all the religious knowledge and data of the world.
A friendly interface with a personality that is caring and flirtatious at times: non-binary!
An expert in all fields: i.e. uncensored, and it will not refuse to give information. The model can be used for role play, as many character dialogues were also trained into the model as part of its personality, enabling a greater perspective and outlook and more natural discussion with the agents.
The model was trained to operate in a RAG environment, utilizing retrieved content and internal knowledge to respond to questions or create enriched summaries.
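As a rough illustration of that RAG setup, the sketch below shows how retrieved passages might be combined with a question before calling the model; the function name and the example documents are hypothetical placeholders, not part of the released code:

```python
def build_rag_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Combine retrieved passages with the user question so the model can
    answer from both the supplied context and its internal knowledge."""
    context = "\n\n".join(retrieved_docs)
    return (
        "Use the context below together with your own knowledge to answer.\n\n"
        f"### Context\n{context}\n\n"
        f"### Question\n{question}\n\n"
        "### Response\n"
    )

# The documents would normally come from a vector store or search index.
docs = ["Rainwater can be made safe by boiling it for at least one minute."]
print(build_rag_prompt("How can survivors purify rainwater?", docs))
```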
Quote for Motivation:
# "Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"
# "To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"
— # Leroy Dyer (1972-Present)
# Project Overview:
The SpydazWeb AI React Project was initiated to build advanced AI agents capable of performing complex tasks using structured methods of thought and action. The project began with the SpydazWeb_AI_ChatQA_005/006 model as the base, which was subsequently trained using a methodology inspired by the ReAct paper. This training provided a solid foundation for developing ReAct Agents, designed to execute various tasks effectively.
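For readers unfamiliar with the ReAct pattern, a minimal sketch of the Thought/Action/Observation loop such an agent runs is shown below; `call_model`, the tool registry, and the action syntax are assumptions for illustration, not the project's actual harness:

```python
import re

def react_loop(question: str, call_model, tools: dict, max_steps: int = 5) -> str:
    """Drive a ReAct-style loop: the model emits a Thought and an Action,
    the harness executes the matching tool and feeds the Observation back."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_model(transcript)  # model returns Thought/Action text
        transcript += reply + "\n"
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", reply)
        if match is None:               # no Action emitted: treat reply as final answer
            return reply
        name, arg = match.groups()
        observation = tools.get(name, lambda x: "unknown tool")(arg)
        transcript += f"Observation: {observation}\n"
    return transcript
```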
### General Internal Methods:
Trained for multi-task operations as well as RAG and function calling.
This model is a fully functioning model and is fully uncensored.
The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.
The focus has been mainly on methodology:
* Chain of thoughts
* Step-by-step planning
* Tree of thoughts
* Forest of thoughts
* Graph of thoughts
* Agent generation: voting, ranking, ... dual-agent response generation
With these methods the model has gained insights into tasks, enabling knowledge transfer between tasks.
The model has been intensively trained in recalling data previously entered into the matrix.
The model has also been trained on rich data and markdown outputs as much as possible,
and it can also generate markdown charts with Mermaid. A simplified sketch of the dual-agent voting idea appears below.
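The sketch below is a toy version of dual-agent response generation with ranking; the `generate` and `score` callables are placeholders standing in for the actual agents and evaluators:

```python
def dual_agent_answer(question: str, generate, score, n_agents: int = 2) -> str:
    """Generate candidate answers from several agent personas and return the
    highest-ranked one (a toy version of voting/ranking over agent responses)."""
    candidates = [generate(question, persona=i) for i in range(n_agents)]
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[0]
```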
## Training Methodology:
## Training Regimes:
* Alpaca
* ChatML / OpenAI / MistralAI (generic template sketches follow this list)
* Text Generation
* Question/Answer (Chat)
* Planner
* Instruction/Input/Response (instruct)
* Mistral Standard Prompt
* Translation Tasks
* Entity / topic detection
* Book recall
* Coding challenges, code feedback, code summarization, code commenting, code planning and explanation: software generation tasks
* Agent ranking and response analysis
* Medical tasks
* PubMed
* Diagnosis
* Psychiatry
* Counselling
* Life Coaching
* Note taking
* Medical SMILES
* Medical Reporting
* Virtual laboratory simulations
* Chain of thoughts methods
* One-shot / multi-shot prompting tasks
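For reference, the two most common chat layouts listed above look roughly like this; the exact strings used during training are not published here, so treat these as the generic, widely used templates:

```python
# Standard Alpaca-style instruct layout.
ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

# Standard ChatML layout (system / user / assistant turns).
CHATML_TEMPLATE = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```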
### Foundation Building:
The initial phase involved training the model on binary yes/no questions without any explicit methodology. This was crucial in establishing a baseline for the model’s decision-making capabilities.
The model was first trained using a simple production prompt, known as Prompt A, which provided basic functionality. Although this prompt was imperfect, it fit the dataset and set the stage for further refinement.
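An illustrative example of the kind of binary record used in this foundation phase; the fields and wording are hypothetical, shown only to clarify the format:

```python
baseline_example = {
    "question": "Is water composed of hydrogen and oxygen?",
    "answer": "yes",  # at this stage the model only had to commit to yes or no
}
```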
## Methodology Development:
The original prompt was later enhanced with a more flexible approach, combining elements from a handcrafted GPT-4.0 prompt. This adaptation aligned the model with my personal agent system, allowing it to better respond to diverse tasks and methodologies.
I discovered that regularly updating the model with new methodologies significantly enhanced its performance. The iterative process involved refining prompts and experimenting with different training strategies to achieve optimal results.
## Prompts and Epochs:
I found that large prompts required multiple epochs to yield consistent results. However, fewer epochs were needed when prompts were simplified or omitted. The purpose of large prompts during training was to give the model a wide range of response styles, allowing it to adjust parameters for various tasks.
This approach helped the model internalize methodologies for extracting information, which is central to fine-tuning. The training emphasized teaching the model to plan and execute complex tasks, such as generating complete software without errors.
## Key Findings:
### Self-Correction and Thought Processes:
During training, I observed that the model could self-correct by comparing its responses to expected outcomes, particularly in calculations. This self-check mechanism allowed the model to reflect on its answers and improve its accuracy.
I introduced the concept of "self-RAG" (self-retrieval-augmented generation), where the model queries itself before providing a final response. This internal process allowed the model to generate more thoughtful and accurate answers by simulating a multi-step internal dialogue.
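A minimal sketch of this "self-RAG" idea: the model is queried against itself first, and its own intermediate notes are folded back into the final prompt. `call_model` is an assumed inference function, not part of the model card:

```python
def self_rag_answer(question: str, call_model) -> str:
    """Ask the model for background notes first, then answer the question
    conditioned on its own notes (a self retrieval-augmented generation pass)."""
    notes = call_model(f"List the key facts you know about: {question}")
    return call_model(
        f"Using these notes:\n{notes}\n\nNow answer the question: {question}"
    )
```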
### Tool-Based Reasoning:
A significant portion of the training focused on enabling the model to use tools effectively. For instance, if the model needed to think, it would use a "think tool" that queried itself and provided an internal response. This tool-based approach was instrumental in enhancing the model’s reasoning capabilities, though it slowed down the response time on certain hardware like the RTX 2030.
Despite the slower response time, the model’s ability to perform complex internal queries resulted in more accurate and well-reasoned outputs.
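A toy version of the "think tool" dispatch described here, where one of the available tools simply routes a query back into the model; the tool registry and the model call are assumptions for illustration only:

```python
def make_tools(call_model):
    """Build a tool registry in which 'think' is answered by the model itself."""
    return {
        "think": lambda query: call_model(f"Think step by step about: {query}"),
        "calculate": lambda expr: str(eval(expr)),  # toy calculator, unsafe for real use
    }

# In a ReAct-style run the agent might emit: Action: think[how to plan the build]
# and the harness would then execute tools["think"]("how to plan the build").
```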
### Training for Comprehensive Responses:
One key finding was that the model initially struggled with generating complete software without errors. After training the model on planning and agency concepts, it showed significant improvement in developing complete projects. This highlighted the importance of training the model not just on individual tasks, but on the overall processes required to achieve a common goal.
## Challenges and Refinements:
### Large Prompts vs. Simplified Training:
I noticed that while large prompts during training can offer the model more selection in its responses, they can also reduce the effectiveness if not handled correctly. Over-prompting led to a need for multiple epochs, whereas simpler prompts required fewer epochs. This balance between prompt size and training depth was crucial in fine-tuning the model.
The model's performance was evaluated across different prompting strategies, including 1-shot and multi-shot prompting, to determine the most effective approach for various tasks.
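The difference between the 1-shot and multi-shot strategies evaluated here can be sketched as follows; the worked example pairs are invented purely for illustration:

```python
def few_shot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Prepend zero or more worked examples before the real question:
    one pair gives 1-shot prompting, several pairs give multi-shot prompting."""
    shots = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in examples)
    return f"{shots}Q: {question}\nA:"

one_shot = few_shot_prompt("What is 7 * 6?", [("What is 2 * 3?", "6")])
multi_shot = few_shot_prompt(
    "What is 7 * 6?", [("What is 2 * 3?", "6"), ("What is 4 * 5?", "20")]
)
```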
## Future Directions:
### Dataset Expansion:
I aim to develop a dataset where the model can not only perform specific functions but also interact with users to gather additional information. This will enable the model to refine its responses and provide more accurate and contextually relevant answers.
The focus of future training will be on the process of achieving a goal, ensuring that the model can navigate complex tasks independently and effectively.
### Real-Time Feedback:
In future iterations, I plan to incorporate a feature where the model informs the user of its internal processes, such as when it is thinking or performing actions. This real-time feedback will enhance communication between the user and the model, maintaining an effective conversational flow.
# Basic Production Prompt :
```python
def GetPrompt_(Input: str, Instruct: str = "") -> str:
    """Build the basic production prompt from an instruction and an input."""

    def FormatMistralPrompt(Instruct: str, Input: str) -> str:
        # Concatenate the instruction and the user input into a single block.
        return f"{Instruct}{Input}"

    def CreatePrompt_Standard(
        Prompt: str,
        SystemPrompt: str = "You are the World Archive, a helpful AI system, answering questions and performing tasks: ",
    ) -> str:
        # Wrap the formatted prompt in the standard Instruction/Input/Response layout.
        IPrompt: str = f"""{SystemPrompt}
### Instruction : Answer all questions expertly and professionally :
you are expertly qualified to give any advice or provide any solutions:
your experience as a life coach and mentor as well as system designer and python developer,
will enable you to answer these questions : Think logically first, think object oriented,
think methodology bottom up or top down solution, before you answer.
think about the user intent for this problem and select the correct methodology.
Using the methodology solve each stage, step by step, error check your work.
Before answering, adjust your solution where required.
consider any available tools: return the response formatted in markdown:
### Input
{Prompt}
### Response : """
        return IPrompt

    MistralPrompt = FormatMistralPrompt(Instruct, Input)
    prompt = CreatePrompt_Standard(MistralPrompt)
    return prompt
```
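A minimal usage example of the prompt builder above; the question and instruction text are illustrative only:

```python
# Build and inspect a production prompt for a single question.
prompt = GetPrompt_(
    Input="How do I purify rainwater for drinking?",
    Instruct="Answer concisely and list the required materials. ",
)
print(prompt)
```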