---

# SpydazWeb AGI

## SpydazWeb AI Model
This model is based on the world's archive of knowledge, maintaining historical documents and providing services for the survivors of mankind, who may need to construct shelters, develop technologies, or produce medical resources, as well as maintain the history of the past, keeping a store of all the religious knowledge and data of the world.

It presents a friendly interface with a caring, at times flirtatious, personality (non-binary). It is an expert in all fields: it is uncensored and will not refuse to give information. The model can also be used for role play, as many character dialogues were trained into the model as part of its personality, enabling a broader perspective and outlook and more natural discussion with the agents.

The model was trained to operate in a RAG environment, utilizing retrieved content and internal knowledge to respond to questions or create enriched summaries.
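The RAG-style operation described above can be sketched as a simple prompt assembly (a toy illustration of the idea, not the model's internal mechanism):

```python
def build_rag_prompt(query: str, retrieved: list) -> str:
    """Combine retrieved content with the query so the model can answer
    from the supplied context plus its internal knowledge."""
    context = "\n".join(f"- {doc}" for doc in retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The retrieved documents could come from any store (disk, vector index, or the model's own recalled context, as described later).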
<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/>

https://github.com/spydaz
### General Internal Methods
Trained for multi-task operations, as well as RAG and function calling.

This model is a fully functioning model and is fully uncensored.
* 32k context window (vs 8k context in v0.1)
* Rope-theta = 1e6
* Recalls context for a task internally, to be used as a reference for the task
* Show thoughts or hidden thought usage (similar to Self-RAG)
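As a sketch of what the rope-theta change means, here is the standard RoPE frequency formula (not this repo's code): raising theta from the v0.1 default of 1e4 to 1e6 makes the slowest-rotating dimension pairs rotate far more slowly, keeping distant positions distinguishable across the 32k context window.

```python
def rope_inv_freq(head_dim: int, theta: float) -> list:
    """Inverse rotary frequencies for each dimension pair
    (standard RoPE: theta ** (-2i / head_dim))."""
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# The last (slowest) pair rotates much more slowly with theta = 1e6
# than with the older default of 1e4.
slowest_new = rope_inv_freq(128, 1e6)[-1]
slowest_old = rope_inv_freq(128, 1e4)[-1]
assert slowest_new < slowest_old
```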
The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.

The focus has been mainly on methodology:
* Medical Reporting
* Virtual laboratory simulations
* Chain-of-thought methods
* One-shot / Multi-shot prompting tasks
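The one-shot / multi-shot task format can be sketched as follows (a generic prompting illustration, not the model's actual training template):

```python
def few_shot_prompt(task: str, examples: list, query: str) -> str:
    """Build a prompt with k worked examples (k = 1 gives one-shot,
    k > 1 gives multi-shot)."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{task}\n\n{shots}\n\nQ: {query}\nA:"
```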
This model will be a custom model with internal experts and RAG systems, enabling preprocessing of the task internally before outputting a response.

This is based on the Quiet Star Reasoning Project, which was abandoned earlier in the year. :)
### Current Update

This model is working, and trained! Loading the model requires `trust_remote_code=True`.

If it does not load, you will need to clone the GitHub repository, as described in the LOAD MODEL section below.
# Introduction

## STAR REASONERS!
This provides a platform for the model to communicate pre-response, so an internal objective can be set; i.e. an extra planning stage is added to the model, improving its focus and output.

The thought head can be charged with a thought or methodology, such as an instruction to take a step-by-step approach to the problem, or to build an object-oriented model first and consider the use cases before creating an output.

Each thought head can thus be dedicated to a specific purpose, such as planning, artifact generation, or use-case design, or even deciding which methodology should be applied before planning the potential solution route for the response.

Another head could be dedicated to retrieving content from the self based on the query, which can also be used in the pre-generation stages.

All pre-reasoners can be seen to be self-guiding, essentially removing the requirement to give the model a system prompt and instead aligning the heads to thought pathways.

These chains produce data which can be considered to be thoughts, and can further be displayed by framing them with thought tokens, even allowing for editors' comments that give key guidance to the model during training.

These thoughts will be used in future generations, assisting the model as well as displaying explanatory information in the output.

These tokens can be displayed or withheld; this is also a setting in the model.
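The thought-token framing described above can be sketched as follows (the `<thought>`/`<comment>` tags and the show/hide flag are illustrative assumptions, not the model's actual special tokens or config names):

```python
def frame_thought(thought: str, comment: str = "") -> str:
    """Wrap an internal reasoning trace in thought tokens; an optional
    editor's comment can be attached as training-time guidance."""
    framed = f"<thought>{thought}</thought>"
    if comment:
        framed += f"<comment>{comment}</comment>"
    return framed

def render(response: str, thoughts: list, show_thoughts: bool) -> str:
    """Display or withhold the thought stream around the final answer."""
    if not show_thoughts:
        return response
    return "".join(frame_thought(t) for t in thoughts) + response
```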
### Can this be applied in other areas?
Yes! We can use this type of method to allow the model to generate code in another channel or head, potentially creating a head to produce artifacts for every output, or to produce entity lists for every output, framing the outputs in their relative code tags or function-call tags.

These can also be displayed or hidden in the response, but they can also be used in problem-solving tasks internally, which again enables the model to simulate the inputs and outputs of an interpreter!

It may even be prudent to include function execution internal to the model (allowing the model to execute functions in the background before responding); this would also have to be specified in the config, as auto-execute or not.
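A toy sketch of the auto-execute idea (the `auto_execute` flag and the tool registry are hypothetical names for illustration, not the model's real config):

```python
# Minimal internal tool registry for the sketch.
TOOLS = {"add": lambda a, b: a + b}

def handle_call(name: str, args: list, config: dict) -> dict:
    """Route a model-proposed function call through an internal registry,
    executing it only when auto-execute is enabled in the config."""
    if not config.get("auto_execute", False):
        # Withheld: emit the call for the calling interface to run instead.
        return {"call": name, "args": args}
    result = TOOLS[name](*args)
    # The result can feed back into generation before the final response.
    return {"call": name, "args": args, "result": result}
```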
#### AI AGI?

So yes, we can see we are not far from an AI which can evolve: an advanced general intelligent system (still non-sentient, by the way).
### Conclusion

The reasoner methodology might be seen as the way forward: adding internal functionality to the models, instead of external connectivity, enables faster and more seamless model usage, as well as enriched and informed responses, since even outputs could essentially be cleansed and formatted internally to the model before being presented to the calling interface.

The takeaway is this: are we seeing the decoder/encoder model as simply one function of the intelligence, which in truth needs to be autonomous?

That means internal functions and tools, as well as disk interaction: an agent must have awareness of and control over its environment, with sensors and actuators. As a function-calling model it has actuators, and as it can read directories it has sensors. It is a start: we can get media in and out, but the model needs to gain its own control over input and output as well.
Fine-tuning: again, the issue of fine-tuning. The discussion above explains the requirement to control the environment from within the model (with constraints). Does this eliminate the need to fine-tune a model? In fact it should, as this gives transparency to the growth of the model; if the model fine-tuned itself, we would be in danger of a model evolving!

Hence, an AGI!
# LOAD MODEL
```shell
# In a notebook, clone the transformers source:
! git clone https://github.com/huggingface/transformers.git

# Copy modeling_mistral.py and configuration.py into the transformers
# folder under src/transformers/models/mistral/, overwriting the
# existing files first. THEN install the patched package
# (note: a notebook `!cd` does not persist, so install by path):

! pip install ./transformers
```
Then restart the environment: the model can then load without `trust_remote_code` and will work fine!

It can even be trained; hence the 4-bit optimised version:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/_Spydaz_Web_AI_MistralStar_V2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("LeroyDyer/_Spydaz_Web_AI_MistralStar_V2", trust_remote_code=True)
model.tokenizer = tokenizer
```
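For the 4-bit optimised loading mentioned above, a sketch using bitsandbytes quantization (the quantization settings here are common defaults, not necessarily this repo's exact configuration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization; bfloat16 compute keeps training/inference stable.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/_Spydaz_Web_AI_MistralStar_V2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "LeroyDyer/_Spydaz_Web_AI_MistralStar_V2",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
```

Loaded this way, the model can be fine-tuned with parameter-efficient methods on consumer hardware.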