Lauther committed (verified)
Commit 504a6ee
1 Parent(s): 6f371db

Add new SentenceTransformer model
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1536,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
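The pooling configuration above enables only `pooling_mode_mean_tokens`: token embeddings from the 1536-dimensional Qwen2 backbone are averaged, ignoring padding. A minimal sketch of that operation in plain PyTorch (illustrative only, not the sentence-transformers `Pooling` module itself):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Masked mean pooling: average token embeddings over non-padding positions."""
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # (batch, 1536)
    counts = mask.sum(dim=1).clamp(min=1e-9)         # (batch, 1)
    return summed / counts                           # (batch, 1536)

# Toy shapes only: batch of 2, sequence length 7, hidden size 1536.
pooled = mean_pool(torch.randn(2, 7, 1536), torch.ones(2, 7))
print(pooled.shape)  # torch.Size([2, 1536])
```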
2_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 1536, "out_features": 1024, "bias": true, "activation_function": "torch.nn.modules.linear.Identity"}
2_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:464789c715d4c8c5c7073a7b2c0c857315e11ba106fb47a504451d5bebcfc304
+ size 6295712
README.md ADDED
@@ -0,0 +1,770 @@
1
+ ---
2
+ tags:
3
+ - sentence-transformers
4
+ - sentence-similarity
5
+ - feature-extraction
6
+ - generated_from_trainer
7
+ - dataset_size:5220
8
+ - loss:CosineSimilarityLoss
9
+ base_model: NovaSearch/stella_en_1.5B_v5
10
+ widget:
11
+ - source_sentence: Identify the column that stores the uncertainty value.
12
+ sentences:
13
+ - "What is measuring equipment?\nMeasuring equipment refers to the devices that\
14
+ \ make up a measurement system. Each piece of equipment has:\n- A unique serial\
15
+ \ number for identification.\n- A technical name, such as transmitter, plate,\
16
+ \ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
17
+ \ equipment is assigned to a measurement system, it is given a unique identifier\
18
+ \ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
19
+ \ considered in use in a measurement system.\n- If it does not have a tag, it\
20
+ \ is considered spare or unused\n\nEquipment assignment based on technology:\n\
21
+ The type of equipment assigned to a measurement system depends on the technology\
22
+ \ used, for example:\n1. Differential technology (for gas measurement):\n -\
23
+ \ Static pressure transmitters\n - Differential pressure transmitters\n \
24
+ \ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
25
+ \ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
26
+ \ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
27
+ \ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
28
+ - A measurement system can have multiple pieces of equipment.\n- However, a piece\
29
+ \ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
30
+ - The database includes a special table to manage the list of equipment assigned\
31
+ \ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
32
+ \ are searching for operational equipment assigned to a measurement system.\n\
33
+ - If a user is looking for spare or unused equipment, they are searching for equipment\
34
+ \ not listed in the tagged equipment table.\n- Commonly used when user refers\
35
+ \ directly to an \"\"Equipment Tag\""
36
+ - 'What is equipment calibration?
37
+
38
+ Calibration is a metrological verification process used to ensure the accuracy
39
+ of measurement equipment. It is performed periodically, based on intervals set
40
+ by the company or a regulatory body.
41
+
42
+
43
+ Purpose of calibration:
44
+
45
+ The calibration process corrects any deviations in how the equipment measures
46
+ physical magnitudes (variables). This ensures the equipment provides accurate
47
+ and reliable data.
48
+
49
+
50
+ Calibration cycles:
51
+
52
+ There are two main calibration cycles:
53
+
54
+ 1. As-found: Represents the equipment''s measurement accuracy before any adjustments
55
+ are made. This cycle is almost always implemented.
56
+
57
+ 2. As-left: Represents the equipment''s measurement accuracy after adjustments
58
+ are made. This cycle is used depending on regulatory requirements.
59
+
60
+
61
+ Calibration uncertainty:
62
+
63
+ - Uncertainty is included in the results of a calibration.
64
+
65
+ - Calibration uncertainty refers to the margin of error in the device''s measurements,
66
+ which also affects the uncertainty of the measured variable or magnitude.'
67
+ - 'What kind of data store an equipment?
68
+
69
+ Equipments can capture meteorological data, such as pressure, temperature, and
70
+ volume (magnitudes). This data is essential for users to perform various calculations.
71
+
72
+
73
+ Data storage:
74
+
75
+ - The measured values are stored in a special table in the database for magnitudes.
76
+ This table contains the values of the variables captured by the equipments.
77
+
78
+ - These values are **direct measurements** from the fluid (e.g., raw pressure,
79
+ temperature, or volume readings). **They are not calculated values**, such as
80
+ uncertainty.
81
+
82
+ - The values stored in the variable values table are **different** from variable
83
+ uncertainty values, which are calculated separately and represent the margin of
84
+ error.
85
+
86
+
87
+ Accessing the data:
88
+
89
+ - Users typically access the data by referring to the readings from the measurement
90
+ system, not directly from the individual equipments.
91
+
92
+ - The readings are stored in a "variable values" table within the database.
93
+
94
+
95
+ Linking variable names:
96
+
97
+ If the user needs to know the name of a variable, they must link the data to another
98
+ table that stores information about the types of variables.'
99
+ - source_sentence: SELECT * FROM EquipmentType LIMIT 1
100
+ sentences:
101
+ - 'What kind of data store an equipment?
102
+
103
+ Equipments can capture meteorological data, such as pressure, temperature, and
104
+ volume (magnitudes). This data is essential for users to perform various calculations.
105
+
106
+
107
+ Data storage:
108
+
109
+ - The measured values are stored in a special table in the database for magnitudes.
110
+ This table contains the values of the variables captured by the equipments.
111
+
112
+ - These values are **direct measurements** from the fluid (e.g., raw pressure,
113
+ temperature, or volume readings). **They are not calculated values**, such as
114
+ uncertainty.
115
+
116
+ - The values stored in the variable values table are **different** from variable
117
+ uncertainty values, which are calculated separately and represent the margin of
118
+ error.
119
+
120
+
121
+ Accessing the data:
122
+
123
+ - Users typically access the data by referring to the readings from the measurement
124
+ system, not directly from the individual equipments.
125
+
126
+ - The readings are stored in a "variable values" table within the database.
127
+
128
+
129
+ Linking variable names:
130
+
131
+ If the user needs to know the name of a variable, they must link the data to another
132
+ table that stores information about the types of variables.'
133
+ - "How does a flow computer generate and store reports?\nA flow computer generates\
134
+ \ daily or hourly reports to provide users with operational data. These reports\
135
+ \ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
136
+ - Each report includes:\n- Date and time of the data recording.\n- Data recorded\
137
+ \ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
138
+ \ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
139
+ \ identifier.\n2. Detail table:\n - Stores the measured values associated with\
140
+ \ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
141
+ \ are linked to a Modbus table. This table contains the names corresponding to\
142
+ \ each value in the reports, making it easier to interpret the data."
143
+ - 'What is a flow computer?
144
+
145
+ A flow computer is a device used in measurement engineering. It collects analog
146
+ and digital data from flow meters and other sensors.
147
+
148
+
149
+ Key features of a flow computer:
150
+
151
+ - It has a unique name, firmware version, and manufacturer information.
152
+
153
+ - It is designed to record and process data such as temperature, pressure, and
154
+ fluid volume (for gases or oils).
155
+
156
+
157
+ Main function:
158
+
159
+ The flow computer sends the collected data to a measurement system. This allows
160
+ measurement engineers to analyze the data and perform their tasks effectively.'
161
+ - source_sentence: What tables store measurement system data?
162
+ sentences:
163
+ - "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
164
+ \ and reliability of results obtained from equipment or measurement systems. It\
165
+ \ quantifies the potential error or margin of error in measurements.\n\nTypes\
166
+ \ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
167
+ \ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
168
+ \ such as temperature or pressure.\n - It is calculated after calibrating a\
169
+ \ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
170
+ \ serves as a starting point for further calculations related to the equipment.\n\
171
+ \n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
172
+ \ for the overall flow measurement.\n - It depends on the uncertainties of\
173
+ \ the individual variables (magnitudes) and represents the combined margin of\
174
+ \ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
175
+ \ (variables) are the foundation for calculating the uncertainty of the measurement\
176
+ \ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
177
+ \ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
178
+ \ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
179
+ \ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
180
+ \ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
181
+ \ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
182
+ \ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
183
+ \n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
184
+ \ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
185
+ - To find the uncertainty of the measurement system, join the measurement systems\
186
+ \ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
187
+ \ of a specific variable (magnitude), join the measurement systems table with\
188
+ \ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
189
+ \ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
190
+ \ of the measurement system, use the first join (measurement systems table + uncertainty\
191
+ \ of the measurement system table).\n- If the user requests the uncertainty of\
192
+ \ a specific variable (magnitude) in a report, use the second join (measurement\
193
+ \ systems table + uncertainty of magnitudes table)."
194
+ - "What is a measurement system?\nA measurement system, also referred to as a delivery\
195
+ \ point, measurement point, or reception point, is used to measure and monitor\
196
+ \ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
197
+ 1. Measurement technology:\n - Differential: Used for precise measurements.\n\
198
+ \ - Linear: Used for straightforward measurements.\n\n2. System identifier\
199
+ \ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
200
+ \ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
201
+ 4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
202
+ \ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
203
+ \ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
204
+ \ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
205
+ \ systems:\nMeasurement systems are classified based on the stage of the process\
206
+ \ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
207
+ - Appropriation\n- Custody\n- Production Poços"
208
+ - 'What do measurement equipment measure?
209
+
210
+ Each equipment measures a physical magnitude, also known as a variable. Based
211
+ on the type of variable they measure, devices are classified into different categories.
212
+
213
+
214
+ Equipment classification:
215
+
216
+ - Primary meter: Assigned by default to equipments like orifice plates.
217
+
218
+ - Secondary meter: Assigned by default to equipments like transmitters.
219
+
220
+ - Tertiary meter: Used for other types of equipments.
221
+
222
+
223
+ Equipment types in the database:
224
+
225
+ The database includes a table listing all equipment types. Examples of equipment
226
+ types are:
227
+
228
+ - Differential pressure transmitters
229
+
230
+ - RTDs (Resistance Temperature Detectors)
231
+
232
+ - Orifice plates
233
+
234
+ - Multivariable transmitters
235
+
236
+ - Ultrasonic meters
237
+
238
+
239
+ Meteorological checks for equipments:
240
+
241
+ Each equipment type is assigned a meteorological check, which can be either:
242
+
243
+ - Calibration: To ensure measurement accuracy.
244
+
245
+ - Inspection: To verify proper functioning.
246
+
247
+
248
+ Data storage in tables:
249
+
250
+ The database also includes a separate table for equipment classifications, which
251
+ are:
252
+
253
+ - Primary meter
254
+
255
+ - Secondary meter
256
+
257
+ - Tertiary meter
258
+
259
+ So, an equipment has equipment types and this types has classifications.'
260
+ - source_sentence: What is the table structure for equipment types?
261
+ sentences:
262
+ - "How does a flow computer generate and store reports?\nA flow computer generates\
263
+ \ daily or hourly reports to provide users with operational data. These reports\
264
+ \ are stored in the flow computer's memory in an organized format.\n\nReport structure:\n\
265
+ - Each report includes:\n- Date and time of the data recording.\n- Data recorded\
266
+ \ from flow computers.\n\nData storage in tables:\nThe reports are saved in two\
267
+ \ tables:\n1. Main table (Index):\n - Stores the date, time, and flow computer\
268
+ \ identifier.\n2. Detail table:\n - Stores the measured values associated with\
269
+ \ the report.\n\nConnection to the Modbus table:\nThe flow computer's reports\
270
+ \ are linked to a Modbus table. This table contains the names corresponding to\
271
+ \ each value in the reports, making it easier to interpret the data."
272
+ - "What is measuring equipment?\nMeasuring equipment refers to the devices that\
273
+ \ make up a measurement system. Each piece of equipment has:\n- A unique serial\
274
+ \ number for identification.\n- A technical name, such as transmitter, plate,\
275
+ \ thermometer, etc.\n\nHow is equipment assigned to a measurement system?\nWhen\
276
+ \ equipment is assigned to a measurement system, it is given a unique identifier\
277
+ \ called an \"\"Equipment Tag.\"\"\n- If a piece of equipment has a tag, it is\
278
+ \ considered in use in a measurement system.\n- If it does not have a tag, it\
279
+ \ is considered spare or unused\n\nEquipment assignment based on technology:\n\
280
+ The type of equipment assigned to a measurement system depends on the technology\
281
+ \ used, for example:\n1. Differential technology (for gas measurement):\n -\
282
+ \ Static pressure transmitters\n - Differential pressure transmitters\n \
283
+ \ - Temperature transmitters\n - RTDs (thermometers)\n - Orifice plates\n\
284
+ \ - Straight stretch\n\n2. Linear technology (for gas measurement):\n -\
285
+ \ Temperature transmitters\n - RTDs\n - Static pressure transmitters\n \
286
+ \ - Ultrasonic meters\n\nRelationship between equipment and measurement systems:\n\
287
+ - A measurement system can have multiple pieces of equipment.\n- However, a piece\
288
+ \ of equipment can only be assigned to one measurement system.\n\nDatabase management:\n\
289
+ - The database includes a special table to manage the list of equipment assigned\
290
+ \ to measurement systems.\n- When a user refers to an \"\"Equipment Tag\"\", they\
291
+ \ are searching for operational equipment assigned to a measurement system.\n\
292
+ - If a user is looking for spare or unused equipment, they are searching for equipment\
293
+ \ not listed in the tagged equipment table.\n- Commonly used when user refers\
294
+ \ directly to an \"\"Equipment Tag\""
295
+ - "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
296
+ \ and reliability of results obtained from equipment or measurement systems. It\
297
+ \ quantifies the potential error or margin of error in measurements.\n\nTypes\
298
+ \ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
299
+ \ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
300
+ \ such as temperature or pressure.\n - It is calculated after calibrating a\
301
+ \ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
302
+ \ serves as a starting point for further calculations related to the equipment.\n\
303
+ \n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
304
+ \ for the overall flow measurement.\n - It depends on the uncertainties of\
305
+ \ the individual variables (magnitudes) and represents the combined margin of\
306
+ \ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
307
+ \ (variables) are the foundation for calculating the uncertainty of the measurement\
308
+ \ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
309
+ \ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
310
+ \ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
311
+ \ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
312
+ \ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
313
+ \ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
314
+ \ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
315
+ \n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
316
+ \ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
317
+ - To find the uncertainty of the measurement system, join the measurement systems\
318
+ \ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
319
+ \ of a specific variable (magnitude), join the measurement systems table with\
320
+ \ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
321
+ \ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
322
+ \ of the measurement system, use the first join (measurement systems table + uncertainty\
323
+ \ of the measurement system table).\n- If the user requests the uncertainty of\
324
+ \ a specific variable (magnitude) in a report, use the second join (measurement\
325
+ \ systems table + uncertainty of magnitudes table)."
326
+ - source_sentence: What columns store the uncertainty values?
327
+ sentences:
328
+ - "What is a measurement system?\nA measurement system, also referred to as a delivery\
329
+ \ point, measurement point, or reception point, is used to measure and monitor\
330
+ \ fluids in industrial processes.\n\nKey characteristics of a measurement system:\n\
331
+ 1. Measurement technology:\n - Differential: Used for precise measurements.\n\
332
+ \ - Linear: Used for straightforward measurements.\n\n2. System identifier\
333
+ \ (TAG):\n - A unique identifier for the system.\n\n3. Fluid type:\n - The\
334
+ \ system can measure gases, oils, condensates, water, steam, or other fluids.\n\
335
+ 4. System type:\n - Specifies the category or purpose of the system.\n\nMeasurement\
336
+ \ technology by fluid type:\n- Gas measurement systems: Use both linear and differential\
337
+ \ measurement technologies.\n- Oil measurement systems: Do not use linear or differential\
338
+ \ technologies; they are programmed differently.\"\n\n\nClassification of measurement\
339
+ \ systems:\nMeasurement systems are classified based on the stage of the process\
340
+ \ in which they are used. Common classifications include:\n- Fiscal\n- Operational\n\
341
+ - Appropriation\n- Custody\n- Production Poços"
342
+ - 'How are flow computers and measurement systems related?
343
+
344
+ Flow computers can have multiple systems assigned to them. However, a measurement
345
+ system can only be assigned to one flow computer.
346
+
347
+
348
+ Database terminology:
349
+
350
+ In the database, this relationship is referred to as:
351
+
352
+ - Meter streams
353
+
354
+ - Meter runs
355
+
356
+ - Sections
357
+
358
+
359
+ Storage of the relationship:
360
+
361
+ The relationship between a flow computer and its assigned measurement system is
362
+ stored in a special table.
363
+
364
+
365
+ User context:
366
+
367
+ When a user refers to a "meter stream," they are indicating that they are searching
368
+ for a measurement system assigned to a specific flow computer.'
369
+ - "What is uncertainty?\nUncertainty is a measure of confidence in the precision\
370
+ \ and reliability of results obtained from equipment or measurement systems. It\
371
+ \ quantifies the potential error or margin of error in measurements.\n\nTypes\
372
+ \ of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of\
373
+ \ magnitudes (variables):\n - Refers to the uncertainty of specific variables,\
374
+ \ such as temperature or pressure.\n - It is calculated after calibrating a\
375
+ \ device or obtained from the equipment manufacturer's manual.\n - This uncertainty\
376
+ \ serves as a starting point for further calculations related to the equipment.\n\
377
+ \n2. Uncertainty of the measurement system:\n - Refers to the uncertainty calculated\
378
+ \ for the overall flow measurement.\n - It depends on the uncertainties of\
379
+ \ the individual variables (magnitudes) and represents the combined margin of\
380
+ \ error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes\
381
+ \ (variables) are the foundation for calculating the uncertainty of the measurement\
382
+ \ system. Think of them as the \"building blocks.\"\n- Do not confuse the two\
383
+ \ types of uncertainty:\n - **Uncertainty of magnitudes/variables**: Specific\
384
+ \ to individual variables (e.g., temperature, pressure).\n - **Uncertainty\
385
+ \ of the measurement system**: Specific to the overall flow measurement.\n\nDatabase\
386
+ \ storage for uncertainties:\nIn the database, uncertainty calculations are stored\
387
+ \ in two separate tables:\n1. Uncertainty of magnitudes (variables):\n - Stores\
388
+ \ the uncertainty values for specific variables (e.g., temperature, pressure).\n\
389
+ \n2. Uncertainty of the measurement system:\n - Stores the uncertainty values\
390
+ \ for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n\
391
+ - To find the uncertainty of the measurement system, join the measurement systems\
392
+ \ table with the uncertainty of the measurement system table.\n- To find the uncertainty\
393
+ \ of a specific variable (magnitude), join the measurement systems table with\
394
+ \ the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not\
395
+ \ confuse the two types of uncertainty:\n- If the user requests the uncertainty\
396
+ \ of the measurement system, use the first join (measurement systems table + uncertainty\
397
+ \ of the measurement system table).\n- If the user requests the uncertainty of\
398
+ \ a specific variable (magnitude) in a report, use the second join (measurement\
399
+ \ systems table + uncertainty of magnitudes table)."
400
+ datasets:
401
+ - Lauther/embeddings-train-semantic
402
+ pipeline_tag: sentence-similarity
403
+ library_name: sentence-transformers
404
+ ---
+
+ # SentenceTransformer based on NovaSearch/stella_en_1.5B_v5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NovaSearch/stella_en_1.5B_v5](https://huggingface.co/NovaSearch/stella_en_1.5B_v5) on the [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [NovaSearch/stella_en_1.5B_v5](https://huggingface.co/NovaSearch/stella_en_1.5B_v5) <!-- at revision f10d4793289fa0d0a36978d6ecd0a9eaa6781f06 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+     - [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic)
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model
+   (1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Dense({'in_features': 1536, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("Lauther/emb-stella_en_1.5B_v5-1e")
+ # Run inference
+ sentences = [
+     'What columns store the uncertainty values?',
+     'How are flow computers and measurement systems related?\nFlow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.\n\nDatabase terminology:\nIn the database, this relationship is referred to as:\n- Meter streams\n- Meter runs\n- Sections\n\nStorage of the relationship:\nThe relationship between a flow computer and its assigned measurement system is stored in a special table.\n\nUser context:\nWhen a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.',
+     'What is uncertainty?\nUncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.\n\nTypes of uncertainty:\nThere are two main types of uncertainty:\n1. Uncertainty of magnitudes (variables):\n   - Refers to the uncertainty of specific variables, such as temperature or pressure.\n   - It is calculated after calibrating a device or obtained from the equipment manufacturer\'s manual.\n   - This uncertainty serves as a starting point for further calculations related to the equipment.\n\n2. Uncertainty of the measurement system:\n   - Refers to the uncertainty calculated for the overall flow measurement.\n   - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.\n\nKey points:\n- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of the measurement system. Think of them as the "building blocks."\n- Do not confuse the two types of uncertainty:\n   - **Uncertainty of magnitudes/variables**: Specific to individual variables (e.g., temperature, pressure).\n   - **Uncertainty of the measurement system**: Specific to the overall flow measurement.\n\nDatabase storage for uncertainties:\nIn the database, uncertainty calculations are stored in two separate tables:\n1. Uncertainty of magnitudes (variables):\n   - Stores the uncertainty values for specific variables (e.g., temperature, pressure).\n\n2. Uncertainty of the measurement system:\n   - Stores the uncertainty values for the overall flow measurement system.\n\nHow to retrieve uncertainty data:\n- To find the uncertainty of the measurement system, join the measurement systems table with the uncertainty of the measurement system table.\n- To find the uncertainty of a specific variable (magnitude), join the measurement systems table with the uncertainty of magnitudes (variables) table.\n\nImportant note:\nDo not confuse the two types of uncertainty:\n- If the user requests the uncertainty of the measurement system, use the first join (measurement systems table + uncertainty of the measurement system table).\n- If the user requests the uncertainty of a specific variable (magnitude) in a report, use the second join (measurement systems table + uncertainty of magnitudes table).',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
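As a small, hedged extension of the example above (reusing only the `sentences` and `similarities` objects it already defines), the cosine scores can be used directly to rank the two passages against the query in `sentences[0]`:

```python
# similarities is a torch tensor; row 0 holds the query's scores against all three inputs.
query_to_passages = similarities[0, 1:]      # scores vs. sentences[1] and sentences[2]
best = int(query_to_passages.argmax())       # 0-based index within the passage slice
print(sentences[best + 1][:80], float(query_to_passages[best]))
```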
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### embeddings-train-semantic
+
+ * Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
+ * Size: 5,220 training samples
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
+ * Approximate statistics based on the first 1000 samples:
+   | | sentence1 | sentence2 | score |
+   |:--------|:----------|:----------|:------|
+   | type | string | string | float |
+   | details | <ul><li>min: 5 tokens</li><li>mean: 14.22 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 105 tokens</li><li>mean: 219.9 tokens</li><li>max: 447 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.23</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | sentence1 | sentence2 | score |
+   |:----------|:----------|:------|
+   | <code>What is the data type of differential pressure in the measurement system?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
+   | <code>What is the structure of the &&&equipment_data&&& table?</code> | <code>How are flow computers and measurement systems related?<br>Flow computers can have multiple systems assigned to them. However, a measurement system can only be assigned to one flow computer.<br><br>Database terminology:<br>In the database, this relationship is referred to as:<br>- Meter streams<br>- Meter runs<br>- Sections<br><br>Storage of the relationship:<br>The relationship between a flow computer and its assigned measurement system is stored in a special table.<br><br>User context:<br>When a user refers to a "meter stream," they are indicating that they are searching for a measurement system assigned to a specific flow computer.</code> | <code>0.35000000000000003</code> |
+   | <code>Find the columns in the flow computer table that identify the flow computer.</code> | <code>What kind of data store an equipment?<br>Equipments can capture meteorological data, such as pressure, temperature, and volume (magnitudes). This data is essential for users to perform various calculations.<br><br>Data storage:<br>- The measured values are stored in a special table in the database for magnitudes. This table contains the values of the variables captured by the equipments.<br>- These values are **direct measurements** from the fluid (e.g., raw pressure, temperature, or volume readings). **They are not calculated values**, such as uncertainty.<br>- The values stored in the variable values table are **different** from variable uncertainty values, which are calculated separately and represent the margin of error.<br><br>Accessing the data:<br>- Users typically access the data by referring to the readings from the measurement system, not directly from the individual equipments.<br>- The readings are stored in a "variable values" table within the database.<br><br>Linking variable names:<br>If the user needs to kno...</code> | <code>0.1</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
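For context on the loss named above: each training example pairs `sentence1` and `sentence2` with a float `score`, and `CosineSimilarityLoss` regresses the cosine similarity of the two embeddings onto that score using the configured `MSELoss`. A conceptual sketch in plain PyTorch (not the library code; the tensors are random placeholders):

```python
import torch
import torch.nn.functional as F

emb1 = torch.randn(4, 1024)                     # embeddings of a batch of sentence1 texts
emb2 = torch.randn(4, 1024)                     # embeddings of the paired sentence2 texts
scores = torch.tensor([0.15, 0.35, 0.1, 0.9])   # labeled similarity scores from the dataset

# The cosine similarity of each pair is pushed toward its labeled score with mean squared error.
loss = F.mse_loss(F.cosine_similarity(emb1, emb2), scores)
print(loss)
```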
+
+ ### Evaluation Dataset
+
+ #### embeddings-train-semantic
+
+ * Dataset: [embeddings-train-semantic](https://huggingface.co/datasets/Lauther/embeddings-train-semantic) at [ce90f53](https://huggingface.co/datasets/Lauther/embeddings-train-semantic/tree/ce90f531bc39037053d223b27868ad178852f330)
+ * Size: 652 evaluation samples
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
+ * Approximate statistics based on the first 652 samples:
+   | | sentence1 | sentence2 | score |
+   |:--------|:----------|:----------|:------|
+   | type | string | string | float |
+   | details | <ul><li>min: 5 tokens</li><li>mean: 13.83 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 105 tokens</li><li>mean: 217.37 tokens</li><li>max: 447 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 0.9</li></ul> |
+ * Samples:
+   | sentence1 | sentence2 | score |
+   |:----------|:----------|:------|
+   | <code>How can I filter uncertainty reports by equipment tag?</code> | <code>How does a flow computer generate and store reports?<br>A flow computer generates daily or hourly reports to provide users with operational data. These reports are stored in the flow computer's memory in an organized format.<br><br>Report structure:<br>- Each report includes:<br>- Date and time of the data recording.<br>- Data recorded from flow computers.<br><br>Data storage in tables:<br>The reports are saved in two tables:<br>1. Main table (Index):<br> - Stores the date, time, and flow computer identifier.<br>2. Detail table:<br> - Stores the measured values associated with the report.<br><br>Connection to the Modbus table:<br>The flow computer's reports are linked to a Modbus table. This table contains the names corresponding to each value in the reports, making it easier to interpret the data.</code> | <code>0.09999999999999999</code> |
+   | <code>What is the purpose of the flow_data table?</code> | <code>What is uncertainty?<br>Uncertainty is a measure of confidence in the precision and reliability of results obtained from equipment or measurement systems. It quantifies the potential error or margin of error in measurements.<br><br>Types of uncertainty:<br>There are two main types of uncertainty:<br>1. Uncertainty of magnitudes (variables):<br> - Refers to the uncertainty of specific variables, such as temperature or pressure.<br> - It is calculated after calibrating a device or obtained from the equipment manufacturer's manual.<br> - This uncertainty serves as a starting point for further calculations related to the equipment.<br><br>2. Uncertainty of the measurement system:<br> - Refers to the uncertainty calculated for the overall flow measurement.<br> - It depends on the uncertainties of the individual variables (magnitudes) and represents the combined margin of error for the entire system.<br><br>Key points:<br>- The uncertainties of magnitudes (variables) are the foundation for calculating the uncertainty of ...</code> | <code>0.15000000000000002</code> |
+   | <code>What is the column name for the report date in the Reports table?</code> | <code>What is equipment calibration?<br>Calibration is a metrological verification process used to ensure the accuracy of measurement equipment. It is performed periodically, based on intervals set by the company or a regulatory body.<br><br>Purpose of calibration:<br>The calibration process corrects any deviations in how the equipment measures physical magnitudes (variables). This ensures the equipment provides accurate and reliable data.<br><br>Calibration cycles:<br>There are two main calibration cycles:<br>1. As-found: Represents the equipment's measurement accuracy before any adjustments are made. This cycle is almost always implemented.<br>2. As-left: Represents the equipment's measurement accuracy after adjustments are made. This cycle is used depending on regulatory requirements.<br><br>Calibration uncertainty:<br>- Uncertainty is included in the results of a calibration.<br>- Calibration uncertainty refers to the margin of error in the device's measurements, which also affects the uncertainty of the measured variable or ...</code> | <code>0.1</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 4
+ - `per_device_eval_batch_size`: 4
+ - `gradient_accumulation_steps`: 4
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+
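A rough sketch of how a run with these non-default hyperparameters could be reproduced with the sentence-transformers 3.x trainer API. The dataset split names, output directory, and `trust_remote_code` handling are assumptions, not taken from this commit:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Base model and dataset from the model card; loading details are assumptions.
model = SentenceTransformer("NovaSearch/stella_en_1.5B_v5", trust_remote_code=True)
dataset = load_dataset("Lauther/embeddings-train-semantic")

args = SentenceTransformerTrainingArguments(
    output_dir="emb-stella_en_1.5B_v5-1e",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],           # evaluation split name is an assumption
    loss=losses.CosineSimilarityLoss(model),
)
trainer.train()
```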
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 4
+ - `per_device_eval_batch_size`: 4
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 4
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0307 | 10 | 0.2817 | - |
+ | 0.0613 | 20 | 0.1694 | - |
+ | 0.0920 | 30 | 0.1173 | - |
+ | 0.1226 | 40 | 0.0953 | - |
+ | 0.1533 | 50 | 0.0959 | 0.0250 |
+ | 0.1839 | 60 | 0.0948 | - |
+ | 0.2146 | 70 | 0.1095 | - |
+ | 0.2452 | 80 | 0.1269 | - |
+ | 0.2759 | 90 | 0.1023 | - |
+ | 0.3065 | 100 | 0.0775 | 0.0220 |
+ | 0.3372 | 110 | 0.099 | - |
+ | 0.3678 | 120 | 0.077 | - |
+ | 0.3985 | 130 | 0.0837 | - |
+ | 0.4291 | 140 | 0.0677 | - |
+ | 0.4598 | 150 | 0.077 | 0.0198 |
+ | 0.4904 | 160 | 0.0793 | - |
+ | 0.5211 | 170 | 0.0847 | - |
+ | 0.5517 | 180 | 0.0786 | - |
+ | 0.5824 | 190 | 0.0601 | - |
+ | 0.6130 | 200 | 0.0474 | 0.0166 |
+ | 0.6437 | 210 | 0.0778 | - |
+ | 0.6743 | 220 | 0.0699 | - |
+ | 0.7050 | 230 | 0.066 | - |
+ | 0.7356 | 240 | 0.0741 | - |
+ | 0.7663 | 250 | 0.0576 | 0.0136 |
+ | 0.7969 | 260 | 0.0418 | - |
+ | 0.8276 | 270 | 0.0648 | - |
+ | 0.8582 | 280 | 0.0566 | - |
+ | 0.8889 | 290 | 0.0625 | - |
+ | 0.9195 | 300 | 0.0487 | 0.0131 |
+ | 0.9502 | 310 | 0.0533 | - |
+ | 0.9808 | 320 | 0.0405 | - |
+
+
+ ### Framework Versions
+ - Python: 3.11.0
+ - Sentence Transformers: 3.4.0
+ - Transformers: 4.48.1
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "_name_or_path": "NovaSearch/stella_en_1.5B_v5",
+   "architectures": [
+     "Qwen2Model"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoModel": "NovaSearch/stella_en_1.5B_v5--modeling_qwen.Qwen2Model",
+     "AutoModelForCausalLM": "NovaSearch/stella_en_1.5B_v5--modeling_qwen.Qwen2ForCausalLM",
+     "AutoModelForSequenceClassification": "NovaSearch/stella_en_1.5B_v5--modeling_qwen.Qwen2ForSequenceClassification"
+   },
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "hidden_act": "silu",
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "intermediate_size": 8960,
+   "max_position_embeddings": 131072,
+   "max_window_layers": 21,
+   "model_type": "qwen2",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 28,
+   "num_key_value_heads": 2,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.1",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151646
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.0",
+     "transformers": "4.48.1",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {
+     "s2p_query": "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: ",
+     "s2s_query": "Instruct: Retrieve semantically similar text.\nQuery: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
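The `prompts` block above means the checkpoint ships two named prompts inherited from stella_en_1.5B_v5: `s2p_query` for search-style queries and `s2s_query` for sentence-to-sentence similarity. A hedged usage sketch; applying the prompt to the query side only mirrors the base model's convention and should be verified for this finetune:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Lauther/emb-stella_en_1.5B_v5-1e")

# Named prompts from config_sentence_transformers.json are selected via `prompt_name`.
query_emb = model.encode(
    ["What columns store the uncertainty values?"],
    prompt_name="s2p_query",  # prepends the "Instruct: ... \nQuery: " prefix to the query text
)
passage_emb = model.encode(["Uncertainty values for magnitudes are stored in a dedicated table."])

print(model.similarity(query_emb, passage_emb))
```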
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e499e68eb42b021043ab66e3ae7d13509288379837baeb31c846786f8c5caaf1
+ size 4994887136
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79fbbf072409d268af717b5cec330de29c65c18a9b262195e870137e169bee6e
+ size 1178224504
model.safetensors.index.json ADDED
@@ -0,0 +1,345 @@
+ {
+   "metadata": {
+     "total_size": 6173075456
+   },
+   "weight_map": {
+     "embed_tokens.weight": "model-00001-of-00002.safetensors",
+     "layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
71
+ "layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
72
+ "layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
73
+ "layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
74
+ "layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
75
+ "layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
76
+ "layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
77
+ "layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
78
+ "layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
79
+ "layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
80
+ "layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
81
+ "layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
82
+ "layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
83
+ "layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
84
+ "layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
85
+ "layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
86
+ "layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
87
+ "layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
88
+ "layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
89
+ "layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
90
+ "layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
91
+ "layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
92
+ "layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
93
+ "layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
94
+ "layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
95
+ "layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
96
+ "layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
97
+ "layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
98
+ "layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
99
+ "layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
100
+ "layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
101
+ "layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
102
+ "layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
103
+ "layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
104
+ "layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
105
+ "layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
106
+ "layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
107
+ "layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
108
+ "layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
109
+ "layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
110
+ "layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
111
+ "layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
112
+ "layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
113
+ "layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
114
+ "layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
115
+ "layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
116
+ "layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
117
+ "layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
118
+ "layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
119
+ "layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
120
+ "layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
121
+ "layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
122
+ "layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
123
+ "layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
124
+ "layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
125
+ "layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
126
+ "layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
127
+ "layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
128
+ "layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
129
+ "layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
130
+ "layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
131
+ "layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
132
+ "layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
133
+ "layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
134
+ "layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
135
+ "layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
136
+ "layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
137
+ "layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
138
+ "layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
139
+ "layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
140
+ "layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
141
+ "layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
142
+ "layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
143
+ "layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
144
+ "layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
145
+ "layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
146
+ "layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
147
+ "layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
148
+ "layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
149
+ "layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
150
+ "layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
151
+ "layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
152
+ "layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
153
+ "layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
154
+ "layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
155
+ "layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
156
+ "layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
157
+ "layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
158
+ "layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
159
+ "layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
160
+ "layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
161
+ "layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
162
+ "layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
163
+ "layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
164
+ "layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
165
+ "layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
166
+ "layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
167
+ "layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
168
+ "layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
169
+ "layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
170
+ "layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
171
+ "layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
172
+ "layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
173
+ "layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
174
+ "layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
175
+ "layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
176
+ "layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
177
+ "layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
178
+ "layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
179
+ "layers.21.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
180
+ "layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
181
+ "layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
182
+ "layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
183
+ "layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
184
+ "layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
185
+ "layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
186
+ "layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
187
+ "layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
188
+ "layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
189
+ "layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
190
+ "layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
191
+ "layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
192
+ "layers.22.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
193
+ "layers.22.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
194
+ "layers.22.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
195
+ "layers.22.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
196
+ "layers.22.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
197
+ "layers.22.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
198
+ "layers.22.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
199
+ "layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
200
+ "layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
201
+ "layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
202
+ "layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
203
+ "layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
204
+ "layers.23.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
205
+ "layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
206
+ "layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
207
+ "layers.23.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
208
+ "layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
209
+ "layers.23.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
210
+ "layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
211
+ "layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
212
+ "layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
213
+ "layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
214
+ "layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
215
+ "layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
216
+ "layers.24.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
217
+ "layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
218
+ "layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
219
+ "layers.24.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
220
+ "layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
221
+ "layers.24.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
222
+ "layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
223
+ "layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
224
+ "layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
225
+ "layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
226
+ "layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
227
+ "layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
228
+ "layers.25.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
229
+ "layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
230
+ "layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
231
+ "layers.25.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
232
+ "layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
233
+ "layers.25.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
234
+ "layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
235
+ "layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
236
+ "layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
237
+ "layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
238
+ "layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
239
+ "layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
240
+ "layers.26.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
241
+ "layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
242
+ "layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
243
+ "layers.26.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
244
+ "layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
245
+ "layers.26.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
246
+ "layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
247
+ "layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
248
+ "layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
249
+ "layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
250
+ "layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
251
+ "layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
252
+ "layers.27.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
253
+ "layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
254
+ "layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
255
+ "layers.27.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
256
+ "layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
257
+ "layers.27.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
258
+ "layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
259
+ "layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
260
+ "layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
261
+ "layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
262
+ "layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
263
+ "layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
264
+ "layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
265
+ "layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
266
+ "layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
267
+ "layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
268
+ "layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
269
+ "layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
270
+ "layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
271
+ "layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
272
+ "layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
273
+ "layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
274
+ "layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
275
+ "layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
276
+ "layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
277
+ "layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
278
+ "layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
279
+ "layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
280
+ "layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
281
+ "layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
282
+ "layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
283
+ "layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
284
+ "layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
285
+ "layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
286
+ "layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
287
+ "layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
288
+ "layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
289
+ "layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
290
+ "layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
291
+ "layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
292
+ "layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
293
+ "layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
294
+ "layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
295
+ "layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
296
+ "layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
297
+ "layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
298
+ "layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
299
+ "layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
300
+ "layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
301
+ "layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
302
+ "layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
303
+ "layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
304
+ "layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
305
+ "layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
306
+ "layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
307
+ "layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
308
+ "layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
309
+ "layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
310
+ "layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
311
+ "layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
312
+ "layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
313
+ "layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
314
+ "layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
315
+ "layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
316
+ "layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
317
+ "layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
318
+ "layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
319
+ "layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
320
+ "layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
321
+ "layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
322
+ "layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
323
+ "layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
324
+ "layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
325
+ "layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
326
+ "layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
327
+ "layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
328
+ "layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
329
+ "layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
330
+ "layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
331
+ "layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
332
+ "layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
333
+ "layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
334
+ "layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
335
+ "layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
336
+ "layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
337
+ "layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
338
+ "layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
339
+ "layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
340
+ "layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
341
+ "layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
342
+ "layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
343
+ "norm.weight": "model-00002-of-00002.safetensors"
344
+ }
345
+ }
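
The index above is only a lookup table from tensor name to the shard that stores it; a short illustrative sketch (paths are placeholders) of resolving and loading one tensor with `safetensors`:

```python
import json
from safetensors import safe_open

repo_dir = "path/to/this-model"  # placeholder for a local clone

with open(f"{repo_dir}/model.safetensors.index.json") as fh:
    index = json.load(fh)

name = "layers.0.mlp.down_proj.weight"
shard = index["weight_map"][name]  # -> "model-00001-of-00002.safetensors"

with safe_open(f"{repo_dir}/{shard}", framework="pt") as fh:
    tensor = fh.get_tensor(name)

print(name, tuple(tensor.shape), "from", shard)
```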
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Dense",
+ "type": "sentence_transformers.models.Dense"
+ }
+ ]
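
modules.json chains three stages: the Qwen2-based Transformer backbone, mean pooling over token embeddings (1_Pooling), and a Dense projection from 1536 down to 1024 dimensions (2_Dense). `SentenceTransformer` assembles this pipeline automatically from the files above; the sketch below rebuilds roughly the same stack by hand, assuming a local clone (the path is a placeholder, and `trust_remote_code` may be needed for the custom tokenizer mapping):

```python
import torch.nn as nn
from sentence_transformers import SentenceTransformer, models

base = "path/to/this-model"  # placeholder for a local clone

transformer = models.Transformer(
    base,
    max_seq_length=512,
    tokenizer_args={"trust_remote_code": True},  # custom Qwen tokenizer auto_map
)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 1536 for this backbone
    pooling_mode="mean",
)
dense = models.Dense(in_features=1536, out_features=1024,
                     activation_function=nn.Identity())

model = SentenceTransformer(modules=[transformer, pooling, dense])
print(model.get_sentence_embedding_dimension())  # expected: 1024
```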
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>"
+ ],
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f79052deba517b0663d877714e117a31a4a6243cddb85fc4443c80a2fa65a20
+ size 11419302
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>"
+ ],
+ "auto_map": {
+ "AutoTokenizer": [
+ "NovaSearch/stella_en_1.5B_v5--tokenization_qwen.Qwen2Tokenizer",
+ "NovaSearch/stella_en_1.5B_v5--tokenization_qwen.Qwen2TokenizerFast"
+ ]
+ },
+ "bos_token": null,
+ "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 512,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
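
The tokenizer is Qwen2's BPE tokenizer with `<|endoftext|>` doubling as EOS and padding token and inputs capped at `model_max_length` = 512. A quick check, with a placeholder local path and `trust_remote_code=True` assumed because of the `auto_map` entry above:

```python
from transformers import AutoTokenizer

# Placeholder path; trust_remote_code is assumed to be required by the
# custom tokenizer classes referenced in auto_map.
tok = AutoTokenizer.from_pretrained("path/to/this-model", trust_remote_code=True)

batch = tok(
    ["an example sentence to embed"],
    padding=True,
    truncation=True,  # truncates to model_max_length = 512
    return_tensors="pt",
)

print(tok.pad_token, tok.eos_token)  # both "<|endoftext|>"
print(batch["input_ids"].shape)
```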
vocab.json ADDED
The diff for this file is too large to render. See raw diff