doberst committed on
Commit 0dfded8
1 Parent(s): a38ac65

Update README.md

Files changed (1)
  1. README.md +8 -18
README.md CHANGED
@@ -1,38 +1,28 @@
  ---
  license: apache-2.0
  inference: false
- tags: [green, p1, llmware-fx, ov, emerald]
+ tags: [green, p3, llmware-fx, ov, emerald]
  ---

- # slim-extract-tiny-ov
+ # slim-boolean-phi-3-ov

- **slim-extract-tiny-ov** is a specialized function calling model with a single mission to look for values in a text, based on an "extract" key that is passed as a parameter. No other instructions are required except to pass the context passage, and the target key, and the model will generate a python dictionary consisting of the extract key and a list of the values found in the text, including an 'empty list' if the text does not provide an answer for the value of the selected key.
+ **slim-boolean-phi-3-ov** is a specialized function calling model optimized for boolean (yes/no) questions. The model expects as input a text passage context, and a boolean question, and generates a python dictionary consisting of two keys - an "answer" key with the 'yes or no' classification, and an "explain" key that provides a short explanation.

- This is an OpenVino int4 quantized version of slim-extract-tiny, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
+ This is an OpenVino int4 quantized version of slim-boolean-phi-3, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.


  ### Model Description

  - **Developed by:** llmware
- - **Model type:** tinyllama
- - **Parameters:** 1.1 billion
- - **Model Parent:** llmware/slim-extract-tiny
+ - **Model type:** phi-3
+ - **Parameters:** 3.8 billion
+ - **Model Parent:** llmware/slim-boolean-phi-3
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
- - **Uses:** Extraction of values from complex business documents
+ - **Uses:** Boolean question answering
  - **RAG Benchmark Accuracy Score:** NA
  - **Quantization:** int4

- ### Example Usage
-
- from llmware.models import ModelCatalog
-
- text_passage = "The company announced that for the current quarter the total revenue increased by 9% to $125 million."
- model = ModelCatalog().load_model("slim-extract-tiny-ov")
- llm_response = model.function_call(text_passage, function="extract", params=["revenue"])
-
- Output: `llm_response = {"revenue": ["$125 million"]}`
-

  ## Model Card Contact
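
For quick reference, below is a minimal usage sketch for the updated model, mirroring the `ModelCatalog().load_model(...).function_call(...)` pattern from the Example Usage section removed in this commit. The catalog name "slim-boolean-phi-3-ov", the function name "boolean", and the question-style params value are assumptions based on that earlier pattern and are not confirmed by this diff.

    from llmware.models import ModelCatalog

    # Hypothetical sketch: the "boolean" function name and the question-style params
    # entry are assumed, following the function_call pattern from the removed example.
    text_passage = ("The company announced that for the current quarter "
                    "the total revenue increased by 9% to $125 million.")

    # Assumes the model is registered in the llmware catalog under the repo name.
    model = ModelCatalog().load_model("slim-boolean-phi-3-ov")

    llm_response = model.function_call(text_passage,
                                       function="boolean",
                                       params=["did revenue increase?"])

    # Per the model card, the response is a python dictionary with an "answer" key
    # (yes/no classification) and an "explain" key (short explanation).
    print(llm_response)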