doberst committed
Commit
aeaba35
1 Parent(s): 44dbaf1

Update README.md

Files changed (1)
  1. README.md +3 -5
README.md CHANGED
@@ -58,7 +58,7 @@ The first BLING models have been trained on question-answering, key-value extrac
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-BLING has not been designed for end consumer-oriented applications, and there has been any focus in training on safeguards to mitigate potential bias. We would strongly discourage any use of BLING for any 'chatbot' use case.
+BLING has not been designed for end consumer-oriented applications, and there has not been any focus in training on safeguards to mitigate potential bias. We would strongly discourage any use of BLING for any 'chatbot' use case.
 
 
 ## How to Get Started with the Model
@@ -66,15 +66,13 @@ BLING has not been designed for end consumer-oriented applications, and there ha
 The fastest way to get started with BLING is through direct import in transformers:
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
-
 tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1.4b-0.1")
-
 model = AutoModelForCausalLM.from_pretrained("llmware/bling-1.4b-0.1")
 
 
-The BLING model was fine-tuned with a simple "<human> and <bot> wrapper", so to get the best results, wrap inference entries as:
+The BLING model was fine-tuned with a simple "'<human>' and '<bot>' wrapper", so to get the best results, wrap inference entries as:
 
-full_prompt = "<human>: " + my_prompt + "\n" + "<bot>: "
+full_prompt = "'<human>': " + my_prompt + "\n" + "'<bot>': "
 
 The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
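
For reference, the hunk above stops at loading the tokenizer and model; the steps that follow (wrapping the prompt and generating) are not shown in this diff. Below is a minimal end-to-end sketch, assuming the literal wrapper tokens are "<human>: " and "<bot>: " (the single quotes added in the new README lines look like display escaping, which this diff does not confirm) and using standard transformers generation calls; my_prompt and the generation settings are illustrative, not from the README:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1.4b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-1.4b-0.1")

# Illustrative question (hypothetical); wrap it with the <human>/<bot> markers from the README
my_prompt = "What is the total amount of the invoice?"
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>: "

# Tokenize, generate greedily, and decode only the newly generated tokens
inputs = tokenizer(full_prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True).strip())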
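
The final context line notes that closed-context samples assume the prompt has two sub-parts, but the hunk ends before listing them. As an illustration only, assuming (not confirmed by this diff) that the two sub-parts are a text passage and a question about that passage, the wrapper would be applied like this:

# Hypothetical closed-context prompt: passage first, question second, then the README wrapper
context_passage = ("Services agreement dated March 1, 2023 between ABC Inc. and XYZ LLC. "
                   "The total fee payable under this agreement is $45,000.")
question = "What is the total fee payable?"

my_prompt = context_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>: "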