inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

### Limitations and Biases
While SandboxLM performs well at detecting potentially harmful shell commands, it may not catch every edge case or obscure security risk, so it should not be relied on as the sole safeguard for mission-critical systems; combining it with other security measures is recommended. Additionally, because it was trained on specific datasets, it may reflect biases present in those datasets.
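As a minimal sketch of layering SandboxLM with an additional safeguard, the snippet below gates execution on two independent checks. The `model_says_safe` placeholder and the allowlist are illustrative assumptions, not part of SandboxLM's documented API; in practice the placeholder would be replaced by a real call to the model, as shown in the usage example above.

```python
import shlex

# Hypothetical second layer: a static allowlist of executables that are
# always considered safe, used alongside SandboxLM's verdict.
SAFE_COMMANDS = {"ls", "pwd", "echo", "cat"}

def allowlist_safe(command: str) -> bool:
    """Return True only if the command's executable is on the allowlist."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in SAFE_COMMANDS

def model_says_safe(command: str) -> bool:
    """Placeholder for a SandboxLM call (generate + decode as above).
    Here it naively flags a few destructive patterns for illustration."""
    return not any(p in command for p in ("rm -rf", "mkfs", "> /dev/"))

def should_execute(command: str) -> bool:
    # Defense in depth: both layers must agree before running anything.
    return allowlist_safe(command) and model_says_safe(command)

print(should_execute("ls -la"))       # allowlisted and model-approved
print(should_execute("rm -rf /tmp"))  # rejected by both layers
```

The point of the design is that neither layer is trusted alone: the model covers commands the static rules cannot anticipate, and the allowlist bounds the damage when the model misses an edge case.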