nxphi47 committed · Commit 2521712 · verified · 1 Parent(s): 9e43fb6

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -272,7 +272,7 @@ Evaluation Code: [SalesforceAIResearch/SFR-RAG](https://github.com/SalesforceAIR
 
 ContextualBench is a powerful evaluation framework designed to assess the performance of Large Language Models (LLMs) on contextual datasets. It provides a flexible pipeline for evaluating various LLM families across different tasks, with a focus on handling large context inputs.
 
-> Each individual evaluation dataset in ContextualBench is licensed separately and must be adhered by a user.
+> Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data.
 
 
 ## Features
@@ -284,7 +284,7 @@ ContextualBench is a powerful evaluation framework designed to assess the perfor
 
 The dataset can be loaded using the command
 ```python
-task = "hotpotqa" # it can be any other option like triviaqa,popqa,2wiki, MuSiQue, NaturalQuestions etc.
+task = "hotpotqa" # it can be any other option
 load_dataset("Salesforce/ContextualBench", task, split="validation")
 ```
 
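
Since the commit shortens the README comment that listed the alternative task names, a small sketch of validating a task name before passing it to `load_dataset` may help. This is a hypothetical helper, not part of ContextualBench; the option list comes from the comment removed in this diff (triviaqa, popqa, 2wiki, MuSiQue, NaturalQuestions) and the exact spelling of the dataset's config names is an assumption.

```python
# Hypothetical helper (not part of ContextualBench): normalize and check a
# task name before handing it to datasets.load_dataset. KNOWN_TASKS is taken
# from the README comment removed in this commit and may not match the
# dataset's actual config names exactly.
KNOWN_TASKS = {"hotpotqa", "triviaqa", "popqa", "2wiki", "musique", "naturalquestions"}

def resolve_task(name: str) -> str:
    """Lowercase/trim a task name and reject anything outside KNOWN_TASKS."""
    key = name.strip().lower()
    if key not in KNOWN_TASKS:
        raise ValueError(f"unknown task {name!r}; expected one of {sorted(KNOWN_TASKS)}")
    return key

# The validated name is then passed straight through, e.g.:
# from datasets import load_dataset
# ds = load_dataset("Salesforce/ContextualBench", resolve_task("HotpotQA"), split="validation")
```

Failing fast on a typo here is cheaper than letting `load_dataset` attempt a download for a config that does not exist.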
290