Tonic committed
Commit bbde141 · 1 Parent(s): 961e4f2

Update README.md

Files changed (1): README.md +5 -28
README.md CHANGED
@@ -33,7 +33,10 @@ This is every "best reddit_question_best_answers" appended and produced accordin
 ```
 
 
-- 🌟 This dataset is internally consistent
+- 🌟 You can use it in shards or all together!
+
+- 🌟 This dataset is **internally consistent**!
+
 
 
 🤔The point is to make it easy to train models with a single correctly formatted dataset of
@@ -46,33 +49,7 @@ This is every "best reddit_question_best_answers" appended and produced accordin
 
 # How To Use :
 
-Combine random shards in random quantities to produce a very high quality conversational training dataset for fine tuning by running the following code:
-
-```python
-
-import random
-
-# Define the shards
-shards = [f"shard_{i}" for i in range(1, 34)]
-
-# Function to combine random shards
-def combine_shards():
-    # Select a random shard
-    selected_shard = random.choice(shards)
-
-    # Select a random quantity (1-5)
-    quantity = random.randint(1, 5)
-
-    return selected_shard, quantity
-
-# Example usage
-for _ in range(10):  # Combine 10 times as an example
-    shard, qty = combine_shards()
-    print(f"Combined {qty} of {shard}")
-
-```
-
-Or try combining rows line by line to save memory :
+Combine random shards in random quantities to produce a high-quality conversational training dataset for fine-tuning, or combine rows line by line to save memory, by running the following code:
 
 ```python
 
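
The replacement code block promised by the new "How To Use" paragraph is cut off by the hunk context above. As a rough illustration of the row-by-row approach it describes, here is a minimal sketch that streams a random sample of shards line by line; the file names, the JSONL format, and the `stream_random_shards` helper are assumptions for illustration, not the README's actual code:

```python
import json
import random

# Assumed shard layout: 33 newline-delimited JSON files,
# one record per line (names and format are illustrative).
SHARDS = [f"shard_{i}.jsonl" for i in range(1, 34)]

def stream_random_shards(num_shards=5, seed=None):
    """Yield rows one at a time from a random sample of shards,
    so the combined dataset never has to fit in memory."""
    rng = random.Random(seed)
    for shard in rng.sample(SHARDS, k=num_shards):
        with open(shard, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:  # skip blank lines between records
                    yield json.loads(line)

# Example usage: write a combined training file row by row.
with open("combined_train.jsonl", "w", encoding="utf-8") as out:
    for row in stream_random_shards(num_shards=5, seed=42):
        out.write(json.dumps(row, ensure_ascii=False) + "\n")
```

Because rows are yielded lazily, memory use stays at roughly one line at a time no matter how many shards are combined.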
55