# Ruby dataset

**Custom Ruby dataset**

- rspec_dataset

**BigCode dataset** (see the loading sketch after this list)

- ruby-dataset
- shell-dataset
- python-dataset
- sql-dataset
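
The BigCode subsets are not defined in this repository. Assuming they are the per-language splits of `bigcode/the-stack` on the Hugging Face Hub, they could be loaded roughly like this (the dataset id and `data_dir` layout are assumptions):

```py
from datasets import load_dataset

# Per-language subsets; dataset id and data_dir layout are assumptions.
LANGUAGES = ["ruby", "shell", "python", "sql"]

subsets = {
    lang: load_dataset(
        "bigcode/the-stack", data_dir=f"data/{lang}", split="train", streaming=True
    )
    for lang in LANGUAGES
}
```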

## RSpec dataset

Specs are gathered exclusively from the `app/services` directory of the repositories listed below, since most of the business logic in these projects is encapsulated in service objects.

```py
REPO_URLS = [
    'https://github.com/diaspora/diaspora.git',
    'https://github.com/mastodon/mastodon.git',
    'https://github.com/gitlabhq/gitlabhq.git',
    'https://github.com/discourse/discourse.git',
    'https://github.com/chatwoot/chatwoot.git',
    'https://github.com/opf/openproject.git',
]
```
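
A minimal sketch of how the service/spec pairs might be collected, assuming each service under `app/services` has a matching spec under `spec/services` (the helper name, clone depth, and working directory are illustrative):

```py
import subprocess
from pathlib import Path

WORKDIR = Path("repos")  # illustrative clone target

def collect_service_spec_pairs(repo_url: str) -> list[tuple[Path, Path]]:
    """Clone a repository and pair each service file with its spec, if one exists."""
    name = repo_url.rstrip("/").removesuffix(".git").rsplit("/", 1)[-1]
    repo_dir = WORKDIR / name
    if not repo_dir.exists():
        subprocess.run(["git", "clone", "--depth", "1", repo_url, str(repo_dir)], check=True)

    pairs = []
    for source in repo_dir.glob("app/services/**/*.rb"):
        relative = source.relative_to(repo_dir / "app" / "services")
        spec = repo_dir / "spec" / "services" / relative.with_name(relative.stem + "_spec.rb")
        if spec.exists():
            pairs.append((source, spec))
    return pairs
```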
Output:

```sh
Repository           Avg Source Lines Avg Test Lines  Test Cases
diaspora             62              156             12
mastodon             97              131             59
gitlabhq             66              154             952
discourse            188             303             49
chatwoot             63              107             50
openproject          86              178             98
------------------------------------------------------------
Total                74              159             1220
------------------------------------------------------------

# avg_source_lines = [62, 97, 66, 188, 63, 86]
# avg_test_lines = [156, 131, 154, 303, 107, 178]
# test_cases = [12, 59, 952, 49, 50, 98]

# Assuming roughly 10 tokens per line of code, a common rule of thumb for source code
# tokens_per_line = 10

# Each example pairs one source file with its spec, so the average example size is
# (avg source lines + avg test lines) * tokens_per_line. Using the totals row:
# avg_tokens_per_example = (74 + 159) * tokens_per_line
# -> roughly 2,330 tokens per example
```
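
The same estimate as runnable code, weighting each repository by the number of test cases it contributes (the per-repository figures come from the table above; 10 tokens per line is a rough assumption):

```py
# Per-repository averages and test-case counts, copied from the table above.
avg_source_lines = [62, 97, 66, 188, 63, 86]
avg_test_lines = [156, 131, 154, 303, 107, 178]
test_cases = [12, 59, 952, 49, 50, 98]

TOKENS_PER_LINE = 10  # rough assumption for Ruby code
total_cases = sum(test_cases)

# Weight each repository's averages by how many test cases it contributes.
weighted_source = sum(s * n for s, n in zip(avg_source_lines, test_cases)) / total_cases
weighted_test = sum(t * n for t, n in zip(avg_test_lines, test_cases)) / total_cases

avg_tokens_per_example = (weighted_source + weighted_test) * TOKENS_PER_LINE
print(f"{avg_tokens_per_example:.0f} tokens per example")  # -> 2327 tokens per example
```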

When preparing data for training or inference with an LLM, each example (here, a service file together with its spec) has to fit within the model's context window. At roughly 2,300 tokens per example on average, these pairs fit comfortably within the context windows of current LLMs.
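
For a tighter check than the 10-tokens-per-line rule of thumb, individual pairs can be measured with an actual tokenizer. The sketch below uses `tiktoken`'s `cl100k_base` encoding as a stand-in; the encoding and the 4,096-token window are assumptions that should be adjusted to the target model.

```py
import tiktoken

def fits_in_context(source_code: str, spec_code: str, context_window: int = 4096) -> bool:
    """Return True if a source/spec pair fits in the assumed context window."""
    enc = tiktoken.get_encoding("cl100k_base")  # stand-in encoding; match your target model
    total_tokens = len(enc.encode(source_code)) + len(enc.encode(spec_code))
    return total_tokens <= context_window
```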