
# Ruby dataset

## Custom Ruby dataset

- rspec_dataset

The rspec dataset is described below.

## Bigcode dataset

- ruby-dataset
- shell-dataset
- python-dataset
- sql-dataset

## rspec dataset

It gathers specs for `app/services` from the following repos, because most of the business logic in these applications lives in `app/services`.

```python
REPO_URLS = [
    'https://github.com/diaspora/diaspora.git',
    'https://github.com/mastodon/mastodon.git',
    'https://github.com/gitlabhq/gitlabhq.git',
    'https://github.com/discourse/discourse.git',
    'https://github.com/chatwoot/chatwoot.git',
    'https://github.com/opf/openproject.git',
]
```
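The README does not show how a spec is matched to its implementation, but under the standard RSpec layout a file `spec/services/foo_spec.rb` mirrors `app/services/foo.rb`. A minimal sketch of that mapping (the path convention is an assumption, not stated above):

```python
from pathlib import PurePosixPath

def spec_to_source(spec_path: str) -> str:
    """Map an RSpec file under spec/services to its app/services source file."""
    rel = PurePosixPath(spec_path).relative_to("spec/services")
    # spec/services/foo/bar_spec.rb -> app/services/foo/bar.rb
    source_name = rel.name.removesuffix("_spec.rb") + ".rb"
    return str(PurePosixPath("app/services") / rel.parent / source_name)
```

For example, `spec_to_source("spec/services/post_service_spec.rb")` yields `"app/services/post_service.rb"`, and nested directories are preserved.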

### Output

| Repository  | Avg Source Lines | Avg Test Lines | Test Cases |
|-------------|------------------|----------------|------------|
| diaspora    | 62               | 156            | 12         |
| mastodon    | 97               | 131            | 59         |
| gitlabhq    | 66               | 154            | 952        |
| discourse   | 188              | 303            | 49         |
| chatwoot    | 63               | 107            | 50         |
| openproject | 86               | 178            | 98         |
| **Total**   | 74               | 159            | 1220       |
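The Total row appears to be a test-case-weighted average of the per-repo figures (not a plain mean of the column). A quick check, using only the numbers from the table above, reproduces it:

```python
avg_source_lines = [62, 97, 66, 188, 63, 86]
avg_test_lines = [156, 131, 154, 303, 107, 178]
test_cases = [12, 59, 952, 49, 50, 98]

def weighted_avg(values, weights):
    """Average per-repo values, weighted by each repo's test-case count."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

round(weighted_avg(avg_source_lines, test_cases))  # -> 74
round(weighted_avg(avg_test_lines, test_cases))    # -> 159
```

The heavy weight of gitlabhq (952 of 1220 test cases) is why the totals sit close to its per-repo averages.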

```python
avg_source_lines = [62, 97, 66, 188, 63, 86]
avg_test_lines = [156, 131, 154, 303, 107, 178]
test_cases = [12, 59, 952, 49, 50, 98]

# Assume an average of 10 tokens per line of code,
# a rough figure for programming languages
tokens_per_line = 10

# Total tokens across the per-repo average source and test line counts
total_source_tokens = sum(lines * tokens_per_line for lines in avg_source_lines)
total_test_tokens = sum(lines * tokens_per_line for lines in avg_test_lines)

total_tokens = total_source_tokens + total_test_tokens

# Average tokens per test case
avg_tokens_per_test_case = total_tokens / sum(test_cases)

total_tokens, avg_tokens_per_test_case
# -> (15910, 13.040983606557377)
```

When preparing data for training or inference with an LLM, each example (here, each test case or code snippet) needs to fit within the model's context window. The average of roughly 13 tokens per test case calculated above is well within the limits of current LLMs.
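A complementary per-example estimate can use the Total row's averages directly: one source file plus its spec, at the same assumed 10 tokens per line. The 4096-token window below is an arbitrary example size, not something stated in this README:

```python
TOKENS_PER_LINE = 10    # rough per-line assumption from the estimate above
CONTEXT_WINDOW = 4096   # hypothetical context window size, for illustration

def fits_context(avg_source_lines, avg_test_lines, window=CONTEXT_WINDOW):
    """Estimate tokens for one source+spec pair and check it fits the window."""
    estimated_tokens = (avg_source_lines + avg_test_lines) * TOKENS_PER_LINE
    return estimated_tokens, estimated_tokens <= window

fits_context(74, 159)  # -> (2330, True)
```

Under these assumptions, even the largest repo in the table (discourse, 188 + 303 lines) stays under the example window.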