Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.

Error code: `DatasetGenerationCastError`

Exception: `DatasetGenerationCastError`

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 2 new columns ({'license', 'likes'}) and 5 missing columns ({'flageval_id', 'job_start_time', 'job_id', 'eval_id', 'architectures'}). This happened while the json dataset builder was generating data using hf://datasets/open-cn-llm-leaderboard/requests/01-ai/Yi-1.5-34B-Chat_eval_request_False_float16_Original.json (at revision 535cc66f1da4aa92a8b60248b6fb04597a510e18). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
model: string
base_model: string
revision: string
private: bool
precision: string
weight_type: string
status: string
submitted_time: timestamp[s]
model_type: string
likes: int64
params: double
license: string
to
{'model': Value(dtype='string', id=None), 'base_model': Value(dtype='string', id=None), 'revision': Value(dtype='string', id=None), 'private': Value(dtype='bool', id=None), 'precision': Value(dtype='string', id=None), 'params': Value(dtype='float64', id=None), 'architectures': Value(dtype='string', id=None), 'weight_type': Value(dtype='string', id=None), 'status': Value(dtype='string', id=None), 'submitted_time': Value(dtype='timestamp[s]', id=None), 'model_type': Value(dtype='string', id=None), 'job_id': Value(dtype='int64', id=None), 'job_start_time': Value(dtype='null', id=None), 'eval_id': Value(dtype='int64', id=None), 'flageval_id': Value(dtype='int64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'license', 'likes'}) and 5 missing columns ({'flageval_id', 'job_start_time', 'job_id', 'eval_id', 'architectures'}).
This happened while the json dataset builder was generating data using hf://datasets/open-cn-llm-leaderboard/requests/01-ai/Yi-1.5-34B-Chat_eval_request_False_float16_Original.json (at revision 535cc66f1da4aa92a8b60248b6fb04597a510e18)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
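The mismatch is between an older request schema (which carries `likes` and `license`) and a newer one (which carries `job_id`, `eval_id`, and friends). One fix the error message suggests is editing the files to have matching columns; a minimal sketch of that idea, using two hypothetical records modeled on the rows below, is to pad every record to the union of all keys before building a single-schema table:

```python
# Hypothetical request records with mismatched keys, mirroring the cast
# error above: the old schema has likes/license, the new one job_id/eval_id.
old_record = {"model": "01-ai/Yi-1.5-34B-Chat", "precision": "float16",
              "likes": 84, "license": "apache-2.0"}
new_record = {"model": "01-ai/Yi-1.5-34B-32K", "precision": "bfloat16",
              "job_id": -1, "eval_id": 4400}

def unify_columns(records):
    """Pad each record to the union of all keys so one schema fits them all."""
    all_keys = sorted({key for record in records for key in record})
    return [{key: record.get(key) for key in all_keys} for record in records]

rows = unify_columns([old_record, new_record])
# Every row now has the same columns; absent values become None (JSON null).
assert set(rows[0]) == set(rows[1])
```

Rewriting each JSON file this way (e.g. with `None` for the missing fields) would give every data file the same columns, which is what the `json` builder requires.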
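The other fix the error message suggests is splitting the two schemas into separate configurations via the README.md front matter (the manual-configuration docs linked above). A hypothetical sketch, assuming the old-schema and new-schema files were first moved into separate folders (the config names and glob patterns are illustrative, not the repo's actual layout):

```yaml
configs:
- config_name: legacy_requests
  data_files: "legacy/**/*.json"   # files with the likes/license schema
- config_name: current_requests
  data_files: "current/**/*.json"  # files with the job_id/eval_id schema
```

With separate configs, each one is built against its own consistent schema, so the cast error no longer applies.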
| model (string) | base_model (string) | revision (string) | private (bool) | precision (string) | params (float64) | architectures (string) | weight_type (string) | status (string) | submitted_time (timestamp[us]) | model_type (string) | job_id (int64) | job_start_time (null) | eval_id (int64) | flageval_id (int64) | likes (int64) | license (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 01-ai/Yi-1.5-34B-32K | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-05-24T23:58:39 | 🟢 : pretrained | -1 | null | 4,400 | 707 | null | null |
| 01-ai/Yi-1.5-34B-Chat-16K | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-05-24T23:59:06 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,402 | 708 | null | null |
| 01-ai/Yi-1.5-34B-Chat | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-05-15T01:37:19 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,069 | 622 | null | null |
| 01-ai/Yi-1.5-34B-Chat | | main | false | float16 | 34.389 | null | Original | CANCELLED | 2024-05-15T03:04:37 | 💬 chat models (RLHF, DPO, IFT, ...) | null | null | null | null | 84 | apache-2.0 |
| 01-ai/Yi-1.5-34B | | main | false | float16 | 34.389 | null | Original | FINISHED | 2024-05-15T03:00:48 | 🟢 pretrained | null | null | 3,087 | 624 | 22 | apache-2.0 |
| 01-ai/Yi-1.5-6B | | main | false | float16 | 6.061 | null | Original | FINISHED | 2024-05-15T03:03:46 | 🟢 pretrained | null | null | 3,064 | 623 | 14 | apache-2.0 |
| 01-ai/Yi-1.5-9B-32K | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-05-25T00:00:23 | 🟢 : pretrained | -1 | null | 4,395 | 714 | null | null |
| 01-ai/Yi-1.5-9B-Chat-16K | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-05-25T00:00:04 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,004 | 710 | null | null |
| 01-ai/Yi-1.5-9B-Chat | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-05-15T01:44:49 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,042 | 621 | null | null |
| 01-ai/Yi-1.5-9B | | main | false | float16 | 8.829 | null | Original | FINISHED | 2024-05-14T08:12:00 | 🟢 pretrained | null | null | 3,037 | 619 | 21 | apache-2.0 |
| 01-ai/Yi-34B-200K | | main | false | float16 | 34.389 | null | Original | FINISHED | 2024-04-29T06:30:17 | PT | null | null | 2,956 | 450 | 300 | other |
| 01-ai/Yi-34B-Chat | | main | false | float16 | 34.389 | null | Original | FINISHED | 2024-03-20T07:13:31 | chat | null | null | 2,910 | 478 | 296 | other |
| 01-ai/Yi-34B | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-03-11T06:38:38 | 🟢 : pretrained | -1 | null | 3,022 | 451 | null | null |
| 01-ai/Yi-6B-Chat | | main | false | float16 | 6.061 | null | Original | FINISHED | 2024-04-22T10:16:43 | chat | null | null | 2,866 | 545 | 53 | other |
| 01-ai/Yi-6B | | main | false | float16 | 6.061 | null | Original | FINISHED | 2024-04-22T10:17:41 | PT | null | null | 2,865 | 544 | 360 | other |
| 01-ai/Yi-9B-200K | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-05-26T04:15:23 | 🟢 : pretrained | -1 | null | 4,658 | 736 | null | null |
| 01-ai/Yi-9B | | main | false | float16 | 8.829 | null | Original | FINISHED | 2024-04-24T06:56:34 | PT | null | null | 2,891 | 437 | 175 | other |
| 01-ai/Yi-Coder-9B-Chat | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-09-21T08:24:53 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 12,029 | 1,009 | null | null |
| 01-ai/Yi-Coder-9B | | bc6e63fd26a654ee42dba737a553310bfa01dc5d | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-10-15T20:48:42 | 🟢 : pretrained | -1 | null | 20,274 | 1,010 | null | null |
| AIJUUD/juud-Mistral-7B-dpo | QWEN2_70B | main | false | float16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-06-24T00:21:06 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 8,337 | 877 | null | null |
| AbacusResearch/Jallabi-34B | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | PENDING | 2024-09-05T13:42:19 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 13,389 | 1,024 | null | null |
| Artples/L-MChat-7b | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-05-26T20:30:41 | 🤝 : base merges and moerges | -1 | null | 4,669 | 743 | null | null |
| Artples/L-MChat-Small | | main | false | bfloat16 | 2.78 | PhiForCausalLM | Original | FINISHED | 2024-05-26T20:31:44 | 🤝 : base merges and moerges | -1 | null | 4,670 | 744 | null | null |
| Artples/LAI-Paca-7b | Artples/Adapter-Baseline | main | false | bfloat16 | 7 | ? | Adapter | CANCELLED | 2024-05-26T22:07:33 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,757 | 745 | null | null |
| Azure99/blossom-v5.1-34b | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-05-22T09:46:58 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,401 | 692 | null | null |
| Azure99/blossom-v5.1-9b | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-05-22T09:47:24 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,389 | 693 | null | null |
| BAAI/Infinity-Instruct-3M-0613-Mistral-7B | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | CANCELLED | 2024-12-07T05:47:16 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 21,517 | 938 | null | null |
| BAAI/Infinity-Instruct-3M-0625-Llama3-8B | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-07-18T22:31:59 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 9,144 | 903 | null | null |
| BAAI/Infinity-Instruct-3M-0625-Mistral-7B | | 302e3ae0bcc50dae3fb69fc1b08b518398e8c407 | false | bfloat16 | 7.242 | MistralForCausalLM | Original | PENDING | 2024-09-11T14:38:25 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 23,459 | 904 | null | null |
| BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B | | main | false | bfloat16 | 0.003 | LlamaForCausalLM | Original | FINISHED | 2024-07-18T22:31:53 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 9,397 | 902 | null | null |
| BAAI/Infinity-Instruct-7M-0729-Llama3_1-8B | | 56f9c2845ae024eb8b1dd9ea0d8891cbaf33c596 | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-09-11T20:39:43 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 11,890 | 936 | null | null |
| BAAI/Infinity-Instruct-7M-0729-mistral-7B | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | RUNNING | 2024-12-07T05:47:26 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 23,464 | 937 | null | null |
| BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | PENDING | 2024-08-22T16:02:12 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 13,484 | 969 | null | null |
| BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-08-22T16:01:08 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 20,212 | 963 | null | null |
| BAAI/Infinity-Instruct-7M-Gen-mistral-7B | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FAILED | 2024-08-22T16:01:51 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 11,152 | 964 | null | null |
| CausalLM/34b-beta | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-05-30T19:56:30 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,801 | 781 | null | null |
| Chickaboo/ChinaLM-9B | Chickaboo/ChinaLM-9B | main | false | bfloat16 | 8.929 | LlamaForCausalLM | Original | CANCELLED | 2024-06-18T17:54:44 | 🤝 : base merges and moerges | -1 | null | 8,332 | 867 | null | null |
| CofeAI/Tele-FLM | | main | false | float16 | 52 | null | Original | FINISHED | 2024-06-06T08:58:56 | ? : | null | null | 8,954 | 787 | 17 | apache-2.0 |
| CombinHorizon/YiSM-blossom5.1-34B-SLERP | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-08-27T05:34:35 | 🤝 : base merges and moerges | -1 | null | 12,176 | 1,022 | null | null |
| ConvexAI/Luminex-34B-v0.1 | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-06-08T22:12:14 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 5,007 | 811 | null | null |
| ConvexAI/Luminex-34B-v0.2 | | main | false | float16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-05-30T19:58:21 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,804 | 782 | null | null |
| CultriX/NeuralMona_MoE-4x7B | | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | FINISHED | 2024-06-10T09:29:16 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 5,013 | 817 | null | null |
| Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO | | b749dbcb19901b8fd0e9f38c923a24533569f895 | false | float16 | 13.96 | MistralForCausalLM | Original | FINISHED | 2024-09-11T18:40:17 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 12,163 | 934 | null | null |
| Danielbrdz/Barcenas-Llama3-8b-ORPO | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-16T06:18:54 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,090 | 636 | null | null |
| DeepMount00/Llama-3-8b-Ita | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-17T15:16:03 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,147 | 668 | null | null |
| Duxiaoman-DI/XuanYuan-70B-Chat | | main | false | float16 | 69.099 | LlamaForCausalLM | Original | PENDING | 2024-10-13T06:02:45 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 13,477 | 1,032 | null | null |
| Duxiaoman-DI/XuanYuan-70B | | main | false | float16 | 69.099 | LlamaForCausalLM | Original | PENDING | 2024-10-13T06:01:57 | 🟢 : pretrained | -1 | null | 13,476 | 1,029 | null | null |
| EpistemeAI/Fireball-Alpaca-Llama3.1.08-8B-Philos-C-R2 | | d1d160e4d29f6722dc34ae122e106852bf1506f2 | false | float16 | 8.03 | ? | Original | FAILED | 2024-09-15T04:06:21 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 13,424 | 996 | null | null |
| FlagAlpha/Llama3-Chinese-8B-Instruct | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-17T20:24:58 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,166 | 682 | null | null |
| GritLM/GritLM-7B-KTO | | b5c48669508c1de18c698460c187f64e90e7df44 | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-06-29T17:10:19 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 8,958 | 834 | null | null |
| GritLM/GritLM-8x7B-KTO | | 938913477064fcc498757c5136d9899bb6e713ed | false | bfloat16 | 46.703 | MixtralForCausalLM | Original | FINISHED | 2024-06-29T17:10:48 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 8,961 | 839 | null | null |
| GritLM/GritLM-8x7B | | main | false | bfloat16 | 46.703 | MixtralForCausalLM | Original | FINISHED | 2024-05-30T22:02:14 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,808 | 786 | null | null |
| HIT-SCIR/Chinese-Mixtral-8x7B | | main | false | bfloat16 | 46.908 | MixtralForCausalLM | Original | FINISHED | 2024-05-17T20:15:48 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,412 | 678 | null | null |
| Infinirc/Infinirc-Llama3-8B-2G-Release-v1.0 | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-06-30T03:18:35 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 8,992 | 892 | null | null |
| Kquant03/CognitiveFusion2-4x7B-BF16 | | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | FINISHED | 2024-06-10T09:31:14 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 5,074 | 820 | null | null |
| Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5 | | main | false | float16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-07-27T03:41:14 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 9,574 | 909 | null | null |
| Kukedlc/NeuralLLaMa-3-8b-DT-v0.1 | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-17T15:18:34 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,137 | 669 | null | null |
| Kukedlc/NeuralLLaMa-3-8b-ORPO-v0.3 | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-28T07:13:56 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,766 | 756 | null | null |
| Kukedlc/NeuralLLaMa-3-8b-ORPO-v0.4 | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-06-15T15:34:53 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 5,339 | 838 | null | null |
| Kukedlc/NeuralSynthesis-7B-v0.1 | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-06-12T20:59:29 | 🤝 : base merges and moerges | -1 | null | 5,190 | 829 | null | null |
| Kukedlc/NeuralSynthesis-7B-v0.3 | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-07-29T21:34:36 | 🤝 : base merges and moerges | -1 | null | 10,320 | 919 | null | null |
| Kukedlc/NeuralSynthesis-7b-v0.4-slerp | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-05-30T09:33:09 | 🤝 : base merges and moerges | -1 | null | 4,790 | 776 | null | null |
| Langboat/Mengzi3-8B-Chat | | main | false | float16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-10-29T07:17:59 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 12,260 | 1,034 | null | null |
| LoneStriker/Smaug-34B-v0.1-GPTQ | | main | false | GPTQ | 272 | LlamaForCausalLM | Original | CANCELLED | 2024-05-17T07:58:21 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,160 | 672 | null | null |
| MTSAIR/MultiVerse_70B | | main | false | bfloat16 | 72.289 | LlamaForCausalLM | Original | FINISHED | 2024-06-22T05:37:09 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 9,147 | 872 | null | null |
| MTSAIR/multi_verse_model | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-05-25T05:36:13 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,396 | 715 | null | null |
| Magpie-Align/Llama-3-8B-Magpie-Align-SFT-v0.3 | | d2578eb754d1c20efe604749296580f680950917 | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-09-12T05:32:48 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 11,989 | 941 | null | null |
| Magpie-Align/Llama-3-8B-Magpie-Align-v0.3 | | 02f10f3b8178465e0e4d2a09c04775310492821b | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-09-12T04:41:25 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 11,988 | 940 | null | null |
| MaziyarPanahi/Calme-4x7B-MoE-v0.1 | | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | FINISHED | 2024-06-16T23:32:44 | 🤝 : base merges and moerges | -1 | null | 5,889 | 842 | null | null |
| MaziyarPanahi/Calme-4x7B-MoE-v0.2 | | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | FINISHED | 2024-06-17T17:06:27 | 🤝 : base merges and moerges | -1 | null | 6,540 | 858 | null | null |
| MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2 | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | FINISHED | 2024-05-26T03:55:11 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,663 | 738 | null | null |
| MaziyarPanahi/Llama-3-8B-Instruct-v0.10 | | 55a6fc03e04f1a68a5e2df16f3d0485d9ea357c8 | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-09-17T18:32:17 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 12,004 | 957 | null | null |
| MaziyarPanahi/Llama-3-8B-Instruct-v0.8 | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-29T09:24:33 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,775 | 763 | null | null |
| MaziyarPanahi/Llama-3-8B-Instruct-v0.9 | | ddf91fdc0a3ab5e5d76864f1c4cf44e5adacd565 | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-09-17T16:18:31 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 12,000 | 958 | null | null |
| MaziyarPanahi/Mistral-7B-Instruct-v0.2 | | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | FINISHED | 2024-05-29T07:54:05 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,771 | 760 | null | null |
| MaziyarPanahi/Topxtral-4x7B-v0.1 | | main | false | bfloat16 | 18.516 | MixtralForCausalLM | Original | FINISHED | 2024-06-10T09:33:06 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 5,093 | 823 | null | null |
| MaziyarPanahi/calme-2.2-llama3-70b | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | CANCELLED | 2024-07-30T02:06:59 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 20,449 | 923 | null | null |
| MaziyarPanahi/calme-2.3-llama3-70b | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | PENDING | 2024-09-18T20:17:37 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 13,581 | 1,044 | null | null |
| MoaData/Myrrh_solar_10.7b_3.0 | | main | false | float16 | 10.732 | LlamaForCausalLM | Original | FINISHED | 2024-05-17T20:49:38 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,960 | 688 | null | null |
| NLPark/AnFeng_v3.1-Avocet | | 5170739731033323e6e66a0f68d34790042a3b2a | false | float16 | 34.393 | LlamaForCausalLM | Original | FAILED | 2024-09-11T20:41:38 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 12,402 | 935 | null | null |
| NLPark/AnFeng_v3_Avocet | | main | false | bfloat16 | 34.981 | CohereForCausalLM | Original | FINISHED | 2024-05-14T18:11:34 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,091 | 626 | null | null |
| NLPark/Shi-Ci_v3-Robin | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | PENDING | 2024-08-21T12:41:06 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 11,858 | 962 | null | null |
| NLPark/Shi-Cis-Kestrel-uncensored | | main | false | bfloat16 | 140.806 | MixtralForCausalLM | Original | PENDING | 2024-08-01T12:56:06 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 11,857 | 961 | null | null |
| NLPark/Test1_SLIDE | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-14T18:12:56 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,039 | 620 | null | null |
| Nexusflow/Athene-70B | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | PENDING | 2024-09-17T06:32:24 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 13,511 | 1,043 | null | null |
| NotAiLOL/Yi-1.5-dolphin-9B | | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | FINISHED | 2024-05-17T20:49:03 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 3,403 | 686 | null | null |
| NousResearch/Hermes-2-Theta-Llama-3-8B | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-08-30T16:51:44 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 11,887 | 994 | null | null |
| NousResearch/Meta-Llama-3-8B-Instruct | NousResearch/Meta-Llama-3-8B-Instruct | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-06-20T18:19:23 | 🟢 : pretrained | -1 | null | 7,092 | 869 | null | null |
| NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | | main | false | bfloat16 | 46.703 | MixtralForCausalLM | Original | FINISHED | 2024-06-16T02:39:31 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 5,854 | 841 | null | null |
| NousResearch/Nous-Hermes-2-SOLAR-10.7B | | main | false | bfloat16 | 10.732 | LlamaForCausalLM | Original | FINISHED | 2024-05-30T16:49:20 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,794 | 778 | null | null |
| NousResearch/Nous-Hermes-2-Yi-34B | | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | FINISHED | 2024-05-25T05:38:28 | 🔶 : fine-tuned on domain-specific datasets | -1 | null | 4,397 | 716 | null | null |
| OpenBuddy/openbuddy-deepseek-67b-v18.1-4k | | main | false | bfloat16 | 67.425 | LlamaForCausalLM | Original | FINISHED | 2024-05-23T07:30:29 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,907 | 704 | null | null |
| OpenBuddy/openbuddy-llama3-70b-v21.2-32k | | main | false | bfloat16 | 70.554 | LlamaForCausalLM | Original | FINISHED | 2024-06-17T04:56:12 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 6,379 | 856 | null | null |
| OpenBuddy/openbuddy-llama3-8b-v21.1-8k | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-05-23T07:30:13 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,880 | 703 | null | null |
| OpenBuddy/openbuddy-llama3-8b-v21.2-32k | | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | FINISHED | 2024-06-25T08:18:17 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 8,400 | 880 | null | null |
| OpenBuddy/openbuddy-mistral-22b-v21.1-32k | | main | false | bfloat16 | 22.354 | MistralForCausalLM | Original | FINISHED | 2024-05-23T07:31:04 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 3,908 | 705 | null | null |
| OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k | | main | false | bfloat16 | 34.407 | LlamaForCausalLM | Original | FINISHED | 2024-06-17T04:52:27 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 6,186 | 850 | null | null |
| OpenBuddy/openbuddy-yi1.5-34b-v21.6-32k-fp16 | | main | false | float16 | 34.393 | LlamaForCausalLM | Original | FINISHED | 2024-06-25T08:18:36 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 8,871 | 881 | null | null |
| OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k | | main | false | bfloat16 | 8.85 | LlamaForCausalLM | Original | FINISHED | 2024-05-28T04:04:03 | 💬 : chat models (RLHF, DPO, IFT, ...) | -1 | null | 4,759 | 753 | null | null |
| OpenBuddy/openbuddy-zero-14b-v22.3-32k | | main | false | bfloat16 | 14.022 | LlamaForCausalLM | Original | FINISHED | 2024-07-18T09:58:30 | 🤝 : base merges and moerges | -1 | null | 10,220 | 901 | null | null |
End of preview.
README.md exists but content is empty.
Downloads last month: 7,015