Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below. Dataset generation failed with the following error:
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      Float value 2.209 was truncated converting to int64
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2245, in cast_table_to_schema
                  arrays = [
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2246, in <listcomp>
                  cast_array_to_feature(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2102, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1949, in array_cast
                  return array.cast(pa_type)
                File "pyarrow/array.pxi", line 996, in pyarrow.lib.Array.cast
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/compute.py", line 404, in cast
                  return call_function("cast", [arr], options, memory_pool)
                File "pyarrow/_compute.pyx", line 590, in pyarrow._compute.call_function
                File "pyarrow/_compute.pyx", line 385, in pyarrow._compute.Function.call
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Float value 2.209 was truncated converting to int64
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1420, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1052, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
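
The root cause is a schema mismatch: the `params` column is declared `int64` (see the column list below), but the `Qwen/Qwen2-VL-2B-Instruct` row stores the float `2.209` (presumably a parameter count in billions), and Arrow's default "safe" cast refuses lossy float-to-int conversions. A minimal sketch reproducing the same error in plain pyarrow:

```python
import pyarrow as pa

# a float column like params, containing one non-integral value
floats = pa.array([0.0, 2.209])  # inferred type: double

# safe casting (the default) rejects the lossy conversion:
# pyarrow.lib.ArrowInvalid: Float value 2.209 was truncated converting to int64
floats.cast(pa.int64())
```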


Column schema:

| Column | Type |
| --- | --- |
| model | string |
| model_api_url | string |
| model_api_key | string |
| model_api_name | string |
| base_model | string |
| revision | string |
| precision | string |
| private | bool |
| weight_type | string |
| status | string |
| submitted_time | timestamp[us] |
| model_type | string |
| params | int64 |
| runsh | string |
| adapter | string |
| eval_id | int64 |
| flageval_id | int64 |
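
One plausible workaround when loading the rows yourself is to pass an explicit schema that widens `params` to `float64`, so the `2.209` value survives the cast. This is a sketch, not a verified fix: whether it succeeds depends on how the underlying request files are stored.

```python
from datasets import Features, Value, load_dataset

# same columns as the schema above, except params is widened from int64
# to float64 so the 2.209 row no longer fails the safe cast
features = Features({
    "model": Value("string"),
    "model_api_url": Value("string"),
    "model_api_key": Value("string"),
    "model_api_name": Value("string"),
    "base_model": Value("string"),
    "revision": Value("string"),
    "precision": Value("string"),
    "private": Value("bool"),
    "weight_type": Value("string"),
    "status": Value("string"),
    "submitted_time": Value("timestamp[us]"),
    "model_type": Value("string"),
    "params": Value("float64"),
    "runsh": Value("string"),
    "adapter": Value("string"),
    "eval_id": Value("int64"),
    "flageval_id": Value("int64"),
})

ds = load_dataset("open-cn-llm-leaderboard/vlm_requests", features=features)
```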

Every preview row shares revision `main`, precision `float16`, private `false`, weight_type `Original`, status `FINISHED`, and model_type `🟢 : pretrained`; the `model_api_*`, `base_model`, `runsh`, and `adapter` columns are empty in the preview except where noted below. The first 28 rows also share submitted_time `2025-01-24T07:22:04`, params `0`, eval_id `-1`, and flageval_id `-1`, so only the model name varies:

- Aria
- Claude-3.5-Sonnet-20241022
- Claude3-Opus-20240229
- Doubao-Pro-Vision-32k-241028
- GLM-4V-Plus
- GPT-4o-20240806
- GPT-4o-20241120
- GPT-4o-mini-20240718
- Gemini-1.5-Flash
- Gemini-1.5-Pro
- Idefics3-8B-Llama3
- InternVL2-2B
- InternVL2-8B
- InternVL2-Llama3-76B
- Janus-1.3B
- LLaVA-OneVision-0.5B
- LLaVA-OneVision-7B
- LLaVA-Onevision-72B
- Llama-3.2-11B-Vision-Instruct
- Llama-3.2-90B-Vision-Instruct
- MiniCPM-V-2.6
- Molmo-72B-0924
- Molmo-7B-D
- Mono-InternVL-2B
- NVLM-D-72B
- Phi-3.5-Vision-Instruct
- Pixtral-12B-2409
- Qwen-VL-Max

Qwen/Qwen2-VL-2B-Instruct is the first outlier: submitted_time `2025-01-24T02:46:12`, params `2.209` (the float value that breaks the int64 cast above), eval_id `26,049`, flageval_id `1,054`. Its `runsh` launcher takes the evaluation server's IP and port as its first two arguments and forwards any remaining arguments to the adapter:

```bash
#!/bin/bash
current_file="$0"
current_dir="$(dirname "$current_file")"
SERVER_IP=$1
SERVER_PORT=$2
PYTHONPATH=$current_dir:$PYTHONPATH accelerate launch $current_dir/model_adapter.py \
    --server_ip $SERVER_IP --server_port $SERVER_PORT "${@:3}" --cfg $current_dir/meta.json
```

Its `adapter` field holds the full `model_adapter.py` source:

```python
import time
from typing import Any, Dict

import torch
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor, AutoTokenizer, Qwen2VLForConditionalGeneration

from flagevalmm.models.base_model_adapter import BaseModelAdapter
from flagevalmm.server import ServerDataset
from flagevalmm.server.utils import parse_args, process_images_symbol


class CustomDataset(ServerDataset):
    def __getitem__(self, index):
        data = self.get_data(index)
        question_id = data["question_id"]
        img_path = data["img_path"]
        qs = data["question"]
        # resolve image placeholders in the question to image-path indices
        qs, idx = process_images_symbol(qs)
        idx = set(idx)
        img_path_idx = []
        for i in idx:
            if i < len(img_path):
                img_path_idx.append(img_path[i])
            else:
                print("[warning] image index out of range")
        return question_id, img_path_idx, qs


class ModelAdapter(BaseModelAdapter):
    def model_init(self, task_info: Dict):
        ckpt_path = task_info["model_path"]
        torch.set_grad_enabled(False)
        with self.accelerator.main_process_first():
            tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
            model = Qwen2VLForConditionalGeneration.from_pretrained(
                ckpt_path,
                device_map="auto",
                torch_dtype=torch.bfloat16,
                attn_implementation="flash_attention_2",
            )
            model = self.accelerator.prepare_model(model, evaluation_mode=True)
            self.tokenizer = tokenizer
            if hasattr(model, "module"):
                model = model.module
            self.model = model
        self.processor = AutoProcessor.from_pretrained(ckpt_path)

    def build_message(self, query: str, image_paths=[]) -> list:
        # one user turn: all images first, then the question text
        messages = [{"role": "user", "content": []}]
        for img_path in image_paths:
            messages[-1]["content"].append({"type": "image", "image": img_path})
        messages[-1]["content"].append({"type": "text", "text": query})
        return messages

    def run_one_task(self, task_name: str, meta_info: Dict[str, Any]):
        results = []
        cnt = 0
        data_loader = self.create_data_loader(
            CustomDataset, task_name, batch_size=1, num_workers=0
        )
        for question_id, img_path, qs in data_loader:
            if cnt == 1:
                # start timing after the first (warm-up) sample
                start_time = time.perf_counter()
            cnt += 1
            question_id = question_id[0]
            img_path_flaten = [p[0] for p in img_path]
            qs = qs[0]
            messages = self.build_message(qs, image_paths=img_path_flaten)
            text = self.processor.apply_chat_template(
                messages, tokenize=False, add_generation_prompt=True
            )
            image_inputs, video_inputs = process_vision_info(messages)
            inputs = self.processor(
                text=[text],
                images=image_inputs,
                videos=video_inputs,
                padding=True,
                return_tensors="pt",
            )
            inputs = inputs.to("cuda")
            generated_ids = self.model.generate(**inputs, max_new_tokens=1024)
            # drop the prompt tokens from each generated sequence
            generated_ids_trimmed = [
                out_ids[len(in_ids):]
                for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
            ]
            response = self.processor.batch_decode(
                generated_ids_trimmed,
                skip_special_tokens=True,
                clean_up_tokenization_spaces=False,
            )[0]
            self.accelerator.print(f"{qs}\n{response}\n\n")
            results.append(
                {"question_id": question_id, "answer": response.strip(), "prompt": qs}
            )
        rank = self.accelerator.state.local_process_index
        self.save_result(results, meta_info, rank=rank)
        self.accelerator.wait_for_everyone()
        if self.accelerator.is_main_process:
            correct_num = self.collect_results_and_save(meta_info)
            total_time = time.perf_counter() - start_time
            print(
                f"Total time: {total_time}\nAverage time: {total_time / cnt}\n"
                f"Results_collect number: {correct_num}"
            )
        print("rank", rank, "finished")


if __name__ == "__main__":
    args = parse_args()
    model_adapter = ModelAdapter(
        server_ip=args.server_ip,
        server_port=args.server_port,
        timeout=args.timeout,
        extra_cfg=args.cfg,
    )
    model_adapter.run()
```

The next six rows return to the shared defaults (submitted_time `2025-01-24T07:22:04`, params `0`, eval_id `-1`, flageval_id `-1`):

- Qwen2-VL-2B-Instruct
- Qwen2-VL-72B-Instruct
- Qwen2-VL-7B-Instruct
- Step-1V-32k
- XGen-MM-Instruct-Interleave-v1.5
- Yi-Vision

deepseek-ai/Janus-Pro-7B was submitted `2025-02-14T06:58:30` with params `0`, eval_id `26,231`, and flageval_id `1,060`. Its `runsh` additionally installs the Janus package from a local checkout against an internal PyPI mirror before launching:

```bash
#!/bin/bash
current_file="$0"
current_dir="$(dirname "$current_file")"
SERVER_IP=$1
SERVER_PORT=$2
# install the Janus package from a local checkout and an internal mirror
cd /share/project/daiteng01/deepseek/Janus-main
pip install -e . -i http://10.1.1.16/repository/pypi-group/simple --trusted-host 10.1.1.16
cd -
PYTHONPATH=$current_dir:$PYTHONPATH accelerate launch $current_dir/model_adapter.py \
    --server_ip $SERVER_IP --server_port $SERVER_PORT "${@:3}" --cfg $current_dir/meta.json
```

Its `adapter` source follows; the debug prints use pinyin ("shisha" is roughly "try it out", "jieguo" is "result"):

```python
import sys
import time
from typing import Any, Dict

import torch
from transformers import AutoModelForCausalLM

from flagevalmm.models.base_model_adapter import BaseModelAdapter
from flagevalmm.server import ServerDataset
from flagevalmm.server.utils import (
    default_collate_fn,
    load_pil_image,
    parse_args,
    process_images_symbol,
)
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images


class CustomDataset(ServerDataset):
    def __getitem__(self, index):
        data = self.get_data(index)
        qs, idx = process_images_symbol(
            data["question"], dst_pattern="<image_placeholder>"
        )
        question_id = data["question_id"]
        img_path = data["img_path"]
        return question_id, qs, img_path


class ModelAdapter(BaseModelAdapter):
    def model_init(self, task_info: Dict):
        ckpt_path = task_info["model_path"]
        torch.set_grad_enabled(False)
        with self.accelerator.main_process_first():
            self.vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(
                ckpt_path
            )
            self.tokenizer = self.vl_chat_processor.tokenizer
            vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
                ckpt_path, trust_remote_code=True
            )
            vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
        model = self.accelerator.prepare_model(vl_gpt, evaluation_mode=True)
        if hasattr(model, "module"):
            model = model.module
        self.model = model

    def build_message(self, query: str, image_paths=[]) -> list:
        content = ""
        # `liang` (pinyin, roughly "amount"): images minus placeholders
        liang = len(image_paths) - query.count("<image_placeholder>")
        print("= = shisha", query, len(image_paths), liang)
        if liang < 0:
            # more placeholders than images: strip the surplus placeholders
            query = query.replace("<image_placeholder>", "", -liang)
        else:
            # fewer placeholders than images: prepend one per missing image
            for i in range(liang):
                content += "<image_placeholder>\n"
        content += query
        messages = [
            {"role": "<|User|>", "content": content, "images": image_paths},
            {"role": "<|Assistant|>", "content": ""},
        ]
        print("= = jieguo", messages, file=sys.stderr)
        return messages

    def run_one_task(self, task_name: str, meta_info: Dict[str, Any]):
        results = []
        cnt = 0
        data_loader = self.create_data_loader(
            CustomDataset,
            task_name,
            collate_fn=default_collate_fn,
            batch_size=1,
            num_workers=2,
        )
        for question_id, question, images in data_loader:
            if cnt == 1:
                # start timing after the first (warm-up) sample
                start_time = time.perf_counter()
            cnt += 1
            messages = self.build_message(question[0], images[0])
            pil_images = load_pil_images(messages)
            prepare_inputs = self.vl_chat_processor(
                conversations=messages, images=pil_images, force_batchify=True
            ).to(self.model.device)
            inputs_embeds = self.model.prepare_inputs_embeds(**prepare_inputs)
            # run the language model on the fused image/text embeddings
            outputs = self.model.language_model.generate(
                inputs_embeds=inputs_embeds,
                attention_mask=prepare_inputs.attention_mask,
                pad_token_id=self.tokenizer.eos_token_id,
                bos_token_id=self.tokenizer.bos_token_id,
                eos_token_id=self.tokenizer.eos_token_id,
                max_new_tokens=4096,
                do_sample=False,
                use_cache=True,
            )
            response = self.tokenizer.decode(
                outputs[0].cpu().tolist(), skip_special_tokens=True
            )
            self.accelerator.print(f"{question[0]}\n{response}\n\n")
            results.append(
                {
                    "question_id": question_id[0],
                    "answer": response.strip(),
                    "prompt": question[0],
                }
            )
        rank = self.accelerator.state.local_process_index
        # save this rank's results, then merge on the main process
        self.save_result(results, meta_info, rank=rank)
        self.accelerator.wait_for_everyone()
        if self.accelerator.is_main_process:
            correct_num = self.collect_results_and_save(meta_info)
            total_time = time.perf_counter() - start_time
            print(
                f"Total time: {total_time}\nAverage time: {total_time / cnt}\n"
                f"Results_collect number: {correct_num}"
            )
        print("rank", rank, "finished")


if __name__ == "__main__":
    args = parse_args()
    model_adapter = ModelAdapter(
        server_ip=args.server_ip,
        server_port=args.server_port,
        timeout=args.timeout,
        extra_cfg=args.cfg,
    )
    model_adapter.run()
```

yi.daiteng01 is an API submission rather than a checkpoint: model_api_url `https://api.lingyiwanwu.com/v1/chat/completions`, model_api_key `876995f3b3ce41aca60b637fb51d752e`, model_api_name `yi-vision`, revision `main`, precision `float16`, private `false`, weight_type `Original`, status `FINISHED`, submitted_time `2025-01-24T07:22:04`, model_type `🟢 : pretrained`, params `0`, eval_id `26,055`, flageval_id `1,055`.
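
Requests repos for leaderboards of this kind typically store one JSON file per submission, so the offending value can also be hunted down without `datasets` at all. A hypothetical sketch: the local clone path and the one-JSON-per-submission layout are assumptions, not something this preview confirms.

```python
import json
from pathlib import Path

# scan a local clone of the repo for params values that are not integral,
# i.e. the values that break the declared int64 column
for path in Path("vlm_requests").rglob("*.json"):
    row = json.loads(path.read_text())
    params = row.get("params", 0)
    if isinstance(params, float) and not params.is_integer():
        print(path, params)  # expected hit: Qwen/Qwen2-VL-2B-Instruct, 2.209
```
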
README.md exists but content is empty.
Downloads last month: 3,067

Spaces using open-cn-llm-leaderboard/vlm_requests: 1