---
language:
  - en
  - zh
license: apache-2.0
library_name: transformers
tags:
  - multimodal
  - vqa
  - text
  - audio
datasets:
  - synthetic-dataset
metrics:
  - accuracy
  - bleu
  - wer
model-index:
  - name: Evolutionary Multi-Modal Model
    results:
      - task:
          type: vqa
          name: Visual Question Answering
        dataset:
          type: synthetic-dataset
          name: Synthetic Multimodal Dataset
          split: test
        metrics:
          - type: accuracy
            value: 85
pipeline_tag: text-generation
widget:
  - text: >-
      Is this review positive or negative? Review: Best cast iron skillet you
      will ever buy.
    example_title: Sentiment analysis
  - text: >-
      Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
      He chose her because she had ...
    example_title: Coreference resolution
  - text: >-
      On a shelf, there are five books: a gray book, a red book, a purple book,
      a blue book, and a black book ...
    example_title: Logic puzzles
  - text: >-
      The two men running to become New York City's next mayor will face off in
      their first debate Wednesday night ...
    example_title: Reading comprehension
---

## Model Sources

Code, audio, text, and natural-language inputs need to be prepared separately and passed to the model together, because the model uses a separate tokenizer and vocabulary for each modality to achieve the best results in special cases.
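
For example, each modality gets its own tokenizer or processor; a minimal sketch using the same checkpoints as the code in "How to Get Started" below:

```python
# Each modality is tokenized/processed with its own vocabulary.
from transformers import AutoTokenizer, AutoProcessor

text_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")              # natural language
code_tokenizer = AutoTokenizer.from_pretrained("gpt2")                            # source code
speech_processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")   # audio
vision_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")  # images
```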

# Model Card for the Evolutionary Multi-Modal Model

## Model Description

Example: fetching the UCI Breast Cancer Wisconsin (Original) dataset with `ucimlrepo`:

```python
from ucimlrepo import fetch_ucirepo

# fetch dataset
breast_cancer_wisconsin_original = fetch_ucirepo(id=15)

# data (as pandas dataframes)
X = breast_cancer_wisconsin_original.data.features
y = breast_cancer_wisconsin_original.data.targets

# metadata
print(breast_cancer_wisconsin_original.metadata)

# variable information
print(breast_cancer_wisconsin_original.variables)
```

| class    | precision | recall | f1-score | support |
|----------|-----------|--------|----------|---------|
| 0        | 0.93      | 0.99   | 0.96     | 79      |
| 1        | 0.98      | 0.90   | 0.94     | 58      |
| accuracy |           |        | 0.95     | 137     |
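
The table above follows the layout of scikit-learn's `classification_report`. The card does not state which classifier or train/test split produced these numbers, so the following is only a hedged sketch of how such a report could be generated from the dataset fetched above:

```python
# Hypothetical evaluation sketch: the classifier and split below are assumptions,
# not the setup that produced the report in this card.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# The original dataset contains a few missing values; drop them for this sketch.
df = pd.concat([X, y], axis=1).dropna()
X_clean = df.drop(columns=list(y.columns))
y_clean = df[y.columns[0]]

X_train, X_test, y_train, y_test = train_test_split(
    X_clean, y_clean, test_size=0.2, random_state=42
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```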

This model, named the Evolutionary Multi-Modal Model, is a multimodal transformer designed to handle a variety of tasks, including vision and audio processing. It is built on top of the adapter-transformers and transformers libraries and is intended to be a versatile base model for both direct use and fine-tuning.

- Developed by: Independent researcher
- Funded by: Self-funded
- Shared by: Independent researcher
- Model type: Multimodal
- Language(s) (NLP): English, Chinese (zh)
- License: Apache-2.0
- Finetuned from model: None

## Uses

The model is available at https://huggingface.co/zeroMN/SHMT.

### Direct Use

```bash
git lfs install
git clone https://huggingface.co/zeroMN/SHMT.git
```
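
As an alternative to `git`, the repository can also be fetched with the Hugging Face Hub client (a minimal sketch; it assumes the `huggingface_hub` package is installed):

```python
# Alternative to git clone: download a snapshot of the repository via the Hub client.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="zeroMN/SHMT")
print(f"Repository files downloaded to: {local_dir}")
```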

### Downstream Use

The model can be fine-tuned for specific tasks such as visual question answering (VQA), image captioning, and audio recognition.
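
A hedged sketch of what such fine-tuning could look like for a VQA-style task, built on the `MultiModalModel` class defined in "How to Get Started" below; the classification head, answer vocabulary, and random tensors standing in for real data are all illustrative assumptions:

```python
# Hypothetical VQA fine-tuning sketch; not part of the released model.
import torch
import torch.nn as nn

model = MultiModalModel()          # class defined in "How to Get Started" below
num_answers = 10                   # assumed size of a toy answer vocabulary

# Classification head over concatenated image and question features.
vqa_head = nn.Linear(
    model.vision_encoder.config.projection_dim + model.nlp_encoder.config.hidden_size,
    num_answers,
)
optimizer = torch.optim.AdamW(
    list(vqa_head.parameters()) + list(model.nlp_encoder.parameters()), lr=1e-5
)

# One illustrative training step on random tensors standing in for real data.
pixel_values = torch.randn(2, 3, 224, 224)                      # fake image batch
questions = model.nlp_tokenizer(
    ["What color is the cat?", "How many dogs are there?"],
    return_tensors="pt", padding=True,
)
labels = torch.tensor([3, 7])                                   # fake answer ids

image_feat = model.vision_encoder.get_image_features(pixel_values=pixel_values)
text_feat = model.nlp_encoder(**questions).pooler_output
logits = vqa_head(torch.cat([image_feat, text_feat], dim=-1))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```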

### Out-of-Scope Use

The Evolutionary Multi-Modal Model is not suitable for tasks that require domain-specific expertise beyond its current capabilities. The number of speech frames used for audio inputs still needs to be tuned by the user.

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the following risks, biases, and limitations:

- Bias: The model may exhibit biases present in the training data, particularly if the data is not representative of all populations.
- Risks: The model should not be used in critical applications where high accuracy and reliability are required without thorough testing and validation.
- Limitations: The model may not perform well on tasks that require fine-grained recognition or highly specialized audio processing.

## How to Get Started with the Model

```python
import os
import torch
import torch.nn as nn
import numpy as np
import random
from transformers import (
    BartForConditionalGeneration,
    AutoModelForCausalLM,
    BertModel,
    Wav2Vec2Model,
    CLIPModel,
    AutoTokenizer,
    AutoProcessor
)

class MultiModalModel(nn.Module):
    def __init__(self):
        super(MultiModalModel, self).__init__()
        # Initialize the sub-models
        self.text_generator = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
        self.code_generator = AutoModelForCausalLM.from_pretrained('gpt2')
        self.nlp_encoder = BertModel.from_pretrained('bert-base-uncased')
        self.speech_encoder = Wav2Vec2Model.from_pretrained('facebook/wav2vec2-base-960h')
        self.vision_encoder = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')

        # Initialize the tokenizers and processors
        self.text_tokenizer = AutoTokenizer.from_pretrained('facebook/bart-base')
        self.code_tokenizer = AutoTokenizer.from_pretrained('gpt2')
        self.nlp_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
        self.speech_processor = AutoProcessor.from_pretrained('facebook/wav2vec2-base-960h')
        self.vision_processor = AutoProcessor.from_pretrained('openai/clip-vit-base-patch32')

    def forward(self, task, inputs):
        if task == 'text_generation':
            attention_mask = inputs.get('attention_mask')
            outputs = self.text_generator.generate(
                inputs['input_ids'], 
                max_new_tokens=100,  
                pad_token_id=self.text_tokenizer.eos_token_id, 
                attention_mask=attention_mask,
                top_p=0.9,  
                top_k=50,  
                temperature=0.8,  
                do_sample=True
            )
            return self.text_tokenizer.decode(outputs[0], skip_special_tokens=True)
        elif task == 'code_generation':
            attention_mask = inputs.get('attention_mask')
            outputs = self.code_generator.generate(
                inputs['input_ids'], 
                max_new_tokens=50,  
                pad_token_id=self.code_tokenizer.eos_token_id, 
                attention_mask=attention_mask,
                top_p=0.95,  
                top_k=50,  
                temperature=1.2,  
                do_sample=True
            )
            return self.code_tokenizer.decode(outputs[0], skip_special_tokens=True)
        # Add the logic for other tasks (speech, vision, ...) here

# Function to count the number of trainable model parameters
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Initialize the model
model = MultiModalModel()

# Count and print the total number of model parameters
total_params = count_parameters(model)
print(f"Total number of model parameters: {total_params}")
```
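
A quick usage check for the `text_generation` branch of `forward()` (a minimal sketch; output quality depends on the weights actually loaded):

```python
# Example: run the text_generation task of the model defined above.
prompt = "The Evolutionary Multi-Modal Model handles text, code, speech and images."
encoded = model.text_tokenizer(prompt, return_tensors="pt")

generated = model(
    "text_generation",
    {"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]},
)
print(generated)
```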