Normal1919/Marian-NMT-en-zh-lil-fine-tune

  • base model: MarianMTModel
  • pretrained_ckpt: Helsinki-NLP/opus-mt-en-zh
  • This model was fine-tuned for Ren'Py (rpy) deep-learning translation

Model description

  • source group: English
  • target group: Chinese
  • model: transformer
  • source language(s): eng
  • target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
  • fine_tune: Starting from the OPUS-MT checkpoint, the model was fine-tuned on English source text containing Ren'Py markup (including but not limited to {i} [text] {/i}) paired with Chinese targets that preserve the same tags, and was additionally trained to keep English character names untranslated for LIL (see the sketch below).
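
A minimal sketch of how such a tag-preserving training pair could be tokenized for fine-tuning, assuming a recent transformers version that supports text_target; the sentence pair is a hypothetical illustration, not taken from the actual training data:

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
>>> # English source keeps Ren'Py markup; the Chinese target preserves the same tags
>>> src = 'She {i}never{/i} told Kaori about it.'
>>> tgt = '她{i}从来没有{/i}告诉过Kaori这件事。'
>>> batch = tokenizer(src, text_target=tgt, return_tensors='pt')
>>> batch.keys()
dict_keys(['input_ids', 'attention_mask', 'labels'])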

How to use

>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
>>> model_name = 'Normal1919/Marian-NMT-en-zh-lil-fine-tune'
>>> model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> translation = pipeline("translation", model=model, tokenizer=tokenizer)
>>> translation('I {i} should {/i} say that I feel a little relieved to find out that {i}this {/i} is why you’ve been hanging out with Kaori lately, though. She’s really pretty and I got jealous and...I’m sorry', max_length=400)
    [{'translation_text': '我{i}应该{/i}说发现{i}这{/i}是你最近和Kaori出去的原因,我有点松了一口气。她很漂亮,我嫉妒,而且......我很抱歉。'}]
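
If you prefer not to go through a pipeline, the model can also be called directly with generate; a minimal sketch reusing the model and tokenizer loaded above (the input sentence is a hypothetical example):

>>> # Tokenize an English line with Ren'Py tags and decode the generated Chinese output
>>> inputs = tokenizer('Kaori {i}really{/i} wanted to see you.', return_tensors='pt')
>>> outputs = model.generate(**inputs, max_length=400)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)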

Contact

[email protected] or [email protected]
