DeepSeekMath

[🏠Homepage] | [🤖 Chat with DeepSeek LLM] | [Discord] | [Wechat(微信)]

Paper Link👁️


1. Introduction to DeepSeekMath

See the Introduction for more details.

2. How to Use

Here are some examples of how to use our model.

Text Completion

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-math-7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the weights in bfloat16 and let accelerate place them on the available devices.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
# The base model defines no pad token, so reuse the EOS token for padding.
model.generation_config.pad_token_id = model.generation_config.eos_token_id

text = "The integral of x^2 from 0 to 2 is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
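Because deepseek-math-7b-base is a base (non-chat) model, it has no chat template, and few-shot prompting with worked examples usually steers it better than a bare question. Below is a minimal sketch of assembling such a prompt; the helper function and the worked examples are illustrative assumptions, not part of the official repository:

```python
# Illustrative few-shot examples (made up for demonstration).
few_shot_examples = [
    ("What is 2 + 3?", "5"),
    ("What is the derivative of x^3?", "3x^2"),
]

def build_prompt(question, examples):
    # A base model has no chat template, so we simply concatenate
    # worked question/answer pairs followed by the new question.
    parts = [f"Question: {q}\nAnswer: {a}\n" for q, a in examples]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n".join(parts)

prompt = build_prompt("What is the integral of x^2 from 0 to 2?", few_shot_examples)
print(prompt)
```

The resulting string can be passed to the tokenizer in place of `text` in the completion example above; ending the prompt with "Answer:" cues the model to continue with the solution.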

3. License

This code repository is licensed under the MIT License. The use of DeepSeekMath models is subject to the Model License. DeepSeekMath supports commercial use.

See the LICENSE-MODEL for more details.

4. Contact

If you have any questions, please raise an issue or contact us at [email protected].
