---
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation
- causal-lm
- instruction-tuned
- serverless
library_name: transformers
inference: true
language:
- en
base_model: automatedstockminingorg/expert-on-investment-valuation-mypricermodel
datasets:
- automatedstockminingorg/investment-valuation-chunks
---
# Expert on Investment Valuation Model

## Introduction

This model is fine-tuned on data curated specifically for investment valuation, providing insights and explanations of valuation techniques such as the discounted cash flow (DCF) model and comparable company analysis.

- Designed for instruction-following text generation and role-play in a financial advisory setting.
- Supports **long-context processing** for in-depth questions.
- **Language support**: English.
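As a quick numeric illustration of the DCF technique mentioned above, valuation boils down to discounting projected cash flows (plus a terminal value) at a required rate of return. This is a minimal sketch with hypothetical figures, not output from the model:

```python
# Minimal DCF sketch: present value of projected free cash flows
# plus a Gordon-growth terminal value. All figures are hypothetical.

cash_flows = [100.0, 110.0, 121.0]   # projected FCF for years 1-3
discount_rate = 0.10                  # required rate of return (e.g. WACC)
terminal_growth = 0.02                # perpetual growth after year 3

# Discount each explicit-period cash flow back to today
pv_explicit = sum(
    cf / (1 + discount_rate) ** t
    for t, cf in enumerate(cash_flows, start=1)
)

# Terminal value at the end of year 3, then discounted to today
terminal_value = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
pv_terminal = terminal_value / (1 + discount_rate) ** len(cash_flows)

enterprise_value = pv_explicit + pv_terminal
print(f"PV of explicit cash flows: {pv_explicit:.2f}")
print(f"Enterprise value: {enterprise_value:.2f}")
```

The terminal value typically dominates the result, which is why the model's explanations of discount-rate and growth assumptions matter in practice.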
**This repo contains the instruction-tuned version of the model**:
- Type: Causal Language Model (instruction-tuned)
- Language: English
- Model Architecture: Transformers
For more details, please refer to our [documentation](https://huggingface.co/automatedstockminingorg/expert-on-investment-valuation-mypricermodel).
## Requirements
To ensure compatibility, use the latest version of `transformers`.
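A quick, dependency-free way to check which version of `transformers` (if any) is installed, using only the standard library:

```python
# Check the installed transformers version without importing the package itself
from importlib.metadata import version, PackageNotFoundError

try:
    print("transformers", version("transformers"))
except PackageNotFoundError:
    print("transformers is not installed; run: pip install --upgrade transformers")
```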
## Quickstart
Here is a code snippet showing how to load the tokenizer and model and generate a response.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "automatedstockminingorg/14b-stockanalyst-14b-stockanalyst"

# Load the model and tokenizer; device_map="auto" places weights on available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the discounted cash flow (DCF) model in investment valuation."
messages = [
    {"role": "system", "content": "You are an expert in investment valuation."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=300
)

# Strip the prompt tokens so only the newly generated response is decoded
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
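The other technique named in the introduction, comparable company analysis, can also be sketched numerically. This is an illustrative example with hypothetical peer data, not output from the model:

```python
# Illustrative comparable company analysis (all figures hypothetical).
# Value a target by applying a peer-derived EV/EBITDA multiple to its own EBITDA.

peers = [
    {"name": "PeerA", "ev": 1200.0, "ebitda": 150.0},
    {"name": "PeerB", "ev": 900.0, "ebitda": 100.0},
    {"name": "PeerC", "ev": 2000.0, "ebitda": 250.0},
]

# Median EV/EBITDA multiple across the peer set
multiples = sorted(p["ev"] / p["ebitda"] for p in peers)
median_multiple = multiples[len(multiples) // 2]

target_ebitda = 180.0
implied_ev = median_multiple * target_ebitda
print(f"Median EV/EBITDA: {median_multiple:.2f}x")
print(f"Implied enterprise value: {implied_ev:.1f}")
```

The median is preferred over the mean here because a single richly valued peer would otherwise skew the implied value.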