---
library_name: transformers
tags: []
---

# Model Card for Model ID

This model was created by following the instructions in https://www.datacamp.com/tutorial/fine-tuning-google-gemma. It is a PEFT adapter for Gemma-7B, fine-tuned on a character dialogue dataset.
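Since this repository only contains a PEFT adapter, it must be attached to the Gemma base model at load time. Below is a minimal sketch of that step; the adapter repo id passed to `load_roleplay_model` is a hypothetical placeholder (this card does not state the published repo name), and the base id `google/gemma-7b-it` is an assumption based on the "Finetuned from" field.

```python
def load_roleplay_model(adapter_id: str, base_id: str = "google/gemma-7b-it"):
    """Load the Gemma base model and attach the PEFT adapter on top of it.

    transformers/peft are imported inside the function so the sketch can be
    read without requiring the heavyweight GPU dependencies at import time.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    # Merge-free attach: the adapter weights are applied on top of the base.
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```

Calling it would look like `model, tokenizer = load_roleplay_model("your-username/gemma-roleplay-adapter")`, where the repo id is again a placeholder to replace with the actual adapter location.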

## Model Details

### Model Description

The model can role-play fictional and celebrity characters as they appear in the dataset.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Model type:** Causal LM
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model:** Gemma-7B-it

## Uses

A demo model, intended as an example for training PEFT adapters.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]
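For direct use, instruction-tuned Gemma models expect prompts wrapped in `<start_of_turn>`/`<end_of_turn>` markers. The helper below sketches that format (the Sherlock Holmes instruction is just an illustrative example); in practice, `tokenizer.apply_chat_template` is the more robust way to produce the same layout.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma's instruction-tuned chat format."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Stay in character as Sherlock Holmes and greet me.")
print(prompt)
```

The generated text that follows the trailing `<start_of_turn>model` marker is the character's reply.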


## Training Details

### Training Data

Role-play training data: the `hieunguyenminh/roleplay` dataset.
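A training example pairs a character's description with dialogue in that character's voice. The sketch below shows one way such a record could be folded into a single training string; the field names (`description`, `text`) and the sample record are assumptions for illustration, not taken from the dataset card, so verify them against `hieunguyenminh/roleplay` before reuse.

```python
def format_training_example(record: dict) -> str:
    """Fold a character description and its dialogue into one training string.

    Field names here are hypothetical; check the actual dataset schema.
    """
    return f"Character: {record['description']}\n{record['text']}"

# Made-up sample record, for illustration only.
sample = {
    "description": "A stoic detective with a dry sense of humour.",
    "text": "User: Who are you?\nAssistant: The name is Doyle. Ask your question.",
}
formatted = format_training_example(sample)
print(formatted)
```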



## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 2x NVIDIA T4
- **Hours used:** 4
- **Cloud Provider:** Kaggle
- **Compute Region:** North America

## Technical Specifications

### Model Architecture and Objective

[More Information Needed]