---
license: mit
license_link: https://huggingface.co/rhysjones/Phi-3-mini-mango-1-llamafied/resolve/main/LICENSE

language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
  - messages:
      - role: user
        content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

## Model Summary

Phi-3-mini-mango-1-llamafied is an instruct finetune of [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) with a 4K context length and 3.8B parameters.

It is a first cut at finetuning Phi-3 (which is a great model!) to explore its properties and behaviour. More to follow.

This version of the model has had its weight layers converted to Llama format using @vonjack's [vonjack/Phi-3-mini-4k-instruct-LLaMAfied](https://huggingface.co/vonjack/Phi-3-mini-4k-instruct-LLaMAfied) conversion script,
allowing it to run within the Llama toolset ecosystem without modification or `trust_remote_code`. It is also interesting to see how resilient the model remains after conversion.
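
As a minimal sketch of what this enables, the model can be loaded through the standard Llama code path in 🤗 Transformers. The prompt below is the widget example from this card; the dtype, device map, and generation settings are illustrative assumptions, not recommendations from the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/Phi-3-mini-mango-1-llamafied"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # No trust_remote_code needed: the weights are already in Llama format.
)

messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
]

# Build the chat prompt and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)

# Print only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```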

The original Phi-3 format weights of this model are available at [rhysjones/Phi-3-mini-mango-1](https://huggingface.co/rhysjones/Phi-3-mini-mango-1).