---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">WizardLM 13b - Open Assistant</h1>
<h2 style="text-align: center">An instruction-following LLaMA model trained on fully evolved instructions.</h2>
## Model Details
This is a LoRA merge of Open Assistant 13b (4 epochs) with WizardLM-13b Uncensored. <br>
https://huggingface.co/serpdotai/llama-oasst-lora-13B <br>
https://huggingface.co/ehartford/WizardLM-13B-Uncensored
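A LoRA merge folds the adapter's low-rank update into the frozen base weights. As a rough sketch of the underlying arithmetic only (toy NumPy shapes and random values, not the real 13B checkpoint or the exact tooling used for this model):

```python
import numpy as np

# A LoRA adapter stores two low-rank factors per adapted weight matrix:
#   A: (r, d_in)  down-projection
#   B: (d_out, r) up-projection
# Merging applies the update in place: W' = W + (alpha / r) * B @ A,
# after which the adapter files are no longer needed at inference time.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16  # hypothetical sizes for illustration

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection

scaling = alpha / r
W_merged = W + scaling * (B @ A)
```

The merged matrix has the same shape as the base weight, so the result loads like an ordinary full checkpoint.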
<html>
<head>
<style>
table {
border:1px solid #b3adad;
border-collapse:collapse;
padding:5px;
}
table th {
border:1px solid #b3adad;
padding:5px;
background: #f0f0f0;
color: #313030;
}
table td {
border:1px solid #b3adad;
text-align:center;
padding:5px;
background: #ffffff;
color: #313030;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<th>Model:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</body>
</html>
<br><b>Other benchmark scores are at the bottom of the readme.</b>
<hr>
WizardLM 13b - Open Assistant is an instruct model based on Meta's LLaMA-13B.
<hr>
<p><strong><font size="5">Click to expand benchmarks for the different quantized variations</font></strong></p>
<strong><font size="4">Lower scores are better.</font></strong>
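The Wikitext2, Ptb-New, and C4-New columns are the perplexity benchmarks commonly reported for GPTQ quantizations, which is why lower is better. As an illustration of what such a score measures (made-up token probabilities, not real model outputs):

```python
import math

# Perplexity is exp(mean negative log-likelihood) over the evaluated tokens.
# The probabilities below are hypothetical; real scores come from running the
# quantized model over held-out Wikitext2 / PTB / C4 text.
token_probs = [0.25, 0.10, 0.50, 0.05]  # model's probability for each true token

nll = [-math.log(p) for p in token_probs]
perplexity = math.exp(sum(nll) / len(nll))
```

A model that assigned probability 1.0 to every token would score a perplexity of 1; less confident predictions push the score up.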
<html>
<body>
<details>
<summary>Benchmarks Sorted by C4-New score</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>