---
language:
- en
library_name: transformers
base_model: meta-llama/Llama-3.2-1B
license: apache-2.0
---
## Model Information
We release the HTML pruner model used in **HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems**.
<p align="left">
Useful links: 📝 <a href="https://arxiv.org/abs/2411.02959" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/zstanjj/HTML-Pruner-Llama-1B/" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/plageon/HTMLRAG" target="_blank">GitHub</a>
</p>
We propose HtmlRAG, which uses HTML instead of plain text as the format of external knowledge in RAG systems. To tackle the long context brought by HTML, we propose **Lossless HTML Cleaning** and **Two-Step Block-Tree-Based HTML Pruning**.
- **Lossless HTML Cleaning**: This cleaning process removes only completely irrelevant content and compresses redundant structures, retaining all semantic information in the original HTML. The compressed HTML produced by lossless cleaning suits RAG systems that use long-context LLMs and do not want to lose any information before generation.
- **Two-Step Block-Tree-Based HTML Pruning**: Block-tree-based HTML pruning consists of two steps, both conducted on the block tree structure. The first step uses an embedding model to score blocks, while the second uses a path-generative model. The first step prunes the output of lossless HTML cleaning, and the second step prunes the output of the first step.
🌹 If you use this model, please ✨star our **[GitHub repository](https://github.com/plageon/HTMLRAG)** to support us. Your star means a lot!
## 📦 Installation
Install the package using pip:
```bash
pip install htmlrag
```
Or install the package from source (from a clone of the repository):
```bash
pip install -e .
```
---
## 📖 User Guide
### 🧹 HTML Cleaning
```python
from htmlrag import clean_html
question = "When was the bellagio in las vegas built?"
html = """
<html>
<head>
<title>When was the bellagio in las vegas built?</title>
</head>
<body>
<p class="class0">The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
</body>
<div>
<div>
<p>Some other text</p>
<p>Some other text</p>
</div>
</div>
<p class="class1"></p>
<!-- Some comment -->
<script type="text/javascript">
document.write("Hello World!");
</script>
</html>
"""
simplified_html = clean_html(html)
print(simplified_html)
# <html>
# <title>When was the bellagio in las vegas built?</title>
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# <div>
# <p>Some other text</p>
# <p>Some other text</p>
# </div>
# </html>
```
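In practice, the input usually comes from retrieved web pages rather than an inline string. Below is a minimal sketch of cleaning a fetched page; the URL and the use of `requests` are illustrative, and only the `clean_html` API shown above is assumed:

```python
# Illustrative sketch: clean a live page. The URL and `requests` usage
# are examples only; clean_html is the API demonstrated above.
import requests
from htmlrag import clean_html

url = "https://en.wikipedia.org/wiki/Bellagio_(resort)"  # any HTML page
raw_html = requests.get(url, timeout=10).text

simplified_html = clean_html(raw_html)
# The cleaned HTML keeps the semantic text but drops scripts, comments,
# and redundant wrapper tags, so it is typically much shorter.
print(f"raw: {len(raw_html)} chars, cleaned: {len(simplified_html)} chars")
```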
### 🌲 Build Block Tree
```python
from htmlrag import build_block_tree
block_tree, simplified_html = build_block_tree(simplified_html, max_node_words=10)
for block in block_tree:
    print("Block Content: ", block[0])
    print("Block Path: ", block[1])
    print("Is Leaf: ", block[2])
    print("")
# Block Content: <title>When was the bellagio in las vegas built?</title>
# Block Path: ['html', 'title']
# Is Leaf: True
#
# Block Content: <div>
# <p>Some other text</p>
# <p>Some other text</p>
# </div>
# Block Path: ['html', 'div']
# Is Leaf: True
#
# Block Content: <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# Block Path: ['html', 'p']
# Is Leaf: True
```
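The `max_node_words` argument caps how many words a block may contain before it is split into finer child blocks, so smaller values tend to produce more, smaller blocks. A quick hedged comparison, assuming only the `build_block_tree` signature shown above:

```python
# Illustrative: smaller max_node_words generally yields a finer-grained
# block tree. Only the build_block_tree signature shown above is assumed.
for max_words in (10, 50):
    tree, _ = build_block_tree(simplified_html, max_node_words=max_words)
    print(f"max_node_words={max_words}: {len(tree)} blocks")
```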
### ✂️ Prune HTML Blocks with Embedding Model
```python
from htmlrag import EmbedHTMLPruner
embed_html_pruner = EmbedHTMLPruner(embed_model="bm25")
block_rankings = embed_html_pruner.calculate_block_rankings(question, simplified_html, block_tree)
print(block_rankings)
# [0, 2, 1]
from transformers import AutoTokenizer
chat_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-70B-Instruct")
max_context_window = 60
pruned_html = embed_html_pruner.prune_HTML(simplified_html, block_tree, block_rankings, chat_tokenizer, max_context_window)
print(pruned_html)
# <html>
# <title>When was the bellagio in las vegas built?</title>
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# </html>
```
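The `max_context_window` argument is a token budget measured with the chat model's tokenizer. As a sanity check you can count the pruned HTML's tokens with the same tokenizer; note the exact accounting inside `prune_HTML` (e.g. special tokens) may differ slightly, so treat this as an approximation:

```python
# Approximate check that the pruned HTML fits the token budget.
n_tokens = len(chat_tokenizer.encode(pruned_html, add_special_tokens=False))
print(f"pruned HTML: {n_tokens} tokens (budget: {max_context_window})")
```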
### ✂️ Prune HTML Blocks with Generative Model
```python
from htmlrag import GenHTMLPruner
import torch
ckpt_path = "zstanjj/HTML-Pruner-Llama-1B"
if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
gen_embed_pruner = GenHTMLPruner(gen_model=ckpt_path, max_node_words=5, device=device)
block_rankings = gen_embed_pruner.calculate_block_rankings(question, pruned_html)
print(block_rankings)
# [1, 0]
block_tree, pruned_html = build_block_tree(pruned_html, max_node_words=10)
for block in block_tree:
    print("Block Content: ", block[0])
    print("Block Path: ", block[1])
    print("Is Leaf: ", block[2])
    print("")
# Block Content: <title>When was the bellagio in las vegas built?</title>
# Block Path: ['html', 'title']
# Is Leaf: True
#
# Block Content: <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# Block Path: ['html', 'p']
# Is Leaf: True
max_context_window = 32
pruned_html = gen_embed_pruner.prune_HTML(pruned_html, block_tree, block_rankings, chat_tokenizer, max_context_window)
print(pruned_html)
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
```
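Putting the guide together, the sketch below chains cleaning, block-tree construction, and both pruning steps into a single helper. It uses only the APIs demonstrated above; the `htmlrag_prune` function itself, its default budgets, and the hard-coded settings are illustrative, not part of the `htmlrag` package:

```python
# Illustrative end-to-end helper: clean -> block tree -> embedding prune
# -> generative prune. Uses only the APIs demonstrated in this guide;
# the function and its default budgets are hypothetical.
import torch
from htmlrag import clean_html, build_block_tree, EmbedHTMLPruner, GenHTMLPruner

def htmlrag_prune(question, html, chat_tokenizer, embed_budget=60, gen_budget=32):
    simplified = clean_html(html)
    block_tree, simplified = build_block_tree(simplified, max_node_words=10)

    # Step 1: coarse pruning with the embedding-based (here BM25) scorer.
    embed_pruner = EmbedHTMLPruner(embed_model="bm25")
    rankings = embed_pruner.calculate_block_rankings(question, simplified, block_tree)
    pruned = embed_pruner.prune_HTML(simplified, block_tree, rankings,
                                     chat_tokenizer, embed_budget)

    # Step 2: fine pruning with the generative path scorer.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    gen_pruner = GenHTMLPruner(gen_model="zstanjj/HTML-Pruner-Llama-1B",
                               max_node_words=5, device=device)
    rankings = gen_pruner.calculate_block_rankings(question, pruned)
    block_tree, pruned = build_block_tree(pruned, max_node_words=10)
    return gen_pruner.prune_HTML(pruned, block_tree, rankings,
                                 chat_tokenizer, gen_budget)
```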
## Results
- **Results for [HTML-Pruner-Phi-3.8B](https://huggingface.co/zstanjj/HTML-Pruner-Phi-3.8B) and [HTML-Pruner-Llama-1B](https://huggingface.co/zstanjj/HTML-Pruner-Llama-1B) with Llama-3.1-70B-Instruct as chat model**.
| Dataset          | ASQA (EM) | HotpotQA (EM) | NQ (EM)   | TriviaQA (EM) | MuSiQue (EM) | ELI5 (ROUGE-L) |
|------------------|-----------|---------------|-----------|---------------|--------------|----------------|
| BM25             | 49.50     | 38.25         | 47.00     | 88.00         | 9.50         | 16.15          |
| BGE              | 68.00     | 41.75         | 59.50     | 93.00         | 12.50        | 16.20          |
| E5-Mistral       | 63.00     | 36.75         | 59.50     | 90.75         | 11.00        | 16.17          |
| LongLLMLingua    | 62.50     | 45.00         | 56.75     | 92.50         | 10.25        | 15.84          |
| JinaAI Reader    | 55.25     | 34.25         | 48.25     | 90.00         | 9.25         | 16.06          |
| HtmlRAG-Phi-3.8B | **68.50** | **46.25**     | 60.50     | **93.50**     | **13.25**    | **16.33**      |
| HtmlRAG-Llama-1B | 66.50     | 45.00         | **60.75** | 93.00         | 10.00        | 16.25          |
---
## 📜 Citation
```bibtex
@misc{tan2024htmlraghtmlbetterplain,
title={HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems},
author={Jiejun Tan and Zhicheng Dou and Wen Wang and Mang Wang and Weipeng Chen and Ji-Rong Wen},
year={2024},
eprint={2411.02959},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2411.02959},
}
```