sdk: static
pinned: false
---
# LiteRT Community
A community org for developers to discover models that are ready for deployment to edge platforms. [LiteRT](https://ai.google.dev/edge/litert), formerly known as TensorFlow Lite, is a high-performance runtime for on-device AI.

Models in the organization are pre-converted and ready to be used on [Android](https://ai.google.dev/edge/litert/android) and [iOS](https://ai.google.dev/edge/litert/ios/quickstart). For more information on how to run these models, see our [LiteRT Documentation](https://ai.google.dev/edge/litert).
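While the deployment targets are Android and iOS, a converted model can be smoke-tested on a desktop first. A minimal sketch using the LiteRT Python interpreter, assuming the `ai-edge-litert` pip package and a local file named `model.tflite` (both names illustrative):

```python
# Desktop sanity check for a converted LiteRT model.
# Assumes: pip install ai-edge-litert numpy, and a local model.tflite.
import numpy as np
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data shaped and typed like the model's first input.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(
    input_details[0]["dtype"]
)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# Print the output shape as a basic "it runs" signal.
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```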
## LLMs
To make running LLMs as simple as possible, LiteRT models can be bundled into `.task` files compatible with the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference). The MediaPipe LLM Inference API wraps LiteRT to provide a simple prompt-in, response-out interface on [Android](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/android), [iOS](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/ios), and [Web](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/web_js).
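As a sketch of what bundling looks like, MediaPipe ships a Python task bundler; the field names below follow the MediaPipe LLM Inference docs but may vary by version, and all file names are placeholders:

```python
# Bundle a converted LLM and its tokenizer into a MediaPipe .task file.
# Assumes: pip install mediapipe; file names below are placeholders.
from mediapipe.tasks.python.genai import bundler

config = bundler.BundleConfig(
    tflite_model="model.tflite",        # converted LiteRT model
    tokenizer_model="tokenizer.model",  # SentencePiece tokenizer
    start_token="<bos>",
    stop_tokens=["<eos>"],
    output_filename="model.task",
    enable_bytes_to_unicode_mapping=False,
)
bundler.create_bundle(config)
```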
## How to Convert and Contribute Models
Follow the instructions for converting from [TensorFlow](https://ai.google.dev/edge/litert/models/convert_tf), [PyTorch](https://github.com/google-ai-edge/ai-edge-torch), or [JAX](https://ai.google.dev/edge/litert/models/convert_jax).
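As an illustration of the PyTorch path, conversion with `ai-edge-torch` is a single call; a minimal sketch, assuming the `ai-edge-torch` and `torchvision` packages, with MobileNetV2 standing in for your model:

```python
# Convert a PyTorch model to a LiteRT .tflite file with ai-edge-torch.
# Assumes: pip install ai-edge-torch torchvision; MobileNetV2 is a stand-in.
import ai_edge_torch
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)  # inputs used for tracing

# convert() traces the model with the sample inputs and lowers it to LiteRT.
edge_model = ai_edge_torch.convert(model, sample_inputs)
edge_model.export("mobilenet_v2.tflite")
```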
For LLMs specifically, use the [LiteRT Torch Generative API](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative).

Once your model is converted, join the LiteRT Community org and add the model yourself.
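Models can be added through the Hub UI or programmatically; a minimal sketch using `huggingface_hub`, assuming org membership, a valid token, and an illustrative repo id:

```python
# Upload a converted model file to a repo in the LiteRT Community org.
# Assumes: pip install huggingface_hub, a valid HF token, and org membership.
# The repo id "litert-community/your-model" is illustrative.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo(repo_id="litert-community/your-model", exist_ok=True)
api.upload_file(
    path_or_fileobj="model.tflite",
    path_in_repo="model.tflite",
    repo_id="litert-community/your-model",
)
```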