Update organization card with MobileCLIP 2 and FastVLM (#3)
Commit 6ae5617bf095b33292866c2736500ab4620ad3b0
Co-authored-by: Pedro Cuenca <[email protected]>
README.md CHANGED
@@ -14,6 +14,7 @@ Welcome to the official Hugging Face organization for Apple!
[Core ML](https://developer.apple.com/machine-learning/core-ml/) is optimized for on-device performance of a broad variety of model types by leveraging Apple Silicon and minimizing memory footprint and power consumption.

* Models
+    - [FastVLM Core ML](https://huggingface.co/models?library=coreml&other=ml-fastvlm): On-device Vision-Language Model.
    - [Depth Anything V2 Core ML](https://huggingface.co/collections/apple/core-ml-depth-anything-66727e780bc71c005763baf9): State-of-the-art depth estimation
    - [DETR Resnet50 Core ML](https://huggingface.co/apple/coreml-detr-semantic-segmentation): Semantic Segmentation
    - [FastViT Core ML](https://huggingface.co/collections/apple/core-ml-fastvit-666b782d98d6421a15237897): Image Classification
@@ -25,6 +26,8 @@ Welcome to the official Hugging Face organization for Apple!
Open research to enable the community to deliver amazing experiences that improve the lives of millions of people every day.

* Models
+    - [MobileCLIP 2](https://huggingface.co/collections/apple/mobileclip2-68ac947dcb035c54bcd20c47): Mobile-friendly SOTA image-text models.
+    - [FastVLM](https://huggingface.co/collections/apple/fastvlm-68ac97b9cd5cacefdd04872e): Efficient Vision Language Models.
    - [DepthPro](https://huggingface.co/collections/apple/depthpro-models-66fee63b2f0dc1b231375ca6): State-of-the-art monocular depth estimation.
    - OpenELM [Base](https://huggingface.co/collections/apple/openelm-pretrained-models-6619ac6ca12a10bd0d0df89e) | [Instruct](https://huggingface.co/collections/apple/openelm-instruct-models-6619ad295d7ae9f868b759ca): open, Transformer-based language model.
    - [MobileCLIP](https://huggingface.co/collections/apple/mobileclip-models-datacompdr-data-665789776e1aa2b59f35f7c8): Mobile-friendly image-text models.
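For readers coming from the card, here is a minimal sketch (not part of this commit) of how one of the Core ML models linked above can be fetched from the Hub and inspected locally. It assumes `huggingface_hub` and `coremltools` are installed; the `.mlpackage` file name is a placeholder, so check the repo's file listing for the actual name, and note that running predictions with Core ML requires macOS.

```python
# Hedged sketch: download a Core ML model repo listed in the card and inspect its spec.
from huggingface_hub import snapshot_download
import coremltools as ct

# Repo linked in the card above; returns the local path of the downloaded snapshot.
local_dir = snapshot_download("apple/coreml-detr-semantic-segmentation")

# The .mlpackage name below is a placeholder; use the file actually present in the snapshot.
model = ct.models.MLModel(f"{local_dir}/DETRResnet50SemanticSegmentationF16.mlpackage")

spec = model.get_spec()
print(spec.description.input)   # declared model inputs (names, types, image sizes)
print(spec.description.output)  # declared model outputs
```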