Habana AI

company
Activity Feed

AI & ML interests

Habana Labs was founded in 2016 to create world-class AI Processors, developed from the ground up and optimized for training deep neural networks and for inference deployment in production environments.

Recent Activity

regisss new activity about 2 months ago
Habana/mamba: Upload 2 files
regisss new activity 4 months ago
Habana/mamba: Upload 2 files
regisss new activity 4 months ago
Habana/mamba: Upload 2 files

Habana's activity

jeffboudier 
posted an update 3 days ago
Llama4 is out and Scout is already on the Dell Enterprise Hub to deploy on Dell systems 👉 dell.huggingface.co
jeffboudier 
posted an update 6 days ago
Enterprise orgs now enable serverless Inference Providers for all members
- includes $2 free usage per org member (e.g. an Enterprise org with 1,000 members shares $2,000 in free credit each month)
- admins can set a monthly spend limit for the entire org
- works today with Together, fal, Novita, Cerebras and HF Inference.

Here's the doc to bill Inference Providers usage to your org: https://huggingface.co/docs/inference-providers/pricing#organization-billing
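
Not from the post itself, just a minimal sketch of what org-billed usage could look like in code, assuming huggingface_hub's InferenceClient exposes the provider and bill_to arguments that the pricing doc above describes (the org name and model are placeholders):

from huggingface_hub import InferenceClient

# Assumption: `provider` picks the serverless Inference Provider and `bill_to`
# charges the usage to the Enterprise org rather than the individual user.
client = InferenceClient(
    provider="together",          # or fal, novita, cerebras, hf-inference
    bill_to="my-enterprise-org",  # placeholder org name
)

response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)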
regisss 
posted an update about 2 months ago
Nice paper comparing the FP8 inference efficiency of the NVIDIA H100 and Intel Gaudi 2: An Investigation of FP8 Across Accelerators for LLM Inference (2502.01070)

The conclusion is interesting: "Our findings highlight that the Gaudi 2, by leveraging FP8, achieves higher throughput-to-power efficiency during LLM inference"
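
(The "throughput-to-power efficiency" they report boils down to tokens generated per second divided by average power draw; a tiny illustrative sketch with placeholder numbers, not figures from the paper:)

def throughput_per_watt(tokens_generated: int, seconds: float, avg_power_watts: float) -> float:
    """Tokens per second per watt for a single inference run."""
    return (tokens_generated / seconds) / avg_power_watts

# e.g. 12,000 tokens generated in 10 s at an average board power of 600 W
print(throughput_per_watt(12_000, 10.0, 600.0))  # -> 2.0 tokens/s/W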

One often overlooked aspect of AI hardware accelerators is that they can consume less energy than GPUs. It's nice to see researchers starting to carry out experiments to measure this!

Gaudi3 results soon...
regisss 
in Habana/mamba about 2 months ago

Upload 2 files

#3 opened about 2 months ago by zzhang37
jeffboudier 
posted an update 3 months ago
NVIDIA just announced the Cosmos World Foundation Models, available on the Hub: nvidia/cosmos-6751e884dc10e013a0a0d8e6

Cosmos is a family of pre-trained models purpose-built for generating physics-aware videos and world states to advance physical AI development.
The release also includes tokenizers: nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6

Learn more in this great community article by @mingyuliutw and @PranjaliJoshi https://huggingface.co/blog/mingyuliutw/nvidia-cosmos
regisss 
posted an update 4 months ago
regisss 
in Habana/mamba 4 months ago

Upload 2 files

#2 opened 4 months ago by zzhang37

Upload 2 files

#1 opened 4 months ago by zzhang37
jeffboudier 
posted an update 5 months ago
regisss 
posted an update 6 months ago
Interested in performing inference with an ONNX model?⚡️

The Optimum docs about model inference with ONNX Runtime are now much clearer and simpler!

Want to deploy your favorite model from the Hub but don't know how to export it to the ONNX format? You can do it in one line of code as follows:
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the model from the hub and export it to the ONNX format
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
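
Once exported, the ORT model should also plug into a regular Transformers pipeline; a short follow-up sketch (not part of the original snippet), continuing from the lines above:

from transformers import AutoTokenizer, pipeline

# Reuse the exported ONNX model with the usual Transformers tooling
tokenizer = AutoTokenizer.from_pretrained(model_id)
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Exporting to ONNX with Optimum is easy!"))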

Check out the whole guide 👉 https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models
jeffboudier 
posted an update 6 months ago
jeffboudier 
posted an update 7 months ago
Inference Endpoints got a bunch of cool updates yesterday; here are my top 3.
jeffboudier 
posted an update 7 months ago
Pro tip: if you're a Firefox user, you can set up Hugging Chat as an integrated AI Assistant, with contextual links to summarize or simplify any text. Handy!

In this short video I show how to set it up
jeffboudier 
posted an update 11 months ago