
Kenneth Hamilton (ZennyKenny)

AI & ML interests

Building and enablement @ montebello.ai. Certified vibe coder.

Recent Activity

updated a dataset 1 day ago
ZennyKenny/TRON-dataset-v.1.0

Organizations

scikit-learn, TorchGeo, Kornia AI, Blog-explorers, OpenLLM France, Team Tonic, ZeroGPU Explorers, Data is Better Together - Russian Language Team, The Nevsky Collective, Plan Communications, MLX Community, Social Post Explorers, Hugging Face Discord Community, Data Is Better Together Contributor

ZennyKenny's activity

upvoted an article about 12 hours ago

Cohere on Hugging Face Inference Providers 🔥

reacted to mikonvergence's post with 🧠 1 day ago
πŒπ„π’π€ πŸ”οΈ π“πžπ±π­-π›πšπ¬πžπ 𝐭𝐞𝐫𝐫𝐚𝐒𝐧 𝐠𝐞𝐧𝐞𝐫𝐚𝐭𝐒𝐨𝐧 𝐦𝐨𝐝𝐞π₯

MESA is a novel generative model based on latent denoising diffusion capable of generating 2.5D representations (co-registered colour and depth maps) of terrains based on text prompt conditioning.
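Conceptually, sampling from a latent denoising diffusion model like this follows a standard DDPM-style reverse loop over a latent whose channels will later decode to colour and depth. The sketch below is a toy illustration only, with a dummy noise predictor: the function names, shapes, and schedule are invented and are not the actual MESA API.

```python
import numpy as np

def dummy_denoiser(z, t, text_embedding):
    # Stand-in for MESA's text-conditioned denoiser (hypothetical);
    # the real model predicts the noise residual at timestep t.
    return 0.1 * z + 0.01 * text_embedding.mean()

def sample_terrain(steps=50, latent_shape=(4, 32, 32), seed=0):
    """Toy DDPM-style reverse loop: 4 latent channels stand in for
    co-registered colour (3) + depth (1) before decoding."""
    rng = np.random.default_rng(seed)
    text_embedding = rng.normal(size=(77, 16))   # fake prompt embedding
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    z = rng.normal(size=latent_shape)            # start from pure noise
    for t in reversed(range(steps)):
        eps = dummy_denoiser(z, t, text_embedding)
        # DDPM posterior mean; fresh noise is added only while t > 0
        z = (z - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            z += np.sqrt(betas[t]) * rng.normal(size=latent_shape)
    return z

latents = sample_terrain()
print(latents.shape)  # (4, 32, 32)
```

In the real pipeline the sampled latent would then pass through a decoder to produce the co-registered colour and depth maps.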

Work developed by Paul Borne–Pons (@NewtNewt) during his joint internship at Adobe & ESA, and in collaboration with asterisk labs.

πŸ”οΈ 𝐏𝐫𝐨𝐣𝐞𝐜𝐭 𝐏𝐚𝐠𝐞 : https://paulbornep.github.io/mesa-terrain/

πŸ“ 𝐏𝐫𝐞𝐩𝐫𝐒𝐧𝐭 : https://arxiv.org/abs/2504.07210
πŸ€— 𝐌𝐨𝐝𝐞π₯ π–πžπ’π π‘π­π¬ : NewtNewt/MESA
πŸ’Ύ πƒπšπ­πšπ¬πžπ­ : Major-TOM/Core-DEM
πŸ§‘πŸ»β€πŸ’»β€‹π‚π¨ππž : https://github.com/PaulBorneP/MESA

𝐇𝐅 π’π©πšπœπž: mikonvergence/MESA
  • 2 replies
Β·
posted an update 1 day ago
Submitted my first dataset for the Reasoning Datasets Competition! ZennyKenny/TRON-dataset-v.1.0

This dataset is designed to post-train metareasoning agents: agents whose job is to decide, quickly and (importantly) cheaply, whether a query warrants launching a full reasoning job or can be handled with a simple completions call.
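A metareasoning gate of this kind can be sketched as a cheap router that inspects the query before any expensive call is made. The cue list and length threshold below are invented for illustration; they are not taken from the TRON dataset.

```python
# Hypothetical metareasoning router: cues and thresholds are illustrative.
REASONING_CUES = ("prove", "step by step", "why", "derive", "compare")

def route(query: str) -> str:
    """Cheap gate: decide whether a query justifies an expensive
    reasoning job or can be answered with a plain completion."""
    q = query.lower()
    needs_reasoning = len(q.split()) > 30 or any(cue in q for cue in REASONING_CUES)
    return "reasoning" if needs_reasoning else "completion"

print(route("What is the capital of France?"))                  # completion
print(route("Prove that the sum of two odd numbers is even."))  # reasoning
```

In practice the gate would be a small post-trained model rather than a keyword heuristic, but the routing contract is the same: a fast, cheap decision before the expensive call.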

There's still plenty of time to join the competition! https://www.bespokelabs.ai/blog/reasoning-datasets-competition

The generation notebook (linked in the dataset) is open source and pretty well generalized, if I do say so myself, so you can use it to make your own metareasoning datasets.

Shoutout to @onekq for his inspiring comment on this topic.
replied to their post 6 days ago

Benchmarks nowadays focus on accuracy. It would be great if we could factor in token cost, i.e. delivering the right answer with the fewest tokens. This would motivate the training to be inference efficient.

I used to complain that models don't bother to think if a problem is worthy of reasoning, and push the burden to users. We should do better on this.

Whoa. Good point.
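One way the token-cost idea above could enter a benchmark score is as tokens spent per correct answer, so verbose-but-right and wasteful-but-wrong runs are both penalized. A rough sketch, with a made-up metric name and invented numbers:

```python
def tokens_per_correct(results):
    """Hypothetical benchmark metric: token cost per correct answer.
    Lower is better."""
    correct = sum(r["correct"] for r in results)
    total_tokens = sum(r["tokens"] for r in results)
    return float("inf") if correct == 0 else total_tokens / correct

runs = [
    {"correct": True,  "tokens": 120},   # concise and right
    {"correct": True,  "tokens": 900},   # right, but verbose
    {"correct": False, "tokens": 400},   # wasted tokens
]
print(tokens_per_correct(runs))  # 710.0
```

A model that skips unnecessary reasoning would score better here even at equal accuracy, which is exactly the incentive the comment asks for.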

reacted to hesamation's post with 🔥 7 days ago
Google published a 69-page whitepaper on Prompt Engineering and its best practices, a must-read if you are using LLMs in production:
> zero-shot, one-shot, few-shot
> system prompting
> chain-of-thought (CoT)
> ReAct

> code prompting
> best practices

LINK: https://www.kaggle.com/whitepaper-prompt-engineering
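The zero-shot vs. few-shot distinction on that list comes down to whether worked examples are prepended to the question. A minimal sketch of the prompt construction (the helper and the example questions are invented, not from the whitepaper):

```python
def build_prompt(question, examples=()):
    """Zero-shot when examples is empty; few-shot when worked
    (question, answer) pairs are prepended."""
    parts = []
    for q, a in examples:          # few-shot: show the desired format first
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Is 'great product!' positive or negative?")
few_shot = build_prompt(
    "Is 'great product!' positive or negative?",
    examples=[("Is 'terrible service' positive or negative?", "negative")],
)
print(few_shot)
```

The examples both demonstrate the output format and steer the label vocabulary, which is why few-shot often beats zero-shot on classification-style tasks.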
replied to their post 8 days ago

I guess the short answer is that they handle subjective questions better and they improve model output traceability (i.e., better understanding of what informed the model's response).

Agree with your general thoughts on reasoning models, though: they aren't the best solution for every use case.

posted an update 9 days ago