Kenneth Hamilton (ZennyKenny) PRO

AI & ML interests: Building and enablement @ montebello.ai. Certified vibe coder.
Recent Activity
upvoted an article (about 12 hours ago)
Cohere on Hugging Face Inference Providers 🔥
reacted to mikonvergence's post with 🧠 (1 day ago)
MESA 🏔️ Text-based terrain generation model
MESA is a novel generative model based on latent denoising diffusion that generates 2.5D terrain representations (co-registered colour and depth maps) conditioned on text prompts.
Work developed by Paul Borne-Pons (@NewtNewt) during his joint internship at Adobe & ESA, and in collaboration with asterisk labs.
🏔️ Project page: https://paulbornep.github.io/mesa-terrain/
📄 Preprint: https://arxiv.org/abs/2504.07210
🤗 Model weights: https://www.huggingface.co/NewtNewt/MESA
💾 Dataset: https://huggingface.co/datasets/Major-TOM/Core-DEM
🧑🏻‍💻 Code: https://github.com/PaulBorneP/MESA
🚀 Space: https://huggingface.co/spaces/mikonvergence/MESA
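For anyone who wants to poke at it, here is a minimal sketch of text-conditioned generation. It assumes the released weights load through diffusers' generic pipeline loader; the pipeline class, prompt, and output handling are assumptions, so check the repo for the actual interface.

```python
# Hedged sketch, not the official API: assumes the MESA weights load
# through diffusers' generic pipeline loader. The pipeline class,
# prompt, and output handling below are assumptions; see
# https://github.com/PaulBorneP/MESA for the actual interface.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "NewtNewt/MESA",            # model weights from the post
    torch_dtype=torch.float16,
).to("cuda")

# Text-conditioned generation of a 2.5D terrain (colour + depth),
# per the post's description of the model's outputs.
result = pipe("a rugged alpine ridge carved by glacial valleys")
```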
updated a dataset (1 day ago)
ZennyKenny/TRON-dataset-v.1.0

posted an update (1 day ago)
Submitted my first dataset for the Reasoning Datasets Competition!
ZennyKenny/TRON-dataset-v.1.0
This dataset is designed to post-train metareasoning agents: agents whose job is to decide quickly (and, importantly, cheaply) whether a query warrants a full reasoning job or just a simple completions job (a sketch of the pattern follows below).
There's still plenty of time to join the competition! https://www.bespokelabs.ai/blog/reasoning-datasets-competition
The generation notebook (linked in the dataset) is open source and pretty well generalized, if I do say so myself, so you can use it to make your own metareasoning datasets.
Shoutout to @onekq for his inspiring comment on this topic.
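For context, the routing pattern the dataset targets looks roughly like this. This is a hypothetical sketch, not the notebook's actual code; the model names and router prompt are placeholders.

```python
# Hypothetical metareasoning router: a cheap, fast call decides
# whether the query deserves an expensive reasoning job. Model
# names and the routing prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

ROUTER_PROMPT = (
    "Does the following query require step-by-step reasoning to answer "
    "well? Reply with exactly one word, REASON or COMPLETE.\n\nQuery: {q}"
)

def answer(query: str) -> str:
    # Metareasoning step: quick and cheap by design.
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder cheap router model
        messages=[{"role": "user", "content": ROUTER_PROMPT.format(q=query)}],
    ).choices[0].message.content.strip()

    # Only launch the full reasoning job when it is worth the cost.
    model = "o3-mini" if verdict == "REASON" else "gpt-4o-mini"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
```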

replied to their post (6 days ago)
Benchmarks nowadays focus on accuracy. It would be great if we could also factor in token cost, i.e. delivering the right answer with the fewest tokens. This would motivate training to be inference-efficient.
I used to complain that models don't bother to think if a problem is worthy of reasoning, and push the burden to users. We should do better on this.
Whoa. Good point.
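A hedged sketch of what such a cost-aware score could look like (the linear discount is illustrative, not an established benchmark metric):

```python
# Illustrative cost-aware benchmark score: full credit for a correct
# answer at zero token cost, linearly discounted up to a budget.
def cost_aware_score(correct: bool, tokens_used: int,
                     token_budget: int = 2048) -> float:
    if not correct:
        return 0.0  # wrong answers earn nothing, however cheap
    return max(0.0, 1.0 - tokens_used / token_budget)

# A right answer in 256 tokens scores 0.875; the same answer at
# 2048+ tokens scores 0.0, rewarding inference efficiency.
```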

reacted to hesamation's post with 🔥 (7 days ago)
Google published a 69-page whitepaper on Prompt Engineering and its best practices, a must-read if you are using LLMs in production:
> zero-shot, one-shot, few-shot
> system prompting
> chain-of-thought (CoT)
> ReAct
> code prompting
> best practices
LINK: https://www.kaggle.com/whitepaper-prompt-engineering
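As a tiny illustration of two of those techniques combined (few-shot examples plus a chain-of-thought cue; the prompt text below is illustrative, not taken from the whitepaper):

```python
# Few-shot prompt with worked chain-of-thought examples, ending in a
# "think step by step" cue for the new question. Illustrative only.
FEW_SHOT_COT_PROMPT = """\
Q: A bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. How much does the ball cost?
A: Let the ball cost x. The bat costs x + 1.00, so 2x + 1.00 = 1.10, giving x = 0.05. The ball costs $0.05.

Q: I have 3 boxes of 4 apples each and eat 2 apples. How many apples remain?
A: 3 * 4 = 12 apples, minus the 2 eaten leaves 10. There are 10 apples.

Q: {question}
A: Let's think step by step.
"""

prompt = FEW_SHOT_COT_PROMPT.format(
    question="If a train travels 60 km in 40 minutes, what is its speed in km/h?"
)
```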

replied to their post (8 days ago)
I guess the short answer is that they handle subjective questions better and they improve model output traceability (i.e., better understanding of what informed the model's response).
Agree with your general thoughts on reasoning models, though; they aren't the best solution for every use case.

upvoted a paper (9 days ago)

posted an update (9 days ago)
Just signed up for the Reasoning Datasets Competition from Hugging Face, Together AI, and Bespoke Labs!
Looking forward to seeing what the community comes up with to help train better reasoning models.
Join the fray: https://www.bespokelabs.ai/blog/reasoning-datasets-competition