Lin Tan (lin-tan)
17 followers • 8 following
https://www.cs.purdue.edu/homes/lintan/
Lin0Tan • lin-tan • lintan
AI & ML interests
AI-Software Synergy. LLM4Code (binary and source code). Mary J. Elmore New Frontiers Professor, Purdue University.
Recent Activity

replied to their post • 14 days ago
Can language models replace developers? #RepoCod says "Not Yet", because GPT-4o and other LLMs have <30% accuracy/pass@1 on real-world code generation tasks.
- Leaderboard: https://lt-asset.github.io/REPOCOD/
- Dataset: https://huggingface.co/datasets/lt-asset/REPOCOD
@jiang719 @shanchao @Yiran-Hu1007

Compared to #SWEBench, RepoCod tasks are
- General code generation tasks, while SWE-Bench tasks resolve pull requests from GitHub issues.
- With 2.6X more tests per task (313.5 compared to SWE-Bench's 120.8).

Compared to #HumanEval, #MBPP, #CoderEval, and #ClassEval, RepoCod has 980 instances from 11 Python projects, with
- Whole-function generation
- Repository-level context
- Validation with test cases, and
- Real-world complex tasks: the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00)

Introducing #RepoCod-Lite 🐟 for faster evaluations: 200 of the toughest tasks from RepoCod with:
- 67 repository-level, 67 file-level, and 66 self-contained tasks
- Detailed problem descriptions (967 tokens) and long canonical solutions (918 tokens)
- GPT-4o and other LLMs have <10% accuracy/pass@1 on RepoCod-Lite tasks.
- Dataset: https://huggingface.co/datasets/lt-asset/REPOCOD_Lite

#LLM4code #LLM #CodeGeneration #Security
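As a quick way to try the benchmark, here is a minimal sketch that loads RepoCod from the Hub with the Hugging Face `datasets` library and scores pass@1 from per-task pass/fail outcomes. The `pass_at_1` helper and the example results are illustrative assumptions, not part of the official RepoCod evaluation harness; inspect the printed splits and columns for the dataset's actual schema.

```python
# Minimal sketch (not the official RepoCod harness): load the benchmark from
# the Hugging Face Hub and compute pass@1 from per-task pass/fail outcomes.
from datasets import load_dataset

repocod = load_dataset("lt-asset/REPOCOD")   # dataset id from the post above
print(repocod)                               # inspect available splits and columns

def pass_at_1(passed_flags):
    """pass@1 with one sample per task: the fraction of tasks whose single
    generated function passed all of that task's test cases."""
    return sum(passed_flags) / len(passed_flags) if passed_flags else 0.0

# Hypothetical outcomes for four tasks, each judged by the project's own tests.
print(f"pass@1 = {pass_at_1([True, False, False, True]):.1%}")
```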
reacted to their post with 👍 • 18 days ago
🚀 Excited to share that our paper, "SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models", has been accepted to #ICRA2025!
🔗 Preprint: https://arxiv.org/pdf/2409.19471

We introduce SELP (Safe Efficient LLM Planner), a novel approach for generating plans that adhere to user-specified constraints while optimizing for time-efficient execution. By leveraging linear temporal logic (LTL) to interpret natural language commands, SELP effectively handles complex commands and long-horizon tasks. 🤖

💡 SELP presents three key insights:
1️⃣ Equivalence Voting: Ensures robust translation from natural language instructions into LTL specifications.
2️⃣ Constrained Decoding: Uses the generated LTL formula to guide the autoregressive inference of plans, ensuring the generated plans conform to the LTL.
3️⃣ Domain-Specific Fine-Tuning: Customizes LLMs for specific robotic tasks, boosting both safety and efficiency.

📊 Experiments: Our experiments demonstrate SELP's effectiveness and generalizability across diverse tasks. In drone navigation, SELP outperforms state-of-the-art LLM planners by 10.8% in safety rate and by 19.8% in plan efficiency. For robot manipulation, SELP achieves a 20.4% improvement in safety rate.

@yiwu @jiang719
#ICRA2025 #LLM #Robotics #Agent #LLMPlanner
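To make the equivalence-voting idea concrete, the sketch below samples several candidate LTL translations of a command, buckets candidates that an equivalence check treats as the same formula, and keeps a representative of the largest bucket. This is an illustrative stand-in, not code from the SELP paper: the whitespace-insensitive key is an assumed placeholder for a real LTL equivalence check, and the sample formulas are made up.

```python
# Illustrative sketch of equivalence voting over sampled LTL translations.
# The "equivalence" test is only whitespace-insensitive string equality,
# a placeholder for a genuine LTL equivalence check (e.g., via automata).
from collections import defaultdict

def equivalence_vote(candidate_formulas):
    """Group candidate LTL formulas into equivalence buckets and return a
    representative of the largest bucket (the majority translation)."""
    buckets = defaultdict(list)
    for formula in candidate_formulas:
        key = "".join(formula.split())   # placeholder equivalence key
        buckets[key].append(formula)
    largest = max(buckets.values(), key=len)
    return largest[0]

# Hypothetical samples an LLM might produce for
# "eventually reach A, then B, and always avoid C":
samples = [
    "F(A & F(B)) & G(!C)",
    "F (A & F (B)) & G(!C)",   # same formula, different spacing
    "F(B) & G(!C)",            # a translation that dropped the A constraint
]
print(equivalence_vote(samples))   # -> "F(A & F(B)) & G(!C)"
```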
reacted to their post with 🔥 • about 1 month ago (same SELP announcement as above)
Organizations

lin-tan's activity
liked a dataset • 2 months ago
lt-asset/REPOCOD_Lite_Unified
Viewer • Updated 28 days ago • 746 rows • 386 downloads • 1 like

liked a dataset • 3 months ago
lt-asset/REPOCOD_Lite
Viewer • Updated Dec 3, 2024 • 200 rows • 85 downloads • 1 like

liked a model • 4 months ago
lt-asset/Waffle_VLM_WebSight
Updated Jan 15 • 72 downloads • 12 likes

liked 2 datasets • 4 months ago
lt-asset/REPOCOD
Viewer • Updated Dec 3, 2024 • 980 rows • 131 downloads • 8 likes
lt-asset/collu-bench
Viewer • Updated Oct 13, 2024 • 13.2k rows • 90 downloads • 5 likes

liked 2 models • 5 months ago
lt-asset/nova-6.7b
Feature Extraction • Updated Oct 8, 2024 • 35 downloads • 5 likes
lt-asset/nova-6.7b-bcr
Updated Oct 8, 2024 • 107 downloads • 5 likes

liked 2 models • 6 months ago
lt-asset/nova-1.3b-bcr
Text Generation • Updated Oct 8, 2024 • 1.07k downloads • 5 likes
lt-asset/nova-1.3b
Text Generation • Updated Oct 8, 2024 • 412 downloads • 4 likes