AI & ML interests

None defined yet.

Recent Activity

Stanford's activity

Taylor658
posted an update about 2 months ago
๐ŸŒ The Stanford Institute for Human-Centered AI (https://aiindex.stanford.edu/vibrancy/) has released its 2024 Global AI Vibrancy Tool, a way to explore and compare AI progress across 36 countries.

It measures progress across 8 broad pillars: R&D, Responsible AI, Economy, Education, Diversity, Policy and Governance, Public Opinion, and Infrastructure. (Each of these pillars has a number of sub-indices.)

As a whole, it is not surprising that the USA sat at the top of the overall score as of 2023 (AI investment activity, a large part of the Economy pillar, drives much of the USA's ranking), but drilling into more strategic macro pillars like Education, Infrastructure, or R&D reveals interesting growth patterns in Asia (particularly China) and Western Europe that I suspect the 2024 metrics will bear out.

Hopefully the 2024 Global Vibrancy ranking will break out AI and ML verticals like computer vision, NLP, and the AI agent space, as that could also give global, macro-level indications of what is to come for AI in 2025.
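The pillar structure above suggests how a composite country ranking can be computed. Here is a minimal sketch of a weighted pillar index: the pillar names come from the post, but the weights and scores are invented for illustration and this is not the AI Index's actual methodology.

```python
# Hypothetical composite-index sketch: combine normalized pillar scores
# into one country score. Scores and weights are illustrative only.
PILLARS = ["R&D", "Responsible AI", "Economy", "Education",
           "Diversity", "Policy and Governance", "Public Opinion",
           "Infrastructure"]

def composite_score(pillar_scores, weights=None):
    """Weighted mean of pillar scores normalized to [0, 1]."""
    if weights is None:
        weights = {p: 1.0 for p in pillar_scores}  # equal weighting
    total_w = sum(weights[p] for p in pillar_scores)
    return sum(pillar_scores[p] * weights[p] for p in pillar_scores) / total_w

# Made-up scores for one country, paired with the pillar names above.
usa = dict(zip(PILLARS, [0.9, 0.6, 0.95, 0.7, 0.5, 0.6, 0.55, 0.8]))
print(round(composite_score(usa), 3))  # 0.7
```

Re-weighting toward the "strategic" pillars (Education, Infrastructure, R&D) is what would surface the growth patterns mentioned above.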
Taylor658
posted an update 2 months ago
Function Calling is a key component of Agent workflows. To call functions, an LLM needs a way to interact with other systems and run code. This usually means connecting it to a runtime environment that can handle function calls, data, and security.

Per the Berkeley Function-Calling Leaderboard, only 2 of the top 20 models with built-in function calling are fully open source as of 17 Nov 2024 (the other 2 non-closed-source models in the top 20 carry cc-by-nc-4.0 licenses).
https://gorilla.cs.berkeley.edu/leaderboard.html

The 2 open-source models in the top 20 that currently support function calling are:

meetkai/functionary-medium-v3.1
Team-ACE/ToolACE-8B

This is both a huge disadvantage and an opportunity for the open-source community as enterprises, small businesses, government agencies, etc. quickly adopt Agents and Agent workflows over the next few months. Open source will have a lot of catching up to do, since enterprises that initially build their Agent workflows on closed-source models will be hesitant to switch to an open-source alternative later.

Hopefully more open source models will support function calling in the near future.
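To make the runtime-environment point concrete, here is a minimal sketch of a function-calling loop, assuming the model emits a JSON tool call. The `get_weather` tool and the hard-coded model reply are hypothetical stand-ins, not any specific model's API.

```python
import json

# Hypothetical function-calling loop: the model emits a JSON "tool call",
# the runtime executes it and would feed the result back to the model.
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},  # fake tool
}

def handle(model_reply: str):
    """Parse a model tool call, run the matching function, return its result."""
    call = json.loads(model_reply)
    fn = TOOLS[call["name"]]        # look up the requested tool
    return fn(**call["arguments"])  # execute with model-supplied arguments

# Stand-in for what a function-calling model might emit.
result = handle('{"name": "get_weather", "arguments": {"city": "Seattle"}}')
print(result)  # {'city': 'Seattle', 'temp_c': 21}
```

A production runtime would also validate the arguments against a schema and sandbox the call, which is the "security" part mentioned above.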
Taylor658
posted an update 3 months ago
The Mystery Bot saga I posted about earlier this week has been solved...

Cohere for AI has just announced its open-source Aya Expanse multilingual model. The initial release supports 23 languages, with more on the way soon.

You can also try Aya Expanse via WhatsApp on your mobile phone using the global number or one of the initial set of country-specific numbers listed below.

๐ŸŒWhatsApp - +14313028498
Germany - (+49) 1771786365
USA โ€“ +18332746219
United Kingdom โ€” (+44) 7418373332
Canada โ€“ (+1) 2044107115
Netherlands โ€“ (+31) 97006520757
Brazil โ€” (+55) 11950110169
Portugal โ€“ (+351) 923249773
Italy โ€“ (+39) 3399950813
Poland - (+48) 459050281
Taylor658
posted an update 3 months ago
Spent the weekend testing out some prompts with Mystery Bot on my mobile... exciting things are coming soon for the following languages:

๐ŸŒArabic, Chinese, Czech, Dutch, English French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese!๐ŸŒ
Taylor658
posted an update 4 months ago
Taylor658
posted an update 5 months ago
Andrew Ng recently gave a strong defense of open-source AI models at Stanford GSB, arguing for slowing legislative efforts in the US and the EU that would restrict innovation in open-source AI.

See the video below:
https://youtu.be/yzUdmwlh1sQ?si=bZc690p8iubolXm_
Taylor658
posted an update 7 months ago
Researchers from Auburn University and the University of Alberta have explored the limitations of Vision Language Models (VLMs) in their recently published paper, "Vision Language Models Are Blind" (arXiv:2407.06581).

Key Findings:
VLMs, including GPT-4o, Gemini-1.5 Pro, Claude-3 Sonnet, and Claude-3.5 Sonnet, struggle with basic visual tasks.
Tasks such as identifying where lines intersect or counting basic shapes are challenging for these models.
The authors noted, "The shockingly poor performance of four state-of-the-art VLMs suggests their vision is, at best, like of a person with myopia seeing fine details as blurry, and at worst, like an intelligent person that is blind making educated guesses" (Vision Language Models Are Blind, 2024).

Human-like Myopia?
VLMs may have a blind spot similar to human myopia.
This limitation makes it difficult for VLMs to perceive details.
Suggests a potential parallel between human and machine vision limitations.

Technical Details:
The researchers created a new benchmark called BlindTest.
BlindTest consists of simple visual tasks to evaluate VLMs' low-level vision capabilities.
Four VLMs were assessed using BlindTest.
Many shortcomings were revealed in the models' ability to process basic visual information.

Learn More:
For a deeper dive into this research, check out the project page: https://vlmsareblind.github.io/
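As a rough illustration of the kind of task BlindTest poses, the sketch below computes the ground truth for a "do these two line segments intersect?" question. The rendering step a VLM would actually see is omitted, and this is not the authors' benchmark code, just the geometry behind such a probe.

```python
# BlindTest-style probe sketch: two 2-D line segments with a known
# ground-truth answer a VLM would be asked to read off a rendered image.
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """True if segment ab properly crosses segment cd."""
    return (orientation(a, b, c) != orientation(a, b, d)
            and orientation(c, d, a) != orientation(c, d, b))

print(segments_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # True: an X shape
print(segments_intersect((0, 0), (1, 0), (0, 1), (1, 1)))  # False: parallel
```

The paper's point is that answers this trivial to compute are exactly what the tested VLMs frequently get wrong when asked about the rendered image.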
Taylor658
posted an update 7 months ago
๐ŸŒ Cohere for AI has announced that this July and August, it is inviting researchers from around the world to join Expedition Aya, a global initiative focused on launching projects using multilingual tools like Aya 23 and Aya 101. ๐ŸŒ

Participants can start by joining the Aya server, where all organizing will take place. They can share ideas and connect with others on Discord and the signup sheet. Various events will be hosted to help people find potential team members.

To support the projects, Cohere API credits will be issued.

Over the course of six weeks, weekly check-in calls are also planned to help teams stay on track and receive support with using Aya.

The expedition will wrap up at the end of August with a closing event to showcase everyone's work and plan next steps. Participants who complete the expedition will also receive some Expedition Aya swag.

Links:
Join the Aya Discord: https://discord.com/invite/q9QRYkjpwk
Visit the Expedition Aya Minisite: https://sites.google.com/cohere.com/expedition-aya/home
Taylor658
posted an update 7 months ago
๐Ÿ” A recently published technical report introduces MINT-1T, a dataset that will considerably expand open-source multimodal data. It features one trillion text tokens and three billion images and is scheduled for release in July 2024.

Researcher Affiliation:

University of Washington
Salesforce Research
Stanford University
University of Texas at Austin
University of California, Berkeley

Paper:
MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
https://arxiv.org/pdf/2406.11271v1.pdf

GitHub:
https://github.com/mlfoundations/MINT-1T

Highlights:

MINT-1T Dataset: Largest open-source multimodal interleaved dataset, with 1 trillion text tokens and 3 billion images.
Diverse Sources: Incorporates data from HTML, PDFs, and ArXiv documents.
Open Source: Dataset and code will be released at https://github.com/mlfoundations/MINT-1T.
Broader Domain Representation: Uses diverse data sources for balanced domain representation.
Performance in Multimodal Tasks: The dataset's scale and diversity should enhance multimodal task performance.

Datasheet Information:

Motivation: Addresses the gap in large-scale open-source multimodal datasets.
Composition: 927.6 million documents, including HTML, PDF, and ArXiv sources.
Collection Process: Gathered from CommonCrawl WARC and WAT dumps, with rigorous filtering.
Preprocessing/Cleaning: Removal of low-quality text and duplicates, plus anonymization of sensitive information.
Ethical Considerations: Measures to ensure privacy and avoid bias.
Uses: Training multimodal models, generating interleaved image-text sequences, and building retrieval systems.
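The preprocessing steps named in the datasheet (quality filtering and deduplication) can be sketched generically as follows; the thresholds and heuristics here are illustrative assumptions, not the MINT-1T team's actual pipeline.

```python
import hashlib

# Generic illustration of corpus cleaning: a crude quality gate plus
# exact-duplicate removal via content hashing. Thresholds are arbitrary.
def quality_ok(text: str, min_words: int = 5) -> bool:
    """Keep documents with enough words that are mostly alphabetic."""
    words = text.split()
    if len(words) < min_words:
        return False
    alpha = sum(c.isalpha() for c in text)
    return alpha / max(len(text), 1) > 0.5

def clean(docs):
    """Drop low-quality docs and exact duplicates, preserving order."""
    seen, out = set(), []
    for doc in docs:
        h = hashlib.sha256(doc.encode()).hexdigest()
        if h not in seen and quality_ok(doc):
            seen.add(h)
            out.append(doc)
    return out

corpus = ["A clean interleaved document with useful text content here.",
          "A clean interleaved document with useful text content here.",  # dup
          "@@## 123"]                                                     # junk
print(len(clean(corpus)))  # 1
```

At MINT-1T scale the real pipeline would use near-duplicate detection (e.g., MinHash-style) rather than exact hashing, but the filter-then-dedup shape is the same.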
Taylor658
posted an update 7 months ago
With the CVPR conference (https://cvpr.thecvf.com) in full swing this week in Seattle, the competition details for NeurIPS 2024 have just been released.

Some of the competitions this year include:

MyoChallenge 2024: Physiological dexterity in bionic humans.
FAIR Universe: Handling uncertainties in fundamental science.
BELKA: Chemical assessment through big encoded libraries.
HAC: Hacker-Cup AI competition.
Large-Scale Auction Challenge: Decision-making in competitive games.
URGENT Challenge: Signal reconstruction and enhancement.
LASC 2024: Safety in LLM and AI agents.

For more details, check out: https://blog.neurips.cc/2024/06/04/neurips-2024-competitions-announced
Taylor658
posted an update 8 months ago
Luma AI has just launched Dream Machine, a Sora- and Kling-like tool that generates videos from simple text prompts and images.
Dream Machine is out of beta and offers a free tier to test it out.

I tried this extremely simple prompt with the pic below and thought its rendering of my prompt as a drone-camera-style video was decent:

You are a drone operator. Create a 30-second video from a drone heading eastbound over the western suburbs of Bismarck, North Dakota, looking east towards the city on an overcast summer evening during the golden hour from an altitude of 200 ft.


Dream Machine also has a paid tier. However, like its paid-tier text-to-image brethren from 2023 (who all fared extremely badly once good text-to-image capabilities became the norm in open- and closed-source models), time will tell whether the paid-tier model will work for text- and image-to-video.

This will be evident in 3 to 5 months once GPT-5, Gemini-2, Mistral-9, Llama 4, et al., all models with enhanced multimodal capabilities, are released.
Taylor658
posted an update 8 months ago
Researchers at Carnegie Mellon University have introduced Sotopia, a platform designed to evaluate and enhance AI's social capabilities. Sotopia focuses on assessing AI's performance in goal-oriented social interactions, like collaboration, negotiation, and competition.

๐Ÿ” Key Findings:
Performance Evaluation: The platform enables testing and comparison of different AI systems, with a specific emphasis on refining Mistral-7B. ๐Ÿ› ๏ธ
Benchmarking: Sotopia uses GPT-4 as a benchmark to evaluate other AI systemsโ€™ capabilities. ๐Ÿ“

Technical Points:
Foundation: Sotopia builds upon Mistral-7B, focusing on behavior cloning and self-reinforcement.
Multi-Dimensional Assessment: Sotopia evaluates AI performance across 7 social dimensions, including believability, adherence to social norms, and successful goal completion.
Data Collection: The platform gathers data from human-human, human-AI, and AI-AI interactions.

Sotopia Project Page: https://www.sotopia.world/
Check out the HF space here: cmu-lti/sotopia-space
Additional details are in the HF Collection: cmu-lti/sotopia-65f312c1bd04a8c4a9225e5b
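As a rough sketch of the multi-dimensional scoring described above: only three of the seven dimension names appear in this post, so the remaining four below are placeholders, and the equal-weight average is an illustrative choice rather than Sotopia's actual aggregation.

```python
# Illustrative episode scoring across seven social dimensions.
# "dim_4".."dim_7" are placeholder names, not Sotopia's real dimensions.
DIMENSIONS = ["believability", "social norms", "goal completion",
              "dim_4", "dim_5", "dim_6", "dim_7"]

def episode_score(ratings: dict) -> float:
    """Mean rating across all seven dimensions; missing ones count as 0."""
    return sum(ratings.get(d, 0.0) for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical per-dimension ratings for one interaction episode.
ratings = {"believability": 8.0, "social norms": 9.0, "goal completion": 7.0}
print(round(episode_score(ratings), 2))  # 3.43
```

In practice the per-dimension ratings would come from an evaluator model such as the GPT-4 benchmark judge mentioned above.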

Taylor658
posted an update 8 months ago
This paper introduces Fusion Intelligence (FI), a novel approach integrating the adaptive behaviors of natural organisms (bees!) with AI's computational power.

Paper:
Fusion Intelligence: Confluence of Natural and Artificial Intelligence for Enhanced Problem-Solving Efficiency (2405.09763)
https://arxiv.org/pdf/2405.09763

Key Takeaways:
* Fusion Intelligence (FI): Combines natural organism efficiency with AI's power.
* Hybrid Approach: Integrates natural abilities with AI for better problem-solving.
* Agricultural Applications: Shows a 50% improvement in pollination efficiency.
* Energy Efficiency: Consumes only 29.5-50.2 mW per bee, much lower than traditional methods.
* Scalability: Applicable to fields like environmental monitoring and search and rescue.
* Non-Invasive: Eliminates the need for invasive modifications to biological entities.

This research offers a new approach for those interested in sustainable AI solutions. By merging biology with AI, FI aims to create solutions for a variety of challenges.