
Awesome projects built with Transformers

This page lists awesome projects built on top of Transformers. Transformers is more than a toolkit to use pretrained models: it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.

In this list, we showcase incredibly impactful and novel projects that have pushed the field forward. We celebrate 100 of these projects as we reach the milestone of 100k stars as a community, and we're very open to pull requests adding other projects to the list. If you believe a project should be here and it's not, please open a PR to add it.

gpt4all

gpt4all is an ecosystem of open-source chatbots trained on massive collections of clean assistant data, including code, stories, and dialogue. It offers open-source large language models such as LLaMA and GPT-J, trained in an assistant style.

Keywords: Open-source, LLaMA, GPT-J, instruction, assistant

recommenders

This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. It goes over several aspects required to build efficient recommendation systems: data preparation, modeling, evaluation, model selection & optimization, as well as operationalization.

Keywords: Recommender systems, AzureML

lama-cleaner

Image inpainting tool powered by Stable Diffusion. Remove any unwanted objects, defects, or people from your pictures, or erase and replace anything in your pictures.

Keywords: inpainting, SD, Stable Diffusion

flair

FLAIR is a powerful PyTorch NLP framework covering several important tasks: NER, sentiment analysis, part-of-speech tagging, and text and document embeddings, among other things.

Keywords: NLP, text embedding, document embedding, biomedical, NER, PoS, sentiment-analysis
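
To give a feel for the API, here is a minimal NER sketch following Flair's documented usage (the "ner" model name is one of Flair's standard pre-trained taggers; details may vary with the installed version):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load a pre-trained NER tagger and annotate a sentence
tagger = SequenceTagger.load("ner")
sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)

# Print the detected entity spans
for entity in sentence.get_spans("ner"):
    print(entity)
```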

mindsdb

MindsDB is a low-code ML platform that automates and integrates several ML frameworks into the data stack as "AI Tables", streamlining the integration of AI into applications and making it accessible to developers of all skill levels.

Keywords: Database, low-code, AI table

langchain

langchain is aimed at assisting in the development of apps that merge LLMs with other sources of knowledge. The library allows chaining calls to applications, creating sequences of operations across many tools.

Keywords: LLMs, Large Language Models, Agents, Chains

LlamaIndex

LlamaIndex is a project that provides a central interface to connect your LLMs with external data. It provides various kinds of indices and retrieval mechanisms to perform different LLM tasks and obtain knowledge-augmented results.

Keywords: LLMs, Large Language Models, Data Retrieval, Indices, Knowledge Augmentation

ParlAI

ParlAI is a Python framework for sharing, training, and testing dialogue models, from open-domain chitchat to task-oriented dialogue to visual question answering. It provides more than 100 datasets under the same API, a large zoo of pretrained models, a set of agents, and several integrations.

Keywords: Dialogue, Chatbots, VQA, Datasets, Agents

sentence-transformers

This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT/RoBERTa/XLM-RoBERTa and achieve state-of-the-art performance in various tasks. Text is embedded in a vector space such that similar texts are close together and can be found efficiently using cosine similarity.

Keywords: Dense vector representations, Text embeddings, Sentence embeddings
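
As an illustration, here is a minimal sketch of embedding and comparing two sentences (the model name is just an example; adjust to the checkpoint you want to use):

```python
from sentence_transformers import SentenceTransformer, util

# Load a pre-trained sentence embedding model
model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode two sentences into dense vectors
embeddings = model.encode(["A cat sits on the mat.", "A feline rests on a rug."])

# Similar sentences are close in vector space, measured by cosine similarity
# (older versions expose this as util.pytorch_cos_sim)
print(util.cos_sim(embeddings[0], embeddings[1]))
```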

ludwig

Ludwig is a declarative machine learning framework that makes it easy to define machine learning pipelines using a simple and flexible data-driven configuration system. Ludwig is targeted at a wide variety of AI tasks. It provides a data-driven configuration system, training, prediction, and evaluation scripts, as well as a programmatic API.

Keywords: Declarative, Data-driven, ML Framework

InvokeAI

InvokeAI is an engine for Stable Diffusion models, aimed at professionals, artists, and enthusiasts. It leverages the latest AI-driven technologies through CLI as well as a WebUI.

Keywords: Stable-Diffusion, WebUI, CLI

PaddleNLP

PaddleNLP is an easy-to-use and powerful NLP library particularly targeted at the Chinese language. It supports multiple pre-trained model zoos and a wide range of NLP tasks from research to industrial applications.

Keywords: NLP, Chinese, Research, Industry

stanza

The Stanford NLP Group's official Python NLP library. It contains support for running various accurate natural language processing tools on 60+ languages and for accessing the Java Stanford CoreNLP software from Python.

Keywords: NLP, Multilingual, CoreNLP
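
A minimal usage sketch, following Stanza's documented quick start (the language code "en" is just an example):

```python
import stanza

# Download the English models once, then build an annotation pipeline
stanza.download("en")
nlp = stanza.Pipeline("en")

# The returned document exposes sentences, tokens, POS tags, and more
doc = nlp("Barack Obama was born in Hawaii.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos)
```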

DeepPavlov

DeepPavlov is an open-source conversational AI library. It is designed for the development of production-ready chatbots and complex conversational systems, as well as research in the area of NLP and, particularly, dialog systems.

Keywords: Conversational, Chatbot, Dialog

alpaca-lora

Alpaca-LoRA contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). The repository provides training (fine-tuning) as well as generation scripts.

Keywords: LoRA, Parameter-efficient fine-tuning

imagen-pytorch

An open-source implementation of Imagen, Google's closed-source text-to-image neural network that beats DALL-E 2. As of release, it is the new SOTA for text-to-image synthesis.

Keywords: Imagen, Text-to-image

adapter-transformers

adapter-transformers is an extension of Hugging Face's Transformers library that integrates adapters into state-of-the-art language models by incorporating AdapterHub, a central repository for pre-trained adapter modules. It is a drop-in replacement for transformers and is regularly updated to stay in sync with the developments of transformers.

Keywords: Adapters, LoRA, Parameter-efficient fine-tuning, Hub

NeMo

NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), text-to-speech synthesis (TTS), large language models (LLMs), and natural language processing (NLP). The primary objective of NeMo is to help researchers from industry and academia reuse prior work (code and pretrained models) and make it easier to create new conversational AI models (https://developer.nvidia.com/conversational-ai#started).

Keywords: Conversational, ASR, TTS, LLMs, NLP

Runhouse

Runhouse allows you to send code and data to any of your compute or data infrastructure, all in Python, and continue to interact with them normally from your existing code and environment. Runhouse developers mention:

Think of it as an expansion pack to your Python interpreter that lets it take detours to remote machines or manipulate remote data.

Keywords: MLOps, Infrastructure, Data storage, Modeling

MONAI

MONAI is a PyTorch-based, open-source framework for deep learning in healthcare imaging, part of the PyTorch Ecosystem. Its ambitions are:

  • developing a community of academic, industrial and clinical researchers collaborating on a common foundation;
  • creating state-of-the-art, end-to-end training workflows for healthcare imaging;
  • providing researchers with an optimized and standardized way to create and evaluate deep learning models.

Keywords: Healthcare imaging, Training, Evaluation

simpletransformers

Simple Transformers lets you quickly train and evaluate Transformer models. Only 3 lines of code are needed to initialize, train, and evaluate a model. It supports a wide variety of NLP tasks.

Keywords: Framework, simplicity, NLP
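
A sketch of that three-call workflow, assuming a tiny pandas DataFrame with "text" and "labels" columns (the model and data here are purely illustrative):

```python
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Tiny illustrative dataset: text plus an integer label
train_df = pd.DataFrame(
    [["great movie", 1], ["terrible plot", 0]], columns=["text", "labels"]
)

# Initialize, train, and evaluate a classifier in three calls
model = ClassificationModel("bert", "bert-base-uncased", use_cuda=False)
model.train_model(train_df)
result, model_outputs, wrong_predictions = model.eval_model(train_df)
```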

JARVIS

JARVIS is a system attempting to merge LLMs such as GPT-4 with the rest of the open-source ML community: leveraging up to 60 downstream models in order to perform tasks identified by the LLM.

Keywords: LLM, Agents, HF Hub

transformers.js

transformers.js is a JavaScript library targeted at running models from transformers directly within the browser.

Keywords: Transformers, JavaScript, browser

bumblebee

Bumblebee provides pre-trained neural network models on top of Axon, a neural networks library for the Elixir language. It includes integration with 🤗 Models, allowing anyone to download and perform machine learning tasks with a few lines of code.

Keywords: Elixir, Axon

argilla

Argilla is an open-source platform providing advanced NLP labeling, monitoring, and workspaces. It is compatible with many open source ecosystems such as Hugging Face, Stanza, FLAIR, and others.

Keywords: NLP, Labeling, Monitoring, Workspaces

haystack

Haystack is an open-source NLP framework to interact with your data using Transformer models and LLMs. It offers production-ready tools to quickly build applications for complex decision making, question answering, semantic search, text generation, and more.

Keywords: NLP, Framework, LLM

spaCy

spaCy is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products. It offers support for transformer models through its third-party package, spacy-transformers.

Keywords: NLP, Framework
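
With spacy-transformers installed, a transformer-backed pipeline can be used like any other spaCy model; a minimal sketch (the en_core_web_trf pipeline must be downloaded first):

```python
import spacy

# Requires: pip install spacy-transformers
#           python -m spacy download en_core_web_trf
nlp = spacy.load("en_core_web_trf")

doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```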

speechbrain

SpeechBrain is an open-source and all-in-one conversational AI toolkit based on PyTorch. The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, speech separation, language identification, multi-microphone signal processing, and many others.

Keywords: Conversational, Speech

skorch

Skorch is a scikit-learn compatible neural network library that wraps PyTorch. It has support for models within transformers, and tokenizers from tokenizers.

Keywords: Scikit-Learn, PyTorch
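
A minimal sketch of wrapping a PyTorch module as a scikit-learn estimator, following skorch's documented pattern (the module returns probabilities, which skorch's default classifier criterion expects; the data here is random and purely illustrative):

```python
import numpy as np
from torch import nn
from skorch import NeuralNetClassifier

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(20, 10)
        self.output = nn.Linear(10, 2)
        self.nonlin = nn.ReLU()
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, X):
        return self.softmax(self.output(self.nonlin(self.dense(X))))

# fit/predict behave like any scikit-learn estimator
net = NeuralNetClassifier(MyModule, max_epochs=5, lr=0.1)
X = np.random.randn(100, 20).astype(np.float32)
y = np.random.randint(0, 2, 100).astype(np.int64)
net.fit(X, y)
print(net.predict(X[:5]))
```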

bertviz

BertViz is an interactive tool for visualizing attention in Transformer language models such as BERT, GPT-2, or T5. It can be run inside a Jupyter or Colab notebook through a simple Python API that supports most Hugging Face models.

Keywords: Visualization, Transformers
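
A minimal notebook sketch, following BertViz's documented usage (the model name is an example; run inside Jupyter or Colab to see the rendered visualization):

```python
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

# Load a model configured to return attention weights
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
outputs = model(**inputs)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Renders an interactive attention visualization in the notebook
head_view(outputs.attentions, tokens)
```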

mesh-transformer-jax

mesh-transformer-jax is a Haiku library that uses the xmap/pjit operators in JAX for model parallelism of transformers. It is designed for scalability up to approximately 40B parameters on TPUv3s, and was the library used to train the GPT-J model.

Keywords: Haiku, Model parallelism, LLM, TPU

deepchem

DeepChem aims to provide a high quality open-source toolchain that democratizes the use of deep-learning in drug discovery, materials science, quantum chemistry, and biology.

Keywords: Drug discovery, Materials Science, Quantum Chemistry, Biology

OpenNRE

An Open-Source Package for Neural Relation Extraction (NRE). It is targeted at a wide range of users, from newcomers to relation extraction, to developers, researchers, or students.

Keywords: Neural Relation Extraction, Framework

pycorrector

PyCorrector is a Chinese text error correction tool. It uses a language model to detect errors, and pinyin and shape features to correct them. It can be used for Chinese pinyin and stroke input methods.

Keywords: Chinese, Error correction tool, Language model, Pinyin

nlpaug

This Python library helps you augment NLP data for machine learning projects. It is a lightweight library featuring synthetic data generation for improving model performance, support for audio and text, and compatibility with several ecosystems (scikit-learn, PyTorch, TensorFlow).

Keywords: Data augmentation, Synthetic data generation, Audio, NLP
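
A small sketch of word-level synonym augmentation (the WordNet source requires the NLTK WordNet data to be available locally):

```python
import nlpaug.augmenter.word as naw

# Substitute words with WordNet synonyms to create augmented variants
aug = naw.SynonymAug(aug_src="wordnet")
print(aug.augment("The quick brown fox jumps over the lazy dog"))
```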

dream-textures

dream-textures is a library targeted at bringing stable-diffusion support within Blender. It supports several use-cases, such as image generation, texture projection, inpainting/outpainting, ControlNet, and upscaling.

Keywords: Stable-Diffusion, Blender

seldon-core

Seldon Core converts your ML models (TensorFlow, PyTorch, H2O, etc.) or language wrappers (Python, Java, etc.) into production REST/gRPC microservices. Seldon handles scaling to thousands of production machine learning models and provides advanced machine learning capabilities out of the box, including advanced metrics, request logging, explainers, outlier detectors, A/B tests, canaries, and more.

Keywords: Microservices, Modeling, Language wrappers

open_model_zoo

This repository includes optimized deep learning models and a set of demos to expedite development of high-performance deep learning inference applications. Use these free pre-trained models instead of training your own to speed up the development and production deployment process.

Keywords: Optimized models, Demos

ml-stable-diffusion

ML-Stable-Diffusion is a repository by Apple that brings Stable Diffusion support to Core ML on Apple Silicon devices. It supports Stable Diffusion checkpoints hosted on the Hugging Face Hub.

Keywords: Stable Diffusion, Apple Silicon, Core ML

stable-dreamfusion

Stable-Dreamfusion is a PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model.

Keywords: Text-to-3D, Stable Diffusion

txtai

txtai is an open-source platform for semantic search and workflows powered by language models. txtai builds embeddings databases, which are a union of vector indexes and relational databases enabling similarity search with SQL. Semantic workflows connect language models together into unified applications.

Keywords: Semantic search, LLM
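
A minimal sketch of building and querying an embeddings database, following txtai's documented quick start (the model path and data are examples):

```python
from txtai.embeddings import Embeddings

# Build an embeddings index backed by a sentence-transformers model
embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2"})

data = [
    "US tops 5 million confirmed virus cases",
    "Maine man wins $1M from $25 lottery ticket",
]
embeddings.index([(uid, text, None) for uid, text in enumerate(data)])

# Similarity search returns (id, score) pairs
print(embeddings.search("public health story", 1))
```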

djl

Deep Java Library (DJL) is an open-source, high-level, engine-agnostic Java framework for deep learning. DJL is designed to be easy to get started with and simple to use for developers. DJL provides a native Java development experience and functions like any other regular Java library. DJL offers a Java binding for Hugging Face Tokenizers and an easy conversion toolkit for deploying Hugging Face models in Java.

Keywords: Java, Framework

lm-evaluation-harness

This project provides a unified framework to test generative language models on a large number of different evaluation tasks. It has support for more than 200 tasks, and supports different ecosystems: HF Transformers, GPT-NeoX, DeepSpeed, as well as the OpenAI API.

Keywords: LLM, Evaluation, Few-shot

gpt-neox

This repository records EleutherAI's library for training large-scale language models on GPUs. The framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. It is focused on training multi-billion-parameter models.

Keywords: Training, LLM, Megatron, DeepSpeed

muzic

Muzic is a research project on AI music that empowers music understanding and generation with deep learning and artificial intelligence. Muzic was created by researchers from Microsoft Research Asia.

Keywords: Music understanding, Music generation

dalle-flow

DALL·E Flow is an interactive workflow for generating high-definition images from a text prompt. It leverages DALL·E-Mega, GLID-3 XL, and Stable Diffusion to generate image candidates, and then calls CLIP-as-service to rank the candidates w.r.t. the prompt. The preferred candidate is fed to GLID-3 XL for diffusion, which often enriches the texture and background. Finally, the candidate is upscaled to 1024x1024 via SwinIR.

Keywords: High-definition image generation, Stable Diffusion, DALL-E Mega, GLID-3 XL, CLIP, SwinIR

lightseq

LightSeq is a high-performance training and inference library for sequence processing and generation, implemented in CUDA. It enables highly efficient computation of modern NLP and CV models such as BERT, GPT, Transformer, etc. It is therefore particularly useful for machine translation, text generation, image classification, and other sequence-related tasks.

Keywords: Training, Inference, Sequence Processing, Sequence Generation

LaTeX-OCR

The goal of this project is to create a learning-based system that takes an image of a math formula and returns the corresponding LaTeX code.

Keywords: OCR, LaTeX, Math formula

open_clip

OpenCLIP is an open source implementation of OpenAI's CLIP.

The goal of this repository is to enable training models with contrastive image-text supervision, and to investigate their properties such as robustness to distribution shift. The starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset.

Specifically, a ResNet-50 model trained with this codebase on OpenAI's 15 million image subset of YFCC achieves 32.7% top-1 accuracy on ImageNet.

Keywords: CLIP, Open-source, Contrastive, Image-text

dalle-playground

A playground to generate images from any text prompt using Stable Diffusion and Dall-E mini.

Keywords: WebUI, Stable Diffusion, Dall-E mini

FedML

FedML is a federated learning and analytics library enabling secure and collaborative machine learning on decentralized data anywhere at any scale.

It supports large-scale cross-silo federated learning, cross-device federated learning on smartphones/IoT devices, and research simulation.

Keywords: Federated Learning, Analytics, Collaborative ML, Decentralized

gpt-code-clippy

GPT-Code-Clippy (GPT-CC) is an open-source version of GitHub Copilot, a language model (based on GPT-3, called Codex) that is fine-tuned on publicly available code from GitHub.

Keywords: LLM, Code

TextAttack

TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP.

Keywords: Adversarial attacks, Data augmentation, NLP
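
As an example of the augmentation API, here is a minimal sketch using TextAttack's EmbeddingAugmenter, which swaps words for nearest neighbors in embedding space (the input sentence is illustrative):

```python
from textattack.augmentation import EmbeddingAugmenter

# Generate augmented variants of a sentence
augmenter = EmbeddingAugmenter()
print(augmenter.augment("I highly recommend this movie"))
```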

OpenPrompt

Prompt-learning is a paradigm that adapts pre-trained language models (PLMs) to downstream NLP tasks by modifying the input text with a textual template and directly using PLMs to perform pre-training tasks. This library provides a standard, flexible, and extensible framework to deploy the prompt-learning pipeline. OpenPrompt supports loading PLMs directly from https://github.com/huggingface/transformers.

Keywords: Prompt-learning, PLMs, Framework

text-generation-webui

text-generation-webui is a Gradio Web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.

Keywords: LLM, WebUI

libra

An ergonomic machine learning library for non-technical users. It focuses on ergonomics and on ensuring that training a model is as simple as it can be.

Keywords: Ergonomic, Non-technical

alibi

Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

Keywords: Model inspection, Model interpretation, Black-box, White-box

tortoise-tts

Tortoise is a text-to-speech program built with the following priorities: strong multi-voice capabilities, and highly realistic prosody and intonation.

Keywords: Text-to-speech

flower

Flower (flwr) is a framework for building federated learning systems. The design of Flower is based on a few guiding principles: customizability, extendability, framework agnosticity, and ease-of-use.

Keywords: Federated learning systems, Customizable, Extendable, Framework-agnostic, Simplicity

fast-bert

Fast-Bert is a deep learning library that allows developers and data scientists to train and deploy BERT- and XLNet-based models for natural language processing tasks, beginning with text classification. It is aimed at simplicity.

Keywords: Deployment, BERT, XLNet

towhee

Towhee makes it easy to build neural data processing pipelines for AI applications. We provide hundreds of models, algorithms, and transformations that can be used as standard pipeline building blocks. Users can use Towhee's Pythonic API to build a prototype of their pipeline and automatically optimize it for production-ready environments.

Keywords: Data processing pipeline, Optimization

alibi-detect

Alibi Detect is an open source Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both TensorFlow and PyTorch backends are supported for drift detection.

Keywords: Adversarial, Outlier, Drift detection

FARM

FARM makes Transfer Learning with BERT & Co simple, fast and enterprise-ready. It's built upon transformers and provides additional features to simplify the life of developers: Parallelized preprocessing, highly modular design, multi-task learning, experiment tracking, easy debugging and close integration with AWS SageMaker.

Keywords: Transfer Learning, Modular design, Multi-task learning, Experiment tracking

aitextgen

A robust Python tool for text-based AI training and generation using OpenAI's GPT-2 and EleutherAI's GPT Neo/GPT-3 architecture. aitextgen is a Python package that leverages PyTorch, Hugging Face Transformers and pytorch-lightning with specific optimizations for text generation using GPT-2, plus many added features.

Keywords: Training, Generation

diffgram

Diffgram aims to integrate human supervision into platforms. We support your team programmatically changing the UI (schema, layout, etc.), as in Streamlit. This means you can collect and annotate timely data from users. In other words, we are the platform behind your platform, an integrated part of your application, to ship new and better AI products faster.

Keywords: Human supervision, Platform

ecco

Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks, explaining the behavior of Transformer-based language models (like GPT-2, BERT, RoBERTa, T5, and T0).

Keywords: Model explainability

s3prl

s3prl stands for Self-Supervised Speech Pre-training and Representation Learning. Self-supervised speech pre-trained models are called upstream in this toolkit, and are utilized in various downstream tasks.

Keywords: Speech, Training

ru-dalle

RuDALL-E aims to be similar to DALL-E, targeted at Russian.

Keywords: DALL-E, Russian

DeepKE

DeepKE is a knowledge extraction toolkit for knowledge graph construction, supporting cnSchema, low-resource, document-level, and multimodal scenarios for entity, relation, and attribute extraction.

Keywords: Knowledge Extraction, Knowledge Graphs

Nebuly

Nebuly is the next-generation platform to monitor and optimize your AI costs in one place. The platform connects to all your AI cost sources (compute, API providers, AI software licenses, etc.) and centralizes them in one place to give you full visibility on a per-model basis. The platform also provides optimization recommendations and a co-pilot model that can guide you during the optimization process. The platform builds on top of open-source tools, allowing you to optimize the different steps of your AI stack to squeeze out the best possible cost performance.

Keywords: Optimization, Performance, Monitoring

imaginAIry

Offers a CLI and a Python API to generate images with Stable Diffusion. It has support for many tools, like image structure control (controlnet), instruction-based image edits (InstructPix2Pix), prompt-based masking (clipseg), among others.

Keywords: Stable Diffusion, CLI, Python API

sparseml

SparseML is an open-source model optimization toolkit that enables you to create inference-optimized sparse models using pruning, quantization, and distillation algorithms. Models optimized with SparseML can then be exported to ONNX and deployed with DeepSparse for GPU-class performance on CPU hardware.

Keywords: Model optimization, Pruning, Quantization, Distillation

opacus

Opacus is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes on the client side, has little impact on training performance, and allows the client to track the privacy budget expended at any given moment.

Keywords: Differential privacy
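
A minimal sketch of the "minimal code changes" idea: wrapping an existing model, optimizer, and data loader with Opacus's PrivacyEngine (the model, data, and hyperparameter values are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
data_loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8
)

# After this call, the usual training loop runs with differential privacy
privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)
```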

LAVIS

LAVIS is a Python deep learning library for LAnguage-and-VISion intelligence research and applications. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific multimodal scenarios, and to benchmark them across standard and customized datasets. It features a unified interface design to access state-of-the-art language-vision models.

Keywords: Multimodal, NLP, Vision

buzz

Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.

Keywords: Audio transcription, Translation

rust-bert

Rust-native state-of-the-art Natural Language Processing models and pipelines. Port of Hugging Face's Transformers library, using the tch-rs crate and pre-processing from rust-tokenizers. Supports multi-threaded tokenization and GPU inference. This repository exposes the model base architecture, task-specific heads and ready-to-use pipelines.

Keywords: Rust, BERT, Inference

EasyNLP

EasyNLP is an easy-to-use NLP development and application toolkit in PyTorch, first released inside Alibaba in 2021. It is built with scalable distributed training strategies and supports a comprehensive suite of NLP algorithms for various NLP applications. EasyNLP integrates knowledge distillation and few-shot learning for deploying large pre-trained models in production, together with various popular multi-modality pre-trained models. It provides a unified framework for model training, inference, and deployment in real-world applications.

Keywords: NLP, Knowledge distillation, Few-shot learning, Multi-modality, Training, Inference, Deployment

TurboTransformers

A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.

Keywords: Optimization, Performance

hivemind

Hivemind is a PyTorch library for decentralized deep learning across the Internet. Its intended usage is training one large model on hundreds of computers from different universities, companies, and volunteers.

Keywords: Decentralized training

docquery

DocQuery is a library and command-line tool that makes it easy to analyze semi-structured and unstructured documents (PDFs, scanned images, etc.) using large language models (LLMs). You simply point DocQuery at one or more documents and specify a question you want to ask. DocQuery is created by the team at Impira.

Keywords: Semi-structured documents, Unstructured documents, LLM, Document Question Answering

CodeGeeX

CodeGeeX is a large-scale multilingual code generation model with 13 billion parameters, pre-trained on a large code corpus of more than 20 programming languages. It has several unique features:

  • Multilingual code generation
  • Crosslingual code translation
  • Customizable programming assistant

Keywords: Code Generation Model

ktrain

ktrain is a lightweight wrapper for the deep learning library TensorFlow Keras (and other libraries) to help build, train, and deploy neural networks and other machine learning models. Inspired by ML framework extensions like fastai and ludwig, ktrain is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners.

Keywords: Keras wrapper, Model building, Training, Deployment

FastDeploy

FastDeploy is an easy-to-use and high-performance AI model deployment toolkit for cloud, mobile, and edge, with an out-of-the-box, unified experience and end-to-end optimization for over 160 text, vision, speech, and cross-modal AI models. It covers tasks such as image classification, object detection, OCR, face detection, matting, PP-Tracking, NLP, Stable Diffusion, and TTS, to meet developers' industrial deployment needs across multiple scenarios, hardware, and platforms.

Keywords: Model deployment, Cloud, Mobile, Edge

underthesea

underthesea is a Vietnamese NLP toolkit: a suite of open-source Python modules, datasets, and tutorials supporting research and development in Vietnamese natural language processing. It provides an extremely easy API to quickly apply pretrained NLP models to Vietnamese text for tasks such as word segmentation, part-of-speech (PoS) tagging, named entity recognition (NER), text classification, and dependency parsing.

Keywords: Vietnamese, NLP

hasktorch

Hasktorch is a library for tensors and neural networks in Haskell. It is an independent open source community project which leverages the core C++ libraries shared by PyTorch.

Keywords: Haskell, Neural Networks

donut

Donut, or Document understanding transformer, is a new method of document understanding that utilizes an OCR-free end-to-end Transformer model.

Donut does not require off-the-shelf OCR engines/APIs, yet it shows state-of-the-art performances on various visual document understanding tasks, such as visual document classification or information extraction (a.k.a. document parsing).

Keywords: Document Understanding

transformers-interpret

Transformers Interpret is a model explainability tool designed to work exclusively with the transformers package.

In line with the philosophy of the Transformers package, Transformers Interpret allows any transformers model to be explained in just two lines. Explainers are available for both text and computer vision models. Visualizations are also available in notebooks and as savable PNG and HTML files.

Keywords: Model interpretation, Visualization

mlrun

MLRun is an open MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications, significantly reducing engineering efforts, time to production, and computation resources. With MLRun, you can choose any IDE on your local machine or on the cloud. MLRun breaks the silos between data, ML, software, and DevOps/MLOps teams, enabling collaboration and fast continuous improvements.

Keywords: MLOps

FederatedScope

FederatedScope is a comprehensive federated learning platform that provides convenient usage and flexible customization for various federated learning tasks in both academia and industry. Based on an event-driven architecture, FederatedScope integrates rich collections of functionalities to satisfy the burgeoning demands from federated learning, and aims to build up an easy-to-use platform for promoting learning safely and effectively.

Keywords: Federated learning, Event-driven

pythainlp

PyThaiNLP is a Python package for text processing and linguistic analysis, similar to NLTK, with a focus on the Thai language.

Keywords: Thai, NLP, NLTK

FlagAI

FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models. Our goal is to support training, fine-tuning, and deployment of large-scale models on various downstream tasks with multi-modality.

Keywords: Large models, Training, Fine-tuning, Deployment, Multi-modal

pyserini

pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. Retrieval using sparse representations is provided via integration with the group's Anserini IR toolkit. Retrieval using dense representations is provided via integration with Facebook's Faiss library.

Keywords: IR, Information Retrieval, Dense, Sparse

baal

baal is an active learning library that supports both industrial applications and research use cases. baal currently supports Monte Carlo Dropout, MCDropConnect, deep ensembles, and semi-supervised learning.

Keywords: Active Learning, Research, Labeling

cleanlab

cleanlab is the standard data-centric AI package for data quality and machine learning with messy, real-world data and labels. For text, image, tabular, and audio (among other) datasets, you can use cleanlab to automatically: detect data issues (outliers, label errors, near duplicates, etc.), train robust ML models, infer consensus and annotator quality for multi-annotator data, and suggest data to (re)label next (active learning).

Keywords: Data-Centric AI, Data Quality, Noisy Labels, Outlier Detection, Active Learning
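
A minimal sketch of label-issue detection with cleanlab: given out-of-sample predicted probabilities from any classifier plus the observed labels, flag examples whose labels look wrong (the arrays here are toy values):

```python
import numpy as np
from cleanlab.filter import find_label_issues

labels = np.array([0, 1, 1, 0])
pred_probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.85, 0.15], [0.7, 0.3]])

# Indices of likely label errors, ranked by the model's self-confidence
issues = find_label_issues(
    labels=labels, pred_probs=pred_probs, return_indices_ranked_by="self_confidence"
)
print(issues)
```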

BentoML

BentoML is the unified framework for building, shipping, and scaling production-ready AI applications, incorporating traditional ML, pre-trained AI models, and generative and large language models. All Hugging Face models and pipelines can be seamlessly integrated into BentoML applications, enabling models to run on the most suitable hardware and scale independently based on usage.

Keywords: BentoML, Framework, Deployment, AI Applications

LLaMA-Efficient-Tuning

LLaMA-Efficient-Tuning offers a user-friendly fine-tuning framework that incorporates PEFT. The repository includes training (fine-tuning) and inference examples for LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, and other LLMs. A ChatGLM version is also available in ChatGLM-Efficient-Tuning.

Keywords: PEFT, fine-tuning, LLaMA-2, ChatGLM, Qwen