import streamlit as st
st.markdown('''
---
title: README
emoji: πŸƒ
colorFrom: pink
colorTo: blue
sdk: static
pinned: false
---
Welcome! This classroom organization holds examples and links for this session.
Begin by bookmarking this page.
# Examples and Exercises - Create These Spaces in Your Account and Test / Modify (a duplication sketch follows the lists below)
## Easy Examples
1. [FastSpeech](https://huggingface.co/spaces/AIZero2HeroBootcamp/FastSpeech2LinerGradioApp)
2. [Memory](https://huggingface.co/spaces/AIZero2HeroBootcamp/Memory)
3. [StaticHTML5PlayCanvas](https://huggingface.co/spaces/AIZero2HeroBootcamp/StaticHTML5Playcanvas)
4. [3DHuman](https://huggingface.co/spaces/AIZero2HeroBootcamp/3DHuman)
5. [TranscriptAILearnerFromYoutube](https://huggingface.co/spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube)
6. [AnimatedGifGallery](https://huggingface.co/spaces/AIZero2HeroBootcamp/AnimatedGifGallery)
7. [VideoToAnimatedGif](https://huggingface.co/spaces/AIZero2HeroBootcamp/VideoToAnimatedGif)
## Hard Examples
8. [ChatGPTandLangChain](https://huggingface.co/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain)
    - Requires an OpenAI API key: [API Keys](https://platform.openai.com/account/api-keys)
9. [MultiPDFQAChatGPTLangchain](https://huggingface.co/spaces/AIZero2HeroBootcamp/MultiPDF-QA-ChatGPT-Langchain)
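To copy any of the Spaces above into your own account, you can use the duplicate option on each Space page, or do it programmatically. Below is a minimal sketch assuming a recent `huggingface_hub` release (which provides a `duplicate_space` helper) and that you are already logged in via `huggingface-cli login`; the Space id shown is just one of the examples above.
```python
# Minimal sketch: copy one of the example Spaces into your own account.
# Assumes a recent huggingface_hub release and a stored token (huggingface-cli login).
from huggingface_hub import duplicate_space

# Duplicates AIZero2HeroBootcamp/Memory into your namespace, e.g. your-username/Memory.
new_space = duplicate_space("AIZero2HeroBootcamp/Memory")
print(new_space)  # URL of the newly created Space
```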
# πŸ‘‹ Two easy ways to turbo boost your AI learning journey - Let's go 100X! πŸ’»
# 🌐 AI Pair Programming with GPT
### Open 2 Browsers to:
1. 🌐 [ChatGPT](https://chat.openai.com/chat) or the [OpenAI Playground](https://platform.openai.com/playground), and
2. 🌐 [Huggingface](https://huggingface.co/awacke1) in a separate browser window.
### Then:
1. πŸ€– Use prompts to generate a Streamlit program, and test it on Huggingface or locally (a minimal sketch follows this list).
2. πŸ”§ For advanced work, install Python 3.10 and VSCode locally, and debug your Gradio or Streamlit apps there.
3. πŸš€ Use these two superpower processes to reduce the time it takes you to make a new AI program! ⏱️
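As a concrete example, here is the kind of minimal Streamlit program you might prompt GPT to generate and then paste into a Space or run locally with `streamlit run app.py`; the widget labels and greeting text are only illustrative.
```python
# Minimal Streamlit sketch of the pair-programming loop: paste GPT-generated
# code like this into app.py, run it, review, and iterate with your next prompt.
import streamlit as st

st.title("πŸ€– My First AI Pair-Programmed App")
name = st.text_input("What is your name?")
if st.button("Greet me"):
    st.write(f"Hello, {name}! Now ask GPT to add the next feature.")
```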
# πŸŽ₯ YouTube University Method:
1. πŸ‹οΈβ€β™€οΈ Plan two hours each weekday to exercise your body and brain.
2. 🎬 Make a playlist of videos you want to learn from on YouTube. Save the links to edit later.
3. πŸš€ Try watching the videos at a faster speed while exercising, and sample the first five minutes of each video.
4. πŸ“œ Reorder the playlist so the most useful videos are at the front, and take breaks to exercise.
5. πŸ“ Practice note-taking in markdown to instantly save what you want to remember. Share your notes with others!
6. πŸ‘₯ AI Pair Programming Using Long Answer Language Models with Human Feedback
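As a tiny helper for step 5, here is a hypothetical sketch that appends timestamped markdown notes to a local file; the `notes.md` filename and the sample note are only assumptions.
```python
# Hypothetical note-taking helper: append a timestamped markdown bullet to notes.md.
from datetime import datetime
from pathlib import Path

def save_note(text: str, path: str = "notes.md") -> None:
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with Path(path).open("a", encoding="utf-8") as f:
        print(f"- **{stamp}** {text}", file=f)  # print appends the trailing newline

save_note("Transformer-XL caches hidden states across segments.")
```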
## πŸŽ₯ 2023 AI/ML Learning Playlists for ChatGPT, LLMs, and Recent Events in AI:
1. [AI News](https://www.youtube.com/playlist?list=PLHgX2IExbFotMOKWOErYeyHSiikf6RTeX)
2. [ChatGPT Code Interpreter](https://www.youtube.com/playlist?list=PLHgX2IExbFou1pOQMayB7PArCalMWLfU-)
3. [Ilya Sutskever and Sam Altman](https://www.youtube.com/playlist?list=PLHgX2IExbFovr66KW6Mqa456qyY-Vmvw-)
4. [Andrew Huberman on Neuroscience and Health](https://www.youtube.com/playlist?list=PLHgX2IExbFotRU0jl_a0e0mdlYU-NWy1r)
5. [Andrej Karpathy](https://www.youtube.com/playlist?list=PLHgX2IExbFovbOFCgLNw1hRutQQKrfYNP)
6. [Medical Futurist on GPT](https://www.youtube.com/playlist?list=PLHgX2IExbFosVaCMZCZ36bYqKBYqFKHB2)
7. [ML APIs](https://www.youtube.com/playlist?list=PLHg
- πŸ”— BigScience source code: [BigScience (GitHub)](https://github.com/bigscience-workshop/bigscience)
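The full 176B BigScience model (BLOOM) needs multi-GPU infrastructure, but a smaller sibling checkpoint can be tried locally. A minimal sketch, assuming the `transformers` library and the `bigscience/bloom-560m` checkpoint on the Hugging Face Hub:
```python
# Minimal sketch: generate text with a small BLOOM checkpoint from the BigScience family.
# Assumes transformers (and torch) are installed; the 560m model fits on a laptop,
# unlike the full 176B BigScience model.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
print(generator("AI pair programming helps you", max_new_tokens=30)[0]["generated_text"])
```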
## πŸƒ GPT-3 Performance:
- While GPT-3 has 175 billion parameters, its performance is generally considered slightly behind the newer BigScience model, though the comparison depends on the task.
- Even so, GPT-3 has found widespread use because it is available through the OpenAI API, which lets developers incorporate the model into their applications without hosting it or provisioning substantial computational resources.
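A minimal sketch of that API path, assuming the pre-1.0 `openai` Python package, an `OPENAI_API_KEY` environment variable, and the `text-davinci-003` model name (model availability may vary):
```python
# Minimal sketch: a GPT-3 completion request via the OpenAI API (pre-1.0 openai package).
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")  # set your key in the environment

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3-family model; the exact name is an assumption
    prompt="Explain AI pair programming in one sentence.",
    max_tokens=60,
)
print(response.choices[0].text.strip())
```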
## DALL-E 2.0 Overview 🎨
- DALL-E 2.0 is an AI model developed by OpenAI that generates images from textual descriptions.
- It has 500 million parameters and uses a dataset curated by OpenAI, consisting of a diverse range of images from the internet.
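A minimal sketch of text-to-image generation with the OpenAI Images endpoint, again assuming the pre-1.0 `openai` package and an API key in the environment; the prompt and image size are only illustrative:
```python
# Minimal sketch: text-to-image with the OpenAI Images API (pre-1.0 openai package).
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

result = openai.Image.create(
    prompt="a watercolor classroom full of friendly robots",
    n=1,
    size="512x512",
)
print(result["data"][0]["url"])  # temporary URL of the generated image
```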
## NVIDIA's Megatron Overview πŸ’‘
- Megatron is a large-scale transformer model developed by NVIDIA. It's primarily designed for tasks that require understanding the context of large pieces of text.
- It has 8.3 billion parameters and is trained on a variety of text data from the internet.
## Transformer-XL Overview ⚑️
- Transformer-XL is an AI model developed by Google Brain, which introduces a novel recurrence mechanism and relative positional encoding scheme.
- It has 250 million parameters and uses a variety of datasets for training, including BooksCorpus and English Wikipedia.
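To see the recurrence mechanism in action, here is a minimal sketch assuming an older `transformers` release that still ships the Transformer-XL classes, plus the `transfo-xl-wt103` checkpoint:
```python
# Minimal sketch: Transformer-XL segment-level recurrence via Hugging Face transformers.
# Assumes an older transformers release that still includes the TransfoXL classes.
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

mems = None  # cached hidden states carried across segments (the recurrence mechanism)
for segment in ["The quick brown fox", "jumps over the lazy dog"]:
    input_ids = tokenizer(segment, return_tensors="pt").input_ids
    with torch.no_grad():
        outputs = model(input_ids, mems=mems)
    mems = outputs.mems  # reused so the next segment attends to a longer context
```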
## XLNet Overview 🌐
- XLNet is a generalized autoregressive model that outperforms BERT on several benchmarks.
- It has 210 million parameters and uses a variety of datasets for training, including BooksCorpus and English Wikipedia.
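And a comparable minimal sketch for XLNet, assuming the `transformers` library and the `xlnet-base-cased` checkpoint:
```python
# Minimal sketch: score tokens with XLNet via Hugging Face transformers.
# Assumes transformers, torch, and sentencepiece are installed.
import torch
from transformers import XLNetTokenizer, XLNetLMHeadModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

inputs = tokenizer("XLNet is a generalized autoregressive model.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # per-position scores over the vocabulary
print(logits.shape)  # (batch, sequence_length, vocab_size)
```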
# πŸ“Š AI Model Comparison πŸ“‰
| Model Name | Model Size (in Parameters) | Model Overview |
| --- | --- | --- |
| BigScience-tr11-176B | 176 billion | BigScience is the latest AI model developed by the Big Science Workshop. It has 176 billion parameters and uses a combination of text data from the internet and scientific literature for training. |
| GPT-3 | 175 billion | GPT-3 is an AI model developed by OpenAI, which has 175 billion parameters and uses a variety of datasets for training, including Common Crawl, BooksCorpus, and English Wikipedia. |
| OpenAI's DALL-E 2.0 | 500 million | DALL-E 2.0 is an AI model developed by OpenAI that generates images from textual descriptions. It has 500 million parameters and uses a dataset curated by OpenAI. |
| NVIDIA's Megatron | 8.3 billion | Megatron is a large-scale transformer model developed by NVIDIA. It's primarily designed for tasks that require understanding the context of large pieces of text. |
| Transformer-XL | 250 million | Transformer-XL is an AI model developed by Google Brain, which introduces a novel recurrence mechanism and relative positional encoding scheme. |
| XLNet | 210 million | XLNet is a generalized autoregressive model that outperforms BERT on several benchmarks. |
## References:
1. [BLOOM (BigScience) - A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100)
2. [GPT-3 - Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
3. [DALL-E - Creating Images from Text](https://openai.com/research/dall-e/)
4. [Megatron-LM - Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053)
5. [Transformer-XL - Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860)
6. [XLNet - Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237)
''')