IliaLarchenko committed on
Commit db66bfe · 1 Parent(s): 64e8b03

Improved readme

Files changed (1)
  1. README.md +22 -22
README.md CHANGED
@@ -33,23 +33,22 @@ You can try this service in the demo mode here: [AI Interviewer](https://hugging
33
 But for a good experience, you need to run it locally: [Project repository](https://github.com/IliaLarchenko/Interviewer).
34
 
35
  This tool is designed to help you practice various technical interviews by simulating real interview experiences.
36
- Now, you can enhance your skills not only in coding but also in system design, machine learning system design, and other specialized topics.
37
- Here you can brush your interview skills in a realistic setting, although it’s not intended to replace thorough preparations like studying algorithms or practicing coding problems.
38
 
39
  ## Key Features
40
 
41
  - **Speech-First Interface**: Talk to the AI just like you would with a real interviewer. This makes your practice sessions feel more realistic.
42
  - **Various AI Models**: The tool uses three types of AI models:
43
  - **LLM (Large Language Model)**: Acts as the interviewer.
44
- - **Speech-to-Text and Text-to-Speech Models**: These help mimic real conversations by converting spoken words to text and vice versa.
45
- - **Model Flexibility**: The tool works with many different models, including those from OpenAI, open-source models from Hugging Face, and locally running models.
46
- - **Streaming Mode**: The tool can use all models in streaming mode when it is supported. Instead of waiting for the full response from the AI, you can get partial responses in real-time.
47
- - **Expanded Interview Coverage**: The tool now supports a variety of interview types, including Coding, System Design, Machine Learning System Design, Math, Stats, and Logic, SQL, and ML Theory interviews.
48
 
49
 
50
  # Running the AI Tech Interviewer Simulator
51
 
52
- To get the real experience you should run the service locally and use your own API key or local model.
53
 
54
  ## Initial Setup
55
 
@@ -86,7 +85,7 @@ The application will be accessible at `http://localhost:7860`.
86
 
87
  ### Running Locally (alternative)
88
 
89
- Set up a Python environment and install dependencies to run the application locally:
90
 
91
  ```bash
92
  python -m venv venv
@@ -96,21 +95,18 @@ python app.py
96
  ```
97
 
98
  The application should now be accessible at `http://localhost:7860`.
99
-
100
 
101
- # Models Configuration
102
-
103
- This tool utilizes three types of AI models: a Large Language Model (LLM) for simulating interviews, a Speech-to-Text (STT) model for audio processing, and a Text-to-Speech (TTS) model for auditory feedback. You can configure each model separately to tailor the experience based on your preferences and available resources.
104
 
105
- ## Flexible Model Integration
106
 
107
- You can connect various models from different sources to the tool. Whether you are using models from OpenAI, Hugging Face, or even locally hosted models, the tool is designed to be compatible with a range of APIs. Here’s how you can configure each type:
108
 
109
  ### Large Language Model (LLM)
110
 
111
- - **OpenAI Models**: You can use models like GPT-3.5-turbo or GPT-4 provided by OpenAI. Set up is straightforward with your OpenAI API key.
112
  - **Hugging Face Models**: Models like Meta-Llama from Hugging Face can also be integrated. Make sure your API key has appropriate permissions.
113
- - **Local Models**: If you have the capability, you can run models locally. Ensure they are compatible with the Hugging Face API for seamless integration.
 
114
 
115
  ### Speech-to-Text (STT)
116
 
@@ -128,7 +124,7 @@ The tool uses a `.env` file for environment configuration. Here’s a breakdown
128
 
129
  - **API Keys**: Whether using OpenAI, Hugging Face, or other services, your API key must be specified in the `.env` file. This key should have the necessary permissions to access the models you intend to use.
130
  - **Model URLs and Types**: Specify the API endpoint URLs for each model and their type (e.g., `OPENAI_API` for OpenAI models, `HF_API` for Hugging Face or local APIs).
131
- - **Model Names**: Set the specific model name, such as `gpt-3.5-turbo` or `whisper-1`, to tell the application which model to interact with.
132
 
133
  ### Example Configuration
134
 
@@ -140,6 +136,13 @@ LLM_TYPE=OPENAI_API
140
  LLM_NAME=gpt-3.5-turbo
141
  ```
142
 
 
 
 
 
 
 
 
143
 Hugging Face TTS:
144
  ```plaintext
145
  HF_API_KEY=hf_YOUR_HUGGINGFACE_API_KEY
@@ -158,19 +161,17 @@ STT_NAME=whisper-base.en
158
 
159
 You can configure each model separately. Find more examples in the `.env.example` files provided.
160
 
161
-
162
-
163
  # Acknowledgements
164
 
165
  The service is powered by Gradio, and the demo version is hosted on HuggingFace Spaces.
166
 
167
 Even though the service can be used with a great variety of models, I want to specifically acknowledge a few of them:
168
- - **OpenAI**: For models like GPT-3.5, GPT-4, Whisper, and TTS-1. More details on their models and usage policies can be found at [OpenAI's website](https://www.openai.com).
169
 - **Meta**: For the Llama models, particularly the Meta-Llama-3-70B-Instruct, as well as the Facebook-mms-tts-eng model. Visit [Meta AI](https://ai.facebook.com) for more information.
170
  - **HuggingFace**: For a wide range of models and APIs that greatly enhance the flexibility of this tool. For specific details on usage, refer to [Hugging Face's documentation](https://huggingface.co).
171
 
172
 Please be sure to review the specific documentation and follow the terms of service for each model and API you use, as this is crucial for responsible and compliant use of these technologies.
173
-
174
 
175
  # Important Legal and Compliance Information
176
 
@@ -201,4 +202,3 @@ Contributors are required to ensure that their contributions comply with this li
201
 
202
  ## AI-Generated Content Disclaimer
203
  - **Nature of AI Content**: Content generated by this service is derived from artificial intelligence, utilizing models such as Large Language Models (LLM), Speech-to-Text (STT), Text-to-Speech (TTS), and other models. The service owner assumes no responsibility for the content generated by AI. This content is provided for informational or entertainment purposes only and should not be considered legally binding or factually accurate. AI-generated content does not constitute an agreement or acknowledge any factual statements or obligations.
204
-
 
33
 But for a good experience, you need to run it locally: [Project repository](https://github.com/IliaLarchenko/Interviewer).
34
 
35
  This tool is designed to help you practice various technical interviews by simulating real interview experiences.
36
+ You can enhance your skills in coding, (machine learning) system design, and other topics.
37
+ You can brush up on your interview skills in a realistic setting, although it’s not intended to replace thorough preparations like studying algorithms or practicing coding problems.
38
 
39
  ## Key Features
40
 
41
  - **Speech-First Interface**: Talk to the AI just like you would with a real interviewer. This makes your practice sessions feel more realistic.
42
  - **Various AI Models**: The tool uses three types of AI models:
43
  - **LLM (Large Language Model)**: Acts as the interviewer.
44
+ - **Speech-to-Text and Text-to-Speech Models**: These models help to mimic real conversations by converting spoken words to text and vice versa.
45
+ - **Model Flexibility**: You can use many different models, including those from OpenAI, open-source models from Hugging Face, and locally running models.
46
+ - **Streaming Mode**: All models can be used in streaming mode. Instead of waiting for the full response from the AI, you can get partial responses in real-time.
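To make the streaming behaviour concrete, here is a generic example of a streaming request against an OpenAI-style chat completions endpoint. This is only an illustration of the mechanism, not code from this project, and the model name is just an example.

```bash
# Illustration only: with "stream": true the API returns the answer as a series of partial
# chunks (server-sent events) instead of one complete response.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o",
        "stream": true,
        "messages": [{"role": "user", "content": "Ask me one coding interview question."}]
      }'
```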
 
47
 
48
 
49
  # Running the AI Tech Interviewer Simulator
50
 
51
+ To get the real experience, you should run the AI interviewer locally and use your own API key or local model.
52
 
53
  ## Initial Setup
54
 
 
85
 
86
  ### Running Locally (alternative)
87
 
88
+ If you don't want to use Docker, just set up a Python environment and install dependencies to run the application locally:
89
 
90
  ```bash
91
  python -m venv venv
 
95
  ```
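The diff above only shows the first and last commands of this block. For reference, a typical end-to-end sequence for this kind of setup is sketched below; the dependency file name is an assumption, so check the repository for the exact commands.

```bash
# Hypothetical full sequence - verify against the repository's README.
python -m venv venv
source venv/bin/activate            # on Windows: venv\Scripts\activate
pip install -r requirements.txt     # assumes dependencies are listed in requirements.txt
python app.py                       # serves the app at http://localhost:7860
```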
96
 
97
  The application should now be accessible at `http://localhost:7860`.
 
98
 
 
 
 
99
 
100
+ # Models Configuration
101
 
102
+ AI Interviewer is powered by three types of AI models: a Large Language Model (LLM) for simulating interviews, a Speech-to-Text (STT) model for audio processing, and a Text-to-Speech (TTS) model to read LLM responses. You can configure each model separately to tailor the experience based on your preferences and available resources.
103
 
104
  ### Large Language Model (LLM)
105
 
106
+ - **OpenAI Models**: You can use models like GPT-3.5-turbo, GPT-4, GPT-4o, or others provided by OpenAI. Setup is straightforward with your OpenAI API key.
107
  - **Hugging Face Models**: Models like Meta-Llama from Hugging Face can also be integrated. Make sure your API key has appropriate permissions.
108
+ - **Claude**: You can use models from Anthropic, such as Claude, for a different interview experience. Ensure you have the necessary API key and permissions.
109
+ - **Local Models**: If you have the capability, you can run models locally using Ollama or other tools. Ensure they are compatible with the OpenAI or Hugging Face API for seamless integration.
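As a concrete illustration of the local-model option, the sketch below assumes an Ollama installation exposing its OpenAI-compatible endpoint; the model name is an example, and the exact `.env` variable names for pointing the tool at the endpoint should be taken from the provided `.env.example` files.

```bash
# Illustrative only - not taken from the project documentation.
ollama pull llama3        # download a model to run locally
ollama serve              # serves an OpenAI-compatible API at http://localhost:11434/v1
# Then point the LLM endpoint settings in your .env at that URL
# (see .env.example for the exact variable names).
```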
110
 
111
  ### Speech-to-Text (STT)
112
 
 
124
 
125
  - **API Keys**: Whether using OpenAI, Hugging Face, or other services, your API key must be specified in the `.env` file. This key should have the necessary permissions to access the models you intend to use.
126
  - **Model URLs and Types**: Specify the API endpoint URLs for each model and their type (e.g., `OPENAI_API` for OpenAI models, `HF_API` for Hugging Face or local APIs).
127
+ - **Model Names**: Set the specific model name, such as `gpt-4o` or `whisper-1`, to tell the application which model to interact with.
128
 
129
  ### Example Configuration
130
 
 
136
  LLM_NAME=gpt-3.5-turbo
137
  ```
138
 
139
+ Claude LLM:
140
+ ```plaintext
141
+ ANTHROPIC_API_KEY=sk-ant-YOUR_ANTHROPIC_API_KEY
142
+ LLM_TYPE=ANTHROPIC_API
143
+ LLM_NAME=claude-3-5-sonnet-20240620
144
+ ```
145
+
146
 Hugging Face TTS:
147
  ```plaintext
148
  HF_API_KEY=hf_YOUR_HUGGINGFACE_API_KEY
 
161
 
162
 You can configure each model separately. Find more examples in the `.env.example` files provided.
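To start from one of the provided examples, a simple approach (an assumption about your workflow, not a required step) is to copy an example file into `.env` and then fill in your own keys and model names:

```bash
# Copy one of the provided example files and edit the keys and model names to match your setup.
cp .env.example .env
```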
163
 
 
 
164
  # Acknowledgements
165
 
166
  The service is powered by Gradio, and the demo version is hosted on HuggingFace Spaces.
167
 
168
 Even though the service can be used with a great variety of models, I want to specifically acknowledge a few of them:
169
+ - **OpenAI**: For models like GPT, Whisper, and TTS-1. More details on their models and usage policies can be found at [OpenAI's website](https://www.openai.com).
170
 - **Meta**: For the Llama models, particularly the Meta-Llama-3-70B-Instruct, as well as the Facebook-mms-tts-eng model. Visit [Meta AI](https://ai.facebook.com) for more information.
171
  - **HuggingFace**: For a wide range of models and APIs that greatly enhance the flexibility of this tool. For specific details on usage, refer to [Hugging Face's documentation](https://huggingface.co).
172
 
173
 Please be sure to review the specific documentation and follow the terms of service for each model and API you use, as this is crucial for responsible and compliant use of these technologies.
174
+
175
 
176
  # Important Legal and Compliance Information
177
 
 
202
 
203
  ## AI-Generated Content Disclaimer
204
  - **Nature of AI Content**: Content generated by this service is derived from artificial intelligence, utilizing models such as Large Language Models (LLM), Speech-to-Text (STT), Text-to-Speech (TTS), and other models. The service owner assumes no responsibility for the content generated by AI. This content is provided for informational or entertainment purposes only and should not be considered legally binding or factually accurate. AI-generated content does not constitute an agreement or acknowledge any factual statements or obligations.