Upload 6 files

- Agents.zip +3 -0
- Logs.zip +3 -0
- README.md +157 -14
- Templates.zip +3 -0
- app.py +269 -0
- requirements.txt +17 -0
Agents.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:053faf0cd88d2391d0a06a4896930f6a4674aca456f40ffa98e94db3fe607e97
size 10796
Logs.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9bcc0d8bf86383d94c9e16d99167c1727d677f6785d7ae49dc17df18038d2b93
size 338
README.md
CHANGED
@@ -1,14 +1,157 @@
# AI Recruitment System

## Overview

The **AI Recruitment System** is an AI-driven hiring platform built in Python to automate and optimize the recruitment process. Powered by Llama 3.x (via Groq’s API) and the CrewAI multi-agent framework, it offers a suite of tools accessible through a Streamlit web interface. The system handles everything from creating detailed job descriptions to conducting AI-driven interviews, making it an efficient solution for modern hiring needs.

Key features include:
- **Detailed Job Description Generation**: Produces comprehensive job postings with multiple sections.
- **Resume Ranking**: Evaluates resumes for job fit with bias mitigation.
- **Personalized Email Automation**: Sends AI-generated, tailored emails (e.g., interview invites).
- **Interview Scheduling**: Schedules interviews based on candidate availability, with an AI chatbot interviewer.
- **Interview Agent**: Conducts interactive interviews and evaluates responses.
- **Hire Recommendation**: Analyzes transcripts for hiring decisions.
- **Sentiment Analysis**: Assesses candidate sentiment from interviews.

The system incorporates a **Model Context Protocol (MCP)** to maintain state across its components. MCP uses Streamlit’s session state (`st.session_state.mcp_context`) to store critical outputs (job descriptions, ranked resumes, scheduled times, and interview transcripts), enabling seamless data flow between tabs without persistent storage. This keeps the workflow efficient and preserves context throughout the hiring process.

The system prioritizes ethical AI practices, such as bias avoidance and in-memory data processing for privacy (via MCP), and uses simulated APIs for email and calendar functions.

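Concretely, `app.py` (included in this commit) creates the MCP context once per session, and every tab reads from and writes to the same dictionary; a minimal sketch:

```python
import streamlit as st

# Create the shared MCP context once per browser session.
if "mcp_context" not in st.session_state:
    st.session_state.mcp_context = {
        "job_description": None,
        "ranked_resumes": None,
        "scheduled_time": None,
        "interview_transcript": None,
    }

# A producer tab (e.g., the JD Generator) writes its result into the context...
st.session_state.mcp_context["job_description"] = "Generated JD text..."

# ...and a consumer tab (e.g., the Resume Ranker) pre-fills its input from it.
default_jd = st.session_state.mcp_context["job_description"] or ""
```
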
## Project Structure

```
ai-recruitment-system/
├── Agents/
│   ├── jd_generator.py          # Generates detailed job descriptions
│   ├── resume_ranker.py         # Ranks resumes with fairness
│   ├── email_automation.py      # Crafts personalized emails
│   ├── interview_scheduler.py   # Schedules interviews based on candidate availability
│   ├── interview_agent.py       # Conducts AI-driven interviews
│   ├── hire_recommendation.py   # Provides hiring recommendations
│   ├── sentiment_analyzer.py    # Analyzes sentiment in transcripts
├── Templates/
│   ├── jd_template.txt          # Default JD template with detailed sections
├── Logs/
│   ├── app.log                  # Log file for debugging
├── app.py                       # Streamlit UI for the system
├── requirements.txt             # Dependencies
├── .env                         # GROQ_API_KEY
└── README.md                    # This file
```

## Agent Functionality

The system uses a multi-agent architecture, with each agent specializing in a recruitment task. Below is a detailed explanation of their roles and how they leverage the Model Context Protocol (MCP):

### 1. JD Generator (`jd_generator.py`)
- **Role**: Generates detailed, professional job descriptions.
- **Functionality**:
  - Fetches job-related data from trusted web sources (e.g., `.edu`, `.org`, `.gov`) using Google search and RecursiveUrlLoader.
  - Builds a FAISS vector store from the web content for contextual relevance (see the sketch after this section).
  - Uses a customizable template and inputs (job title, skills, experience level) to create a comprehensive JD with sections: Company Overview, Job Overview, Responsibilities (5-7 items), Required Skills and Qualifications (5-7 items), Preferred Skills, Benefits, and Application Process.
  - Stores the output in MCP (`mcp_context["job_description"]`) for use in other tabs.
- **Output**: A markdown-formatted, detailed job description.
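
The generator module itself is packaged inside `Agents.zip`, so its code is not shown in this commit; the retrieval step described above (search trusted domains, crawl, embed, index with FAISS) could be sketched roughly as follows, with the query string, chunk sizes, and embedding model being illustrative assumptions rather than the shipped values:

```python
from googlesearch import search
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import RecursiveUrlLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

def build_web_context(job_title: str, k: int = 3):
    """Index trusted web pages about the role and return the k most relevant chunks."""
    # Bias the search toward the trusted domains mentioned above (illustrative query).
    query = f"{job_title} job description site:.edu OR site:.org OR site:.gov"
    urls = list(search(query, num_results=3))

    docs = []
    for url in urls:
        docs.extend(RecursiveUrlLoader(url=url, max_depth=1).load())

    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(docs)

    store = FAISS.from_documents(chunks, HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"))
    return store.similarity_search(f"{job_title} responsibilities and qualifications", k=k)
```
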
### 2. Resume Ranker (`resume_ranker.py`)
- **Role**: Ranks resumes based on job fit.
- **Functionality**:
  - Extracts text from PDF resumes (uploaded or from a directory); see the PyPDF2 sketch below.
  - Compares resumes to the job description (optionally sourced from MCP) and web context.
  - Assigns scores (0-100) with reasoning, flagging potential bias (e.g., gender, age, ethnicity) for fairness.
  - Stores the ranked list in MCP (`mcp_context["ranked_resumes"]`).
- **Output**: A ranked list of resumes with scores and bias checks.
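
The ranking prompt lives in `Agents/resume_ranker.py` inside `Agents.zip`; the PDF-to-text step it depends on can be sketched with PyPDF2, which is pinned in `requirements.txt` (the helper name here is illustrative):

```python
from PyPDF2 import PdfReader

def extract_resume_text(pdf_source) -> str:
    """Return the plain text of a resume PDF (file path or uploaded file object)."""
    reader = PdfReader(pdf_source)
    # extract_text() may return None for scanned/image-only pages, so guard against it.
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```
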
### 3. Email Automation (`email_automation.py`)
- **Role**: Generates and simulates sending personalized emails.
- **Functionality**:
  - Takes inputs: candidate name, job title, email type (interview invite or team update), details (e.g., interview time from MCP), and recipient email.
  - Uses Llama 3.x to craft fully personalized, professional emails tailored to the context and recipient.
  - Simulates email delivery with a mock API (sketched below).
- **Output**: A drafted email and simulated API response.
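
Since delivery is simulated, the mock API only needs to echo a provider-style response; a sketch of what `simulate_email_api` (imported by `app.py`) might return, assuming it performs no real network call:

```python
from datetime import datetime

def simulate_email_api(email_content: str, recipient_email: str) -> dict:
    """Pretend to send an email and return a fake delivery receipt."""
    # No SMTP or HTTP request is made; the draft is only summarized in the response.
    return {
        "status": "sent (simulated)",
        "to": recipient_email,
        "timestamp": datetime.now().isoformat(),
        "preview": str(email_content)[:80],
    }
```
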
### 4. Interview Scheduler (`interview_scheduler.py`)
- **Role**: Schedules interviews based on candidate availability.
- **Functionality**:
  - Accepts candidate name, job title, and availability (e.g., "March 25, 2025, 9 AM - 12 PM").
  - Since the interviewer is an AI chatbot (always available), it selects a time within the candidate’s range (returned in the format shown below).
  - Generates a concise summary of the scheduled interview.
  - Stores the scheduled time in MCP (`mcp_context["scheduled_time"]`).
  - Simulates calendar integration via a mock API.
- **Output**: Scheduled time, calendar response, and a summary.
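
`app.py` expects the agent to return the chosen slot as a plain string that parses with the format `%B %d, %Y, %I:%M %p` before the mock calendar call, for example:

```python
from datetime import datetime

scheduled_time_str = "March 25, 2025, 10:00 AM"  # what the scheduler agent returns
scheduled_time = datetime.strptime(scheduled_time_str, "%B %d, %Y, %I:%M %p")
print(scheduled_time)  # 2025-03-25 10:00:00
```
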
### 5. Interview Agent (`interview_agent.py`)
- **Role**: Conducts AI-driven interviews.
- **Functionality**:
  - Uses Retrieval-Augmented Generation (RAG) with web data and the job description (from MCP) to generate job-specific questions.
  - Engages in a conversational loop, evaluating responses and asking follow-ups.
  - Stores the transcript in MCP (`mcp_context["interview_transcript"]`) for downstream analysis.
- **Output**: Interview questions and a full transcript.
### 6. Hire Recommendation (`hire_recommendation.py`)
- **Role**: Provides hiring recommendations from transcripts.
- **Functionality**:
  - Analyzes interview transcripts (from MCP) for strengths, weaknesses, and a Hire/No-Hire decision.
  - Ensures fairness by avoiding bias (e.g., gender, age, ethnicity) and flagging issues.
- **Output**: A detailed analysis with a hiring recommendation.
### 7. Sentiment Analyzer (`sentiment_analyzer.py`)
- **Role**: Assesses candidate sentiment in interviews.
- **Functionality**:
  - Evaluates the tone and sentiment (e.g., positive, neutral, negative) of interview transcripts (from MCP).
  - Offers insights into candidate confidence and engagement.
- **Output**: A sentiment analysis report.

## How Llama 3.x Powers the Solution

Llama 3.x, accessed via Groq’s API, is the backbone of the AI Recruitment System, providing advanced natural language processing capabilities. Its integration drives the system’s automation, personalization, and analytical features, enhanced by the Model Context Protocol (MCP) for state management. Here’s how it contributes:

### 1. Detailed Text Generation
- **Agents**: JD Generator, Email Automation, Interview Scheduler (summary), Interview Agent.
- **Role**: Llama 3.x generates rich, context-aware text:
  - **JD Generator**: Produces detailed job descriptions with multiple sections (e.g., Responsibilities, Benefits), incorporating web context and user inputs into a professional, markdown-formatted output stored in MCP.
  - **Email Automation**: Creates personalized emails tailored to the candidate, job, and context (e.g., using MCP’s scheduled time), replacing static templates with dynamic content.
  - **Interview Scheduler**: Generates concise, readable summaries of scheduled interviews, saved to MCP.
  - **Interview Agent**: Crafts dynamic, job-specific questions and follow-ups based on the job description (from MCP) and candidate responses.

### 2. Contextual Analysis and Reasoning
- **Agents**: Resume Ranker, Interview Agent, Hire Recommendation, Sentiment Analyzer.
- **Role**: Llama 3.x interprets and evaluates complex text inputs:
  - **Resume Ranker**: Analyzes resume content against job descriptions (from MCP) and web context, providing scores and bias-aware reasoning, stored in MCP.
  - **Interview Agent**: Assesses candidate responses for relevance and depth, using RAG and MCP data for informed questioning, with transcripts saved to MCP.
  - **Hire Recommendation**: Evaluates transcripts (from MCP) for strengths, weaknesses, and hiring decisions, ensuring fairness.
  - **Sentiment Analyzer**: Detects emotional tone and sentiment in transcripts (from MCP) with nuanced understanding.

### 3. Task Automation via CrewAI
- **Agents**: All agents.
- **Role**: Llama 3.x powers the CrewAI framework, enabling autonomous task execution (see the sketch below):
  - Each agent processes specific prompts (e.g., "Generate a detailed JD," "Schedule an interview") using Llama’s reasoning and generation capabilities, with MCP ensuring context continuity.
  - Groq’s API ensures fast inference, critical for real-time features like the Interview Agent.
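
The shipped agent definitions are inside `Agents.zip`; the pattern each module follows can be sketched with CrewAI’s `Agent`/`Task`/`Crew` primitives, where the role, goal, and prompt text below are illustrative rather than the actual shipped prompts:

```python
from crewai import Agent, Crew, Task
from langchain_groq import ChatGroq

llm = ChatGroq(model_name="llama3-70b-8192", temperature=0.5, max_tokens=2000)

# Illustrative agent definition; the real one lives in Agents/jd_generator.py.
jd_generator = Agent(
    role="Job Description Writer",
    goal="Write detailed, unbiased job descriptions",
    backstory="An experienced technical recruiter.",
    llm=llm,
    verbose=True,
)

def create_jd_task(job_title: str, skills: str, experience_level: str) -> Task:
    return Task(
        description=(
            f"Write a detailed job description for a {job_title} role "
            f"requiring {skills} and {experience_level} of experience."
        ),
        expected_output="A markdown-formatted job description with all required sections.",
        agent=jd_generator,
    )

# app.py then runs the task exactly like this:
crew = Crew(agents=[jd_generator], tasks=[create_jd_task("Senior Python Developer", "Python, SQL, AWS", "5+ years")], verbose=True)
result = crew.kickoff()
```
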
### 4. Ethical AI Practices
- **Bias Mitigation**: In Resume Ranker and Hire Recommendation, Llama 3.x is instructed to flag and avoid bias based on gender, age, or ethnicity, supporting ethical hiring.
- **Privacy via MCP**: MCP stores data in-memory (e.g., `mcp_context`), avoiding persistent storage for privacy.
- **Transparency**: The Streamlit sidebar highlights Llama 3.x’s role and limitations (e.g., potential inaccuracies).

### Technical Details
- **Model**: Llama 3.x (70B parameters, 8192-token context window) via `langchain_groq.ChatGroq`.
- **Parameters**: `temperature=0.5` for balanced output; `max_tokens=2000` in the JD Generator (raised for detailed JDs) and `1000` elsewhere (see the configuration sketch below).
- **Enhancements**: RAG (via FAISS and HuggingFace embeddings) augments the JD Generator and Interview Agent with web-sourced context, integrated with MCP.
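
Under those settings, the model handle each agent receives would be configured roughly as below; the exact wiring sits inside the agent modules, and loading the key with `python-dotenv` is an assumption consistent with the `.env` step in the setup instructions:

```python
import os
from dotenv import load_dotenv
from langchain_groq import ChatGroq

load_dotenv()  # reads GROQ_API_KEY from .env

# JD Generator: larger output budget for multi-section job descriptions.
jd_llm = ChatGroq(groq_api_key=os.getenv("GROQ_API_KEY"),
                  model_name="llama3-70b-8192", temperature=0.5, max_tokens=2000)

# Remaining agents: the 1000-token budget is sufficient.
default_llm = ChatGroq(groq_api_key=os.getenv("GROQ_API_KEY"),
                       model_name="llama3-70b-8192", temperature=0.5, max_tokens=1000)
```
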
## Setup Instructions

1. **Extract the ZIP File**:
   - Download the `ai-recruitment-system.zip` file.
   - Extract it to a directory of your choice using a tool like WinZip, 7-Zip, or your OS’s built-in unzip feature.

2. **Set Up Environment**:
   Create a `.env` file in the root directory with your Groq API key:

   ```bash
   echo GROQ_API_KEY=<your-api-key> > .env
   ```

3. **Install Dependencies**:
   Ensure Python 3.8+ is installed, then run:

   ```bash
   pip install -r requirements.txt
   ```

4. **Run Application**:

   ```bash
   streamlit run app.py
   ```

Templates.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b8da7bb2052b01cb25d30bf7b27d694426712f9400527a49d6fba4d35e16e7b
size 1705
app.py
ADDED
@@ -0,0 +1,269 @@
import streamlit as st
from crewai import Crew
from Agents.jd_generator import jd_generator, create_jd_task
from Agents.resume_ranker import resume_ranker, create_resume_rank_task
from Agents.email_automation import email_automation, create_email_task, simulate_email_api
from Agents.interview_scheduler import interview_scheduler, create_schedule_task, simulate_calendar_api, create_schedule_summary_task
from Agents.interview_agent import interview_agent, create_interview_task, evaluate_response_task
from Agents.hire_recommendation import hire_recommendation_agent, create_hire_recommendation_task
from Agents.sentiment_analyzer import sentiment_analyzer, create_sentiment_task
from datetime import datetime
import os
import logging
from streamlit_extras.add_vertical_space import add_vertical_space

# Setup logging
logging.basicConfig(filename="Logs/app.log", level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")

st.set_page_config(page_title="AI Recruitment System", layout="wide")
st.title("AI Recruitment System")

# MCP Context
if "mcp_context" not in st.session_state:
    st.session_state.mcp_context = {
        "job_description": None,
        "ranked_resumes": None,
        "scheduled_time": None,
        "interview_transcript": None
    }

# Sidebar for transparency
st.sidebar.markdown("""
### AI Capabilities & Limitations
- **Powered by llama3-70b-8192**: Generates human-like text but may produce inaccuracies.
- **RAG**: Enhances outputs with web data, limited by search quality.
- **Simulation**: Email and calendar APIs are simulated.

### Ethical Hiring
Data is processed in-memory and not stored unless saved by the user.
""")

tabs = st.tabs([
    "JD Generator", "Resume Ranker", "Email Automation", "Interview Scheduler",
    "Interview Agent", "Hire Recommendation", "Sentiment Analyzer"
])

# Tab 1: JD Generator
with tabs[0]:
    st.header("JD Generator")
    with st.expander("Template & Inputs", expanded=True):
        st.markdown("**Upload a JD template** *(optional, defaults to Templates/jd_template.txt)*")
        template_file = st.file_uploader("Upload JD Template (.txt)", type=["txt"], key="jd_template")
        if template_file:
            with open("Templates/jd_template.txt", "wb") as f:
                f.write(template_file.read())
        job_title = st.text_input("Job Title", "e.g., Senior Python Developer", key="jd_job_title", help="Enter the job title")
        skills = st.text_area("Required Skills", "e.g., Python, Flask, SQL, AWS", key="jd_skills")
        experience_level = st.text_input("Experience Level", "e.g., 5+ years", key="jd_experience")
    if st.button("Generate Job Description", key="jd_button", help="Generate a detailed job description"):
        if job_title and skills and experience_level:
            with st.spinner("Generating detailed JD..."):
                try:
                    jd_task = create_jd_task(job_title, skills, experience_level)
                    crew = Crew(agents=[jd_generator], tasks=[jd_task], verbose=True)
                    result = crew.kickoff()
                    st.session_state.mcp_context["job_description"] = result
                    st.subheader("Generated Job Description")
                    st.markdown(result)  # Use markdown to render formatted output
                    logging.info(f"JD generated for {job_title}")
                except Exception as e:
                    st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                    logging.error(f"JD generation failed: {str(e)}")
        else:
            st.error("Fill in all fields.")
            logging.warning("JD generation attempted with missing fields")

# Tab 2: Resume Ranker
with tabs[1]:
    st.header("Resume Ranker")
    with st.expander("Inputs", expanded=True):
        job_desc = st.text_area("Job Description", value=st.session_state.mcp_context["job_description"] or "", key="resume_job_desc", help="Paste or enter job description")
        dir_path = st.text_input("Directory Path", "e.g., D:/resumes", key="dir_path")
        uploaded_files = st.file_uploader("Upload Resume PDFs", type=["pdf"], accept_multiple_files=True, key="resume_files")
    if st.button("Rank Resumes", key="resume_button", help="Rank uploaded or directory resumes"):
        if job_desc:
            if (dir_path and os.path.isdir(dir_path)) or uploaded_files:
                with st.spinner("Ranking resumes..."):
                    try:
                        task = create_resume_rank_task(job_desc, dir_path, uploaded_files)
                        if task:
                            crew = Crew(agents=[resume_ranker], tasks=[task], verbose=True)
                            result = crew.kickoff()
                            st.session_state.mcp_context["ranked_resumes"] = result
                            st.subheader("Ranked Resumes")
                            st.write(result)
                            logging.info("Resumes ranked")
                        else:
                            st.error("No valid resumes found.")
                    except Exception as e:
                        st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                        logging.error(f"Resume ranking failed: {str(e)}")
            else:
                st.error("Provide a directory path or upload resumes.")
        else:
            st.error("Provide a job description.")

# Tab 3: Email Automation (Personalized AI-Generated Content)
with tabs[2]:
    st.header("Email Automation")
    with st.expander("Email Details", expanded=True):
        st.markdown("**Enter details for a personalized email**")
        candidate_name = st.text_input("Candidate Name", "e.g., John Doe", key="email_candidate", help="Candidate's full name")
        job_title_email = st.text_input("Job Title", "e.g., Senior Python Developer", key="email_job_title", help="Job title for the email")
        email_type = st.selectbox("Email Type", ["interview_invite", "hiring_team_update"], key="email_type", help="Choose email purpose")
        details = st.text_area("Details", value=st.session_state.mcp_context["scheduled_time"] or "e.g., March 25, 2025, 10 AM", key="email_details", help="e.g., interview time or update details")
        recipient_email = st.text_input("Recipient Email", "e.g., [email protected]", key="email_recipient", help="Recipient's email address")
    if st.button("Send Email", key="email_button", help="Generate and simulate sending a personalized email"):
        if candidate_name and job_title_email and details and recipient_email:
            with st.spinner("Generating personalized email..."):
                try:
                    task = create_email_task(candidate_name, email_type, job_title_email, details, recipient_email)
                    crew = Crew(agents=[email_automation], tasks=[task], verbose=True)
                    email_content = crew.kickoff()
                    result = simulate_email_api(email_content, recipient_email)
                    st.subheader("Email Content")
                    st.write(email_content)
                    st.subheader("API Response")
                    st.write(result)
                    logging.info(f"Email simulated for {recipient_email}")
                except Exception as e:
                    st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                    logging.error(f"Email generation failed: {str(e)}")
        else:
            st.error("Fill in all fields.")

# Tab 4: Interview Scheduler (Candidate Availability Only)
with tabs[3]:
    st.header("Interview Scheduler")
    with st.expander("Scheduling Details", expanded=True):
        st.markdown("**Enter candidate availability**")
        candidate_name_sched = st.text_input("Candidate Name", "e.g., John Doe", key="sched_candidate", help="Candidate's full name")
        job_title_sched = st.text_input("Job Title", "e.g., Senior Python Developer", key="sched_job_title", help="Job title for the interview")
        candidate_avail = st.text_area("Candidate Availability", "e.g., March 25, 2025, 9 AM - 12 PM", key="sched_candidate_avail", help="e.g., March 25, 2025, 9 AM - 12 PM")
    if st.button("Schedule Interview", key="sched_button", help="Schedule the interview based on candidate availability"):
        if candidate_name_sched and job_title_sched and candidate_avail:
            with st.spinner("Scheduling interview..."):
                try:
                    time_task = create_schedule_task(job_title_sched, candidate_name_sched, candidate_avail)
                    crew = Crew(agents=[interview_scheduler], tasks=[time_task], verbose=True)
                    scheduled_time_str = crew.kickoff()
                    scheduled_time = datetime.strptime(scheduled_time_str, "%B %d, %Y, %I:%M %p")
                    calendar_result = simulate_calendar_api(candidate_name_sched, job_title_sched, scheduled_time)
                    st.session_state.mcp_context["scheduled_time"] = scheduled_time_str

                    summary_task = create_schedule_summary_task(candidate_name_sched, job_title_sched, scheduled_time_str)
                    crew = Crew(agents=[interview_scheduler], tasks=[summary_task], verbose=True)
                    summary = crew.kickoff()

                    st.subheader("Scheduled Time")
                    st.write(scheduled_time_str)
                    st.subheader("Calendar Response")
                    st.write(calendar_result)
                    st.subheader("Interview Summary")
                    st.write(summary)
                    logging.info(f"Interview scheduled for {candidate_name_sched} with summary")
                except Exception as e:
                    st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                    logging.error(f"Scheduling failed: {str(e)}")
        else:
            st.error("Fill in all fields.")
            logging.warning("Scheduling attempted with missing fields")

# Tab 5: Interview Agent
with tabs[4]:
    st.header("Interview Agent")
    with st.expander("Job Description", expanded=True):
        st.markdown("**Enter the job description** *(e.g., Senior Python Developer requiring...)*")
        job_desc_interview = st.text_area("Job Description", value=st.session_state.mcp_context["job_description"] or "", key="interview_job_desc")
    if "interview_history" not in st.session_state:
        st.session_state.interview_history = []
    if "current_question" not in st.session_state:
        st.session_state.current_question = None
    if st.button("Start Interview", key="start_interview", help="Begin the interview"):
        if job_desc_interview:
            with st.spinner("Generating Initial Question..."):
                try:
                    task = create_interview_task(job_desc_interview)
                    crew = Crew(agents=[interview_agent], tasks=[task], verbose=True)
                    question = crew.kickoff()
                    st.session_state.current_question = question
                    st.session_state.interview_history = [{"role": "agent", "content": str(question)}]
                    logging.info("Interview started")
                except Exception as e:
                    st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                    logging.error(f"Interview start failed: {str(e)}")
        else:
            st.error("Provide a job description.")
    if st.session_state.interview_history:
        st.subheader("Conversation History")
        for message in st.session_state.interview_history:
            st.write(f"**{message['role'].capitalize()}**: {message['content']}")
    if st.session_state.current_question:
        candidate_response = st.text_area("Your Response", key="candidate_response", value="", height=100)
        if st.button("Submit Response", key="submit_response", help="Submit your answer"):
            if candidate_response:
                with st.spinner("Generating Follow-up..."):
                    try:
                        st.session_state.interview_history.append({"role": "candidate", "content": candidate_response})
                        eval_task = evaluate_response_task(job_desc_interview, st.session_state.interview_history, candidate_response)
                        crew = Crew(agents=[interview_agent], tasks=[eval_task], verbose=True)
                        follow_up = crew.kickoff()
                        st.session_state.current_question = follow_up
                        st.session_state.interview_history.append({"role": "agent", "content": str(follow_up)})
                        st.session_state.mcp_context["interview_transcript"] = "\n".join([f"{m['role']}: {m['content']}" for m in st.session_state.interview_history])
                        logging.info("Follow-up question generated")
                        st.experimental_rerun()  # rerun last so the log line above is not skipped
                    except Exception as e:
                        st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                        logging.error(f"Follow-up generation failed: {str(e)}")
            else:
                st.error("Provide a response.")

# Tab 6: Hire Recommendation
with tabs[5]:
    st.header("Hire Recommendation Agent")
    with st.expander("Transcript Input", expanded=True):
        transcript_default = st.session_state.mcp_context["interview_transcript"] or ""
        transcript_file = st.file_uploader("Upload Transcript (.txt)", type=["txt"], key="hire_transcript")
        transcript_text = st.text_area("Or Paste Transcript", value=transcript_default, key="hire_transcript_text")
    if st.button("Generate Recommendation", key="hire_button", help="Analyze transcript for hiring decision"):
        transcript = transcript_file.read().decode("utf-8") if transcript_file else transcript_text
        if transcript:
            with st.spinner("Analyzing Transcript..."):
                try:
                    task = create_hire_recommendation_task(transcript)
                    crew = Crew(agents=[hire_recommendation_agent], tasks=[task], verbose=True)
                    result = crew.kickoff()
                    st.subheader("Hiring Recommendation")
                    st.write(result)
                    logging.info("Hiring recommendation generated")
                except Exception as e:
                    st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                    logging.error(f"Hire recommendation failed: {str(e)}")
        else:
            st.error("Upload a file or paste a transcript.")

# Tab 7: Sentiment Analyzer
with tabs[6]:
    st.header("Sentiment Analyzer")
    with st.expander("Transcript Input", expanded=True):
        sentiment_default = st.session_state.mcp_context["interview_transcript"] or ""
        sentiment_file = st.file_uploader("Upload Transcript (.txt)", type=["txt"], key="sentiment_transcript")
        sentiment_text = st.text_area("Or Paste Transcript", value=sentiment_default, key="sentiment_transcript_text")
    if st.button("Analyze Sentiment", key="sentiment_button", help="Analyze transcript sentiment"):
        transcript = sentiment_file.read().decode("utf-8") if sentiment_file else sentiment_text
        if transcript:
            with st.spinner("Analyzing Sentiment..."):
                try:
                    task = create_sentiment_task(transcript)
                    crew = Crew(agents=[sentiment_analyzer], tasks=[task], verbose=True)
                    result = crew.kickoff()
                    st.subheader("Sentiment Analysis")
                    st.write(result)
                    logging.info("Sentiment analysis completed")
                except Exception as e:
                    st.error(f"Error: {str(e)}. See Logs/app.log for details.")
                    logging.error(f"Sentiment analysis failed: {str(e)}")
        else:
            st.error("Upload a file or paste a transcript.")

add_vertical_space(2)
requirements.txt
ADDED
@@ -0,0 +1,17 @@
crewai==0.30.0
langchain-groq==0.1.0
streamlit==1.32.0
python-dotenv==1.0.0
PyPDF2==3.0.1
googlesearch-python
langchain==0.1.20
langchain-community==0.0.38
beautifulsoup4==4.12.3
faiss-cpu==1.8.0
google-auth-oauthlib==1.2.0
google-auth-httplib2==0.2.0
google-api-python-client==2.134.0
requests==2.32.3
streamlit-extras==0.4.0
sentence-transformers
bs4