Spaces: seanpedrickcase / llm_topic_modelling
llm_topic_modelling / tools
3 contributors · History: 15 commits

Latest commit d9427a2 · seanpedrickcase · 5 months ago
Trying to move calls to @spaces.GPU to specific gemma calls to use the local model more efficiently
__init__.py · 0 Bytes · Safe · First commit · 5 months ago
auth.py · 1.54 kB · Safe · Allowed for server port, queue size, and file size to be specified by environment variables · 5 months ago (env-var launch configuration sketched below)
aws_functions.py · 7.26 kB · Safe · Corrected line in upload_file_to_s3 function that was causing issues · 5 months ago (S3 upload pattern sketched below)
chatfuncs.py · 8.17 kB · Safe · Trying to move calls to @spaces.GPU to specific gemma calls to use the local model more efficiently · 5 months ago (decorator-scoping pattern sketched below)
helper_functions.py · 14.6 kB · Safe · Added presentation of summary table outputs · 5 months ago
llm_api_call.py · 92.4 kB · Safe · Trying to move calls to @spaces.GPU to specific gemma calls to use the local model more efficiently · 5 months ago
prompts.py · 5.25 kB · Safe · Refactor app.py and related modules for improved topic extraction and summarization. Updated UI prompts for clarity, enhanced file upload functionality, and added error handling in AWS file uploads. Introduced new functions for converting response text to markdown tables, creating general topics from subtopics, and improved overall code structure for better maintainability. · 5 months ago (markdown-table conversion sketched below)
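
The auth.py commit describes letting the server port, queue size, and maximum upload size be set via environment variables. The Space's own code isn't reproduced here; below is a minimal sketch of that pattern for a Gradio app, using hypothetical variable names (GRADIO_SERVER_PORT, GRADIO_QUEUE_SIZE, GRADIO_MAX_FILE_SIZE) and assuming a recent Gradio version where launch() accepts max_file_size.

```python
import os

import gradio as gr

# Hypothetical environment variable names; the Space may use different ones.
SERVER_PORT = int(os.environ.get("GRADIO_SERVER_PORT", "7860"))
QUEUE_SIZE = int(os.environ.get("GRADIO_QUEUE_SIZE", "20"))
MAX_FILE_SIZE = os.environ.get("GRADIO_MAX_FILE_SIZE", "100mb")

with gr.Blocks() as app:
    gr.Markdown("Placeholder UI")

# Cap the request queue and pass the port and upload limit at launch time.
app.queue(max_size=QUEUE_SIZE)
app.launch(server_port=SERVER_PORT, max_file_size=MAX_FILE_SIZE)
```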
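
The aws_functions.py commit refers to a fix inside upload_file_to_s3. Its actual signature isn't visible from this listing, so the sketch below only shows the usual boto3 pattern such a helper wraps; the parameter names are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError


def upload_file_to_s3(local_path: str, bucket: str, s3_key: str) -> bool:
    """Upload a local file to S3 and report success.

    Illustrative sketch only, not the Space's actual implementation.
    """
    s3_client = boto3.client("s3")
    try:
        s3_client.upload_file(local_path, bucket, s3_key)
    except ClientError as err:
        print(f"Upload to s3://{bucket}/{s3_key} failed: {err}")
        return False
    return True
```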
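
The chatfuncs.py and llm_api_call.py commits describe narrowing the @spaces.GPU decorator (Hugging Face's ZeroGPU hook) to just the local Gemma calls, so GPU time is only reserved while the model is actually generating. A sketch of that scoping pattern, with an illustrative model id and hypothetical function names:

```python
import spaces  # ZeroGPU helper available inside Hugging Face Spaces
from transformers import pipeline

# Illustrative model id and loading strategy; the Space's own setup may differ.
generator = pipeline("text-generation", model="google/gemma-2b-it", device_map="auto")


@spaces.GPU  # GPU is only requested for the duration of this call
def call_local_gemma(prompt: str, max_new_tokens: int = 512) -> str:
    """Run the local Gemma model inside the narrowest possible GPU window."""
    outputs = generator(prompt, max_new_tokens=max_new_tokens)
    return outputs[0]["generated_text"]


def run_topic_extraction(prompt: str) -> str:
    # Prompt building and response parsing stay undecorated, so ZeroGPU time
    # is not held during CPU-only work.
    return call_local_gemma(prompt)
```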
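
The prompts.py commit mentions new functions for converting response text to markdown tables. The converter used in the Space isn't shown here; the sketch below illustrates one straightforward way to do it, with a hypothetical function name and pipe-delimited model output assumed.

```python
def response_to_markdown_table(response_text: str, delimiter: str = "|") -> str:
    """Turn delimiter-separated LLM output into a markdown table.

    Illustrative only; not the converter used in the Space.
    """
    rows = [line.strip().strip(delimiter)
            for line in response_text.splitlines() if line.strip()]
    cells = [[col.strip() for col in row.split(delimiter)] for row in rows]
    header, *body = cells
    lines = ["| " + " | ".join(header) + " |",
             "|" + "|".join(["---"] * len(header)) + "|"]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)


# Example: one header row and two data rows from a model response.
raw = ("General topic|Subtopic|Sentiment\n"
       "Service|Wait times|Negative\n"
       "Service|Staff attitude|Positive")
print(response_to_markdown_table(raw))
```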