Space: seanpedrickcase / llm_topic_modelling
Directory: llm_topic_modelling / tools (at commit a10d388)
3 contributors · History: 10 commits

Latest commit a10d388 by seanpedrickcase, 5 months ago: Refactored call_llama_cpp_model function to include model parameter in chatfuncs.py and updated import statements in llm_api_call.py to reflect this change.
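The latest commit message describes passing the loaded model into `call_llama_cpp_model` explicitly rather than resolving it elsewhere. A minimal sketch of what such a signature could look like, assuming the model object is callable in the style of `llama_cpp.Llama` (the function name comes from the commit message; the parameter names, generation-config shape, and return handling are assumptions, not taken from `chatfuncs.py`):

```python
def call_llama_cpp_model(prompt: str, gen_config: dict, model) -> str:
    """Run one completion through an already-loaded llama.cpp model.

    `model` is assumed to be callable like llama_cpp.Llama: it takes the
    prompt plus sampling kwargs and returns a dict with a "choices" list.
    Passing it as a parameter (per the commit message) avoids relying on
    a module-level global.
    """
    output = model(prompt, **gen_config)
    return output["choices"][0]["text"]


# Stub standing in for a loaded llama_cpp.Llama instance, so this sketch
# runs without the llama-cpp-python dependency.
def fake_model(prompt, **kwargs):
    return {"choices": [{"text": f"echo: {prompt}"}]}


print(call_llama_cpp_model("hello", {"max_tokens": 8}, fake_model))
```

Threading the model through as an argument also makes the function straightforward to test with a stub, as shown above.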
File                  Status  Size     Last commit message
__init__.py           Safe    0 Bytes  First commit
auth.py               Safe    1.54 kB  Allowed for server port, queue size, and file size to be specified by environment variables
aws_functions.py      Safe    7.02 kB  First commit
chatfuncs.py          Safe    8.51 kB  Refactored call_llama_cpp_model function to include model parameter in chatfuncs.py and updated import statements in llm_api_call.py to reflect this change.
helper_functions.py   Safe    12.7 kB  Added support for using local models (specifically Gemma 2b) for topic extraction and summary. Generally improved output format safeguards.
llm_api_call.py       Safe    88 kB    Refactored call_llama_cpp_model function to include model parameter in chatfuncs.py and updated import statements in llm_api_call.py to reflect this change.
prompts.py            Safe    4.67 kB  Corrected prompt. Now runs Haiku correctly

All files were last modified 5 months ago.
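The auth.py commit message says the server port, queue size, and file size can be specified by environment variables. A hedged illustration of that pattern, read the variable if set, fall back to a default otherwise; the variable names and defaults here are assumptions for illustration, not taken from the repository:

```python
import os

# Hypothetical environment-variable configuration in the style the auth.py
# commit message describes. Names and defaults are illustrative only.
GRADIO_SERVER_PORT = int(os.environ.get("GRADIO_SERVER_PORT", "7860"))
MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", "10"))
MAX_FILE_SIZE = os.environ.get("MAX_FILE_SIZE", "100mb")

print(GRADIO_SERVER_PORT, MAX_QUEUE_SIZE, MAX_FILE_SIZE)
```

Reading configuration this way lets the same Space image run locally and on Hugging Face infrastructure without code changes, since each deployment sets its own environment.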