Refactored the call_llama_cpp_model function in chatfuncs.py to accept a model parameter, and updated the import statements in llm_api_call.py to match. (a10d388, seanpedrickcase, committed on Dec 11, 2024)
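A minimal sketch of what this refactor could look like. Only the function name, the new model parameter, and the file names come from the commit message; the prompt and temperature parameters, the use of create_completion, and the max_tokens value are assumptions.

```python
# chatfuncs.py (sketch): the model is now passed in by the caller rather than
# being picked up from module-level state inside the function.
from llama_cpp import Llama

def call_llama_cpp_model(prompt: str, model: Llama, temperature: float = 0.1) -> str:
    """Run a prompt through the llama.cpp model supplied by the caller."""
    response = model.create_completion(prompt, temperature=temperature, max_tokens=1024)
    return response["choices"][0]["text"]
```

The matching change in llm_api_call.py would then be an import along the lines of `from chatfuncs import call_llama_cpp_model`, with the loaded model object passed down at call time.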
Moved model loading into the chatfuncs submodule to help avoid GPU run issues. (1f0d087, seanpedrickcase, committed on Dec 11, 2024)
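One plausible shape for this change is a single module-scoped loader living next to the code that uses the model, so the GPU-backed object is created once in one place. The load_model name, the lazy-loading pattern, the n_gpu_layers value, and the context size below are all assumptions.

```python
# chatfuncs.py (sketch): load the llama.cpp model on first use and reuse it,
# keeping all GPU initialisation inside the chatfuncs submodule.
from typing import Optional
from llama_cpp import Llama

_model: Optional[Llama] = None

def load_model(model_path: str) -> Llama:
    """Create the model on first call, then return the cached instance."""
    global _model
    if _model is None:
        _model = Llama(model_path=model_path, n_gpu_layers=-1, n_ctx=4096)
    return _model
```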
The app's root path can now be specified with an environment variable. (c79d667, seanpedrickcase, committed on Dec 10, 2024)
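A minimal sketch of reading the root path from the environment and passing it to Gradio's launch(), which is what lets the app sit behind a reverse proxy at a sub-path. The GRADIO_ROOT_PATH variable name is an assumption; the commit only says an environment variable is used.

```python
# app.py (sketch): serve the app under a configurable sub-path,
# e.g. https://example.com/topic-extraction.
import os
import gradio as gr

with gr.Blocks() as app:
    gr.Markdown("Topic extraction and summarisation")

app.launch(root_path=os.environ.get("GRADIO_ROOT_PATH"))
```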
Added support for using local models (specifically Gemma 2b) for topic extraction and summarisation, and generally improved output format safeguards. (b7f4700, seanpedrickcase, committed on Dec 10, 2024)
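An illustrative take on what an "output format safeguard" could mean here: check that the model reply contains the expected markdown table and re-prompt if it does not. The table check, the retry prompt, and the call_with_format_check helper are assumptions, not the repo's exact logic.

```python
# Hypothetical safeguard: local models drift from the requested format more
# often than hosted ones, so validate the reply and retry with a reminder.
import re
from typing import Callable

TABLE_PATTERN = re.compile(r"\|.+\|\s*\n\s*\|[\s:|-]+\|")

def call_with_format_check(call_model: Callable[[str], str], prompt: str,
                           max_retries: int = 2) -> str:
    """Call the model, re-prompting if the reply lacks the expected markdown table."""
    response = call_model(prompt)
    for _ in range(max_retries):
        if TABLE_PATTERN.search(response):
            break
        response = call_model(prompt + "\n\nReturn the answer only as a markdown table.")
    return response
```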
Added more guidance to the README. Variables are now wiped when the buttons to create or summarise topics are clicked. (f8f34c2, seanpedrickcase, committed on Dec 4, 2024)
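A small sketch of how wiping variables on click can be wired in Gradio, so a previous run's results cannot leak into a new one. The component names and the reset_state function are assumptions.

```python
# Hypothetical reset wiring: clear stored results before a new extraction run.
import gradio as gr

def reset_state():
    # Empty values for the previous run's topics and summary.
    return [], ""

with gr.Blocks() as app:
    topics_state = gr.State([])
    summary_state = gr.State("")
    create_btn = gr.Button("Extract topics")

    # Wipe old state when the button is clicked; the actual extraction step
    # (omitted here) would be chained after this.
    create_btn.click(reset_state, inputs=None, outputs=[topics_state, summary_state])
```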