QA-Pilot is an interactive chat project that leverages online/local LLMs for rapid understanding and navigation of GitHub code repositories.
2024-07-03 update langchain to 0.2.6 and add moonshot API support
2024-06-30 add Go Codegraph
2024-06-27 add nvidia/tongyi API support
2024-06-19 add llamacpp API support, improve the settings list in the sidebar and add upload model function for llamacpp, add prompt templates setting
2024-06-15 add anthropic API support, refactor some functions, and fix chat show messages
2024-06-12 add zhipuai API support
2024-06-10 Convert flask to fastapi and add localai API support
2024-06-07 Add rr: option and use FlashRank for the search
2024-06-05 Upgrade langchain to v0.2 and add ollama embeddings
2024-05-26 Release v2.0.1: Refactoring to replace the Streamlit frontend with Svelte to improve performance.
Do not use models for analyzing your critical or production data!!
Do not use models for analyzing customer data to ensure data privacy and security!!
Do not use models for analyzing your private/sensitive code repositories!!

To deploy QA-Pilot, follow the steps below:
```shell
git clone https://github.com/reid41/QA-Pilot.git
cd QA-Pilot
conda create -n QA-Pilot python=3.10.14
conda activate QA-Pilot
pip install -r requirements.txt
```
Install [PyTorch](https://pytorch.org/get-started/locally/) with CUDA support
Set up the model providers:
* For [Ollama](https://ollama.com), pull the models and set the `base_url` in `config/config.ini`, e.g.
```shell
ollama pull <model_name>
ollama list
```
* For LocalAI, e.g.
```shell
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
```
* To set up llamacpp, see [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#windows-remarks)
- upload the model to `llamacpp_models` dir or upload from the `llamacpp models` under the `Settings`
- set the model in `llamacpp_llm_models` section in `config/config.ini`
* To set up the API keys in `.env`:
- [OpenAI](https://platform.openai.com/docs/overview): OPENAI_API_KEY='<openai_api_key>'
- [MistralAI](https://docs.mistral.ai/): MISTRAL_API_KEY='<mistralai_api_key>'
- [ZhipuAI](https://open.bigmodel.cn/): ZHIPUAI_API_KEY='<zhipuai_api_key>'
- [Anthropic](https://console.anthropic.com/settings/keys): ANTHROPIC_API_KEY='<anthropic_api_key>'
- [Nvidia](https://build.nvidia.com/explore/discover): NVIDIA_API_KEY='<nvidia_api_key>'
- [TongYi](https://help.aliyun.com/document_detail/611472.html?spm=a2c4g.2399481.0.0): DASHSCOPE_API_KEY='<tongyi_api_key>'
- [Moonshot](https://platform.moonshot.cn/): MOONSHOT_API_KEY='<moonshot_api_key>'
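As a quick sanity check that the keys are picked up, here is a minimal stdlib-only sketch of loading a `.env` file into the environment. The project itself may rely on a library such as python-dotenv instead; `load_env` is a hypothetical helper, not part of QA-Pilot:

```python
import os

def load_env(path=".env"):
    # Hypothetical helper: parse simple KEY='value' lines from a .env file
    # and export them, without overwriting variables already set.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip("'\""))

# usage: load_env(); api_key = os.environ.get("OPENAI_API_KEY")
```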
* For the `Go codegraph`, make sure the [GO](https://go.dev/doc/install) env is set up, then compile the go file and test
```shell
go build -o parser parser.go
# test
./parser /path/test.go
```
Set the model provider, model, variables, and Ollama API url in `config/config.ini`, and set up the PostgreSQL env, e.g.
```shell
cat config/config.ini
[database]
db_name = qa_pilot_chatsession_db
db_user = qa_pilot_user
db_password = qa_pilot_p
db_host = localhost
db_port = 5432
```

Test the connection:

```shell
python check_postgresql_connection.py
```
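The `[database]` values above can be read with the stdlib `configparser`. A sketch of building a libpq-style DSN from that section, assuming the layout shown above (this is not the actual contents of `check_postgresql_connection.py`):

```python
import configparser

def postgres_dsn(path="config/config.ini"):
    # Build a libpq-style DSN string from the [database] section;
    # key names follow the config shown above.
    cfg = configparser.ConfigParser()
    cfg.read(path)
    db = cfg["database"]
    return (
        f"host={db['db_host']} port={db['db_port']} "
        f"dbname={db['db_name']} user={db['db_user']} "
        f"password={db['db_password']}"
    )
```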
7. Download and install [node.js](https://nodejs.org/en/download/package-manager) and set up the frontend env in one terminal
```shell
# make sure the backend server host ip is correct, localhost is by default
cat svelte-app/src/config.js
export const API_BASE_URL = 'http://localhost:5000';
# install deps
cd svelte-app
npm install
npm run dev
```
8. Run the backend in another terminal:
```shell
python qa_pilot_run.py
```
* `New Source Button` to add a new project
* `rsd:` to start the input and get the source document
* `rr:` to start the input and use the FlashrankRerank for the search
* `Open Code Graph` in QA-Pilot to view the code (make sure the repository is already in the project session and loaded before clicking); currently supports Python and Go