MURPH is an advanced, voice-activated AI assistant featuring natural language processing, browser automation, and contextual memory. Designed for seamless voice interactions, MURPH combines state-of-the-art speech recognition, intelligent conversation management, and system-level control capabilities to deliver a comprehensive virtual assistant experience.
MURPH employs a modern, microservices-inspired architecture: a FastAPI backend, a SvelteKit frontend, and local services for speech recognition (Whisper), LLM inference (Ollama), text-to-speech (Piper TTS or gTTS), and persistent memory (ChromaDB).
Ensure the following software is installed on your system:
| Requirement | Version | Purpose |
|---|---|---|
| Python | 3.12+ | Backend runtime |
| Node.js | 18+ | Frontend development |
| FFmpeg | Latest | Audio processing |
| Ollama | Latest | LLM inference |
| Chrome | Latest | Browser automation |
| Git | Latest | Version control |
Windows:
# Download from https://ffmpeg.org/download.html
# Extract and add bin folder to System PATH
# Verify installation
ffmpeg -version
Ubuntu/Debian:
sudo apt update
sudo apt install ffmpeg -y
ffmpeg -version
macOS:
brew install ffmpeg
ffmpeg -version
Install Ollama:
Visit https://ollama.ai and download the appropriate installer for your operating system.
Pull Required Model:
# Download Llama3.1:8b-instruct-q4_K_M
ollama pull llama3.1:8b-instruct-q4_K_M
# Verify installation
ollama list
# Test model (optional)
ollama run llama3.1:8b-instruct-q4_K_M "Hello, test message"
Start Ollama Service:
# Ollama typically runs as a background service
# If not running, start manually:
ollama serve
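For reference, the backend reaches Ollama over its local HTTP API (the same OLLAMA_API_URL and model name shown later in the configuration section). The following is a minimal standalone sketch for checking that the /api/generate endpoint answers; it is not MURPH's internal request code.

```python
# Minimal sketch (not MURPH's internal code): verify Ollama's generate endpoint
# responds, using the same URL and model name as in main.py's configuration.
import asyncio
import aiohttp

OLLAMA_API_URL = "http://localhost:11434/api/generate"
MODEL_NAME = "llama3.1:8b-instruct-q4_K_M"

async def ping_ollama() -> str:
    payload = {"model": MODEL_NAME, "prompt": "Hello, test message", "stream": False}
    async with aiohttp.ClientSession() as session:
        async with session.post(OLLAMA_API_URL, json=payload) as resp:
            resp.raise_for_status()
            data = await resp.json()
            return data["response"]

if __name__ == "__main__":
    print(asyncio.run(ping_ollama()))
```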
ChromaDB is included in the Python dependencies and will be installed automatically. However, for optimal performance, ensure the following:
System Requirements:
Installation Verification:
# After pip install, verify ChromaDB
python -c "import chromadb; print(chromadb.__version__)"
Database Initialization:
ChromaDB will automatically initialize on first run. The database files will be stored in:
./memory_db/
Configuration (Optional):
For production deployments, consider ChromaDB's client-server mode:
# Install ChromaDB server
pip install chromadb[server]
# Run ChromaDB server
chroma run --path ./memory_db
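When the server mode is used, the backend would connect over HTTP instead of opening the database in-process. The sketch below assumes a ChromaDB server started with `chroma run` on a non-default port (e.g. `--port 8001`) so it does not collide with the MURPH backend on port 8000; it is illustrative rather than part of the shipped code.

```python
# Sketch: connect to a ChromaDB server started with `chroma run` instead of
# embedding the database in the backend process.
import chromadb

# `chroma run` listens on port 8000 by default; this sketch assumes it was
# started with a different port so the MURPH backend can keep 8000.
client = chromadb.HttpClient(host="localhost", port=8001)
collection = client.get_or_create_collection("conversation_history")
print(collection.count())
```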
Clone Repository:
git clone https://github.com/Prajwal-Pujari/Murph-.git murph-ai-assistant
cd murph-ai-assistant
Create Virtual Environment:
# Create environment
python -m venv venv
# Activate environment
# Windows:
venv\Scripts\activate
# Linux/macOS:
source venv/bin/activate
Install Python Dependencies:
pip install --upgrade pip
pip install -r requirements.txt
requirements.txt contents:
fastapi==0.104.1
uvicorn[standard]==0.24.0
aiohttp==3.9.1
torch==2.1.0
openai-whisper==20231117
pyttsx3==2.90
chromadb==0.4.18
gtts==2.4.0
piper-tts==1.2.0
selenium==4.15.2
webdriver-manager==4.0.1
pyautogui==0.9.54
pygetwindow==0.0.9
python-multipart==0.0.6
python-dotenv==1.0.0
Note: The versions above are known to work together. If you encounter dependency conflicts, keep these pins; to use the latest releases instead, remove the version specifiers.
Download Piper TTS Model:
# Create models directory
mkdir -p models
cd models
# Download male voice model
# Visit: https://github.com/rhasspy/piper/releases/
# Download both files:
# - en_US-hfc_male-medium.onnx
# - en_US-hfc_male-medium.onnx.json
# Or use wget (Linux/macOS):
wget https://github.com/rhasspy/piper/releases/download/v1.2.0/en_US-hfc_male-medium.onnx
wget https://github.com/rhasspy/piper/releases/download/v1.2.0/en_US-hfc_male-medium.onnx.json
cd ..
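As a quick sanity check, the piper-tts package installs a `piper` command-line tool that can synthesize speech from the downloaded voice. The snippet below is an illustrative sketch (the output file name and test sentence are arbitrary), not MURPH's TTS module.

```python
# Illustrative sketch: synthesize a short test clip with the downloaded Piper
# voice by shelling out to the `piper` CLI installed by the piper-tts package.
import subprocess

MODEL_PATH = "models/en_US-hfc_male-medium.onnx"

subprocess.run(
    ["piper", "--model", MODEL_PATH, "--output_file", "piper_test.wav"],
    input="Hello, this is MURPH speaking.",  # text is read from stdin
    text=True,
    check=True,
)
print("Wrote piper_test.wav")
```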
Initialize ChromaDB:
# ChromaDB will auto-initialize, but you can pre-create the directory
mkdir -p memory_db
# Test ChromaDB setup
python -c "import chromadb; client = chromadb.PersistentClient(path='./memory_db'); print('ChromaDB initialized successfully')"
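To make the role of this database concrete, the sketch below shows how conversation turns could be stored and recalled with the persistent client and the conversation_history collection named in the configuration. It illustrates ChromaDB's add/query API and is not MURPH's actual memory module.

```python
# Sketch of contextual memory with ChromaDB (illustrative, not MURPH's actual module):
# store a conversation turn, then retrieve the most relevant past turns for a new query.
import chromadb

client = chromadb.PersistentClient(path="./memory_db")
collection = client.get_or_create_collection("conversation_history")

# Store one exchange as a document with an ID and metadata.
collection.add(
    ids=["turn-0001"],
    documents=["User asked for the weather in San Francisco; MURPH answered."],
    metadatas=[{"role": "summary", "timestamp": "2025-11-17T10:30:00Z"}],
)

# Recall the most similar past turns to use as context for the next prompt.
results = collection.query(query_texts=["weather in San Francisco"], n_results=3)
print(results["documents"])
```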
Start Backend Server:
uvicorn main:app --reload --host 0.0.0.0 --port 8000
Navigate to Frontend Directory:
cd frontend
Install Node Dependencies:
npm install
Start Development Server:
npm run dev
Build for Production (Optional):
npm run build
npm run preview
Open your web browser and navigate to:
http://localhost:5173
The backend API will be available at:
http://localhost:8000
API documentation can be accessed at:
http://localhost:8000/docs
Edit main.py to customize settings:
# Ollama Configuration
OLLAMA_API_URL = "http://localhost:11434/api/generate"
MODEL_NAME = "llama3.1:8b-instruct-q4_K_M"
OLLAMA_TIMEOUT = 120 # seconds
# ChromaDB Configuration
CHROMA_DB_PATH = "./memory_db"
COLLECTION_NAME = "conversation_history"
# CORS Settings
origins = [
"http://localhost:5173",
"http://localhost:3000",
]
# Voice Configuration
PIPER_MODEL_PATH = "models/en_US-hfc_male-medium.onnx"
USE_PIPER_TTS = True # Set to False to use gTTS
# Personality Settings
DEFAULT_HUMOR_LEVEL = 85 # 0-100
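The origins list above is the kind of value FastAPI's CORSMiddleware consumes. A minimal sketch of that wiring is shown below; the exact middleware options used in main.py may differ.

```python
# Minimal sketch of applying the CORS origins with FastAPI's CORSMiddleware;
# the options in MURPH's main.py may differ.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

origins = [
    "http://localhost:5173",
    "http://localhost:3000",
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```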
Update API endpoint in frontend/src/routes/+page.svelte:
const API_BASE_URL = 'http://localhost:8000';
Create a .env file in the project root:
OLLAMA_API_URL=http://localhost:11434/api/generate
OLLAMA_MODEL=llama3.1:8b-instruct-q4_K_M
OLLAMA_TIMEOUT=120
CHROMA_DB_PATH=./memory_db
PIPER_MODEL_PATH=./models/en_US-hfc_male-medium.onnx
CORS_ORIGINS=http://localhost:5173,http://localhost:3000
DEFAULT_HUMOR_LEVEL=85
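Since python-dotenv is part of the requirements, these values can be read at startup roughly as sketched below. The variable names match the file above, but the fallback defaults are assumptions rather than MURPH's exact startup code.

```python
# Sketch: load .env values with python-dotenv. Variable names match the file above;
# the defaults are illustrative assumptions.
import os
from dotenv import load_dotenv

load_dotenv()

OLLAMA_API_URL = os.getenv("OLLAMA_API_URL", "http://localhost:11434/api/generate")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3.1:8b-instruct-q4_K_M")
OLLAMA_TIMEOUT = int(os.getenv("OLLAMA_TIMEOUT", "120"))
CHROMA_DB_PATH = os.getenv("CHROMA_DB_PATH", "./memory_db")
PIPER_MODEL_PATH = os.getenv("PIPER_MODEL_PATH", "./models/en_US-hfc_male-medium.onnx")
CORS_ORIGINS = os.getenv("CORS_ORIGINS", "http://localhost:5173").split(",")
DEFAULT_HUMOR_LEVEL = int(os.getenv("DEFAULT_HUMOR_LEVEL", "85"))
```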
Primary Input Method: Press and hold SPACEBAR to record audio, release to process.
"Hey MURPH, what's the current time?"
"What's the weather like in San Francisco?"
"Tell me about yourself"
"Open Google and search for artificial intelligence"
"Navigate to Wikipedia and look up quantum computing"
"Play 'Stairway to Heaven' on YouTube"
"Pause the video"
"Increase volume to 80%"
"Go to the next video"
"Close this tab"
"Scroll down the page"
"Open Visual Studio Code"
"Switch to Chrome"
"List files in the current directory"
"Read the contents of readme.txt"
"Write 'Hello World' to test.txt"
"Set humor level to 100" # Maximum personality
"Set humor level to 50" # Balanced mode
"Set humor level to 0" # Professional mode
| Shortcut | Action |
|---|---|
| SPACE (hold) | Record voice input |
| Ctrl + Shift + H | View conversation history |
| Esc | Cancel recording |
/voice-chat
Process voice input and return AI response.
Request: multipart/form-data
audio: Audio file (WAV, MP3, etc.)
Response: application/json
{
"text": "Transcribed text",
"response": "AI response text",
"audio_url": "/audio/response.mp3",
"timestamp": "2025-11-17T10:30:00Z"
}
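Behind this endpoint, the uploaded audio is transcribed before being sent to the LLM; openai-whisper is the speech-recognition dependency in requirements.txt. A minimal transcription sketch is shown below; the model size ("base") and input file name are assumptions for illustration.

```python
# Sketch of the transcription step behind /voice-chat using openai-whisper.
# The model size and file name are illustrative assumptions.
import whisper

model = whisper.load_model("base")
result = model.transcribe("recording.wav")
print(result["text"])
```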
/history
Retrieve conversation history.
Response: Array of conversation entries
/set-humor
Adjust personality humor level.
Request: application/json
{
"level": 85
}
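For a quick test of this endpoint, a client call can be made with aiohttp (already in the requirements). The example below assumes the backend is running on its default port 8000.

```python
# Example client call for /set-humor using aiohttp (assumes the backend on port 8000).
import asyncio
import aiohttp

async def set_humor(level: int) -> dict:
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:8000/set-humor", json={"level": level}
        ) as resp:
            resp.raise_for_status()
            return await resp.json()

if __name__ == "__main__":
    print(asyncio.run(set_humor(85)))
```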
Symptom: Female voice instead of male voice
Solution:
- Verify the Piper voice model files are in the models/ directory
- Confirm that both the .onnx and .onnx.json files are present

Symptom: "Ollama is taking too long to respond" or timeout errors
Solution:
- Confirm the model is installed: ollama list
- Test the model directly: ollama run llama3.1:8b-instruct-q4_K_M
- Increase the timeout in main.py: OLLAMA_TIMEOUT = 180
Symptom: Database initialization errors or persistence failures
Solution:
- Check permissions on the memory_db/ directory: chmod -R 755 memory_db
- If the database is corrupted, remove it and let it re-initialize: rm -rf memory_db
- Re-create the client: python -c "import chromadb; chromadb.PersistentClient(path='./memory_db')"
- Upgrade ChromaDB: pip install --upgrade chromadb
Symptom: Selenium commands not executing
Solution:
- Make sure Google Chrome is installed and up to date
- Update webdriver-manager so it matches your installed Chrome version: pip install --upgrade webdriver-manager
Symptom: "Failed to load audio" or microphone access denied
Solution:
- Verify FFmpeg is installed and on your PATH: ffmpeg -version
- Allow microphone access for the site in your browser settings
We welcome contributions from the community! Please follow these guidelines:
- Create a feature branch: git checkout -b feature/YourFeatureName
- Commit your changes with a clear message: git commit -m "Add: Brief description of changes"
- Push the branch and open a pull request: git push origin feature/YourFeatureName
This project is licensed under the MIT License. See the LICENSE file for complete details.
MURPH is built upon several outstanding open-source projects, including OpenAI Whisper, Ollama, ChromaDB, Piper TTS, FastAPI, Selenium, and Svelte.
Developer: @Gravity_Exists
Project Repository: https://github.com/Prajwal-Pujari/Murph-
Issues & Support: GitHub Issues
Version: 1.0.0
Last Updated: November 2025
Status: Active Development