auto-claude: subtask-3-8 - Document development and testing

This commit is contained in:
Maciej Pienczyn 2026-01-10 09:46:11 +01:00
parent d74889a1c6
commit cbff2ffafe

For more detailed database setup and management, see `database/README.md`.
## Testing & Quality Assurance
The Norda Biznes Hub includes a comprehensive testing framework to ensure code quality, AI accuracy, and feature reliability. The test suite combines unit tests for individual components with a specialized AI quality evaluation framework.
### Test Suite Overview
The `tests/` directory contains the following test modules:
```
tests/
├── ai_quality_evaluator.py # AI testing framework
├── ai_quality_test_cases.json # 15 test cases in 8 categories
├── test_admin_seo_dashboard.py # SEO dashboard functionality tests
├── test_gbp_audit_field_checks.py # Google Business Profile audit tests
├── test_it_audit_collaboration.py # IT infrastructure audit tests
├── test_seo_audit.py # SEO audit service tests
├── test_social_media_audit.py # Social media audit tests
└── results/ # Test results and reports (generated)
```
### Running Tests
#### AI Quality Tests
The AI Quality Testing Framework evaluates the accuracy and relevance of AI chat responses using predefined test cases.
**Basic Usage:**
```bash
# Run all AI quality tests (standard output)
python run_ai_quality_tests.py

# Run with verbose output showing details for each test
python run_ai_quality_tests.py --verbose
# or
python run_ai_quality_tests.py -v

# Run with verbose output AND save results to JSON file
python run_ai_quality_tests.py -v -s

# Run only high-priority test cases (quick validation)
python run_ai_quality_tests.py -q
```
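As a sketch, the flags above could be wired with `argparse` roughly as follows. The long-form names `--save` and `--quick` are illustrative assumptions (only `--verbose` is documented above); the real `run_ai_quality_tests.py` may differ.

```python
# Hypothetical flag parsing for run_ai_quality_tests.py; --save and
# --quick long names are assumptions, not the script's actual API.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Run AI quality tests")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="show details for each test case")
    parser.add_argument("-s", "--save", action="store_true",
                        help="save a JSON report to tests/results/")
    parser.add_argument("-q", "--quick", action="store_true",
                        help="run only high-priority test cases")
    return parser

# Parse an explicit argv list, mirroring `python run_ai_quality_tests.py -v -s`
args = build_parser().parse_args(["-v", "-s"])
print(args.verbose, args.save, args.quick)  # True True False
```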
**Test Results:**
Test results are displayed in the console and can optionally be saved to the `tests/results/` directory with timestamped filenames:
- `ai_quality_report_20260110_143000.json` - Detailed test results with scores and metrics
**AI Test Categories:**
The AI quality tests cover 8 business categories with 15 test cases:
1. **IT/Web Development** - Testing recommendations for IT services, web development
2. **Services/Legal** - Legal services, law firms
3. **Services/Accounting** - Accounting, bookkeeping services
4. **Production/Metal** - Metal production, manufacturing
5. **Construction** - Construction companies, contractors
6. **HVAC** - Heating, ventilation, air conditioning
7. **Energy/Renewable** - Renewable energy, solar panels
8. **IT/Security** - IT security, cybersecurity services
**Quality Thresholds:**
- **Pass Threshold:** 70% - Tests must achieve at least 70% accuracy to pass
- **Scoring:** Each test case is scored 0.0-1.0 based on:
- Company relevance (did it find the right companies?)
- Response accuracy (is the information correct?)
- Context understanding (did it understand the query intent?)
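As a sketch, the three component scores can be combined into a single 0.0-1.0 score and compared against the 70% pass threshold. The weights below are illustrative assumptions, not the evaluator's actual formula.

```python
# Illustrative composite scoring; the 0.4/0.3/0.3 weights are assumptions,
# only the 0.70 pass threshold comes from the documentation above.
PASS_THRESHOLD = 0.70

def composite_score(company_relevance: float,
                    response_accuracy: float,
                    context_understanding: float,
                    weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Weighted average of the three 0.0-1.0 component scores."""
    components = (company_relevance, response_accuracy, context_understanding)
    if not all(0.0 <= c <= 1.0 for c in components):
        raise ValueError("component scores must be in [0.0, 1.0]")
    return sum(w * c for w, c in zip(weights, components))

def passes(score: float, threshold: float = PASS_THRESHOLD) -> bool:
    """A test case passes when its score meets the threshold."""
    return score >= threshold
```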
#### Unit Tests
Run unit tests for specific components using pytest:
```bash
# Run all unit tests
python -m pytest tests/

# Run specific test file
python -m pytest tests/test_admin_seo_dashboard.py

# Run with verbose output
python -m pytest tests/ -v

# Run with coverage report
python -m pytest tests/ --cov=.
```
#### Manual Testing in Production
**⚠️ IMPORTANT:** Always use test accounts for production verification.
**Test Accounts:**
| Account | Email | Purpose |
|---------|-------|---------|
| Test User | `test@nordabiznes.pl` | Test user-facing features (directory, search, chat) |
| Test Admin | `testadmin@nordabiznes.pl` | Test admin features (moderation, audits, management) |
**Testing Workflow:**
1. **User Features:**
- Company directory browsing
- Search functionality
- AI chat conversations
- User registration and authentication
- Notifications
2. **Admin Features:**
- News moderation (`/admin/news`)
- SEO dashboard (`/admin/seo`)
- Social media audit (`/admin/social-media`)
- GBP audit (`/admin/gbp-audit`)
- IT audit (`/admin/it-audit`)
- User management (`/admin/users`)
### Quality Assurance Process
#### Pre-Deployment QA Checklist
Before deploying to production, ensure all quality checks pass:
1. **Code Quality:**
```bash
# Check Python syntax
python -m py_compile app.py

# Check all Python files
find . -name "*.py" -exec python -m py_compile {} \;
```
2. **AI Quality:**
```bash
# Run AI quality tests with 70% pass threshold
python run_ai_quality_tests.py -v
# Expected: Pass rate ≥ 70%
```
3. **Unit Tests:**
```bash
# Run all unit tests
python -m pytest tests/ -v
# Expected: All tests pass
```
4. **Database Integrity:**
```bash
# Verify database schema (development)
psql -U nordabiz_app -d nordabiz -h localhost -p 5433 -c "\dt"

# Check for data quality issues
python verify_all_companies_data.py
```
5. **API Endpoints:**
```bash
# Test health endpoint
curl http://localhost:5000/health
# Expected: {"status": "healthy"}

# Test companies API
curl http://localhost:5000/api/companies | jq '.total'
# Expected: 80
```
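The two curl checks above can also be scripted. A minimal sketch, assuming only the response shapes shown above (`{"status": "healthy"}` and a top-level `total` field):

```python
# Smoke checks for the /health and /api/companies endpoints; the
# expected payload shapes are taken from the checklist above.
import json
from urllib.request import urlopen

def check_health(payload: dict) -> bool:
    """/health is expected to report {"status": "healthy"}."""
    return payload.get("status") == "healthy"

def check_companies(payload: dict, expected_total: int = 80) -> bool:
    """/api/companies is expected to report the company count in 'total'."""
    return payload.get("total") == expected_total

# Against a running dev server:
#   health = json.load(urlopen("http://localhost:5000/health"))
#   companies = json.load(urlopen("http://localhost:5000/api/companies"))
#   print(check_health(health), check_companies(companies))
```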
#### Continuous Quality Monitoring
**Development:**
- Run AI quality tests after chat engine modifications
- Run unit tests after feature changes
- Verify test account functionality weekly
**Production:**
- Monitor health endpoint: `https://nordabiznes.pl/health`
- Review application logs: `sudo journalctl -u nordabiznes -f`
- Track AI chat analytics: `/admin/chat-analytics`
- Monitor rate limits and API usage
### Test Coverage
Current test coverage by feature area:
| Feature Area | Test Coverage | Test Files |
|--------------|---------------|------------|
| **AI Chat Engine** | ✅ Comprehensive | `ai_quality_evaluator.py`, `ai_quality_test_cases.json` |
| **SEO Audit** | ✅ Comprehensive | `test_admin_seo_dashboard.py`, `test_seo_audit.py` |
| **GBP Audit** | ✅ Comprehensive | `test_gbp_audit_field_checks.py` |
| **IT Audit** | ✅ Comprehensive | `test_it_audit_collaboration.py` |
| **Social Media Audit** | ✅ Comprehensive | `test_social_media_audit.py` |
| **Search Service** | ⚠️ Manual Testing | SearchService integration tests needed |
| **Authentication** | ⚠️ Manual Testing | User registration/login flow tests needed |
| **API Endpoints** | ⚠️ Manual Testing | API integration tests needed |
### Adding New Tests
#### AI Quality Test Cases
To add a new AI quality test case, edit `tests/ai_quality_test_cases.json`:
```json
{
"test_id": "new_test_01",
"category": "Services/Consulting",
"priority": "high",
"query": "Szukam firmy doradczej w Wejherowie",
"expected_companies": [
"Firma A",
"Firma B"
],
"notes": "Testing consulting service recommendations"
}
```
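Before committing a new case, it can help to sanity-check its fields. A small validator sketch based only on the fields in the example above; the required-field set and priority values are assumptions, not the framework's enforced schema:

```python
# Minimal validator for entries in tests/ai_quality_test_cases.json.
# Field set and priority values are assumptions based on the example above.
REQUIRED_FIELDS = {"test_id", "category", "priority", "query", "expected_companies"}
VALID_PRIORITIES = {"high", "medium", "low"}

def validate_test_case(case: dict) -> list:
    """Return a list of problems; an empty list means the case looks valid."""
    problems = []
    missing = REQUIRED_FIELDS - case.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if case.get("priority") not in VALID_PRIORITIES:
        problems.append(f"unknown priority: {case.get('priority')!r}")
    if not case.get("expected_companies"):
        problems.append("expected_companies must be a non-empty list")
    return problems
```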
#### Unit Tests
Create a new test file in `tests/` following the pattern:
```python
"""
Test module for [feature name]
"""
import pytest
from database import SessionLocal, init_db
# Import the modules under test here, e.g.:
# from services.feature import feature_function

def test_feature_functionality():
    """Test basic feature functionality"""
    # Arrange
    db = SessionLocal()
    try:
        # Act
        result = feature_function()  # replace with the function under test
        # Assert
        assert result is not None
        assert result.status == "success"
    finally:
        # Always release the session, even if an assertion fails
        db.close()
```
### Troubleshooting Tests
**AI Tests Failing:**
```bash
# Check Gemini API key is configured
grep GOOGLE_GEMINI_API_KEY .env

# Verify database has companies data
python -c "from database import SessionLocal, Company; db = SessionLocal(); print(db.query(Company).count())"
# Expected: 80
```
**Database Connection Errors:**
```bash
# Ensure PostgreSQL is running
docker ps | grep nordabiz-postgres

# Test database connection
psql -U nordabiz_app -d nordabiz -h localhost -p 5433 -c "SELECT 1;"
```
**Import Errors:**
```bash
# Ensure all dependencies are installed
pip install -r requirements.txt

# Verify virtual environment is activated
which python
# Expected: /path/to/nordabiz/venv/bin/python
```
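An equivalent check can be done from inside Python itself: when a virtual environment is active, `sys.prefix` differs from `sys.base_prefix`.

```python
# Confirm the active interpreter and whether a venv is in use;
# inside a virtual environment, sys.prefix != sys.base_prefix.
import sys

def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix

print(sys.executable)   # should point into the project's venv/bin/
print(in_virtualenv())
```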
For detailed test implementation and examples, see the `tests/` directory and individual test modules.
## Deployment & Production
The application has been live in production at **https://nordabiznes.pl** since November 23, 2025.