Testing Your Integration

Learn how to thoroughly test your CallCov integration before going to production.

Overview

CallCov provides a complete test environment to validate your integration:

  • Test API keys - Separate keys for testing (prefix: sk_test_)
  • Test mode - Simulated processing without charges
  • Sample audio files - Pre-configured test data
  • Deterministic results - Predictable responses for automated testing

Test vs Production Keys

Getting Test Keys

Test keys are available in your dashboard under "API Keys" → "Test Mode":

# Test key (always starts with sk_test_)
export CALLCOV_API_KEY_TEST="sk_test_abc123..."

# Production key (starts with sk_live_)
export CALLCOV_API_KEY_PROD="sk_live_xyz789..."

Key Differences

Feature         | Test Mode (sk_test_) | Production (sk_live_)
Billing         | Free, no charges     | Metered billing
Processing      | Simulated (instant)  | Real AI analysis
Rate Limits     | 10 req/sec           | 100 req/sec
Webhooks        | Delivered normally   | Delivered normally
Data Retention  | 7 days               | Configurable
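
Because billing differs between the two modes, it is worth guarding your test suite against accidentally running with a live key. A minimal sketch, keyed off the sk_test_ prefix shown above:

import os

# Fail fast if a live key leaks into the test environment.
api_key = os.getenv('CALLCOV_API_KEY_TEST', '')
if not api_key.startswith('sk_test_'):
    raise RuntimeError('Refusing to run tests without a sk_test_ key')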

Unit Testing

Testing API Client

Test your API client wrapper with mocked responses:

import unittest
from unittest.mock import patch, Mock

from your_app.callcov_client import CallCovClient


class TestCallCovClient(unittest.TestCase):
    def setUp(self):
        self.client = CallCovClient(api_key="sk_test_mock")

    @patch('requests.post')
    def test_submit_analysis_success(self, mock_post):
        # Mock successful response
        mock_response = Mock()
        mock_response.status_code = 201
        mock_response.json.return_value = {
            'id': 'analysis_123',
            'status': 'processing',
            'created': 1234567890
        }
        mock_post.return_value = mock_response

        # Test submission
        result = self.client.submit_analysis('https://example.com/call.wav')

        # Assertions
        self.assertEqual(result['id'], 'analysis_123')
        self.assertEqual(result['status'], 'processing')
        mock_post.assert_called_once()

    @patch('requests.post')
    def test_submit_analysis_with_invalid_url(self, mock_post):
        # Mock error response
        mock_response = Mock()
        mock_response.status_code = 400
        mock_response.json.return_value = {
            'error': {
                'type': 'invalid_request',
                'message': 'Invalid audio URL'
            }
        }
        mock_post.return_value = mock_response

        # Test error handling
        with self.assertRaises(ValueError):
            self.client.submit_analysis('invalid-url')

    @patch('requests.get')
    def test_get_analysis_results(self, mock_get):
        # Mock completed analysis
        mock_response = Mock()
        mock_response.status_code = 200
        mock_response.json.return_value = {
            'id': 'analysis_123',
            'status': 'completed',
            'results': {
                'compliance': {'overall_score': 0.95},
                'quality': {'overall_score': 0.88}
            }
        }
        mock_get.return_value = mock_response

        # Test retrieval
        result = self.client.get_analysis('analysis_123')

        # Assertions
        self.assertEqual(result['status'], 'completed')
        self.assertEqual(result['results']['compliance']['overall_score'], 0.95)


if __name__ == '__main__':
    unittest.main()

Integration Testing

Test with Real API (Test Mode)

Integration tests use actual API calls with test keys:

import os
import time
import unittest

from your_app.callcov_client import CallCovClient


class TestCallCovIntegration(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Use test API key
        cls.client = CallCovClient(
            api_key=os.getenv('CALLCOV_API_KEY_TEST')
        )

    def test_full_analysis_workflow(self):
        """Test complete workflow: submit -> poll -> retrieve results"""
        # Step 1: Submit analysis with test audio
        submission = self.client.submit_analysis(
            audio_url='https://callcov-test-data.s3.amazonaws.com/sample-call.wav',
            agent_id='test_agent_001'
        )
        self.assertIn('id', submission)
        self.assertEqual(submission['status'], 'processing')
        analysis_id = submission['id']

        # Step 2: Wait for completion (test mode is instant)
        time.sleep(2)  # Brief delay for async processing

        # Step 3: Retrieve results
        results = self.client.get_analysis(analysis_id)
        self.assertEqual(results['status'], 'completed')
        self.assertIn('results', results)
        self.assertIn('compliance', results['results'])
        self.assertIn('quality', results['results'])

    def test_webhook_delivery(self):
        """Test webhook is triggered on completion"""
        webhook_url = 'https://webhook.site/your-unique-id'
        submission = self.client.submit_analysis(
            audio_url='https://callcov-test-data.s3.amazonaws.com/sample-call.wav',
            webhook_url=webhook_url
        )

        # In test mode, the webhook fires immediately
        time.sleep(3)

        # Verify the webhook was received (check webhook.site);
        # in real tests, you'd verify via your webhook endpoint logs

    def test_error_handling(self):
        """Test API returns appropriate errors"""
        # Test invalid audio URL
        with self.assertRaises(Exception) as context:
            self.client.submit_analysis(audio_url='not-a-url')
        self.assertIn('invalid', str(context.exception).lower())

    def test_pagination(self):
        """Test paginated results"""
        # Submit multiple analyses
        analysis_ids = []
        for i in range(5):
            result = self.client.submit_analysis(
                audio_url='https://callcov-test-data.s3.amazonaws.com/sample-call.wav',
                agent_id=f'test_agent_{i:03d}'
            )
            analysis_ids.append(result['id'])

        # Retrieve with pagination
        page1 = self.client.list_analyses(limit=3)
        self.assertEqual(len(page1['data']), 3)
        self.assertTrue(page1['has_more'])

        page2 = self.client.list_analyses(
            limit=3,
            starting_after=page1['data'][-1]['id']
        )
        self.assertEqual(len(page2['data']), 2)
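
test_webhook_delivery above leaves verification to webhook.site. If you would rather capture deliveries yourself, here is a minimal sketch of a local receiver you could point webhook_url at (assuming Flask is installed and the endpoint is reachable from CallCov, e.g. through a tunnel):

from flask import Flask, request

app = Flask(__name__)
received = []  # payloads captured here for the test to inspect

@app.route('/webhooks/callcov', methods=['POST'])
def callcov_webhook():
    # Store the payload and acknowledge so no retry is triggered
    received.append(request.get_json())
    return '', 200

if __name__ == '__main__':
    app.run(port=8080)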

Test Data and Fixtures

Sample Audio Files

CallCov provides test audio files for consistent testing:

# Sample files available in test mode
https://callcov-test-data.s3.amazonaws.com/sample-call.wav
https://callcov-test-data.s3.amazonaws.com/sample-compliance-violation.wav
https://callcov-test-data.s3.amazonaws.com/sample-poor-quality.wav
https://callcov-test-data.s3.amazonaws.com/sample-excellent-call.wav
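
A simple smoke test might submit each sample and confirm it completes. A sketch, assuming client is the CallCovClient from the earlier examples:

import time

SAMPLE_URLS = [
    'https://callcov-test-data.s3.amazonaws.com/sample-call.wav',
    'https://callcov-test-data.s3.amazonaws.com/sample-compliance-violation.wav',
    'https://callcov-test-data.s3.amazonaws.com/sample-poor-quality.wav',
    'https://callcov-test-data.s3.amazonaws.com/sample-excellent-call.wav',
]

for url in SAMPLE_URLS:
    submission = client.submit_analysis(url)
    time.sleep(2)  # test mode completes almost instantly
    analysis = client.get_analysis(submission['id'])
    assert analysis['status'] == 'completed', f'{url} did not complete'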

Predictable Test Results

Test mode returns deterministic results for specific test files:

import time

# `client` is a CallCovClient configured with a test key, as in earlier examples

# Test fixtures with known results
TEST_FIXTURES = {
    'excellent_call': {
        'url': 'https://callcov-test-data.s3.amazonaws.com/sample-excellent-call.wav',
        'expected_results': {
            'status': 'completed',
            'results': {
                'compliance': {'overall_score': 1.0},
                'quality': {'overall_score': 0.95}
            }
        }
    },
    'compliance_violation': {
        'url': 'https://callcov-test-data.s3.amazonaws.com/sample-compliance-violation.wav',
        'expected_results': {
            'status': 'completed',
            'results': {
                'compliance': {
                    'overall_score': 0.6,
                    'violations': [
                        {'type': 'missing_disclosure', 'severity': 'high'}
                    ]
                }
            }
        }
    }
}


def test_with_fixture(fixture_name):
    """Test using known fixture data"""
    fixture = TEST_FIXTURES[fixture_name]

    # Submit analysis
    result = client.submit_analysis(fixture['url'])
    time.sleep(2)

    # Retrieve and verify
    analysis = client.get_analysis(result['id'])
    assert analysis['status'] == fixture['expected_results']['status']
    assert analysis['results']['compliance']['overall_score'] == \
        fixture['expected_results']['results']['compliance']['overall_score']
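
Calling the helper with a fixture name then exercises both known outcomes:

test_with_fixture('excellent_call')
test_with_fixture('compliance_violation')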

End-to-End Testing

Complete User Workflow

Test the entire user journey:

import time

def test_complete_user_workflow():
    """E2E: Manager reviews agent's call"""

    # 1. Agent makes call (simulated by test audio)
    audio_url = 'https://callcov-test-data.s3.amazonaws.com/sample-call.wav'

    # 2. System submits for analysis
    submission = client.submit_analysis(
        audio_url=audio_url,
        agent_id='agent_001',
        contact_id='customer_456',
        metadata={'campaign': 'spring_2024'}
    )

    # 3. System receives webhook when complete
    analysis_id = submission['id']
    time.sleep(2)  # Wait for completion

    # 4. Manager views results
    results = client.get_analysis(analysis_id)

    # 5. System flags for review if quality < 80%
    quality_score = results['results']['quality']['overall_score']

    if quality_score < 0.8:
        flagged = flag_for_manager_review(
            analysis_id=analysis_id,
            agent_id=results['call']['agent_id'],
            reason='quality_below_threshold'
        )
        assert flagged is True

    # 6. System sends coaching email if needed
    coaching_issues = results['results']['coaching']['critical_issues']

    if len(coaching_issues) > 0:
        email_sent = send_coaching_email(
            agent_id=results['call']['agent_id'],
            issues=coaching_issues
        )
        assert email_sent is True
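
flag_for_manager_review and send_coaching_email are application-side helpers rather than CallCov API calls. Stubs like the following (hypothetical names and behavior) make the test runnable end to end:

def flag_for_manager_review(analysis_id, agent_id, reason):
    """Queue the analysis for manual review; returns True on success."""
    # A real implementation would write to your review queue or database.
    print(f'Flagged {analysis_id} for {agent_id}: {reason}')
    return True

def send_coaching_email(agent_id, issues):
    """Email the agent a coaching summary; returns True on success."""
    # A real implementation would call your mail service.
    print(f'Coaching email to {agent_id}: {len(issues)} issue(s)')
    return True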

Testing Best Practices

1. Use Test Mode for Development

Always use test keys during development:

import os

# Environment-based configuration
if os.getenv('ENVIRONMENT') == 'production':
    API_KEY = os.getenv('CALLCOV_API_KEY_PROD')
else:
    API_KEY = os.getenv('CALLCOV_API_KEY_TEST')

2. Mock External Dependencies

Mock CallCov API for fast unit tests:

from unittest.mock import patch

from your_app.callcov_client import submit_analysis

@patch('your_app.callcov_client.requests.post')
def test_fast_unit_test(mock_post):
    """Unit test doesn't hit real API"""
    mock_post.return_value.json.return_value = {'id': 'mock_id'}

    result = submit_analysis('http://example.com/call.wav')
    assert result['id'] == 'mock_id'

3. Test Error Scenarios

Don't only test happy paths:

import pytest

from your_app.callcov_client import (
    CallCovClient, AuthenticationError, RateLimitError
)

def test_error_scenarios():
    """Test various failure modes"""

    # Network timeout
    with pytest.raises(TimeoutError):
        client.submit_analysis('https://example.com/call.wav', timeout=0.001)

    # Invalid API key
    bad_client = CallCovClient(api_key='sk_test_invalid')
    with pytest.raises(AuthenticationError):
        bad_client.submit_analysis('https://example.com/call.wav')

    # Rate limit exceeded
    # (in test mode, simulate with rapid requests)
    for i in range(20):
        try:
            client.submit_analysis('https://example.com/call.wav')
        except RateLimitError:
            # Expected after ~10 requests in test mode
            break
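
When a RateLimitError does occur in normal operation, retrying with exponential backoff is the usual remedy. A sketch, assuming RateLimitError comes from your client wrapper as above:

import time

def submit_with_backoff(client, audio_url, max_attempts=5):
    """Retry a submission with exponential backoff on rate limiting."""
    for attempt in range(max_attempts):
        try:
            return client.submit_analysis(audio_url)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...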

4. Clean Up Test Data

Remove test analyses after testing:

def tearDown(self):
    """Clean up test analyses"""
    # Delete any analyses created during testing
    for analysis_id in self.created_analysis_ids:
        try:
            self.client.delete_analysis(analysis_id)
        except Exception:
            pass  # Already deleted or doesn't exist
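
For this to work, self.created_analysis_ids has to be populated as analyses are created. One approach is to route all test submissions through a small helper:

def setUp(self):
    self.created_analysis_ids = []

def submit_tracked(self, audio_url, **kwargs):
    """Submit an analysis and record its ID for tearDown cleanup."""
    result = self.client.submit_analysis(audio_url, **kwargs)
    self.created_analysis_ids.append(result['id'])
    return result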

5. Use CI/CD for Automated Testing

Run tests automatically on every commit:

# .github/workflows/test.yml
name: Test CallCov Integration

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run tests
        env:
          CALLCOV_API_KEY_TEST: ${{ secrets.CALLCOV_API_KEY_TEST }}
        run: pytest tests/

Common Test Scenarios

Test Compliance Detection

def test_compliance_violation_detection():
    """Verify compliance violations are detected"""

    result = client.submit_analysis(
        'https://callcov-test-data.s3.amazonaws.com/sample-compliance-violation.wav'
    )
    time.sleep(2)

    analysis = client.get_analysis(result['id'])
    violations = analysis['results']['compliance']['violations']

    assert len(violations) > 0
    assert any(v['type'] == 'missing_disclosure' for v in violations)

Test Quality Scoring

def test_quality_scoring_accuracy():
    """Verify quality scores are within expected range"""

    excellent_result = client.submit_analysis(
        'https://callcov-test-data.s3.amazonaws.com/sample-excellent-call.wav'
    )
    time.sleep(2)

    analysis = client.get_analysis(excellent_result['id'])
    quality_score = analysis['results']['quality']['overall_score']

    assert quality_score >= 0.9, "Excellent call should score >= 90%"

Test Webhook Reliability

def test_webhook_retry_logic():
    """Test webhook retry behavior on failure"""

    # Set up a webhook endpoint that fails the first attempt
    webhook_url = 'https://your-test-webhook.com/endpoint'

    submission = client.submit_analysis(
        audio_url='https://callcov-test-data.s3.amazonaws.com/sample-call.wav',
        webhook_url=webhook_url
    )

    # Verify the webhook was retried
    # (check your webhook endpoint logs for multiple attempts)
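To provoke a retry locally, you could serve an endpoint that rejects the first delivery and accepts the next one. A Flask sketch (the retry schedule itself is on CallCov's side, so confirm the second attempt in your endpoint logs):

from flask import Flask

app = Flask(__name__)
attempts = 0

@app.route('/endpoint', methods=['POST'])
def flaky_webhook():
    global attempts
    attempts += 1
    if attempts == 1:
        return 'temporary failure', 500  # non-2xx should trigger a retry
    return '', 200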

Transitioning to Production

Pre-Launch Checklist

Before switching to production keys:

  • All tests passing with test keys
  • Error handling tested for all scenarios
  • Webhook endpoint tested and verified
  • Rate limiting implemented
  • Logging and monitoring configured
  • Cost estimation completed
  • Security review completed
  • Documentation updated
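
Some of these checks can be scripted. A minimal sketch that validates the key and environment configuration used throughout this guide before go-live:

import os

def verify_production_config():
    """Basic pre-launch sanity checks before switching to live keys."""
    prod_key = os.getenv('CALLCOV_API_KEY_PROD', '')
    assert prod_key.startswith('sk_live_'), 'Missing or malformed production key'
    assert os.getenv('ENVIRONMENT') == 'production', 'ENVIRONMENT is not production'
    print('Production configuration checks passed')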

Switch to Production

# config.py
import os

class Config:
    def __init__(self):
        self.environment = os.getenv('ENVIRONMENT', 'development')

        if self.environment == 'production':
            self.api_key = os.getenv('CALLCOV_API_KEY_PROD')
            self.webhook_url = 'https://api.yourapp.com/webhooks/callcov'
        else:
            self.api_key = os.getenv('CALLCOV_API_KEY_TEST')
            self.webhook_url = 'https://webhook.site/test-id'

    def is_production(self):
        return self.environment == 'production'
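
Wiring the config into the client then reads:

config = Config()
client = CallCovClient(api_key=config.api_key)

if config.is_production():
    print('Running against production')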

Next Steps​