Understanding Analysis Results
Learn how to retrieve, parse, and act on CallCov's AI-powered call analysis results.
Overview
CallCov analyzes each call and provides structured, qualitative insights (not numerical scores) across three key areas:
- Compliance - Regulatory checks with pass/fail flags (identity verification, disclosures, prohibited phrases, sensitive data)
- Quality - Performance metrics with flags and counts (greeting, sentiment, empathy, interruptions, call structure)
- Coaching - Actionable recommendations with priorities and customer effort score (0-5 scale)
Note: CallCov does not provide overall numerical scores like "Quality: 85/100". Instead, it returns detailed, structured data with boolean flags, counts, and timestamps that you can use to build your own scoring system.
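For example, here is a minimal sketch of one possible in-house score built only from the documented flags (the field names match the tables later in this guide; the weights are arbitrary illustrations, not part of the API):

def custom_quality_score(results):
    """Illustrative only: derive a 0-100 score from CallCov's boolean flags.
    The deductions below are arbitrary assumptions, not part of the API."""
    quality = results['results']['quality']
    compliance = results['results']['compliance']

    score = 100
    if compliance['identity_verification']['flagged']:
        score -= 20
    if not compliance['mandatory_disclosures']['read']:
        score -= 20
    if quality['greeting']['flagged_missing']:
        score -= 10
    if quality['sentiment_analysis']['flagged_drop']:
        score -= 15
    if not quality['resolution_marker']['confirmed']:
        score -= 15
    score -= 2 * quality['interruptions']['count']

    return max(0, score)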
Retrieving Results
Get Analysis by ID
Once an analysis is complete, retrieve the full results using the analysis ID:
import requests
API_KEY = "your_api_key_here"
API_URL = "https://api.callcov.com/api/v1"

def get_analysis_results(analysis_id):
    """Retrieve complete analysis results"""
    headers = {"X-API-Key": API_KEY}

    response = requests.get(
        f"{API_URL}/analysis/{analysis_id}",
        headers=headers
    )

    if response.status_code == 200:
        return response.json()
    else:
        raise Exception(f"Error: {response.status_code} - {response.text}")

# Usage
results = get_analysis_results("550e8400-e29b-41d4-a716-446655440000")
print(f"Analysis Status: {results['status']}")
print(f"Call Duration: {results['audio']['duration_seconds']}s")

Understanding the Results Structure
A complete analysis result includes:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"object": "analysis",
"created": 1642248000,
"status": "completed",
"livemode": false,
"call": {
"agent_id": "agent_001",
"contact_id": "customer_12345",
"duration_ms": 125500,
"duration_seconds": 125.5
},
"audio": {
"url": "https://s3.amazonaws.com/callcov/...",
"size_bytes": 1048576,
"format": "wav",
"duration_seconds": 125.5
},
"transcript": {
"text": "Full conversation transcript...",
"segments": [
{
"speaker": "A",
"text": "Hello, thank you for calling.",
"start": 0.0,
"end": 2.5
}
],
"language": "en"
},
"results": {
"compliance": {
"identity_verification": {"verified": true, "timestamp": 0.0, "flagged": false},
"mandatory_disclosures": {"read": true, "matched_disclosures": [...], "missing_disclosures": []},
"purpose_declaration": {"disclosed": true, "interaction_number": 1, "flagged": false},
"prohibited_phrases": {"used": false, "instances": []},
"sensitive_data": {"shared": false, "instances": []}
},
"quality": {
"sentiment_analysis": {...},
"greeting": {"present": true, "timestamp": 0.0, "flagged_missing": false, "flagged_late": false},
"call_structure": {...},
"resolution_marker": {"confirmed": true, "timestamp": 20.0},
"empathetic_language": {...},
"objections": {...},
"persuasion_markers": {...},
"interruptions": {"count": 0, "instances": [], "flagged": false},
"language_adequacy": {...}
},
"coaching": {
"recommendations": [{"category": "greeting", "priority": "low", "description": "..."}],
"customer_effort_score": 1
}
},
"metadata": {
"webhook_url": "https://your-app.com/webhooks/analysis",
"completed_at": "2024-01-15T14:32:15.123Z",
"processing_time_ms": 45230,
"error_message": null
}
}
The examples in this guide use illustrative code patterns. For the exact, current API response structure, see:
- Retrieve Analysis API Reference - Complete response example
- Example Response JSON - Full JSON structure
The actual API returns structured flags and counts, not numerical scores like overall_score or compliance.score.
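Because every section is made of nested objects with boolean flags, you can walk the structure generically. Below is a small sketch that collects every flagged check; the flag key names it looks for are the ones listed in the field tables later in this guide, so adjust the list if your account returns additional fields:

def list_flagged_checks(results):
    """Collect the dotted names of every check whose flag field is true."""
    flag_keys = ('flagged', 'flagged_missing', 'flagged_late',
                 'flagged_drop', 'flagged_unhandled', 'used', 'shared')
    flagged = []
    for section_name, section in results['results'].items():
        if not isinstance(section, dict):
            continue
        for check_name, check in section.items():
            if isinstance(check, dict) and any(check.get(k) is True for k in flag_keys):
                flagged.append(f"{section_name}.{check_name}")
    return flagged

# Usage
print(list_flagged_checks(results))  # e.g. ['compliance.prohibited_phrases', 'quality.greeting']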
Working with Compliance Results
Compliance results contain pass/fail flags for regulatory requirements:
def check_compliance_violations(results):
    """Identify and categorize compliance issues from the structured flags"""
    compliance = results['results']['compliance']

    violations = []

    # Identity verification missing or late
    if compliance['identity_verification']['flagged']:
        violations.append({
            'type': 'identity_verification',
            'message': 'Agent identity not verified (or verified late)',
            'timestamp': compliance['identity_verification']['timestamp']
        })

    # Missing mandatory disclosures
    if not compliance['mandatory_disclosures']['read']:
        violations.append({
            'type': 'missing_disclosure',
            'message': 'Required disclosures were not read',
            'missing': compliance['mandatory_disclosures']['missing_disclosures']
        })

    # Prohibited language
    for instance in compliance['prohibited_phrases']['instances']:
        violations.append({
            'type': 'prohibited_phrase',
            'message': 'Prohibited language was used',
            'instance': instance
        })

    # Sensitive data exposure
    if compliance['sensitive_data']['shared']:
        violations.append({
            'type': 'sensitive_data',
            'message': 'Sensitive data was shared on the call',
            'instances': compliance['sensitive_data']['instances']
        })

    return violations

# Usage
violations = check_compliance_violations(results)
if violations:
    print(f"⚠️ Found {len(violations)} compliance issues:")
    for v in violations:
        print(f"  [{v['type'].upper()}] {v['message']}")

Compliance Result Fields
| Field | Type | Description |
|---|---|---|
identity_verification.verified | boolean | Agent stated their identity |
identity_verification.timestamp | float | When identity was stated (seconds) |
identity_verification.flagged | boolean | True if not verified or late |
mandatory_disclosures.read | boolean | All required disclosures were read |
mandatory_disclosures.matched_disclosures | array | Disclosures that were found |
mandatory_disclosures.missing_disclosures | array | Disclosures that were missing |
purpose_declaration.disclosed | boolean | Call purpose was disclosed |
purpose_declaration.interaction_number | int | Which interaction disclosed purpose |
purpose_declaration.flagged | boolean | True if not disclosed or too late |
prohibited_phrases.used | boolean | Whether prohibited language was used |
prohibited_phrases.instances | array | List of violations with timestamps |
sensitive_data.shared | boolean | Whether sensitive data was shared |
sensitive_data.instances | array | List of sensitive data instances |
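The exact shape of each entry in prohibited_phrases.instances is not shown in the table above; the reference describes them as violations with timestamps, so the following sketch assumes each instance carries a timestamp key and should be adjusted to match the current API reference:

def prohibited_phrase_timestamps(results):
    """Return timestamps of prohibited-phrase instances for targeted call review.
    Assumes each instance includes a 'timestamp' key (see the API reference)."""
    phrases = results['results']['compliance']['prohibited_phrases']
    if not phrases['used']:
        return []
    return [instance.get('timestamp') for instance in phrases['instances']]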
Working with Quality Results
Quality metrics measure customer experience and agent performance:
def generate_quality_scorecard(results):
    """Create an agent scorecard from the structured quality flags"""
    quality = results['results']['quality']

    scorecard = {
        'agent_id': results['call']['agent_id'],
        'call_id': results['id'],
        'metrics': {}
    }

    # Customer sentiment (trajectory points are on a 1-5 scale)
    sentiment = quality['sentiment_analysis']
    scorecard['metrics']['sentiment'] = {
        'first_30s_avg': sentiment['first_30s_avg'],
        'last_30s_avg': sentiment['last_30s_avg'],
        'flagged_drop': sentiment['flagged_drop']
    }

    # Agent behavior flags
    greeting = quality['greeting']
    scorecard['metrics']['greeting_ok'] = not greeting['flagged_missing'] and not greeting['flagged_late']
    scorecard['metrics']['empathy_ok'] = not quality['empathetic_language']['flagged']
    scorecard['metrics']['interruption_count'] = quality['interruptions']['count']

    # Call handling
    scorecard['metrics']['resolution_confirmed'] = quality['resolution_marker']['confirmed']
    scorecard['metrics']['structure_followed'] = quality['call_structure']['follows_structure']

    return scorecard

# Usage
scorecard = generate_quality_scorecard(results)
print(f"Agent: {scorecard['agent_id']}")
print(f"Resolution confirmed: {scorecard['metrics']['resolution_confirmed']}")
print(f"Sentiment (last 30s avg): {scorecard['metrics']['sentiment']['last_30s_avg']}/5")

Quality Result Fields
| Field | Type | Description |
|---|---|---|
sentiment_analysis.trajectory | array | Sentiment points (1-5) throughout call |
sentiment_analysis.first_30s_avg | int | Average sentiment in first 30 seconds |
sentiment_analysis.last_30s_avg | int | Average sentiment in last 30 seconds |
sentiment_analysis.drop_magnitude | int | Magnitude of sentiment drop |
sentiment_analysis.flagged_drop | boolean | True if drop > threshold |
sentiment_analysis.negative_periods | array | Periods of negative sentiment |
greeting.present | boolean | Greeting was present |
greeting.timestamp | float | When greeting occurred |
greeting.flagged_missing | boolean | True if no greeting found |
greeting.flagged_late | boolean | True if greeting was late |
call_structure.follows_structure | boolean | Call followed expected structure |
call_structure.stages_completed | array | List of completed stages |
call_structure.missing_stages | array | List of missing stages |
resolution_marker.confirmed | boolean | Resolution was confirmed |
resolution_marker.timestamp | float | When resolution was confirmed |
empathetic_language.used | boolean | Empathy markers were present |
empathetic_language.instances | array | Timestamps of empathy instances |
empathetic_language.flagged | boolean | True if intervals lack empathy |
objections.identified | array | Customer objections detected |
objections.flagged_unhandled | boolean | True if objections not handled |
interruptions.count | int | Number of interruptions |
interruptions.flagged | boolean | True if count > 5 |
language_adequacy.appropriate | boolean | Language was appropriate |
language_adequacy.flagged | boolean | True if issues found |
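As a quick illustration using only the scalar sentiment fields above, a sketch like the following summarizes whether sentiment improved or declined over the call:

def summarize_sentiment(results):
    """Summarize sentiment movement from the documented sentiment_analysis fields."""
    sentiment = results['results']['quality']['sentiment_analysis']
    direction = 'improved' if sentiment['last_30s_avg'] >= sentiment['first_30s_avg'] else 'declined'
    return {
        'opening_avg': sentiment['first_30s_avg'],   # 1-5 scale
        'closing_avg': sentiment['last_30s_avg'],    # 1-5 scale
        'direction': direction,
        'sharp_drop': sentiment['flagged_drop'],
        'drop_magnitude': sentiment['drop_magnitude']
    }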
Working with Coaching Insights
Coaching insights provide actionable feedback for agent improvement:
def extract_coaching_opportunities(results):
    """Group coaching recommendations by priority"""
    coaching = results['results']['coaching']

    opportunities = {'high': [], 'medium': [], 'low': []}

    for rec in coaching['recommendations']:
        opportunities[rec['priority']].append({
            'area': rec['category'],
            'description': rec['description']
        })

    return opportunities, coaching['customer_effort_score']

# Usage
opportunities, effort_score = extract_coaching_opportunities(results)

print("\n🎯 Coaching Summary:")
print(f"  Customer Effort Score: {effort_score}/5")
for priority in ('high', 'medium', 'low'):
    print(f"  {priority.title()} priority: {len(opportunities[priority])}")

if opportunities['high']:
    print("\n⚠️ High-Priority Recommendations:")
    for rec in opportunities['high']:
        print(f"  • [{rec['area']}] {rec['description']}")

Coaching Result Fields
| Field | Type | Description |
|---|---|---|
recommendations | array | List of coaching recommendations |
recommendations[].category | string | Category (e.g., "greeting", "empathy") |
recommendations[].priority | string | Priority level: "high", "medium", "low" |
recommendations[].description | string | Specific recommendation text |
customer_effort_score | int | Customer effort score (0-5, where 5 = highest effort) |
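Because customer_effort_score is a simple 0-5 integer, it is easy to trend across many calls. A minimal sketch, assuming you already have a list of analysis IDs from your own system and the get_analysis_results helper defined earlier:

def average_effort_score(analysis_ids):
    """Average customer effort across a set of analyses (lower is better)."""
    scores = [
        get_analysis_results(aid)['results']['coaching']['customer_effort_score']
        for aid in analysis_ids
    ]
    return sum(scores) / len(scores) if scores else None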
Best Practices
1. Cache Results Appropriately
Avoid unnecessary API calls by caching completed analyses:
from datetime import datetime, timedelta

# In-memory cache with TTL
results_cache = {}

def get_cached_results(analysis_id, ttl_hours=24):
    """Get results with caching"""
    if analysis_id in results_cache:
        cached_data, cached_time = results_cache[analysis_id]
        if datetime.now() - cached_time < timedelta(hours=ttl_hours):
            return cached_data

    # Fetch fresh data
    results = get_analysis_results(analysis_id)

    # Cache only completed analyses
    if results['status'] == 'completed':
        results_cache[analysis_id] = (results, datetime.now())

    return results
2. Handle Partial Results
Process results even if some sections are unavailable:
def safe_extract_metrics(results):
    """Safely extract key metrics with fallbacks"""
    metrics = {}

    try:
        metrics['effort_score'] = results['results']['coaching']['customer_effort_score']
    except (KeyError, TypeError):
        metrics['effort_score'] = None

    try:
        metrics['resolution_confirmed'] = results['results']['quality']['resolution_marker']['confirmed']
    except (KeyError, TypeError):
        metrics['resolution_confirmed'] = None

    try:
        metrics['disclosures_read'] = results['results']['compliance']['mandatory_disclosures']['read']
    except (KeyError, TypeError):
        metrics['disclosures_read'] = None

    return metrics
3. Aggregate Results for Reporting
Combine multiple analyses for team or trend reports:
def aggregate_team_performance(analysis_ids):
    """Generate team-level metrics from the structured flags"""
    all_results = [get_analysis_results(aid) for aid in analysis_ids]

    team_metrics = {
        'total_calls': len(all_results),
        'avg_effort_score': 0,
        'avg_interruptions': 0,
        'compliance_flag_count': 0,
        'resolution_rate': 0
    }

    for results in all_results:
        quality = results['results']['quality']
        compliance = results['results']['compliance']
        coaching = results['results']['coaching']

        team_metrics['avg_effort_score'] += coaching['customer_effort_score']
        team_metrics['avg_interruptions'] += quality['interruptions']['count']

        # Count flagged compliance checks on this call
        team_metrics['compliance_flag_count'] += sum([
            compliance['identity_verification']['flagged'],
            not compliance['mandatory_disclosures']['read'],
            compliance['purpose_declaration']['flagged'],
            compliance['prohibited_phrases']['used'],
            compliance['sensitive_data']['shared']
        ])

        if quality['resolution_marker']['confirmed']:
            team_metrics['resolution_rate'] += 1

    # Calculate averages
    team_metrics['avg_effort_score'] /= len(all_results)
    team_metrics['avg_interruptions'] /= len(all_results)
    team_metrics['resolution_rate'] /= len(all_results)

    return team_metrics
4. Filter Results by Criteria
Extract specific insights based on business rules:
def find_calls_needing_review(analysis_ids):
    """Identify calls requiring manager review"""
    flagged_calls = []

    for analysis_id in analysis_ids:
        results = get_analysis_results(analysis_id)

        compliance = results['results']['compliance']
        quality = results['results']['quality']
        coaching = results['results']['coaching']

        # Flag criteria based on the structured results
        compliance_issue = (
            compliance['identity_verification']['flagged'] or
            not compliance['mandatory_disclosures']['read'] or
            compliance['prohibited_phrases']['used'] or
            compliance['sensitive_data']['shared']
        )
        high_priority_coaching = [
            r for r in coaching['recommendations'] if r['priority'] == 'high'
        ]

        should_review = (
            compliance_issue or                                # Any compliance issue
            quality['sentiment_analysis']['flagged_drop'] or   # Sharp sentiment drop
            coaching['customer_effort_score'] >= 4 or          # High customer effort
            len(high_priority_coaching) > 0                    # Critical coaching needed
        )

        if should_review:
            flagged_calls.append({
                'analysis_id': analysis_id,
                'agent_id': results['call']['agent_id'],
                'compliance_issue': compliance_issue,
                'effort_score': coaching['customer_effort_score'],
                'high_priority_recommendations': len(high_priority_coaching)
            })

    return flagged_calls
Common Use Cases
Generate Agent Scorecard
Combine all metrics into a comprehensive scorecard:
def create_agent_scorecard(agent_id, date_range):
    """Generate comprehensive agent scorecard"""
    # Get all analyses for agent in date range
    analyses = get_analyses_for_agent(agent_id, date_range)

    scorecard = {
        'agent_id': agent_id,
        'period': date_range,
        'total_calls': len(analyses),
        'metrics': {
            'greeting_rate': 0,
            'resolution_rate': 0,
            'avg_effort_score': 0,
            'avg_interruptions': 0
        },
        'improvement_areas': []
    }

    # Aggregate metrics
    for analysis in analyses:
        results = get_analysis_results(analysis['id'])
        quality = results['results']['quality']
        coaching = results['results']['coaching']

        scorecard['metrics']['greeting_rate'] += quality['greeting']['present']
        scorecard['metrics']['resolution_rate'] += quality['resolution_marker']['confirmed']
        scorecard['metrics']['avg_effort_score'] += coaching['customer_effort_score']
        scorecard['metrics']['avg_interruptions'] += quality['interruptions']['count']

        # Collect recurring coaching themes
        for rec in coaching['recommendations']:
            scorecard['improvement_areas'].append(rec['category'])

    # Calculate averages
    for key in scorecard['metrics']:
        scorecard['metrics'][key] /= len(analyses)

    return scorecard
Compliance Audit Report
Identify all compliance violations across calls:
def generate_compliance_audit(date_range):
    """Generate compliance audit report"""
    analyses = get_all_analyses(date_range)

    audit = {
        'total_calls': len(analyses),
        'compliant_calls': 0,
        'issues_by_type': {},
        'flagged_calls': []
    }

    for analysis_id in analyses:
        results = get_analysis_results(analysis_id)
        compliance = results['results']['compliance']

        # Map each documented check to a pass/fail outcome
        issues = []
        if compliance['identity_verification']['flagged']:
            issues.append('identity_verification')
        if compliance['mandatory_disclosures']['missing_disclosures']:
            issues.append('missing_disclosure')
        if compliance['purpose_declaration']['flagged']:
            issues.append('purpose_declaration')
        if compliance['prohibited_phrases']['used']:
            issues.append('prohibited_phrase')
        if compliance['sensitive_data']['shared']:
            issues.append('sensitive_data')

        if not issues:
            audit['compliant_calls'] += 1
        else:
            # Categorize issues
            for issue in issues:
                audit['issues_by_type'][issue] = \
                    audit['issues_by_type'].get(issue, 0) + 1

            audit['flagged_calls'].append({
                'analysis_id': analysis_id,
                'agent_id': results['call']['agent_id'],
                'issues': issues
            })

    return audit
Next Steps
- Authentication Guide - Set up API authentication
- Submitting Analysis - Submit calls for analysis
- Error Handling - Handle errors gracefully
- Webhooks - Get notified when analysis completes