Automated video analysis toolkit for human interaction research - Extract comprehensive behavioral annotations from videos using AI pipelines, with an intuitive web interface for visualization and analysis.
VideoAnnotator automatically analyzes videos of human interactions and extracts rich behavioral data including:
- Person tracking - Multi-person detection and pose estimation with persistent IDs
- Facial analysis - Emotions, expressions, gaze direction, and action units
- Scene detection - Environment classification and temporal segmentation
- Audio analysis - Speech recognition, speaker identification, and emotion detection
Perfect for researchers studying parent-child interactions, social behavior, developmental psychology, and human-computer interaction.
VideoAnnotator provides both automated processing and interactive visualization:
AI-powered video processing pipeline
- Processes videos to extract behavioral annotations
- REST API for integration with research workflows
- Supports batch processing and custom configurations
- Outputs standardized JSON data
Video Annotation Viewer (paired repository)
Interactive web-based visualization tool
- Load and visualize VideoAnnotator results
- Synchronized video playback with annotation overlays
- Timeline scrubbing with pose, face, and audio data
- Export tools for further analysis
Complete workflow: Your Videos → [VideoAnnotator Processing] → Annotation Data → [Video Annotation Viewer] → Interactive Analysis
```bash
# Install modern Python package manager
curl -LsSf https://astral.sh/uv/install.sh | sh   # Linux/macOS
# powershell -c "irm https://astral.sh/uv/install.ps1 | iex"   # Windows

# Clone and install
git clone https://github.com/InfantLab/VideoAnnotator.git
cd VideoAnnotator
uv sync  # Fast dependency installation (~30 seconds)

# Initialize the local database (creates tables + admin user/token)
uv run videoannotator setup-db --admin-email you@example.com --admin-username you

# Start the API server
uv run videoannotator server --host 0.0.0.0 --port 18011
# Use the API key printed by `setup-db` (or the server's first-start output)

# Process your first video (in another terminal)
curl -X POST "http://localhost:18011/api/v1/jobs/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "video=@your_video.mp4" \
  -F "selected_pipelines=person,face,scene,audio"
# Check results at http://localhost:18011/docs

# Install the companion web viewer
git clone https://github.com/InfantLab/video-annotation-viewer.git
cd video-annotation-viewer
npm install
npm run dev
```

Note: Ensure Node and npm are installed. On macOS with Homebrew: `brew install node`.
Open http://localhost:3000 and load your VideoAnnotator results.
That's it! You now have both automated video processing and interactive visualization.
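The same job submission can be scripted from Python instead of curl. The sketch below mirrors the quickstart call: the endpoint and form-field names come from the curl example above, while the request-building helper and its parameter names are illustrative, not part of VideoAnnotator's API.

```python
from pathlib import Path

API_URL = "http://localhost:18011/api/v1/jobs/"  # endpoint from the quickstart above

def build_job_request(video_path, api_key, pipelines=("person", "face", "scene", "audio")):
    """Assemble the parts of the multipart job-submission request.

    Field names mirror the curl example in the quickstart; adjust them if
    your server version expects different parameters.
    """
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "files": {"video": Path(video_path).name},
        "data": {"selected_pipelines": ",".join(pipelines)},
    }

# To submit for real (requires `requests` and a running server):
#   import requests
#   req = build_job_request("your_video.mp4", "YOUR_API_KEY")
#   with open("your_video.mp4", "rb") as f:
#       print(requests.post(req["url"], headers=req["headers"],
#                           files={"video": f}, data=req["data"]).json())
```

Keeping the request assembly separate from the network call makes it easy to log or unit-test what your workflow sends before pointing it at a live server.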
Authoritative pipeline metadata (names, tasks, modalities, capabilities) is generated from the registry:
- Pipeline specification table: docs/pipelines_spec.md (auto-generated; do not edit by hand)
- Pipeline API endpoint: http://localhost:18011/api/v1/pipelines
- Emotion output format spec: docs/development/emotion_output_format.md
Additional Specs:
- Output Naming Conventions: docs/development/output_naming_conventions.md (stable patterns for downstream tooling)
- Emotion Validator Utility: src/videoannotator/validation/emotion_validator.py (programmatic validation of `.emotion.json` files)
- CLI Validation: `uv run videoannotator validate-emotion path/to/file.emotion.json` (returns a non-zero exit code on failure)

Client tools (e.g. the Video Annotation Viewer) should rely on those sources or the `/api/v1/pipelines` endpoint rather than hard-coding pipeline assumptions.
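To illustrate the kind of structural checks a validator performs, here is a minimal sketch. The field names (`segments`, `start_time`, `end_time`) are assumptions for illustration only; the authoritative schema is docs/development/emotion_output_format.md, and real validation should use emotion_validator.py or the CLI above.

```python
def rough_emotion_check(payload):
    """Illustrative sanity check for a parsed .emotion.json payload.

    NOTE: the keys checked here are assumed for this sketch, not the
    documented schema; see emotion_output_format.md for the real one.
    """
    errors = []
    segments = payload.get("segments")
    if not isinstance(segments, list):
        errors.append("missing or non-list 'segments'")
        return errors
    for i, seg in enumerate(segments):
        if not all(k in seg for k in ("start_time", "end_time")):
            errors.append(f"segment {i}: missing timestamps")
        elif seg["end_time"] < seg["start_time"]:
            errors.append(f"segment {i}: end_time before start_time")
    return errors
```

A check like this can run in a notebook before a long analysis; for anything downstream-facing, prefer the shipped validator so you track the schema as it evolves.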
| Pipeline | Technology | Outputs | Stability |
|---|---|---|---|
| Person Tracking & Pose | YOLO11 + ByteTrack | COCO bounding boxes, 17-point pose keypoints, persistent person IDs | beta |
| Face Analysis | DeepFace (TensorFlow/OpenCV) | Emotion labels, age/gender, action units | stable |
| LAION CLIP Face Embedding | LAION CLIP-derived model | 512-D semantic embeddings, zero-shot attribute & emotion tagging | experimental |
| OpenFace3 Face Embedding | OpenFace 3.0 (ONNX/PyTorch) | 512-D face embeddings for recognition or clustering | experimental |
| Scene Detection | PySceneDetect + CLIP | Scene boundaries, environment classification, temporal segmentation | beta |
| Speech Recognition | OpenAI Whisper | WebVTT transcripts with word-level timestamps | stable |
| Speaker Diarization | pyannote.audio | RTTM speaker turns with timestamps | stable |
| Audio Processing | Whisper + pyannote (combined) | WebVTT transcripts + RTTM speaker turns | beta |
| LAION Empathic Voice | LAION Empathic Insight + Whisper embeddings | Emotion segments, empathic scores, emotion timeline | stable |
| Voice Emotion Baseline | Spectral CNN over Whisper embeddings | (planned, not yet implemented) | experimental |
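Because pipeline metadata is registry-driven, client code should discover pipelines at runtime rather than hard-coding the table above. A hedged sketch follows; the `name` and `stability` keys are assumptions for illustration, so inspect the live `/api/v1/pipelines` response for the actual schema.

```python
def stable_pipelines(listing):
    """Filter a parsed pipeline listing down to entries marked 'stable'.

    The 'name' and 'stability' keys are assumed for this sketch; check
    the real /api/v1/pipelines response for the actual field names.
    """
    return sorted(p["name"] for p in listing if p.get("stability") == "stable")

# Live usage (requires a running server):
#   import json, urllib.request
#   with urllib.request.urlopen("http://localhost:18011/api/v1/pipelines") as r:
#       print(stable_pipelines(json.load(r)))
```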
- No coding required - Web interface and REST API
- Standardized outputs - JSON formats compatible with analysis tools
- Reproducible results - Version-controlled processing pipelines
- Batch processing - Handle multiple videos efficiently
- State-of-the-art models - YOLO11, OpenFace 3.0, Whisper
- Validated pipelines - Tested on developmental psychology datasets
- Comprehensive metrics - Confidence scores, validation tools
- Flexible configuration - Adjust parameters for your research needs
- Fast processing - GPU acceleration, optimized pipelines
- Scalable architecture - Docker containers, API-first design
- Cross-platform - Windows, macOS, Linux support
- Enterprise features - Authentication, logging, monitoring
- 100% Local Processing - All analysis runs on your hardware, no cloud dependencies
- No Data Transmission - Videos and results never leave your infrastructure
- GDPR Compliant - Full control over sensitive research data
- Foundation Model Free - No external API calls to commercial AI services
- Research Ethics Ready - Designed for studies requiring strict data confidentiality
VideoAnnotator generates rich, structured data like this:
```json
{
  "person_tracking": [
    {
      "timestamp": 12.34,
      "person_id": 1,
      "bbox": [0.2, 0.3, 0.4, 0.5],
      "pose_keypoints": [...],
      "confidence": 0.87
    }
  ],
  "face_analysis": [
    {
      "timestamp": 12.34,
      "person_id": 1,
      "emotion": "happy",
      "confidence": 0.91,
      "facial_landmarks": [...],
      "gaze_direction": [0.1, -0.2]
    }
  ],
  "scene_detection": [
    {
      "start_time": 0.0,
      "end_time": 45.6,
      "scene_type": "living_room",
      "confidence": 0.95
    }
  ],
  "audio_analysis": [
    {
      "start_time": 1.2,
      "end_time": 3.8,
      "speaker": "adult",
      "transcript": "Look at this toy!",
      "emotion": "excited"
    }
  ]
}
```

VideoAnnotator produces machine-readable outputs (primarily JSON files and API responses) intended to be easy to consume from common data tools.
- Python: Load JSON into pandas / numpy for analysis (see examples/)
- R / MATLAB: Not currently supported with official helper packages, but the JSON outputs can be consumed using standard JSON readers
- Visualization: Use the companion Video Annotation Viewer for interactive playback + overlays
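As a starting point, the structure shown above flattens directly into tabular rows. This sketch uses only the standard library and an inline sample matching the example output; the same row list feeds straight into `pandas.DataFrame(rows)` for analysis.

```python
# Inline sample matching the structure shown above; in practice load a
# results file, e.g. results = json.loads(Path("out.json").read_text())
results = {
    "person_tracking": [
        {"timestamp": 12.34, "person_id": 1, "bbox": [0.2, 0.3, 0.4, 0.5], "confidence": 0.87},
        {"timestamp": 12.67, "person_id": 2, "bbox": [0.5, 0.3, 0.2, 0.4], "confidence": 0.92},
    ]
}

# Flatten tracking records into rows; these feed directly into
# pandas.DataFrame(rows), numpy, or R via jsonlite.
rows = [
    {"t": r["timestamp"], "person": r["person_id"], "conf": r["confidence"]}
    for r in results["person_tracking"]
]
mean_conf = sum(r["conf"] for r in rows) / len(rows)
print(f"{len(rows)} detections, mean confidence {mean_conf:.3f}")
# → 2 detections, mean confidence 0.895
```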
The quickstart above covers the recommended local install via uv. For more detail, see the installation guide.
```bash
# CPU version (lightweight)
docker build -f Dockerfile.cpu -t videoannotator:cpu .
docker run -p 18011:18011 videoannotator:cpu

# GPU version (faster processing)
docker build -f Dockerfile.gpu -t videoannotator:gpu .
docker run -p 18011:18011 --gpus all videoannotator:gpu

# Development version (pre-cached models)
docker build -f Dockerfile.dev -t videoannotator:dev .
docker run -p 18011:18011 --gpus all videoannotator:dev
```

| Resource | Description |
|---|---|
| Interactive Docs | Complete documentation with examples |
| Live API Testing | Interactive API when server is running |
| Getting Started Guide | Step-by-step setup and first video |
| Installation Guide | Detailed installation instructions |
| Pipeline Specifications | Auto-generated pipeline spec table |
| Demo Commands | Example commands and workflows |
- Parent-child interaction studies with synchronized behavioral coding
- Social development research with multi-person tracking
- Language acquisition studies with audio-visual alignment
- Autism spectrum behavioral analysis with facial expression tracking
- Therapy session analysis with emotion and engagement metrics
- Developmental assessment with standardized behavioral measures
- User experience research with attention and emotion tracking
- Interface evaluation with gaze direction and facial feedback
- Accessibility studies with comprehensive behavioral data
- FastAPI - High-performance REST API with automatic documentation
- YOLO11 - State-of-the-art object detection and pose estimation
- DeepFace / OpenFace 3.0 / LAION CLIP - Facial analysis, embeddings, and emotion recognition
- Whisper - Robust speech recognition and transcription
- pyannote.audio - Speaker diarization and segmentation
- LAION Empathic Insight - Voice emotion analysis from Whisper embeddings
- PySceneDetect + CLIP - Scene boundary detection and environment classification
- PyTorch - GPU-accelerated machine learning inference
- Processing speed: ~2-4x real-time with GPU acceleration
- Memory usage: 4-8GB RAM for typical videos
- Storage: ~100MB output per hour of video
- Accuracy: 90%+ for person detection, 85%+ for emotion recognition
- Batch processing: Handle multiple videos simultaneously
- Container deployment: Docker support for cloud platforms
- Distributed processing: API-first design for microservices
- Resource optimization: CPU and GPU variants available
- Report issues: GitHub Issues
- Discussions: GitHub Discussions
- Contact: Caspar Addyman at infantologist@gmail.com
- Collaborations: Open to research partnerships
- Code quality: Automated linting, typing checks, and tests (see the CI badge above)
- Documentation: Comprehensive guides and API documentation
- CI/CD: Automated testing and deployment pipelines
- Standards: Following research software engineering best practices
If you use VideoAnnotator in your research, please cite:
Addyman, C. (2025). VideoAnnotator: Automated video analysis toolkit for human interaction research.
Zenodo. https://doi.org/10.5281/zenodo.16961751
MIT License - Full terms in LICENSE
- The Global Parenting Initiative (Funded by The LEGO Foundation)
- Caspar Addyman - Stellenbosch University, South Africa (Lead Developer & Corresponding Author)
- Jeremiah Ishaya - Stellenbosch University, South Africa
- Irene Uwerikowe - Stellenbosch University, South Africa
- Daniel Stamate - Department of Computing, Goldsmiths, University of London, UK
- Jamie Lachman - Department of Social Policy and Intervention, University of Oxford, UK
- Mark Tomlinson - Stellenbosch University, South Africa
Built with and grateful to:
- YOLO & Ultralytics - Object detection, tracking, and pose estimation
- DeepFace - Face detection and emotion recognition
- OpenFace 3.0 - Facial behavior analysis and embeddings
- LAION - CLIP face embeddings and empathic voice emotion models
- OpenAI Whisper - Speech recognition
- pyannote.audio - Speaker diarization
- PySceneDetect - Scene boundary detection
- FastAPI - Modern web framework
- PyTorch - Machine learning infrastructure
Development was greatly helped by:
- Visual Studio Code - Primary development environment
- GitHub Copilot - AI pair programming assistance
- Claude Code - Architecture design and documentation
- GPT-4 & Claude Models - Code generation and debugging help
This project demonstrates how AI-assisted development can accelerate research software creation while maintaining code quality and comprehensive testing.
Ready to start analyzing videos? Follow the 60-second setup above!