# HeroDB Benchmarks
This directory contains comprehensive performance benchmarks for HeroDB's storage backends (redb and sled).
## Quick Start

```bash
# Run all benchmarks
cargo bench

# Run a specific suite
cargo bench --bench single_ops

# Quick run (fewer samples)
cargo bench -- --quick
```
## Benchmark Suites
### 1. Single Operations (`single_ops.rs`)

Measures individual operation latency:

- String operations: SET, GET, DEL, EXISTS
- Hash operations: HSET, HGET, HGETALL, HDEL, HEXISTS
- List operations: LPUSH, RPUSH, LPOP, RPOP, LRANGE
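As a rough illustration of what such a latency benchmark looks like with Criterion, here is a minimal sketch written directly against the `sled` crate (so it stays self-contained); it is not the actual code in `single_ops.rs`, and the `tempfile` crate is assumed for scratch directories:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_set(c: &mut Criterion) {
    // Fresh database in a scratch directory so runs don't interfere.
    let dir = tempfile::tempdir().unwrap();
    let db = sled::open(dir.path()).unwrap();
    let value = vec![0u8; 100]; // 100-byte payload, as in the example output below

    c.bench_function("single_ops/strings/set/sled/100bytes", |b| {
        let mut i: u64 = 0;
        b.iter(|| {
            // Each iteration writes a distinct key, analogous to SET.
            db.insert(i.to_be_bytes(), value.clone()).unwrap();
            i += 1;
        });
    });
}

criterion_group!(benches, bench_set);
criterion_main!(benches);
```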
### 2. Bulk Operations (`bulk_ops.rs`)

Tests throughput with varying batch sizes:

- Bulk insert (100, 1K, 10K records)
- Bulk read (sequential and random)
- Bulk update and delete
- Mixed workload (70% reads, 30% writes)
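A batch-size sweep is typically expressed with Criterion's `Throughput` reporting, so results come out in records per second. A sketch, again against `sled` for self-containment (the real `bulk_ops.rs` may differ):

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};

fn bench_bulk_insert(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_insert/sled");
    for &n in &[100usize, 1_000, 10_000] {
        // Report results as elements/second rather than raw time.
        group.throughput(Throughput::Elements(n as u64));
        group.bench_with_input(BenchmarkId::from_parameter(n), &n, |b, &n| {
            let dir = tempfile::tempdir().unwrap();
            let db = sled::open(dir.path()).unwrap();
            b.iter(|| {
                for i in 0..n as u64 {
                    db.insert(i.to_be_bytes(), vec![0u8; 100]).unwrap();
                }
            });
        });
    }
    group.finish();
}

criterion_group!(benches, bench_bulk_insert);
criterion_main!(benches);
```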
### 3. Scan Operations (`scan_ops.rs`)

Evaluates iteration and filtering:

- SCAN with pattern matching
- HSCAN for hash fields
- KEYS operation
- DBSIZE, HKEYS, HVALS
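Iteration benchmarks follow the same shape: pre-populate, then time a full walk. A sketch using sled's `scan_prefix` as a stand-in for SCAN with a pattern (not the repo's actual implementation):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_scan(c: &mut Criterion) {
    let dir = tempfile::tempdir().unwrap();
    let db = sled::open(dir.path()).unwrap();
    // Pre-populate 10K keys under a common prefix.
    for i in 0..10_000u64 {
        db.insert(format!("user:{i:06}").as_bytes(), b"v".to_vec()).unwrap();
    }

    c.bench_function("scan_ops/prefix/sled/10k", |b| {
        // Walk every key under the prefix, roughly SCAN MATCH user:*
        b.iter(|| db.scan_prefix(b"user:").count());
    });
}

criterion_group!(benches, bench_scan);
criterion_main!(benches);
```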
### 4. Concurrent Operations (`concurrent_ops.rs`)

Simulates multi-client scenarios:

- Concurrent writes (10, 50 clients)
- Concurrent reads (10, 50 clients)
- Mixed concurrent workload
- Concurrent hash and list operations
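One common way to simulate N clients is to spawn one thread per client against a shared handle; sled handles are `Send + Sync`, so an `Arc` is enough. A sketch of that pattern (the real harness may use a thread pool instead):

```rust
use std::sync::Arc;
use std::thread;

use criterion::{criterion_group, criterion_main, Criterion};

fn bench_concurrent_writes(c: &mut Criterion) {
    let dir = tempfile::tempdir().unwrap();
    let db = Arc::new(sled::open(dir.path()).unwrap());

    c.bench_function("concurrent_ops/writes/sled/10_clients", |b| {
        b.iter(|| {
            // 10 clients, each writing 100 keys in parallel.
            let handles: Vec<_> = (0..10)
                .map(|client| {
                    let db = Arc::clone(&db);
                    thread::spawn(move || {
                        for i in 0..100u32 {
                            let key = format!("client{client}:key{i}");
                            db.insert(key.as_bytes(), b"value".to_vec()).unwrap();
                        }
                    })
                })
                .collect();
            for h in handles {
                h.join().unwrap();
            }
        });
    });
}

criterion_group!(benches, bench_concurrent_writes);
criterion_main!(benches);
```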
### 5. Memory Profiling (`memory_profile.rs`)

Tracks memory usage patterns:

- Per-operation memory allocation
- Peak memory usage
- Memory efficiency (bytes per record)
- Allocation count tracking
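Allocator tracking of this kind is usually done by wrapping the system allocator in a `#[global_allocator]` that keeps atomic counters. A minimal standalone sketch of the idea (not the repo's `memory_profile.rs`):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);
static ALLOC_CALLS: AtomicUsize = AtomicUsize::new(0);

/// Delegates to the system allocator while counting bytes and calls.
struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        LIVE_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        ALLOC_CALLS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE_BYTES.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

fn main() {
    let before = LIVE_BYTES.load(Ordering::Relaxed);
    let data: Vec<u8> = vec![0; 1_000_000];
    let after = LIVE_BYTES.load(Ordering::Relaxed);
    println!(
        "~{} live bytes added over {} allocations",
        after - before,
        ALLOC_CALLS.load(Ordering::Relaxed)
    );
    drop(data);
}
```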
## Common Infrastructure

The `common/` directory provides shared utilities:

- `data_generator.rs`: deterministic test data generation
- `backends.rs`: backend setup and management
- `metrics.rs`: custom metrics collection and export
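Deterministic generation is, at its core, a seeded RNG: the same seed always yields the same key/value stream, so every backend is measured against identical data. A sketch of the idea, assuming the `rand` crate's 0.8-style API (the actual `data_generator.rs` interface may differ):

```rust
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};

/// Reproducible key/value generator: a fixed seed yields a fixed sequence.
pub struct DataGenerator {
    rng: StdRng,
}

impl DataGenerator {
    pub fn new(seed: u64) -> Self {
        Self { rng: StdRng::seed_from_u64(seed) }
    }

    /// Produce one key/value pair with `value_len` random bytes of payload.
    pub fn pair(&mut self, value_len: usize) -> (String, Vec<u8>) {
        let key = format!("key:{:010}", self.rng.gen_range(0..1_000_000_u64));
        let value = (0..value_len).map(|_| self.rng.gen::<u8>()).collect();
        (key, value)
    }
}

fn main() {
    // Two generators with the same seed produce identical streams.
    let (mut a, mut b) = (DataGenerator::new(42), DataGenerator::new(42));
    assert_eq!(a.pair(100), b.pair(100));
}
```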
## Results Analysis

### Parse Results

```bash
python3 scripts/parse_results.py target/criterion --csv results.csv --json results.json
```

### Compare Backends

```bash
python3 scripts/compare_backends.py results.csv --export comparison.csv
```

### View HTML Reports

Open `target/criterion/report/index.html` in a browser for interactive charts.
## Documentation

- Running Benchmarks - quick start guide
- Benchmarking Guide - complete user guide
- Architecture - system design
- Implementation Plan - development details
- Sample Results - example analysis
## Key Features

- ✅ **Statistical Rigor**: uses Criterion for statistically sound measurements
- ✅ **Fair Comparison**: identical test datasets across all backends
- ✅ **Reproducibility**: fixed random seeds for deterministic results
- ✅ **Comprehensive Coverage**: single ops, bulk ops, scans, concurrency
- ✅ **Memory Profiling**: custom allocator tracking
- ✅ **Multiple Formats**: terminal, CSV, JSON, and HTML outputs
## Performance Tips

### For Accurate Results

1. **System Preparation**
   - Close unnecessary applications
   - Disable CPU frequency scaling
   - Ensure a stable power supply
2. **Benchmark Configuration** (see the sketch after this list)
   - Use a sufficient sample size (100+)
   - Allow proper warm-up time
   - Run multiple iterations
3. **Environment Isolation**
   - Use temporary directories
   - Clean state between benchmarks
   - Avoid shared resources
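Sample size and warm-up are set through Criterion's configuration; a sketch of how a benchmark group might raise them above the defaults:

```rust
use std::time::Duration;

use criterion::{criterion_group, criterion_main, Criterion};

fn config() -> Criterion {
    Criterion::default()
        .sample_size(200)                      // well above the 100-sample floor
        .warm_up_time(Duration::from_secs(5))  // let caches and the OS settle
        .measurement_time(Duration::from_secs(10))
}

fn benches(_c: &mut Criterion) {
    // Register benchmarks here.
}

criterion_group! {
    name = tuned;
    config = config();
    targets = benches
}
criterion_main!(tuned);
```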
### For Faster Iteration

```bash
# Quick mode (fewer samples)
cargo bench -- --quick

# Specific operation only
cargo bench -- single_ops/strings/set

# Specific backend only
cargo bench -- redb
```
## Troubleshooting

### High Variance

- Close background applications
- Disable CPU frequency scaling
- Increase the sample size

### Out of Memory

- Run suites separately
- Reduce dataset sizes
- Increase system swap

### Slow Benchmarks

- Use the `--quick` flag
- Run specific benchmarks only
- Reduce the measurement time
See Running Benchmarks for detailed troubleshooting.
## Contributing

When adding new benchmarks:

- Follow the existing patterns in the benchmark files
- Use the common infrastructure (`data_generator`, `backends`)
- Ensure fair comparison between backends
- Add documentation for new metrics
- Test with both `--quick` and full runs
## Example Output

```text
single_ops/strings/set/redb/100bytes
                        time:   [1.234 µs 1.245 µs 1.256 µs]
                        thrpt:  [802.5K ops/s 810.2K ops/s 818.1K ops/s]

single_ops/strings/set/sled/100bytes
                        time:   [1.567 µs 1.578 µs 1.589 µs]
                        thrpt:  [629.5K ops/s 633.7K ops/s 638.1K ops/s]
```
## License

Same as the HeroDB project.