# HeroDB Benchmarks
This directory contains comprehensive performance benchmarks for HeroDB's storage backends (redb and sled).
## Quick Start
```bash
# Run all benchmarks
cargo bench

# Run specific suite
cargo bench --bench single_ops

# Quick run (fewer samples)
cargo bench -- --quick
```
## Benchmark Suites
### 1. Single Operations (`single_ops.rs`)

Measures individual operation latency:

- **String operations**: SET, GET, DEL, EXISTS
- **Hash operations**: HSET, HGET, HGETALL, HDEL, HEXISTS
- **List operations**: LPUSH, RPUSH, LPOP, RPOP, LRANGE
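
A rough sketch of how such a measurement can be wired up with Criterion; the in-memory `HashMap` stands in for a real backend handle, and the actual suites go through the helpers in `common/` rather than this simplified form:

```rust
use std::collections::HashMap;

use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Stand-in SET benchmark: the real suite runs the same loop against
// redb- and sled-backed stores set up via `common/backends.rs`.
fn bench_string_set(c: &mut Criterion) {
    let mut store: HashMap<String, Vec<u8>> = HashMap::new();
    let value = vec![b'x'; 100]; // 100-byte payload, matching the "100bytes" variant

    c.bench_function("strings/set/100bytes", |b| {
        b.iter(|| {
            store.insert(black_box("bench:key".to_string()), black_box(value.clone()));
        });
    });
}

criterion_group!(benches, bench_string_set);
criterion_main!(benches);
```

The real benchmark IDs also encode the backend and value size, as in the example output at the end of this README.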
### 2. Bulk Operations (`bulk_ops.rs`)

Tests throughput with varying batch sizes:

- Bulk insert (100, 1K, 10K records)
- Bulk read (sequential and random)
- Bulk update and delete
- Mixed workload (70% reads, 30% writes)
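
Batch sizes can be swept with a Criterion benchmark group so throughput is reported in elements per second; a minimal sketch under the same stand-in assumptions as the previous example:

```rust
use std::collections::HashMap;

use criterion::{black_box, BenchmarkId, Criterion, Throughput};

// Stand-in bulk-insert benchmark sweeping 100 / 1K / 10K records.
fn bench_bulk_insert(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_insert");
    for &size in &[100usize, 1_000, 10_000] {
        group.throughput(Throughput::Elements(size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            b.iter(|| {
                let mut store = HashMap::with_capacity(size);
                for i in 0..size {
                    store.insert(format!("key:{i:06}"), black_box(vec![b'x'; 100]));
                }
                store
            });
        });
    }
    group.finish();
}
```

A mixed 70/30 workload can be expressed the same way by alternating reads and writes inside the `iter` closure.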
### 3. Scan Operations (`scan_ops.rs`)

Evaluates iteration and filtering:

- SCAN with pattern matching
- HSCAN for hash fields
- KEYS operation
- DBSIZE, HKEYS, HVALS
### 4. Concurrent Operations (`concurrent_ops.rs`)

Simulates multi-client scenarios:

- Concurrent writes (10, 50 clients)
- Concurrent reads (10, 50 clients)
- Mixed concurrent workload
- Concurrent hash and list operations
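
Each multi-client scenario amounts to timing a burst of work issued by N worker threads; a sketch using scoped threads and a mutex-guarded map as a stand-in for real backend connections (the actual suite's concurrency model may differ):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::thread;

use criterion::{black_box, Criterion};

// Stand-in concurrent-write benchmark: each "client" thread issues 100 SETs.
fn bench_concurrent_writes(c: &mut Criterion) {
    let store = Mutex::new(HashMap::new());

    for &clients in &[10usize, 50] {
        c.bench_function(&format!("concurrent/writes/{clients}_clients"), |b| {
            b.iter(|| {
                thread::scope(|s| {
                    for client in 0..clients {
                        let store = &store;
                        s.spawn(move || {
                            for i in 0..100 {
                                store
                                    .lock()
                                    .unwrap()
                                    .insert(format!("c{client}:k{i}"), black_box(vec![0u8; 64]));
                            }
                        });
                    }
                });
            });
        });
    }
}
```

Read-heavy and mixed variants only change what each worker does per iteration.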
### 5. Memory Profiling (`memory_profile.rs`)

Tracks memory usage patterns:

- Per-operation memory allocation
- Peak memory usage
- Memory efficiency (bytes per record)
- Allocation count tracking
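
Allocator tracking of this kind is typically done with a counting wrapper around the global allocator; a minimal sketch of the idea, not necessarily the exact allocator used by `memory_profile.rs`:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);
static ALLOCATION_COUNT: AtomicUsize = AtomicUsize::new(0);

// Wraps the system allocator and counts allocations plus currently live bytes;
// sampling these counters before and after an operation gives per-op figures.
struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        LIVE_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        ALLOCATION_COUNT.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE_BYTES.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;
```

Peak usage can be derived by additionally maintaining a running maximum of `LIVE_BYTES` in `alloc`.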
## Common Infrastructure

The `common/` directory provides shared utilities:

- **`data_generator.rs`**: Deterministic test data generation
- **`backends.rs`**: Backend setup and management
- **`metrics.rs`**: Custom metrics collection and export
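
To illustrate the deterministic-generation idea (the actual API of `data_generator.rs` may differ), a seeded generator built on the `rand` crate could look like this:

```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

/// Deterministic key/value generator: the same seed always yields the same
/// sequence, so every backend is benchmarked against identical datasets.
pub struct DataGenerator {
    rng: StdRng,
}

impl DataGenerator {
    pub fn new(seed: u64) -> Self {
        Self { rng: StdRng::seed_from_u64(seed) }
    }

    pub fn key(&mut self, id: usize) -> String {
        format!("bench:key:{id:08}")
    }

    pub fn value(&mut self, len: usize) -> Vec<u8> {
        (0..len).map(|_| self.rng.gen_range(b'a'..=b'z')).collect()
    }
}
```

Seeding every suite with the same constant means redb and sled are exercised against byte-identical keys and values.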
## Results Analysis

### Parse Results

```bash
python3 scripts/parse_results.py target/criterion --csv results.csv --json results.json
```

### Compare Backends

```bash
python3 scripts/compare_backends.py results.csv --export comparison.csv
```

### View HTML Reports

Open `target/criterion/report/index.html` in a browser for interactive charts.
## Documentation

- **[Running Benchmarks](../docs/running_benchmarks.md)** - Quick start guide
- **[Benchmarking Guide](../docs/benchmarking.md)** - Complete user guide
- **[Architecture](../docs/benchmark_architecture.md)** - System design
- **[Implementation Plan](../docs/benchmark_implementation_plan.md)** - Development details
- **[Sample Results](../docs/benchmark_results_sample.md)** - Example analysis
## Key Features

✅ **Statistical Rigor**: Uses Criterion for statistically sound measurements

✅ **Fair Comparison**: Identical test datasets across all backends

✅ **Reproducibility**: Fixed random seeds for deterministic results

✅ **Comprehensive Coverage**: Single ops, bulk ops, scans, concurrency

✅ **Memory Profiling**: Custom allocator tracking

✅ **Multiple Formats**: Terminal, CSV, JSON, HTML outputs
## Performance Tips

### For Accurate Results

1. **System Preparation**
   - Close unnecessary applications
   - Disable CPU frequency scaling
   - Ensure a stable power supply

2. **Benchmark Configuration** (see the sketch after this list)
   - Use a sufficient sample size (100+)
   - Allow proper warm-up time
   - Run multiple iterations

3. **Environment Isolation**
   - Use temporary directories
   - Clean state between benchmarks
   - Avoid shared resources
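
Sample size and warm-up can be set where the benchmark groups are registered; a minimal Criterion configuration sketch, with illustrative values rather than this project's actual defaults:

```rust
use std::time::Duration;

use criterion::{criterion_group, criterion_main, Criterion};

fn configured() -> Criterion {
    Criterion::default()
        .sample_size(100)                         // 100+ samples per benchmark
        .warm_up_time(Duration::from_secs(3))     // let caches and backends settle
        .measurement_time(Duration::from_secs(10))
}

// Placeholder target so the snippet is self-contained.
fn bench_noop(c: &mut Criterion) {
    c.bench_function("noop", |b| b.iter(|| std::hint::black_box(1 + 1)));
}

criterion_group! {
    name = benches;
    config = configured();
    targets = bench_noop
}
criterion_main!(benches);
```

Command-line flags such as `--quick` still apply on top of this configuration for faster iteration (see below).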
### For Faster Iteration

```bash
# Quick mode (fewer samples)
cargo bench -- --quick

# Specific operation only
cargo bench -- single_ops/strings/set

# Specific backend only
cargo bench -- redb
```
## Troubleshooting

### High Variance

- Close background applications
- Disable CPU frequency scaling
- Increase sample size

### Out of Memory

- Run suites separately
- Reduce dataset sizes
- Increase system swap

### Slow Benchmarks

- Use `--quick` flag
- Run specific benchmarks
- Reduce measurement time

See [Running Benchmarks](../docs/running_benchmarks.md) for detailed troubleshooting.
## Contributing

When adding new benchmarks:

1. Follow existing patterns in benchmark files
2. Use common infrastructure (`data_generator`, `backends`)
3. Ensure fair comparison between backends
4. Add documentation for new metrics
5. Test with both `--quick` and full runs
## Example Output

```
single_ops/strings/set/redb/100bytes
        time:   [1.234 µs 1.245 µs 1.256 µs]
        thrpt:  [802.5K ops/s 810.2K ops/s 818.1K ops/s]

single_ops/strings/set/sled/100bytes
        time:   [1.567 µs 1.578 µs 1.589 µs]
        thrpt:  [629.5K ops/s 633.7K ops/s 638.1K ops/s]
```
## License

Same as the HeroDB project.