Hero Logger
A hierarchical logging system for the Hero project that provides system-level and per-job logging with complete isolation using the tracing ecosystem.
Features
- Hierarchical Organization: Physical separation of logs by component and job
 - System Logger: Global logging for all non-job-specific events
 - Per-Job Logger: Isolated logging for individual job execution
 - Hourly Rotation: Automatic log file rotation every hour
 - Rhai Integration: Capture Rhai script print() and debug() calls
 - High Performance: Async logging with efficient filtering
 - Structured Logging: Rich context and metadata support
 
Architecture
The logging system uses a hybrid approach with two main components:
System Logger (Global)
- Long-lived logger initialized at application startup
 - Routes logs to different files based on tracing targets
 - Supports multiple components simultaneously
 
Per-Job Logger (Dynamic)
- Created on-demand for each job execution
 - Provides complete isolation for job-specific logs
 - Automatically disposed after job completion
 
Directory Structure
logs/
├── supervisor/               # System logs for supervisor
│   └── 2025-08-06-11.log
└── actor/
    ├── osis/
    │   ├── 2025-08-06-11.log   # General OSIS actor logs
    │   ├── job-a1b2c3d4/       # Job-specific logs
    │   │   └── 2025-08-06-11.log
    │   └── job-9a8b7c6d/
    │       └── 2025-08-06-12.log
    └── sal/
        ├── 2025-08-06-13.log   # General SAL actor logs
        └── job-f1e2d3c4/
            └── 2025-08-06-13.log
Quick Start
1. Initialize System Logger
use hero_logger;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Define your system components
    let components = vec![
        "supervisor".to_string(),
        "osis_actor".to_string(),
        "sal_actor".to_string(),
    ];
    
    // Initialize the system logger
    let _guards = hero_logger::init_system_logger("logs", &components)?;
    
    // Now you can use tracing macros with targets
    tracing::info!(target: "supervisor", "System started");
    tracing::info!(target: "osis_actor", "Actor ready");
    
    Ok(())
}
2. Per-Job Logging
use hero_logger::create_job_logger;
use tracing::subscriber::with_default;
async fn process_job(job_id: &str, actor_type: &str) -> Result<(), Box<dyn std::error::Error>> {
    // Create job-specific logger
    let job_logger = create_job_logger("logs", actor_type, job_id)?;
    
    // Execute job within logging context
    with_default(job_logger, || {
        tracing::info!(target: "osis_actor", "Job {} started", job_id);
        
        // All tracing calls here go to the job-specific log
        tracing::debug!(target: "osis_actor", "Processing data...");
        tracing::info!(target: "osis_actor", "Job {} completed", job_id);
    });

    Ok(())
}
3. Rhai Script Integration
use hero_logger::rhai_integration::configure_rhai_logging;
use rhai::Engine;
fn setup_rhai_engine() -> Engine {
    let mut engine = Engine::new();
    
    // Configure Rhai to capture print/debug calls
    configure_rhai_logging(&mut engine, "osis_actor");
    
    engine
}
// Rhai scripts can now use print() and debug()
let engine = setup_rhai_engine();
let script = r#"
    print("Hello from Rhai!");
    debug("Debug information");
    42
"#;
let result = engine.eval::<i64>(script)?;
API Reference
Core Functions
init_system_logger(logs_root, components)
Initialize the global system logger with component-based filtering.
Parameters:
- logs_root: Root directory for log files
- components: List of component names for dedicated logging
Returns: Vector of WorkerGuards that must be kept alive
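The guards flush buffered output when they are dropped, so keep them in a binding that lives for the whole program. A minimal sketch; the concrete guard type is assumed here to be tracing_appender's WorkerGuard, as is conventional in the tracing ecosystem:
use tracing_appender::non_blocking::WorkerGuard;

// Hold the guards for the lifetime of the application; dropping them
// flushes and shuts down the background log writers.
struct LoggingHandle {
    _guards: Vec<WorkerGuard>,
}

fn start_logging() -> Result<LoggingHandle, Box<dyn std::error::Error>> {
    let components = vec!["supervisor".to_string(), "osis_actor".to_string()];
    let guards = hero_logger::init_system_logger("logs", &components)?;
    Ok(LoggingHandle { _guards: guards })
}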
create_job_logger(logs_root, actor_type, job_id)
Create a per-job logger for isolated logging.
Parameters:
- logs_root: Root directory for log files
- actor_type: Type of actor (e.g., "osis", "sal")
- job_id: Unique job identifier
Returns: Boxed subscriber for use with with_default()
Rhai Integration
configure_rhai_logging(engine, target)
Configure a Rhai engine to capture print/debug output.
add_custom_logging_functions(engine, target)
Add custom logging functions (log_info, log_debug, etc.) to Rhai.
create_logging_enabled_engine(target, include_custom)
Create a new Rhai engine with full logging integration.
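A brief sketch showing how these helpers might be combined. The return type of create_logging_enabled_engine is an assumption (a ready-to-use Engine), so it is left commented out; adjust to the actual signature:
use hero_logger::rhai_integration::{
    add_custom_logging_functions, configure_rhai_logging, create_logging_enabled_engine,
};
use rhai::Engine;

fn run_with_logging(script: &str) -> Result<i64, Box<rhai::EvalAltResult>> {
    // Manual wiring: capture print()/debug() and add log_info()/log_debug()
    let mut engine = Engine::new();
    configure_rhai_logging(&mut engine, "sal_actor");
    add_custom_logging_functions(&mut engine, "sal_actor");

    // Convenience path (assumed to return a configured Engine directly):
    // let engine = create_logging_enabled_engine("sal_actor", true);

    engine.eval::<i64>(script)
}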
Utilities
ensure_log_directories(logs_root, components)
Ensure the log directory structure exists.
extract_actor_type(component)
Extract actor type from component name.
cleanup_old_logs(directory, pattern, max_age_days)
Clean up old log files based on age.
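A rough usage sketch of the utilities; the module paths and return types are assumptions based on the parameter lists above (crate-root exports, Result-returning setup/cleanup, and a String from extract_actor_type):
use hero_logger::{cleanup_old_logs, ensure_log_directories, extract_actor_type};

fn prepare_and_prune() -> Result<(), Box<dyn std::error::Error>> {
    let components = vec!["supervisor".to_string(), "osis_actor".to_string()];

    // Make sure logs/supervisor, logs/actor/osis, ... exist before logging starts
    ensure_log_directories("logs", &components)?;

    // e.g. "osis_actor" -> "osis" (assumed to return a String)
    let actor_type = extract_actor_type("osis_actor");
    tracing::debug!(target: "supervisor", "resolved actor type: {}", actor_type);

    // Delete rotated files older than 30 days that match the pattern
    cleanup_old_logs("logs/actor/osis", "*.log", 30)?;

    Ok(())
}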
Configuration
Log Levels
The system supports standard tracing log levels:
- ERROR: Critical errors
- WARN: Warning messages
- INFO: Informational messages
- DEBUG: Debug information
- TRACE: Detailed trace information
Environment Variables
- RUST_LOG: Set log level filtering (e.g., RUST_LOG=debug)
File Rotation
- Hourly: Default rotation every hour
 - Daily: Optional daily rotation
 - Never: Single file (no rotation)
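These modes correspond to the rolling file appenders in tracing-appender, which the tracing ecosystem uses for rotation; the sketch below illustrates that underlying mechanism (how a rotation mode is selected through hero_logger's own API is not documented here):
use tracing_appender::rolling;

fn build_file_appenders() {
    // Hourly (default): a new file is started every hour
    let _hourly = rolling::hourly("logs/supervisor", "supervisor.log");
    // Daily: a new file is started every day
    let _daily = rolling::daily("logs/supervisor", "supervisor.log");
    // Never: everything is written to a single file
    let _never = rolling::never("logs/supervisor", "supervisor.log");
}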
 
Examples
Basic Usage
cargo run --example logging_demo
Integration with Actor System
// In your actor implementation
use rhai::Engine;
use tracing::instrument::WithSubscriber; // lets a future run under a specific subscriber

async fn process_job(&self, job: &Job) {
    let job_logger = hero_logger::create_job_logger(
        "logs",
        &self.actor_type,
        &job.id,
    ).unwrap();

    let actor_type = self.actor_type.clone();
    let script = job.script.clone();
    let job_task = async move {
        // tracing targets must be compile-time constants, so a fixed target
        // is used here and the runtime actor type is attached as a field
        tracing::info!(target: "osis_actor", actor = %actor_type, "Job processing started");

        // Configure a Rhai engine for this job
        let mut engine = Engine::new();
        hero_logger::rhai_integration::configure_rhai_logging(&mut engine, "osis_actor");

        // Execute the Rhai script - print()/debug() calls are captured
        match engine.eval::<String>(&script) {
            Ok(result) => tracing::info!(target: "osis_actor", "Job finished: {}", result),
            Err(e) => tracing::error!(target: "osis_actor", "Job failed: {}", e),
        }
    };

    // Execute the job future with job-specific logging
    job_task.with_subscriber(job_logger).await;
}
Performance Considerations
- Async Logging: All file I/O is asynchronous
 - Efficient Filtering: Target-based filtering minimizes overhead
 - Memory Usage: Per-job loggers are short-lived and automatically cleaned up
 - File Handles: Automatic rotation prevents excessive file handle usage
 
Troubleshooting
Common Issues
- Logs not appearing: Ensure WorkerGuards are kept alive
- Permission errors: Check write permissions on the log directory
- Missing directories: Use ensure_log_directories() before logging
- Rhai output not captured: Verify configure_rhai_logging() is called
Debug Mode
Enable debug logging to see internal logger operations:
RUST_LOG=hero_logger=debug cargo run
Testing
Run the test suite:
cargo test
Run the demo example:
cargo run --example logging_demo
License
This project is part of the Hero ecosystem and follows the same licensing terms.