Compare commits


2 Commits

Author SHA1 Message Date
947d156921 Added youki build and formatting of scripts
Some checks failed
Build Zero OS Initramfs / build (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, basic) (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, serial) (push) Has been cancelled
2025-11-11 20:49:36 +01:00
721e26a855 build: remove testing.sh in favor of runit.sh; add claude.md reference
Replace inline boot testing with standalone runit.sh runner for clarity:
- Remove scripts/lib/testing.sh source and boot_tests stage from build.sh
- Remove --skip-tests option from build.sh and rebuild-after-zinit.sh
- Update all docs to reference runit.sh for QEMU/cloud-hypervisor testing
- Add comprehensive claude.md as AI assistant entry point with guidelines

Testing is now fully decoupled from build pipeline; use ./runit.sh for
QEMU/cloud-hypervisor validation after builds complete.
2025-11-04 13:47:24 +01:00
15 changed files with 1537 additions and 690 deletions


@@ -20,6 +20,7 @@ RUN apk add --no-cache \
musl-utils \ musl-utils \
pkgconfig \ pkgconfig \
openssl openssl-dev \ openssl openssl-dev \
libseccomp libseccomp-dev \
perl \ perl \
shadow \ shadow \
bash \ bash \
@@ -55,4 +56,4 @@ RUN chown builder:builder /workspace
# Set environment variables - rustup handles everything # Set environment variables - rustup handles everything
ENV PATH="/root/.cargo/bin:${PATH}" ENV PATH="/root/.cargo/bin:${PATH}"
CMD ["/bin/bash"] CMD ["/bin/bash"]

README.md (118 changed lines)

@@ -103,8 +103,7 @@ zosbuilder/
│ │ ├── alpine.sh # Alpine operations │ │ ├── alpine.sh # Alpine operations
│ │ ├── components.sh # source building │ │ ├── components.sh # source building
│ │ ├── initramfs.sh # assembly & optimization │ │ ├── initramfs.sh # assembly & optimization
│ │ ├── kernel.sh # kernel building │ │ └── kernel.sh # kernel building
│ │ └── testing.sh # QEMU/cloud-hypervisor
│ ├── build.sh # main orchestrator │ ├── build.sh # main orchestrator
│ └── clean.sh # cleanup script │ └── clean.sh # cleanup script
├── initramfs/ # build output (generated) ├── initramfs/ # build output (generated)
@@ -244,34 +243,40 @@ Services are migrated from existing `configs/zinit/` directory with proper initi
### Phase 6: Packaging ### Phase 6: Packaging
1. Create `initramfs.cpio.xz` with XZ compression 1. Create `initramfs.cpio.xz` with XZ compression
2. Build kernel with embedded initramfs 2. Build kernel with embedded initramfs
3. Generate `vmlinuz.efi` 3. Generate `vmlinuz.efi` (default kernel)
4. Generate versioned kernel: `vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
5. Optionally upload versioned kernel to S3 (set `UPLOAD_KERNEL=true`)
## Testing ## Testing
### QEMU Testing ### QEMU Testing
```bash ```bash
# Boot test with QEMU # Boot test with QEMU (default)
./scripts/test.sh --qemu ./runit.sh
# With serial console # With custom parameters
./scripts/test.sh --qemu --serial ./runit.sh --hypervisor qemu --memory 2048 --disks 3
``` ```
### cloud-hypervisor Testing ### cloud-hypervisor Testing
```bash ```bash
# Boot test with cloud-hypervisor # Boot test with cloud-hypervisor
./scripts/test.sh --cloud-hypervisor ./runit.sh --hypervisor ch
# With disk reset
./runit.sh --hypervisor ch --reset --disks 5
``` ```
### Custom Testing ### Advanced Options
```bash ```bash
# Manual QEMU command # See all options
qemu-system-x86_64 \ ./runit.sh --help
-kernel dist/vmlinuz.efi \
-m 512M \ # Custom disk size and bridge
-nographic \ ./runit.sh --disk-size 20G --bridge zosbr --disks 4
-serial mon:stdio \
-append "console=ttyS0,115200 console=tty1 loglevel=7" # Environment variables
HYPERVISOR=ch NUM_DISKS=5 ./runit.sh
``` ```
## Size Optimization ## Size Optimization
@@ -321,7 +326,7 @@ jobs:
- name: Build - name: Build
run: ./scripts/build.sh run: ./scripts/build.sh
- name: Test - name: Test
run: ./scripts/test.sh --qemu run: ./runit.sh --hypervisor qemu
``` ```
## Advanced Usage ## Advanced Usage
@@ -354,6 +359,72 @@ function build_myapp() {
} }
``` ```
### S3 Uploads (Kernel & RFS Flists)
Automatically upload build artifacts to S3-compatible storage:
#### Configuration
Create `config/rfs.conf`:
```bash
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="YOUR_ACCESS_KEY"
S3_SECRET_KEY="YOUR_SECRET_KEY"
```
#### Upload Kernel
```bash
# Enable kernel upload
UPLOAD_KERNEL=true ./scripts/build.sh
# Custom kernel subpath (default: kernel)
KERNEL_SUBPATH=kernels UPLOAD_KERNEL=true ./scripts/build.sh
```
**Uploaded files:**
- `s3://{bucket}/{prefix}/kernel/vmlinuz-{VERSION}-{ZINIT_HASH}.efi` - Versioned kernel
- `s3://{bucket}/{prefix}/kernel/kernels.txt` - Text index (one kernel per line)
- `s3://{bucket}/{prefix}/kernel/kernels.json` - JSON index with metadata
**Index files:**
The build automatically generates and uploads index files listing all available kernels in the S3 bucket. This enables:
- Easy kernel selection in web UIs (dropdown menus)
- Programmatic access without S3 API listing
- Metadata like upload timestamp and kernel count (JSON format)
**JSON index format:**
```json
{
"kernels": [
"vmlinuz-6.12.44-Zero-OS-abc1234.efi",
"vmlinuz-6.12.44-Zero-OS-def5678.efi"
],
"updated": "2025-01-04T12:00:00Z",
"count": 2
}
```
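A deployment tool or web UI can consume this index directly over HTTP. A minimal sketch, assuming the example `config/rfs.conf` values above and a bucket that permits anonymous reads (adjust the URL to your actual endpoint, bucket, and prefix):
```bash
# Fetch the index and list available kernels (requires curl and jq)
curl -s "https://s3.example.com:9000/zos/flists/zosbuilder/kernel/kernels.json" | jq -r '.kernels[]'

# Pick the first (newest) entry
curl -s "https://s3.example.com:9000/zos/flists/zosbuilder/kernel/kernels.json" | jq -r '.kernels[0]'
```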
#### Upload RFS Flists
```bash
# Enable flist uploads
UPLOAD_MANIFESTS=true ./scripts/build.sh
```
Uploaded as:
- `s3://{bucket}/{prefix}/manifests/modules-{VERSION}.fl`
- `s3://{bucket}/{prefix}/manifests/firmware-{TAG}.fl`
#### Requirements
- MinIO Client (`mcli` or `mc`) must be installed
- Valid S3 credentials in `config/rfs.conf`
### Container Builds ### Container Builds
Build in isolated container: Build in isolated container:
@@ -432,13 +503,16 @@ export DEBUG=1
```bash ```bash
# Setup development environment # Setup development environment
./scripts/setup-dev.sh ./scripts/dev-container.sh start
# Run tests # Run incremental build
./scripts/test.sh --all ./scripts/build.sh
# Check size impact # Test with QEMU
./scripts/analyze-size.sh --compare ./runit.sh --hypervisor qemu
# Test with cloud-hypervisor
./runit.sh --hypervisor ch
``` ```
## License ## License

claude.md (new file, 523 lines)

@@ -0,0 +1,523 @@
# Claude Code Reference for Zero-OS Builder
This document provides essential context for Claude Code (or any AI assistant) working with this Zero-OS Alpine Initramfs Builder repository.
## Project Overview
**What is this?**
A sophisticated build system for creating custom Alpine Linux 3.22 x86_64 initramfs images with zinit process management, designed for Zero-OS deployment on ThreeFold Grid.
**Key Features:**
- Container-based reproducible builds (rootless podman/docker)
- Incremental staged build pipeline with completion markers
- zinit process manager (complete OpenRC replacement)
- RFS (Remote File System) for lazy-loading modules/firmware from S3
- Rust components built with musl static linking
- Aggressive size optimization (strip + UPX)
- Embedded initramfs in kernel (single vmlinuz.efi output)
## Repository Structure
```
zosbuilder/
├── config/ # All configuration files
│ ├── build.conf # Build settings (versions, paths, flags)
│ ├── packages.list # Alpine packages to install
│ ├── sources.conf # ThreeFold components to build
│ ├── modules.conf # 2-stage kernel module loading
│ ├── firmware.conf # Firmware to include in initramfs
│ ├── kernel.config # Linux kernel configuration
│ ├── init # /init script for initramfs
│ └── zinit/ # zinit service definitions (YAML)
├── scripts/
│ ├── build.sh # Main orchestrator (DO NOT EDIT LIGHTLY)
│ ├── clean.sh # Clean all artifacts
│ ├── dev-container.sh # Persistent dev container manager
│ ├── rebuild-after-zinit.sh # Quick rebuild helper
│ ├── lib/ # Modular build libraries
│ │ ├── common.sh # Logging, path normalization, utilities
│ │ ├── stages.sh # Incremental stage tracking
│ │ ├── docker.sh # Container lifecycle
│ │ ├── alpine.sh # Alpine extraction, packages, cleanup
│ │ ├── components.sh # Build Rust components from sources.conf
│ │ ├── initramfs.sh # Assembly, optimization, CPIO creation
│ │ └── kernel.sh # Kernel download, config, build, embed
│ └── rfs/ # RFS flist generation scripts
│ ├── common.sh # S3 config, version computation
│ ├── pack-modules.sh # Create modules flist
│ ├── pack-firmware.sh # Create firmware flist
│ └── verify-flist.sh # Inspect/test flists
├── docs/ # Detailed documentation
│ ├── NOTES.md # Operational knowledge & troubleshooting
│ ├── PROMPT.md # Agent guidance (strict debugger mode)
│ ├── TODO.md # Persistent checklist with code refs
│ ├── AGENTS.md # Quick reference for agents
│ ├── rfs-flists.md # RFS design and runtime flow
│ ├── review-rfs-integration.md # Integration points
│ └── depmod-behavior.md # Module dependency details
├── runit.sh # Test runner (QEMU/cloud-hypervisor)
├── initramfs/ # Generated initramfs tree
├── components/ # Generated component sources
├── kernel/ # Generated kernel source
├── dist/ # Final outputs
│ ├── vmlinuz.efi # Kernel with embedded initramfs
│ └── initramfs.cpio.xz # Standalone initramfs archive
└── .build-stages/ # Incremental build markers (*.done files)
```
## Core Concepts
### 1. Incremental Staged Builds
**How it works:**
- Each stage creates a `.build-stages/<stage_name>.done` marker on success
- Subsequent builds skip completed stages unless forced
- Use `./scripts/build.sh --show-stages` to see status
- Use `./scripts/build.sh --rebuild-from=<stage>` to restart from a specific stage
- Manually remove `.done` files to re-run specific stages
**Build stages (in order):**
```
alpine_extract → alpine_configure → alpine_packages → alpine_firmware
→ components_build → components_verify → kernel_modules
→ init_script → components_copy → zinit_setup
→ modules_setup → modules_copy → cleanup → rfs_flists
→ validation → initramfs_create → initramfs_test → kernel_build
```
**Key insight:** The build ALWAYS runs inside a container. Host invocations auto-spawn containers.
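A minimal sketch of the marker pattern, for orientation only (the real logic in `scripts/lib/stages.sh` also handles `--force-rebuild` and `--rebuild-from`):
```bash
# Illustrative only - not the actual stages.sh implementation.
STAGE_DIR=".build-stages"

stage_run_sketch() {
    local name="$1" func="$2"
    local marker="${STAGE_DIR}/${name}.done"
    if [[ -f "$marker" ]]; then
        echo "stage ${name}: already complete, skipping"
        return 0
    fi
    "$func"                  # run the stage; a non-zero exit aborts the build
    mkdir -p "$STAGE_DIR"
    touch "$marker"          # record success so the next build skips this stage
}
```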
### 2. Container-First Architecture
**Why containers?**
- Reproducible toolchain (Alpine 3.22 base with exact dependencies)
- Rootless execution (no privileged access needed)
- Isolation from host environment
- GitHub Actions compatible
**Container modes:**
- **Transient:** `./scripts/build.sh` spawns, builds, exits
- **Persistent:** `./scripts/dev-container.sh start/shell/build`
**Important:** Directory paths are normalized to absolute paths under PROJECT_ROOT to avoid CWD issues when stages change directories (especially kernel builds).
### 3. Component Build System
**sources.conf format:**
```
TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA]
```
**Example:**
```bash
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex
```
**Build functions** are defined in `scripts/lib/components.sh` (see the sketch below) and handle:
- Rust builds with `x86_64-unknown-linux-musl` target
- Static linking via `RUSTFLAGS="-C target-feature=+crt-static"`
- Special cases (e.g., mycelium builds in `myceliumd/` subdirectory)
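A hedged sketch of the general shape of such a function (the real implementations differ in detail, e.g. output paths and per-component quirks):
```bash
# Illustrative shape of a build function; see scripts/lib/components.sh for the real ones.
function build_zinit() {
    local source_dir="$1"    # clone location under components/
    cd "$source_dir"
    RUSTFLAGS="-C target-feature=+crt-static" \
        cargo build --release --target x86_64-unknown-linux-musl
    # The static binary ends up under target/x86_64-unknown-linux-musl/release/
}
```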
### 4. RFS Flists (Remote File System)
**Purpose:** Lazy-load kernel modules and firmware from S3 at runtime
**Flow:**
1. Build stage creates flists: `modules-<KERNEL_VERSION>.fl` and `firmware-<TAG>.fl`
2. Flists are SQLite databases containing:
- Content-addressed blob references
- S3 store URIs (patched for read-only access)
- Directory tree metadata
3. Flists embedded in initramfs at `/etc/rfs/`
4. Runtime: zinit units mount flists over `/lib/modules/` and `/lib/firmware/`
5. Dual udev coldplug: early (before RFS) for networking, post-RFS for new hardware
**Key files:**
- `scripts/rfs/pack-modules.sh` - Creates modules flist from container `/lib/modules/`
- `scripts/rfs/pack-firmware.sh` - Creates firmware flist from Alpine packages
- `config/zinit/init/modules.sh` - Runtime mount script (illustrative sketch below)
- `config/zinit/init/firmware.sh` - Runtime mount script
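A rough sketch of what a runtime mount script does; the exact `rfs` invocation, flist naming, and paths below are assumptions, so treat `config/zinit/init/modules.sh` as the authority:
```sh
#!/bin/sh
# Illustrative only - flag names and flist naming are assumptions, not the real script.
FLIST="/etc/rfs/modules-$(uname -r).fl"
TARGET="/lib/modules"

[ -f "$FLIST" ] || exit 0            # nothing to do if the flist was not embedded
rfs mount -m "$FLIST" "$TARGET"      # lazily fetch module blobs from S3 on access
```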
### 5. zinit Service Management
**No OpenRC:** This system uses zinit exclusively for process management.
**Service graph:**
```
/init → zinit → [stage1-modules, udevd, depmod]
→ udev-trigger (early coldplug)
→ network
→ rfs-modules + rfs-firmware (mount flists)
→ udev-rfs (post-RFS coldplug)
→ services
```
**Service definitions:** YAML files in `config/zinit/` with `after:`, `needs:`, `wants:` dependencies
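A hedged example of a unit file (illustrative only; the concrete script path and dependency names are assumptions, not copied from `config/zinit/`):
```yaml
# rfs-modules.yaml (illustrative): mount the modules flist once the network is up.
exec: /etc/zinit/init/modules.sh   # path inside the initramfs is an assumption
oneshot: true
after:
  - network
```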
### 6. Kernel Versioning and S3 Upload
**Versioned Kernel Output:**
- Standard kernel: `dist/vmlinuz.efi` (for compatibility)
- Versioned kernel: `dist/vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
- Example: `vmlinuz-6.12.44-Zero-OS-a1b2c3d.efi`
**Version components:**
- `{VERSION}`: Full kernel version from `KERNEL_VERSION` + `CONFIG_LOCALVERSION`
- `{ZINIT_HASH}`: Short git commit hash from `components/zinit/.git`
**S3 Upload (optional):**
- Controlled by `UPLOAD_KERNEL=true` environment variable
- Uses MinIO client (`mcli` or `mc`) to upload to S3-compatible storage
- Uploads versioned kernel to: `s3://{bucket}/{prefix}/kernel/{versioned_filename}`
**Kernel Index Generation:**
After uploading, automatically generates and uploads index files:
- `kernels.txt` - Plain text, one kernel per line, reverse-sorted by filename (newest versions first)
- `kernels.json` - JSON format with metadata (timestamp, count)
**Why index files?**
- S3 web interfaces often don't support directory listings
- Enables dropdown menus in web UIs without S3 API access
- Provides kernel discovery for deployment tools
**JSON index structure:**
```json
{
"kernels": ["vmlinuz-6.12.44-Zero-OS-abc1234.efi", ...],
"updated": "2025-01-04T12:00:00Z",
"count": 2
}
```
**Key functions:**
- `get_git_commit_hash()` in `scripts/lib/common.sh` - Extracts git hash
- `kernel_build_with_initramfs()` in `scripts/lib/kernel.sh` - Creates versioned kernel
- `kernel_upload_to_s3()` in `scripts/lib/kernel.sh` - Uploads to S3
- `kernel_generate_index()` in `scripts/lib/kernel.sh` - Generates and uploads index
## Critical Conventions
### Path Normalization
**Problem:** Stages can change CWD (kernel build uses `/workspace/kernel/current`)
**Solution:** All paths normalized to absolute at startup in `scripts/lib/common.sh:244`
**Variables affected:**
- `INSTALL_DIR` (initramfs/)
- `COMPONENTS_DIR` (components/)
- `KERNEL_DIR` (kernel/)
- `DIST_DIR` (dist/)
**Never use relative paths** when calling functions that might be in different CWDs.
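A minimal sketch of the idea (the actual normalization lives in `scripts/lib/common.sh:244`):
```bash
# Illustrative: resolve a configured directory to an absolute path once, at startup,
# so later stages can change directories without breaking it.
normalize_dir() {
    local p="$1"
    case "$p" in
        /*) printf '%s\n' "$p" ;;                   # already absolute
        *)  printf '%s\n' "${PROJECT_ROOT}/${p}" ;; # anchor relative paths at the repo root
    esac
}

INSTALL_DIR="$(normalize_dir "${INSTALL_DIR:-initramfs}")"
```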
### Branding and Security
**Passwordless root enforcement:**
- Applied in `scripts/lib/initramfs.sh:575` via `passwd -d -R "${initramfs_dir}" root`
- Creates `root::` in `/etc/shadow` (empty password field)
- Controlled by `ZEROOS_BRANDING` and `ZEROOS_PASSWORDLESS_ROOT` flags
**Never edit /etc/shadow manually** - always use `passwd` or `chpasswd` with chroot.
### Module Loading Strategy
**2-stage approach:**
- **Stage 1:** Critical boot modules (virtio, e1000, scsi) - loaded by zinit early
- **Stage 2:** Extended hardware (igb, ixgbe, i40e) - loaded after network
**Config:** `config/modules.conf` with `stage1:` and `stage2:` prefixes (example below)
**Dependency resolution:**
- Uses `modinfo` to build dependency tree
- Resolves from container `/lib/modules/<FULL_VERSION>/`
- Must run after `kernel_modules` stage
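Illustrative `config/modules.conf` entries (module names taken from the stage lists above; exact spelling and spacing follow the real file):
```
# Stage 1: needed to boot and reach the network
stage1: virtio_net
stage1: virtio_blk
# Stage 2: extended hardware, loaded after the network is up
stage2: ixgbe
stage2: i40e
```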
### Firmware Policy
**For initramfs:** `config/firmware.conf` is the SINGLE source of truth
- Any firmware hints in `modules.conf` are IGNORED
- Prevents duplication/version mismatches
**For RFS:** Full Alpine `linux-firmware*` packages installed in container
- Packed from container `/lib/firmware/`
- Overmounts at runtime for extended hardware
## Common Workflows
### Full Build from Scratch
```bash
# Clean everything and rebuild
./scripts/build.sh --clean
# Or just rebuild all stages
./scripts/build.sh --force-rebuild
```
### Quick Iteration After Config Changes
```bash
# After editing zinit configs, init script, or modules.conf
./scripts/rebuild-after-zinit.sh
# With kernel rebuild
./scripts/rebuild-after-zinit.sh --with-kernel
# Dry-run to see what changed
./scripts/rebuild-after-zinit.sh --verify-only
```
### Minimal Manual Rebuild
```bash
# Remove specific stages
rm -f .build-stages/initramfs_create.done
rm -f .build-stages/validation.done
# Rebuild only those stages
DEBUG=1 ./scripts/build.sh
```
### Testing the Built Kernel
```bash
# QEMU (default)
./runit.sh
# cloud-hypervisor with 5 disks
./runit.sh --hypervisor ch --disks 5 --reset
# Custom memory and bridge
./runit.sh --memory 4096 --bridge zosbr
```
### Persistent Dev Container
```bash
# Start persistent container
./scripts/dev-container.sh start
# Enter shell
./scripts/dev-container.sh shell
# Run build inside
./scripts/dev-container.sh build
# Stop container
./scripts/dev-container.sh stop
```
## Debugging Guidelines
### Diagnostics-First Approach
**ALWAYS add diagnostics before fixes:**
1. Enable `DEBUG=1` for verbose safe_execute logs
2. Add strategic `log_debug` statements
3. Confirm hypothesis in logs
4. Then apply minimal fix
**Example:**
```bash
# Bad: Guess and fix
Edit file to fix suspected issue
# Good: Diagnose first
1. Add log_debug "Variable X=${X}, resolved=${resolved_path}"
2. DEBUG=1 ./scripts/build.sh
3. Confirm in output
4. Apply fix with evidence
```
### Key Diagnostic Functions
- `scripts/lib/common.sh`: `log_info`, `log_warn`, `log_error`, `log_debug`
- `scripts/lib/initramfs.sh:820`: Validation debug prints (input, PWD, PROJECT_ROOT, resolved paths)
- `scripts/lib/initramfs.sh:691`: Pre-CPIO sanity checks with file listings
### Common Issues and Solutions
**"Initramfs directory not found"**
- **Cause:** INSTALL_DIR interpreted as relative in different CWD
- **Fix:** Already patched - paths normalized at startup
- **Check:** Look for "Validation debug:" logs showing resolved paths
**"INITRAMFS_ARCHIVE unbound"**
- **Cause:** Incremental build skipped initramfs_create stage
- **Fix:** Already patched - stages default INITRAMFS_ARCHIVE if unset
- **Check:** `scripts/build.sh:401` logs "defaulting INITRAMFS_ARCHIVE"
**"Module dependency resolution fails"**
- **Cause:** Container `/lib/modules/<FULL_VERSION>` missing or stale
- **Fix:** `./scripts/rebuild-after-zinit.sh --refresh-container-mods`
- **Check:** Ensure `kernel_modules` stage completed successfully
**"Passwordless root not working"**
- **Cause:** Branding disabled or shadow file not updated
- **Fix:** Check ZEROOS_BRANDING=true in logs, verify /etc/shadow has `root::`
- **Verify:** Extract initramfs and `grep '^root:' etc/shadow` (example below)
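A minimal way to run that check from the repository root (assumes the standalone archive in `dist/` exists):
```bash
# Extract the initramfs into a scratch directory and inspect /etc/shadow.
mkdir -p /tmp/initramfs-check
xz -dc dist/initramfs.cpio.xz | (cd /tmp/initramfs-check && cpio -idm)
grep '^root:' /tmp/initramfs-check/etc/shadow   # expect an empty password field: root::...
```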
## Important Files Quick Reference
### Must-Read Before Editing
- `scripts/build.sh` - Orchestrator with precise stage order
- `scripts/lib/common.sh` - Path normalization, logging, utilities
- `scripts/lib/stages.sh` - Stage tracking logic
- `config/build.conf` - Version pins, directory settings, flags
### Safe to Edit
- `config/zinit/*.yaml` - Service definitions
- `config/zinit/init/*.sh` - Runtime initialization scripts
- `config/modules.conf` - Module lists (stage1/stage2)
- `config/firmware.conf` - Initramfs firmware selection
- `config/packages.list` - Alpine packages
### Generated (Never Edit)
- `initramfs/` - Assembled initramfs tree
- `components/` - Downloaded component sources
- `kernel/` - Kernel source tree
- `dist/` - Build outputs
- `.build-stages/` - Completion markers
## Testing Architecture
**No built-in tests during build** - Tests run separately via `runit.sh`
**Why?**
- Build is for assembly, not validation
- Tests require hypervisor (QEMU/cloud-hypervisor)
- Separation allows faster iteration
**runit.sh features:**
- Multi-disk support (qcow2 for QEMU, raw for cloud-hypervisor)
- Network bridge/TAP configuration
- Persistent volumes (reset with `--reset`)
- Serial console logging
## Quick Command Reference
```bash
# Build
./scripts/build.sh # Incremental build
./scripts/build.sh --clean # Clean build
./scripts/build.sh --show-stages # Show completion status
./scripts/build.sh --rebuild-from=zinit_setup # Rebuild from stage
DEBUG=1 ./scripts/build.sh # Verbose output
# Rebuild helpers
./scripts/rebuild-after-zinit.sh # After zinit/init/modules changes
./scripts/rebuild-after-zinit.sh --with-kernel # Also rebuild kernel
./scripts/rebuild-after-zinit.sh --verify-only # Dry-run
# Testing
./runit.sh # QEMU test
./runit.sh --hypervisor ch # cloud-hypervisor test
./runit.sh --help # All options
# Dev container
./scripts/dev-container.sh start # Start persistent container
./scripts/dev-container.sh shell # Enter shell
./scripts/dev-container.sh build # Build inside container
./scripts/dev-container.sh stop # Stop container
# Cleanup
./scripts/clean.sh # Remove all generated files
rm -rf .build-stages/ # Reset stage markers
```
## Environment Variables
**Build control:**
- `DEBUG=1` - Enable verbose logging
- `FORCE_REBUILD=true` - Force rebuild all stages
- `REBUILD_FROM_STAGE=<name>` - Rebuild from specific stage
**Version overrides:**
- `ALPINE_VERSION=3.22` - Alpine Linux version
- `KERNEL_VERSION=6.12.44` - Linux kernel version
- `RUST_TARGET=x86_64-unknown-linux-musl` - Rust compilation target
**Firmware tagging:**
- `FIRMWARE_TAG=20250908` - Firmware flist version tag
**S3 upload control:**
- `UPLOAD_KERNEL=true` - Upload versioned kernel to S3 (default: false)
- `UPLOAD_MANIFESTS=true` - Upload RFS flists to S3 (default: false)
- `KERNEL_SUBPATH=kernel` - S3 subpath for kernel uploads (default: kernel)
**S3 configuration:**
- See `config/rfs.conf` for S3 endpoint, credentials, paths
- Used by both RFS flist uploads and kernel uploads
## Documentation Hierarchy
**Start here:**
1. `README.md` - User-facing guide with features and setup
2. This file (`claude.md`) - AI assistant context
**For development:**
3. `docs/NOTES.md` - Operational knowledge, troubleshooting
4. `docs/AGENTS.md` - Quick agent reference
5. `docs/TODO.md` - Current work checklist with code links
**For deep dives:**
6. `docs/PROMPT.md` - Strict debugger agent mode (diagnostics-first)
7. `docs/rfs-flists.md` - RFS design and implementation
8. `docs/review-rfs-integration.md` - Integration points analysis
9. `docs/depmod-behavior.md` - Module dependency deep dive
**Historical:**
10. `IMPLEMENTATION_PLAN.md` - Original design document
11. `GITHUB_ACTIONS.md` - CI/CD setup guide
## Project Philosophy
1. **Reproducibility:** Container-based builds ensure identical results
2. **Incrementality:** Stage markers minimize rebuild time
3. **Diagnostics-first:** Log before fixing, validate assumptions
4. **Minimal intervention:** Alpine + zinit only, no systemd/OpenRC
5. **Size-optimized:** Aggressive cleanup, strip, UPX compression
6. **Remote-ready:** RFS enables lazy-loading for extended hardware support
## Commit Message Guidelines
**DO NOT add Claude Code or AI assistant references to commit messages.**
Keep commits clean and professional:
- Focus on what changed and why
- Use conventional commit prefixes: `build:`, `docs:`, `fix:`, `feat:`, `refactor:`
- Be concise but descriptive
- No emoji unless project convention
- No "Generated with Claude Code" or "Co-Authored-By: Claude" footers
**Good example:**
```
build: remove testing.sh in favor of runit.sh
Replace inline boot testing with standalone runit.sh runner.
Tests now run separately from build pipeline for faster iteration.
```
**Bad example:**
```
build: remove testing.sh 🤖
Made some changes to testing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
## Keywords for Quick Search
- **Build fails:** Check `DEBUG=1` logs, stage completion markers, container state
- **Module issues:** `kernel_modules` stage, `CONTAINER_MODULES_PATH`, depmod logs
- **Firmware missing:** `config/firmware.conf` for initramfs, RFS flist for runtime
- **zinit problems:** Service YAML syntax, dependency order, init script errors
- **Path errors:** Absolute path normalization in `common.sh:244`
- **Size too large:** Check cleanup stage, strip/UPX execution, package list
- **Container issues:** Rootless setup, subuid/subgid, podman vs docker
- **RFS mount fails:** S3 credentials, network readiness, flist manifest paths
- **Kernel upload:** `UPLOAD_KERNEL=true`, requires `config/rfs.conf`, MinIO client (`mcli`/`mc`)
- **Kernel index:** Auto-generated `kernels.txt`/`kernels.json` for dropdown UIs, updated on upload
---
**Last updated:** 2025-01-04
**Maintainer notes:** This file is the entry point for AI assistants. Keep it updated when architecture changes. Cross-reference with `docs/NOTES.md` for operational details.


@@ -61,6 +61,7 @@ ENABLE_STRIP="true"
ENABLE_UPX="true" ENABLE_UPX="true"
ENABLE_AGGRESSIVE_CLEANUP="true" ENABLE_AGGRESSIVE_CLEANUP="true"
ENABLE_2STAGE_MODULES="true" ENABLE_2STAGE_MODULES="true"
UPLOAD_KERNEL=true
# Debug and development # Debug and development
DEBUG_DEFAULT="0" DEBUG_DEFAULT="0"


@@ -4,8 +4,9 @@
# Git repositories to clone and build # Git repositories to clone and build
git zinit https://github.com/threefoldtech/zinit master build_zinit git zinit https://github.com/threefoldtech/zinit master build_zinit
git mycelium https://github.com/threefoldtech/mycelium v0.6.1 build_mycelium git mycelium https://github.com/threefoldtech/mycelium v0.6.1 build_mycelium
git zosstorage git@git.ourworld.tf:delandtj/zosstorage main build_zosstorage git zosstorage https://git.ourworld.tf/delandtj/zosstorage main build_zosstorage
git youki git@github.com:youki-dev/youki.git v0.5.7 build_youki
git rfs https://github.com/threefoldtech/rfs development build_rfs git rfs https://github.com/threefoldtech/rfs development build_rfs
# Pre-built releases to download # Pre-built releases to download
# release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs # release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs
release corex https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static 2.1.4 install_corex rename=corex release corex https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static 2.1.4 install_corex rename=corex


@@ -176,9 +176,12 @@ Stage System and Incremental Rebuilds
- Shows stage status before/after marker removal; no --rebuild-from is passed by default (relies on markers only). - Shows stage status before/after marker removal; no --rebuild-from is passed by default (relies on markers only).
- Manual minimal rebuild: - Manual minimal rebuild:
- Remove relevant .done files, e.g.: initramfs_create.done initramfs_test.done validation.done - Remove relevant .done files, e.g.: initramfs_create.done initramfs_test.done validation.done
- Rerun: DEBUG=1 ./scripts/build.sh --skip-tests - Rerun: DEBUG=1 ./scripts/build.sh
- Show status: - Show status:
- ./scripts/build.sh --show-stages - ./scripts/build.sh --show-stages
- Test built kernel:
- ./runit.sh --hypervisor qemu
- ./runit.sh --hypervisor ch --disks 5 --reset
Key Decisions (current) Key Decisions (current)
- Firmware selection for initramfs comes exclusively from [config/firmware.conf](config/firmware.conf); firmware hints in modules.conf are ignored to avoid duplication/mismatch. - Firmware selection for initramfs comes exclusively from [config/firmware.conf](config/firmware.conf); firmware hints in modules.conf are ignored to avoid duplication/mismatch.


@@ -107,7 +107,7 @@ Diagnostics-first workflow (strict)
- For any failure, first collect specific logs: - For any failure, first collect specific logs:
- Enable DEBUG=1 for verbose logs. - Enable DEBUG=1 for verbose logs.
- Re-run only the impacted stage if possible: - Re-run only the impacted stage if possible:
- Example: rm -f .build-stages/validation.done && DEBUG=1 ./scripts/build.sh --skip-tests - Example: rm -f .build-stages/validation.done && DEBUG=1 ./scripts/build.sh
- Use existing diagnostics: - Use existing diagnostics:
- Branding debug lines: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575) - Branding debug lines: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Validation debug lines (input, PWD, PROJECT_ROOT, INSTALL_DIR, resolved): [bash.initramfs_validate()](scripts/lib/initramfs.sh:799) - Validation debug lines (input, PWD, PROJECT_ROOT, INSTALL_DIR, resolved): [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
@@ -119,12 +119,15 @@ Common tasks and commands
- DEBUG=1 ./scripts/build.sh - DEBUG=1 ./scripts/build.sh
- Minimal rebuild of last steps: - Minimal rebuild of last steps:
- rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done - rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh --skip-tests - DEBUG=1 ./scripts/build.sh
- Validation only: - Validation only:
- rm -f .build-stages/validation.done - rm -f .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh --skip-tests - DEBUG=1 ./scripts/build.sh
- Show stage status: - Show stage status:
- ./scripts/build.sh --show-stages - ./scripts/build.sh --show-stages
- Test built kernel:
- ./runit.sh --hypervisor qemu
- ./runit.sh --hypervisor ch --disks 5
Checklists and helpers Checklists and helpers
@@ -154,7 +157,7 @@ C) Minimal rebuild after zinit/init/modules.conf changes
- Stage status is printed before/after marker removal; the helper avoids --rebuild-from by default to prevent running early stages. - Stage status is printed before/after marker removal; the helper avoids --rebuild-from by default to prevent running early stages.
- Manual fallback: - Manual fallback:
- rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done - rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh --skip-tests - DEBUG=1 ./scripts/build.sh
D) INITRAMFS_ARCHIVE unbound during kernel build stage D) INITRAMFS_ARCHIVE unbound during kernel build stage
- stage_kernel_build now defaults INITRAMFS_ARCHIVE if unset: - stage_kernel_build now defaults INITRAMFS_ARCHIVE if unset:


@@ -57,7 +57,7 @@ This canonical checklist is the single source of truth for ongoing work. It mirr
- Create: [bash.initramfs_create_cpio()](../scripts/lib/initramfs.sh:691) - Create: [bash.initramfs_create_cpio()](../scripts/lib/initramfs.sh:691)
- Validate: [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820) - Validate: [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820)
- [ ] QEMU / cloud-hypervisor smoke tests - [ ] QEMU / cloud-hypervisor smoke tests
- Test suite: [bash.testing_run_all()](../scripts/lib/testing.sh:299) - Test runner: [runit.sh](../runit.sh)
- [ ] Kernel embed path and versioning sanity - [ ] Kernel embed path and versioning sanity
- Embed config: [bash.kernel_modify_config_for_initramfs()](../scripts/lib/kernel.sh:130) - Embed config: [bash.kernel_modify_config_for_initramfs()](../scripts/lib/kernel.sh:130)
- Full version logic: [bash.kernel_get_full_version()](../scripts/lib/kernel.sh:14) - Full version logic: [bash.kernel_get_full_version()](../scripts/lib/kernel.sh:14)


@@ -13,7 +13,6 @@ Key sourced libraries:
- [initramfs.sh](scripts/lib/initramfs.sh) - [initramfs.sh](scripts/lib/initramfs.sh)
- [stages.sh](scripts/lib/stages.sh) - [stages.sh](scripts/lib/stages.sh)
- [docker.sh](scripts/lib/docker.sh) - [docker.sh](scripts/lib/docker.sh)
- [testing.sh](scripts/lib/testing.sh)
Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)): Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)):
1) alpine_extract, alpine_configure, alpine_packages 1) alpine_extract, alpine_configure, alpine_packages


@@ -15,7 +15,6 @@ source "${SCRIPT_DIR}/lib/alpine.sh"
source "${SCRIPT_DIR}/lib/components.sh" source "${SCRIPT_DIR}/lib/components.sh"
source "${SCRIPT_DIR}/lib/initramfs.sh" source "${SCRIPT_DIR}/lib/initramfs.sh"
source "${SCRIPT_DIR}/lib/kernel.sh" source "${SCRIPT_DIR}/lib/kernel.sh"
source "${SCRIPT_DIR}/lib/testing.sh"
# Build configuration loaded from config/build.conf via common.sh # Build configuration loaded from config/build.conf via common.sh
# Environment variables can override config file values # Environment variables can override config file values
@@ -42,7 +41,6 @@ ZINIT_CONFIG_DIR="${CONFIG_DIR}/zinit"
# Build options # Build options
USE_CONTAINER="${USE_CONTAINER:-auto}" USE_CONTAINER="${USE_CONTAINER:-auto}"
CLEAN_BUILD="${CLEAN_BUILD:-false}" CLEAN_BUILD="${CLEAN_BUILD:-false}"
SKIP_TESTS="${SKIP_TESTS:-false}"
KEEP_ARTIFACTS="${KEEP_ARTIFACTS:-false}" KEEP_ARTIFACTS="${KEEP_ARTIFACTS:-false}"
# Display usage information # Display usage information
@@ -54,7 +52,6 @@ Usage: $0 [OPTIONS]
Options: Options:
--clean Clean build (remove all artifacts first) --clean Clean build (remove all artifacts first)
--skip-tests Skip boot tests
--keep-artifacts Keep build artifacts after completion --keep-artifacts Keep build artifacts after completion
--force-rebuild Force rebuild all stages (ignore completion markers) --force-rebuild Force rebuild all stages (ignore completion markers)
--rebuild-from=STAGE Force rebuild from specific stage onward --rebuild-from=STAGE Force rebuild from specific stage onward
@@ -92,10 +89,6 @@ function parse_arguments() {
CLEAN_BUILD="true" CLEAN_BUILD="true"
shift shift
;; ;;
--skip-tests)
SKIP_TESTS="true"
shift
;;
--keep-artifacts) --keep-artifacts)
KEEP_ARTIFACTS="true" KEEP_ARTIFACTS="true"
shift shift
@@ -408,21 +401,19 @@ function main_build_process() {
log_debug "stage_kernel_build: defaulting INITRAMFS_ARCHIVE=${INITRAMFS_ARCHIVE}" log_debug "stage_kernel_build: defaulting INITRAMFS_ARCHIVE=${INITRAMFS_ARCHIVE}"
fi fi
# Ensure FULL_KERNEL_VERSION is set for versioned output filename
if [[ -z "${FULL_KERNEL_VERSION:-}" ]]; then
FULL_KERNEL_VERSION=$(kernel_get_full_version "$KERNEL_VERSION" "$KERNEL_CONFIG")
export FULL_KERNEL_VERSION
log_debug "stage_kernel_build: resolved FULL_KERNEL_VERSION=${FULL_KERNEL_VERSION}"
fi
kernel_build_with_initramfs "$KERNEL_CONFIG" "$INITRAMFS_ARCHIVE" "$kernel_output" kernel_build_with_initramfs "$KERNEL_CONFIG" "$INITRAMFS_ARCHIVE" "$kernel_output"
export KERNEL_OUTPUT="$kernel_output" export KERNEL_OUTPUT="$kernel_output"
} }
function stage_boot_tests() { # Boot tests removed - use runit.sh for testing instead
if [[ "$SKIP_TESTS" != "true" ]]; then
# Ensure KERNEL_OUTPUT is set (for incremental builds)
if [[ -z "${KERNEL_OUTPUT:-}" ]]; then
KERNEL_OUTPUT="${DIST_DIR}/vmlinuz.efi"
export KERNEL_OUTPUT
fi
testing_run_all "$KERNEL_OUTPUT"
fi
}
# Run all stages with incremental tracking # Run all stages with incremental tracking
stage_run "alpine_extract" stage_alpine_extract stage_run "alpine_extract" stage_alpine_extract
stage_run "alpine_configure" stage_alpine_configure stage_run "alpine_configure" stage_alpine_configure
@@ -442,8 +433,7 @@ function main_build_process() {
stage_run "initramfs_create" stage_initramfs_create stage_run "initramfs_create" stage_initramfs_create
stage_run "initramfs_test" stage_initramfs_test stage_run "initramfs_test" stage_initramfs_test
stage_run "kernel_build" stage_kernel_build stage_run "kernel_build" stage_kernel_build
stage_run "boot_tests" stage_boot_tests
# Calculate build time # Calculate build time
local end_time=$(date +%s) local end_time=$(date +%s)
local build_time=$((end_time - start_time)) local build_time=$((end_time - start_time))
@@ -501,16 +491,13 @@ function main() {
log_info "Starting container build" log_info "Starting container build"
docker_detect_runtime docker_detect_runtime
docker_build_container docker_build_container
# Pass through relevant arguments to container # Pass through relevant arguments to container
local container_args="" local container_args=""
if [[ "$SKIP_TESTS" == "true" ]]; then
container_args="$container_args --skip-tests"
fi
if [[ "$KEEP_ARTIFACTS" == "true" ]]; then if [[ "$KEEP_ARTIFACTS" == "true" ]]; then
container_args="$container_args --keep-artifacts" container_args="$container_args --keep-artifacts"
fi fi
docker_run_build "./scripts/build.sh${container_args}" docker_run_build "./scripts/build.sh${container_args}"
else else
log_error "Container runtime required (podman or docker)" log_error "Container runtime required (podman or docker)"


@@ -18,7 +18,7 @@ export DEBUG="${DEBUG:-1}"
source "${SCRIPT_DIR}/lib/common.sh" source "${SCRIPT_DIR}/lib/common.sh"
function show_usage() { function show_usage() {
cat << EOF cat <<EOF
Zero OS Development Container Manager Zero OS Development Container Manager
Usage: $0 [COMMAND] Usage: $0 [COMMAND]
@@ -70,10 +70,10 @@ function ensure_builder_image() {
function dev_container_start() { function dev_container_start() {
section_header "Starting Development Container" section_header "Starting Development Container"
# Ensure builder image exists (handles clean --all case and short-name policy) # Ensure builder image exists (handles clean --all case and short-name policy)
ensure_builder_image ensure_builder_image
# Check if container already exists # Check if container already exists
if podman container exists "$CONTAINER_NAME" 2>/dev/null; then if podman container exists "$CONTAINER_NAME" 2>/dev/null; then
if podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}' | grep -q "running"; then if podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}' | grep -q "running"; then
@@ -85,15 +85,16 @@ function dev_container_start() {
return 0 return 0
fi fi
fi fi
log_info "Creating new development container: ${CONTAINER_NAME}" log_info "Creating new development container: ${CONTAINER_NAME}"
# Create persistent container with all necessary mounts and environment # Create persistent container with all necessary mounts and environment
local podman_args=( local podman_args=(
run -d run -d
--name "$CONTAINER_NAME" --name "$CONTAINER_NAME"
--privileged --privileged
-v "${PROJECT_ROOT}:/workspace" -v "${PROJECT_ROOT}:/workspace"
-v "$HOME/.ssh:root/.ssh"
-w /workspace -w /workspace
-e DEBUG=1 -e DEBUG=1
-e ALPINE_VERSION=3.22 -e ALPINE_VERSION=3.22
@@ -115,7 +116,7 @@ function dev_container_start() {
) )
safe_execute podman "${podman_args[@]}" safe_execute podman "${podman_args[@]}"
log_info "Development container started successfully" log_info "Development container started successfully"
log_info "Container name: ${CONTAINER_NAME}" log_info "Container name: ${CONTAINER_NAME}"
log_info "Access with: $0 shell" log_info "Access with: $0 shell"
@@ -123,7 +124,7 @@ function dev_container_start() {
function dev_container_stop() { function dev_container_stop() {
section_header "Stopping Development Container" section_header "Stopping Development Container"
if podman container exists "$CONTAINER_NAME" 2>/dev/null; then if podman container exists "$CONTAINER_NAME" 2>/dev/null; then
log_info "Stopping development container: ${CONTAINER_NAME}" log_info "Stopping development container: ${CONTAINER_NAME}"
safe_execute podman stop "$CONTAINER_NAME" safe_execute podman stop "$CONTAINER_NAME"
@@ -135,17 +136,17 @@ function dev_container_stop() {
function dev_container_shell() { function dev_container_shell() {
section_header "Entering Development Container Shell" section_header "Entering Development Container Shell"
if ! podman container exists "$CONTAINER_NAME" 2>/dev/null; then if ! podman container exists "$CONTAINER_NAME" 2>/dev/null; then
log_info "Development container not found, starting..." log_info "Development container not found, starting..."
dev_container_start dev_container_start
fi fi
if ! podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}' | grep -q "running"; then if ! podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}' | grep -q "running"; then
log_info "Starting stopped development container" log_info "Starting stopped development container"
safe_execute podman start "$CONTAINER_NAME" safe_execute podman start "$CONTAINER_NAME"
fi fi
log_info "Entering container shell (exit with 'exit' or Ctrl+D)" log_info "Entering container shell (exit with 'exit' or Ctrl+D)"
# Use direct execution for interactive shell (don't use safe_execute) # Use direct execution for interactive shell (don't use safe_execute)
exec podman exec -it "$CONTAINER_NAME" /bin/bash exec podman exec -it "$CONTAINER_NAME" /bin/bash
@@ -153,56 +154,56 @@ function dev_container_shell() {
function dev_container_build() { function dev_container_build() {
section_header "Running Build in Development Container" section_header "Running Build in Development Container"
if ! podman container exists "$CONTAINER_NAME" 2>/dev/null; then if ! podman container exists "$CONTAINER_NAME" 2>/dev/null; then
log_info "Development container not found, starting..." log_info "Development container not found, starting..."
dev_container_start dev_container_start
fi fi
if ! podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}' | grep -q "running"; then if ! podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}' | grep -q "running"; then
log_info "Starting stopped development container" log_info "Starting stopped development container"
safe_execute podman start "$CONTAINER_NAME" safe_execute podman start "$CONTAINER_NAME"
fi fi
log_info "Running build in persistent container (real-time output)" log_info "Running build in persistent container (real-time output)"
log_info "Command: podman exec $CONTAINER_NAME ./scripts/build.sh $*" log_info "Command: podman exec $CONTAINER_NAME ./scripts/build.sh $*"
# Use direct execution to show real-time output (bypass safe_execute) # Use direct execution to show real-time output (bypass safe_execute)
podman exec "$CONTAINER_NAME" ./scripts/build.sh "$@" podman exec "$CONTAINER_NAME" ./scripts/build.sh "$@"
local exit_code=$? local exit_code=$?
if [[ $exit_code -eq 0 ]]; then if [[ $exit_code -eq 0 ]]; then
log_info "Build completed successfully in container" log_info "Build completed successfully in container"
else else
log_error "Build failed in container with exit code: $exit_code" log_error "Build failed in container with exit code: $exit_code"
fi fi
return $exit_code return $exit_code
} }
function dev_container_clean() { function dev_container_clean() {
section_header "Cleaning Development Container" section_header "Cleaning Development Container"
if podman container exists "$CONTAINER_NAME" 2>/dev/null; then if podman container exists "$CONTAINER_NAME" 2>/dev/null; then
log_info "Removing existing development container" log_info "Removing existing development container"
safe_execute podman rm -f "$CONTAINER_NAME" safe_execute podman rm -f "$CONTAINER_NAME"
fi fi
log_info "Starting fresh development container" log_info "Starting fresh development container"
dev_container_start dev_container_start
} }
function dev_container_status() { function dev_container_status() {
section_header "Development Container Status" section_header "Development Container Status"
if podman container exists "$CONTAINER_NAME" 2>/dev/null; then if podman container exists "$CONTAINER_NAME" 2>/dev/null; then
local status=$(podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}') local status=$(podman container inspect "$CONTAINER_NAME" --format '{{.State.Status}}')
local created=$(podman container inspect "$CONTAINER_NAME" --format '{{.Created}}') local created=$(podman container inspect "$CONTAINER_NAME" --format '{{.Created}}')
log_info "Container: ${CONTAINER_NAME}" log_info "Container: ${CONTAINER_NAME}"
log_info "Status: ${status}" log_info "Status: ${status}"
log_info "Created: ${created}" log_info "Created: ${created}"
if [[ "$status" == "running" ]]; then if [[ "$status" == "running" ]]; then
log_info "✓ Ready for development" log_info "✓ Ready for development"
else else
@@ -216,7 +217,7 @@ function dev_container_status() {
function dev_container_logs() { function dev_container_logs() {
section_header "Development Container Logs" section_header "Development Container Logs"
if podman container exists "$CONTAINER_NAME" 2>/dev/null; then if podman container exists "$CONTAINER_NAME" 2>/dev/null; then
safe_execute podman logs "$CONTAINER_NAME" safe_execute podman logs "$CONTAINER_NAME"
else else
@@ -228,39 +229,39 @@ function dev_container_logs() {
# Main function # Main function
function main() { function main() {
local command="${1:-help}" local command="${1:-help}"
case "$command" in case "$command" in
start) start)
dev_container_start dev_container_start
;; ;;
stop) stop)
dev_container_stop dev_container_stop
;; ;;
shell) shell)
dev_container_shell dev_container_shell
;; ;;
build) build)
shift shift
dev_container_build "$@" dev_container_build "$@"
;; ;;
clean) clean)
dev_container_clean dev_container_clean
;; ;;
status) status)
dev_container_status dev_container_status
;; ;;
logs) logs)
dev_container_logs dev_container_logs
;; ;;
help|--help|-h) help | --help | -h)
show_usage show_usage
;; ;;
*) *)
log_error "Unknown command: $command" log_error "Unknown command: $command"
show_usage show_usage
exit 1 exit 1
;; ;;
esac esac
} }
main "$@" main "$@"


@@ -195,6 +195,26 @@ function get_file_size() {
fi fi
} }
# Get short git commit hash from a git repository directory
function get_git_commit_hash() {
local repo_dir="$1"
local short="${2:-true}" # Default to short hash
if [[ ! -d "$repo_dir/.git" ]]; then
echo "unknown"
return 1
fi
local hash
if [[ "$short" == "true" ]]; then
hash=$(cd "$repo_dir" && git rev-parse --short HEAD 2>/dev/null || echo "unknown")
else
hash=$(cd "$repo_dir" && git rev-parse HEAD 2>/dev/null || echo "unknown")
fi
echo "$hash"
}
# Wait for file to exist with timeout # Wait for file to exist with timeout
function wait_for_file() { function wait_for_file() {
local file="$1" local file="$1"

File diff suppressed because it is too large


@@ -223,20 +223,219 @@ function kernel_build_with_initramfs() {
output_dir=$(dirname "$output_abs") output_dir=$(dirname "$output_abs")
safe_mkdir "$output_dir" safe_mkdir "$output_dir"
safe_copy "$kernel_image" "$output_abs" safe_copy "$kernel_image" "$output_abs"
# Also copy with versioned filename including kernel version and zinit hash
local full_kernel_version="${FULL_KERNEL_VERSION:-unknown}"
local zinit_hash="unknown"
local zinit_dir="${COMPONENTS_DIR:-${PROJECT_ROOT}/components}/zinit"
if [[ -d "$zinit_dir/.git" ]]; then
zinit_hash=$(get_git_commit_hash "$zinit_dir")
else
log_warn "zinit git directory not found at ${zinit_dir}, using 'unknown' for hash"
fi
# Create versioned filename: vmlinuz-{VERSION}-{ZINIT_HASH}.efi
local versioned_name="vmlinuz-${full_kernel_version}-${zinit_hash}.efi"
local versioned_output="${output_dir}/${versioned_name}"
safe_copy "$kernel_image" "$versioned_output"
# Verify final kernel # Verify final kernel
local kernel_size local kernel_size
kernel_size=$(get_file_size "$output_abs") kernel_size=$(get_file_size "$output_abs")
local versioned_size
versioned_size=$(get_file_size "$versioned_output")
log_info "Kernel build complete:" log_info "Kernel build complete:"
log_info " Output file: ${output_abs}" log_info " Output file: ${output_abs}"
log_info " Versioned: ${versioned_output}"
log_info " Kernel size: ${kernel_size}" log_info " Kernel size: ${kernel_size}"
log_info " Version: ${full_kernel_version}"
log_info " zinit hash: ${zinit_hash}"
# Verify initramfs is embedded # Verify initramfs is embedded
if strings "$output_file" | grep -q "initramfs"; then if strings "$output_file" | grep -q "initramfs"; then
log_info "✓ Initramfs appears to be embedded in kernel" log_info "✓ Initramfs appears to be embedded in kernel"
else else
log_warn "Initramfs embedding verification inconclusive" log_warn "Initramfs embedding verification inconclusive"
fi fi
# Upload versioned kernel to S3 if enabled
kernel_upload_to_s3 "$versioned_output" "$full_kernel_version" "$zinit_hash"
}
# Upload versioned kernel to S3 using MinIO client (mcli/mc)
function kernel_upload_to_s3() {
local kernel_file="$1"
local kernel_version="$2"
local zinit_hash="$3"
section_header "Uploading Kernel to S3"
# Check if upload is enabled
if [[ "${UPLOAD_KERNEL:-false}" != "true" ]]; then
log_info "UPLOAD_KERNEL not enabled; skipping kernel upload"
return 0
fi
# Verify kernel file exists
if [[ ! -f "$kernel_file" ]]; then
log_error "Kernel file not found: ${kernel_file}"
return 1
fi
# Load S3 configuration from rfs.conf
local rfs_conf="${PROJECT_ROOT}/config/rfs.conf"
local rfs_conf_example="${PROJECT_ROOT}/config/rfs.conf.example"
if [[ -f "$rfs_conf" ]]; then
# shellcheck source=/dev/null
source "$rfs_conf"
log_info "Loaded S3 config from: ${rfs_conf}"
elif [[ -f "$rfs_conf_example" ]]; then
# shellcheck source=/dev/null
source "$rfs_conf_example"
log_warn "Using example S3 config: ${rfs_conf_example}"
else
log_error "No S3 config found (config/rfs.conf or config/rfs.conf.example)"
return 1
fi
# Validate required S3 variables
for var in S3_ENDPOINT S3_BUCKET S3_PREFIX S3_ACCESS_KEY S3_SECRET_KEY; do
if [[ -z "${!var}" ]]; then
log_error "Missing required S3 variable: ${var}"
return 1
fi
done
# Detect MinIO client binary (mcli or mc)
local mcli_bin=""
if command -v mcli >/dev/null 2>&1; then
mcli_bin="mcli"
elif command -v mc >/dev/null 2>&1; then
mcli_bin="mc"
else
log_warn "MinIO Client not found (expected mcli or mc); skipping kernel upload"
return 0
fi
log_info "Using MinIO client: ${mcli_bin}"
# Setup S3 alias
log_info "Configuring S3 alias..."
safe_execute "${mcli_bin}" alias set rfs "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
# Construct destination path: rfs/{bucket}/{prefix}/kernel/{versioned_filename}
local kernel_filename
kernel_filename=$(basename "$kernel_file")
local kernel_subpath="${KERNEL_SUBPATH:-kernel}"
local mcli_dst="rfs/${S3_BUCKET}/${S3_PREFIX%/}/${kernel_subpath%/}/${kernel_filename}"
# Upload kernel
log_info "Uploading: ${kernel_file} -> ${mcli_dst}"
safe_execute "${mcli_bin}" cp "${kernel_file}" "${mcli_dst}"
log_info "✓ Kernel uploaded successfully"
log_info " Version: ${kernel_version}"
log_info " zinit: ${zinit_hash}"
log_info " S3 path: ${mcli_dst}"
# Generate and upload kernel index
kernel_generate_index "${mcli_bin}" "${S3_BUCKET}" "${S3_PREFIX}" "${kernel_subpath}"
}
# Generate kernel index file from S3 listing and upload it
function kernel_generate_index() {
local mcli_bin="$1"
local bucket="$2"
local prefix="$3"
local kernel_subpath="$4"
section_header "Generating Kernel Index"
# Construct S3 path for listing
local s3_path="rfs/${bucket}/${prefix%/}/${kernel_subpath%/}/"
log_info "Listing kernels from: ${s3_path}"
# List all files in the kernel directory
local ls_output
if ! ls_output=$("${mcli_bin}" ls "${s3_path}" 2>&1); then
log_warn "Failed to list S3 kernel directory, index not generated"
log_debug "mcli ls output: ${ls_output}"
return 0
fi
# Parse output and extract kernel filenames matching vmlinuz-*
local kernels=()
while IFS= read -r line; do
# mcli ls output format: [DATE TIME TZ] SIZE FILENAME
# Extract filename (last field)
local filename
filename=$(echo "$line" | awk '{print $NF}')
# Filter for vmlinuz files (both .efi and without extension)
if [[ "$filename" =~ ^vmlinuz-.* ]]; then
kernels+=("$filename")
fi
done <<< "$ls_output"
if [[ ${#kernels[@]} -eq 0 ]]; then
log_warn "No kernels found in S3 path: ${s3_path}"
return 0
fi
log_info "Found ${#kernels[@]} kernel(s)"
# Create index files in dist directory
local index_dir="${DIST_DIR:-${PROJECT_ROOT}/dist}"
local text_index="${index_dir}/kernels.txt"
local json_index="${index_dir}/kernels.json"
# Generate text index (one kernel per line, sorted)
printf "%s\n" "${kernels[@]}" | sort -r > "$text_index"
log_info "Created text index: ${text_index}"
# Generate JSON index (array of kernel filenames)
{
echo "{"
echo " \"kernels\": ["
local first=true
for kernel in $(printf "%s\n" "${kernels[@]}" | sort -r); do
if [[ "$first" == "true" ]]; then
first=false
else
echo ","
fi
printf " \"%s\"" "$kernel"
done
echo ""
echo " ],"
echo " \"updated\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\","
echo " \"count\": ${#kernels[@]}"
echo "}"
} > "$json_index"
log_info "Created JSON index: ${json_index}"
# Upload both index files to S3
log_info "Uploading kernel index files to S3..."
local text_dst="${s3_path}kernels.txt"
local json_dst="${s3_path}kernels.json"
if safe_execute "${mcli_bin}" cp "$text_index" "$text_dst"; then
log_info "✓ Uploaded text index: ${text_dst}"
else
log_warn "Failed to upload text index"
fi
if safe_execute "${mcli_bin}" cp "$json_index" "$json_dst"; then
log_info "✓ Uploaded JSON index: ${json_dst}"
else
log_warn "Failed to upload JSON index"
fi
log_info "Kernel index generation complete"
} }
# Build and install modules in container for proper dependency resolution # Build and install modules in container for proper dependency resolution


@@ -218,13 +218,9 @@ build_from_args=()
if in_container; then if in_container; then
# Run directly when already inside the dev/build container # Run directly when already inside the dev/build container
if [[ "$run_tests" -eq 1 ]]; then # Note: Tests are run separately using runit.sh, not during build
log "Including boot tests (in-container)" log "Running rebuild (in-container) - use runit.sh for testing"
DEBUG=1 "${PROJECT_ROOT}/scripts/build.sh" "${build_from_args[@]}" "${extra_args[@]}" DEBUG=1 "${PROJECT_ROOT}/scripts/build.sh" "${build_from_args[@]}" "${extra_args[@]}"
else
log "Skipping boot tests (in-container)"
DEBUG=1 "${PROJECT_ROOT}/scripts/build.sh" --skip-tests "${build_from_args[@]}" "${extra_args[@]}"
fi
else else
# Not in container: delegate to dev-container manager which ensures container exists and is running # Not in container: delegate to dev-container manager which ensures container exists and is running
devctl="${PROJECT_ROOT}/scripts/dev-container.sh" devctl="${PROJECT_ROOT}/scripts/dev-container.sh"
@@ -234,11 +230,7 @@ else
exit 1 exit 1
fi fi
if [[ "$run_tests" -eq 1 ]]; then # Note: Tests are run separately using runit.sh, not during build
log "Including boot tests via dev-container" log "Running rebuild via dev-container - use runit.sh for testing"
"$devctl" build "${build_from_args[@]}" "${extra_args[@]}" "$devctl" build "${build_from_args[@]}" "${extra_args[@]}"
else
log "Skipping boot tests via dev-container"
"$devctl" build --skip-tests "${build_from_args[@]}" "${extra_args[@]}"
fi
fi fi