# Zero OS Alpine Initramfs Builder
A comprehensive build system for creating a custom Alpine Linux 3.22 x86_64 initramfs with zinit process management, designed for Zero OS deployment.
## Features
- Alpine Linux 3.22 miniroot as base system
- zinit process manager (complete OpenRC replacement)
- Rootless containers (Docker/Podman compatible)
- Rust components with musl targeting (zinit, rfs, mycelium, zosstorage)
- Aggressive optimization (strip + UPX compression)
- 2-stage module loading for hardware support
- GitHub Actions compatible build pipeline
- Final output: `vmlinuz.efi` with embedded `initramfs.cpio.xz`
## Quick Start
### Prerequisites
#### Ubuntu/Debian

```bash
sudo apt-get update
sudo apt-get install -y \
    build-essential \
    rustc \
    cargo \
    upx-ucl \
    binutils \
    git \
    wget \
    qemu-system-x86 \
    podman

# Add Rust musl target
rustup target add x86_64-unknown-linux-musl
sudo apt-get install -y musl-tools
```
#### Alpine Linux

```bash
apk add --no-cache \
    build-base \
    rust \
    cargo \
    upx \
    git \
    wget \
    qemu-system-x86 \
    podman

# Add Rust musl target
rustup target add x86_64-unknown-linux-musl
```
### Rootless Container Setup

For rootless Docker/Podman support:

```bash
# Configure subuid/subgid (if not already configured)
echo "$(whoami):100000:65536" | sudo tee -a /etc/subuid
echo "$(whoami):100000:65536" | sudo tee -a /etc/subgid

# Verify setup
podman system info
```
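If `podman system info` reports mapping errors, an additional illustrative check is to inspect the UID map inside a podman user namespace:

```bash
# A second map line spanning 65536 IDs shows the /etc/subuid entry is active
podman unshare cat /proc/self/uid_map
```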
### Build

```bash
# Clone the repository
git clone <repository-url>
cd zosbuilder

# Make scripts executable
chmod +x scripts/build.sh scripts/clean.sh

# Build initramfs
./scripts/build.sh

# Output will be in dist/
ls -la dist/
# vmlinuz.efi       - kernel with embedded initramfs
# initramfs.cpio.xz - standalone initramfs archive
```
## Project Structure

```
zosbuilder/
├── config/
│   ├── zinit/              # zinit service definitions
│   │   ├── services/       # individual service files
│   │   └── zinit.conf      # main zinit configuration
│   ├── packages.list       # Alpine packages to install
│   ├── sources.conf        # components to build (ThreeFold)
│   ├── kernel.config       # Linux kernel configuration
│   └── modules.conf        # 2-stage module loading
├── configs/                # existing configurations (migrated)
├── scripts/
│   ├── lib/
│   │   ├── docker.sh       # container management
│   │   ├── alpine.sh       # Alpine operations
│   │   ├── components.sh   # source building
│   │   ├── initramfs.sh    # assembly & optimization
│   │   └── kernel.sh       # kernel building
│   ├── build.sh            # main orchestrator
│   └── clean.sh            # cleanup script
├── initramfs/              # build output (generated)
├── components/             # component sources (generated)
├── kernel/                 # kernel source (generated)
└── dist/                   # final artifacts (generated)
```
## Configuration
### Component Sources (`config/sources.conf`)

Define components to download and build:

```
# Format: TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA_OPTIONS]

# Git repositories (Rust components with musl)
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs

# Pre-built releases
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex
```
### Package List (`config/packages.list`)

Alpine packages to install (no OpenRC):

```
# Core system
alpine-baselayout
busybox
musl

# Hardware detection & modules
eudev
eudev-hwids
kmod

# Networking
iproute2
ethtool
dhcpcd

# Filesystems
btrfs-progs
dosfstools

# Security & SSH
haveged
openssh-server

# Tools
zellij
tcpdump
bmon
```
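The build scripts drive this installation; for orientation, here is a minimal sketch of how a package list like this can be installed into an initramfs root with apk's `--root` mode (the exact invocation in `scripts/lib/alpine.sh` may differ):

```bash
# Sketch: install packages from config/packages.list into ./initramfs
# (Alpine 3.22 main repo; signing keys must be available to apk,
# e.g. via --keys-dir, or verification relaxed with --allow-untrusted).
apk add --root "$PWD/initramfs" --initdb --no-cache \
    -X https://dl-cdn.alpinelinux.org/alpine/v3.22/main \
    $(grep -vE '^\s*(#|$)' config/packages.list)
```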
### Module Loading (`config/modules.conf`)

2-stage hardware module loading:

```
# Stage 1: critical boot modules
stage1:virtio_net
stage1:virtio_scsi
stage1:virtio_blk
stage1:e1000
stage1:e1000e

# Stage 2: extended hardware support
stage2:igb
stage2:ixgbe
stage2:i40e
stage2:r8169
stage2:bnx2
stage2:bnx2x
```
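How the stages are consumed is up to the init units; a minimal sketch of a stage-1 pass over this file (the real loader is wired through the zinit services):

```bash
# Load every module tagged stage1 in config/modules.conf
grep '^stage1:' config/modules.conf | cut -d: -f2 | while read -r mod; do
    modprobe "$mod" || echo "warning: failed to load ${mod}" >&2
done
```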
### zinit Configuration (`config/zinit/`)

#### Main config (`config/zinit/zinit.conf`)

```yaml
log_level: debug
init:
  - stage1-modules
  - stage2-modules
  - networking
  - services
```
#### Service definitions (`config/zinit/services/`)

Services are migrated from the existing `configs/zinit/` directory with proper initialization order.
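For illustration, a hypothetical service file in zinit's YAML format (the actual files ship in `config/zinit/services/`; the name, command, and dependency here are assumptions):

```yaml
# config/zinit/services/sshd.yaml (illustrative)
exec: /usr/sbin/sshd -D -e
after:
  - networking
```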
## Build Process
### Phase 1: Environment Setup
- Create build directories
- Install build dependencies
- Setup Rust musl target
### Phase 2: Alpine Base
- Download Alpine 3.22 miniroot
- Extract to initramfs directory
- Install packages from `config/packages.list`
- No OpenRC installation
### Phase 3: Component Building
- Parse `config/sources.conf`
- Download/clone sources to `components/`
- Build Rust components with musl:
  - zinit: standard cargo build
  - rfs: standard cargo build
  - mycelium: build in the `myceliumd/` subdirectory
  - zosstorage: build from the storage orchestration component for Zero-OS
- Install binaries to initramfs
### Phase 4: System Configuration

- Replace `/sbin/init` with zinit (see the sketch after this list)
- Copy zinit configuration
- Setup 2-stage module loading
- Configure system services
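A minimal sketch of the init replacement step, assuming zinit is installed at `/usr/bin/zinit` inside the initramfs root and reusing the `INSTALL_DIR` variable shown later in this README (the actual mechanism in `scripts/lib/initramfs.sh` may differ):

```bash
# Sketch: make zinit PID 1 by pointing /sbin/init at it
ln -sf /usr/bin/zinit "${INSTALL_DIR}/sbin/init"
```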
### Phase 5: Optimization

- Aggressive cleanup:
  - Remove docs, man pages, locales
  - Remove headers, development files
  - Remove APK cache
- Binary optimization:
  - Strip all executables and libraries
  - UPX compress all binaries
- Size verification
### Phase 6: Packaging

- Create `initramfs.cpio.xz` with XZ compression (see the sketch after this list)
- Build kernel with embedded initramfs
- Generate `vmlinuz.efi` (default kernel)
- Generate versioned kernel: `vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
- Optionally upload the versioned kernel to S3 (set `UPLOAD_KERNEL=true`)
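The archive creation follows the standard kernel initramfs recipe; a sketch of what it amounts to (the in-kernel XZ decompressor requires a CRC32 integrity check, hence `--check=crc32`):

```bash
# Sketch: pack the staged initramfs root into an XZ-compressed newc cpio
(cd initramfs && find . | cpio -o -H newc) | xz --check=crc32 -9 > dist/initramfs.cpio.xz
```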
## Testing

### QEMU Testing

```bash
# Boot test with QEMU (default)
./runit.sh

# With custom parameters
./runit.sh --hypervisor qemu --memory 2048 --disks 3
```
### cloud-hypervisor Testing

```bash
# Boot test with cloud-hypervisor
./runit.sh --hypervisor ch

# With disk reset
./runit.sh --hypervisor ch --reset --disks 5
```
### Advanced Options

```bash
# See all options
./runit.sh --help

# Custom disk size and bridge
./runit.sh --disk-size 20G --bridge zosbr --disks 4

# Environment variables
HYPERVISOR=ch NUM_DISKS=5 ./runit.sh
```
## Size Optimization
The build system achieves minimal size through:
### Package Selection
- Minimal Alpine packages (~50MB target)
- No OpenRC or systemd
- Essential tools only
### Binary Optimization

- **strip**: remove debug symbols
- **UPX**: maximum compression
- **musl static linking**: no runtime dependencies
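Per binary, the pass amounts to roughly the following (illustrative; the build scripts apply it across all executables and libraries):

```bash
# Remove all symbol and debug information, then compress in place
strip --strip-all initramfs/usr/bin/zinit
upx --best --lzma initramfs/usr/bin/zinit
```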
### Filesystem Cleanup
- Remove documentation
- Remove locales (except C)
- Remove development headers
- Remove package manager cache
### Expected Sizes
- Base Alpine: ~5MB
- With packages: ~25MB
- With components: ~40MB
- After optimization: ~15-20MB
- Final `initramfs.cpio.xz`: ~8-12MB
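To check a build against these targets (illustrative commands):

```bash
# Compressed size and compression ratio of the final archive
ls -lh dist/initramfs.cpio.xz
xz --list dist/initramfs.cpio.xz

# Uncompressed size of the staged initramfs root
du -sh initramfs/
```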
## GitHub Actions Integration

See `GITHUB_ACTIONS.md` for complete CI/CD setup.
### Basic Workflow

```yaml
name: Build Zero OS
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup rootless containers
        run: |
          echo "runner:100000:65536" | sudo tee -a /etc/subuid
          echo "runner:100000:65536" | sudo tee -a /etc/subgid
      - name: Build
        run: ./scripts/build.sh
      - name: Test
        run: ./runit.sh --hypervisor qemu
```
## Advanced Usage
### Custom Components

Add custom components to `config/sources.conf`:
```
# Custom Git component
git:myapp:https://github.com/user/myapp:v1.0:build_myapp

# Custom release
release:mytool:https://releases.example.com/mytool-x86_64:v2.0:install_mytool
```
Implement the build function in `scripts/lib/components.sh`:
```bash
function build_myapp() {
    local name="$1"
    local component_dir="$2"

    # Build inside the component's source directory
    cd "${component_dir}"

    # Custom build logic: static musl build
    export RUSTFLAGS="-C target-feature=+crt-static"
    cargo build --release --target x86_64-unknown-linux-musl

    # Install binary into the initramfs root
    cp target/x86_64-unknown-linux-musl/release/myapp "${INSTALL_DIR}/usr/bin/"
}
```
### S3 Uploads (Kernel & RFS Flists)

Automatically upload build artifacts to S3-compatible storage.

#### Configuration

Create `config/rfs.conf`:
```bash
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="YOUR_ACCESS_KEY"
S3_SECRET_KEY="YOUR_SECRET_KEY"
```
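These variables correspond to a MinIO Client alias; a hypothetical pre-flight check of the credentials (the alias name `zos-upload` is arbitrary, and `mc` must be installed):

```bash
# Register the endpoint and confirm the bucket/prefix is reachable
mc alias set zos-upload "https://s3.example.com:9000" "YOUR_ACCESS_KEY" "YOUR_SECRET_KEY"
mc ls zos-upload/zos/flists/zosbuilder/
```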
#### Upload Kernel

```bash
# Enable kernel upload
UPLOAD_KERNEL=true ./scripts/build.sh

# Custom kernel subpath (default: kernel)
KERNEL_SUBPATH=kernels UPLOAD_KERNEL=true ./scripts/build.sh
```
Uploaded files:

- `s3://{bucket}/{prefix}/kernel/vmlinuz-{VERSION}-{ZINIT_HASH}.efi` - versioned kernel
- `s3://{bucket}/{prefix}/kernel/kernels.txt` - text index (one kernel per line)
- `s3://{bucket}/{prefix}/kernel/kernels.json` - JSON index with metadata
**Index files:** The build automatically generates and uploads index files listing all available kernels in the S3 bucket. This enables:
- Easy kernel selection in web UIs (dropdown menus)
- Programmatic access without S3 API listing
- Metadata like upload timestamp and kernel count (JSON format)
JSON index format:

```json
{
  "kernels": [
    "vmlinuz-6.12.44-Zero-OS-abc1234.efi",
    "vmlinuz-6.12.44-Zero-OS-def5678.efi"
  ],
  "updated": "2025-01-04T12:00:00Z",
  "count": 2
}
```
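Because the index is a plain JSON object at a stable path, consumers can skip the S3 listing API entirely; for example (assumes path-style access on the example endpoint above, and `jq` installed):

```bash
# Fetch the kernel index and print one available kernel per line
curl -s "https://s3.example.com:9000/zos/flists/zosbuilder/kernel/kernels.json" \
    | jq -r '.kernels[]'
```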
#### Upload RFS Flists

```bash
# Enable flist uploads
UPLOAD_MANIFESTS=true ./scripts/build.sh
```

Uploaded as:

- `s3://{bucket}/{prefix}/manifests/modules-{VERSION}.fl`
- `s3://{bucket}/{prefix}/manifests/firmware-{TAG}.fl`
#### Requirements

- MinIO Client (`mcli` or `mc`) must be installed
- Valid S3 credentials in `config/rfs.conf`
### Container Builds

Build in an isolated container:

```bash
# Build container image
podman build -t zero-os-builder .

# Run build in container
podman run --rm \
    -v "$(pwd)":/workspace \
    -w /workspace \
    zero-os-builder \
    ./scripts/build.sh
```
### Cross-Platform Support (untested)

The build system is designed to support multiple architectures, but this path is currently untested:

```bash
# Build for different targets
export RUST_TARGET="aarch64-unknown-linux-musl"
export ALPINE_ARCH="aarch64"
./scripts/build.sh
```
## Troubleshooting

### Common Issues
#### Build Failures

```bash
# Clean and retry
./scripts/clean.sh
./scripts/build.sh

# Check dependencies
./scripts/build.sh --check-deps
```
#### Container Issues

```bash
# Verify rootless setup
podman system info

# Reset user namespace
podman system reset
```
#### Rust Build Issues

```bash
# Verify musl target
rustup target list --installed | grep musl

# Add if missing
rustup target add x86_64-unknown-linux-musl
```
### Debug Mode

```bash
# Enable verbose output
export DEBUG=1
./scripts/build.sh
```
## Contributing

- Fork the repository
- Create a feature branch
- Test thoroughly with both QEMU and cloud-hypervisor
- Ensure size optimization targets are met
- Submit a pull request with a detailed description
### Development Workflow

```bash
# Setup development environment
./scripts/dev-container.sh start

# Run incremental build
./scripts/build.sh

# Test with QEMU
./runit.sh --hypervisor qemu

# Test with cloud-hypervisor
./runit.sh --hypervisor ch
```
## License
[License information]
## Support

- Issues: GitHub Issues
- Documentation: see the `docs/` directory
- Examples: see the `examples/` directory