Summary
- Implemented a plain S3-only flist workflow (no web endpoint): rfs pack uploads blobs using write credentials, and the flist's route.url is patched to embed read-only S3 credentials so rfs mount reads directly from S3.
Changes
1) New RFS tooling (scripts/rfs/)
- common.sh:
  - Computes FULL_KERNEL_VERSION from the configs (no uname).
  - Loads the S3 config and constructs the pack store URI.
  - Builds the read-only S3 route URL and patches the flist (sqlite); see the sketch below.
  - Helpers to locate the modules/firmware trees and the rfs binary.
- pack-modules.sh:
  - Packs /lib/modules/<FULL_KERNEL_VERSION> to dist/flists/modules-<FULL_KERNEL_VERSION>.fl.
  - Patches the flist route to s3://READ:READ@host:port/ROUTE_PATH?region=ROUTE_REGION (defaults: /blobs, garage).
  - Optional upload of the .fl using the MinIO client (mcli/mc).
- pack-firmware.sh:
  - Sources firmware from $PROJECT_ROOT/firmware (fallback to initramfs/lib/firmware).
  - Packs to dist/flists/firmware-<TAG_OR_DATE>.fl (FIRMWARE_TAG or YYYYMMDD).
  - Patches the flist route to read-only S3; optional .fl upload via mcli/mc.
- verify-flist.sh:
  - rfs flist inspect/tree; optional mount test (best effort).
- patch-stores.sh:
  - Helper to patch stores (kept, though not used by default).
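For illustration, a minimal sketch of the route patch common.sh performs, assuming the .fl manifest's route.url column mentioned above (variable names here are placeholders, not the script's actual ones):

  # Build the read-only route URL and write it into the flist manifest.
  FL="dist/flists/modules-${FULL_KERNEL_VERSION}.fl"
  ROUTE_URL="s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${HOST}:${PORT}${ROUTE_PATH}?region=${ROUTE_REGION}"
  sqlite3 "$FL" "UPDATE route SET url = '${ROUTE_URL}';"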
2) Dev-container (Dockerfile)
- Added sqlite and the MinIO client package for manifest patching/upload (the mcli binary is expected at runtime; the scripts support both mcli and mc).
- Retains rustup and musl target for building rfs/zinit/mycelium.
3) Config and examples
- config/rfs.conf.example:
  - S3_ENDPOINT / S3_REGION / S3_BUCKET / S3_PREFIX
  - S3_ACCESS_KEY / S3_SECRET_KEY (write)
  - READ_ACCESS_KEY / READ_SECRET_KEY (read-only)
  - ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
  - MANIFESTS_SUBPATH, UPLOAD_MANIFESTS (optional mcli upload)
- config/rfs.conf updated by user with real values (not committed here; example included).
- config/modules.conf minor tweak (staged).
4) Zinit mount scripts (config/zinit/init/)
- firmware.sh:
  - Mounts firmware-latest.fl over /usr/lib/firmware using rfs mount (env override FIRMWARE_FLIST supported).
- modules.sh:
  - Mounts modules-$(uname -r).fl over /lib/modules/$(uname -r) (env override MODULES_FLIST supported).
- Both skip if the target is already mounted and respect the RFS_BIN env var (pattern sketched below).
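A minimal sketch of that shared pattern, using the modules.sh variant (the shipped scripts may differ in detail, and the default flist location here is an assumption):

  #!/bin/sh
  # Mount the modules flist over /lib/modules for the running kernel.
  KVER="$(uname -r)"
  FLIST="${MODULES_FLIST:-modules-${KVER}.fl}"   # default path is an assumption
  TARGET="/lib/modules/${KVER}"
  RFS="${RFS_BIN:-rfs}"
  mountpoint -q "$TARGET" && exit 0              # skip: target already mounted
  exec "$RFS" mount -m "$FLIST" "$TARGET"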
5) Documentation
- docs/rfs-flists.md:
  - End-to-end flow, S3-only route URL patching, mcli upload notes.
- docs/review-rfs-integration.md:
  - Integration points, build flow, and post-build standalone usage.
- docs/depmod-behavior.md:
  - depmod reads each module's .modinfo; recommends shipping prebuilt modules.*(.bin) indexes and using depmod -A only on mismatch (example below).
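To make the depmod recommendation concrete: depmod -A first checks whether any module is newer than modules.dep and exits without doing work when nothing changed, so it is cheap to run at boot:

  depmod -A "$KERNEL_FULL_VERSION"   # regenerates indexes only if a module is newer than modules.dep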
6) Utility
- scripts/functionlist.md synced with current functions.
Behavioral details
- Pack (write):
s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=REGION
- Flist route (read, post-patch):
s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION
Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage; ROUTE_ENDPOINT derived from S3_ENDPOINT if not set.
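As a sketch, both URIs can be assembled from the config values like this (the HOST:PORT extraction from S3_ENDPOINT is simplified):

  HOSTPORT="${S3_ENDPOINT#*://}"   # strip the scheme, keep host:port
  PACK_STORE="s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${HOSTPORT}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
  ROUTE_URL="s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${HOSTPORT}${ROUTE_PATH:-/blobs}?region=${ROUTE_REGION:-garage}"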
Runtime mount examples
- Modules:
rfs mount -m dist/flists/modules-6.12.44-Zero-OS.fl /lib/modules/6.12.44-Zero-OS
- Firmware:
rfs mount -m dist/flists/firmware-YYYYMMDD.fl /usr/lib/firmware
Notes
- FUSE policy: if an "allow_other" error occurs, enable user_allow_other in /etc/fuse.conf (one-liner after this list) or run the mounts as root.
- WEB_ENDPOINT rewrite is disabled by default (set WEB_ENDPOINT=""). Plain S3 route is embedded in flists.
- MinIO client binary in dev-container is mcli; scripts support mcli (preferred) and mc (fallback).
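For the FUSE note above, a possible one-liner (run as root; appends the option only if it is missing):

  grep -qx 'user_allow_other' /etc/fuse.conf || echo 'user_allow_other' >> /etc/fuse.conf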
Files added/modified
- Added: scripts/rfs/{common.sh,pack-modules.sh,pack-firmware.sh,verify-flist.sh,patch-stores.sh}
- Added: config/zinit/init/{firmware.sh,modules.sh}
- Added: docs/{rfs-flists.md,review-rfs-integration.md,depmod-behavior.md}
- Added: config/rfs.conf.example
- Modified: Dockerfile, scripts/functionlist.md, config/modules.conf, config/zinit/sshd-setup.yaml, .gitignore
Review: Current Build Flow and RFS Integration Hook Points
This document reviews the current Zero-OS Alpine initramfs build flow, identifies reliable sources for kernel versioning and artifacts, and specifies clean integration points for RFS flist generation and later runtime overmounts without modifying existing code paths.
Build flow overview
Primary orchestrator: scripts/build.sh
Key sourced libraries:
Main stages executed (incremental via stage_run()):
- alpine_extract, alpine_configure, alpine_packages
- alpine_firmware
- components_build, components_verify
- kernel_modules
- init_script, components_copy, zinit_setup
- modules_setup, modules_copy
- cleanup, validation
- initramfs_create, initramfs_test, kernel_build
- boot_tests
Where key artifacts come from
- Kernel full version:
  - Derived deterministically using kernel_get_full_version()
  - Computed as: KERNEL_VERSION from config/build.conf + CONFIG_LOCALVERSION from config/kernel.config (see the sketch after this list)
  - Example target: 6.12.44-Zero-OS
- Built modules in container:
  - Stage: kernel_build_modules()
  - Builds and installs into the container root: /lib/modules/<FULL_VERSION>
  - Runs depmod in the container and sets:
    - CONTAINER_MODULES_PATH=/lib/modules/<FULL_VERSION>
    - KERNEL_FULL_VERSION=<FULL_VERSION>
- Initramfs modules copy and metadata:
  - Stage: initramfs_copy_resolved_modules()
  - Copies selected modules and dependency metadata into the initramfs under initramfs/lib/modules/<FULL_VERSION>
- Firmware content:
  - Preferred (per user): a full tree at $root/firmware in the dev-container, intended to be packaged as-is
  - Fallback within the build flow: firmware packages installed by alpine_install_firmware() into initramfs/lib/firmware
- rfs binary:
  - Built via build_rfs() into components/rfs/target/x86_64-unknown-linux-musl/release/rfs
  - Also expected to be available on PATH inside the dev-container
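A minimal sketch of the version derivation noted above, assuming KERNEL_VERSION=... in config/build.conf and CONFIG_LOCALVERSION="..." in config/kernel.config (the canonical logic lives in kernel_get_full_version()):

  KERNEL_VERSION="$(sed -n 's/^KERNEL_VERSION=//p' config/build.conf | tr -d '"')"
  LOCALVERSION="$(sed -n 's/^CONFIG_LOCALVERSION=//p' config/kernel.config | tr -d '"')"
  FULL_VERSION="${KERNEL_VERSION}${LOCALVERSION}"   # e.g. 6.12.44-Zero-OS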
udev and module load sequencing at runtime
- zinit units present:
  - udevd: config/zinit/udevd.yaml
  - depmod: config/zinit/depmod.yaml
  - udev trigger: config/zinit/udev-trigger.yaml calling udev.sh
- initramfs module orchestration:
  - Module resolution logic: initramfs_setup_modules() and initramfs_resolve_module_dependencies()
  - Load scripts created for zinit:
    - stage1: initramfs_create_module_scripts() emits /etc/zinit/init/stage1-modules.sh
    - stage2 is currently disabled in config
Current integration gaps for RFS flists
- There is no existing code that:
  - Packs modules or firmware into RFS flists (.fl sqlite manifests)
  - Publishes the associated content-addressed blobs to a store
  - Uploads the .fl manifest to an S3 bucket (separate from the blob store)
  - Mounts these flists at runtime prior to udev coldplug
Reliable inputs for RFS pack
- Kernel full version: use the kernel_get_full_version() logic (never uname -r inside the container)
- Modules source tree candidates (priority; selection sketched after this list):
  - /lib/modules/<FULL_VERSION> (from kernel_build_modules())
  - initramfs/lib/modules/<FULL_VERSION> (if the container path is unavailable; less ideal)
- Firmware source tree candidates (priority):
  - $PROJECT_ROOT/firmware (externally provided tree; user-preferred)
  - initramfs/lib/firmware (APK-installed fallback)
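A sketch of that priority selection (PROJECT_ROOT and FULL_VERSION are assumed to be set by common.sh):

  # Prefer the container-installed modules tree; fall back to the initramfs copy.
  if [ -d "/lib/modules/${FULL_VERSION}" ]; then
      MODULES_SRC="/lib/modules/${FULL_VERSION}"
  else
      MODULES_SRC="${PROJECT_ROOT}/initramfs/lib/modules/${FULL_VERSION}"
  fi
  # Same priority idea for firmware.
  FIRMWARE_SRC="${PROJECT_ROOT}/firmware"
  [ -d "$FIRMWARE_SRC" ] || FIRMWARE_SRC="${PROJECT_ROOT}/initramfs/lib/firmware"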
S3 configuration needs
A new configuration file is required to avoid touching existing code:
- Path: config/rfs.conf (to be created)
- Required keys:
  - S3_ENDPOINT (e.g., https://s3.example.com:9000)
  - S3_REGION
  - S3_BUCKET
  - S3_PREFIX (path prefix under the bucket for blobs and, optionally, manifests)
  - S3_ACCESS_KEY
  - S3_SECRET_KEY
- These values are consumed only by the standalone scripts, not by the existing build flow
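An illustrative config/rfs.conf with those keys (every value below is a placeholder):

  S3_ENDPOINT=https://s3.example.com:9000
  S3_REGION=garage
  S3_BUCKET=zos
  S3_PREFIX=flists
  S3_ACCESS_KEY=WRITE_KEY_ID
  S3_SECRET_KEY=WRITE_SECRET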
Proposed standalone scripts (no existing code changes)
Directory: scripts/rfs
- common.sh
  - Reads config/build.conf and config/kernel.config to compute FULL_KERNEL_VERSION
  - Reads config/rfs.conf and constructs the RFS S3 store URI
  - Detects the rfs binary from PATH or components/rfs
  - Locates the modules and firmware source trees per the priority order above
- pack-modules.sh
  - Name: modules-<FULL_KERNEL_VERSION>.fl
  - Command: rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/<FULL_VERSION>
  - Then uploads the .fl manifest to s3://BUCKET/PREFIX/manifests/ via the aws CLI if available (sketched after this list)
- pack-firmware.sh
  - Name: firmware-<YYYYMMDD>.fl by default, overridable via FIRMWARE_TAG
  - Source: $PROJECT_ROOT/firmware preferred, else initramfs/lib/firmware
  - Packs with rfs and uploads the .fl manifest similarly
- verify-flist.sh
  - rfs flist inspect dist/flists/NAME.fl
  - rfs flist tree dist/flists/NAME.fl | head
  - Optional test mount at a temporary mountpoint when requested
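A sketch of the pack-modules.sh core under those assumptions (STORE_URI comes from common.sh; the manifest upload is optional and uses the aws CLI when present):

  FL="dist/flists/modules-${FULL_VERSION}.fl"
  mkdir -p dist/flists
  rfs pack -m "$FL" -s "$STORE_URI" "/lib/modules/${FULL_VERSION}"
  if command -v aws >/dev/null 2>&1; then
      aws --endpoint-url "$S3_ENDPOINT" s3 cp "$FL" "s3://${S3_BUCKET}/${S3_PREFIX}/manifests/"
  fi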
Future runtime units (deferred)
These will be added as new zinit units once flist generation is validated:
- Mount firmware flist read-only at /usr/lib/firmware
- Mount modules flist read-only at /lib/modules/<FULL_VERSION>
- Run depmod -a <FULL_VERSION>
- Run udev coldplug sequence (reload, trigger add, settle)
Placement relative to current units:
- Must occur before udev-trigger.yaml
- Should ensure depmod.yaml runs after the modules mount is in place
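Expressed as commands, the deferred units would boil down to this ordered sequence (udevadm verbs per the coldplug description above; flist names follow the packing conventions earlier in this document):

  rfs mount -m "firmware-latest.fl" /usr/lib/firmware
  rfs mount -m "modules-${FULL_VERSION}.fl" "/lib/modules/${FULL_VERSION}"
  depmod -a "$FULL_VERSION"
  udevadm control --reload
  udevadm trigger --action=add
  udevadm settle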
Flow summary (Mermaid)
flowchart TD
    A[Build start] --> B[alpine_extract/configure/packages]
    B --> C[components_build verify]
    C --> D[kernel_modules<br/>install modules in container<br/>set KERNEL_FULL_VERSION]
    D --> E[init_script zinit_setup]
    E --> F[modules_setup copy]
    F --> G[cleanup validation]
    G --> H[initramfs_create test kernel_build]
    H --> I[boot_tests]
    subgraph "RFS standalone"
        R1[Compute FULL_VERSION<br/>from configs]
        R2[Select sources:<br/>modules /lib/modules/FULL_VERSION<br/>firmware PROJECT_ROOT/firmware or initramfs/lib/firmware]
        R3[Pack modules flist<br/>rfs pack -s s3://...]
        R4[Pack firmware flist<br/>rfs pack -s s3://...]
        R5[Upload .fl manifests<br/>to S3 manifests/]
        R6[Verify flists<br/>inspect/tree/mount opt]
    end
    H -. post-build manual .-> R1
    R1 --> R2 --> R3 --> R5
    R2 --> R4 --> R5
    R3 --> R6
    R4 --> R6
Conclusion
- The existing build flow provides deterministic kernel versioning and installs modules into the container at /lib/modules/<FULL_VERSION>, which is ideal for RFS packing.
- Firmware can be sourced from the user-provided tree or the initramfs fallback.
- RFS flist creation and publishing can be introduced entirely as standalone scripts and configuration without modifying current code.
- Runtime overmounting and coldplug can be added later via new zinit units once flist generation is validated.