Review: Current Build Flow and RFS Integration Hook Points

This document reviews the current Zero-OS Alpine initramfs build flow, identifies reliable sources for kernel versioning and artifacts, and specifies clean integration points for RFS flist generation and later runtime overmounts without modifying existing code paths.

Build flow overview

Primary orchestrator: scripts/build.sh

Key sourced libraries:

Main stages executed (incremental via stage_run()):

  1. alpine_extract, alpine_configure, alpine_packages
  2. alpine_firmware
  3. components_build, components_verify
  4. kernel_modules
  5. init_script, components_copy, zinit_setup
  6. modules_setup, modules_copy
  7. cleanup, validation
  8. initramfs_create, initramfs_test, kernel_build
  9. boot_tests
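
The incremental mechanism can be illustrated with a minimal sketch. This is hypothetical: the real stage_run() in scripts/build.sh may track completion differently, and the .build-stages marker directory is an assumption.

```shell
#!/bin/sh
# Hypothetical sketch of incremental stage execution; the real
# stage_run() in scripts/build.sh may differ.
STAGE_DIR="${STAGE_DIR:-.build-stages}"
mkdir -p "$STAGE_DIR"

stage_run() {
    name="$1"; shift
    # Skip stages that already completed (marker file exists)
    if [ -f "$STAGE_DIR/$name.done" ]; then
        echo "SKIP  $name"
        return 0
    fi
    echo "RUN   $name"
    "$@" && touch "$STAGE_DIR/$name.done"
}

stage_run alpine_extract echo "extract alpine rootfs"
stage_run alpine_extract echo "extract alpine rootfs"   # skipped: marker exists
```

Re-running the build therefore only executes stages whose markers are absent, which is why the RFS scripts proposed below can safely run after the pipeline without re-triggering it.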

Where key artifacts come from

udev and module load sequencing at runtime

Current integration gaps for RFS flists

  • There is no existing code that:
    • Packs modules or firmware into RFS flists (.fl sqlite manifests)
    • Publishes associated content-addressed blobs to a store
    • Uploads the .fl manifest to an S3 bucket (separate from the blob store)
    • Mounts these flists at runtime prior to udev coldplug

Reliable inputs for RFS pack

  • Kernel full version: use the kernel_get_full_version() logic (never uname -r, which inside the build container reports the host kernel)
  • Modules source tree candidates (priority):
    1. /lib/modules/<FULL_VERSION> (from kernel_build_modules())
    2. initramfs/lib/modules/<FULL_VERSION> (if container path unavailable; less ideal)
  • Firmware source tree candidates (priority):
    1. $PROJECT_ROOT/firmware (external provided tree; user-preferred)
    2. initramfs/lib/firmware (APK-installed fallback)
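
The priorities above can be sketched as a small selection helper (illustrative; the version string is a placeholder and would normally come from kernel_get_full_version()):

```shell
#!/bin/sh
# Sketch of RFS pack source selection, following the candidate
# priorities listed above. FULL_VERSION here is a placeholder.
FULL_VERSION="${KERNEL_FULL_VERSION:-6.6.0-zero}"
PROJECT_ROOT="${PROJECT_ROOT:-$(pwd)}"

# Modules: prefer the container install tree, fall back to the initramfs copy
if [ -d "/lib/modules/$FULL_VERSION" ]; then
    MODULES_SRC="/lib/modules/$FULL_VERSION"
else
    MODULES_SRC="$PROJECT_ROOT/initramfs/lib/modules/$FULL_VERSION"
fi

# Firmware: prefer the user-provided tree, fall back to the APK-installed copy
if [ -d "$PROJECT_ROOT/firmware" ]; then
    FIRMWARE_SRC="$PROJECT_ROOT/firmware"
else
    FIRMWARE_SRC="$PROJECT_ROOT/initramfs/lib/firmware"
fi

echo "modules:  $MODULES_SRC"
echo "firmware: $FIRMWARE_SRC"
```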

S3 configuration needs

A new configuration file is required to avoid touching existing code:

  • Path: config/rfs.conf (to be created)
  • Required keys:
    • S3_ENDPOINT (e.g., https://s3.example.com:9000)
    • S3_REGION
    • S3_BUCKET
    • S3_PREFIX (path prefix under bucket for blobs/optionally manifests)
    • S3_ACCESS_KEY
    • S3_SECRET_KEY
  • These values are consumed only by the standalone scripts, never by the existing build flow
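
A hypothetical config/rfs.conf, written as shell-sourceable key/value pairs; all values below are placeholders:

```shell
# config/rfs.conf -- example values only, replace with real credentials.
# Consumed by the standalone scripts under scripts/rfs/, not by build.sh.
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists"
S3_ACCESS_KEY="REPLACE_ME"
S3_SECRET_KEY="REPLACE_ME"
```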

Proposed standalone scripts (no existing code changes)

Directory: scripts/rfs

  • common.sh

  • pack-modules.sh

    • Name: modules-<FULL_KERNEL_VERSION>.fl
    • Command: rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/<FULL_VERSION>
    • Then upload the .fl manifest to s3://BUCKET/PREFIX/manifests/ via aws CLI if available
  • pack-firmware.sh

    • Name: firmware-<TAG>.fl by default; the tag is overridable via FIRMWARE_TAG
    • Source: $PROJECT_ROOT/firmware preferred, else initramfs/lib/firmware
    • Pack with rfs and upload the .fl manifest similarly
  • verify-flist.sh

    • rfs flist inspect dist/flists/NAME.fl
    • rfs flist tree dist/flists/NAME.fl | head
    • Optional test mount with a temporary mountpoint when requested
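
The pack scripts above could be sketched as follows (illustrative only; pack-firmware.sh is analogous with the firmware source tree). The rfs flags mirror the command shown earlier, but the exact s3:// store URI shape and the config/rfs.conf variable names are assumptions to verify against the rfs documentation:

```shell
#!/bin/sh
# Hypothetical sketch of scripts/rfs/pack-modules.sh.

pack_modules() {
    full_version="$1"
    src="/lib/modules/$full_version"
    manifest="dist/flists/modules-$full_version.fl"

    [ -d "$src" ] || { echo "missing modules tree: $src" >&2; return 1; }
    mkdir -p dist/flists

    # Blob store URI assembled from config/rfs.conf values (shape assumed)
    store="s3://$S3_ACCESS_KEY:$S3_SECRET_KEY@${S3_ENDPOINT#*://}/$S3_BUCKET/$S3_PREFIX"

    # Pack the tree into a .fl manifest, pushing content-addressed blobs to the store
    rfs pack -m "$manifest" -s "$store" "$src"

    # Upload the manifest itself (kept separate from the blob store) when aws CLI exists
    if command -v aws >/dev/null 2>&1; then
        aws --endpoint-url "$S3_ENDPOINT" s3 cp "$manifest" \
            "s3://$S3_BUCKET/$S3_PREFIX/manifests/"
    fi
}

# Usage, after sourcing the config and computing the kernel version:
#   . config/rfs.conf
#   pack_modules "$KERNEL_FULL_VERSION"
```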

Future runtime units (deferred)

Will be added as new zinit units once flist generation is validated:

  • Mount firmware flist read-only at /lib/firmware (overmount to hide initramfs firmware beneath)
  • Mount modules flist read-only at /lib/modules/<FULL_VERSION>
  • Run depmod -a <FULL_VERSION>
  • Run udev coldplug sequence (reload, trigger add, settle)
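
The deferred runtime steps could be sketched as one shell sequence (illustrative; the eventual implementation will be ordered zinit units, and the flist cache paths and rfs mount invocation are assumptions):

```shell
#!/bin/sh
# Hypothetical sketch of the future runtime overmount + coldplug sequence.

rfs_overmount_and_coldplug() {
    # uname -r is valid here: this runs on the booted node, not in the build container
    full_version="$(uname -r)"

    # Overmount read-only flists, hiding the initramfs copies beneath
    rfs mount -m "/var/cache/modules-$full_version.fl" "/lib/modules/$full_version"
    rfs mount -m "/var/cache/firmware.fl" /lib/firmware

    # Rebuild module dependency maps against the mounted tree
    depmod -a "$full_version"

    # Coldplug: replay device add events so drivers load from the new trees
    udevadm control --reload
    udevadm trigger --action=add
    udevadm settle
}
```

Split across zinit units, the ordering constraint is: both mounts before depmod, and depmod before the coldplug trigger.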

Placement relative to current units:

Flow summary (Mermaid)

```mermaid
flowchart TD
  A["Build start"] --> B["alpine_extract/configure/packages"]
  B --> C["components_build verify"]
  C --> D["kernel_modules<br/>install modules in container<br/>set KERNEL_FULL_VERSION"]
  D --> E["init_script zinit_setup"]
  E --> F["modules_setup copy"]
  F --> G["cleanup validation"]
  G --> H["initramfs_create test kernel_build"]
  H --> I["boot_tests"]

  subgraph RFS standalone
    R1["Compute FULL_VERSION<br/>from configs"]
    R2["Select sources:<br/>modules /lib/modules/FULL_VERSION<br/>firmware PROJECT_ROOT/firmware or initramfs/lib/firmware"]
    R3["Pack modules flist<br/>rfs pack -s s3://..."]
    R4["Pack firmware flist<br/>rfs pack -s s3://..."]
    R5["Upload .fl manifests<br/>to S3 manifests/"]
    R6["Verify flists<br/>inspect/tree/mount opt"]
  end

  H -. post-build manual .-> R1
  R1 --> R2 --> R3 --> R5
  R2 --> R4 --> R5
  R3 --> R6
  R4 --> R6
```

Conclusion

  • The existing build flow provides deterministic kernel versioning and installs modules into the container at /lib/modules/<FULL_VERSION>, which is ideal for RFS packing.
  • Firmware can be sourced from the user-provided tree or the initramfs fallback.
  • RFS flist creation and publishing can be introduced entirely as standalone scripts and configuration without modifying current code.
  • Runtime overmounting and coldplug can be added later via new zinit units once flist generation is validated.