zosbuilder/docs/rfs-flists.md

RFS flist creation and runtime overmounts (design)

Goal

  • Produce two flists without modifying existing build scripts:
    • firmware-VERSION.fl
    • modules-KERNEL_FULL_VERSION.fl
  • Store blobs in S3 via rfs store; upload .fl manifest (sqlite) separately to S3.
  • Overmount these at runtime later to enable extended hardware, then depmod + udev trigger.

Scope of this change

  • Add standalone scripts under scripts/rfs (no changes in existing libs or stages).
  • Add a config file config/rfs.conf for S3 credentials and addressing.
  • Document the flow and usage here; scripting comes next.

Inputs

  • Built kernel modules present in the dev-container (from kernel build stages):
    • Preferred: /lib/modules/KERNEL_FULL_VERSION
  • Firmware source for RFS pack:
    • Install all Alpine linux-firmware* packages into the build container and use /lib/firmware as the source (full set).
  • Initramfs fallback (build-time):
    • Selective firmware packages installed by bash.alpine_install_firmware() into initramfs/lib/firmware (kept inside the initramfs).
  • Kernel version derivation (never use uname -r in the container)

Outputs and locations

Configuration: config/rfs.conf. Required values:

  • S3_ENDPOINT=https://s3.example.com:9000
  • S3_REGION=us-east-1
  • S3_BUCKET=zos
  • S3_PREFIX=flists/zosbuilder
  • S3_ACCESS_KEY=AKIA...
  • S3_SECRET_KEY=...

Notes:

  • We construct an rfs S3 store URI for pack operations (for blob uploads during pack):
    • s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
  • After pack, we correct the flist route URL to include READ-ONLY credentials so mounts can read directly from Garage:
    • UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION'
    • Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage, ROUTE_ENDPOINT=S3_ENDPOINT (overridable)
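The pack-time store URI above can be assembled from the config values with plain shell. A minimal sketch, assuming the variables are sourced from config/rfs.conf (values here are illustrative):

```shell
# Sketch: build the rfs pack-time store URI from config/rfs.conf values.
# Follows the s3://KEY:SECRET@HOST:PORT/BUCKET/PREFIX?region=REGION shape.
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="AKIAEXAMPLE"
S3_SECRET_KEY="secret"

# Strip the scheme from the endpoint to get HOST:PORT.
hostport="${S3_ENDPOINT#*://}"

STORE_URI="s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${hostport}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
echo "$STORE_URI"
```

This yields s3://AKIAEXAMPLE:secret@s3.example.com:9000/zos/flists/zosbuilder?region=us-east-1 for the values above.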

Scripts to add (standalone)

Runtime (deferred to a follow-up)

Naming policy

  • modules flist:
    • modules-KERNEL_FULL_VERSION.fl
  • firmware flist:
    • firmware-YYYYMMDD.fl by default
    • firmware-FIRMWARE_TAG.fl if env FIRMWARE_TAG is set
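The naming policy can be expressed as a short shell sketch; KERNEL_FULL_VERSION and FIRMWARE_TAG are assumed to come from the build environment, and the values here are illustrative:

```shell
# Sketch: derive flist names per the naming policy above.
KERNEL_FULL_VERSION="6.12.44-Zero-OS"
MODULES_FLIST="modules-${KERNEL_FULL_VERSION}.fl"

# Firmware: use FIRMWARE_TAG if set, else a date-based YYYYMMDD default.
FIRMWARE_TAG=""   # unset/empty in this example
if [ -n "$FIRMWARE_TAG" ]; then
  FIRMWARE_FLIST="firmware-${FIRMWARE_TAG}.fl"
else
  FIRMWARE_FLIST="firmware-$(date +%Y%m%d).fl"
fi
echo "$MODULES_FLIST"
```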

Usage flow (after your normal build inside dev-container)

  1. Create config for S3: config/rfs.conf
  2. Generate modules flist: scripts/rfs/pack-modules.sh
  3. Generate firmware flist: scripts/rfs/pack-firmware.sh
  4. Verify manifests: scripts/rfs/verify-flist.sh dist/flists/modules-...fl

Assumptions

  • rfs supports s3 store URIs as described (per components/rfs/README.md).
  • The dev-container has the built kernel modules in /lib/modules/KERNEL_FULL_VERSION (as produced via kernel_build_modules()).
  • No changes are made to existing build scripts. The new scripts are run on-demand.

Open question to confirm

  • Confirm the S3 endpoint form (with or without an explicit port) and whether to prefer the AWS_REGION env var over the query param; the scripts will support both patterns.

Note on route URL vs HTTP endpoint

  • rfs mount reads blobs via s3:// URLs, not via an arbitrary HTTP(S) endpoint. A reverse proxy is not required if you embed read-only S3 credentials in the flist.
  • This project now patches the flist after pack to set route.url to a read-only Garage S3 URL:
    • Example SQL equivalent:
      • UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@[HOST]:3900/blobs?region=garage';
  • Configure these in config/rfs.conf:
    • READ_ACCESS_KEY / READ_SECRET_KEY: read-only credentials
    • ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
  • Do not set ROUTE_PATH to S3_PREFIX. ROUTE_PATH is the gateway's blob route (usually /blobs); S3_PREFIX is only for the pack-time store path.
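Since the .fl manifest is a sqlite database, the patch step amounts to one UPDATE. A sketch, assuming the READ_* and ROUTE_* values come from config/rfs.conf (values illustrative; a throwaway database stands in for the packed manifest here):

```shell
# Sketch: patch route.url in a packed .fl manifest so mounts read blobs
# with read-only credentials. In practice FLIST is the manifest produced
# by the pack step.
READ_ACCESS_KEY="GKreadonly"
READ_SECRET_KEY="readsecret"
ROUTE_ENDPOINT="https://s3.example.com:3900"
ROUTE_PATH="/blobs"
ROUTE_REGION="garage"

hostport="${ROUTE_ENDPOINT#*://}"
ROUTE_URL="s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${hostport}${ROUTE_PATH}?region=${ROUTE_REGION}"

# Stand-in manifest with a minimal route table for illustration.
FLIST="$(mktemp)"
sqlite3 "$FLIST" "CREATE TABLE route (url TEXT); INSERT INTO route (url) VALUES ('s3://old');"
sqlite3 "$FLIST" "UPDATE route SET url='${ROUTE_URL}';"
sqlite3 "$FLIST" "SELECT url FROM route;"
```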

Runtime units and ordering (zinit)

This repo now includes runtime zinit units and init scripts to mount the RFS flists and perform dual udev coldplug sequences.

Reproducible firmware tagging

  • The firmware flist name can be pinned via FIRMWARE_TAG in config/build.conf.
    • If set: firmware-FIRMWARE_TAG.fl
    • If unset: the build uses firmware-latest.fl for embedding (standalone pack may default to date-based).
  • The build logic picks the tag with this precedence:
    1. Environment FIRMWARE_TAG
    2. FIRMWARE_TAG from config/build.conf
    3. "latest"
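The precedence above can be sketched as a small helper; the function name and the config-file argument are illustrative, not the actual scripts/build.sh implementation:

```shell
# Sketch: resolve the firmware tag with the stated precedence:
# env FIRMWARE_TAG, then FIRMWARE_TAG from config/build.conf, then "latest".
resolve_firmware_tag() {
  # 1. Environment wins if non-empty.
  if [ -n "${FIRMWARE_TAG:-}" ]; then
    echo "$FIRMWARE_TAG"
    return
  fi
  # 2. Fall back to FIRMWARE_TAG from the config file, if present.
  local config_file="${1:-config/build.conf}"
  if [ -f "$config_file" ]; then
    local from_conf
    from_conf="$(. "$config_file"; echo "${FIRMWARE_TAG:-}")"
    if [ -n "$from_conf" ]; then
      echo "$from_conf"
      return
    fi
  fi
  # 3. Default.
  echo "latest"
}
```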
  • Build integration implemented in scripts/build.sh.

Example:

  • Set FIRMWARE_TAG in config: add FIRMWARE_TAG="20250908" in config/build.conf
  • Or export at build time: export FIRMWARE_TAG="v1"

Verifying flists

Use the helper to inspect a manifest, optionally listing entries and testing a local mount (root + proper FUSE policy required):

  • Inspect only:
    • scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl
  • Inspect + tree:
    • scripts/rfs/verify-flist.sh -m dist/flists/firmware-latest.fl --tree
  • Inspect + mount test to a temp dir:
    • sudo scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --mount

Additional blob store backends (design)

This extends the existing S3/HTTP approach with a RESP/DB-style backend option for rfs blob storage. It is a design-only addition; CLI and scripts will be extended in a follow-up.

Scope

URI schemes (draft)

  • resp://host:port/db?prefix=blobs
  • resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
  • resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
  • Credentials may be provided via URI userinfo or config (recommended: config only).

Operations (minimal set)

  • PUT blob: write content-addressed key (e.g., prefix/ab/cd/hash)
  • GET blob: fetch by exact key
  • Exists/HEAD: presence test by key
  • Optional batching: pipelined MGET for prefetch
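The content-addressed key layout used by these operations can be sketched in shell; the helper name is hypothetical, and the prefix would come from RESP_PREFIX:

```shell
# Sketch: derive the content-addressed key (prefix/ab/cd/hash) for a blob.
blob_key() {
  local prefix="$1" hash="$2"
  echo "${prefix}/${hash:0:2}/${hash:2:2}/${hash}"
}

blob_key "blobs" "abcdef1234567890"
```

For the hash above this prints blobs/ab/cd/abcdef1234567890; the two-level fan-out keeps any one keyspace level from growing unbounded.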

Config keys (see example additions in config/rfs.conf.example)

  • RESP_ENDPOINT (host:port), RESP_DB (integer), RESP_PREFIX (path namespace)
  • RESP_USERNAME/RESP_PASSWORD (optional), RESP_TLS=0/1 (+ RESP_CA if needed)
  • RESP_SENTINEL and RESP_MASTER for sentinel deployments
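A sketch of the corresponding additions to config/rfs.conf.example, using the key names listed above (all values illustrative):

```shell
# RESP backend (draft) -- illustrative values only
RESP_ENDPOINT="redis.example.com:6379"
RESP_DB=0
RESP_PREFIX="blobs"
RESP_USERNAME=""          # optional
RESP_PASSWORD=""          # optional
RESP_TLS=0                # set to 1 to enable TLS; set RESP_CA if needed
RESP_CA=""
RESP_SENTINEL=""          # host:port for sentinel deployments
RESP_MASTER=""            # sentinel master name
```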

Manifests and routes

  • Keep the S3 store in the flist stores table (as a fallback) while enabling route.url patching to HTTP/S3 for read-only access.
  • RESP may be used primarily for pack-time blob uploads, or as an additional store the CLI can consume later.

Security

  • Do not embed write credentials in manifests.
  • Read-only credentials may be embedded in route.url if required, mirroring S3 pattern.

Next steps

  • Implement RESP uploader shim called from pack scripts; keep the CLI S3 flow unchanged.
  • Extend config loader in scripts/rfs/common.sh to parse RESP_* variables.
  • Add verification routines to sanity-check connectivity before pack.