feat(rfs): flist pack to S3 + read-only route embedding + zinit mount scripts; docs; dev-container tooling
Summary
- Implemented plain S3-only flist workflow (no web endpoint). rfs pack uploads blobs using write creds; flist route.url is patched to embed read-only S3 credentials so rfs mount reads directly from S3.
Changes
1) New RFS tooling (scripts/rfs/)
- common.sh:
- Compute FULL_KERNEL_VERSION from configs (no uname).
- Load S3 config and construct pack store URI.
- Build read-only S3 route URL and patch flist (sqlite).
- Helpers to locate modules/firmware trees and rfs binary.
- pack-modules.sh:
- Pack /lib/modules/<FULL_KERNEL_VERSION> to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
- Patch flist route to s3://READ:READ@host:port/ROUTE_PATH?region=ROUTE_REGION (default /blobs, garage).
- Optional upload of .fl using MinIO client (mcli/mc).
- pack-firmware.sh:
- Source firmware from $PROJECT_ROOT/firmware (fallback to initramfs/lib/firmware).
- Pack to dist/flists/firmware-<TAG_OR_DATE>.fl (FIRMWARE_TAG or YYYYMMDD).
- Patch flist route to read-only S3; optional .fl upload via mcli/mc.
- verify-flist.sh:
- rfs flist inspect/tree; optional mount test (best effort).
- patch-stores.sh:
- Helper to patch stores (kept though not used by default).
2) Dev-container (Dockerfile)
- Added sqlite and the MinIO client packages for manifest patching/upload (the mcli binary is expected at runtime; scripts support both mcli and mc).
- Retains rustup and musl target for building rfs/zinit/mycelium.
3) Config and examples
- config/rfs.conf.example:
- S3_ENDPOINT/S3_REGION/S3_BUCKET/S3_PREFIX
- S3_ACCESS_KEY/S3_SECRET_KEY (write)
- READ_ACCESS_KEY/READ_SECRET_KEY (read-only)
- ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- MANIFESTS_SUBPATH, UPLOAD_MANIFESTS (mcli upload optional)
- config/rfs.conf updated by user with real values (not committed here; example included).
- config/modules.conf minor tweak (staged).
4) Zinit mount scripts (config/zinit/init/)
- firmware.sh:
- Mounts firmware-latest.fl over /usr/lib/firmware using rfs mount (env override FIRMWARE_FLIST supported).
- modules.sh:
- Mounts modules-$(uname -r).fl over /lib/modules/$(uname -r) (env override MODULES_FLIST supported).
- Both skip if target already mounted and respect RFS_BIN env.
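A minimal sketch of what such a mount script does (the /var/cache/flists default is hypothetical, and the mount command is echoed rather than executed so the sketch is safe to run anywhere):

```shell
# Sketch only: mirrors the described modules.sh behavior.
mount_modules() {
    kver="$1"
    # MODULES_FLIST env override, as described; the default path is illustrative.
    flist="${MODULES_FLIST:-/var/cache/flists/modules-${kver}.fl}"
    target="/lib/modules/${kver}"
    # Skip if the target is already mounted, as the real scripts do.
    if mountpoint -q "$target" 2>/dev/null; then
        echo "already mounted: $target"
        return 0
    fi
    # A real zinit script would execute: "${RFS_BIN:-rfs}" mount -m "$flist" "$target"
    echo "${RFS_BIN:-rfs} mount -m $flist $target"
}

mount_modules "6.12.44-Zero-OS"
```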
5) Documentation
- docs/rfs-flists.md:
- End-to-end flow, S3-only route URL patching, mcli upload notes.
- docs/review-rfs-integration.md:
- Integration points, build flow, and post-build standalone usage.
- docs/depmod-behavior.md:
- depmod reads .modinfo; recommend prebuilt modules.*(.bin); use depmod -A only on mismatch.
6) Utility
- scripts/functionlist.md synced with current functions.
Behavioral details
- Pack (write):
s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=REGION
- Flist route (read, post-patch):
s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION
Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage; ROUTE_ENDPOINT derived from S3_ENDPOINT if not set.
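The two URI shapes can be assembled mechanically from the config values; a sketch with placeholder credentials (host:port parsing assumes an http(s)://host:port endpoint):

```shell
# All values here are placeholders in the shape of config/rfs.conf.example.
S3_ENDPOINT="https://s3.example.com:9000"
S3_BUCKET="zos"; S3_PREFIX="flists/zosbuilder"; S3_REGION="us-east-1"
READ_ACCESS_KEY="ro-key"; READ_SECRET_KEY="ro-secret"
ROUTE_PATH="/blobs"; ROUTE_REGION="garage"

# Strip the scheme, then anything after the first slash: host:port remains.
hostport="${S3_ENDPOINT#*://}"; hostport="${hostport%%/*}"

pack_uri="s3://WRITE_KEY:WRITE_SECRET@${hostport}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
route_url="s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${hostport}${ROUTE_PATH}?region=${ROUTE_REGION}"

echo "$pack_uri"
echo "$route_url"
```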
Runtime mount examples
- Modules:
rfs mount -m dist/flists/modules-6.12.44-Zero-OS.fl /lib/modules/6.12.44-Zero-OS
- Firmware:
rfs mount -m dist/flists/firmware-YYYYMMDD.fl /usr/lib/firmware
Notes
- FUSE policy: If "allow_other" error occurs, enable user_allow_other in /etc/fuse.conf or run mounts as root.
- WEB_ENDPOINT rewrite is disabled by default (set WEB_ENDPOINT=""). Plain S3 route is embedded in flists.
- MinIO client binary in dev-container is mcli; scripts support mcli (preferred) and mc (fallback).
Files added/modified
- Added: scripts/rfs/{common.sh,pack-modules.sh,pack-firmware.sh,verify-flist.sh,patch-stores.sh}
- Added: config/zinit/init/{firmware.sh,modules.sh}
- Added: docs/{rfs-flists.md,review-rfs-integration.md,depmod-behavior.md}
- Added: config/rfs.conf.example
- Modified: Dockerfile, scripts/functionlist.md, config/modules.conf, config/zinit/sshd-setup.yaml, .gitignore
docs/depmod-behavior.md (new file, 71 lines)

# depmod behavior, impact on lazy-mounted module stores, and flist store rewriting

Summary (short answer)
- depmod builds the modules dependency/alias databases by scanning the modules tree under /lib/modules/<kernel>. It reads metadata from each .ko file (.modinfo section) to generate:
  - modules.dep(.bin), modules.alias(.bin), modules.symbols(.bin), modules.devname, modules.order, etc.
- It does not load modules; it opens many files for small reads. On a lazy store, the first depmod run can trigger many object fetches.
- If modules metadata files are already present and consistent (as produced during build), modprobe can work without re-running depmod. Use depmod -A (update only) or skip depmod entirely if timestamps and paths are unchanged.
- For private S3 (garage) without anonymous read, post-process the .fl manifest to replace the store URI with your HTTPS web endpoint for that bucket, so runtime mounts fetch over the web endpoint instead of signed S3.

Details
1) What depmod actually reads/builds
- Inputs scanned under /lib/modules/<kernel>:
  - .ko files: depmod reads ELF .modinfo to collect depends=, alias=, vermagic, etc. It does not execute or load modules.
  - modules.builtin and modules.builtin.modinfo: indicate built-in drivers so they are excluded from the dep graph.
- Optional flags:
  - depmod -F <System.map> and -E <Module.symvers> allow symbol/CRC checks; these are typically not required on target systems for generating dependency/alias maps.
- Outputs (consumed by modprobe/kmod):
  - modules.dep and modules.dep.bin: dependency lists and fast index
  - modules.alias and modules.alias.bin: modalias to module name mapping
  - modules.symbols(.bin), modules.devname, modules.order, etc.

Key property: depmod's default operation opens many .ko files to read .modinfo, which on a lazy FUSE-backed store causes many small reads.
2) Recommended strategy with lazy flists
- Precompute metadata during build:
  - In the dev container, your pipeline already runs depmod (see [kernel_build_modules()](scripts/lib/kernel.sh:228)). Ensure the resulting metadata files in /lib/modules/<kernel> are included in the modules flist.
- At runtime, after overmounting the modules flist:
  - Option A: Do nothing. If your path is the same (/lib/modules/<kernel>), modprobe will use the precomputed .bin maps and will not need to rescan .ko files. This minimizes object fetches (only when a module is actually loaded).
  - Option B: Run depmod -A <kernel> (update only if any .ko is newer than modules.dep). This typically performs stats on files and only rebuilds if needed, avoiding a full read of all .ko files.
  - Option C: Run depmod -a only if you changed the module set or path layout. Expect many small reads on first run.
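The choice between Options A–C reduces to a cheap staleness check; a sketch (the directory is a temporary stand-in for /lib/modules/<kernel>, and the depmod commands are printed, not run):

```shell
# Decide which depmod invocation (if any) a boot script would need.
depmod_mode() {
    moddir="$1"; kver="${moddir##*/}"
    if [ ! -e "${moddir}/modules.dep" ]; then
        echo "depmod -a ${kver}"       # no metadata at all: full build needed
    elif find "$moddir" -name '*.ko*' -newer "${moddir}/modules.dep" | grep -q .; then
        echo "depmod -A ${kver}"       # some .ko newer than metadata: cheap update
    else
        echo "skip"                    # metadata already current
    fi
}

demo="$(mktemp -d)/6.12.44-Zero-OS"   # stand-in tree, not a real modules dir
mkdir -p "$demo"
first="$(depmod_mode "$demo")"        # no modules.dep yet
touch "$demo/modules.dep"
second="$(depmod_mode "$demo")"       # metadata present, nothing newer
echo "$first"
echo "$second"
```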
3) Firmware implications
- No depmod impact, but udev coldplug will probe devices. Keep firmware files accessible via the firmware flist mount (e.g., /usr/lib/firmware).
- Since firmware loads on demand by the kernel/driver, the lazy store will fetch only the needed blobs.
4) Post-processing .fl to use a web endpoint (garage S3 private)
- Goal: Pack/upload blobs to private S3 using credentials, but ship a manifest (.fl) that references a public HTTPS endpoint (or authenticated gateway) that your rfs mount can fetch from without S3 signing.
- Approach A: Use the rfs CLI (if supported) to edit store URIs within the manifest.
  - Example (conceptual): rfs flist edit-store -m dist/flists/modules-...fl --set https://web.example.com/bucket/prefix
- Approach B: Use sqlite3 to patch the manifest directly (the .fl is sqlite):
  - Inspect stores:
    - sqlite3 dist/flists/modules-...fl "SELECT id, uri FROM stores;"
  - Replace the s3 store with the web endpoint:
    - sqlite3 dist/flists/modules-...fl "UPDATE stores SET uri='https://web.example.com/bucket/prefix' WHERE uri LIKE 's3://%';"
  - Validate:
    - rfs flist inspect dist/flists/modules-...fl
- Notes:
  - The web endpoint you provide must serve the same content-addressed paths that rfs expects. Confirm the object path layout (e.g., /bucket/prefix/ab/cd/abcdef...).
  - You can maintain multiple store rows to provide fallbacks (if rfs supports trying multiple stores).
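The Approach B steps can be wrapped in a small script; a sketch that only prints the commands when the manifest is absent or sqlite3 is missing (the manifest path and web URL are placeholders):

```shell
# Placeholders: the real scripts pass FL and WEB_URL in.
FL="dist/flists/modules-6.12.44-Zero-OS.fl"
WEB_URL="https://web.example.com/bucket/prefix"

run() {
    # Execute only when the manifest exists and sqlite3 is installed;
    # otherwise print the command so the sketch is safe to run anywhere.
    if [ -f "$FL" ] && command -v sqlite3 >/dev/null 2>&1; then
        "$@"
    else
        echo "would run: $*"
    fi
}

run sqlite3 "$FL" "SELECT id, uri FROM stores;"
run sqlite3 "$FL" "UPDATE stores SET uri='${WEB_URL}' WHERE uri LIKE 's3://%';"
run rfs flist inspect "$FL"
```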
5) Suggested runtime sequence after overmount (with precomputed metadata)
- Mount the modules flist read-only at /lib/modules/<kernel>.
- Optionally run depmod -A <kernel> (cheap; no full scan).
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Load required baseline modules (stage1) if needed; the lazy store ensures only the requested .ko files are fetched.
6) Practical checklist for our scripts
- Ensure pack-modules includes:
  - /lib/modules/<kernel>/*.ko*
  - All modules.* metadata files (dep, alias, symbols, order, builtin, *.bin)
- After the pack completes and blobs are uploaded to S3, patch the .fl manifest's stores table to the public HTTPS endpoint of your garage bucket/web gateway.
- Provide verify utilities:
  - rfs flist inspect/tree
  - Optional local mount test against the web endpoint referenced in the manifest.
Appendix: Commands and flags
- Generate/update metadata (build-time): depmod -a <kernel>
- Fast update at boot: depmod -A <kernel>  # only if newer/changed
- Chroot/base path (useful for initramfs image pathing): depmod -b <base> -a <kernel>
- modprobe uses the *.bin maps when present, which avoids parsing the large text maps on every lookup.
docs/review-rfs-integration.md (new file, 179 lines)

# Review: Current Build Flow and RFS Integration Hook Points

This document reviews the current Zero-OS Alpine initramfs build flow, identifies reliable sources for kernel versioning and artifacts, and specifies clean integration points for RFS flist generation and later runtime overmounts without modifying existing code paths.
## Build flow overview

Primary orchestrator: [scripts/build.sh](scripts/build.sh)

Key sourced libraries:
- [alpine.sh](scripts/lib/alpine.sh)
- [components.sh](scripts/lib/components.sh)
- [kernel.sh](scripts/lib/kernel.sh)
- [initramfs.sh](scripts/lib/initramfs.sh)
- [stages.sh](scripts/lib/stages.sh)
- [docker.sh](scripts/lib/docker.sh)
- [testing.sh](scripts/lib/testing.sh)

Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)):
1) alpine_extract, alpine_configure, alpine_packages
2) alpine_firmware
3) components_build, components_verify
4) kernel_modules
5) init_script, components_copy, zinit_setup
6) modules_setup, modules_copy
7) cleanup, validation
8) initramfs_create, initramfs_test, kernel_build
9) boot_tests
## Where key artifacts come from

- Kernel full version:
  - Derived deterministically using [kernel_get_full_version()](scripts/lib/kernel.sh:14)
  - Computed as: KERNEL_VERSION from [config/build.conf](config/build.conf) + CONFIG_LOCALVERSION from [config/kernel.config](config/kernel.config)
  - Example target: 6.12.44-Zero-OS

- Built modules in container:
  - Stage: [kernel_build_modules()](scripts/lib/kernel.sh:228)
  - Builds and installs into the container root: /lib/modules/<FULL_VERSION>
  - Runs depmod in the container and sets:
    - CONTAINER_MODULES_PATH=/lib/modules/<FULL_VERSION>
    - KERNEL_FULL_VERSION=<FULL_VERSION>

- Initramfs modules copy and metadata:
  - Stage: [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:846)
  - Copies selected modules and dep metadata into the initramfs under initramfs/lib/modules/<FULL_VERSION>

- Firmware content:
  - Preferred (per user): a full tree at $root/firmware in the dev-container, intended to be packaged as-is
  - Fallback within the build flow: firmware packages installed by [alpine_install_firmware()](scripts/lib/alpine.sh:392) into initramfs/lib/firmware

- rfs binary:
  - Built via [build_rfs()](scripts/lib/components.sh:299) into [components/rfs/target/x86_64-unknown-linux-musl/release/rfs](components/rfs/target/x86_64-unknown-linux-musl/release/rfs)
  - Also expected to be available on PATH inside the dev-container
## udev and module load sequencing at runtime

- zinit units present:
  - udevd: [config/zinit/udevd.yaml](config/zinit/udevd.yaml)
  - depmod: [config/zinit/depmod.yaml](config/zinit/depmod.yaml)
  - udev trigger: [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml) calling [udev.sh](config/zinit/init/udev.sh)

- initramfs module orchestration:
  - Module resolution logic: [initramfs_setup_modules()](scripts/lib/initramfs.sh:225) and [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:313)
  - Load scripts created for zinit:
    - stage1: [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:422) emits /etc/zinit/init/stage1-modules.sh
    - stage2 is currently disabled in config
## Current integration gaps for RFS flists

- There is no existing code that:
  - Packs modules or firmware into RFS flists (.fl sqlite manifests)
  - Publishes the associated content-addressed blobs to a store
  - Uploads the .fl manifest to an S3 bucket (separate from the blob store)
  - Mounts these flists at runtime prior to udev coldplug
## Reliable inputs for RFS pack

- Kernel full version: use the [kernel_get_full_version()](scripts/lib/kernel.sh:14) logic (never `uname -r` inside the container)
- Modules source tree candidates (priority):
  1) /lib/modules/<FULL_VERSION> (from [kernel_build_modules()](scripts/lib/kernel.sh:228))
  2) initramfs/lib/modules/<FULL_VERSION> (if the container path is unavailable; less ideal)
- Firmware source tree candidates (priority):
  1) $PROJECT_ROOT/firmware (externally provided tree; user-preferred)
  2) initramfs/lib/firmware (APK-installed fallback)
## S3 configuration needs

A new configuration file is required to avoid touching existing code:
- Path: config/rfs.conf (to be created)
- Required keys:
  - S3_ENDPOINT (e.g., https://s3.example.com:9000)
  - S3_REGION
  - S3_BUCKET
  - S3_PREFIX (path prefix under the bucket for blobs and, optionally, manifests)
  - S3_ACCESS_KEY
  - S3_SECRET_KEY
- These values will be consumed by standalone scripts (not the existing build flow)
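A config file matching these keys might look like the following (all values are placeholders):

```shell
# Sketch of config/rfs.conf: plain KEY=VALUE shell syntax, sourceable
# by the standalone scripts. Every value below is illustrative.
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="WRITE_KEY"
S3_SECRET_KEY="WRITE_SECRET"
```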
## Proposed standalone scripts (no existing code changes)

Directory: scripts/rfs

- common.sh
  - Read [config/build.conf](config/build.conf) and [config/kernel.config](config/kernel.config) to compute FULL_KERNEL_VERSION
  - Read [config/rfs.conf](config/rfs.conf) and construct the RFS S3 store URI
  - Detect the rfs binary from PATH or [components/rfs](components/rfs)
  - Locate the modules and firmware source trees per the priority order above

- pack-modules.sh
  - Name: modules-<FULL_KERNEL_VERSION>.fl
  - Command: rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/<FULL_VERSION>
  - Then upload the .fl manifest to s3://BUCKET/PREFIX/manifests/ via the aws CLI if available

- pack-firmware.sh
  - Name: firmware-<YYYYMMDD>.fl by default, overridable via FIRMWARE_TAG
  - Source: $PROJECT_ROOT/firmware preferred, else initramfs/lib/firmware
  - Pack with rfs and upload the .fl manifest similarly

- verify-flist.sh
  - rfs flist inspect dist/flists/NAME.fl
  - rfs flist tree dist/flists/NAME.fl | head
  - Optional test mount with a temporary mountpoint when requested
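The pack-and-upload step amounts to two commands; a sketch that only prints them, with placeholder credentials and bucket layout (this review proposed the aws CLI, while the final scripts settled on the MinIO client):

```shell
# Placeholders throughout: KVER, the store credentials, and the mc alias
# "garage" are illustrative, not real configuration.
KVER="6.12.44-Zero-OS"
FL="dist/flists/modules-${KVER}.fl"
STORE="s3://KEY:SECRET@s3.example.com:9000/zos/flists/zosbuilder?region=us-east-1"

cmd_pack="rfs pack -m ${FL} -s ${STORE} /lib/modules/${KVER}"
cmd_upload="mc cp ${FL} garage/zos/flists/zosbuilder/manifests/"

echo "$cmd_pack"
echo "$cmd_upload"
```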
## Future runtime units (deferred)

Will be added as new zinit units once flist generation is validated:
- Mount the firmware flist read-only at /usr/lib/firmware
- Mount the modules flist read-only at /lib/modules/<FULL_VERSION>
- Run depmod -a <FULL_VERSION>
- Run the udev coldplug sequence (reload, trigger add, settle)

Placement relative to current units:
- Must occur before [udev-trigger.yaml](config/zinit/udev-trigger.yaml)
- Should ensure [depmod.yaml](config/zinit/depmod.yaml) is sequenced after the modules become available from the mount
## Flow summary (Mermaid)

```mermaid
flowchart TD
  A["Build start"] --> B["alpine_extract/configure/packages"]
  B --> C["components_build verify"]
  C --> D["kernel_modules<br/>install modules in container<br/>set KERNEL_FULL_VERSION"]
  D --> E["init_script zinit_setup"]
  E --> F["modules_setup copy"]
  F --> G["cleanup validation"]
  G --> H["initramfs_create test kernel_build"]
  H --> I["boot_tests"]

  subgraph RFS standalone
    R1["Compute FULL_VERSION<br/>from configs"]
    R2["Select sources:<br/>modules /lib/modules/FULL_VERSION<br/>firmware PROJECT_ROOT/firmware or initramfs/lib/firmware"]
    R3["Pack modules flist<br/>rfs pack -s s3://..."]
    R4["Pack firmware flist<br/>rfs pack -s s3://..."]
    R5["Upload .fl manifests<br/>to S3 manifests/"]
    R6["Verify flists<br/>inspect/tree/mount opt"]
  end

  H -. post-build manual .-> R1
  R1 --> R2 --> R3 --> R5
  R2 --> R4 --> R5
  R3 --> R6
  R4 --> R6
```
## Conclusion

- The existing build flow provides deterministic kernel versioning and installs modules into the container at /lib/modules/<FULL_VERSION>, which is ideal for RFS packing.
- Firmware can be sourced from the user-provided tree or the initramfs fallback.
- RFS flist creation and publishing can be introduced entirely as standalone scripts and configuration without modifying current code.
- Runtime overmounting and coldplug can be added later via new zinit units once flist generation is validated.
docs/rfs-flists.md (new file, 112 lines)

# RFS flist creation and runtime overmounts (design)
Goal
- Produce two flists without modifying existing build scripts:
  - firmware-VERSION.fl
  - modules-KERNEL_FULL_VERSION.fl
- Store blobs in S3 via the rfs store; upload the .fl manifest (sqlite) separately to S3.
- Overmount these at runtime later to enable extended hardware, then depmod + udev trigger.
Scope of this change
- Add standalone scripts under [scripts/rfs](scripts/rfs) (no changes to existing libs or stages).
- Add a config file [config/rfs.conf](config/rfs.conf) for S3 credentials and addressing.
- Document the flow and usage here; scripting comes next.
Inputs
- Built kernel modules present in the dev-container (from the kernel build stages):
  - Preferred: /lib/modules/KERNEL_FULL_VERSION
- Firmware tree:
  - Preferred: $PROJECT_ROOT/firmware (prepopulated tree from the dev-container: "$root/firmware")
  - Fallback: initramfs/lib/firmware created by apk install of firmware packages
- Kernel version derivation (never use uname -r in the container):
  - Combine KERNEL_VERSION from [config/build.conf](config/build.conf) and LOCALVERSION from [config/kernel.config](config/kernel.config).
  - This matches [kernel_get_full_version()](scripts/lib/kernel.sh:14).
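The derivation can be sketched as follows, using stand-in config files written in the same KEY=VALUE style as the real config/build.conf and config/kernel.config:

```shell
# Hypothetical stand-ins for the two real config files.
tmp="$(mktemp -d)"
printf 'KERNEL_VERSION="6.12.44"\n' > "$tmp/build.conf"
printf 'CONFIG_LOCALVERSION="-Zero-OS"\n' > "$tmp/kernel.config"

# Extract each value and concatenate: KERNEL_VERSION + CONFIG_LOCALVERSION.
kver="$(sed -n 's/^KERNEL_VERSION="\(.*\)"/\1/p' "$tmp/build.conf")"
localver="$(sed -n 's/^CONFIG_LOCALVERSION="\(.*\)"/\1/p' "$tmp/kernel.config")"
FULL_KERNEL_VERSION="${kver}${localver}"
echo "$FULL_KERNEL_VERSION"   # prints 6.12.44-Zero-OS
```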
Outputs and locations
- Flists:
  - [dist/flists/firmware-VERSION.fl](dist/flists/firmware-VERSION.fl)
  - [dist/flists/modules-KERNEL_FULL_VERSION.fl](dist/flists/modules-KERNEL_FULL_VERSION.fl)
- Blobs are uploaded by rfs to the configured S3 store.
- Manifests (.fl sqlite) are uploaded by the script as S3 objects (separate from the blob store).
Configuration: [config/rfs.conf](config/rfs.conf)

Required values:
- S3_ENDPOINT=https://s3.example.com:9000
- S3_REGION=us-east-1
- S3_BUCKET=zos
- S3_PREFIX=flists/zosbuilder
- S3_ACCESS_KEY=AKIA...
- S3_SECRET_KEY=...
Notes:
- We construct an rfs S3 store URI for pack operations (blob uploads during pack):
  - s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
- After the pack, we correct the flist route URL to include read-only credentials so mounts can read directly from Garage:
  - UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION'
- Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage, ROUTE_ENDPOINT=S3_ENDPOINT (overridable)
Scripts to add (standalone)
- [scripts/rfs/common.sh](scripts/rfs/common.sh)
  - Read [config/build.conf](config/build.conf) and [config/kernel.config](config/kernel.config).
  - Compute FULL_KERNEL_VERSION exactly as [kernel_get_full_version()](scripts/lib/kernel.sh:14) does.
  - Read and validate [config/rfs.conf](config/rfs.conf).
  - Build the S3 store URI for rfs.
  - Locate the module and firmware source trees (with the priority rules).
  - Locate the rfs binary (PATH first, fallback to [components/rfs/target/x86_64-unknown-linux-musl/release/rfs](components/rfs/target/x86_64-unknown-linux-musl/release/rfs)).
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh)
  - Name: modules-KERNEL_FULL_VERSION.fl (e.g., modules-6.12.44-Zero-OS.fl).
  - rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/KERNEL_FULL_VERSION
  - Optional: upload dist/flists/modules-...fl to s3://S3_BUCKET/S3_PREFIX/manifests/ using the MinIO Client (mc) if present.
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh)
  - Source: $PROJECT_ROOT/firmware if it exists, else initramfs/lib/firmware.
  - Name: firmware-YYYYMMDD.fl by default; override with the FIRMWARE_TAG env to firmware-FIRMWARE_TAG.fl.
  - rfs pack as above; optional upload of the .fl manifest using the MinIO Client (mc) if present.
- [scripts/rfs/verify-flist.sh](scripts/rfs/verify-flist.sh)
  - rfs flist inspect dist/flists/NAME.fl
  - rfs flist tree dist/flists/NAME.fl | head
  - Optional: test mount if run with --mount (mountpoint under /tmp).
Runtime (deferred to a follow-up)
- New zinit units to mount and coldplug:
  - Mount the firmware flist read-only at /usr/lib/firmware
  - Mount the modules flist at /lib/modules/KERNEL_FULL_VERSION
  - Run depmod -a KERNEL_FULL_VERSION
  - udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Placement examples (to be created later):
  - [config/zinit/rfs-modules.yaml](config/zinit/rfs-modules.yaml)
  - [config/zinit/rfs-firmware.yaml](config/zinit/rfs-firmware.yaml)
  - Keep in the correct dependency order before [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml).
Naming policy
- modules flist:
  - modules-KERNEL_FULL_VERSION.fl
- firmware flist:
  - firmware-YYYYMMDD.fl by default
  - firmware-FIRMWARE_TAG.fl if the FIRMWARE_TAG env is set
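The policy above reduces to two expansions; a sketch (the version string is illustrative):

```shell
FULL_KERNEL_VERSION="6.12.44-Zero-OS"

# Modules flist: always keyed on the full kernel version.
modules_fl="modules-${FULL_KERNEL_VERSION}.fl"
echo "$modules_fl"

# Firmware flist: FIRMWARE_TAG wins when set, else today's date.
firmware_fl="firmware-${FIRMWARE_TAG:-$(date +%Y%m%d)}.fl"
echo "$firmware_fl"

# With an explicit tag:
FIRMWARE_TAG="v1"
firmware_fl="firmware-${FIRMWARE_TAG}.fl"
echo "$firmware_fl"   # prints firmware-v1.fl
```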
Usage flow (after your normal build inside the dev-container)
1) Create the S3 config: [config/rfs.conf](config/rfs.conf)
2) Generate the modules flist: [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh)
3) Generate the firmware flist: [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh)
4) Verify the manifests: [scripts/rfs/verify-flist.sh](scripts/rfs/verify-flist.sh) dist/flists/modules-...fl
Assumptions
- rfs supports s3 store URIs as described (per [components/rfs/README.md](components/rfs/README.md)).
- The dev-container has the built kernel modules in /lib/modules/KERNEL_FULL_VERSION (as produced via [kernel_build_modules()](scripts/lib/kernel.sh:228)).
- No changes are made to existing build scripts. The new scripts are run on demand.
Open question to confirm
- Confirm the S3 endpoint form (with or without an explicit port) and whether we should prefer the AWS_REGION env over the query param; the scripts will support both patterns.
Note on route URL vs HTTP endpoint
- rfs mount reads blobs via s3:// URLs, not via an arbitrary HTTP(S) endpoint. A reverse proxy is not required if you embed read-only S3 credentials in the flist.
- This project now patches the flist after pack to set route.url to a read-only Garage S3 URL:
  - Example SQL equivalent:
    - UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@[HOST]:3900/blobs?region=garage';
- Configure these in config/rfs.conf:
  - READ_ACCESS_KEY / READ_SECRET_KEY: read-only credentials
  - ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- Do not set ROUTE_PATH to S3_PREFIX. ROUTE_PATH is the gateway's blob route (usually /blobs). S3_PREFIX is only for the pack-time store path.