feat(rfs): flist pack to S3 + read-only route embedding + zinit mount scripts; docs; dev-container tooling

Summary
- Implemented a plain S3-only flist workflow (no web endpoint): rfs pack uploads blobs with write credentials, and the flist's route.url is patched to embed read-only S3 credentials so rfs mount reads directly from S3.

Changes
1) New RFS tooling (scripts/rfs/)
   - common.sh:
     - Compute FULL_KERNEL_VERSION from configs (no uname).
     - Load S3 config and construct pack store URI.
     - Build read-only S3 route URL and patch flist (sqlite).
     - Helpers to locate modules/firmware trees and rfs binary.
   - pack-modules.sh:
     - Pack /lib/modules/<FULL_KERNEL_VERSION> to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
     - Patch flist route to s3://READ:READ@host:port/ROUTE_PATH?region=ROUTE_REGION (default /blobs, garage).
     - Optional upload of .fl using MinIO client (mcli/mc).
   - pack-firmware.sh:
     - Source firmware from $PROJECT_ROOT/firmware (fallback to initramfs/lib/firmware).
     - Pack to dist/flists/firmware-<TAG_OR_DATE>.fl (FIRMWARE_TAG or YYYYMMDD).
     - Patch flist route to read-only S3; optional .fl upload via mcli/mc.
   - verify-flist.sh:
     - rfs flist inspect/tree; optional mount test (best effort).
   - patch-stores.sh:
     - Helper to patch stores (kept though not used by default).

2) Dev-container (Dockerfile)
   - Added sqlite and MinIO client packages for manifest patching/upload (mcli binary expected at runtime; scripts support both mcli and mc).
   - Retains rustup and musl target for building rfs/zinit/mycelium.

3) Config and examples
   - config/rfs.conf.example:
     - S3_ENDPOINT/S3_REGION/S3_BUCKET/S3_PREFIX
     - S3_ACCESS_KEY/S3_SECRET_KEY (write)
     - READ_ACCESS_KEY/READ_SECRET_KEY (read-only)
     - ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
     - MANIFESTS_SUBPATH, UPLOAD_MANIFESTS (mcli upload optional)
   - config/rfs.conf updated by user with real values (not committed here; example included).
   - config/modules.conf minor tweak (staged).

4) Zinit mount scripts (config/zinit/init/)
   - firmware.sh:
     - Mounts firmware-latest.fl over /usr/lib/firmware using rfs mount (env override FIRMWARE_FLIST supported).
   - modules.sh:
     - Mounts modules-$(uname -r).fl over /lib/modules/$(uname -r) (env override MODULES_FLIST supported).
   - Both skip if target already mounted and respect RFS_BIN env.

5) Documentation
   - docs/rfs-flists.md:
     - End-to-end flow, S3-only route URL patching, mcli upload notes.
   - docs/review-rfs-integration.md:
     - Integration points, build flow, and post-build standalone usage.
   - docs/depmod-behavior.md:
     - depmod reads each module's .modinfo section; ship the prebuilt modules.* and modules.*.bin indexes and run depmod -A only on mismatch.

6) Utility
   - scripts/functionlist.md synced with current functions.

Behavioral details
- Pack (write):
  s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
- Flist route (read, post-patch):
  s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION
  Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage; ROUTE_ENDPOINT derived from S3_ENDPOINT if not set.
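
Example (illustrative endpoint and credentials only; real values live in config/rfs.conf, which is not committed):
  S3_ENDPOINT="http://garage.example.com:3900"
  S3_REGION="garage"; S3_BUCKET="zos"; S3_PREFIX="flists/blobs"
  S3_ACCESS_KEY="WRITE_KEY";  S3_SECRET_KEY="WRITE_SECRET"
  READ_ACCESS_KEY="READ_KEY"; READ_SECRET_KEY="READ_SECRET"
  ROUTE_PATH="/blobs"; ROUTE_REGION="garage"

  # Strip the scheme; port 3900 is assumed when the endpoint omits one
  hostport="${S3_ENDPOINT#http://}"; hostport="${hostport#https://}"; hostport="${hostport%/}"

  # Pack store URI (write credentials, bucket and prefix included)
  pack_uri="s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${hostport}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
  # -> s3://WRITE_KEY:WRITE_SECRET@garage.example.com:3900/zos/flists/blobs?region=garage

  # Route URL embedded into the flist (read-only credentials, blob route only)
  route_url="s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${hostport}${ROUTE_PATH}?region=${ROUTE_REGION}"
  # -> s3://READ_KEY:READ_SECRET@garage.example.com:3900/blobs?region=garage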

Runtime mount examples
- Modules:
  rfs mount -m dist/flists/modules-6.12.44-Zero-OS.fl /lib/modules/6.12.44-Zero-OS
- Firmware:
  rfs mount -m dist/flists/firmware-YYYYMMDD.fl /usr/lib/firmware
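
A rough sketch of the modules.sh mount logic wrapped by zinit (illustrative only; the default flist path below is a placeholder and the actual script may differ):
  #!/bin/bash
  set -euo pipefail
  KVER="$(uname -r)"
  RFS_BIN="${RFS_BIN:-rfs}"
  MODULES_FLIST="${MODULES_FLIST:-/var/cache/flists/modules-${KVER}.fl}"  # placeholder default
  TARGET="/lib/modules/${KVER}"
  if mountpoint -q "${TARGET}"; then
    echo "modules: ${TARGET} already mounted; skipping"
    exit 0
  fi
  mkdir -p "${TARGET}"
  exec "${RFS_BIN}" mount -m "${MODULES_FLIST}" "${TARGET}"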

Notes
- FUSE policy: If "allow_other" error occurs, enable user_allow_other in /etc/fuse.conf or run mounts as root.
- WEB_ENDPOINT rewrite is disabled by default (set WEB_ENDPOINT=""). Plain S3 route is embedded in flists.
- MinIO client binary in dev-container is mcli; scripts support mcli (preferred) and mc (fallback).
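
Typical end-to-end run with the scripts above (flist name matches the modules example; adjust to your kernel version):
  ./scripts/rfs/pack-modules.sh
  ./scripts/rfs/pack-firmware.sh
  ./scripts/rfs/verify-flist.sh dist/flists/modules-6.12.44-Zero-OS.fl --mount
  # If the mount test fails with an allow_other error, enable it once:
  echo "user_allow_other" | sudo tee -a /etc/fuse.conf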

Files added/modified
- Added: scripts/rfs/{common.sh,pack-modules.sh,pack-firmware.sh,verify-flist.sh,patch-stores.sh}
- Added: config/zinit/init/{firmware.sh,modules.sh}
- Added: docs/{rfs-flists.md,review-rfs-integration.md,depmod-behavior.md}
- Added: config/rfs.conf.example
- Modified: Dockerfile, scripts/functionlist.md, config/modules.conf, config/zinit/sshd-setup.yaml, .gitignore
commit afd4f4c6f9 (parent acc2fce84a)
2025-09-08 22:51:53 +02:00
16 changed files with 1344 additions and 167 deletions

scripts/functionlist.md (modified)
@@ -1,167 +1,110 @@
# Function List - scripts/lib Library
This document provides a comprehensive description of all functions available in the `scripts/lib` library that are to be sourced by build scripts.
This document lists all functions currently defined under [scripts/lib](scripts/lib) with their source locations.
## **alpine.sh** - Alpine Linux Operations
## alpine.sh - Alpine Linux operations
File: [scripts/lib/alpine.sh](scripts/lib/alpine.sh)
- [alpine_extract_miniroot()](scripts/lib/alpine.sh:14) - Download and extract Alpine miniroot
- [alpine_setup_chroot()](scripts/lib/alpine.sh:70) - Setup chroot mounts and resolv.conf
- [alpine_cleanup_chroot()](scripts/lib/alpine.sh:115) - Unmount chroot mounts
- [alpine_install_packages()](scripts/lib/alpine.sh:142) - Install packages from packages.list
- [alpine_aggressive_cleanup()](scripts/lib/alpine.sh:211) - Reduce image size by removing docs/locales/etc
- [alpine_configure_repos()](scripts/lib/alpine.sh:321) - Configure APK repositories
- [alpine_configure_system()](scripts/lib/alpine.sh:339) - Configure hostname, hosts, timezone, profile
- [alpine_install_firmware()](scripts/lib/alpine.sh:392) - Install required firmware packages
### Core Functions
- [`alpine_extract_miniroot()`](lib/alpine.sh:14) - Downloads and extracts Alpine miniroot to target directory
- [`alpine_setup_chroot()`](lib/alpine.sh:70) - Sets up chroot environment with essential filesystem mounts
- [`alpine_cleanup_chroot()`](lib/alpine.sh:115) - Unmounts and cleans up chroot environment
- [`alpine_install_packages()`](lib/alpine.sh:142) - Installs packages from packages.list (excludes OpenRC)
- [`alpine_aggressive_cleanup()`](lib/alpine.sh:211) - Removes documentation, locales, dev files for size optimization
- [`alpine_configure_repos()`](lib/alpine.sh:302) - Configures Alpine package repositories
- [`alpine_configure_system()`](lib/alpine.sh:320) - Sets up basic system configuration (hostname, hosts, timezone)
- [`alpine_install_firmware()`](lib/alpine.sh:374) - Installs firmware packages for hardware support
## common.sh - Core utilities
File: [scripts/lib/common.sh](scripts/lib/common.sh)
- [log_info()](scripts/lib/common.sh:31)
- [log_warn()](scripts/lib/common.sh:36)
- [log_error()](scripts/lib/common.sh:41)
- [log_debug()](scripts/lib/common.sh:46)
- [safe_execute()](scripts/lib/common.sh:54)
- [section_header()](scripts/lib/common.sh:79)
- [command_exists()](scripts/lib/common.sh:89)
- [in_container()](scripts/lib/common.sh:94)
- [check_dependencies()](scripts/lib/common.sh:99)
- [safe_mkdir()](scripts/lib/common.sh:142)
- [safe_rmdir()](scripts/lib/common.sh:149)
- [safe_copy()](scripts/lib/common.sh:158)
- [is_absolute_path()](scripts/lib/common.sh:166)
- [resolve_path()](scripts/lib/common.sh:171)
- [get_file_size()](scripts/lib/common.sh:181)
- [wait_for_file()](scripts/lib/common.sh:191)
- [cleanup_on_exit()](scripts/lib/common.sh:205)
## **common.sh** - Core Utilities
## components.sh - Component management
File: [scripts/lib/components.sh](scripts/lib/components.sh)
- [components_parse_sources_conf()](scripts/lib/components.sh:13)
- [components_download_git()](scripts/lib/components.sh:72)
- [components_download_release()](scripts/lib/components.sh:104)
- [components_process_extra_options()](scripts/lib/components.sh:144)
- [components_build_component()](scripts/lib/components.sh:183)
- [components_setup_rust_env()](scripts/lib/components.sh:217)
- [build_zinit()](scripts/lib/components.sh:252)
- [build_rfs()](scripts/lib/components.sh:299)
- [build_mycelium()](scripts/lib/components.sh:346)
- [install_rfs()](scripts/lib/components.sh:386)
- [install_corex()](scripts/lib/components.sh:409)
- [components_verify_installation()](scripts/lib/components.sh:436)
- [components_cleanup()](scripts/lib/components.sh:472)
### Logging Functions
- [`log_info()`](lib/common.sh:31) - Log informational messages with timestamp and color
- [`log_warn()`](lib/common.sh:36) - Log warning messages with timestamp and color
- [`log_error()`](lib/common.sh:41) - Log error messages with timestamp and color
- [`log_debug()`](lib/common.sh:46) - Log debug messages (only when DEBUG=1)
## docker.sh - Container runtime management
File: [scripts/lib/docker.sh](scripts/lib/docker.sh)
- [docker_detect_runtime()](scripts/lib/docker.sh:14)
- [docker_verify_rootless()](scripts/lib/docker.sh:33)
- [docker_build_container()](scripts/lib/docker.sh:47)
- [docker_create_dockerfile()](scripts/lib/docker.sh:65)
- [docker_start_rootless()](scripts/lib/docker.sh:116)
- [docker_run_build()](scripts/lib/docker.sh:154)
- [docker_commit_builder()](scripts/lib/docker.sh:196)
- [docker_cleanup()](scripts/lib/docker.sh:208)
- [docker_check_capabilities()](scripts/lib/docker.sh:248)
- [docker_setup_rootless()](scripts/lib/docker.sh:279)
### Execution and System Functions
- [`safe_execute()`](lib/common.sh:54) - Execute commands with error handling and logging
- [`section_header()`](lib/common.sh:76) - Creates formatted section headers for output
- [`command_exists()`](lib/common.sh:86) - Check if command is available in PATH
- [`in_container()`](lib/common.sh:91) - Detect if running inside a container
- [`check_dependencies()`](lib/common.sh:96) - Verify required tools are installed
## initramfs.sh - Initramfs assembly
File: [scripts/lib/initramfs.sh](scripts/lib/initramfs.sh)
- [initramfs_setup_zinit()](scripts/lib/initramfs.sh:13)
- [initramfs_install_init_script()](scripts/lib/initramfs.sh:70)
- [initramfs_copy_components()](scripts/lib/initramfs.sh:97)
- [initramfs_setup_modules()](scripts/lib/initramfs.sh:225)
- [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:313)
- [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:422)
- [initramfs_strip_and_upx()](scripts/lib/initramfs.sh:486)
- [initramfs_finalize_customization()](scripts/lib/initramfs.sh:569)
- [initramfs_create_cpio()](scripts/lib/initramfs.sh:642)
- [initramfs_validate()](scripts/lib/initramfs.sh:710)
- [initramfs_test_archive()](scripts/lib/initramfs.sh:809)
- [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:846)
### File System Operations
- [`safe_mkdir()`](lib/common.sh:139) - Create directories safely with error handling
- [`safe_rmdir()`](lib/common.sh:146) - Remove directories safely with error handling
- [`safe_copy()`](lib/common.sh:155) - Copy files/directories safely with error handling
- [`resolve_path()`](lib/common.sh:168) - Convert relative to absolute paths
- [`get_file_size()`](lib/common.sh:178) - Get human-readable file size
- [`wait_for_file()`](lib/common.sh:188) - Wait for file to exist with timeout
- [`cleanup_on_exit()`](lib/common.sh:202) - Cleanup function for exit traps
## kernel.sh - Kernel building
File: [scripts/lib/kernel.sh](scripts/lib/kernel.sh)
- [kernel_get_full_version()](scripts/lib/kernel.sh:14)
- [kernel_download_source()](scripts/lib/kernel.sh:28)
- [kernel_apply_config()](scripts/lib/kernel.sh:82)
- [kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:129)
- [kernel_build_with_initramfs()](scripts/lib/kernel.sh:174)
- [kernel_build_modules()](scripts/lib/kernel.sh:228)
- [kernel_cleanup()](scripts/lib/kernel.sh:284)
## **components.sh** - ThreeFold Component Management
## stages.sh - Build stage tracking
File: [scripts/lib/stages.sh](scripts/lib/stages.sh)
- [stages_init()](scripts/lib/stages.sh:12)
- [stage_is_completed()](scripts/lib/stages.sh:33)
- [stage_mark_completed()](scripts/lib/stages.sh:48)
- [stage_force_rebuild()](scripts/lib/stages.sh:69)
- [stages_clear_all()](scripts/lib/stages.sh:82)
- [stage_run()](scripts/lib/stages.sh:99)
- [stages_status()](scripts/lib/stages.sh:134)
### Component Processing
- [`components_parse_sources_conf()`](lib/components.sh:13) - Parse and build all components from sources.conf
- [`components_download_git()`](lib/components.sh:72) - Clone Git repositories with specific versions
- [`components_download_release()`](lib/components.sh:104) - Download pre-built release binaries
- [`components_process_extra_options()`](lib/components.sh:144) - Handle rename/extract options for components
- [`components_build_component()`](lib/components.sh:183) - Build component using specified build function
### Build Environment
- [`components_setup_rust_env()`](lib/components.sh:217) - Configure Rust environment for musl builds
### Component-Specific Build Functions
- [`build_zinit()`](lib/components.sh:252) - Build zinit init system from source (Rust)
- [`build_rfs()`](lib/components.sh:304) - Build rfs (rootfs) from source (Rust)
- [`build_mycelium()`](lib/components.sh:356) - Build mycelium networking from source (Rust, subdirectory)
- [`install_rfs()`](lib/components.sh:401) - Install pre-built rfs binary
- [`install_corex()`](lib/components.sh:427) - Install pre-built corex binary
### Verification and Cleanup
- [`components_verify_installation()`](lib/components.sh:457) - Verify all components were installed correctly
- [`components_cleanup()`](lib/components.sh:493) - Clean build artifacts
## **docker.sh** - Container Runtime Management
### Runtime Detection and Setup
- [`docker_detect_runtime()`](lib/docker.sh:14) - Detect available container runtime (Docker/Podman)
- [`docker_verify_rootless()`](lib/docker.sh:33) - Verify rootless container setup works
- [`docker_check_capabilities()`](lib/docker.sh:209) - Check container runtime capabilities
- [`docker_setup_rootless()`](lib/docker.sh:240) - Setup rootless environment (subuid/subgid)
### Container Image Management
- [`docker_build_container()`](lib/docker.sh:47) - Build container image with build tools
- [`docker_create_dockerfile()`](lib/docker.sh:65) - Create optimized Dockerfile for build environment
- [`docker_commit_builder()`](lib/docker.sh:178) - Commit container state for reuse
- [`docker_cleanup()`](lib/docker.sh:191) - Clean up container images
### Container Execution
- [`docker_start_rootless()`](lib/docker.sh:116) - Start rootless container for building
- [`docker_run_build()`](lib/docker.sh:154) - Run build command in container with proper mounts
## **initramfs.sh** - Initramfs Assembly
### Core Assembly Functions
- [`initramfs_setup_zinit()`](lib/initramfs.sh:13) - Setup zinit as init system (replaces OpenRC completely)
- [`initramfs_install_init_script()`](lib/initramfs.sh:71) - Install critical /init script for initramfs boot
- [`initramfs_setup_modules()`](lib/initramfs.sh:98) - Setup 2-stage module loading with dependencies
### Module Management
- [`initramfs_resolve_module_dependencies()`](lib/initramfs.sh:166) - Recursively resolve module dependencies using modinfo
- [`initramfs_create_module_scripts()`](lib/initramfs.sh:236) - Create stage1/stage2 module loading scripts for zinit
### Optimization and Packaging
- [`initramfs_strip_and_upx()`](lib/initramfs.sh:300) - Strip debug symbols and UPX compress binaries for size optimization
- [`initramfs_create_cpio()`](lib/initramfs.sh:383) - Create final compressed initramfs archive (xz/gzip/zstd/uncompressed)
### Validation and Testing
- [`initramfs_validate()`](lib/initramfs.sh:449) - Validate initramfs contents and structure
- [`initramfs_test_archive()`](lib/initramfs.sh:549) - Test initramfs archive integrity
## **kernel.sh** - Kernel Building
### Source Management
- [`kernel_download_source()`](lib/kernel.sh:14) - Download Linux kernel source code from kernel.org
- [`kernel_apply_config()`](lib/kernel.sh:68) - Apply kernel configuration with embedded initramfs path
- [`kernel_modify_config_for_initramfs()`](lib/kernel.sh:116) - Modify kernel config for embedded initramfs support
### Build Functions
- [`kernel_build_with_initramfs()`](lib/kernel.sh:144) - Build kernel with embedded initramfs (complete process)
- [`kernel_build_modules()`](lib/kernel.sh:203) - Build kernel modules for initramfs inclusion
### Cleanup
- [`kernel_cleanup()`](lib/kernel.sh:242) - Clean kernel build artifacts (with option to keep source)
## **testing.sh** - Virtualization Testing
### QEMU Testing
- [`testing_qemu_boot()`](lib/testing.sh:14) - Test kernel boot with QEMU (multiple modes: basic/serial/interactive)
- [`testing_qemu_basic_boot()`](lib/testing.sh:55) - Basic automated QEMU boot test with timeout
- [`testing_qemu_serial_boot()`](lib/testing.sh:90) - QEMU serial console test for debugging
- [`testing_qemu_interactive_boot()`](lib/testing.sh:114) - Interactive QEMU session (no timeout)
### Cloud Hypervisor Testing
- [`testing_cloud_hypervisor_boot()`](lib/testing.sh:135) - Test with cloud-hypervisor VMM
- [`testing_cloud_hypervisor_basic()`](lib/testing.sh:172) - Basic cloud-hypervisor test with timeout
- [`testing_cloud_hypervisor_serial()`](lib/testing.sh:206) - cloud-hypervisor serial console test
### Analysis and Orchestration
- [`testing_analyze_boot_log()`](lib/testing.sh:228) - Analyze boot logs for success/failure indicators
- [`testing_run_all()`](lib/testing.sh:299) - Run comprehensive test suite (QEMU + cloud-hypervisor)
## Usage Notes
### Function Availability
All functions are exported for sourcing and can be called from any script that sources the respective library file. The common pattern is:
```bash
# Source the library
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/alpine.sh"
# ... other libraries as needed
# Use the functions
alpine_extract_miniroot "/path/to/target"
components_parse_sources_conf "/path/to/sources.conf" "/path/to/components"
```
### Error Handling
All functions follow consistent error handling patterns:
- Return non-zero exit codes on failure
- Use [`safe_execute()`](lib/common.sh:54) for command execution
- Provide detailed logging via [`log_*()`](lib/common.sh:31) functions
- Clean up resources on failure
### Dependencies
Functions have dependencies on:
- External tools (checked via [`check_dependencies()`](lib/common.sh:96))
- Other library functions (noted in function descriptions)
- Configuration files and environment variables
- Proper directory structures
### Configuration
Most functions respect environment variables for configuration:
- `DEBUG=1` enables debug logging
- `ALPINE_VERSION`, `KERNEL_VERSION` set versions
- `RUST_TARGET` configures Rust builds
- Various `*_DIR` variables set paths
## testing.sh - Boot testing
File: [scripts/lib/testing.sh](scripts/lib/testing.sh)
- [testing_qemu_boot()](scripts/lib/testing.sh:14)
- [testing_qemu_basic_boot()](scripts/lib/testing.sh:55)
- [testing_qemu_serial_boot()](scripts/lib/testing.sh:90)
- [testing_qemu_interactive_boot()](scripts/lib/testing.sh:113)
- [testing_cloud_hypervisor_boot()](scripts/lib/testing.sh:135)
- [testing_cloud_hypervisor_basic()](scripts/lib/testing.sh:171)
- [testing_cloud_hypervisor_serial()](scripts/lib/testing.sh:206)
- [testing_analyze_boot_log()](scripts/lib/testing.sh:227)
- [testing_run_all()](scripts/lib/testing.sh:299)

scripts/rfs/common.sh (new executable file, 474 lines)

@@ -0,0 +1,474 @@
#!/bin/bash
# Common helpers for RFS flist creation and manifest patching
# - No changes to existing build pipeline; this library is used by standalone scripts under scripts/rfs
# - Computes FULL_KERNEL_VERSION from configs (never uses uname -r)
# - Loads S3 (garage) config and builds rfs S3 store URI
# - Locates rfs binary and source trees for modules/firmware
# - Provides helpers to patch the .fl (sqlite) route URL (read-only S3 creds) and, optionally, the stores table to an HTTPS web endpoint
set -euo pipefail
# Resolve project root from this file location
rfs_common_project_root() {
local here
here="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# scripts/rfs -> project root is two levels up
dirname "$(dirname "$here")"
}
PROJECT_ROOT="${PROJECT_ROOT:-$(rfs_common_project_root)}"
SCRIPT_DIR="${PROJECT_ROOT}/scripts"
LIB_DIR="${SCRIPT_DIR}/lib"
# Bring in logging and helpers if available
if [[ -f "${LIB_DIR}/common.sh" ]]; then
# shellcheck source=/dev/null
source "${LIB_DIR}/common.sh"
else
# Minimal logging fallbacks
log_info() { echo "[INFO] $*"; }
log_warn() { echo "[WARN] $*" >&2; }
log_error() { echo "[ERROR] $*" >&2; }
log_debug() { if [[ "${DEBUG:-0}" == "1" ]]; then echo "[DEBUG] $*"; fi }
safe_execute() { echo "[EXEC] $*"; "$@"; }
fi
# -----------------------------------------------------------------------------
# Config loaders
# -----------------------------------------------------------------------------
# Load build.conf (KERNEL_VERSION, etc.) and compute FULL_KERNEL_VERSION
# FULL_KERNEL_VERSION = KERNEL_VERSION + CONFIG_LOCALVERSION from config/kernel.config
rfs_common_load_build_kernel_version() {
local build_conf="${PROJECT_ROOT}/config/build.conf"
local kcfg="${PROJECT_ROOT}/config/kernel.config"
if [[ -f "$build_conf" ]]; then
# shellcheck source=/dev/null
source "$build_conf"
else
log_error "Missing build config: ${build_conf}"
return 1
fi
local base_ver="${KERNEL_VERSION:-}"
if [[ -z "$base_ver" ]]; then
log_error "KERNEL_VERSION not set in ${build_conf}"
return 1
fi
if [[ ! -f "$kcfg" ]]; then
log_error "Missing kernel config: ${kcfg}"
return 1
fi
# Extract CONFIG_LOCALVERSION="..."; may include leading '-' in value
local localver
localver="$(grep -E '^CONFIG_LOCALVERSION=' "$kcfg" | cut -d'"' -f2 || true)"
local full_ver="${base_ver}${localver}"
if [[ -z "$full_ver" ]]; then
log_error "Failed to compute FULL_KERNEL_VERSION from configs"
return 1
fi
export FULL_KERNEL_VERSION="$full_ver"
log_info "Computed FULL_KERNEL_VERSION: ${FULL_KERNEL_VERSION}"
}
# Load RFS S3 configuration from config/rfs.conf or config/rfs.conf.example
# Required:
# S3_ENDPOINT, S3_REGION, S3_BUCKET, S3_PREFIX, S3_ACCESS_KEY, S3_SECRET_KEY
rfs_common_load_rfs_s3_config() {
local conf_real="${PROJECT_ROOT}/config/rfs.conf"
local conf_example="${PROJECT_ROOT}/config/rfs.conf.example"
if [[ -f "$conf_real" ]]; then
# shellcheck source=/dev/null
source "$conf_real"
log_info "Loaded RFS S3 config: ${conf_real}"
elif [[ -f "$conf_example" ]]; then
# shellcheck source=/dev/null
source "$conf_example"
log_warn "Using example RFS config: ${conf_example} (override with config/rfs.conf)"
else
log_error "No RFS config found. Create config/rfs.conf or config/rfs.conf.example"
return 1
fi
# Allow environment to override sourced values
S3_ENDPOINT="${S3_ENDPOINT:-}"
S3_REGION="${S3_REGION:-}"
S3_BUCKET="${S3_BUCKET:-}"
S3_PREFIX="${S3_PREFIX:-}"
S3_ACCESS_KEY="${S3_ACCESS_KEY:-}"
S3_SECRET_KEY="${S3_SECRET_KEY:-}"
local missing=0
for v in S3_ENDPOINT S3_REGION S3_BUCKET S3_PREFIX S3_ACCESS_KEY S3_SECRET_KEY; do
if [[ -z "${!v}" ]]; then
log_error "Missing required S3 config variable: ${v}"
missing=1
fi
done
if [[ $missing -ne 0 ]]; then
log_error "Incomplete RFS S3 configuration"
return 1
fi
export S3_ENDPOINT S3_REGION S3_BUCKET S3_PREFIX S3_ACCESS_KEY S3_SECRET_KEY
# Validate placeholders are not left as defaults
if [[ "${S3_ACCESS_KEY}" == "REPLACE_ME" || "${S3_SECRET_KEY}" == "REPLACE_ME" ]]; then
log_error "S3_ACCESS_KEY / S3_SECRET_KEY in config/rfs.conf are placeholders. Please set real credentials."
return 1
fi
# Optional read-only credentials for route URL; default to write keys if not provided
READ_ACCESS_KEY="${READ_ACCESS_KEY:-$S3_ACCESS_KEY}"
READ_SECRET_KEY="${READ_SECRET_KEY:-$S3_SECRET_KEY}"
# Garage blob route path (default /blobs)
ROUTE_PATH="${ROUTE_PATH:-/blobs}"
export READ_ACCESS_KEY READ_SECRET_KEY ROUTE_PATH
}
# Build rfs S3 store URI from loaded S3 config
# Format: s3://ACCESS:SECRET@HOST:PORT/BUCKET/PREFIX?region=REGION
rfs_common_build_s3_store_uri() {
if [[ -z "${S3_ENDPOINT:-}" ]]; then
log_error "S3_ENDPOINT not set; call rfs_common_load_rfs_s3_config first"
return 1
fi
# Strip scheme from endpoint
local hostport="${S3_ENDPOINT#http://}"
hostport="${hostport#https://}"
hostport="${hostport%/}"
# Ensure explicit port; default to Garage S3 port 3900 when missing
if [[ "$hostport" != *:* ]]; then
hostport="${hostport}:3900"
fi
# Minimal percent-encoding for ':' and '@' in credentials
local ak="${S3_ACCESS_KEY//:/%3A}"
ak="${ak//@/%40}"
local sk="${S3_SECRET_KEY//:/%3A}"
sk="${sk//@/%40}"
local path="${S3_BUCKET}/${S3_PREFIX}"
path="${path#/}" # ensure no leading slash duplication
local uri="s3://${ak}:${sk}@${hostport}/${path}?region=${S3_REGION}"
export RFS_S3_STORE_URI="$uri"
log_info "Constructed RFS S3 store URI: ${RFS_S3_STORE_URI}"
}
# -----------------------------------------------------------------------------
# Tool discovery
# -----------------------------------------------------------------------------
# Locate rfs binary: prefer PATH, fallback to components build
rfs_common_locate_rfs() {
if command -v rfs >/dev/null 2>&1; then
export RFS_BIN="$(command -v rfs)"
log_info "Using rfs from PATH: ${RFS_BIN}"
return 0
fi
# Fallback to components
local rtarget
if [[ -f "${PROJECT_ROOT}/config/build.conf" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/build.conf"
fi
rtarget="${RUST_TARGET:-x86_64-unknown-linux-musl}"
local candidate="${PROJECT_ROOT}/components/rfs/target/${rtarget}/release/rfs"
if [[ -x "$candidate" ]]; then
export RFS_BIN="$candidate"
log_info "Using rfs from components: ${RFS_BIN}"
return 0
fi
log_error "rfs binary not found. Build it via components stage or install it in PATH."
return 1
}
# Ensure sqlite3 is available (for manifest patch)
rfs_common_require_sqlite3() {
if ! command -v sqlite3 >/dev/null 2>&1; then
log_error "sqlite3 not found. Install sqlite3 to patch .fl manifest stores."
return 1
fi
}
# -----------------------------------------------------------------------------
# Source tree discovery
# -----------------------------------------------------------------------------
# Locate modules directory for FULL_KERNEL_VERSION
# Priority:
# 1) /lib/modules/<FULL_KERNEL_VERSION>
# 2) ${PROJECT_ROOT}/kernel/lib/modules/<FULL_KERNEL_VERSION>
# 3) ${PROJECT_ROOT}/initramfs/lib/modules/<FULL_KERNEL_VERSION>
rfs_common_locate_modules_dir() {
local kver="${1:-${FULL_KERNEL_VERSION:-}}"
if [[ -z "$kver" ]]; then
log_error "rfs_common_locate_modules_dir: FULL_KERNEL_VERSION is empty"
return 1
fi
local candidates=(
"/lib/modules/${kver}"
"${PROJECT_ROOT}/kernel/lib/modules/${kver}"
"${PROJECT_ROOT}/initramfs/lib/modules/${kver}"
)
local d
for d in "${candidates[@]}"; do
if [[ -d "$d" ]]; then
export MODULES_DIR="$d"
log_info "Found modules dir: ${MODULES_DIR}"
return 0
fi
done
log_error "No modules directory found for ${kver}. Checked: ${candidates[*]}"
return 1
}
# Locate firmware directory
# Priority:
# 1) ${PROJECT_ROOT}/firmware
# 2) ${PROJECT_ROOT}/initramfs/lib/firmware
# 3) /lib/firmware
rfs_common_locate_firmware_dir() {
local candidates=(
"${PROJECT_ROOT}/firmware"
"${PROJECT_ROOT}/initramfs/lib/firmware"
"/lib/firmware"
)
local d
for d in "${candidates[@]}"; do
if [[ -d "$d" ]]; then
export FIRMWARE_DIR="$d"
log_info "Found firmware dir: ${FIRMWARE_DIR}"
return 0
fi
done
log_error "No firmware directory found. Checked: ${candidates[*]}"
return 1
}
# Ensure precomputed modules metadata are present (to avoid depmod at boot)
rfs_common_validate_modules_metadata() {
local md="${MODULES_DIR:-}"
if [[ -z "$md" || ! -d "$md" ]]; then
log_error "MODULES_DIR not set or invalid"
return 1
fi
local ok=1
local files=(modules.dep modules.dep.bin modules.alias modules.alias.bin modules.symbols.bin modules.order modules.builtin modules.builtin.modinfo)
local missing=()
for f in "${files[@]}"; do
if [[ ! -f "${md}/${f}" ]]; then
missing+=("$f")
ok=0
fi
done
if [[ $ok -eq 1 ]]; then
log_info "Modules metadata present in ${md}"
return 0
else
log_warn "Missing some modules metadata in ${md}: ${missing[*]}"
# Not fatal; rfs pack can proceed, but boot may require depmod -A or full scan
return 0
fi
}
# -----------------------------------------------------------------------------
# Manifest patching (sqlite .fl)
# -----------------------------------------------------------------------------
# Patch the .fl manifest's stores table to use an HTTPS web endpoint
# Args:
# $1 = path to .fl file
# $2 = HTTPS base (e.g., https://hub.grid.tf/zos/zosbuilder) - no trailing slash
# $3 = keep_s3_fallback ("true"/"false") - if true, retain existing s3:// row(s)
rfs_common_patch_flist_stores() {
local fl="$1"
local web_base="$2"
local keep_s3="${3:-false}"
if [[ ! -f "$fl" ]]; then
log_error "Manifest file not found: ${fl}"
return 1
fi
if [[ -z "$web_base" ]]; then
log_error "Web endpoint base is empty"
return 1
fi
rfs_common_require_sqlite3
# Ensure no trailing slash
web_base="${web_base%/}"
# Heuristic: if stores table exists, update any s3:// URI to the web_base, or insert web_base if none.
local has_table
has_table="$(sqlite3 "$fl" "SELECT name FROM sqlite_master WHERE type='table' AND name='stores';" || true)"
if [[ -z "$has_table" ]]; then
log_error "stores table not found in manifest (unexpected schema): ${fl}"
return 1
fi
# Does any s3 store exist?
local s3_count
s3_count="$(sqlite3 "$fl" "SELECT COUNT(*) FROM stores WHERE uri LIKE 's3://%';" || echo 0)"
if [[ "${keep_s3}" != "true" ]]; then
# Replace all s3://... URIs with the HTTPS web base
log_info "Replacing s3 stores with HTTPS: ${web_base}"
sqlite3 "$fl" "UPDATE stores SET uri='${web_base}' WHERE uri LIKE 's3://%';"
else
# Keep s3, but ensure https row exists and is ordered first if applicable
local https_count
https_count="$(sqlite3 "$fl" "SELECT COUNT(*) FROM stores WHERE uri='${web_base}';" || echo 0)"
if [[ "$https_count" -eq 0 ]]; then
log_info "Adding HTTPS store ${web_base} alongside existing s3 store(s)"
# Attempt simple insert; table schema may include more columns, so try a best-effort approach:
# Assume minimal schema: (id INTEGER PRIMARY KEY, uri TEXT UNIQUE)
# If fails, user can adjust with rfs CLI.
set +e
sqlite3 "$fl" "INSERT OR IGNORE INTO stores(uri) VALUES('${web_base}');"
local rc=$?
set -e
if [[ $rc -ne 0 ]]; then
log_warn "Could not INSERT into stores; schema may be different. Consider using rfs CLI to add store."
fi
else
log_info "HTTPS store already present in manifest"
fi
fi
log_info "Patched stores in manifest: ${fl}"
return 0
}
# -----------------------------------------------------------------------------
# -----------------------------------------------------------------------------
# Manifest route URL patching (sqlite .fl) - use read-only credentials
# -----------------------------------------------------------------------------
# Build route URL for the flist 'route' table using read-only keys
# Result example:
# s3://READ_KEY:READ_SECRET@host:3900/blobs?region=garage
rfs_common_build_route_url() {
# Ensure sqlite available for later patch step
rfs_common_require_sqlite3
# Defaults applicable to Garage
local route_region="${ROUTE_REGION:-garage}"
local route_path="${ROUTE_PATH:-/blobs}"
# Derive host:port from ROUTE_ENDPOINT or S3_ENDPOINT
local endpoint="${ROUTE_ENDPOINT:-${S3_ENDPOINT:-}}"
if [[ -z "$endpoint" ]]; then
log_error "No ROUTE_ENDPOINT or S3_ENDPOINT set; cannot build route URL"
return 1
fi
local hostport="${endpoint#http://}"
hostport="${hostport#https://}"
hostport="${hostport%/}"
# Ensure explicit port; default to Garage S3 port 3900 when missing
if [[ "$hostport" != *:* ]]; then
hostport="${hostport}:3900"
fi
# Percent-encode credentials minimally for ':' and '@'
local rak="${READ_ACCESS_KEY//:/%3A}"
rak="${rak//@/%40}"
local rsk="${READ_SECRET_KEY//:/%3A}"
rsk="${rsk//@/%40}"
# Normalize route path (ensure leading slash)
if [[ "$route_path" != /* ]]; then
route_path="/${route_path}"
fi
local url="s3://${rak}:${rsk}@${hostport}${route_path}?region=${route_region}"
export RFS_ROUTE_URL="$url"
log_info "Constructed route URL for flist: ${RFS_ROUTE_URL}"
}
# Patch the 'route' table URL inside the .fl manifest to use read-only key URL
# Args:
# $1 = path to .fl file
rfs_common_patch_flist_route_url() {
local fl="$1"
if [[ -z "${RFS_ROUTE_URL:-}" ]]; then
log_error "RFS_ROUTE_URL is empty; call rfs_common_build_route_url first"
return 1
fi
if [[ ! -f "$fl" ]]; then
log_error "Manifest file not found: ${fl}"
return 1
fi
rfs_common_require_sqlite3
# Ensure 'route' table exists
local has_route
has_route="$(sqlite3 "$fl" "SELECT name FROM sqlite_master WHERE type='table' AND name='route';" || true)"
if [[ -z "$has_route" ]]; then
log_error "route table not found in manifest (unexpected schema): ${fl}"
return 1
fi
log_info "Updating route.url to: ${RFS_ROUTE_URL}"
sqlite3 "$fl" "UPDATE route SET url='${RFS_ROUTE_URL}';"
log_info "Patched route URL in manifest: ${fl}"
}
# Packaging helpers
# -----------------------------------------------------------------------------
# Ensure output directory exists and echo final manifest path
# Args:
# $1 = basename for manifest (e.g., modules-6.12.44-Zero-OS.fl)
rfs_common_prepare_output() {
local base="$1"
local outdir="${PROJECT_ROOT}/dist/flists"
mkdir -p "$outdir"
echo "${outdir}/${base}"
}
# Sanitize firmware tag or generate date-based tag (YYYYMMDD)
rfs_common_firmware_tag() {
local tag="${FIRMWARE_TAG:-}"
if [[ -n "$tag" ]]; then
# Replace path-unfriendly chars
tag="${tag//[^A-Za-z0-9._-]/_}"
echo "$tag"
else
date -u +%Y%m%d
fi
}
# If executed directly, show a quick status summary
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
log_info "rfs-common self-check..."
rfs_common_load_build_kernel_version
rfs_common_load_rfs_s3_config
rfs_common_build_s3_store_uri
rfs_common_locate_rfs
rfs_common_locate_modules_dir "${FULL_KERNEL_VERSION}"
rfs_common_validate_modules_metadata
rfs_common_locate_firmware_dir
log_info "All checks passed."
log_info "FULL_KERNEL_VERSION=${FULL_KERNEL_VERSION}"
log_info "RFS_S3_STORE_URI=${RFS_S3_STORE_URI}"
log_info "MODULES_DIR=${MODULES_DIR}"
log_info "FIRMWARE_DIR=${FIRMWARE_DIR}"
log_info "RFS_BIN=${RFS_BIN}"
fi

scripts/rfs/pack-firmware.sh (new executable file, 79 lines)

@@ -0,0 +1,79 @@
#!/bin/bash
# Pack firmware tree into an RFS flist, upload blobs to S3 (Garage), and patch the manifest route URL for read-only access.
# - Computes FULL_KERNEL_VERSION from configs (not strictly needed for firmware, but kept uniform)
# - Selects firmware directory with priority:
# 1) $PROJECT_ROOT/firmware
# 2) $PROJECT_ROOT/initramfs/lib/firmware
# 3) /lib/firmware
# - Manifest name: firmware-<FIRMWARE_TAG or YYYYMMDD>.fl
# - Uploads blobs to S3 (Garage) via rfs store URI
# - Patches .fl sqlite stores table to use WEB_ENDPOINT for read-only fetches
# - Optionally uploads the .fl manifest to S3 manifests/ using the MinIO client (mcli/mc)
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
section() { echo -e "\n==== $* ====\n"; }
section "Loading configuration (kernel + RFS S3) and locating rfs"
# Kernel version is computed for consistency/logging (not required to pack firmware)
rfs_common_load_build_kernel_version
rfs_common_load_rfs_s3_config
rfs_common_build_s3_store_uri
rfs_common_locate_rfs
section "Locating firmware directory"
rfs_common_locate_firmware_dir
TAG="$(rfs_common_firmware_tag)"
MANIFEST_NAME="firmware-${TAG}.fl"
MANIFEST_PATH="$(rfs_common_prepare_output "${MANIFEST_NAME}")"
section "Packing firmware to flist"
log_info "Firmware dir: ${FIRMWARE_DIR}"
log_info "rfs pack -m ${MANIFEST_PATH} -s ${RFS_S3_STORE_URI} ${FIRMWARE_DIR}"
safe_execute "${RFS_BIN}" pack -m "${MANIFEST_PATH}" -s "${RFS_S3_STORE_URI}" "${FIRMWARE_DIR}"
# Patch manifest route URL to include read-only S3 credentials (Garage)
section "Updating route URL in manifest to include read-only S3 credentials"
rfs_common_build_route_url
rfs_common_patch_flist_route_url "${MANIFEST_PATH}"
# Patch manifest stores to HTTPS web endpoint if provided
if [[ -n "${WEB_ENDPOINT:-}" ]]; then
section "Patching manifest stores to HTTPS web endpoint"
log_info "Patching ${MANIFEST_PATH} stores to: ${WEB_ENDPOINT} (keep_s3_fallback=${KEEP_S3_FALLBACK:-false})"
rfs_common_patch_flist_stores "${MANIFEST_PATH}" "${WEB_ENDPOINT}" "${KEEP_S3_FALLBACK:-false}"
else
log_warn "WEB_ENDPOINT not set in config; manifest will reference only s3:// store"
fi
# Optional: upload .fl manifest to Garage via MinIO Client (separate from blobs)
if [[ "${UPLOAD_MANIFESTS:-false}" == "true" ]]; then
section "Uploading manifest .fl via MinIO Client to S3 manifests/"
# Support both mcli (new) and mc (legacy) binaries
if command -v mcli >/dev/null 2>&1; then
MCLI_BIN="mcli"
elif command -v mc >/dev/null 2>&1; then
MCLI_BIN="mc"
else
log_warn "MinIO Client not found (expected mcli or mc); skipping manifest upload"
MCLI_BIN=""
fi
if [[ -n "${MCLI_BIN}" ]]; then
local_subpath="${MANIFESTS_SUBPATH:-manifests}"
# Configure alias and upload using MinIO client
safe_execute "${MCLI_BIN}" alias set rfs "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
mcli_dst="rfs/${S3_BUCKET}/${S3_PREFIX%/}/${local_subpath%/}/${MANIFEST_NAME}"
log_info "${MCLI_BIN} cp ${MANIFEST_PATH} ${mcli_dst}"
safe_execute "${MCLI_BIN}" cp "${MANIFEST_PATH}" "${mcli_dst}"
fi
else
log_info "UPLOAD_MANIFESTS=false; skipping manifest upload"
fi
section "Done"
log_info "Manifest: ${MANIFEST_PATH}"

scripts/rfs/pack-modules.sh (new executable file, 73 lines)

@@ -0,0 +1,73 @@
#!/bin/bash
# Pack kernel modules into an RFS flist, upload blobs to S3 (Garage), and patch the manifest route URL for read-only access.
# - Computes FULL_KERNEL_VERSION from configs (never uses uname -r)
# - Packs /lib/modules/<FULL_KERNEL_VERSION> (or fallback paths) to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
# - Uploads blobs to S3 (Garage) via rfs store URI
# - Patches .fl sqlite stores table to use WEB_ENDPOINT for read-only fetches
# - Optionally uploads the .fl manifest to S3 manifests/ using the MinIO client (mcli/mc)
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
section() { echo -e "\n==== $* ====\n"; }
section "Loading configuration and computing kernel version"
rfs_common_load_build_kernel_version
rfs_common_load_rfs_s3_config
rfs_common_build_s3_store_uri
rfs_common_locate_rfs
section "Locating modules directory for ${FULL_KERNEL_VERSION}"
rfs_common_locate_modules_dir "${FULL_KERNEL_VERSION}"
rfs_common_validate_modules_metadata
MANIFEST_NAME="modules-${FULL_KERNEL_VERSION}.fl"
MANIFEST_PATH="$(rfs_common_prepare_output "${MANIFEST_NAME}")"
section "Packing modules to flist"
log_info "rfs pack -m ${MANIFEST_PATH} -s ${RFS_S3_STORE_URI} ${MODULES_DIR}"
safe_execute "${RFS_BIN}" pack -m "${MANIFEST_PATH}" -s "${RFS_S3_STORE_URI}" "${MODULES_DIR}"
# Patch manifest route URL to include read-only S3 credentials (Garage)
section "Updating route URL in manifest to include read-only S3 credentials"
rfs_common_build_route_url
rfs_common_patch_flist_route_url "${MANIFEST_PATH}"
# Patch manifest stores to HTTPS web endpoint if provided
if [[ -n "${WEB_ENDPOINT:-}" ]]; then
section "Patching manifest stores to HTTPS web endpoint"
log_info "Patching ${MANIFEST_PATH} stores to: ${WEB_ENDPOINT} (keep_s3_fallback=${KEEP_S3_FALLBACK:-false})"
rfs_common_patch_flist_stores "${MANIFEST_PATH}" "${WEB_ENDPOINT}" "${KEEP_S3_FALLBACK:-false}"
else
log_warn "WEB_ENDPOINT not set in config; manifest will reference only s3:// store"
fi
# Optional: upload .fl manifest to Garage via MinIO Client (separate from blobs)
if [[ "${UPLOAD_MANIFESTS:-false}" == "true" ]]; then
section "Uploading manifest .fl via MinIO Client to S3 manifests/"
# Support both mcli (new) and mc (legacy) binaries
if command -v mcli >/dev/null 2>&1; then
MCLI_BIN="mcli"
elif command -v mc >/dev/null 2>&1; then
MCLI_BIN="mc"
else
log_warn "MinIO Client not found (expected mcli or mc); skipping manifest upload"
MCLI_BIN=""
fi
if [[ -n "${MCLI_BIN}" ]]; then
local_subpath="${MANIFESTS_SUBPATH:-manifests}"
# Configure alias and upload using MinIO client
safe_execute "${MCLI_BIN}" alias set rfs "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
mcli_dst="rfs/${S3_BUCKET}/${S3_PREFIX%/}/${local_subpath%/}/${MANIFEST_NAME}"
log_info "${MCLI_BIN} cp ${MANIFEST_PATH} ${mcli_dst}"
safe_execute "${MCLI_BIN}" cp "${MANIFEST_PATH}" "${mcli_dst}"
fi
else
log_info "UPLOAD_MANIFESTS=false; skipping manifest upload"
fi
section "Done"
log_info "Manifest: ${MANIFEST_PATH}"

scripts/rfs/patch-stores.sh (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/bin/bash
# Wrapper to patch an .fl manifest's stores to use an HTTPS web endpoint.
# Usage:
# ./scripts/rfs/patch-stores.sh dist/flists/modules-6.12.44-Zero-OS.fl https://hub.grid.tf/zos/zosbuilder/store [keep_s3_fallback]
#
# keep_s3_fallback: "true" to keep existing s3:// store rows as fallback; default "false"
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
if [[ $# -lt 2 ]]; then
echo "Usage: $0 /path/to/file.fl https://web.endpoint/base [keep_s3_fallback]" >&2
exit 1
fi
FL="$1"
WEB="$2"
KEEP="${3:-false}"
rfs_common_patch_flist_stores "${FL}" "${WEB}" "${KEEP}"
echo "[INFO] Patched stores in: ${FL}"

scripts/rfs/verify-flist.sh (new executable file, 65 lines)

@@ -0,0 +1,65 @@
#!/bin/bash
# Verify an RFS flist (.fl) by inspecting, listing tree, and optional mount test.
# Usage:
# ./scripts/rfs/verify-flist.sh /path/to/foo.fl
# ./scripts/rfs/verify-flist.sh /path/to/foo.fl --mount
#
# Notes:
# - Requires the rfs binary (PATH or components fallback).
# - --mount performs a temporary mount (needs FUSE and privileges).
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
section() { echo -e "\n==== $* ====\n"; }
if [[ $# -lt 1 ]]; then
echo "Usage: $0 /path/to/foo.fl [--mount]"
exit 1
fi
FL_PATH="$1"
DO_MOUNT="${2:-}"
if [[ ! -f "$FL_PATH" ]]; then
echo "[ERROR] flist not found: ${FL_PATH}" >&2
exit 1
fi
section "Locating rfs binary"
rfs_common_locate_rfs
section "Inspect flist"
safe_execute "${RFS_BIN}" flist inspect "${FL_PATH}" || true
section "Tree (first 100 entries)"
safe_execute "${RFS_BIN}" flist tree "${FL_PATH}" | head -n 100 || true
if [[ "$DO_MOUNT" == "--mount" ]]; then
section "Attempting temporary mount"
MNT="$(mktemp -d /tmp/rfs-mnt-XXXXXX)"
cleanup() {
set +e
if mountpoint -q "${MNT}" 2>/dev/null; then
echo "[INFO] Unmounting ${MNT}"
fusermount -u "${MNT}" 2>/dev/null || umount "${MNT}" 2>/dev/null || true
fi
rmdir "${MNT}" 2>/dev/null || true
}
trap cleanup EXIT INT TERM
echo "[INFO] Mountpoint: ${MNT}"
# Try mount; some environments require sudo or have limited FUSE. Best-effort.
if "${RFS_BIN}" mount -m "${FL_PATH}" "${MNT}"; then
echo "[INFO] Mounted. Listing top-level entries:"
ls -la "${MNT}" | head -n 50 || true
else
echo "[WARN] rfs mount failed (FUSE/permissions?). Skipping mount verification." >&2
fi
fi
section "Done"
echo "[INFO] Verified flist: ${FL_PATH}"