Compare commits
4 commits in range b0e41b59b1 ... 7cef73368b

| SHA1 |
|---|
| 7cef73368b |
| 2d43005b07 |
| cd63506d3c |
| 04216b7f8f |

PROMPT.md (17 changed lines)
@@ -30,11 +30,22 @@ Partitioning Requirements
- Before making changes, verify the device has no existing partitions or filesystem signatures; abort otherwise.

Filesystem Provisioning
- All data mounts are placed somewhere under `/var/cache`. Precise mountpoints and subvolume strategies are configurable.
- Mount scheme and subvolumes (see the sketch at the end of this section):
  * Root mounts for each data filesystem at `/var/mounts/{UUID}` (runtime only). For btrfs root, use `-o subvolid=5`; for bcachefs root, no subdir option.
  * Create or ensure subvolumes on the primary data filesystem with names: `system`, `etc`, `modules`, `vm-meta`.
  * Mount subvolumes to final targets:
    - `/var/cache/system`
    - `/var/cache/etc`
    - `/var/cache/modules`
    - `/var/cache/vm-meta`
  * Use UUID= sources for all mounts (never device paths).
  * Subvolume options:
    - btrfs: `-o subvol={name},noatime`
    - bcachefs: `-o X-mount.subdir={name},noatime`
- Supported backends:
  * Single disk: default to `btrfs`, label `ZOSDATA`.
  * Two disks/NVMe: default to individual `btrfs` filesystems per disk, each labeled `ZOSDATA`, mounted under `/var/cache/<UUID>` (exact path pattern TBD). Optional support for `btrfs` RAID1 or `bcachefs` RAID1 if requested.
  * Mixed SSD/NVMe + HDD: default to `bcachefs` with SSD as cache/promote and HDD as backing store, label resulting filesystem `ZOSDATA`. Alternative mode: separate `btrfs` per device (label `ZOSDATA`).
  * Two disks/NVMe (dual_independent): default to independent `btrfs` per disk, each labeled `ZOSDATA`; root-mount all under `/var/mounts/{UUID}`, pick the first data FS as primary for final subvol mounts.
  * Mixed SSD/NVMe + HDD: default to `bcachefs` with SSD as cache/promote and HDD as backing store, resulting FS labeled `ZOSDATA`. Alternative mode: separate `btrfs` per device (label `ZOSDATA`).
- Reserved filesystem labels: `ZOSBOOT` (ESP), `ZOSDATA` (all data filesystems). GPT partition names: `zosboot` (bios_boot and ESP), `zosdata` (data), `zoscache` (cache).
- Filesystem tuning options (compression, RAID profile, etc.) must be configurable; define sensible defaults and provide extension points.
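As a concrete sketch of the mount scheme above (the UUID value is a placeholder, not part of this change), the runtime sequence for a btrfs primary reduces to:

```
# root mount of a data filesystem (runtime only, never written to fstab)
mount -t btrfs -o rw,noatime,subvolid=5 UUID=<data-fs-uuid> /var/mounts/<data-fs-uuid>

# final subvolume mounts, always UUID= sources
mount -t btrfs -o rw,noatime,subvol=system UUID=<data-fs-uuid> /var/cache/system

# bcachefs uses X-mount.subdir= instead of subvol=
mount -t bcachefs -o rw,noatime,X-mount.subdir=system UUID=<data-fs-uuid> /var/cache/system
```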
README.md (20 changed lines)

@@ -24,7 +24,7 @@ Key modules
- [src/mount/ops.rs](src/mount/ops.rs)

Features at a glance
- Topology-driven planning with built-in defaults: Single, DualIndependent, BtrfsRaid1, SsdHddBcachefs
- Topology-driven planning with built-in defaults: BtrfsSingle, BcachefsSingle, DualIndependent, Bcachefs2Copy, BtrfsRaid1, SsdHddBcachefs
- Non-destructive preview: --show/--report outputs JSON summary (disks, partition plan, filesystems, planned mountpoints)
- Safe discovery: excludes removable media by default (USB sticks) unless explicitly allowed
- Config-optional: the tool runs without any YAML; sensible defaults are always present and may be overridden/merged by config
@@ -45,7 +45,7 @@ Binary is target/release/zosstorage.

CLI usage
- Topology selection (config optional):
  -t, --topology single|dual-independent|btrfs-raid1|ssd-hdd-bcachefs
  -t, --topology btrfs-single|bcachefs-single|dual-independent|bcachefs-2copy|btrfs-raid1|ssd-hdd-bcachefs
- Preview (non-destructive):
  --show         Print JSON summary to stdout
  --report PATH  Write JSON summary to a file

@@ -61,7 +61,7 @@ CLI usage

Examples
- Single disk plan with debug logs:
  sudo ./zosstorage --show -t single -l debug
  sudo ./zosstorage --show -t btrfs-single -l debug
- RAID1 btrfs across two disks; print and write summary:
  sudo ./zosstorage --show --report /run/zosstorage/plan.json -t btrfs-raid1 -l debug -L
- SSD+HDD bcachefs plan, include removable devices (for lab cases):
@@ -135,8 +135,18 @@ Defaults and policies
  btrfs (data) label: ZOSDATA
  bcachefs (data/cache) label: ZOSDATA
- Mount scheme:
  per-UUID under /var/cache/{UUID}
  /etc/fstab generation is disabled by default
- Root mounts (runtime only): each data filesystem is mounted at /var/mounts/{UUID}
  - btrfs root options: rw,noatime,subvolid=5
  - bcachefs root options: rw,noatime
- Subvolume mounts (from the primary data filesystem only) to final targets:
  - /var/cache/system
  - /var/cache/etc
  - /var/cache/modules
  - /var/cache/vm-meta
- Subvolume mount options:
  - btrfs: -o rw,noatime,subvol={name}
  - bcachefs: -o rw,noatime,X-mount.subdir={name}
- /etc/fstab generation is disabled by default; when enabled, only the four subvolume mounts are written (UUID= sources, deterministic order); a sample is sketched below
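When fstab generation is enabled, the four entries would take roughly this shape, sorted by target for determinism (the UUID is a placeholder):

```
UUID=<primary-uuid> /var/cache/etc     btrfs rw,noatime,subvol=etc     0 0
UUID=<primary-uuid> /var/cache/modules btrfs rw,noatime,subvol=modules 0 0
UUID=<primary-uuid> /var/cache/system  btrfs rw,noatime,subvol=system  0 0
UUID=<primary-uuid> /var/cache/vm-meta btrfs rw,noatime,subvol=vm-meta 0 0
```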

Tracing and logs
- stderr logging level controlled by -l/--log-level (info by default)
TODO.md (34 changed lines)

@@ -9,18 +9,30 @@ Conventions:

Core execution

- [ ] Add "apply mode" switch to orchestrator to perform destructive actions after preview validation
  - Wire phase execution in [orchestrator.run(&Context)](src/orchestrator/run.rs:101): apply partitions → udev settle → mkfs → mount → maybe write fstab → build/write report
  - Introduce a CLI flag (e.g. `--apply`) guarded by clear logs and safety checks (not preview)
- [ ] Partition application (destructive) in [fn apply_partitions(...)](src/partition/plan.rs:287)
  - Translate [PartitionPlan](src/partition/plan.rs:80) to sgdisk commands (create GPT, partitions in order with alignment and names)
  - Enforce idempotency: skip if table already matches plan (or abort with explicit validation error)
  - Ensure unique partition GUIDs; capture partition device paths and GUIDs for results
  - Call [util::udev_settle()](src/util/mod.rs:128) after changes; robust error mapping to Error::Tool / Error::Partition
- [-] Add "apply mode" switch to orchestrator to perform destructive actions after preview validation
  - [x] Introduce CLI flag --apply guarded by clear logs and safety checks (not preview) [src/cli/args.rs](src/cli/args.rs)
  - [x] Wire partition application and udev settle [orchestrator::run()](src/orchestrator/run.rs:1) → [partition::apply_partitions()](src/partition/plan.rs:1)
  - [-] Wire mkfs → mount → maybe write fstab → build/write report [src/orchestrator/run.rs](src/orchestrator/run.rs)
    - [x] Wire mkfs: plan_filesystems + make_filesystems [src/orchestrator/run.rs](src/orchestrator/run.rs) + [src/fs/plan.rs](src/fs/plan.rs)
    - [ ] Wire mounts (plan/apply) [src/mount/ops.rs](src/mount/ops.rs)
    - [ ] maybe write fstab [src/mount/ops.rs](src/mount/ops.rs)
    - [ ] build/write report [src/report/state.rs](src/report/state.rs)
- [x] Partition application (destructive) in [partition::apply_partitions()](src/partition/plan.rs:1)
- [x] Boot mode detection and BIOS boot policy
  - [x] Implement UEFI detection via /sys/firmware/efi: [is_efi_boot()](src/util/mod.rs:151)
  - [x] Planner skips BIOS boot partition when UEFI-booted: [partition::plan_partitions()](src/partition/plan.rs:133)
  - [ ] Future: revisit bootblock/bootloader specifics for BIOS vs EFI (confirm if any BIOS targets require bios_boot) [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md)
  - [x] Translate [PartitionPlan](src/partition/plan.rs:1) to sgdisk commands (create GPT, partitions in order with alignment and names)
  - [x] Enforce idempotency when required via [idempotency::is_empty_disk()](src/idempotency/mod.rs:1); abort on non-empty
  - [x] Capture partition GUIDs, names, device paths via sgdisk -i parsing; map to PartitionResult
  - [x] Call [util::udev_settle()](src/util/mod.rs:1) after changes; consistent Error::Tool/Error::Partition mapping
- [-] Filesystem creation in [fn make_filesystems(...)](src/fs/plan.rs:182)
  - [x] Base mkfs implemented for vfat/btrfs/bcachefs (UUID capture via blkid)
  - [ ] Apply btrfs raid profile from config (e.g., `-m raid1 -d raid1`) for [Topology::BtrfsRaid1](src/types.rs:29) and the desired profile in [struct BtrfsOptions](src/types.rs:89)
  - [ ] Optionally map compression options for btrfs and bcachefs from config (e.g., `-O compress=zstd:3` or format-equivalent)
  - [x] Apply btrfs RAID profile when topology requires it (multi-device): pass -m raid1 -d raid1 in mkfs.btrfs [src/fs/plan.rs](src/fs/plan.rs)
  - [x] Force mkfs.btrfs in apply path with -f to handle leftover signatures from partial runs [src/fs/plan.rs](src/fs/plan.rs)
  - [ ] Compression/tuning mapping from config
    - [ ] btrfs: apply compression as mount options during mounting phase [src/mount/ops.rs](src/mount/ops.rs)
    - [ ] bcachefs: map compression/checksum/cache_mode to format flags (deferred) [src/fs/plan.rs](src/fs/plan.rs)
  - [ ] Consider verifying UUID consistency across multi-device filesystems and improve error messages
- [ ] Mount planning and application in [mount::ops](src/mount/ops.rs:1)
  - [ ] Implement [fn plan_mounts(...)](src/mount/ops.rs:68): map FsResult UUIDs into `/var/cache/{UUID}` using [cfg.mount.base_dir](src/types.rs:136), and synthesize options per FS kind
@@ -44,7 +56,7 @@ CLI, config, defaults
- [x] Built-in sensible defaults (no YAML required) [src/config/loader.rs](src/config/loader.rs:320)
- [x] Overlays from CLI: log level, file logging, fstab, removable policy, topology [src/config/loader.rs](src/config/loader.rs:247)
- [x] Preview flags (`--show`, `--report`) and topology selection (`-t/--topology`) [src/cli/args.rs](src/cli/args.rs:55)
- [ ] Add `--apply` flag to toggle execute mode and keep preview non-destructive by default [src/cli/args.rs](src/cli/args.rs:55)
- [x] Add `--apply` flag to toggle execute mode and keep preview non-destructive by default [src/cli/args.rs](src/cli/args.rs)
- [ ] Consider environment variable overlays [src/config/loader.rs](src/config/loader.rs:39)
- [ ] Consider hidden/dev flags behind features (e.g., `--dry-run-verbose`, `--trace-io`) [src/cli/args.rs](src/cli/args.rs:26)
@@ -72,7 +72,7 @@ Configuration types
- [struct Config](src/types.rs:1)
  - The validated configuration used by the orchestrator, containing logging, device selection rules, topology, partitioning, filesystem options, mount scheme, and report path.
- [enum Topology](src/types.rs:1)
  - Values: single, dual_independent, ssd_hdd_bcachefs, btrfs_raid1 (opt-in).
  - Values: btrfs_single, bcachefs_single, dual_independent, bcachefs_2copy, ssd_hdd_bcachefs, btrfs_raid1 (opt-in).
- [struct DeviceSelection](src/types.rs:1)
  - Include and exclude regex patterns, minimum size, removable policy.
- [struct Partitioning](src/types.rs:1)
@@ -81,9 +81,11 @@ report:
```

Topology modes
- single: One eligible disk. Create BIOS boot (if enabled), ESP 512 MiB, remainder as data. Make a btrfs filesystem labeled ZOSDATA on the data partition.
- dual_independent: Two eligible disks. On each disk, create BIOS boot (if enabled) + ESP + data. Create a separate btrfs filesystem labeled ZOSDATA on each data partition. No RAID by default.
- ssd_hdd_bcachefs: One SSD/NVMe and one HDD. Create BIOS boot (if enabled) + ESP on both as required. Create cache (on SSD) and data/backing (on HDD) partitions named zoscache and zosdata respectively. Make a bcachefs filesystem across both with label ZOSDATA, using SSD as cache/promote and HDD as backing.
- btrfs_single: One eligible disk. Create BIOS boot (if enabled), ESP 512 MiB, remainder as data. Create a btrfs filesystem labeled ZOSDATA on the data partition.
- bcachefs_single: One eligible disk. Create BIOS boot (if enabled), ESP 512 MiB, remainder as data. Create a bcachefs filesystem labeled ZOSDATA on the data partition.
- dual_independent: Two eligible disks. On each disk, create BIOS boot (if enabled) + ESP + data. Create an independent btrfs filesystem labeled ZOSDATA on each data partition. No RAID by default.
- bcachefs_2copy: Two eligible disks. Create data partitions on both, then create a single multi-device bcachefs labeled ZOSDATA spanning the data partitions (two-copies semantics to be tuned via mkfs options in a follow-up).
- ssd_hdd_bcachefs: One SSD/NVMe and one HDD. Create BIOS boot (if enabled) + ESP on both as required. Create cache (on SSD) and data/backing (on HDD) partitions named zoscache and zosdata respectively. Create a bcachefs labeled ZOSDATA across SSD and HDD per policy (SSD cache/promote; HDD backing).
- btrfs_raid1: Optional mode if explicitly requested. Create mirrored btrfs across two disks for the data role with raid1 profile. Not enabled by default.
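Each mode can be previewed non-destructively with the CLI flags shown in README.md, for example:

```
sudo ./zosstorage --show -t bcachefs-single -l debug
sudo ./zosstorage --show -t bcachefs-2copy -l debug
sudo ./zosstorage --show --report /run/zosstorage/plan.json -t ssd-hdd-bcachefs -l debug
```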

Validation rules
@@ -51,27 +51,8 @@ impl std::fmt::Display for LogLevelArg {
    }
}

/// Topology argument (maps to config Topology with snake_case semantics).
#[derive(Debug, Clone, Copy, ValueEnum)]
#[value(rename_all = "kebab_case")]
pub enum TopologyArg {
    Single,
    DualIndependent,
    SsdHddBcachefs,
    BtrfsRaid1,
}

impl std::fmt::Display for TopologyArg {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let s = match self {
            TopologyArg::Single => "single",
            TopologyArg::DualIndependent => "dual_independent",
            TopologyArg::SsdHddBcachefs => "ssd_hdd_bcachefs",
            TopologyArg::BtrfsRaid1 => "btrfs_raid1",
        };
        f.write_str(s)
    }
}
//// Using crate::types::Topology (ValueEnum) directly for CLI parsing to avoid duplication.
// TopologyArg enum removed; CLI field uses crate::types::Topology

/// zosstorage - one-shot disk initializer for initramfs.
#[derive(Debug, Parser)]

@@ -95,7 +76,7 @@ pub struct Cli {

    /// Select topology (overrides config topology)
    #[arg(short = 't', long = "topology", value_enum)]
    pub topology: Option<TopologyArg>,
    pub topology: Option<crate::types::Topology>,

    /// Present but non-functional; returns unimplemented error
    #[arg(short = 'f', long = "force")]

@@ -113,6 +94,10 @@ pub struct Cli {
    /// Write detection/planning JSON report to the given path (overrides config.report.path)
    #[arg(long = "report")]
    pub report: Option<String>,

    /// Execute destructive actions (apply mode). When false, runs preview-only.
    #[arg(long = "apply", default_value_t = false)]
    pub apply: bool,
}

/// Parse CLI arguments (non-interactive; suitable for initramfs).
@@ -187,9 +187,11 @@ pub fn validate(cfg: &Config) -> Result<()> {

    // Topology-specific quick checks (basic for now)
    match cfg.topology {
        Topology::Single => {} // nothing special
        Topology::BtrfsSingle => {} // nothing special
        Topology::BcachefsSingle => {}
        Topology::DualIndependent => {}
        Topology::SsdHddBcachefs => {}
        Topology::Bcachefs2Copy => {}
        Topology::BtrfsRaid1 => {
            // No enforced requirement here beyond presence of two disks at runtime.
            if cfg.filesystem.btrfs.raid_profile != "raid1" && cfg.filesystem.btrfs.raid_profile != "none" {
@@ -269,8 +271,8 @@ fn cli_overlay_value(cli: &Cli) -> Value {
        root.insert("device_selection".into(), Value::Object(device_selection));
    }

    // topology override via --topology
    if let Some(t) = cli.topology {
    // topology override via --topology (avoid moving out of borrowed field)
    if let Some(t) = cli.topology.as_ref() {
        root.insert("topology".into(), Value::String(t.to_string()));
    }
@@ -352,7 +354,7 @@ fn default_config() -> Config {
            allow_removable: false,
            min_size_gib: 10,
        },
        topology: Topology::Single,
        topology: Topology::BtrfsSingle,
        partitioning: Partitioning {
            alignment_mib: 1,
            require_empty_disks: true,
@@ -4,7 +4,7 @@
// api: fs::FsPlan { specs: Vec<FsSpec> }
// api: fs::FsResult { kind: FsKind, devices: Vec<String>, uuid: String, label: String }
// api: fs::plan_filesystems(parts: &[crate::partition::PartitionResult], cfg: &crate::config::types::Config) -> crate::Result<FsPlan>
// api: fs::make_filesystems(plan: &FsPlan) -> crate::Result<Vec<FsResult>>
// api: fs::make_filesystems(plan: &FsPlan, cfg: &crate::types::Config) -> crate::Result<Vec<FsResult>>
// REGION: API-END
//
// REGION: RESPONSIBILITIES
@@ -18,20 +18,21 @@
// ext: dry-run mode to emit mkfs commands without executing (future).
// REGION: EXTENSION_POINTS-END
//
// REGION: SAFETY
// safety: must not run mkfs on non-empty or unexpected partitions; assume prior validation enforced.
// safety: ensure labels follow reserved semantics (ZOSBOOT for ESP, ZOSDATA for all data FS).
// REGION: SAFETY-END
// REGION: SAFETY
// safety: must not run mkfs on non-empty or unexpected partitions; assume prior validation enforced.
// safety: ensure labels follow reserved semantics (ZOSBOOT for ESP, ZOSDATA for all data FS).
// safety: mkfs.btrfs uses -f in apply path immediately after partitioning to handle leftover signatures.
// REGION: SAFETY-END
//
// REGION: ERROR_MAPPING
// errmap: external mkfs/blkid failures -> crate::Error::Tool with captured stderr.
// errmap: planning mismatches -> crate::Error::Filesystem with context.
// REGION: ERROR_MAPPING-END
//
// REGION: TODO
// todo: implement mapping of topology to FsSpec including bcachefs cache/backing composition.
// todo: implement mkfs invocation and UUID capture via util::run_cmd / util::run_cmd_capture.
// REGION: TODO-END
// REGION: TODO
// todo: bcachefs tuning flags mapping from config (compression/checksum/cache_mode) deferred
// todo: add UUID consistency checks across multi-device filesystems
// REGION: TODO-END
//! Filesystem planning and creation for zosstorage.
//!
//! Maps partition results to concrete filesystems (vfat, btrfs, bcachefs)
@@ -151,8 +152,36 @@ pub fn plan_filesystems(
                label: cfg.filesystem.btrfs.label.clone(),
            });
        }
        _ => {
            // Map each Data partition to individual Btrfs filesystems.
        Topology::Bcachefs2Copy => {
            // Group all Data partitions into a single Bcachefs filesystem across multiple devices (2-copy semantics).
            let data_devs: Vec<String> = parts
                .iter()
                .filter(|p| matches!(p.role, PartRole::Data))
                .map(|p| p.device_path.clone())
                .collect();
            if data_devs.len() < 2 {
                return Err(Error::Filesystem(
                    "Bcachefs2Copy topology requires at least 2 data partitions".to_string(),
                ));
            }
            specs.push(FsSpec {
                kind: FsKind::Bcachefs,
                devices: data_devs,
                label: cfg.filesystem.bcachefs.label.clone(),
            });
        }
        Topology::BcachefsSingle => {
            // Single-device bcachefs on the sole Data partition.
            let data = parts.iter().find(|p| matches!(p.role, PartRole::Data))
                .ok_or_else(|| Error::Filesystem("expected a Data partition for BcachefsSingle topology".to_string()))?;
            specs.push(FsSpec {
                kind: FsKind::Bcachefs,
                devices: vec![data.device_path.clone()],
                label: cfg.filesystem.bcachefs.label.clone(),
            });
        }
        Topology::BtrfsSingle | Topology::DualIndependent => {
            // Map Data partition(s) to Btrfs (single device per partition for DualIndependent).
            for p in parts.iter().filter(|p| matches!(p.role, PartRole::Data)) {
                specs.push(FsSpec {
                    kind: FsKind::Btrfs,
@@ -177,7 +206,7 @@ pub fn plan_filesystems(
/// - This initial implementation applies labels and creates filesystems with minimal flags.
/// - Btrfs RAID profile (e.g., raid1) will be applied in a follow-up by mapping config to mkfs flags.
/// - UUID is captured via blkid -o export on the first device of each spec.
pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
pub fn make_filesystems(plan: &FsPlan, cfg: &Config) -> Result<Vec<FsResult>> {
    // Discover required tools up-front
    let vfat_tool = which_tool("mkfs.vfat")?;
    let btrfs_tool = which_tool("mkfs.btrfs")?;
@@ -220,8 +249,25 @@ pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
                if spec.devices.is_empty() {
                    return Err(Error::Filesystem("btrfs requires at least one device".into()));
                }
                // mkfs.btrfs -L LABEL dev1 [dev2 ...]
                // mkfs.btrfs -L LABEL [ -m raid1 -d raid1 (when multi-device/raid1) ] dev1 [dev2 ...]
                let mut args: Vec<String> = vec![mkfs.clone(), "-L".into(), spec.label.clone()];

                // If this Btrfs is multi-device (as planned in BtrfsRaid1 topology),
                // set metadata/data profiles to raid1. This keeps plan/apply consistent.
                if spec.devices.len() >= 2 {
                    args.push("-m".into());
                    args.push("raid1".into());
                    args.push("-d".into());
                    args.push("raid1".into());
                }

                // Note: compression is a mount-time option for btrfs; we will apply it in mount phase.
                // Leaving mkfs-time compression unset by design.

                // Force formatting in apply path to avoid leftover signatures on freshly created partitions.
                // Safe because we just created these partitions in this run.
                args.push("-f".into());

                args.extend(spec.devices.iter().cloned());
                let args_ref: Vec<&str> = args.iter().map(|s| s.as_str()).collect();
                run_cmd(&args_ref)?;

@@ -240,11 +286,15 @@ pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
                let Some(ref mkfs) = bcachefs_tool else {
                    return Err(Error::Filesystem("bcachefs not found in PATH".into()));
                };
                if spec.devices.len() < 2 {
                    return Err(Error::Filesystem("bcachefs requires at least two devices (cache + backing)".into()));
                if spec.devices.is_empty() {
                    return Err(Error::Filesystem("bcachefs requires at least one device".into()));
                }
                // bcachefs format --label LABEL dev_cache dev_backing ...
                // bcachefs format --label LABEL [--replicas=2] dev1 [dev2 ...]
                // Apply replicas policy for Bcachefs2Copy topology (data+metadata replicas = 2)
                let mut args: Vec<String> = vec![mkfs.clone(), "format".into(), "--label".into(), spec.label.clone()];
                if matches!(cfg.topology, Topology::Bcachefs2Copy) {
                    args.push("--replicas=2".into());
                }
                args.extend(spec.devices.iter().cloned());
                let args_ref: Vec<&str> = args.iter().map(|s| s.as_str()).collect();
                run_cmd(&args_ref)?;
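Given the argument assembly above, the resulting invocations take these shapes (device paths are placeholders): a multi-device btrfs spec gets forced raid1 profiles plus -f, and a Bcachefs2Copy spec gains the replicas flag:

```
mkfs.btrfs -L ZOSDATA -m raid1 -d raid1 -f /dev/sda3 /dev/sdb3
bcachefs format --label ZOSDATA --replicas=2 /dev/sda3 /dev/sdb1
```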
@@ -51,6 +51,7 @@ fn real_main() -> Result<()> {

    let ctx = orchestrator::Context::new(cfg, log_opts)
        .with_show(cli.show)
        .with_apply(cli.apply)
        .with_report_path(cli.report.clone());
    orchestrator::run(&ctx)
}
src/mount/ops.rs (364 changed lines)

@@ -1,85 +1,355 @@
// REGION: API
// api: mount::MountPlan { entries: Vec<(String, String, String, String)> }
// note: tuple order = (source, target, fstype, options)
// REGION: API — one-liners for plan_mounts/apply_mounts/maybe_write_fstab and structs
// api: mount::MountPlan { root_mounts: Vec<PlannedMount>, subvol_mounts: Vec<PlannedSubvolMount>, primary_uuid: Option<String> }
// api: mount::MountResult { source: String, target: String, fstype: String, options: String }
// api: mount::plan_mounts(fs_results: &[crate::fs::FsResult], cfg: &crate::config::types::Config) -> crate::Result<MountPlan>
// api: mount::plan_mounts(fs_results: &[crate::fs::FsResult], cfg: &crate::types::Config) -> crate::Result<MountPlan>
// api: mount::apply_mounts(plan: &MountPlan) -> crate::Result<Vec<MountResult>>
// api: mount::maybe_write_fstab(mounts: &[MountResult], cfg: &crate::config::types::Config) -> crate::Result<()>
// api: mount::maybe_write_fstab(mounts: &[MountResult], cfg: &crate::types::Config) -> crate::Result<()>
// REGION: API-END
//
// REGION: RESPONSIBILITIES
// - Translate filesystem identities to mount targets, defaulting to /var/cache/<UUID>.
// - Perform mounts using syscalls (nix) and create target directories as needed.
// - Optionally generate /etc/fstab entries in deterministic order.
// Non-goals: filesystem creation, device discovery, partitioning.
// - Implement mount phase only: plan root mounts under /var/mounts/{UUID}, ensure/plan subvols, and mount subvols to /var/cache/*.
// - Use UUID= sources, deterministic primary selection (first FsResult) for dual_independent.
// - Generate fstab entries only for four subvol targets; exclude runtime root mounts.
// REGION: RESPONSIBILITIES-END
//
// REGION: EXTENSION_POINTS
// ext: support custom mount scheme mapping beyond per-UUID.
// ext: add configurable mount options per filesystem kind via Config.
// REGION: EXTENSION_POINTS-END
//
// REGION: SAFETY
// safety: must ensure target directories exist and avoid overwriting unintended paths.
// safety: ensure options include sensible defaults (e.g., btrfs compress, ssd) when applicable.
// - Never mount ESP; only Btrfs/Bcachefs data FS. Root btrfs mounts use subvolid=5 (top-level).
// - Create-if-missing subvolumes prior to subvol mounts; ensure directories exist.
// - Always use UUID= sources; no device paths.
// - Bcachefs subvolume mounts use option key 'X-mount.subdir={name}' (not 'subvol=').
// REGION: SAFETY-END
//
// REGION: ERROR_MAPPING
// errmap: syscall failures -> crate::Error::Mount with context.
// errmap: fstab write IO errors -> crate::Error::Mount with path details.
// - External tool failures map to Error::Tool via util::run_cmd/run_cmd_capture.
// - Missing required tools map to Error::Mount with clear explanation.
// REGION: ERROR_MAPPING-END
//
// REGION: TODO
// todo: implement option synthesis (e.g., compress=zstd:3 for btrfs) based on Config and device rotational hints.
// todo: implement deterministic fstab ordering and idempotent writes.
// - Defer compression/SSD options; later map from Config into mount options.
// - Consider validating tool presence up-front for clearer early errors.
// REGION: TODO-END
//! Mount planning and application.
//!
//! Translates filesystem results into mount targets (default under /var/cache/<UUID>)
//! and applies mounts using syscalls (via nix) in later implementation.
//!
//! See [fn plan_mounts](ops.rs:1), [fn apply_mounts](ops.rs:1),
//! and [fn maybe_write_fstab](ops.rs:1).
//! See [fn plan_mounts()](src/mount/ops.rs:1), [fn apply_mounts()](src/mount/ops.rs:1),
//! and [fn maybe_write_fstab()](src/mount/ops.rs:1).
#![allow(dead_code)]

use crate::{Result, types::Config, fs::FsResult};
use crate::{
    fs::{FsKind, FsResult},
    types::Config,
    util::{run_cmd, run_cmd_capture, which_tool},
    Error, Result,
};
use std::fs::{create_dir_all, File};
use std::io::Write;
use std::path::Path;

const ROOT_BASE: &str = "/var/mounts";
const TARGET_SYSTEM: &str = "/var/cache/system";
const TARGET_ETC: &str = "/var/cache/etc";
const TARGET_MODULES: &str = "/var/cache/modules";
const TARGET_VM_META: &str = "/var/cache/vm-meta";
const SUBVOLS: &[&str] = &["system", "etc", "modules", "vm-meta"];

/// Mount plan entries: (source, target, fstype, options)
#[derive(Debug, Clone)]
pub struct MountPlan {
    /// Source device path, target directory, filesystem type, and mount options.
    pub entries: Vec<(String, String, String, String)>,
pub struct PlannedMount {
    pub uuid: String,    // UUID string without prefix
    pub target: String,  // absolute path
    pub fstype: String,  // "btrfs" | "bcachefs"
    pub options: String, // e.g., "rw,noatime,subvolid=5"
}

/// Result of applying a single mount entry.
#[derive(Debug, Clone)]
pub struct PlannedSubvolMount {
    pub uuid: String,    // UUID of primary FS
    pub name: String,    // subvol name (system/etc/modules/vm-meta)
    pub target: String,  // absolute final target
    pub fstype: String,  // "btrfs" | "bcachefs"
    pub options: String, // e.g., "rw,noatime,subvol=system"
}

/// Mount plan per policy.
#[derive(Debug, Clone)]
pub struct MountPlan {
    /// Root mounts under /var/mounts/{UUID} for all data filesystems.
    pub root_mounts: Vec<PlannedMount>,
    /// Four subvol mounts chosen from the primary FS only.
    pub subvol_mounts: Vec<PlannedSubvolMount>,
    /// Primary UUID selection (only data FS; for multiple pick first in input order).
    pub primary_uuid: Option<String>,
}

/// Result of applying a mount (root or subvol).
#[derive(Debug, Clone)]
pub struct MountResult {
    /// Source device path (e.g., /dev/nvme0n1p3).
    /// Source as "UUID=..." (never device paths).
    pub source: String,
    /// Target directory (e.g., /var/cache/<UUID>).
    /// Target directory.
    pub target: String,
    /// Filesystem type (e.g., "btrfs", "vfat").
    /// Filesystem type string.
    pub fstype: String,
    /// Options string (comma-separated).
    /// Options used for the mount.
    pub options: String,
}

/// Build mount plan under /var/cache/<UUID> by default.
fn fstype_str(kind: FsKind) -> &'static str {
    match kind {
        FsKind::Btrfs => "btrfs",
        FsKind::Bcachefs => "bcachefs",
        FsKind::Vfat => "vfat",
    }
}
/// Build mount plan per policy.
pub fn plan_mounts(fs_results: &[FsResult], _cfg: &Config) -> Result<MountPlan> {
    let _ = fs_results;
    // Placeholder: map filesystem UUIDs to per-UUID directories and assemble options.
    todo!("create per-UUID directories and mount mapping based on config")
    // Identify data filesystems (Btrfs/Bcachefs), ignore ESP (Vfat)
    let data: Vec<&FsResult> = fs_results
        .iter()
        .filter(|r| matches!(r.kind, FsKind::Btrfs | FsKind::Bcachefs))
        .collect();

    if data.is_empty() {
        return Err(Error::Mount(
            "no data filesystems to mount (expected Btrfs or Bcachefs)".into(),
        ));
    }

    // Root mounts for all data filesystems
    let mut root_mounts: Vec<PlannedMount> = Vec::new();
    for r in &data {
        let uuid = r.uuid.clone();
        let fstype = fstype_str(r.kind).to_string();
        let target = format!("{}/{}", ROOT_BASE, uuid);
        let options = match r.kind {
            FsKind::Btrfs => "rw,noatime,subvolid=5".to_string(),
            FsKind::Bcachefs => "rw,noatime".to_string(),
            FsKind::Vfat => continue,
        };
        root_mounts.push(PlannedMount {
            uuid,
            target,
            fstype,
            options,
        });
    }

    // Determine primary UUID
    let primary_uuid = Some(data[0].uuid.clone());

    // Subvol mounts only from primary FS
    let primary = data[0];
    let mut subvol_mounts: Vec<PlannedSubvolMount> = Vec::new();
    let fstype = fstype_str(primary.kind).to_string();
    // Option key differs per filesystem: btrfs uses subvol=, bcachefs uses X-mount.subdir=
    let opt_key = match primary.kind {
        FsKind::Btrfs => "subvol=",
        FsKind::Bcachefs => "X-mount.subdir=",
        FsKind::Vfat => "subvol=", // not used for Vfat (ESP ignored)
    };
    for name in SUBVOLS {
        let target = match *name {
            "system" => TARGET_SYSTEM.to_string(),
            "etc" => TARGET_ETC.to_string(),
            "modules" => TARGET_MODULES.to_string(),
            "vm-meta" => TARGET_VM_META.to_string(),
            _ => continue,
        };
        let options = format!("rw,noatime,{}{}", opt_key, name);
        subvol_mounts.push(PlannedSubvolMount {
            uuid: primary.uuid.clone(),
            name: name.to_string(),
            target,
            fstype: fstype.clone(),
            options,
        });
    }

    Ok(MountPlan {
        root_mounts,
        subvol_mounts,
        primary_uuid,
    })
}
/// Apply mounts using syscalls (nix), ensuring directories exist.
pub fn apply_mounts(_plan: &MountPlan) -> Result<Vec<MountResult>> {
    // Placeholder: perform mount syscalls and return results.
    todo!("perform mount syscalls and return results")
/// Apply mounts: ensure dirs, mount roots, create subvols if missing, mount subvols.
pub fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>> {
    // Tool discovery
    let mount_tool = which_tool("mount")?
        .ok_or_else(|| Error::Mount("required tool 'mount' not found in PATH".into()))?;

    // Ensure target directories exist for root mounts
    for pm in &plan.root_mounts {
        create_dir_all(&pm.target)
            .map_err(|e| Error::Mount(format!("failed to create dir {}: {}", pm.target, e)))?;
    }
    // Ensure final subvol targets exist
    for sm in &plan.subvol_mounts {
        create_dir_all(&sm.target)
            .map_err(|e| Error::Mount(format!("failed to create dir {}: {}", sm.target, e)))?;
    }

    let mut results: Vec<MountResult> = Vec::new();

    // Root mounts
    for pm in &plan.root_mounts {
        let source = format!("UUID={}", pm.uuid);
        let args = [
            mount_tool.as_str(),
            "-t",
            pm.fstype.as_str(),
            "-o",
            pm.options.as_str(),
            source.as_str(),
            pm.target.as_str(),
        ];
        run_cmd(&args)?;
        results.push(MountResult {
            source,
            target: pm.target.clone(),
            fstype: pm.fstype.clone(),
            options: pm.options.clone(),
        });
    }

    // Subvolume creation (create-if-missing) and mounts for the primary
    if let Some(primary_uuid) = &plan.primary_uuid {
        // Determine primary fs kind from planned subvols (they all share fstype for primary)
        let primary_kind = plan
            .subvol_mounts
            .get(0)
            .map(|s| s.fstype.clone())
            .unwrap_or_else(|| "btrfs".to_string());
        let root = format!("{}/{}", ROOT_BASE, primary_uuid);

        if primary_kind == "btrfs" {
            let btrfs_tool = which_tool("btrfs")?
                .ok_or_else(|| Error::Mount("required tool 'btrfs' not found in PATH".into()))?;
            // List existing subvols under root
            let out = run_cmd_capture(&[
                btrfs_tool.as_str(),
                "subvolume",
                "list",
                "-o",
                root.as_str(),
            ])?;
            for sm in &plan.subvol_mounts {
                if &sm.uuid != primary_uuid {
                    continue;
                }
                // Check existence by scanning output for " path {name}"
                let exists = out
                    .stdout
                    .lines()
                    .any(|l| l.contains(&format!(" path {}", sm.name)));
                if !exists {
                    // Create subvolume
                    let subvol_path = format!("{}/{}", root, sm.name);
                    let args = [btrfs_tool.as_str(), "subvolume", "create", subvol_path.as_str()];
                    run_cmd(&args)?;
                }
            }
        } else if primary_kind == "bcachefs" {
            let bcachefs_tool = which_tool("bcachefs")?.ok_or_else(|| {
                Error::Mount("required tool 'bcachefs' not found in PATH".into())
            })?;
            for sm in &plan.subvol_mounts {
                if &sm.uuid != primary_uuid {
                    continue;
                }
                let subvol_path = format!("{}/{}", root, sm.name);
                if !Path::new(&subvol_path).exists() {
                    let args = [
                        bcachefs_tool.as_str(),
                        "subvolume",
                        "create",
                        subvol_path.as_str(),
                    ];
                    run_cmd(&args)?;
                }
            }
        } else {
            return Err(Error::Mount(format!(
                "unsupported primary fstype for subvols: {}",
                primary_kind
            )));
        }
    }

    // Subvol mounts
    for sm in &plan.subvol_mounts {
        let source = format!("UUID={}", sm.uuid);
        let args = [
            mount_tool.as_str(),
            "-t",
            sm.fstype.as_str(),
            "-o",
            sm.options.as_str(),
            source.as_str(),
            sm.target.as_str(),
        ];
        run_cmd(&args)?;
        results.push(MountResult {
            source,
            target: sm.target.clone(),
            fstype: sm.fstype.clone(),
            options: sm.options.clone(),
        });
    }

    Ok(results)
}
/// Optionally generate /etc/fstab entries in deterministic order.
pub fn maybe_write_fstab(_mounts: &[MountResult], _cfg: &Config) -> Result<()> {
    // Placeholder: write fstab when enabled in configuration.
    todo!("when enabled, write fstab entries deterministically")
/// Optionally write fstab entries for subvol mounts only (deterministic order).
pub fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()> {
    if !cfg.mount.fstab_enabled {
        return Ok(());
    }

    // Filter only the four subvol targets
    let wanted = [TARGET_ETC, TARGET_MODULES, TARGET_SYSTEM, TARGET_VM_META];
    let mut entries: Vec<&MountResult> = mounts
        .iter()
        .filter(|m| wanted.contains(&m.target.as_str()))
        .collect();

    // Sort by target path ascending to be deterministic
    entries.sort_by(|a, b| a.target.cmp(&b.target));

    // Compose lines
    let mut lines: Vec<String> = Vec::new();
    for m in entries {
        // m.source already "UUID=..."
        let line = format!(
            "{} {} {} {} 0 0",
            m.source, m.target, m.fstype, m.options
        );
        lines.push(line);
    }

    // Atomic write to /etc/fstab
    let fstab_path = "/etc/fstab";
    let tmp_path = "/etc/fstab.zosstorage.tmp";
    if let Some(parent) = Path::new(fstab_path).parent() {
        create_dir_all(parent)
            .map_err(|e| Error::Mount(format!("failed to create {}: {}", parent.display(), e)))?;
    }
    {
        let mut f = File::create(tmp_path)
            .map_err(|e| Error::Mount(format!("failed to create {}: {}", tmp_path, e)))?;
        for line in lines {
            writeln!(f, "{}", line)
                .map_err(|e| Error::Mount(format!("failed to write tmp fstab: {}", e)))?;
        }
        f.flush()
            .map_err(|e| Error::Mount(format!("failed to flush tmp fstab: {}", e)))?;
    }
    std::fs::rename(tmp_path, fstab_path).map_err(|e| {
        Error::Mount(format!(
            "failed to replace {} atomically: {}",
            fstab_path, e
        ))
    })?;

    Ok(())
}
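End to end, apply_mounts above shells out to a sequence equivalent to the following for a btrfs primary (the UUID is a placeholder; directories are created via create_dir_all rather than mkdir):

```
mount -t btrfs -o rw,noatime,subvolid=5 UUID=<uuid> /var/mounts/<uuid>
btrfs subvolume list -o /var/mounts/<uuid>
btrfs subvolume create /var/mounts/<uuid>/system    # only when missing
mount -t btrfs -o rw,noatime,subvol=system UUID=<uuid> /var/cache/system
```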
@@ -48,6 +48,7 @@ use crate::{
    device::{discover, DeviceFilter, Disk},
    idempotency,
    partition,
    fs as zfs,
    Error, Result,
};
use humantime::format_rfc3339;
@@ -66,6 +67,8 @@ pub struct Context {
    pub log: LogOptions,
    /// When true, print detection and planning summary to stdout (JSON).
    pub show: bool,
    /// When true, perform destructive actions (apply mode).
    pub apply: bool,
    /// Optional report path override (when provided by CLI --report).
    pub report_path_override: Option<String>,
}

@@ -77,6 +80,7 @@ impl Context {
            cfg,
            log,
            show: false,
            apply: false,
            report_path_override: None,
        }
    }
@@ -93,6 +97,16 @@ impl Context {
        self
    }

    /// Enable or disable apply mode (destructive).
    ///
    /// When set to true (e.g. via `--apply`), orchestrator:
    /// - Enforces empty-disk policy (unless disabled in config)
    /// - Applies partition plan, then (future) mkfs, mounts, and report
    pub fn with_apply(mut self, apply: bool) -> Self {
        self.apply = apply;
        self
    }

    /// Override the report output path used by preview mode.
    ///
    /// When provided (e.g. via `--report /path/file.json`), orchestrator:
@@ -171,11 +185,30 @@ pub fn run(ctx: &Context) -> Result<()> {
        debug!("plan for {}: {} part(s)", dp.disk.path, dp.parts.len());
    }

    // Note:
    // - Applying partitions, creating filesystems, mounting, and reporting
    //   will be wired in subsequent steps. For now this performs pre-flight
    //   checks and planning to exercise real code paths safely.
    // Apply mode: perform destructive partition application now.
    if ctx.apply {
        info!("orchestrator: apply mode enabled; applying partition plan");
        let part_results = partition::apply_partitions(&plan)?;
        info!(
            "orchestrator: applied partitions on {} disk(s), total parts created: {}",
            plan.disks.len(),
            part_results.len()
        );

        // Filesystem planning and creation
        let fs_plan = zfs::plan_filesystems(&part_results, &ctx.cfg)?;
        info!("orchestrator: filesystem plan contains {} spec(s)", fs_plan.specs.len());
        let fs_results = zfs::make_filesystems(&fs_plan, &ctx.cfg)?;
        info!("orchestrator: created {} filesystem(s)", fs_results.len());

        // Mount planning and application
        let mplan = crate::mount::plan_mounts(&fs_results, &ctx.cfg)?;
        let mres = crate::mount::apply_mounts(&mplan)?;
        crate::mount::maybe_write_fstab(&mres, &ctx.cfg)?;
        return Ok(());
    }

    // Preview-only path
    info!("orchestrator: pre-flight complete (idempotency checked, devices discovered, plan computed)");

    // Optional: emit JSON summary via --show or write via --report
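With this wiring in place, a destructive end-to-end run (partitions, mkfs, mounts, optional fstab) is a single invocation combining the existing flags with --apply, for example:

```
sudo ./zosstorage --apply -t dual-independent -l debug
```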
@@ -308,9 +341,11 @@ fn build_summary_json(disks: &[Disk], plan: &partition::PartitionPlan, cfg: &Con

    // Decide filesystem kinds and planned mountpoints (template) from plan + cfg.topology
    let topo_str = match cfg.topology {
        crate::types::Topology::Single => "single",
        crate::types::Topology::BtrfsSingle => "btrfs_single",
        crate::types::Topology::BcachefsSingle => "bcachefs_single",
        crate::types::Topology::DualIndependent => "dual_independent",
        crate::types::Topology::SsdHddBcachefs => "ssd_hdd_bcachefs",
        crate::types::Topology::Bcachefs2Copy => "bcachefs2_copy",
        crate::types::Topology::BtrfsRaid1 => "btrfs_raid1",
    };
@@ -19,11 +19,12 @@
// ext: device-specific alignment or reserved areas configurable via cfg in the future.
// REGION: EXTENSION_POINTS-END
//
// REGION: SAFETY
// safety: must verify require_empty_disks before any modification.
// safety: must ensure unique partition GUIDs; identical labels are allowed when expected (e.g., ESP ZOSBOOT).
// safety: must call udev settle after partition table writes.
// REGION: SAFETY-END
// REGION: SAFETY
// safety: must verify require_empty_disks before any modification.
// safety: when UEFI-booted, suppress creating BIOS boot partition to avoid unnecessary ef02 on UEFI systems.
// safety: must ensure unique partition GUIDs; identical labels are allowed when expected (e.g., ESP ZOSBOOT).
// safety: must call udev settle after partition table writes.
// REGION: SAFETY-END
//
// REGION: ERROR_MAPPING
// errmap: external tool failure -> crate::Error::Tool { tool, status, stderr }.
@@ -42,7 +43,14 @@
//! See [fn plan_partitions](plan.rs:1) and
//! [fn apply_partitions](plan.rs:1).

use crate::{types::{Config, Topology}, device::Disk, Error, Result};
use crate::{
    types::{Config, Topology},
    device::Disk,
    util::{run_cmd, run_cmd_capture, which_tool, udev_settle, is_efi_boot},
    idempotency,
    Error, Result,
};
use tracing::{debug, warn};

/// Partition roles supported by zosstorage.
#[derive(Debug, Clone, Copy)]
@@ -126,6 +134,8 @@ pub struct PartitionResult {
pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
    let align = cfg.partitioning.alignment_mib;
    let require_empty = cfg.partitioning.require_empty_disks;
    // If system booted via UEFI, suppress the BIOS boot partition even if enabled in config.
    let add_bios = cfg.partitioning.bios_boot.enabled && !is_efi_boot();

    if disks.is_empty() {
        return Err(Error::Partition("no disks provided to partition planner".into()));
@@ -134,10 +144,32 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
    let mut plans: Vec<DiskPlan> = Vec::new();

    match cfg.topology {
        Topology::Single => {
        Topology::BtrfsSingle => {
            let d0 = &disks[0];
            let mut parts = Vec::new();
            if cfg.partitioning.bios_boot.enabled {
            if add_bios {
                parts.push(PartitionSpec {
                    role: PartRole::BiosBoot,
                    size_mib: Some(cfg.partitioning.bios_boot.size_mib),
                    gpt_name: cfg.partitioning.bios_boot.gpt_name.clone(),
                });
            }
            parts.push(PartitionSpec {
                role: PartRole::Esp,
                size_mib: Some(cfg.partitioning.esp.size_mib),
                gpt_name: cfg.partitioning.esp.gpt_name.clone(),
            });
            parts.push(PartitionSpec {
                role: PartRole::Data,
                size_mib: None,
                gpt_name: cfg.partitioning.data.gpt_name.clone(),
            });
            plans.push(DiskPlan { disk: d0.clone(), parts });
        }
        Topology::BcachefsSingle => {
            let d0 = &disks[0];
            let mut parts = Vec::new();
            if add_bios {
                parts.push(PartitionSpec {
                    role: PartRole::BiosBoot,
                    size_mib: Some(cfg.partitioning.bios_boot.size_mib),

@@ -165,7 +197,7 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {

            // Disk 0: BIOS (opt) + ESP + Data
            let mut parts0 = Vec::new();
            if cfg.partitioning.bios_boot.enabled {
            if add_bios {
                parts0.push(PartitionSpec {
                    role: PartRole::BiosBoot,
                    size_mib: Some(cfg.partitioning.bios_boot.size_mib),

@@ -202,7 +234,7 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {

            // Disk 0: BIOS (opt) + ESP + Data
            let mut parts0 = Vec::new();
            if cfg.partitioning.bios_boot.enabled {
            if add_bios {
                parts0.push(PartitionSpec {
                    role: PartRole::BiosBoot,
                    size_mib: Some(cfg.partitioning.bios_boot.size_mib),

@@ -230,6 +262,43 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
            });
            plans.push(DiskPlan { disk: d1.clone(), parts: parts1 });
        }
        Topology::Bcachefs2Copy => {
            if disks.len() < 2 {
                return Err(Error::Partition("Bcachefs2Copy topology requires at least 2 disks".into()));
            }
            let d0 = &disks[0];
            let d1 = &disks[1];

            // Disk 0: BIOS (opt) + ESP + Data
            let mut parts0 = Vec::new();
            if add_bios {
                parts0.push(PartitionSpec {
                    role: PartRole::BiosBoot,
                    size_mib: Some(cfg.partitioning.bios_boot.size_mib),
                    gpt_name: cfg.partitioning.bios_boot.gpt_name.clone(),
                });
            }
            parts0.push(PartitionSpec {
                role: PartRole::Esp,
                size_mib: Some(cfg.partitioning.esp.size_mib),
                gpt_name: cfg.partitioning.esp.gpt_name.clone(),
            });
            parts0.push(PartitionSpec {
                role: PartRole::Data,
                size_mib: None,
                gpt_name: cfg.partitioning.data.gpt_name.clone(),
            });
            plans.push(DiskPlan { disk: d0.clone(), parts: parts0 });

            // Disk 1: Data only
            let mut parts1 = Vec::new();
            parts1.push(PartitionSpec {
                role: PartRole::Data,
                size_mib: None,
                gpt_name: cfg.partitioning.data.gpt_name.clone(),
            });
            plans.push(DiskPlan { disk: d1.clone(), parts: parts1 });
        }
        Topology::SsdHddBcachefs => {
            // Choose SSD (rotational=false) and HDD (rotational=true)
            let ssd = disks.iter().find(|d| !d.rotational)

@@ -239,7 +308,7 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {

            // SSD: BIOS (opt) + ESP + Cache remainder
            let mut parts_ssd = Vec::new();
            if cfg.partitioning.bios_boot.enabled {
            if add_bios {
                parts_ssd.push(PartitionSpec {
                    role: PartRole::BiosBoot,
                    size_mib: Some(cfg.partitioning.bios_boot.size_mib),

@@ -276,13 +345,177 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
    })
}

/// Apply the partition plan using system utilities (sgdisk) via util wrappers.
///
/// Safety:
/// - Must verify target disks are empty when required.
/// - Must ensure unique partition GUIDs.
/// - Should call udev settle after changes.
pub fn apply_partitions(_plan: &PartitionPlan) -> Result<Vec<PartitionResult>> {
    // To be implemented: sgdisk orchestration + udev settle + GUID collection
    todo!("shell out to sgdisk, trigger udev settle, collect partition GUIDs")
/**
Apply the partition plan using system utilities (sgdisk) via util wrappers.

Safety:
- Verifies target disks are empty when required (defense-in-depth; orchestrator should also enforce).
- Ensures unique partition GUIDs by relying on sgdisk defaults.
- Calls udev settle after changes to ensure /dev nodes exist.

Notes:
- Uses sgdisk -og to create a new GPT on empty disks.
- Adds partitions in declared order using -n (auto-aligned), -t (type code), -c (GPT name).
- Derives partition device paths: NVMe uses "pN" suffix; others use trailing "N".
- Captures per-partition GUID and geometry via `sgdisk -i <N> <disk>`.
*/
pub fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>> {
    // Locate required tools
    let Some(sgdisk) = which_tool("sgdisk")? else {
        return Err(Error::Partition("sgdisk not found in PATH".into()));
    };

    // Helper: map role to GPT type code (gdisk codes)
    fn type_code(role: PartRole) -> &'static str {
        match role {
            PartRole::BiosBoot => "ef02", // BIOS boot partition (for GRUB BIOS on GPT)
            PartRole::Esp => "ef00",      // EFI System Partition
            PartRole::Data => "8300",     // Linux filesystem
            PartRole::Cache => "8300",    // Treat cache as Linux filesystem (bcachefs)
        }
    }

    // Helper: build partition device path for a given disk and partition number
    fn part_dev_path(disk_path: &str, part_number: u32) -> String {
        if disk_path.starts_with("/dev/nvme") {
            format!("{disk_path}p{part_number}")
        } else {
            format!("{disk_path}{part_number}")
        }
    }

    // Helper: sector size in bytes for disk (fallback 512 with warning)
    fn sector_size_bytes(disk_path: &str) -> Result<u64> {
        if let Some(blockdev) = which_tool("blockdev")? {
            let out = run_cmd_capture(&[blockdev.as_str(), "--getss", disk_path])?;
            let s = out.stdout.trim();
            return s.parse::<u64>()
                .map_err(|e| Error::Partition(format!("failed to parse sector size from blockdev for {}: {}", disk_path, e)));
        }
        warn!("blockdev not found; assuming 512-byte sectors for {}", disk_path);
        Ok(512)
    }

    // Helper: parse sgdisk -i output to (unique_guid, first_sector, last_sector)
    fn parse_sgdisk_info(info: &str) -> Result<(String, u64, u64)> {
        let mut guid = String::new();
        let mut first: Option<u64> = None;
        let mut last: Option<u64> = None;

        for line in info.lines() {
            let line = line.trim();
            if let Some(rest) = line.strip_prefix("Partition unique GUID:") {
                guid = rest.trim().to_string();
            } else if let Some(rest) = line.strip_prefix("First sector:") {
                // Format: "First sector: 2048 (at 1024.0 KiB)"
                let val = rest.trim().split_whitespace().next().unwrap_or("");
                if !val.is_empty() {
                    first = Some(val.parse::<u64>().map_err(|e| Error::Partition(format!("parse first sector: {}", e)))?);
                }
            } else if let Some(rest) = line.strip_prefix("Last sector:") {
                let val = rest.trim().split_whitespace().next().unwrap_or("");
                if !val.is_empty() {
                    last = Some(val.parse::<u64>().map_err(|e| Error::Partition(format!("parse last sector: {}", e)))?);
                }
            }
        }

        let first = first.ok_or_else(|| Error::Partition("sgdisk -i missing First sector".into()))?;
        let last = last.ok_or_else(|| Error::Partition("sgdisk -i missing Last sector".into()))?;
        if guid.is_empty() {
            return Err(Error::Partition("sgdisk -i missing Partition unique GUID".into()));
        }
        Ok((guid, first, last))
    }
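    // Illustrative sketch only (device path and sizes are placeholders): per disk, the
    // loop below drives sgdisk roughly as follows, then settles udev and reads geometry back:
    //   sgdisk -og /dev/sda
    //   sgdisk -n 1:0:+512M -t 1:ef00 -c 1:zosboot /dev/sda
    //   sgdisk -n 2:0:0     -t 2:8300 -c 2:zosdata /dev/sda
    //   udevadm settle
    //   sgdisk -i 2 /dev/sda          # unique GUID, first/last sector
    //   blockdev --getss /dev/sda     # sector size for the MiB math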
    let mut results: Vec<PartitionResult> = Vec::new();

    for dp in &plan.disks {
        let disk_path = dp.disk.path.as_str();

        // Defense-in-depth: verify emptiness when required
        if plan.require_empty_disks {
            let empty = idempotency::is_empty_disk(&dp.disk)?;
            if !empty {
                return Err(Error::Validation(format!(
                    "target disk {} is not empty (partitions or signatures present)",
                    dp.disk.path
                )));
            }
        }

        debug!("apply_partitions: creating GPT on {}", disk_path);
        // Initialize (or re-initialize) a new empty GPT; requires truly empty disks per policy
        run_cmd(&[sgdisk.as_str(), "-og", disk_path])?;

        // Create partitions in order
        for (idx0, spec) in dp.parts.iter().enumerate() {
            let part_num = (idx0 as u32) + 1;
            let size_arg = match spec.size_mib {
                Some(mib) => format!("+{}M", mib), // rely on sgdisk MiB suffix support
                None => String::from("0"),         // consume remainder
            };
            // Use automatic aligned start (0) and specified size
            let n_arg = format!("{}:0:{}", part_num, size_arg);
            let t_arg = format!("{}:{}", part_num, type_code(spec.role));
            let c_arg = format!("{}:{}", part_num, spec.gpt_name);

            debug!(
                "apply_partitions: {} -n {} -t {} -c {} {}",
                sgdisk, n_arg, t_arg, c_arg, disk_path
            );

            run_cmd(&[
                sgdisk.as_str(),
                "-n", n_arg.as_str(),
                "-t", t_arg.as_str(),
                "-c", c_arg.as_str(),
                disk_path,
            ])?;
        }

        // Settle udev so new partitions appear under /dev
        udev_settle(5_000)?;

        // Gather per-partition details and build results
        let sector_bytes = sector_size_bytes(disk_path)?;
        let mib_div: u64 = 1024 * 1024;

        for (idx0, spec) in dp.parts.iter().enumerate() {
            let part_num = (idx0 as u32) + 1;

            // Query sgdisk for partition info
            let i_arg = format!("{}", part_num);
            let info_out = run_cmd_capture(&[
                sgdisk.as_str(),
                "-i", i_arg.as_str(),
                disk_path,
            ])?;

            let (unique_guid, first_sector, last_sector) = parse_sgdisk_info(&info_out.stdout)?;
            let sectors = if last_sector >= first_sector {
                last_sector - first_sector + 1
            } else {
                0
            };
            let start_mib = (first_sector.saturating_mul(sector_bytes)) / mib_div;
            let size_mib = (sectors.saturating_mul(sector_bytes)) / mib_div;

            let dev_path = part_dev_path(disk_path, part_num);

            results.push(PartitionResult {
                disk: dp.disk.path.clone(),
                part_number: part_num,
                role: spec.role,
                gpt_name: spec.gpt_name.clone(),
                uuid: unique_guid,
                start_mib,
                size_mib,
                device_path: dev_path,
            });
        }
    }

    debug!("apply_partitions: created {} partition entries", results.len());
    Ok(results)
}

src/types.rs (44 changed lines)

@@ -2,8 +2,21 @@
//!
//! Mirrors docs in [docs/SCHEMA.md](docs/SCHEMA.md) and is loaded/validated by
//! [fn load_and_merge()](src/config/loader.rs:1) and [fn validate()](src/config/loader.rs:1).
//
// REGION: API
// api: types::Topology { BtrfsSingle, BcachefsSingle, DualIndependent, Bcachefs2Copy, SsdHddBcachefs, BtrfsRaid1 }
// api: types::Config { logging, device_selection, topology, partitioning, filesystem, mount, report }
// api: types::Partitioning { alignment_mib, require_empty_disks, bios_boot, esp, data, cache }
// api: types::FsOptions { btrfs, bcachefs, vfat }
// REGION: API-END
//
// REGION: RESPONSIBILITIES
// - Define serde-serializable configuration types and enums used across modules.
// - Keep field names and enums stable; update docs/SCHEMA.md when public surface changes.
// REGION: RESPONSIBILITIES-END

use serde::{Deserialize, Serialize};
use clap::ValueEnum;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LoggingConfig {

@@ -25,19 +38,44 @@ pub struct DeviceSelection {
    pub min_size_gib: u64,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, Copy, Serialize, Deserialize, ValueEnum)]
#[serde(rename_all = "snake_case")]
#[value(rename_all = "snake_case")]
pub enum Topology {
    /// Single eligible disk; btrfs on remainder.
    Single,
    /// Two eligible disks; independent btrfs on each data partition.
    #[value(alias = "btrfs-single")]
    BtrfsSingle,
    /// Single eligible disk; bcachefs on remainder.
    #[value(alias = "bcachefs-single")]
    BcachefsSingle,
    /// Independent btrfs filesystems on each data partition (any number of disks).
    #[value(alias = "dual-independent")]
    DualIndependent,
    /// SSD + HDD; bcachefs with SSD cache/promote and HDD backing.
    #[value(alias = "ssd-hdd-bcachefs")]
    SsdHddBcachefs,
    /// Multi-device bcachefs with two replicas (data+metadata).
    #[value(alias = "bcachefs2-copy", alias = "bcachefs-2copy", alias = "bcachefs-2-copy")]
    Bcachefs2Copy,
    /// Optional mirrored btrfs across two disks when explicitly requested.
    #[value(alias = "btrfs-raid1")]
    BtrfsRaid1,
}

impl std::fmt::Display for Topology {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let s = match self {
            Topology::BtrfsSingle => "btrfs_single",
            Topology::BcachefsSingle => "bcachefs_single",
            Topology::DualIndependent => "dual_independent",
            Topology::SsdHddBcachefs => "ssd_hdd_bcachefs",
            Topology::Bcachefs2Copy => "bcachefs2_copy",
            Topology::BtrfsRaid1 => "btrfs_raid1",
        };
        f.write_str(s)
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BiosBootSpec {
    /// Whether to create a tiny BIOS boot partition.

@@ -4,11 +4,13 @@
// api: util::run_cmd(args: &[&str]) -> crate::Result<()>
// api: util::run_cmd_capture(args: &[&str]) -> crate::Result<CmdOutput>
// api: util::udev_settle(timeout_ms: u64) -> crate::Result<()>
// api: util::is_efi_boot() -> bool
// REGION: API-END
//
// REGION: RESPONSIBILITIES
// - Centralize external tool discovery and invocation (sgdisk, blkid, mkfs.*, udevadm).
// - Provide capture and error mapping to crate::Error consistently.
// - Provide environment helpers (udev settle, boot mode detection).
// Non-goals: business logic (planning/validation), direct parsing of complex outputs beyond what callers need.
// REGION: RESPONSIBILITIES-END
//

@@ -39,6 +41,7 @@

use crate::{Error, Result};
use std::process::Command;
use std::path::Path;
use tracing::{debug, warn};

/// Captured output from an external tool invocation.

@@ -147,6 +150,14 @@ pub fn udev_settle(timeout_ms: u64) -> Result<()> {
    }
}

/// Detect whether the current system booted via UEFI (initramfs-friendly).
///
/// Returns true when /sys/firmware/efi exists (standard on UEFI boots).
/// Returns false on legacy BIOS boots where that path is absent.
pub fn is_efi_boot() -> bool {
    Path::new("/sys/firmware/efi").exists()
}

#[cfg(test)]
mod tests {
    use super::*;
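For a quick manual check of the same UEFI signal that is_efi_boot() relies on (this mirrors the detection, it is not part of the change):

```
test -d /sys/firmware/efi && echo "UEFI boot" || echo "legacy BIOS boot"
```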