Merge commit '9790ef4dacdf729d8825dbe745379bd6c669b9dd' as 'components/rfs'

2025-08-16 21:12:45 +02:00
96 changed files with 14003 additions and 0 deletions


@@ -0,0 +1,34 @@
[package]
name = "docker2fl"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[build-dependencies]
git-version = "0.3.5"

[lib]
name = "docker2fl"
path = "src/docker2fl.rs"

[[bin]]
name = "docker2fl"
path = "src/main.rs"

[dependencies]
log = "0.4"
anyhow = "1.0.44"
regex = "1.9.6"
rfs = { path = "../rfs" }
tokio = { version = "1", features = ["rt", "rt-multi-thread", "macros", "signal"] }
bollard = "0.15.0"
futures-util = "0.3"
simple_logger = { version = "1.0.1" }
uuid = { version = "1.3.1", features = ["v4"] }
tempdir = "0.3"
serde_json = "1.0"
toml = "0.4.2"
clap = { version = "4.2", features = ["derive"] }
serde = { version = "1.0.159", features = ["derive"] }
tokio-async-drop = "0.1.0"
walkdir = "2.5.0"


@@ -0,0 +1,137 @@
# docker2fl
`docker2fl` is a tool that extracts docker images and converts them to an flist using the [rfs](../rfs) tool.
## Build
To build `docker2fl`, make sure you have Rust installed, then run the following commands:
```bash
# this is needed to be run once to make sure the musl target is installed
rustup target add x86_64-unknown-linux-musl
# build the binary
cargo build --release --target=x86_64-unknown-linux-musl
```
The binary will be available under `./target/x86_64-unknown-linux-musl/release/docker2fl`. You can then copy it to `/usr/bin/`
to be able to use it from anywhere on your system.
```bash
sudo mv ./target/x86_64-unknown-linux-musl/release/docker2fl /usr/bin/
```
## Stores
A store is where the actual data lives. A store can be as simple as a `directory` on your local machine, in which case the files in the `fl` are only 'accessible' on your local machine. A store can also be a `zdb` running remotely, or a cluster of `zdb`s. Right now only `dir`, `zdb`, and `s3` stores are supported, but this will change in the future to support even more stores.
## Usage
### Creating an `fl`
```bash
docker2fl -i redis -s <store-specs>
```
This tells `docker2fl` to create an `fl` named `redis-latest.fl` using the store defined by the url `<store-specs>`, and to recursively upload all the files under the temporary docker directory that contains the exported docker image.

The simplest form of `<store-specs>` is a `url`. The store `url` defines the store to use. Any `url` has a schema that defines the store type. Right now we have support only for:

- `dir`: dir is a very simple store that is mostly used for testing. A dir store stores the fs blobs in another location defined by the url path. An example of a valid dir url is `dir:///tmp/store`.
- `zdb`: [zdb](https://github.com/threefoldtech/0-db) is an append-only key-value store that provides a redis-like API. An example zdb url looks like `zdb://<hostname>[:port][/namespace]`.
- `s3`: aws-s3 is used for storing and retrieving large amounts of data (blobs) in buckets (directories). An example: `s3://<username>:<password>@<host>:<port>/<bucket-name>`.
  `region` is an optional param for s3 stores; if you want to provide one, you can add it as a query to the url: `?region=<region-name>`.

`<store-specs>` can also be of the form `<start>-<end>=<url>`, where `start` and `end` are hex bytes used for partitioning the blob keys. rfs will then store a blob on the defined store if the blob key falls in the `[start:end]` range (inclusive).

If the `start-end` range is not provided, a `00-FF` range is assumed, basically a catch-all range for the blob keys. In other words, all blobs will be written to that store.

This is only useful because `docker2fl` can accept multiple stores on the command line with different and/or overlapping ranges.

For example, `-s 00-80=dir:///tmp/store0 -s 81-ff=dir:///tmp/store1` means all keys that have a prefix byte in the range `[00-80]` will be written to `/tmp/store0`, and the remaining keys `[81-ff]` will be written to `/tmp/store1`.

The same range can appear multiple times, which means the blob will be replicated to all the stores that match its key prefix.

To quickly test this operation:
```bash
docker2fl -i redis -s "dir:///tmp/store0"
```
This command will use the redis image and effectively create `redis-latest.fl`, storing (and sharding) the blobs under `/tmp/store0`.
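As an illustration of the range-based sharding described above, here is a standalone sketch (hypothetical, not rfs's internals) of routing a blob key to every store whose range covers the key's first byte; the store URLs are the example ones from this section:

```rust
// Sketch of range-based store routing; hypothetical, not rfs's implementation.
// A store is (start, end, url); a blob goes to every store whose inclusive
// [start, end] range covers the first byte of its key.
fn route<'a>(key: &[u8], stores: &'a [(u8, u8, &'a str)]) -> Vec<&'a str> {
    let prefix = key[0];
    stores
        .iter()
        .filter(|(start, end, _)| *start <= prefix && prefix <= *end)
        .map(|&(_, _, url)| url)
        .collect()
}

fn main() {
    let stores = [
        (0x00, 0x80, "dir:///tmp/store0"),
        (0x81, 0xff, "dir:///tmp/store1"),
        // an overlapping 00-ff range means replication to this store as well
        (0x00, 0xff, "zdb://localhost:9900/default"),
    ];
    // a key starting with 0x7a falls in both 00-80 and 00-ff
    assert_eq!(
        route(&[0x7a, 0x01], &stores),
        vec!["dir:///tmp/store0", "zdb://localhost:9900/default"]
    );
    println!("ok");
}
```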
```bash
# docker2fl --help
Usage: docker2fl [OPTIONS] --image-name <IMAGE_NAME>

Options:
  -d, --debug...
          enable debugging logs
  -i, --image-name <IMAGE_NAME>
          name of the docker image to be converted to flist
  -s, --store <STORE>
          store url for rfs in the format [xx-xx=]<url>. the range xx-xx is optional and used for sharding. the URL is per store type, please check docs for more information
  -h, --help
          Print help
  -V, --version
          Print version
```
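The `[xx-xx=]<url>` store format accepted by `-s` could be parsed roughly as follows. This is a hypothetical sketch; the real parsing lives in rfs and may differ in its details:

```rust
// Hypothetical sketch of parsing a `[xx-xx=]<url>` store spec;
// not rfs's actual parser.
fn parse_spec(spec: &str) -> (u8, u8, String) {
    if let Some((range, url)) = spec.split_once('=') {
        if let Some((s, e)) = range.split_once('-') {
            // only treat the prefix as a range if both halves are hex bytes;
            // this leaves urls containing '=' (e.g. `?region=x`) untouched
            if let (Ok(start), Ok(end)) = (u8::from_str_radix(s, 16), u8::from_str_radix(e, 16)) {
                return (start, end, url.to_string());
            }
        }
    }
    // no explicit range: default to the catch-all 00-FF
    (0x00, 0xff, spec.to_string())
}

fn main() {
    assert_eq!(
        parse_spec("00-80=dir:///tmp/store0"),
        (0x00, 0x80, "dir:///tmp/store0".to_string())
    );
    assert_eq!(
        parse_spec("zdb://localhost:9900/default"),
        (0x00, 0xff, "zdb://localhost:9900/default".to_string())
    );
    println!("ok");
}
```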
## Generate an flist using ZDB
### Deploy a vm
1. Deploy a vm with a public IP
2. Add docker (don't forget to add a disk for it with mountpoint = "/var/lib/docker")
3. Add caddy
### Install zdb and run an instance of it
1. Execute `git clone -b development-v2 https://github.com/threefoldtech/0-db /zdb` then `cd /zdb`
2. Build
```bash
cd libzdb
make
cd ..
cd zdbd
make STATIC=1
cd ..
make
```
3. Install: `make install`
4. Run `zdb --listen 0.0.0.0`
5. Note the connection info you will need:
```console
zdbEndpoint = "<vm public IP>:<port>"
zdbNameSpace = "default"
zdbPassword = "default"
```
### Install docker2fl
1. Execute `git clone -b development-v2 https://github.com/threefoldtech/rfs`, then `cd rfs`
2. Execute
```bash
rustup target add x86_64-unknown-linux-musl
cargo build --features build-binary --release --target=x86_64-unknown-linux-musl
mv ./target/x86_64-unknown-linux-musl/release/docker2fl /usr/bin/
```
### Convert docker image to an fl
1. Try an image, for example the `threefolddev/ubuntu:22.04` image
2. Execute `docker2fl -i threefolddev/ubuntu:22.04 -s "zdb://<vm public IP>:<port>/default" -d`
3. You will end up with `threefolddev-ubuntu-22.04.fl` (flist)
### Serve the flist using caddy
1. In the directory that contains the output flist, run `caddy file-server --listen 0.0.0.0:2015 --browse`
2. The flist will be available as `http://<vm public IP>:2015/threefolddev-ubuntu-22.04.fl`
3. Use the flist to deploy any virtual machine.


@@ -0,0 +1,9 @@
fn main() {
    println!(
        "cargo:rustc-env=GIT_VERSION={}",
        git_version::git_version!(
            args = ["--tags", "--always", "--dirty=-modified"],
            fallback = "unknown"
        )
    );
}


@@ -0,0 +1,335 @@
use std::collections::HashMap;
use std::default::Default;
use std::fs;
use std::path::Path;
use std::process::Command;
use std::sync::mpsc::Sender;

use anyhow::{Context, Result};
use bollard::auth::DockerCredentials;
use bollard::container::{
    Config, CreateContainerOptions, InspectContainerOptions, RemoveContainerOptions,
};
use bollard::image::{CreateImageOptions, RemoveImageOptions};
use bollard::Docker;
use futures_util::stream::StreamExt;
use serde_json::json;
use tempdir::TempDir;
use tokio_async_drop::tokio_async_drop;
use walkdir::WalkDir;

use rfs::fungi::Writer;
use rfs::store::Store;
struct DockerInfo {
    image_name: String,
    container_name: String,
    docker: Docker,
}

impl Drop for DockerInfo {
    fn drop(&mut self) {
        tokio_async_drop!({
            let res = clean(&self.docker, &self.image_name, &self.container_name)
                .await
                .context("failed to clean docker image and container");

            if res.is_err() {
                log::error!(
                    "cleaning docker image and container failed with error: {:?}",
                    res.err()
                );
            }
        });
    }
}

pub struct DockerImageToFlist {
    meta: Writer,
    image_name: String,
    credentials: Option<DockerCredentials>,
    docker_tmp_dir: TempDir,
}
impl DockerImageToFlist {
    pub fn new(
        meta: Writer,
        image_name: String,
        credentials: Option<DockerCredentials>,
        docker_tmp_dir: TempDir,
    ) -> Self {
        DockerImageToFlist {
            meta,
            image_name,
            credentials,
            docker_tmp_dir,
        }
    }

    pub fn files_count(&self) -> usize {
        WalkDir::new(self.docker_tmp_dir.path()).into_iter().count()
    }

    pub async fn prepare(&mut self) -> Result<()> {
        #[cfg(unix)]
        let docker = Docker::connect_with_socket_defaults().context("failed to create docker")?;

        let container_file =
            Path::file_stem(self.docker_tmp_dir.path()).expect("failed to get directory name");
        let container_name = container_file
            .to_str()
            .expect("failed to get container name")
            .to_owned();

        let docker_info = DockerInfo {
            image_name: self.image_name.to_owned(),
            container_name,
            docker,
        };
        extract_image(
            &docker_info.docker,
            &docker_info.image_name,
            &docker_info.container_name,
            self.docker_tmp_dir.path(),
            self.credentials.clone(),
        )
        .await
        .context("failed to extract docker image to a directory")?;
        log::info!(
            "docker image '{}' is extracted successfully",
            docker_info.image_name
        );

        Ok(())
    }

    pub async fn pack<S: Store>(&mut self, store: S, sender: Option<Sender<u32>>) -> Result<()> {
        rfs::pack(
            self.meta.clone(),
            store,
            &self.docker_tmp_dir.path(),
            true,
            sender,
        )
        .await
        .context("failed to pack flist")?;

        log::info!("flist has been created successfully");
        Ok(())
    }

    pub async fn convert<S: Store>(&mut self, store: S, sender: Option<Sender<u32>>) -> Result<()> {
        self.prepare().await?;
        self.pack(store, sender).await?;

        Ok(())
    }
}
async fn extract_image(
    docker: &Docker,
    image_name: &str,
    container_name: &str,
    docker_tmp_dir_path: &Path,
    credentials: Option<DockerCredentials>,
) -> Result<()> {
    pull_image(docker, image_name, credentials).await?;
    create_container(docker, image_name, container_name)
        .await
        .context("failed to create docker container")?;
    export_container(container_name, docker_tmp_dir_path)
        .context("failed to export docker container")?;
    container_boot(docker, container_name, docker_tmp_dir_path)
        .await
        .context("failed to boot docker container")?;
    Ok(())
}

async fn pull_image(
    docker: &Docker,
    image_name: &str,
    credentials: Option<DockerCredentials>,
) -> Result<()> {
    log::info!("pulling docker image {}", image_name);

    let options = Some(CreateImageOptions {
        from_image: image_name,
        ..Default::default()
    });

    let mut image_pull_stream = docker.create_image(options, None, credentials);
    while let Some(msg) = image_pull_stream.next().await {
        msg.context("failed to pull docker image")?;
    }

    Ok(())
}
async fn create_container(docker: &Docker, image_name: &str, container_name: &str) -> Result<()> {
    log::debug!("Inspecting docker image configurations {}", image_name);
    let image = docker
        .inspect_image(image_name)
        .await
        .context("failed to inspect docker image")?;
    let image_config = image.config.context("failed to get docker image configs")?;

    // fall back to a shell when the image defines neither cmd nor entrypoint
    let mut command = "";
    if image_config.cmd.is_none() && image_config.entrypoint.is_none() {
        command = "/bin/sh";
    }

    log::debug!("Creating a docker container {}", container_name);
    let options = Some(CreateContainerOptions {
        name: container_name,
        platform: None,
    });
    let config = Config {
        image: Some(image_name),
        hostname: Some(container_name),
        cmd: Some(vec![command]),
        ..Default::default()
    };

    docker
        .create_container(options, config)
        .await
        .context("failed to create docker temporary container")?;
    Ok(())
}

fn export_container(container_name: &str, docker_tmp_dir_path: &Path) -> Result<()> {
    log::debug!("Exporting docker container {}", container_name);
    Command::new("sh")
        .arg("-c")
        .arg(format!(
            "docker export {} | tar -xpf - -C {}",
            container_name,
            docker_tmp_dir_path.display()
        ))
        .output()
        .expect("failed to execute export docker container");
    Ok(())
}
async fn container_boot(
    docker: &Docker,
    container_name: &str,
    docker_tmp_dir_path: &Path,
) -> Result<()> {
    log::debug!(
        "Inspecting docker container configurations {}",
        container_name
    );
    let options = Some(InspectContainerOptions { size: false });
    let container = docker
        .inspect_container(container_name, options)
        .await
        .context("failed to inspect docker container")?;
    let container_config = container
        .config
        .context("failed to get docker container configs")?;

    let command;
    let args;
    let mut env: HashMap<String, String> = HashMap::new();
    let mut cwd = String::from("/");

    // the entrypoint, when present, takes precedence over cmd
    let cmd = container_config.cmd.expect("failed to get cmd configs");
    if let Some(entrypoint) = container_config.entrypoint {
        command = (entrypoint.first().expect("failed to get first entrypoint")).to_string();

        if entrypoint.len() > 1 {
            let (_, entries) = entrypoint
                .split_first()
                .expect("failed to split entrypoint");
            args = entries.to_vec();
        } else {
            args = cmd;
        }
    } else {
        command = (cmd.first().expect("failed to get first cmd")).to_string();
        let (_, entries) = cmd.split_first().expect("failed to split cmd");
        args = entries.to_vec();
    }

    if let Some(envs) = container_config.env {
        for entry in envs.iter() {
            if let Some((key, value)) = entry.split_once('=') {
                env.insert(key.to_string(), value.to_string());
            }
        }
    }

    if let Some(ref working_dir) = container_config.working_dir {
        if !working_dir.is_empty() {
            cwd = working_dir.to_string();
        }
    }

    let metadata = json!({
        "startup": {
            "entry": {
                "name": "core.system",
                "args": {
                    "name": command,
                    "args": args,
                    "env": env,
                    "dir": cwd,
                }
            }
        }
    });

    let toml_metadata: toml::Value = serde_json::from_str(&metadata.to_string())?;

    log::info!(
        "Creating '.startup.toml' file from container {} contains {}",
        container_name,
        toml_metadata.to_string()
    );
    fs::write(
        docker_tmp_dir_path.join(".startup.toml"),
        toml_metadata.to_string(),
    )
    .expect("failed to create '.startup.toml' file");

    Ok(())
}
async fn clean(docker: &Docker, image_name: &str, container_name: &str) -> Result<()> {
    log::info!("cleaning docker image and container");

    let options = Some(RemoveContainerOptions {
        force: true,
        ..Default::default()
    });
    docker
        .remove_container(container_name, options)
        .await
        .context("failed to remove docker container")?;

    let remove_options = Some(RemoveImageOptions {
        force: true,
        ..Default::default()
    });
    docker
        .remove_image(image_name, remove_options, None)
        .await
        .context("failed to remove docker image")?;

    Ok(())
}
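The entrypoint/cmd precedence used in `container_boot` can be illustrated in isolation. This is a simplified sketch of that logic (it guards against empty vectors with a `match` instead of `expect`, and `split_command` is a hypothetical name, not shared code):

```rust
// Simplified sketch of container_boot's entrypoint/cmd precedence:
// - entrypoint present: its first element is the command; the remaining
//   elements are the args, falling back to cmd when there are none
// - no entrypoint: cmd's first element is the command, the rest are args
fn split_command(entrypoint: Option<Vec<String>>, cmd: Vec<String>) -> (String, Vec<String>) {
    match entrypoint {
        Some(ep) if !ep.is_empty() => {
            let command = ep[0].clone();
            let args = if ep.len() > 1 { ep[1..].to_vec() } else { cmd };
            (command, args)
        }
        _ => (cmd[0].clone(), cmd[1..].to_vec()),
    }
}

fn main() {
    let s = |v: &[&str]| v.iter().map(|x| x.to_string()).collect::<Vec<String>>();
    // an entrypoint with extra entries supplies both command and args
    assert_eq!(
        split_command(Some(s(&["/entry.sh", "-v"])), s(&["redis-server"])),
        ("/entry.sh".to_string(), s(&["-v"]))
    );
    // a single-element entrypoint: cmd supplies the args
    assert_eq!(
        split_command(Some(s(&["/entry.sh"])), s(&["redis-server"])),
        ("/entry.sh".to_string(), s(&["redis-server"]))
    );
    println!("ok");
}
```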


@@ -0,0 +1,115 @@
use anyhow::Result;
use bollard::auth::DockerCredentials;
use clap::{ArgAction, Parser};
use tokio::runtime::Builder;
use uuid::Uuid;

use rfs::fungi;
use rfs::store::parse_router;

mod docker2fl;
#[derive(Parser, Debug)]
#[clap(name = "docker2fl", author, version = env!("GIT_VERSION"), about, long_about = None)]
struct Options {
    /// enable debugging logs
    #[clap(short, long, action=ArgAction::Count)]
    debug: u8,

    /// store url for rfs in the format [xx-xx=]<url>. the range xx-xx is optional and used for
    /// sharding. the URL is per store type, please check docs for more information
    #[clap(short, long, required = true, action=ArgAction::Append)]
    store: Vec<String>,

    /// name of the docker image to be converted to flist
    #[clap(short, long, required = true)]
    image_name: String,

    // docker credentials
    /// docker hub server username
    #[clap(long, required = false)]
    username: Option<String>,

    /// docker hub server password
    #[clap(long, required = false)]
    password: Option<String>,

    /// docker hub server auth
    #[clap(long, required = false)]
    auth: Option<String>,

    /// docker hub server email
    #[clap(long, required = false)]
    email: Option<String>,

    /// docker hub server address
    #[clap(long, required = false)]
    server_address: Option<String>,

    /// docker hub server identity token
    #[clap(long, required = false)]
    identity_token: Option<String>,

    /// docker hub server registry token
    #[clap(long, required = false)]
    registry_token: Option<String>,
}
fn main() -> Result<()> {
    let rt = Builder::new_multi_thread()
        .thread_stack_size(8 * 1024 * 1024)
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(run())
}

async fn run() -> Result<()> {
    let opts = Options::parse();

    simple_logger::SimpleLogger::new()
        .with_utc_timestamps()
        .with_level({
            match opts.debug {
                0 => log::LevelFilter::Info,
                1 => log::LevelFilter::Debug,
                _ => log::LevelFilter::Trace,
            }
        })
        .with_module_level("sqlx", log::Level::Error.to_level_filter())
        .init()?;

    // default to the :latest tag when none is given
    let mut docker_image = opts.image_name.to_string();
    if !docker_image.contains(':') {
        docker_image.push_str(":latest");
    }

    let credentials = Some(DockerCredentials {
        username: opts.username,
        password: opts.password,
        auth: opts.auth,
        email: opts.email,
        serveraddress: opts.server_address,
        identitytoken: opts.identity_token,
        registrytoken: opts.registry_token,
    });

    let fl_name = docker_image.replace([':', '/'], "-") + ".fl";
    let meta = fungi::Writer::new(&fl_name, true).await?;
    let store = parse_router(&opts.store).await?;

    let container_name = Uuid::new_v4().to_string();
    let docker_tmp_dir =
        tempdir::TempDir::new(&container_name).expect("failed to create tmp directory");

    let mut docker_to_fl =
        docker2fl::DockerImageToFlist::new(meta, docker_image, credentials, docker_tmp_dir);
    let res = docker_to_fl.convert(store, None).await;

    // remove the file created with the writer if fl creation failed
    if res.is_err() {
        tokio::fs::remove_file(fl_name).await?;
        return res;
    }

    Ok(())
}
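The flist naming in `run` (default the tag to `:latest`, then replace `:` and `/` with `-` and append `.fl`) can be shown standalone; a sketch mirroring those two lines, with `fl_name` as a hypothetical helper name:

```rust
// Mirrors the flist-name derivation in run(): default the tag to :latest,
// then turn `:` and `/` into `-` and append the .fl extension.
fn fl_name(image: &str) -> String {
    let mut image = image.to_string();
    if !image.contains(':') {
        image.push_str(":latest");
    }
    image.replace([':', '/'], "-") + ".fl"
}

fn main() {
    // matches the names used in the README examples
    assert_eq!(fl_name("redis"), "redis-latest.fl");
    assert_eq!(fl_name("threefolddev/ubuntu:22.04"), "threefolddev-ubuntu-22.04.fl");
    println!("ok");
}
```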