Compare commits

...

67 Commits

Author SHA1 Message Date
Mahmoud-Emad
d7e4e8ec56 refactor: Change timestamp types to i64
- Update `created_at` type from u32 to i64
- Update `updated_at` type from u32 to i64
2025-12-02 15:40:16 +02:00
46e0e56e61 ... 2025-12-02 10:17:45 +01:00
ce3bb5cd9e atlas back 2025-12-02 09:53:35 +01:00
c3690f3d53 ... 2025-12-02 08:45:38 +01:00
e3aaa1b0f8 ... 2025-12-02 07:53:20 +01:00
4096b52244 ... 2025-12-02 05:41:57 +01:00
da2429104a ... 2025-12-02 05:05:11 +01:00
00ac4c8bd1 ... 2025-12-02 05:00:44 +01:00
7db14632d6 ... 2025-12-02 04:53:48 +01:00
63e160029e ... 2025-12-02 04:42:01 +01:00
e55a9741e2 ... 2025-12-02 04:38:48 +01:00
75b548b439 ... 2025-12-02 04:34:43 +01:00
ad65392806 ... 2025-12-02 04:23:21 +01:00
8c8369c42b ... 2025-12-02 04:15:22 +01:00
29ab30788e ... 2025-12-02 03:27:17 +01:00
690af291b5 ... 2025-12-01 20:53:20 +01:00
88680f1954 ... 2025-12-01 19:53:51 +01:00
7dba940d80 ... 2025-12-01 19:35:18 +01:00
5d44d49861 ... 2025-12-01 19:02:06 +01:00
c22e9ae8ce ... 2025-12-01 19:00:31 +01:00
55966be158 ... 2025-12-01 16:45:47 +01:00
Mahmoud-Emad
5f9a95f2ca refactor: Improve site configuration and navigation handling
- Consolidate site configuration loading and parsing
- Refactor navbar and menu item processing logic
- Add console output for configuration steps
- Update copyright year dynamically
- Simplify and clarify parameter handling
- Enhance error handling for missing required parameters
2025-12-01 15:32:09 +02:00
Omdanii
efbe50bdea Merge pull request #221 from Incubaid/dev_docusaurus
Docusaurus Landing Page Slug Handling & Documentation Updates
2025-12-01 15:21:11 +02:00
Mahmoud-Emad
f447c7a3f1 Merge branch 'development' into dev_docusaurus 2025-12-01 15:16:17 +02:00
Omdanii
c346d0c5ed Merge pull request #219 from Incubaid/development_hetzner
feat: Improve Ubuntu installation and SSH execution
2025-12-01 13:03:24 +02:00
Mahmoud-Emad
ba46ed62ef refactor: Update documentation for HeroLib Docusaurus integration
- Refactor site.page_category and site.page arguments
- Update hero command usage for ebook paths
- Clarify Atlas and Doctree integration
- Add new ebook structure examples
- Update HeroScript actions reference
2025-12-01 11:59:16 +02:00
Mahmoud-Emad
8fc560ae78 feat: add docs landing page slug handling
- Add function to find first doc in sidebar
- Pass found doc ID to process_page
- Set slug: / for landing page in frontmatter
2025-12-01 11:54:02 +02:00
ed785c79df ... 2025-12-01 10:35:46 +01:00
d53043dd65 ... 2025-12-01 05:28:15 +01:00
0a731f83e5 ... 2025-12-01 05:27:29 +01:00
Mahmoud-Emad
ed9ff35807 refactor: Improve navigation label generation
- Generate human-readable nav label
- Use title_case for page names without titles
2025-11-30 17:33:17 +02:00
Mahmoud-Emad
e2c2a560c8 feat: Refactor docusaurus playbook and sidebar JSON serialization
- Extract playbook action processing into separate functions
- Add auto-export for Atlas collections
- Simplify sidebar JSON serialization
- Update sidebar navigation item structure
2025-11-30 17:31:41 +02:00
5dcdf72310 Merge branch 'fix/install-hero-latest' into development_hetzner
* fix/install-hero-latest:
  fix: use GitHub 'latest' release URL in install_hero.sh
2025-11-30 15:59:35 +01:00
b6c883b5ac Merge branch 'development_k3s' into development_hetzner
* development_k3s:
  feat: Add K3s installer with complete lifecycle management
  feat: Add K3s installer with complete lifecycle management
  fixing startupcmd
  fix actions
  feat(k3s-installer)
2025-11-30 15:58:41 +01:00
78e4fade03 Merge branch 'development' into development_hetzner
* development:
  ...
  ...
  ...
  ...
  ...
2025-11-30 15:58:18 +01:00
0ca87c5f32 ... 2025-11-30 09:19:10 +01:00
5b2069c560 ... 2025-11-30 08:59:38 +01:00
0963910572 ... 2025-11-30 08:24:36 +01:00
394dd2c88e ... 2025-11-30 07:41:15 +01:00
d16aaa30db ... 2025-11-30 07:28:16 +01:00
d662e46a8d fix: use GitHub 'latest' release URL in install_hero.sh
- Remove hardcoded version, use releases/latest/download instead
- Always use musl builds for Linux (static binary works everywhere)
- Fix variable name bugs (OSNAME -> os_name, OSTYPE -> os_name)
- Only modify .zprofile on macOS (not Linux)
- Remove dead code
2025-11-28 18:06:01 +01:00
18da5823b7 ... 2025-11-28 14:18:16 +01:00
Mahmoud-Emad
1e9de962ad docs: Update Hetzner examples documentation
- Refactor Hetzner examples to use environment variables
- Clarify SSH key configuration for Hetzner
- Improve documentation structure and readability
2025-11-28 11:14:36 +02:00
Mahmoud-Emad
b9dc8996f5 feat: Improve Ubuntu installation and SSH execution
- Update example configuration comments
- Refactor server rescue check to use file_exists
- Add Ubuntu installation timeout and polling constants
- Implement non-interactive installation script execution
- Enhance SSH execution with argument parsing
- Add check to skip reinstallation if Ubuntu is already installed
- Copy SSH key to new system during installation
- Poll for installation completion with progress updates
- Use `node.exec` instead of `node.exec_interactive`
- Use `execvp` correctly for shell execution
- Recreate node connection after server reboot
- Adjust SSH wait timeout to milliseconds
2025-11-28 10:37:47 +02:00
7c03226054 ... 2025-11-28 09:37:21 +01:00
fc13f3e6ae ... 2025-11-28 09:27:19 +01:00
0414ea85df ... 2025-11-28 09:01:58 +01:00
60e2230448 ... 2025-11-28 05:47:47 +01:00
d9ad57985d Merge branch 'development' of github.com:incubaid/herolib into development 2025-11-28 05:42:38 +01:00
8368592267 ... 2025-11-28 05:42:35 +01:00
peternashaat
b9b8e7ab75 feat: Add K3s installer with complete lifecycle management
Implemented a production-ready K3s Kubernetes installer with full lifecycle
support including installation, startup management, and cleanup.

Key features:
- Install first master (cluster init), join additional masters (HA), and workers
- Systemd service management via StartupManager abstraction
- IPv6 support with Mycelium interface auto-detection
- Robust destroy/cleanup with proper ordering to prevent hanging
- Complete removal of services, processes, network interfaces, and data
2025-11-27 14:01:53 +01:00
peternashaat
dc2f8c2976 feat: Add K3s installer with complete lifecycle management
Implemented a production-ready K3s Kubernetes installer with full lifecycle
support including installation, startup management, and cleanup.

Key features:
- Install first master (cluster init), join additional masters (HA), and workers
- Systemd service management via StartupManager abstraction
- IPv6 support with Mycelium interface auto-detection
- Robust destroy/cleanup with proper ordering to prevent hanging
- Complete removal of services, processes, network interfaces, and data
2025-11-27 14:01:22 +01:00
Omdanii
fc592a2e27 Merge pull request #217 from Incubaid/development-docs
Enhance AI Prompts Files
2025-11-27 09:38:03 +02:00
Scott Yeager
0269277ac8 bump version to 1.0.38 2025-11-26 14:53:40 -08:00
Scott Yeager
ee0e7d44fd Fix release script 2025-11-26 14:53:29 -08:00
Scott Yeager
28f00d3dc6 Add build flow doc 2025-11-26 14:52:09 -08:00
Scott Yeager
a30646e3b1 Remove unused glibc upload 2025-11-26 14:02:39 -08:00
Scott Yeager
c7ae0ed393 Update workflow for single static build on Linux 2025-11-26 13:55:48 -08:00
805c900b02 Merge pull request #213 from Incubaid/development_linuxname
Rename "linux"
2025-11-26 13:51:24 -08:00
ab5fe67cc2 ... 2025-11-26 21:40:21 +01:00
peternashaat
449213681e fixing startupcmd 2025-11-26 14:51:53 +00:00
mik-tf
72ab099291 docs: update docs with latest herolib features 2025-11-26 09:10:14 -05:00
mik-tf
007361deab feat: Update AI prompts with Atlas integration details and code example fixes 2025-11-26 09:08:11 -05:00
mik-tf
8a458c6b3f feat: Update Docusaurus ebook manual to reflect Atlas integration and current pipeline 2025-11-26 08:47:32 -05:00
peternashaat
d69023e2c9 fix actions 2025-11-26 12:56:29 +00:00
peternashaat
3f09aad045 feat(k3s-installer) 2025-11-26 11:55:57 +00:00
Scott Yeager
dd293ce387 Rename "linux" 2025-11-25 21:30:01 -08:00
270 changed files with 4222 additions and 1771 deletions

32
.github/workflows/README.md vendored Normal file
View File

@@ -0,0 +1,32 @@
# Building Hero for release
Generally speaking, our scripts and docs for building hero produce non-portable binaries for Linux. While that's fine for development purposes, statically linked binaries are much more convenient for releases and distribution.
The release workflow here creates a static binary for Linux using an Alpine container. A few notes follow about how that's done.
## Static builds in vlang
Since V compiles to C in our case, we are really concerned with how to produce static C builds. The V project provides [some guidance](https://github.com/vlang/v?tab=readme-ov-file#docker-with-alpinemusl) on using an Alpine container and passing `-cflags -static` to the V compiler.
That's fine for some projects. Hero has a dependency on the `libpq` C library for Postgres functionality, however, and this creates a complication.
## Static linking libpq
In order to create a static build of hero on Alpine, we need to install some additional packages:
* openssl-libs-static
* postgresql-dev
The full `apk` command to prepare the container for building looks like this:
```bash
apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
```
Then we also need to instruct the C compiler to link the Postgres support libraries (`pgcommon_shlib` and `pgport_shlib`) statically. Here's the build command:
```bash
v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v
```
Note that gcc is also the preferred compiler for static builds.

View File

@@ -35,9 +35,6 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
# We do the workaround as described here https://github.com/Incubaid/herolib?tab=readme-ov-file#tcc-compiler-error-on-macos
# gcc and clang also don't work on macOS due to https://github.com/vlang/v/issues/25467
# We can change the compiler or remove this when one is fixed
- name: Setup V & Herolib
id: setup
shell: bash
@@ -53,52 +50,34 @@ jobs:
echo "Herolib symlink created to $(pwd)/lib"
timeout-minutes: 10
# We can't make static builds for Linux easily, since we link to libpq
# (Postgres) and this has no static version available in the Alpine
# repos. Therefore we build dynamic binaries for both glibc and musl.
#
# Again we work around a bug limiting our choice of C compiler: tcc won't
# work on Alpine due to https://github.com/vlang/v/issues/24866
# So always use gcc for Linux
#
# For macOS, we can only use tcc (see above), but then we hit issues using
# the garbage collector, so disable that
# For Linux, we build a static binary linked against musl on Alpine. For
# static linking, gcc is preferred
- name: Build Hero
timeout-minutes: 15
run: |
set -e
set -ex
if [ "${{ runner.os }}" = "Linux" ]; then
sudo apt-get install libpq-dev
# Build for glibc
v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}
# Build for musl using Alpine in Docker
docker run --rm \
-v ${{ github.workspace }}/lib:/root/.vmodules/incubaid/herolib \
-v ${{ github.workspace }}:/herolib \
-w /herolib \
alpine \
alpine:3.22 \
sh -c '
apk add --no-cache bash git build-base openssl-dev libpq-dev
set -ex
apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
cd v
make clean
make
./v symlink
cd ..
v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}-musl
v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v -o cli/hero-${{ matrix.target }}-musl
'
else
v -w -d use_openssl -enable-globals -cc clang cli/hero.v -o cli/hero-${{ matrix.target }}
fi
- name: Upload glibc binary
if: runner.os == 'Linux'
uses: actions/upload-artifact@v4
with:
name: hero-${{ matrix.target }}
path: cli/hero-${{ matrix.target }}
- name: Upload musl binary
if: runner.os == 'Linux'
uses: actions/upload-artifact@v4

77
aiprompts/README.md Normal file
View File

@@ -0,0 +1,77 @@
# HeroLib AI Prompts (`aiprompts/`)
This directory contains AI-oriented instructions and manuals for working with the Hero tool and the `herolib` codebase.
It is the **entry point for AI agents** that generate or modify code/docs in this repository.
## Scope
- **Global rules for AI and V/Hero usage**
See:
- `herolib_start_here.md`
- `vlang_herolib_core.md`
- **Herolib core modules**
See:
- `herolib_core/` (core HeroLib modules)
- `herolib_advanced/` (advanced topics)
- **Docusaurus & Site module (Hero docs)**
See:
- `docusaurus/docusaurus_ebook_manual.md`
- `lib/web/docusaurus/README.md` (authoritative module doc)
- `lib/web/site/ai_instructions.md` and `lib/web/site/readme.md`
- **HeroModels / HeroDB**
See:
- `ai_instructions_hero_models.md`
- `heromodel_instruct.md`
- **V language & web server docs** (upstream-style, mostly language-level)
See:
- `v_core/`, `v_advanced/`
- `v_veb_webserver/`
## Sources of Truth
For any domain, **code and module-level docs are authoritative**:
- Core install & usage: `herolib/README.md`, scripts under `scripts/`
- Site module: `lib/web/site/ai_instructions.md`, `lib/web/site/readme.md`
- Docusaurus module: `lib/web/docusaurus/README.md`, `lib/web/docusaurus/*.v`
- DocTree client: `lib/data/doctree/client/README.md`
- HeroModels: `lib/hero/heromodels/*.v` + tests
`aiprompts/` files **must not contradict** these. When in doubt, follow the code / module docs first and treat prompts as guidance.
## Directory Overview
- `herolib_start_here.md` / `vlang_herolib_core.md`
Global AI rules and V/Hero basics.
- `herolib_core/` & `herolib_advanced/`
Per-module instructions for core/advanced HeroLib features.
- `docusaurus/`
AI manual for building Hero docs/ebooks with the Docusaurus + Site + DocTree pipeline.
- `instructions/`
Active, higher-level instructions (e.g. HeroDB base filesystem).
- `instructions_archive/`
**Legacy / historical** prompt material. See `instructions_archive/README.md`.
- `todo/`
Meta design/refactor notes (not up-to-date instructions for normal usage).
- `v_core/`, `v_advanced/`, `v_veb_webserver/`
V language and web framework references used when generating V code.
- `bizmodel/`, `unpolly/`, `doctree/`, `documentor/`
Domain-specific or feature-specific instructions.
## How to Treat Legacy Material
- Content under `instructions_archive/` is **kept for reference** and may describe older flows (e.g. older documentation or prompt pipelines).
Do **not** use it as a primary source for new work unless explicitly requested.
- Some prompts mention **Doctree**; the current default docs pipeline uses **DocTree**. Doctree/`doctreeclient` is an alternative/legacy backend.
## Guidelines for AI Agents
- Always:
- Respect global rules in `herolib_start_here.md` and `vlang_herolib_core.md`.
- Prefer module docs under `lib/` when behavior or parameters differ.
- Avoid modifying generated files (e.g. `*_.v` or other generated artifacts) as instructed.
- When instructions conflict, resolve as:
1. **Code & module docs in `lib/`**
2. **AI instructions in `aiprompts/`**
3. **Archived docs (`instructions_archive/`) only when explicitly needed**.

View File

@@ -2,9 +2,9 @@
## Overview
This document provides clear instructions for AI agents to create new HeroDB models similar to `message.v`.
This document provides clear instructions for AI agents to create new HeroDB models similar to `message.v`.
These models are used to store structured data in Redis using the HeroDB system.
The message.v can be found in `lib/hero/heromodels/message.v`.s
The `message.v` example can be found in `lib/hero/heromodels/message.v`.
## Key Concepts
@@ -108,7 +108,7 @@ Add your model to the ModelsFactory struct in `factory.v`:
```v
pub struct ModelsFactory {
pub mut:
messages DBCalendar
calendar DBCalendar
// ... other models
}
```

View File

@@ -1,51 +0,0 @@
# Doctree Export Specification
## Overview
The `doctree` module in `lib/data/doctree` is responsible for processing and exporting documentation trees. This involves taking a structured representation of documentation (collections, pages, images, files) and writing it to a specified file system destination. Additionally, it leverages Redis to store metadata about the exported documentation, facilitating quick lookups and integration with other systems.
## Key Components
### `lib/data/doctree/export.v`
This file defines the main `export` function for the `Tree` object. It orchestrates the overall export process:
- Takes `TreeExportArgs` which includes parameters like `destination`, `reset` (to clear destination), `keep_structure`, `exclude_errors`, `toreplace` (for regex replacements), `concurrent` (for parallel processing), and `redis` (to control Redis metadata storage).
- Processes definitions, includes, actions, and macros within the `Tree`.
- Generates file paths for pages, images, and other files.
- Iterates through `Collection` objects within the `Tree` and calls their respective `export` methods, passing down the `redis` flag.
### `lib/data/doctree/collection/export.v`
This file defines the `export` function for the `Collection` object. This is where the actual file system writing and Redis interaction for individual collections occur:
- Takes `CollectionExportArgs` which includes `destination`, `file_paths`, `reset`, `keep_structure`, `exclude_errors`, `replacer`, and the `redis` flag.
- Creates a `.collection` file in the destination directory with basic collection information.
- **Redis Integration**:
- Obtains a Redis client using `base.context().redis()`.
- Stores the collection's destination path in Redis using `redis.hset('doctree:path', 'collection_name', 'destination_path')`.
- Calls `export_pages`, `export_files`, `export_images`, and `export_linked_pages` which all interact with Redis if the `redis` flag is true.
- **`export_pages`**:
- Processes page links and handles not-found errors.
- Writes markdown content to the destination file system.
- Stores page metadata in Redis: `redis.hset('doctree:collection_name', 'page_name', 'page_file_name.md')`.
- **`export_files` and `export_images`**:
- Copies files and images to the destination directory (e.g., `img/`).
- Stores file/image metadata in Redis: `redis.hset('doctree:collection_name', 'file_name', 'img/file_name.ext')`.
- **`export_linked_pages`**:
- Gathers linked pages within the collection.
- Writes a `.linkedpages` file.
- Stores linked pages file metadata in Redis: `redis.hset('doctree:collection_name', 'linkedpages', 'linkedpages_file_name.md')`.
## Link between Redis and Export
The `doctree` export process uses Redis as a metadata store. When the `redis` flag is set to `true` (which is the default), the export functions populate Redis with key-value pairs that map collection names, page names, file names, and image names to their respective paths and file names within the exported documentation structure.
This Redis integration serves as a quick lookup mechanism for other applications or services that might need to access or reference the exported documentation. Instead of traversing the file system, these services can query Redis to get the location of specific documentation elements.
## Is Export Needed?
Yes, the export functionality is crucial for making the processed `doctree` content available outside the internal `doctree` representation.
- **File System Export**: The core purpose of the export is to write the documentation content (markdown files, images, other assets) to a specified directory. This is essential for serving the documentation via a web server, integrating with static site generators (like Docusaurus, as suggested by other files in the project), or simply providing a browsable version of the documentation.
- **Redis Metadata**: While the file system export is fundamental, the Redis metadata storage is an important complementary feature. It provides an efficient way for other systems to programmatically discover and locate documentation assets. If there are downstream applications that rely on this Redis metadata for navigation, search, or content delivery, then the Redis part of the export is indeed needed. If no such applications exist or are planned, the `redis` flag can be set to `false` to skip this step, but the file system export itself remains necessary for external consumption of the documentation.
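To make the Redis layout described above concrete, here is a hypothetical lookup sketch in V. The `base` import path and an `hget` method symmetric to the `hset` calls above are assumptions, not confirmed API:
```v
import incubaid.herolib.core.base // import path is an assumption

// Resolve an exported page on disk via the doctree metadata in Redis.
fn lookup_page() !string {
	// obtain the client the same way the export code does
	mut redis := base.context().redis()
	// where was collection 'mycollection' exported to?
	dest := redis.hget('doctree:path', 'mycollection')!
	// which markdown file holds page 'intro' in that collection?
	page_file := redis.hget('doctree:mycollection', 'intro')!
	return '${dest}/${page_file}'
}
```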

View File

@@ -2,13 +2,38 @@
This manual provides a comprehensive guide on how to leverage HeroLib's Docusaurus integration, Doctree, and HeroScript to create and manage technical ebooks, optimized for AI-driven content generation and project management.
## Quick Start - Recommended Ebook Structure
The recommended directory structure for an ebook:
```
my_ebook/
├── scan.hero # DocTree collection scanning
├── config.hero # Site configuration
├── menus.hero # Navbar and footer configuration
├── include.hero # Docusaurus define and doctree export
├── 1_intro.heroscript # Page definitions (numbered for ordering)
├── 2_concepts.heroscript # More page definitions
└── 3_advanced.heroscript # Additional pages
```
**Running an ebook:**
```bash
# Start development server
hero docs -d -p /path/to/my_ebook
# Build for production
hero docs -p /path/to/my_ebook
```
## 1. Core Concepts
To effectively create ebooks with HeroLib, it's crucial to understand the interplay of three core components:
* **HeroScript**: A concise scripting language used to define the structure, configuration, and content flow of your Docusaurus site. It acts as the declarative interface for the entire process.
* **HeroScript**: A concise scripting language used to define the structure, configuration, and content flow of your Docusaurus site. It acts as the declarative interface for the entire process. Files use `.hero` extension for configuration and `.heroscript` for page definitions.
* **Docusaurus**: A popular open-source static site generator. HeroLib uses Docusaurus as the underlying framework to render your ebook content into a navigable website.
* **Doctree**: HeroLib's content management system. Doctree organizes your markdown files into "collections" and "pages," allowing for structured content retrieval and reuse across multiple projects.
* **DocTree**: HeroLib's document collection layer. DocTree scans and exports markdown "collections" and "pages" that Docusaurus consumes.
## 2. Setting Up a Docusaurus Project with HeroLib
@@ -22,18 +47,26 @@ The `docusaurus.define` HeroScript directive configures the global settings for
```heroscript
!!docusaurus.define
name:"my_ebook" // must match the site name from !!site.config
path_build: "/tmp/my_ebook_build"
path_publish: "/tmp/my_ebook_publish"
production: true
update: true
reset: true // clean build dir before building (optional)
install: true // run bun install if needed (optional)
template_update: true // update the Docusaurus template (optional)
doctree_dir: "/tmp/doctree_export" // where DocTree exports collections
use_doctree: true // use DocTree as content backend
```
**Arguments:**
* `name` (string, required): The site/factory name. Must match the `name` used in `!!site.config` so Docusaurus can find the corresponding site definition.
* `path_build` (string, optional): The local path where the Docusaurus site will be built. Defaults to `~/hero/var/docusaurus/build`.
* `path_publish` (string, optional): The local path where the final Docusaurus site will be published (e.g., for deployment). Defaults to `~/hero/var/docusaurus/publish`.
* `production` (boolean, optional): If `true`, the site will be built for production (optimized). Default is `false`.
* `update` (boolean, optional): If `true`, the Docusaurus template and dependencies will be updated. Default is `false`.
* `reset` (boolean, optional): If `true`, clean the build directory before starting.
* `install` (boolean, optional): If `true`, run dependency installation (e.g., `bun install`).
* `template_update` (boolean, optional): If `true`, update the Docusaurus template.
* `doctree_dir` (string, optional): Directory where DocTree exports collections (used by the DocTree client in `lib/data/doctree/client`).
* `use_doctree` (boolean, optional): If `true`, use the DocTree client as the content backend (default behavior).
### 2.2. Adding a Docusaurus Site (`docusaurus.add`)
@@ -53,7 +86,7 @@ The `docusaurus.add` directive defines an individual Docusaurus site (your ebook
```heroscript
!!docusaurus.add
name:"tfgrid_tech_ebook"
git_url:"https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/ebooks/tech"
git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/ebooks/tech"
git_reset:true // Reset Git repository before pulling
git_pull:true // Pull latest changes
git_root:"/tmp/git_clones" // Optional: specify a root directory for git clones
@@ -190,18 +223,18 @@ Configure the footer section of your Docusaurus site.
* `href` (string, optional): External URL for the link.
* `to` (string, optional): Internal Docusaurus path.
### 3.4. Build Destinations (`site.build_dest`, `site.build_dest_dev`)
### 3.4. Publish Destinations (`site.publish`, `site.publish_dev`)
Specify where the built Docusaurus site should be deployed. This typically involves an SSH connection defined elsewhere (e.g., `!!site.ssh_connection`).
**HeroScript Example:**
```heroscript
!!site.build_dest
!!site.publish
ssh_name:"production_server" // Name of a pre-defined SSH connection
path:"/var/www/my-ebook" // Remote path on the server
path:"/var/www/my-ebook" // Remote path on the server
!!site.build_dest_dev
!!site.publish_dev
ssh_name:"dev_server"
path:"/tmp/dev-ebook"
```
@@ -219,7 +252,7 @@ This powerful feature allows you to pull markdown content and assets from other
```heroscript
!!site.import
url:'https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/collections/cloud_reinvented'
url:'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/cloud_reinvented'
dest:'cloud_reinvented' // Destination subdirectory within your Docusaurus docs folder
replace:'NAME:MyName, URGENCY:red' // Optional: comma-separated key:value pairs for text replacement
```
@@ -238,49 +271,60 @@ This is where you define the actual content pages and how they are organized int
```heroscript
// Define a category
!!site.page_category path:'introduction' label:"Introduction to Ebook" position:10
!!site.page_category name:'introduction' label:"Introduction to Ebook"
// Define a page within that category, linking to Doctree content
!!site.page path:'introduction' src:"my_doctree_collection:chapter_1_overview"
// Define pages - first page specifies collection, subsequent pages reuse it
!!site.page src:"my_collection:chapter_1_overview"
title:"Chapter 1: Overview"
description:"A brief introduction to the ebook's content."
position:1 // Order within the category
hide_title:true // Hide the title on the page itself
!!site.page src:"chapter_2_basics"
title:"Chapter 2: Basics"
// New category with new collection
!!site.page_category name:'advanced' label:"Advanced Topics"
!!site.page src:"advanced_collection:performance"
title:"Performance Tuning"
hide_title:true
```
**Arguments:**
* **`site.page_category`**:
* `path` (string, required): The path to the category directory within your Docusaurus `docs` folder (e.g., `introduction` will create `docs/introduction/_category_.json`).
* `name` (string, required): Category identifier (used internally).
* `label` (string, required): The display name for the category in the sidebar.
* `position` (int, optional): The order of the category in the sidebar.
* `sitename` (string, optional): If you have multiple Docusaurus sites defined, specify which site this category belongs to. Defaults to the current site's name.
* `position` (int, optional): The order of the category in the sidebar (auto-incremented if omitted).
* **`site.page`**:
* `src` (string, required): **Crucial for Doctree integration.** This specifies the source of the page content in the format `collection_name:page_name`. HeroLib will fetch the markdown content from the specified Doctree collection and page.
* `path` (string, required): The relative path and filename for the generated markdown file within your Docusaurus `docs` folder (e.g., `introduction/chapter_1.md`). If only a directory is provided (e.g., `introduction/`), the `page_name` from `src` will be used as the filename.
* `title` (string, optional): The title of the page. If not provided, HeroLib will attempt to extract it from the markdown content or use the `page_name`.
* `src` (string, required): **Crucial for DocTree/collection integration.** Format: `collection_name:page_name` for the first page, or just `page_name` to reuse the previous collection.
* `title` (string, optional): The title of the page. If not provided, HeroLib extracts it from the markdown `# Heading` or uses the page name.
* `description` (string, optional): A short description for the page, used in frontmatter.
* `position` (int, optional): The order of the page within its category.
* `hide_title` (boolean, optional): If `true`, the title will not be displayed on the page itself.
* `draft` (boolean, optional): If `true`, the page will be marked as a draft and not included in production builds.
* `title_nr` (int, optional): If set, HeroLib will re-number the markdown headings (e.g., `title_nr:3` will make `# Heading` become `### Heading`). Useful for consistent heading levels across imported content.
* `draft` (boolean, optional): If `true`, the page will be hidden from navigation.
### 3.7. Doctree Integration Details
### 3.7. Collections and DocTree/Doctree Integration
The `site.page` directive's `src` parameter (`collection_name:page_name`) is the bridge to your Doctree content.
The `site.page` directive's `src` parameter (`collection_name:page_name`) is the bridge to your content collections.
**How Doctree Works:**
**Current default: DocTree export**
1. **Collections**: DocTree exports markdown files into collections under an `export_dir` (see `lib/data/doctree/client`).
2. **Export step**: A separate process (DocTree) writes the collections into `doctree_dir` (e.g., `/tmp/doctree_export`), following the `content/` + `meta/` structure.
3. **Docusaurus consumption**: The Docusaurus module uses the DocTree client (`doctree_client`) to resolve `collection_name:page_name` into markdown content and assets when generating docs.
**Alternative: Doctree/`doctreeclient`**
In older setups, or when explicitly configured, Doctree and `doctreeclient` can still be used to provide the same `collection:page` model:
1. **Collections**: Doctree organizes markdown files into logical groups called "collections." A collection is typically a directory containing markdown files and an empty `.collection` file.
2. **Scanning**: You define which collections Doctree should scan using `!!doctree.scan` in a HeroScript file (e.g., `doctree.heroscript`).
**Example `doctree.heroscript`:**
2. **Scanning**: You define which collections Doctree should scan using `!!doctree.scan` in a HeroScript file (e.g., `doctree.heroscript`):
```heroscript
!!doctree.scan git_url:"https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/collections"
!!doctree.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections"
```
This will pull the `collections` directory from the specified Git URL and make its contents available to Doctree.
3. **Page Retrieval**: When `site.page` references `src:"my_collection:my_page"`, HeroLib's `doctreeclient` fetches the content of `my_page.md` from the `my_collection` collection that Doctree has scanned.
3. **Page Retrieval**: When `site.page` references `src:"my_collection:my_page"`, the client (`doctree_client` or `doctreeclient`, depending on configuration) fetches the content of `my_page.md` from the `my_collection` collection.
## 4. Building and Developing Your Ebook

View File

@@ -35,11 +35,11 @@ pub fn play(mut plbook PlayBook) ! {
if plbook.exists_once(filter: 'docusaurus.define') {
mut action := plbook.get(filter: 'docusaurus.define')!
mut p := action.params
//example how we get parameters from the action see core_params.md for more details
ds = new(
path: p.get_default('path_publish', '')!
production: p.get_default_false('production')
)!
//example how we get parameters from the action see aiprompts/herolib_core/core_params.md for more details
path_build := p.get_default('path_build', '')!
path_publish := p.get_default('path_publish', '')!
reset := p.get_default_false('reset')
use_doctree := p.get_default_false('use_doctree')
}
// Process 'docusaurus.add' actions to configure individual Docusaurus sites
@@ -51,4 +51,4 @@ pub fn play(mut plbook PlayBook) ! {
}
```
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/ai_core/core_params.md`.
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/herolib_core/core_params.md`.
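As a hedged illustration of the getter styles named above (parameter names here are made up; `core_params.md` is the authoritative reference):
```v
// Hypothetical parameter reads against an action's params object.
name := p.get('name')! // required, errors if absent
port := p.get_int('port')! // required integer
update := p.get_default_true('update') // optional bool, defaults to true
reset := p.get_default_false('reset') // optional bool, defaults to false
```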

View File

@@ -1,3 +1,5 @@
> NOTE: This document is an example snapshot of a developer's filesystem layout for HeroDB/HeroModels. Paths under `/Users/despiegk/...` are illustrative only. For the current, authoritative structure always use the live repository tree (this checkout) and the modules under `lib/hero/heromodels` and `lib/hero/db`.
<file_map>
/Users/despiegk/code/github/incubaid/herolib
├── .github

View File

@@ -0,0 +1,15 @@
# Instructions Archive (Legacy Prompts)
This directory contains **archived / legacy AI prompt material** for `herolib`.
- Files here may describe **older workflows** (e.g. previous documentation generation or model pipelines).
- They are kept for **historical reference** and to help understand how things evolved.
- They are **not** guaranteed to match the current `herolib` implementation.
## Usage Guidelines
- Do **not** use these files as the primary source for new features or refactors.
- When generating code or documentation, prefer:
1. Code and module docs under `lib/` (e.g. `lib/web/site/ai_instructions.md`, `lib/web/docusaurus/README.md`).
2. Up-to-date AI instructions under `aiprompts/` (outside of `instructions_archive/`).
- Only consult this directory when you explicitly need to understand **historical behavior** or migrate old flows.

View File

@@ -1,51 +1,10 @@
# module orm
## Contents
- [Constants](#Constants)
- [new_query](#new_query)
- [orm_select_gen](#orm_select_gen)
- [orm_stmt_gen](#orm_stmt_gen)
- [orm_table_gen](#orm_table_gen)
- [Connection](#Connection)
- [Primitive](#Primitive)
- [QueryBuilder[T]](#QueryBuilder[T])
- [reset](#reset)
- [where](#where)
- [or_where](#or_where)
- [order](#order)
- [limit](#limit)
- [offset](#offset)
- [select](#select)
- [set](#set)
- [query](#query)
- [count](#count)
- [insert](#insert)
- [insert_many](#insert_many)
- [update](#update)
- [delete](#delete)
- [create](#create)
- [drop](#drop)
- [last_id](#last_id)
- [MathOperationKind](#MathOperationKind)
- [OperationKind](#OperationKind)
- [OrderType](#OrderType)
- [SQLDialect](#SQLDialect)
- [StmtKind](#StmtKind)
- [InfixType](#InfixType)
- [Null](#Null)
- [QueryBuilder](#QueryBuilder)
- [QueryData](#QueryData)
- [SelectConfig](#SelectConfig)
- [Table](#Table)
- [TableField](#TableField)
## Constants
```v
const num64 = [typeof[i64]().idx, typeof[u64]().idx]
```
[[Return to contents]](#Contents)
```v
const nums = [
@@ -59,7 +18,7 @@ const nums = [
]
```
[[Return to contents]](#Contents)
```v
const float = [
@@ -68,31 +27,31 @@ const float = [
]
```
[[Return to contents]](#Contents)
```v
const type_string = typeof[string]().idx
```
[[Return to contents]](#Contents)
```v
const serial = -1
```
[[Return to contents]](#Contents)
```v
const time_ = -2
```
[[Return to contents]](#Contents)
```v
const enum_ = -3
```
[[Return to contents]](#Contents)
```v
const type_idx = {
@@ -111,19 +70,19 @@ const type_idx = {
}
```
[[Return to contents]](#Contents)
```v
const string_max_len = 2048
```
[[Return to contents]](#Contents)
```v
const null_primitive = Primitive(Null{})
```
[[Return to contents]](#Contents)
## new_query
```v
@@ -132,7 +91,7 @@ fn new_query[T](conn Connection) &QueryBuilder[T]
new_query create a new query object for struct `T`
[[Return to contents]](#Contents)
## orm_select_gen
```v
@@ -141,7 +100,7 @@ fn orm_select_gen(cfg SelectConfig, q string, num bool, qm string, start_pos int
Generates an sql select stmt from universal parameters. orm - see SelectConfig; q, num, qm, start_pos - see orm_stmt_gen; where - see QueryData
[[Return to contents]](#Contents)
## orm_stmt_gen
```v
@@ -151,7 +110,7 @@ fn orm_stmt_gen(sql_dialect SQLDialect, table Table, q string, kind StmtKind, nu
Generates an sql stmt from universal parameters. q - the quote character, which can be different in every type, so it's variable; num - stmt uses nums at prepared statements (? or ?1); qm - character for prepared statement (qm for question mark, as in sqlite); start_pos - when num is true, it's the start position of the counter
[[Return to contents]](#Contents)
## orm_table_gen
```v
@@ -161,7 +120,7 @@ fn orm_table_gen(sql_dialect SQLDialect, table Table, q string, defaults bool, d
Generates an sql table stmt from universal parameters. table - Table struct; q - see orm_stmt_gen; defaults - enables default values in stmt; def_unique_len - sets default unique length for texts; fields - see TableField; sql_from_v - function which maps type indices to sql type names; alternative - needed for msdb
[[Return to contents]](#Contents)
## Connection
```v
@@ -181,7 +140,7 @@ Interfaces gets called from the backend and can be implemented Since the orm sup
Every function without last_id() returns an optional, which returns an error if present. last_id returns the last inserted id of the db
[[Return to contents]](#Contents)
## Primitive
```v
@@ -203,7 +162,7 @@ type Primitive = InfixType
| []Primitive
```
[[Return to contents]](#Contents)
## QueryBuilder[T]
## reset
@@ -213,7 +172,7 @@ fn (qb_ &QueryBuilder[T]) reset() &QueryBuilder[T]
reset reset a query object, but keep the connection and table name
[[Return to contents]](#Contents)
## where
```v
@@ -222,7 +181,7 @@ fn (qb_ &QueryBuilder[T]) where(condition string, params ...Primitive) !&QueryBu
where create a `where` clause, it will `AND` with previous `where` clause. Valid tokens in the `condition` include: `field's names`, `operator`, `(`, `)`, `?`, `AND`, `OR`, `||`, `&&`. Valid `operator` values include: `=`, `!=`, `<>`, `>=`, `<=`, `>`, `<`, `LIKE`, `ILIKE`, `IS NULL`, `IS NOT NULL`, `IN`, `NOT IN`. Example: `where('(a > ? AND b <= ?) OR (c <> ? AND (x = ? OR y = ?))', a, b, c, x, y)`
[[Return to contents]](#Contents)
## or_where
```v
@@ -231,7 +190,7 @@ fn (qb_ &QueryBuilder[T]) or_where(condition string, params ...Primitive) !&Quer
or_where create a `where` clause, it will `OR` with previous `where` clause.
[[Return to contents]](#Contents)
## order
```v
@@ -240,7 +199,7 @@ fn (qb_ &QueryBuilder[T]) order(order_type OrderType, field string) !&QueryBuild
order create a `order` clause
[[Return to contents]](#Contents)
## limit
```v
@@ -249,7 +208,7 @@ fn (qb_ &QueryBuilder[T]) limit(limit int) !&QueryBuilder[T]
limit create a `limit` clause
[[Return to contents]](#Contents)
## offset
```v
@@ -258,7 +217,7 @@ fn (qb_ &QueryBuilder[T]) offset(offset int) !&QueryBuilder[T]
offset create a `offset` clause
[[Return to contents]](#Contents)
## select
```v
@@ -267,7 +226,7 @@ fn (qb_ &QueryBuilder[T]) select(fields ...string) !&QueryBuilder[T]
select create a `select` clause
[[Return to contents]](#Contents)
## set
```v
@@ -276,7 +235,7 @@ fn (qb_ &QueryBuilder[T]) set(assign string, values ...Primitive) !&QueryBuilder
set create a `set` clause for `update`
[[Return to contents]](#Contents)
## query
```v
@@ -285,7 +244,7 @@ fn (qb_ &QueryBuilder[T]) query() ![]T
query start a query and return result in struct `T`
[[Return to contents]](#Contents)
## count
```v
@@ -294,7 +253,7 @@ fn (qb_ &QueryBuilder[T]) count() !int
count start a count query and return result
[[Return to contents]](#Contents)
## insert
```v
@@ -303,7 +262,7 @@ fn (qb_ &QueryBuilder[T]) insert[T](value T) !&QueryBuilder[T]
insert insert a record into the database
[[Return to contents]](#Contents)
## insert_many
```v
@@ -312,7 +271,7 @@ fn (qb_ &QueryBuilder[T]) insert_many[T](values []T) !&QueryBuilder[T]
insert_many insert records into the database
[[Return to contents]](#Contents)
## update
```v
@@ -321,7 +280,7 @@ fn (qb_ &QueryBuilder[T]) update() !&QueryBuilder[T]
update update record(s) in the database
[[Return to contents]](#Contents)
## delete
```v
@@ -330,7 +289,7 @@ fn (qb_ &QueryBuilder[T]) delete() !&QueryBuilder[T]
delete delete record(s) in the database
[[Return to contents]](#Contents)
## create
```v
@@ -339,7 +298,7 @@ fn (qb_ &QueryBuilder[T]) create() !&QueryBuilder[T]
create create a table
[[Return to contents]](#Contents)
## drop
```v
@@ -348,7 +307,7 @@ fn (qb_ &QueryBuilder[T]) drop() !&QueryBuilder[T]
drop drop a table
[[Return to contents]](#Contents)
## last_id
```v
@@ -357,7 +316,7 @@ fn (qb_ &QueryBuilder[T]) last_id() int
last_id returns the last inserted id of the db
[[Return to contents]](#Contents)
## MathOperationKind
```v
@@ -369,7 +328,7 @@ enum MathOperationKind {
}
```
[[Return to contents]](#Contents)
## OperationKind
```v
@@ -389,7 +348,7 @@ enum OperationKind {
}
```
[[Return to contents]](#Contents)
## OrderType
```v
@@ -399,7 +358,7 @@ enum OrderType {
}
```
[[Return to contents]](#Contents)
## SQLDialect
```v
@@ -411,7 +370,7 @@ enum SQLDialect {
}
```
[[Return to contents]](#Contents)
## StmtKind
```v
@@ -422,7 +381,7 @@ enum StmtKind {
}
```
[[Return to contents]](#Contents)
## InfixType
```v
@@ -434,14 +393,14 @@ pub:
}
```
[[Return to contents]](#Contents)
## Null
```v
struct Null {}
```
[[Return to contents]](#Contents)
## QueryBuilder
```v
@@ -456,7 +415,7 @@ pub mut:
}
```
[[Return to contents]](#Contents)
## QueryData
```v
@@ -474,7 +433,7 @@ pub mut:
Examples for QueryData in SQL: `abc == 3 && b == 'test'` => fields[abc, b]; data[3, 'test']; types[index of int, index of string]; kinds[.eq, .eq]; is_and[true]. Every field, data, type & kind of operation in the expr share the same index in the arrays. is_and defines how they're joined to each other (either `AND` or `OR`). parentheses defines which fields will be inside (). auto_fields are indexes of fields where the db should generate a value when absent in an insert
[[Return to contents]](#Contents)
## SelectConfig
```v
@@ -496,7 +455,7 @@ pub mut:
table - Table struct; is_count - either the data will be returned or an integer with the count; has_where - select all or use a where expr; has_order - order the results; order - name of the column which will be ordered; order_type - type of order (asc, desc); has_limit - limits the output data; primary - name of the primary field; has_offset - add an offset to the result; fields - fields to select; types - types to select
[[Return to contents]](#Contents)
## Table
```v
@@ -507,7 +466,7 @@ pub mut:
}
```
[[Return to contents]](#Contents)
## TableField
```v
@@ -521,7 +480,3 @@ pub mut:
is_arr bool
}
```
[[Return to contents]](#Contents)
#### Powered by vdoc. Generated on: 2 Sep 2025 07:19:37

View File

@@ -0,0 +1,282 @@
# V ORM — Developer Cheat Sheet
*Fast reference for Struct Mapping, CRUD, Attributes, Query Builder, and Usage Patterns*
---
## 1. What V ORM Is
* Built-in ORM for **SQLite**, **MySQL**, **PostgreSQL**
* Unified V-syntax; no SQL string building
* Automatic query sanitization
* Compile-time type & field checks
* Structs map directly to tables
---
## 2. Define Models (Struct ↔ Table)
### Basic Example
```v
struct User {
	id    int    @[primary; sql: serial]
	name  string
	email string @[unique]
}
```
### Nullable Fields
```v
age ?int // allows NULL
```
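A short sketch of the NULL round-trip (struct and field names illustrative; assumes an open connection `db` as in the later examples):
```v
struct Person {
	id  int  @[primary; sql: serial]
	age ?int // NULL when unset
}

p := Person{} // age not set, so the row stores NULL
sql db {
	insert p into Person
}!
```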
---
## 3. Struct Attributes
### Table-level
| Attribute | Meaning |
| ---------------------------- | ------------------------- |
| `@[table: 'custom_name']` | Override table name |
| `@[comment: '...']` | Table comment |
| `@[index: 'field1, field2']` | Creates multi-field index |
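
For instance, a minimal sketch of a table-level attribute overriding the table name:
```v
@[table: 'app_users'] // rows go to 'app_users' instead of a name derived from the struct
struct AppUser {
	id   int @[primary; sql: serial]
	name string
}
```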
---
## 4. Field Attributes
| Attribute | Description |
| ------------------------------------------------ | ---------------------------- |
| `@[primary]` | Primary key |
| `@[unique]` | UNIQUE constraint |
| `@[unique: 'group']` | Composite unique group |
| `@[skip]` / `@[sql: '-']` | Ignore field |
| `@[sql: serial]` | Auto-increment key |
| `@[sql: 'col_name']` | Rename column |
| `@[sql_type: 'BIGINT']` | Force SQL type |
| `@[default: 'CURRENT_TIMESTAMP']` | Raw SQL default |
| `@[fkey: 'field']` | Foreign key on a child array |
| `@[references]`, `@[references: 'table(field)']` | FK relationship |
| `@[index]` | Index on field |
| `@[comment: '...']` | Column comment |
### Example
```v
struct Post {
	id        int @[primary; sql: serial]
	title     string
	body      string
	author_id int @[references: 'users(id)']
}
```
---
## 5. ORM SQL Block (Primary API)
### Create Table
```v
sql db {
	create table User
}!
```
### Drop Table
```v
sql db {
	drop table User
}!
```
### Insert
```v
id := sql db {
	insert new_user into User
}!
```
### Select
```v
users := sql db {
	select from User where age > 18 && name != 'Tom'
	order by id desc
	limit 10
}!
```
### Update
```v
sql db {
	update User set name = 'Alice' where id == 1
}!
```
### Delete
```v
sql db {
	delete from User where id > 100
}!
```
---
## 6. Relationships
### One-to-Many
```v
struct Parent {
	id       int     @[primary; sql: serial]
	children []Child @[fkey: 'parent_id']
}

struct Child {
	id        int @[primary; sql: serial]
	parent_id int
}
```
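Because `children` carries the `fkey` attribute, inserting the parent also writes the child rows; a brief sketch (assumes an open connection `db`):
```v
p := Parent{
	children: [Child{}, Child{}]
}
sql db {
	insert p into Parent
}!
// the ORM sets parent_id on each child row automatically
```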
---
## 7. Notes on `time.Time`
* Stored as integer timestamps
* SQL defaults like `NOW()` / `CURRENT_TIMESTAMP` **don't work** for `time.Time` with V ORM defaults
* Use `@[default: 'CURRENT_TIMESTAMP']` only with custom SQL types
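A common workaround is to store a plain integer timestamp and set it in code (compare the `created_at`/`updated_at` i64 change in the first commit of this page); a sketch, assuming an open connection `db`:
```v
import time

struct Event {
	id         int @[primary; sql: serial]
	created_at i64 // unix seconds, set in code rather than via a SQL default
}

e := Event{
	created_at: time.now().unix()
}
sql db {
	insert e into Event
}!
```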
---
## 8. Query Builder API (Dynamic Queries)
### Create Builder
```v
mut qb := orm.new_query[User](db)
```
### Create Table
```v
qb.create()!
```
### Insert Many
```v
qb.insert_many(users)!
```
### Select
```v
results := qb
	.select('id, name')!
	.where('age > ?', 18)!
	.order('id DESC')!
	.limit(20)!
	.query()!
```
### Update
```v
qb
	.set('name = ?', 'NewName')!
	.where('id = ?', 1)!
	.update()!
```
### Delete
```v
qb.where('created_at IS NULL')!.delete()!
```
### Complex WHERE
```v
qb.where(
	'(salary > ? AND age < ?) OR (role LIKE ?)',
	3000, 40, '%engineer%'
)!
```
---
## 9. Connecting to Databases
### SQLite
```v
import db.sqlite
db := sqlite.connect('db.sqlite')!
```
### MySQL
```v
import db.mysql
db := mysql.connect(host: 'localhost', user: 'root', password: '', dbname: 'test')!
```
### PostgreSQL
```v
import db.pg
db := pg.connect(host: 'localhost', user: 'postgres', password: '', dbname: 'test')!
```
---
## 10. Full Example (Complete CRUD)
```v
import db.sqlite

struct Customer {
	id    int    @[primary; sql: serial]
	name  string
	email string @[unique]
}

fn main() {
	db := sqlite.connect('customers.db')!
	sql db { create table Customer }!
	new_c := Customer{
		name:  'Alice'
		email: 'alice@x.com'
	}
	new_id := sql db { insert new_c into Customer }!
	println(new_id)
	list := sql db { select from Customer where name == 'Alice' }!
	println(list)
	sql db { update Customer set name = 'Alicia' where id == new_id }!
	sql db { delete from Customer where id == new_id }!
}
```
---
## 11. Best Practices
* Always use `sql db { ... }` for static queries
* Use QueryBuilder for dynamic conditions
* Prefer `sql: serial` for primary keys
* Explicitly define foreign keys
* Use `?T` for nullable fields
* Keep struct names identical to table names unless overridden

View File

@@ -122,12 +122,12 @@ pub fn play(mut plbook PlayBook) ! {
if plbook.exists_once(filter: 'docusaurus.define') {
mut action := plbook.get(filter: 'docusaurus.define')!
mut p := action.params
//example how we get parameters from the action see core_params.md for more details
ds = new(
path: p.get_default('path_publish', '')!
production: p.get_default_false('production')
)!
}
//example how we get parameters from the action see aiprompts/herolib_core/core_params.md for more details
path_build := p.get_default('path_build', '')!
path_publish := p.get_default('path_publish', '')!
reset := p.get_default_false('reset')
use_doctree := p.get_default_false('use_doctree')
}
// Process 'docusaurus.add' actions to configure individual Docusaurus sites
actions := plbook.find(filter: 'docusaurus.add')!
@@ -138,7 +138,7 @@ pub fn play(mut plbook PlayBook) ! {
}
```
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/ai_core/core_params.md`.
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/herolib_core/core_params.md`.
# PlayBook, process heroscripts

View File

@@ -10,6 +10,7 @@ fp.version('v0.1.0')
fp.description('Compile hero binary in debug or production mode')
fp.skip_executable()
prod_mode := fp.bool('prod', `p`, false, 'Build production version (optimized)')
help_requested := fp.bool('help', `h`, false, 'Show help message')
@@ -61,6 +62,8 @@ compile_cmd := if os.user_os() == 'macos' {
'v -enable-globals -g -w -n -prod hero.v'
} else {
'v -n -g -w -cg -gc none -cc tcc -d use_openssl -enable-globals hero.v'
// 'v -n -g -w -cg -gc none -cc tcc -d use_openssl -enable-globals hero.v'
// 'v -cg -enable-globals -parallel-cc -w -n -d use_openssl hero.v'
}
} else {
if prod_mode {

View File

@@ -53,7 +53,7 @@ fn do() ! {
mut cmd := Command{
name: 'hero'
description: 'Your HERO toolset.'
version: '1.0.36'
version: '1.0.38'
}
mut toinstall := false
@@ -103,4 +103,4 @@ fn main() {
print_backtrace()
exit(1)
}
}
}

View File

@@ -40,4 +40,3 @@ RUN /tmp/install_herolib.vsh && \
ENTRYPOINT ["/bin/bash"]
CMD ["/bin/bash"]

17
examples/ai/aiclient_embed.vsh Executable file
View File

@@ -0,0 +1,17 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.ai.client
mut cl := client.new()!
// response := cl.llms.llm_local.chat_completion(
// message: 'Explain quantum computing in simple terms'
// temperature: 0.5
// max_completion_tokens: 1024
// )!
response := cl.llms.llm_embed.chat_completion(
message: 'Explain quantum computing in simple terms'
)!
println(response)

View File

@@ -1,12 +1,12 @@
#!/usr/bin/env hero
!!atlas.scan
!!doctree.scan
git_url: 'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/mycelium_economics'
!!atlas.scan
!!doctree.scan
git_url: 'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/authentic_web'
// !!atlas.scan
// !!doctree.scan
// git_url: 'https://git.ourworld.tf/geomind/docs_geomind/src/branch/main/collections/usecases'
!!atlas.export destination: '/tmp/atlas_export'
!!doctree.export destination: '/tmp/doctree_export'

View File

@@ -1,15 +1,15 @@
#!/usr/bin/env hero
!!atlas.scan
git_url: 'https://git.ourworld.tf/geomind/atlas_geomind/src/branch/main/content'
meta_path: '/tmp/atlas_export_meta'
!!doctree.scan
git_url: 'https://git.ourworld.tf/geomind/doctree_geomind/src/branch/main/content'
meta_path: '/tmp/doctree_export_meta'
!!atlas.scan
git_url: 'https://git.ourworld.tf/tfgrid/atlas_threefold/src/branch/main/content'
meta_path: '/tmp/atlas_export_meta'
!!doctree.scan
git_url: 'https://git.ourworld.tf/tfgrid/doctree_threefold/src/branch/main/content'
meta_path: '/tmp/doctree_export_meta'
ignore3: 'static,templates,groups'
!!atlas.export
destination: '/tmp/atlas_export_test'
!!doctree.export
destination: '/tmp/doctree_export_test'
include: true
redis: true

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env hero
!!atlas.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/tests"
!!doctree.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/tests"
!!atlas.export destination: '/tmp/atlas_export'
!!doctree.export destination: '/tmp/doctree_export'

View File

@@ -0,0 +1,308 @@
#!/usr/bin/env -S vrun
import incubaid.herolib.data.doctree
import incubaid.herolib.ui.console
import os
fn main() {
println('=== DOCTREE DEBUG SCRIPT ===\n')
// Create and scan doctree
mut a := doctree.new(name: 'main')!
// Scan the collections
println('Scanning collections...\n')
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/mycelium_nodes_tiers'
)!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/geomind_compare'
)!
a.scan(path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/geoaware')!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_economics'
)!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_concepts'
)!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_cloud_tech'
)!
// Initialize doctree (post-scanning validation)
a.init_post()!
// Print all pages per collection
println('\n=== COLLECTIONS & PAGES ===\n')
for col_name, col in a.collections {
println('Collection: ${col_name}')
println(' Pages (${col.pages.len}):')
if col.pages.len > 0 {
for page_name, _ in col.pages {
println(' - ${page_name}')
}
} else {
println(' (empty)')
}
println(' Files/Images (${col.files.len}):')
if col.files.len > 0 {
for file_name, _ in col.files {
println(' - ${file_name}')
}
} else {
println(' (empty)')
}
}
// Validate links (this will recursively find links across collections)
println('\n=== VALIDATING LINKS (RECURSIVE) ===\n')
a.validate_links()!
println(' Link validation complete\n')
// Check for broken links
println('\n=== BROKEN LINKS ===\n')
mut total_errors := 0
for col_name, col in a.collections {
if col.has_errors() {
println('Collection: ${col_name} (${col.errors.len} errors)')
for err in col.errors {
println(' [${err.category_str()}] Page: ${err.page_key}')
println(' Message: ${err.message}')
println('')
total_errors++
}
}
}
if total_errors == 0 {
println(' No broken links found!')
} else {
println('\n Total broken link errors: ${total_errors}')
}
// Show discovered links per page (validates recursive discovery)
println('\n\n=== DISCOVERED LINKS (RECURSIVE RESOLUTION) ===\n')
println('Checking for files referenced by cross-collection pages...\n')
mut total_links := 0
for col_name, col in a.collections {
mut col_has_links := false
for page_name, page in col.pages {
if page.links.len > 0 {
if !col_has_links {
println('Collection: ${col_name}')
col_has_links = true
}
println(' Page: ${page_name} (${page.links.len} links)')
for link in page.links {
target_col := if link.target_collection_name != '' {
link.target_collection_name
} else {
col_name
}
println(' ${target_col}:${link.target_item_name} [${link.file_type}]')
total_links++
}
}
}
}
println('\n Total links discovered: ${total_links}')
// List pages that need investigation
println('\n=== CHECKING SPECIFIC MISSING PAGES ===\n')
missing_pages := [
'compare_electricity',
'internet_basics',
'centralization_risk',
'gdp_negative',
]
// Check in geoaware collection
if 'geoaware' in a.collections {
mut geoaware := a.get_collection('geoaware')!
println('Collection: geoaware')
if geoaware.pages.len > 0 {
println(' All pages in collection:')
for page_name, _ in geoaware.pages {
println(' - ${page_name}')
}
} else {
println(' (No pages found)')
}
println('\n Checking for specific missing pages:')
for page_name in missing_pages {
exists := page_name in geoaware.pages
status := if exists { '✓' } else { '✗' }
println(' ${status} ${page_name}')
}
}
// Check for pages across all collections
println('\n\n=== LOOKING FOR MISSING PAGES ACROSS ALL COLLECTIONS ===\n')
for missing_page in missing_pages {
println('Searching for "${missing_page}":')
mut found := false
for col_name, col in a.collections {
if missing_page in col.pages {
println(' Found in: ${col_name}')
found = true
}
}
if !found {
println(' Not found in any collection')
}
}
// Check for the solution page
println('\n\n=== CHECKING FOR "solution" PAGE ===\n')
for col_name in ['mycelium_nodes_tiers', 'geomind_compare', 'geoaware', 'mycelium_economics',
'mycelium_concepts', 'mycelium_cloud_tech'] {
if col_name in a.collections {
mut col := a.get_collection(col_name)!
exists := col.page_exists('solution')!
status := if exists { '✓' } else { '✗' }
println('${status} ${col_name}: "solution" page')
}
}
// Print error summary
println('\n\n=== ERROR SUMMARY BY CATEGORY ===\n')
mut category_counts := map[string]int{}
for _, col in a.collections {
for err in col.errors {
cat_str := err.category_str()
category_counts[cat_str]++
}
}
if category_counts.len == 0 {
println(' No errors found!')
} else {
for cat, count in category_counts {
println('${cat}: ${count}')
}
}
// ===== EXPORT AND FILE VERIFICATION TEST =====
println('\n\n=== EXPORT AND FILE VERIFICATION TEST ===\n')
// Create export directory
export_path := '/tmp/doctree_debug_export'
if os.exists(export_path) {
os.rmdir_all(export_path)!
}
os.mkdir_all(export_path)!
println('Exporting to: ${export_path}\n')
a.export(destination: export_path)!
println('✓ Export completed\n')
// Collect all files found during link validation
mut expected_files := map[string]string{} // key: file_name, value: collection_name
mut file_count := 0
for col_name, col in a.collections {
for page_name, page in col.pages {
for link in page.links {
if link.status == .found && (link.file_type == .file || link.file_type == .image) {
file_key := link.target_item_name
expected_files[file_key] = link.target_collection_name
file_count++
}
}
}
}
println('Expected to find ${file_count} file references in links\n')
println('=== VERIFYING FILES IN EXPORT DIRECTORY ===\n')
// Get the first collection name (the primary exported collection)
mut primary_col_name := ''
for col_name, _ in a.collections {
primary_col_name = col_name
break
}
if primary_col_name == '' {
println(' No collections found')
} else {
mut verified_count := 0
mut missing_count := 0
mut found_files := map[string]bool{}
// Check both img and files directories
img_dir := '${export_path}/content/${primary_col_name}/img'
files_dir := '${export_path}/content/${primary_col_name}/files'
// Scan img directory
if os.exists(img_dir) {
img_files := os.ls(img_dir) or { []string{} }
for img_file in img_files {
found_files[img_file] = true
}
}
// Scan files directory
if os.exists(files_dir) {
file_list := os.ls(files_dir) or { []string{} }
for file in file_list {
found_files[file] = true
}
}
println('Files/Images found in export directory:')
if found_files.len > 0 {
for file_name, _ in found_files {
println(' ${file_name}')
if file_name in expected_files {
verified_count++
}
}
} else {
println(' (none found)')
}
println('\n=== FILE VERIFICATION RESULTS ===\n')
println('Expected files from links: ${file_count}')
println('Files found in export: ${found_files.len}')
println('Files verified (present in export): ${verified_count}')
// Check for missing expected files
for expected_file, source_col in expected_files {
if expected_file !in found_files {
missing_count++
println('✗ Missing: ${expected_file} (from ${source_col})')
}
}
if missing_count > 0 {
println('\n✗ ${missing_count} expected files are MISSING from export!')
} else if verified_count == file_count && file_count > 0 {
println('\n✓ All expected files are present in export directory!')
} else if file_count == 0 {
println('\n No file links were found during validation (check if pages have file references)')
}
// Show directory structure
println('\n=== EXPORT DIRECTORY STRUCTURE ===\n')
if os.exists('${export_path}/content/${primary_col_name}') {
println('${export_path}/content/${primary_col_name}/')
content_files := os.ls('${export_path}/content/${primary_col_name}') or { []string{} }
for item in content_files {
full_path := '${export_path}/content/${primary_col_name}/${item}'
if os.is_dir(full_path) {
sub_items := os.ls(full_path) or { []string{} }
println(' ${item}/ (${sub_items.len} items)')
for sub_item in sub_items {
println(' - ${sub_item}')
}
} else {
println(' - ${item}')
}
}
}
}
}

View File

@@ -1,18 +1,18 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.data.atlas
import incubaid.herolib.data.doctree
import incubaid.herolib.core.pathlib
import incubaid.herolib.web.atlas_client
import incubaid.herolib.web.doctree_client
import os
// Example: Atlas Export and AtlasClient Usage
// Example: DocTree Export and AtlasClient Usage
println('Atlas Export & Client Example')
println('DocTree Export & Client Example')
println('============================================================')
// Setup test directory
test_dir := '/tmp/atlas_example'
export_dir := '/tmp/atlas_export'
test_dir := '/tmp/doctree_example'
export_dir := '/tmp/doctree_export'
os.rmdir_all(test_dir) or {}
os.rmdir_all(export_dir) or {}
os.mkdir_all(test_dir)!
@@ -30,9 +30,9 @@ page1.write('# Introduction\n\nWelcome to the docs!')!
mut page2 := pathlib.get_file(path: '${col_path}/guide.md', create: true)!
page2.write('# Guide\n\n!!include docs:intro\n\nMore content here.')!
// Create and scan atlas
println('\n1. Creating Atlas and scanning...')
mut a := atlas.new(name: 'my_docs')!
// Create and scan doctree
println('\n1. Creating DocTree and scanning...')
mut a := doctree.new(name: 'my_docs')!
a.scan(path: test_dir)!
println(' Found ${a.collections.len} collection(s)')
@@ -60,7 +60,7 @@ println(' ✓ Export complete')
// Use AtlasClient to access exported content
println('\n4. Using AtlasClient to read exported content...')
mut client := atlas_client.new(export_dir: export_dir)!
mut client := doctree_client.new(export_dir: export_dir)!
// List collections
collections := client.list_collections()!

View File

@@ -0,0 +1,3 @@
export HETZNER_USER="#ws+JdQtGCdL"
export HETZNER_PASSWORD="Kds007kds!"
export HETZNER_SSHKEY_NAME="mahmoud"

View File

@@ -1,35 +1,34 @@
#!/usr/bin/env hero
// # Configure HetznerManager; replace with your own credentials, server IDs, SSH key name, and other parameters
// !!hetznermanager.configure
// name:"main"
// user:"krist"
// whitelist:"2111181, 2392178, 2545053, 2542166, 2550508, 2550378,2550253"
// password:"wontsethere"
// sshkey:"kristof"
!!hetznermanager.configure
user:"user_name"
whitelist:"server_id"
password:"password"
sshkey:"ssh_key_name"
// !!hetznermanager.server_rescue
// server_name: 'kristof21' // The name of the server to manage (or use `id`)
// wait: true // Wait for the operation to complete
// hero_install: true // Automatically install Herolib in the rescue system
!!hetznermanager.server_rescue
server_name: 'server_name' // The name of the server to manage (or use `id`)
wait: true // Wait for the operation to complete
hero_install: true // Automatically install Herolib in the rescue system
// # Reset a server
// !!hetznermanager.server_reset
// instance: 'main'
// server_name: 'your-server-name'
// wait: true
!!hetznermanager.server_reset
instance: 'main'
server_name: 'server_name'
wait: true
// # Add a new SSH key to your Hetzner account
// !!hetznermanager.key_create
// instance: 'main'
// key_name: 'my-laptop-key'
// data: 'ssh-rsa AAAA...'
!!hetznermanager.key_create
instance: 'main'
key_name: 'ssh_key_name'
data: 'ssh-rsa AAAA...'
// Install Ubuntu 24.04 on a server
!!hetznermanager.ubuntu_install
server_name: 'kristof2'
server_name: 'server_name'
wait: true
hero_install: true // Install Herolib on the new OS

View File

@@ -8,23 +8,33 @@ import time
import os
import incubaid.herolib.core.playcmds
name := 'kristof1'
// Server-specific configuration
const server_name = 'kristof1'
const server_whitelist = '2521602'
user := os.environ()['HETZNER_USER'] or {
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
passwd := os.environ()['HETZNER_PASSWORD'] or {
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hs := '
!!hetznermanager.configure
user:"${user}"
whitelist:"2521602,2555487,2573047"
password:"${passwd}"
sshkey:"kristof"
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
println(hs)
@@ -42,7 +52,7 @@ mut cl := hetznermanager.get()!
println(cl.servers_list()!)
mut serverinfo := cl.server_info_get(name: name)!
mut serverinfo := cl.server_info_get(name: server_name)!
println(serverinfo)
@@ -55,7 +65,7 @@ println(serverinfo)
// console.print_header('SSH login')
cl.ubuntu_install(name: name, wait: true, hero_install: true)!
cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!

View File

@@ -8,62 +8,47 @@ import time
import os
import incubaid.herolib.core.playcmds
name := 'kristof2'
// Server-specific configuration
const server_name = 'kristof2'
const server_whitelist = '2555487'
user := os.environ()['HETZNER_USER'] or {
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
passwd := os.environ()['HETZNER_PASSWORD'] or {
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hs := '
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hero_script := '
!!hetznermanager.configure
user:"${user}"
whitelist:"2521602,2555487"
password:"${passwd}"
sshkey:"kristof"
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
println(hs)
playcmds.run(heroscript: hero_script)!
mut hetznermanager_ := hetznermanager.get()!
playcmds.run(heroscript: hs)!
mut serverinfo := hetznermanager_.server_info_get(name: server_name)!
console.print_header('Hetzner Test.')
println('${server_name} ${serverinfo.server_ip}')
mut cl := hetznermanager.get()!
// println(cl)
hetznermanager_.server_rescue(name: server_name, wait: true, hero_install: true)!
mut keys := hetznermanager_.keys_get()!
// for i in 0 .. 5 {
// println('test cache, first time slow then fast')
// }
println(cl.servers_list()!)
mut serverinfo := cl.server_info_get(name: name)!
println(serverinfo)
// cl.server_reset(name: 'kristof2', wait: true)!
cl.server_rescue(name: name, wait: true, hero_install: true)!
mut ks := cl.keys_get()!
println(ks)
console.print_header('SSH login')
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
// this will put hero in debug mode on the system
// n.hero_install(compile: true)!
hetznermanager_.ubuntu_install(name: server_name, wait: true, hero_install: true)!
n.shell('')!
cl.ubuntu_install(name: name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
// cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!

View File

@@ -8,23 +8,33 @@ import time
import os
import incubaid.herolib.core.playcmds
name := 'kristof3'
// Server-specific configuration
const server_name = 'kristof3'
const server_whitelist = '2573047'
user := os.environ()['HETZNER_USER'] or {
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
passwd := os.environ()['HETZNER_PASSWORD'] or {
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hs := '
!!hetznermanager.configure
user:"${user}"
whitelist:"2521602,2555487,2573047"
password:"${passwd}"
sshkey:"kristof"
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
println(hs)
@@ -42,7 +52,7 @@ mut cl := hetznermanager.get()!
println(cl.servers_list()!)
mut serverinfo := cl.server_info_get(name: name)!
mut serverinfo := cl.server_info_get(name: server_name)!
println(serverinfo)
@@ -55,7 +65,7 @@ println(serverinfo)
// console.print_header('SSH login')
cl.ubuntu_install(name: name, wait: true, hero_install: true)!
cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!

View File

@@ -8,23 +8,33 @@ import time
import os
import incubaid.herolib.core.playcmds
name := 'test1'
// Server-specific configuration
const server_name = 'test1'
const server_whitelist = '2575034'
user := os.environ()['HETZNER_USER'] or {
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
passwd := os.environ()['HETZNER_PASSWORD'] or {
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hs := '
!!hetznermanager.configure
user:"${user}"
whitelist:"2575034"
password:"${passwd}"
sshkey:"kristof"
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
println(hs)
@@ -42,7 +52,7 @@ mut cl := hetznermanager.get()!
println(cl.servers_list()!)
mut serverinfo := cl.server_info_get(name: name)!
mut serverinfo := cl.server_info_get(name: server_name)!
println(serverinfo)
@@ -55,7 +65,7 @@ println(serverinfo)
// console.print_header('SSH login')
cl.ubuntu_install(name: name, wait: true, hero_install: true)!
cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!

View File

@@ -1,20 +1,31 @@
# Hetzner Examples
## Quick Start
### 1. Configure Environment Variables
Copy `hetzner_env.sh` and fill in your credentials:
```bash
export HETZNER_USER="your-robot-username"     # Hetzner Robot API username
export HETZNER_PASSWORD="your-password"       # Hetzner Robot API password
export HETZNER_SSHKEY_NAME="my-key"           # Name of SSH key registered in Hetzner
```
Each script has its own server name and whitelist ID defined at the top.
### 2. Run a Script
```bash
source hetzner_env.sh
./hetzner_kristof2.vsh
```
## SSH Keys
The `HETZNER_SSHKEY_NAME` must be the **name** of an SSH key already registered in your Hetzner Robot account.
Available keys in our Hetzner account:
- hossnys (RSA 2048)
- Jan De Landtsheer (ED25519 256)
- kristof (ED25519 256)
- maxime (ED25519 256)
To add a new key, use `key_create` in your script or the Hetzner Robot web interface.
## Alternative: Using hero_secrets
You can also use the shared secrets repository:
```bash
hero git pull https://git.threefold.info/despiegk/hero_secrets
source ~/code/git.ourworld.tf/despiegk/hero_secrets/mysecrets.sh
```
## Troubleshooting
### Get Robot API credentials
Get your login credentials from: https://robot.hetzner.com/preferences/index
### Test API access
```bash
curl -u "your-username:your-password" https://robot-ws.your-server.de/server
```
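For orientation, each per-script setup described above boils down to the same small skeleton. A minimal sketch, assuming placeholder values for the server name and whitelist ID; the calls mirror the `hetzner_*.vsh` scripts in this changeset:
```v
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.core.playcmds
import os

// Server-specific configuration: each script pins its own server and whitelist.
const server_name = 'my-server' // placeholder
const server_whitelist = '1234567' // placeholder server id

// Credentials come from the environment; source hetzner_env.sh first.
hetzner_user := os.environ()['HETZNER_USER'] or {
	println('HETZNER_USER not set')
	exit(1)
}
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
	println('HETZNER_PASSWORD not set')
	exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
	println('HETZNER_SSHKEY_NAME not set')
	exit(1)
}

// Configure the manager via HeroScript, then drive it from V.
playcmds.run(heroscript: '
!!hetznermanager.configure
    user:"${hetzner_user}"
    whitelist:"${server_whitelist}"
    password:"${hetzner_passwd}"
    sshkey:"${hetzner_sshkey_name}"
')!

mut cl := hetznermanager.get()!
println(cl.server_info_get(name: server_name)!)
```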

View File

@@ -0,0 +1,208 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.web.doctree.meta
import incubaid.herolib.core.playbook
import incubaid.herolib.ui.console
// Comprehensive HeroScript for testing multi-level navigation depths
const test_heroscript_nav_depth = '
!!site.config
name: "nav_depth_test"
title: "Navigation Depth Test Site"
description: "Testing multi-level nested navigation"
tagline: "Deep navigation structures"
!!site.navbar
title: "Nav Depth Test"
!!site.navbar_item
label: "Home"
to: "/"
position: "left"
// ============================================================
// LEVEL 1: Simple top-level category
// ============================================================
!!site.page_category
path: "Why"
collapsible: true
collapsed: false
// Collection will be repeated; this has no influence on navigation levels
!!site.page src: "mycollection:intro"
label: "Why Choose Us"
title: "Why Choose Us"
description: "Reasons to use this platform"
!!site.page src: "benefits"
label: "Key Benefits"
title: "Key Benefits"
description: "Main benefits overview"
// ============================================================
// LEVEL 1: Simple top-level category
// ============================================================
!!site.page_category
path: "Tutorials"
collapsible: true
collapsed: false
!!site.page src: "getting_started"
label: "Getting Started"
title: "Getting Started"
description: "Basic tutorial to get started"
!!site.page src: "first_steps"
label: "First Steps"
title: "First Steps"
description: "Your first steps with the platform"
// ============================================================
// LEVEL 3: Three-level nested category (Tutorials > Operations > Urgent)
// ============================================================
!!site.page_category
path: "Tutorials/Operations/Urgent"
collapsible: true
collapsed: false
!!site.page src: "emergency_restart"
label: "Emergency Restart"
title: "Emergency Restart"
description: "How to emergency restart the system"
!!site.page src: "critical_fixes"
label: "Critical Fixes"
title: "Critical Fixes"
description: "Apply critical fixes immediately"
!!site.page src: "incident_response"
label: "Incident Response"
title: "Incident Response"
description: "Handle incidents in real-time"
// ============================================================
// LEVEL 2: Two-level nested category (Tutorials > Operations)
// ============================================================
!!site.page_category
path: "Tutorials/Operations"
collapsible: true
collapsed: false
!!site.page src: "daily_checks"
label: "Daily Checks"
title: "Daily Checks"
description: "Daily maintenance checklist"
!!site.page src: "monitoring"
label: "Monitoring"
title: "Monitoring"
description: "System monitoring procedures"
!!site.page src: "backups"
label: "Backups"
title: "Backups"
description: "Backup and restore procedures"
// ============================================================
// LEVEL 1: One-to-two level (Tutorials)
// ============================================================
// Note: This creates a sibling at the Tutorials level (not nested deeper)
!!site.page src: "advanced_concepts"
label: "Advanced Concepts"
title: "Advanced Concepts"
description: "Deep dive into advanced concepts"
!!site.page src: "troubleshooting"
label: "Troubleshooting"
title: "Troubleshooting"
description: "Troubleshooting guide"
// ============================================================
// LEVEL 2: Two-level nested category (Why > FAQ)
// ============================================================
!!site.page_category
path: "Why/FAQ"
collapsible: true
collapsed: false
!!site.page src: "general"
label: "General Questions"
title: "General Questions"
description: "Frequently asked questions"
!!site.page src: "pricing_questions"
label: "Pricing"
title: "Pricing Questions"
description: "Questions about pricing"
!!site.page src: "technical_faq"
label: "Technical FAQ"
title: "Technical FAQ"
description: "Technical frequently asked questions"
!!site.page src: "support_faq"
label: "Support"
title: "Support FAQ"
description: "Support-related FAQ"
// ============================================================
// LEVEL 4: Four-level nested category (Tutorials > Operations > Database > Optimization)
// ============================================================
!!site.page_category
path: "Tutorials/Operations/Database/Optimization"
collapsible: true
collapsed: false
!!site.page src: "query_optimization"
label: "Query Optimization"
title: "Query Optimization"
description: "Optimize your database queries"
!!site.page src: "indexing_strategy"
label: "Indexing Strategy"
title: "Indexing Strategy"
description: "Effective indexing strategies"
!!site.page_category
path: "Tutorials/Operations/Database"
collapsible: true
collapsed: false
!!site.page src: "configuration"
label: "Configuration"
title: "Database Configuration"
description: "Configure your database"
!!site.page src: "replication"
label: "Replication"
title: "Database Replication"
description: "Set up database replication"
'
fn check(s2 meta.Site) {
// assert s == s2
}
// ========================================================
// SETUP: Create and process playbook
// ========================================================
console.print_item('Creating playbook from HeroScript')
mut plbook := playbook.new(text: test_heroscript_nav_depth)!
console.print_green('✓ Playbook created')
console.lf()
console.print_item('Processing site configuration')
meta.play(mut plbook)!
console.print_green('✓ Site processed')
console.lf()
console.print_item('Retrieving configured site')
mut nav_site := meta.get(name: 'nav_depth_test')!
console.print_green('✓ Site retrieved')
console.lf()
// check(nav_site)

examples/web/site/USAGE.md (Normal file, 201 lines)
View File

@@ -0,0 +1,201 @@
# Site Module Usage Guide
## Quick Examples
### 1. Run Basic Example
```bash
cd examples/web/site
vrun process_site.vsh ./
```
Expected output:
```
=== Site Configuration Processor ===
Processing HeroScript files from: ./
Found 1 HeroScript file(s):
- basic.heroscript
Processing: basic.heroscript
=== Configuration Complete ===
Site: simple_docs
Title: Simple Documentation
Pages: 4
Description: A basic documentation site
Navigation structure:
- [Page] Getting Started
- [Page] Installation
- [Page] Usage Guide
- [Page] FAQ
✓ Site configuration ready for deployment
```
### 2. Run Multi-Section Example
```bash
vrun process_site.vsh ./
# Edit process_site.vsh to use multi_section.heroscript instead
```
### 3. Process Custom Directory
```bash
vrun process_site.vsh /path/to/your/site/config
```
## File Structure
```
docs/
├── 0_config.heroscript # Basic config
├── 1_menu.heroscript # Navigation
├── 2_pages.heroscript # Pages and categories
└── process.vsh # Your processing script
```
## Creating Your Own Site
1. **Create a config directory:**
```bash
mkdir my_site
cd my_site
```
2. **Create config file (0_config.heroscript):**
```heroscript
!!site.config
name: "my_site"
title: "My Site"
```
3. **Create pages file (1_pages.heroscript):**
```heroscript
!!site.page src: "docs:intro"
title: "Getting Started"
```
4. **Process with script:**
```bash
vrun ../process_site.vsh ./
```
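If you'd rather drive this from V directly, the processing script reduces to a few calls. A trimmed sketch using the same `playbook` and `site` APIs as `process_site.vsh`; the path and site name are placeholders:
```v
#!/usr/bin/env -S v -n -w -gc none -cg -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.core.playbook
import incubaid.herolib.web.site

// Load one HeroScript file and apply its site.* actions.
mut plbook := playbook.new(path: 'my_site/0_config.heroscript')!
site.play(mut plbook)!

// Retrieve the configured site and inspect it.
mut s := site.get(name: 'my_site')!
println('configured "${s.siteconfig.title}" with ${s.pages.len} page(s)')
```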
## Common Workflows
### Workflow 1: Documentation Site
```
docs/
├── 0_config.heroscript
│ └── Basic config + metadata
├── 1_menu.heroscript
│ └── Navbar + footer
├── 2_getting_started.heroscript
│ └── Getting started pages
├── 3_api.heroscript
│ └── API reference pages
└── 4_advanced.heroscript
└── Advanced topic pages
```
### Workflow 2: Internal Knowledge Base
```
kb/
├── 0_config.heroscript
├── 1_navigation.heroscript
└── 2_articles.heroscript
```
### Workflow 3: Product Documentation with Imports
```
product_docs/
├── 0_config.heroscript
├── 1_imports.heroscript
│ └── Import shared templates
├── 2_menu.heroscript
└── 3_pages.heroscript
```
## Tips & Tricks
### Tip 1: Reuse Collections
```heroscript
# Specify once, reuse multiple times
!!site.page src: "guides:intro"
!!site.page src: "setup" # Reuses "guides"
!!site.page src: "deployment" # Still "guides"
# Switch to new collection
!!site.page src: "api:reference"
!!site.page src: "examples" # Now "api"
```
### Tip 2: Auto-Increment Categories
```heroscript
# Automatically positioned at 100, 200, 300...
!!site.page_category name: "basics"
!!site.page_category name: "advanced"
!!site.page_category name: "expert"
# Or specify explicit positions
!!site.page_category name: "basics" position: 10
!!site.page_category name: "advanced" position: 20
```
### Tip 3: Title Extraction
Let titles come from markdown files:
```heroscript
# Don't specify title
!!site.page src: "docs:introduction"
# Title will be extracted from # Heading in introduction.md
```
### Tip 4: Draft Pages
Hide pages while working on them:
```heroscript
!!site.page src: "docs:work_in_progress"
draft: true
title: "Work in Progress"
```
## Debugging
### Debug: Check What Got Configured
```v
mut s := site.get(name: 'my_site')!
println(s.pages) // All pages
println(s.nav) // Navigation structure
println(s.siteconfig) // Configuration
```
### Debug: List All Sites
```v
sites := site.list()
for site_name in sites {
println('Site: ${site_name}')
}
```
### Debug: Enable Verbose Output
Add `console.print_debug()` calls in your HeroScript processing.
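For example, a hypothetical trace point in your own processing loop:
```v
import incubaid.herolib.ui.console

// Hypothetical: trace each file before it is processed.
files := ['0_config.heroscript', '1_menu.heroscript']
for file in files {
	console.print_debug('processing heroscript file: ${file}')
}
```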
## Next Steps
- Customize `process_site.vsh` for your needs
- Add your existing pages (in markdown)
- Export to Docusaurus
- Deploy to production
For more info, see the main [Site Module README](./readme.md).

View File

@@ -0,0 +1,53 @@
#!/usr/bin/env hero
# Basic single-section documentation site
!!site.config
name: "simple_docs"
title: "Simple Documentation"
description: "A basic documentation site"
copyright: "© 2024 Example"
url: "https://docs.example.com"
base_url: "/"
!!site.navbar
title: "Simple Docs"
logo_src: "img/logo.png"
!!site.navbar_item
label: "Docs"
to: "/"
position: "left"
!!site.navbar_item
label: "GitHub"
href: "https://github.com/example/repo"
position: "right"
!!site.footer
style: "dark"
!!site.footer_item
title: "Documentation"
label: "Getting Started"
to: "getting-started"
!!site.footer_item
title: "Community"
label: "Discord"
href: "https://discord.gg/example"
!!site.page src: "docs:introduction"
title: "Getting Started"
description: "Learn the basics"
!!site.page src: "installation"
title: "Installation"
description: "How to install"
!!site.page src: "usage"
title: "Usage Guide"
description: "How to use the system"
!!site.page src: "faq"
title: "FAQ"
description: "Frequently asked questions"

View File

@@ -0,0 +1,155 @@
#!/usr/bin/env hero
# Multi-section documentation with categories
!!site.config
name: "multi_docs"
title: "Complete Documentation"
description: "Comprehensive documentation with multiple sections"
tagline: "Everything you need to know"
copyright: "© 2024 Tech Company"
url: "https://docs.techcompany.com"
base_url: "/docs"
!!site.navbar
title: "Tech Documentation"
logo_src: "img/logo.svg"
!!site.navbar_item
label: "Documentation"
to: "/"
position: "left"
!!site.navbar_item
label: "API"
to: "api"
position: "left"
!!site.navbar_item
label: "GitHub"
href: "https://github.com/techcompany"
position: "right"
!!site.footer
style: "dark"
!!site.footer_item
title: "Guides"
label: "Getting Started"
to: "getting-started"
!!site.footer_item
title: "Guides"
label: "Installation"
to: "installation"
!!site.footer_item
title: "Company"
label: "Website"
href: "https://techcompany.com"
!!site.footer_item
title: "Legal"
label: "Privacy"
href: "https://techcompany.com/privacy"
# ==================================================
# Getting Started Section
# ==================================================
!!site.page_category
name: "getting_started"
label: "Getting Started"
position: 100
!!site.page src: "docs:introduction"
title: "Introduction"
description: "What is this project?"
!!site.page src: "installation"
title: "Installation"
description: "Get up and running"
!!site.page src: "quickstart"
title: "Quick Start"
description: "Your first steps"
# ==================================================
# Core Concepts Section
# ==================================================
!!site.page_category
name: "concepts"
label: "Core Concepts"
position: 200
!!site.page src: "concepts:architecture"
title: "Architecture"
description: "System design and architecture"
!!site.page src: "components"
title: "Components"
description: "Main system components"
!!site.page src: "data_flow"
title: "Data Flow"
description: "How data flows through the system"
!!site.page src: "security"
title: "Security"
description: "Security considerations"
# ==================================================
# Advanced Topics Section
# ==================================================
!!site.page_category
name: "advanced"
label: "Advanced Topics"
position: 300
!!site.page src: "advanced:performance"
title: "Performance Tuning"
description: "Optimize your system"
!!site.page src: "scaling"
title: "Scaling"
description: "Scale to millions of users"
!!site.page src: "deployment"
title: "Deployment"
description: "Deploy to production"
# ==================================================
# API Reference Section
# ==================================================
!!site.page_category
name: "api"
label: "API Reference"
position: 400
!!site.page src: "api:overview"
title: "API Overview"
description: "API capabilities and base URLs"
!!site.page src: "rest_api"
title: "REST API"
description: "Complete REST API documentation"
!!site.page src: "graphql_api"
title: "GraphQL"
description: "GraphQL API documentation"
!!site.page src: "webhooks"
title: "Webhooks"
description: "Implement webhooks in your app"
# ==================================================
# Publishing
# ==================================================
!!site.publish
path: "/var/www/html/docs"
!!site.publish_dev
path: "/tmp/docs-preview"

View File

@@ -0,0 +1,116 @@
#!/usr/bin/env -S v -n -w -gc none -cg -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.core.playbook
import incubaid.herolib.web.site
import incubaid.herolib.ui.console
import os
// Process a site configuration from HeroScript files
println(console.color_fg(.green) + '=== Site Configuration Processor ===' + console.reset())
// Get directory from command line or use default
mut config_dir := './docs'
if os.args.len > 1 {
config_dir = os.args[1]
}
if !os.exists(config_dir) {
console.print_stderr('Error: Directory not found: ${config_dir}')
exit(1)
}
console.print_item('Processing HeroScript files from: ${config_dir}')
// Find all heroscript files
mut heroscript_files := []string{}
entries := os.ls(config_dir) or {
console.print_stderr('Error reading directory: ${err}')
exit(1)
}
for entry in entries {
if entry.ends_with('.heroscript') {
heroscript_files << entry
}
}
// Sort files (to ensure numeric prefix order)
heroscript_files.sort()
if heroscript_files.len == 0 {
console.print_stderr('No .heroscript files found in ${config_dir}')
exit(1)
}
console.print_item('Found ${heroscript_files.len} HeroScript file(s):')
for file in heroscript_files {
console.print_item(' - ${file}')
}
// Process each file
mut site_names := []string{}
for file in heroscript_files {
full_path := os.join_path(config_dir, file)
console.print_lf(1)
console.print_header('Processing: ${file}')
mut plbook := playbook.new(path: full_path) or {
console.print_stderr('Error loading ${file}: ${err}')
continue
}
site.play(mut plbook) or {
console.print_stderr('Error processing ${file}: ${err}')
continue
}
}
// Get all configured sites
site_names = site.list()
if site_names.len == 0 {
console.print_stderr('No sites were configured')
exit(1)
}
console.print_lf(2)
console.print_green('=== Configuration Complete ===')
// Display configured sites
for site_name in site_names {
mut configured_site := site.get(name: site_name) or { continue }
console.print_header('Site: ${site_name}')
console.print_item('Title: ${configured_site.siteconfig.title}')
console.print_item('Pages: ${configured_site.pages.len}')
console.print_item('Description: ${configured_site.siteconfig.description}')
// Show pages organized by category
if configured_site.nav.my_sidebar.len > 0 {
console.print_item('Navigation structure:')
for nav_item in configured_site.nav.my_sidebar {
match nav_item {
site.NavDoc {
console.print_item(' - [Page] ${nav_item.label}')
}
site.NavCat {
console.print_item(' - [Category] ${nav_item.label}')
for sub_item in nav_item.items {
match sub_item {
site.NavDoc {
console.print_item(' - ${sub_item.label}')
}
else {}
}
}
}
else {}
}
}
}
console.print_lf(1)
}
println(console.color_fg(.green) + '✓ Site configuration ready for deployment' + console.reset())

View File

@@ -13,7 +13,7 @@ import incubaid.herolib.installers.lang.python
import os
fn startupcmd() ![]startupmanager.ZProcessNewArgs {
mut installer := get()!
_ := get()!
mut res := []startupmanager.ZProcessNewArgs{}
// THIS IS EXAMPLE CODE AND NEEDS TO BE CHANGED
// res << startupmanager.ZProcessNewArgs{
@@ -28,7 +28,7 @@ fn startupcmd() ![]startupmanager.ZProcessNewArgs {
}
fn running() !bool {
mut installer := get()!
_ := get()!
// THIS IS EXAMPLE CODE AND NEEDS TO BE CHANGED
// this checks health of erpnext
// curl http://localhost:3333/api/v1/s --oauth2-bearer 1234 works

View File

@@ -16,7 +16,7 @@ pub mut:
pub fn (b BizModel) export(args ExportArgs) ! {
name := if args.name != '' { args.name } else { texttools.snake_case(args.title) }
path := pathlib.get_dir(
pathlib.get_dir(
path: os.join_path(os.home_dir(), 'hero/var/bizmodel/exports/${name}')
create: true
empty: true
@@ -52,7 +52,7 @@ pub fn (model BizModel) write_operational_plan(args ExportArgs) ! {
mut hr_page := pathlib.get_file(path: '${hr_dir.path}/human_resources.md')!
hr_page.template_write($tmpl('./templates/human_resources.md'), true)!
for key, employee in model.employees {
for _, employee in model.employees {
mut employee_page := pathlib.get_file(
path: '${hr_dir.path}/${texttools.snake_case(employee.name)}.md'
)!
@@ -73,7 +73,7 @@ pub fn (model BizModel) write_operational_plan(args ExportArgs) ! {
}
}
for key, department in model.departments {
for _, department in model.departments {
dept := department
mut dept_page := pathlib.get_file(
path: '${depts_dir.path}/${texttools.snake_case(department.name)}.md'
@@ -94,7 +94,7 @@ pub fn (model BizModel) write_revenue_model(args ExportArgs) ! {
products_page.template_write('# Products', true)!
name1 := 'example'
for key, product in model.products {
for _, product in model.products {
mut product_page := pathlib.get_file(
path: '${products_dir.path}/${texttools.snake_case(product.name)}.md'
)!

View File

@@ -7,7 +7,7 @@ import incubaid.herolib.core.pathlib
pub struct ExportCSVArgs {
pub mut:
path string
include_empty bool = false // whether to include empty cells or not
include_empty bool // whether to include empty cells or not
separator string = '|' // separator character for CSV
}

View File

@@ -22,7 +22,7 @@ pub fn play(mut plbook PlayBook) ! {
})
// play actions for each biz in plbook
for biz, actions in actions_by_biz {
for biz, _ in actions_by_biz {
mut model := getset(biz)!
model.play(mut plbook)!
}

View File

@@ -8,7 +8,7 @@ import incubaid.herolib.core.playbook { Action }
// title:'Engineering Division'
// avg_monthly_cost:'6000USD' avg_indexation:'5%'
fn (mut m BizModel) department_define_action(action Action) !Action {
bizname := action.params.get_default('bizname', '')!
_ := action.params.get_default('bizname', '')!
mut name := action.params.get('name') or { return error('department name is required') }
mut descr := action.params.get_default('descr', '')!
if descr.len == 0 {

View File

@@ -74,7 +74,7 @@ fn (mut m BizModel) employee_define_action(action Action) !Action {
mut curcost := -costpeople_row.cells[x].val
mut curpeople := nrpeople_row.cells[x].val
mut currev := revtotal.cells[x].val
// println("currev: ${currev}, curcost: ${curcost}, curpeople: ${curpeople}, costpercent_revenue: ${cost_percent_revenue}")
println("currev: ${currev}, curcost: ${curcost}, curpeople: ${curpeople}, costpercent_revenue: ${cost_percent_revenue}")
if currev * cost_percent_revenue > curcost {
costpeople_row.cells[x].val = -currev * cost_percent_revenue
nrpeople_row.cells[x].val = f64(currev * cost_percent_revenue / costperson_default.usd())

View File

@@ -10,7 +10,7 @@ fn (mut sim BizModel) pl_total() ! {
// sheet.pprint(nr_columns: 10)!
mut pl_total := sheet.group2row(
_ := sheet.group2row(
name: 'pl_summary'
include: ['pl']
tags: 'summary'

View File

@@ -77,7 +77,7 @@ fn (mut m BizModel) revenue_action(action Action) !Action {
product.has_revenue = true
}
mut margin := revenue.action(
_ := revenue.action(
name: '${r.name}_margin'
descr: 'Margin for ${r.name}'
action: .substract

View File

@@ -6,7 +6,7 @@ import incubaid.herolib.core.texttools
// see lib/biz/bizmodel/docs/revenue.md
fn (mut m BizModel) revenue_item_action(action Action) !Action {
mut r := get_action_descr(action)!
mut product := m.products[r.name]
mut product := m.products[r.name] or { return error('Product "${r.name}" not found for revenue item action') }
mut nr_sold := m.sheet.row_new(
name: '${r.name}_nr_sold'
@@ -193,7 +193,7 @@ fn (mut m BizModel) revenue_item_action(action Action) !Action {
tags: 'name:${r.name}'
)!
mut margin := margin_setup.action(
_ := margin_setup.action(
name: '${r.name}_margin'
descr: 'Margin for ${r.name}'
action: .add

View File

@@ -6,19 +6,19 @@ import incubaid.herolib.core.playbook
fn (mut sim BizModel) revenue_total() ! {
mut sheet := sim.sheet
mut revenue_total := sheet.group2row(
_ := sheet.group2row(
name: 'revenue_total'
include: ['rev']
tags: 'total revtotal pl'
descr: 'Revenue Total'
)!
mut cogs_total := sheet.group2row(
_ := sheet.group2row(
name: 'cogs_total'
include: ['cogs']
tags: 'total cogstotal pl'
descr: 'Cost of Goods Total.'
)!
mut margin_total := sheet.group2row(
_ := sheet.group2row(
name: 'margin_total'
include: ['margin']
tags: 'total margintotal'

View File

@@ -7,7 +7,7 @@ import incubaid.herolib.core.pathlib
pub struct ExportCSVArgs {
pub mut:
path string
include_empty bool = false // whether to include empty cells or not
include_empty bool // whether to include empty cells or not
separator string = '|' // separator character for CSV
}

View File

@@ -118,23 +118,23 @@ pub fn (s Sheet) data_get_as_string(args RowGetArgs) !string {
}
nryears := 5
err_pre := "Can't get data for sheet:${s.name} row:${args.rowname}.\n"
mut s2 := s
mut s2 := s
if args.period_type == .year {
s2 = s.toyear(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
s2 = *s.toyear(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
}
if args.period_type == .quarter {
s2 = s.toquarter(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
s2 = *s.toquarter(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
}
mut out := ''

View File

@@ -20,7 +20,7 @@ fn pad_right(s string, length int) string {
pub struct PPrintArgs {
pub mut:
group_months int = 1 // e.g. if 2 then will group by 2 months
nr_columns int = 0 // number of columns to show in the table, 0 is all
nr_columns int // number of columns to show in the table, 0 is all
description bool // show description in the table
aggrtype bool = true // show aggregate type in the table
tags bool = true // show tags in the table
@@ -151,7 +151,7 @@ pub fn (mut s Sheet) pprint(args PPrintArgs) ! {
}
max_cols := data_start_index + args.nr_columns
mut new_all_rows := [][]string{}
for i, row in all_rows {
for _, row in all_rows {
if row.len > max_cols {
new_all_rows << row[0..max_cols]
} else {

View File

@@ -67,7 +67,9 @@ pub fn (mut node Node) hero_install(args HeroInstallArgs) ! {
todo << 'bash /tmp/install_v.sh --herolib '
}
}
node.exec_interactive(todo.join('\n'))!
// Use exec instead of exec_interactive since user interaction is not needed
// exec_interactive uses shell mode which replaces the process and never returns
node.exec(cmd: todo.join('\n'), stdout: true)!
}
@[params]

View File

@@ -99,8 +99,11 @@ pub fn (mut executor ExecutorLocal) download(args SyncArgs) ! {
}
pub fn (mut executor ExecutorLocal) shell(cmd string) ! {
// Note: os.execvp replaces the current process and never returns.
// This is intentional - shell() is designed to hand over control to the shell.
// Do not put shell() before any other code that needs to execute.
if cmd.len > 0 {
os.execvp('/bin/bash', ["-c '${cmd}'"])!
os.execvp('/bin/bash', ['-c', cmd])!
} else {
os.execvp('/bin/bash', [])!
}

View File

@@ -235,11 +235,12 @@ pub fn (mut executor ExecutorSSH) info() map[string]string {
// forwarding ssh traffic to certain container
pub fn (mut executor ExecutorSSH) shell(cmd string) ! {
mut args := ['-o', 'StrictHostKeyChecking=no', '-o', 'UserKnownHostsFile=/dev/null',
'${executor.user}@${executor.ipaddr.addr}', '-p', '${executor.ipaddr.port}']
if cmd.len > 0 {
panic('TODO IMPLEMENT SHELL EXEC OVER SSH')
args << cmd
}
os.execvp('ssh', ['-o StrictHostKeyChecking=no', '${executor.user}@${executor.ipaddr.addr}',
'-p ${executor.ipaddr.port}'])!
os.execvp('ssh', args)!
}
pub fn (mut executor ExecutorSSH) list(path string) ![]string {
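The bug in both executors had the same shape: the command was baked into a single quoted argv element instead of being passed as its own element. A tiny illustration of the difference, assuming a POSIX system with bash available:
```v
import os

// Wrong: bash receives the single argument "-c 'ls -la'" and the
// embedded quotes become part of the option value.
// os.execvp('/bin/bash', ["-c 'ls -la'"])!

// Right: '-c' and the command are separate argv elements.
// Note execvp replaces the current process and never returns.
os.execvp('/bin/bash', ['-c', 'ls -la'])!
```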

View File

@@ -228,7 +228,7 @@ pub fn (mut client MeilisearchClient) similar_documents(uid string, args Similar
method: .post
data: json.encode(args)
}
res := client.enable_eperimental_feature(vector_store: true)! // Enable the feature first.
client.enable_eperimental_feature(vector_store: true)! // Enable the feature first.
mut http := client.httpclient()!
rsponse := http.post_json_str(req)!
println('rsponse: ${rsponse}')

View File

@@ -19,7 +19,7 @@ pub mut:
user string = 'root'
port int = 5432
host string = 'localhost'
password string = ''
password string
dbname string = 'postgres'
}
@@ -52,8 +52,7 @@ pub fn heroscript_dumps(obj PostgresqlClient) !string {
}
pub fn heroscript_loads(heroscript string) !PostgresqlClient {
mut obj := encoderhero.decode[PostgresqlClient](heroscript)!
return PostgresqlClient{
db_: pg.DB{}
}
mut client := encoderhero.decode[PostgresqlClient](heroscript)!
client.db_ = pg.DB{}
return client
}
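A quick round-trip sketch of why this fix matters: before, the decoded fields were discarded and an empty object returned; now they survive the load. Field values here are illustrative only:
```v
// Dump a configured client to HeroScript, then load it back.
mut client := PostgresqlClient{
	user:   'admin'
	dbname: 'mydb'
}
script := heroscript_dumps(client)!
restored := heroscript_loads(script)!
assert restored.user == 'admin' // previously lost, now preserved
assert restored.dbname == 'mydb'
```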

View File

@@ -114,5 +114,5 @@ fn (q QueryBuilder) build_query(args BuildQueryArgs) string {
fn type_to_map[T](t T) !map[string]json2.Any {
encoded_input := json2.encode(t)
return json2.raw_decode(encoded_input)!.as_map()
return json2.decode[json2.Any](encoded_input)!.as_map()
}

View File

@@ -217,7 +217,7 @@ fn cmd_git_execute(cmd Command) ! {
mut gs := gittools.new(coderoot: coderoot)!
// create the filter for doing group actions, or action on 1 repo
mut filter := ''
_ := ''
mut url := ''
mut path := ''

View File

@@ -164,7 +164,7 @@ pub fn plbook_run(cmd Command) !(&playbook.PlayBook, string) {
playbook.new(path: path)!
}
dagu := cmd.flags.get_bool('dagu') or { false }
_ := cmd.flags.get_bool('dagu') or { false }
playcmds.run(plbook: plbook)!

View File

@@ -3,7 +3,7 @@ module playcmds
import incubaid.herolib.core.playbook { PlayBook }
import incubaid.herolib.data.atlas
import incubaid.herolib.biz.bizmodel
import incubaid.herolib.threefold.incatokens
import incubaid.herolib.mycelium.incatokens
import incubaid.herolib.web.site
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.virt.heropods
@@ -20,6 +20,7 @@ import incubaid.herolib.installers.horus.herorunner
import incubaid.herolib.installers.horus.osirisrunner
import incubaid.herolib.installers.horus.salrunner
import incubaid.herolib.installers.virt.podman
import incubaid.herolib.installers.virt.kubernetes_installer
import incubaid.herolib.installers.infra.gitea
import incubaid.herolib.builder
@@ -80,6 +81,7 @@ pub fn run(args_ PlayArgs) ! {
herolib.play(mut plbook)!
vlang.play(mut plbook)!
podman.play(mut plbook)!
kubernetes_installer.play(mut plbook)!
gitea.play(mut plbook)!
giteaclient.play(mut plbook)!

View File

@@ -11,7 +11,7 @@ pub fn play_ssh(mut plbook PlayBook) ! {
}
// Get or create a single SSH agent instance
mut agent := sshagent.new_single(sshagent.SSHAgentNewArgs{})!
_ := sshagent.new_single(sshagent.SSHAgentNewArgs{})!
// TO IMPLEMENT:

View File

@@ -2,8 +2,8 @@ module playmacros
import incubaid.herolib.ui.console
import incubaid.herolib.core.playbook { Action, PlayBook }
import incubaid.herolib.threefold.grid4.gridsimulator
import incubaid.herolib.threefold.grid4.farmingsimulator
import incubaid.herolib.mycelium.grid4.gridsimulator
import incubaid.herolib.mycelium.grid4.farmingsimulator
import incubaid.herolib.biz.bizmodel
import incubaid.herolib.biz.spreadsheet

View File

@@ -23,7 +23,7 @@ pub fn escape_regex_chars(s string) string {
// This function does not add implicit ^ and $ anchors, allowing for substring matches.
fn wildcard_to_regex(wildcard_pattern string) string {
mut regex_pattern := ''
for i, r in wildcard_pattern.runes() {
for _, r in wildcard_pattern.runes() {
match r {
`*` {
regex_pattern += '.*'

View File

@@ -381,3 +381,48 @@ fn test_get_edit_url() {
// Assert the URLs are correct
// assert edit_url == 'https://github.com/test/repo/edit/main/test_page.md'
}
fn test_export_recursive_links() {
// Create 3 collections with chained links
col_a_path := '${test_base}/recursive_export/col_a'
col_b_path := '${test_base}/recursive_export/col_b'
col_c_path := '${test_base}/recursive_export/col_c'
os.mkdir_all(col_a_path)!
os.mkdir_all(col_b_path)!
os.mkdir_all(col_c_path)!
// Collection A
mut cfile_a := pathlib.get_file(path: '${col_a_path}/.collection', create: true)!
cfile_a.write('name:col_a')!
mut page_a := pathlib.get_file(path: '${col_a_path}/page_a.md', create: true)!
page_a.write('# Page A\n\n[Link to B](col_b:page_b)')!
// Collection B
mut cfile_b := pathlib.get_file(path: '${col_b_path}/.collection', create: true)!
cfile_b.write('name:col_b')!
mut page_b := pathlib.get_file(path: '${col_b_path}/page_b.md', create: true)!
page_b.write('# Page B\n\n[Link to C](col_c:page_c)')!
// Collection C
mut cfile_c := pathlib.get_file(path: '${col_c_path}/.collection', create: true)!
cfile_c.write('name:col_c')!
mut page_c := pathlib.get_file(path: '${col_c_path}/page_c.md', create: true)!
page_c.write('# Page C\n\nFinal content')!
// Export
mut a := new()!
a.add_collection(mut pathlib.get_dir(path: col_a_path)!)!
a.add_collection(mut pathlib.get_dir(path: col_b_path)!)!
a.add_collection(mut pathlib.get_dir(path: col_c_path)!)!
export_path := '${test_base}/export_recursive'
a.export(destination: export_path)!
// Verify all pages were exported
assert os.exists('${export_path}/content/col_a/page_a.md')
assert os.exists('${export_path}/content/col_a/page_b.md') // From Collection B
assert os.exists('${export_path}/content/col_a/page_c.md') // From Collection C
// TODO: test not complete
}

View File

@@ -18,7 +18,7 @@ AtlasClient provides methods to:
import incubaid.herolib.web.atlas_client
// Create client
mut client := atlas_client.new(export_dir: '/tmp/atlas_export')!
mut client := atlas_client.new(export_dir: '${os.home_dir()}/hero/var/atlas_export')!
// List collections
collections := client.list_collections()!

View File

@@ -44,6 +44,7 @@ pub mut:
}
// Export a single collection
// Export a single collection with recursive link processing
pub fn (mut c Collection) export(args CollectionExportArgs) ! {
// Create collection directory
mut col_dir := pathlib.get_dir(
@@ -66,11 +67,14 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
)!
json_file.write(meta)!
// Track cross-collection pages and files that need to be copied for self-contained export
mut cross_collection_pages := map[string]&Page{} // key: page.name, value: &Page
mut cross_collection_files := map[string]&File{} // key: file.name, value: &File
// Track all cross-collection pages and files that need to be exported
// Use maps with collection:name as key to track globally across all resolutions
mut cross_collection_pages := map[string]&Page{} // key: "collection:page_name"
mut cross_collection_files := map[string]&File{} // key: "collection:file_name"
mut processed_local_pages := map[string]bool{} // Track which local pages we've already processed
mut processed_cross_pages := map[string]bool{} // Track which cross-collection pages we've processed for links
// First pass: export all pages in this collection and collect cross-collection references
// First pass: export all pages in this collection and recursively collect ALL cross-collection references
for _, mut page in c.pages {
// Get content with includes processed and links transformed for export
content := page.content_with_fixed_links(
@@ -82,33 +86,11 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
mut dest_file := pathlib.get_file(path: '${col_dir.path}/${page.name}.md', create: true)!
dest_file.write(content)!
// Collect cross-collection references for copying (pages and files/images)
// IMPORTANT: Use cached links from validation (before transformation) to preserve collection info
for mut link in page.links {
if link.status != .found {
continue
}
// Recursively collect cross-collection references from this page
c.collect_cross_collection_references(mut page, mut cross_collection_pages, mut
cross_collection_files, mut processed_cross_pages)!
// Collect cross-collection page references
is_local := link.target_collection_name == c.name
if link.file_type == .page && !is_local {
mut target_page := link.target_page() or { continue }
// Use page name as key to avoid duplicates
if target_page.name !in cross_collection_pages {
cross_collection_pages[target_page.name] = target_page
}
}
// Collect cross-collection file/image references
if (link.file_type == .file || link.file_type == .image) && !is_local {
mut target_file := link.target_file() or { continue }
// Use file name as key to avoid duplicates
file_key := target_file.name
if file_key !in cross_collection_files {
cross_collection_files[file_key] = target_file
}
}
}
processed_local_pages[page.name] = true
// Redis operations...
if args.redis {
@@ -136,21 +118,48 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
src_file.copy(dest: dest_file.path)!
}
// Second pass: copy cross-collection referenced pages to make collection self-contained
for _, mut ref_page in cross_collection_pages {
// Get the referenced page content with includes processed
ref_content := ref_page.content_with_fixed_links(
include: args.include
cross_collection: true
export_mode: true
)!
// Second pass: copy all collected cross-collection pages and process their links recursively
// Keep iterating until no new cross-collection references are found
for {
mut found_new_references := false
// Write the referenced page to this collection's directory
mut dest_file := pathlib.get_file(path: '${col_dir.path}/${ref_page.name}.md', create: true)!
dest_file.write(ref_content)!
// Process all cross-collection pages we haven't processed yet
for page_key, mut ref_page in cross_collection_pages {
if page_key in processed_cross_pages {
continue // Already processed this page's links
}
// Mark as processed to avoid infinite loops
processed_cross_pages[page_key] = true
found_new_references = true
// Get the referenced page content with includes processed
ref_content := ref_page.content_with_fixed_links(
include: args.include
cross_collection: true
export_mode: true
)!
// Write the referenced page to this collection's directory
mut dest_file := pathlib.get_file(
path: '${col_dir.path}/${ref_page.name}.md'
create: true
)!
dest_file.write(ref_content)!
// CRITICAL: Recursively process links in this cross-collection page
// This ensures we get pages/files/images referenced by ref_page
c.collect_cross_collection_references(mut ref_page, mut cross_collection_pages, mut
cross_collection_files, mut processed_cross_pages)!
}
// If we didn't find any new references, we're done with the recursive pass
if !found_new_references {
break
}
}
// Third pass: copy cross-collection referenced files/images to make collection self-contained
// Third pass: copy ALL collected cross-collection referenced files/images
for _, mut ref_file in cross_collection_files {
mut src_file := ref_file.path()!
@@ -168,3 +177,42 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
src_file.copy(dest: dest_file.path)!
}
}
// Helper function to recursively collect cross-collection references.
// It walks a page's links and adds every non-local reference to the accumulator maps.
fn (mut c Collection) collect_cross_collection_references(mut page Page,
mut all_cross_pages map[string]&Page,
mut all_cross_files map[string]&File,
mut processed_pages map[string]bool) ! {
// Use cached links from validation (before transformation) to preserve collection info
for mut link in page.links {
if link.status != .found {
continue
}
is_local := link.target_collection_name == c.name
// Collect cross-collection page references
if link.file_type == .page && !is_local {
page_key := '${link.target_collection_name}:${link.target_item_name}'
// Only add if not already collected
if page_key !in all_cross_pages {
mut target_page := link.target_page()!
all_cross_pages[page_key] = target_page
// Don't mark as processed yet - we'll do that when we actually process its links
}
}
// Collect cross-collection file/image references
if (link.file_type == .file || link.file_type == .image) && !is_local {
file_key := '${link.target_collection_name}:${link.target_item_name}'
// Only add if not already collected
if file_key !in all_cross_files {
mut target_file := link.target_file()!
all_cross_files[file_key] = target_file
}
}
}
}
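The second pass above is a worklist-style fixpoint: keep sweeping the collected pages until a full sweep discovers nothing new. A minimal, self-contained sketch of that pattern, using generic names rather than this module's API:
```v
// 'links' stands in for link resolution: item -> referenced items.
links := {
	'start': ['a', 'b']
	'a':     ['b', 'c']
	'b':     []string{}
	'c':     []string{}
}
mut collected := {
	'start': true
}
mut processed := map[string]bool{}
for {
	mut found_new := false
	// Snapshot the keys so we can grow 'collected' while iterating.
	for item in collected.keys() {
		if item in processed {
			continue
		}
		processed[item] = true
		found_new = true
		for target in links[item] or { []string{} } {
			if target !in collected {
				collected[target] = true
			}
		}
	}
	if !found_new {
		break
	}
}
println(processed.keys()) // every item reachable from 'start'
```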

View File

@@ -3,6 +3,7 @@ module atlas
import incubaid.herolib.core.playbook { PlayBook }
import incubaid.herolib.develop.gittools
import incubaid.herolib.ui.console
import os
// Play function to process HeroScript actions for Atlas
pub fn play(mut plbook PlayBook) ! {
@@ -66,7 +67,7 @@ pub fn play(mut plbook PlayBook) ! {
for mut action in export_actions {
mut p := action.params
name = p.get_default('name', 'main')!
destination := p.get_default('destination', '/tmp/atlas_export')!
destination := p.get_default('destination', '${os.home_dir()}/hero/var/atlas_export')!
reset := p.get_default_true('reset')
include := p.get_default_true('include')
redis := p.get_default_true('redis')

View File

@@ -38,7 +38,7 @@ pub fn set_titles(page string, maxnr int) string {
for line in lines {
mut hash_count := 0
mut first_char_idx := 0
for char_idx, r in line.runes() {
for _, r in line.runes() {
if r == ` ` {
first_char_idx++
continue
@@ -89,7 +89,7 @@ pub fn set_titles(page string, maxnr int) string {
// Remove existing numbering (e.g., "1. ", "1.1. ")
mut skip_chars := 0
mut in_numbering := true
for r_idx, r in original_title_text.runes() {
for _, r in original_title_text.runes() {
if in_numbering {
if (r >= `0` && r <= `9`) || r == `.` || r == ` ` {
skip_chars++

View File

@@ -22,8 +22,8 @@ pub mut:
recursive bool
pull bool
reload bool // means reload the info into the cache
script bool = true // run non interactive
reset bool = true // means we will lose changes (only relevant for clone, pull)
script bool // run non interactive
reset bool // means we will lose changes (only relevant for clone, pull)
}
// do group actions on repo
@@ -38,14 +38,12 @@ pub mut:
// url string
// pull bool
// reload bool //means reload the info into the cache
// script bool = true // run non interactive
// reset bool = true // means we will lose changes (only relevant for clone, pull)
// script bool // run non interactive
// reset bool// means we will lose changes (only relevant for clone, pull)
//```
pub fn (mut gs GitStructure) do(args_ ReposActionsArgs) !string {
mut args := args_
console.print_debug('git do ${args.cmd}')
// println(args)
// $dbg;
if args.path.len > 0 && args.url.len > 0 {
panic('bug')
@@ -99,7 +97,9 @@ pub fn (mut gs GitStructure) do(args_ ReposActionsArgs) !string {
provider: args.provider
)!
if repos.len < 4 || args.cmd in 'pull,push,commit,delete'.split(',') {
// println(repos.map(it.name))
if repos.len < 4 || args.cmd in 'pull,push,commit'.split(',') {
args.reload = true
}

View File

@@ -19,7 +19,7 @@ pub fn (mut repo GitRepo) status_update(args StatusUpdateArgs) ! {
}
if args.reset || repo.last_load == 0 {
// console.print_debug('${repo.name} : Cache get')
// console.print_debug('${repo.name} : Cache Get')
repo.cache_get()!
}
@@ -30,6 +30,8 @@ pub fn (mut repo GitRepo) status_update(args StatusUpdateArgs) ! {
// Decide if a full load is needed.
if args.reset || repo.last_load == 0
|| current_time - repo.last_load >= repo.config.remote_check_period {
// console.print_debug("reload ${repo.name}:\n args reset:${args.reset}\n lastload:${repo.last_load}\n currtime-lastload:${current_time- repo.last_load}\n period:${repo.config.remote_check_period}")
// $dbg;
repo.load_internal() or {
// Persist the error state to the cache
console.print_stderr('Failed to load repository ${repo.name} at ${repo.path()}: ${err}')
@@ -51,7 +53,8 @@ fn (mut repo GitRepo) load_internal() ! {
repo.exec('fetch --all') or {
repo.status.error = 'Failed to fetch updates: ${err}'
return error('Failed to fetch updates for ${repo.name} at ${repo.path()}: ${err}. Please check network connection and repository access.')
console.print_stderr('Failed to fetch updates for ${repo.name} at ${repo.path()}: ${err}. \nPlease check git repo source, network connection and repository access.')
return
}
repo.load_branches()!
repo.load_tags()!

View File

@@ -466,7 +466,7 @@ pub fn generate_random_workspace_name() string {
'script',
'ocean',
'phoenix',
'atlas',
'doctree',
'quest',
'shield',
'dragon',

View File

@@ -2,6 +2,7 @@ module db
import x.json2
import json
import incubaid.herolib.data.ourtime
import strconv
pub fn decode_int(data string) !int {
@@ -30,6 +31,17 @@ pub fn decode_u32(data string) !u32 {
return u32(parsed_uint)
}
pub fn u32_ourtime(t u32) ourtime.OurTime {
return ourtime.OurTime{
unixt: i64(t)
}
}
pub fn ourtime_u32(t ourtime.OurTime) u32 {
return u32(t.unixt) // Convert unix time to u32
}
pub fn decode_string(data string) !string {
// Try JSON decode first (for proper JSON strings)
// if result := json2.decode[string](data) {
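The two helpers above bridge u32 epoch timestamps and ourtime.OurTime. A round-trip sketch (note that a u32 epoch wraps in the year 2106, so the back-conversion truncates beyond that):

```v
import incubaid.herolib.hero.db

fn main() {
	t := db.u32_ourtime(u32(1_760_000_000)) // u32 epoch seconds -> OurTime
	assert t.unixt == 1_760_000_000
	assert db.ourtime_u32(t) == u32(1_760_000_000) // and back again
}
```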

View File

@@ -498,7 +498,6 @@ pub fn calendar_event_handle(mut f ModelsFactory, rpcid int, servercontext map[s
}
else {
println('Method not found on calendar_event: ${method}')
$dbg;
return new_error(rpcid,
code: 32601
message: 'Method ${method} not found on calendar_event'

View File

@@ -0,0 +1,219 @@
module ledger
import incubaid.herolib.hero.db
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.schemas.jsonrpc { Response, new_error, new_response, new_response_false, new_response_int, new_response_true }
import json
pub struct DBAccount {
pub mut:
db &db.DB @[skip; str: skip]
}
pub fn (self Account) type_name() string {
return 'Account'
}
@[params]
pub struct AccountPolicyArg {
pub mut:
policy_id u32
admins []u32
min_signatures u8
limits []AccountLimit
whitelist_out []u32
whitelist_in []u32
lock_till u32
admin_lock_type LockType
admin_lock_till u32
admin_unlock []u32
admin_unlock_min_signature u8
clawback_accounts []u32
clawback_min_signatures u8
clawback_from u32
clawback_till u32
}
pub struct AccountArg {
pub mut:
id u32 // id of an existing record; 0 means create a new one (read by the 'set' handler below)
name string
description string
owner_id u32
location_id u32
accountpolicies []AccountPolicyArg
assets []AccountAsset
assetid u32
last_activity u32 // timestamp
administrators []u32
}
pub fn (mut self DBAccount) new(args AccountArg) !Account {
mut accountpolicies := []AccountPolicy{}
for policy_arg in args.accountpolicies {
accountpolicies << AccountPolicy{
policy_id: policy_arg.policy_id
admins: policy_arg.admins
min_signatures: policy_arg.min_signatures
limits: policy_arg.limits
whitelist_out: policy_arg.whitelist_out
whitelist_in: policy_arg.whitelist_in
lock_till: policy_arg.lock_till
admin_lock_type: policy_arg.admin_lock_type
admin_lock_till: policy_arg.admin_lock_till
admin_unlock: policy_arg.admin_unlock
admin_unlock_min_signature: policy_arg.admin_unlock_min_signature
clawback_accounts: policy_arg.clawback_accounts
clawback_min_signatures: policy_arg.clawback_min_signatures
clawback_from: policy_arg.clawback_from
clawback_till: policy_arg.clawback_till
}
}
mut o := Account{
owner_id: args.owner_id
location_id: args.location_id
accountpolicies: accountpolicies
assets: args.assets
assetid: args.assetid
last_activity: args.last_activity
administrators: args.administrators
}
o.name = args.name
o.description = args.description
o.updated_at = u32(ourtime.now().unix())
return o
}
pub fn (mut self DBAccount) set(o Account) !Account {
return self.db.set[Account](o)!
}
pub fn (mut self DBAccount) delete(id u32) !bool {
if !self.db.exists[Account](id)! {
return false
}
self.db.delete[Account](id)!
return true
}
pub fn (mut self DBAccount) exist(id u32) !bool {
return self.db.exists[Account](id)!
}
pub fn (mut self DBAccount) get(id u32) !Account {
mut o, data := self.db.get_data[Account](id)!
mut e_decoder := encoder.decoder_new(data)
self.load(mut o, mut e_decoder)!
return o
}
@[params]
pub struct AccountListArg {
pub mut:
filter string
status int = -1
limit int = 20
offset int = 0
}
pub fn (mut self DBAccount) list(args AccountListArg) ![]Account {
mut all_accounts := self.db.list[Account]()!.map(self.get(it)!)
mut filtered_accounts := []Account{}
for account in all_accounts {
// Add filter logic based on Account properties
if args.filter != '' && !account.name.contains(args.filter)
&& !account.description.contains(args.filter) {
continue
}
// We could add more filters based on status if the Account struct has a status field
filtered_accounts << account
}
// Apply pagination
mut start := args.offset
if start >= filtered_accounts.len {
start = 0
}
mut limit := args.limit
if limit > 100 {
limit = 100
}
if start + limit > filtered_accounts.len {
limit = filtered_accounts.len - start
}
if limit <= 0 {
return []Account{}
}
return if filtered_accounts.len > 0 {
filtered_accounts[start..start + limit]
} else {
[]Account{}
}
}
pub fn (mut self DBAccount) list_all() ![]Account {
return self.db.list[Account]()!.map(self.get(it)!)
}
pub struct UserRef {
pub mut:
id u32
}
pub fn account_handle(mut f ModelsFactory, rpcid int, servercontext map[string]string, userref UserRef, method string, params string) !Response {
match method {
'get' {
id := db.decode_u32(params)!
res := f.account.get(id)!
return new_response(rpcid, json.encode_pretty(res))
}
'set' {
mut args := db.decode_generic[AccountArg](params)!
mut o := f.account.new(args)!
if args.id != 0 {
o.id = args.id
}
o = f.account.set(o)!
return new_response_int(rpcid, int(o.id))
}
'delete' {
id := db.decode_u32(params)!
success := f.account.delete(id)!
if success {
return new_response_true(rpcid)
} else {
return new_response_false(rpcid)
}
}
'exist' {
id := db.decode_u32(params)!
if f.account.exist(id)! {
return new_response_true(rpcid)
} else {
return new_response_false(rpcid)
}
}
'list' {
args := db.decode_generic_or_default[AccountListArg](params, AccountListArg{})!
result := f.account.list(args)!
return new_response(rpcid, json.encode_pretty(result))
}
else {
return new_error(
rpcid: rpcid
code: 32601
message: 'Method ${method} not found on Account'
)
}
}
}

View File

@@ -0,0 +1,72 @@
module ledger
import incubaid.herolib.hero.db
// Account represents an account in the financial system
@[heap]
pub struct Account {
db.Base
pub mut:
owner_id u32 // link to user, 0 means undefined
location_id u32 // link to location, 0 is none
accountpolicies []AccountPolicy
assets []AccountAsset
assetid u32
last_activity u32
administrators []u32
status AccountStatus
}
// AccountStatus represents the status of an account
pub enum AccountStatus {
active
inactive
suspended
archived
}
// AccountPolicy represents a set of rules for an account
pub struct AccountPolicy {
pub mut:
policy_id u32 @[index]
admins []u32 // people who can transfer money out
min_signatures u8 // nr of people who need to sign
limits []AccountLimit
whitelist_out []u32 // where money can go to
whitelist_in []u32 // where money can come from
lock_till u32 // epoch date until which no money can be transferred; transfers are only possible after it
admin_lock_type LockType
admin_lock_till u32 // epoch date until which the admin policy is locked (0 means it is free); unlocking allows changing this policy
admin_unlock []u32 // users who can unlock the admin policy
admin_unlock_min_signature u8 // nr of signatures required from admin_unlock
clawback_accounts []u32 // account(s) which can clawback
clawback_min_signatures u8
clawback_from u32 // from epoch money can be clawed back, 0 is always
clawback_till u32 // till which date
}
pub enum LockType {
locked_till
locked
free
}
pub struct AccountLimit {
pub mut:
amount u64 // in smallest unit
asset_id u32
period AccountLimitPeriodLimit
}
pub enum AccountLimitPeriodLimit {
daily
weekly
monthly
}
pub struct AccountAsset {
pub mut:
assetid u32
balance u64
metadata map[string]string
}
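A minimal construction sketch for the policy types above (illustrative values; amounts are in the smallest unit):

```v
policy := AccountPolicy{
	policy_id: 1
	admins: [u32(10), 11] // the two users allowed to transfer out
	min_signatures: 2 // both must sign
	limits: [
		AccountLimit{
			amount: 1_000_000
			asset_id: 1
			period: .daily
		},
	]
}
```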

View File

@@ -0,0 +1,122 @@
module ledger
import incubaid.herolib.data.encoder
pub fn (self AccountPolicy) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u32(self.policy_id)
e.add_list_u32(self.admins)
e.add_u8(self.min_signatures)
e.add_u32(u32(self.limits.len))
for limit in self.limits {
limit.dump(mut e)!
}
e.add_list_u32(self.whitelist_out)
e.add_list_u32(self.whitelist_in)
e.add_u32(self.lock_till)
e.add_u8(u8(self.admin_lock_type))
e.add_u32(self.admin_lock_till)
e.add_list_u32(self.admin_unlock)
e.add_u8(self.admin_unlock_min_signature)
e.add_list_u32(self.clawback_accounts)
e.add_u8(self.clawback_min_signatures)
e.add_u32(self.clawback_from)
e.add_u32(self.clawback_till)
}
fn (mut self AccountPolicy) load(mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported AccountPolicy version: ${version}'
self.policy_id = e.get_u32()!
self.admins = e.get_list_u32()!
self.min_signatures = e.get_u8()!
limits_len := e.get_u32()!
self.limits = []AccountLimit{cap: int(limits_len)}
for _ in 0 .. limits_len {
mut limit := AccountLimit{}
limit.load(mut e)!
self.limits << limit
}
self.whitelist_out = e.get_list_u32()!
self.whitelist_in = e.get_list_u32()!
self.lock_till = e.get_u32()!
self.admin_lock_type = unsafe { LockType(e.get_u8()!) }
self.admin_lock_till = e.get_u32()!
self.admin_unlock = e.get_list_u32()!
self.admin_unlock_min_signature = e.get_u8()!
self.clawback_accounts = e.get_list_u32()!
self.clawback_min_signatures = e.get_u8()!
self.clawback_from = e.get_u32()!
self.clawback_till = e.get_u32()!
}
pub fn (self AccountLimit) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u64(self.amount)
e.add_u32(self.asset_id)
e.add_u8(u8(self.period))
}
fn (mut self AccountLimit) load(mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported AccountLimit version: ${version}'
self.amount = e.get_u64()! // must match add_u64 in dump
self.asset_id = e.get_u32()!
self.period = unsafe { AccountLimitPeriodLimit(e.get_u8()!) }
}
pub fn (self AccountAsset) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u32(self.assetid)
e.add_u64(self.balance)
e.add_map_string(self.metadata)
}
fn (mut self AccountAsset) load(mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported AccountAsset version: ${version}'
self.assetid = e.get_u32()!
self.balance = e.get_u64()! // must match add_u64 in dump
self.metadata = e.get_map_string()!
}
pub fn (self Account) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u32(self.owner_id)
e.add_u32(self.location_id)
e.add_u32(u32(self.accountpolicies.len))
for policy in self.accountpolicies {
policy.dump(mut e)!
}
e.add_u32(u32(self.assets.len))
for asset in self.assets {
asset.dump(mut e)!
}
e.add_u32(self.assetid)
e.add_u32(self.last_activity)
e.add_list_u32(self.administrators)
}
fn (mut self DBAccount) load(mut o Account, mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported Account version: ${version}'
o.owner_id = e.get_u32()!
o.location_id = e.get_u32()!
policies_len := e.get_u32()!
o.accountpolicies = []AccountPolicy{cap: int(policies_len)}
for _ in 0 .. policies_len {
mut policy := AccountPolicy{}
policy.load(mut e)!
o.accountpolicies << policy
}
assets_len := e.get_u32()!
o.assets = []AccountAsset{cap: int(assets_len)}
for _ in 0 .. assets_len {
mut asset := AccountAsset{}
asset.load(mut e)!
o.assets << asset
}
o.assetid = e.get_u32()!
o.last_activity = e.get_u32()!
o.administrators = e.get_list_u32()!
}
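A round-trip sketch for the versioned encoding above. It has to live inside module ledger because load is module-private; encoder.new() and the e.data buffer are assumptions, inferred from how decoder_new(data) is used in DBAccount.get:

```v
module ledger

import incubaid.herolib.data.encoder

fn test_account_limit_roundtrip() ! {
	mut e := encoder.new() // assumed constructor for an Encoder
	limit := AccountLimit{
		amount: 100
		asset_id: 1
		period: .daily
	}
	limit.dump(mut e)!
	mut d := encoder.decoder_new(e.data) // e.data assumed to hold the written bytes
	mut loaded := AccountLimit{}
	loaded.load(mut d)!
	assert loaded.amount == 100
	assert loaded.period == .daily
}
```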

View File

@@ -0,0 +1,61 @@
module ledger
import incubaid.herolib.hero.db
import json
pub struct ModelsFactory {
pub mut:
db &db.DB
account &DBAccount
// asset &DBAsset
// dnszone &DBDNSZone
// group &DBGroup
// member &DBMember
// notary &DBNotary
// signature &DBSignature
// transaction &DBTransaction
// user &DBUser
// userkvs &DBUserKVS
// userkvsitem &DBUserKVSItem
}
pub fn new_models_factory(mut database db.DB) !&ModelsFactory {
mut factory := &ModelsFactory{
db: database
account: &DBAccount{
db: database
}
}
// factory.asset = &DBAsset{
// db: database
// }
// factory.dnszone = &DBDNSZone{
// db: database
// }
// factory.group = &DBGroup{
// db: database
// }
// factory.member = &DBMember{
// db: database
// }
// factory.notary = &DBNotary{
// db: database
// }
// factory.signature = &DBSignature{
// db: database
// }
// factory.transaction = &DBTransaction{
// db: database
// }
// factory.user = &DBUser{
// db: database
// }
// factory.userkvs = &DBUserKVS{
// db: database
// }
// factory.userkvsitem = &DBUserKVSItem{
// db: database
// }
return factory
}
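A minimal end-to-end sketch tying the factory to the DBAccount CRUD methods defined earlier (how the db.DB handle is obtained is left open; the calls match the signatures shown in these files, and the JSON-RPC entry point would be account_handle on top of the same factory):

```v
module ledger

import incubaid.herolib.hero.db

fn example_usage(mut database db.DB) ! {
	mut f := new_models_factory(mut database)!
	// create and persist an account
	mut acc := f.account.new(name: 'savings', owner_id: 1)!
	acc = f.account.set(acc)!
	assert f.account.exist(acc.id)!
	// fetch it back and check the round trip
	loaded := f.account.get(acc.id)!
	assert loaded.name == 'savings'
}
```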

View File

@@ -1,52 +1,4 @@
module models_ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
import json
// AccountStatus represents the status of an account
pub enum AccountStatus {
active
inactive
suspended
archived
}
// Account represents an account in the financial system
@[heap]
pub struct Account {
db.Base
pub mut:
owner_id u32 // link to user
location_id u32 // link to location, 0 is none
accountpolicies []AccountPolicy
assets []AccountAsset
assetid u32
last_activity u64
administrators []u32
}
// AccountPolicy represents a set of rules for an account
pub struct AccountPolicy {
pub mut:
policy_id u32 @[index]
admins []u32 // people who can transfer money out
min_signatures u8 // nr of people who need to sign
limits []AccountLimit
whitelist_out []u32 // where money can go to
whitelist_in []u32 // where money can come from
lock_till u64 // date in epoch till no money can be transfered, only after
admin_lock_type LockType
admin_lock_till u64 // date in epoch when admin can unlock (0 means its free), this is unlock for changing this policy
admin_unlock []u32 // users who can unlock the admin policy
admin_unlock_min_signature u8 // nr of signatures from the adminunlock
clawback_accounts []u32 // account(s) which can clawback
clawback_min_signatures u8
clawback_from u64 // from epoch money can be clawed back, 0 is always
clawback_till u64 // till which date
}
module ledger
pub fn (self AccountPolicy) dump(mut e encoder.Encoder) ! {
e.add_u32(self.policy_id)
@@ -58,15 +10,15 @@ pub fn (self AccountPolicy) dump(mut e encoder.Encoder) ! {
}
e.add_list_u32(self.whitelist_out)
e.add_list_u32(self.whitelist_in)
e.add_u64(self.lock_till)
e.add_u32(self.lock_till)
e.add_u8(u8(self.admin_lock_type))
e.add_u64(self.admin_lock_till)
e.add_u32(self.admin_lock_till)
e.add_list_u32(self.admin_unlock)
e.add_u8(self.admin_unlock_min_signature)
e.add_list_u32(self.clawback_accounts)
e.add_u8(self.clawback_min_signatures)
e.add_u64(self.clawback_from)
e.add_u64(self.clawback_till)
e.add_u32(self.clawback_from)
e.add_u32(self.clawback_till)
}
fn (mut self AccountPolicy) load(mut e encoder.Decoder) ! {
@@ -82,65 +34,38 @@ fn (mut self AccountPolicy) load(mut e encoder.Decoder) ! {
}
self.whitelist_out = e.get_list_u32()!
self.whitelist_in = e.get_list_u32()!
self.lock_till = e.get_u64()!
self.lock_till = e.get_u32()!
self.admin_lock_type = unsafe { LockType(e.get_u8()!) }
self.admin_lock_till = e.get_u64()!
self.admin_lock_till = e.get_u32()!
self.admin_unlock = e.get_list_u32()!
self.admin_unlock_min_signature = e.get_u8()!
self.clawback_accounts = e.get_list_u32()!
self.clawback_min_signatures = e.get_u8()!
self.clawback_from = e.get_u64()!
self.clawback_till = e.get_u64()!
}
pub enum LockType {
locked_till
locked
free
}
pub struct AccountLimit {
pub mut:
amount f64
asset_id u32
period AccountLimitPeriodLimit
self.clawback_from = e.get_u32()!
self.clawback_till = e.get_u32()!
}
pub fn (self AccountLimit) dump(mut e encoder.Encoder) ! {
e.add_f64(self.amount)
e.add_u32(self.amount)
e.add_u32(self.asset_id)
e.add_u8(u8(self.period))
}
fn (mut self AccountLimit) load(mut e encoder.Decoder) ! {
self.amount = e.get_f64()!
self.amount = e.get_u32()!
self.asset_id = e.get_u32()!
self.period = unsafe { AccountLimitPeriodLimit(e.get_u8()!) }
}
pub enum AccountLimitPeriodLimit {
daily
weekly
monthly
}
pub struct AccountAsset {
db.Base
pub mut:
assetid u32
balance f64
metadata map[string]string
}
pub fn (self AccountAsset) dump(mut e encoder.Encoder) ! {
e.add_u32(self.assetid)
e.add_f64(self.balance)
e.add_u32(self.balance)
e.add_map_string(self.metadata)
}
fn (mut self AccountAsset) load(mut e encoder.Decoder) ! {
self.assetid = e.get_u32()!
self.balance = e.get_f64()!
self.balance = e.get_u32()!
self.metadata = e.get_map_string()!
}
@@ -211,7 +136,7 @@ pub fn (self Account) dump(mut e encoder.Encoder) ! {
asset.dump(mut e)!
}
e.add_u32(self.assetid)
e.add_u64(self.last_activity)
e.add_u32(self.last_activity)
e.add_list_u32(self.administrators)
}
@@ -233,7 +158,7 @@ fn (mut self DBAccount) load(mut o Account, mut e encoder.Decoder) ! {
o.assets << asset
}
o.assetid = e.get_u32()!
o.last_activity = e.get_u64()!
o.last_activity = e.get_u32()!
o.administrators = e.get_list_u32()!
}
@@ -387,59 +312,6 @@ pub fn (mut self DBAccount) list_all() ![]Account {
return self.db.list[Account]()!.map(self.get(it)!)
}
// Response struct for API
pub struct Response {
pub mut:
id int
jsonrpc string = '2.0'
result string
error ?ResponseError
}
pub struct ResponseError {
pub mut:
code int
message string
}
pub fn new_response(rpcid int, result string) Response {
return Response{
id: rpcid
result: result
}
}
pub fn new_response_true(rpcid int) Response {
return Response{
id: rpcid
result: 'true'
}
}
pub fn new_response_false(rpcid int) Response {
return Response{
id: rpcid
result: 'false'
}
}
pub fn new_response_int(rpcid int, result int) Response {
return Response{
id: rpcid
result: result.str()
}
}
pub fn new_error(rpcid int, code int, message string) Response {
return Response{
id: rpcid
error: ResponseError{
code: code
message: message
}
}
}
pub struct UserRef {
pub mut:
id u32

View File

@@ -0,0 +1,77 @@
module ledger
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
import json
// Account represents an account in the financial system
@[heap]
pub struct Account {
db.Base
pub mut:
owner_id u32 // link to user, 0 means undefined
location_id u32 // link to location, 0 is none
accountpolicies []AccountPolicy
assets []AccountAsset
assetid u32
last_activity u64
administrators []u32
status AccountStatus
}
// AccountStatus represents the status of an account
pub enum AccountStatus {
active
inactive
suspended
archived
}
// AccountPolicy represents a set of rules for an account
pub struct AccountPolicy {
pub mut:
policy_id u32 @[index]
admins []u32 // people who can transfer money out
min_signatures u8 // nr of people who need to sign
limits []AccountLimit
whitelist_out []u32 // where money can go to
whitelist_in []u32 // where money can come from
lock_till u32 // epoch date until which no money can be transferred; transfers are only possible after it
admin_lock_type LockType
admin_lock_till u32 // epoch date until which the admin policy is locked (0 means it is free); unlocking allows changing this policy
admin_unlock []u32 // users who can unlock the admin policy
admin_unlock_min_signature u8 // nr of signatures required from admin_unlock
clawback_accounts []u32 // account(s) which can clawback
clawback_min_signatures u8
clawback_from u32 // from epoch money can be clawed back, 0 is always
clawback_till u32 // till which date
}
pub enum LockType {
locked_till
locked
free
}
pub struct AccountLimit {
pub mut:
amount u64 // in smallest unit
asset_id u32
period AccountLimitPeriodLimit
}
pub enum AccountLimitPeriodLimit {
daily
weekly
monthly
}
pub struct AccountAsset {
pub mut:
assetid u32
balance u64
metadata map[string]string
}

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import json

View File

@@ -1,5 +1,5 @@
// lib/threefold/models_ledger/asset.v
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
@@ -14,7 +14,7 @@ pub mut:
address string @[index; required] // The unique address or identifier for the asset.
asset_type string @[required] // The type of the asset (e.g., 'token', 'nft').
issuer u32 @[required] // The user ID of the issuer of the asset.
supply f64 // The total supply of the asset.
supply u64 // The total supply of the asset.
decimals u8 // The number of decimal places for the asset's value.
is_frozen bool // Indicates if the asset is currently frozen and cannot be transferred.
metadata map[string]string // A map for storing arbitrary metadata as key-value pairs.
@@ -61,7 +61,7 @@ pub fn (self Asset) dump(mut e encoder.Encoder) ! {
e.add_string(self.address)
e.add_string(self.asset_type)
e.add_u32(self.issuer)
e.add_f64(self.supply)
e.add_u64(self.supply)
e.add_u8(self.decimals)
e.add_bool(self.is_frozen)
@@ -74,7 +74,7 @@ fn (mut self DBAsset) load(mut o Asset, mut e encoder.Decoder) ! {
o.address = e.get_string()!
o.asset_type = e.get_string()!
o.issuer = e.get_u32()!
o.supply = e.get_f64()!
o.supply = e.get_u64()!
o.decimals = e.get_u8()!
o.is_frozen = e.get_bool()!
@@ -91,7 +91,7 @@ pub mut:
address string
asset_type string
issuer u32
supply f64
supply u64
decimals u8
is_frozen bool
metadata map[string]string
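With supply now stored as an integer in the smallest unit, human-readable amounts scale by decimals. A sketch of the conversion (hypothetical helper, not part of the diff):

```v
// convert whole units to the smallest-unit representation (hypothetical helper)
fn to_smallest_unit(whole u64, decimals u8) u64 {
	mut factor := u64(1)
	for _ in 0 .. int(decimals) {
		factor *= 10
	}
	return whole * factor
}

// to_smallest_unit(5, 8) == 500_000_000
```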

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import json
@@ -77,7 +77,7 @@ fn test_asset_list_filtering() ! {
address: 'ADDR${i}'
asset_type: if i < 3 { 'token' } else { 'nft' }
issuer: if i % 2 == 0 { u32(1) } else { u32(2) }
supply: 1000.0 * f64(i + 1)
supply: 1000 * u64(i + 1)
decimals: 8
is_frozen: i >= 3
}

View File

@@ -1,5 +1,5 @@
// lib/threefold/models_ledger/dnszone.v
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
fn test_setup_db_only() ! {
mut store := setup_test_db()!

View File

@@ -1,5 +1,5 @@
// lib/threefold/models_ledger/group.v
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import incubaid.herolib.hero.db
import json

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import incubaid.herolib.hero.db

View File

@@ -1,5 +1,5 @@
// lib/threefold/models_ledger/transaction.v
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
@@ -14,7 +14,7 @@ pub mut:
source u32
destination u32
assetid u32
amount f64
amount u64
timestamp u64
status string
memo string
@@ -100,7 +100,7 @@ pub fn (self Transaction) dump(mut e encoder.Encoder) ! {
e.add_u32(self.source)
e.add_u32(self.destination)
e.add_u32(self.assetid)
e.add_f64(self.amount)
e.add_u64(self.amount)
e.add_u64(self.timestamp)
e.add_string(self.status)
e.add_string(self.memo)
@@ -120,7 +120,7 @@ fn (mut self DBTransaction) load(mut o Transaction, mut e encoder.Decoder) ! {
o.source = e.get_u32()!
o.destination = e.get_u32()!
o.assetid = e.get_u32()!
o.amount = e.get_f64()!
o.amount = e.get_u64()!
o.timestamp = e.get_u64()!
o.status = e.get_string()!
o.memo = e.get_string()!
@@ -148,7 +148,7 @@ pub mut:
source u32
destination u32
assetid u32
amount f64
amount u64
timestamp u64
status string
memo string

View File

@@ -1,5 +1,5 @@
// lib/threefold/models_ledger/user.v
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime

View File

@@ -1,4 +1,4 @@
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime

View File

@@ -1,7 +1,6 @@
module models_ledger
module ledger
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
// UserKVSItem represents a single item in a user's key-value store
@@ -12,7 +11,6 @@ pub mut:
kvs_id u32 @[index]
key string @[index]
value string
timestamp u64
}
pub struct DBUserKVSItem {

View File

@@ -17,7 +17,7 @@ pub mut:
apikey string
apisecret string @[secret]
configpath string
nr int = 0 // each specific instance onto this server needs to have a unique nr
nr int // each instance on this server needs a unique nr
}
fn obj_init(obj_ LivekitServer) !LivekitServer {

View File

@@ -1,217 +0,0 @@
The following needs to be installed:
#!/bin/bash
set -euo pipefail
EXTRA_ARGS=""
log_info() {
echo '[INFO] ' "$@"
}
log_fatal() {
echo '[ERROR] ' "$@" >&2
exit 1
}
source_env_file() {
local env_file="${1:-}"
if [ ! -f "$env_file" ]; then
log_fatal "Environment file not found: $env_file"
fi
set -a
source "$env_file"
set +a
}
check_root() {
if [ "$EUID" -ne 0 ]; then
log_fatal "This script must be run as root"
fi
}
install_deps() {
log_info "Updating package lists..."
if ! apt-get update -qq > /dev/null 2>&1; then
log_fatal "Failed to update package lists"
fi
if ! command -v curl &> /dev/null; then
log_info "Installing curl..."
apt-get install -y -qq curl > /dev/null 2>&1 || log_fatal "Failed to install curl"
fi
if ! command -v ip &> /dev/null; then
log_info "Installing iproute2 for ip command..."
apt-get install -y -qq iproute2 > /dev/null 2>&1 || log_fatal "Failed to install iproute2"
fi
if ! command -v k3s &> /dev/null; then
log_info "Installing k3s..."
if ! curl -fsSL -o /usr/local/bin/k3s https://github.com/k3s-io/k3s/releases/download/v1.33.1+k3s1/k3s 2>/dev/null; then
log_fatal "Failed to download k3s"
fi
chmod +x /usr/local/bin/k3s
fi
if ! command -v kubectl &> /dev/null; then
log_info "Installing kubectl..."
if ! curl -fsSL -o /usr/local/bin/kubectl https://dl.k8s.io/release/v1.33.1/bin/linux/amd64/kubectl 2>/dev/null; then
log_fatal "Failed to download kubectl"
fi
chmod +x /usr/local/bin/kubectl
fi
}
get_iface_ipv6() {
local iface="$1"
# Step 1: Find the next-hop for 400::/7
local route_line
route_line=$(ip -6 route | grep "^400::/7.*dev ${iface}" || true)
if [ -z "$route_line" ]; then
log_fatal "No 400::/7 route found via interface ${iface}"
fi
# Extract next-hop IPv6
local nexthop
nexthop=$(echo "$route_line" | awk '{for(i=1;i<=NF;i++) if ($i=="via") print $(i+1)}')
# Step 2: Derive the subnet prefix (first 4 hextets) from the next-hop
local prefix
prefix=$(echo "$nexthop" | cut -d':' -f1-4)
# Step 3: Get global IPv6 addresses and match subnet
local ipv6_list
ipv6_list=$(ip -6 addr show dev "$iface" scope global | awk '/inet6/ {print $2}' | cut -d'/' -f1)
local ip ip_prefix
for ip in $ipv6_list; do
ip_prefix=$(echo "$ip" | cut -d':' -f1-4)
if [ "$ip_prefix" = "$prefix" ]; then
echo "$ip"
return 0
fi
done
log_fatal "No global IPv6 address found on ${iface} matching prefix ${prefix}"
}
prepare_args() {
log_info "Preparing k3s arguments..."
if [ -z "${K3S_FLANNEL_IFACE:-}" ]; then
log_fatal "K3S_FLANNEL_IFACE not set, it should be your mycelium interface"
else
local ipv6
ipv6=$(get_iface_ipv6 "$K3S_FLANNEL_IFACE")
EXTRA_ARGS="$EXTRA_ARGS --node-ip=$ipv6"
fi
if [ -n "${K3S_DATA_DIR:-}" ]; then
log_info "k3s data-dir set to: $K3S_DATA_DIR"
if [ -d "/var/lib/rancher/k3s" ] && [ -n "$(ls -A /var/lib/rancher/k3s 2>/dev/null)" ]; then
cp -r /var/lib/rancher/k3s/* $K3S_DATA_DIR && rm -rf /var/lib/rancher/k3s
fi
EXTRA_ARGS="$EXTRA_ARGS --data-dir $K3S_DATA_DIR --kubelet-arg=root-dir=$K3S_DATA_DIR/kubelet"
fi
if [[ "${MASTER:-}" = "true" ]]; then
EXTRA_ARGS="$EXTRA_ARGS --cluster-cidr=2001:cafe:42::/56"
EXTRA_ARGS="$EXTRA_ARGS --service-cidr=2001:cafe:43::/112"
EXTRA_ARGS="$EXTRA_ARGS --flannel-ipv6-masq"
fi
if [ -z "${K3S_URL:-}" ]; then
# Add additional SANs for planetary network IP, public IPv4, and public IPv6
# https://github.com/threefoldtech/tf-images/issues/98
local ifaces=( "tun0" "eth1" "eth2" )
for iface in "${ifaces[@]}"
do
# Check if interface exists before querying
if ! ip addr show "$iface" &>/dev/null; then
continue
fi
local addrs
addrs=$(ip addr show "$iface" 2>/dev/null | grep -E "inet |inet6 " | grep "global" | cut -d '/' -f1 | awk '{print $2}' || true)
local addr
for addr in $addrs
do
# Validate the IP address by trying to route to it
if ip route get "$addr" &>/dev/null; then
EXTRA_ARGS="$EXTRA_ARGS --tls-san $addr"
fi
done
done
if [ "${HA:-}" = "true" ]; then
EXTRA_ARGS="$EXTRA_ARGS --cluster-init"
fi
else
if [ -z "${K3S_TOKEN:-}" ]; then
log_fatal "K3S_TOKEN must be set when K3S_URL is specified (joining a cluster)"
fi
fi
}
patch_manifests() {
log_info "Patching manifests..."
dir="${K3S_DATA_DIR:-/var/lib/rancher/k3s}"
manifest="$dir/server/manifests/tfgw-crd.yaml"
# If K3S_URL is set, remove the manifest and exit: this is an agent node
if [[ -n "${K3S_URL:-}" ]]; then
rm -f "$manifest"
log_info "Agent node detected, removed manifest: $manifest"
exit 0
fi
# If K3S_URL is not set, patch the manifest: this is a server node
[[ ! -f "$manifest" ]] && echo "Manifest not found: $manifest" >&2 && exit 1
sed -i \
-e "s|\${MNEMONIC}|${MNEMONIC:-}|g" \
-e "s|\${NETWORK}|${NETWORK:-}|g" \
-e "s|\${TOKEN}|${TOKEN:-}|g" \
"$manifest"
}
run_node() {
if [ -z "${K3S_URL:-}" ]; then
log_info "Starting k3s server (initializing new cluster)..."
log_info "Command: k3s server --flannel-iface $K3S_FLANNEL_IFACE $EXTRA_ARGS"
exec k3s server --flannel-iface "$K3S_FLANNEL_IFACE" $EXTRA_ARGS 2>&1
elif [ "${MASTER:-}" = "true" ]; then
log_info "Starting k3s server (joining existing cluster as master)..."
log_info "Command: k3s server --server $K3S_URL --flannel-iface $K3S_FLANNEL_IFACE $EXTRA_ARGS"
exec k3s server --server "$K3S_URL" --flannel-iface "$K3S_FLANNEL_IFACE" $EXTRA_ARGS 2>&1
else
log_info "Starting k3s agent (joining existing cluster as worker)..."
log_info "Command: k3s agent --server $K3S_URL --flannel-iface $K3S_FLANNEL_IFACE $EXTRA_ARGS"
exec k3s agent --server "$K3S_URL" --flannel-iface "$K3S_FLANNEL_IFACE" $EXTRA_ARGS 2>&1
fi
}
main() {
source_env_file "${1:-}"
check_root
install_deps
prepare_args
patch_manifests
run_node
}
main "$@"
INSTRUCTIONS: use HeroLib as much as possible, e.g. SAL.

View File

@@ -2,119 +2,537 @@ module kubernetes_installer
import incubaid.herolib.osal.core as osal
import incubaid.herolib.ui.console
import incubaid.herolib.core.texttools
import incubaid.herolib.core
import incubaid.herolib.core.pathlib
import incubaid.herolib.installers.ulist
import incubaid.herolib.osal.startupmanager
import os
//////////////////// following actions are not specific to instance of the object
//////////////////// STARTUP COMMAND ////////////////////
// checks if kubectl is installed and meets minimum version requirement
fn installed() !bool {
if !osal.cmd_exists('kubectl') {
return false
fn (self &KubernetesInstaller) startupcmd() ![]startupmanager.ZProcessNewArgs {
mut res := []startupmanager.ZProcessNewArgs{}
// Get Mycelium IPv6 address
ipv6 := self.get_mycelium_ipv6()!
// Build K3s command based on node type
mut cmd := ''
mut extra_args := '--node-ip=${ipv6} --flannel-iface ${self.mycelium_interface}'
// Add data directory if specified
if self.data_dir != '' {
extra_args += ' --data-dir ${self.data_dir} --kubelet-arg=root-dir=${self.data_dir}/kubelet'
}
res := os.execute('${osal.profile_path_source_and()!} kubectl version --client --output=json')
if res.exit_code != 0 {
// Try older kubectl version command format
res2 := os.execute('${osal.profile_path_source_and()!} kubectl version --client --short')
if res2.exit_code != 0 {
return false
}
// Parse version from output like "Client Version: v1.31.0"
lines := res2.output.split_into_lines().filter(it.contains('Client Version'))
if lines.len == 0 {
return false
}
version_str := lines[0].all_after('v').trim_space()
if texttools.version(version) <= texttools.version(version_str) {
return true
}
return false
// Add token
if self.token != '' {
extra_args += ' --token ${self.token}'
}
// For newer kubectl versions with JSON output
// Just check if kubectl exists and runs - version checking is optional
return true
if self.is_master {
// Master node configuration
extra_args += ' --cluster-cidr=2001:cafe:42::/56 --service-cidr=2001:cafe:43::/112 --flannel-ipv6-masq'
if self.is_first_master {
// First master: initialize cluster
cmd = 'k3s server --cluster-init ${extra_args}'
} else {
// Additional master: join existing cluster
if self.master_url == '' {
return error('master_url is required for joining as additional master')
}
cmd = 'k3s server --server ${self.master_url} ${extra_args}'
}
} else {
// Worker node: join as agent
if self.master_url == '' {
return error('master_url is required for worker nodes')
}
cmd = 'k3s agent --server ${self.master_url} ${extra_args}'
}
res << startupmanager.ZProcessNewArgs{
name: 'k3s_${self.name}'
startuptype: .systemd
cmd: cmd
env: {
'HOME': os.home_dir()
}
}
return res
}
// get the Upload List of the files
//////////////////// RUNNING CHECK ////////////////////
fn running() !bool {
// Check if k3s process is running
res := osal.exec(cmd: 'pgrep -f "k3s (server|agent)"', stdout: false, raise_error: false)!
if res.exit_code == 0 {
// K3s process is running, that's enough for basic check
// We don't check kubectl connectivity here as it might not be ready immediately
// and could hang if kubeconfig is not properly configured
return true
}
return false
}
//////////////////// OS CHECK ////////////////////
fn check_ubuntu() ! {
// Check if running on Ubuntu
if !core.is_linux()! {
return error('K3s installer requires Linux. Current OS is not supported.')
}
// Check /etc/os-release for Ubuntu
content := os.read_file('/etc/os-release') or {
return error('Could not read /etc/os-release. Is this Ubuntu?')
}
if !content.contains('Ubuntu') && !content.contains('ubuntu') {
return error('This installer requires Ubuntu. Current OS is not Ubuntu.')
}
console.print_debug('OS check passed: Running on Ubuntu')
}
//////////////////// DEPENDENCY INSTALLATION ////////////////////
fn install_deps(k3s_version string) ! {
console.print_header('Installing dependencies...')
// Check and install curl
if !osal.cmd_exists('curl') {
console.print_header('Installing curl...')
osal.package_install('curl')!
}
// Check and install iproute2 (for ip command)
if !osal.cmd_exists('ip') {
console.print_header('Installing iproute2...')
osal.package_install('iproute2')!
}
// Install K3s binary
if !osal.cmd_exists('k3s') {
console.print_header('Installing K3s ${k3s_version}...')
k3s_url := 'https://github.com/k3s-io/k3s/releases/download/${k3s_version}+k3s1/k3s'
osal.download(
url: k3s_url
dest: '/tmp/k3s'
)!
// Make it executable and move to /usr/local/bin
osal.exec(cmd: 'chmod +x /tmp/k3s')!
osal.cmd_add(
cmdname: 'k3s'
source: '/tmp/k3s'
)!
}
// Install kubectl
if !osal.cmd_exists('kubectl') {
console.print_header('Installing kubectl...')
// Extract version number from k3s_version (e.g., v1.33.1)
kubectl_version := k3s_version
kubectl_url := 'https://dl.k8s.io/release/${kubectl_version}/bin/linux/amd64/kubectl'
osal.download(
url: kubectl_url
dest: '/tmp/kubectl'
)!
osal.exec(cmd: 'chmod +x /tmp/kubectl')!
osal.cmd_add(
cmdname: 'kubectl'
source: '/tmp/kubectl'
)!
}
console.print_header('All dependencies installed successfully')
}
//////////////////// INSTALLATION ACTIONS ////////////////////
fn installed() !bool {
return osal.cmd_exists('k3s') && osal.cmd_exists('kubectl')
}
// Install first master node
pub fn (mut self KubernetesInstaller) install_master() ! {
console.print_header('Installing K3s as first master node')
// Check OS
check_ubuntu()!
// Set flags
self.is_master = true
self.is_first_master = true
// Install dependencies
install_deps(self.k3s_version)!
// Ensure data directory exists
osal.dir_ensure(self.data_dir)!
// Save configuration
set(self)!
console.print_header('K3s first master installation completed')
console.print_header('Token: ${self.token}')
console.print_header('To start K3s, run: kubernetes_installer.start')
// Generate join script
join_script := self.generate_join_script()!
console.print_header('Join script generated. Save this for other nodes:\n${join_script}')
}
// Join as additional master
pub fn (mut self KubernetesInstaller) join_master() ! {
console.print_header('Joining K3s cluster as additional master')
// Check OS
check_ubuntu()!
// Validate required fields
if self.token == '' {
return error('token is required to join cluster')
}
if self.master_url == '' {
return error('master_url is required to join cluster')
}
// Set flags
self.is_master = true
self.is_first_master = false
// Install dependencies
install_deps(self.k3s_version)!
// Ensure data directory exists
osal.dir_ensure(self.data_dir)!
// Save configuration
set(self)!
console.print_header('K3s additional master installation completed')
console.print_header('To start K3s, run: kubernetes_installer.start')
}
// Install worker node
pub fn (mut self KubernetesInstaller) install_worker() ! {
console.print_header('Installing K3s as worker node')
// Check OS
check_ubuntu()!
// Validate required fields
if self.token == '' {
return error('token is required to join cluster')
}
if self.master_url == '' {
return error('master_url is required to join cluster')
}
// Set flags
self.is_master = false
self.is_first_master = false
// Install dependencies
install_deps(self.k3s_version)!
// Ensure data directory exists
osal.dir_ensure(self.data_dir)!
// Save configuration
set(self)!
console.print_header('K3s worker installation completed')
console.print_header('To start K3s, run: kubernetes_installer.start')
}
//////////////////// UTILITY FUNCTIONS ////////////////////
// Get kubeconfig content
pub fn (self &KubernetesInstaller) get_kubeconfig() !string {
kubeconfig_path := self.kubeconfig_path()
mut kubeconfig_file := pathlib.get_file(path: kubeconfig_path) or {
return error('Kubeconfig not found at ${kubeconfig_path}. Is K3s running?')
}
if !kubeconfig_file.exists() {
return error('Kubeconfig not found at ${kubeconfig_path}. Is K3s running?')
}
return kubeconfig_file.read()!
}
// Generate join script for other nodes
pub fn (self &KubernetesInstaller) generate_join_script() !string {
if !self.is_first_master {
return error('Can only generate join script from first master node')
}
// Get Mycelium IPv6 of this master
master_ipv6 := self.get_mycelium_ipv6()!
master_url := 'https://[${master_ipv6}]:6443'
mut script := '#!/usr/bin/env hero
// ============================================================================
// K3s Cluster Join Script
// Generated from master node: ${self.node_name}
// ============================================================================
// Section 1: Join as Additional Master (HA)
// Uncomment to join as additional master node
/*
!!kubernetes_installer.configure
name:\'k3s_master_2\'
k3s_version:\'${self.k3s_version}\'
data_dir:\'${self.data_dir}\'
node_name:\'master-2\'
mycelium_interface:\'${self.mycelium_interface}\'
token:\'${self.token}\'
master_url:\'${master_url}\'
!!kubernetes_installer.join_master name:\'k3s_master_2\'
!!kubernetes_installer.start name:\'k3s_master_2\'
*/
// Section 2: Join as Worker Node
// Uncomment to join as worker node
/*
!!kubernetes_installer.configure
name:\'k3s_worker_1\'
k3s_version:\'${self.k3s_version}\'
data_dir:\'${self.data_dir}\'
node_name:\'worker-1\'
mycelium_interface:\'${self.mycelium_interface}\'
token:\'${self.token}\'
master_url:\'${master_url}\'
!!kubernetes_installer.install_worker name:\'k3s_worker_1\'
!!kubernetes_installer.start name:\'k3s_worker_1\'
*/
'
return script
}
//////////////////// CLEANUP ////////////////////
fn destroy() ! {
console.print_header('Destroying K3s installation')
// Get configuration to find data directory
// Try to get from current configuration, otherwise use common paths
mut data_dirs := []string{}
if cfg := get() {
data_dirs << cfg.data_dir
console.print_debug('Found configured data directory: ${cfg.data_dir}')
} else {
console.print_debug('No configuration found, will clean up common K3s paths')
}
// Always add common K3s directories to ensure complete cleanup
data_dirs << '/var/lib/rancher/k3s'
data_dirs << '/root/hero/var/k3s'
// CRITICAL: Complete systemd service deletion FIRST before any other cleanup
// This prevents the service from auto-restarting during cleanup
// Step 1: Stop and delete ALL k3s systemd services using startupmanager
console.print_header('Stopping and removing systemd services...')
// Get systemd startup manager
mut sm := startupmanager_get(.systemd) or {
console.print_debug('Failed to get systemd manager: ${err}')
return error('Could not get systemd manager: ${err}')
}
// List all k3s services
all_services := sm.list() or {
console.print_debug('Failed to list services: ${err}')
[]string{}
}
// Filter and delete k3s services
for service_name in all_services {
if service_name.starts_with('k3s_') {
console.print_debug('Deleting systemd service: ${service_name}')
// Use startupmanager.delete() which properly stops, disables, and removes the service
sm.delete(service_name) or {
console.print_debug('Failed to delete service ${service_name}: ${err}')
}
}
}
console.print_header('Systemd services removed')
// Step 2: Kill any remaining K3s processes
console.print_header('Killing any remaining K3s processes...')
osal.exec(cmd: 'killall -9 k3s 2>/dev/null || true', stdout: false, raise_error: false) or {
console.print_debug('No k3s processes to kill or killall failed')
}
// Wait for processes to fully terminate
osal.exec(cmd: 'sleep 2', stdout: false) or {}
// Step 3: Unmount kubelet mounts (before network cleanup)
cleanup_mounts()!
// Step 4: Clean up network interfaces (after processes are stopped)
cleanup_network()!
// Step 5: Remove data directories
console.print_header('Removing data directories...')
// Remove all K3s data directories (deduplicated)
mut cleaned_dirs := map[string]bool{}
for data_dir in data_dirs {
if data_dir != '' && data_dir !in cleaned_dirs {
cleaned_dirs[data_dir] = true
console.print_debug('Removing data directory: ${data_dir}')
osal.exec(cmd: 'rm -rf ${data_dir}', stdout: false, raise_error: false) or {
console.print_debug('Failed to remove ${data_dir}: ${err}')
}
}
}
// Also remove /etc/rancher which K3s creates
console.print_debug('Removing /etc/rancher')
osal.exec(cmd: 'rm -rf /etc/rancher', stdout: false, raise_error: false) or {}
// Step 6: Clean up CNI
console.print_header('Cleaning up CNI directories...')
osal.exec(cmd: 'rm -rf /var/lib/cni/', stdout: false, raise_error: false) or {}
// Step 7: Clean up iptables rules
console.print_header('Cleaning up iptables rules')
osal.exec(
cmd: 'iptables-save | grep -v KUBE- | grep -v CNI- | grep -iv flannel | iptables-restore'
stdout: false
raise_error: false
) or {}
osal.exec(
cmd: 'ip6tables-save | grep -v KUBE- | grep -v CNI- | grep -iv flannel | ip6tables-restore'
stdout: false
raise_error: false
) or {}
console.print_header('K3s destruction completed')
}
fn cleanup_network() ! {
console.print_header('Cleaning up network interfaces')
// Remove interfaces that are slaves of cni0
// Get the list first, then delete one by one
if veth_result := osal.exec(
cmd: 'ip link show | grep "master cni0" | awk -F: \'{print $2}\' | xargs'
stdout: false
raise_error: false
) {
if veth_result.output.trim_space() != '' {
veth_interfaces := veth_result.output.trim_space().split(' ')
for veth in veth_interfaces {
veth_trimmed := veth.trim_space()
if veth_trimmed != '' {
console.print_debug('Deleting veth interface: ${veth_trimmed}')
osal.exec(cmd: 'ip link delete ${veth_trimmed}', stdout: false, raise_error: false) or {
console.print_debug('Failed to delete ${veth_trimmed}, continuing...')
}
}
}
}
} else {
console.print_debug('No veth interfaces found or error getting list')
}
// Remove CNI-related interfaces
interfaces := ['cni0', 'flannel.1', 'flannel-v6.1', 'kube-ipvs0', 'flannel-wg', 'flannel-wg-v6']
for iface in interfaces {
console.print_debug('Deleting interface: ${iface}')
// Use timeout to prevent hanging, and redirect stderr to avoid blocking
osal.exec(cmd: 'timeout 5 ip link delete ${iface} 2>/dev/null || true', stdout: false, raise_error: false) or {
console.print_debug('Interface ${iface} not found or already deleted')
}
}
// Remove CNI namespaces
if ns_result := osal.exec(
cmd: 'ip netns show | grep cni- | xargs'
stdout: false
raise_error: false
) {
if ns_result.output.trim_space() != '' {
namespaces := ns_result.output.trim_space().split(' ')
for ns in namespaces {
ns_trimmed := ns.trim_space()
if ns_trimmed != '' {
console.print_debug('Deleting namespace: ${ns_trimmed}')
osal.exec(cmd: 'ip netns delete ${ns_trimmed}', stdout: false, raise_error: false) or {
console.print_debug('Failed to delete namespace ${ns_trimmed}')
}
}
}
}
} else {
console.print_debug('No CNI namespaces found')
}
}
fn cleanup_mounts() ! {
console.print_header('Cleaning up mounts')
// Unmount and remove kubelet directories
paths := ['/run/k3s', '/var/lib/kubelet/pods', '/var/lib/kubelet/plugins', '/run/netns/cni-']
for path in paths {
// Find all mounts under this path and unmount them
if mount_result := osal.exec(
cmd: 'mount | grep "${path}" | awk \'{print $3}\' | sort -r'
stdout: false
raise_error: false
) {
if mount_result.output.trim_space() != '' {
mount_points := mount_result.output.split_into_lines()
for mount_point in mount_points {
mp_trimmed := mount_point.trim_space()
if mp_trimmed != '' {
console.print_debug('Unmounting: ${mp_trimmed}')
osal.exec(cmd: 'umount -f ${mp_trimmed}', stdout: false, raise_error: false) or {
console.print_debug('Failed to unmount ${mp_trimmed}')
}
}
}
}
} else {
console.print_debug('No mounts found for ${path}')
}
// Remove the directory
console.print_debug('Removing directory: ${path}')
osal.exec(cmd: 'rm -rf ${path}', stdout: false, raise_error: false) or {}
}
}
//////////////////// GENERIC INSTALLER FUNCTIONS ////////////////////
fn ulist_get() !ulist.UList {
return ulist.UList{}
}
// uploads to S3 server if configured
fn upload() ! {
// Not applicable for kubectl
// Not applicable for K3s
}
fn install() ! {
console.print_header('install kubectl')
mut url := ''
mut dest_path := '/tmp/kubectl'
// Determine download URL based on platform
if core.is_linux_arm()! {
url = 'https://dl.k8s.io/release/v${version}/bin/linux/arm64/kubectl'
} else if core.is_linux_intel()! {
url = 'https://dl.k8s.io/release/v${version}/bin/linux/amd64/kubectl'
} else if core.is_osx_arm()! {
url = 'https://dl.k8s.io/release/v${version}/bin/darwin/arm64/kubectl'
} else if core.is_osx_intel()! {
url = 'https://dl.k8s.io/release/v${version}/bin/darwin/amd64/kubectl'
} else {
return error('unsupported platform for kubectl installation')
}
console.print_header('downloading kubectl from ${url}')
// Download kubectl binary
osal.download(
url: url
// minsize_kb: 40000 // kubectl is ~45MB
dest: dest_path
)!
// Make it executable
os.chmod(dest_path, 0o755)!
// Install to system
osal.cmd_add(
cmdname: 'kubectl'
source: dest_path
)!
// Create .kube directory with proper permissions
kube_dir := os.join_path(os.home_dir(), '.kube')
if !os.exists(kube_dir) {
console.print_header('creating ${kube_dir} directory')
os.mkdir_all(kube_dir)!
os.chmod(kube_dir, 0o700)! // read/write/execute for owner only
console.print_header('${kube_dir} directory created with permissions 0700')
} else {
// Ensure correct permissions even if directory exists
os.chmod(kube_dir, 0o700)!
console.print_header('${kube_dir} directory permissions set to 0700')
}
console.print_header('kubectl installed successfully')
}
fn destroy() ! {
console.print_header('destroy kubectl')
if !installed()! {
console.print_header('kubectl is not installed')
return
}
// Remove kubectl command
osal.cmd_delete('kubectl')!
// Clean up any temporary files
osal.rm('/tmp/kubectl')!
console.print_header('kubectl destruction completed')
return error('Use install_master, join_master, or install_worker instead of generic install')
}

View File

@@ -4,6 +4,9 @@ import incubaid.herolib.core.base
import incubaid.herolib.core.playbook { PlayBook }
import incubaid.herolib.ui.console
import json
import incubaid.herolib.osal.startupmanager
import incubaid.herolib.osal.core as osal
import time
__global (
kubernetes_installer_global map[string]&KubernetesInstaller
@@ -125,22 +128,70 @@ pub fn play(mut plbook PlayBook) ! {
}
mut install_actions := plbook.find(filter: 'kubernetes_installer.configure')!
if install_actions.len > 0 {
return error("can't configure kubernetes_installer, because no configuration allowed for this installer.")
for mut install_action in install_actions {
heroscript := install_action.heroscript()
mut obj2 := heroscript_loads(heroscript)!
set(obj2)!
install_action.done = true
}
}
mut other_actions := plbook.find(filter: 'kubernetes_installer.')!
for mut other_action in other_actions {
if other_action.name in ['destroy', 'install'] {
mut p := other_action.params
reset := p.get_default_false('reset')
mut p := other_action.params
name := p.get_default('name', 'default')!
reset := p.get_default_false('reset')
mut k8s_obj := get(name: name, create: true)!
console.print_debug('action object:\n${k8s_obj}')
if other_action.name in ['destroy', 'install', 'build'] {
if other_action.name == 'destroy' || reset {
console.print_debug('install action kubernetes_installer.destroy')
destroy()!
k8s_obj.destroy()!
}
if other_action.name == 'install' {
console.print_debug('install action kubernetes_installer.install')
install()!
k8s_obj.install(reset: reset)!
}
}
if other_action.name in ['start', 'stop', 'restart'] {
if other_action.name == 'start' {
console.print_debug('install action kubernetes_installer.${other_action.name}')
k8s_obj.start()!
}
if other_action.name == 'stop' {
console.print_debug('install action kubernetes_installer.${other_action.name}')
k8s_obj.stop()!
}
if other_action.name == 'restart' {
console.print_debug('install action kubernetes_installer.${other_action.name}')
k8s_obj.restart()!
}
}
// K3s-specific actions
if other_action.name in ['install_master', 'join_master', 'install_worker'] {
if other_action.name == 'install_master' {
console.print_debug('install action kubernetes_installer.install_master')
k8s_obj.install_master()!
}
if other_action.name == 'join_master' {
console.print_debug('install action kubernetes_installer.join_master')
k8s_obj.join_master()!
}
if other_action.name == 'install_worker' {
console.print_debug('install action kubernetes_installer.install_worker')
k8s_obj.install_worker()!
}
}
if other_action.name == 'get_kubeconfig' {
console.print_debug('install action kubernetes_installer.get_kubeconfig')
kubeconfig := k8s_obj.get_kubeconfig()!
console.print_header('Kubeconfig:\n${kubeconfig}')
}
if other_action.name == 'generate_join_script' {
console.print_debug('install action kubernetes_installer.generate_join_script')
script := k8s_obj.generate_join_script()!
console.print_header('Join Script:\n${script}')
}
other_action.done = true
}
}
@@ -149,12 +200,107 @@ pub fn play(mut plbook PlayBook) ! {
//////////////////////////# LIFE CYCLE MANAGEMENT FOR INSTALLERS ///////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////
fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.StartupManager {
match cat {
.screen {
console.print_debug("installer: kubernetes_installer' startupmanager get screen")
return startupmanager.get(.screen)!
}
.zinit {
console.print_debug("installer: kubernetes_installer' startupmanager get zinit")
return startupmanager.get(.zinit)!
}
.systemd {
console.print_debug("installer: kubernetes_installer' startupmanager get systemd")
return startupmanager.get(.systemd)!
}
else {
console.print_debug("installer: kubernetes_installer' startupmanager get auto")
return startupmanager.get(.auto)!
}
}
}
// load from disk and make sure it is properly initialized
pub fn (mut self KubernetesInstaller) reload() ! {
switch(self.name)
self = obj_init(self)!
}
pub fn (mut self KubernetesInstaller) start() ! {
switch(self.name)
if self.running()! {
return
}
console.print_header('installer: kubernetes_installer start')
if !installed()! {
return error('K3s is not installed. Please run install_master, join_master, or install_worker first.')
}
// Ensure data directory exists
osal.dir_ensure(self.data_dir)!
// Create manifests directory for auto-apply
manifests_dir := '${self.data_dir}/server/manifests'
osal.dir_ensure(manifests_dir)!
for zprocess in self.startupcmd()! {
mut sm := startupmanager_get(zprocess.startuptype)!
console.print_debug('installer: kubernetes_installer starting with ${zprocess.startuptype}...')
sm.new(zprocess)!
sm.start(zprocess.name)!
}
for _ in 0 .. 50 {
if self.running()! {
return
}
time.sleep(100 * time.millisecond)
}
return error('kubernetes_installer did not start properly.')
}
pub fn (mut self KubernetesInstaller) install_start(args InstallArgs) ! {
switch(self.name)
self.install(args)!
self.start()!
}
pub fn (mut self KubernetesInstaller) stop() ! {
switch(self.name)
for zprocess in self.startupcmd()! {
mut sm := startupmanager_get(zprocess.startuptype)!
sm.stop(zprocess.name)!
}
}
pub fn (mut self KubernetesInstaller) restart() ! {
switch(self.name)
self.stop()!
self.start()!
}
pub fn (mut self KubernetesInstaller) running() !bool {
switch(self.name)
// walk over the generic processes, if not running return
for zprocess in self.startupcmd()! {
if zprocess.startuptype != .screen {
mut sm := startupmanager_get(zprocess.startuptype)!
r := sm.running(zprocess.name)!
if r == false {
return false
}
}
}
return running()!
}
@[params]
pub struct InstallArgs {
pub mut:
@@ -170,6 +316,7 @@ pub fn (mut self KubernetesInstaller) install(args InstallArgs) ! {
pub fn (mut self KubernetesInstaller) destroy() ! {
switch(self.name)
self.stop() or {}
destroy()!
}

View File

@@ -1,27 +1,203 @@
module kubernetes_installer
import incubaid.herolib.data.encoderhero
import incubaid.herolib.osal.core as osal
import os
import rand
pub const version = '1.31.0'
pub const version = 'v1.33.1'
const singleton = true
const default = true
// Kubernetes installer - handles kubectl installation
// K3s installer - handles K3s cluster installation with Mycelium IPv6 networking
@[heap]
pub struct KubernetesInstaller {
pub mut:
name string = 'default'
name string = 'default'
// K3s version to install
k3s_version string = version
// Data directory for K3s (default: ~/hero/var/k3s)
data_dir string
// Unique node name/identifier
node_name string
// Mycelium interface name (auto-detected if not specified)
mycelium_interface string
// Cluster token for authentication (auto-generated if empty)
token string
// Master URL for joining cluster (e.g., 'https://[ipv6]:6443')
master_url string
// Node IPv6 address (auto-detected from Mycelium if empty)
node_ip string
// Is this a master/control-plane node?
is_master bool
// Is this the first master (uses --cluster-init)?
is_first_master bool
}
// your checking & initialization code if needed
fn obj_init(mycfg_ KubernetesInstaller) !KubernetesInstaller {
mut mycfg := mycfg_
// Set default data directory if not provided
if mycfg.data_dir == '' {
mycfg.data_dir = os.join_path(os.home_dir(), 'hero/var/k3s')
}
// Expand home directory in data_dir if it contains ~
if mycfg.data_dir.starts_with('~') {
mycfg.data_dir = mycfg.data_dir.replace_once('~', os.home_dir())
}
// Set default node name if not provided
if mycfg.node_name == '' {
hostname := os.execute('hostname').output.trim_space()
mycfg.node_name = if hostname != '' { hostname } else { 'k3s-node-${rand.hex(4)}' }
}
// Auto-detect Mycelium interface if not provided
if mycfg.mycelium_interface == '' {
mycfg.mycelium_interface = detect_mycelium_interface()!
}
// Generate token if not provided and this is the first master
if mycfg.token == '' && mycfg.is_first_master {
// Generate a secure random token
mycfg.token = rand.hex(32)
}
// Note: Validation of token/master_url is done in the specific action functions
// (join_master, install_worker) where the context is clear
return mycfg
}
// Get path to kubeconfig file
pub fn (self &KubernetesInstaller) kubeconfig_path() string {
return '${self.data_dir}/server/cred/admin.kubeconfig'
}
// Get Mycelium IPv6 address from interface
pub fn (self &KubernetesInstaller) get_mycelium_ipv6() !string {
// If node_ip is already set, use it
if self.node_ip != '' {
return self.node_ip
}
// Otherwise, detect from Mycelium interface
return get_mycelium_ipv6_from_interface(self.mycelium_interface)!
}
// Auto-detect Mycelium interface by finding 400::/7 route
fn detect_mycelium_interface() !string {
// Find all 400::/7 routes
route_result := osal.exec(
cmd: 'ip -6 route | grep "^400::/7"'
stdout: false
raise_error: false
)!
if route_result.exit_code != 0 || route_result.output.trim_space() == '' {
return error('No Mycelium interface found (no 400::/7 route detected). Please ensure Mycelium is installed and running.')
}
// Parse interface name from route (format: "400::/7 dev <interface> ...")
route_line := route_result.output.trim_space()
parts := route_line.split(' ')
for i, part in parts {
if part == 'dev' && i + 1 < parts.len {
iface := parts[i + 1]
return iface
}
}
return error('Could not parse Mycelium interface from route output: ${route_line}')
}
// Helper function to detect Mycelium IPv6 from interface
fn get_mycelium_ipv6_from_interface(iface string) !string {
// Step 1: Find the 400::/7 route via the interface
route_result := osal.exec(
cmd: 'ip -6 route | grep "^400::/7.*dev ${iface}"'
stdout: false
) or { return error('No 400::/7 route found via interface ${iface}') }
route_line := route_result.output.trim_space()
if route_line == '' {
return error('No 400::/7 route found via interface ${iface}')
}
// Step 2: Get all global IPv6 addresses on the interface
addr_result := osal.exec(
cmd: 'ip -6 addr show dev ${iface} scope global | grep inet6 | awk \'{print $2}\' | cut -d/ -f1'
stdout: false
)!
ipv6_list := addr_result.output.split_into_lines()
// Check if route has a next-hop (via keyword)
parts := route_line.split(' ')
mut nexthop := ''
for i, part in parts {
if part == 'via' && i + 1 < parts.len {
nexthop = parts[i + 1]
break
}
}
if nexthop != '' {
// Route has a next-hop: match by prefix (first 4 segments)
prefix_parts := nexthop.split(':')
if prefix_parts.len < 4 {
return error('Invalid IPv6 next-hop format: ${nexthop}')
}
prefix := prefix_parts[0..4].join(':')
// Step 3: Match the one with the same prefix
for ip in ipv6_list {
ip_trimmed := ip.trim_space()
if ip_trimmed == '' {
continue
}
ip_parts := ip_trimmed.split(':')
if ip_parts.len >= 4 {
ip_prefix := ip_parts[0..4].join(':')
if ip_prefix == prefix {
return ip_trimmed
}
}
}
return error('No global IPv6 address found on ${iface} matching prefix ${prefix}')
} else {
// Direct route (no via): return the first IPv6 address in 400::/7 range
for ip in ipv6_list {
ip_trimmed := ip.trim_space()
if ip_trimmed == '' {
continue
}
// Check if IP is in 400::/7 range (starts with 4 or 5)
if ip_trimmed.starts_with('4') || ip_trimmed.starts_with('5') {
return ip_trimmed
}
}
return error('No global IPv6 address found on ${iface} in 400::/7 range')
}
}
// called before start if needed
fn configure() ! {
mut cfg := get()!
// Ensure data directory exists
osal.dir_ensure(cfg.data_dir)!
// Create manifests directory for auto-apply
manifests_dir := '${cfg.data_dir}/server/manifests'
osal.dir_ensure(manifests_dir)!
}
/////////////NORMALLY NO NEED TO TOUCH


@@ -1,3 +0,0 @@
https://github.com/codescalers/kubecloud/blob/master/k3s/native_guide/k3s_killall.sh
still need to implement this


@@ -1,44 +1,224 @@
# K3s Installer
Complete K3s cluster installer with multi-master HA support, worker nodes, and Mycelium IPv6 networking.
## Features
- **Multi-Master HA**: Run multiple master nodes; the first is bootstrapped with `--cluster-init`
- **Worker Nodes**: Add worker nodes to the cluster
- **Mycelium IPv6**: Automatic detection of Mycelium IPv6 addresses from the 400::/7 range
- **Lifecycle Management**: Start, stop, restart K3s via startupmanager (systemd/zinit/screen)
- **Join Scripts**: Auto-generate heroscripts for joining additional nodes
- **Complete Cleanup**: Destroy removes all K3s components, network interfaces, and data
## Quick Start
### Install First Master
```v
import incubaid.herolib.installers.virt.kubernetes_installer

heroscript := "
!!kubernetes_installer.configure
    name:'k3s_master_1'
    k3s_version:'v1.33.1'
    node_name:'master-1'
    mycelium_interface:'mycelium0'

!!kubernetes_installer.install_master name:'k3s_master_1'
!!kubernetes_installer.start name:'k3s_master_1'
"

kubernetes_installer.play(heroscript: heroscript)!
```
### Join Additional Master (HA)
```v
heroscript := "
!!kubernetes_installer.configure
name:'k3s_master_2'
node_name:'master-2'
token:'<TOKEN_FROM_FIRST_MASTER>'
master_url:'https://[<MASTER_IPV6>]:6443'
!!kubernetes_installer.join_master name:'k3s_master_2'
!!kubernetes_installer.start name:'k3s_master_2'
"
kubernetes_installer.play(heroscript: heroscript)!
```
### Install Worker Node
```v
heroscript := "
!!kubernetes_installer.configure
name:'k3s_worker_1'
node_name:'worker-1'
token:'<TOKEN_FROM_FIRST_MASTER>'
master_url:'https://[<MASTER_IPV6>]:6443'
!!kubernetes_installer.install_worker name:'k3s_worker_1'
!!kubernetes_installer.start name:'k3s_worker_1'
"
kubernetes_installer.play(heroscript: heroscript)!
```
## Configuration Options
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `name` | string | 'default' | Instance name |
| `k3s_version` | string | 'v1.33.1' | K3s version to install |
| `data_dir` | string | '~/hero/var/k3s' | Data directory for K3s |
| `node_name` | string | hostname | Unique node identifier |
| `mycelium_interface` | string | auto-detected | Mycelium interface name (auto-detected from 400::/7 route) |
| `token` | string | auto-generated | Cluster authentication token |
| `master_url` | string | - | Master URL for joining (e.g., 'https://[ipv6]:6443') |
| `node_ip` | string | auto-detected | Node IPv6 (auto-detected from Mycelium) |
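
All of these defaults are resolved when the instance is loaded (see `obj_init` in the installer source), so a minimal configuration only needs a `name`. Below is a short sketch of inspecting the resolved values from V rather than heroscript; it assumes the module's generated `get()` factory, which the examples above also rely on:

```v
import incubaid.herolib.installers.virt.kubernetes_installer

fn main() {
	// Load the default instance; unset fields such as data_dir, node_name
	// and token are filled in with the defaults listed in the table above.
	installer := kubernetes_installer.get()!
	println('data dir:   ${installer.data_dir}')
	println('node name:  ${installer.node_name}')
	println('kubeconfig: ${installer.kubeconfig_path()}')
}
```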
## Actions
### Installation Actions
- `install_master` - Install first master node (generates token, uses --cluster-init)
- `join_master` - Join as additional master (requires token + master_url)
- `install_worker` - Install worker node (requires token + master_url)
### Lifecycle Actions
- `start` - Start K3s via startupmanager
- `stop` - Stop K3s
- `restart` - Restart K3s
- `destroy` - Complete cleanup (removes all K3s components)
### Utility Actions
- `get_kubeconfig` - Get kubeconfig content
- `generate_join_script` - Generate heroscript for joining nodes
## Requirements
- **OS**: Ubuntu (installer checks and fails on non-Ubuntu systems)
- **Mycelium**: Must be installed and running with interface in 400::/7 range
- **Root Access**: Required for installing system packages and managing network
## How It Works
### Mycelium IPv6 Detection
The installer automatically detects your Mycelium IPv6 address by:
1. Finding the 400::/7 route via the Mycelium interface
2. Extracting the next-hop IPv6 and getting the prefix (first 4 segments)
3. Matching global IPv6 addresses on the interface with the same prefix
4. Using the matched IPv6 for K3s `--node-ip`
This ensures K3s binds to the correct Mycelium IPv6 even if the server has other IPv6 addresses.
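
Step 3 is a plain string comparison on the first four colon-separated segments. A minimal, self-contained sketch of that matching rule (the addresses below are made-up examples; the installer's real logic also handles the direct-route case):

```v
// Sketch of the prefix-matching rule in step 3 above. Assumes the first four
// segments are written out explicitly, as Mycelium addresses typically are.
fn matches_mycelium_prefix(nexthop string, candidate string) bool {
	nh := nexthop.split(':')
	cd := candidate.split(':')
	if nh.len < 4 || cd.len < 4 {
		return false
	}
	return nh[0..4].join(':') == cd[0..4].join(':')
}

fn main() {
	// Hypothetical addresses, for illustration only.
	assert matches_mycelium_prefix('47c:ab39:1f2e:8d4::', '47c:ab39:1f2e:8d4::2')
	assert !matches_mycelium_prefix('47c:ab39:1f2e:8d4::', '52e:1:2:3::2')
}
```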
### Cluster Setup
**First Master:**
- Uses `--cluster-init` flag
- Auto-generates secure token
- Configures IPv6 CIDRs: cluster=2001:cafe:42::/56, service=2001:cafe:43::/112
- Generates join script for other nodes
**Additional Masters:**
- Joins with `--server <master_url>`
- Requires token and master_url from first master
- Provides HA for control plane
**Workers:**
- Joins as agent with `--server <master_url>`
- Requires the token and master_url from the first master (see the sketch below)
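
The three roles above differ mainly in which flags end up on the k3s command line. A hedged sketch of that mapping (`--cluster-init`, `--server`, `--token` and `--node-ip` are standard k3s flags; the installer's actual command assembly may differ in detail):

```v
struct NodeCfg {
	is_master       bool
	is_first_master bool
	master_url      string
	token           string
	node_ip         string
}

// Sketch: which k3s arguments each role receives, per the description above.
fn k3s_args(cfg NodeCfg) []string {
	mut args := []string{}
	if cfg.is_master {
		args << 'server'
		if cfg.is_first_master {
			args << '--cluster-init' // first master bootstraps the embedded etcd cluster
		} else {
			args << '--server ${cfg.master_url}' // additional masters join via the first
		}
	} else {
		args << 'agent'
		args << '--server ${cfg.master_url}' // workers always join an existing master
	}
	args << '--token ${cfg.token}'
	args << '--node-ip ${cfg.node_ip}' // the Mycelium IPv6 detected earlier
	return args
}

fn main() {
	first := NodeCfg{is_master: true, is_first_master: true, token: 'abc', node_ip: '47c::1'}
	println(k3s_args(first))
}
```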
### Cleanup
The `destroy` action performs complete cleanup:
- Stops K3s process
- Removes network interfaces (cni0, flannel.*, etc.)
- Unmounts kubelet mounts
- Removes data directory
- Cleans up iptables/ip6tables rules
- Removes CNI namespaces (see the sketch below)
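
Most of these steps reduce to shell commands. A rough, illustrative sketch of the network/data portion (`cleanup_k3s` is a hypothetical helper; the interface names beyond `cni0` and the exact command set are assumptions, and the real `destroy` action does more):

```v
import incubaid.herolib.osal.core as osal
import os

// Illustrative cleanup of interfaces, mounts and data (not the full destroy).
fn cleanup_k3s(data_dir string) ! {
	// Delete K3s-created network interfaces; ignore ones that don't exist.
	for iface in ['cni0', 'flannel.1', 'flannel-v6.1'] {
		osal.exec(cmd: 'ip link delete ${iface}', raise_error: false)!
	}
	// Unmount leftover kubelet mounts under the data directory.
	osal.exec(
		cmd:         'mount | grep "${data_dir}" | awk \'{print $3}\' | xargs -r umount'
		raise_error: false
	)!
	// Remove the data directory itself.
	os.rmdir_all(data_dir) or {}
}
```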
## Example Workflow
1. **Install first master on server1:**
```bash
hero run templates/examples.heroscript
# Note the token and IPv6 address displayed
```
2. **Join additional master on server2:**
```bash
# Edit examples.heroscript Section 2 with token and master_url
hero run templates/examples.heroscript
```
3. **Add worker on server3:**
```bash
# Edit examples.heroscript Section 3 with token and master_url
hero run templates/examples.heroscript
```
4. **Verify cluster:**
```bash
kubectl get nodes
kubectl get pods --all-namespaces
```
## Kubeconfig
The kubeconfig is located at: `<data_dir>/server/cred/admin.kubeconfig`
To use kubectl:
```bash
export KUBECONFIG=~/hero/var/k3s/server/cred/admin.kubeconfig
kubectl get nodes
```
Or copy to default location:
```bash
mkdir -p ~/.kube
cp ~/hero/var/k3s/server/cred/admin.kubeconfig ~/.kube/config
```
## Troubleshooting
**K3s won't start:**
- Check if Mycelium is running: `ip -6 addr show mycelium0`
- Verify 400::/7 route exists: `ip -6 route | grep 400::/7`
- Check logs: `journalctl -u k3s_* -f`
**Can't join cluster:**
- Verify token matches first master
- Ensure master_url uses correct IPv6 in brackets: `https://[ipv6]:6443`
- Check network connectivity over Mycelium: `ping6 <master_ipv6>`
**Cleanup issues:**
- Run destroy with sudo if needed
- Manually check for remaining processes: `pgrep -f k3s`
- Check for remaining mounts: `mount | grep k3s`
## See Also
- [K3s Documentation](https://docs.k3s.io/)
- [Mycelium Documentation](https://github.com/threefoldtech/mycelium)
- [Example Heroscript](templates/examples.heroscript)



@@ -0,0 +1,116 @@
#!/usr/bin/env hero
// ============================================================================
// K3s Cluster Installation Examples
// ============================================================================
//
// This file contains examples for installing K3s clusters with Mycelium IPv6
// networking. Choose the appropriate section based on your node type.
//
// Prerequisites:
// - Ubuntu OS
// - Mycelium installed and running
// - Mycelium interface up, in the 400::/7 range (auto-detected if not specified)
// ============================================================================
// ============================================================================
// SECTION 1: Install First Master Node
// ============================================================================
// This creates the initial master node and initializes the cluster.
// The token will be auto-generated and displayed for use with other nodes.
!!kubernetes_installer.configure
name:'k3s_master_1'
k3s_version:'v1.33.1'
data_dir:'~/hero/var/k3s'
node_name:'master-1'
// mycelium_interface:'mycelium0' // Optional: auto-detected if not specified
// Install as first master (will generate token and use --cluster-init)
!!kubernetes_installer.install_master name:'k3s_master_1'
// Start K3s
!!kubernetes_installer.start name:'k3s_master_1'
// Get kubeconfig (optional - to verify installation)
// !!kubernetes_installer.get_kubeconfig name:'k3s_master_1'
// Generate join script for other nodes (optional)
// !!kubernetes_installer.generate_join_script name:'k3s_master_1'
// ============================================================================
// SECTION 2: Join as Additional Master (HA Setup)
// ============================================================================
// Use this to add more master nodes for high availability.
// You MUST have the token and master_url from the first master.
/*
!!kubernetes_installer.configure
name:'k3s_master_2'
k3s_version:'v1.33.1'
data_dir:'~/hero/var/k3s'
node_name:'master-2'
// mycelium_interface:'mycelium0' // Optional: auto-detected if not specified
token:'<TOKEN_FROM_FIRST_MASTER>'
master_url:'https://[<MASTER_IPV6>]:6443'
// Join as additional master
!!kubernetes_installer.join_master name:'k3s_master_2'
// Start K3s
!!kubernetes_installer.start name:'k3s_master_2'
*/
// ============================================================================
// SECTION 3: Install Worker Node
// ============================================================================
// Use this to add worker nodes to the cluster.
// You MUST have the token and master_url from the first master.
/*
!!kubernetes_installer.configure
name:'k3s_worker_1'
k3s_version:'v1.33.1'
data_dir:'~/hero/var/k3s'
node_name:'worker-1'
// mycelium_interface:'mycelium0' // Optional: auto-detected if not specified
token:'<TOKEN_FROM_FIRST_MASTER>'
master_url:'https://[<MASTER_IPV6>]:6443'
// Install as worker
!!kubernetes_installer.install_worker name:'k3s_worker_1'
// Start K3s
!!kubernetes_installer.start name:'k3s_worker_1'
*/
// ============================================================================
// SECTION 4: Lifecycle Management
// ============================================================================
// Common operations for managing K3s
// Stop K3s
// !!kubernetes_installer.stop name:'k3s_master_1'
// Restart K3s
// !!kubernetes_installer.restart name:'k3s_master_1'
// Get kubeconfig
// !!kubernetes_installer.get_kubeconfig name:'k3s_master_1'
// Destroy K3s (complete cleanup)
// !!kubernetes_installer.destroy name:'k3s_master_1'
// ============================================================================
// NOTES:
// ============================================================================
// 1. Replace <TOKEN_FROM_FIRST_MASTER> with the actual token displayed after
// installing the first master
// 2. Replace <MASTER_IPV6> with the Mycelium IPv6 address of the first master
// 3. The data_dir defaults to ~/hero/var/k3s if not specified
// 4. The mycelium_interface is auto-detected from the 400::/7 route if not specified
// 5. The k3s_version defaults to 'v1.33.1' if not specified
// 6. After installation, use kubectl to manage your cluster:
// - kubectl get nodes
// - kubectl get pods --all-namespaces
// 7. The kubeconfig is located at: <data_dir>/server/cred/admin.kubeconfig


@@ -0,0 +1,54 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.core.playcmds
import incubaid.herolib.ui.console
// ============================================================================
// K3s Join Additional Master (HA Setup)
// ============================================================================
// This script shows how to join an additional master node to an existing
// K3s cluster for high availability.
//
// Prerequisites:
// 1. First master must be running
// 2. You need the token from the first master
// 3. You need the master URL (IPv6 address and port)
// ============================================================================
console.print_header('='.repeat(80))
console.print_header('K3s Join Additional Master Node')
console.print_header('='.repeat(80))
// IMPORTANT: Replace these values with your actual cluster information
// You can get these from the first master's join script or by running:
// !!kubernetes_installer.generate_join_script name:"k3s_master_1"
master_token := 'YOUR_CLUSTER_TOKEN_HERE' // Get from first master
master_url := 'https://[YOUR_MASTER_IPV6]:6443' // First master's IPv6 address
join_master_script := '
!!kubernetes_installer.configure
name:"k3s_master_2"
k3s_version:"v1.33.1"
data_dir:"~/hero/var/k3s"
node_name:"master-2"
mycelium_interface:"mycelium"
token:"${master_token}"
master_url:"${master_url}"
!!kubernetes_installer.join_master name:"k3s_master_2"
!!kubernetes_installer.start name:"k3s_master_2"
'
console.print_header('⚠️ Before running, make sure to:')
console.print_header(' 1. Update master_token with your cluster token')
console.print_header(' 2. Update master_url with your first master IPv6')
console.print_header(' 3. Ensure first master is running')
console.print_header('')
// Uncomment the line below to actually run the join
// playcmds.run(heroscript: join_master_script)!
console.print_header('✅ Script ready. Uncomment playcmds.run() to execute.')
console.print_header('='.repeat(80))


@@ -0,0 +1,53 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.core.playcmds
import incubaid.herolib.ui.console
// ============================================================================
// K3s Join Worker Node
// ============================================================================
// This script shows how to join a worker node to an existing K3s cluster.
//
// Prerequisites:
// 1. At least one master must be running
// 2. You need the token from the master
// 3. You need the master URL (IPv6 address and port)
// ============================================================================
console.print_header('='.repeat(80))
console.print_header('K3s Join Worker Node')
console.print_header('='.repeat(80))
// IMPORTANT: Replace these values with your actual cluster information
// You can get these from the master's join script or by running:
// !!kubernetes_installer.generate_join_script name:"k3s_master_1"
master_token := 'YOUR_CLUSTER_TOKEN_HERE' // Get from master
master_url := 'https://[YOUR_MASTER_IPV6]:6443' // Master's IPv6 address
join_worker_script := '
!!kubernetes_installer.configure
name:"k3s_worker_1"
k3s_version:"v1.33.1"
data_dir:"~/hero/var/k3s"
node_name:"worker-1"
mycelium_interface:"mycelium"
token:"${master_token}"
master_url:"${master_url}"
!!kubernetes_installer.install_worker name:"k3s_worker_1"
!!kubernetes_installer.start name:"k3s_worker_1"
'
console.print_header('⚠️ Before running, make sure to:')
console.print_header(' 1. Update master_token with your cluster token')
console.print_header(' 2. Update master_url with your master IPv6')
console.print_header(' 3. Ensure master is running')
console.print_header('')
// Uncomment the line below to actually run the join
// playcmds.run(heroscript: join_worker_script)!
console.print_header('✅ Script ready. Uncomment playcmds.run() to execute.')
console.print_header('='.repeat(80))
