Compare commits: v1.0.35...development (322 commits)

.github/workflows/README.md (new file, 32 lines)

# Building Hero for release

Generally speaking, our scripts and docs for building Hero produce non-portable binaries for Linux. While that's fine for development purposes, statically linked binaries are much more convenient for releases and distribution.

The release workflow here creates a static binary for Linux using an Alpine container. A few notes follow about how that's done.

## Static builds in vlang

Since V compiles to C in our case, we are really concerned with how to produce static C builds. The V project provides [some guidance](https://github.com/vlang/v?tab=readme-ov-file#docker-with-alpinemusl) on using an Alpine container and passing `-cflags -static` to the V compiler.

That's fine for some projects. Hero has a dependency on the `libpq` C library for Postgres functionality, however, and this creates a complication.

## Static linking libpq

In order to create a static build of Hero on Alpine, we need to install some additional packages:

* openssl-libs-static
* postgresql-dev

The full `apk` command to prepare the container for building looks like this:

```bash
apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
```

Then we also need to instruct the C compiler to link against the static variants of the Postgres support libraries. Here's the build command:

```bash
v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v
```

Note that gcc is also the preferred compiler for static builds.
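
As a quick check that a build really came out static (a minimal sketch, assuming a Linux shell next to the freshly built binary; the `cli/hero` path is illustrative):

```bash
# Inspect the build output (path illustrative; adjust to your -o target).
file cli/hero
# A successful static build reports "statically linked".

ldd cli/hero || true
# For a static executable, ldd prints "not a dynamic executable"
# (exact wording varies by libc).
```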

.github/workflows/documentation.yml (2 changes)

@@ -27,7 +27,7 @@ jobs:
         uses: actions/checkout@v4

       - name: Setup Vlang
-        run: ./install_v.sh
+        run: ./scripts/install_v.sh

       - name: Generate documentation
         run: |

.github/workflows/hero_build.yml (39 changes)

@@ -35,9 +35,6 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@v4

-      # We do the workaround as described here https://github.com/Incubaid/herolib?tab=readme-ov-file#tcc-compiler-error-on-macos
-      # gcc and clang also don't work on macOS due to https://github.com/vlang/v/issues/25467
-      # We can change the compiler or remove this when one is fixed
       - name: Setup V & Herolib
         id: setup
         shell: bash
@@ -46,9 +43,6 @@
           cd v
           make
           ./v symlink
-          if [ "${{ runner.os }}" = "macOS" ]; then
-            sudo sed -i '' '618,631d' /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/math.h
-          fi
           cd -

           mkdir -p ~/.vmodules/incubaid
@@ -56,51 +50,34 @@
           echo "Herolib symlink created to $(pwd)/lib"
         timeout-minutes: 10

-      # We can't make static builds for Linux easily, since we link to libql
-      # (Postgres) and this has no static version available in the Alpine
-      # repos. Therefore we build dynamic binaries for both glibc and musl.
-      #
-      # Again we work around a bug limiting our choice of C compiler tcc won't
-      # work on Alpine due to https://github.com/vlang/v/issues/24866
-      # So always use gcc for Linux
-      #
-      # For macOS, we can only use tcc (see above), but then we hit issues using
-      # the garbage collector, so disable that
+      # For Linux, we build a static binary linked against musl on Alpine. For
+      # static linking, gcc is preferred
       - name: Build Hero
         timeout-minutes: 15
         run: |
-          set -e
+          set -ex
           if [ "${{ runner.os }}" = "Linux" ]; then
            # Build for glibc
            v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}

            # Build for musl using Alpine in Docker
            docker run --rm \
              -v ${{ github.workspace }}/lib:/root/.vmodules/incubaid/herolib \
              -v ${{ github.workspace }}:/herolib \
              -w /herolib \
-              alpine \
+              alpine:3.22 \
              sh -c '
-                apk add --no-cache bash git build-base openssl-dev libpq-dev
+                set -ex
+                apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
                cd v
                make clean
                make
                ./v symlink
                cd ..
-                v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}-musl
+                v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v -o cli/hero-${{ matrix.target }}-musl
              '

           else
-            v -w -d use_openssl -enable-globals -gc none -cc tcc cli/hero.v -o cli/hero-${{ matrix.target }}
+            v -w -d use_openssl -enable-globals -cc clang cli/hero.v -o cli/hero-${{ matrix.target }}
           fi

       - name: Upload glibc binary
         if: runner.os == 'Linux'
         uses: actions/upload-artifact@v4
         with:
           name: hero-${{ matrix.target }}
           path: cli/hero-${{ matrix.target }}

       - name: Upload musl binary
         if: runner.os == 'Linux'
         uses: actions/upload-artifact@v4

.github/workflows/test.yml (2 changes)

@@ -24,7 +24,7 @@ jobs:
         run: |
           # Updating man-db takes a long time on every run. We don't need it
           sudo apt-get remove -y --purge man-db
-          ./install_v.sh
+          ./scripts/install_v.sh

       - name: Setup Herolib from current branch
         run: |

.gitignore (3 changes)

@@ -57,4 +57,5 @@ MCP_HTTP_REST_IMPLEMENTATION_PLAN.md
 tmux_logger
 release
 install_herolib
 doc
+priv_key.bin

@@ -1,5 +0,0 @@
-
-when fixing or creating code, refer to the following hints:
-@aiprompts/vlang_herolib_core.md
-
-

@@ -1,6 +0,0 @@
-{
-  "context": "Workspace",
-  "bindings": {
-    "cmd-r": ["task::Spawn", { "task_name": "ET", "reveal_target": "center" }]
-  }
-}

@@ -1,47 +0,0 @@
-[
-  {
-    "label": "ET",
-    "command": "for i in {1..5}; do echo \"Hello $i/5\"; sleep 1; done",
-    //"args": [],
-    // Env overrides for the command, will be appended to the terminal's environment from the settings.
-    "env": { "foo": "bar" },
-    // Current working directory to spawn the command into, defaults to current project root.
-    //"cwd": "/path/to/working/directory",
-    // Whether to use a new terminal tab or reuse the existing one to spawn the process, defaults to `false`.
-    "use_new_terminal": true,
-    // Whether to allow multiple instances of the same task to be run, or rather wait for the existing ones to finish, defaults to `false`.
-    "allow_concurrent_runs": false,
-    // What to do with the terminal pane and tab, after the command was started:
-    // * `always` — always show the task's pane, and focus the corresponding tab in it (default)
-    // * `no_focus` — always show the task's pane, add the task's tab in it, but don't focus it
-    // * `never` — do not alter focus, but still add/reuse the task's tab in its pane
-    "reveal": "always",
-    // What to do with the terminal pane and tab, after the command has finished:
-    // * `never` — Do nothing when the command finishes (default)
-    // * `always` — always hide the terminal tab, hide the pane also if it was the last tab in it
-    // * `on_success` — hide the terminal tab on task success only, otherwise behaves similar to `always`
-    "hide": "never",
-    // Which shell to use when running a task inside the terminal.
-    // May take 3 values:
-    // 1. (default) Use the system's default terminal configuration in /etc/passwd
-    //    "shell": "system"
-    // 2. A program:
-    //    "shell": {
-    //      "program": "sh"
-    //    }
-    // 3. A program with arguments:
-    //    "shell": {
-    //      "with_arguments": {
-    //        "program": "/bin/bash",
-    //        "args": ["--login"]
-    //      }
-    //    }
-    "shell": "system",
-    // Whether to show the task line in the output of the spawned task, defaults to `true`.
-    "show_summary": true,
-    // Whether to show the command line in the output of the spawned task, defaults to `true`.
-    // "show_output": true,
-    // Represents the tags for inline runnable indicators, or spawning multiple tasks at once.
-    "tags": ["DODO"]
-  }
-]

@@ -24,7 +24,7 @@ Thank you for your interest in contributing to Herolib! This document provides g
 For developers, you can use the automated installation script:

 ```bash
-curl 'https://raw.githubusercontent.com/incubaid/herolib/refs/heads/development/install_v.sh' > /tmp/install_v.sh
+curl 'https://raw.githubusercontent.com/incubaid/herolib/refs/heads/development/scripts/install_v.sh' > /tmp/install_v.sh
 bash /tmp/install_v.sh --analyzer --herolib
 # IMPORTANT: Start a new shell after installation for paths to be set correctly
 ```

README.md (24 changes)

@@ -14,7 +14,7 @@ Herolib is an opinionated library primarily used by ThreeFold to automate cloud
 The Hero tool can be installed with a single command:

 ```bash
-curl https://raw.githubusercontent.com/incubaid/herolib/refs/heads/development/install_hero.sh | bash
+curl https://raw.githubusercontent.com/incubaid/herolib/refs/heads/development/scripts/install_hero.sh | bash
 ```

 Hero will be installed in:
@@ -35,11 +35,11 @@ The Hero tool can be used to work with git, build documentation, interact with H
 For development purposes, use the automated installation script:

 ```bash
-curl 'https://raw.githubusercontent.com/incubaid/herolib/refs/heads/development/install_v.sh' > /tmp/install_v.sh
+curl 'https://raw.githubusercontent.com/incubaid/herolib/refs/heads/development/scripts/install_v.sh' > /tmp/install_v.sh
 bash /tmp/install_v.sh --analyzer --herolib

 #do not forget to do the following this makes sure vtest and vrun exists
-cd ~/code/github/incubaid/herolib
+cd ~/code/github/incubaid/herolib/scripts
 v install_herolib.vsh

 # IMPORTANT: Start a new shell after installation for paths to be set correctly
@@ -51,7 +51,7 @@ v install_herolib.vsh
 ```
 V & HeroLib Installer Script

-Usage: ~/code/github/incubaid/herolib/install_v.sh [options]
+Usage: ~/code/github/incubaid/herolib/scripts/install_v.sh [options]

 Options:
   -h, --help Show this help message
@@ -61,12 +61,12 @@ Options:
   --herolib Install our herolib

 Examples:
-  ~/code/github/incubaid/herolib/install_v.sh
-  ~/code/github/incubaid/herolib/install_v.sh --reset
-  ~/code/github/incubaid/herolib/install_v.sh --remove
-  ~/code/github/incubaid/herolib/install_v.sh --analyzer
-  ~/code/github/incubaid/herolib/install_v.sh --herolib
-  ~/code/github/incubaid/herolib/install_v.sh --reset --analyzer # Fresh install of both
+  ~/code/github/incubaid/herolib/scripts/install_v.sh
+  ~/code/github/incubaid/herolib/scripts/install_v.sh --reset
+  ~/code/github/incubaid/herolib/scripts/install_v.sh --remove
+  ~/code/github/incubaid/herolib/scripts/install_v.sh --analyzer
+  ~/code/github/incubaid/herolib/scripts/install_v.sh --herolib
+  ~/code/github/incubaid/herolib/scripts/install_v.sh --reset --analyzer # Fresh install of both
 ```

 ## Features
@@ -175,7 +175,3 @@ To generate documentation locally:
 cd ~/code/github/incubaid/herolib
 bash doc.sh
 ```
-
-<!-- Security scan triggered at 2025-09-02 01:58:41 -->
-
-<!-- Security scan triggered at 2025-09-09 05:33:18 -->

@@ -16,4 +16,4 @@ NC='\033[0m' # No Color
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$SCRIPT_DIR"

-/workspace/herolib/install_v.sh
+/workspace/herolib/scripts/install_v.sh

aiprompts/README.md (new file, 77 lines)

# HeroLib AI Prompts (`aiprompts/`)

This directory contains AI-oriented instructions and manuals for working with the Hero tool and the `herolib` codebase.

It is the **entry point for AI agents** that generate or modify code/docs in this repository.

## Scope

- **Global rules for AI and V/Hero usage**
  See:
  - `herolib_start_here.md`
  - `vlang_herolib_core.md`
- **Herolib core modules**
  See:
  - `herolib_core/` (core HeroLib modules)
  - `herolib_advanced/` (advanced topics)
- **Docusaurus & Site module (Hero docs)**
  See:
  - `docusaurus/docusaurus_ebook_manual.md`
  - `lib/web/docusaurus/README.md` (authoritative module doc)
  - `lib/web/site/ai_instructions.md` and `lib/web/site/readme.md`
- **HeroModels / HeroDB**
  See:
  - `ai_instructions_hero_models.md`
  - `heromodel_instruct.md`
- **V language & web server docs** (upstream-style, mostly language-level)
  See:
  - `v_core/`, `v_advanced/`
  - `v_veb_webserver/`

## Sources of Truth

For any domain, **code and module-level docs are authoritative**:

- Core install & usage: `herolib/README.md`, scripts under `scripts/`
- Site module: `lib/web/site/ai_instructions.md`, `lib/web/site/readme.md`
- Docusaurus module: `lib/web/docusaurus/README.md`, `lib/web/docusaurus/*.v`
- DocTree client: `lib/data/doctree/client/README.md`
- HeroModels: `lib/hero/heromodels/*.v` + tests

`aiprompts/` files **must not contradict** these. When in doubt, follow the code / module docs first and treat prompts as guidance.

## Directory Overview

- `herolib_start_here.md` / `vlang_herolib_core.md`
  Global AI rules and V/Hero basics.
- `herolib_core/` & `herolib_advanced/`
  Per-module instructions for core/advanced HeroLib features.
- `docusaurus/`
  AI manual for building Hero docs/ebooks with the Docusaurus + Site + DocTree pipeline.
- `instructions/`
  Active, higher-level instructions (e.g. HeroDB base filesystem).
- `instructions_archive/`
  **Legacy / historical** prompt material. See `instructions_archive/README.md`.
- `todo/`
  Meta design/refactor notes (not up-to-date instructions for normal usage).
- `v_core/`, `v_advanced/`, `v_veb_webserver/`
  V language and web framework references used when generating V code.
- `bizmodel/`, `unpolly/`, `doctree/`, `documentor/`
  Domain-specific or feature-specific instructions.

## How to Treat Legacy Material

- Content under `instructions_archive/` is **kept for reference** and may describe older flows (e.g. older documentation or prompt pipelines).
  Do **not** use it as a primary source for new work unless explicitly requested.
- Some prompts mention **Doctree**; the current default docs pipeline uses **DocTree**. Doctree/`doctreeclient` is an alternative/legacy backend.

## Guidelines for AI Agents

- Always:
  - Respect global rules in `herolib_start_here.md` and `vlang_herolib_core.md`.
  - Prefer module docs under `lib/` when behavior or parameters differ.
  - Avoid modifying generated files (e.g. `*_ .v` or other generated artifacts) as instructed.
- When instructions conflict, resolve as:
  1. **Code & module docs in `lib/`**
  2. **AI instructions in `aiprompts/`**
  3. **Archived docs (`instructions_archive/`) only when explicitly needed**.

@@ -5,12 +5,15 @@ This file provides guidance to WARP (warp.dev) when working with code in this re
 ## Commands to Use

 ### Testing

 - **Run Tests**: Utilize `vtest ~/code/github/incubaid/herolib/lib/osal/package_test.v` to run specific tests.

 ## High-Level Architecture

 - **Project Structure**: The project is organized into multiple modules located in `lib` and `src` directories. Prioritized compilation and caching strategies are utilized across modules.
 - **Script Handling**: Vlang scripts are crucial and should follow instructions from `aiprompts/vlang_herolib_core.md`.

 ## Special Instructions

 - **Documentation Reference**: Always refer to `aiprompts/vlang_herolib_core.md` for essential instructions regarding Vlang and Heroscript code generation and execution.
 - **Environment Specifics**: Ensure Redis and other dependencies are configured as per scripts provided in the codebase.

@@ -2,11 +2,12 @@

 ## Overview

-This document provides clear instructions for AI agents to create new HeroDB models similar to `message.v`. These models are used to store structured data in Redis using the HeroDB system.
+This document provides clear instructions for AI agents to create new HeroDB models similar to `message.v`.
+These models are used to store structured data in Redis using the HeroDB system.
+The `message.v` example can be found in `lib/hero/heromodels/message.v`.

 ## Key Concepts

 - Each model represents a data type stored in Redis hash sets
 - Models must implement serialization/deserialization using the `encoder` module
 - Models inherit from the `Base` struct which provides common fields
 - The database uses a factory pattern for model access
@@ -107,7 +108,7 @@ Add your model to the ModelsFactory struct in `factory.v`:

 ```v
 pub struct ModelsFactory {
 pub mut:
-	messages DBCalendar
+	calendar DBCalendar
 	// ... other models
 }
 ```

@@ -2,13 +2,38 @@

 This manual provides a comprehensive guide on how to leverage HeroLib's Docusaurus integration, Doctree, and HeroScript to create and manage technical ebooks, optimized for AI-driven content generation and project management.

+## Quick Start - Recommended Ebook Structure
+
+The recommended directory structure for an ebook:
+
+```
+my_ebook/
+├── scan.hero              # DocTree collection scanning
+├── config.hero            # Site configuration
+├── menus.hero             # Navbar and footer configuration
+├── include.hero           # Docusaurus define and doctree export
+├── 1_intro.heroscript     # Page definitions (numbered for ordering)
+├── 2_concepts.heroscript  # More page definitions
+└── 3_advanced.heroscript  # Additional pages
+```
+
+**Running an ebook:**
+
+```bash
+# Start development server
+hero docs -d -p /path/to/my_ebook
+
+# Build for production
+hero docs -p /path/to/my_ebook
+```
+
 ## 1. Core Concepts

 To effectively create ebooks with HeroLib, it's crucial to understand the interplay of three core components:

-* **HeroScript**: A concise scripting language used to define the structure, configuration, and content flow of your Docusaurus site. It acts as the declarative interface for the entire process.
+* **HeroScript**: A concise scripting language used to define the structure, configuration, and content flow of your Docusaurus site. It acts as the declarative interface for the entire process. Files use the `.hero` extension for configuration and `.heroscript` for page definitions.
 * **Docusaurus**: A popular open-source static site generator. HeroLib uses Docusaurus as the underlying framework to render your ebook content into a navigable website.
-* **Doctree**: HeroLib's content management system. Doctree organizes your markdown files into "collections" and "pages," allowing for structured content retrieval and reuse across multiple projects.
+* **DocTree**: HeroLib's document collection layer. DocTree scans and exports markdown "collections" and "pages" that Docusaurus consumes.

 ## 2. Setting Up a Docusaurus Project with HeroLib

@@ -22,18 +47,26 @@ The `docusaurus.define` HeroScript directive configures the global settings for

 ```heroscript
 !!docusaurus.define
     name:"my_ebook" // must match the site name from !!site.config
     path_build: "/tmp/my_ebook_build"
     path_publish: "/tmp/my_ebook_publish"
     production: true
     update: true
+    reset: true // clean build dir before building (optional)
+    install: true // run bun install if needed (optional)
+    template_update: true // update the Docusaurus template (optional)
+    doctree_dir: "/tmp/doctree_export" // where DocTree exports collections
+    use_doctree: true // use DocTree as content backend
 ```

 **Arguments:**

 * `name` (string, required): The site/factory name. Must match the `name` used in `!!site.config` so Docusaurus can find the corresponding site definition.
 * `path_build` (string, optional): The local path where the Docusaurus site will be built. Defaults to `~/hero/var/docusaurus/build`.
 * `path_publish` (string, optional): The local path where the final Docusaurus site will be published (e.g., for deployment). Defaults to `~/hero/var/docusaurus/publish`.
 * `production` (boolean, optional): If `true`, the site will be built for production (optimized). Default is `false`.
 * `update` (boolean, optional): If `true`, the Docusaurus template and dependencies will be updated. Default is `false`.
+* `reset` (boolean, optional): If `true`, clean the build directory before starting.
+* `install` (boolean, optional): If `true`, run dependency installation (e.g., `bun install`).
+* `template_update` (boolean, optional): If `true`, update the Docusaurus template.
+* `doctree_dir` (string, optional): Directory where DocTree exports collections (used by the DocTree client in `lib/data/doctree/client`).
+* `use_doctree` (boolean, optional): If `true`, use the DocTree client as the content backend (default behavior).

 ### 2.2. Adding a Docusaurus Site (`docusaurus.add`)

@@ -53,7 +86,7 @@ The `docusaurus.add` directive defines an individual Docusaurus site (your ebook

 ```heroscript
 !!docusaurus.add
     name:"tfgrid_tech_ebook"
-    git_url:"https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/ebooks/tech"
+    git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/ebooks/tech"
     git_reset:true // Reset Git repository before pulling
     git_pull:true // Pull latest changes
     git_root:"/tmp/git_clones" // Optional: specify a root directory for git clones
@@ -190,18 +223,18 @@ Configure the footer section of your Docusaurus site.
 * `href` (string, optional): External URL for the link.
 * `to` (string, optional): Internal Docusaurus path.

-### 3.4. Build Destinations (`site.build_dest`, `site.build_dest_dev`)
+### 3.4. Publish Destinations (`site.publish`, `site.publish_dev`)

 Specify where the built Docusaurus site should be deployed. This typically involves an SSH connection defined elsewhere (e.g., `!!site.ssh_connection`).

 **HeroScript Example:**

 ```heroscript
-!!site.build_dest
+!!site.publish
     ssh_name:"production_server" // Name of a pre-defined SSH connection
     path:"/var/www/my-ebook" // Remote path on the server

-!!site.build_dest_dev
+!!site.publish_dev
     ssh_name:"dev_server"
     path:"/tmp/dev-ebook"
 ```
@@ -219,7 +252,7 @@ This powerful feature allows you to pull markdown content and assets from other

 ```heroscript
 !!site.import
-    url:'https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/collections/cloud_reinvented'
+    url:'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/cloud_reinvented'
     dest:'cloud_reinvented' // Destination subdirectory within your Docusaurus docs folder
     replace:'NAME:MyName, URGENCY:red' // Optional: comma-separated key:value pairs for text replacement
 ```
@@ -238,49 +271,60 @@ This is where you define the actual content pages and how they are organized int

 ```heroscript
 // Define a category
-!!site.page_category path:'introduction' label:"Introduction to Ebook" position:10
+!!site.page_category name:'introduction' label:"Introduction to Ebook"

-// Define a page within that category, linking to Doctree content
-!!site.page path:'introduction' src:"my_doctree_collection:chapter_1_overview"
+// Define pages - first page specifies collection, subsequent pages reuse it
+!!site.page src:"my_collection:chapter_1_overview"
     title:"Chapter 1: Overview"
     description:"A brief introduction to the ebook's content."
     position:1 // Order within the category
     hide_title:true // Hide the title on the page itself
+
+!!site.page src:"chapter_2_basics"
+    title:"Chapter 2: Basics"
+
+// New category with new collection
+!!site.page_category name:'advanced' label:"Advanced Topics"
+
+!!site.page src:"advanced_collection:performance"
+    title:"Performance Tuning"
+    hide_title:true
 ```

 **Arguments:**

 * **`site.page_category`**:
-  * `path` (string, required): The path to the category directory within your Docusaurus `docs` folder (e.g., `introduction` will create `docs/introduction/_category_.json`).
+  * `name` (string, required): Category identifier (used internally).
   * `label` (string, required): The display name for the category in the sidebar.
-  * `position` (int, optional): The order of the category in the sidebar.
-  * `sitename` (string, optional): If you have multiple Docusaurus sites defined, specify which site this category belongs to. Defaults to the current site's name.
+  * `position` (int, optional): The order of the category in the sidebar (auto-incremented if omitted).
 * **`site.page`**:
-  * `src` (string, required): **Crucial for Doctree integration.** This specifies the source of the page content in the format `collection_name:page_name`. HeroLib will fetch the markdown content from the specified Doctree collection and page.
-  * `path` (string, required): The relative path and filename for the generated markdown file within your Docusaurus `docs` folder (e.g., `introduction/chapter_1.md`). If only a directory is provided (e.g., `introduction/`), the `page_name` from `src` will be used as the filename.
-  * `title` (string, optional): The title of the page. If not provided, HeroLib will attempt to extract it from the markdown content or use the `page_name`.
+  * `src` (string, required): **Crucial for DocTree/collection integration.** Format: `collection_name:page_name` for the first page, or just `page_name` to reuse the previous collection.
+  * `title` (string, optional): The title of the page. If not provided, HeroLib extracts it from the markdown `# Heading` or uses the page name.
   * `description` (string, optional): A short description for the page, used in frontmatter.
   * `position` (int, optional): The order of the page within its category.
   * `hide_title` (boolean, optional): If `true`, the title will not be displayed on the page itself.
-  * `draft` (boolean, optional): If `true`, the page will be marked as a draft and not included in production builds.
-  * `title_nr` (int, optional): If set, HeroLib will re-number the markdown headings (e.g., `title_nr:3` will make `# Heading` become `### Heading`). Useful for consistent heading levels across imported content.
+  * `draft` (boolean, optional): If `true`, the page will be hidden from navigation.

-### 3.7. Doctree Integration Details
+### 3.7. Collections and DocTree/Doctree Integration

-The `site.page` directive's `src` parameter (`collection_name:page_name`) is the bridge to your Doctree content.
+The `site.page` directive's `src` parameter (`collection_name:page_name`) is the bridge to your content collections.

-**How Doctree Works:**
+**Current default: DocTree export**

+1. **Collections**: DocTree exports markdown files into collections under an `export_dir` (see `lib/data/doctree/client`).
+2. **Export step**: A separate process (DocTree) writes the collections into `doctree_dir` (e.g., `/tmp/doctree_export`), following the `content/` + `meta/` structure.
+3. **Docusaurus consumption**: The Docusaurus module uses the DocTree client (`doctree_client`) to resolve `collection_name:page_name` into markdown content and assets when generating docs.
+
+**Alternative: Doctree/`doctreeclient`**
+
+In older setups, or when explicitly configured, Doctree and `doctreeclient` can still be used to provide the same `collection:page` model:
+
 1. **Collections**: Doctree organizes markdown files into logical groups called "collections." A collection is typically a directory containing markdown files and an empty `.collection` file.
-2. **Scanning**: You define which collections Doctree should scan using `!!doctree.scan` in a HeroScript file (e.g., `doctree.heroscript`).
-   **Example `doctree.heroscript`:**
+2. **Scanning**: You define which collections Doctree should scan using `!!doctree.scan` in a HeroScript file (e.g., `doctree.heroscript`):

 ```heroscript
-!!doctree.scan git_url:"https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/collections"
+!!doctree.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections"
 ```

 This will pull the `collections` directory from the specified Git URL and make its contents available to Doctree.
-3. **Page Retrieval**: When `site.page` references `src:"my_collection:my_page"`, HeroLib's `doctreeclient` fetches the content of `my_page.md` from the `my_collection` collection that Doctree has scanned.
+3. **Page Retrieval**: When `site.page` references `src:"my_collection:my_page"`, the client (`doctree_client` or `doctreeclient`, depending on configuration) fetches the content of `my_page.md` from the `my_collection` collection.

 ## 4. Building and Developing Your Ebook
|
||||
@@ -35,11 +35,11 @@ pub fn play(mut plbook PlayBook) ! {
|
||||
if plbook.exists_once(filter: 'docusaurus.define') {
|
||||
mut action := plbook.get(filter: 'docusaurus.define')!
|
||||
mut p := action.params
|
||||
//example how we get parameters from the action see core_params.md for more details
|
||||
ds = new(
|
||||
path: p.get_default('path_publish', '')!
|
||||
production: p.get_default_false('production')
|
||||
)!
|
||||
//example how we get parameters from the action see aiprompts/herolib_core/core_params.md for more details
|
||||
path_build := p.get_default('path_build', '')!
|
||||
path_publish := p.get_default('path_publish', '')!
|
||||
reset := p.get_default_false('reset')
|
||||
use_doctree := p.get_default_false('use_doctree')
|
||||
}
|
||||
|
||||
// Process 'docusaurus.add' actions to configure individual Docusaurus sites
|
||||
@@ -51,4 +51,4 @@ pub fn play(mut plbook PlayBook) ! {
|
||||
}
|
||||
```
|
||||
|
||||
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/ai_core/core_params.md`.
|
||||
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/herolib_core/core_params.md`.
|
||||
|
||||

@@ -1,3 +1,5 @@
+> NOTE: This document is an example snapshot of a developer's filesystem layout for HeroDB/HeroModels. Paths under `/Users/despiegk/...` are illustrative only. For the current, authoritative structure always use the live repository tree (this checkout) and the modules under `lib/hero/heromodels` and `lib/hero/db`.
+
 <file_map>
 /Users/despiegk/code/github/incubaid/herolib
 ├── .github

aiprompts/instructions_archive/README.md (new file, 15 lines)

# Instructions Archive (Legacy Prompts)

This directory contains **archived / legacy AI prompt material** for `herolib`.

- Files here may describe **older workflows** (e.g. previous documentation generation or model pipelines).
- They are kept for **historical reference** and to help understand how things evolved.
- They are **not** guaranteed to match the current `herolib` implementation.

## Usage Guidelines

- Do **not** use these files as the primary source for new features or refactors.
- When generating code or documentation, prefer:
  1. Code and module docs under `lib/` (e.g. `lib/web/site/ai_instructions.md`, `lib/web/docusaurus/README.md`).
  2. Up-to-date AI instructions under `aiprompts/` (outside of `instructions_archive/`).
- Only consult this directory when you explicitly need to understand **historical behavior** or migrate old flows.

aiprompts/unpolly/unpolly_api.md (new file, 371 lines)

# Unpoly API – Complete Reference & Operational Guide for AI Coding Agents

## Introduction

This document serves as a complete, self-contained reference manual for the **Unpoly JavaScript API**. It is written explicitly for AI coding agents that need to generate, validate, or refactor Unpoly-enhanced HTML and JavaScript code.

Unpoly is a front-end framework for progressive enhancement. It allows HTML pages to update fragments instead of reloading the full document, enabling seamless transitions, modals, caching, and navigation while maintaining server-side rendering.

An AI agent should use this manual to:
- Understand each Unpoly module and its purpose.
- Generate correct `up-*` attributes and JavaScript calls.
- Detect configuration errors and provide correction suggestions.
- Apply Unpoly in automated scaffolding, validation, and optimization workflows.

---

## 1. The up.link Module

### Purpose
Handles Unpoly-enhanced navigation. Converts normal links into AJAX-based fragment updates rather than full-page reloads.

### Core Concepts
When a user clicks a link with certain attributes, Unpoly intercepts the event and fetches the new page in the background. It then replaces specified fragments in the current document with matching elements from the response.

### Common Attributes

| Attribute | Description |
| --------------- | -------------------------------------------------------- |
| `up-follow` | Marks the link as handled by Unpoly. Usually implied. |
| `up-target` | CSS selector identifying which fragment(s) to replace. |
| `up-method` | Overrides HTTP method (e.g. `GET`, `POST`). |
| `up-params` | Adds query parameters to the request. |
| `up-headers` | Adds or overrides HTTP headers. |
| `up-layer` | Determines which layer (page, overlay, modal) to update. |
| `up-transition` | Defines animation during fragment replacement. |
| `up-cache` | Enables caching of the response. |
| `up-history` | Controls browser history behavior. |

### JavaScript API Methods
- `up.link.isFollowable(element)` – Returns true if Unpoly will intercept the link.
- `up.link.follow(element, options)` – Programmatically follow the link via Unpoly.
- `up.link.preload(element, options)` – Preload the linked resource into the cache.

### Agent Reasoning & Validation
- Ensure that every `up-follow` element has a valid `up-target` selector.
- Validate that target elements exist in both the current DOM and the server response.
- Recommend `up-cache` for commonly visited links to improve performance.
- Prevent using `target="_blank"` or `download` attributes with Unpoly links.

### Example
```html
<a href="/profile" up-target="#main" up-transition="fade">View Profile</a>
```

---

## 2. The up.form Module

### Purpose
Handles progressive enhancement for forms. Submissions happen via AJAX and update only specific fragments.

### Core Attributes

| Attribute | Description |
| ---------------- | --------------------------------------- |
| `up-submit` | Marks form to be submitted via Unpoly. |
| `up-target` | Fragment selector to update on success. |
| `up-fail-target` | Selector to update if submission fails. |
| `up-validate` | Enables live field validation. |
| `up-autosubmit` | Submits automatically on change. |
| `up-disable-for` | Disables fields during request. |
| `up-enable-for` | Enables fields after request completes. |

### JavaScript API
- `up.form.submit(form, options)` – Submit programmatically.
- `up.validate(field, options)` – Trigger server validation.
- `up.form.fields(form)` – Returns all input fields.

### Agent Reasoning
- Always ensure the form has both `action` and `method` attributes.
- Match `up-target` to an element existing in the rendered HTML.
- For validation, ensure the server supports the `X-Up-Validate` header.
- When generating forms, add `up-fail-target` to handle errors gracefully.

### Example
```html
<form action="/update" method="POST" up-submit up-target="#user-info" up-fail-target="#form-errors">
  <input name="email" up-validate required>
  <button type="submit">Save</button>
</form>
```

---

## 3. The up.layer Module

### Purpose
Manages overlays, modals, and stacked layers of navigation.

### Attributes

| Attribute | Description |
| ---------------- | -------------------------------------------------- |
| `up-layer="new"` | Opens content in a new overlay. |
| `up-size` | Controls modal size (e.g., `small`, `large`). |
| `up-dismissable` | Allows overlay to close by clicking outside. |
| `up-history` | Determines if the overlay updates browser history. |
| `up-title` | Sets overlay title. |

### JavaScript API
- `up.layer.open(options)` – Opens a new layer.
- `up.layer.close(layer)` – Closes a given layer.
- `up.layer.on(event, callback)` – Hooks into lifecycle events.

### Agent Notes
- Ensure `up-layer="new"` is only used with valid targets.
- For overlays, set `up-history="false"` unless explicitly required.
- Auto-generate dismiss buttons with `up-layer-close`.

### Example
```html
<a href="/settings" up-layer="new" up-size="large" up-target=".modal-content">Open Settings</a>
```

---

## 4. The up.fragment Module

### Purpose
Handles low-level fragment rendering, preserving, replacing, and merging.

### JavaScript API
- `up.render(options)` – Replace fragment(s) with new content.
- `up.fragment.config` – Configure defaults for rendering.
- `up.fragment.get(target)` – Retrieve a fragment.

### Example
```js
up.render({ target: '#main', url: '/dashboard', transition: 'fade' })
```

### Agent Notes
- Ensure only fragment HTML is sent from the server (not a full document).
- Use `preserve` for elements like forms where input state matters.

---

## 5. The up.network Module

### Purpose
Handles network requests, caching, and aborting background loads.

### JavaScript API
- `up.network.loadPage(url, options)` – Load a page via Unpoly.
- `up.network.abort()` – Abort ongoing requests.
- `up.network.config.timeout` – Default timeout setting.

### Agent Tasks
- Preload probable links (`up.link.preload`).
- Use caching for frequent calls.
- Handle the `up:network:late` event to show spinners.

---

## 6. The up.event Module

### Purpose
Manages custom events fired throughout Unpoly's lifecycle.

### Common Events
- `up:link:follow`
- `up:form:submit`
- `up:layer:open`
- `up:layer:close`
- `up:rendered`
- `up:network:late`

### Example
```js
up.on('up:layer:close', (event) => {
  console.log('Overlay closed');
});
```

### Agent Actions
- Register listeners for key events.
- Prevent duplicate bindings.
- Offer analytics hooks for `up:rendered` or `up:location:changed`.

---

## 7. The up.motion Module

Handles animations and transitions.

### API
- `up.motion()` – Animate elements.
- `up.animate(element, keyframes, options)` – Custom animation.

### Agent Notes
- Suggest `up-transition="fade"` or similar for fragment changes.
- Avoid heavy animations for performance-sensitive devices.

---

## 8. The up.radio Module

Handles broadcasting and receiving cross-fragment events.

### Example
```js
up.radio.emit('user:updated', { id: 5 })
up.radio.on('user:updated', (data) => console.log(data))
```

### Agent Tasks
- Use for coordinating multiple fragments.
- Ensure channel names are namespaced (e.g., `form:valid`, `modal:open`).

---

## 9. The up.history Module

### Purpose
Manages URL history, titles, and restoration.

### API
- `up.history.push(url, options)` – Push a new history entry.
- `up.history.restore()` – Restore the previous state.

### Agent Guidance
- Disable history (`up-history="false"`) for temporary overlays.
- Ensure proper title updates via `up-title`.

---

## 10. The up.viewport Module

### Purpose
Manages scrolling, focusing, and viewport restoration.

### API
- `up.viewport.scroll(element)` – Scroll to an element.
- `up.viewport.restoreScroll()` – Restore the previous position.

### Agent Tasks
- Restore scroll after fragment updates.
- Manage focus for accessibility after `up.render()`.

---

## 11. The up.protocol Module

Handles request headers and special HTTP status codes.

### Key Concepts
- Adds `X-Up-Target` and `X-Up-Validate` headers.
- Interprets redirect and validation responses.

### Agent Notes
- Ensure the backend supports Unpoly headers.
- Warn if a 204 or redirect status is used without fragments.
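
To make the header mechanics concrete, here is a minimal sketch of what an Unpoly fragment request looks like from the server's point of view (the host and route are hypothetical; the `X-Up-Target` header is the one described above):

```bash
# Simulate the request Unpoly sends when updating the fragment "#main".
curl -i https://app.example/profile \
  -H 'X-Up-Target: #main'
# The server may use this header to render only the "#main" fragment;
# returning a full HTML document is also fine, since Unpoly extracts the target.
```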

---

## 12. The up.util Module

Contains helpers for arrays, strings, and timing.

### Examples
```js
up.util.escapeHTML("<script>") // "&lt;script&gt;"
up.util.copy(text)
```

### Agent Tasks
- Use helpers for DOM-safe generation.
- Avoid reimplementing basic utilities.

---

## 13. The up.framework and up.log Modules

- `up.framework.init()` – Initializes Unpoly.
- `up.log.enable()` – Enables debug logging.

### Agent Tasks
- Always call `up.framework.init()` once.
- Enable logging during debug sessions.

---

## 14. Including Unpoly from CDN

### Purpose

Before any Unpoly features or attributes (`up-*`) can work, the library must be loaded into the page.
For AI agents generating HTML pages or templates, always include Unpoly's **CSS** and **JavaScript** assets early in the `<head>` section.

### Basic Setup (Latest Stable Version)

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>My Unpoly Page</title>

  <!-- Unpoly Core Styles -->
  <link rel="stylesheet" href="https://unpkg.com/unpoly@3.7.2/unpoly.min.css">

  <!-- Optional Bootstrap integration -->
  <link rel="stylesheet" href="https://unpkg.com/unpoly@3.7.2/unpoly-bootstrap3.min.css">

  <!-- Unpoly Core Script -->
  <script src="https://unpkg.com/unpoly@3.7.2/unpoly.min.js"></script>

  <!-- Initialize framework -->
  <script>
    up.framework.init()
  </script>
</head>
<body>
  <!-- Page content -->
</body>
</html>
```

### AI Agent Notes

* Always include **`unpoly.min.css`** before any custom styles to ensure default animations and transitions function properly.
* **Never** load Unpoly twice; check if the page already has a `<script src="...unpoly.min.js">`.
* After loading, the agent may immediately use Unpoly JavaScript APIs (`up.render`, `up.link.follow`, etc.) or rely on HTML attributes.
* If dynamically injecting HTML pages, the agent should re-run `up.framework.init()` **only once globally**, not after every fragment load.

### Recommended CDN Sources

* `https://unpkg.com/unpoly@3.x/`
* `https://cdn.jsdelivr.net/npm/unpoly@3.x/`

### Offline Use

For fully offline or embedded environments, the agent can download both `.js` and `.css` files and reference them locally:

```html
<link rel="stylesheet" href="/assets/unpoly.min.css">
<script src="/assets/unpoly.min.js"></script>
```
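
A minimal sketch of fetching those assets for offline use, reusing the CDN URLs from the setup section above (the `assets/` directory is illustrative):

```bash
# Download Unpoly's JS and CSS once, then serve them locally.
mkdir -p assets
curl -L -o assets/unpoly.min.js https://unpkg.com/unpoly@3.7.2/unpoly.min.js
curl -L -o assets/unpoly.min.css https://unpkg.com/unpoly@3.7.2/unpoly.min.css
```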

---

## Agent Validation Checklist

1. Verify `up-*` attributes match existing fragments.
2. Check the backend returns valid fragment markup.
3. Ensure forms use `up-submit` and `up-fail-target`.
4. Overlay layers must have dismissable controls.
5. Use caching wisely (`up-cache`, `up.link.preload`).
6. Handle network and render events gracefully.
7. Log events (`up.log`) for debugging.
8. Confirm scroll/focus restoration after renders.
9. Gracefully degrade if JavaScript is disabled.
10. Document reasoning and configuration.
647
aiprompts/unpolly/unpolly_core.md
Normal file
647
aiprompts/unpolly/unpolly_core.md
Normal file
@@ -0,0 +1,647 @@

# Unpoly Quick Reference for AI Agents

## Installation

Include Unpoly from CDN in your HTML `<head>`:

```html
<script src="https://unpoly.com/unpoly.min.js"></script>
<link rel="stylesheet" href="https://unpoly.com/unpoly.min.css">
```

## Core Concept

Unpoly updates page fragments without full page reloads. Users click links/submit forms → server responds with HTML → Unpoly extracts and swaps matching fragments.

---

## 1. Following Links (Fragment Updates)

### Basic Link Following

```html
<a href="/users/5" up-follow>View User</a>
```

Updates the `<main>` element (or `<body>` if no main exists) with content from `/users/5`.

### Target Specific Fragment

```html
<a href="/users/5" up-target=".user-details">View User</a>

<div class="user-details">
  <!-- Content replaced here -->
</div>
```

### Multiple Fragments

```html
<a href="/users/5" up-target=".profile, .activity">View User</a>
```

Updates both `.profile` and `.activity` from a single response.

### Append/Prepend Content

```html
<!-- Append to list -->
<a href="/items?page=2" up-target=".items:after">Load More</a>

<!-- Prepend to list -->
<a href="/latest" up-target=".items:before">Show Latest</a>
```

### Handle All Links Automatically

```js
up.link.config.followSelectors.push('a[href]')
```

Now all links update fragments by default.

---

## 2. Submitting Forms

### Basic Form Submission

```html
<form action="/users" method="post" up-submit>
  <input name="email">
  <button type="submit">Create</button>
</form>
```

Submits via AJAX and updates `<main>` with the response.

### Target Specific Fragment

```html
<form action="/search" up-submit up-target=".results">
  <input name="query">
  <button>Search</button>
</form>

<div class="results">
  <!-- Search results appear here -->
</div>
```

### Handle Success vs. Error Responses

```html
<form action="/users" method="post" up-submit
      up-target="#success"
      up-fail-target="form">
  <input name="email">
  <button>Create</button>
</form>

<div id="success">Success message here</div>
```

- **Success (2xx status)**: Updates `#success`
- **Error (4xx/5xx status)**: Re-renders `form` with validation errors

**Server must return HTTP 422** (or similar error code) for validation failures.
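
To make the server side concrete in this repo's language, here is a minimal sketch using V's `vweb` module — an assumption for illustration only; any backend that can return a 4xx status works, and the validation rule and markup are placeholders:

```v
module main

import vweb

struct App {
    vweb.Context
}

fn main() {
    vweb.run(&App{}, 8080)
}

@['/users'; post]
fn (mut app App) create_user() vweb.Result {
    email := app.form['email']
    if !email.contains('@') { // placeholder validation rule
        // A 4xx status tells Unpoly to render the up-fail-target
        app.set_status(422, 'Unprocessable Entity')
        return app.html('<form action="/users" method="post" up-submit><input name="email" value="${email}"><div class="error">Email is invalid</div><button>Create</button></form>')
    }
    return app.html('<div id="success">User created successfully!</div>')
}
```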

---

## 3. Opening Overlays (Modal, Drawer, Popup)

### Modal Dialog

```html
<a href="/details" up-layer="new">Open Modal</a>
```

Opens `/details` in a modal overlay.

### Drawer (Sidebar)

```html
<a href="/menu" up-layer="new drawer">Open Drawer</a>
```

### Popup (Anchored to Link)

```html
<a href="/help" up-layer="new popup">Help</a>
```

### Close Overlay When Condition Met

```html
<a href="/users/new"
   up-layer="new"
   up-accept-location="/users/$id"
   up-on-accepted="console.log('Created user:', value.id)">
  New User
</a>
```

The overlay auto-closes when the URL matches `/users/123` and passes `{ id: 123 }` to the callback.

### Local Content (No Server Request)

```html
<a up-layer="new popup" up-content="<p>Help text here</p>">Help</a>
```

---

## 4. Validation

### Validate on Field Change

```html
<form action="/users" method="post">
  <input name="email" up-validate>
  <input name="password" up-validate>
  <button type="submit">Register</button>
</form>
```

When a field loses focus → Unpoly submits the form with an `X-Up-Validate: email` header → the server re-renders the form → Unpoly updates the field's parent `<fieldset>` (or closest form group).

**Server must return HTTP 422** for validation errors.

### Validate While Typing

```html
<input name="email" up-validate
       up-watch-event="input"
       up-watch-delay="300">
```

Validates 300ms after the user stops typing.

---

## 5. Lazy Loading & Polling

### Load When Element Appears in DOM

```html
<div id="menu" up-defer up-href="/menu">
  Loading menu...
</div>
```

Immediately loads `/menu` when the placeholder renders.

### Load When Scrolled Into View

```html
<div id="comments" up-defer="reveal" up-href="/comments">
  Loading comments...
</div>
```

Loads when the element scrolls into the viewport.

### Auto-Refresh (Polling)

```html
<div class="status" up-poll up-interval="5000">
  Current status
</div>
```

Reloads the fragment from its original URL every 5 seconds.

---

## 6. Caching & Revalidation

### Enable Caching

```html
<a href="/users" up-cache="true">Users</a>
```

Caches the response, instantly shows cached content, then revalidates with the server.

### Disable Caching

```html
<a href="/stock" up-cache="false">Live Prices</a>
```

### Conditional Requests (Server-Side)

The server sends:

```http
HTTP/1.1 200 OK
ETag: "abc123"

<div class="data">Content</div>
```

On the next reload, Unpoly sends:

```http
GET /path
If-None-Match: "abc123"
```

The server responds `304 Not Modified` if unchanged → saves bandwidth.
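
For illustration, a minimal `vweb` handler sketch (reusing the `App` struct from the earlier sketch; the ETag value and route are placeholders you would derive from your data):

```v
@['/data']
fn (mut app App) data() vweb.Result {
    etag := '"abc123"' // placeholder: derive this from your data's version
    if app.req.header.get_custom('If-None-Match') or { '' } == etag {
        // Content unchanged: an empty 304 lets Unpoly keep its cached fragment
        app.set_status(304, 'Not Modified')
        return app.text('')
    }
    app.add_header('ETag', etag)
    return app.html('<div class="data">Content</div>')
}
```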

---

## 7. Navigation Bar (Current Link Highlighting)

```html
<nav>
  <a href="/home">Home</a>
  <a href="/about">About</a>
</nav>
```

The current page's link gets the `.up-current` class automatically.

**Style it:**

```css
.up-current {
  font-weight: bold;
  color: blue;
}
```

---

## 8. Loading State

### Feedback Classes

Automatically applied:

- `.up-active` on the clicked link/button
- `.up-loading` on the targeted fragment

**Style them:**

```css
.up-active { opacity: 0.6; }
.up-loading { opacity: 0.8; }
```

### Disable Form While Submitting

```html
<form up-submit up-disable>
  <input name="email">
  <button>Submit</button>
</form>
```

All fields are disabled during submission.

### Show Placeholder While Loading

```html
<a href="/data" up-target=".data"
   up-placeholder="<p>Loading...</p>">
  Load Data
</a>
```

---

## 9. Preloading

### Preload on Hover

```html
<a href="/users/5" up-preload>User Profile</a>
```

Starts loading when the user hovers (90ms delay by default).

### Preload Immediately

```html
<a href="/menu" up-preload="insert">Menu</a>
```

Loads as soon as the link appears in the DOM.

---

## 10. Templates (Client-Side HTML)

### Define Template

```html
<template id="user-card">
  <div class="card">
    <h3>{{name}}</h3>
    <p>{{email}}</p>
  </div>
</template>
```

### Use Template

```html
<a up-fragment="#user-card"
   up-use-data="{ name: 'Alice', email: 'alice@example.com' }">
  Show User
</a>
```

**Process variables with a compiler:**

```js
up.compiler('.card', function(element, data) {
  element.innerHTML = element.innerHTML
    .replace(/{{name}}/g, data.name)
    .replace(/{{email}}/g, data.email)
})
```

---

## 11. JavaScript API

### Render Fragment

```js
up.render({
  url: '/users/5',
  target: '.user-details'
})
```

### Navigate (Updates History)

```js
up.navigate({
  url: '/users',
  target: 'main'
})
```

### Submit Form

```js
let form = document.querySelector('form')
up.submit(form)
```

### Open Overlay

```js
up.layer.open({
  url: '/users/new',
  onAccepted: (event) => {
    console.log('User created:', event.value)
  }
})
```

### Close Overlay with Value

```js
up.layer.accept({ id: 123, name: 'Alice' })
```

### Reload Fragment

```js
up.reload('.status')
```

---

## 12. Request Headers (Server Protocol)

Unpoly sends these headers with requests:

| Header          | Value    | Purpose                         |
| --------------- | -------- | ------------------------------- |
| `X-Up-Version`  | `1.0.0`  | Identifies Unpoly request       |
| `X-Up-Target`   | `.users` | Fragment selector being updated |
| `X-Up-Mode`     | `modal`  | Current layer mode              |
| `X-Up-Validate` | `email`  | Field being validated           |

**The server can respond with:**

| Header                   | Effect                   |
| ------------------------ | ------------------------ |
| `X-Up-Target: .other`    | Changes target selector  |
| `X-Up-Accept-Layer: {}`  | Closes overlay (success) |
| `X-Up-Dismiss-Layer: {}` | Closes overlay (cancel)  |
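
As a hedged illustration in V's `vweb` (again reusing `App`), a handler can read `X-Up-Target` and serve only the requested fragment; `render_full_page()` is a hypothetical helper stubbed here for completeness:

```v
fn render_full_page() string {
    // hypothetical: a real app would render the whole layout
    return '<html><body><main><div class="users">...user list...</div></main></body></html>'
}

@['/users']
fn (mut app App) users() vweb.Result {
    target := app.req.header.get_custom('X-Up-Target') or { '' }
    if target == '.users' {
        // Optimization: respond with just the targeted fragment
        app.add_header('Vary', 'X-Up-Target') // keep caches correct per target
        return app.html('<div class="users">...user list...</div>')
    }
    return app.html(render_full_page())
}
```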

---

## 13. Common Patterns

### Infinite Scrolling

```html
<div id="items">
  <div>Item 1</div>
  <div>Item 2</div>
</div>

<a id="next" href="/items?page=2"
   up-defer="reveal"
   up-target="#items:after, #next">
  Load More
</a>
```

### Dependent Form Fields

```html
<form action="/order">
  <!-- Changing country updates city select -->
  <select name="country" up-validate="#city">
    <option>USA</option>
    <option>Canada</option>
  </select>

  <select name="city" id="city">
    <option>New York</option>
  </select>
</form>
```

### Confirm Before Action

```html
<a href="/delete" up-method="delete"
   up-confirm="Really delete?">
  Delete
</a>
```

### Auto-Submit on Change

```html
<form action="/search" up-autosubmit>
  <input name="query">
</form>
```

Submits the form when any field changes.

---

## 14. Error Handling

### Handle Network Errors

```js
up.on('up:fragment:offline', function(event) {
  if (confirm('You are offline. Retry?')) {
    event.retry()
  }
})
```

### Handle Failed Responses

```js
try {
  await up.render({ url: '/path', target: '.data' })
} catch (error) {
  if (error instanceof up.RenderResult) {
    console.log('Server error:', error)
  }
}
```

---

## 15. Compilers (Enhance Elements)

### Basic Compiler

```js
up.compiler('.current-time', function(element) {
  element.textContent = new Date().toString()
})
```

Runs when `.current-time` is inserted (initial load OR fragment update).

### Compiler with Cleanup

```js
up.compiler('.auto-refresh', function(element) {
  let timer = setInterval(() => {
    element.textContent = new Date().toString()
  }, 1000)

  // Return destructor function
  return () => clearInterval(timer)
})
```

The destructor is called when the element is removed from the DOM.

---

## Quick Reference Table

| Task            | HTML                         | JavaScript                 |
| --------------- | ---------------------------- | -------------------------- |
| Follow link     | `<a href="/path" up-follow>` | `up.follow(link)`          |
| Submit form     | `<form up-submit>`           | `up.submit(form)`          |
| Target fragment | `up-target=".foo"`           | `{ target: '.foo' }`       |
| Open modal      | `up-layer="new"`             | `up.layer.open({ url })`   |
| Validate field  | `up-validate`                | `up.validate(field)`       |
| Lazy load       | `up-defer`                   | —                          |
| Poll fragment   | `up-poll`                    | —                          |
| Preload link    | `up-preload`                 | `up.link.preload(link)`    |
| Local content   | `up-content="<p>Hi</p>"`     | `{ content: '<p>Hi</p>' }` |
| Append content  | `up-target=".list:after"`    | —                          |
| Confirm action  | `up-confirm="Sure?"`         | `{ confirm: 'Sure?' }`     |

---

## Key Defaults

- **Target**: Updates `<main>` (or `<body>`) if no `up-target` is specified
- **Caching**: Auto-enabled for GET requests during navigation
- **History**: Auto-updated when rendering `<main>` or major fragments
- **Scrolling**: Auto-scrolls to top when updating `<main>`
- **Focus**: Auto-focuses the new fragment
- **Validation**: Targets the field's parent `<fieldset>` or form group

---

## Best Practices for AI Agents

1. **Always provide HTTP error codes**: Return 422 for validation errors, 404 for not found, etc.
2. **Send full HTML responses**: Include the entire page structure; Unpoly extracts the needed fragments
3. **Use semantic HTML**: `<main>`, `<nav>`, `<form>` elements work best
4. **Set IDs on fragments**: Makes targeting easier (e.g., `<div id="user-123">`)
5. **Return consistent selectors**: If a request targets `.users`, the response must contain `.users`

---

## Common Mistakes to Avoid

❌ **Don't**: Return only partial HTML without a wrapper
```html
<h1>Title</h1>
<p>Content</p>
```

✅ **Do**: Wrap in the target selector
```html
<div class="content">
  <h1>Title</h1>
  <p>Content</p>
</div>
```

❌ **Don't**: Return 200 OK for validation errors
✅ **Do**: Return 422 Unprocessable Entity

❌ **Don't**: Use `onclick="up.follow(this)"`
✅ **Do**: Use the `up-follow` attribute (handles keyboard, accessibility)

---

## Server Response Examples

### Successful Form Submission

```http
HTTP/1.1 200 OK

<div id="success">
  User created successfully!
</div>
```

### Validation Error

```http
HTTP/1.1 422 Unprocessable Entity

<form action="/users" method="post" up-submit>
  <input name="email" value="invalid">
  <div class="error">Email is invalid</div>
  <button>Submit</button>
</form>
```

### Partial Response (Optimized)

```http
HTTP/1.1 200 OK
Vary: X-Up-Target

<div class="user-details">
  <!-- Only the targeted fragment -->
</div>
```

@@ -1,51 +1,10 @@
# module orm

## Contents
- [Constants](#Constants)
- [new_query](#new_query)
- [orm_select_gen](#orm_select_gen)
- [orm_stmt_gen](#orm_stmt_gen)
- [orm_table_gen](#orm_table_gen)
- [Connection](#Connection)
- [Primitive](#Primitive)
- [QueryBuilder[T]](#QueryBuilder[T])
- [reset](#reset)
- [where](#where)
- [or_where](#or_where)
- [order](#order)
- [limit](#limit)
- [offset](#offset)
- [select](#select)
- [set](#set)
- [query](#query)
- [count](#count)
- [insert](#insert)
- [insert_many](#insert_many)
- [update](#update)
- [delete](#delete)
- [create](#create)
- [drop](#drop)
- [last_id](#last_id)
- [MathOperationKind](#MathOperationKind)
- [OperationKind](#OperationKind)
- [OrderType](#OrderType)
- [SQLDialect](#SQLDialect)
- [StmtKind](#StmtKind)
- [InfixType](#InfixType)
- [Null](#Null)
- [QueryBuilder](#QueryBuilder)
- [QueryData](#QueryData)
- [SelectConfig](#SelectConfig)
- [Table](#Table)
- [TableField](#TableField)

## Constants
```v
const num64 = [typeof[i64]().idx, typeof[u64]().idx]
```

[[Return to contents]](#Contents)

```v
const nums = [
@@ -59,7 +18,7 @@ const nums = [
]
```

[[Return to contents]](#Contents)

```v
const float = [
@@ -68,31 +27,31 @@ const float = [
]
```

[[Return to contents]](#Contents)

```v
const type_string = typeof[string]().idx
```

[[Return to contents]](#Contents)

```v
const serial = -1
```

[[Return to contents]](#Contents)

```v
const time_ = -2
```

[[Return to contents]](#Contents)

```v
const enum_ = -3
```

[[Return to contents]](#Contents)

```v
const type_idx = {
@@ -111,19 +70,19 @@ const type_idx = {
}
```

[[Return to contents]](#Contents)

```v
const string_max_len = 2048
```

[[Return to contents]](#Contents)

```v
const null_primitive = Primitive(Null{})
```

[[Return to contents]](#Contents)

## new_query
```v
@@ -132,7 +91,7 @@ fn new_query[T](conn Connection) &QueryBuilder[T]

new_query creates a new query object for struct `T`

[[Return to contents]](#Contents)

## orm_select_gen
```v
@@ -141,7 +100,7 @@ fn orm_select_gen(cfg SelectConfig, q string, num bool, qm string, start_pos int

Generates an SQL select stmt from universal parameters. orm - see SelectConfig; q, num, qm, start_pos - see orm_stmt_gen; where - see QueryData

[[Return to contents]](#Contents)

## orm_stmt_gen
```v
@@ -151,7 +110,7 @@ fn orm_stmt_gen(sql_dialect SQLDialect, table Table, q string, kind StmtKind, nu

Generates an SQL stmt from universal parameters. q - the quote character, which can be different in every database; num - stmt uses numbers in prepared statements (? or ?1); qm - character for a prepared statement (qm for question mark, as in sqlite); start_pos - when num is true, the start position of the counter

[[Return to contents]](#Contents)

## orm_table_gen
```v
@@ -161,7 +120,7 @@ fn orm_table_gen(sql_dialect SQLDialect, table Table, q string, defaults bool, d

Generates an SQL table stmt from universal parameters. table - Table struct; q - see orm_stmt_gen; defaults - enables default values in the stmt; def_unique_len - sets the default unique length for text fields - see TableField; sql_from_v - function which maps type indices to SQL type names; alternative - needed for msdb

[[Return to contents]](#Contents)

## Connection
```v
@@ -181,7 +140,7 @@ Interfaces gets called from the backend and can be implemented Since the orm sup

Every function except last_id() returns a result type, which carries an error if one occurred; last_id returns the last inserted id of the db

[[Return to contents]](#Contents)

## Primitive
```v
@@ -203,7 +162,7 @@ type Primitive = InfixType
    | []Primitive
```

[[Return to contents]](#Contents)

## QueryBuilder[T]
## reset
@@ -213,7 +172,7 @@ fn (qb_ &QueryBuilder[T]) reset() &QueryBuilder[T]

reset resets a query object but keeps the connection and table name

[[Return to contents]](#Contents)

## where
```v
@@ -222,7 +181,7 @@ fn (qb_ &QueryBuilder[T]) where(condition string, params ...Primitive) !&QueryBu

where creates a `where` clause; it will `AND` with the previous `where` clause. Valid tokens in the `condition` include field names, operators, `(`, `)`, `?`, `AND`, `OR`, `||`, `&&`. Valid operators include: `=`, `!=`, `<>`, `>=`, `<=`, `>`, `<`, `LIKE`, `ILIKE`, `IS NULL`, `IS NOT NULL`, `IN`, `NOT IN`. Example: `where('(a > ? AND b <= ?) OR (c <> ? AND (x = ? OR y = ?))', a, b, c, x, y)`

[[Return to contents]](#Contents)

## or_where
```v
@@ -231,7 +190,7 @@ fn (qb_ &QueryBuilder[T]) or_where(condition string, params ...Primitive) !&Quer

or_where creates a `where` clause; it will `OR` with the previous `where` clause.

[[Return to contents]](#Contents)

## order
```v
@@ -240,7 +199,7 @@ fn (qb_ &QueryBuilder[T]) order(order_type OrderType, field string) !&QueryBuild

order creates an `order` clause

[[Return to contents]](#Contents)

## limit
```v
@@ -249,7 +208,7 @@ fn (qb_ &QueryBuilder[T]) limit(limit int) !&QueryBuilder[T]

limit creates a `limit` clause

[[Return to contents]](#Contents)

## offset
```v
@@ -258,7 +217,7 @@ fn (qb_ &QueryBuilder[T]) offset(offset int) !&QueryBuilder[T]

offset creates an `offset` clause

[[Return to contents]](#Contents)

## select
```v
@@ -267,7 +226,7 @@ fn (qb_ &QueryBuilder[T]) select(fields ...string) !&QueryBuilder[T]

select creates a `select` clause

[[Return to contents]](#Contents)

## set
```v
@@ -276,7 +235,7 @@ fn (qb_ &QueryBuilder[T]) set(assign string, values ...Primitive) !&QueryBuilder

set creates a `set` clause for `update`

[[Return to contents]](#Contents)

## query
```v
@@ -285,7 +244,7 @@ fn (qb_ &QueryBuilder[T]) query() ![]T

query starts a query and returns the result in struct `T`

[[Return to contents]](#Contents)

## count
```v
@@ -294,7 +253,7 @@ fn (qb_ &QueryBuilder[T]) count() !int

count starts a count query and returns the result

[[Return to contents]](#Contents)

## insert
```v
@@ -303,7 +262,7 @@ fn (qb_ &QueryBuilder[T]) insert[T](value T) !&QueryBuilder[T]

insert inserts a record into the database

[[Return to contents]](#Contents)

## insert_many
```v
@@ -312,7 +271,7 @@ fn (qb_ &QueryBuilder[T]) insert_many[T](values []T) !&QueryBuilder[T]

insert_many inserts records into the database

[[Return to contents]](#Contents)

## update
```v
@@ -321,7 +280,7 @@ fn (qb_ &QueryBuilder[T]) update() !&QueryBuilder[T]

update updates record(s) in the database

[[Return to contents]](#Contents)

## delete
```v
@@ -330,7 +289,7 @@ fn (qb_ &QueryBuilder[T]) delete() !&QueryBuilder[T]

delete deletes record(s) in the database

[[Return to contents]](#Contents)

## create
```v
@@ -339,7 +298,7 @@ fn (qb_ &QueryBuilder[T]) create() !&QueryBuilder[T]

create creates a table

[[Return to contents]](#Contents)

## drop
```v
@@ -348,7 +307,7 @@ fn (qb_ &QueryBuilder[T]) drop() !&QueryBuilder[T]

drop drops a table

[[Return to contents]](#Contents)

## last_id
```v
@@ -357,7 +316,7 @@ fn (qb_ &QueryBuilder[T]) last_id() int

last_id returns the last inserted id of the db

[[Return to contents]](#Contents)

## MathOperationKind
```v
@@ -369,7 +328,7 @@ enum MathOperationKind {
}
```

[[Return to contents]](#Contents)

## OperationKind
```v
@@ -389,7 +348,7 @@ enum OperationKind {
}
```

[[Return to contents]](#Contents)

## OrderType
```v
@@ -399,7 +358,7 @@ enum OrderType {
}
```

[[Return to contents]](#Contents)

## SQLDialect
```v
@@ -411,7 +370,7 @@ enum SQLDialect {
}
```

[[Return to contents]](#Contents)

## StmtKind
```v
@@ -422,7 +381,7 @@ enum StmtKind {
}
```

[[Return to contents]](#Contents)

## InfixType
```v
@@ -434,14 +393,14 @@ pub:
}
```

[[Return to contents]](#Contents)

## Null
```v
struct Null {}
```

[[Return to contents]](#Contents)

## QueryBuilder
```v
@@ -456,7 +415,7 @@ pub mut:
}
```

[[Return to contents]](#Contents)

## QueryData
```v
@@ -474,7 +433,7 @@ pub mut:

Examples for QueryData in SQL: abc == 3 && b == 'test' => fields[abc, b]; data[3, 'test']; types[index of int, index of string]; kinds[.eq, .eq]; is_and[true]. Every field, data, type & kind of operation in the expr shares the same index in the arrays. is_and defines how they're combined with each other (either AND or OR). parentheses defines which fields will be inside (). auto_fields are indexes of fields where the db should generate a value when absent from an insert

[[Return to contents]](#Contents)

## SelectConfig
```v
@@ -496,7 +455,7 @@ pub mut:

table - Table struct; is_count - either the data will be returned or an integer with the count; has_where - select all or use a where expr; has_order - order the results; order - name of the column to order by; order_type - type of order (asc, desc); has_limit - limits the output data; primary - name of the primary field; has_offset - adds an offset to the result; fields - fields to select; types - types to select

[[Return to contents]](#Contents)

## Table
```v
@@ -507,7 +466,7 @@ pub mut:
}
```

[[Return to contents]](#Contents)

## TableField
```v
@@ -521,7 +480,3 @@ pub mut:
    is_arr bool
}
```

[[Return to contents]](#Contents)

#### Powered by vdoc. Generated on: 2 Sep 2025 07:19:37

282 aiprompts/v_core/orm/orm_cheat.md Normal file
@@ -0,0 +1,282 @@

# V ORM — Developer Cheat Sheet

*Fast reference for Struct Mapping, CRUD, Attributes, Query Builder, and Usage Patterns*

---

## 1. What V ORM Is

* Built-in ORM for **SQLite**, **MySQL**, **PostgreSQL**
* Unified V-syntax; no SQL string building
* Automatic query sanitization
* Compile-time type & field checks
* Structs map directly to tables

---

## 2. Define Models (Struct ↔ Table)

### Basic Example

```v
struct User {
    id    int    @[primary; sql: serial]
    name  string
    email string @[unique]
}
```

### Nullable Fields

```v
age ?int // allows NULL
```

---

## 3. Struct Attributes

### Table-level

| Attribute                    | Meaning                   |
| ---------------------------- | ------------------------- |
| `@[table: 'custom_name']`    | Override table name       |
| `@[comment: '...']`          | Table comment             |
| `@[index: 'field1, field2']` | Creates multi-field index |
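
For instance, a small sketch applying the table-level attributes above to a model (the names are illustrative, and it assumes both attributes may share one attribute block):

```v
// Persisted in table `users` instead of the default `User`,
// with a multi-field index on (name, email).
@[table: 'users'; index: 'name, email']
struct User {
    id    int    @[primary; sql: serial]
    name  string
    email string @[unique]
}
```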

---

## 4. Field Attributes

| Attribute                                        | Description                  |
| ------------------------------------------------ | ---------------------------- |
| `@[primary]`                                     | Primary key                  |
| `@[unique]`                                      | UNIQUE constraint            |
| `@[unique: 'group']`                             | Composite unique group       |
| `@[skip]` / `@[sql: '-']`                        | Ignore field                 |
| `@[sql: serial]`                                 | Auto-increment key           |
| `@[sql: 'col_name']`                             | Rename column                |
| `@[sql_type: 'BIGINT']`                          | Force SQL type               |
| `@[default: 'CURRENT_TIMESTAMP']`                | Raw SQL default              |
| `@[fkey: 'field']`                               | Foreign key on a child array |
| `@[references]`, `@[references: 'table(field)']` | FK relationship              |
| `@[index]`                                       | Index on field               |
| `@[comment: '...']`                              | Column comment               |

### Example

```v
struct Post {
    id        int    @[primary; sql: serial]
    title     string
    body      string
    author_id int    @[references: 'users(id)']
}
```

---

## 5. ORM SQL Block (Primary API)

### Create Table

```v
sql db {
    create table User
}!
```

### Drop Table

```v
sql db {
    drop table User
}!
```

### Insert

```v
id := sql db {
    insert new_user into User
}!
```

### Select

```v
users := sql db {
    select from User where age > 18 && name != 'Tom'
    order by id desc
    limit 10
}!
```

### Update

```v
sql db {
    update User set name = 'Alice' where id == 1
}!
```

### Delete

```v
sql db {
    delete from User where id > 100
}!
```

---

## 6. Relationships

### One-to-Many

```v
struct Parent {
    id       int     @[primary; sql: serial]
    children []Child @[fkey: 'parent_id']
}

struct Child {
    id        int @[primary; sql: serial]
    parent_id int
}
```

---

## 7. Notes on `time.Time`

* Stored as integer timestamps
* SQL defaults like `NOW()` / `CURRENT_TIMESTAMP` **don’t work** for `time.Time` with V ORM defaults
* Use `@[default: 'CURRENT_TIMESTAMP']` only with custom SQL types (see the sketch below)
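
A minimal model sketch reflecting the notes above (the struct and field names are illustrative):

```v
import time

struct Event {
    id         int        @[primary; sql: serial]
    name       string
    created_at time.Time  // persisted as an integer timestamp by the ORM
    deadline   ?time.Time // nullable timestamp column
}
```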

---

## 8. Query Builder API (Dynamic Queries)

### Create Builder

```v
mut qb := orm.new_query[User](db)
```

### Create Table

```v
qb.create()!
```

### Insert Many

```v
qb.insert_many(users)!
```

### Select

```v
results := qb
    .select('id, name')!
    .where('age > ?', 18)!
    .order(.desc, 'id')!
    .limit(20)!
    .query()!
```

### Update

```v
qb
    .set('name = ?', 'NewName')!
    .where('id = ?', 1)!
    .update()!
```

### Delete

```v
qb.where('created_at IS NULL')!.delete()!
```

### Complex WHERE

```v
qb.where(
    '(salary > ? AND age < ?) OR (role LIKE ?)',
    3000, 40, '%engineer%'
)!
```

---

## 9. Connecting to Databases

### SQLite

```v
import db.sqlite
db := sqlite.connect('db.sqlite')!
```

### MySQL

```v
import db.mysql
db := mysql.connect(host: 'localhost', user: 'root', password: '', dbname: 'test')!
```

### PostgreSQL

```v
import db.pg
db := pg.connect(conn_str)!
```

---

## 10. Full Example (Complete CRUD)

```v
import db.sqlite

struct Customer {
    id    int    @[primary; sql: serial]
    name  string
    email string @[unique]
}

fn main() {
    db := sqlite.connect('customers.db')!

    sql db { create table Customer }!

    new_c := Customer{name: 'Alice', email: 'alice@x.com'}

    id := sql db { insert new_c into Customer }!
    println(id)

    list := sql db { select from Customer where name == 'Alice' }!
    println(list)

    sql db { update Customer set name = 'Alicia' where id == id }!

    sql db { delete from Customer where id == id }!
}
```

---

## 11. Best Practices

* Always use `sql db { ... }` for static queries
* Use the QueryBuilder for dynamic conditions
* Prefer `sql: serial` for primary keys
* Explicitly define foreign keys
* Use `?T` for nullable fields
* Keep struct names identical to table names unless overridden

@@ -122,12 +122,12 @@ pub fn play(mut plbook PlayBook) ! {
	if plbook.exists_once(filter: 'docusaurus.define') {
		mut action := plbook.get(filter: 'docusaurus.define')!
		mut p := action.params
		//example how we get parameters from the action see core_params.md for more details
		ds = new(
			path:       p.get_default('path_publish', '')!
			production: p.get_default_false('production')
		)!
	}
		//example how we get parameters from the action see aiprompts/herolib_core/core_params.md for more details
		path_build := p.get_default('path_build', '')!
		path_publish := p.get_default('path_publish', '')!
		reset := p.get_default_false('reset')
		use_doctree := p.get_default_false('use_doctree')
	}

	// Process 'docusaurus.add' actions to configure individual Docusaurus sites
	actions := plbook.find(filter: 'docusaurus.add')!
@@ -138,7 +138,7 @@ pub fn play(mut plbook PlayBook) ! {
}
```

For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/ai_core/core_params.md`.
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/herolib_core/core_params.md`.

# PlayBook, process heroscripts

@@ -10,6 +10,7 @@ fp.version('v0.1.0')
fp.description('Compile hero binary in debug or production mode')
fp.skip_executable()

prod_mode := fp.bool('prod', `p`, false, 'Build production version (optimized)')
help_requested := fp.bool('help', `h`, false, 'Show help message')

@@ -61,6 +62,8 @@ compile_cmd := if os.user_os() == 'macos' {
		'v -enable-globals -g -w -n -prod hero.v'
	} else {
		'v -n -g -w -cg -gc none -cc tcc -d use_openssl -enable-globals hero.v'
		// 'v -n -g -w -cg -gc none -cc tcc -d use_openssl -enable-globals hero.v'
		// 'v -cg -enable-globals -parallel-cc -w -n -d use_openssl hero.v'
	}
} else {
	if prod_mode {

@@ -53,11 +53,9 @@ fn do() ! {
	mut cmd := Command{
		name:        'hero'
		description: 'Your HERO toolset.'
		version:     '1.0.35'
		version:     '1.0.38'
	}

	// herocmds.cmd_run_add_flags(mut cmd)

	mut toinstall := false
	if !osal.cmd_exists('mc') || !osal.cmd_exists('redis-cli') {
		toinstall = true
@@ -86,11 +84,13 @@ fn do() ! {

	base.redis_install()!

	herocmds.cmd_run(mut cmd)
	herocmds.cmd_git(mut cmd)
	herocmds.cmd_generator(mut cmd)
	herocmds.cmd_docusaurus(mut cmd)
	herocmds.cmd_web(mut cmd)
	herocmds.cmd_sshagent(mut cmd)
	herocmds.cmd_atlas(mut cmd)

	cmd.setup()
	cmd.parse(os.args)

47 compare_dirs.sh Executable file
@@ -0,0 +1,47 @@

#!/bin/bash

# Usage: ./compare_dirs.sh <branch1> <branch2> <dir_path>
# Example: ./compare_dirs.sh main feature-branch src

if [ "$#" -ne 3 ]; then
    echo "Usage: $0 <branch1> <branch2> <dir_path>"
    exit 1
fi

BRANCH1=$1
BRANCH2=$2
DIR_PATH=$3

TMP_DIR1=$(mktemp -d)
TMP_DIR2=$(mktemp -d)

# Clean up temporary worktrees on any exit, including the error paths below
cleanup() {
    git worktree remove "$TMP_DIR1" --force 2>/dev/null
    git worktree remove "$TMP_DIR2" --force 2>/dev/null
}
trap cleanup EXIT

# Ensure we're in a Git repo
if ! git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
    echo "Error: Not inside a Git repository"
    exit 1
fi

# Fetch branch contents without switching branches
git worktree add "$TMP_DIR1" "$BRANCH1" > /dev/null 2>&1
git worktree add "$TMP_DIR2" "$BRANCH2" > /dev/null 2>&1

# Check if the directory exists in both branches
if [ ! -d "$TMP_DIR1/$DIR_PATH" ]; then
    echo "Error: $DIR_PATH does not exist in $BRANCH1"
    exit 1
fi
if [ ! -d "$TMP_DIR2/$DIR_PATH" ]; then
    echo "Error: $DIR_PATH does not exist in $BRANCH2"
    exit 1
fi

# Compare directories (summary first, then detailed differences)
echo "Comparing $DIR_PATH between $BRANCH1 and $BRANCH2..."
diff -qr "$TMP_DIR1/$DIR_PATH" "$TMP_DIR2/$DIR_PATH"

diff -u -r "$TMP_DIR1/$DIR_PATH" "$TMP_DIR2/$DIR_PATH"

@@ -40,4 +40,3 @@ RUN /tmp/install_herolib.vsh && \

ENTRYPOINT ["/bin/bash"]
CMD ["/bin/bash"]

@@ -5,8 +5,8 @@ SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"

# Copy installation files
cp ../../install_v.sh ./scripts/install_v.sh
cp ../../install_herolib.vsh ./scripts/install_herolib.vsh
cp ../../scripts/install_v.sh ./scripts/install_v.sh
cp ../../scripts/install_herolib.vsh ./scripts/install_herolib.vsh

# Docker image and container names
DOCKER_IMAGE_NAME="herolib"

29 examples/ai/aiclient.vsh Executable file
@@ -0,0 +1,29 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.ai.client

mut cl := client.new()!

// response := cl.llms.llm_local.chat_completion(
//     message: 'Explain quantum computing in simple terms'
//     temperature: 0.5
//     max_completion_tokens: 1024
// )!

response := cl.llms.llm_maverick.chat_completion(
    message: 'Explain quantum computing in simple terms'
    temperature: 0.5
    max_completion_tokens: 1024
)!

println(response)

// response := cl.llms.llm_embed_local.embed(input: [
//     'The food was delicious and the waiter..',
// ])!

// response2 := cl.llms.llm_embed.embed(input: [
//     'The food was delicious and the waiter..',
// ])!

// println(response2) // response2 only exists when the embed example above is enabled

17 examples/ai/aiclient_embed.vsh Executable file
@@ -0,0 +1,17 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.ai.client

mut cl := client.new()!

// response := cl.llms.llm_local.chat_completion(
//     message: 'Explain quantum computing in simple terms'
//     temperature: 0.5
//     max_completion_tokens: 1024
// )!

response := cl.llms.llm_embed.chat_completion(
    message: 'Explain quantum computing in simple terms'
)!

println(response)

8 examples/ai/flow_test1.vsh Executable file
@@ -0,0 +1,8 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.ai.client
import incubaid.herolib.ai.flow_calendar

prompt := 'Explain quantum computing in simple terms'

// note: `coordinator` must be created before this call (its setup is not shown in this example)
flow_calendar.start(mut coordinator, prompt)!

26 examples/ai/groq.vsh Executable file
@@ -0,0 +1,26 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.clients.openai
import incubaid.herolib.core.playcmds

// models see https://console.groq.com/docs/models

playcmds.run(
    heroscript: '
    !!openai.configure name:"groq"
        url:"https://api.groq.com/openai/v1"
        model_default:"openai/gpt-oss-120b"
    '
    reset: true
)!

mut client := openai.get(name: 'groq')!

response := client.chat_completion(
    message: 'Explain quantum computing in simple terms'
    temperature: 0.5
    max_completion_tokens: 1024
)!

println(response.result)

@@ -2,7 +2,7 @@

import incubaid.herolib.clients.jina

mut jina_client := jina.get()!
mut jina_client := jina.new()!
health := jina_client.health()!
println('Server health: ${health}')

@@ -34,7 +34,7 @@ train_result := jina_client.train(
		label: 'positive'
	},
	jina.TrainingExample{
		image: 'https://letsenhance.io/static/73136da51c245e80edc6ccfe44888a99/1015f/MainBefore.jpg'
		image: 'https://picsum.photos/id/11/367/267'
		label: 'negative'
	},
]
@@ -50,7 +50,7 @@ classify_result := jina_client.classify(
		text: 'A photo of a cat'
	},
	jina.ClassificationInput{
		image: 'https://letsenhance.io/static/73136da51c245e80edc6ccfe44888a99/1015f/MainBefore.jpg'
		image: 'https://picsum.photos/id/11/367/267'
	},
]
labels: ['cat', 'dog']

30 examples/ai/jina_simple.vsh Executable file
@@ -0,0 +1,30 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.clients.jina

mut j := jina.new()!

embeddings := j.create_embeddings(
    input: ['Hello world', 'This is a test']
    model: .jina_embeddings_v3
    task: 'separation'
) or {
    println('Error creating embeddings: ${err}')
    return
}

println('Embeddings created successfully!')
println('Model: ${embeddings.model}')
println('Dimension: ${embeddings.dimension}')
println('Number of embeddings: ${embeddings.data.len}')

// If there are embeddings, print the first one (truncated)
if embeddings.data.len > 0 {
    first_embedding := embeddings.data[0]
    println('First embedding (first 5 values): ${first_embedding.embedding[0..5]}')
}

// Usage information
println('Token usage: ${embeddings.usage.total_tokens} ${embeddings.usage.unit}')

120 examples/ai/openai/README.md Normal file
@@ -0,0 +1,120 @@

# OpenRouter Examples - Proof of Concept

## Overview

This folder contains **example scripts** demonstrating how to use the **OpenAI client** (`herolib.clients.openai`) configured to work with **OpenRouter**.

* **Goal:** Show how to send messages to OpenRouter models using the OpenAI client, run a **two-model pipeline** for code enhancement, and illustrate multi-model usage.
* **Key Insight:** The OpenAI client is OpenRouter-compatible by design - simply configure it with OpenRouter's base URL (`https://openrouter.ai/api/v1`) and API key.

---

## Configuration

All examples configure the OpenAI client to use OpenRouter by setting:

* **URL**: `https://openrouter.ai/api/v1`
* **API Key**: Read from the `OPENROUTER_API_KEY` environment variable
* **Model**: OpenRouter model IDs (e.g., `qwen/qwen-2.5-coder-32b-instruct`)

Example configuration:

```v
playcmds.run(
    heroscript: '
    !!openai.configure
        name: "default"
        url: "https://openrouter.ai/api/v1"
        model_default: "qwen/qwen-2.5-coder-32b-instruct"
    '
)!
```

---

## Example Scripts

### 1. `openai_init.vsh`

* **Purpose:** Basic initialization example showing the OpenAI client configured for OpenRouter.
* **Demonstrates:** Client configuration and a simple chat completion.
* **Usage:**

```bash
examples/ai/openai/openai_init.vsh
```

---

### 2. `openai_hello.vsh`

* **Purpose:** Simple hello message to OpenRouter.
* **Demonstrates:** Sending a single message using `client.chat_completion`.
* **Usage:**

```bash
examples/ai/openai/openai_hello.vsh
```

* **Expected output:** A friendly "hello" response from the AI and token usage.

---

### 3. `openai_example.vsh`

* **Purpose:** Demonstrates basic conversation features.
* **Demonstrates:**
  * Sending a single message
  * Using system + user messages for conversation context
  * Printing token usage
* **Usage:**

```bash
examples/ai/openai/openai_example.vsh
```

* **Expected output:** Responses from the AI for both simple and system-prompt conversations.

---

### 4. `openai_two_model_pipeline.vsh`

* **Purpose:** Two-model code enhancement pipeline (proof of concept).
* **Demonstrates:**
  * Model A (`Qwen3 Coder`) suggests code improvements.
  * Model B (`morph-v3-fast`) applies the suggested edits.
  * Tracks tokens and shows before/after code.
  * Using two separate OpenAI client instances with different models
* **Usage:**

```bash
examples/ai/openai/openai_two_model_pipeline.vsh
```

* **Expected output:**
  * Original code
  * Suggested edits
  * Final updated code
  * Token usage summary

---

## Environment Variables

Set your OpenRouter API key before running the examples:

```bash
export OPENROUTER_API_KEY="sk-or-v1-..."
```

The OpenAI client automatically detects when the URL contains "openrouter" and will use the `OPENROUTER_API_KEY` environment variable.

---

## Notes

1. **No separate OpenRouter client needed** - The OpenAI client is fully compatible with OpenRouter's API.
2. All scripts configure the OpenAI client with OpenRouter's base URL.
3. The two-model pipeline uses **two separate client instances** (one per model) to demonstrate multi-model workflows.
4. Scripts can be run individually using the `v -enable-globals run` command.
5. The two-model pipeline is a **proof of concept**; the flow can later be extended to multiple files or OpenRPC specs.

59 examples/ai/openai/openai_example.vsh Executable file
@@ -0,0 +1,59 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.clients.openai
import incubaid.herolib.core.playcmds

// Configure OpenAI client to use OpenRouter
playcmds.run(
    heroscript: '
    !!openai.configure
        name: "default"
        url: "https://openrouter.ai/api/v1"
        model_default: "qwen/qwen-2.5-coder-32b-instruct"
    '
)!

// Get the client instance
mut client := openai.get()!

println('🤖 OpenRouter Client Example (using OpenAI client)')
println('═'.repeat(50))
println('')

// Example 1: Simple message
println('Example 1: Simple Hello')
println('─'.repeat(50))
mut r := client.chat_completion(
    model: 'qwen/qwen-2.5-coder-32b-instruct'
    message: 'Say hello in a creative way!'
    temperature: 0.7
    max_completion_tokens: 150
)!

println('AI: ${r.result}')
println('Tokens: ${r.usage.total_tokens}\n')

// Example 2: Conversation with system prompt
println('Example 2: Conversation with System Prompt')
println('─'.repeat(50))
r = client.chat_completion(
    model: 'qwen/qwen-2.5-coder-32b-instruct'
    messages: [
        openai.Message{
            role: .system
            content: 'You are a helpful coding assistant who speaks concisely.'
        },
        openai.Message{
            role: .user
            content: 'What is V programming language?'
        },
    ]
    temperature: 0.3
    max_completion_tokens: 200
)!

println('AI: ${r.result}')
println('Tokens: ${r.usage.total_tokens}\n')

println('═'.repeat(50))
println('✓ Examples completed successfully!')

41 examples/ai/openai/openai_hello.vsh Executable file
@@ -0,0 +1,41 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.clients.openai
import incubaid.herolib.core.playcmds

// Configure OpenAI client to use OpenRouter
playcmds.run(
    heroscript: '
    !!openai.configure
        name: "default"
        url: "https://openrouter.ai/api/v1"
        model_default: "qwen/qwen-2.5-coder-32b-instruct"
    '
)!

// Get the client instance
mut client := openai.get() or {
    eprintln('Failed to get client: ${err}')
    return
}

println('Sending message to OpenRouter...\n')

// Simple hello message
response := client.chat_completion(
    model: 'qwen/qwen-2.5-coder-32b-instruct'
    message: 'Say hello in a friendly way!'
    temperature: 0.7
    max_completion_tokens: 100
) or {
    eprintln('Failed to get completion: ${err}')
    return
}

println('Response from AI:')
println('─'.repeat(50))
println(response.result)
println('─'.repeat(50))
println('\nTokens used: ${response.usage.total_tokens}')
println(' - Prompt: ${response.usage.prompt_tokens}')
println(' - Completion: ${response.usage.completion_tokens}')

@@ -3,6 +3,8 @@
import incubaid.herolib.clients.openai
import incubaid.herolib.core.playcmds

// to set the API key, either set it here, or set the OPENAI_API_KEY environment variable

playcmds.run(
	heroscript: '
	!!openai.configure name: "default" key: "" url: "https://openrouter.ai/api/v1" model_default: "gpt-oss-120b"
@@ -18,3 +20,5 @@ mut r := client.chat_completion(
	temperature: 0.3
	max_completion_tokens: 1024
)!

println(r.result)

134 examples/ai/openai/openai_two_model_pipeline.vsh Executable file
@@ -0,0 +1,134 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.clients.openai
import incubaid.herolib.core.playcmds

// Sample code file to be improved
const sample_code = '
def calculate_sum(numbers):
    total = 0
    for i in range(len(numbers)):
        total = total + numbers[i]
    return total

def find_max(lst):
    max = lst[0]
    for i in range(1, len(lst)):
        if lst[i] > max:
            max = lst[i]
    return max
'

// Configure two OpenAI client instances to use OpenRouter with different models
// Model A: Enhancement model (Qwen Coder)
playcmds.run(
    heroscript: '
    !!openai.configure
        name: "enhancer"
        url: "https://openrouter.ai/api/v1"
        model_default: "qwen/qwen-2.5-coder-32b-instruct"
    '
)!

// Model B: Modification model (Llama 3.3 70B)
playcmds.run(
    heroscript: '
    !!openai.configure
        name: "modifier"
        url: "https://openrouter.ai/api/v1"
        model_default: "meta-llama/llama-3.3-70b-instruct"
    '
)!

mut enhancer := openai.get(name: 'enhancer') or { panic('Failed to get enhancer client: ${err}') }

mut modifier := openai.get(name: 'modifier') or { panic('Failed to get modifier client: ${err}') }

println('═'.repeat(70))
println('🔧 Two-Model Code Enhancement Pipeline - Proof of Concept')
println('🔧 Using OpenAI client configured for OpenRouter')
println('═'.repeat(70))
println('')

// Step 1: Get enhancement suggestions from Model A (Qwen Coder)
println('📝 STEP 1: Code Enhancement Analysis')
println('─'.repeat(70))
println('Model: qwen/qwen-2.5-coder-32b-instruct')
println('Task: Analyze code and suggest improvements\n')

enhancement_prompt := 'You are a code enhancement agent.
Your job is to analyze the following Python code and propose improvements or fixes.
Output your response as **pure edits or diffs only**, not a full rewritten file.
Focus on:
- Performance improvements
- Pythonic idioms
- Bug fixes
- Code clarity

Here is the code to analyze:
${sample_code}

Provide specific edit instructions or diffs.'

println('🤖 Sending to enhancement model...')
enhancement_result := enhancer.chat_completion(
    message: enhancement_prompt
    temperature: 0.3
    max_completion_tokens: 2000
) or {
    eprintln('❌ Enhancement failed: ${err}')
    return
}

println('\n✅ Enhancement suggestions received:')
println('─'.repeat(70))
println(enhancement_result.result)
println('─'.repeat(70))
println('Tokens used: ${enhancement_result.usage.total_tokens}\n')

// Step 2: Apply edits using Model B (Llama 3.3 70B)
println('\n📝 STEP 2: Apply Code Modifications')
println('─'.repeat(70))
println('Model: meta-llama/llama-3.3-70b-instruct')
println('Task: Apply the suggested edits to produce updated code\n')

modification_prompt := 'You are a file editing agent.
Apply the given edits or diffs to the provided file.
Output the updated Python code only, without comments or explanations.

ORIGINAL CODE:
${sample_code}

EDITS TO APPLY:
${enhancement_result.result}

Output only the final, updated Python code.'

println('🤖 Sending to modification model...')
modification_result := modifier.chat_completion(
    message: modification_prompt
    temperature: 0.1
    max_completion_tokens: 2000
) or {
    eprintln('❌ Modification failed: ${err}')
    return
}

println('\n✅ Modified code received:')
println('─'.repeat(70))
println(modification_result.result)
println('─'.repeat(70))
println('Tokens used: ${modification_result.usage.total_tokens}\n')

// Summary
println('\n📊 PIPELINE SUMMARY')
println('═'.repeat(70))
println('Original code length: ${sample_code.len} chars')
println('Enhancement model: qwen/qwen-2.5-coder-32b-instruct')
println('Enhancement tokens: ${enhancement_result.usage.total_tokens}')
println('Modification model: meta-llama/llama-3.3-70b-instruct')
println('Modification tokens: ${modification_result.usage.total_tokens}')
println('Total tokens: ${enhancement_result.usage.total_tokens +
    modification_result.usage.total_tokens}')
println('═'.repeat(70))
println('\n✅ Two-model pipeline completed successfully!')

42 examples/ai/openaiclient_openrouter.vsh Executable file
@@ -0,0 +1,42 @@

#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.clients.openai
import incubaid.herolib.core.playcmds

playcmds.run(
    heroscript: '
    !!openai.configure name:"default"
        url:"https://openrouter.ai/api/v1"
        model_default:"gpt-oss-120b"
    '
    reset: false
)!

// Get the client instance
mut client := openai.get() or {
    eprintln('Failed to get client: ${err}')
    return
}

println(client.list_models()!)

println('Sending message to OpenRouter...\n')

// Simple hello message
response := client.chat_completion(
    model: 'qwen/qwen-2.5-coder-32b-instruct'
    message: 'Say hello in a friendly way!'
    temperature: 0.7
    max_completion_tokens: 100
) or {
    eprintln('Failed to get completion: ${err}')
    return
}

println('Response from AI:')
println('─'.repeat(50))
println(response.result)
println('─'.repeat(50))
println('\nTokens used: ${response.usage.total_tokens}')
println(' - Prompt: ${response.usage.prompt_tokens}')
println(' - Completion: ${response.usage.completion_tokens}')

9 examples/ai/readme.md Normal file
@@ -0,0 +1,9 @@
Configuration can happen by means of environment variables, e.g.:

```bash
export OPENROUTER_API_KEY='sk-or-v1-..'
export JINAKEY='jina_..'
export GROQKEY='gsk_..'
```
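For illustration, a client script can read such a variable directly with `os.getenv`, the same way the Jina example below reads `JINAKEY`. This is a minimal sketch, not part of the examples themselves:

```v
import os

fn main() {
    // Sketch only: fail early when the required key is missing.
    key := os.getenv('OPENROUTER_API_KEY')
    if key == '' {
        eprintln('OPENROUTER_API_KEY is not set')
        exit(1)
    }
    println('OpenRouter key loaded (${key.len} chars)')
}
```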
@@ -1,71 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

module main

import incubaid.herolib.clients.openai
import os

fn test1(mut client openai.OpenAI) ! {
    instruction := '
    You are a template language converter. You convert Pug templates to Jet templates.

    The target template language, Jet, is defined as follows:
    '

    // Create a chat completion request
    res := client.chat_completion(
        msgs: openai.Messages{
            messages: [
                openai.Message{
                    role: .user
                    content: 'What are the key differences between Groq and other AI inference providers?'
                },
            ]
        }
    )!

    // Print the response
    println('\nGroq AI Response:')
    println('==================')
    println(res.choices[0].message.content)
    println('\nUsage Statistics:')
    println('Prompt tokens: ${res.usage.prompt_tokens}')
    println('Completion tokens: ${res.usage.completion_tokens}')
    println('Total tokens: ${res.usage.total_tokens}')
}

fn test2(mut client openai.OpenAI) ! {
    // Create a chat completion request
    res := client.chat_completion(
        model: 'deepseek-r1-distill-llama-70b'
        msgs: openai.Messages{
            messages: [
                openai.Message{
                    role: .user
                    content: 'A story of 10 lines?'
                },
            ]
        }
    )!

    println('\nGroq AI Response:')
    println('==================')
    println(res.choices[0].message.content)
    println('\nUsage Statistics:')
    println('Prompt tokens: ${res.usage.prompt_tokens}')
    println('Completion tokens: ${res.usage.completion_tokens}')
    println('Total tokens: ${res.usage.total_tokens}')
}

println("
TO USE:
export AIKEY='gsk_...'
export AIURL='https://api.groq.com/openai/v1'
export AIMODEL='llama-3.3-70b-versatile'
")

mut client := openai.get(name: 'test')!
println(client)

// test1(mut client)!
test2(mut client)!
examples/builder/zosbuilder.vsh (391 lines, Executable file)
@@ -0,0 +1,391 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.builder
import incubaid.herolib.core.pathlib

// Configuration for the remote builder
// Update these values for your remote machine
const remote_host = 'root@65.109.31.171' // Change to your remote host

const remote_port = 22 // SSH port

// Build configuration
const build_dir = '/root/zosbuilder'
const repo_url = 'https://git.ourworld.tf/tfgrid/zosbuilder'

// Optional: Set to true to upload kernel to S3
const upload_kernel = false

fn main() {
    println('=== Zero OS Builder - Remote Build System ===\n')

    // Initialize builder
    mut b := builder.new() or {
        eprintln('Failed to initialize builder: ${err}')
        exit(1)
    }

    // Connect to remote node
    println('Connecting to remote builder: ${remote_host}:${remote_port}')
    mut node := b.node_new(
        ipaddr: '${remote_host}:${remote_port}'
        name: 'zosbuilder'
    ) or {
        eprintln('Failed to connect to remote node: ${err}')
        exit(1)
    }

    // Run the build process
    build_zos(mut node) or {
        eprintln('Build failed: ${err}')
        exit(1)
    }

    println('\n=== Build completed successfully! ===')
}

fn build_zos(mut node builder.Node) ! {
    println('\n--- Step 1: Installing prerequisites ---')
    install_prerequisites(mut node)!

    println('\n--- Step 2: Cloning zosbuilder repository ---')
    clone_repository(mut node)!

    println('\n--- Step 3: Creating RFS configuration ---')
    create_rfs_config(mut node)!

    println('\n--- Step 4: Running build ---')
    run_build(mut node)!

    println('\n--- Step 5: Checking build artifacts ---')
    check_artifacts(mut node)!

    println('\n=== Build completed successfully! ===')
}

fn install_prerequisites(mut node builder.Node) ! {
    println('Detecting platform...')

    // Check platform type
    if node.platform == .ubuntu {
        println('Installing Ubuntu/Debian prerequisites...')

        // Update package list and install all required packages
        node.exec_cmd(
            cmd: '
            apt-get update
            apt-get install -y \\
                build-essential \\
                upx-ucl \\
                binutils \\
                git \\
                wget \\
                curl \\
                qemu-system-x86 \\
                podman \\
                musl-tools \\
                cpio \\
                xz-utils \\
                bc \\
                flex \\
                bison \\
                libelf-dev \\
                libssl-dev

            # Install rustup and Rust toolchain
            if ! command -v rustup &> /dev/null; then
                echo "Installing rustup..."
                curl --proto "=https" --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
                source "\$HOME/.cargo/env"
            fi

            # Add Rust musl target
            source "\$HOME/.cargo/env"
            rustup target add x86_64-unknown-linux-musl
            '
            name: 'install_ubuntu_packages'
            reset: true
        )!
    } else if node.platform == .alpine {
        println('Installing Alpine prerequisites...')

        node.exec_cmd(
            cmd: '
            apk add --no-cache \\
                build-base \\
                rust \\
                cargo \\
                upx \\
                git \\
                wget \\
                qemu-system-x86 \\
                podman

            # Add Rust musl target
            rustup target add x86_64-unknown-linux-musl || echo "rustup not available"
            '
            name: 'install_alpine_packages'
            reset: true
        )!
    } else {
        return error('Unsupported platform: ${node.platform}. Only Ubuntu/Debian and Alpine are supported.')
    }

    println('Prerequisites installed successfully')
}

fn clone_repository(mut node builder.Node) ! {
    // Clean up disk space first
    println('Cleaning up disk space...')
    node.exec_cmd(
        cmd: '
        # Remove old build directories if they exist
        rm -rf ${build_dir} || true

        # Clean up podman/docker cache to free space
        podman system prune -af || true

        # Clean up package manager cache
        if command -v apt-get &> /dev/null; then
            apt-get clean || true
        fi

        # Show disk space
        df -h /
        '
        name: 'cleanup_disk_space'
        stdout: true
    )!

    // Clone the repository
    println('Cloning from ${repo_url}...')
    node.exec_cmd(
        cmd: '
        git clone ${repo_url} ${build_dir}
        cd ${build_dir}
        git log -1 --oneline
        '
        name: 'clone_zosbuilder'
        stdout: true
    )!

    println('Repository cloned successfully')
}

fn create_rfs_config(mut node builder.Node) ! {
    println('Creating config/rfs.conf...')

    rfs_config := 'S3_ENDPOINT="http://wizenoze.grid.tf:3900"
S3_REGION="garage"
S3_BUCKET="zos"
S3_PREFIX="store"
S3_ACCESS_KEY="<put key here>"
S3_SECRET_KEY="<put key here>"
WEB_ENDPOINT=""
MANIFESTS_SUBPATH="flists"
READ_ACCESS_KEY="<put key here>"
READ_SECRET_KEY="<put key here>"
ROUTE_ENDPOINT="http://wizenoze.grid.tf:3900"
ROUTE_PATH="/zos/store"
ROUTE_REGION="garage"
KEEP_S3_FALLBACK="false"
UPLOAD_MANIFESTS="true"
'

    // Create config directory if it doesn't exist
    node.exec_cmd(
        cmd: 'mkdir -p ${build_dir}/config'
        name: 'create_config_dir'
        stdout: false
    )!

    // Write the RFS configuration file
    node.file_write('${build_dir}/config/rfs.conf', rfs_config)!

    // Verify the file was created
    result := node.exec(
        cmd: 'cat ${build_dir}/config/rfs.conf'
        stdout: false
    )!

    println('RFS configuration created successfully')
    println('Config preview:')
    println(result)

    // Skip the youki component by removing it from sources.conf
    println('\nRemoving youki from sources.conf (requires SSH keys)...')
    node.exec_cmd(
        cmd: '
        # Remove any line containing youki from sources.conf
        grep -v "youki" ${build_dir}/config/sources.conf > ${build_dir}/config/sources.conf.tmp
        mv ${build_dir}/config/sources.conf.tmp ${build_dir}/config/sources.conf

        # Verify it was removed
        echo "Updated sources.conf:"
        cat ${build_dir}/config/sources.conf
        '
        name: 'remove_youki'
        stdout: true
    )!
    println('youki component skipped')
}

fn run_build(mut node builder.Node) ! {
    println('Starting build process...')
    println('This may take 15-30 minutes depending on your system...')
    println('Status updates will be printed every 2 minutes...\n')

    // Check disk space before building
    println('Checking disk space...')
    disk_info := node.exec(
        cmd: 'df -h ${build_dir}'
        stdout: false
    )!
    println(disk_info)

    // Clean up any previous build artifacts and corrupted databases
    println('Cleaning up previous build artifacts...')
    node.exec_cmd(
        cmd: '
        cd ${build_dir}

        # Remove dist directory to clean up any corrupted databases
        rm -rf dist/

        # Clean up any temporary files
        rm -rf /tmp/rfs-* || true

        # Show available disk space after cleanup
        df -h ${build_dir}
        '
        name: 'cleanup_before_build'
        stdout: true
    )!

    // Make scripts executable and run build with periodic status messages
    mut build_cmd := '
    cd ${build_dir}

    # Source Rust environment
    source "\$HOME/.cargo/env"

    # Make scripts executable
    chmod +x scripts/build.sh scripts/clean.sh

    # Set environment variables
    export UPLOAD_KERNEL=${upload_kernel}
    export UPLOAD_MANIFESTS=false

    # Create a wrapper script that prints status every 2 minutes
    cat > /tmp/build_with_status.sh << "EOF"
#!/bin/bash
set -e

# Source Rust environment
source "\$HOME/.cargo/env"

# Start the build in background
./scripts/build.sh &
BUILD_PID=\$!

# Print status every 2 minutes while build is running
COUNTER=0
while kill -0 \$BUILD_PID 2>/dev/null; do
    sleep 120
    COUNTER=\$((COUNTER + 2))
    echo ""
    echo "=== Build still in progress... (\${COUNTER} minutes elapsed) ==="
    echo ""
done

# Wait for build to complete and get exit code
wait \$BUILD_PID
EXIT_CODE=\$?

if [ \$EXIT_CODE -eq 0 ]; then
    echo ""
    echo "=== Build completed successfully after \${COUNTER} minutes ==="
else
    echo ""
    echo "=== Build failed after \${COUNTER} minutes with exit code \$EXIT_CODE ==="
fi

exit \$EXIT_CODE
EOF

    chmod +x /tmp/build_with_status.sh
    /tmp/build_with_status.sh
    '

    // Execute build with output
    result := node.exec_cmd(
        cmd: build_cmd
        name: 'zos_build'
        stdout: true
        reset: true
        period: 0 // Don't cache, always rebuild
    )!

    println('\nBuild completed!')
    println(result)
}

fn check_artifacts(mut node builder.Node) ! {
    println('Checking build artifacts in ${build_dir}/dist/...')

    // List the dist directory
    result := node.exec(
        cmd: 'ls -lh ${build_dir}/dist/'
        stdout: true
    )!

    println('\nBuild artifacts:')
    println(result)

    // Check for expected files
    vmlinuz_exists := node.file_exists('${build_dir}/dist/vmlinuz.efi')
    initramfs_exists := node.file_exists('${build_dir}/dist/initramfs.cpio.xz')

    if vmlinuz_exists && initramfs_exists {
        println('\n✓ Build artifacts created successfully:')
        println('  - vmlinuz.efi (Kernel with embedded initramfs)')
        println('  - initramfs.cpio.xz (Standalone initramfs archive)')

        // Get file sizes
        size_info := node.exec(
            cmd: 'du -h ${build_dir}/dist/vmlinuz.efi ${build_dir}/dist/initramfs.cpio.xz'
            stdout: false
        )!
        println('\nFile sizes:')
        println(size_info)
    } else {
        return error('Build artifacts not found. Build may have failed.')
    }
}

// Download artifacts to local machine
fn download_artifacts(mut node builder.Node, local_dest string) ! {
    println('Downloading artifacts to local machine...')

    mut dest_path := pathlib.get_dir(path: local_dest, create: true)!

    println('Downloading to ${dest_path.path}...')

    // Download the entire dist directory
    node.download(
        source: '${build_dir}/dist/'
        dest: dest_path.path
    )!

    println('\n✓ Artifacts downloaded successfully to ${dest_path.path}')

    // List downloaded files
    println('\nDownloaded files:')
    result := node.exec(
        cmd: 'ls -lh ${dest_path.path}'
        stdout: false
    ) or {
        println('Could not list local files')
        return
    }
    println(result)
}
examples/builder/zosbuilder_README.md (224 lines, Normal file)
@@ -0,0 +1,224 @@
# Zero OS Builder - Remote Build System

This example demonstrates how to build [Zero OS (zosbuilder)](https://git.ourworld.tf/tfgrid/zosbuilder) on a remote machine using the herolib builder module.

## Overview

The zosbuilder creates a Zero OS Alpine Initramfs with:
- Alpine Linux 3.22 base
- Custom kernel with embedded initramfs
- ThreeFold components (zinit, rfs, mycelium, zosstorage)
- Optimized size with UPX compression
- Two-stage module loading

## Prerequisites

### Local Machine
- V compiler installed
- SSH access to a remote build machine
- herolib installed

### Remote Build Machine
The script will automatically install these on the remote machine:
- **Ubuntu/Debian**: build-essential, rustc, cargo, upx-ucl, binutils, git, wget, qemu-system-x86, podman, musl-tools
- **Alpine Linux**: build-base, rust, cargo, upx, git, wget, qemu-system-x86, podman
- Rust musl target (x86_64-unknown-linux-musl)

## Configuration

Edit the constants in `zosbuilder.vsh`:

```v
const (
    // Remote machine connection
    remote_host = 'root@195.192.213.2' // Your remote host
    remote_port = 22 // SSH port

    // Build configuration
    build_dir = '/root/zosbuilder' // Build directory on remote
    repo_url  = 'https://git.ourworld.tf/tfgrid/zosbuilder'

    // Optional: Upload kernel to S3
    upload_kernel = false
)
```

## Usage

### Basic Build

```bash
# Make the script executable
chmod +x zosbuilder.vsh

# Run the build
./zosbuilder.vsh
```

### What the Script Does

1. **Connects to Remote Machine**: Establishes SSH connection to the build server
2. **Installs Prerequisites**: Automatically installs all required build tools
3. **Clones Repository**: Fetches the latest zosbuilder code
4. **Runs Build**: Executes the build process (takes 15-30 minutes)
5. **Verifies Artifacts**: Checks that build outputs were created successfully

### Build Output

The build creates two main artifacts in `${build_dir}/dist/`:
- `vmlinuz.efi` - Kernel with embedded initramfs (bootable)
- `initramfs.cpio.xz` - Standalone initramfs archive
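Because `vmlinuz.efi` embeds the initramfs, it can in principle be boot-tested directly with QEMU. The flags below are illustrative only; the repository's `scripts/test-qemu.sh` (see Testing the Build) is the supported path:

```bash
# Illustrative QEMU invocation - consult scripts/test-qemu.sh for the exact flags.
qemu-system-x86_64 \
    -m 2048 \
    -kernel /root/zosbuilder/dist/vmlinuz.efi \
    -append "console=ttyS0" \
    -nographic
```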
## Build Process Details

The zosbuilder follows these phases:

### Phase 1: Environment Setup
- Creates build directories
- Installs build dependencies
- Sets up the Rust musl target

### Phase 2: Alpine Base
- Downloads the Alpine 3.22 miniroot
- Extracts it into the initramfs directory
- Installs packages from config/packages.list

### Phase 3: Component Building
- Builds zinit (init system)
- Builds rfs (remote filesystem)
- Builds mycelium (networking)
- Builds zosstorage (storage orchestration)

### Phase 4: System Configuration
- Replaces /sbin/init with zinit
- Copies the zinit configuration
- Sets up two-stage module loading
- Configures system services

### Phase 5: Optimization
- Removes docs, man pages, and locales
- Strips executables and libraries
- UPX-compresses all binaries
- Aggressive cleanup

### Phase 6: Packaging
- Creates initramfs.cpio.xz with XZ compression (sketched below)
- Builds the kernel with embedded initramfs
- Generates vmlinuz.efi
- Optionally uploads to S3
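Conceptually, the Phase 6 packaging step corresponds to the standard initramfs recipe sketched here; the actual commands live in the zosbuilder scripts and may differ:

```bash
# Conceptual sketch of Phase 6 packaging - not the project's actual build script.
# The kernel requires a CRC32 integrity check for an XZ-compressed initramfs.
cd initramfs
find . | cpio -o -H newc | xz --check=crc32 -9 > ../dist/initramfs.cpio.xz
```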
## Advanced Usage

### Download Artifacts to Local Machine

Add this to your script after the build completes:

```v
// Download artifacts to local machine
download_artifacts(mut node, '/tmp/zos-artifacts') or {
    eprintln('Failed to download artifacts: ${err}')
}
```

### Custom Build Configuration

You can modify the build by editing files on the remote machine before building:

```v
// After cloning, before building
node.file_write('${build_dir}/config/packages.list', 'your custom packages')!
```

### Rebuild Without Re-cloning

To rebuild without re-cloning the repository, modify the script to skip the clone step:

```v
// Comment out the clone_repository call
// clone_repository(mut node)!

// Or just run the build directly
node.exec_cmd(
    cmd: 'cd ${build_dir} && ./scripts/build.sh'
    name: 'zos_rebuild'
)!
```

## Testing the Build

After building, you can test the kernel with QEMU:

```bash
# On the remote machine
cd /root/zosbuilder
./scripts/test-qemu.sh
```

## Troubleshooting

### Build Fails

1. Check the build output for specific errors
2. Verify all prerequisites are installed
3. Ensure sufficient disk space (at least 5 GB)
4. Check internet connectivity for downloading components

### SSH Connection Issues

1. Verify SSH access: `ssh root@195.192.213.2`
2. Check that SSH key authentication is set up
3. Verify the remote host and port are correct

### Missing Dependencies

The script installs dependencies automatically, but if manual installation is needed:

**Ubuntu/Debian:**
```bash
sudo apt-get update
sudo apt-get install -y build-essential rustc cargo upx-ucl binutils git wget qemu-system-x86 podman musl-tools
rustup target add x86_64-unknown-linux-musl
```

**Alpine Linux:**
```bash
apk add --no-cache build-base rust cargo upx git wget qemu-system-x86 podman
rustup target add x86_64-unknown-linux-musl
```

## Integration with CI/CD

This builder can be integrated into CI/CD pipelines:

```v
// Example: Build and upload to artifact storage
fn ci_build() ! {
    mut b := builder.new()!
    mut node := b.node_new(ipaddr: '${ci_builder_host}')!

    build_zos(mut node)!

    // Upload to artifact storage
    node.exec_cmd(
        cmd: 's3cmd put ${build_dir}/dist/* s3://artifacts/zos/'
        name: 'upload_artifacts'
    )!
}
```

## Related Examples

- `simple.vsh` - Basic builder usage
- `remote_executor/` - Remote code execution
- `simple_ip4.vsh` - IPv4 connection example
- `simple_ip6.vsh` - IPv6 connection example

## References

- [zosbuilder Repository](https://git.ourworld.tf/tfgrid/zosbuilder)
- [herolib Builder Documentation](../../lib/builder/readme.md)
- [Zero OS Documentation](https://manual.grid.tf/)

## License

This example follows the same license as herolib.
@@ -1,50 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.clients.jina
import os
import json

fn main() {
    // Initialize Jina client
    mut j := jina.Jina{
        name: 'test_client'
        secret: os.getenv('JINAKEY')
    }

    // Initialize the client
    j = jina.obj_init(j) or {
        println('Error initializing Jina client: ${err}')
        return
    }

    // Check if authentication works
    auth_ok := j.check_auth() or {
        println('Authentication failed: ${err}')
        return
    }

    println('Authentication successful: ${auth_ok}')

    // Create embeddings
    model := 'jina-embeddings-v2-base-en'
    input := ['Hello world', 'This is a test']

    embeddings := j.create_embeddings(input, model, 'search') or {
        println('Error creating embeddings: ${err}')
        return
    }

    println('Embeddings created successfully!')
    println('Model: ${embeddings.model}')
    println('Dimension: ${embeddings.dimension}')
    println('Number of embeddings: ${embeddings.data.len}')

    // If there are embeddings, print the first one (truncated)
    if embeddings.data.len > 0 {
        first_embedding := embeddings.data[0]
        println('First embedding (first 5 values): ${first_embedding.embedding[0..5]}')
    }

    // Usage information
    println('Token usage: ${embeddings.usage.total_tokens} ${embeddings.usage.unit}')
}
examples/core/code/code_generator.vsh (182 lines, Executable file)
@@ -0,0 +1,182 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.core.pathlib
import incubaid.herolib.ui.console
import incubaid.herolib.ai.client
import os

fn main() {
    console.print_header('Code Generator - V File Analyzer Using AI')

    // Find herolib root directory using @FILE
    script_dir := os.dir(@FILE)
    // Navigate from examples/core/code to the herolib root: up 3 levels
    herolib_root := os.dir(os.dir(os.dir(script_dir)))

    console.print_item('HeroLib Root: ${herolib_root}')

    // The directory we want to analyze (lib/core in this case)
    target_dir := herolib_root + '/lib/core'
    console.print_item('Target Directory: ${target_dir}')
    console.print_lf(1)

    // Load instruction files from aiprompts
    console.print_item('Loading instruction files...')

    mut ai_instructions_file := pathlib.get(herolib_root + '/aiprompts/ai_instructions_hero_models.md')
    mut vlang_core_file := pathlib.get(herolib_root + '/aiprompts/vlang_herolib_core.md')

    ai_instructions_content := ai_instructions_file.read()!
    vlang_core_content := vlang_core_file.read()!

    console.print_green('✓ Instruction files loaded successfully')
    console.print_lf(1)

    // Initialize AI client
    console.print_item('Initializing AI client...')
    mut aiclient := client.new()!
    console.print_green('✓ AI client initialized')
    console.print_lf(1)

    // Get all V files from target directory
    console.print_item('Scanning directory for V files...')

    mut target_path := pathlib.get_dir(path: target_dir, create: false)!
    mut all_files := target_path.list(
        regex: [r'\.v$']
        recursive: true
    )!

    console.print_item('Found ${all_files.paths.len} total V files')

    // Walk over all files which do NOT end with _test.v and do NOT start with factory.
    // Each file becomes a src_file_content object.
    mut files_to_process := []pathlib.Path{}

    for file in all_files.paths {
        file_name := file.name()

        // Skip test files
        if file_name.ends_with('_test.v') {
            continue
        }

        // Skip factory files
        if file_name.starts_with('factory') {
            continue
        }

        files_to_process << file
    }

    console.print_green('✓ After filtering: ${files_to_process.len} files to process')
    console.print_lf(2)

    // Process each file with AI
    total_files := files_to_process.len

    for idx, mut file in files_to_process {
        current_idx := idx + 1
        process_file_with_ai(mut aiclient, mut file, ai_instructions_content, vlang_core_content,
            current_idx, total_files)!
    }

    console.print_lf(1)
    console.print_header('✓ Code Generation Complete')
    console.print_item('Processed ${files_to_process.len} files')
    console.print_lf(1)
}

fn process_file_with_ai(mut aiclient client.AIClient, mut file pathlib.Path, ai_instructions string, vlang_core string, current int, total int) ! {
    file_name := file.name()
    src_file_path := file.absolute()

    console.print_item('[${current}/${total}] Analyzing: ${file_name}')

    // Read the file content - this is the src_file_content
    src_file_content := file.read()!

    // Build a comprehensive user prompt with context
    // TODO: Load instructions from prompt files and use in prompt
    user_prompt := '
File: ${file_name}
Path: ${src_file_path}

Current content:
\`\`\`v
${src_file_content}
\`\`\`

Please improve this V file by:
1. Following V language best practices
2. Ensuring proper error handling with ! and or blocks
3. Adding clear documentation comments
4. Following herolib patterns and conventions
5. Improving code clarity and readability

Context from herolib guidelines:

VLANG HEROLIB CORE:
${vlang_core}

AI INSTRUCTIONS FOR HERO MODELS:
${ai_instructions}

Return ONLY the complete improved file wrapped in a \`\`\`v code block.
'

    console.print_debug_title('Sending to AI', 'Calling AI model to improve ${file_name}...')

    // TODO: Call AI client with model gemini-3-pro
    aiclient.write_from_prompt(file, user_prompt, [.pro]) or {
        console.print_stderr('Error processing ${file_name}: ${err}')
        return
    }

    mut improved_file := pathlib.get(src_file_path + '.improved')
    improved_content := improved_file.read()!

    // Display improvements summary
    sample_chars := 250
    preview := if improved_content.len > sample_chars {
        improved_content[..sample_chars] + '... (preview truncated)'
    } else {
        improved_content
    }

    console.print_debug_title('AI Analysis Results for ${file_name}', preview)

    // Optional: Save improved version for review
    // Uncomment to enable saving
    // improved_file_path := src_file_path + '.improved'
    // mut improved_file := pathlib.get_file(path: improved_file_path, create: true)!
    // improved_file.write(improved_content)!
    // console.print_green('✓ Improvements saved to: ${improved_file_path}')

    console.print_lf(1)
}

// Extract V code from a markdown code block
fn extract_code_block(response string) string {
    // Look for a ```v ... ``` block
    start_marker := '\`\`\`v'
    end_marker := '\`\`\`'

    start_idx := response.index(start_marker) or {
        // If there is no ```v fence, return the response as-is
        return response
    }

    mut content_start := start_idx + start_marker.len
    if content_start < response.len && response[content_start] == `\n` {
        content_start++
    }

    // Search for the closing fence only after the opening marker; searching the
    // whole response would match the backticks of the opening fence itself.
    rest := response[content_start..]
    end_idx := rest.index(end_marker) or { return rest }

    extracted := rest[..end_idx]
    return extracted.trim_space()
}
examples/core/code/code_parser.vsh (56 lines, Executable file)
@@ -0,0 +1,56 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.core.code
import incubaid.herolib.ui.console
import os

console.print_header('Code Parser Example - lib/core/pathlib Analysis')
console.print_lf(1)

pathlib_dir := os.home_dir() + '/code/github/incubaid/herolib/lib/core/pathlib'

// Step 1: List all V files
console.print_header('1. Listing V Files')
v_files := code.list_v_files(pathlib_dir)!
for file in v_files {
    console.print_item(os.base(file))
}
console.print_lf(1)

// Step 2: Parse and analyze each file
console.print_header('2. Parsing Files - Summary')
for v_file_path in v_files {
    content := os.read_file(v_file_path)!
    vfile := code.parse_vfile(content)!

    console.print_item('${os.base(v_file_path)}')
    console.print_item('  Module: ${vfile.mod}')
    console.print_item('  Imports: ${vfile.imports.len}')
    console.print_item('  Structs: ${vfile.structs().len}')
    console.print_item('  Functions: ${vfile.functions().len}')
}
console.print_lf(1)

// // Step 3: Find Path struct
// console.print_header('3. Analyzing Path Struct')
// path_code := code.get_type_from_module(pathlib_dir, 'Path')!
// console.print_stdout(path_code)
// console.print_lf(1)

// Step 4: List all public functions
console.print_header('4. Public Functions in pathlib')
for v_file_path in v_files {
    content := os.read_file(v_file_path)!
    vfile := code.parse_vfile(content)!

    pub_functions := vfile.functions().filter(it.is_pub)
    if pub_functions.len > 0 {
        console.print_item('From ${os.base(v_file_path)}:')
        for f in pub_functions {
            console.print_item('  ${f.name}() -> ${f.result.typ.symbol()}')
        }
    }
}
console.print_lf(1)

console.print_green('✓ Analysis completed!')
examples/core/flows/runner_test.vsh (337 lines, Executable file)
@@ -0,0 +1,337 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.core.flows
import incubaid.herolib.core.redisclient
import incubaid.herolib.ui.console
import incubaid.herolib.data.ourtime
import time

fn main() {
    mut cons := console.new()

    console.print_header('Flow Runner Test Suite')
    console.print_lf(1)

    // Test 1: Basic Flow Execution
    console.print_item('Test 1: Basic Flow with Successful Steps')
    test_basic_flow()!
    console.print_lf(1)

    // Test 2: Error Handling
    console.print_item('Test 2: Error Handling with Error Steps')
    test_error_handling()!
    console.print_lf(1)

    // Test 3: Multiple Next Steps
    console.print_item('Test 3: Multiple Next Steps')
    test_multiple_next_steps()!
    console.print_lf(1)

    // Test 4: Redis State Retrieval
    console.print_item('Test 4: Redis State Retrieval and JSON')
    test_redis_state()!
    console.print_lf(1)

    // Test 5: Complex Flow Chain
    console.print_item('Test 5: Complex Flow Chain')
    test_complex_flow()!
    console.print_lf(1)

    console.print_header('All Tests Completed Successfully!')
}

fn test_basic_flow() ! {
    mut redis := redisclient.core_get()!
    redis.flushdb()!

    mut coordinator := flows.new(
        name: 'test_basic_flow'
        redis: redis
        ai: none
    )!

    // Step 1: Initialize
    mut step1 := coordinator.step_new(
        name: 'initialize'
        description: 'Initialize test environment'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Step 1: Initializing...')
            s.context['init_time'] = ourtime.now().str()
        }
    )!

    // Step 2: Process
    mut step2 := coordinator.step_new(
        name: 'process'
        description: 'Process data'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Step 2: Processing...')
            s.context['processed'] = 'true'
        }
    )!

    // Step 3: Finalize
    mut step3 := coordinator.step_new(
        name: 'finalize'
        description: 'Finalize results'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Step 3: Finalizing...')
            s.context['status'] = 'completed'
        }
    )!

    step1.next_step_add(step2)
    step2.next_step_add(step3)

    coordinator.run()!

    // Verify Redis state
    state := coordinator.get_all_steps_state()!
    assert state.len >= 3, 'Expected at least 3 steps in Redis'

    for step_state in state {
        assert step_state['status'] == 'success', 'Expected all steps to be successful'
    }

    println('  ✓ Test 1 PASSED: All steps executed successfully')
    coordinator.clear_redis()!
}

fn test_error_handling() ! {
    mut redis := redisclient.core_get()!
    redis.flushdb()!

    mut coordinator := flows.new(
        name: 'test_error_flow'
        redis: redis
        ai: none
    )!

    // Error step
    mut error_recovery := coordinator.step_new(
        name: 'error_recovery'
        description: 'Recover from error'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Error Step: Executing recovery...')
            s.context['recovered'] = 'true'
        }
    )!

    // Main step that fails
    mut main_step := coordinator.step_new(
        name: 'failing_step'
        description: 'This step will fail'
        f: fn (mut s flows.Step) ! {
            println('  ✗ Main Step: Intentionally failing...')
            return error('Simulated error for testing')
        }
    )!

    main_step.error_step_add(error_recovery)

    // Run and expect error
    coordinator.run() or { println('  ✓ Error caught as expected: ${err.msg()}') }

    // Verify error state in Redis
    error_state := coordinator.get_step_state('failing_step')!
    assert error_state['status'] == 'error', 'Expected step to be in error state'

    recovery_state := coordinator.get_step_state('error_recovery')!
    assert recovery_state['status'] == 'success', 'Expected error step to execute'

    println('  ✓ Test 2 PASSED: Error handling works correctly')
    coordinator.clear_redis()!
}

fn test_multiple_next_steps() ! {
    mut redis := redisclient.core_get()!
    redis.flushdb()!

    mut coordinator := flows.new(
        name: 'test_parallel_steps'
        redis: redis
        ai: none
    )!

    // Parent step
    mut parent := coordinator.step_new(
        name: 'parent_step'
        description: 'Parent step with multiple children'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Parent Step: Executing...')
        }
    )!

    // Child steps
    mut child1 := coordinator.step_new(
        name: 'child_step_1'
        description: 'First child'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Child Step 1: Executing...')
        }
    )!

    mut child2 := coordinator.step_new(
        name: 'child_step_2'
        description: 'Second child'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Child Step 2: Executing...')
        }
    )!

    mut child3 := coordinator.step_new(
        name: 'child_step_3'
        description: 'Third child'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Child Step 3: Executing...')
        }
    )!

    // Add multiple next steps
    parent.next_step_add(child1)
    parent.next_step_add(child2)
    parent.next_step_add(child3)

    coordinator.run()!

    // Verify all steps executed
    all_states := coordinator.get_all_steps_state()!
    assert all_states.len >= 4, 'Expected 4 steps to execute'

    println('  ✓ Test 3 PASSED: Multiple next steps executed sequentially')
    coordinator.clear_redis()!
}

fn test_redis_state() ! {
    mut redis := redisclient.core_get()!
    redis.flushdb()!

    mut coordinator := flows.new(
        name: 'test_redis_state'
        redis: redis
        ai: none
    )!

    mut step1 := coordinator.step_new(
        name: 'redis_test_step'
        description: 'Test Redis state storage'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Executing step with context...')
            s.context['user'] = 'test_user'
            s.context['action'] = 'test_action'
        }
    )!

    coordinator.run()!

    // Retrieve state from Redis
    step_state := coordinator.get_step_state('redis_test_step')!

    println('  Step state in Redis:')
    for key, value in step_state {
        println('    ${key}: ${value}')
    }

    // Verify fields
    assert step_state['name'] == 'redis_test_step', 'Step name mismatch'
    assert step_state['status'] == 'success', 'Step status should be success'
    assert step_state['description'] == 'Test Redis state storage', 'Description mismatch'

    // Verify JSON is stored
    if json_data := step_state['json'] {
        println('  ✓ JSON data stored in Redis: ${json_data[0..50]}...')
    }

    // Verify log count
    logs_count := step_state['logs_count'] or { '0' }
    println('  ✓ Logs count: ${logs_count}')

    println('  ✓ Test 4 PASSED: Redis state correctly stored and retrieved')
    coordinator.clear_redis()!
}

fn test_complex_flow() ! {
    mut redis := redisclient.core_get()!
    redis.flushdb()!

    mut coordinator := flows.new(
        name: 'test_complex_flow'
        redis: redis
        ai: none
    )!

    // Step 1: Validate
    mut validate := coordinator.step_new(
        name: 'validate_input'
        description: 'Validate input parameters'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Validating input...')
            s.context['validated'] = 'true'
        }
    )!

    // Step 2: Transform (next step after validate)
    mut transform := coordinator.step_new(
        name: 'transform_data'
        description: 'Transform input data'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Transforming data...')
            s.context['transformed'] = 'true'
        }
    )!

    // Step 3a: Save to DB (next step after transform)
    mut save_db := coordinator.step_new(
        name: 'save_to_database'
        description: 'Save data to database'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Saving to database...')
            s.context['saved'] = 'true'
        }
    )!

    // Step 3b: Send notification (next step after transform)
    mut notify := coordinator.step_new(
        name: 'send_notification'
        description: 'Send notification'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Sending notification...')
            s.context['notified'] = 'true'
        }
    )!

    // Step 4: Cleanup (final step)
    mut cleanup := coordinator.step_new(
        name: 'cleanup'
        description: 'Cleanup resources'
        f: fn (mut s flows.Step) ! {
            println('  ✓ Cleaning up...')
            s.context['cleaned'] = 'true'
        }
    )!

    // Build the flow chain
    validate.next_step_add(transform)
    transform.next_step_add(save_db)
    transform.next_step_add(notify)
    save_db.next_step_add(cleanup)
    notify.next_step_add(cleanup)

    coordinator.run()!

    // Verify all steps executed
    all_states := coordinator.get_all_steps_state()!
    println('  Total steps executed: ${all_states.len}')

    for state in all_states {
        name := state['name'] or { 'unknown' }
        status := state['status'] or { 'unknown' }
        duration := state['duration'] or { '0' }
        println('    - ${name}: ${status} (${duration}ms)')
    }

    assert all_states.len >= 5, 'Expected at least 5 steps'

    println('  ✓ Test 5 PASSED: Complex flow executed successfully')
    coordinator.clear_redis()!
}
examples/data/atlas/atlas_auth_web.hero (12 lines, Executable file)
@@ -0,0 +1,12 @@
#!/usr/bin/env hero

!!doctree.scan
    git_url: 'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/mycelium_economics'

!!doctree.scan
    git_url: 'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/authentic_web'

// !!doctree.scan
//     git_url: 'https://git.ourworld.tf/geomind/docs_geomind/src/branch/main/collections/usecases'

!!doctree.export destination: '/tmp/doctree_export'
examples/data/atlas/atlas_example.hero (15 lines, Executable file)
@@ -0,0 +1,15 @@
#!/usr/bin/env hero

!!doctree.scan
    git_url: 'https://git.ourworld.tf/geomind/doctree_geomind/src/branch/main/content'
    meta_path: '/tmp/doctree_export_meta'

!!doctree.scan
    git_url: 'https://git.ourworld.tf/tfgrid/doctree_threefold/src/branch/main/content'
    meta_path: '/tmp/doctree_export_meta'
    ignore3: 'static,templates,groups'

!!doctree.export
    destination: '/tmp/doctree_export_test'
    include: true
    redis: true
examples/data/atlas/atlas_test.hero (5 lines, Executable file)
@@ -0,0 +1,5 @@
#!/usr/bin/env hero

!!doctree.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/tests"

!!doctree.export destination: '/tmp/doctree_export'
examples/data/atlas/debug_recursive/debug_atlas.vsh (308 lines, Normal file)
@@ -0,0 +1,308 @@
#!/usr/bin/env -S vrun

import incubaid.herolib.data.doctree
import incubaid.herolib.ui.console
import os

fn main() {
    println('=== ATLAS DEBUG SCRIPT ===\n')

    // Create and scan doctree
    mut a := doctree.new(name: 'main')!

    // Scan the collections
    println('Scanning collections...\n')
    a.scan(
        path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/mycelium_nodes_tiers'
    )!
    a.scan(
        path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/geomind_compare'
    )!
    a.scan(path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/geoaware')!
    a.scan(
        path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_economics'
    )!
    a.scan(
        path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_concepts'
    )!
    a.scan(
        path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_cloud_tech'
    )!

    // Initialize doctree (post-scanning validation)
    a.init_post()!

    // Print all pages per collection
    println('\n=== COLLECTIONS & PAGES ===\n')
    for col_name, col in a.collections {
        println('Collection: ${col_name}')
        println('  Pages (${col.pages.len}):')
        if col.pages.len > 0 {
            for page_name, _ in col.pages {
                println('    - ${page_name}')
            }
        } else {
            println('    (empty)')
        }
        println('  Files/Images (${col.files.len}):')
        if col.files.len > 0 {
            for file_name, _ in col.files {
                println('    - ${file_name}')
            }
        } else {
            println('    (empty)')
        }
    }

    // Validate links (this will recursively find links across collections)
    println('\n=== VALIDATING LINKS (RECURSIVE) ===\n')
    a.validate_links()!
    println('✓ Link validation complete\n')

    // Check for broken links
    println('\n=== BROKEN LINKS ===\n')
    mut total_errors := 0
    for col_name, col in a.collections {
        if col.has_errors() {
            println('Collection: ${col_name} (${col.errors.len} errors)')
            for err in col.errors {
                println('  [${err.category_str()}] Page: ${err.page_key}')
                println('    Message: ${err.message}')
                println('')
                total_errors++
            }
        }
    }

    if total_errors == 0 {
        println('✓ No broken links found!')
    } else {
        println('\n❌ Total broken link errors: ${total_errors}')
    }

    // Show discovered links per page (validates recursive discovery)
    println('\n\n=== DISCOVERED LINKS (RECURSIVE RESOLUTION) ===\n')
    println('Checking for files referenced by cross-collection pages...\n')
    mut total_links := 0
    for col_name, col in a.collections {
        mut col_has_links := false
        for page_name, page in col.pages {
            if page.links.len > 0 {
                if !col_has_links {
                    println('Collection: ${col_name}')
                    col_has_links = true
                }
                println('  Page: ${page_name} (${page.links.len} links)')
                for link in page.links {
                    target_col := if link.target_collection_name != '' {
                        link.target_collection_name
                    } else {
                        col_name
                    }
                    println('    → ${target_col}:${link.target_item_name} [${link.file_type}]')
                    total_links++
                }
            }
        }
    }
    println('\n✓ Total links discovered: ${total_links}')

    // List pages that need investigation
    println('\n=== CHECKING SPECIFIC MISSING PAGES ===\n')

    missing_pages := [
        'compare_electricity',
        'internet_basics',
        'centralization_risk',
        'gdp_negative',
    ]

    // Check in geoaware collection
    if 'geoaware' in a.collections {
        mut geoaware := a.get_collection('geoaware')!

        println('Collection: geoaware')
        if geoaware.pages.len > 0 {
            println('  All pages in collection:')
            for page_name, _ in geoaware.pages {
                println('    - ${page_name}')
            }
        } else {
            println('  (No pages found)')
        }

        println('\n  Checking for specific missing pages:')
        for page_name in missing_pages {
            exists := page_name in geoaware.pages
            status := if exists { '✓' } else { '✗' }
            println('    ${status} ${page_name}')
        }
    }

    // Check for pages across all collections
    println('\n\n=== LOOKING FOR MISSING PAGES ACROSS ALL COLLECTIONS ===\n')

    for missing_page in missing_pages {
        println('Searching for "${missing_page}":')
        mut found := false
        for col_name, col in a.collections {
            if missing_page in col.pages {
                println('  ✓ Found in: ${col_name}')
                found = true
            }
        }
        if !found {
            println('  ✗ Not found in any collection')
        }
    }

    // Check for the solution page
    println('\n\n=== CHECKING FOR "solution" PAGE ===\n')
    for col_name in ['mycelium_nodes_tiers', 'geomind_compare', 'geoaware', 'mycelium_economics',
        'mycelium_concepts', 'mycelium_cloud_tech'] {
        if col_name in a.collections {
            mut col := a.get_collection(col_name)!
            exists := col.page_exists('solution')!
            status := if exists { '✓' } else { '✗' }
            println('${status} ${col_name}: "solution" page')
        }
    }

    // Print error summary
    println('\n\n=== ERROR SUMMARY BY CATEGORY ===\n')
    mut category_counts := map[string]int{}
    for _, col in a.collections {
        for err in col.errors {
            cat_str := err.category_str()
            category_counts[cat_str]++
        }
    }

    if category_counts.len == 0 {
        println('✓ No errors found!')
    } else {
        for cat, count in category_counts {
            println('${cat}: ${count}')
        }
    }

    // ===== EXPORT AND FILE VERIFICATION TEST =====
    println('\n\n=== EXPORT AND FILE VERIFICATION TEST ===\n')

    // Create export directory
    export_path := '/tmp/doctree_debug_export'
    if os.exists(export_path) {
        os.rmdir_all(export_path)!
    }
    os.mkdir_all(export_path)!

    println('Exporting to: ${export_path}\n')
    a.export(destination: export_path)!
    println('✓ Export completed\n')

    // Collect all files found during link validation
    mut expected_files := map[string]string{} // key: file_name, value: collection_name
    mut file_count := 0
    for col_name, col in a.collections {
        for page_name, page in col.pages {
            for link in page.links {
                if link.status == .found && (link.file_type == .file || link.file_type == .image) {
                    file_key := link.target_item_name
                    expected_files[file_key] = link.target_collection_name
                    file_count++
                }
            }
        }
    }

    println('Expected to find ${file_count} file references in links\n')
    println('=== VERIFYING FILES IN EXPORT DIRECTORY ===\n')

    // Get the first collection name (the primary exported collection)
    mut primary_col_name := ''
    for col_name, _ in a.collections {
        primary_col_name = col_name
        break
    }

    if primary_col_name == '' {
        println('❌ No collections found')
    } else {
        mut verified_count := 0
        mut missing_count := 0
        mut found_files := map[string]bool{}

        // Check both img and files directories
        img_dir := '${export_path}/content/${primary_col_name}/img'
        files_dir := '${export_path}/content/${primary_col_name}/files'

        // Scan img directory
        if os.exists(img_dir) {
            img_files := os.ls(img_dir) or { []string{} }
            for img_file in img_files {
                found_files[img_file] = true
            }
        }

        // Scan files directory
        if os.exists(files_dir) {
            file_list := os.ls(files_dir) or { []string{} }
            for file in file_list {
                found_files[file] = true
            }
        }

        println('Files/Images found in export directory:')
        if found_files.len > 0 {
            for file_name, _ in found_files {
                println('  ✓ ${file_name}')
                if file_name in expected_files {
                    verified_count++
                }
            }
        } else {
            println('  (none found)')
        }

        println('\n=== FILE VERIFICATION RESULTS ===\n')
        println('Expected files from links: ${file_count}')
        println('Files found in export: ${found_files.len}')
        println('Files verified (present in export): ${verified_count}')

        // Check for missing expected files
        for expected_file, source_col in expected_files {
            if expected_file !in found_files {
                missing_count++
                println('  ✗ Missing: ${expected_file} (from ${source_col})')
            }
        }

        if missing_count > 0 {
            println('\n❌ ${missing_count} expected files are MISSING from export!')
        } else if verified_count == file_count && file_count > 0 {
            println('\n✓ All expected files are present in export directory!')
        } else if file_count == 0 {
            println('\n⚠ No file links were found during validation (check if pages have file references)')
        }

        // Show directory structure
        println('\n=== EXPORT DIRECTORY STRUCTURE ===\n')
        if os.exists('${export_path}/content/${primary_col_name}') {
            println('${export_path}/content/${primary_col_name}/')

            content_files := os.ls('${export_path}/content/${primary_col_name}') or { []string{} }
            for item in content_files {
                full_path := '${export_path}/content/${primary_col_name}/${item}'
                if os.is_dir(full_path) {
                    sub_items := os.ls(full_path) or { []string{} }
                    println('  ${item}/ (${sub_items.len} items)')
                    for sub_item in sub_items {
                        println('    - ${sub_item}')
                    }
                } else {
                    println('  - ${item}')
                }
            }
        }
    }
}
examples/data/atlas/example.vsh (98 lines, Executable file)
@@ -0,0 +1,98 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.data.doctree
import incubaid.herolib.core.pathlib
import incubaid.herolib.web.doctree_client
import os

// Example: DocTree Export and AtlasClient Usage

println('DocTree Export & Client Example')
println('============================================================')

// Setup test directory
test_dir := '/tmp/doctree_example'
export_dir := '/tmp/doctree_export'
os.rmdir_all(test_dir) or {}
os.rmdir_all(export_dir) or {}
os.mkdir_all(test_dir)!

// Create a collection with some content
col_path := '${test_dir}/docs'
os.mkdir_all(col_path)!

mut cfile := pathlib.get_file(path: '${col_path}/.collection', create: true)!
cfile.write('name:docs')!

mut page1 := pathlib.get_file(path: '${col_path}/intro.md', create: true)!
page1.write('# Introduction\n\nWelcome to the docs!')!

mut page2 := pathlib.get_file(path: '${col_path}/guide.md', create: true)!
page2.write('# Guide\n\n!!include docs:intro\n\nMore content here.')!

// Create and scan doctree
println('\n1. Creating DocTree and scanning...')
mut a := doctree.new(name: 'my_docs')!
a.scan(path: test_dir)!

println('   Found ${a.collections.len} collection(s)')

// Validate links
println('\n2. Validating links...')
a.validate_links()!

col := a.get_collection('docs')!
if col.has_errors() {
    println('   Errors found:')
    col.print_errors()
} else {
    println('   No errors found!')
}

// Export collections
println('\n3. Exporting collections to ${export_dir}...')
a.export(
    destination: export_dir
    include: true // Process includes during export
    redis: false // Don't use Redis for this example
)!
println('   ✓ Export complete')

// Use AtlasClient to access exported content
println('\n4. Using AtlasClient to read exported content...')
mut client := doctree_client.new(export_dir: export_dir)!

// List collections
collections := client.list_collections()!
println('   Collections: ${collections}')

// List pages in docs collection
pages := client.list_pages('docs')!
println('   Pages in docs: ${pages}')

// Read page content
println('\n5. Reading page content via AtlasClient...')
intro_content := client.get_page_content('docs', 'intro')!
println('   intro.md content:')
println('   ${intro_content}')

guide_content := client.get_page_content('docs', 'guide')!
println('\n   guide.md content (with includes processed):')
println('   ${guide_content}')

// Get metadata
println('\n6. Accessing metadata...')
metadata := client.get_collection_metadata('docs')!
println('   Collection name: ${metadata.name}')
println('   Collection path: ${metadata.path}')
println('   Number of pages: ${metadata.pages.len}')

println('\n✓ Example completed successfully!')
println('\nExported files are in: ${export_dir}')
println('   - content/docs/intro.md')
println('   - content/docs/guide.md')
println('   - meta/docs.json')

// Cleanup (commented out so you can inspect the files)
// os.rmdir_all(test_dir) or {}
// os.rmdir_all(export_dir) or {}
@@ -1 +0,0 @@
!!git.check filter:'herolib'
@@ -1,13 +1,15 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals -no-skip-unused run

import incubaid.herolib.hero.heromodels
import incubaid.herolib.hero.db
import time

fn main() {
	// Start the server in a background thread with authentication disabled for testing
	spawn fn () ! {
		heromodels.new(reset: true, name: 'test')!
	spawn fn () {
		heromodels.new(reset: true, name: 'test') or {
			eprintln('Failed to initialize HeroModels: ${err}')
			exit(1)
		}
		heromodels.server_start(
			name: 'test'
			port: 8080
@@ -17,7 +19,10 @@ fn main() {
			allowed_origins: [
				'http://localhost:5173',
			]
		) or { panic('Failed to start HeroModels server: ${err}') }
		) or {
			eprintln('Failed to start HeroModels server: ${err}')
			exit(1)
		}
	}()

	// Keep the main thread alive
93  examples/hero/heromodels/prd.vsh  Normal file
@@ -0,0 +1,93 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.hero.heromodels

// Initialize database
mut mydb := heromodels.new()!

// Create goals
mut goals := [
	heromodels.Goal{
		id: 'G1'
		title: 'Faster Requirements'
		description: 'Reduce PRD creation time to under 1 day'
		gtype: .product
	},
]

// Create use cases
mut use_cases := [
	heromodels.UseCase{
		id: 'UC1'
		title: 'Generate PRD'
		actor: 'Product Manager'
		goal: 'Create validated PRD'
		steps: ['Select template', 'Fill fields', 'Export to Markdown']
		success: 'Complete PRD generated'
		failure: 'Validation failed'
	},
]

// Create requirements
mut criterion := heromodels.AcceptanceCriterion{
	id: 'AC1'
	description: 'Display template list'
	condition: 'List contains >= 5 templates'
}

mut requirements := [
	heromodels.Requirement{
		id: 'R1'
		category: 'Editor'
		title: 'Template Selection'
		rtype: .functional
		description: 'User can select from templates'
		priority: .high
		criteria: [criterion]
		dependencies: []
	},
]

// Create constraints
mut constraints := [
	heromodels.Constraint{
		id: 'C1'
		title: 'ARM64 Support'
		description: 'Must run on ARM64 infrastructure'
		ctype: .technical
	},
]

// Create risks
mut risks := map[string]string{}
risks['RISK1'] = 'Templates too limited → Add community contributions'
risks['RISK2'] = 'AI suggestions inaccurate → Add review workflow'

// Create a new PRD object
mut prd := mydb.prd.new(
	product_name: 'Lumina PRD Builder'
	version: 'v1.0'
	overview: 'Tool to create structured PRDs quickly'
	vision: 'Enable teams to generate clear requirements in minutes'
	goals: goals
	use_cases: use_cases
	requirements: requirements
	constraints: constraints
	risks: risks
)!

// Save to database
prd = mydb.prd.set(prd)!
println('✓ Created PRD with ID: ${prd.id}')

// Retrieve from database
mut retrieved := mydb.prd.get(prd.id)!
println('✓ Retrieved PRD: ${retrieved.product_name}')

// List all PRDs
mut all_prds := mydb.prd.list()!
println('✓ Total PRDs in database: ${all_prds.len}')

// Check if exists
exists := mydb.prd.exist(prd.id)!
println('✓ PRD exists: ${exists}')
@@ -5,7 +5,7 @@ import incubaid.herolib.schemas.openrpc
import os

// 1. Create a new server instance
mut server := heroserver.new(port: 8080)!
mut server := heroserver.new(port: 8081, auth_enabled: false)!

// 2. Create and register your OpenRPC handlers
// These handlers must conform to the `openrpc.OpenRPCHandler` interface.
File diff suppressed because it is too large
69  examples/hero/heroserver/openrpc_back.json  Normal file
@@ -0,0 +1,69 @@
{
  "openrpc": "1.2.6",
  "info": {
    "title": "Comment Service",
    "description": "A simple service for managing comments.",
    "version": "1.0.0"
  },
  "methods": [
    {
      "name": "add_comment",
      "summary": "Add a new comment",
      "params": [
        {
          "name": "text",
          "description": "The content of the comment.",
          "required": true,
          "schema": {
            "type": "string"
          }
        }
      ],
      "result": {
        "name": "comment_id",
        "description": "The ID of the newly created comment.",
        "schema": {
          "type": "string"
        }
      }
    },
    {
      "name": "get_comment",
      "summary": "Get a comment by ID",
      "description": "Retrieves a specific comment using its unique identifier.",
      "params": [
        {
          "name": "id",
          "description": "The unique identifier of the comment to retrieve.",
          "required": true,
          "schema": {
            "type": "number",
            "example": 1
          }
        },
        {
          "name": "include_metadata",
          "description": "Whether to include metadata in the response.",
          "required": false,
          "schema": {
            "type": "boolean",
            "example": true
          }
        }
      ],
      "result": {
        "name": "comment",
        "description": "The requested comment object.",
        "schema": {
          "type": "object",
          "example": {
            "id": 1,
            "text": "This is a sample comment",
            "created_at": "2024-01-15T10:30:00Z"
          }
        }
      }
    }
  ],
  "components": {}
}
46  examples/installers/base/redis.vsh  Executable file
@@ -0,0 +1,46 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.base.redis

println('=== Redis Installer Example ===\n')

// Create configuration
// You can customize port, datadir, and ipaddr as needed
config := redis.RedisInstall{
	port: 6379 // Redis port
	datadir: '/var/lib/redis' // Data directory (standard location)
	ipaddr: 'localhost' // Bind address
}

// Check if Redis is already running
if redis.check(config) {
	println('INFO: Redis is already running on port ${config.port}')
	println(' To reinstall, stop Redis first: redis.stop()!')
} else {
	// Install and start Redis
	println('Installing and starting Redis...')
	println(' Port: ${config.port}')
	println(' Data directory: ${config.datadir}')
	println(' Bind address: ${config.ipaddr}\n')

	redis.redis_install(config)!

	// Verify installation
	if redis.check(config) {
		println('\nSUCCESS: Redis installed and started successfully!')
		println(' You can now connect to Redis on port ${config.port}')
		println(' Test with: redis-cli ping')
	} else {
		println('\nERROR: Redis installation completed but failed to start')
		println(' Check logs: journalctl -u redis-server -n 20')
	}
}

println('\n=== Available Functions ===')
println(' redis.redis_install(config)! - Install and start Redis')
println(' redis.start(config)! - Start Redis')
println(' redis.stop()! - Stop Redis')
println(' redis.restart(config)! - Restart Redis')
println(' redis.check(config) - Check if running')

println('\nDone!')
209  examples/installers/horus/README.md  Normal file
@@ -0,0 +1,209 @@
# Horus Installation Examples

This directory contains example scripts for installing and managing all Horus components using the herolib installer framework.

## Components

The Horus ecosystem consists of the following components:

1. **Coordinator** - Central coordination service (HTTP: 8081, WS: 9653)
2. **Supervisor** - Supervision and monitoring service (HTTP: 8082, WS: 9654)
3. **Hero Runner** - Command execution runner for Hero jobs
4. **Osiris Runner** - Database-backed runner
5. **SAL Runner** - System Abstraction Layer runner

## Quick Start

### Full Installation and Start

To install and start all Horus components:

```bash
# 1. Install all components (this will take several minutes)
./horus_full_install.vsh

# 2. Start all services
./horus_start_all.vsh

# 3. Check status
./horus_status.vsh
```

### Stop All Services

```bash
./horus_stop_all.vsh
```

## Available Scripts

### `horus_full_install.vsh`

Installs all Horus components:

- Checks and installs Redis if needed
- Checks and installs Rust if needed
- Clones the horus repository
- Builds all binaries from source

**Note:** This script can take 10-30 minutes depending on your system, as it compiles Rust code.

### `horus_start_all.vsh`

Starts all Horus services in the correct order:

1. Coordinator
2. Supervisor
3. Hero Runner
4. Osiris Runner
5. SAL Runner

### `horus_stop_all.vsh`

Stops all running Horus services in reverse order.

### `horus_status.vsh`

Checks and displays the status of all Horus services.

## Prerequisites

- **Operating System**: Linux or macOS
- **Dependencies** (automatically installed):
  - Redis (required for all components)
  - Rust toolchain (for building from source)
  - Git (for cloning repositories)

## Configuration

All components use default configurations:

### Coordinator

- Binary: `/hero/var/bin/coordinator`
- HTTP Port: `8081`
- WebSocket Port: `9653`
- Redis: `127.0.0.1:6379`

### Supervisor

- Binary: `/hero/var/bin/supervisor`
- HTTP Port: `8082`
- WebSocket Port: `9654`
- Redis: `127.0.0.1:6379`

### Runners

- Hero Runner: `/hero/var/bin/herorunner`
- Osiris Runner: `/hero/var/bin/runner_osiris`
- SAL Runner: `/hero/var/bin/runner_sal`

## Custom Configuration

To customize the configuration, set the installer fields directly from V:

```v
import incubaid.herolib.installers.horus.coordinator

mut coordinator_cfg := coordinator.get(create: true)!
coordinator_cfg.http_port = 9000
coordinator_cfg.ws_port = 9001
coordinator_cfg.log_level = 'debug'
coordinator.set(coordinator_cfg)!
coordinator_cfg.install()!
coordinator_cfg.start()!
```

## Testing

After starting the services, you can test them:

```bash
# Test Coordinator HTTP endpoint
curl http://127.0.0.1:8081

# Test Supervisor HTTP endpoint
curl http://127.0.0.1:8082

# Check running processes
pgrep -f coordinator
pgrep -f supervisor
pgrep -f herorunner
pgrep -f runner_osiris
pgrep -f runner_sal
```

## Troubleshooting

### Redis Not Running

If you get Redis connection errors:

```bash
# Check if Redis is running
redis-cli ping

# Start Redis (Ubuntu/Debian)
sudo systemctl start redis-server

# Start Redis (macOS with Homebrew)
brew services start redis
```

### Build Failures

If the build fails:

1. Ensure you have enough disk space (at least 5GB free)
2. Check that Rust is properly installed: `rustc --version`
3. Try cleaning the build: `cd /root/code/git.ourworld.tf/herocode/horus && cargo clean`

### Port Conflicts

If ports 8081 or 8082 are already in use, customize the ports as shown in the Custom Configuration section above.

## Advanced Usage

### Individual Component Installation

You can install components individually (a V sketch follows the commands below):

```bash
# Install only coordinator
v run coordinator_only.vsh

# Install only supervisor
v run supervisor_only.vsh
```
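If those wrapper scripts are not present in your checkout, the same result can be had directly from V; a minimal sketch based on `coordinator.vsh` in this directory:

```v
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.horus.coordinator

// Build and install only the coordinator
// (the build is skipped if the binary already exists)
mut coord := coordinator.new()!
coord.install()!
println('Coordinator installed at ${coord.binary_path}')
```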

### Using with Heroscript

You can also use heroscript files for configuration:

```heroscript
!!herocoordinator.configure
    name:'production'
    http_port:8081
    ws_port:9653
    log_level:'info'

!!herocoordinator.install

!!herocoordinator.start
```

## Service Management

Services are managed using the system's startup manager (zinit or systemd):

```bash
# Check service status with systemd
systemctl status coordinator

# View logs
journalctl -u coordinator -f
```

## Cleanup

To completely remove all Horus components:

```bash
# Stop all services
./horus_stop_all.vsh

# Destroy all components (removes binaries)
v run horus_destroy_all.vsh
```
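Each installer also exposes `stop()` and `destroy()`, so components can be removed individually from V as well (a sketch; the same calls appear, commented out, at the end of `coordinator.vsh`):

```v
import incubaid.herolib.installers.horus.coordinator

mut coord := coordinator.get()!
coord.stop()!
coord.destroy()! // removes the installed binary
```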

## Support

For issues or questions:

- Check the main Horus repository: https://git.ourworld.tf/herocode/horus
- Review the installer code in `lib/installers/horus/`
36  examples/installers/horus/coordinator.vsh  Executable file
@@ -0,0 +1,36 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.horus.coordinator

// Example usage of coordinator installer
// This will:
// 1. Check if Rust is installed (installs if not present)
// 2. Clone the horus repository
// 3. Build the coordinator binary
//
// Note: Redis must be pre-installed and running before using the coordinator

println('Building coordinator from horus repository...')
println('(This will install Rust if not already installed)\n')

// Create coordinator instance
mut coord := coordinator.new()!

// Build and install
// Note: This will skip the build if the binary already exists
coord.install()!

// To force a rebuild even if the binary exists, use:
// coord.install(reset: true)!

println('\nCoordinator built and installed successfully!')
println('Binary location: ${coord.binary_path}')

// Note: To start the service, uncomment the lines below
// (requires proper zinit or screen session setup and Redis running)
// coord.start()!
// if coord.running()! {
//     println('Coordinator is running!')
// }
// coord.stop()!
// coord.destroy()!
60  examples/installers/horus/horus_config.heroscript  Normal file
@@ -0,0 +1,60 @@
// Horus Configuration Heroscript
// This file demonstrates how to configure all Horus components using heroscript

// Configure Coordinator
!!coordinator.configure
    name:'default'
    binary_path:'/hero/var/bin/coordinator'
    redis_addr:'127.0.0.1:6379'
    http_port:8081
    ws_port:9653
    log_level:'info'
    repo_path:'/root/code/git.ourworld.tf/herocode/horus'

// Configure Supervisor
!!supervisor.configure
    name:'default'
    binary_path:'/hero/var/bin/supervisor'
    redis_addr:'127.0.0.1:6379'
    http_port:8082
    ws_port:9654
    log_level:'info'
    repo_path:'/root/code/git.ourworld.tf/herocode/horus'

// Configure Hero Runner
!!herorunner.configure
    name:'default'
    binary_path:'/hero/var/bin/herorunner'
    redis_addr:'127.0.0.1:6379'
    log_level:'info'
    repo_path:'/root/code/git.ourworld.tf/herocode/horus'

// Configure Osiris Runner
!!osirisrunner.configure
    name:'default'
    binary_path:'/hero/var/bin/runner_osiris'
    redis_addr:'127.0.0.1:6379'
    log_level:'info'
    repo_path:'/root/code/git.ourworld.tf/herocode/horus'

// Configure SAL Runner
!!salrunner.configure
    name:'default'
    binary_path:'/hero/var/bin/runner_sal'
    redis_addr:'127.0.0.1:6379'
    log_level:'info'
    repo_path:'/root/code/git.ourworld.tf/herocode/horus'

// Install all components
!!coordinator.install
!!supervisor.install
!!herorunner.install
!!osirisrunner.install
!!salrunner.install

// Start all services
!!coordinator.start name:'default'
!!supervisor.start name:'default'
!!herorunner.start name:'default'
!!osirisrunner.start name:'default'
!!salrunner.start name:'default'
60  examples/installers/horus/horus_full_install.vsh  Executable file
@@ -0,0 +1,60 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.horus.coordinator
import incubaid.herolib.installers.horus.supervisor
import incubaid.herolib.installers.horus.herorunner
import incubaid.herolib.installers.horus.osirisrunner
import incubaid.herolib.installers.horus.salrunner

// Full Horus Installation Example
// This script installs and configures all Horus components:
// - Coordinator (port 8081)
// - Supervisor (port 8082)
// - Hero Runner
// - Osiris Runner
// - SAL Runner

println('🚀 Starting Full Horus Installation')

// Step 1: Install Coordinator
println('\n📦 Step 1/5: Installing Coordinator...')
mut coordinator_installer := coordinator.get(create: true)!
coordinator_installer.install()!
println('✅ Coordinator installed at ${coordinator_installer.binary_path}')

// Step 2: Install Supervisor
println('\n📦 Step 2/5: Installing Supervisor...')
mut supervisor_inst := supervisor.get(create: true)!
supervisor_inst.install()!
println('✅ Supervisor installed at ${supervisor_inst.binary_path}')

// Step 3: Install Hero Runner
println('\n📦 Step 3/5: Installing Hero Runner...')
mut hero_runner := herorunner.get(create: true)!
hero_runner.install()!
println('✅ Hero Runner installed at ${hero_runner.binary_path}')

// Step 4: Install Osiris Runner
println('\n📦 Step 4/5: Installing Osiris Runner...')
mut osiris_runner := osirisrunner.get(create: true)!
osiris_runner.install()!
println('✅ Osiris Runner installed at ${osiris_runner.binary_path}')

// Step 5: Install SAL Runner
println('\n📦 Step 5/5: Installing SAL Runner...')
mut sal_runner := salrunner.get(create: true)!
sal_runner.install()!
println('✅ SAL Runner installed at ${sal_runner.binary_path}')

println('🎉 All Horus components installed successfully!')

println('\n📋 Installation Summary:')
println(' • Coordinator: ${coordinator_installer.binary_path} (HTTP: ${coordinator_installer.http_port}, WS: ${coordinator_installer.ws_port})')
println(' • Supervisor: ${supervisor_inst.binary_path} (HTTP: ${supervisor_inst.http_port}, WS: ${supervisor_inst.ws_port})')
println(' • Hero Runner: ${hero_runner.binary_path}')
println(' • Osiris Runner: ${osiris_runner.binary_path}')
println(' • SAL Runner: ${sal_runner.binary_path}')

println('\n💡 Next Steps:')
println(' To start services, run: ./horus_start_all.vsh')
println(' To test individual components, see the other example scripts')
85  examples/installers/horus/horus_start_all.vsh  Executable file
@@ -0,0 +1,85 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.horus.coordinator
import incubaid.herolib.installers.horus.supervisor
import incubaid.herolib.installers.horus.herorunner
import incubaid.herolib.installers.horus.osirisrunner
import incubaid.herolib.installers.horus.salrunner
import time

// Start All Horus Services
// This script starts all Horus components in the correct order

println('🚀 Starting All Horus Services')

// Step 1: Start Coordinator
println('\n▶️ Step 1/5: Starting Coordinator...')
mut coordinator_installer := coordinator.get(name: 'ayman', create: true)!
coordinator_installer.start()!
if coordinator_installer.running()! {
	println('✅ Coordinator is running on HTTP:${coordinator_installer.http_port} WS:${coordinator_installer.ws_port}')
} else {
	println('❌ Coordinator failed to start')
}

// Step 2: Start Supervisor
println('\n▶️ Step 2/5: Starting Supervisor...')
mut supervisor_inst := supervisor.get(create: true)!
supervisor_inst.start()!
if supervisor_inst.running()! {
	println('✅ Supervisor is running on HTTP:${supervisor_inst.http_port} WS:${supervisor_inst.ws_port}')
} else {
	println('❌ Supervisor failed to start')
}

// Step 3: Start Hero Runner
println('\n▶️ Step 3/5: Starting Hero Runner...')
mut hero_runner := herorunner.get(create: true)!
hero_runner.start()!
if hero_runner.running()! {
	println('✅ Hero Runner is running')
} else {
	println('❌ Hero Runner failed to start')
}

// Step 4: Start Osiris Runner
println('\n▶️ Step 4/5: Starting Osiris Runner...')
mut osiris_runner := osirisrunner.get(create: true)!
osiris_runner.start()!
if osiris_runner.running()! {
	println('✅ Osiris Runner is running')
} else {
	println('❌ Osiris Runner failed to start')
}

// Step 5: Start SAL Runner
println('\n▶️ Step 5/5: Starting SAL Runner...')
mut sal_runner := salrunner.get(create: true)!
sal_runner.start()!
if sal_runner.running()! {
	println('✅ SAL Runner is running')
} else {
	println('❌ SAL Runner failed to start')
}

println('🎉 All Horus services started!')

println('\n📊 Service Status:')
coordinator_status := if coordinator_installer.running()! { '✅ Running' } else { '❌ Stopped' }
println(' • Coordinator: ${coordinator_status} (http://127.0.0.1:${coordinator_installer.http_port})')

supervisor_status := if supervisor_inst.running()! { '✅ Running' } else { '❌ Stopped' }
println(' • Supervisor: ${supervisor_status} (http://127.0.0.1:${supervisor_inst.http_port})')

hero_runner_status := if hero_runner.running()! { '✅ Running' } else { '❌ Stopped' }
println(' • Hero Runner: ${hero_runner_status}')

osiris_runner_status := if osiris_runner.running()! { '✅ Running' } else { '❌ Stopped' }
println(' • Osiris Runner: ${osiris_runner_status}')

sal_runner_status := if sal_runner.running()! { '✅ Running' } else { '❌ Stopped' }
println(' • SAL Runner: ${sal_runner_status}')

println('\n💡 Next Steps:')
println(' To stop services, run: ./horus_stop_all.vsh')
println(' To check status, run: ./horus_status.vsh')
66  examples/installers/horus/horus_status.vsh  Executable file
@@ -0,0 +1,66 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.horus.coordinator
import incubaid.herolib.installers.horus.supervisor
import incubaid.herolib.installers.horus.herorunner
import incubaid.herolib.installers.horus.osirisrunner
import incubaid.herolib.installers.horus.salrunner

// Check Status of All Horus Services

println('📊 Horus Services Status')
println('='.repeat(60))

// Get all services
mut coordinator_inst := coordinator.get()!
mut supervisor_inst := supervisor.get()!
mut hero_runner := herorunner.get()!
mut osiris_runner := osirisrunner.get()!
mut sal_runner := salrunner.get()!

// Check status
println('\n🔍 Checking service status...\n')

coord_running := coordinator_inst.running()!
super_running := supervisor_inst.running()!
hero_running := hero_runner.running()!
osiris_running := osiris_runner.running()!
sal_running := sal_runner.running()!

println('Service Status Details')
println('-'.repeat(60))
println('Coordinator ${if coord_running { '✅ Running' } else { '❌ Stopped' }} http://127.0.0.1:${coordinator_inst.http_port}')
println('Supervisor ${if super_running { '✅ Running' } else { '❌ Stopped' }} http://127.0.0.1:${supervisor_inst.http_port}')
println('Hero Runner ${if hero_running { '✅ Running' } else { '❌ Stopped' }}')
println('Osiris Runner ${if osiris_running { '✅ Running' } else { '❌ Stopped' }}')
println('SAL Runner ${if sal_running { '✅ Running' } else { '❌ Stopped' }}')

println('\n' + '='.repeat(60))

// Count running services
mut running_count := 0
if coord_running {
	running_count++
}
if super_running {
	running_count++
}
if hero_running {
	running_count++
}
if osiris_running {
	running_count++
}
if sal_running {
	running_count++
}

println('Summary: ${running_count}/5 services running')

if running_count == 5 {
	println('🎉 All services are running!')
} else if running_count == 0 {
	println('💤 All services are stopped')
} else {
	println('⚠️ Some services are not running')
}
43  examples/installers/horus/horus_stop_all.vsh  Executable file
@@ -0,0 +1,43 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.horus.coordinator
import incubaid.herolib.installers.horus.supervisor
import incubaid.herolib.installers.horus.herorunner
import incubaid.herolib.installers.horus.osirisrunner
import incubaid.herolib.installers.horus.salrunner

// Stop All Horus Services
// This script stops all running Horus components

println('🛑 Stopping All Horus Services')
println('='.repeat(60))

// Stop in reverse order
println('\n⏹️ Stopping SAL Runner...')
mut sal_runner := salrunner.get()!
sal_runner.stop()!
println('✅ SAL Runner stopped')

println('\n⏹️ Stopping Osiris Runner...')
mut osiris_runner := osirisrunner.get()!
osiris_runner.stop()!
println('✅ Osiris Runner stopped')

println('\n⏹️ Stopping Hero Runner...')
mut hero_runner := herorunner.get()!
hero_runner.stop()!
println('✅ Hero Runner stopped')

println('\n⏹️ Stopping Supervisor...')
mut supervisor_inst := supervisor.get()!
supervisor_inst.stop()!
println('✅ Supervisor stopped')

println('\n⏹️ Stopping Coordinator...')
mut coordinator_inst := coordinator.get()!
coordinator_inst.stop()!
println('✅ Coordinator stopped')

println('\n' + '='.repeat(60))
println('✅ All Horus services stopped!')
println('='.repeat(60))
52  examples/installers/horus/quick_start.vsh  Executable file
@@ -0,0 +1,52 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.horus.coordinator
import incubaid.herolib.installers.horus.supervisor

// Quick Start Example - Install and Start Coordinator and Supervisor
// This is a minimal example to get started with Horus

println('🚀 Horus Quick Start')
println('='.repeat(60))
println('This will install and start Coordinator and Supervisor')
println('(Runners can be added later using the full install script)')
println('='.repeat(60))

// Install Coordinator
println('\n📦 Installing Coordinator...')
mut coordinator_inst := coordinator.get(create: true)!
coordinator_inst.install()!
println('✅ Coordinator installed')

// Install Supervisor
println('\n📦 Installing Supervisor...')
mut supervisor_inst := supervisor.get(create: true)!
supervisor_inst.install()!
println('✅ Supervisor installed')

// Start services
println('\n▶️ Starting Coordinator...')
coordinator_inst.start()!
if coordinator_inst.running()! {
	println('✅ Coordinator is running on http://127.0.0.1:${coordinator_inst.http_port}')
}

println('\n▶️ Starting Supervisor...')
supervisor_inst.start()!
if supervisor_inst.running()! {
	println('✅ Supervisor is running on http://127.0.0.1:${supervisor_inst.http_port}')
}

println('\n' + '='.repeat(60))
println('🎉 Quick Start Complete!')
println('='.repeat(60))
println('\n📊 Services Running:')
println(' • Coordinator: http://127.0.0.1:${coordinator_inst.http_port}')
println(' • Supervisor: http://127.0.0.1:${supervisor_inst.http_port}')

println('\n💡 Next Steps:')
println(' • Test coordinator: curl http://127.0.0.1:${coordinator_inst.http_port}')
println(' • Test supervisor: curl http://127.0.0.1:${supervisor_inst.http_port}')
println(' • Install runners: ./horus_full_install.vsh')
println(' • Check status: ./horus_status.vsh')
println(' • Stop services: ./horus_stop_all.vsh')
3  examples/installers/k8s/.gitignore  vendored  Normal file
@@ -0,0 +1,3 @@
cryptpad
element_chat
gitea
27  examples/installers/k8s/cryptpad.vsh  Executable file
@@ -0,0 +1,27 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.k8s.cryptpad

// This example demonstrates how to use the CryptPad installer.

// 1. Create a new installer instance with a specific hostname.
//    Replace 'mycryptpad' with your desired hostname.
mut installer := cryptpad.get(
	name: 'kristof'
	create: true
)!

// cryptpad.delete()!
// 2. Configure the installer (all settings are optional with sensible defaults)
// installer.hostname = 'mycryptpad'
// installer.namespace = 'cryptpad'

// 3. Install CryptPad.
// This will generate the necessary Kubernetes YAML files and apply them to your cluster.
installer.install()!

// println('CryptPad installation started.')
// println('You can access it at https://${installer.hostname}.gent01.grid.tf')

// 4. To destroy the deployment, you can run the following:
// installer.destroy()!
42  examples/installers/k8s/element_chat.vsh  Executable file
@@ -0,0 +1,42 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.k8s.element_chat

// This example demonstrates how to use the Element Chat installer.

// 1. Create a new installer instance with specific hostnames.
//    Replace 'matrixchattest' and 'elementchattest' with your desired hostnames.
//    Note: Use only alphanumeric characters (no underscores or dashes).
mut installer := element_chat.get(
	name: 'kristof'
	create: true
)!

// element_chat.delete()!

// 2. Configure the installer (all settings are optional with sensible defaults)
// installer.matrix_hostname = 'matrixchattest'
// installer.element_hostname = 'elementchattest'
// installer.namespace = 'chat'

// // Conduit (Matrix homeserver) configuration
// installer.conduit_port = 6167 // Default: 6167
// installer.database_backend = 'rocksdb' // Default: 'rocksdb' (can be 'sqlite')
// installer.database_path = '/var/lib/matrix-conduit' // Default: '/var/lib/matrix-conduit'
// installer.allow_registration = true // Default: true
// installer.allow_federation = true // Default: true
// installer.log_level = 'info' // Default: 'info' (can be 'debug', 'warn', 'error')

// // Element web client configuration
// installer.element_brand = 'Element' // Default: 'Element'

// 3. Install Element Chat.
// This will generate the necessary Kubernetes YAML files and apply them to your cluster.
installer.install()!

// println('Element Chat installation started.')
// println('Matrix homeserver will be available at: https://${installer.matrix_hostname}.gent01.grid.tf')
// println('Element web client will be available at: https://${installer.element_hostname}.gent01.grid.tf')

// 4. To destroy the deployment, you can run the following:
// installer.destroy()!
44  examples/installers/k8s/gitea.vsh  Executable file
@@ -0,0 +1,44 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.k8s.gitea

// This example demonstrates how to use the Gitea installer.

// 1. Create a new installer instance with a specific hostname.
//    Replace 'mygitea' with your desired hostname.
//    Note: Use only alphanumeric characters (no underscores or dashes).
mut installer := gitea.get(
	name: 'kristof'
	create: true
)!

// 2. Configure the installer (all settings are optional with sensible defaults)
// installer.hostname = 'giteaapp' // Default: 'giteaapp'
// installer.namespace = 'forge' // Default: 'forge'

// // Gitea server configuration
// installer.http_port = 3000 // Default: 3000
// installer.disable_registration = false // Default: false (allow new user registration)

// // Database configuration - Option 1: SQLite (default)
// installer.db_type = 'sqlite3' // Default: 'sqlite3'
// installer.db_path = '/data/gitea/gitea.db' // Default: '/data/gitea/gitea.db'

// // Database configuration - Option 2: PostgreSQL
// // When using postgres, a PostgreSQL pod will be automatically deployed
installer.db_type = 'postgres' // Use PostgreSQL instead of SQLite
installer.db_host = 'postgres' // Default: 'postgres' (PostgreSQL service name)
installer.db_name = 'gitea' // Default: 'gitea' (database name)
installer.db_user = 'gitea' // Default: 'gitea' (database user)
installer.db_password = 'gitea' // Default: 'gitea' (database password)
installer.storage_size = '5Gi' // Default: '5Gi' (PVC storage size)

// 3. Install Gitea.
// This will generate the necessary Kubernetes YAML files and apply them to your cluster.
installer.install()!

// println('Gitea installation started.')
// println('You can access it at: https://${installer.hostname}.gent01.grid.tf')

// 4. To destroy the deployment, you can run the following:
// installer.destroy()!
11  examples/installers/virt/crun.vsh  Executable file
@@ -0,0 +1,11 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.virt.crun_installer

mut crun := crun_installer.get()!

// To install
crun.install()!

// To remove
crun.destroy()!
11  examples/installers/virt/kubernetes.vsh  Executable file
@@ -0,0 +1,11 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.installers.virt.kubernetes_installer

mut kubectl := kubernetes_installer.get(name: 'k_installer', create: true)!

// To install
kubectl.install()!

// To remove
kubectl.destroy()!
169  examples/virt/heropods/README.md  Normal file
@@ -0,0 +1,169 @@
# HeroPods Examples

This directory contains example HeroScript files demonstrating different HeroPods use cases.

## Prerequisites

- **Linux system** (HeroPods requires Linux-specific tools: ip, iptables, nsenter, crun)
- **Root/sudo access** (required for network configuration and container management)
- **Podman** (optional but recommended for image management)
- **Hero CLI** installed and configured

## Example Scripts

### 1. simple_container.heroscript

**Purpose**: Demonstrate basic container lifecycle management

**What it does**:

- Creates a HeroPods instance
- Creates an Alpine Linux container
- Starts the container
- Executes basic commands inside the container (uname, ls, cat, ps, env)
- Stops the container
- Deletes the container

**Run it**:

```bash
hero run examples/virt/heropods/simple_container.heroscript
```

**Use this when**: You want to learn the basic container operations without networking complexity.

---

### 2. ipv4_connection.heroscript

**Purpose**: Demonstrate IPv4 networking and internet connectivity

**What it does**:

- Creates a HeroPods instance with bridge networking
- Creates an Alpine Linux container
- Starts the container with IPv4 networking
- Verifies network configuration (interfaces, routes, DNS)
- Tests DNS resolution
- Tests HTTP/HTTPS connectivity to the internet
- Stops and deletes the container

**Run it**:

```bash
hero run examples/virt/heropods/ipv4_connection.heroscript
```

**Use this when**: You want to verify that IPv4 bridge networking and internet access work correctly.

---

### 3. container_mycelium.heroscript

**Purpose**: Demonstrate Mycelium IPv6 overlay networking

**What it does**:

- Creates a HeroPods instance
- Enables Mycelium IPv6 overlay network with all required configuration
- Creates an Alpine Linux container
- Starts the container with both IPv4 and IPv6 (Mycelium) networking
- Verifies IPv6 configuration
- Tests Mycelium IPv6 connectivity to public nodes
- Verifies dual-stack networking (IPv4 + IPv6)
- Stops and deletes the container

**Run it**:

```bash
hero run examples/virt/heropods/container_mycelium.heroscript
```

**Use this when**: You want to test Mycelium IPv6 overlay networking for encrypted peer-to-peer connectivity.

**Note**: Requires Mycelium to be installed and configured on the host system.

---

### 4. demo.heroscript

**Purpose**: Quick demonstration of HeroPods with both IPv4 and IPv6 networking

**What it does**:

- Combines IPv4 and Mycelium IPv6 networking in a single demo
- Shows a complete workflow from configuration to cleanup
- Serves as a quick reference for common operations

**Run it**:

```bash
hero run examples/virt/heropods/demo.heroscript
```

**Use this when**: You want a quick overview of HeroPods capabilities.

---

## Common Issues

### Permission Denied for ping/ping6

Alpine Linux containers don't have the `CAP_NET_RAW` capability by default, which is required for ICMP packets (ping).

**Solution**: Use `wget`, `curl`, or `nc` for connectivity testing instead of ping, as in the sketch below.
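A minimal sketch of such a check, reusing the `!!heropods.container_exec` action from `ipv4_connection.heroscript` (the container name is whatever you used in `container_new`):

```heroscript
// HTTP reachability test that needs no CAP_NET_RAW capability
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'wget -O- http://example.com --timeout=5 2>&1 | head -n 5'
    stdout:true
```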

### Mycelium Not Found

If you get errors about Mycelium not being installed:

**Solution**: The HeroPods Mycelium integration will automatically install Mycelium when you run `heropods.enable_mycelium` (see the example below). Make sure you have internet connectivity and the required permissions.
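For reference, this is the shape of the action as used in `container_mycelium.heroscript` (the peer list is shortened here; use the full list from that script):

```heroscript
!!heropods.enable_mycelium
    heropods:'mycelium_demo'
    version:'v0.5.6'
    ipv6_range:'400::/7'
    key_path:'~/hero/cfg/priv_key.bin'
    peers:'tcp://185.69.166.8:9651,tcp://65.109.18.113:9651'
```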

### Container Already Exists

If you get errors about containers already existing:

**Solution**: Either delete the existing container manually or set `reset:true` in the `heropods.configure` action, as shown below.
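A minimal sketch (mirroring `hello_world_keepalive.heroscript`, which uses `reset:true`):

```heroscript
// reset:true clears any previously created containers for this instance
!!heropods.configure
    name:'simple_demo'
    reset:true
    use_podman:true
```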

---

## Learning Path

We recommend running the examples in this order:

1. **simple_container.heroscript** - Learn basic container operations
2. **ipv4_connection.heroscript** - Understand IPv4 networking
3. **container_mycelium.heroscript** - Explore IPv6 overlay networking
4. **demo.heroscript** - See everything together

---

## Customization

Feel free to modify these scripts to:

- Use different container images (Ubuntu, custom images, etc.)
- Test different network configurations
- Add your own commands and tests
- Experiment with multiple containers

---

## Documentation

For more information, see:

- [HeroPods Main README](../../../lib/virt/heropods/readme.md)
- [Mycelium Integration Guide](../../../lib/virt/heropods/MYCELIUM_README.md)
- [Production Readiness Review](../../../lib/virt/heropods/PRODUCTION_READINESS_REVIEW.md)

---

## Support

If you encounter issues:

1. Check the logs in `~/.containers/logs/`
2. Verify your system meets the prerequisites
3. Review the error messages carefully
4. Consult the documentation linked above
114  examples/virt/heropods/container_mycelium.heroscript  Normal file
@@ -0,0 +1,114 @@
#!/usr/bin/env hero

// ============================================================================
// HeroPods Example: Mycelium IPv6 Overlay Networking
// ============================================================================
//
// This script demonstrates Mycelium IPv6 overlay networking:
// - End-to-end encrypted IPv6 connectivity
// - Peer-to-peer routing through public relay nodes
// - Container IPv6 address assignment from host's /64 prefix
// - Connectivity to other Mycelium nodes across the internet
//
// Mycelium provides each container with an IPv6 address in the 400::/7 range
// and enables encrypted communication with other Mycelium nodes.
// ============================================================================

// Step 1: Configure HeroPods instance
// This creates a HeroPods instance with default IPv4 networking
!!heropods.configure
    name:'mycelium_demo'
    reset:false
    use_podman:true

// Step 2: Enable Mycelium IPv6 overlay network
// All parameters are required for Mycelium configuration
!!heropods.enable_mycelium
    heropods:'mycelium_demo'
    version:'v0.5.6'
    ipv6_range:'400::/7'
    key_path:'~/hero/cfg/priv_key.bin'
    peers:'tcp://185.69.166.8:9651,quic://[2a02:1802:5e:0:ec4:7aff:fe51:e36b]:9651,tcp://65.109.18.113:9651,quic://[2a01:4f9:5a:1042::2]:9651,tcp://5.78.122.16:9651,quic://[2a01:4ff:1f0:8859::1]:9651,tcp://5.223.43.251:9651,quic://[2a01:4ff:2f0:3621::1]:9651,tcp://142.93.217.194:9651,quic://[2400:6180:100:d0::841:2001]:9651'

// Step 3: Create a new Alpine Linux container
// Alpine includes basic IPv6 networking tools
!!heropods.container_new
    name:'mycelium_container'
    image:'custom'
    custom_image_name:'alpine_3_20'
    docker_url:'docker.io/library/alpine:3.20'

// Step 4: Start the container
// This sets up both IPv4 and IPv6 (Mycelium) networking
!!heropods.container_start
    name:'mycelium_container'

// Step 5: Verify IPv6 network configuration

// Show all network interfaces (including IPv6 addresses)
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'ip addr show'
    stdout:true

// Show IPv6 addresses specifically
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'ip -6 addr show'
    stdout:true

// Show IPv6 routing table
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'ip -6 route show'
    stdout:true

// Step 6: Test Mycelium IPv6 connectivity
// Ping a known public Mycelium node to verify connectivity
// Note: This requires the container to have CAP_NET_RAW capability for ping6
// If ping6 fails with permission denied, this is expected behavior in Alpine
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'ping6 -c 3 400:8f3a:8d0e:3503:db8e:6a02:2e9:83dd'
    stdout:true

// Alternative: Test IPv6 connectivity using nc (netcat) if available
// This doesn't require special capabilities
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'nc -6 -zv -w 3 400:8f3a:8d0e:3503:db8e:6a02:2e9:83dd 80 2>&1 || echo nc test completed'
    stdout:true

// Step 7: Show Mycelium-specific information

// Display the container's Mycelium IPv6 address
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'ip -6 addr show | grep 400: || echo No Mycelium IPv6 address found'
    stdout:true

// Show IPv6 neighbors (if any)
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'ip -6 neigh show'
    stdout:true

// Step 8: Verify dual-stack networking (IPv4 + IPv6)
// The container should have both IPv4 and IPv6 connectivity

// Test IPv4 connectivity
!!heropods.container_exec
    name:'mycelium_container'
    cmd:'wget -O- http://google.com --timeout=5 2>&1 | head -n 5'
    stdout:true

// Step 9: Stop the container
// This cleans up both IPv4 and IPv6 (Mycelium) networking
!!heropods.container_stop
    name:'mycelium_container'

// Step 10: Delete the container
// This removes the container and all associated resources
!!heropods.container_delete
    name:'mycelium_container'
75  examples/virt/heropods/hello_world_keepalive.heroscript  Normal file
@@ -0,0 +1,75 @@
#!/usr/bin/env hero

// ============================================================================
// HeroPods Keep-Alive Feature Test - Alpine Container
// ============================================================================
//
// This script demonstrates the keep_alive feature with an Alpine container.
//
// Test Scenario:
// Alpine's default CMD is /bin/sh, which exits immediately when run
// non-interactively (no stdin). This makes it perfect for testing keep_alive:
//
// 1. Container starts with CMD=["/bin/sh"]
// 2. /bin/sh exits immediately (exit code 0)
// 3. HeroPods detects the successful exit
// 4. HeroPods recreates the container with keep-alive command
// 5. Container remains running and accepts exec commands
//
// This demonstrates the core keep_alive functionality:
// - Detecting when a container's entrypoint/cmd exits
// - Checking the exit code
// - Injecting a keep-alive process on successful exit
// - Allowing subsequent exec commands
//
// ============================================================================

// Step 1: Configure HeroPods instance
!!heropods.configure
    name:'hello_world'
    reset:true
    use_podman:true

// Step 2: Create a container with Alpine 3.20 image
// Using custom image type to automatically download from Docker Hub
!!heropods.container_new
    name:'alpine_test_keepalive'
    image:'custom'
    custom_image_name:'alpine_test'
    docker_url:'docker.io/library/alpine:3.20'

// Step 3: Start the container with keep_alive enabled
// Alpine's CMD is /bin/sh which exits immediately when run non-interactively.
// With keep_alive:true, HeroPods will:
// 1. Start the container with /bin/sh
// 2. Wait for /bin/sh to exit (which happens immediately)
// 3. Detect the successful exit (exit code 0)
// 4. Recreate the container with a keep-alive command (tail -f /dev/null)
// 5. The container will then remain running and accept exec commands
!!heropods.container_start
    name:'alpine_test_keepalive'
    keep_alive:true

// Step 4: Execute a simple hello world command
!!heropods.container_exec
    name:'alpine_test_keepalive'
    cmd:'echo Hello World from HeroPods'
    stdout:true

// Step 5: Display OS information
!!heropods.container_exec
    name:'alpine_test_keepalive'
    cmd:'cat /etc/os-release'
    stdout:true

// Step 6: Show running processes
!!heropods.container_exec
    name:'alpine_test_keepalive'
    cmd:'ps aux'
    stdout:true

// Step 7: Verify Alpine version
!!heropods.container_exec
    name:'alpine_test_keepalive'
    cmd:'cat /etc/alpine-release'
    stdout:true
27  examples/virt/heropods/herobin.heroscript  Normal file
@@ -0,0 +1,27 @@
#!/usr/bin/env hero

// Step 1: Configure HeroPods instance
!!heropods.configure
    name:'simple_demo'
    reset:false
    use_podman:true


// Step 2: Create a container with hero binary
!!heropods.container_new
    name:'simple_container'
    image:'custom'
    custom_image_name:'hero_container'
    docker_url:'docker.io/threefolddev/hero-container:latest'

// Step 3: Start the container with keep_alive enabled
// This will run the entrypoint, wait for it to complete, then inject a keep-alive process
!!heropods.container_start
    name:'simple_container'
    keep_alive:true

// Step 4: Execute hero command inside the container
!!heropods.container_exec
    name:'simple_container'
    cmd:'hero -help'
    stdout:true
@@ -2,17 +2,17 @@

import incubaid.herolib.virt.heropods

// Initialize factory
mut factory := heropods.new(
// Initialize heropods
mut heropods_ := heropods.new(
	reset: false
	use_podman: true
) or { panic('Failed to init ContainerFactory: ${err}') }
) or { panic('Failed to init HeroPods: ${err}') }

println('=== HeroPods Refactored API Demo ===')

// Step 1: factory.new() now only creates a container definition/handle
// Step 1: heropods_.new() now only creates a container definition/handle
// It does NOT create the actual container in the backend yet
mut container := factory.new(
mut container := heropods_.container_new(
	name: 'demo_alpine'
	image: .custom
	custom_image_name: 'alpine_3_20'
@@ -56,7 +56,7 @@ println('✓ Container deleted successfully')

println('\n=== Demo completed! ===')
println('The refactored API now works as expected:')
println('- factory.new() creates definition only')
println('- heropods_.new() creates definition only')
println('- container.start() is idempotent')
println('- container.exec() works and returns results')
println('- container.delete() works on instances')
96  examples/virt/heropods/ipv4_connection.heroscript  Normal file
@@ -0,0 +1,96 @@
#!/usr/bin/env hero

// ============================================================================
// HeroPods Example: IPv4 Networking and Internet Connectivity
// ============================================================================
//
// This script demonstrates IPv4 networking functionality:
// - Bridge networking with automatic IP allocation
// - NAT for outbound internet access
// - DNS resolution
// - HTTP connectivity testing
//
// The container gets an IP address from the bridge subnet (default: 10.10.0.0/24)
// and can access the internet through NAT.
// ============================================================================

// Step 1: Configure HeroPods instance with IPv4 networking
// This creates a HeroPods instance with bridge networking enabled
!!heropods.configure
    name:'ipv4_demo'
    reset:false
    use_podman:true
    bridge_name:'heropods0'
    subnet:'10.10.0.0/24'
    gateway_ip:'10.10.0.1'
    dns_servers:['8.8.8.8', '8.8.4.4']

// Step 2: Create a new Alpine Linux container
// Alpine is lightweight and includes basic networking tools
!!heropods.container_new
    name:'ipv4_container'
    image:'custom'
    custom_image_name:'alpine_3_20'
    docker_url:'docker.io/library/alpine:3.20'

// Step 3: Start the container
// This sets up the veth pair and configures IPv4 networking
!!heropods.container_start
    name:'ipv4_container'

// Step 4: Verify network configuration inside the container

// Show network interfaces and IP addresses
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'ip addr show'
    stdout:true

// Show routing table
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'ip route show'
    stdout:true

// Show DNS configuration
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'cat /etc/resolv.conf'
    stdout:true

// Step 5: Test DNS resolution
// Verify that DNS queries work correctly
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'nslookup google.com'
    stdout:true

// Step 6: Test HTTP connectivity
// Use wget to verify internet access (ping requires CAP_NET_RAW capability)
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'wget -O- http://google.com --timeout=5 2>&1 | head -n 10'
    stdout:true

// Test another website to confirm connectivity
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'wget -O- http://example.com --timeout=5 2>&1 | head -n 10'
    stdout:true

// Step 7: Test HTTPS connectivity (if wget supports it)
!!heropods.container_exec
    name:'ipv4_container'
    cmd:'wget -O- https://www.google.com --timeout=5 --no-check-certificate 2>&1 | head -n 10'
    stdout:true

// Step 8: Stop the container
// This removes the veth pair and cleans up network configuration
!!heropods.container_stop
    name:'ipv4_container'

// Step 9: Delete the container
// This removes the container and all associated resources
!!heropods.container_delete
    name:'ipv4_container'
6 examples/virt/heropods/runcommands.vsh Normal file → Executable file
@@ -2,12 +2,12 @@
 
 import incubaid.herolib.virt.heropods
 
-mut factory := heropods.new(
+mut heropods_ := heropods.new(
     reset: false
     use_podman: true
-) or { panic('Failed to init ContainerFactory: ${err}') }
+) or { panic('Failed to init HeroPods: ${err}') }
 
-mut container := factory.new(
+mut container := heropods_.container_new(
     name: 'alpine_demo'
     image: .custom
     custom_image_name: 'alpine_3_20'
79 examples/virt/heropods/simple_container.heroscript Normal file
@@ -0,0 +1,79 @@
#!/usr/bin/env hero

// ============================================================================
// HeroPods Example: Simple Container Lifecycle Management
// ============================================================================
//
// This script demonstrates the basic container lifecycle operations:
// - Creating a container
// - Starting a container
// - Executing commands inside the container
// - Stopping a container
// - Deleting a container
//
// No networking tests - just basic container operations.
// ============================================================================

// Step 1: Configure HeroPods instance
// This creates a HeroPods instance named 'simple_demo' with default settings
!!heropods.configure
    name:'simple_demo'
    reset:false
    use_podman:true

// Step 2: Create a new Alpine Linux container
// This pulls the Alpine 3.20 image from Docker Hub and prepares it for use
!!heropods.container_new
    name:'simple_container'
    image:'custom'
    custom_image_name:'alpine_3_20'
    docker_url:'docker.io/library/alpine:3.20'

// Step 3: Start the container
// This starts the container using crun (OCI runtime)
!!heropods.container_start
    name:'simple_container'

// Step 4: Execute basic commands inside the container
// These commands demonstrate that the container is running and functional

// Show kernel information
!!heropods.container_exec
    name:'simple_container'
    cmd:'uname -a'
    stdout:true

// List root directory contents
!!heropods.container_exec
    name:'simple_container'
    cmd:'ls -la /'
    stdout:true

// Show OS release information
!!heropods.container_exec
    name:'simple_container'
    cmd:'cat /etc/os-release'
    stdout:true

// Show current processes
!!heropods.container_exec
    name:'simple_container'
    cmd:'ps aux'
    stdout:true

// Show environment variables
!!heropods.container_exec
    name:'simple_container'
    cmd:'env'
    stdout:true

// Step 5: Stop the container
// This gracefully stops the container (SIGTERM, then SIGKILL if needed)
!!heropods.container_stop
    name:'simple_container'

// Step 6: Delete the container
// This removes the container and cleans up all associated resources
!!heropods.container_delete
    name:'simple_container'
5 examples/virt/hetzner/.gitignore vendored
@@ -1 +1,4 @@
-hetzner_example
+hetzner_kristof1
+hetzner_kristof2
+hetzner_kristof3
+hetzner_test1
||||
3
examples/virt/hetzner/hetzner_env.sh
Executable file
3
examples/virt/hetzner/hetzner_env.sh
Executable file
@@ -0,0 +1,3 @@
export HETZNER_USER="#ws+JdQtGCdL"
export HETZNER_PASSWORD="Kds007kds!"
export HETZNER_SSHKEY_NAME="mahmoud"
@@ -1,37 +1,34 @@
 #!/usr/bin/env hero
 
 // # Configure HetznerManager, replace with your own credentials, server id's and ssh key name and all other parameters
 
-// !!hetznermanager.configure
-//     name:"main"
-//     user:"krist"
-//     whitelist:"2111181, 2392178, 2545053, 2542166, 2550508, 2550378,2550253"
-//     password:"wontsethere"
-//     sshkey:"kristof"
+!!hetznermanager.configure
+    user:"user_name"
+    whitelist:"server_id"
+    password:"password"
+    sshkey:"ssh_key_name"
 
 
-// !!hetznermanager.server_rescue
-//     server_name: 'kristof21' // The name of the server to manage (or use `id`)
-//     wait: true // Wait for the operation to complete
-//     hero_install: true // Automatically install Herolib in the rescue system
+!!hetznermanager.server_rescue
+    server_name: 'server_name' // The name of the server to manage (or use `id`)
+    wait: true // Wait for the operation to complete
+    hero_install: true // Automatically install Herolib in the rescue system
 
 
 // # Reset a server
-// !!hetznermanager.server_reset
-//     instance: 'main'
-//     server_name: 'your-server-name'
-//     wait: true
+!!hetznermanager.server_reset
+    instance: 'main'
+    server_name: 'server_name'
+    wait: true
 
 // # Add a new SSH key to your Hetzner account
-// !!hetznermanager.key_create
-//     instance: 'main'
-//     key_name: 'my-laptop-key'
-//     data: 'ssh-rsa AAAA...'
+!!hetznermanager.key_create
+    instance: 'main'
+    key_name: 'ssh_key_name'
+    data: 'ssh-rsa AAAA...'
 
 
 // Install Ubuntu 24.04 on a server
 !!hetznermanager.ubuntu_install
-    server_name: 'kristof2'
+    server_name: 'server_name'
     wait: true
     hero_install: true // Install Herolib on the new OS
 
@@ -1,68 +0,0 @@
-#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
-
-import incubaid.herolib.virt.hetznermanager
-import incubaid.herolib.ui.console
-import incubaid.herolib.core.base
-import incubaid.herolib.builder
-import time
-import os
-import incubaid.herolib.core.playcmds
-
-user := os.environ()['HETZNER_USER'] or {
-    println('HETZNER_USER not set')
-    exit(1)
-}
-passwd := os.environ()['HETZNER_PASSWORD'] or {
-    println('HETZNER_PASSWORD not set')
-    exit(1)
-}
-
-hs := '
-!!hetznermanager.configure
-    user:"${user}"
-    whitelist:"2111181, 2392178, 2545053, 2542166, 2550508, 2550378,2550253"
-    password:"${passwd}"
-    sshkey:"kristof"
-'
-
-println(hs)
-
-playcmds.run(heroscript: hs)!
-
-console.print_header('Hetzner Test.')
-
-mut cl := hetznermanager.get()!
-// println(cl)
-
-// for i in 0 .. 5 {
-//     println('test cache, first time slow then fast')
-// }
-
-// println(cl.servers_list()!)
-
-// mut serverinfo := cl.server_info_get(name: 'kristof2')!
-
-// println(serverinfo)
-
-// cl.server_reset(name:"kristof2",wait:true)!
-
-// don't forget to specify the keyname needed
-// cl.server_rescue(name:"kristof2",wait:true, hero_install:true,sshkey_name:"kristof")!
-
-// mut ks:=cl.keys_get()!
-// println(ks)
-
-// console.print_header('SSH login')
-// mut b := builder.new()!
-// mut n := b.node_new(ipaddr: serverinfo.server_ip)!
-
-// this will put hero in debug mode on the system
-// n.hero_install(compile:true)!
-
-// n.shell("")!
-
-// cl.ubuntu_install(name: 'kristof2', wait: true, hero_install: true)!
-// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
-// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
-// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
-cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!
79 examples/virt/hetzner/hetzner_kristof1.vsh Executable file
@@ -0,0 +1,79 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds

// Server-specific configuration
const server_name = 'kristof1'
const server_whitelist = '2521602'

// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
    println('HETZNER_USER not set')
    exit(1)
}

hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
    println('HETZNER_PASSWORD not set')
    exit(1)
}

hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
    println('HETZNER_SSHKEY_NAME not set')
    exit(1)
}

hs := '
!!hetznermanager.configure
    user:"${hetzner_user}"
    whitelist:"${server_whitelist}"
    password:"${hetzner_passwd}"
    sshkey:"${hetzner_sshkey_name}"
'

println(hs)

playcmds.run(heroscript: hs)!

console.print_header('Hetzner Test.')

mut cl := hetznermanager.get()!
// println(cl)

// for i in 0 .. 5 {
//     println('test cache, first time slow then fast')
// }

println(cl.servers_list()!)

mut serverinfo := cl.server_info_get(name: server_name)!

println(serverinfo)

// cl.server_reset(name: 'kristof2', wait: true)!

// cl.server_rescue(name: name, wait: true, hero_install: true)!

// mut ks := cl.keys_get()!
// println(ks)

// console.print_header('SSH login')

cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
// cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!

// this will put hero in debug mode on the system
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
n.hero_install(compile: true)!

n.shell('')!
54 examples/virt/hetzner/hetzner_kristof2.vsh Executable file
@@ -0,0 +1,54 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds

// Server-specific configuration
const server_name = 'kristof2'
const server_whitelist = '2555487'

// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
    println('HETZNER_USER not set')
    exit(1)
}

hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
    println('HETZNER_PASSWORD not set')
    exit(1)
}

hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
    println('HETZNER_SSHKEY_NAME not set')
    exit(1)
}

hero_script := '
!!hetznermanager.configure
    user:"${hetzner_user}"
    whitelist:"${server_whitelist}"
    password:"${hetzner_passwd}"
    sshkey:"${hetzner_sshkey_name}"
'

playcmds.run(heroscript: hero_script)!
mut hetznermanager_ := hetznermanager.get()!

mut serverinfo := hetznermanager_.server_info_get(name: server_name)!

println('${server_name} ${serverinfo.server_ip}')

hetznermanager_.server_rescue(name: server_name, wait: true, hero_install: true)!
mut keys := hetznermanager_.keys_get()!

mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!

hetznermanager_.ubuntu_install(name: server_name, wait: true, hero_install: true)!
n.shell('')!
79 examples/virt/hetzner/hetzner_kristof3.vsh Executable file
@@ -0,0 +1,79 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds

// Server-specific configuration
const server_name = 'kristof3'
const server_whitelist = '2573047'

// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
    println('HETZNER_USER not set')
    exit(1)
}

hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
    println('HETZNER_PASSWORD not set')
    exit(1)
}

hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
    println('HETZNER_SSHKEY_NAME not set')
    exit(1)
}

hs := '
!!hetznermanager.configure
    user:"${hetzner_user}"
    whitelist:"${server_whitelist}"
    password:"${hetzner_passwd}"
    sshkey:"${hetzner_sshkey_name}"
'

println(hs)

playcmds.run(heroscript: hs)!

console.print_header('Hetzner Test.')

mut cl := hetznermanager.get()!
// println(cl)

// for i in 0 .. 5 {
//     println('test cache, first time slow then fast')
// }

println(cl.servers_list()!)

mut serverinfo := cl.server_info_get(name: server_name)!

println(serverinfo)

// cl.server_reset(name: 'kristof2', wait: true)!

// cl.server_rescue(name: name, wait: true, hero_install: true)!

// mut ks := cl.keys_get()!
// println(ks)

// console.print_header('SSH login')

cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
// cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!

// this will put hero in debug mode on the system
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
n.hero_install(compile: true)!

n.shell('')!
79 examples/virt/hetzner/hetzner_test1.vsh Executable file
@@ -0,0 +1,79 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds

// Server-specific configuration
const server_name = 'test1'
const server_whitelist = '2575034'

// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
    println('HETZNER_USER not set')
    exit(1)
}

hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
    println('HETZNER_PASSWORD not set')
    exit(1)
}

hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
    println('HETZNER_SSHKEY_NAME not set')
    exit(1)
}

hs := '
!!hetznermanager.configure
    user:"${hetzner_user}"
    whitelist:"${server_whitelist}"
    password:"${hetzner_passwd}"
    sshkey:"${hetzner_sshkey_name}"
'

println(hs)

playcmds.run(heroscript: hs)!

console.print_header('Hetzner Test.')

mut cl := hetznermanager.get()!
// println(cl)

// for i in 0 .. 5 {
//     println('test cache, first time slow then fast')
// }

println(cl.servers_list()!)

mut serverinfo := cl.server_info_get(name: server_name)!

println(serverinfo)

// cl.server_reset(name: 'kristof2', wait: true)!

// cl.server_rescue(name: name, wait: true, hero_install: true)!

// mut ks := cl.keys_get()!
// println(ks)

// console.print_header('SSH login')

cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
// cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!

// this will put hero in debug mode on the system
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
n.hero_install(compile: true)!

n.shell('')!
@@ -1,9 +1,57 @@
 # Hetzner Examples
 
+## Quick Start
+
-get the login passwd from:
+### 1. Configure Environment Variables
 
-https://robot.hetzner.com/preferences/index
+Copy `hetzner_env.sh` and fill in your credentials:
 
 ```bash
-curl -u "#ws+JdQtGCdL:..." https://robot-ws.your-server.de/server
-```
+export HETZNER_USER="your-robot-username"    # Hetzner Robot API username
+export HETZNER_PASSWORD="your-password"      # Hetzner Robot API password
+export HETZNER_SSHKEY_NAME="my-key"          # Name of SSH key registered in Hetzner
+```
+
+Each script has its own server name and whitelist ID defined at the top.
+
+### 2. Run a Script
+
+```bash
+source hetzner_env.sh
+./hetzner_kristof2.vsh
+```
+
+## SSH Keys
+
+The `HETZNER_SSHKEY_NAME` must be the **name** of an SSH key already registered in your Hetzner Robot account.
+
+Available keys in our Hetzner account:
+
+- hossnys (RSA 2048)
+- Jan De Landtsheer (ED25519 256)
+- mahmoud (ED25519 256)
+- kristof (ED25519 256)
+- maxime (ED25519 256)
+
+To add a new key, use `key_create` in your script or the Hetzner Robot web interface (see the example below).
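For reference, the `key_create` action looks like this in heroscript. The block below simply restates the form used elsewhere in this changeset, with a placeholder key name and key data:

```heroscript
!!hetznermanager.key_create
    instance: 'main'
    key_name: 'my-laptop-key'
    data: 'ssh-rsa AAAA...'
```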
+## Alternative: Using hero_secrets
+
+You can also use the shared secrets repository:
+
+```bash
+hero git pull https://git.threefold.info/despiegk/hero_secrets
+source ~/code/git.ourworld.tf/despiegk/hero_secrets/mysecrets.sh
+```
+
+## Troubleshooting
+
+### Get Robot API credentials
+
+Get your login credentials from: https://robot.hetzner.com/preferences/index
+
+### Test API access
+
+```bash
+curl -u "your-username:your-password" https://robot-ws.your-server.de/server
+```
1 examples/virt/kubernetes/.gitignore vendored Normal file
@@ -0,0 +1 @@
kubernetes_example
177 examples/virt/kubernetes/README.md Normal file
@@ -0,0 +1,177 @@
# Kubernetes Client Example

This example demonstrates the Kubernetes client functionality in HeroLib, including JSON parsing and cluster interaction.

## Prerequisites

1. **kubectl installed**: The Kubernetes command-line tool must be installed on your system.
   - macOS: `brew install kubectl`
   - Linux: See the [official installation guide](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
   - Windows: See the [official installation guide](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/)

2. **Kubernetes cluster**: You need access to a Kubernetes cluster. For local development, you can use:
   - **Minikube**: `brew install minikube && minikube start`
   - **Kind**: `brew install kind && kind create cluster`
   - **Docker Desktop**: Enable Kubernetes in Docker Desktop settings
   - **k3s**: Lightweight Kubernetes distribution

## Running the Example

### Method 1: Direct Execution (Recommended)

```bash
# Make the script executable
chmod +x examples/virt/kubernetes/kubernetes_example.vsh

# Run the script
./examples/virt/kubernetes/kubernetes_example.vsh
```

### Method 2: Using V Command

```bash
v -enable-globals run examples/virt/kubernetes/kubernetes_example.vsh
```

## What the Example Demonstrates

The example script demonstrates the following functionality:

### 1. Cluster Information

- Retrieves the Kubernetes cluster version
- Counts total nodes in the cluster
- Counts total namespaces
- Counts running pods across all namespaces

### 2. Pod Management

- Lists all pods in the `default` namespace
- Displays pod details:
  - Name, namespace, status
  - Node assignment and IP address
  - Container names
  - Labels and creation timestamp

### 3. Deployment Management

- Lists all deployments in the `default` namespace
- Shows deployment information:
  - Name and namespace
  - Replica counts (desired, ready, available, updated)
  - Labels and creation timestamp

### 4. Service Management

- Lists all services in the `default` namespace
- Displays service details:
  - Name, namespace, and type (ClusterIP, NodePort, LoadBalancer)
  - Cluster IP and external IP (if applicable)
  - Exposed ports and protocols
  - Labels and creation timestamp

## Expected Output

### With a Running Cluster

When connected to a Kubernetes cluster with resources, you'll see formatted output like:

```
╔════════════════════════════════════════════════════════════════╗
║          Kubernetes Client Example - HeroLib                    ║
║     Demonstrates JSON parsing and cluster interaction           ║
╚════════════════════════════════════════════════════════════════╝

[INFO] Creating Kubernetes client instance...
[SUCCESS] Kubernetes client created successfully

- 1. Cluster Information
[INFO] Retrieving cluster information...

┌─────────────────────────────────────────────────────────────┐
│ Cluster Overview                                            │
├─────────────────────────────────────────────────────────────┤
│ API Server: https://127.0.0.1:6443                          │
│ Version: v1.31.0                                            │
│ Nodes: 3                                                    │
│ Namespaces: 5                                               │
│ Running Pods: 12                                            │
└─────────────────────────────────────────────────────────────┘
```

### Without a Cluster

If kubectl is not installed or no cluster is configured, you'll see helpful error messages:

```
Error: Failed to get cluster information
...
This usually means:
  - kubectl is not installed
  - No Kubernetes cluster is configured (check ~/.kube/config)
  - The cluster is not accessible

To set up a local cluster, you can use:
  - Minikube: https://minikube.sigs.k8s.io/docs/start/
  - Kind: https://kind.sigs.k8s.io/docs/user/quick-start/
  - Docker Desktop (includes Kubernetes)
```

## Creating Test Resources

If your cluster is empty, you can create test resources to see the example in action:

```bash
# Create a test pod
kubectl run nginx --image=nginx

# Create a test deployment
kubectl create deployment nginx-deployment --image=nginx --replicas=3

# Expose the deployment as a service
kubectl expose deployment nginx-deployment --port=80 --type=ClusterIP
```

## Code Structure

The example demonstrates proper usage of the HeroLib Kubernetes client:

1. **Factory Pattern**: Uses `kubernetes.new()` to create a client instance
2. **Error Handling**: Proper use of V's `!` error propagation and `or {}` blocks
3. **JSON Parsing**: All kubectl JSON output is parsed into structured V types
4. **Console Output**: Clear, formatted output using the `console` module
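A minimal sketch of points 1 and 2 together, using only calls that appear in `kubernetes_example.vsh` itself (`kubernetes.new()`, `client.cluster_info()`, and the `api_server`/`version` fields it prints):

```v
import incubaid.herolib.virt.kubernetes

// Factory call; the `or {}` block handles a missing kubectl or kubeconfig.
mut client := kubernetes.new() or {
    eprintln('failed to create client: ${err}')
    exit(1)
}

// `!` would propagate the error instead; here we handle it inline.
cluster := client.cluster_info() or {
    eprintln('no cluster reachable: ${err}')
    exit(1)
}
println('connected to ${cluster.api_server} (${cluster.version})')
```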
## Implementation Details

The Kubernetes client module uses:

- **Struct-based JSON decoding**: V's `json.decode(Type, data)` for type-safe parsing
- **Kubernetes API response structs**: Matching kubectl's JSON output format
- **Runtime resource structs**: Clean data structures for application use (`Pod`, `Deployment`, `Service`)
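To illustrate the decoding step, here is a hedged sketch of the pattern. The `PodList`/`PodItem` struct names and fields are illustrative stand-ins, not the module's actual types; only `json.decode(Type, data)` itself is taken from the list above:

```v
import json

// Illustrative shapes; the real structs live in kubernetes_resources_model.v.
struct PodMetadata {
	name      string
	namespace string
}

struct PodItem {
	metadata PodMetadata
}

struct PodList {
	items []PodItem
}

fn parse_pods(kubectl_json string) ![]PodItem {
	// Type-safe decode: fields missing from the JSON get zero values.
	list := json.decode(PodList, kubectl_json)!
	return list.items
}
```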
## Troubleshooting

### "kubectl: command not found"

Install kubectl using your package manager (see Prerequisites above).

### "The connection to the server was refused"

Start a local Kubernetes cluster:

```bash
minikube start
# or
kind create cluster
```

### "No resources found in default namespace"

Create test resources using the commands in the "Creating Test Resources" section above.

## Related Files

- **Implementation**: `lib/virt/kubernetes/kubernetes_client.v`
- **Data Models**: `lib/virt/kubernetes/kubernetes_resources_model.v`
- **Unit Tests**: `lib/virt/kubernetes/kubernetes_test.v`
- **Factory**: `lib/virt/kubernetes/kubernetes_factory_.v`
231 examples/virt/kubernetes/kubernetes_example.vsh Executable file
@@ -0,0 +1,231 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.virt.kubernetes
import incubaid.herolib.ui.console

println('╔════════════════════════════════════════════════════════════════╗')
println('║          Kubernetes Client Example - HeroLib                    ║')
println('║     Demonstrates JSON parsing and cluster interaction           ║')
println('╚════════════════════════════════════════════════════════════════╝')
println('')

// Create a Kubernetes client instance using the factory pattern
println('[INFO] Creating Kubernetes client instance...')
mut client := kubernetes.new() or {
    console.print_header('Error: Failed to create Kubernetes client')
    eprintln('${err}')
    eprintln('')
    eprintln('Make sure kubectl is installed and configured properly.')
    eprintln('You can install kubectl with your package manager (see the README).')
    exit(1)
}

println('[SUCCESS] Kubernetes client created successfully')
println('')

// ============================================================================
// 1. Get Cluster Information
// ============================================================================
console.print_header('1. Cluster Information')
println('[INFO] Retrieving cluster information...')
println('')

cluster := client.cluster_info() or {
    console.print_header('Error: Failed to get cluster information')
    eprintln('${err}')
    eprintln('')
    eprintln('This usually means:')
    eprintln('  - kubectl is not installed')
    eprintln('  - No Kubernetes cluster is configured (check ~/.kube/config)')
    eprintln('  - The cluster is not accessible')
    eprintln('')
    eprintln('To set up a local cluster, you can use:')
    eprintln('  - Minikube: https://minikube.sigs.k8s.io/docs/start/')
    eprintln('  - Kind: https://kind.sigs.k8s.io/docs/user/quick-start/')
    eprintln('  - Docker Desktop (includes Kubernetes)')
    exit(1)
}

println('┌─────────────────────────────────────────────────────────────┐')
println('│ Cluster Overview                                            │')
println('├─────────────────────────────────────────────────────────────┤')
println('│ API Server: ${cluster.api_server:-50}│')
println('│ Version: ${cluster.version:-50}│')
println('│ Nodes: ${cluster.nodes.str():-50}│')
println('│ Namespaces: ${cluster.namespaces.str():-50}│')
println('│ Running Pods: ${cluster.running_pods.str():-50}│')
println('└─────────────────────────────────────────────────────────────┘')
println('')

// ============================================================================
// 2. Get Pods in the 'default' namespace
// ============================================================================
console.print_header('2. Pods in "default" Namespace')
println('[INFO] Retrieving pods from the default namespace...')
println('')

pods := client.get_pods('default') or {
    console.print_header('Warning: Failed to get pods')
    eprintln('${err}')
    eprintln('')
    []kubernetes.Pod{}
}

if pods.len == 0 {
    println('No pods found in the default namespace.')
    println('')
    println('To create a test pod, run:')
    println('  kubectl run nginx --image=nginx')
    println('')
} else {
    println('Found ${pods.len} pod(s) in the default namespace:')
    println('')

    for i, pod in pods {
        println('┌─────────────────────────────────────────────────────────────┐')
        println('│ Pod #${i + 1:-56}│')
        println('├─────────────────────────────────────────────────────────────┤')
        println('│ Name: ${pod.name:-50}│')
        println('│ Namespace: ${pod.namespace:-50}│')
        println('│ Status: ${pod.status:-50}│')
        println('│ Node: ${pod.node:-50}│')
        println('│ IP: ${pod.ip:-50}│')
        println('│ Containers: ${pod.containers.join(', '):-50}│')
        println('│ Created: ${pod.created_at:-50}│')

        if pod.labels.len > 0 {
            println('│ Labels:                                                     │')
            for key, value in pod.labels {
                label_str := '  ${key}=${value}'
                println('│ ${label_str:-58}│')
            }
        }

        println('└─────────────────────────────────────────────────────────────┘')
        println('')
    }
}

// ============================================================================
// 3. Get Deployments in the 'default' namespace
// ============================================================================
console.print_header('3. Deployments in "default" Namespace')
println('[INFO] Retrieving deployments from the default namespace...')
println('')

deployments := client.get_deployments('default') or {
    console.print_header('Warning: Failed to get deployments')
    eprintln('${err}')
    eprintln('')
    []kubernetes.Deployment{}
}

if deployments.len == 0 {
    println('No deployments found in the default namespace.')
    println('')
    println('To create a test deployment, run:')
    println('  kubectl create deployment nginx --image=nginx --replicas=3')
    println('')
} else {
    println('Found ${deployments.len} deployment(s) in the default namespace:')
    println('')

    for i, deploy in deployments {
        ready_status := if deploy.ready_replicas == deploy.replicas { '✓' } else { '⚠' }

        println('┌─────────────────────────────────────────────────────────────┐')
        println('│ Deployment #${i + 1:-53}│')
        println('├─────────────────────────────────────────────────────────────┤')
        println('│ Name: ${deploy.name:-44}│')
        println('│ Namespace: ${deploy.namespace:-44}│')
        println('│ Replicas: ${deploy.replicas.str():-44}│')
        println('│ Ready Replicas: ${deploy.ready_replicas.str():-44}│')
        println('│ Available: ${deploy.available_replicas.str():-44}│')
        println('│ Updated: ${deploy.updated_replicas.str():-44}│')
        println('│ Status: ${ready_status:-44}│')
        println('│ Created: ${deploy.created_at:-44}│')

        if deploy.labels.len > 0 {
            println('│ Labels:                                                     │')
            for key, value in deploy.labels {
                label_str := '  ${key}=${value}'
                println('│ ${label_str:-58}│')
            }
        }

        println('└─────────────────────────────────────────────────────────────┘')
        println('')
    }
}

// ============================================================================
// 4. Get Services in the 'default' namespace
// ============================================================================
console.print_header('4. Services in "default" Namespace')
println('[INFO] Retrieving services from the default namespace...')
println('')

services := client.get_services('default') or {
    console.print_header('Warning: Failed to get services')
    eprintln('${err}')
    eprintln('')
    []kubernetes.Service{}
}

if services.len == 0 {
    println('No services found in the default namespace.')
    println('')
    println('To create a test service, run:')
    println('  kubectl expose deployment nginx --port=80 --type=ClusterIP')
    println('')
} else {
    println('Found ${services.len} service(s) in the default namespace:')
    println('')

    for i, svc in services {
        println('┌─────────────────────────────────────────────────────────────┐')
        println('│ Service #${i + 1:-54}│')
        println('├─────────────────────────────────────────────────────────────┤')
        println('│ Name: ${svc.name:-48}│')
        println('│ Namespace: ${svc.namespace:-48}│')
        println('│ Type: ${svc.service_type:-48}│')
        println('│ Cluster IP: ${svc.cluster_ip:-48}│')

        if svc.external_ip.len > 0 {
            println('│ External IP: ${svc.external_ip:-48}│')
        }

        if svc.ports.len > 0 {
            println('│ Ports: ${svc.ports.join(', '):-48}│')
        }

        println('│ Created: ${svc.created_at:-48}│')

        if svc.labels.len > 0 {
            println('│ Labels:                                                     │')
            for key, value in svc.labels {
                label_str := '  ${key}=${value}'
                println('│ ${label_str:-58}│')
            }
        }

        println('└─────────────────────────────────────────────────────────────┘')
        println('')
    }
}

// ============================================================================
// Summary
// ============================================================================
console.print_header('Summary')
println('✓ Successfully demonstrated Kubernetes client functionality')
println('✓ Cluster information retrieved and parsed')
println('✓ Pods: ${pods.len} found')
println('✓ Deployments: ${deployments.len} found')
println('✓ Services: ${services.len} found')
println('')
println('All JSON parsing operations completed successfully!')
println('')
println('╔════════════════════════════════════════════════════════════════╗')
println('║                    Example Complete                             ║')
println('╚════════════════════════════════════════════════════════════════╝')
208 examples/web/doctree/doctree_meta.vsh Executable file
@@ -0,0 +1,208 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.web.doctree.meta

import incubaid.herolib.core.playbook
import incubaid.herolib.ui.console

// Comprehensive HeroScript for testing multi-level navigation depths
const test_heroscript_nav_depth = '
!!site.config
    name: "nav_depth_test"
    title: "Navigation Depth Test Site"
    description: "Testing multi-level nested navigation"
    tagline: "Deep navigation structures"

!!site.navbar
    title: "Nav Depth Test"

!!site.navbar_item
    label: "Home"
    to: "/"
    position: "left"

// ============================================================
// LEVEL 1: Simple top-level category
// ============================================================
!!site.page_category
    path: "Why"
    collapsible: true
    collapsed: false

// The collection will be repeated; it has no influence on navigation levels
!!site.page src: "mycollection:intro"
    label: "Why Choose Us"
    title: "Why Choose Us"
    description: "Reasons to use this platform"

!!site.page src: "benefits"
    label: "Key Benefits"
    title: "Key Benefits"
    description: "Main benefits overview"

// ============================================================
// LEVEL 1: Simple top-level category
// ============================================================
!!site.page_category
    path: "Tutorials"
    collapsible: true
    collapsed: false

!!site.page src: "getting_started"
    label: "Getting Started"
    title: "Getting Started"
    description: "Basic tutorial to get started"

!!site.page src: "first_steps"
    label: "First Steps"
    title: "First Steps"
    description: "Your first steps with the platform"

// ============================================================
// LEVEL 3: Three-level nested category (Tutorials > Operations > Urgent)
// ============================================================
!!site.page_category
    path: "Tutorials/Operations/Urgent"
    collapsible: true
    collapsed: false

!!site.page src: "emergency_restart"
    label: "Emergency Restart"
    title: "Emergency Restart"
    description: "How to emergency restart the system"

!!site.page src: "critical_fixes"
    label: "Critical Fixes"
    title: "Critical Fixes"
    description: "Apply critical fixes immediately"

!!site.page src: "incident_response"
    label: "Incident Response"
    title: "Incident Response"
    description: "Handle incidents in real-time"

// ============================================================
// LEVEL 2: Two-level nested category (Tutorials > Operations)
// ============================================================
!!site.page_category
    path: "Tutorials/Operations"
    collapsible: true
    collapsed: false

!!site.page src: "daily_checks"
    label: "Daily Checks"
    title: "Daily Checks"
    description: "Daily maintenance checklist"

!!site.page src: "monitoring"
    label: "Monitoring"
    title: "Monitoring"
    description: "System monitoring procedures"

!!site.page src: "backups"
    label: "Backups"
    title: "Backups"
    description: "Backup and restore procedures"

// ============================================================
// LEVEL 1: One-to-two level (Tutorials)
// ============================================================
// Note: This creates a sibling at the Tutorials level (not nested deeper)
!!site.page src: "advanced_concepts"
    label: "Advanced Concepts"
    title: "Advanced Concepts"
    description: "Deep dive into advanced concepts"

!!site.page src: "troubleshooting"
    label: "Troubleshooting"
    title: "Troubleshooting"
    description: "Troubleshooting guide"

// ============================================================
// LEVEL 2: Two-level nested category (Why > FAQ)
// ============================================================
!!site.page_category
    path: "Why/FAQ"
    collapsible: true
    collapsed: false

!!site.page src: "general"
    label: "General Questions"
    title: "General Questions"
    description: "Frequently asked questions"

!!site.page src: "pricing_questions"
    label: "Pricing"
    title: "Pricing Questions"
    description: "Questions about pricing"

!!site.page src: "technical_faq"
    label: "Technical FAQ"
    title: "Technical FAQ"
    description: "Technical frequently asked questions"

!!site.page src: "support_faq"
    label: "Support"
    title: "Support FAQ"
    description: "Support-related FAQ"

// ============================================================
// LEVEL 4: Four-level nested category (Tutorials > Operations > Database > Optimization)
// ============================================================
!!site.page_category
    path: "Tutorials/Operations/Database/Optimization"
    collapsible: true
    collapsed: false

!!site.page src: "query_optimization"
    label: "Query Optimization"
    title: "Query Optimization"
    description: "Optimize your database queries"

!!site.page src: "indexing_strategy"
    label: "Indexing Strategy"
    title: "Indexing Strategy"
    description: "Effective indexing strategies"

!!site.page_category
    path: "Tutorials/Operations/Database"
    collapsible: true
    collapsed: false

!!site.page src: "configuration"
    label: "Configuration"
    title: "Database Configuration"
    description: "Configure your database"

!!site.page src: "replication"
    label: "Replication"
    title: "Database Replication"
    description: "Set up database replication"
'

fn check(s2 meta.Site) {
    // assert s == s2
}

// ========================================================
// SETUP: Create and process playbook
// ========================================================
console.print_item('Creating playbook from HeroScript')
mut plbook := playbook.new(text: test_heroscript_nav_depth)!
console.print_green('✓ Playbook created')
console.lf()

console.print_item('Processing site configuration')
meta.play(mut plbook)!
console.print_green('✓ Site processed')
console.lf()

console.print_item('Retrieving configured site')
mut nav_site := meta.get(name: 'nav_depth_test')!
console.print_green('✓ Site retrieved')
console.lf()

// check(nav_site)
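A quick way to inspect what was parsed, instead of the empty `check()` stub above: V can print any struct directly, so this sketch assumes nothing about `meta.Site`'s fields:

```v
// Dump the whole parsed site definition; V derives str() for structs.
println(nav_site)
```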
201 examples/web/site/USAGE.md Normal file
@@ -0,0 +1,201 @@
# Site Module Usage Guide

## Quick Examples

### 1. Run Basic Example

```bash
cd examples/web/site
vrun process_site.vsh ./
```

With output:

```
=== Site Configuration Processor ===
Processing HeroScript files from: ./
Found 1 HeroScript file(s):
  - basic.heroscript

Processing: basic.heroscript

=== Configuration Complete ===
Site: simple_docs
Title: Simple Documentation
Pages: 4
Description: A basic documentation site
Navigation structure:
  - [Page] Getting Started
  - [Page] Installation
  - [Page] Usage Guide
  - [Page] FAQ

✓ Site configuration ready for deployment
```

### 2. Run Multi-Section Example

```bash
vrun process_site.vsh ./
# Edit process_site.vsh to use multi_section.heroscript instead
```

### 3. Process Custom Directory

```bash
vrun process_site.vsh /path/to/your/site/config
```

## File Structure

```
docs/
├── 0_config.heroscript   # Basic config
├── 1_menu.heroscript     # Navigation
├── 2_pages.heroscript    # Pages and categories
└── process.vsh           # Your processing script
```

## Creating Your Own Site

1. **Create a config directory:**
   ```bash
   mkdir my_site
   cd my_site
   ```

2. **Create config file (0_config.heroscript):**
   ```heroscript
   !!site.config
       name: "my_site"
       title: "My Site"
   ```

3. **Create pages file (1_pages.heroscript):**
   ```heroscript
   !!site.page src: "docs:intro"
       title: "Getting Started"
   ```

4. **Process with script:**
   ```bash
   vrun ../process_site.vsh ./
   ```

## Common Workflows

### Workflow 1: Documentation Site

```
docs/
├── 0_config.heroscript
│   └── Basic config + metadata
├── 1_menu.heroscript
│   └── Navbar + footer
├── 2_getting_started.heroscript
│   └── Getting started pages
├── 3_api.heroscript
│   └── API reference pages
└── 4_advanced.heroscript
    └── Advanced topic pages
```

### Workflow 2: Internal Knowledge Base

```
kb/
├── 0_config.heroscript
├── 1_navigation.heroscript
└── 2_articles.heroscript
```

### Workflow 3: Product Documentation with Imports

```
product_docs/
├── 0_config.heroscript
├── 1_imports.heroscript
│   └── Import shared templates
├── 2_menu.heroscript
└── 3_pages.heroscript
```

## Tips & Tricks

### Tip 1: Reuse Collections

```heroscript
# Specify once, reuse multiple times
!!site.page src: "guides:intro"
!!site.page src: "setup"       # Reuses "guides"
!!site.page src: "deployment"  # Still "guides"

# Switch to new collection
!!site.page src: "api:reference"
!!site.page src: "examples"    # Now "api"
```

### Tip 2: Auto-Increment Categories

```heroscript
# Automatically positioned at 100, 200, 300...
!!site.page_category name: "basics"
!!site.page_category name: "advanced"
!!site.page_category name: "expert"

# Or specify explicit positions
!!site.page_category name: "basics" position: 10
!!site.page_category name: "advanced" position: 20
```

### Tip 3: Title Extraction

Let titles come from markdown files:

```heroscript
# Don't specify title
!!site.page src: "docs:introduction"
# Title will be extracted from # Heading in introduction.md
```

### Tip 4: Draft Pages

Hide pages while working on them:

```heroscript
!!site.page src: "docs:work_in_progress"
    draft: true
    title: "Work in Progress"
```

## Debugging

### Debug: Check What Got Configured

```v
mut s := site.get(name: 'my_site')!
println(s.pages)      // All pages
println(s.nav)        // Navigation structure
println(s.siteconfig) // Configuration
```

### Debug: List All Sites

```v
sites := site.list()
for site_name in sites {
    println('Site: ${site_name}')
}
```

### Debug: Enable Verbose Output

Add `console.print_debug()` calls in your HeroScript processing.
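A minimal sketch of such a call, assuming `print_debug` takes a plain string like the other `console` helpers used in this guide (the message itself is just an example):

```v
import incubaid.herolib.ui.console

// Emit extra detail while a heroscript file is being processed.
console.print_debug('processing: 2_pages.heroscript')
```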
## Next Steps

- Customize `process_site.vsh` for your needs
- Add your existing pages (in markdown)
- Export to Docusaurus
- Deploy to production

For more info, see the main [Site Module README](./readme.md).
Some files were not shown because too many files have changed in this diff.