Merge branch 'development' into development_decartive
* development: (53 commits)
  ... feat: Implement theming and modal UI improvements
  ... zinit client fixes
  ... git herocmd improvements
  ...
@@ -134,7 +134,7 @@ Returns the current username.

 ## 2. Network Utilities

-### `osal.ping(args: PingArgs) !PingResult`
+### `osal.ping(args: PingArgs) !bool`
 Checks host reachability.
 * **Parameters**:
 ### `osal.ipaddr_pub_get_check() !string`
@@ -23,12 +23,12 @@ This document describes the core functionalities of the Operating System Abstrac

 ## 2. Network Utilities

-* **`osal.ping(args: PingArgs) !PingResult`**: Check host reachability.
-  * **Key Parameters**: `address` (string).
-  * **Returns**: `PingResult` (`.ok`, `.timeout`, `.unknownhost`).
+* **`osal.ping(args: PingArgs) !bool`**: Check host reachability.
+  * **Key Parameters**:
+    - address string = "8.8.8.8"
+    - nr_ping u16 = 3 // number of ping requests we will do
+    - nr_ok u16 = 3 // how many of them need to be ok
+    - retry u8 // how many times do we retry the above sequence, basically we ping ourselves with -c 1
 * **`osal.tcp_port_test(args: TcpPortTestArgs) bool`**: Test if a TCP port is open.
   * **Key Parameters**: `address` (string), `port` (int).
 * **`osal.ipaddr_pub_get() !string`**: Get public IP address.

 ## 3. File System Operations
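For orientation, a minimal call using the parameters documented in this hunk might look as follows (a sketch; it assumes the herolib `osal` module and the `PingArgs` field names listed above):

```v
import freeflowuniverse.herolib.osal

// sketch: field names follow the PingArgs parameters documented above
ok := osal.ping(address: '8.8.8.8', nr_ping: 3, nr_ok: 3, retry: 1)!
println(if ok { 'host reachable' } else { 'host unreachable' })
```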
165  aiprompts/herolib_core/v_templates.md  Normal file
@@ -0,0 +1,165 @@
V allows for easily using text templates, expanded at compile time to
V functions, that efficiently produce text output. This is especially
useful for templated HTML views, but the mechanism is general enough
to be used for other kinds of text output as well.

# Template directives

Each template directive begins with an `@` sign.
Some directives contain a `{}` block, others only have `''` (string) parameters.

Newlines at the beginning and end of `{}` blocks are ignored; otherwise this
(see [if](#if) for the syntax):

```html
@if bool_val {
<span>This is shown if bool_val is true</span>
}
```

... would output:

```html

<span>This is shown if bool_val is true</span>

```

... which is less readable.

## if

The if directive consists of three parts: the `@if` tag, the condition (same syntax as in V),
and the `{}` block, where you can write HTML, which will be rendered if the condition is true:

```
@if <condition> {}
```

### Example

```html
@if bool_val {
<span>This is shown if bool_val is true</span>
}
```

One-liner:

```html
@if bool_val { <span>This is shown if bool_val is true</span> }
```

The first example would result in:

```html
<span>This is shown if bool_val is true</span>
```

... while the one-liner results in:

```html
<span>This is shown if bool_val is true</span>
```

## for

The for directive consists of three parts: the `@for` tag,
the condition (same syntax as in V) and the `{}` block,
where you can write text that is rendered for each iteration of the loop:

```
@for <condition> {}
```

### Example for @for

```html
@for i, val in my_vals {
<span>$i - $val</span>
}
```

One-liner:

```html
@for i, val in my_vals { <span>$i - $val</span> }
```

The first example would result in:

```html
<span>0 - "First"</span>
<span>1 - "Second"</span>
<span>2 - "Third"</span>
...
```

... while the one-liner results in:

```html
<span>0 - "First"</span>
<span>1 - "Second"</span>
<span>2 - "Third"</span>
...
```

You can also write this (and all other for-condition syntaxes that are allowed in V):

```html
@for i = 0; i < 5; i++ {
<span>$i</span>
}
```

## include

The include directive is for including other HTML files (which will be processed as well)
and consists of two parts: the `@include` tag and a following `'<path>'` string.
The path parameter is relative to the template file being called.

### Example for the folder structure of a project using templates:

```
Project root
/templates
    - index.html
    /headers
        - base.html
```

`index.html`

```html
<div>@include 'headers/base'</div>
```

> Note that there shouldn't be a file suffix;
> it is automatically appended, and only `html` files are allowed.

## js

The js directive consists of two parts: the `@js` tag and a `'<path>'` string,
where you can insert your script source:

```
@js '<url>'
```

### Example for the @js directive:

```html
@js 'myscripts.js'
```

# Variables

All variables declared before the `$tmpl` call can be used through the `@{my_var}` syntax.
It's also possible to use properties of structs here, like `@{my_struct.prop}`.

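For example, a function that declares a variable and then renders a template could look like this (a sketch; the template path and variable are hypothetical):

```v
fn build_page() string {
	my_var := 'Hello from V'
	// my_var is in scope here, so the template may reference @{my_var}
	return $tmpl('templates/page.html')
}
```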
# Escaping

The `@` symbol starts a template directive. If you need to use `@` as a regular
character within a template, escape it by using a double `@` like this: `@@`.
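For instance, to emit a literal e-mail address (hypothetical template content):

```html
Contact us at: admin@@example.com
```

... which renders as `admin@example.com`.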
@@ -761,9 +761,7 @@ this document has info about the most core functions, more detailed info can be

 ### 2. Network Utilities

-* **`osal.ping(args: PingArgs) !PingResult`**: Check host reachability.
-  * **Key Parameters**: `address` (string).
-  * **Returns**: `PingResult` (`.ok`, `.timeout`, `.unknownhost`).
+* **`osal.ping(args: PingArgs) !bool`**: Check host reachability.
 * **`osal.tcp_port_test(args: TcpPortTestArgs) bool`**: Test if a TCP port is open.
   * **Key Parameters**: `address` (string), `port` (int).
 * **`osal.ipaddr_pub_get() !string`**: Get public IP address.

73  aiprompts/v_advanced/blake3.md  Normal file
@@ -0,0 +1,73 @@
## `crypto.blake3` Module

```v
fn sum256(data []u8) []u8
```

Returns the Blake3 256-bit hash of the provided data.

```v
fn sum_derive_key256(context []u8, key_material []u8) []u8
```

Computes the Blake3 256-bit derived-key hash based on the context and key material.

```v
fn sum_keyed256(data []u8, key []u8) []u8
```

Returns the Blake3 256-bit keyed hash of the data using the specified key.

---

### Digest-Based API

```v
fn Digest.new_derive_key_hash(context []u8) !Digest
```

Initializes a `Digest` struct for creating a Blake3 derived-key hash, using the provided context.

```v
fn Digest.new_hash() !Digest
```

Initializes a `Digest` struct for a standard (unkeyed) Blake3 hash.

```v
fn Digest.new_keyed_hash(key []u8) !Digest
```

Initializes a `Digest` struct for a keyed Blake3 hash, with the given key.

---

### `Digest` Methods

```v
fn (mut d Digest) write(data []u8) !
```

Feeds additional data bytes into the ongoing hash computation.

```v
fn (mut d Digest) checksum(size u64) []u8
```

Finalizes the hash and returns the resulting output.

* The `size` parameter specifies the number of output bytes (commonly `32` for a 256-bit digest, but it can be up to `2**64`).

---

### Recommended Usage (in V)

```v
import crypto.blake3

mut hasher := blake3.Digest.new_hash() or { panic(err) }
hasher.write(data) or { panic(err) }
digest := hasher.checksum(24) // returns a []u8 of length 24 (192 bits)
```
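The one-shot functions can be used without a `Digest`; a sketch of the keyed variant (the 32-byte key below is purely illustrative):

```v
import crypto.blake3

data := 'hello world'.bytes()
key := []u8{len: 32, init: u8(index)} // illustrative 32-byte key
mac := blake3.sum_keyed256(data, key)
println(mac.hex())
```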
@@ -38,7 +38,7 @@ fn do() ! {

 	if os.args.len == 2 {
 		mypath := os.args[1]
-		if mypath.to_lower().ends_with('.hero') {
+		if mypath.to_lower().ends_with('.hero') || mypath.to_lower().ends_with('.heroscript') || mypath.to_lower().ends_with('.hs') {
 			// hero was called from a file
 			playcmds_do(mypath)!
 			return
@@ -94,7 +94,7 @@ fn do() ! {

 fn main() {
 	do() or {
-		$dbg;
+		// $dbg;
 		eprintln('Error: ${err}')
 		print_backtrace()
 		exit(1)

7  examples/builder/heroscript_example.hs  Normal file
@@ -0,0 +1,7 @@
!!node.new
    name:'mynode'
    ipaddr:'127.0.0.1'

!!cmd.run
    node:'mynode'
    cmd:'ls /'
@@ -1,4 +1,4 @@
-#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
+#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

 import freeflowuniverse.herolib.clients.zinit
 import freeflowuniverse.herolib.installers.infra.zinit_installer
@@ -52,6 +52,7 @@ println(' - API title: ${spec.info.title}')
 println(' - API version: ${spec.info.version}')
 println(' - Methods available: ${spec.methods.len}')

+
 // 2. List all services
 println('\n2. Listing all services...')
 services := client.service_list() or {
@@ -1,45 +0,0 @@
|
||||
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
|
||||
|
||||
import freeflowuniverse.herolib.data.dbfs
|
||||
import time
|
||||
import os
|
||||
|
||||
data_dir := '/tmp/db'
|
||||
os.rmdir_all(data_dir) or {}
|
||||
mut dbcollection := dbfs.get(contextid: 1, dbpath: data_dir, secret: '123456')!
|
||||
|
||||
mut db := dbcollection.db_create(name: 'db_a', encrypted: true, withkeys: true)!
|
||||
|
||||
id := db.set(key: 'a', value: 'bbbb')!
|
||||
assert 'bbbb' == db.get(key: 'a')!
|
||||
|
||||
id2 := db.set(key: 'a', value: 'bbbb2')!
|
||||
assert 'bbbb2' == db.get(key: 'a')!
|
||||
assert id == id2
|
||||
assert id == 1
|
||||
|
||||
id3 := db.set(key: 'b', value: 'bbbb3')!
|
||||
assert 'bbbb3' == db.get(key: 'b')!
|
||||
assert id3 == id2 + 1
|
||||
|
||||
assert db.exists(key: 'a')!
|
||||
assert db.exists(key: 'b')!
|
||||
assert db.exists(id: id2)!
|
||||
assert db.exists(id: id3)!
|
||||
id3_exsts := db.exists(id: id3 + 1)!
|
||||
println(id3 + 1)
|
||||
assert id3_exsts == false
|
||||
|
||||
for i in 3 .. 100 {
|
||||
id4 := db.set(key: 'a${i}', value: 'b${i}')!
|
||||
println('${i} --> ${id4}')
|
||||
assert i == id4
|
||||
}
|
||||
|
||||
db.delete(key: 'a')!
|
||||
assert db.exists(key: 'a')! == false
|
||||
assert db.exists(id: id2)! == false
|
||||
|
||||
db.delete(id: 50)!
|
||||
assert db.exists(key: 'a50')! == false
|
||||
assert db.exists(id: 50)! == false
|
||||
@@ -1,8 +0,0 @@
|
||||
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
|
||||
|
||||
import freeflowuniverse.herolib.crypt.secrets
|
||||
|
||||
secrets.delete_passwd()!
|
||||
r := secrets.encrypt('aaa')!
|
||||
println(r)
|
||||
assert 'aaa' == secrets.decrypt(r)!
|
||||
5  examples/installers/infra/.gitignore  vendored  Normal file
@@ -0,0 +1,5 @@
zinit_installer
dify
screen
livekit
gitea
Binary file not shown.
@@ -1,8 +1,8 @@
-#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
+#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

 import freeflowuniverse.herolib.installers.infra.livekit as livekit_installer

-mut livekit := livekit_installer.get()!
+mut livekit := livekit_installer.get(create: true)!
 livekit.install()!
 livekit.start()!
 livekit.destroy()!

6  examples/installers_remote/hero_compile.hero  Executable file
@@ -0,0 +1,6 @@
#!/usr/bin/env hero

//root@65.21.132.119


!!herolib.compile reset:1
0  examples/osal/tmux/heroscripts/tmux_setup.heroscript  Normal file → Executable file
@@ -9,18 +9,18 @@ import freeflowuniverse.herolib.ui.console
 fn main() {
 	console.print_header('🔑 Hero SSH Agent Test Suite')
 	os.execute('${os.dir(os.dir(@FILE))}/cli/compile.vsh')

 	hero_bin := '${os.home_dir()}/hero/bin/hero'

 	// Check if hero binary exists
 	if !os.exists(hero_bin) {
 		console.print_stderr('Hero binary not found at ${hero_bin}')
 		console.print_stderr('Please compile hero first with: ./cli/compile.vsh')
 		exit(1)
 	}

 	console.print_green('✓ Hero binary found at ${hero_bin}')

 	// Test 1: Profile initialization
 	console.print_header('Test 1: Profile Initialization')
 	result1 := os.execute('${hero_bin} sshagent profile')
@@ -29,17 +29,17 @@ fn main() {
 	} else {
 		console.print_stderr('❌ Profile initialization failed: ${result1.output}')
 	}

 	// Test 2: Status check
 	console.print_header('Test 2: Status Check')
 	result2 := os.execute('${hero_bin} sshagent status')
-	if result2.exit_code == 0 && result2.output.contains("- SSH Agent Status") {
+	if result2.exit_code == 0 && result2.output.contains('- SSH Agent Status') {
 		console.print_green('✓ Status check successful')
 		println(result2.output)
 	} else {
 		console.print_stderr('❌ Status check failed: ${result2.output}')
 	}

 	// Test 3: List keys
 	console.print_header('Test 3: List SSH Keys')
 	result3 := os.execute('${hero_bin} sshagent list')
@@ -49,7 +49,7 @@ fn main() {
 	} else {
 		console.print_stderr('❌ List keys failed: ${result3.output}')
 	}

 	// Test 4: Generate test key
 	console.print_header('Test 4: Generate Test Key')
 	test_key_name := 'hero_test_${os.getpid()}'
@@ -57,11 +57,11 @@ fn main() {
 	if result4.exit_code == 0 && result4.output.contains('Generating SSH key') {
 		console.print_green('✓ Key generation successful')
 		println(result4.output)

 		// Cleanup: remove test key files
 		test_key_path := '${os.home_dir()}/.ssh/${test_key_name}'
 		test_pub_path := '${test_key_path}.pub'

 		if os.exists(test_key_path) {
 			os.rm(test_key_path) or {}
 			console.print_debug('Cleaned up test private key')
@@ -73,7 +73,7 @@ fn main() {
 	} else {
 		console.print_stderr('❌ Key generation failed: ${result4.output}')
 	}

 	// Test 5: Help output
 	console.print_header('Test 5: Help Output')
 	result5 := os.execute('${hero_bin} sshagent')
@@ -82,10 +82,10 @@ fn main() {
 	} else {
 		console.print_stderr('❌ Help output unexpected')
 	}

 	console.print_header('🎉 Test Suite Complete')
 	console.print_green('Hero SSH Agent is ready for use!')

 	// Show usage examples
 	console.print_header('Usage Examples:')
 	println('')
58  examples/threefold/incatokens/data/simulation.hero  Normal file
@@ -0,0 +1,58 @@
!!incatokens.simulate
    name: 'inca_mainnet_simulation'
    total_supply: 10000000000
    public_pct: 0.50
    team_pct: 0.15
    treasury_pct: 0.15
    investor_pct: 0.20
    nrcol: 60
    currency: 'USD'
    epoch1_floor_uplift: 1.20
    epochn_floor_uplift: 1.20
    amm_liquidity_depth_factor: 2.0
    team_cliff_months: 12
    team_vesting_months: 36
    treasury_cliff_months: 12
    treasury_vesting_months: 48
    export_dir: './simulation_output'
    generate_csv: true
    generate_charts: true
    generate_report: true

!!incatokens.scenario
    name: 'Conservative'
    demands: [8000000, 8000000, 0]
    amm_trades: [0, 0, 0]

!!incatokens.scenario
    name: 'Moderate'
    demands: [25000000, 50000000, 0]
    amm_trades: [0, 0, 0]

!!incatokens.scenario
    name: 'Optimistic'
    demands: [50000000, 100000000, 0]
    amm_trades: [5000000, 10000000, 0]

!!incatokens.investor_round
    name: 'Seed'
    allocation_pct: 0.03
    price: 0.003
    cliff_months: 6
    vesting_months: 24

!!incatokens.investor_round
    name: 'Series_A'
    allocation_pct: 0.07
    price: 0.008
    cliff_months: 6
    vesting_months: 24

!!incatokens.investor_round
    name: 'Series_B'
    allocation_pct: 0.10
    price: 0.012
    cliff_months: 3
    vesting_months: 18

!!incatokens.export path:"/tmp/incatokens_export"
14  examples/threefold/incatokens/incatokens_simulate.vsh  Executable file
@@ -0,0 +1,14 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import freeflowuniverse.herolib.threefold.incatokens
import os
import freeflowuniverse.herolib.core.playcmds

current_dir := os.dir(@FILE)
heroscript_path := '${current_dir}/data'

playcmds.run(
	heroscript_path: heroscript_path
)!

println('Simulation complete!')
1  examples/virt/hetzner/.gitignore  vendored  Normal file
@@ -0,0 +1 @@
hetzner_example
37  examples/virt/hetzner/hetzner_example.hero  Executable file
@@ -0,0 +1,37 @@
#!/usr/bin/env hero


// !!hetznermanager.configure
//     name:"main"
//     user:"krist"
//     whitelist:"2111181, 2392178, 2545053, 2542166, 2550508, 2550378,2550253"
//     password:"wontsethere"
//     sshkey:"kristof"


// !!hetznermanager.server_rescue
//     server_name: 'kristof21' // The name of the server to manage (or use `id`)
//     wait: true // Wait for the operation to complete
//     hero_install: true // Automatically install Herolib in the rescue system


// # Reset a server
// !!hetznermanager.server_reset
//     instance: 'main'
//     server_name: 'your-server-name'
//     wait: true

// # Add a new SSH key to your Hetzner account
// !!hetznermanager.key_create
//     instance: 'main'
//     key_name: 'my-laptop-key'
//     data: 'ssh-rsa AAAA...'


// Install Ubuntu 24.04 on a server
!!hetznermanager.ubuntu_install
    server_name: 'kristof2'
    wait: true
    hero_install: true // Install Herolib on the new OS
@@ -1,40 +1,68 @@
|
||||
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
|
||||
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
|
||||
|
||||
import freeflowuniverse.herolib.virt.hetzner
|
||||
import freeflowuniverse.herolib.virt.hetznermanager
|
||||
import freeflowuniverse.herolib.ui.console
|
||||
import freeflowuniverse.herolib.core.base
|
||||
import freeflowuniverse.herolib.builder
|
||||
import time
|
||||
import os
|
||||
import freeflowuniverse.herolib.core.playcmds
|
||||
|
||||
console.print_header('Hetzner login.')
|
||||
|
||||
// USE IF YOU WANT TO CONFIGURE THE HETZNER, ONLY DO THIS ONCE
|
||||
// hetzner.configure("test")!
|
||||
|
||||
mut cl := hetzner.get('test')!
|
||||
|
||||
for i in 0 .. 5 {
|
||||
println('test cache, first time slow then fast')
|
||||
cl.servers_list()!
|
||||
user := os.environ()['HETZNER_USER'] or {
|
||||
println('HETZNER_USER not set')
|
||||
exit(1)
|
||||
}
|
||||
passwd := os.environ()['HETZNER_PASSWORD'] or {
|
||||
println('HETZNER_PASSWORD not set')
|
||||
exit(1)
|
||||
}
|
||||
|
||||
println(cl.servers_list()!)
|
||||
hs := '
|
||||
!!hetznermanager.configure
|
||||
user:"${user}"
|
||||
whitelist:"2111181, 2392178, 2545053, 2542166, 2550508, 2550378,2550253"
|
||||
password:"${passwd}"
|
||||
sshkey:"kristof"
|
||||
'
|
||||
|
||||
mut serverinfo := cl.server_info_get(name: 'kristof2')!
|
||||
println(hs)
|
||||
|
||||
println(serverinfo)
|
||||
playcmds.run(heroscript: hs)!
|
||||
|
||||
console.print_header('Hetzner Test.')
|
||||
|
||||
mut cl := hetznermanager.get()!
|
||||
// println(cl)
|
||||
|
||||
// for i in 0 .. 5 {
|
||||
// println('test cache, first time slow then fast')
|
||||
// }
|
||||
|
||||
// println(cl.servers_list()!)
|
||||
|
||||
// mut serverinfo := cl.server_info_get(name: 'kristof2')!
|
||||
|
||||
// println(serverinfo)
|
||||
|
||||
// cl.server_reset(name:"kristof2",wait:true)!
|
||||
|
||||
// cl.server_rescue(name:"kristof2",wait:true)!
|
||||
|
||||
console.print_header('SSH login')
|
||||
mut b := builder.new()!
|
||||
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
|
||||
|
||||
// n.hero_install()!
|
||||
// n.hero_compile_debug()!
|
||||
// don't forget to specify the keyname needed
|
||||
// cl.server_rescue(name:"kristof2",wait:true, hero_install:true,sshkey_name:"kristof")!
|
||||
|
||||
// mut ks:=cl.keys_get()!
|
||||
// println(ks)
|
||||
|
||||
// console.print_header('SSH login')
|
||||
// mut b := builder.new()!
|
||||
// mut n := b.node_new(ipaddr: serverinfo.server_ip)!
|
||||
|
||||
// this will put hero in debug mode on the system
|
||||
// n.hero_install(compile:true)!
|
||||
|
||||
// n.shell("")!
|
||||
|
||||
// cl.ubuntu_install(name: 'kristof2', wait: true, hero_install: true)!
|
||||
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
|
||||
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
|
||||
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
|
||||
cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!
|
||||
|
||||
@@ -1,26 +1,27 @@
|
||||
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
|
||||
|
||||
import freeflowuniverse.herolib.virt.lima
|
||||
// import freeflowuniverse.herolib.virt.lima
|
||||
import freeflowuniverse.herolib.core.texttools
|
||||
import freeflowuniverse.herolib.ui.console
|
||||
import freeflowuniverse.herolib.installers.virt.lima as limainstaller
|
||||
import os
|
||||
|
||||
limainstaller.install()!
|
||||
mut i := limainstaller.get(create: true)!
|
||||
i.install(reset: true)!
|
||||
|
||||
mut virtmanager := lima.new()!
|
||||
// mut virtmanager := lima.new()!
|
||||
|
||||
virtmanager.vm_delete_all()!
|
||||
// virtmanager.vm_delete_all()!
|
||||
|
||||
// virtmanager.vm_new(reset:true,template:.alpine,name:'alpine',install_hero:false)!
|
||||
// // virtmanager.vm_new(reset:true,template:.alpine,name:'alpine',install_hero:false)!
|
||||
|
||||
// virtmanager.vm_new(reset:true,template:.arch,name:'arch',install_hero:true)!
|
||||
// // virtmanager.vm_new(reset:true,template:.arch,name:'arch',install_hero:true)!
|
||||
|
||||
virtmanager.vm_new(reset: true, template: .ubuntucloud, name: 'hero', install_hero: false)!
|
||||
mut vm := virtmanager.vm_get('hero')!
|
||||
// virtmanager.vm_new(reset: true, template: .ubuntucloud, name: 'hero', install_hero: false)!
|
||||
// mut vm := virtmanager.vm_get('hero')!
|
||||
|
||||
println(vm)
|
||||
// println(vm)
|
||||
|
||||
// vm.install_hero()!
|
||||
// // vm.install_hero()!
|
||||
|
||||
// console.print_debug_title("MYVM", vm.str())
|
||||
// // console.print_debug_title("MYVM", vm.str())
|
||||
|
||||
239  examples/virt/podman/podman.vsh  Executable file
@@ -0,0 +1,239 @@
#!/usr/bin/env -S v -n -w -enable-globals run

import freeflowuniverse.herolib.virt.podman
import freeflowuniverse.herolib.installers.virt.podman as podman_installer
import freeflowuniverse.herolib.ui.console

console.print_header('🐳 Comprehensive Podman Module Demo')
console.print_stdout('This demo showcases both Simple API and Factory API approaches')
console.print_stdout('Note: This demo requires podman to be available or will install it automatically')

// =============================================================================
// SECTION 1: INSTALLATION
// =============================================================================

console.print_header('📦 Section 1: Podman Installation')

console.print_stdout('Installing podman automatically...')
if mut installer := podman_installer.get() {
	installer.install() or {
		console.print_stdout('⚠️ Podman installation failed (may already be installed): ${err}')
	}
	console.print_stdout('✅ Podman installation step completed')
} else {
	console.print_stdout('⚠️ Failed to get podman installer, continuing with demo...')
}

// =============================================================================
// SECTION 2: SIMPLE API DEMONSTRATION
// =============================================================================

console.print_header('🚀 Section 2: Simple API Functions')

console.print_stdout('The Simple API provides direct functions for quick operations')

// Ensure podman machine is available before using Simple API
console.print_stdout('Ensuring podman machine is available...')
podman.ensure_machine_available() or {
	console.print_stdout('⚠️ Failed to ensure podman machine: ${err}')
	console.print_stdout('Continuing with demo - some operations may fail...')
}

// Test 2.1: List existing containers and images
console.print_stdout('\n📋 2.1 Listing existing resources...')

containers := podman.list_containers(true) or {
	console.print_stdout('⚠️ Failed to list containers: ${err}')
	[]podman.PodmanContainer{}
}
console.print_stdout('Found ${containers.len} containers (including stopped)')

images := podman.list_images() or {
	console.print_stdout('⚠️ Failed to list images: ${err}')
	[]podman.PodmanImage{}
}
console.print_stdout('Found ${images.len} images')

// Test 2.2: Run a simple container
console.print_debug('\n🏃 2.2 Running a container with Simple API...')

options := podman.RunOptions{
	name: 'simple-demo-container'
	detach: true
	remove: true // Auto-remove when stopped
	env: {
		'DEMO_MODE': 'simple_api'
		'TEST_VAR':  'hello_world'
	}
	command: ['echo', 'Hello from Simple API container!']
}

container_id := podman.run_container('alpine:latest', options) or {
	console.print_debug('⚠️ Failed to run container: ${err}')
	console.print_debug('This might be due to podman not being available or image not found')
	''
}

if container_id != '' {
	console.print_debug('✅ Container started with ID: ${container_id[..12]}...')
	console.print_debug('Waiting for container to complete...')
	console.print_debug('✅ Container completed and auto-removed')
} else {
	console.print_debug('❌ Container creation failed - continuing with demo...')
}

// Test 2.3: Error handling demonstration
console.print_debug('\n⚠️ 2.3 Error handling demonstration...')

podman.run_container('nonexistent:image', options) or {
	match err {
		podman.ImageError {
			console.print_debug('✅ Caught image error: ${err.msg()}')
		}
		podman.ContainerError {
			console.print_debug('✅ Caught container error: ${err.msg()}')
		}
		else {
			console.print_debug('✅ Caught other error: ${err.msg()}')
		}
	}
}

// =============================================================================
// SECTION 3: FACTORY API DEMONSTRATION
// =============================================================================

console.print_header('🏭 Section 3: Factory API Pattern')

console.print_debug('The Factory API provides advanced workflows and state management')

// Test 3.1: Create factory
console.print_debug('\n🔧 3.1 Creating PodmanFactory...')

if mut factory := podman.new(install: false, herocompile: false) {
	console.print_debug('✅ PodmanFactory created successfully')

	// Test 3.2: Advanced container creation
	console.print_debug('\n📦 3.2 Creating container with advanced options...')

	if container := factory.container_create(
		name: 'factory-demo-container'
		image_repo: 'alpine'
		image_tag: 'latest'
		command: 'sh -c "echo Factory API Demo && sleep 2 && echo Container completed"'
		memory: '128m'
		cpus: 0.5
		env: {
			'DEMO_MODE':      'factory_api'
			'CONTAINER_TYPE': 'advanced'
		}
		detach: true
		remove_when_done: true
		interactive: false
	)
	{
		console.print_debug('✅ Advanced container created: ${container.name} (${container.id[..12]}...)')

		// Test 3.3: Container management
		console.print_debug('\n🎛️ 3.3 Container management operations...')

		// Load current state
		factory.load() or { console.print_debug('⚠️ Failed to load factory state: ${err}') }

		// List containers through factory
		factory_containers := factory.containers_get(name: '*demo*') or {
			console.print_debug('⚠️ No demo containers found: ${err}')
			[]&podman.Container{}
		}

		console.print_debug('Found ${factory_containers.len} demo containers through factory')
		console.print_debug('Waiting for factory container to complete...')
	} else {
		console.print_debug('⚠️ Failed to create container: ${err}')
	}

	// Test 3.4: Builder Integration (if available)
	console.print_debug('\n🔨 3.4 Builder Integration (Buildah)...')

	if mut builder := factory.builder_new(
		name: 'demo-app-builder'
		from: 'alpine:latest'
		delete: true
	)
	{
		console.print_debug('✅ Builder created: ${builder.containername}')

		// Simple build operations
		builder.run('apk add --no-cache curl') or {
			console.print_debug('⚠️ Failed to install packages: ${err}')
		}

		builder.run('echo "echo Hello from built image" > /usr/local/bin/demo-app') or {
			console.print_debug('⚠️ Failed to create app: ${err}')
		}

		builder.run('chmod +x /usr/local/bin/demo-app') or {
			console.print_debug('⚠️ Failed to make app executable: ${err}')
		}

		// Configure and commit
		builder.set_entrypoint('/usr/local/bin/demo-app') or {
			console.print_debug('⚠️ Failed to set entrypoint: ${err}')
		}

		builder.commit('demo-app:latest') or {
			console.print_debug('⚠️ Failed to commit image: ${err}')
		}

		console.print_debug('✅ Image built and committed: demo-app:latest')

		// Run container from built image
		if built_container_id := factory.create_from_buildah_image('demo-app:latest',
			podman.ContainerRuntimeConfig{
				name: 'demo-app-container'
				detach: true
				remove: true
			})
		{
			console.print_debug('✅ Container running from built image: ${built_container_id[..12]}...')
		} else {
			console.print_debug('⚠️ Failed to run container from built image: ${err}')
		}

		// Cleanup builder
		factory.builder_delete('demo-app-builder') or {
			console.print_debug('⚠️ Failed to delete builder: ${err}')
		}
	} else {
		console.print_debug('⚠️ Failed to create builder (buildah may not be available): ${err}')
	}
} else {
	console.print_debug('❌ Failed to create podman factory: ${err}')
	console.print_debug('This usually means podman is not installed or not accessible')
	console.print_debug('Skipping factory API demonstrations...')
}

// =============================================================================
// DEMO COMPLETION
// =============================================================================

console.print_header('🎉 Demo Completed Successfully!')

console.print_debug('This demo demonstrated the independent podman module:')
console.print_debug(' ✅ Automatic podman installation')
console.print_debug(' ✅ Simple API functions (run_container, list_containers, list_images)')
console.print_debug(' ✅ Factory API pattern (advanced container creation)')
|
||||
console.print_debug(' ✅ Buildah integration (builder creation, image building)')
|
||||
console.print_debug(' ✅ Seamless podman-buildah workflows')
|
||||
console.print_debug(' ✅ Comprehensive error handling with module-specific types')
|
||||
console.print_debug(' ✅ Module independence (no shared dependencies)')
|
||||
console.print_debug('')
|
||||
console.print_debug('Key Features:')
|
||||
console.print_debug(' 🔒 Self-contained module with own error types')
|
||||
console.print_debug(' 🎯 Two API approaches: Simple functions & Factory pattern')
|
||||
console.print_debug(' 🔧 Advanced container configuration options')
|
||||
console.print_debug(' 🏗️ Buildah integration for image building')
|
||||
console.print_debug(' 📦 Ready for open source publication')
|
||||
console.print_debug('')
|
||||
console.print_debug('The podman module provides both simple and advanced APIs')
|
||||
console.print_debug('for all your container management needs! 🐳')
|
||||
4 examples/virt/podman_buildah/.gitignore vendored
@@ -1,4 +0,0 @@
buildah_example
buildah_run_clean
buildah_run_mdbook
buildah_run
@@ -1,28 +0,0 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import freeflowuniverse.herolib.virt.herocontainers
import freeflowuniverse.herolib.ui.console
import freeflowuniverse.herolib.core.base
import freeflowuniverse.herolib.osal
import freeflowuniverse.herolib.installers.virt.podman as podman_installer

mut podman_installer0 := podman_installer.get()!
// podman_installer0.destroy()!
podman_installer0.install()!

// exit(0)

// interative means will ask for login/passwd

mut engine := herocontainers.new(install: true, herocompile: false)!

// engine.reset_all()!

// mut builder_gorust := engine.builder_go_rust()!

// will build nodejs, python build & herolib, hero
// mut builder_hero := engine.builder_hero(reset:true)!

// mut builder_web := engine.builder_heroweb(reset:true)!

// builder_gorust.shell()!
@@ -1,44 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import freeflowuniverse.herolib.virt.herocontainers
import freeflowuniverse.herolib.ui.console
import freeflowuniverse.herolib.core.base
// import freeflowuniverse.herolib.builder
import time
import os

// herocompile means we do it for the host system
mut pm := herocontainers.new(herocompile: false, install: false)!

// pm.builder_base(reset:true)!

mut builder := pm.builder_get('base')!
builder.shell()!

println(builder)

// builder.install_zinit()!

// bash & python can be executed directly in build container

// any of the herocommands can be executed like this
// mybuildcontainer.run(cmd: 'installers -n heroweb', runtime: .herocmd)!

// //following will execute heroscript in the buildcontainer
// mybuildcontainer.run(
// cmd:"

// !!play.echo content:'this is just a test'

// !!play.echo content:'this is another test'

// ",
// runtime:.heroscript)!

// there are also shortcuts for this

// mybuildcontainer.hero_copy()!
// mybuildcontainer.shell()!

// mut b2:=pm.builder_get("builderv")!
// b2.shell()!
@@ -1,21 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import freeflowuniverse.herolib.virt.herocontainers
import freeflowuniverse.herolib.ui.console
// import freeflowuniverse.herolib.builder
import time
import os

mut pm := herocontainers.new(herocompile: false)!

mut b := pm.builder_new()!

println(b)

// mut mybuildcontainer := pm.builder_get("builderv")!

// mybuildcontainer.clean()!

// mybuildcontainer.commit('localhost/buildersmall')!

b.shell()!
@@ -1,32 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

import os
import flag
import freeflowuniverse.herolib.virt.herocontainers
import freeflowuniverse.herolib.ui.console
import freeflowuniverse.herolib.core.base
// import freeflowuniverse.herolib.builder
import time

mut fp := flag.new_flag_parser(os.args)
fp.application('buildah mdbook example')
fp.limit_free_args(0, 0)! // comment this, if you expect arbitrary texts after the options
fp.skip_executable()
url := fp.string_opt('url', `u`, 'mdbook heroscript url')!

additional_args := fp.finalize() or {
eprintln(err)
println(fp.usage())
return
}

mut pm := herocontainers.new(herocompile: true, install: false)!

mut mybuildcontainer := pm.builder_get('builder_heroweb')!

// //bash & python can be executed directly in build container

// //any of the herocommands can be executed like this
mybuildcontainer.run(cmd: 'installers -n heroweb', runtime: .herocmd)!

mybuildcontainer.run(cmd: 'hero mdbook -u ${url} -o', runtime: .bash)!
@@ -10,8 +10,6 @@ import v.embed_file
const heropath_ = os.dir(@FILE) + '/../'

pub struct BootStrapper {
pub mut:
embedded_files map[string]embed_file.EmbedFileData @[skip; str: skip]
}

@[params]
@@ -23,18 +21,9 @@ pub mut:
debug bool
}

fn (mut bs BootStrapper) load() {
panic('not implemented')

// TODO: check how to install hero. maybe once we have releases, we could just download the binary
// bs.embedded_files['install_base.sh'] = $embed_file('../../scripts/install_base.sh')
// bs.embedded_files['install_hero.sh'] = $embed_file('../../scripts/install_hero.sh')
}

// to use do something like: export NODES="195.192.213.3" .
pub fn bootstrapper() BootStrapper {
mut bs := BootStrapper{}
bs.load()
return bs
}
@@ -48,99 +37,40 @@ pub fn (mut bs BootStrapper) run(args_ BootstrapperArgs) ! {
name := '${args.name}_${counter}'
mut n := b.node_new(ipaddr: a, name: name)!
n.hero_install()!
n.hero_install()!
}
}

pub fn (mut node Node) upgrade() ! {
mut bs := bootstrapper()
install_base_content_ := bs.embedded_files['install_base.sh'] or { panic('bug') }
install_base_content := install_base_content_.to_string()
cmd := '${install_base_content}\n'
node.exec_cmd(
cmd: cmd
period: 48 * 3600
reset: false
description: 'upgrade operating system packages'
)!
}

pub fn (mut node Node) hero_install() ! {
console.print_debug('install hero')
mut bs := bootstrapper()
install_hero_content_ := bs.embedded_files['install_hero.sh'] or { panic('bug') }
install_hero_content := install_hero_content_.to_string()
if node.platform == .osx {
// we have no choice then to do it interactive
myenv := node.environ_get()!
homedir := myenv['HOME'] or { return error("can't find HOME in env") }
node.exec_silent('mkdir -p ${homedir}/hero/bin')!
node.file_write('${homedir}/hero/bin/install.sh', install_hero_content)!
node.exec_silent('chmod +x ${homedir}/hero/bin/install.sh')!
node.exec_interactive('${homedir}/hero/bin/install.sh')!
} else if node.platform == .ubuntu {
myenv := node.environ_get()!
homedir := myenv['HOME'] or { return error("can't find HOME in env") }
node.exec_silent('mkdir -p ${homedir}/hero/bin')!
node.file_write('${homedir}/hero/bin/install.sh', install_hero_content)!
node.exec_silent('chmod +x ${homedir}/hero/bin/install.sh')!
node.exec_interactive('${homedir}/hero/bin/install.sh')!
}
}

pub fn (mut node Node) dagu_install() ! {
console.print_debug('install dagu')
if !osal.cmd_exists('dagu') {
_ = bootstrapper()
node.exec_silent('curl -L https://raw.githubusercontent.com/yohamta/dagu/main/scripts/downloader.sh | bash')!
// n.hero_install()!
}
}

@[params]
pub struct HeroInstallArgs {
pub mut:
reset bool
reset bool
compile bool
v_analyzer bool
debug bool // will go in shell
}

// pub fn (mut node Node) hero_install(args HeroInstallArgs) ! {
// mut bs := bootstrapper()
// install_base_content_ := bs.embedded_files['install_base.sh'] or { panic('bug') }
// install_base_content := install_base_content_.to_string()
pub fn (mut node Node) hero_install(args HeroInstallArgs) ! {
console.print_debug('install hero')
mut bs := bootstrapper()

// if args.reset {
// console.clear()
// console.print_debug('')
// console.print_stderr('will remove: .vmodules, hero lib code and ~/hero')
// console.print_debug('')
// mut myui := ui.new()!
// toinstall := myui.ask_yesno(
// question: 'Ok to reset?'
// default: true
// )!
// if !toinstall {
// exit(1)
// }
// os.rmdir_all('${os.home_dir()}/.vmodules')!
// os.rmdir_all('${os.home_dir()}/hero')!
// os.rmdir_all('${os.home_dir()}/code/github/freeflowuniverse/herolib')!
// os.rmdir_all('${os.home_dir()}/code/github/freeflowuniverse/webcomponents')!
// }
myenv := node.environ_get()!
homedir := myenv['HOME'] or { return error("can't find HOME in env") }

// cmd := '
// ${install_base_content}

// rm -f /usr/local/bin/hero
// freeflow_dev_env_install

// ~/code/github/freeflowuniverse/herolib/install.sh

// echo HERO, V, CRYSTAL ALL INSTALL OK
// echo WE ARE READY TO HERO...

// '
// console.print_debug('executing cmd ${cmd}')
// node.exec_cmd(cmd: cmd)!
// }
mut todo := []string{}
if !args.compile {
todo << 'curl https://raw.githubusercontent.com/freeflowuniverse/herolib/refs/heads/development/install_hero.sh > /tmp/install.sh'
todo << 'bash /tmp/install.sh'
} else {
todo << "curl 'https://raw.githubusercontent.com/freeflowuniverse/herolib/refs/heads/development/install_v.sh' > /tmp/install_v.sh"
if args.v_analyzer {
todo << 'bash /tmp/install_v.sh --analyzer --herolib '
} else {
todo << 'bash /tmp/install_v.sh --herolib '
}
}
node.exec_interactive(todo.join('\n'))!
}

@[params]
pub struct HeroUpdateArgs {
@@ -7,6 +7,7 @@ import freeflowuniverse.herolib.osal.rsync
import freeflowuniverse.herolib.core.pathlib
import freeflowuniverse.herolib.data.ipaddress
import freeflowuniverse.herolib.ui.console
import freeflowuniverse.herolib.core.texttools

@[heap]
pub struct ExecutorSSH {
@@ -21,15 +22,6 @@ pub mut:

fn (mut executor ExecutorSSH) init() ! {
if !executor.initialized {
// if executor.ipaddr.port == 0 {
// return error('port cannot be 0.\n${executor}')
// }
// TODO: need to call code from SSHAGENT do not reimplement here, not nicely done
os.execute('pgrep -x ssh-agent || eval `ssh-agent -s`')

if executor.sshkey != '' {
osal.exec(cmd: 'ssh-add ${executor.sshkey}')!
}
mut addr := executor.ipaddr.addr
if addr == '' {
addr = 'localhost'
@@ -61,7 +53,16 @@ pub fn (mut executor ExecutorSSH) exec(args_ ExecArgs) !string {
if executor.ipaddr.port > 10 {
port = '-p ${executor.ipaddr.port}'
}
args.cmd = 'ssh -o StrictHostKeyChecking=no ${executor.user}@${executor.ipaddr.addr} ${port} "${args.cmd}"'

if args.cmd.contains('\n') {
// need to upload the file first
args.cmd = texttools.dedent(args.cmd)
executor.file_write('/tmp/toexec.sh', args.cmd)!
args.cmd = 'bash /tmp/toexec.sh'
}

args.cmd = 'ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${executor.user}@${executor.ipaddr.addr} ${port} "${args.cmd}"'

res := osal.exec(cmd: args.cmd, stdout: args.stdout, debug: executor.debug)!
return res.output
}
@@ -72,7 +73,16 @@ pub fn (mut executor ExecutorSSH) exec_interactive(args_ ExecArgs) ! {
if executor.ipaddr.port > 10 {
port = '-p ${executor.ipaddr.port}'
}
args.cmd = 'ssh -tt -o StrictHostKeyChecking=no ${executor.user}@${executor.ipaddr.addr} ${port} "${args.cmd}"'

if args.cmd.contains('\n') {
args.cmd = texttools.dedent(args.cmd)
// need to upload the file first
executor.file_write('/tmp/toexec.sh', args.cmd)!
args.cmd = 'bash /tmp/toexec.sh'
}
args.cmd = 'ssh -tt -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${executor.user}@${executor.ipaddr.addr} ${port} "${args.cmd}"'

console.print_debug(args.cmd)
osal.execute_interactive(args.cmd)!
}
67 lib/builder/heroscript.md Normal file
@@ -0,0 +1,67 @@
# Builder Module Heroscript

The Builder module can be controlled using Heroscript to define and interact with nodes.

## Defining a Node

You can define a new node using the `node.new` action.

```heroscript
!!node.new
name:'mynode'
ipaddr:'127.0.0.1' // for a local node
user:'root' // optional, defaults to root
debug:false // optional
reload:false // optional
```

This will create a new node instance that can be referenced by its name in subsequent actions.

## Executing Commands

To execute a command on a previously defined node, use the `cmd.run` action.

```heroscript
!!cmd.run
node:'mynode'
cmd:'ls -la /tmp'
```

The `node` parameter should match the name of a node defined with `node.new`. The `cmd` parameter contains the command to be executed on that node.

## Example Playbook

Here is a full example of a Heroscript playbook for the builder module:

```heroscript
!!node.new
name:'local_node'
ipaddr:'127.0.0.1'

!!node.new
name:'remote_node'
ipaddr:'user@remote.server.com:22'

!!cmd.run
node:'local_node'
cmd:'echo "Hello from local node"'

!!cmd.run
node:'remote_node'
cmd:'uname -a'

```

## Running a Playbook

To run a playbook, you can use the `play` function in `builder.play`.

```v
import freeflowuniverse.herolib.core.playbook
import freeflowuniverse.herolib.builder

mut plbook := playbook.new(path: "path/to/your/playbook.hs")!
builder.play(mut plbook)!
```

This will parse the Heroscript file and execute the defined actions.
@@ -1,9 +0,0 @@
module builder

fn test_nodedb() {
// TODO URGENT create tests for nodedb
}

fn test_nodedone() {
// TODO URGENT create tests for nodedone
}

60 lib/builder/play.v Normal file
@@ -0,0 +1,60 @@
module builder

import freeflowuniverse.herolib.core.playbook
import freeflowuniverse.herolib.ui.console

// execute a playbook which can build nodes
pub fn play(mut plbook playbook.PlayBook) ! {
mut b := new()!

// Process actions to configure nodes
actions := plbook.find(filter: 'node.new')!
for action in actions {
mut p := action.params
mut n := b.node_new(
name: p.get_default('name', '')!
ipaddr: p.get_default('ipaddr', '')!
user: p.get_default('user', 'root')!
debug: p.get_default_false('debug')
reload: p.get_default_false('reload')
)!
console.print_header('Created node: ${n.name}')
}

// Process 'cmd.run' actions to execute commands on nodes
cmd_actions := plbook.find(filter: 'cmd.run')!
for action in cmd_actions {
mut p := action.params
node_name := p.get('node')!
cmd := p.get('cmd')!

// a bit ugly but we don't have node management in a central place yet
// this will get the node created previously
// we need a better way to get the nodes, maybe from a global scope
mut found_node := &Node{
factory: &b
}
mut found := false
nodes_to_find := plbook.find(filter: 'node.new')!
for node_action in nodes_to_find {
mut node_p := node_action.params
if node_p.get_default('name', '')! == node_name {
found_node = b.node_new(
name: node_p.get_default('name', '')!
ipaddr: node_p.get_default('ipaddr', '')!
user: node_p.get_default('user', 'root')!
)!
found = true
break
}
}

if !found {
return error('Could not find node with name ${node_name}')
}

console.print_debug('Executing command on node ${found_node.name}:\n${cmd}')
result := found_node.exec_cmd(cmd: cmd)!
console.print_debug('Result:\n${result}')
}
}
@@ -131,37 +131,37 @@ pub fn play(mut plbook PlayBook) ! {

// Handle access token generation
mut token_create_actions := plbook.find(filter: 'livekit.token_create')!
for mut action in token_create_actions {
mut p := action.params
// for mut action in token_create_actions {
// mut p := action.params

client_name := texttools.name_fix(p.get_default('client', 'default')!)
identity := p.get('identity')!
name := p.get_default('name', identity)!
room := p.get_default('room', '')!
ttl := p.get_int_default('ttl', 21600)!
can_publish := p.get_default_false('can_publish')
can_subscribe := p.get_default_true('can_subscribe')
can_publish_data := p.get_default_false('can_publish_data')
// client_name := texttools.name_fix(p.get_default('client', 'default')!)
// identity := p.get('identity')!
// name := p.get_default('name', identity)!
// room := p.get_default('room', '')!
// ttl := p.get_int_default('ttl', 21600)!
// can_publish := p.get_default_false('can_publish')
// can_subscribe := p.get_default_true('can_subscribe')
// can_publish_data := p.get_default_false('can_publish_data')

mut client := get(name: client_name)!
// mut client := get(name: client_name)!

mut token := client.new_access_token(
identity: identity
name: name
ttl: ttl
)!
// mut token := client.new_access_token(
// identity: identity
// name: name
// ttl: ttl
// )!

token.add_video_grant(VideoGrant{
room: room
room_join: true
can_publish: can_publish
can_subscribe: can_subscribe
can_publish_data: can_publish_data
})
// token.add_video_grant(VideoGrant{
// room: room
// room_join: true
// can_publish: can_publish
// can_subscribe: can_subscribe
// can_publish_data: can_publish_data
// })

jwt := token.to_jwt()!
console.print_header('Access token generated for "${identity}"')
console.print_debug('Token: ${jwt}')
action.done = true
}
// jwt := token.to_jwt()!
// console.print_header('Access token generated for "${identity}"')
// console.print_debug('Token: ${jwt}')
// action.done = true
// }
}
@@ -24,10 +24,8 @@ pub fn check() bool {
// }

// TODO: might be dangerous if that one goes out
ping_result := osal.ping(address: '40a:152c:b85b:9646:5b71:d03a:eb27:2462', retry: 2) or {
return false
}
if ping_result == .ok {
ping_result := osal.ping(address: '40a:152c:b85b:9646:5b71:d03a:eb27:2462') or { panic(err) }
if ping_result {
console.print_debug('could reach 40a:152c:b85b:9646:5b71:d03a:eb27:2462')
return true
}
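The hunk above tracks the `osal.ping` signature change from `!PingResult` to `!bool`. A minimal usage sketch of the new signature, assuming the `PingArgs` fields documented for this module (`address`, `nr_ping`, `nr_ok`, `retry`):

```v
import freeflowuniverse.herolib.osal

// ping now returns true only when enough echo requests succeed
reachable := osal.ping(
	address: '8.8.8.8'
	nr_ping: 3 // amount of ping requests we will do
	nr_ok:   3 // how many of them need to be ok
	retry:   2 // how many times we retry the whole sequence
) or { panic(err) }

if reachable {
	println('host is reachable')
}
```

Note the call site above panics on error rather than returning `false`, so an unresolvable host now aborts instead of being reported as unreachable; callers that want the old soft-fail behavior should keep the `or { return false }` form.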
@@ -1,7 +1,8 @@
module zinit

import json
import freeflowuniverse.herolib.schemas.jsonrpc
import freeflowuniverse.herolib.schemas.jsonrpcmodel
import freeflowuniverse.herolib.schemas.openrpc

// Helper function to get or create the RPC client
fn (mut c ZinitRPC) client_() !&jsonrpc.Client {
@@ -13,10 +14,11 @@ fn (mut c ZinitRPC) client_() !&jsonrpc.Client {
// Admin methods

// rpc_discover returns the OpenRPC specification for the API
pub fn (mut c ZinitRPC) rpc_discover() !jsonrpcmodel.OpenRPCSpec {
pub fn (mut c ZinitRPC) rpc_discover() !openrpc.OpenRPC {
mut client := c.client_()!
request := jsonrpc.new_request_generic('rpc.discover', []string{})
return client.send[[]string, jsonrpcmodel.OpenRPCSpec](request)!
request := jsonrpc.new_request('rpc.discover', '')
openrpc_str := client.send_str(request)!
return json.decode(openrpc.OpenRPC, openrpc_str)
}
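The rewritten `rpc_discover` above now fetches the raw JSON-RPC response as a string and decodes it into `openrpc.OpenRPC`. A hedged usage sketch — the `ZinitRPC` construction shown is hypothetical, since its fields are not part of this diff:

```v
import freeflowuniverse.herolib.clients.zinit // module path assumed

// obtain a client (construction details are an assumption here)
mut z := zinit.ZinitRPC{}

// spec is now a typed openrpc.OpenRPC document instead of jsonrpcmodel.OpenRPCSpec
spec := z.rpc_discover() or { panic(err) }
println(spec)
```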
// service_list lists all services managed by Zinit
@@ -97,10 +99,7 @@ pub fn (mut c ZinitRPC) service_create(name string, config ServiceConfig) !strin
name: name
content: config
}
println(params)
$dbg;
request := jsonrpc.new_request_generic('service_create', params)
$dbg;
return client.send[ServiceCreateParams, string](request)!
}
@@ -54,6 +54,7 @@ pub fn get(args ArgsGet) !&${args.classname} {
if r.hexists('context:${args.name}', args.name)! {
data := r.hget('context:${args.name}', args.name)!
if data.len == 0 {
print_backtrace()
return error('${args.classname} with name: ${args.name} does not exist, prob bug.')
}
mut obj := json.decode(${args.classname},data)!
@@ -62,12 +63,14 @@ pub fn get(args ArgsGet) !&${args.classname} {
if args.create {
new(args)!
}else{
print_backtrace()
return error("${args.classname} with name '${args.name}' does not exist")
}
}
return get(name: args.name)! //no longer from db nor create
}
return ${args.name}_global[args.name] or {
print_backtrace()
return error('could not get config for ${args.name} with name:${args.name}')
}
}
@@ -153,10 +156,11 @@ pub fn play(mut plbook PlayBook) ! {
mut install_actions := plbook.find(filter: '${args.name}.configure')!
if install_actions.len > 0 {
@if args.hasconfig
for install_action in install_actions {
for mut install_action in install_actions {
heroscript := install_action.heroscript()
mut obj2 := heroscript_loads(heroscript)!
set(obj2)!
install_action.done = true
}
@else
return error("can't configure ${args.name}, because no configuration allowed for this installer.")
@@ -164,7 +168,7 @@ pub fn play(mut plbook PlayBook) ! {
}
@if args.cat == .installer
mut other_actions := plbook.find(filter: '${args.name}.')!
for other_action in other_actions {
for mut other_action in other_actions {
if other_action.name in ["destroy","install","build"]{
mut p := other_action.params
reset:=p.get_default_false("reset")
@@ -198,6 +202,7 @@ pub fn play(mut plbook PlayBook) ! {
}
}
@end
other_action.done = true
}
@end
}
@@ -218,18 +223,18 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
// systemd
match cat{
.screen {
console.print_debug("startupmanager: screen")
console.print_debug("installer: ${args.name}' startupmanager get screen")
return startupmanager.get(.screen)!
}
.zinit{
console.print_debug("startupmanager: zinit")
console.print_debug("installer: ${args.name}' startupmanager get zinit")
return startupmanager.get(.zinit)!
}
.systemd{
console.print_debug("startupmanager: systemd")
console.print_debug("installer: ${args.name}' startupmanager get systemd")
return startupmanager.get(.systemd)!
}else{
console.print_debug("startupmanager: auto")
console.print_debug("installer: ${args.name}' startupmanager get auto")
return startupmanager.get(.auto)!
}
}
@@ -255,7 +260,7 @@ pub fn (mut self ${args.classname}) start() ! {
return
}

console.print_header('${args.name} start')
console.print_header('installer: ${args.name} start')

if ! installed()!{
install()!
@@ -268,7 +273,7 @@ pub fn (mut self ${args.classname}) start() ! {
for zprocess in startupcmd()!{
mut sm:=startupmanager_get(zprocess.startuptype)!

console.print_debug('starting ${args.name} with ??{zprocess.startuptype}...')
console.print_debug('installer: ${args.name} starting with ??{zprocess.startuptype}...')

sm.new(zprocess)!
@@ -95,7 +95,6 @@ fn cmd_bootstrap_execute(cmd Command) ! {
if develop {
// n.crystal_install(reset: reset)!
n.hero_install()!
n.dagu_install()!
} else {
panic('implement, need to download here and install')
}
@@ -78,23 +78,30 @@ pub fn cmd_git(mut cmdroot Command) {
description: 'Open visual studio code on found repos, will do for max 5.'
}

mut cmd_cd := Command{
mut exists_command := Command{
sort_flags: true
name: 'cd'
name: 'exists'
execute: cmd_git_execute
description: 'cd to a git repo, use e.g. eval $(git cd -u https://github.com/threefoldfoundation/www_threefold_io)'
description: 'Check if git repository exists. Returns exit code 0 if exists, 1 if not.'
}

cmd_cd.add_flag(Flag{
mut cmd_path := Command{
sort_flags: true
name: 'path'
execute: cmd_git_execute
description: 'Get the path to a git repository. Use with cd $(hero git path <url>)'
}

cmd_path.add_flag(Flag{
flag: .string
required: false
name: 'url'
abbrev: 'u'
description: 'url for git cd operation, so we know where to cd to'
description: 'url for git path operation, so we know which repo path to get'
})

mut allcmdsref := [&list_command, &clone_command, &push_command, &pull_command, &commit_command,
&reload_command, &delete_command, &sourcetree_command, &editor_command]
&reload_command, &delete_command, &sourcetree_command, &editor_command, &exists_command]

for mut c in allcmdsref {
c.add_flag(Flag{
@@ -181,7 +188,7 @@ pub fn cmd_git(mut cmdroot Command) {
})
cmd_run.add_command(c)
}
cmd_run.add_command(cmd_cd)
cmd_run.add_command(cmd_path)
cmdroot.add_command(cmd_run)
}

@@ -189,7 +196,8 @@ fn cmd_git_execute(cmd Command) ! {
mut is_silent := cmd.flags.get_bool('silent') or { false }
mut reload := cmd.flags.get_bool('load') or { false }

if is_silent || cmd.name == 'cd' {
// path command is silent so it just outputs repo path
if is_silent || cmd.name == 'path' {
console.silent_set()
}
mut coderoot := cmd.flags.get_string('coderoot') or { '' }
@@ -235,8 +243,8 @@ fn cmd_git_execute(cmd Command) ! {
url: url
path: path
)!
if cmd.name == 'cd' {
print('cd ${mypath}\n')
if cmd.name == 'path' {
print('${mypath}\n')
}
return
} else {
@@ -91,9 +91,9 @@ fn cmd_init_execute(cmd Command) ! {
}
if hero {
base.install(reset: reset, develop: true)!
herolib.install(reset: reset, git_pull: git_pull, git_reset: git_reset)!
herolib.install(reset: reset)!
base.redis_install()!
herolib.hero_compile(reset: reset)!
herolib.compile(reset: reset, git_pull: git_pull, git_reset: git_reset)!
r := osal.profile_path_add_hero()!
console.print_header(' add path ${r} to profile.')
return
@@ -72,6 +72,13 @@ pub fn cmd_run_add_flags(mut cmd_run Command) {
description: 'runs non interactive!'
})

cmd_run.add_flag(Flag{
flag: .string
name: 'heroscript'
abbrev: 'h'
description: 'runs non interactive!'
})

cmd_run.add_flag(Flag{
flag: .bool
name: 'reset'
@@ -143,14 +150,21 @@ pub fn plbook_code_get(cmd Command) !string {

// same as session_run_get but will also run the plbook
pub fn plbook_run(cmd Command) !(&playbook.PlayBook, string) {
path := plbook_code_get(cmd)!
if path.len == 0 {
return error(cmd.help_message())
heroscript := cmd.flags.get_string('heroscript') or { '' }
mut path := ''

mut plbook := if heroscript.len > 0 {
playbook.new(text: heroscript)!
} else {
path = plbook_code_get(cmd)!
if path.len == 0 {
return error(cmd.help_message())
}
// add all actions inside to the plbook
playbook.new(path: path)!
}

// add all actions inside to the plbook
mut plbook := playbook.new(path: path)!

dagu := cmd.flags.get_bool('dagu') or { false }

playcmds.run(plbook: plbook)!
@@ -160,14 +174,15 @@ pub fn plbook_run(cmd Command) !(&playbook.PlayBook, string) {
|
||||
return &plbook, path
|
||||
}
|
||||
|
||||
fn plbook_edit_sourcetree(cmd Command) !(&playbook.PlayBook, string) {
|
||||
fn plbook_edit_sourcetree(cmd Command) !&playbook.PlayBook {
|
||||
edit := cmd.flags.get_bool('edit') or { false }
|
||||
treedo := cmd.flags.get_bool('sourcetree') or { false }
|
||||
|
||||
mut plbook, path := plbook_run(cmd)!
|
||||
|
||||
if path.len == 0 {
|
||||
return error('path or url needs to be specified')
|
||||
// THIS CAN HAPPEN IF RUNNING HEROSCRIPT STRAIGHT FROM STRING
|
||||
// return error('path or url needs to be specified')
|
||||
}
|
||||
|
||||
if treedo {
|
||||
@@ -179,5 +194,5 @@ fn plbook_edit_sourcetree(cmd Command) !(&playbook.PlayBook, string) {
|
||||
vscode_.open()!
|
||||
}
|
||||
|
||||
return plbook, path
|
||||
return plbook
|
||||
}
|
||||
|
||||
@@ -51,8 +51,13 @@ pub fn (mut h HTTPConnection) send(req_ Request) !Result {
mut from_cache := false // used to know if result came from cache
mut req := req_

is_cacheable := h.is_cacheable(req)
// console.print_debug("is cacheable: ${is_cacheable}")
// println("Sending request: ${req}")

mut is_cacheable := h.is_cacheable(req)
if req.debug {
// in debug mode should not cache
is_cacheable = false
}

// 1 - Check cache if enabled try to get result from cache
if is_cacheable {

@@ -71,11 +76,6 @@ pub fn (mut h HTTPConnection) send(req_ Request) !Result {
}
url := h.url(req)

// println("----")
// println(url)
// println(req.data)
// println("----")

mut new_req := http.new_request(req.method, url, req.data)
// joining the header from the HTTPConnection with the one from Request
new_req.header = h.header()

@@ -99,7 +99,10 @@ pub fn (mut h HTTPConnection) send(req_ Request) !Result {
if req.debug {
console.print_debug('http request:\n${new_req.str()}')
}
for _ in 0 .. h.retry {
for counter in 0 .. h.retry {
if req.debug {
console.print_debug('request attempt:${counter}')
}
response = new_req.do() or {
err_message = 'Cannot send request:${req}\nerror:${err}'
// console.print_debug(err_message)

@@ -108,6 +111,7 @@ pub fn (mut h HTTPConnection) send(req_ Request) !Result {
break
}
if req.debug {
console.print_debug('request done')
console.print_debug(response.str())
}
if response.status_code == 0 {

@@ -186,9 +190,11 @@ pub fn (mut h HTTPConnection) get_json(req Request) !string {
// Get Request with json data and return response as string
pub fn (mut h HTTPConnection) get(req_ Request) !string {
mut req := req_
req.debug
req.method = .get
result := h.send(req)!
if !result.is_ok() {
return error('Could not get ${req}\result:\n${result}')
}
return result.data
}

@@ -197,6 +203,9 @@ pub fn (mut h HTTPConnection) delete(req_ Request) !string {
mut req := req_
req.method = .delete
result := h.send(req)!
if !result.is_ok() {
return error('Could not delete ${req}\result:\n${result}')
}
return result.data
}

@@ -207,5 +216,6 @@ pub fn (mut h HTTPConnection) post_multi_part(req Request, form http.PostMultipa
header.set(http.CommonHeader.content_type, 'multipart/form-data')
req_form.header = header
url := h.url(req)
// TODO: should that not be in line with above? seems to be other codepath.
return http.post_multipart_form(url, req_form)!
}
@@ -3,11 +3,16 @@ module playcmds
import freeflowuniverse.herolib.core.playbook { PlayBook }
import freeflowuniverse.herolib.data.doctree
import freeflowuniverse.herolib.biz.bizmodel
import freeflowuniverse.herolib.threefold.incatokens
import freeflowuniverse.herolib.web.site
import freeflowuniverse.herolib.virt.hetznermanager
import freeflowuniverse.herolib.web.docusaurus
import freeflowuniverse.herolib.clients.openai
import freeflowuniverse.herolib.clients.giteaclient
import freeflowuniverse.herolib.osal.tmux
import freeflowuniverse.herolib.installers.base
import freeflowuniverse.herolib.installers.lang.vlang
import freeflowuniverse.herolib.installers.lang.herolib

// -------------------------------------------------------------------
// run – entry point for all HeroScript play‑commands

@@ -53,7 +58,15 @@ pub fn run(args_ PlayArgs) ! {
site.play(mut plbook)!
doctree.play(mut plbook)!

incatokens.play(mut plbook)!

docusaurus.play(mut plbook)!
hetznermanager.play(mut plbook)!
hetznermanager.play2(mut plbook)!

base.play(mut plbook)!
herolib.play(mut plbook)!
vlang.play(mut plbook)!

giteaclient.play(mut plbook)!
279
lib/core/playcmds/play_osal_core.v
Normal file
@@ -0,0 +1,279 @@
module playcmds

import freeflowuniverse.herolib.core.playbook { PlayBook }
import freeflowuniverse.herolib.osal.core as osal
import freeflowuniverse.herolib.ui.console

pub fn play_osal_core(mut plbook PlayBook) ! {
if !plbook.exists(filter: 'osal.') {
return
}

// Process done actions
play_done(mut plbook)!

// Process environment actions
play_env(mut plbook)!

// Process execution actions
play_exec(mut plbook)!

// Process package actions
play_package(mut plbook)!
}

fn play_done(mut plbook PlayBook) ! {
// done_set actions
mut done_set_actions := plbook.find(filter: 'osal.done_set')!
for mut action in done_set_actions {
mut p := action.params
key := p.get('key')!
val := p.get('val')!

console.print_header('Setting done flag: ${key} = ${val}')
osal.done_set(key, val)!
action.done = true
}

// done_delete actions
mut done_delete_actions := plbook.find(filter: 'osal.done_delete')!
for mut action in done_delete_actions {
mut p := action.params
key := p.get('key')!

console.print_header('Deleting done flag: ${key}')
osal.done_delete(key)!
action.done = true
}

// done_reset actions
mut done_reset_actions := plbook.find(filter: 'osal.done_reset')!
for mut action in done_reset_actions {
console.print_header('Resetting all done flags')
osal.done_reset()!
action.done = true
}

// done_print actions
mut done_print_actions := plbook.find(filter: 'osal.done_print')!
for mut action in done_print_actions {
console.print_header('Printing done flags')
osal.done_print()!
action.done = true
}
}

fn play_env(mut plbook PlayBook) ! {
// env_set actions
mut env_set_actions := plbook.find(filter: 'osal.env_set')!
for mut action in env_set_actions {
mut p := action.params
key := p.get('key')!
value := p.get('value')!
overwrite := p.get_default_true('overwrite')

console.print_header('Setting environment variable: ${key}')
osal.env_set(
key: key
value: value
overwrite: overwrite
)
action.done = true
}

// env_unset actions
mut env_unset_actions := plbook.find(filter: 'osal.env_unset')!
for mut action in env_unset_actions {
mut p := action.params
key := p.get('key')!

console.print_header('Unsetting environment variable: ${key}')
osal.env_unset(key)
action.done = true
}

// env_set_all actions
mut env_set_all_actions := plbook.find(filter: 'osal.env_set_all')!
for mut action in env_set_all_actions {
mut p := action.params

// Parse environment variables from parameters
mut env_vars := map[string]string{}
// Get all parameters and filter out the control parameters
params_map := p.get_map()
for key, value in params_map {
if key !in ['clear_before_set', 'overwrite_if_exists'] {
env_vars[key] = value
}
}

clear_before_set := p.get_default_false('clear_before_set')
overwrite_if_exists := p.get_default_true('overwrite_if_exists')

console.print_header('Setting multiple environment variables')
osal.env_set_all(
env: env_vars
clear_before_set: clear_before_set
overwrite_if_exists: overwrite_if_exists
)
action.done = true
}

// env_load_file actions
mut env_load_file_actions := plbook.find(filter: 'osal.env_load_file')!
for mut action in env_load_file_actions {
mut p := action.params
file_path := p.get('file_path')!

console.print_header('Loading environment from file: ${file_path}')
osal.load_env_file(file_path)!
action.done = true
}
}

fn play_exec(mut plbook PlayBook) ! {
// exec actions
mut exec_actions := plbook.find(filter: 'osal.exec')!
for mut action in exec_actions {
mut p := action.params

cmd := p.get('cmd')!

mut command := osal.Command{
cmd: cmd
name: p.get_default('name', '')!
description: p.get_default('description', '')!
timeout: p.get_int_default('timeout', 3600)!
stdout: p.get_default_true('stdout')
stdout_log: p.get_default_true('stdout_log')
raise_error: p.get_default_true('raise_error')
ignore_error: p.get_default_false('ignore_error')
work_folder: p.get_default('work_folder', '')!
retry: p.get_int_default('retry', 0)!
interactive: p.get_default_true('interactive')
debug: p.get_default_false('debug')
}

// Parse environment variables if provided
if p.exists('environment') {
env_str := p.get('environment')!
// Parse environment string (format: "KEY1=value1,KEY2=value2")
env_pairs := env_str.split(',')
mut env_map := map[string]string{}
for pair in env_pairs {
if pair.contains('=') {
key := pair.all_before('=').trim_space()
value := pair.all_after('=').trim_space()
env_map[key] = value
}
}
command.environment = env_map.clone()
}

// Parse ignore_error_codes if provided
if p.exists('ignore_error_codes') {
ignore_codes := p.get_list_int('ignore_error_codes')!
command.ignore_error_codes = ignore_codes
}

console.print_header('Executing command: ${cmd}')
osal.exec(command)!
action.done = true
}

// exec_silent actions
mut exec_silent_actions := plbook.find(filter: 'osal.exec_silent')!
for mut action in exec_silent_actions {
mut p := action.params
cmd := p.get('cmd')!

console.print_header('Executing command silently: ${cmd}')
osal.execute_silent(cmd)!
action.done = true
}

// exec_interactive actions
mut exec_interactive_actions := plbook.find(filter: 'osal.exec_interactive')!
for mut action in exec_interactive_actions {
mut p := action.params
cmd := p.get('cmd')!

console.print_header('Executing command interactively: ${cmd}')
osal.execute_interactive(cmd)!
action.done = true
}
}

fn play_package(mut plbook PlayBook) ! {
// package_refresh actions
mut package_refresh_actions := plbook.find(filter: 'osal.package_refresh')!
for mut action in package_refresh_actions {
console.print_header('Refreshing package lists')
osal.package_refresh()!
action.done = true
}

// package_install actions
mut package_install_actions := plbook.find(filter: 'osal.package_install')!
for mut action in package_install_actions {
mut p := action.params

// Support both 'name' parameter and arguments
mut packages := []string{}

if p.exists('name') {
packages << p.get('name')!
}

// Add any arguments (packages without keys)
mut i := 0
for {
arg := p.get_arg_default(i, '')!
if arg == '' {
break
}
packages << arg
i++
}

for package in packages {
if package != '' {
console.print_header('Installing package: ${package}')
osal.package_install(package)!
}
}
action.done = true
}

// package_remove actions
mut package_remove_actions := plbook.find(filter: 'osal.package_remove')!
for mut action in package_remove_actions {
mut p := action.params

// Support both 'name' parameter and arguments
mut packages := []string{}

if p.exists('name') {
packages << p.get('name')!
}

// Add any arguments (packages without keys)
mut i := 0
for {
arg := p.get_arg_default(i, '')!
if arg == '' {
break
}
packages << arg
i++
}

for package in packages {
if package != '' {
console.print_header('Removing package: ${package}')
osal.package_remove(package)!
}
}
action.done = true
}
}
@@ -54,24 +54,30 @@ fn decode_struct[T](_ T, data string) !T {
should_skip = true
break
}
if attr.contains('skipdecode') {
should_skip = true
break
}
}
if !should_skip {
$if field.is_struct {
$if field.typ !is time.Time {
if !field.name[0].is_capital() {
// skip embedded ones
mut data_fmt := data.replace(action_str, '')
data_fmt = data.replace('define.${obj_name}', 'define')
typ.$(field.name) = decode_struct(typ.$(field.name), data_fmt)!
}
}
// $if field.typ !is time.Time {
// if !field.name[0].is_capital() {
// // skip embedded ones
// mut data_fmt := data.replace(action_str, '')
// data_fmt = data.replace('define.${obj_name}', 'define')
// typ.$(field.name) = decode_struct(typ.$(field.name), data_fmt)!
// }
// }
} $else $if field.is_array {
if is_struct_array(typ.$(field.name))! {
mut data_fmt := data.replace(action_str, '')
data_fmt = data.replace('define.${obj_name}', 'define')
arr := decode_array(typ.$(field.name), data_fmt)!
typ.$(field.name) = arr
}
// arr := decode_array(typ.$(field.name), data_fmt)!
// typ.$(field.name) = arr
// if is_struct_array(typ.$(field.name))! {
// mut data_fmt := data.replace(action_str, '')
// data_fmt = data.replace('define.${obj_name}', 'define')
// arr := decode_array(typ.$(field.name), data_fmt)!
// typ.$(field.name) = arr
// }
}
}
}

@@ -93,7 +99,9 @@ pub fn decode_array[T](_ []T, data string) ![]T {
// for i in 0 .. val.len {
value := T{}
$if T is $struct {
arr << decode_struct(value, data)!
// arr << decode_struct(value, data)!
} $else {
arr << decode[T](data)!
}
// }
return arr
@@ -24,8 +24,7 @@ pub mut:

// is_running checks if the node is operational by pinging its address
fn (node &StreamerNode) is_running() bool {
ping_result := osal.ping(address: node.address, retry: 2) or { return false }
return ping_result == .ok
return osal.ping(address: node.address, retry: 2)!
}

// connect_to_master connects the worker node to its master
@@ -23,7 +23,6 @@ develop-eggs/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
@@ -5,7 +5,7 @@ import freeflowuniverse.herolib.core.pathlib
import freeflowuniverse.herolib.ui.console
import os

pub const gitcmds = 'clone,commit,pull,push,delete,reload,list,edit,sourcetree,cd'
pub const gitcmds = 'clone,commit,pull,push,delete,reload,list,edit,sourcetree,path,exists'

@[params]
pub struct ReposActionsArgs {

@@ -99,7 +99,7 @@ pub fn (mut gs GitStructure) do(args_ ReposActionsArgs) !string {
provider: args.provider
)!

if repos.len<4 || args.cmd in 'pull,push,commit,delete'.split(',') {
if repos.len < 4 || args.cmd in 'pull,push,commit,delete'.split(',') {
args.reload = true
}

@@ -117,6 +117,20 @@ pub fn (mut gs GitStructure) do(args_ ReposActionsArgs) !string {
return ''
}

if args.cmd == 'exists' {
return gs.check_repos_exist(args)
}

if args.cmd == 'path' {
if repos.len == 0 {
return error('No repository found for path command')
}
if repos.len > 1 {
return error('Multiple repositories found for path command, please be more specific')
}
return repos[0].path()
}

// means we are on 1 repo
if args.cmd in 'sourcetree,edit'.split(',') {
if repos.len == 0 {
@@ -27,6 +27,7 @@ fn (mut repo GitRepo) cache_get() ! {
if repo_json.len > 0 {
mut cached := json.decode(GitRepo, repo_json)!
cached.gs = repo.gs
cached.config.remote_check_period = 3600 * 24 * 7
repo = cached
}
}

@@ -39,9 +39,18 @@ pub fn (mut gitstructure GitStructure) clone(args GitCloneArgs) !&GitRepo {
key_ := repo.cache_key()
gitstructure.repos[key_] = &repo

mut repopath := repo.patho()!
if repopath.exists() {
return error("can't clone on existing path, came from url, path found is ${repopath.path}.\n")
if repo.exists() {
console.print_green("Repository already exists at ${repo.path()}")
// Load the existing repository status
repo.load_internal() or {
console.print_debug('Could not load existing repository status: ${err}')
}
return &repo
}

// Check if path exists but is not a git repository
if os.exists(repo.path()) {
return error("Path exists but is not a git repository: ${repo.path()}")
}

if args.sshkey.len > 0 {

@@ -163,3 +163,37 @@ pub fn (mut repo GitRepo) open_vscode() ! {
mut vs_code := vscode.new(path)
vs_code.open()!
}

// Check if repository exists at its expected path
pub fn (repo GitRepo) exists() bool {
repo_path := repo.path()
if !os.exists(repo_path) {
return false
}
git_dir := os.join_path(repo_path, '.git')
return os.exists(git_dir)
}

// Check if any repositories exist based on filter criteria and return result for exists command
pub fn (mut gs GitStructure) check_repos_exist(args ReposActionsArgs) !string {
repos := gs.get_repos(
filter: args.filter
name: args.repo
account: args.account
provider: args.provider
)!

if repos.len > 0 {
// Repository exists - print path and return success
if !args.script {
console.print_green('Repository exists: ${repos[0].path()}')
}
return repos[0].path()
} else {
// Repository doesn't exist - return error for exit code 1
if !args.script {
console.print_stderr('Repository not found')
}
return error('Repository not found')
}
}
@@ -1,228 +0,0 @@
module hero_db

import json
import freeflowuniverse.herolib.clients.postgresql_client
import db.pg
import freeflowuniverse.herolib.core.texttools

// Generic database interface for Hero root objects
pub struct HeroDB[T] {
pub mut:
db pg.DB
table_name string
}

// new creates a new HeroDB instance for a specific type T
pub fn new[T]() !HeroDB[T] {
mut table_name := '${texttools.snake_case(T.name)}s'
// Map dirname from module path
module_path := T.name.split('.')
if module_path.len >= 2 {
dirname := texttools.snake_case(module_path[module_path.len - 2])
table_name = '${dirname}_${texttools.snake_case(T.name)}'
}

mut dbclient := postgresql_client.get()!

mut dbcl := dbclient.db() or { return error('Failed to connect to database') }

return HeroDB[T]{
db: dbcl
table_name: table_name
}
}

// ensure_table creates the database table with proper schema for type T
pub fn (mut self HeroDB[T]) ensure_table() ! {
// Get index fields from struct reflection
index_fields := self.get_index_fields()

// Build index column definitions
mut index_cols := []string{}
for field in index_fields {
index_cols << '${field} varchar(255)'
}

// Create table with JSON storage
create_sql := '
CREATE TABLE IF NOT EXISTS ${self.table_name} (
id serial PRIMARY KEY,
${index_cols.join(', ')},
data jsonb NOT NULL,
created_at timestamp DEFAULT CURRENT_TIMESTAMP,
updated_at timestamp DEFAULT CURRENT_TIMESTAMP
)
' // self.db.exec(create_sql)!
// Create indexes on index fields

for field in index_fields {
index_sql := 'CREATE INDEX IF NOT EXISTS idx_${self.table_name}_${field} ON ${self.table_name}(${field})'
// self.db.exec(index_sql)!
}
}

// Get index fields marked with @[index] from struct
fn (self HeroDB[T]) get_index_fields() []string {
mut fields := []string{}
$for field in T.fields {
if field.attrs.contains('index') {
fields << texttools.snake_case(field.name)
}
}
return fields
}

// save stores the object T in the database, updating if it already exists
pub fn (mut self HeroDB[T]) save(obj T) ! {
// Get index values from object
index_data := self.extract_index_values(obj)

// Serialize to JSON
json_data := json.encode_pretty(obj)

// Check if object already exists
mut query := 'SELECT id FROM ${self.table_name} WHERE '
mut params := []string{}

// Build WHERE clause for unique lookup
for key, value in index_data {
params << '${key} = \'${value}\''
}
query += params.join(' AND ')

existing := self.db.exec(query)!

if existing.len > 0 {
// Update existing record
id_val := existing[0].vals[0] or { return error('no id') }
// id := id_val.int()
println('Updating existing record with ID: ${id_val}')
if true {
panic('sd111')
}
// update_sql := '
// UPDATE ${self.table_name}
// SET data = \$1, updated_at = CURRENT_TIMESTAMP
// WHERE id = \$2
// '
// self.db_client.db()!.exec_param(update_sql, [json_data, id.str()])!
} else {
// Insert new record
mut columns := []string{}
mut values := []string{}

// Add index columns
for key, value in index_data {
columns << key
values << "'${value}'"
}

// Add JSON data
columns << 'data'
values << "'${json_data}'"

insert_sql := '
INSERT INTO ${self.table_name} (${columns.join(', ')})
VALUES (${values.join(', ')})
' // self.db.exec(insert_sql)!
}
}

// get_by_index retrieves an object T by its index values
pub fn (mut self HeroDB[T]) get_by_index(index_values map[string]string) !T {
mut query := 'SELECT data FROM ${self.table_name} WHERE '
mut params := []string{}

for key, value in index_values {
params << '${key} = \'${value}\''
}
query += params.join(' AND ')

rows := self.db.exec(query)!
if rows.len == 0 {
return error('${T.name} not found with index values: ${index_values}')
}

json_data_val := rows[0].vals[0] or { return error('no data') }
println('json_data_val: ${json_data_val}')
if true {
panic('sd2221')
}
// mut obj := json.decode(T, json_data_val) or {
// return error('Failed to decode JSON: ${err}')
// }

// return &obj
return T{}
}

// // get_all retrieves all objects T from the database
// pub fn (mut self HeroDB[T]) get_all() ![]T {
// query := 'SELECT data FROM ${self.table_name} ORDER BY id DESC'
// rows := self.db_client.db()!.exec(query)!

// mut results := []T{}
// for row in rows {
// json_data_val := row.vals[0] or { continue }
// json_data := json_data_val.str()
// mut obj := json.decode(T, json_data) or {
// // e.g. an error could be given here
// continue // Skip invalid JSON
// }
// results << &obj
// }

// return results
// }

// // search_by_index searches for objects T by a specific index field
// pub fn (mut self HeroDB[T]) search_by_index(field_name string, value string) ![]T {
// query := 'SELECT data FROM ${self.table_name} WHERE ${field_name} = \'${value}\' ORDER BY id DESC'
// rows := self.db_client.db()!.exec(query)!

// mut results := []T{}
// for row in rows {
// json_data_val := row.vals[0] or { continue }
// json_data := json_data_val.str()
// mut obj := json.decode(T, json_data) or {
// continue
// }
// results << &obj
// }

// return results
// }

// // delete_by_index removes objects T matching the given index values
// pub fn (mut self HeroDB[T]) delete_by_index(index_values map[string]string) ! {
// mut query := 'DELETE FROM ${self.table_name} WHERE '
// mut params := []string{}

// for key, value in index_values {
// params << '${key} = \'${value}\''
// }
// query += params.join(' AND ')

// self.db_client.db()!.exec(query)!
// }

// Helper to extract index values from object
fn (self HeroDB[T]) extract_index_values(obj T) map[string]string {
mut index_data := map[string]string{}
$for field in T.fields {
// $if field.attrs.contains('index') {
// field_name := texttools.snake_case(field.name)
// $if field.typ is string {
// value := obj.$(field.name)
// index_data[field_name] = value
// } $else $if field.typ is int {
// value := obj.$(field.name).str()
// index_data[field_name] = value
// } $else {
// value := obj.$(field.name).str()
// index_data[field_name] = value
// }
// }
}
return index_data
}
@@ -1,37 +0,0 @@

## hero db - OSIS in vlang

```v
// Example usage:
// Initialize database client
mut db_client := postgresql_client.get(name: "default")!

// Create HeroDB for Circle type
mut circle_db := hero_db.new[circle.Circle](db_client)!
circle_db.ensure_table()!

// Create and save a circle
mut my_circle := circle.Circle{
	name: "Tech Community"
	description: "A community for tech enthusiasts"
	domain: "tech.example.com"
	config: circle.CircleConfig{
		max_members: 1000
		allow_guests: true
		auto_approve: false
		theme: "modern"
	}
	status: circle.CircleStatus.active
}

circle_db.save(&my_circle)!

// Retrieve the circle
retrieved_circle := circle_db.get_by_index({
	"domain": "tech.example.com"
})!

// Search circles by status
active_circles := circle_db.search_by_index("status", "active")!
```
94
lib/hero/db/instruction.md
Normal file
@@ -0,0 +1,94 @@
|
||||
|
||||
|
||||
the main data is in key value stor:
|
||||
|
||||
- each object has u32 id
|
||||
- each object has u16 version (version of same data)
|
||||
- each object has u16 schemaid (if schema changes)
|
||||
- each object has tags u32 (to tag table)
|
||||
- each object has a created_at timestamp
|
||||
- each object has a updated_at timestamp
|
||||
- each object has binary content (the data)
|
||||
- each object has link to who can read/write/delete (lists of u32 per read/write/delete to group or user), link to security policy u32
|
||||
- each object has a signature of the data by the user who created/updated it
|
||||
|
||||
|
||||
- there are users & groups
|
||||
- groups can have other groups and users inside
|
||||
- users & groups are unique u32 as well in the DB, so no collision
|
||||
|
||||
this database does not know what the data is about, its agnostic to schema
|
||||
|
||||
|
||||
now make the 4 structs which represent above
|
||||
|
||||
- data
|
||||
- user
|
||||
- group ([]u32) each links to user or group, name, description
|
||||
- tags ([]string which gets a unique id, so its shorter to link to data object)
|
||||
- securitypolicy (see below)
|
||||
|
||||
and encoding scheme using lib/data/encoder, we need encode/decode on the structs, so we have densest possible encoding
|
||||
|
||||
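A dense encoding for the data object could be sketched as follows; the field set is illustrative, and the `encoder.new()` / `add_u32` / `add_string` style of API from `lib/data/encoder` is assumed here, not confirmed:

```v
module db

import freeflowuniverse.herolib.data.encoder

// illustrative subset of the data object described above
pub struct Data {
pub mut:
	id         u32
	version    u16
	schemaid   u16
	tags       u32
	created_at i64
	updated_at i64
	content    []u8
	signature  string
}

// dense binary encoding; the field order is the implicit wire schema,
// so encode and decode must walk the fields in the same order
pub fn (d Data) encode() ![]u8 {
	mut e := encoder.new()
	e.add_u32(d.id)
	e.add_u16(d.version)
	e.add_u16(d.schemaid)
	e.add_u32(d.tags)
	e.add_i64(d.created_at)
	e.add_i64(d.updated_at)
	e.add_bytes(d.content)
	e.add_string(d.signature)
	return e.data
}
```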
now we need the implementation details for each struct, including the fields and their types, as well as the encoding/decoding logic.

the outside is a server over openrpc which has:

- set (userid:u32, id:u32, data: Data, signature: string, tags:[]string) -> u32 (id can be 0, then it is new; if existing we need to check if the user can do it); tags will be recalculated based on []string (lower case, sorted list, then md5 -> u32)
- get (userid:u32, id: u32, signedid: string) -> Data, Tags as []string
- exist (userid:u32, id: u32) -> bool // this we allow without a signature
- delete (userid:u32, id: u32, signedid: string) -> bool
- list (userid:u32, signature: string, based on tags, schemaid, from creation/update and to creation/update), returns max 200 items -> u32
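The five methods above can be captured as a V interface; the `HeroDBServer` and `ListFilter` names are hypothetical, the signatures simply mirror the list:

```v
module db

// hypothetical server-side interface mirroring the five OpenRPC methods
pub interface HeroDBServer {
mut:
	// id == 0 means create; otherwise update after a permission check
	set(userid u32, id u32, data Data, signature string, tags []string) !u32
	get(userid u32, id u32, signedid string) !(Data, []string)
	exist(userid u32, id u32) !bool // allowed without signature
	delete(userid u32, id u32, signedid string) !bool
	list(userid u32, signature string, filter ListFilter) ![]u32 // max 200 ids
}

// hypothetical filter struct for list()
pub struct ListFilter {
pub mut:
	tags         []string
	schemaid     u16
	created_from i64
	created_to   i64
	updated_from i64
	updated_to   i64
}
```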
the interface is stateless, no previous connection known, based on signature the server can verify the user is allowed to perform the action
|
||||
|
||||
the backend database is redis (hsets and sets)
|
||||
|
||||
|
||||
## signing implementation
|
||||
|
||||
the signing is in the same redis implemented, so no need to use vlang for that
|
||||
|
||||
```bash
|
||||
# Generate an ephemeral signing keypair
|
||||
redis-cli -p $PORT AGE GENSIGN
|
||||
# Example output:
|
||||
# 1) "<verify_pub_b64>"
|
||||
# 2) "<sign_secret_b64>"
|
||||
|
||||
# Sign a message with the secret
|
||||
redis-cli -p $PORT AGE SIGN "<sign_secret_b64>" "msg"
|
||||
# → returns "<signature_b64>"
|
||||
|
||||
# Verify with the public key
|
||||
redis-cli -p $PORT AGE VERIFY "<verify_pub_b64>" "msg" "<signature_b64>"
|
||||
# → 1 (valid) or 0 (invalid)
|
||||
```

versioning: when storing we don't have to worry about versions; the database will check whether the object exists and what the newest version is, and then update

## some of the base objects

```v
@[heap]
pub struct SecurityPolicy {
pub mut:
	id     u32
	read   []u32 // links to users & groups
	write  []u32 // links to users & groups
	delete []u32 // links to users & groups
	public bool
}

@[heap]
pub struct Tags {
pub mut:
	id    u32
	names []string // unique per id
	md5   string   // of the sorted names, to make it easy to find the unique id
}
```
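
The other base objects (data, user, group) are not shown above; following the field descriptions earlier in these notes, sketches could look like this (struct and field names are assumptions):

```v
@[heap]
pub struct Data {
pub mut:
	id        u32
	schemaid  u32
	content   []u8   // binary payload, schema-agnostic
	tags      u32    // link to the Tags object
	policy    u32    // link to the SecurityPolicy
	signature string // signature of the content by the creating/updating user
	creator   u32    // link to the User
}

@[heap]
pub struct User {
pub mut:
	id     u32
	name   string
	pubkey string // verify key, used to check signatures
}

@[heap]
pub struct Group {
pub mut:
	id          u32
	name        string
	description string
	members     []u32 // links to users or other groups
}
```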
67
lib/hero/herocluster/example/example.vsh
Normal file
@@ -0,0 +1,67 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

if os.args.len < 3 {
	eprintln('Usage: ./prog <node_id> <status>')
	eprintln('  status: active|buffer')
	return
}
node_id := os.args[1]
status_str := os.args[2]

status := match status_str {
	'active' { NodeStatus.active }
	'buffer' { NodeStatus.buffer }
	else {
		eprintln('Invalid status. Use: active|buffer')
		return
	}
}

// --- Generate ephemeral keys for demo ---
// In real use: load from PEM files
priv, pub := ed25519.generate_key(rand.reader) or { panic(err) }

mut pubkeys := map[string]ed25519.PublicKey{}
pubkeys[node_id] = pub
// TODO: load all pubkeys from config file so every node knows others

// Initialize all nodes (in real scenario, load from config)
mut all_nodes := map[string]Node{}
all_nodes['node1'] = Node{id: 'node1', status: .active}
all_nodes['node2'] = Node{id: 'node2', status: .active}
all_nodes['node3'] = Node{id: 'node3', status: .active}
all_nodes['node4'] = Node{id: 'node4', status: .buffer}

// Set current node status
all_nodes[node_id].status = status

servers := ['127.0.0.1:6379', '127.0.0.1:6380', '127.0.0.1:6381', '127.0.0.1:6382']
mut conns := []redis.Connection{}
for s in servers {
	mut c := redis.connect(redis.Options{ server: s }) or {
		panic('could not connect to redis $s: $err')
	}
	conns << c
}

mut election := Election{
	clients: conns
	pubkeys: pubkeys
	self: Node{
		id: node_id
		term: 0
		leader: false
		status: status
	}
	keys: Keys{ priv: priv, pub: pub }
	all_nodes: all_nodes
	buffer_nodes: ['node4'] // Initially node4 is buffer
}

println('[$node_id] started as $status_str, connected to 4 redis servers.')

// Start health monitoring in background
go election.health_monitor_loop()

// Start main heartbeat loop
election.heartbeat_loop()
308
lib/hero/herocluster/factory.v
Normal file
@@ -0,0 +1,308 @@
module herocluster

import db.redis
import crypto.ed25519
import crypto.rand
import encoding.hex
import os
import time

const election_timeout_ms = 3000
const heartbeat_interval_ms = 1000
const node_unavailable_threshold_ms = 24 * 60 * 60 * 1000 // 1 day in milliseconds
const health_check_interval_ms = 30000 // 30 seconds

// --- Crypto helpers ---

struct Keys {
	priv ed25519.PrivateKey
	pub  ed25519.PublicKey
}

// sign a message
fn (k Keys) sign(msg string) string {
	sig := ed25519.sign(k.priv, msg.bytes())
	return hex.encode(sig)
}

// verify signature
fn verify(pub ed25519.PublicKey, msg string, sig_hex string) bool {
	sig := hex.decode(sig_hex) or { return false }
	return ed25519.verify(pub, msg.bytes(), sig)
}

// --- Node & Election ---

enum NodeStatus {
	active
	buffer
	unavailable
}

struct Node {
	id string
mut:
	term      int
	leader    bool
	voted_for string
	status    NodeStatus
	last_seen i64 // timestamp
}

struct HealthReport {
	reporter_id string
	target_id   string
	status      string // "available" or "unavailable"
	timestamp   i64
	signature   string
}

struct Election {
mut:
	clients      []redis.Connection
	pubkeys      map[string]ed25519.PublicKey
	self         Node
	keys         Keys
	all_nodes    map[string]Node
	buffer_nodes []string
}

// Redis keys
fn vote_key(term int, node_id string) string { return 'vote:${term}:${node_id}' }
fn health_key(reporter_id string, target_id string) string { return 'health:${reporter_id}:${target_id}' }
fn node_status_key(node_id string) string { return 'node_status:${node_id}' }

// Write vote (signed) to ALL redis servers
fn (mut e Election) vote_for(candidate string) {
	msg := '${e.self.term}:${candidate}'
	sig_hex := e.keys.sign(msg)
	for mut c in e.clients {
		k := vote_key(e.self.term, e.self.id)
		c.hset(k, 'candidate', candidate) or {}
		c.hset(k, 'sig', sig_hex) or {}
		c.expire(k, 5) or {}
	}
	println('[${e.self.id}] voted for $candidate (term=${e.self.term})')
}

// Report node health status
fn (mut e Election) report_node_health(target_id string, status string) {
	now := time.now().unix_time()
	msg := '${target_id}:${status}:${now}'
	sig_hex := e.keys.sign(msg)

	report := HealthReport{
		reporter_id: e.self.id
		target_id: target_id
		status: status
		timestamp: now
		signature: sig_hex
	}

	for mut c in e.clients {
		k := health_key(e.self.id, target_id)
		c.hset(k, 'status', status) or {}
		c.hset(k, 'timestamp', now.str()) or {}
		c.hset(k, 'signature', sig_hex) or {}
		c.expire(k, 86400) or {} // expire after 24 hours
	}
	println('[${e.self.id}] reported $target_id as $status')
}

// Collect health reports and check for consensus on unavailable nodes
fn (mut e Election) check_node_availability() {
	now := time.now().unix_time()
	mut unavailable_reports := map[string]map[string]i64{} // target_id -> reporter_id -> timestamp

	for mut c in e.clients {
		keys := c.keys('health:*') or { continue }
		for k in keys {
			parts := k.split(':')
			if parts.len != 3 { continue }
			reporter_id := parts[1]
			target_id := parts[2]

			vals := c.hgetall(k) or { continue }
			status := vals['status']
			timestamp_str := vals['timestamp']
			sig_hex := vals['signature']

			if reporter_id !in e.pubkeys { continue }

			timestamp := timestamp_str.i64()
			msg := '${target_id}:${status}:${timestamp}'

			if verify(e.pubkeys[reporter_id], msg, sig_hex) {
				if status == 'unavailable' && (now - timestamp) >= (node_unavailable_threshold_ms / 1000) {
					if target_id !in unavailable_reports {
						unavailable_reports[target_id] = map[string]i64{}
					}
					unavailable_reports[target_id][reporter_id] = timestamp
				}
			}
		}
	}

	// Check for consensus (2 out of 3 active nodes agree)
	for target_id, reports in unavailable_reports {
		if reports.len >= 2 && target_id in e.all_nodes {
			if e.all_nodes[target_id].status == .active {
				println('[${e.self.id}] Consensus reached: $target_id is unavailable for >1 day')
				e.promote_buffer_node(target_id)
			}
		}
	}
}

// Promote a buffer node to active status
fn (mut e Election) promote_buffer_node(failed_node_id string) {
	if e.buffer_nodes.len == 0 {
		println('[${e.self.id}] No buffer nodes available for promotion')
		return
	}

	// Select first available buffer node
	buffer_id := e.buffer_nodes[0]

	// Update node statuses
	if failed_node_id in e.all_nodes {
		e.all_nodes[failed_node_id].status = .unavailable
	}
	if buffer_id in e.all_nodes {
		e.all_nodes[buffer_id].status = .active
	}

	// Remove from buffer list
	e.buffer_nodes = e.buffer_nodes.filter(it != buffer_id)

	// Announce the promotion
	for mut c in e.clients {
		k := node_status_key(buffer_id)
		c.hset(k, 'status', 'active') or {}
		c.hset(k, 'promoted_at', time.now().unix_time().str()) or {}
		c.hset(k, 'replaced_node', failed_node_id) or {}

		// Mark failed node as unavailable
		failed_k := node_status_key(failed_node_id)
		c.hset(failed_k, 'status', 'unavailable') or {}
		c.hset(failed_k, 'failed_at', time.now().unix_time().str()) or {}
	}

	println('[${e.self.id}] Promoted buffer node $buffer_id to replace failed node $failed_node_id')
}

// Collect votes from ALL redis servers, verify signatures (only from active nodes)
fn (mut e Election) collect_votes(term int) map[string]int {
	mut counts := map[string]int{}
	mut seen := map[string]bool{} // avoid double-counting same vote from multiple servers

	for mut c in e.clients {
		keys := c.keys('vote:${term}:*') or { continue }
		for k in keys {
			if seen[k] { continue }
			seen[k] = true
			vals := c.hgetall(k) or { continue }
			candidate := vals['candidate']
			sig_hex := vals['sig']
			voter_id := k.split(':')[2]

			// Only count votes from active nodes
			if voter_id !in e.pubkeys || voter_id !in e.all_nodes { continue }
			if e.all_nodes[voter_id].status != .active { continue }

			msg := '${term}:${candidate}'
			if verify(e.pubkeys[voter_id], msg, sig_hex) {
				counts[candidate]++
			} else {
				println('[${e.self.id}] invalid signature from $voter_id')
			}
		}
	}
	return counts
}

// Run election (only active nodes participate)
fn (mut e Election) run_election() {
	if e.self.status != .active {
		return // Buffer nodes don't participate in elections
	}

	e.self.term++
	e.vote_for(e.self.id)

	// wait a bit for other nodes to also vote
	time.sleep(500 * time.millisecond)

	votes := e.collect_votes(e.self.term)
	active_node_count := e.all_nodes.values().filter(it.status == .active).len
	majority_threshold := (active_node_count / 2) + 1

	for cand, cnt in votes {
		if cnt >= majority_threshold {
			if cand == e.self.id {
				println('[${e.self.id}] I AM LEADER (term=${e.self.term}, votes=$cnt, active_nodes=$active_node_count)')
				e.self.leader = true
			} else {
				println('[${e.self.id}] sees LEADER = $cand (term=${e.self.term}, votes=$cnt, active_nodes=$active_node_count)')
				e.self.leader = false
			}
		}
	}
}

// Health monitoring loop (runs in background)
fn (mut e Election) health_monitor_loop() {
	for {
		if e.self.status == .active {
			// Check health of other nodes
			for node_id, node in e.all_nodes {
				if node_id == e.self.id { continue }

				// Simple health check: try to read a heartbeat key
				mut is_available := false
				for mut c in e.clients {
					heartbeat_key := 'heartbeat:${node_id}'
					val := c.get(heartbeat_key) or { continue }
					last_heartbeat := val.i64()
					if (time.now().unix_time() - last_heartbeat) < 60 { // 60 seconds threshold
						is_available = true
						break
					}
				}

				status := if is_available { 'available' } else { 'unavailable' }
				e.report_node_health(node_id, status)
			}

			// Check for consensus on failed nodes
			e.check_node_availability()
		}

		time.sleep(health_check_interval_ms * time.millisecond)
	}
}

// Heartbeat loop
fn (mut e Election) heartbeat_loop() {
	for {
		// Update own heartbeat
		now := time.now().unix_time()
		for mut c in e.clients {
			heartbeat_key := 'heartbeat:${e.self.id}'
			c.set(heartbeat_key, now.str()) or {}
			c.expire(heartbeat_key, 120) or {} // expire after 2 minutes
		}

		if e.self.status == .active {
			if e.self.leader {
				println('[${e.self.id}] Heartbeat term=${e.self.term} (LEADER)')
			} else {
				e.run_election()
			}
		} else if e.self.status == .buffer {
			println('[${e.self.id}] Buffer node monitoring cluster')
		}

		time.sleep(heartbeat_interval_ms * time.millisecond)
	}
}
202
lib/hero/herocluster/instruct1.md
Normal file
@@ -0,0 +1,202 @@
Great 👍 Let's extend the **Redis + ed25519 leader election** so that:

* We have **3 Redis servers** (`:6379`, `:6380`, `:6381`).
* Each node writes its **signed vote** to **all 3 servers**.
* Each node reads all votes from all servers, verifies them with the **known public keys**, and tallies majority (≥2/3 = 2 votes).
* Leader is declared if majority agrees.

---

## Full V Implementation

```v
import db.redis
import crypto.ed25519
import crypto.rand
import encoding.hex
import os
import time

const election_timeout_ms = 3000
const heartbeat_interval_ms = 1000

// --- Crypto helpers ---

struct Keys {
	priv ed25519.PrivateKey
	pub  ed25519.PublicKey
}

// sign a message
fn (k Keys) sign(msg string) string {
	sig := ed25519.sign(k.priv, msg.bytes())
	return hex.encode(sig)
}

// verify signature
fn verify(pub ed25519.PublicKey, msg string, sig_hex string) bool {
	sig := hex.decode(sig_hex) or { return false }
	return ed25519.verify(pub, msg.bytes(), sig)
}

// --- Node & Election ---

struct Node {
	id string
mut:
	term      int
	leader    bool
	voted_for string
}

struct Election {
mut:
	clients []redis.Connection
	pubkeys map[string]ed25519.PublicKey
	self    Node
	keys    Keys
}

// Redis keys
fn vote_key(term int, node_id string) string { return 'vote:${term}:${node_id}' }

// Write vote (signed) to ALL redis servers
fn (mut e Election) vote_for(candidate string) {
	msg := '${e.self.term}:${candidate}'
	sig_hex := e.keys.sign(msg)
	for mut c in e.clients {
		k := vote_key(e.self.term, e.self.id)
		c.hset(k, 'candidate', candidate) or {}
		c.hset(k, 'sig', sig_hex) or {}
		c.expire(k, 5) or {}
	}
	println('[${e.self.id}] voted for $candidate (term=${e.self.term})')
}

// Collect votes from ALL redis servers, verify signatures
fn (mut e Election) collect_votes(term int) map[string]int {
	mut counts := map[string]int{}
	mut seen := map[string]bool{} // avoid double-counting same vote from multiple servers

	for mut c in e.clients {
		keys := c.keys('vote:${term}:*') or { continue }
		for k in keys {
			if seen[k] { continue }
			seen[k] = true
			vals := c.hgetall(k) or { continue }
			candidate := vals['candidate']
			sig_hex := vals['sig']
			voter_id := k.split(':')[2]
			if voter_id !in e.pubkeys {
				println('[${e.self.id}] unknown voter $voter_id')
				continue
			}
			msg := '${term}:${candidate}'
			if verify(e.pubkeys[voter_id], msg, sig_hex) {
				counts[candidate]++
			} else {
				println('[${e.self.id}] invalid signature from $voter_id')
			}
		}
	}
	return counts
}

// Run election
fn (mut e Election) run_election() {
	e.self.term++
	e.vote_for(e.self.id)

	// wait a bit for other nodes to also vote
	time.sleep(500 * time.millisecond)

	votes := e.collect_votes(e.self.term)
	for cand, cnt in votes {
		if cnt >= 2 { // majority of 3
			if cand == e.self.id {
				println('[${e.self.id}] I AM LEADER (term=${e.self.term}, votes=$cnt)')
				e.self.leader = true
			} else {
				println('[${e.self.id}] sees LEADER = $cand (term=${e.self.term}, votes=$cnt)')
				e.self.leader = false
			}
		}
	}
}

// Heartbeat loop
fn (mut e Election) heartbeat_loop() {
	for {
		if e.self.leader {
			println('[${e.self.id}] Heartbeat term=${e.self.term}')
		} else {
			e.run_election()
		}
		time.sleep(heartbeat_interval_ms * time.millisecond)
	}
}

// --- MAIN ---

fn main() {
	if os.args.len < 2 {
		eprintln('Usage: ./prog <node_id>')
		return
	}
	node_id := os.args[1]

	// --- Generate ephemeral keys for demo ---
	// In real use: load from PEM files
	priv, pub := ed25519.generate_key(rand.reader) or { panic(err) }

	mut pubkeys := map[string]ed25519.PublicKey{}
	pubkeys[node_id] = pub
	// TODO: load all pubkeys from config file so every node knows others

	servers := ['127.0.0.1:6379', '127.0.0.1:6380', '127.0.0.1:6381']
	mut conns := []redis.Connection{}
	for s in servers {
		mut c := redis.connect(redis.Options{ server: s }) or {
			panic('could not connect to redis $s: $err')
		}
		conns << c
	}

	mut election := Election{
		clients: conns
		pubkeys: pubkeys
		self: Node{
			id: node_id
			term: 0
			leader: false
		}
		keys: Keys{ priv: priv, pub: pub }
	}

	println('[$node_id] started, connected to 3 redis servers.')
	election.heartbeat_loop()
}
```

---

## How to Run

1. Start 3 redis servers (different ports):

```bash
redis-server --port 6379 --dir /tmp/redis1 --daemonize yes
redis-server --port 6380 --dir /tmp/redis2 --daemonize yes
redis-server --port 6381 --dir /tmp/redis3 --daemonize yes
```

2. Run 3 nodes, each with its own ID:

```bash
v run raft_sign.v node1
v run raft_sign.v node2
v run raft_sign.v node3
```

3. You'll see one leader elected with **2/3 majority verified votes**.
455
lib/hero/herocluster/instruct2.md
Normal file
@@ -0,0 +1,455 @@
# Hero Cluster Instructions v2: 4-Node Cluster with Buffer Node
|
||||
|
||||
This extends the **Redis + ed25519 leader election** from instruct1.md to include a **4th buffer node** mechanism for enhanced fault tolerance.
|
||||
|
||||
## Overview
|
||||
|
||||
* We have **4 Redis servers** (`:6379`, `:6380`, `:6381`, `:6382`).
|
||||
* **3 active nodes** participate in normal leader election.
|
||||
* **1 buffer node** remains standby and monitors the cluster health.
|
||||
* If **2 of 3 active nodes** agree that a 3rd node is unavailable for **longer than 1 day**, the buffer node automatically becomes active.
|
||||
|
||||
---
|
||||
|
||||
## Extended V Implementation
|
||||
|
||||
```v
|
||||
import db.redis
|
||||
import crypto.ed25519
|
||||
import crypto.rand
|
||||
import encoding.hex
|
||||
import os
|
||||
import time
|
||||
|
||||
const election_timeout_ms = 3000
|
||||
const heartbeat_interval_ms = 1000
|
||||
const node_unavailable_threshold_ms = 24 * 60 * 60 * 1000 // 1 day in milliseconds
|
||||
const health_check_interval_ms = 30000 // 30 seconds
|
||||
|
||||
// --- Crypto helpers ---
|
||||
|
||||
struct Keys {
|
||||
priv ed25519.PrivateKey
|
||||
pub ed25519.PublicKey
|
||||
}
|
||||
|
||||
// sign a message
|
||||
fn (k Keys) sign(msg string) string {
|
||||
sig := ed25519.sign(k.priv, msg.bytes())
|
||||
return hex.encode(sig)
|
||||
}
|
||||
|
||||
// verify signature
|
||||
fn verify(pub ed25519.PublicKey, msg string, sig_hex string) bool {
|
||||
sig := hex.decode(sig_hex) or { return false }
|
||||
return ed25519.verify(pub, msg.bytes(), sig)
|
||||
}
|
||||
|
||||
// --- Node & Election ---
|
||||
|
||||
enum NodeStatus {
|
||||
active
|
||||
buffer
|
||||
unavailable
|
||||
}
|
||||
|
||||
struct Node {
|
||||
id string
|
||||
mut:
|
||||
term int
|
||||
leader bool
|
||||
voted_for string
|
||||
status NodeStatus
|
||||
last_seen i64 // timestamp
|
||||
}
|
||||
|
||||
struct HealthReport {
|
||||
reporter_id string
|
||||
target_id string
|
||||
status string // "available" or "unavailable"
|
||||
timestamp i64
|
||||
signature string
|
||||
}
|
||||
|
||||
struct Election {
|
||||
mut:
|
||||
clients []redis.Connection
|
||||
pubkeys map[string]ed25519.PublicKey
|
||||
self Node
|
||||
keys Keys
|
||||
all_nodes map[string]Node
|
||||
buffer_nodes []string
|
||||
}
|
||||
|
||||
// Redis keys
|
||||
fn vote_key(term int, node_id string) string { return 'vote:${term}:${node_id}' }
|
||||
fn health_key(reporter_id string, target_id string) string { return 'health:${reporter_id}:${target_id}' }
|
||||
fn node_status_key(node_id string) string { return 'node_status:${node_id}' }
|
||||
|
||||
// Write vote (signed) to ALL redis servers
|
||||
fn (mut e Election) vote_for(candidate string) {
|
||||
msg := '${e.self.term}:${candidate}'
|
||||
sig_hex := e.keys.sign(msg)
|
||||
for mut c in e.clients {
|
||||
k := vote_key(e.self.term, e.self.id)
|
||||
c.hset(k, 'candidate', candidate) or {}
|
||||
c.hset(k, 'sig', sig_hex) or {}
|
||||
c.expire(k, 5) or {}
|
||||
}
|
||||
println('[${e.self.id}] voted for $candidate (term=${e.self.term})')
|
||||
}
|
||||
|
||||
// Report node health status
|
||||
fn (mut e Election) report_node_health(target_id string, status string) {
|
||||
now := time.now().unix_time()
|
||||
msg := '${target_id}:${status}:${now}'
|
||||
sig_hex := e.keys.sign(msg)
|
||||
|
||||
report := HealthReport{
|
||||
reporter_id: e.self.id
|
||||
target_id: target_id
|
||||
status: status
|
||||
timestamp: now
|
||||
signature: sig_hex
|
||||
}
|
||||
|
||||
for mut c in e.clients {
|
||||
k := health_key(e.self.id, target_id)
|
||||
c.hset(k, 'status', status) or {}
|
||||
c.hset(k, 'timestamp', now.str()) or {}
|
||||
c.hset(k, 'signature', sig_hex) or {}
|
||||
c.expire(k, 86400) or {} // expire after 24 hours
|
||||
}
|
||||
println('[${e.self.id}] reported $target_id as $status')
|
||||
}
|
||||
|
||||
// Collect health reports and check for consensus on unavailable nodes
|
||||
fn (mut e Election) check_node_availability() {
|
||||
now := time.now().unix_time()
|
||||
mut unavailable_reports := map[string]map[string]i64{} // target_id -> reporter_id -> timestamp
|
||||
|
||||
for mut c in e.clients {
|
||||
keys := c.keys('health:*') or { continue }
|
||||
for k in keys {
|
||||
parts := k.split(':')
|
||||
if parts.len != 3 { continue }
|
||||
reporter_id := parts[1]
|
||||
target_id := parts[2]
|
||||
|
||||
vals := c.hgetall(k) or { continue }
|
||||
status := vals['status']
|
||||
timestamp_str := vals['timestamp']
|
||||
sig_hex := vals['signature']
|
||||
|
||||
if reporter_id !in e.pubkeys { continue }
|
||||
|
||||
timestamp := timestamp_str.i64()
|
||||
msg := '${target_id}:${status}:${timestamp}'
|
||||
|
||||
if verify(e.pubkeys[reporter_id], msg, sig_hex) {
|
||||
if status == 'unavailable' && (now - timestamp) >= (node_unavailable_threshold_ms / 1000) {
|
||||
if target_id !in unavailable_reports {
|
||||
unavailable_reports[target_id] = map[string]i64{}
|
||||
}
|
||||
unavailable_reports[target_id][reporter_id] = timestamp
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check for consensus (2 out of 3 active nodes agree)
|
||||
for target_id, reports in unavailable_reports {
|
||||
if reports.len >= 2 && target_id in e.all_nodes {
|
||||
if e.all_nodes[target_id].status == .active {
|
||||
println('[${e.self.id}] Consensus reached: $target_id is unavailable for >1 day')
|
||||
e.promote_buffer_node(target_id)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Promote a buffer node to active status
|
||||
fn (mut e Election) promote_buffer_node(failed_node_id string) {
|
||||
if e.buffer_nodes.len == 0 {
|
||||
println('[${e.self.id}] No buffer nodes available for promotion')
|
||||
return
|
||||
}
|
||||
|
||||
// Select first available buffer node
|
||||
buffer_id := e.buffer_nodes[0]
|
||||
|
||||
// Update node statuses
|
||||
if failed_node_id in e.all_nodes {
|
||||
e.all_nodes[failed_node_id].status = .unavailable
|
||||
}
|
||||
if buffer_id in e.all_nodes {
|
||||
e.all_nodes[buffer_id].status = .active
|
||||
}
|
||||
|
||||
// Remove from buffer list
|
||||
e.buffer_nodes = e.buffer_nodes.filter(it != buffer_id)
|
||||
|
||||
// Announce the promotion
|
||||
for mut c in e.clients {
|
||||
k := node_status_key(buffer_id)
|
||||
c.hset(k, 'status', 'active') or {}
|
||||
c.hset(k, 'promoted_at', time.now().unix_time().str()) or {}
|
||||
c.hset(k, 'replaced_node', failed_node_id) or {}
|
||||
|
||||
// Mark failed node as unavailable
|
||||
failed_k := node_status_key(failed_node_id)
|
||||
c.hset(failed_k, 'status', 'unavailable') or {}
|
||||
c.hset(failed_k, 'failed_at', time.now().unix_time().str()) or {}
|
||||
}
|
||||
|
||||
println('[${e.self.id}] Promoted buffer node $buffer_id to replace failed node $failed_node_id')
|
||||
}
|
||||
|
||||
// Collect votes from ALL redis servers, verify signatures (only from active nodes)
|
||||
fn (mut e Election) collect_votes(term int) map[string]int {
|
||||
mut counts := map[string]int{}
|
||||
mut seen := map[string]bool{} // avoid double-counting same vote from multiple servers
|
||||
|
||||
for mut c in e.clients {
|
||||
keys := c.keys('vote:${term}:*') or { continue }
|
||||
for k in keys {
|
||||
if seen[k] { continue }
|
||||
seen[k] = true
|
||||
vals := c.hgetall(k) or { continue }
|
||||
candidate := vals['candidate']
|
||||
sig_hex := vals['sig']
|
||||
voter_id := k.split(':')[2]
|
||||
|
||||
// Only count votes from active nodes
|
||||
if voter_id !in e.pubkeys || voter_id !in e.all_nodes { continue }
|
||||
if e.all_nodes[voter_id].status != .active { continue }
|
||||
|
||||
msg := '${term}:${candidate}'
|
||||
if verify(e.pubkeys[voter_id], msg, sig_hex) {
|
||||
counts[candidate]++
|
||||
} else {
|
||||
println('[${e.self.id}] invalid signature from $voter_id')
|
||||
}
|
||||
}
|
||||
}
|
||||
return counts
|
||||
}
|
||||
|
||||
// Run election (only active nodes participate)
|
||||
fn (mut e Election) run_election() {
|
||||
if e.self.status != .active {
|
||||
return // Buffer nodes don't participate in elections
|
||||
}
|
||||
|
||||
e.self.term++
|
||||
e.vote_for(e.self.id)
|
||||
|
||||
// wait a bit for other nodes to also vote
|
||||
time.sleep(500 * time.millisecond)
|
||||
|
||||
votes := e.collect_votes(e.self.term)
|
||||
active_node_count := e.all_nodes.values().filter(it.status == .active).len
|
||||
majority_threshold := (active_node_count / 2) + 1
|
||||
|
||||
for cand, cnt in votes {
|
||||
if cnt >= majority_threshold {
|
||||
if cand == e.self.id {
|
||||
println('[${e.self.id}] I AM LEADER (term=${e.self.term}, votes=$cnt, active_nodes=$active_node_count)')
|
||||
e.self.leader = true
|
||||
} else {
|
||||
println('[${e.self.id}] sees LEADER = $cand (term=${e.self.term}, votes=$cnt, active_nodes=$active_node_count)')
|
||||
e.self.leader = false
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Health monitoring loop (runs in background)
|
||||
fn (mut e Election) health_monitor_loop() {
|
||||
for {
|
||||
if e.self.status == .active {
|
||||
// Check health of other nodes
|
||||
for node_id, node in e.all_nodes {
|
||||
if node_id == e.self.id { continue }
|
||||
|
||||
// Simple health check: try to read a heartbeat key
|
||||
mut is_available := false
|
||||
for mut c in e.clients {
|
||||
heartbeat_key := 'heartbeat:${node_id}'
|
||||
val := c.get(heartbeat_key) or { continue }
|
||||
last_heartbeat := val.i64()
|
||||
if (time.now().unix_time() - last_heartbeat) < 60 { // 60 seconds threshold
|
||||
is_available = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
status := if is_available { 'available' } else { 'unavailable' }
|
||||
e.report_node_health(node_id, status)
|
||||
}
|
||||
|
||||
// Check for consensus on failed nodes
|
||||
e.check_node_availability()
|
||||
}
|
||||
|
||||
time.sleep(health_check_interval_ms * time.millisecond)
|
||||
}
|
||||
}
|
||||
|
||||
// Heartbeat loop
|
||||
fn (mut e Election) heartbeat_loop() {
|
||||
for {
|
||||
// Update own heartbeat
|
||||
now := time.now().unix_time()
|
||||
for mut c in e.clients {
|
||||
heartbeat_key := 'heartbeat:${e.self.id}'
|
||||
c.set(heartbeat_key, now.str()) or {}
|
||||
c.expire(heartbeat_key, 120) or {} // expire after 2 minutes
|
||||
}
|
||||
|
||||
if e.self.status == .active {
|
||||
if e.self.leader {
|
||||
println('[${e.self.id}] Heartbeat term=${e.self.term} (LEADER)')
|
||||
} else {
|
||||
e.run_election()
|
||||
}
|
||||
} else if e.self.status == .buffer {
|
||||
println('[${e.self.id}] Buffer node monitoring cluster')
|
||||
}
|
||||
|
||||
time.sleep(heartbeat_interval_ms * time.millisecond)
|
||||
}
|
||||
}

// --- MAIN ---

fn main() {
	if os.args.len < 3 {
		eprintln('Usage: ./prog <node_id> <status>')
		eprintln('       status: active|buffer')
		return
	}
	node_id := os.args[1]
	status_str := os.args[2]

	status := match status_str {
		'active' { NodeStatus.active }
		'buffer' { NodeStatus.buffer }
		else {
			eprintln('Invalid status. Use: active|buffer')
			return
		}
	}

	// --- Generate ephemeral keys for demo ---
	// In real use: load from PEM files
	// V's crypto.ed25519 draws from a secure source itself, no reader argument
	pub_key, priv_key := ed25519.generate_key() or { panic(err) }

	mut pubkeys := map[string]ed25519.PublicKey{}
	pubkeys[node_id] = pub_key
	// TODO: load all pubkeys from a config file so every node knows the others

	// Initialize all nodes (in a real scenario, load from config)
	mut all_nodes := map[string]Node{}
	all_nodes['node1'] = Node{
		id: 'node1'
		status: .active
	}
	all_nodes['node2'] = Node{
		id: 'node2'
		status: .active
	}
	all_nodes['node3'] = Node{
		id: 'node3'
		status: .active
	}
	all_nodes['node4'] = Node{
		id: 'node4'
		status: .buffer
	}

	// Set current node status (copy out, modify, write back:
	// V does not allow mutating struct fields inside a map in place)
	mut self_node := all_nodes[node_id]
	self_node.status = status
	all_nodes[node_id] = self_node

	servers := ['127.0.0.1:6379', '127.0.0.1:6380', '127.0.0.1:6381', '127.0.0.1:6382']
	mut conns := []redis.Connection{}
	for s in servers {
		mut c := redis.connect(redis.Options{ server: s }) or {
			panic('could not connect to redis ${s}: ${err}')
		}
		conns << c
	}

	mut election := Election{
		clients: conns
		pubkeys: pubkeys
		self: Node{
			id: node_id
			term: 0
			leader: false
			status: status
		}
		keys: Keys{ priv: priv_key, pub: pub_key }
		all_nodes: all_nodes
		buffer_nodes: ['node4'] // initially node4 is the buffer node
	}

	println('[${node_id}] started as ${status_str}, connected to 4 redis servers.')

	// Start health monitoring in background
	spawn election.health_monitor_loop()

	// Start main heartbeat loop
	election.heartbeat_loop()
}
```

---

## Key Extensions from instruct1.md

### 1. **4th Redis Server**
- Added `:6382` as the 4th Redis server for enhanced redundancy.

### 2. **Node Status Management**
- **NodeStatus enum**: `active`, `buffer`, `unavailable`
- **Buffer nodes**: Don't participate in elections but monitor cluster health.

### 3. **Health Monitoring System**
- **Health reports**: Signed reports about node availability.
- **Consensus mechanism**: 2 out of 3 active nodes must agree a node is unavailable.
- **1-day threshold**: Node must be unavailable for >24 hours before replacement.
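
The consensus rule and threshold above can be sketched as a small V helper. The struct and function names here are illustrative, not part of the files below:

```v
// A node is only considered failed when at least 2 of the 3 other active
// nodes have signed reports saying it has been unreachable for >24h.
const unavailable_threshold_secs = i64(24 * 3600)

struct HealthReport {
	reporter  string
	target    string
	last_seen i64 // unix timestamp of last successful contact
}

fn is_node_failed(reports []HealthReport, target string, now i64) bool {
	mut agree := 0
	for r in reports {
		if r.target == target && now - r.last_seen > unavailable_threshold_secs {
			agree++
		}
	}
	return agree >= 2 // 2 out of 3 active peers must agree
}
```

In the real system the reports would of course be signature-verified before being counted.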

### 4. **Automatic Buffer Promotion**
- When consensus is reached about a failed node, the buffer node automatically becomes active.
- The failed node is marked as unavailable.
- The cluster continues with 3 active nodes.
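
The promotion step can be sketched like this (illustrative method, reusing the `Election`/`Node` shapes from the example above):

```v
// Once consensus marks a node as failed, the first buffer node takes its
// place; the failed node is marked unavailable.
fn (mut e Election) promote_buffer(failed_id string) {
	if e.buffer_nodes.len == 0 {
		return
	}
	promoted := e.buffer_nodes[0]
	e.buffer_nodes.delete(0)

	// copy-modify-writeback, since V forbids mutating map values in place
	mut failed := e.all_nodes[failed_id]
	failed.status = .unavailable
	e.all_nodes[failed_id] = failed

	mut newactive := e.all_nodes[promoted]
	newactive.status = .active
	e.all_nodes[promoted] = newactive

	println('[${e.self.id}] promoted ${promoted} to active, ${failed_id} marked unavailable')
}
```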

### 5. **Enhanced Election Logic**
- Only active nodes participate in voting.
- The majority threshold adapts to the current number of active nodes.
- Buffer nodes monitor but don't vote.
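
The adaptive majority rule boils down to counting only `.active` nodes toward the quorum. A minimal sketch (hypothetical helper name):

```v
// Only nodes with status .active count toward the quorum, so the
// threshold shrinks and grows with the cluster.
fn majority_threshold(all_nodes map[string]Node) int {
	mut active := 0
	for _, n in all_nodes {
		if n.status == .active {
			active++
		}
	}
	return active / 2 + 1 // e.g. 3 active nodes need 2 votes
}
```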

---

## How to Run

1. **Start 4 redis servers**:
```bash
redis-server --port 6379 --dir /tmp/redis1 --daemonize yes
redis-server --port 6380 --dir /tmp/redis2 --daemonize yes
redis-server --port 6381 --dir /tmp/redis3 --daemonize yes
redis-server --port 6382 --dir /tmp/redis4 --daemonize yes
```

2. **Run 3 active nodes + 1 buffer**:
```bash
v run raft_sign_v2.v node1 active
v run raft_sign_v2.v node2 active
v run raft_sign_v2.v node3 active
v run raft_sign_v2.v node4 buffer
```

3. **Test failure scenario**:
   - Stop one active node (e.g., kill node3)
   - Wait >1 day (or reduce the threshold for testing)
   - Watch buffer node4 automatically become active
   - The cluster continues with 3 active nodes

---

## Benefits

- **Enhanced fault tolerance**: Can survive 1 node failure without service interruption.
- **Automatic recovery**: No manual intervention needed for node replacement.
- **Consensus-based decisions**: Prevents false positives in failure detection.
- **Cryptographic security**: All health reports are signed and verified.
- **Scalable design**: Easy to add more buffer nodes if needed.

101	lib/hero/heromodels/base.v	Normal file
@@ -0,0 +1,101 @@
module heromodels

import crypto.md5
import freeflowuniverse.herolib.core.redisclient
import freeflowuniverse.herolib.data.encoder

// Base holds the fields shared by all hero models
@[heap]
pub struct Base {
pub mut:
	id             u32
	name           string
	description    string
	created_at     i64
	updated_at     i64
	securitypolicy u32
	tags           u32   // set/get as []string; the sorted list is md5'ed, which gives the unique id of the tag set
	comments       []u32
}

@[heap]
pub struct SecurityPolicy {
pub mut:
	id     u32
	read   []u32 // links to users & groups
	write  []u32 // links to users & groups
	delete []u32 // links to users & groups
	public bool
	md5    string // md5 of the sorted read/write/delete ids + public flag, so any permission config maps to one hash
}

@[heap]
pub struct Tags {
pub mut:
	id    u32
	names []string // unique per id
	md5   string   // md5 of the sorted names (lowercased ascii), to make the unique id easy to find
}

/////////////////

@[params]
pub struct BaseArgs {
pub mut:
	id             ?u32
	name           string
	description    string
	securitypolicy ?u32
	tags           []string
	comments       []CommentArg
}

pub fn tags2id(tags []string) !u32 {
	mut myid := u32(0)
	if tags.len > 0 {
		mut redis := redisclient.core_get()!
		mut names := tags.map(it.to_lower().trim_space())
		names.sort() // sort() mutates in place and returns nothing in V
		mytags := names.join(',')
		mymd5 := md5.hexhash(mytags)
		existing := redis.hget('db:tags', mymd5)!
		if existing == '' {
			myid = u32(redis.incr('db:tags:id')!)
			redis.hset('db:tags', mymd5, myid.str())!
			redis.hset('db:tags', myid.str(), mytags)!
		} else {
			myid = existing.u32()
		}
	}
	return myid
}

// analogous to tags2id: hash the comment texts to find/create one id for the set
pub fn comments2id(comments []CommentArg) !u32 {
	mut myid := u32(0)
	if comments.len > 0 {
		mut redis := redisclient.core_get()!
		mut texts := comments.map(it.comment.to_lower().trim_space())
		texts.sort()
		mycomments := texts.join(',')
		mymd5 := md5.hexhash(mycomments)
		existing := redis.hget('db:comments', mymd5)!
		if existing == '' {
			myid = u32(redis.incr('db:comments:id')!)
			redis.hset('db:comments', mymd5, myid.str())!
			redis.hset('db:comments', myid.str(), mycomments)!
		} else {
			myid = existing.u32()
		}
	}
	return myid
}

// Convert a CommentArg array to a u32 array of stored comment ids
pub fn comments_store(args BaseArgs) ![]u32 {
	mut comment_ids := []u32{}
	for comment in args.comments {
		comment_ids << comment_set(comment)!
	}
	return comment_ids
}

69	lib/hero/heromodels/calendar.v	Normal file
@@ -0,0 +1,69 @@
module heromodels

import freeflowuniverse.herolib.data.ourtime

// Calendar represents a collection of events
@[heap]
pub struct Calendar {
	Base
pub mut:
	group_id  u32    // Associated group for permissions
	events    []u32  // IDs of calendar events (u32 to match CalendarEvent)
	color     string // Hex color code
	timezone  string
	is_public bool
}

@[params]
pub struct CalendarArgs {
	BaseArgs
pub mut:
	group_id  u32
	events    []u32
	color     string
	timezone  string
	is_public bool
}

pub fn calendar_new(args CalendarArgs) !Calendar {
	mut commentids := []u32{}
	for comment in args.comments {
		commentids << comment_set(comment)!
	}
	mut obj := Calendar{
		id: args.id or { 0 }
		name: args.name
		description: args.description
		created_at: ourtime.now().unix()
		updated_at: ourtime.now().unix()
		securitypolicy: args.securitypolicy or { 0 }
		tags: tags2id(args.tags)!
		comments: commentids
		group_id: args.group_id
		events: args.events
		color: args.color
		timezone: args.timezone
		is_public: args.is_public
	}
	return obj
}

pub fn (mut c Calendar) add_event(event_id u32) {
	if event_id !in c.events {
		c.events << event_id
		c.updated_at = ourtime.now().unix() // use Base's updated_at
	}
}

pub fn (mut c Calendar) dump() []u8 {
	// TODO: implement based on lib/data/encoder/readme.md
	return []u8{}
}

pub fn calendar_load(data []u8) Calendar {
	// TODO: implement based on lib/data/encoder/readme.md
	return Calendar{}
}

266	lib/hero/heromodels/calendar_event.v	Normal file
@@ -0,0 +1,266 @@
module heromodels

import freeflowuniverse.herolib.data.ourtime
import freeflowuniverse.herolib.data.encoder

// CalendarEvent represents a single event in a calendar
@[heap]
pub struct CalendarEvent {
	Base
pub mut:
	title         string
	start_time    i64    // Unix timestamp
	end_time      i64    // Unix timestamp
	location      string
	attendees     []u32  // IDs of user groups
	fs_items      []u32  // IDs of linked files or dirs
	calendar_id   u32    // Associated calendar
	status        EventStatus
	is_all_day    bool
	is_recurring  bool
	recurrence    []RecurrenceRule // normally empty
	reminder_mins []int  // Minutes before event for reminders
	color         string // Hex color code
	timezone      string
}

pub struct Attendee {
pub mut:
	user_id u32
	status  AttendanceStatus
	role    AttendeeRole
}

pub enum AttendanceStatus {
	no_response
	accepted
	declined
	tentative
}

pub enum AttendeeRole {
	required
	optional
	organizer
}

pub enum EventStatus {
	draft
	published
	cancelled
	completed
}

pub struct RecurrenceRule {
pub mut:
	frequency   RecurrenceFreq
	interval    int   // Every N frequencies
	until       i64   // End date (Unix timestamp)
	count       int   // Number of occurrences
	by_weekday  []int // Days of week (0=Sunday)
	by_monthday []int // Days of month
}

pub enum RecurrenceFreq {
	never // renamed from `none`, which is a reserved keyword in V
	daily
	weekly
	monthly
	yearly
}

@[params]
pub struct CalendarEventArgs {
	BaseArgs
pub mut:
	title         string
	start_time    string // use the ourtime module to go from string to epoch
	end_time      string // use the ourtime module to go from string to epoch
	location      string
	attendees     []u32  // IDs of user groups
	fs_items      []u32  // IDs of linked files or dirs
	calendar_id   u32    // Associated calendar
	status        EventStatus
	is_all_day    bool
	is_recurring  bool
	recurrence    []RecurrenceRule
	reminder_mins []int  // Minutes before event for reminders
	color         string // Hex color code
	timezone      string
}

pub fn calendar_event_new(args CalendarEventArgs) !CalendarEvent {
	// Convert tags to a u32 id
	tags_id := tags2id(args.tags)!
	// Store the comments and collect their ids
	comment_ids := comment_multiset(args.comments)!

	return CalendarEvent{
		// Base fields
		id: args.id or { 0 }
		name: args.name
		description: args.description
		created_at: ourtime.now().unix()
		updated_at: ourtime.now().unix()
		securitypolicy: args.securitypolicy or { 0 }
		tags: tags_id
		comments: comment_ids
		// CalendarEvent specific fields
		title: args.title
		start_time: ourtime.new(args.start_time)!.unix()
		end_time: ourtime.new(args.end_time)!.unix()
		location: args.location
		attendees: args.attendees
		fs_items: args.fs_items
		calendar_id: args.calendar_id
		status: args.status
		is_all_day: args.is_all_day
		is_recurring: args.is_recurring
		recurrence: args.recurrence
		reminder_mins: args.reminder_mins
		color: args.color
		timezone: args.timezone
	}
}

pub fn (mut e CalendarEvent) dump() ![]u8 {
	// Create a new encoder
	mut enc := encoder.new()

	// Add version byte
	enc.add_u8(1)

	// Encode Base fields
	enc.add_u32(e.id)
	enc.add_string(e.name)
	enc.add_string(e.description)
	enc.add_i64(e.created_at)
	enc.add_i64(e.updated_at)
	enc.add_u32(e.securitypolicy)
	enc.add_u32(e.tags)
	enc.add_list_u32(e.comments)

	// Encode CalendarEvent specific fields
	// (description lives in Base only, so it is not encoded a second time)
	enc.add_string(e.title)
	enc.add_i64(e.start_time)
	enc.add_i64(e.end_time)
	enc.add_string(e.location)
	enc.add_list_u32(e.attendees)
	enc.add_list_u32(e.fs_items)
	enc.add_u32(e.calendar_id)
	enc.add_u8(u8(e.status))
	enc.add_bool(e.is_all_day)
	enc.add_bool(e.is_recurring)

	// Encode recurrence array
	enc.add_u16(u16(e.recurrence.len))
	for rule in e.recurrence {
		enc.add_u8(u8(rule.frequency))
		enc.add_int(rule.interval)
		enc.add_i64(rule.until)
		enc.add_int(rule.count)
		enc.add_list_int(rule.by_weekday)
		enc.add_list_int(rule.by_monthday)
	}

	enc.add_list_int(e.reminder_mins)
	enc.add_string(e.color)
	enc.add_string(e.timezone)

	return enc.data
}

pub fn calendar_event_load(data []u8) !CalendarEvent {
	// Create a new decoder
	mut dec := encoder.decoder_new(data)

	// Read version byte
	version := dec.get_u8()
	if version != 1 {
		return error('wrong version in calendar event load')
	}

	// Decode Base fields
	id := dec.get_u32()
	name := dec.get_string()
	description := dec.get_string()
	created_at := dec.get_i64()
	updated_at := dec.get_i64()
	securitypolicy := dec.get_u32()
	tags := dec.get_u32()
	comments := dec.get_list_u32()

	// Decode CalendarEvent specific fields
	title := dec.get_string()
	start_time := dec.get_i64()
	end_time := dec.get_i64()
	location := dec.get_string()
	attendees := dec.get_list_u32()
	fs_items := dec.get_list_u32()
	calendar_id := dec.get_u32()
	status := EventStatus(dec.get_u8())
	is_all_day := dec.get_bool()
	is_recurring := dec.get_bool()

	// Decode recurrence array
	recurrence_len := dec.get_u16()
	mut recurrence := []RecurrenceRule{}
	for _ in 0 .. recurrence_len {
		frequency := RecurrenceFreq(dec.get_u8())
		interval := dec.get_int()
		until := dec.get_i64()
		count := dec.get_int()
		by_weekday := dec.get_list_int()
		by_monthday := dec.get_list_int()

		recurrence << RecurrenceRule{
			frequency: frequency
			interval: interval
			until: until
			count: count
			by_weekday: by_weekday
			by_monthday: by_monthday
		}
	}

	reminder_mins := dec.get_list_int()
	color := dec.get_string()
	timezone := dec.get_string()

	return CalendarEvent{
		// Base fields
		id: id
		name: name
		description: description
		created_at: created_at
		updated_at: updated_at
		securitypolicy: securitypolicy
		tags: tags
		comments: comments
		// CalendarEvent specific fields
		title: title
		start_time: start_time
		end_time: end_time
		location: location
		attendees: attendees
		fs_items: fs_items
		calendar_id: calendar_id
		status: status
		is_all_day: is_all_day
		is_recurring: is_recurring
		recurrence: recurrence
		reminder_mins: reminder_mins
		color: color
		timezone: timezone
	}
}

63	lib/hero/heromodels/chat_group.v	Normal file
@@ -0,0 +1,63 @@
module heromodels

import crypto.blake3
import json
import time

// ChatGroup represents a chat channel or conversation
@[heap]
pub struct ChatGroup {
pub mut:
	id            string // blake192 hash
	name          string
	description   string
	group_id      string // Associated group for permissions
	chat_type     ChatType
	messages      []string // IDs of chat messages
	created_at    i64
	updated_at    i64
	last_activity i64
	is_archived   bool
	tags          []string
}

pub enum ChatType {
	public_channel
	private_channel
	direct_message
	group_message
}

pub fn (mut c ChatGroup) calculate_id() {
	content := json.encode(ChatGroupContent{
		name: c.name
		description: c.description
		group_id: c.group_id
		chat_type: c.chat_type
		is_archived: c.is_archived
		tags: c.tags
	})
	hash := blake3.sum256(content.bytes())
	c.id = hash.hex()[..48]
}

struct ChatGroupContent {
	name        string
	description string
	group_id    string
	chat_type   ChatType
	is_archived bool
	tags        []string
}

pub fn new_chat_group(name string, group_id string, chat_type ChatType) ChatGroup {
	mut chat_group := ChatGroup{
		name: name
		group_id: group_id
		chat_type: chat_type
		created_at: time.now().unix()
		updated_at: time.now().unix()
		last_activity: time.now().unix()
	}
	chat_group.calculate_id()
	return chat_group
}

103	lib/hero/heromodels/chat_message.v	Normal file
@@ -0,0 +1,103 @@
module heromodels

import crypto.blake3
import json
import time

// ChatMessage represents a message in a chat group
@[heap]
pub struct ChatMessage {
pub mut:
	id              string // blake192 hash
	content         string
	chat_group_id   string // Associated chat group
	sender_id       string // User ID of sender
	parent_messages []MessageLink // Referenced/replied messages
	fs_files        []string // IDs of linked files
	message_type    MessageType
	status          MessageStatus
	created_at      i64
	updated_at      i64
	edited_at       i64
	deleted_at      i64
	reactions       []MessageReaction
	mentions        []string // User IDs mentioned in message
	tags            []string
}

pub struct MessageLink {
pub mut:
	message_id string
	link_type  MessageLinkType
}

pub enum MessageLinkType {
	reply
	reference
	forward
	quote
}

pub enum MessageType {
	text
	image
	file
	voice
	video
	system
	announcement
}

pub enum MessageStatus {
	sent
	delivered
	read
	failed
	deleted
}

pub struct MessageReaction {
pub mut:
	user_id   string
	emoji     string
	timestamp i64
}

pub fn (mut m ChatMessage) calculate_id() {
	content := json.encode(MessageContent{
		content: m.content
		chat_group_id: m.chat_group_id
		sender_id: m.sender_id
		parent_messages: m.parent_messages
		fs_files: m.fs_files
		message_type: m.message_type
		mentions: m.mentions
		tags: m.tags
	})
	hash := blake3.sum256(content.bytes())
	m.id = hash.hex()[..48]
}

struct MessageContent {
	content         string
	chat_group_id   string
	sender_id       string
	parent_messages []MessageLink
	fs_files        []string
	message_type    MessageType
	mentions        []string
	tags            []string
}

pub fn new_chat_message(content string, chat_group_id string, sender_id string) ChatMessage {
	mut message := ChatMessage{
		content: content
		chat_group_id: chat_group_id
		sender_id: sender_id
		message_type: .text
		status: .sent
		created_at: time.now().unix()
		updated_at: time.now().unix()
	}
	message.calculate_id()
	return message
}

102	lib/hero/heromodels/comment.v	Normal file
@@ -0,0 +1,102 @@
module heromodels

import freeflowuniverse.herolib.core.redisclient
import freeflowuniverse.herolib.data.encoder
import freeflowuniverse.herolib.data.ourtime

@[heap]
pub struct Comment {
pub mut:
	id         u32
	comment    string
	parent     u32 // id of parent comment if any, 0 means none
	updated_at i64
	author     u32 // links to user
}

pub fn (self Comment) dump() ![]u8 {
	// Create a new encoder
	mut e := encoder.new()
	e.add_u8(1)
	e.add_u32(self.id)
	e.add_string(self.comment)
	e.add_u32(self.parent)
	e.add_i64(self.updated_at)
	e.add_u32(self.author)
	return e.data
}

pub fn comment_load(data []u8) !Comment {
	// Create a new decoder
	mut e := encoder.decoder_new(data)
	version := e.get_u8()
	if version != 1 {
		return error('wrong version in comment load')
	}
	mut comment := Comment{}
	comment.id = e.get_u32()
	comment.comment = e.get_string()
	comment.parent = e.get_u32()
	comment.updated_at = e.get_i64()
	comment.author = e.get_u32()
	return comment
}

pub struct CommentArg {
pub mut:
	comment string
	parent  u32 // id of parent comment if any, 0 means none
	author  u32 // links to user
}

// get a new comment, not from the DB
pub fn comment_new(args CommentArg) !Comment {
	mut o := Comment{
		comment: args.comment
		parent: args.parent
		updated_at: ourtime.now().unix()
		author: args.author
	}
	return o
}

pub fn comment_multiset(args []CommentArg) ![]u32 {
	mut ids := []u32{}
	for comment in args {
		ids << comment_set(comment)!
	}
	return ids
}

pub fn comment_set(args CommentArg) !u32 {
	mut redis := redisclient.core_get()!
	mut o := comment_new(args)!
	myid := u32(redis.incr('db:comments:id')!)
	o.id = myid
	data := o.dump()!
	redis.hset('db:comments:data', myid.str(), data.bytestr())!
	return myid
}

pub fn comment_exist(id u32) !bool {
	mut redis := redisclient.core_get()!
	return redis.hexists('db:comments:data', id.str())!
}

pub fn comment_get(id u32) !Comment {
	mut redis := redisclient.core_get()!
	data := redis.hget('db:comments:data', id.str())!
	if data.len > 0 {
		return comment_load(data.bytes())!
	}
	return error("Can't find comment with id: ${id}")
}

71	lib/hero/heromodels/core_methods.v	Normal file
@@ -0,0 +1,71 @@
module heromodels

import freeflowuniverse.herolib.core.redisclient
import freeflowuniverse.herolib.data.ourtime

// Generic helpers: V generic functions use the `fn name[T](...)` syntax.
// These are still sketches; the per-type load dispatch is TODO.

pub fn set[T](obj T) ! {
	mut redis := redisclient.core_get()!
	data := obj.dump()!
	redis.hset('db:${T.name}', obj.id.str(), data.bytestr())!
}

pub fn get[T](id u32) !T {
	mut redis := redisclient.core_get()!
	data := redis.hget('db:${T.name}', id.str())!
	if data == '' {
		return error('could not find ${T.name} with id ${id}')
	}
	// TODO: dispatch to the matching <type>_load(data.bytes()) function for T
	return error('loader for ${T.name} not implemented yet')
}

pub fn exists[T](id u32) !bool {
	mut redis := redisclient.core_get()!
	return redis.hexists('db:${T.name}', id.str())!
}

pub fn delete[T](id u32) ! {
	mut redis := redisclient.core_get()!
	redis.hdel('db:${T.name}', id.str())!
}

// make it easy to get a base object
pub fn new_from_base[T](args BaseArgs) !T {
	commentids := comment_multiset(args.comments)!
	tags := tags2id(args.tags)!

	return T{
		id: args.id or { 0 }
		name: args.name
		description: args.description
		created_at: ourtime.now().unix()
		updated_at: ourtime.now().unix()
		securitypolicy: args.securitypolicy or { 0 }
		tags: tags
		comments: commentids
	}
}

37	lib/hero/heromodels/examples/example1.vsh	Normal file
@@ -0,0 +1,37 @@

// Create a user
mut user := new_user('John Doe', 'john@example.com')

// Create a group
mut group := new_group('Development Team', 'Software development group')
group.add_member(user.id, .admin)

// Create a project
mut project := new_project('Website Redesign', 'Redesign company website', group.id)

// Create an issue
mut issue := new_project_issue('Fix login bug', project.id, user.id, .bug)

// Create a calendar
mut calendar := new_calendar('Team Calendar', group.id)

// Create an event
mut event := new_calendar_event('Sprint Planning', 1672531200, 1672534800, calendar.id, user.id)
calendar.add_event(event.id)

// Create a filesystem
mut fs := new_fs('Team Files', group.id)

// Create a blob for file content
mut blob := new_fs_blob('Hello World!'.bytes())!

println('User ID: ${user.id}')
println('Group ID: ${group.id}')
println('Project ID: ${project.id}')
println('Issue ID: ${issue.id}')
println('Calendar ID: ${calendar.id}')
println('Event ID: ${event.id}')
println('Filesystem ID: ${fs.id}')
println('Blob ID: ${blob.id}')

51	lib/hero/heromodels/fs.v	Normal file
@@ -0,0 +1,51 @@
module heromodels

import crypto.blake3
import json
import time

// Fs represents a filesystem namespace
@[heap]
pub struct Fs {
pub mut:
	id          string // blake192 hash
	name        string
	description string
	group_id    string // Associated group for permissions
	root_dir_id string // ID of root directory
	created_at  i64
	updated_at  i64
	quota_bytes i64 // Storage quota in bytes
	used_bytes  i64 // Current usage in bytes
	tags        []string
}

pub fn (mut f Fs) calculate_id() {
	content := json.encode(FsContent{
		name: f.name
		description: f.description
		group_id: f.group_id
		quota_bytes: f.quota_bytes
		tags: f.tags
	})
	hash := blake3.sum256(content.bytes())
	f.id = hash.hex()[..48]
}

struct FsContent {
	name        string
	description string
	group_id    string
	quota_bytes i64
	tags        []string
}

pub fn new_fs(name string, group_id string) Fs {
	mut fs := Fs{
		name: name
		group_id: group_id
		created_at: time.now().unix()
		updated_at: time.now().unix()
	}
	fs.calculate_id()
	return fs
}

40	lib/hero/heromodels/fs_blob.v	Normal file
@@ -0,0 +1,40 @@
module heromodels

import crypto.blake3
import time

// FsBlob represents binary data up to 1MB
@[heap]
pub struct FsBlob {
pub mut:
	id         string // blake192 hash of content
	data       []u8   // Binary data (max 1MB)
	size_bytes int    // Size in bytes
	created_at i64
	mime_type  string
	encoding   string // e.g., "gzip", "none"
}

pub fn (mut b FsBlob) calculate_id() {
	hash := blake3.sum256(b.data)
	b.id = hash.hex()[..48] // blake192 = first 192 bits = 48 hex chars
}

pub fn new_fs_blob(data []u8) !FsBlob {
	if data.len > 1024 * 1024 { // 1MB limit
		return error('Blob size exceeds 1MB limit')
	}

	mut blob := FsBlob{
		data: data
		size_bytes: data.len
		created_at: time.now().unix()
		encoding: 'none'
	}
	blob.calculate_id()
	return blob
}

pub fn (b FsBlob) verify_integrity() bool {
	hash := blake3.sum256(b.data)
	return hash.hex()[..48] == b.id
}

52	lib/hero/heromodels/fs_dir.v	Normal file
@@ -0,0 +1,52 @@
module heromodels

import crypto.blake3
import json
import time

// FsDir represents a directory in a filesystem
@[heap]
pub struct FsDir {
pub mut:
	id         string // blake192 hash
	name       string
	fs_id      string // Associated filesystem
	parent_id  string // Parent directory ID (empty for root)
	group_id   string // Associated group for permissions
	children   []string // Child directory and file IDs
	created_at i64
	updated_at i64
	tags       []string
}

pub fn (mut d FsDir) calculate_id() {
	content := json.encode(DirContent{
		name: d.name
		fs_id: d.fs_id
		parent_id: d.parent_id
		group_id: d.group_id
		tags: d.tags
	})
	hash := blake3.sum256(content.bytes())
	d.id = hash.hex()[..48]
}

struct DirContent {
	name      string
	fs_id     string
	parent_id string
	group_id  string
	tags      []string
}

pub fn new_fs_dir(name string, fs_id string, parent_id string, group_id string) FsDir {
	mut dir := FsDir{
		name: name
		fs_id: fs_id
		parent_id: parent_id
		group_id: group_id
		created_at: time.now().unix()
		updated_at: time.now().unix()
	}
	dir.calculate_id()
	return dir
}

64	lib/hero/heromodels/fs_file.v	Normal file
@@ -0,0 +1,64 @@
module heromodels

import crypto.blake3
import json
import time

// FsFile represents a file in a filesystem
@[heap]
pub struct FsFile {
pub mut:
	id          string // blake192 hash
	name        string
	fs_id       string   // Associated filesystem
	directories []string // Directory IDs where this file exists
	blobs       []string // Blake192 IDs of file content blobs
	size_bytes  i64      // Total file size
	mime_type   string
	checksum    string // Overall file checksum
	created_at  i64
	updated_at  i64
	accessed_at i64
	tags        []string
	metadata    map[string]string // Custom metadata
}

pub fn (mut f FsFile) calculate_id() {
	content := json.encode(FileContent{
		name: f.name
		fs_id: f.fs_id
		directories: f.directories
		blobs: f.blobs
		size_bytes: f.size_bytes
		mime_type: f.mime_type
		checksum: f.checksum
		tags: f.tags
		metadata: f.metadata
	})
	hash := blake3.sum256(content.bytes())
	f.id = hash.hex()[..48]
}

struct FileContent {
	name        string
	fs_id       string
	directories []string
	blobs       []string
	size_bytes  i64
	mime_type   string
	checksum    string
	tags        []string
	metadata    map[string]string
}

pub fn new_fs_file(name string, fs_id string, directories []string) FsFile {
	mut file := FsFile{
		name: name
		fs_id: fs_id
		directories: directories
		created_at: time.now().unix()
		updated_at: time.now().unix()
		accessed_at: time.now().unix()
	}
	file.calculate_id()
	return file
}
|
||||
60	lib/hero/heromodels/fs_symlink.v	Normal file
@@ -0,0 +1,60 @@
module heromodels

import crypto.blake3
import json
import time

// FsSymlink represents a symbolic link in a filesystem
@[heap]
pub struct FsSymlink {
pub mut:
	id          string // blake192 hash
	name        string
	fs_id       string // Associated filesystem
	parent_id   string // Parent directory ID
	target_id   string // ID of target file or directory
	target_type SymlinkTargetType
	created_at  i64
	updated_at  i64
	tags        []string
}

pub enum SymlinkTargetType {
	file
	directory
}

pub fn (mut s FsSymlink) calculate_id() {
	content := json.encode(SymlinkContent{
		name:        s.name
		fs_id:       s.fs_id
		parent_id:   s.parent_id
		target_id:   s.target_id
		target_type: s.target_type
		tags:        s.tags
	})
	hash := blake3.sum256(content.bytes())
	s.id = hash.hex()[..48]
}

struct SymlinkContent {
	name        string
	fs_id       string
	parent_id   string
	target_id   string
	target_type SymlinkTargetType
	tags        []string
}

pub fn new_fs_symlink(name string, fs_id string, parent_id string, target_id string, target_type SymlinkTargetType) FsSymlink {
	mut symlink := FsSymlink{
		name:        name
		fs_id:       fs_id
		parent_id:   parent_id
		target_id:   target_id
		target_type: target_type
		created_at:  time.now().unix_time()
		updated_at:  time.now().unix_time()
	}
	symlink.calculate_id()
	return symlink
}
80	lib/hero/heromodels/group.v	Normal file
@@ -0,0 +1,80 @@
module heromodels

import crypto.blake3
import json
import time

// Group represents a collection of users with roles and permissions
@[heap]
pub struct Group {
pub mut:
	id           string // blake192 hash
	name         string
	description  string
	members      []GroupMember
	subgroups    []string // IDs of child groups
	parent_group string   // ID of parent group
	created_at   i64
	updated_at   i64
	is_public    bool
	tags         []string
}

pub struct GroupMember {
pub mut:
	user_id   string
	role      GroupRole
	joined_at i64
}

pub enum GroupRole {
	reader
	writer
	admin
	owner
}

pub fn (mut g Group) calculate_id() {
	content := json.encode(GroupContent{
		name:         g.name
		description:  g.description
		members:      g.members
		subgroups:    g.subgroups
		parent_group: g.parent_group
		is_public:    g.is_public
		tags:         g.tags
	})
	hash := blake3.sum256(content.bytes())
	g.id = hash.hex()[..48]
}

struct GroupContent {
	name         string
	description  string
	members      []GroupMember
	subgroups    []string
	parent_group string
	is_public    bool
	tags         []string
}

pub fn new_group(name string, description string) Group {
	mut group := Group{
		name:        name
		description: description
		created_at:  time.now().unix_time()
		updated_at:  time.now().unix_time()
		is_public:   false
	}
	group.calculate_id()
	return group
}

pub fn (mut g Group) add_member(user_id string, role GroupRole) {
	g.members << GroupMember{
		user_id:   user_id
		role:      role
		joined_at: time.now().unix_time()
	}
	g.updated_at = time.now().unix_time()
	g.calculate_id()
}
31	lib/hero/heromodels/instructions.md	Normal file
@@ -0,0 +1,31 @@
distill vlang objects out of the calendar/contact/circle and create the missing parts

organize per root object; root objects are @[heap] and each goes in a separate file called name.v

the root objects are

- user
- group (which users are members and in which role; can be admin, writer, reader; can be linked to subgroups)
- calendar (references to event, group)
- calendar_event (everything related to an event on a calendar, link to one or more fs_file)
- project (grouping per project, defines swimlanes and milestones; this allows us to visualize as kanban; link to group, link to one or more fs_file)
- project_issue (an issue is a specific type, e.g. task, story, bug, question, ...), issue is linked to project by id, also defines priority, on which swimlane, deadline, assignees, ... has tags, link to one or more fs_file
- chat_group (link to group, name/description/tags)
- chat_message (link to chat_group, link to parent_chat_messages and what type of link, e.g. reply or reference; status, ... link to one or more fs_file)
- fs = filesystem (link to group)
- fs_dir = directory in filesystem, link to parent, link to group
- fs_file (link to one or more fs_dir, list of references to blobs as blake192)
- fs_symlink (can be a link to dir or file)
- fs_blob (the data itself, max size 1 MB, binary data, id = blake192)

the groups define how people can interact with the parts, e.g. a calendar is linked to a group, so readers of that group can read and have a copy of the info linked to that group

all the objects are identified by their blake192 (based on the content)

there is a special table which has the link between a blake192 and its previous & next version, so we can always walk the tree; both fields are indexed (this is independent of the type of object)
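The blake192 scheme described above can be sketched in a few lines; this is a minimal illustration only, mirroring the `crypto.blake3` usage already present in the model files of this commit (the `Example` struct is hypothetical):

```v
// Sketch of the blake192 id scheme: first 192 bits (48 hex chars)
// of a blake3 hash over the object's JSON-encoded content.
import crypto.blake3
import json

struct Example {
	name string
}

fn blake192(content string) string {
	hash := blake3.sum256(content.bytes())
	return hash.hex()[..48] // 192 bits = 48 hex chars
}

fn main() {
	id := blake192(json.encode(Example{ name: 'demo' }))
	println(id.len) // 48
}
```

Because the id is derived purely from content, two objects with identical fields share one id, which is what makes the version table below necessary to track history.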
102	lib/hero/heromodels/project.v	Normal file
@@ -0,0 +1,102 @@
module heromodels

import crypto.blake3
import json
import time

// Project represents a collection of issues organized in swimlanes
@[heap]
pub struct Project {
pub mut:
	id          string // blake192 hash
	name        string
	description string
	group_id    string // Associated group for permissions
	swimlanes   []Swimlane
	milestones  []Milestone
	issues      []string // IDs of project issues
	fs_files    []string // IDs of linked files
	status      ProjectStatus
	start_date  i64
	end_date    i64
	created_at  i64
	updated_at  i64
	tags        []string
}

pub struct Swimlane {
pub mut:
	id          string
	name        string
	description string
	order       int
	color       string
	is_done     bool
}

pub struct Milestone {
pub mut:
	id          string
	name        string
	description string
	due_date    i64
	completed   bool
	issues      []string // IDs of issues in this milestone
}

pub enum ProjectStatus {
	planning
	active
	on_hold
	completed
	cancelled
}

pub fn (mut p Project) calculate_id() {
	content := json.encode(ProjectContent{
		name:        p.name
		description: p.description
		group_id:    p.group_id
		swimlanes:   p.swimlanes
		milestones:  p.milestones
		issues:      p.issues
		fs_files:    p.fs_files
		status:      p.status
		start_date:  p.start_date
		end_date:    p.end_date
		tags:        p.tags
	})
	hash := blake3.sum256(content.bytes())
	p.id = hash.hex()[..48]
}

struct ProjectContent {
	name        string
	description string
	group_id    string
	swimlanes   []Swimlane
	milestones  []Milestone
	issues      []string
	fs_files    []string
	status      ProjectStatus
	start_date  i64
	end_date    i64
	tags        []string
}

pub fn new_project(name string, description string, group_id string) Project {
	mut project := Project{
		name:        name
		description: description
		group_id:    group_id
		status:      .planning
		created_at:  time.now().unix_time()
		updated_at:  time.now().unix_time()
		swimlanes:   [
			Swimlane{id: 'todo', name: 'To Do', order: 1, color: '#f1c40f'},
			Swimlane{id: 'in_progress', name: 'In Progress', order: 2, color: '#3498db'},
			Swimlane{id: 'done', name: 'Done', order: 3, color: '#2ecc71', is_done: true},
		]
	}
	project.calculate_id()
	return project
}
115	lib/hero/heromodels/project_issue.v	Normal file
@@ -0,0 +1,115 @@
module heromodels

import crypto.blake3
import json
import time

// ProjectIssue represents a task, story, bug, or question in a project
@[heap]
pub struct ProjectIssue {
pub mut:
	id           string // blake192 hash
	title        string
	description  string
	project_id   string // Associated project
	issue_type   IssueType
	priority     IssuePriority
	status       IssueStatus
	swimlane_id  string   // Current swimlane
	assignees    []string // User IDs
	reporter     string   // User ID who created the issue
	milestone_id string   // Associated milestone
	deadline     i64      // Unix timestamp
	estimate     int      // Story points or hours
	fs_files     []string // IDs of linked files
	parent_id    string   // Parent issue ID (for sub-tasks)
	children     []string // Child issue IDs
	created_at   i64
	updated_at   i64
	tags         []string
}

pub enum IssueType {
	task
	story
	bug
	question
	epic
	subtask
}

pub enum IssuePriority {
	lowest
	low
	medium
	high
	highest
	critical
}

pub enum IssueStatus {
	open
	in_progress
	blocked
	review
	testing
	done
	closed
}

pub fn (mut i ProjectIssue) calculate_id() {
	content := json.encode(IssueContent{
		title:        i.title
		description:  i.description
		project_id:   i.project_id
		issue_type:   i.issue_type
		priority:     i.priority
		status:       i.status
		swimlane_id:  i.swimlane_id
		assignees:    i.assignees
		reporter:     i.reporter
		milestone_id: i.milestone_id
		deadline:     i.deadline
		estimate:     i.estimate
		fs_files:     i.fs_files
		parent_id:    i.parent_id
		children:     i.children
		tags:         i.tags
	})
	hash := blake3.sum256(content.bytes())
	i.id = hash.hex()[..48]
}

struct IssueContent {
	title        string
	description  string
	project_id   string
	issue_type   IssueType
	priority     IssuePriority
	status       IssueStatus
	swimlane_id  string
	assignees    []string
	reporter     string
	milestone_id string
	deadline     i64
	estimate     int
	fs_files     []string
	parent_id    string
	children     []string
	tags         []string
}

pub fn new_project_issue(title string, project_id string, reporter string, issue_type IssueType) ProjectIssue {
	mut issue := ProjectIssue{
		title:       title
		project_id:  project_id
		reporter:    reporter
		issue_type:  issue_type
		priority:    .medium
		status:      .open
		swimlane_id: 'todo'
		created_at:  time.now().unix_time()
		updated_at:  time.now().unix_time()
	}
	issue.calculate_id()
	return issue
}
67	lib/hero/heromodels/user.v	Normal file
@@ -0,0 +1,67 @@
module heromodels

import crypto.blake3
import json
import time

// User represents a person in the system
@[heap]
pub struct User {
pub mut:
	id         string // blake192 hash
	name       string
	email      string
	public_key string // for encryption/signing
	phone      string
	address    string
	avatar_url string
	bio        string
	timezone   string
	created_at i64
	updated_at i64
	status     UserStatus
}

pub enum UserStatus {
	active
	inactive
	suspended
	pending
}

pub fn (mut u User) calculate_id() {
	content := json.encode(UserContent{
		name:       u.name
		email:      u.email
		public_key: u.public_key
		phone:      u.phone
		address:    u.address
		bio:        u.bio
		timezone:   u.timezone
		status:     u.status
	})
	hash := blake3.sum256(content.bytes())
	u.id = hash.hex()[..48] // blake192 = first 192 bits = 48 hex chars
}

struct UserContent {
	name       string
	email      string
	public_key string
	phone      string
	address    string
	bio        string
	timezone   string
	status     UserStatus
}

pub fn new_user(name string, email string) User {
	mut user := User{
		name:       name
		email:      email
		created_at: time.now().unix_time()
		updated_at: time.now().unix_time()
		status:     .active
	}
	user.calculate_id()
	return user
}
40	lib/hero/heromodels/version_history.v	Normal file
@@ -0,0 +1,40 @@
module heromodels

import time

// VersionHistory tracks the evolution of objects by their blake192 IDs
@[heap]
pub struct VersionHistory {
pub mut:
	current_id   string // blake192 hash of current version
	previous_id  string // blake192 hash of previous version
	next_id      string // blake192 hash of next version (if exists)
	object_type  string // Type of object (User, Group, etc.)
	change_type  ChangeType
	changed_by   string // User ID who made the change
	changed_at   i64    // Unix timestamp
	change_notes string // Optional description of changes
}

pub enum ChangeType {
	create
	update
	delete
	restore
}

pub fn new_version_history(current_id string, previous_id string, object_type string, change_type ChangeType, changed_by string) VersionHistory {
	return VersionHistory{
		current_id:  current_id
		previous_id: previous_id
		object_type: object_type
		change_type: change_type
		changed_by:  changed_by
		changed_at:  time.now().unix_time()
	}
}

// Database indexes needed:
// - Index on current_id for fast lookup
// - Index on previous_id for walking backward
// - Index on next_id for walking forward
// - Index on object_type for filtering by type
// - Index on changed_by for user activity tracking
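Given the indexes listed above, walking a version chain backwards reduces to repeated lookups on `previous_id`. A minimal sketch, assuming a hypothetical `history_get` lookup function (indexed by `current_id`, not part of this commit):

```v
// Sketch only: collect all ancestor versions of start_id, newest first.
// history_get is a hypothetical lookup returning none when no record exists.
fn walk_back(start_id string, history_get fn (string) ?VersionHistory) []string {
	mut chain := []string{}
	mut current := start_id
	for current != '' {
		chain << current
		entry := history_get(current) or { break }
		current = entry.previous_id
	}
	return chain
}
```

Walking forward is symmetric over `next_id`, which is why both fields carry an index.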
@@ -1 +0,0 @@
../../../../../git.threefold.info/herocode/db/specs/models
49	lib/installers/base/play.v	Normal file
@@ -0,0 +1,49 @@
module base

import freeflowuniverse.herolib.core.playbook
import freeflowuniverse.herolib.ui.console

pub fn play(mut plbook playbook.PlayBook) ! {
	if plbook.exists(filter: 'base.install') {
		console.print_header('play base.install')
		for mut action in plbook.find(filter: 'base.install')! {
			mut p := action.params
			install(
				reset:   p.get_default_false('reset')
				develop: p.get_default_false('develop')
			)!
			action.done = true
		}
	}
	if plbook.exists(filter: 'base.develop') {
		console.print_header('play base.develop')
		for mut action in plbook.find(filter: 'base.develop')! {
			mut p := action.params
			develop(
				reset: p.get_default_false('reset')
			)!
			action.done = true
		}
	}
	if plbook.exists(filter: 'base.redis_install') {
		console.print_header('play base.redis_install')
		for action in plbook.find(filter: 'base.redis_install')! {
			mut p := action.params
			redis_install(
				port:   p.get_int_default('port', 6379)!
				ipaddr: p.get_default('ipaddr', 'localhost')!
				reset:  p.get_default_false('reset')
				start:  p.get_default_true('start')
			)!
		}
	}
	// if plbook.exists(filter: 'base.sshkeysinstall') {
	// 	console.print_header('play base.sshkeysinstall')
	// 	for action in plbook.find(filter: 'base.sshkeysinstall')! {
	// 		mut p := action.params
	// 		sshkeysinstall(
	// 			reset: p.get_default_false('reset')
	// 		)!
	// 	}
	// }
}
54	lib/installers/base/readme.md	Normal file
@@ -0,0 +1,54 @@
# Installer - Base Module

This module provides heroscript actions to install and configure base system dependencies.

## Actions

### `base.install`

Installs base packages for the detected operating system (OSX, Ubuntu, Alpine, Arch).

**Parameters:**

- `reset` (bool): If true, reinstalls packages even if they are already present. Default: `false`.
- `develop` (bool): If true, installs development packages. Default: `false`.

**Example:**

```heroscript
!!base.install
    develop: true
```

### `base.develop`

Installs development packages for the detected operating system.

**Parameters:**

- `reset` (bool): If true, reinstalls packages. Default: `false`.

**Example:**

```heroscript
!!base.develop
    reset: true
```

### `base.redis_install`

Installs and configures Redis server.

**Parameters:**

- `port` (int): Port for Redis to listen on. Default: `6379`.
- `ipaddr` (string): IP address to bind to. Default: `localhost`.
- `reset` (bool): If true, reinstalls and reconfigures Redis. Default: `false`.
- `start` (bool): If true, starts the Redis server after installation. Default: `true`.

**Example:**

```heroscript
!!base.redis_install
    port: 6380
```
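The actions documented above can also be combined in a single heroscript run; a sketch using only the documented parameters and their defaults:

```heroscript
!!base.install
    develop: true

!!base.redis_install
    port: 6380
    ipaddr: localhost
    start: true
```

Actions execute in order, so base packages are present before Redis is installed and started.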
@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&CometBFT {
 	if r.hexists('context:cometbft', args.name)! {
 		data := r.hget('context:cometbft', args.name)!
 		if data.len == 0 {
+			print_backtrace()
 			return error('CometBFT with name: cometbft does not exist, prob bug.')
 		}
 		mut obj := json.decode(CometBFT, data)!
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&CometBFT {
 		if args.create {
 			new(args)!
 		} else {
+			print_backtrace()
 			return error("CometBFT with name 'cometbft' does not exist")
 		}
 	}
 	return get(name: args.name)! // no longer from db nor create
 }
 return cometbft_global[args.name] or {
+	print_backtrace()
 	return error('could not get config for cometbft with name:cometbft')
 }
}
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'cometbft.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'cometbft.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				cometbft_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: cometbft' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: cometbft' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: cometbft' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: cometbft' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -208,7 +213,7 @@ pub fn (mut self CometBFT) start() ! {
 		return
 	}

-	console.print_header('cometbft start')
+	console.print_header('installer: cometbft start')

 	if !installed()! {
 		install()!
@@ -221,7 +226,7 @@ pub fn (mut self CometBFT) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting cometbft with ${zprocess.startuptype}...')
+		console.print_debug('installer: cometbft starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!

@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&MeilisearchInstaller {
 	if r.hexists('context:meilisearch_installer', args.name)! {
 		data := r.hget('context:meilisearch_installer', args.name)!
 		if data.len == 0 {
+			print_backtrace()
 			return error('MeilisearchInstaller with name: meilisearch_installer does not exist, prob bug.')
 		}
 		mut obj := json.decode(MeilisearchInstaller, data)!
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&MeilisearchInstaller {
 		if args.create {
 			new(args)!
 		} else {
+			print_backtrace()
 			return error("MeilisearchInstaller with name 'meilisearch_installer' does not exist")
 		}
 	}
 	return get(name: args.name)! // no longer from db nor create
 }
 return meilisearch_installer_global[args.name] or {
+	print_backtrace()
 	return error('could not get config for meilisearch_installer with name:meilisearch_installer')
 }
}
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'meilisearch_installer.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'meilisearch_installer.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				meilisearch_installer_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: meilisearch_installer' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: meilisearch_installer' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: meilisearch_installer' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: meilisearch_installer' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -208,7 +213,7 @@ pub fn (mut self MeilisearchInstaller) start() ! {
 		return
 	}

-	console.print_header('meilisearch_installer start')
+	console.print_header('installer: meilisearch_installer start')

 	if !installed()! {
 		install()!
@@ -221,7 +226,7 @@ pub fn (mut self MeilisearchInstaller) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting meilisearch_installer with ${zprocess.startuptype}...')
+		console.print_debug('installer: meilisearch_installer starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!

@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&Postgresql {
 	if r.hexists('context:postgresql', args.name)! {
 		data := r.hget('context:postgresql', args.name)!
 		if data.len == 0 {
+			print_backtrace()
 			return error('Postgresql with name: postgresql does not exist, prob bug.')
 		}
 		mut obj := json.decode(Postgresql, data)!
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&Postgresql {
 		if args.create {
 			new(args)!
 		} else {
+			print_backtrace()
 			return error("Postgresql with name 'postgresql' does not exist")
 		}
 	}
 	return get(name: args.name)! // no longer from db nor create
 }
 return postgresql_global[args.name] or {
+	print_backtrace()
 	return error('could not get config for postgresql with name:postgresql')
 }
}
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'postgresql.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'postgresql.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				postgresql_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: postgresql' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: postgresql' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: postgresql' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: postgresql' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -206,7 +211,7 @@ pub fn (mut self Postgresql) start() ! {
 		return
 	}

-	console.print_header('postgresql start')
+	console.print_header('installer: postgresql start')

 	if !installed()! {
 		install()!
@@ -219,7 +224,7 @@ pub fn (mut self Postgresql) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting postgresql with ${zprocess.startuptype}...')
+		console.print_debug('installer: postgresql starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!

@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&QDrant {
|
||||
if r.hexists('context:qdrant_installer', args.name)! {
|
||||
data := r.hget('context:qdrant_installer', args.name)!
|
||||
if data.len == 0 {
|
||||
print_backtrace()
|
||||
return error('QDrant with name: qdrant_installer does not exist, prob bug.')
|
||||
}
|
||||
mut obj := json.decode(QDrant, data)!
|
||||
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&QDrant {
|
||||
if args.create {
|
||||
new(args)!
|
||||
} else {
|
||||
print_backtrace()
|
||||
return error("QDrant with name 'qdrant_installer' does not exist")
|
||||
}
|
||||
}
|
||||
return get(name: args.name)! // no longer from db nor create
|
||||
}
|
||||
return qdrant_installer_global[args.name] or {
|
||||
print_backtrace()
|
||||
return error('could not get config for qdrant_installer with name:qdrant_installer')
|
||||
}
|
||||
}
|
||||
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'qdrant_installer.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'qdrant_installer.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				qdrant_installer_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: qdrant_installer' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: qdrant_installer' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: qdrant_installer' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: qdrant_installer' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -208,7 +213,7 @@ pub fn (mut self QDrant) start() ! {
 		return
 	}

-	console.print_header('qdrant_installer start')
+	console.print_header('installer: qdrant_installer start')

 	if !installed()! {
 		install()!
@@ -221,7 +226,7 @@ pub fn (mut self QDrant) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting qdrant_installer with ${zprocess.startuptype}...')
+		console.print_debug('installer: qdrant_installer starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!
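The play flow patched above is driven by heroscript actions; a minimal playbook sketch that would exercise the new `done` marking (the instance name and `reset` value are illustrative assumptions, not taken from this diff):

```heroscript
!!qdrant_installer.configure name:'default'
!!qdrant_installer.install name:'default' reset:1
```

Each matching action is processed once and then flagged `done`, so repeated play runs skip it.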
@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&ZeroDB {
 	if r.hexists('context:zerodb', args.name)! {
 		data := r.hget('context:zerodb', args.name)!
 		if data.len == 0 {
+			print_backtrace()
 			return error('ZeroDB with name: zerodb does not exist, prob bug.')
 		}
 		mut obj := json.decode(ZeroDB, data)!
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&ZeroDB {
 		if args.create {
 			new(args)!
 		} else {
+			print_backtrace()
 			return error("ZeroDB with name 'zerodb' does not exist")
 		}
 	}
 	return get(name: args.name)! // no longer from db nor create
 }
 return zerodb_global[args.name] or {
+	print_backtrace()
 	return error('could not get config for zerodb with name:zerodb')
 }
 }
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'zerodb.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'zerodb.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				zerodb_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: zerodb' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: zerodb' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: zerodb' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: zerodb' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -206,7 +211,7 @@ pub fn (mut self ZeroDB) start() ! {
 		return
 	}

-	console.print_header('zerodb start')
+	console.print_header('installer: zerodb start')

 	if !installed()! {
 		install()!
@@ -219,7 +224,7 @@ pub fn (mut self ZeroDB) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting zerodb with ${zprocess.startuptype}...')
+		console.print_debug('installer: zerodb starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!
@@ -36,7 +36,7 @@ pub fn play(mut plbook PlayBook) ! {
 		return error("can't configure zerofs, because no configuration allowed for this installer.")
 	}
 	mut other_actions := plbook.find(filter: 'zerofs.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -68,6 +68,7 @@ pub fn play(mut plbook PlayBook) ! {
 				zerofs_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -83,19 +84,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: zerofs' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: zerofs' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: zerofs' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: zerofs' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -106,7 +107,7 @@ pub fn (mut self ZeroFS) start() ! {
 		return
 	}

-	console.print_header('zerofs start')
+	console.print_header('installer: zerofs start')

 	if !installed()! {
 		install()!
@@ -119,7 +120,7 @@ pub fn (mut self ZeroFS) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting zerofs with ${zprocess.startuptype}...')
+		console.print_debug('installer: zerofs starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!
@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&CoreDNS {
 	if r.hexists('context:coredns', args.name)! {
 		data := r.hget('context:coredns', args.name)!
 		if data.len == 0 {
+			print_backtrace()
 			return error('CoreDNS with name: coredns does not exist, prob bug.')
 		}
 		mut obj := json.decode(CoreDNS, data)!
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&CoreDNS {
 		if args.create {
 			new(args)!
 		} else {
+			print_backtrace()
 			return error("CoreDNS with name 'coredns' does not exist")
 		}
 	}
 	return get(name: args.name)! // no longer from db nor create
 }
 return coredns_global[args.name] or {
+	print_backtrace()
 	return error('could not get config for coredns with name:coredns')
 }
 }
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'coredns.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'coredns.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				coredns_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: coredns' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: coredns' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: coredns' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: coredns' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -206,7 +211,7 @@ pub fn (mut self CoreDNS) start() ! {
 		return
 	}

-	console.print_header('coredns start')
+	console.print_header('installer: coredns start')

 	if !installed()! {
 		install()!
@@ -219,7 +224,7 @@ pub fn (mut self CoreDNS) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting coredns with ${zprocess.startuptype}...')
+		console.print_debug('installer: coredns starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!
@@ -1,43 +1,19 @@
-# coredns
-
-coredns
-
-To get started
-
-```v
-
-import freeflowuniverse.herolib.installers.infra.coredns as coredns_installer
-
-heroscript:="
-!!coredns.configure name:'test'
-    config_path: '/etc/coredns/Corefile'
-    dnszones_path: '/etc/coredns/zones'
-    plugins: 'forward,cache'
-    example: true
-
-!!coredns.start name:'test' reset:1
-"
-
-coredns_installer.play(heroscript=heroscript)!
-
-//or we can call the default and do a start with reset
-//mut installer:= coredns_installer.get()!
-//installer.start(reset:true)!
-
-```
-
-## example heroscript
-
-```hero
-!!coredns.configure
-    name: 'custom'
-    config_path: '/etc/coredns/Corefile'
-    config_url: 'https://github.com/example/coredns-config'
-    dnszones_path: '/etc/coredns/zones'
-    dnszones_url: 'https://github.com/example/dns-zones'
-    plugins: 'forward,cache'
-    example: false
-```
+# Installer - CoreDNS Module
+
+This module provides heroscript actions for installing and managing CoreDNS.
+
+## Actions
+
+### `coredns.install`
+
+Installs the CoreDNS server.
+
+**Parameters:**
+
+- `reset` (bool): If true, force a reinstall even if CoreDNS is already detected. Default: `false`.
+
+**Example:**
+
+```heroscript
+!!coredns.install
+    reset: true
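A combined sketch of the configure and install actions shown above, reusing the parameter values that appear in this diff (the paths and plugin list are examples from the old README, not defaults guaranteed by the installer):

```heroscript
!!coredns.configure name:'default'
    config_path: '/etc/coredns/Corefile'
    dnszones_path: '/etc/coredns/zones'
    plugins: 'forward,cache'

!!coredns.install
    reset: true
```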
@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&GiteaServer {
 	if r.hexists('context:gitea', args.name)! {
 		data := r.hget('context:gitea', args.name)!
 		if data.len == 0 {
+			print_backtrace()
 			return error('GiteaServer with name: gitea does not exist, prob bug.')
 		}
 		mut obj := json.decode(GiteaServer, data)!
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&GiteaServer {
 		if args.create {
 			new(args)!
 		} else {
+			print_backtrace()
 			return error("GiteaServer with name 'gitea' does not exist")
 		}
 	}
 	return get(name: args.name)! // no longer from db nor create
 }
 return gitea_global[args.name] or {
+	print_backtrace()
 	return error('could not get config for gitea with name:gitea')
 }
 }
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'gitea.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'gitea.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				gitea_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: gitea' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: gitea' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: gitea' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: gitea' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -206,7 +211,7 @@ pub fn (mut self GiteaServer) start() ! {
 		return
 	}

-	console.print_header('gitea start')
+	console.print_header('installer: gitea start')

 	if !installed()! {
 		install()!
@@ -219,7 +224,7 @@ pub fn (mut self GiteaServer) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting gitea with ${zprocess.startuptype}...')
+		console.print_debug('installer: gitea starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!
lib/installers/infra/gitea/play.v — new file (23 lines)
@@ -0,0 +1,23 @@
+module gitea
+
+import freeflowuniverse.herolib.core.playbook { PlayBook }
+import freeflowuniverse.herolib.ui.console
+import freeflowuniverse.herolib.installers.infra.gitea { install }
+
+pub fn play(mut plbook PlayBook) ! {
+	if !plbook.exists(filter: 'gitea.') {
+		return
+	}
+
+	mut install_action := plbook.ensure_once(filter: 'gitea.install')!
+	mut p := install_action.params
+
+	mut args := InstallArgs{
+		reset: p.get_default_false('reset')
+	}
+
+	console.print_header('Executing gitea.install action')
+	install(args)!
+
+	install_action.done = true
+}
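The new `play` entrypoint above reacts only to a `gitea.install` action and reads a single optional `reset` parameter; a minimal heroscript that would trigger it (a sketch, assuming the default instance):

```heroscript
!!gitea.install reset:1
```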
@@ -1,29 +1,19 @@
-# gitea
-
-To get started
-
-```v
-
-import freeflowuniverse.herolib.installers.infra.gitea as gitea_installer
-
-//if you want to configure using heroscript
-gitea_installer.play(heroscript:'
-    !!gitea.configure name:test
-        passwd:'something'
-        domain: 'docs.info.com'
-    ')!
-
-mut installer:= gitea_installer.get(name:'test')!
-installer.start()!
-
-```
-
-this will look for a configured mail & postgresql client both on instance name: "default", change in heroscript if needed
-
-- postgresql_client_name = "default"
-- mail_client_name = "default"
+# Installer - Gitea Module
+
+This module provides heroscript actions for installing and managing Gitea.
+
+## Actions
+
+### `gitea.install`
+
+Installs the Gitea Git service.
+
+**Parameters:**
+
+- `reset` (bool): If true, force a reinstall even if Gitea is already detected. Default: `false`.
+
+**Example:**
+
+```heroscript
+!!gitea.install
+    reset: true
@@ -135,19 +135,13 @@ fn install() ! {

 fn destroy() ! {
 	console.print_header('removing livekit')
+	osal.process_kill_recursive(name: 'livekit') or {
+		return error('Could not kill livekit due to: ${err}')
+	}
 	res := os.execute('sudo rm -rf /usr/local/bin/livekit-server')
 	if res.exit_code != 0 {
 		return error('Failed to remove LiveKit server')
 	}

-	mut zinit_factory := zinit.new()!
-	if zinit_factory.exists('livekit') {
-		zinit_factory.stop('livekit') or {
-			return error('Could not stop livekit service due to: ${err}')
-		}
-		zinit_factory.delete('livekit') or {
-			return error('Could not delete livekit service due to: ${err}')
-		}
-	}
 	console.print_header('livekit removed')
 }
@@ -38,6 +38,7 @@ pub fn get(args ArgsGet) !&LivekitServer {
 	if r.hexists('context:livekit', args.name)! {
 		data := r.hget('context:livekit', args.name)!
 		if data.len == 0 {
+			print_backtrace()
 			return error('LivekitServer with name: livekit does not exist, prob bug.')
 		}
 		mut obj := json.decode(LivekitServer, data)!
@@ -46,12 +47,14 @@ pub fn get(args ArgsGet) !&LivekitServer {
 		if args.create {
 			new(args)!
 		} else {
+			print_backtrace()
 			return error("LivekitServer with name 'livekit' does not exist")
 		}
 	}
 	return get(name: args.name)! // no longer from db nor create
 }
 return livekit_global[args.name] or {
+	print_backtrace()
 	return error('could not get config for livekit with name:livekit')
 }
 }
@@ -124,14 +127,15 @@ pub fn play(mut plbook PlayBook) ! {
 	}
 	mut install_actions := plbook.find(filter: 'livekit.configure')!
 	if install_actions.len > 0 {
-		for install_action in install_actions {
+		for mut install_action in install_actions {
 			heroscript := install_action.heroscript()
 			mut obj2 := heroscript_loads(heroscript)!
 			set(obj2)!
+			install_action.done = true
 		}
 	}
 	mut other_actions := plbook.find(filter: 'livekit.')!
-	for other_action in other_actions {
+	for mut other_action in other_actions {
 		if other_action.name in ['destroy', 'install', 'build'] {
 			mut p := other_action.params
 			reset := p.get_default_false('reset')
@@ -163,6 +167,7 @@ pub fn play(mut plbook PlayBook) ! {
 				livekit_obj.restart()!
 			}
 		}
+		other_action.done = true
 	}
 }

@@ -178,19 +183,19 @@ fn startupmanager_get(cat startupmanager.StartupManagerType) !startupmanager.Sta
 	// systemd
 	match cat {
 		.screen {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: livekit' startupmanager get screen")
 			return startupmanager.get(.screen)!
 		}
 		.zinit {
-			console.print_debug('startupmanager: zinit')
+			console.print_debug("installer: livekit' startupmanager get zinit")
 			return startupmanager.get(.zinit)!
 		}
 		.systemd {
-			console.print_debug('startupmanager: systemd')
+			console.print_debug("installer: livekit' startupmanager get systemd")
 			return startupmanager.get(.systemd)!
 		}
 		else {
-			console.print_debug('startupmanager: auto')
+			console.print_debug("installer: livekit' startupmanager get auto")
 			return startupmanager.get(.auto)!
 		}
 	}
@@ -208,7 +213,7 @@ pub fn (mut self LivekitServer) start() ! {
 		return
 	}

-	console.print_header('livekit start')
+	console.print_header('installer: livekit start')

 	if !installed()! {
 		install()!
@@ -221,7 +226,7 @@ pub fn (mut self LivekitServer) start() ! {
 	for zprocess in startupcmd()! {
 		mut sm := startupmanager_get(zprocess.startuptype)!

-		console.print_debug('starting livekit with ${zprocess.startuptype}...')
+		console.print_debug('installer: livekit starting with ${zprocess.startuptype}...')

 		sm.new(zprocess)!
@@ -5,7 +5,7 @@ import freeflowuniverse.herolib.core.pathlib
 import freeflowuniverse.herolib.ui.console
 import os

-pub const version = '1.7.2'
+pub const version = '1.9.0'
 const singleton = false
 const default = true
@@ -1,22 +1,19 @@
-# livekit
-
-To get started
-
-```v
-
-import freeflowuniverse.herolib.installers.something. livekit
-
-mut installer:= livekit.get()!
-
-installer.start()!
-
-```
-
-livekit once installed will have generated the secret keys
+# Installer - Livekit Module
+
+This module provides heroscript actions for installing and managing Livekit.
+
+## Actions
+
+### `livekit.install`
+
+Installs the Livekit server.
+
+**Parameters:**
+
+- `reset` (bool): If true, force a reinstall even if Livekit is already detected. Default: `false`.
+
+**Example:**
+
+```heroscript
+!!livekit.install
+    reset: true
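Following the same pattern as the other installers touched in this change, a minimal invocation sketch for the action documented above (instance selection and value syntax assumed):

```heroscript
!!livekit.install reset:1
```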
@@ -1,13 +0,0 @@
-
-!!hero_code.generate_installer
-    name:'screen'
-    classname:'Screen'
-    singleton:0
-    templates:0
-    default:1
-    title:''
-    supported_platforms:''
-    reset:0
-    startupmanager:0
-    hasconfig:0
-    build:0
@@ -1,44 +0,0 @@
-# screen
-
-To get started
-
-```v
-
-import freeflowuniverse.herolib.installers.something.screen as screen_installer
-
-heroscript:="
-!!screen.configure name:'test'
-    password: '1234'
-    port: 7701
-
-!!screen.start name:'test' reset:1
-"
-
-screen_installer.play(heroscript=heroscript)!
-
-//or we can call the default and do a start with reset
-//mut installer:= screen_installer.get()!
-//installer.start(reset:true)!
-
-```
-
-## example heroscript
-
-```hero
-!!screen.configure
-    homedir: '/home/user/screen'
-    username: 'admin'
-    password: 'secretpassword'
-    title: 'Some Title'
-    host: 'localhost'
-    port: 8888
-
-```
@@ -1,60 +0,0 @@
-module screen
-
-import freeflowuniverse.herolib.core
-import freeflowuniverse.herolib.ui.console
-import freeflowuniverse.herolib.installers.ulist
-import os
-
-// checks if a certain version or above is installed
-fn installed() !bool {
-	res := os.execute('screen --version')
-	if res.exit_code != 0 {
-		return false
-	}
-
-	return true
-}
-
-// get the Upload List of the files
-fn ulist_get() !ulist.UList {
-	// optionally build a UList which is all paths which are result of building, is then used e.g. in upload
-	return ulist.UList{}
-}
-
-// uploads to S3 server if configured
-fn upload() ! {}
-
-fn install() ! {
-	console.print_header('install screen')
-
-	if core.is_ubuntu()! {
-		res := os.execute('sudo apt install screen -y')
-		if res.exit_code != 0 {
-			return error('failed to install screen: ${res.output}')
-		}
-	} else if core.is_osx()! {
-		res := os.execute('sudo brew install screen')
-		if res.exit_code != 0 {
-			return error('failed to install screen: ${res.output}')
-		}
-	} else {
-		return error('unsupported platform: ${core.platform()!}')
-	}
-}
-
-fn destroy() ! {
-	console.print_header('uninstall screen')
-	if core.is_ubuntu()! {
-		res := os.execute('sudo apt remove screen -y')
-		if res.exit_code != 0 {
-			return error('failed to uninstall screen: ${res.output}')
-		}
-	} else if core.is_osx()! {
-		res := os.execute('sudo brew uninstall screen')
-		if res.exit_code != 0 {
-			return error('failed to uninstall screen: ${res.output}')
-		}
-	} else {
-		return error('unsupported platform: ${core.platform()!}')
-	}
-}
@@ -1,79 +0,0 @@
-module screen
-
-import freeflowuniverse.herolib.core.playbook { PlayBook }
-import freeflowuniverse.herolib.ui.console
-import json
-import freeflowuniverse.herolib.osal.startupmanager
-
-__global (
-	screen_global map[string]&Screen
-	screen_default string
-)
-
-/////////FACTORY
-
-@[params]
-pub struct ArgsGet {
-pub mut:
-	name string = 'default'
-}
-
-pub fn new(args ArgsGet) !&Screen {
-	return &Screen{}
-}
-
-pub fn get(args ArgsGet) !&Screen {
-	return new(args)!
-}
-
-pub fn play(mut plbook PlayBook) ! {
-	if !plbook.exists(filter: 'screen.') {
-		return
-	}
-	mut install_actions := plbook.find(filter: 'screen.configure')!
-	if install_actions.len > 0 {
-		return error("can't configure screen, because no configuration allowed for this installer.")
-	}
-	mut other_actions := plbook.find(filter: 'screen.')!
-	for other_action in other_actions {
-		if other_action.name in ['destroy', 'install', 'build'] {
-			mut p := other_action.params
-			reset := p.get_default_false('reset')
-			if other_action.name == 'destroy' || reset {
-				console.print_debug('install action screen.destroy')
-				destroy()!
-			}
-			if other_action.name == 'install' {
-				console.print_debug('install action screen.install')
-				install()!
-			}
-		}
-	}
-}
-
-////////////////////////////////////////////////////////////////////////////////////////////////////
-//////////////////////////# LIVE CYCLE MANAGEMENT FOR INSTALLERS ///////////////////////////////////
-////////////////////////////////////////////////////////////////////////////////////////////////////
-
-@[params]
-pub struct InstallArgs {
-pub mut:
-	reset bool
-}
-
-pub fn (mut self Screen) install(args InstallArgs) ! {
-	switch(self.name)
-	if args.reset || (!installed()!) {
-		install()!
-	}
-}
-
-pub fn (mut self Screen) destroy() ! {
-	switch(self.name)
-	destroy()!
-}
-
-// switch instance to be used for screen
-pub fn switch(name string) {
-	screen_default = name
-}
Some files were not shown because too many files have changed in this diff.