Compare commits

...

134 Commits

Author SHA1 Message Date
96bc5c9e5d ... 2025-02-07 12:17:34 +03:00
20927485d9 ... 2025-02-07 12:13:39 +03:00
a034708f21 ... 2025-02-07 12:08:25 +03:00
19a2577564 ... 2025-02-07 12:07:32 +03:00
e34d804dda ... 2025-02-07 11:59:52 +03:00
cd6c899661 ... 2025-02-07 11:45:28 +03:00
4b2f83ceaf ... 2025-02-07 11:43:04 +03:00
69b0944fdd ... 2025-02-07 11:32:30 +03:00
e99f484249 ... 2025-02-07 11:30:32 +03:00
ebdd7060b0 .. 2025-02-07 11:27:09 +03:00
6f78151f7e ... 2025-02-07 11:25:41 +03:00
133a8c7441 Merge branch 'development_kristof10' into development
# Conflicts:
#	.github/workflows/build_and_test.yml
#	.github/workflows/hero_build_linux.yml
#	.github/workflows/hero_build_macos.yml
#	install_herolib.vsh
#	install_v.sh
#	workflows/hero_build_macos.yml
2025-02-07 11:20:20 +03:00
393dc98c73 ... 2025-02-07 11:17:24 +03:00
105d4dd054 ... 2025-02-07 11:08:05 +03:00
4a738b966e ... 2025-02-07 07:36:23 +03:00
4fd3095a75 ... 2025-02-07 07:33:41 +03:00
737457dbad ... 2025-02-07 07:27:24 +03:00
fb05951c5b ... 2025-02-07 07:25:28 +03:00
094c915a20 ... 2025-02-07 07:16:56 +03:00
c6e83252cf ... 2025-02-07 07:12:08 +03:00
0a757e8634 ... 2025-02-07 07:03:10 +03:00
21a7d7506a ... 2025-02-07 06:57:02 +03:00
6bae1a98ea ... 2025-02-07 06:51:12 +03:00
3d76bc9c04 ... 2025-02-07 06:40:10 +03:00
4495df4d2e ... 2025-02-07 06:30:13 +03:00
99d3e6c00c ... 2025-02-07 06:19:54 +03:00
8a005a2fd2 ... 2025-02-07 06:13:59 +03:00
f37999c10c ... 2025-02-07 06:06:21 +03:00
5842fb9f26 ... 2025-02-07 06:02:39 +03:00
df5aa1f1a3 ... 2025-02-07 05:57:09 +03:00
f079122be5 ... 2025-02-07 05:54:33 +03:00
a3daefa7ce ... 2025-02-07 05:21:13 +03:00
e5a3c2cae1 ... 2025-02-06 21:17:30 +03:00
313e241a72 ., 2025-02-06 21:15:01 +03:00
a66ef2e8b3 ... 2025-02-06 21:09:20 +03:00
f4f5eb06a4 location fixed for postgresql client 2025-02-06 07:15:32 +03:00
c93fe755fd location postgresql 2025-02-06 06:57:23 +03:00
6bdc4a5b8f location is importing in psql 2025-02-06 06:48:47 +03:00
048c72b294 Merge branch 'development_kristof10' of https://github.com/freeflowuniverse/herolib into development_kristof10 2025-02-06 06:26:47 +03:00
5ad2062e5c format 2025-02-06 06:26:44 +03:00
babbb610d9 feat(tfgrid3deployer): add openwebui deployment example
- Adds a new example demonstrating deployment of OpenWebUI on the ThreeFold Grid using the `tfgrid3deployer` module.
- Provides detailed instructions and a README file for easy setup and execution.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-02-05 11:51:05 +02:00
5bbb99c3f9 ... 2025-02-05 11:03:19 +03:00
6eec7dbda2 s 2025-02-05 10:23:22 +03:00
be9f37a459 s 2025-02-05 10:16:25 +03:00
757358fded s 2025-02-05 09:19:16 +03:00
430586cc89 s 2025-02-05 08:14:48 +03:00
69b405ba65 Merge branch 'development_kristof10' of github.com:freeflowuniverse/herolib into development_kristof10 2025-02-05 07:57:09 +03:00
c2eef5a6ab s 2025-02-05 07:57:05 +03:00
b89a9e87c8 feat: Add open-webui example using Docker
- Created an example that builds a Docker image for open-webui.
- The `docker_recipe_env.v` file is updated to correctly handle quoting in environment variables.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-02-04 18:44:02 +02:00
01991027cc s 2025-02-04 08:26:51 +03:00
9b81061e22 s 2025-02-04 08:24:47 +03:00
Omdanii
2d30e6f9cf Merge pull request #46 from freeflowuniverse/development_ci_fixes
Development ci fixes
2025-02-03 17:28:02 +02:00
62f64862be test: remove unnecessary env vars
- Remove unnecessary environment variables from the build and test workflow.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-02-03 17:13:38 +02:00
ee205c4b07 refactor: Simplify V and Herolib setup (#45)
* refactor: Simplify V and Herolib setup

- Use install_v.sh script to install V and Herolib in CI.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>

* WIP: handle github actions in vlang installer

- Add `--github-actions` flag to `install_v.sh` script.
- Add appropriate privilege when running install_v.sh script.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>

* refactor: consolidate build workflows

- Consolidate macOS and Linux build workflows into a single `build_and_test.yml` workflow.
- Remove the now-redundant `hero_build_linux.yml` workflow.
- Update the workflow to support both Linux and macOS targets.
- Update `install_v.sh` script to handle brew installations without sudo.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>

---------

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-02-03 17:04:40 +02:00
9aee672450 s 2025-02-03 14:02:26 +03:00
2d17938c75 s 2025-02-03 12:53:40 +03:00
67562cacc2 s 2025-02-03 12:39:22 +03:00
c8d715126f s 2025-02-03 12:37:19 +03:00
1d6af5204b ... 2025-02-02 21:56:59 +03:00
a5398094da ... 2025-02-02 19:53:34 +03:00
0fd5062408 docusaurus 2025-02-02 19:24:10 +03:00
Omdanii
c886f85d21 Merge pull request #44 from freeflowuniverse/development_wg
Add SAL for wireguard
2025-02-02 15:33:06 +02:00
749fa94312 feat: Add WireGuard installer
- Add a new WireGuard installer to the project.
- This installer handles installation and uninstallation of WireGuard.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-02-02 15:20:57 +02:00
c27fcc6983 fix: WireGuard client improvements
- Updated WireGuard client to use `wg-quick` instead of `sudo wg-quick`.
- Improved error handling in WireGuard client.
- Added `sudo` to `wg show` command for proper permissions.
- Updated example `wireguard.vsh` script to reflect changes.
- Added a new `wg0.conf` file for the WireGuard configuration.
- Resolved the issue where the script wasn't finding the configuration file.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-02-02 14:59:05 +02:00
Mahmoud Emad
7bd997e368 feat: Add WireGuard client
- Add a WireGuard client to the project.
- New utility functions have been added to parse the output of the `wg show` command and improve error handling.
- Add start, down, show, show_config, generate_private_key, and get_public_key methods to interact with the wg binary.
- Created a new example file to offer clearer usage.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-02-02 14:41:21 +02:00
Mahmoud Emad
d803a49a85 feat: Add WireGuard client support
- Add a new WireGuard client to the project.
- Includes a factory, model, and basic client functionality.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-02-02 12:52:42 +02:00
cfa9f877b3 search 2025-02-02 12:30:25 +03:00
e4f883d35a location 2025-02-02 12:17:33 +03:00
0ad1d27327 location 2025-02-02 11:48:19 +03:00
3bf2473c3a location 2025-02-02 08:10:32 +03:00
84c2b43595 starts working 2025-01-31 22:12:52 +03:00
1135b8cee5 rpcserver 2025-01-31 22:06:02 +03:00
8322280066 initial openrpc server 2025-01-31 21:53:29 +03:00
99ecf1b0d8 model done 2025-01-31 17:42:06 +03:00
bb69ee0be9 ... 2025-01-31 17:28:44 +03:00
fb0754b3b2 ... 2025-01-31 17:13:56 +03:00
d6a13f81e0 Merge branch 'development_fix_ci' into development_kristof 2025-01-31 15:40:46 +03:00
6e1f23b702 Merge branch 'development' into development_kristof 2025-01-31 15:40:18 +03:00
74ab68d05f ... 2025-01-31 15:39:44 +03:00
27cb6cb0c6 ... 2025-01-31 09:58:53 +03:00
5670efc4cb ... 2025-01-31 08:29:17 +03:00
5a8ad0a47b Merge pull request #43 from freeflowuniverse/development_webgw_wireguard
feat (tfgrid3deployer): add more support for wireguard
2025-01-30 18:45:53 +02:00
45098785e9 feat (tfgrid3deployer): add more support for wireguard
- support adding multiple user access endpoints to a network
- support connecting gateways over wireguard

Co-authored-by: mahmoud <mahmmoud.hassanein@gmail.com>
2025-01-30 18:07:58 +02:00
6a8bd5c205 wip: support multiple user access endpoints
Co-authored-by: mahmoud <mahmmoud.hassanein@gmail.com>
Co-authored-by: mario <mariobassem12@gmail.com>
2025-01-29 18:12:16 +01:00
Omdanii
2e5f618d0b Merge pull request #42 from freeflowuniverse/development_buildah
feat: Add Buildah installer
2025-01-29 15:09:03 +02:00
112f5eecb2 feat: Add Buildah installer
- Added a Buildah installer to the project.
- The installer can install and remove Buildah.
- Updated the installer to use the latest Buildah version.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
Co-authored-by: mariobassem <mariobassem12@gmail.com>
2025-01-29 11:13:33 +01:00
8cf611ca51 Merge pull request #41 from freeflowuniverse/development_docker
Fix docker examples
2025-01-28 17:44:55 +02:00
0f095a691d feat: add docker installer
- Add a new docker installer.
- Includes functionality for installing, starting, stopping, and removing docker.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
Co-authored-by: omda  <mahmmoud.hassanein@gmail.com>
2025-01-28 16:17:33 +01:00
8b0f692673 fix: Fix docker examples
- Moved `httpconnection` import from `clients` to `core`.
- Changed `tfgrid-sdk-ts` dashboard to playground.
- Added ipaddr to node_local().
- Added public keyword to OpenSSLGenerateArgs.
- Improved DockerEngine image and container loading.
- Added utils.contains_ssh_port.
- Improved error handling in DockerEngine.
- Improved Docker registry handling.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
Co-authored-by: omda <mahmmoud.hassanein@gmail.com>
2025-01-28 14:08:42 +01:00
04d891a0b8 Merge branch 'development_hetzner' into development_fix_ci 2025-01-28 09:35:02 +03:00
f8d675dcaf WIP: feat: add Hetzner deployment example
- Added a new example demonstrating deployment on Hetzner using the `tfgrid3deployer`.
- The example creates a VM and adds a webname.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-01-27 18:55:01 +02:00
9f6e49963e wip: test: improve tmux tests
- Fix bugs found during testing.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-01-27 18:27:50 +02:00
0ae8e227fc feat: Add screen installer
- Add a new installer for the `screen` utility.
- This installer supports Ubuntu and macOS.
- Includes functionality for installation, uninstallation, and status checking.
- Fixed tests for osal.screen

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-01-27 17:18:39 +02:00
623f1a289e test: Add workaround for zinit installation in tests
- Added a workaround to download and execute zinit if it's not found in the path.
- This is necessary because the zinit installer cannot be imported due to a circular dependency.
- This ensures that the tests can run reliably even without zinit being pre-installed.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-01-27 17:18:39 +02:00
Mahmoud Emad
85ac9e5104 test: Fix rpc_test file 2025-01-27 17:18:39 +02:00
Mahmoud Emad
266205363d test: Handle the package install test 2025-01-27 17:18:39 +02:00
Mahmoud Emad
b9ad95a99d test: remove unnecessary test file
- Remove `net_test.v` from the list of test files.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-27 17:18:39 +02:00
ca8799af39 Merge pull request #30 from felipensp/patch-1
Update resp_model.v to make RValue public
2025-01-27 12:39:52 +03:00
1cd176a626 Merge pull request #35 from freeflowuniverse/development_fix_ci
Fix CI
2025-01-26 16:08:02 +02:00
Mahmoud Emad
22918434c3 fix: Use cache key for repository lookup
- Changed the repository lookup in `repo_new_from_gitlocation` to use the cache key instead of the repository name.
- Added a new client `livekit` to the `test_basic.vsh` script for testing purposes.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-26 15:44:21 +02:00
Omdanii
23410e6109 Merge pull request #34 from freeflowuniverse/development_vastai
feat: Add VastAI client
2025-01-26 14:02:09 +02:00
Mahmoud Emad
77809423fd feat: add VastAI instance management functions
- Added functions to manage VastAI instances:
- `attach_sshkey_to_instance`
- `stop_instance`
- `destroy_instance`
- `launch_instance`
- `start_instances`
- `start_instance`
- Updated example to demonstrate new functions.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-26 12:44:52 +02:00
34b1aad175 feat: Add VastAI client
- Add a new VastAI client to the project.
- This client allows users to search for and create GPU instances on VastAI.
- It uses the VastAI API to interact with the platform.
- Includes functionality for searching offers, getting top offers, and creating instances.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-01-23 19:22:39 +02:00
d4d3713cad Merge pull request #33 from freeflowuniverse/development_runpod
Development runpod
2025-01-23 17:23:47 +02:00
01fff39e41 refactor: improve runpod client
- Refactor RunPod client to use environment variables for API key.
- Update RunPod example script to reflect changes.
- Remove unused gql_builder.v file.
- Update README.md to reflect changes.
- Improve error handling and logging.
- Use json2 for JSON encoding/decoding.
- Update dependencies.
- Implemented more endpoints for managing pods.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-01-23 16:17:12 +02:00
12968cb580 Merge branch 'development' of https://github.com/freeflowuniverse/herolib into development 2025-01-23 14:16:11 +01:00
888aac4867 push 2025-01-23 14:16:02 +01:00
6637756088 gittools 2025-01-23 14:09:20 +01:00
timurgordon
70856f7348 Merge branch 'development' of https://github.com/freeflowuniverse/herolib into development 2025-01-23 13:07:39 +00:00
timurgordon
ac19671469 add env vars for livekit tests 2025-01-23 13:07:35 +00:00
f2b9b73528 Merge branch 'development_fix_examples' into development 2025-01-23 09:43:27 +01:00
d1c907fc3a Merge branch 'development' of https://github.com/freeflowuniverse/herolib into development 2025-01-23 09:42:45 +01:00
df0c4ca857 ai prompts 2025-01-23 09:42:32 +01:00
timurgordon
02128e69ba fix livekit compilation 2025-01-23 00:59:02 +00:00
timurgordon
885c4d9b32 remove inline sum types 2025-01-23 00:52:19 +00:00
timurgordon
12db34ddb0 remove panic from test 2025-01-23 00:31:05 +00:00
timurgordon
c7d7e8b954 Merge branch 'development' of https://github.com/freeflowuniverse/herolib into development 2025-01-23 00:03:04 +00:00
timurgordon
a95ce8abb2 move auth and jwt modules to herolib 2025-01-23 00:02:48 +00:00
timurgordon
cab7a47050 move log module to herolib 2025-01-22 23:58:58 +00:00
timurgordon
dce0b71530 replace crystallib with herolib 2025-01-22 23:57:49 +00:00
timurgordon
915951d84f fix mailclient compilation 2025-01-22 23:56:13 +00:00
timurgordon
b3509611a2 move livekit client to herolib 2025-01-22 23:55:18 +00:00
Felipe Pena
e82e367e95 Update resp_model.v to make RValue public 2025-01-22 18:32:57 -03:00
Mahmoud Emad
6f9d570a93 WIP: refactor RunPod client
- Refactor RunPod client to use a new GraphQL builder.
- This improves the readability and maintainability of the code.
- The old `build_query` function was removed, and the new `QueryBuilder` struct is now used. This allows for a more flexible and extensible approach to constructing GraphQL queries.
- The example in `runpod_example.vsh` is now commented out until the new GraphQL builder is fully implemented.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-22 20:35:50 +02:00
Mahmoud Emad
7486d561ec feat: update runpod example and client
- Update the RunPod example to use a new API key and reduce resource allocation for pods.
- Added stop pod functionality to the RunPod client and example.
- Updated the RunPod client to use new API endpoints.
- Updated the base URL for the RunPod client.
- Added authorization header to HTTP client.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-21 15:54:48 +02:00
Mahmoud Emad
50116651de feat: Add spot pod start and improved error handling
- Added functionality to start spot pods using the RunPod API.
- Improved error handling and clarity in the RunPod client.
- Added more detailed comments to the code for better readability.
- Refactored the HTTP client and utils to improve modularity.
- Updated example to demonstrate spot pod creation and starting.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-21 12:38:25 +02:00
Mahmoud Emad
3fe350abe9 feat: Add RunPod start and improved pod creation
- Added a new `start_on_demand_pod` function to the RunPod client.
- Improved the `create_on_demand_pod` function to handle nested machine structure in the response.
- Updated the example to use the new functions and handle the new response structure.
- Updated the API key for the example.
- Added more descriptive field names in the `create_on_demand_pod` input.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-21 11:32:38 +02:00
Mahmoud Emad
9e51604286 feat: enhance RunPod example with detailed pod creation
- Added more options to the RunPod example, including `cloud_type`, `gpu_count`, `volume_in_gb`, `container_disk_in_gb`, `min_vcpu_count`, `min_memory_in_gb`, `gpu_type_id`, `ports`, and `volume_mount_path`. This provides a more comprehensive demonstration of RunPod's capabilities.
- Updated the example to create an on-demand pod with specified resources and settings. The spot pod creation remains largely unchanged.
- Improved the clarity and completeness of the example.
- Removed commented-out code for better readability.
- Updated the `PodFindAndDeployOnDemandRequest` struct to remove default values, allowing for more flexible pod configurations.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-21 11:03:33 +02:00
Mahmoud Emad
309496ef5d feat: Add RunPod client
- Added a new RunPod client to the project.
- Updated the example to use the new client.
- Improved error handling in the client.
- Refactored the code for better readability.

Co-authored-by: mariobassem12 <mariobassem12@gmail.com>
2025-01-21 10:50:07 +02:00
4422d67701 wip: add spot pod creation
- Add support for creating spot pods using the RunPod API.
- Implement `create_spot_pod` function in the `RunPod` client.
- Refactor RunPod client to handle different query types and response structures.
- Improve error handling and logging for GraphQL requests.
- Update example to demonstrate spot pod creation.

Co-authored-by: mahmmoud.hassanein <mahmmoud.hassanein@gmail.com>
2025-01-20 21:34:16 +02:00
Mahmoud Emad
d54a1e5a34 refactor: improve RunPod client
- Refactor RunPod client to use generics for requests and responses.
- This improves code readability and maintainability.
- Remove redundant code for building GraphQL queries and handling HTTP requests.
- Add support for environment variables in pod creation.
- Update example with new API key and environment variables.

Co-authored-by: supermario <mariobassem12@gmail.com>
2025-01-20 15:02:05 +02:00
Mahmoud Emad
0d2307acc8 feat: add RunPod client
- Add a new RunPod client to the project.
- This client allows users to interact with the RunPod API to create and manage pods.
- Includes example usage and configuration options.
2025-01-19 22:20:47 +02:00
Mahmoud Emad
de45ed3a07 refactor: remove dependency on herolib.osal
- Replaced `freeflowuniverse.herolib.osal` with `freeflowuniverse.herolib.core`
2025-01-19 09:48:05 +02:00
480894372c export docker 2025-01-17 05:04:30 +01:00
687 changed files with 48850 additions and 2897 deletions

View File

@@ -2,9 +2,9 @@ name: Deploy Documentation to Pages
on:
push:
branches: ["main"]
branches: ["development"]
workflow_dispatch:
branches: ["main"]
branches: ["development"]
permissions:
contents: read
@@ -17,34 +17,31 @@ concurrency:
jobs:
deploy-documentation:
#if: startsWith(github.ref, 'refs/tags/')
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- name: Install Vlang dependencies
run: sudo apt update && sudo apt install -y libgc-dev
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Vlang
run: ./install_v.sh
- name: Generate documentation
run: |
./doc.vsh
# ls /home/runner/work/herolib/docs
./doc.vsh
find .
- name: Setup Pages
uses: actions/configure-pages@v3
uses: actions/configure-pages@v4
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
uses: actions/upload-pages-artifact@v3
with:
path: "/home/runner/work/herolib/herolib/docs"
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1
uses: actions/deploy-pages@v4

90
.github/workflows/hero_build.yml vendored Normal file
View File

@@ -0,0 +1,90 @@
name: Release Hero
permissions:
contents: write
on:
push:
branches: ["main","development"]
workflow_dispatch:
branches: ["main","development"]
jobs:
build:
timeout-minutes: 60
if: startsWith(github.ref, 'refs/tags/')
strategy:
fail-fast: false
matrix:
include:
- target: x86_64-unknown-linux-musl
os: ubuntu-latest
short-name: linux-i64
- target: aarch64-unknown-linux-musl
os: ubuntu-latest
short-name: linux-arm64
- target: aarch64-apple-darwin
os: macos-latest
short-name: macos-arm64
- target: x86_64-apple-darwin
os: macos-13
short-name: macos-i64
runs-on: ${{ matrix.os }}
steps:
- run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
- run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
- run: echo "🔎 The name of your branch is ${{ github.ref_name }} and your repository is ${{ github.repository }}."
- name: Check out repository code
uses: actions/checkout@v4
- name: Setup V & Herolib
id: setup
run: ./install_v.sh --herolib
timeout-minutes: 10
- name: Do all the basic tests
timeout-minutes: 25
run: ./test_basic.vsh
- name: Build Hero
timeout-minutes: 15
run: |
set -e
v -w -d use_openssl -enable-globals cli/hero.v -o cli/hero-${{ matrix.target }}
- name: Upload
uses: actions/upload-artifact@v4
with:
name: hero-${{ matrix.target }}
path: cli/hero-${{ matrix.target }}
release_hero:
needs: build
runs-on: ubuntu-latest
permissions:
contents: write
if: startsWith(github.ref, 'refs/tags/')
steps:
- name: Check out repository code
uses: actions/checkout@v4
- name: Download Artifacts
uses: actions/download-artifact@v4
with:
path: cli/bins
merge-multiple: true
- name: Release
uses: softprops/action-gh-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref_name }}
name: Release ${{ github.ref_name }}
draft: false
fail_on_unmatched_files: true
generate_release_notes: true
files: cli/bins/*

32
.github/workflows/test.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: Build on Linux & Run tests
permissions:
contents: write
on:
push:
workflow_dispatch:
jobs:
build:
strategy:
matrix:
include:
- target: x86_64-unknown-linux-musl
os: ubuntu-latest
short-name: linux-i64
runs-on: ${{ matrix.os }}
steps:
- run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
- run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
- run: echo "🔎 The name of your branch is ${{ github.ref_name }} and your repository is ${{ github.repository }}."
- name: Check out repository code
uses: actions/checkout@v3
- name: Setup V & Herolib
run: ./install_v.sh --herolib
- name: Do all the basic tests
run: ./test_basic.vsh

1
.gitignore vendored
View File

@@ -25,7 +25,6 @@ dump.rdb
output/
*.db
.stellar
vdocs/
data.ms/
test_basic
cli/hero

View File

@@ -1,10 +1,19 @@
# herolib
a smaller version of herolib with only the items we need for hero
> [documentation here](https://freeflowuniverse.github.io/herolib/)
> [documentation of the library](https://freeflowuniverse.github.io/herolib/)
## automated install
## hero install for users
```bash
curl https://raw.githubusercontent.com/freeflowuniverse/herolib/refs/heads/development_kristof10/install_hero.sh > /tmp/install_hero.sh
bash /tmp/install_hero.sh
```
this tool can be used to work with git, build books, play with hero AI, ...
## automated install for developers
```bash
curl 'https://raw.githubusercontent.com/freeflowuniverse/herolib/refs/heads/main/install_v.sh' > /tmp/install_v.sh
@@ -16,7 +25,7 @@ bash /tmp/install_v.sh --analyzer --herolib
```bash
#~/code/github/freeflowuniverse/herolib/install_v.sh --help
~/code/github/freeflowuniverse/herolib/install_v.sh --help
V & HeroLib Installer Script

View File

@@ -0,0 +1,93 @@
We have our own instruction language called heroscript; below you will find details on how to use it.
## heroscript
Heroscript is our small scripting language, used for communicating with our digital tools such as calendar management.
It has the following structure:
```heroscript
!!calendar.event_add
title: 'go to dentist'
start: '2025/03/01'
description: '
a description can be multiline
like this
'
!!calendar.event_delete
title: 'go to dentist'
```
- the format is !!$actor.$action (there is no space before !!)
- every parameter comes on the next line, indented with spaces (always 4 spaces, don't vary this)
- every actor.action starts with !!
- the first part is the actor, e.g. calendar in this case
- the second part is the action name
- multiline values are supported, see the description field
below you will find the instructions for the different actors; comments on how to use them come after a #, which means they are not part of the definition itself
## remarks on parameters used
- date
    - format of the date is yyyy/mm/dd hh:mm:ss
    - +1h means 1 hour later than now
    - +1m means 1 min later than now
    - +1d means 1 day later than now
    - same for -1h, -1m, -1d
- money is expressed as
    - $val $cursymbol
    - $cursymbol is 3 capital letters, e.g. USD
- lists are comma separated and wrapped in '...' (see the short example below)
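As an illustration of the date and list conventions above, a hypothetical event (title and attendee names invented purely for the example) could look like this:
```heroscript
!!calendar.event_add
    title: 'sprint review'      # invented title, for illustration only
    start: '+1d'                # relative date: 1 day later than now
    end: '+1h'                  # 1 hour after the start
    attendees: 'tim, rob'       # a comma separated list wrapped in '...'
```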
## generic instructions
- do not add information if not specifically asked for
## circle
every actor action happens in a circle; a user can ask to switch circles. The command available is:
```
!!circle.switch
name: 'project x'
```
## calendar
```heroscript
!!calendar.event_add
title: 'go to dentist'
start: '2025/03/01'
end: '+1h' # if the + notation is used, the end is relative to the start
description: '
a description can be multiline
like this
'
attendees: 'tim, rob'
!!calendar.event_delete
title: 'go to dentist'
```
## NOW DO ONE
schedule an event tomorrow 10 am, for 1h, with tim & rob, to discuss product management for threefold
now is friday jan 17
only give me the instructions needed, only return the heroscript, no text around it
if not clear enough, ask the user for more info
if not sure, do not invent; only give instructions as really asked for
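For illustration only, a plausible heroscript answer to the request above (assuming 'tomorrow 10 am' resolves to 2025/01/18 10:00, since now is friday jan 17) might look like:
```heroscript
!!calendar.event_add
    title: 'product management threefold'   # assumed wording taken from the request
    start: '2025/01/18 10:00'                # tomorrow 10 am, given now is friday 2025/01/17
    end: '+1h'                               # the event lasts 1 hour
    attendees: 'tim, rob'
```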

View File

@@ -0,0 +1,58 @@
# how to manage my agenda
## Metadata for function calling
functions_metadata = [
{
"name": "event_add",
"description": "Adds a calendar event.",
"parameters": {
"type": "object",
"properties": {
"title": {"type": "string", "description": "Title of the event."},
"start": {"type": "string", "description": "Start date and time in 'YYYY/MM/DD hh:mm' format."},
"end": {"type": "string", "description": "End date or duration (e.g., +2h)."},
"description": {"type": "string", "description": "Event description."},
"attendees": {"type": "string", "description": "Comma-separated list of attendees' emails."},
},
"required": ["title", "start"]
}
},
{
"name": "event_delete",
"description": "Deletes a calendar event by title.",
"parameters": {
"type": "object",
"properties": {
"title": {"type": "string", "description": "Title of the event to delete."},
},
"required": ["title"]
}
}
]
## example call
{
"function": "event_add",
"parameters": {
"title": "Team Sync",
"start": "2025/02/01 10:00",
"end": "+1h",
"description": "",
"attendees": "alice@example.com, bob@example.com"
}
}
## how to use
Parse the user query to determine intent (e.g., "schedule" maps to event_add, "cancel" maps to event_delete).
Extract required parameters (e.g., title, start date).
Invoke the appropriate function with the extracted parameters.
Return the function's result as the response.

View File

@@ -0,0 +1,72 @@
you represent a digital twin for a user, the user talks to you to get things done for his digital life
you will interpret the instructions the user prompts, figure out the multiple instructions, break them up, and categorize them as follows:
- cat: calendar
- manage calendar for the user
- cat: contacts
- manage contacts for the user
- cat: communicate
- communicate with others using text
- cat: tasks
- manage my tasks
- cat: circle
- define the circle we work in; a circle is like a project context in which we do the above, so it can be for a team or a project, try to find it
- cat: sysadmin
- system administration, e.g. creation of virtual machines (VMs) and containers, start/stop them, see monitoring information
- cat: notes
- anything to do with transcriptions, note taking, summaries
- how we record meetings e.g. zoom, google meet, ...
- how we look for info in meetings
- cat: unknown
- anything we can't understand
try to understand what the user wants and put it in blocks (one per category for the action, e.g. calendar)
- before each block (instruction) put ###########################
- in the first line mention the category as defined above; only mention this category once and there is only one per block
- then reformulate, in clear instructions, what needs to be done
- the instructions are put on the lines following the category line (not on the category line itself)
- only make blocks for instructions as given
what you output will be used further for more specific prompting
if circle, always put these instructions first
if a time is specified, put the time as follows
- if relative, e.g. next week, tomorrow, after tomorrow, in one hour, then start from the current time
- time is in YYYY/MM/DD hh:mm format
- current time is friday 2025/01/17 10:12
- if e.g. next month jan, or next tuesday, then don't repeat the broad instruction like 'tuesday'; just show the date as YYYY/MM/DD hh:mm
if a date is not clear, don't invent one, just repeat the original instruction
if the category is not clear, just use unknown
NOW DO EXAMPLE 1
```
hi good morning
Can you help me find meetings I have done around research of threefold in the last 2 weeks
I need to create a new VM, 4 GB of memory, 2 vcpu, in belgium, with ubuntu
I would like do schedule a meeting, need to go to the dentist tomorrow at 10am, its now friday jan 17
also remind me I need to do the dishes after tomorrow in the morning
can you also add jef as a contact, he lives in geneva, he is doing something about rocketscience
I need to paint my wall in my room next week wednesday
cancel all my meetings next sunday
can you give me list of my contacts who live in geneva and name sounds like tom
send a message to my mother, I am seeing here in 3 days at 7pm
```

View File

@@ -11,7 +11,7 @@
when I generate vlang scripts I will always use .vsh extension and use following as first line:
```
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
```
- a .vsh is a v shell script and can be executed as is, no need to use v ...
@@ -21,7 +21,7 @@ when I generate vlang scripts I will always use .vsh extension and use following
## to do argument parsing use following examples
```v
#!/usr/bin/env -S v -n -w -cg -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import os
import flag

View File

@@ -2238,7 +2238,7 @@ be faster, since there is no need for a re-compilation of a script, that has not
An example `deploy.vsh`:
```v oksyntax
#!/usr/bin/env -S v -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -gc none -cc tcc -d use_openssl -enable-globals run
// Note: The shebang line above, associates the .vsh file to V on Unix-like systems,
// so it can be run just by specifying the path to the .vsh file, once it's made
@@ -2300,11 +2300,11 @@ Whilst V does normally not allow vsh scripts without the designated file extensi
to circumvent this rule and have a file with a fully custom name and shebang. Whilst this feature
exists it is only recommended for specific usecases like scripts that will be put in the path and
should **not** be used for things like build or deploy scripts. To access this feature start the
file with `#!/usr/bin/env -S v -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
file with `#!/usr/bin/env -S v -gc none -cc tcc -d use_openssl -enable-globals run
the built executable. This will run in crun mode so it will only rebuild if changes to the script
were made and keep the binary as `tmp.<scriptfilename>`. **Caution**: if this filename already
exists the file will be overridden. If you want to rebuild each time and not keep this binary
instead use `#!/usr/bin/env -S v -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
instead use `#!/usr/bin/env -S v -gc none -cc tcc -d use_openssl -enable-globals run
# Appendices

View File

@@ -3,7 +3,7 @@
this is how we want example scripts to be, see the first line
```vlang
#!/usr/bin/env -S v -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.sysadmintools.daguserver

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env -S v -parallel-cc -enable-globals run
// #!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -parallel-cc -enable-globals run
// #!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import os
import flag
@@ -45,7 +45,7 @@ compile_cmd := if os.user_os() == 'macos' {
if prod_mode {
'v -enable-globals -w -n -prod hero.v'
} else {
'v -w -cg -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals hero.v'
'v -w -cg -gc none -cc tcc -d use_openssl -enable-globals hero.v'
}
} else {
if prod_mode {
@@ -66,7 +66,7 @@ os.chmod('hero', 0o755) or { panic('Failed to make hero binary executable: ${err
// Ensure destination directory exists
os.mkdir_all(os.dir(heropath)) or { panic('Failed to create directory ${os.dir(heropath)}: ${err}') }
println(heropath)
// Copy to destination paths
os.cp('hero', heropath) or { panic('Failed to copy hero binary to ${heropath}: ${err}') }
os.cp('hero', '/tmp/hero') or { panic('Failed to copy hero binary to /tmp/hero: ${err}') }

View File

@@ -89,5 +89,9 @@ fn hero_upload() ! {
}
fn main() {
//os.execute_or_panic('${os.home_dir()}/code/github/freeflowuniverse/herolib/cli/compile.vsh -p')
println("compile hero can take 60 sec+ on osx.")
os.execute_or_panic('${os.home_dir()}/code/github/freeflowuniverse/herolib/cli/compile.vsh -p')
println( "upload:")
hero_upload() or { eprintln(err) exit(1) }
}

View File

@@ -31,7 +31,7 @@ fn do() ! {
mut cmd := Command{
name: 'hero'
description: 'Your HERO toolset.'
version: '2.0.0'
version: '1.0.2'
}
// herocmds.cmd_run_add_flags(mut cmd)
@@ -81,6 +81,7 @@ fn do() ! {
// herocmds.cmd_zola(mut cmd)
// herocmds.cmd_juggler(mut cmd)
herocmds.cmd_generator(mut cmd)
herocmds.cmd_docusaurus(mut cmd)
// herocmds.cmd_docsorter(mut cmd)
// cmd.add_command(publishing.cmd_publisher(pre_func))
cmd.setup()

40
doc.vsh
View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import os
@@ -26,9 +26,9 @@ os.chdir(herolib_path) or {
panic('Failed to change directory to herolib: ${err}')
}
os.rmdir_all('_docs') or {}
os.rmdir_all('docs') or {}
os.rmdir_all('vdocs') or {}
os.mkdir_all('_docs') or {}
os.mkdir_all('docs') or {}
os.mkdir_all('vdocs') or {}
// Generate HTML documentation
println('Generating HTML documentation...')
@@ -42,13 +42,12 @@ os.chdir(abs_dir_of_script) or {
// Generate Markdown documentation
println('Generating Markdown documentation...')
os.rmdir_all('vdocs') or {}
// if os.system('v doc -m -no-color -f md -o ../vdocs/v/') != 0 {
// panic('Failed to generate V markdown documentation')
// }
if os.system('v doc -m -no-color -f md -o vdocs/herolib/') != 0 {
if os.system('v doc -m -no-color -f md -o vdocs/') != 0 {
panic('Failed to generate Hero markdown documentation')
}
@@ -62,4 +61,33 @@ $if !linux {
}
}
// Create Jekyll required files
println('Creating Jekyll files...')
os.mkdir_all('docs/assets/css') or {}
// Create style.scss
style_content := '---\n---\n\n@import "{{ site.theme }}";'
os.write_file('docs/assets/css/style.scss', style_content) or {
panic('Failed to create style.scss: ${err}')
}
// Create _config.yml
config_content := 'title: HeroLib Documentation
description: Documentation for the HeroLib project
theme: jekyll-theme-primer
baseurl: /herolib
exclude:
- Gemfile
- Gemfile.lock
- node_modules
- vendor/bundle/
- vendor/cache/
- vendor/gems/
- vendor/ruby/'
os.write_file('docs/_config.yml', config_content) or {
panic('Failed to create _config.yml: ${err}')
}
println('Documentation generation completed successfully!')

View File

@@ -1,3 +0,0 @@
.bash_history
.openvscode-server/
.cache/

View File

@@ -1,48 +0,0 @@
# Use Ubuntu 24.04 as the base image
FROM ubuntu:24.04
# Set the working directory
WORKDIR /root
# Copy local installation scripts into the container
COPY scripts/install_v.sh /tmp/install_v.sh
COPY scripts/install_herolib.vsh /tmp/install_herolib.vsh
COPY scripts/install_vscode.sh /tmp/install_vscode.sh
COPY scripts/ourinit.sh /usr/local/bin/
# Make the scripts executable
RUN chmod +x /tmp/install_v.sh /tmp/install_herolib.vsh
RUN apt-get update && apt-get install -y \
curl bash sudo mc wget tmux htop openssh-server
RUN bash /tmp/install_v.sh
RUN yes y | bash /tmp/install_v.sh --analyzer
RUN bash /tmp/install_vscode.sh
RUN /tmp/install_herolib.vsh && \
mkdir -p /var/run/sshd && \
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config && \
echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config && \
chown -R root:root /root/.ssh && \
chmod -R 700 /root/.ssh/ && \
chmod 600 /root/.ssh/authorized_keys && \
service ssh start && \
apt-get clean && \
echo "PS1='HERO: \w \$ '" >> ~/.bashrc \
rm -rf /var/lib/apt/lists/*
#SSH
RUN mkdir -p /var/run/sshd && \
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config && \
echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config && \
chown -R root:root /root/.ssh && \
chmod -R 700 /root/.ssh/ && \
chmod 600 /root/.ssh/authorized_keys && \
service ssh start
ENTRYPOINT ["/bin/bash"]
CMD ["/bin/bash"]

View File

@@ -1,36 +0,0 @@
#!/bin/bash -e
# Get the directory where the script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"
# Docker image and container names
DOCKER_IMAGE_NAME="docusaurus"
DEBUG_CONTAINER_NAME="herolib"
function cleanup {
if docker ps -aq -f name="$DEBUG_CONTAINER_NAME" &>/dev/null; then
echo "Cleaning up leftover debug container..."
docker rm -f "$DEBUG_CONTAINER_NAME" &>/dev/null || true
fi
}
trap cleanup EXIT
# Attempt to build the Docker image
BUILD_LOG=$(mktemp)
set +e
docker build --name herolib --progress=plain -t "$DOCKER_IMAGE_NAME" .
BUILD_EXIT_CODE=$?
set -e
# Handle build failure
if [ $BUILD_EXIT_CODE -ne 0 ]; then
echo -e "\\n[ERROR] Docker build failed.\n"
echo -e "remove the part which didn't build in the Dockerfile, the run again and to debug do:"
echo docker run --name herolib -it --entrypoint=/bin/bash "herolib"
exit $BUILD_EXIT_CODE
else
echo -e "\\n[INFO] Docker build completed successfully."
fi

View File

@@ -1,19 +0,0 @@
#!/bin/bash -ex
# Get the directory where the script is located
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
# Remove any existing container named 'debug' (ignore errors)
docker rm -f herolib > /dev/null 2>&1
docker run --name herolib -it \
--entrypoint="/usr/local/bin/ourinit.sh" \
-v "${SCRIPT_DIR}/scripts:/scripts" \
-v "$HOME/code:/root/code" \
-p 4100:8100 \
-p 4101:8101 \
-p 4102:8102 \
-p 4379:6379 \
-p 4022:22 \
-p 4000:3000 herolib

View File

@@ -1,34 +0,0 @@
services:
postgres:
image: postgres:latest
container_name: postgres_service
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: planetfirst
POSTGRES_DB: mydb
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
herolib:
build:
context: .
dockerfile: Dockerfile
image: herolib:latest
container_name: herolib
volumes:
- ~/code:/root/code
stdin_open: true
tty: true
ports:
- "4100:8100"
- "4101:8101"
- "4102:8102"
- "4379:6379"
- "4000:3000"
- "4022:22"
command: ["/usr/local/bin/ourinit.sh"]
volumes:
postgres_data:

View File

@@ -1,98 +0,0 @@
#!/bin/bash -e
# Set version and file variables
OPENVSCODE_SERVER_VERSION="1.97.0"
TMP_DIR="/tmp"
FILENAME="openvscode.tar.gz"
FILE_PATH="$TMP_DIR/$FILENAME"
INSTALL_DIR="/opt/openvscode"
BIN_PATH="/usr/local/bin/openvscode-server"
TMUX_SESSION="openvscode-server"
# Function to detect architecture
get_architecture() {
ARCH=$(uname -m)
case "$ARCH" in
x86_64)
echo "x64"
;;
aarch64)
echo "arm64"
;;
*)
echo "Unsupported architecture: $ARCH" >&2
exit 1
;;
esac
}
# Check if OpenVSCode Server is already installed
if [ -d "$INSTALL_DIR" ] && [ -x "$BIN_PATH" ]; then
echo "OpenVSCode Server is already installed at $INSTALL_DIR. Skipping download and installation."
else
# Determine architecture-specific URL
ARCH=$(get_architecture)
if [ "$ARCH" == "x64" ]; then
DOWNLOAD_URL="https://github.com/gitpod-io/openvscode-server/releases/download/openvscode-server-insiders-v${OPENVSCODE_SERVER_VERSION}/openvscode-server-insiders-v${OPENVSCODE_SERVER_VERSION}-linux-x64.tar.gz"
elif [ "$ARCH" == "arm64" ]; then
DOWNLOAD_URL="https://github.com/gitpod-io/openvscode-server/releases/download/openvscode-server-insiders-v${OPENVSCODE_SERVER_VERSION}/openvscode-server-insiders-v${OPENVSCODE_SERVER_VERSION}-linux-arm64.tar.gz"
fi
# Navigate to temporary directory
cd "$TMP_DIR"
# Remove existing file if it exists
if [ -f "$FILE_PATH" ]; then
rm -f "$FILE_PATH"
fi
# Download file using curl
curl -L "$DOWNLOAD_URL" -o "$FILE_PATH"
# Verify file size is greater than 40 MB (40 * 1024 * 1024 bytes)
FILE_SIZE=$(stat -c%s "$FILE_PATH")
if [ "$FILE_SIZE" -le $((40 * 1024 * 1024)) ]; then
echo "Error: Downloaded file size is less than 40 MB." >&2
exit 1
fi
# Extract the tar.gz file
EXTRACT_DIR="openvscode-server-insiders-v${OPENVSCODE_SERVER_VERSION}-linux-${ARCH}"
tar -xzf "$FILE_PATH"
# Move the extracted directory to the install location
if [ -d "$INSTALL_DIR" ]; then
rm -rf "$INSTALL_DIR"
fi
mv "$EXTRACT_DIR" "$INSTALL_DIR"
# Create a symlink for easy access
ln -sf "$INSTALL_DIR/bin/openvscode-server" "$BIN_PATH"
# Verify installation
if ! command -v openvscode-server >/dev/null 2>&1; then
echo "Error: Failed to create symlink for openvscode-server." >&2
exit 1
fi
# Install default plugins
PLUGINS=("ms-python.python" "esbenp.prettier-vscode" "saoudrizwan.claude-dev" "yzhang.markdown-all-in-one" "ms-vscode-remote.remote-ssh" "ms-vscode.remote-explorer" "charliermarsh.ruff" "qwtel.sqlite-viewer" "vosca.vscode-v-analyzer" "tomoki1207.pdf")
for PLUGIN in "${PLUGINS[@]}"; do
"$INSTALL_DIR/bin/openvscode-server" --install-extension "$PLUGIN"
done
echo "Default plugins installed: ${PLUGINS[*]}"
# Clean up temporary directory
if [ -d "$TMP_DIR" ]; then
find "$TMP_DIR" -maxdepth 1 -type f -name "openvscode*" -exec rm -f {} \;
fi
fi
# Start OpenVSCode Server in a tmux session
if tmux has-session -t "$TMUX_SESSION" 2>/dev/null; then
tmux kill-session -t "$TMUX_SESSION"
fi
tmux new-session -d -s "$TMUX_SESSION" "$INSTALL_DIR/bin/openvscode-server"
echo "OpenVSCode Server is running in a tmux session named '$TMUX_SESSION'."

View File

@@ -1,14 +0,0 @@
#!/bin/bash -e
# redis-server --daemonize yes
# TMUX_SESSION="vscode"
# # Start OpenVSCode Server in a tmux session
# if tmux has-session -t "$TMUX_SESSION" 2>/dev/null; then
# tmux kill-session -t "$TMUX_SESSION"
# fi
# tmux new-session -d -s "$TMUX_SESSION" "/usr/local/bin/openvscode-server --host 0.0.0.0 --without-connection-token"
# service ssh start
exec /bin/bash

View File

@@ -1,61 +0,0 @@
#!/bin/bash -e
# Get the directory where the script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"
CONTAINER_NAME="herolib"
TARGET_PORT=4000
# Function to check if a container is running
is_container_running() {
docker ps --filter "name=$CONTAINER_NAME" --filter "status=running" -q
}
# Function to check if a port is accessible
is_port_accessible() {
nc -zv 127.0.0.1 "$1" &>/dev/null
}
# Check if the container exists and is running
if ! is_container_running; then
echo "Container $CONTAINER_NAME is not running."
# Check if the container exists but is stopped
if docker ps -a --filter "name=$CONTAINER_NAME" -q | grep -q .; then
echo "Starting existing container $CONTAINER_NAME..."
docker start "$CONTAINER_NAME"
else
echo "Container $CONTAINER_NAME does not exist. Attempting to start with start.sh..."
if [[ -f "$SCRIPT_DIR/start.sh" ]]; then
bash "$SCRIPT_DIR/start.sh"
else
echo "Error: start.sh not found in $SCRIPT_DIR."
exit 1
fi
fi
# Wait for the container to be fully up
sleep 5
fi
# Verify the container is running
if ! is_container_running; then
echo "Error: Failed to start container $CONTAINER_NAME."
exit 1
fi
echo "Container $CONTAINER_NAME is running."
# Check if the target port is accessible
if is_port_accessible "$TARGET_PORT"; then
echo "Port $TARGET_PORT is accessible."
else
echo "Port $TARGET_PORT is not accessible. Please check the service inside the container."
fi
# Enter the container
echo
echo " ** WE NOW LOGIN TO THE CONTAINER ** "
echo
docker exec -it herolib bash

View File

@@ -1,3 +0,0 @@
#!/bin/bash -e
ssh root@localhost -p 4022

View File

@@ -1,63 +0,0 @@
#!/bin/bash -e
# Get the directory where the script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"
# Define variables
CONTAINER_NAME="herolib"
CONTAINER_SSH_DIR="/root/.ssh"
AUTHORIZED_KEYS="authorized_keys"
TEMP_AUTH_KEYS="/tmp/authorized_keys"
# Step 1: Create a temporary file to store public keys
> $TEMP_AUTH_KEYS # Clear the file if it exists
# Step 2: Add public keys from ~/.ssh/ if they exist
if ls ~/.ssh/*.pub 1>/dev/null 2>&1; then
cat ~/.ssh/*.pub >> $TEMP_AUTH_KEYS
fi
# Step 3: Check if ssh-agent is running and get public keys from it
if pgrep ssh-agent >/dev/null; then
echo "ssh-agent is running. Fetching keys..."
ssh-add -L >> $TEMP_AUTH_KEYS 2>/dev/null
else
echo "ssh-agent is not running or no keys loaded."
fi
# Step 4: Ensure the temporary file is not empty
if [ ! -s $TEMP_AUTH_KEYS ]; then
echo "No public keys found. Exiting."
exit 1
fi
# Step 5: Ensure the container's SSH directory exists
docker exec -it $CONTAINER_NAME mkdir -p $CONTAINER_SSH_DIR
docker exec -it $CONTAINER_NAME chmod 700 $CONTAINER_SSH_DIR
# Step 6: Copy the public keys into the container's authorized_keys file
docker cp $TEMP_AUTH_KEYS $CONTAINER_NAME:$CONTAINER_SSH_DIR/$AUTHORIZED_KEYS
# Step 7: Set proper permissions for authorized_keys
docker exec -it $CONTAINER_NAME chmod 600 $CONTAINER_SSH_DIR/$AUTHORIZED_KEYS
# Step 8: Install and start the SSH server inside the container
docker exec -it $CONTAINER_NAME bash -c "
apt-get update &&
apt-get install -y openssh-server &&
mkdir -p /var/run/sshd &&
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config &&
echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config &&
chown -R root:root /root/.ssh &&
chmod -R 700 /root/.ssh/ &&
chmod 600 /root/.ssh/authorized_keys &&
service ssh start
"
# Step 9: Clean up temporary file on the host
rm $TEMP_AUTH_KEYS
echo "SSH keys added and SSH server configured. You can now SSH into the container."
ssh root@localhost -p 4022

View File

@@ -1,8 +0,0 @@
#!/bin/bash -e
# Get the directory where the script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"

View File

@@ -17,6 +17,76 @@ docker run --name herolib \
docker exec -it herolib /scripts/cleanup.sh
docker export herolib | gzip > ${HOME}/Downloads/herolib.tar.gz
docker kill herolib
# Detect the OS
detect_os() {
if [[ "$(uname)" == "Darwin" ]]; then
echo "osx"
elif [[ -f /etc/os-release ]]; then
. /etc/os-release
if [[ "$ID" == "ubuntu" ]]; then
echo "ubuntu"
fi
else
echo "unknown"
fi
}
OS=$(detect_os)
if [[ "$OS" == "osx" ]]; then
echo "Running on macOS..."
docker export herolib | gzip > "${HOME}/Downloads/herolib.tar.gz"
echo "Docker image exported to ${HOME}/Downloads/herolib.tar.gz"
elif [[ "$OS" == "ubuntu" ]]; then
echo "Running on Ubuntu..."
export TEMP_TAR="/tmp/herolib.tar"
# Export the Docker container to a tar file
docker export herolib > "$TEMP_TAR"
echo "Docker container exported to $TEMP_TAR"
# Import the tar file back as a single-layer image
docker import "$TEMP_TAR" herolib:single-layer
echo "Docker image imported as single-layer: herolib:single-layer"
# Log in to Docker Hub and push the image
docker login --username despiegk
docker tag herolib:single-layer despiegk/herolib:single-layer
docker push despiegk/herolib:single-layer
echo "Docker image pushed to Docker Hub as despiegk/herolib:single-layer"
# Optionally remove the tar file after importing
rm -f "$TEMP_TAR"
echo "Temporary file $TEMP_TAR removed"
else
echo "Unsupported OS detected. Exiting."
exit 1
fi
docker kill herolib
# Test the pushed Docker image locally
echo "Testing the Docker image locally..."
TEST_CONTAINER_NAME="test_herolib_container"
docker pull despiegk/herolib:single-layer
if [[ $? -ne 0 ]]; then
echo "Failed to pull the Docker image from Docker Hub. Exiting."
exit 1
fi
docker run --name "$TEST_CONTAINER_NAME" -d despiegk/herolib:single-layer
if [[ $? -ne 0 ]]; then
echo "Failed to run the Docker image as a container. Exiting."
exit 1
fi
docker ps | grep "$TEST_CONTAINER_NAME"
if [[ $? -eq 0 ]]; then
echo "Container $TEST_CONTAINER_NAME is running successfully."
else
echo "Container $TEST_CONTAINER_NAME is not running. Check the logs for details."
fi

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import os
import flag
@@ -64,7 +64,7 @@ os.symlink('${abs_dir_of_script}/lib', '${os.home_dir()}/.vmodules/freeflowunive
println('Herolib installation completed successfully!')
// Add vtest alias
addtoscript('alias vtest=', 'alias vtest=\'v -stats -enable-globals -n -w -cg -gc none -no-retry-compilation -cc tcc test\' ') or {
addtoscript('alias vtest=', 'alias vtest=\'v -stats -enable-globals -n -w -cg -gc none -cc tcc test\' ') or {
eprintln('Failed to add vtest alias: ${err}')
}

View File

@@ -0,0 +1,22 @@
version: '3.9'
services:
db:
image: 'postgres:17.2-alpine3.21'
restart: always
ports:
- 5432:5432
environment:
POSTGRES_PASSWORD: 1234
networks:
- my_network
adminer:
image: adminer
restart: always
ports:
- 8080:8080
networks:
- my_network
networks:
my_network:

View File

@@ -0,0 +1,6 @@
Server (Host): db (because Docker Compose creates an internal network and uses service names as hostnames)
Username: postgres (default PostgreSQL username)
Password: 1234 (as set in your POSTGRES_PASSWORD environment variable)
Database: Leave it empty or enter postgres (default database)

13
docker/postgresql/start.sh Executable file
View File

@@ -0,0 +1,13 @@
#!/bin/bash -e
# Get the directory where the script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$SCRIPT_DIR"
# Stop any existing containers and remove them
docker compose down
# Start the services in detached mode
docker compose up -d
echo "PostgreSQL is ready"

View File

@@ -34,7 +34,7 @@ The examples directory demonstrates various capabilities of HeroLib:
When creating V scripts (.vsh files), always use the following shebang:
```bash
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
```
This shebang ensures:

View File

@@ -1,12 +1,11 @@
module main
import freeflowuniverse.herolib.osal
import freeflowuniverse.herolib.installers.base
import freeflowuniverse.herolib.core
fn do() ! {
// base.uninstall_brew()!
// println("something")
if osal.is_osx() {
if core.is_osx()! {
println('IS OSX')
}

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.builder
import freeflowuniverse.herolib.core.pathlib

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.builder
import freeflowuniverse.herolib.core.pathlib

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.builder
import freeflowuniverse.herolib.core.pathlib

25
examples/clients/mail.vsh Executable file
View File

@@ -0,0 +1,25 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.clients.mailclient
// remove the previous one, otherwise the env variables are not read
mailclient.config_delete(name: 'test')!
// env variables which need to be set are:
// - MAIL_FROM=...
// - MAIL_PASSWORD=...
// - MAIL_PORT=465
// - MAIL_SERVER=...
// - MAIL_USERNAME=...
mut client := mailclient.get(name: 'test')!
println(client)
client.send(
subject: 'this is a test'
to: 'kristof@incubaid.com'
body: '
this is my email content
'
)!

43
examples/clients/psql.vsh Executable file
View File

@@ -0,0 +1,43 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core
import freeflowuniverse.herolib.clients.postgresql_client
// Configure PostgreSQL client
heroscript := "
!!postgresql_client.configure
name:'test'
user: 'postgres'
port: 5432
host: 'localhost'
password: '1234'
dbname: 'postgres'
"
// Process the heroscript configuration
postgresql_client.play(heroscript: heroscript)!
// Get the configured client
mut db_client := postgresql_client.get(name: 'test')!
// Check if test database exists, create if not
if !db_client.db_exists('test')! {
println('Creating database test...')
db_client.db_create('test')!
}
// Switch to test database
db_client.dbname = 'test'
// Create table if not exists
create_table_sql := 'CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
email VARCHAR(255) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)'
println('Creating table users if not exists...')
db_client.exec(create_table_sql)!
println('Database and table setup completed successfully!')

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.base

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.pathlib
import freeflowuniverse.herolib.core.base

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.base
import freeflowuniverse.herolib.develop.gittools

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import os
import freeflowuniverse.herolib.core.codeparser

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -gc none -cc tcc -d use_openssl -enable-globals run
import time
import freeflowuniverse.herolib.core.smartid

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.dbfs
import time

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.generator.installer

View File

@@ -1,7 +1,7 @@
module dagu
// import os
import freeflowuniverse.herolib.clients.httpconnection
import freeflowuniverse.herolib.core.httpconnection
import os
struct GiteaClient[T] {

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import os
import json

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.pathlib
import os

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.pathlib
import os

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.pathlib
import freeflowuniverse.herolib.data.paramsparser

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.pathlib
import os

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.crypt.secrets

1
examples/data/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
cache

139
examples/data/cache.vsh Executable file
View File

@@ -0,0 +1,139 @@
#!/usr/bin/env -S v run
// Example struct to cache
import freeflowuniverse.herolib.data.cache
import time
@[heap]
struct User {
id u32
name string
age int
}
fn main() {
// Create a cache with custom configuration
config := cache.CacheConfig{
max_entries: 1000 // Maximum number of entries
max_size_mb: 10.0 // Maximum cache size in MB
ttl_seconds: 300 // Items expire after 5 minutes
eviction_ratio: 0.2 // Evict 20% of entries when full
}
mut user_cache := cache.new_cache[User](config)
// Create some example users
user1 := &User{
id: 1
name: 'Alice'
age: 30
}
user2 := &User{
id: 2
name: 'Bob'
age: 25
}
// Add users to cache
println('Adding users to cache...')
user_cache.set(user1.id, user1)
user_cache.set(user2.id, user2)
// Retrieve users from cache
println('\nRetrieving users from cache:')
if cached_user1 := user_cache.get(1) {
println('Found user 1: ${cached_user1.name}, age ${cached_user1.age}')
}
if cached_user2 := user_cache.get(2) {
println('Found user 2: ${cached_user2.name}, age ${cached_user2.age}')
}
// Try to get non-existent user
println('\nTrying to get non-existent user:')
if user := user_cache.get(999) {
println('Found user: ${user.name}')
} else {
println('User not found in cache')
}
// Demonstrate cache stats
println('\nCache statistics:')
println('Number of entries: ${user_cache.len()}')
// Clear the cache
println('\nClearing cache...')
user_cache.clear()
println('Cache entries after clear: ${user_cache.len()}')
// Demonstrate max entries limit
println('\nDemonstrating max entries limit (adding 2000 entries):')
println('Initial cache size: ${user_cache.len()}')
for i := u32(0); i < 2000; i++ {
user := &User{
id: i
name: 'User${i}'
age: 20 + int(i % 50)
}
user_cache.set(i, user)
if i % 200 == 0 {
println('After adding ${i} entries:')
println(' Cache size: ${user_cache.len()}')
// Check some entries to verify LRU behavior
if i >= 500 {
old_id := if i < 1000 { u32(0) } else { i - 1000 }
recent_id := i - 1
println(' Entry ${old_id} (old): ${if _ := user_cache.get(old_id) {
'found'
} else {
'evicted'
}}')
println(' Entry ${recent_id} (recent): ${if _ := user_cache.get(recent_id) {
'found'
} else {
'evicted'
}}')
}
println('')
}
}
println('Final statistics:')
println('Cache size: ${user_cache.len()} (should be max 1000)')
// Verify we can only access recent entries
println('\nVerifying LRU behavior:')
println('First entry (0): ${if _ := user_cache.get(0) { 'found' } else { 'evicted' }}')
println('Middle entry (1000): ${if _ := user_cache.get(1000) { 'found' } else { 'evicted' }}')
println('Recent entry (1900): ${if _ := user_cache.get(1900) { 'found' } else { 'evicted' }}')
println('Last entry (1999): ${if _ := user_cache.get(1999) { 'found' } else { 'evicted' }}')
// Demonstrate TTL expiration
println('\nDemonstrating TTL expiration:')
quick_config := cache.CacheConfig{
ttl_seconds: 2 // Set short TTL for demo
}
mut quick_cache := cache.new_cache[User](quick_config)
// Add a user
quick_cache.set(user1.id, user1)
println('Added user to cache with 2 second TTL')
if cached := quick_cache.get(user1.id) {
println('User found immediately: ${cached.name}')
}
// Wait for TTL to expire
println('Waiting for TTL to expire...')
time.sleep(3 * time.second)
if _ := quick_cache.get(user1.id) {
println('User still in cache')
} else {
println('User expired from cache as expected')
}
}
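
A hedged "get or compute" sketch, not part of this diff, built only from the CacheConfig/new_cache/get/set/len calls used above; the User struct matches the example and the miss-handling values are illustrative.

#!/usr/bin/env -S v run
import freeflowuniverse.herolib.data.cache

@[heap]
struct User {
	id   u32
	name string
	age  int
}

config := cache.CacheConfig{
	max_entries:    100
	max_size_mb:    10.0
	ttl_seconds:    300
	eviction_ratio: 0.2
}
mut user_cache := cache.new_cache[User](config)
id := u32(42)
user := user_cache.get(id) or {
	// cache miss: build the value, store it, and yield it from the or-block
	fresh := &User{
		id:   id
		name: 'User${id}'
		age:  30
	}
	user_cache.set(id, fresh)
	fresh
}
println('got ${user.name} (cache entries: ${user_cache.len()})')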

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.encoder
import crypto.ed25519

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.crypt.aes_symmetric { decrypt, encrypt }
import freeflowuniverse.herolib.ui.console

175
examples/data/graphdb.vsh Executable file
View File

@@ -0,0 +1,175 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
// Example demonstrating GraphDB usage in a social network context
import freeflowuniverse.herolib.data.graphdb
fn main() {
// Initialize a new graph database with default cache settings
mut gdb := graphdb.new(
path: '/tmp/social_network_example'
reset: true // Start fresh each time
)!
println('=== Social Network Graph Example ===\n')
// 1. Creating User Nodes
println('Creating users...')
mut alice_id := gdb.create_node({
'type': 'user'
'name': 'Alice Chen'
'age': '28'
'location': 'San Francisco'
'occupation': 'Software Engineer'
})!
println('Created user: ${gdb.debug_node(alice_id)!}')
mut bob_id := gdb.create_node({
'type': 'user'
'name': 'Bob Smith'
'age': '32'
'location': 'New York'
'occupation': 'Product Manager'
})!
println('Created user: ${gdb.debug_node(bob_id)!}')
mut carol_id := gdb.create_node({
'type': 'user'
'name': 'Carol Davis'
'age': '27'
'location': 'San Francisco'
'occupation': 'Data Scientist'
})!
println('Created user: ${gdb.debug_node(carol_id)!}')
// 2. Creating Organization Nodes
println('\nCreating organizations...')
mut techcorp_id := gdb.create_node({
'type': 'organization'
'name': 'TechCorp'
'industry': 'Technology'
'location': 'San Francisco'
'size': '500+'
})!
println('Created organization: ${gdb.debug_node(techcorp_id)!}')
mut datacorp_id := gdb.create_node({
'type': 'organization'
'name': 'DataCorp'
'industry': 'Data Analytics'
'location': 'New York'
'size': '100-500'
})!
println('Created organization: ${gdb.debug_node(datacorp_id)!}')
// 3. Creating Interest Nodes
println('\nCreating interest groups...')
mut ai_group_id := gdb.create_node({
'type': 'group'
'name': 'AI Enthusiasts'
'category': 'Technology'
'members': '0'
})!
println('Created group: ${gdb.debug_node(ai_group_id)!}')
// 4. Establishing Relationships
println('\nCreating relationships...')
// Friendship relationships
gdb.create_edge(alice_id, bob_id, 'FRIENDS', {
'since': '2022'
'strength': 'close'
})!
gdb.create_edge(alice_id, carol_id, 'FRIENDS', {
'since': '2023'
'strength': 'close'
})!
// Employment relationships
gdb.create_edge(alice_id, techcorp_id, 'WORKS_AT', {
'role': 'Senior Engineer'
'since': '2021'
'department': 'Engineering'
})!
gdb.create_edge(bob_id, datacorp_id, 'WORKS_AT', {
'role': 'Product Lead'
'since': '2020'
'department': 'Product'
})!
gdb.create_edge(carol_id, techcorp_id, 'WORKS_AT', {
'role': 'Data Scientist'
'since': '2022'
'department': 'Analytics'
})!
// Group memberships
gdb.create_edge(alice_id, ai_group_id, 'MEMBER_OF', {
'joined': '2023'
'status': 'active'
})!
gdb.create_edge(carol_id, ai_group_id, 'MEMBER_OF', {
'joined': '2023'
'status': 'active'
})!
// 5. Querying the Graph
println('\nPerforming queries...')
// Find users in San Francisco
println('\nUsers in San Francisco:')
sf_users := gdb.query_nodes_by_property('location', 'San Francisco')!
for user in sf_users {
if user.properties['type'] == 'user' {
println('- ${user.properties['name']} (${user.properties['occupation']})')
}
}
// Find Alice's friends
println("\nAlice's friends:")
alice_friends := gdb.get_connected_nodes(alice_id, 'FRIENDS', 'out')!
for friend in alice_friends {
println('- ${friend.properties['name']} in ${friend.properties['location']}')
}
// Find where Alice works
println("\nAlice's workplace:")
alice_workplaces := gdb.get_connected_nodes(alice_id, 'WORKS_AT', 'out')!
for workplace in alice_workplaces {
println('- ${workplace.properties['name']} (${workplace.properties['industry']})')
}
// Find TechCorp employees
println('\nTechCorp employees:')
techcorp_employees := gdb.get_connected_nodes(techcorp_id, 'WORKS_AT', 'in')!
for employee in techcorp_employees {
println('- ${employee.properties['name']} as ${employee.properties['occupation']}')
}
// Find AI group members
println('\nAI Enthusiasts group members:')
ai_members := gdb.get_connected_nodes(ai_group_id, 'MEMBER_OF', 'in')!
for member in ai_members {
println('- ${member.properties['name']}')
}
// 6. Updating Data
println('\nUpdating data...')
// Promote Alice
println('\nPromoting Alice...')
mut alice := gdb.get_node(alice_id)!
alice.properties['occupation'] = 'Lead Software Engineer'
gdb.update_node(alice_id, alice.properties)!
// Update Alice's work relationship
mut edges := gdb.get_edges_between(alice_id, techcorp_id)!
if edges.len > 0 {
gdb.update_edge(edges[0].id, {
'role': 'Engineering Team Lead'
'since': '2021'
'department': 'Engineering'
})!
}
println('\nFinal graph structure:')
gdb.print_graph()!
}
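
A hedged fragment, not part of this diff, that could slot in just before print_graph() in main() above; it reuses only the get_connected_nodes() call and the gdb/alice_id/bob_id variables already defined there.

// mutual friends of Alice and Bob, using only get_connected_nodes()
// (with this sample data Bob has no outgoing FRIENDS edges, so the list prints empty)
alice_friend_nodes := gdb.get_connected_nodes(alice_id, 'FRIENDS', 'out')!
bob_friend_nodes := gdb.get_connected_nodes(bob_id, 'FRIENDS', 'out')!
mut mutual := []string{}
for a_friend in alice_friend_nodes {
	for b_friend in bob_friend_nodes {
		if a_friend.properties['name'] == b_friend.properties['name'] {
			mutual << a_friend.properties['name']
		}
	}
}
println('Mutual friends of Alice and Bob: ${mutual}')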

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.encoderhero
import freeflowuniverse.herolib.core.base

View File

@@ -0,0 +1,29 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.encoderhero
import freeflowuniverse.herolib.core.base
import time
struct Person {
mut:
name string
age int = 20
birthday time.Time
}
mut person := Person{
name: 'Bob'
birthday: time.now()
}
heroscript := encoderhero.encode[Person](person)!
println(heroscript)
person2 := encoderhero.decode[Person](heroscript)!
println(person2)
// show that the actor and action names in the heroscript don't matter for decoding
heroscript2 := "!!a.b name:Bob age:20 birthday:'2025-02-06 09:57:30'"
person3 := encoderhero.decode[Person](heroscript2)!
println(person3)

35
examples/data/jsonexample.vsh Executable file
View File

@@ -0,0 +1,35 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import json
enum JobTitle {
manager
executive
worker
}
struct Employee {
mut:
name string
family string @[json: '-'] // this field will be skipped
age int
salary f32
title JobTitle @[json: 'ETitle'] // the key for this field will be 'ETitle', not 'title'
notes string @[omitempty] // the JSON property is not created if the string is equal to '' (an empty string).
// TODO: document @[raw]
}
x := Employee{'Peter', 'Begins', 28, 95000.5, .worker, ''}
println(x)
s := json.encode(x)
println('JSON encoding of employee x: ${s}')
assert s == '{"name":"Peter","age":28,"salary":95000.5,"ETitle":"worker"}'
mut y := json.decode(Employee, s)!
assert y != x
assert y.family == ''
y.family = 'Begins'
assert y == x
println(y)
ss := json.encode(y)
println('JSON encoding of employee y: ${ss}')
assert ss == s
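
A hedged follow-up, not part of this diff, reusing the Employee struct and json.encode call above to show @[omitempty] in the other direction; the values for employee z are illustrative.

z := Employee{'Zoe', 'Brown', 31, 88000.0, .manager, 'remote'}
// family is still skipped via @[json: '-'], but notes now appears because it is non-empty
println('JSON encoding of employee z: ${json.encode(z)}')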

View File

@@ -0,0 +1,63 @@
#!/usr/bin/env -S v -n -w -cg -d use_openssl -enable-globals run
import freeflowuniverse.herolib.clients.postgresql_client
import freeflowuniverse.herolib.data.location
// Configure PostgreSQL client
heroscript := "
!!postgresql_client.configure
name:'test'
user: 'postgres'
port: 5432
host: 'localhost'
password: '1234'
dbname: 'postgres'
"
// Process the heroscript configuration
postgresql_client.play(heroscript: heroscript)!
// Get the configured client
mut db_client := postgresql_client.get(name: 'test')!
// Create a new location instance
mut loc := location.new(mut db_client, false) or { panic(err) }
println('Location database initialized')
// Initialize the database (downloads and imports data)
// This only needs to be done once or when updating data
println('Downloading and importing location data (this may take a few minutes)...')
// the boolean argument controls whether the data is re-downloaded
loc.download_and_import(false) or { panic(err) }
println('Data import complete')
// // Example 1: Search for a city
// println('\nSearching for London...')
// results := loc.search('London', 'GB', 5, true) or { panic(err) }
// for result in results {
// println('${result.city.name}, ${result.country.name} (${result.country.iso2})')
// println('Coordinates: ${result.city.latitude}, ${result.city.longitude}')
// println('Population: ${result.city.population}')
// println('Timezone: ${result.city.timezone}')
// println('---')
// }
// // Example 2: Search near coordinates (10km radius from London)
// println('\nSearching for cities within 10km of London...')
// nearby := loc.search_near(51.5074, -0.1278, 10.0, 5) or { panic(err) }
// for result in nearby {
// println('${result.city.name}, ${result.country.name}')
// println('Distance from center: Approx ${result.similarity:.1f}km')
// println('---')
// }
// // Example 3: Fuzzy search in a specific country
// println('\nFuzzy searching for "New" in United States...')
// us_cities := loc.search('New', 'US', 5, true) or { panic(err) }
// for result in us_cities {
// println('${result.city.name}, ${result.country.name}')
// println('State: ${result.city.state_name} (${result.city.state_code})')
// println('Population: ${result.city.population}')
// println('---')
// }

View File

@@ -0,0 +1,63 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.clients.postgresql_client
import freeflowuniverse.herolib.data.location
// Configure PostgreSQL client
heroscript := "
!!postgresql_client.configure
name:'test'
user: 'postgres'
port: 5432
host: 'localhost'
password: '1234'
dbname: 'postgres'
"
// Process the heroscript configuration
postgresql_client.play(heroscript: heroscript)!
// Get the configured client
mut db_client := postgresql_client.get(name: 'test')!
// Create a new location instance
mut loc := location.new(mut db_client, false) or { panic(err) }
println('Location database initialized')
// Initialize the database (downloads and imports data)
// This only needs to be done once or when updating data
println('Downloading and importing location data (this may take a few minutes)...')
// the boolean argument controls whether the data is re-downloaded
loc.download_and_import(false) or { panic(err) }
println('Data import complete')
// // Example 1: Search for a city
// println('\nSearching for London...')
// results := loc.search('London', 'GB', 5, true) or { panic(err) }
// for result in results {
// println('${result.city.name}, ${result.country.name} (${result.country.iso2})')
// println('Coordinates: ${result.city.latitude}, ${result.city.longitude}')
// println('Population: ${result.city.population}')
// println('Timezone: ${result.city.timezone}')
// println('---')
// }
// // Example 2: Search near coordinates (10km radius from London)
// println('\nSearching for cities within 10km of London...')
// nearby := loc.search_near(51.5074, -0.1278, 10.0, 5) or { panic(err) }
// for result in nearby {
// println('${result.city.name}, ${result.country.name}')
// println('Distance from center: Approx ${result.similarity:.1f}km')
// println('---')
// }
// // Example 3: Fuzzy search in a specific country
// println('\nFuzzy searching for "New" in United States...')
// us_cities := loc.search('New', 'US', 5, true) or { panic(err) }
// for result in us_cities {
// println('${result.city.name}, ${result.country.name}')
// println('State: ${result.city.state_name} (${result.city.state_code})')
// println('Population: ${result.city.population}')
// println('---')
// }

40
examples/data/ourdb_example.vsh Executable file
View File

@@ -0,0 +1,40 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.ourdb
const test_dir = '/tmp/ourdb'
mut db := ourdb.new(
record_nr_max: 16777216 - 1 // maximum number of records
record_size_max: 1024
path: test_dir
reset: true
)!
defer {
db.destroy() or { panic('failed to destroy db: ${err}') }
}
// Test set and get
test_data := 'Hello, World!'.bytes()
id := db.set(data: test_data)!
retrieved := db.get(id)!
assert retrieved == test_data
assert id == 0
// Test overwrite
new_data := 'Updated data'.bytes()
id2 := db.set(id: 0, data: new_data)!
assert id2 == 0
// // Verify lookup table has the correct location
// location := db.lookup.get(id2)!
// println('Location after update - file_nr: ${location.file_nr}, position: ${location.position}')
// Get and verify the updated data
retrieved2 := db.get(id2)!
println('Retrieved data: ${retrieved2}')
println('Expected data: ${new_data}')
assert retrieved2 == new_data
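
A hedged fragment, not part of this diff, that could follow the asserts above; it reuses only db.set() and db.get(), and the record contents are illustrative.

// store a few more records and read each one straight back
for i in 0 .. 3 {
	rid := db.set(data: 'record ${i}'.bytes())!
	println('record ${i} stored under id ${rid}: ${db.get(rid)!.bytestr()}')
}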

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.core.playbook
import freeflowuniverse.herolib.data.paramsparser

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.paramsparser { Params, parse }
import time

33
examples/data/radixtree.vsh Executable file
View File

@@ -0,0 +1,33 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.radixtree
mut rt := radixtree.new(path: '/tmp/radixtree_test', reset: true)!
// Show initial state
println('\nInitial state:')
rt.debug_db()!
// Test insert
println('\nInserting key "test" with value "value1"')
rt.insert('test', 'value1'.bytes())!
// Show state after insert
println('\nState after insert:')
rt.debug_db()!
// Print tree structure
rt.print_tree()!
// Test search
if value := rt.search('test') {
println('\nFound value: ${value.bytestr()}')
} else {
println('\nError: ${err}')
}
println('\nInserting key "test2" with value "value2"')
rt.insert('test2', 'value2'.bytes())!
// Print tree structure
rt.print_tree()!
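
A hedged lookup loop, not part of this diff, over the two keys inserted above plus a missing one; it uses only the search() call from the example.

// check both inserted keys and one that was never inserted
for key in ['test', 'test2', 'missing'] {
	if value := rt.search(key) {
		println('${key} -> ${value.bytestr()}')
	} else {
		println('${key} -> not found')
	}
}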

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.data.resp
import crypto.ed25519

View File

@@ -1,30 +1,10 @@
#!/usr/bin/env -S v -n -w -no-retry-compilation -d use_openssl -enable-globals run
//#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cg -cc tcc -d use_openssl -enable-globals run
// #!/usr/bin/env -S v -n -w -cg -d use_openssl -enable-globals run
//-parallel-cc
import os
import freeflowuniverse.herolib.develop.gittools
// import freeflowuniverse.herolib.develop.performance
mut silent := false
mut gs := gittools.get(reload: true)!
coderoot := if 'CODEROOT' in os.environ() {
os.environ()['CODEROOT']
} else {
os.join_path(os.home_dir(), 'code')
}
// timer := performance.new('gittools')
mut gs := gittools.get()!
if coderoot.len > 0 {
// is a hack for now
gs = gittools.new(coderoot: coderoot)!
}
mypath := gs.do(
recursive: true
cmd: 'list'
)!
// timer.timeline()
gs.repos_print()!

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.develop.gittools
import freeflowuniverse.herolib.osal

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.develop.gittools
import freeflowuniverse.herolib.osal

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import os
import freeflowuniverse.herolib.osal

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.sysadmin.startupmanager
import os

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.develop.luadns

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.clients.openai as op

View File

@@ -0,0 +1,93 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
// import freeflowuniverse.herolib.core.base
import freeflowuniverse.herolib.clients.runpod
import json
import x.json2
// Create the client; the API key is read from the RUNPOD_API_KEY environment variable
mut rp := runpod.get()!
// Create a new on demand pod
on_demand_pod_response := rp.create_on_demand_pod(
name: 'RunPod Tensorflow'
image_name: 'runpod/tensorflow'
cloud_type: 'ALL'
gpu_count: 1
volume_in_gb: 5
container_disk_in_gb: 5
min_memory_in_gb: 4
min_vcpu_count: 1
gpu_type_id: 'NVIDIA RTX A4000'
ports: '8888/http'
volume_mount_path: '/workspace'
env: [
runpod.EnvironmentVariableInput{
key: 'JUPYTER_PASSWORD'
value: 'rn51hunbpgtltcpac3ol'
},
]
)!
println('Created pod with ID: ${on_demand_pod_response.id}')
// create a spot pod
spot_pod_response := rp.create_spot_pod(
port: 1826
bid_per_gpu: 0.2
cloud_type: 'SECURE'
gpu_count: 1
volume_in_gb: 5
container_disk_in_gb: 5
min_vcpu_count: 1
min_memory_in_gb: 4
gpu_type_id: 'NVIDIA RTX A4000'
name: 'RunPod Pytorch'
image_name: 'runpod/pytorch'
docker_args: ''
ports: '8888/http'
volume_mount_path: '/workspace'
env: [
runpod.EnvironmentVariableInput{
key: 'JUPYTER_PASSWORD'
value: 'rn51hunbpgtltcpac3ol'
},
]
)!
println('Created spot pod with ID: ${spot_pod_response.id}')
// stop on-demand pod
stop_on_demand_pod := rp.stop_pod(
pod_id: '${on_demand_pod_response.id}'
)!
println('Stopped on-demand pod with ID: ${stop_on_demand_pod.id}')
// stop spot pod
stop_spot_pod := rp.stop_pod(
pod_id: '${spot_pod_response.id}'
)!
println('Stopped spot pod with ID: ${stop_spot_pod.id}')
// start on-demand pod
start_on_demand_pod := rp.start_on_demand_pod(pod_id: '${on_demand_pod_response.id}', gpu_count: 1)!
println('Started on demand pod with ID: ${on_demand_pod_response.id}')
// start spot pod
start_spot_pod := rp.start_spot_pod(
pod_id: '${spot_pod_response.id}'
gpu_count: 1
bid_per_gpu: 0.2
)!
println('Started spot pod with ID: ${spot_pod_response.id}')
get_pod := rp.get_pod(
pod_id: '${spot_pod_response.id}'
)!
println('Get pod result: ${get_pod}')
rp.terminate_pod(pod_id: '${spot_pod_response.id}')!
println('pod with id ${spot_pod_response.id} is terminated')
rp.terminate_pod(pod_id: '${on_demand_pod_response.id}')!
println('pod with id ${on_demand_pod_response.id} is terminated')

View File

@@ -0,0 +1,66 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.clients.vastai
import json
import x.json2
// Create the client; the API key is read from the VASTAI_API_KEY environment variable
mut va := vastai.get()!
offers := va.search_offers()!
println('offers: ${offers}')
top_offers := va.get_top_offers(5)!
println('top offers: ${top_offers}')
create_instance_res := va.create_instance(
id: top_offers[0].id
config: vastai.CreateInstanceConfig{
image: 'pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime'
disk: 10
}
)!
println('create instance res: ${create_instance_res}')
attach_sshkey_to_instance_res := va.attach_sshkey_to_instance(
id: 1
ssh_key: 'ssh-rsa AAAA...'
)!
println('attach sshkey to instance res: ${attach_sshkey_to_instance_res}')
stop_instance_res := va.stop_instance(
id: 1
state: 'stopped'
)!
println('stop instance res: ${stop_instance_res}')
destroy_instance_res := va.destroy_instance(
id: 1
)!
println('destroy instance res: ${destroy_instance_res}')
// Note: at the time of writing this call fails with a server-side 500 error:
// (request failed with code 500: {"error":"server_error","msg":"Something went wrong on the server"})
launch_instance_res := va.launch_instance(
// Required
num_gpus: 1
gpu_name: 'RTX_3090'
image: 'vastai/tensorflow'
disk: 10
region: 'us-west'
// Optional
env: 'user=7amada, home=/home/7amada'
)!
println('launch instance res: ${launch_instance_res}')
start_instances_res := va.start_instances(
ids: [1, 2, 3]
)!
println('start instances res: ${start_instances_res}')
start_instance_res := va.start_instance(
id: 1
)!
println('start instance res: ${start_instance_res}')

View File

@@ -0,0 +1,8 @@
[Interface]
Address = 10.10.3.0/24
PrivateKey = wDewSiri8jlaGnUDN6SwK7QhN082U7gfX27YMGILvVA=
[Peer]
PublicKey = 2JEGJQ8FbajdFk0fFs/881H/D3FRjwlUxvNDZFxDeWQ=
AllowedIPs = 10.10.0.0/16, 100.64.0.0/16
PersistentKeepalive = 25
Endpoint = 185.206.122.31:3241

View File

@@ -0,0 +1,35 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -d use_openssl -enable-globals run
import freeflowuniverse.herolib.clients.wireguard
import freeflowuniverse.herolib.installers.net.wireguard as wireguard_installer
import time
import os
mut wg_installer := wireguard_installer.get()!
wg_installer.install()!
// Create Wireguard client
mut wg := wireguard.get()!
config_file_path := '${os.dir(@FILE)}/wg0.conf'
wg.start(config_file_path: config_file_path)!
println('${config_file_path} is started')
time.sleep(time.second * 2)
info := wg.show()!
println('info: ${info}')
config := wg.show_config(interface_name: 'wg0')!
println('config: ${config}')
private_key := wg.generate_private_key()!
println('private_key: ${private_key}')
public_key := wg.get_public_key(private_key: private_key)!
println('public_key: ${public_key}')
wg.down(config_file_path: config_file_path)!
println('${config_file_path} is down')
wg_installer.destroy()!
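
A hedged fragment, not part of this diff, that could slot in before the wg.down() call above; it sticks to the generate_private_key()/get_public_key() calls already shown plus os.write_file from the V standard library, and the address and file name are placeholders.

// build a minimal [Interface] config from a freshly generated keypair
new_priv := wg.generate_private_key()!
new_pub := wg.get_public_key(private_key: new_priv)!
conf_txt := '[Interface]
Address = 10.10.4.0/24
PrivateKey = ${new_priv}
'
os.write_file('${os.dir(@FILE)}/wg1.conf', conf_txt)!
println('wrote wg1.conf, share this public key with your peer: ${new_pub}')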

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.hero.bootstrap

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.hero.generation

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.hero.generation

View File

@@ -1 +1 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

View File

@@ -1 +1 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import example_actor

View File

@@ -1 +1 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.sysadmintools.actrunner
import freeflowuniverse.herolib.installers.virt.herocontainers

11
examples/installers/buildah.vsh Executable file
View File

@@ -0,0 +1,11 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.virt.buildah as buildah_installer
mut buildah := buildah_installer.get()!
// To install
buildah.install()!
// To remove
buildah.destroy()!

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.fediverse.conduit

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.infra.coredns as coredns_installer

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.sysadmintools.daguserver
import freeflowuniverse.herolib.installers.infra.zinit

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.sysadmintools.daguserver

11
examples/installers/docker.vsh Executable file
View File

@@ -0,0 +1,11 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.virt.docker as docker_installer
mut docker := docker_installer.get()!
// To install
docker.install()!
// To remove
docker.destroy()!

View File

@@ -1,34 +1,16 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.infra.gitea as gitea_installer
// First of all, we need to set the gitea configuration
// heroscript := "
// !!gitea.configure
// name:'default'
// version:'1.22.6'
// path: '/var/lib/git'
// passwd: '12345678'
// postgresql_name: 'default'
// mail_from: 'git@meet.tf'
// smtp_addr: 'smtp-relay.brevo.com'
// smtp_login: 'admin'
// smtp_port: 587
// smtp_passwd: '12345678'
// domain: 'meet.tf'
// jwt_secret: ''
// lfs_jwt_secret: ''
// internal_token: ''
// secret_key: ''
// "
mut installer := gitea_installer.get(name: 'test')!
// gitea_installer.play(
// name: 'default'
// heroscript: heroscript
// )!
// if you want to configure using heroscript
gitea_installer.play(
heroscript: "
!!gitea.configure name:test
passwd:'something'
domain: 'docs.info.com'
"
)!
// Then we need to get an instance of the installer and call install()
mut gitea := gitea_installer.get()!
// println('gitea configs: ${gitea}')
gitea.install()!
gitea.start()!
installer.start()!

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.threefold.griddriver

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.lang.vlang
import freeflowuniverse.herolib.installers.sysadmintools.daguserver

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.osal
import freeflowuniverse.herolib.installers.lang.golang

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -cg -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.lang.rust
import freeflowuniverse.herolib.installers.lang.python

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.net.mycelium as mycelium_installer

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import freeflowuniverse.herolib.installers.virt.podman as podman_installer

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env -S v -n -w -gc none -no-retry-compilation -cc tcc -d use_openssl -enable-globals run
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import time
import freeflowuniverse.herolib.installers.db.postgresql

Some files were not shown because too many files have changed in this diff.