Compare commits

..

16 Commits

Author SHA1 Message Date
e34ba394b9 Merge branch 'development' into development_heroprompt_v2
* development: (182 commits)
  ...
  ...
  fix ci
  ...
  fix: Ignore regex_convert_test.v test
  refactor: Replace codewalker with pathlib and filemap
  test: Ignore virt/heropods/network_test.v in CI
  feat: implement container keep-alive feature
  ...
  ...
  ...
  ...
  ...
  ...
  ...
  ...
  test: Add comprehensive heropods network and container tests
  ...
  ...
  codewalker
  ...

# Conflicts:
#	lib/develop/heroprompt/heroprompt_workspace.v
2025-11-25 18:54:14 +01:00
43308dfbe1 Merge branch 'development' into development_heroprompt_v2
* development:
  ...
  feat: Update site page source references
  feat: Add announcement bar configuration
  ...
  Update the pages
  ...
  ...
  ...
  ...
  ...
  ...
  ...
  ...
  ...
  fix: Improve Docusaurus link generation logic
2025-10-21 06:45:24 +02:00
Mahmoud-Emad
e8904ea1ce chore: Formating the pages 2025-10-15 15:38:13 +03:00
Mahmoud-Emad
3d25fe0f04 refactor: Update import paths and save logic
- Update import paths from `freeflowuniverse.herolib` to `incubaid.herolib`
- Ensure `ws.parent.save()` is only called when `ws.parent` is present
- Remove redundant symlink cleanup for `freeflowuniverse.herolib`
2025-10-15 15:03:25 +03:00
Mahmoud-Emad
d91957b945 Merge branch 'development' into development_heroprompt_v2 2025-10-15 14:40:14 +03:00
Mahmoud-Emad
923f8c24e7 feat: Improve HeroPrompt file selection and workspace management
- Refactor Directory struct and its methods.
- Update file selection logic for directories and files.
- Enhance prompt generation with better file mapping.
- Add unit tests for directory and file operations.
- Improve workspace management with auto-save and logging.
2025-10-12 12:16:52 +03:00
Mahmoud-Emad
40ad68e0ff Merge branch 'development' into development_heroprompt_v2 2025-10-10 15:25:22 +03:00
Mahmoud-Emad
1762387301 feat: add recursive file and directory search
- Introduce SearchResult struct for search items
- Implement search_files and recursive search logic
- Migrate heroprompt file operations to pathlib
- Update expand_home_path to use pathlib
- Integrate codewalker and pathlib in example script
2025-09-14 12:42:43 +03:00
ea9286687d Merge branch 'development' into development_heroprompt_v2
* development:
  Trigger security scan
  Add Github Actions Security workflow
2025-09-11 08:08:27 +04:00
Mahmoud-Emad
cc837a1427 feat: enhance file selection and prompt generation
- Add gitignore filtering to file tree and search
- Introduce recursive directory listing API
- Enable recursive directory selection in UI
- Pass selected paths directly for prompt generation
- Refactor API endpoint names and error handling
2025-09-09 16:31:08 +03:00
Mahmoud-Emad
154c08411c refactor: Simplify prompt content and file map generation
- Extract recursive file tree logic into new helper function
- Remove explicit file content generation from prompt
- Simplify `build_file_map` to only generate file trees
- Eliminate file metadata calculation from `build_file_map`
- Comment out extensive example workspace operations
2025-09-09 11:50:12 +03:00
1870f2a7ce Merge branch 'development' into development_heroprompt_v2
* development:
  ...
  ...
  add example heromodels call
  add example and heromodels openrpc server
  remove server from gitignore
  clean up and fix openrpc server implementation
  Test the workflow
2025-09-09 06:31:24 +04:00
Mahmoud-Emad
ff92f6eff2 feat: Initialize Workspace with metadata fields
- Initialize `children`, `created`, `updated`, `is_saved`
- Add `time` import for timestamp fields
- Remove unused `ui.console` import
- Update package version constant to `1.0.0`
2025-09-08 15:45:17 +03:00
Mahmoud-Emad
eeb5e207f2 Merge branch 'development' into development_heroprompt_v2 2025-09-08 14:55:31 +03:00
Mahmoud-Emad
09b595948d Merge branch 'development' into development_heroprompt_v2 2025-09-07 14:52:43 +03:00
Mahmoud-Emad
63c0b81fc9 feat: Support multi-root workspaces
- Remove `base_path` from Workspace struct and APIs
- Enable adding multiple root directories to a workspace
- Update file tree UI to display all workspace roots
- Refactor file map generation for multi-root display
- Improve prompt output clipboard copy with status
2025-09-07 14:40:17 +03:00
300 changed files with 5355 additions and 5111 deletions

View File

@@ -1,32 +0,0 @@
# Building Hero for release
Generally speaking, our scripts and docs for building hero produce non-portable binaries for Linux. While that's fine for development purposes, statically linked binaries are much more convenient for releases and distribution.
The release workflow here creates a static binary for Linux using an Alpine container. A few notes follow about how that's done.
## Static builds in vlang
Since V compiles to C in our case, we are really concerned with how to produce static C builds. The V project provides [some guidance](https://github.com/vlang/v?tab=readme-ov-file#docker-with-alpinemusl) on using an Alpine container and passing `-cflags -static` to the V compiler.
That's fine for some projects. Hero has a dependency on the `libpq` C library for Postgres functionality, however, and this creates a complication.
## Static linking libpq
In order to create a static build of hero on Alpine, we need to install some additional packages:
* openssl-libs-static
* postgresql-dev
The full `apk` command to prepare the container for building looks like this:
```bash
apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
```
Then we also need to instruct the C compiler to link against Postgres's static support libraries. Here's the build command:
```bash
v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v
```
Note that gcc is also the preferred compiler for static builds.

View File

@@ -35,6 +35,9 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
# We do the workaround as described here https://github.com/Incubaid/herolib?tab=readme-ov-file#tcc-compiler-error-on-macos
# gcc and clang also don't work on macOS due to https://github.com/vlang/v/issues/25467
# We can change the compiler or remove this workaround once either issue is fixed
- name: Setup V & Herolib
id: setup
shell: bash
@@ -50,34 +53,52 @@ jobs:
echo "Herolib symlink created to $(pwd)/lib"
timeout-minutes: 10
# For Linux, we build a static binary linked against musl on Alpine. For
# static linking, gcc is preferred
# We can't make static builds for Linux easily, since we link to libpq
# (Postgres) and this has no static version available in the Alpine
# repos. Therefore we build dynamic binaries for both glibc and musl.
#
# Again we work around a bug limiting our choice of C compiler: tcc won't
# work on Alpine due to https://github.com/vlang/v/issues/24866,
# so we always use gcc for Linux
#
# For macOS, we can only use tcc (see above), but then we hit issues using
# the garbage collector, so disable that
- name: Build Hero
timeout-minutes: 15
run: |
set -ex
set -e
if [ "${{ runner.os }}" = "Linux" ]; then
sudo apt-get install libpq-dev
# Build for glibc
v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}
# Build for musl using Alpine in Docker
docker run --rm \
-v ${{ github.workspace }}/lib:/root/.vmodules/incubaid/herolib \
-v ${{ github.workspace }}:/herolib \
-w /herolib \
alpine:3.22 \
alpine \
sh -c '
set -ex
apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
apk add --no-cache bash git build-base openssl-dev libpq-dev
cd v
make clean
make
./v symlink
cd ..
v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v -o cli/hero-${{ matrix.target }}-musl
v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}-musl
'
else
v -w -d use_openssl -enable-globals -cc clang cli/hero.v -o cli/hero-${{ matrix.target }}
fi
- name: Upload glibc binary
if: runner.os == 'Linux'
uses: actions/upload-artifact@v4
with:
name: hero-${{ matrix.target }}
path: cli/hero-${{ matrix.target }}
- name: Upload musl binary
if: runner.os == 'Linux'
uses: actions/upload-artifact@v4

View File

@@ -1,77 +0,0 @@
# HeroLib AI Prompts (`aiprompts/`)
This directory contains AI-oriented instructions and manuals for working with the Hero tool and the `herolib` codebase.
It is the **entry point for AI agents** that generate or modify code/docs in this repository.
## Scope
- **Global rules for AI and V/Hero usage**
See:
- `herolib_start_here.md`
- `vlang_herolib_core.md`
- **Herolib core modules**
See:
- `herolib_core/` (core HeroLib modules)
- `herolib_advanced/` (advanced topics)
- **Docusaurus & Site module (Hero docs)**
See:
- `docusaurus/docusaurus_ebook_manual.md`
- `lib/web/docusaurus/README.md` (authoritative module doc)
- `lib/web/site/ai_instructions.md` and `lib/web/site/readme.md`
- **HeroModels / HeroDB**
See:
- `ai_instructions_hero_models.md`
- `heromodel_instruct.md`
- **V language & web server docs** (upstream-style, mostly language-level)
See:
- `v_core/`, `v_advanced/`
- `v_veb_webserver/`
## Sources of Truth
For any domain, **code and module-level docs are authoritative**:
- Core install & usage: `herolib/README.md`, scripts under `scripts/`
- Site module: `lib/web/site/ai_instructions.md`, `lib/web/site/readme.md`
- Docusaurus module: `lib/web/docusaurus/README.md`, `lib/web/docusaurus/*.v`
- DocTree client: `lib/data/doctree/client/README.md`
- HeroModels: `lib/hero/heromodels/*.v` + tests
`aiprompts/` files **must not contradict** these. When in doubt, follow the code / module docs first and treat prompts as guidance.
## Directory Overview
- `herolib_start_here.md` / `vlang_herolib_core.md`
Global AI rules and V/Hero basics.
- `herolib_core/` & `herolib_advanced/`
Per-module instructions for core/advanced HeroLib features.
- `docusaurus/`
AI manual for building Hero docs/ebooks with the Docusaurus + Site + DocTree pipeline.
- `instructions/`
Active, higher-level instructions (e.g. HeroDB base filesystem).
- `instructions_archive/`
**Legacy / historical** prompt material. See `instructions_archive/README.md`.
- `todo/`
Meta design/refactor notes (not up-to-date instructions for normal usage).
- `v_core/`, `v_advanced/`, `v_veb_webserver/`
V language and web framework references used when generating V code.
- `bizmodel/`, `unpolly/`, `doctree/`, `documentor/`
Domain-specific or feature-specific instructions.
## How to Treat Legacy Material
- Content under `instructions_archive/` is **kept for reference** and may describe older flows (e.g. older documentation or prompt pipelines).
Do **not** use it as a primary source for new work unless explicitly requested.
- Some prompts mention **Doctree**; the current default docs pipeline uses **DocTree**. Doctree/`doctreeclient` is an alternative/legacy backend.
## Guidelines for AI Agents
- Always:
- Respect global rules in `herolib_start_here.md` and `vlang_herolib_core.md`.
- Prefer module docs under `lib/` when behavior or parameters differ.
- Avoid modifying generated files (e.g. `*_.v` or other generated artifacts) as instructed.
- When instructions conflict, resolve as:
1. **Code & module docs in `lib/`**
2. **AI instructions in `aiprompts/`**
3. **Archived docs (`instructions_archive/`) only when explicitly needed**.

View File

@@ -2,9 +2,9 @@
## Overview
This document provides clear instructions for AI agents to create new HeroDB models similar to `message.v`.
These models are used to store structured data in Redis using the HeroDB system.
The `message.v` example can be found in `lib/hero/heromodels/message.v`.
The message.v can be found in `lib/hero/heromodels/message.v`.s
## Key Concepts
@@ -108,7 +108,7 @@ Add your model to the ModelsFactory struct in `factory.v`:
```v
pub struct ModelsFactory {
pub mut:
calendar DBCalendar
messages DBCalendar
// ... other models
}
```

View File

@@ -0,0 +1,51 @@
# Doctree Export Specification
## Overview
The `doctree` module in `lib/data/doctree` is responsible for processing and exporting documentation trees. This involves taking a structured representation of documentation (collections, pages, images, files) and writing it to a specified file system destination. Additionally, it leverages Redis to store metadata about the exported documentation, facilitating quick lookups and integration with other systems.
## Key Components
### `lib/data/doctree/export.v`
This file defines the main `export` function for the `Tree` object. It orchestrates the overall export process:
- Takes `TreeExportArgs` which includes parameters like `destination`, `reset` (to clear destination), `keep_structure`, `exclude_errors`, `toreplace` (for regex replacements), `concurrent` (for parallel processing), and `redis` (to control Redis metadata storage).
- Processes definitions, includes, actions, and macros within the `Tree`.
- Generates file paths for pages, images, and other files.
- Iterates through `Collection` objects within the `Tree` and calls their respective `export` methods, passing down the `redis` flag.
### `lib/data/doctree/collection/export.v`
This file defines the `export` function for the `Collection` object. This is where the actual file system writing and Redis interaction for individual collections occur:
- Takes `CollectionExportArgs` which includes `destination`, `file_paths`, `reset`, `keep_structure`, `exclude_errors`, `replacer`, and the `redis` flag.
- Creates a `.collection` file in the destination directory with basic collection information.
- **Redis Integration**:
- Obtains a Redis client using `base.context().redis()`.
- Stores the collection's destination path in Redis using `redis.hset('doctree:path', 'collection_name', 'destination_path')`.
- Calls `export_pages`, `export_files`, `export_images`, and `export_linked_pages` which all interact with Redis if the `redis` flag is true.
- **`export_pages`**:
- Processes page links and handles not-found errors.
- Writes markdown content to the destination file system.
- Stores page metadata in Redis: `redis.hset('doctree:collection_name', 'page_name', 'page_file_name.md')`.
- **`export_files` and `export_images`**:
- Copies files and images to the destination directory (e.g., `img/`).
- Stores file/image metadata in Redis: `redis.hset('doctree:collection_name', 'file_name', 'img/file_name.ext')`.
- **`export_linked_pages`**:
- Gathers linked pages within the collection.
- Writes a `.linkedpages` file.
- Stores linked pages file metadata in Redis: `redis.hset('doctree:collection_name', 'linkedpages', 'linkedpages_file_name.md')`.
## Link between Redis and Export
The `doctree` export process uses Redis as a metadata store. When the `redis` flag is set to `true` (which is the default), the export functions populate Redis with key-value pairs that map collection names, page names, file names, and image names to their respective paths and file names within the exported documentation structure.
This Redis integration serves as a quick lookup mechanism for other applications or services that might need to access or reference the exported documentation. Instead of traversing the file system, these services can query Redis to get the location of specific documentation elements.
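The key layout described above can be sketched as follows. This is an illustrative simulation only: it stands in for Redis with a plain dictionary, and the helper names (`hset`, `lookup_page`) are hypothetical, not herolib APIs.

```python
# Sketch of the Redis key layout used by the doctree export (assumption:
# illustrative only; the real code uses herolib's Redis client).
# Two kinds of hashes are written:
#   doctree:path              -> collection_name => destination_path
#   doctree:<collection_name> -> page/file name  => relative file name

def hset(store, key, field, value):
    # Minimal stand-in for the redis HSET command on an in-memory dict.
    store.setdefault(key, {})[field] = value

def lookup_page(store, collection, page):
    # How a downstream service could resolve a page without touching the FS.
    base = store['doctree:path'][collection]
    rel = store[f'doctree:{collection}'][page]
    return f'{base}/{rel}'

store = {}
hset(store, 'doctree:path', 'mycol', '/tmp/export/mycol')
hset(store, 'doctree:mycol', 'intro', 'intro.md')
hset(store, 'doctree:mycol', 'logo', 'img/logo.png')

print(lookup_page(store, 'mycol', 'intro'))  # /tmp/export/mycol/intro.md
```

The same two `HGET` calls against real Redis would resolve any exported asset in constant time.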
## Is Export Needed?
Yes, the export functionality is crucial for making the processed `doctree` content available outside the internal `doctree` representation.
- **File System Export**: The core purpose of the export is to write the documentation content (markdown files, images, other assets) to a specified directory. This is essential for serving the documentation via a web server, integrating with static site generators (like Docusaurus, as suggested by other files in the project), or simply providing a browsable version of the documentation.
- **Redis Metadata**: While the file system export is fundamental, the Redis metadata storage is an important complementary feature. It provides an efficient way for other systems to programmatically discover and locate documentation assets. If there are downstream applications that rely on this Redis metadata for navigation, search, or content delivery, then the Redis part of the export is indeed needed. If no such applications exist or are planned, the `redis` flag can be set to `false` to skip this step, but the file system export itself remains necessary for external consumption of the documentation.

View File

@@ -2,38 +2,13 @@
This manual provides a comprehensive guide on how to leverage HeroLib's Docusaurus integration, Doctree, and HeroScript to create and manage technical ebooks, optimized for AI-driven content generation and project management.
## Quick Start - Recommended Ebook Structure
The recommended directory structure for an ebook:
```
my_ebook/
├── scan.hero # DocTree collection scanning
├── config.hero # Site configuration
├── menus.hero # Navbar and footer configuration
├── include.hero # Docusaurus define and doctree export
├── 1_intro.heroscript # Page definitions (numbered for ordering)
├── 2_concepts.heroscript # More page definitions
└── 3_advanced.heroscript # Additional pages
```
**Running an ebook:**
```bash
# Start development server
hero docs -d -p /path/to/my_ebook
# Build for production
hero docs -p /path/to/my_ebook
```
## 1. Core Concepts
To effectively create ebooks with HeroLib, it's crucial to understand the interplay of three core components:
* **HeroScript**: A concise scripting language used to define the structure, configuration, and content flow of your Docusaurus site. It acts as the declarative interface for the entire process. Files use `.hero` extension for configuration and `.heroscript` for page definitions.
* **HeroScript**: A concise scripting language used to define the structure, configuration, and content flow of your Docusaurus site. It acts as the declarative interface for the entire process.
* **Docusaurus**: A popular open-source static site generator. HeroLib uses Docusaurus as the underlying framework to render your ebook content into a navigable website.
* **DocTree**: HeroLib's document collection layer. DocTree scans and exports markdown "collections" and "pages" that Docusaurus consumes.
* **Doctree**: HeroLib's content management system. Doctree organizes your markdown files into "collections" and "pages," allowing for structured content retrieval and reuse across multiple projects.
## 2. Setting Up a Docusaurus Project with HeroLib
@@ -47,26 +22,18 @@ The `docusaurus.define` HeroScript directive configures the global settings for
```heroscript
!!docusaurus.define
name:"my_ebook" // must match the site name from !!site.config
path_build: "/tmp/my_ebook_build"
path_publish: "/tmp/my_ebook_publish"
reset: true // clean build dir before building (optional)
install: true // run bun install if needed (optional)
template_update: true // update the Docusaurus template (optional)
doctree_dir: "/tmp/doctree_export" // where DocTree exports collections
use_doctree: true // use DocTree as content backend
production: true
update: true
```
**Arguments:**
* `name` (string, required): The site/factory name. Must match the `name` used in `!!site.config` so Docusaurus can find the corresponding site definition.
* `path_build` (string, optional): The local path where the Docusaurus site will be built. Defaults to `~/hero/var/docusaurus/build`.
* `path_publish` (string, optional): The local path where the final Docusaurus site will be published (e.g., for deployment). Defaults to `~/hero/var/docusaurus/publish`.
* `reset` (boolean, optional): If `true`, clean the build directory before starting.
* `install` (boolean, optional): If `true`, run dependency installation (e.g., `bun install`).
* `template_update` (boolean, optional): If `true`, update the Docusaurus template.
* `doctree_dir` (string, optional): Directory where DocTree exports collections (used by the DocTree client in `lib/data/doctree/client`).
* `use_doctree` (boolean, optional): If `true`, use the DocTree client as the content backend (default behavior).
* `production` (boolean, optional): If `true`, the site will be built for production (optimized). Default is `false`.
* `update` (boolean, optional): If `true`, the Docusaurus template and dependencies will be updated. Default is `false`.
### 2.2. Adding a Docusaurus Site (`docusaurus.add`)
@@ -86,7 +53,7 @@ The `docusaurus.add` directive defines an individual Docusaurus site (your ebook
```heroscript
!!docusaurus.add
name:"tfgrid_tech_ebook"
git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/ebooks/tech"
git_url:"https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/ebooks/tech"
git_reset:true // Reset Git repository before pulling
git_pull:true // Pull latest changes
git_root:"/tmp/git_clones" // Optional: specify a root directory for git clones
@@ -223,18 +190,18 @@ Configure the footer section of your Docusaurus site.
* `href` (string, optional): External URL for the link.
* `to` (string, optional): Internal Docusaurus path.
### 3.4. Publish Destinations (`site.publish`, `site.publish_dev`)
### 3.4. Build Destinations (`site.build_dest`, `site.build_dest_dev`)
Specify where the built Docusaurus site should be deployed. This typically involves an SSH connection defined elsewhere (e.g., `!!site.ssh_connection`).
**HeroScript Example:**
```heroscript
!!site.publish
!!site.build_dest
ssh_name:"production_server" // Name of a pre-defined SSH connection
path:"/var/www/my-ebook" // Remote path on the server
!!site.publish_dev
!!site.build_dest_dev
ssh_name:"dev_server"
path:"/tmp/dev-ebook"
```
@@ -252,7 +219,7 @@ This powerful feature allows you to pull markdown content and assets from other
```heroscript
!!site.import
url:'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/cloud_reinvented'
url:'https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/collections/cloud_reinvented'
dest:'cloud_reinvented' // Destination subdirectory within your Docusaurus docs folder
replace:'NAME:MyName, URGENCY:red' // Optional: comma-separated key:value pairs for text replacement
```
@@ -271,60 +238,49 @@ This is where you define the actual content pages and how they are organized int
```heroscript
// Define a category
!!site.page_category name:'introduction' label:"Introduction to Ebook"
!!site.page_category path:'introduction' label:"Introduction to Ebook" position:10
// Define pages - first page specifies collection, subsequent pages reuse it
!!site.page src:"my_collection:chapter_1_overview"
// Define a page within that category, linking to Doctree content
!!site.page path:'introduction' src:"my_doctree_collection:chapter_1_overview"
title:"Chapter 1: Overview"
description:"A brief introduction to the ebook's content."
!!site.page src:"chapter_2_basics"
title:"Chapter 2: Basics"
// New category with new collection
!!site.page_category name:'advanced' label:"Advanced Topics"
!!site.page src:"advanced_collection:performance"
title:"Performance Tuning"
hide_title:true
position:1 // Order within the category
hide_title:true // Hide the title on the page itself
```
**Arguments:**
* **`site.page_category`**:
* `name` (string, required): Category identifier (used internally).
* `path` (string, required): The path to the category directory within your Docusaurus `docs` folder (e.g., `introduction` will create `docs/introduction/_category_.json`).
* `label` (string, required): The display name for the category in the sidebar.
* `position` (int, optional): The order of the category in the sidebar (auto-incremented if omitted).
* `position` (int, optional): The order of the category in the sidebar.
* `sitename` (string, optional): If you have multiple Docusaurus sites defined, specify which site this category belongs to. Defaults to the current site's name.
* **`site.page`**:
* `src` (string, required): **Crucial for DocTree/collection integration.** Format: `collection_name:page_name` for the first page, or just `page_name` to reuse the previous collection.
* `title` (string, optional): The title of the page. If not provided, HeroLib extracts it from the markdown `# Heading` or uses the page name.
* `src` (string, required): **Crucial for Doctree integration.** This specifies the source of the page content in the format `collection_name:page_name`. HeroLib will fetch the markdown content from the specified Doctree collection and page.
* `path` (string, required): The relative path and filename for the generated markdown file within your Docusaurus `docs` folder (e.g., `introduction/chapter_1.md`). If only a directory is provided (e.g., `introduction/`), the `page_name` from `src` will be used as the filename.
* `title` (string, optional): The title of the page. If not provided, HeroLib will attempt to extract it from the markdown content or use the `page_name`.
* `description` (string, optional): A short description for the page, used in frontmatter.
* `position` (int, optional): The order of the page within its category.
* `hide_title` (boolean, optional): If `true`, the title will not be displayed on the page itself.
* `draft` (boolean, optional): If `true`, the page will be hidden from navigation.
* `draft` (boolean, optional): If `true`, the page will be marked as a draft and not included in production builds.
* `title_nr` (int, optional): If set, HeroLib will re-number the markdown headings (e.g., `title_nr:3` will make `# Heading` become `### Heading`). Useful for consistent heading levels across imported content.
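The `title_nr` re-numbering described above can be sketched like this. This is a hypothetical illustration of the behavior, not HeroLib's implementation, and it ignores edge cases such as `#` characters inside code fences.

```python
import re

def renumber_headings(markdown, title_nr):
    # Shift every ATX heading so a top-level '#' becomes title_nr '#'s
    # (assumption: sketch of the documented behavior, not the real code).
    def shift(match):
        level = len(match.group(1))
        return '#' * (level + title_nr - 1) + ' '
    return re.sub(r'^(#+) ', shift, markdown, flags=re.M)

print(renumber_headings('# Intro\n## Details\n', 3))
# → '### Intro\n#### Details\n'
```

With `title_nr:3`, a `# Heading` in imported content lands at `###`, keeping heading depth consistent across collections.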
### 3.7. Collections and DocTree/Doctree Integration
### 3.7. Doctree Integration Details
The `site.page` directive's `src` parameter (`collection_name:page_name`) is the bridge to your content collections.
The `site.page` directive's `src` parameter (`collection_name:page_name`) is the bridge to your Doctree content.
**Current default: DocTree export**
1. **Collections**: DocTree exports markdown files into collections under an `export_dir` (see `lib/data/doctree/client`).
2. **Export step**: A separate process (DocTree) writes the collections into `doctree_dir` (e.g., `/tmp/doctree_export`), following the `content/` + `meta/` structure.
3. **Docusaurus consumption**: The Docusaurus module uses the DocTree client (`doctree_client`) to resolve `collection_name:page_name` into markdown content and assets when generating docs.
**Alternative: Doctree/`doctreeclient`**
In older setups, or when explicitly configured, Doctree and `doctreeclient` can still be used to provide the same `collection:page` model:
**How Doctree Works:**
1. **Collections**: Doctree organizes markdown files into logical groups called "collections." A collection is typically a directory containing markdown files and an empty `.collection` file.
2. **Scanning**: You define which collections Doctree should scan using `!!doctree.scan` in a HeroScript file (e.g., `doctree.heroscript`):
2. **Scanning**: You define which collections Doctree should scan using `!!doctree.scan` in a HeroScript file (e.g., `doctree.heroscript`).
**Example `doctree.heroscript`:**
```heroscript
!!doctree.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections"
!!doctree.scan git_url:"https://git.threefold.info/tfgrid/docs_tfgrid4/src/branch/main/collections"
```
This will pull the `collections` directory from the specified Git URL and make its contents available to Doctree.
3. **Page Retrieval**: When `site.page` references `src:"my_collection:my_page"`, the client (`doctree_client` or `doctreeclient`, depending on configuration) fetches the content of `my_page.md` from the `my_collection` collection.
3. **Page Retrieval**: When `site.page` references `src:"my_collection:my_page"`, HeroLib's `doctreeclient` fetches the content of `my_page.md` from the `my_collection` collection that Doctree has scanned.
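The `collection:page` convention used by `site.page` — first page names the collection, later pages may reuse it — can be sketched as a small resolver. The function name and error message are illustrative assumptions, not HeroLib code.

```python
def resolve_src(src, current_collection=None):
    # Hypothetical helper mirroring the src convention described above:
    # 'collection:page' sets the collection; a bare 'page' reuses the
    # previously seen collection.
    if ':' in src:
        collection, page = src.split(':', 1)
    else:
        if current_collection is None:
            raise ValueError('no collection set yet for ' + src)
        collection, page = current_collection, src
    return collection, page

col, page = resolve_src('my_collection:chapter_1_overview')
print(col, page)    # → my_collection chapter_1_overview
col2, page2 = resolve_src('chapter_2_basics', current_collection=col)
print(col2, page2)  # → my_collection chapter_2_basics
```

This mirrors the page definitions shown in section 3.6, where `chapter_2_basics` inherits `my_collection` from the preceding page.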
## 4. Building and Developing Your Ebook

View File

@@ -0,0 +1,378 @@
# HeroPrompt Module
The `heroprompt` module provides a hierarchical workspace-based system for organizing code files and generating structured AI prompts. It enables developers to select files from multiple directories and generate formatted prompts for AI code analysis.
## Key Features
- **Hierarchical Organization**: HeroPrompt → Workspace → Directory → Files
- **Redis Persistence**: All data persists across sessions using Redis
- **Factory Pattern**: Clean API with `get()`, `delete()`, `exists()`, `list()` functions
- **File Selection**: Select specific files or entire directories for analysis
- **Active Workspace**: Manage multiple workspaces with one active at a time
- **Prompt Generation**: Generate structured prompts with file maps, contents, and instructions
- **Template-Based**: Uses V templates for consistent prompt formatting
## Basic Usage
### 1. Getting Started
```v
import incubaid.herolib.develop.heroprompt
// Create or get a HeroPrompt instance
mut hp := heroprompt.get(name: 'my_project', create: true)!
// Create a workspace (first workspace is automatically active)
mut workspace := hp.new_workspace(
name: 'my_workspace'
description: 'My project workspace'
)!
```
### 2. Adding Directories
```v
// Add directory and automatically scan all files
mut dir := workspace.add_directory(
path: '/path/to/your/code'
name: 'backend'
scan: true // Scans all files and subdirectories
)!
// Add another directory
mut frontend_dir := workspace.add_directory(
path: '/path/to/frontend'
name: 'frontend'
scan: true
)!
```
### 3. Selecting Files
```v
// Select specific files
dir.select_file(path: '/path/to/your/code/main.v')!
dir.select_file(path: '/path/to/your/code/utils.v')!
// Or select all files in a directory
frontend_dir.select_all()!
// Deselect files
dir.deselect_file(path: '/path/to/your/code/test.v')!
// Deselect all files
dir.deselect_all()!
```
### 4. Generating AI Prompts
```v
// Generate prompt with selected files
prompt := workspace.generate_prompt(
instruction: 'Review these files and suggest improvements'
)!
println(prompt)
// Or generate with specific files (overrides selection)
prompt2 := workspace.generate_prompt(
instruction: 'Analyze these specific files'
selected_files: ['/path/to/file1.v', '/path/to/file2.v']
)!
```
## Factory Functions
### `heroprompt.get(name: string, create: bool) !HeroPrompt`
Gets or creates a HeroPrompt instance.
```v
// Get existing instance or create new one
mut hp := heroprompt.get(name: 'my_project', create: true)!
// Get existing instance only (error if doesn't exist)
mut hp2 := heroprompt.get(name: 'my_project')!
```
### `heroprompt.delete(name: string) !`
Deletes a HeroPrompt instance from Redis.
```v
heroprompt.delete(name: 'my_project')!
```
### `heroprompt.exists(name: string) !bool`
Checks if a HeroPrompt instance exists.
```v
if heroprompt.exists(name: 'my_project')! {
println('Instance exists')
}
```
### `heroprompt.list() ![]string`
Lists all HeroPrompt instance names.
```v
instances := heroprompt.list()!
for name in instances {
println('Instance: ${name}')
}
```
## HeroPrompt Methods
### Workspace Management
#### `hp.new_workspace(name: string, description: string, is_active: bool) !&Workspace`
Creates a new workspace. The first workspace is automatically set as active.
```v
mut ws := hp.new_workspace(
name: 'backend'
description: 'Backend API workspace'
)!
```
#### `hp.get_workspace(name: string) !&Workspace`
Retrieves an existing workspace by name.
```v
mut ws := hp.get_workspace('backend')!
```
#### `hp.get_active_workspace() !&Workspace`
Returns the currently active workspace.
```v
mut active := hp.get_active_workspace()!
println('Active workspace: ${active.name}')
```
#### `hp.set_active_workspace(name: string) !`
Sets a workspace as active (deactivates all others).
```v
hp.set_active_workspace('frontend')!
```
#### `hp.list_workspaces() []&Workspace`
Lists all workspaces in the instance.
```v
workspaces := hp.list_workspaces()
for ws in workspaces {
	println('Workspace: ${ws.name}')
}
```
#### `hp.delete_workspace(name: string) !`
Deletes a workspace.
```v
hp.delete_workspace('old_workspace')!
```
## Workspace Methods
### Directory Management
#### `ws.add_directory(path: string, name: string, scan: bool) !&Directory`
Adds a directory to the workspace.
```v
mut dir := ws.add_directory(
	path: '/path/to/code'
	name: 'my_code'
	scan: true // automatically scans all files
)!
```
#### `ws.list_directories() []&Directory`
Lists all directories in the workspace.
```v
dirs := ws.list_directories()
for dir in dirs {
	println('Directory: ${dir.name}')
}
```
#### `ws.remove_directory(id: string) !`
Removes a directory from the workspace.
```v
ws.remove_directory(id: dir.id)!
```
### Prompt Generation
#### `ws.generate_prompt(instruction: string, selected_files: []string, show_all_files: bool) !string`
Generates a complete AI prompt with file map, contents, and instructions.
```v
// Use selected files (from select_file() calls)
prompt := ws.generate_prompt(
	instruction: 'Review the code'
)!

// Or specify files explicitly
prompt2 := ws.generate_prompt(
	instruction: 'Analyze these files'
	selected_files: ['/path/to/file1.v', '/path/to/file2.v']
	show_all_files: false
)!
```
#### `ws.generate_file_map(selected_files: []string, show_all: bool) !string`
Generates a hierarchical tree structure of files.
```v
file_map := ws.generate_file_map(
	selected_files: ['/path/to/file1.v']
	show_all: false
)!
println(file_map)
```
#### `ws.generate_file_contents(selected_files: []string, include_path: bool) !string`
Generates formatted file contents.
```v
contents := ws.generate_file_contents(
	selected_files: ['/path/to/file1.v']
	include_path: true
)!
println(contents)
```
## Directory Methods
### File Selection
#### `dir.select_file(path: string) !`
Marks a file as selected.
```v
dir.select_file(path: '/path/to/file.v')!
```
#### `dir.select_all() !`
Selects all files in the directory and subdirectories.
```v
dir.select_all()!
```
#### `dir.deselect_file(path: string) !`
Deselects a file.
```v
dir.deselect_file(path: '/path/to/file.v')!
```
#### `dir.deselect_all() !`
Deselects all files in the directory.
```v
dir.deselect_all()!
```
### Directory Information
#### `dir.exists() bool`
Checks if the directory exists on the filesystem.
```v
if dir.exists() {
	println('Directory exists')
}
```
#### `dir.get_contents() !DirectoryContent`
Gets all files in the directory (scans if needed).
```v
content := dir.get_contents()!
println('Files: ${content.files.len}')
```
## Generated Prompt Format
The generated prompt uses a template with three sections:
```prompt
<user_instructions>
Review these files and suggest improvements
</user_instructions>
<file_map>
my_project/
├── src/
│   ├── main.v *
│   └── utils.v *
└── README.md *
</file_map>
<file_contents>
File: /path/to/src/main.v
\```v
module main

fn main() {
	println('Hello')
}
\```
</file_contents>
```
Files marked with `*` in the file_map are the selected files included in the prompt.
## Complete Example
```v
import incubaid.herolib.develop.heroprompt

mut hp := heroprompt.get(name: 'my_app', create: true)!
mut ws := hp.new_workspace(name: 'backend')!
mut src_dir := ws.add_directory(path: '/path/to/src', name: 'source', scan: true)!
src_dir.select_file(path: '/path/to/src/main.v')!

prompt := ws.generate_prompt(instruction: 'Review the code')!
println(prompt)

heroprompt.delete(name: 'my_app')!
```
## Tips
- Call `heroprompt.delete()` at startup when you need a fresh state
- The first workspace is automatically set as active
- Changes auto-save to Redis
- Use `scan: true` to discover all files in a directory automatically
- Create separate workspaces for different contexts
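The last two tips can be sketched as follows (a minimal, illustrative example; the instance and workspace names are not from the example above):

```v
import incubaid.herolib.develop.heroprompt

// One instance, separate workspaces per context
mut hp := heroprompt.get(name: 'my_app', create: true)!
mut backend := hp.new_workspace(name: 'backend')! // first workspace: auto-active
mut frontend := hp.new_workspace(name: 'frontend')!
hp.set_active_workspace('frontend')! // switch context explicitly
active := hp.get_active_workspace()!
println('Active workspace: ${active.name}')
```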

View File

@@ -35,11 +35,11 @@ pub fn play(mut plbook PlayBook) ! {
if plbook.exists_once(filter: 'docusaurus.define') {
mut action := plbook.get(filter: 'docusaurus.define')!
mut p := action.params
//example how we get parameters from the action see aiprompts/herolib_core/core_params.md for more details
path_build := p.get_default('path_build', '')!
path_publish := p.get_default('path_publish', '')!
reset := p.get_default_false('reset')
use_doctree := p.get_default_false('use_doctree')
//example how we get parameters from the action see core_params.md for more details
ds = new(
path: p.get_default('path_publish', '')!
production: p.get_default_false('production')
)!
}
// Process 'docusaurus.add' actions to configure individual Docusaurus sites
@@ -51,4 +51,4 @@ pub fn play(mut plbook PlayBook) ! {
}
```
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/herolib_core/core_params.md`.
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/ai_core/core_params.md`.

View File

@@ -1,5 +1,3 @@
> NOTE: This document is an example snapshot of a developer's filesystem layout for HeroDB/HeroModels. Paths under `/Users/despiegk/...` are illustrative only. For the current, authoritative structure always use the live repository tree (this checkout) and the modules under `lib/hero/heromodels` and `lib/hero/db`.
<file_map>
/Users/despiegk/code/github/incubaid/herolib
├── .github

View File

@@ -1,15 +0,0 @@
# Instructions Archive (Legacy Prompts)
This directory contains **archived / legacy AI prompt material** for `herolib`.
- Files here may describe **older workflows** (e.g. previous documentation generation or model pipelines).
- They are kept for **historical reference** and to help understand how things evolved.
- They are **not** guaranteed to match the current `herolib` implementation.
## Usage Guidelines
- Do **not** use these files as the primary source for new features or refactors.
- When generating code or documentation, prefer:
1. Code and module docs under `lib/` (e.g. `lib/web/site/ai_instructions.md`, `lib/web/docusaurus/README.md`).
2. Up-to-date AI instructions under `aiprompts/` (outside of `instructions_archive/`).
- Only consult this directory when you explicitly need to understand **historical behavior** or migrate old flows.

View File

@@ -1,10 +1,51 @@
# module orm
## Contents
- [Constants](#Constants)
- [new_query](#new_query)
- [orm_select_gen](#orm_select_gen)
- [orm_stmt_gen](#orm_stmt_gen)
- [orm_table_gen](#orm_table_gen)
- [Connection](#Connection)
- [Primitive](#Primitive)
- [QueryBuilder[T]](#QueryBuilder[T])
- [reset](#reset)
- [where](#where)
- [or_where](#or_where)
- [order](#order)
- [limit](#limit)
- [offset](#offset)
- [select](#select)
- [set](#set)
- [query](#query)
- [count](#count)
- [insert](#insert)
- [insert_many](#insert_many)
- [update](#update)
- [delete](#delete)
- [create](#create)
- [drop](#drop)
- [last_id](#last_id)
- [MathOperationKind](#MathOperationKind)
- [OperationKind](#OperationKind)
- [OrderType](#OrderType)
- [SQLDialect](#SQLDialect)
- [StmtKind](#StmtKind)
- [InfixType](#InfixType)
- [Null](#Null)
- [QueryBuilder](#QueryBuilder)
- [QueryData](#QueryData)
- [SelectConfig](#SelectConfig)
- [Table](#Table)
- [TableField](#TableField)
## Constants
```v
const num64 = [typeof[i64]().idx, typeof[u64]().idx]
```
[[Return to contents]](#Contents)
```v
const nums = [
@@ -18,7 +59,7 @@ const nums = [
]
```
[[Return to contents]](#Contents)
```v
const float = [
@@ -27,31 +68,31 @@ const float = [
]
```
[[Return to contents]](#Contents)
```v
const type_string = typeof[string]().idx
```
[[Return to contents]](#Contents)
```v
const serial = -1
```
[[Return to contents]](#Contents)
```v
const time_ = -2
```
[[Return to contents]](#Contents)
```v
const enum_ = -3
```
[[Return to contents]](#Contents)
```v
const type_idx = {
@@ -70,19 +111,19 @@ const type_idx = {
}
```
[[Return to contents]](#Contents)
```v
const string_max_len = 2048
```
[[Return to contents]](#Contents)
```v
const null_primitive = Primitive(Null{})
```
[[Return to contents]](#Contents)
## new_query
```v
@@ -91,7 +132,7 @@ fn new_query[T](conn Connection) &QueryBuilder[T]
new_query create a new query object for struct `T`
[[Return to contents]](#Contents)
## orm_select_gen
```v
@@ -100,7 +141,7 @@ fn orm_select_gen(cfg SelectConfig, q string, num bool, qm string, start_pos int
Generates an sql select stmt, from universal parameter orm - See SelectConfig q, num, qm, start_pos - see orm_stmt_gen where - See QueryData
[[Return to contents]](#Contents)
## orm_stmt_gen
```v
@@ -110,7 +151,7 @@ fn orm_stmt_gen(sql_dialect SQLDialect, table Table, q string, kind StmtKind, nu
Generates an sql stmt, from universal parameter q - The quotes character, which can be different in every type, so it's variable num - Stmt uses nums at prepared statements (? or ?1) qm - Character for prepared statement (qm for question mark, as in sqlite) start_pos - When num is true, it's the start position of the counter
[[Return to contents]](#Contents)
## orm_table_gen
```v
@@ -120,7 +161,7 @@ fn orm_table_gen(sql_dialect SQLDialect, table Table, q string, defaults bool, d
Generates an sql table stmt, from universal parameter table - Table struct q - see orm_stmt_gen defaults - enables default values in stmt def_unique_len - sets default unique length for texts fields - See TableField sql_from_v - Function which maps type indices to sql type names alternative - Needed for msdb
[[Return to contents]](#Contents)
## Connection
```v
@@ -140,7 +181,7 @@ Interfaces gets called from the backend and can be implemented Since the orm sup
Every function without last_id() returns an optional, which returns an error if present last_id returns the last inserted id of the db
[[Return to contents]](#Contents)
## Primitive
```v
@@ -162,7 +203,7 @@ type Primitive = InfixType
| []Primitive
```
[[Return to contents]](#Contents)
## QueryBuilder[T]
## reset
@@ -172,7 +213,7 @@ fn (qb_ &QueryBuilder[T]) reset() &QueryBuilder[T]
reset reset a query object, but keep the connection and table name
[[Return to contents]](#Contents)
## where
```v
@@ -181,7 +222,7 @@ fn (qb_ &QueryBuilder[T]) where(condition string, params ...Primitive) !&QueryBu
where create a `where` clause, it will `AND` with previous `where` clause. valid token in the `condition` include: `field's names`, `operator`, `(`, `)`, `?`, `AND`, `OR`, `||`, `&&`, valid `operator` include: `=`, `!=`, `<>`, `>=`, `<=`, `>`, `<`, `LIKE`, `ILIKE`, `IS NULL`, `IS NOT NULL`, `IN`, `NOT IN` example: `where('(a > ? AND b <= ?) OR (c <> ? AND (x = ? OR y = ?))', a, b, c, x, y)`
[[Return to contents]](#Contents)
## or_where
```v
@@ -190,7 +231,7 @@ fn (qb_ &QueryBuilder[T]) or_where(condition string, params ...Primitive) !&Quer
or_where create a `where` clause, it will `OR` with previous `where` clause.
[[Return to contents]](#Contents)
## order
```v
@@ -199,7 +240,7 @@ fn (qb_ &QueryBuilder[T]) order(order_type OrderType, field string) !&QueryBuild
order create a `order` clause
[[Return to contents]](#Contents)
## limit
```v
@@ -208,7 +249,7 @@ fn (qb_ &QueryBuilder[T]) limit(limit int) !&QueryBuilder[T]
limit create a `limit` clause
[[Return to contents]](#Contents)
## offset
```v
@@ -217,7 +258,7 @@ fn (qb_ &QueryBuilder[T]) offset(offset int) !&QueryBuilder[T]
offset create a `offset` clause
[[Return to contents]](#Contents)
## select
```v
@@ -226,7 +267,7 @@ fn (qb_ &QueryBuilder[T]) select(fields ...string) !&QueryBuilder[T]
select create a `select` clause
[[Return to contents]](#Contents)
## set
```v
@@ -235,7 +276,7 @@ fn (qb_ &QueryBuilder[T]) set(assign string, values ...Primitive) !&QueryBuilder
set create a `set` clause for `update`
[[Return to contents]](#Contents)
## query
```v
@@ -244,7 +285,7 @@ fn (qb_ &QueryBuilder[T]) query() ![]T
query start a query and return result in struct `T`
[[Return to contents]](#Contents)
## count
```v
@@ -253,7 +294,7 @@ fn (qb_ &QueryBuilder[T]) count() !int
count start a count query and return result
[[Return to contents]](#Contents)
## insert
```v
@@ -262,7 +303,7 @@ fn (qb_ &QueryBuilder[T]) insert[T](value T) !&QueryBuilder[T]
insert insert a record into the database
[[Return to contents]](#Contents)
## insert_many
```v
@@ -271,7 +312,7 @@ fn (qb_ &QueryBuilder[T]) insert_many[T](values []T) !&QueryBuilder[T]
insert_many insert records into the database
[[Return to contents]](#Contents)
## update
```v
@@ -280,7 +321,7 @@ fn (qb_ &QueryBuilder[T]) update() !&QueryBuilder[T]
update update record(s) in the database
[[Return to contents]](#Contents)
## delete
```v
@@ -289,7 +330,7 @@ fn (qb_ &QueryBuilder[T]) delete() !&QueryBuilder[T]
delete delete record(s) in the database
[[Return to contents]](#Contents)
## create
```v
@@ -298,7 +339,7 @@ fn (qb_ &QueryBuilder[T]) create() !&QueryBuilder[T]
create create a table
[[Return to contents]](#Contents)
## drop
```v
@@ -307,7 +348,7 @@ fn (qb_ &QueryBuilder[T]) drop() !&QueryBuilder[T]
drop drop a table
[[Return to contents]](#Contents)
## last_id
```v
@@ -316,7 +357,7 @@ fn (qb_ &QueryBuilder[T]) last_id() int
last_id returns the last inserted id of the db
[[Return to contents]](#Contents)
## MathOperationKind
```v
@@ -328,7 +369,7 @@ enum MathOperationKind {
}
```
[[Return to contents]](#Contents)
## OperationKind
```v
@@ -348,7 +389,7 @@ enum OperationKind {
}
```
[[Return to contents]](#Contents)
## OrderType
```v
@@ -358,7 +399,7 @@ enum OrderType {
}
```
[[Return to contents]](#Contents)
## SQLDialect
```v
@@ -370,7 +411,7 @@ enum SQLDialect {
}
```
[[Return to contents]](#Contents)
## StmtKind
```v
@@ -381,7 +422,7 @@ enum StmtKind {
}
```
[[Return to contents]](#Contents)
## InfixType
```v
@@ -393,14 +434,14 @@ pub:
}
```
[[Return to contents]](#Contents)
## Null
```v
struct Null {}
```
[[Return to contents]](#Contents)
## QueryBuilder
```v
@@ -415,7 +456,7 @@ pub mut:
}
```
[[Return to contents]](#Contents)
## QueryData
```v
@@ -433,7 +474,7 @@ pub mut:
Examples for QueryData in SQL: abc == 3 && b == 'test' => fields[abc, b]; data[3, 'test']; types[index of int, index of string]; kinds[.eq, .eq]; is_and[true]; Every field, data, type & kind of operation in the expr share the same index in the arrays is_and defines how they're addicted to each other either and or or parentheses defines which fields will be inside () auto_fields are indexes of fields where db should generate a value when absent in an insert
[[Return to contents]](#Contents)
## SelectConfig
```v
@@ -455,7 +496,7 @@ pub mut:
table - Table struct is_count - Either the data will be returned or an integer with the count has_where - Select all or use a where expr has_order - Order the results order - Name of the column which will be ordered order_type - Type of order (asc, desc) has_limit - Limits the output data primary - Name of the primary field has_offset - Add an offset to the result fields - Fields to select types - Types to select
[[Return to contents]](#Contents)
## Table
```v
@@ -466,7 +507,7 @@ pub mut:
}
```
[[Return to contents]](#Contents)
## TableField
```v
@@ -480,3 +521,7 @@ pub mut:
is_arr bool
}
```
[[Return to contents]](#Contents)
#### Powered by vdoc. Generated on: 2 Sep 2025 07:19:37

View File

@@ -1,282 +0,0 @@
# V ORM — Developer Cheat Sheet
*Fast reference for Struct Mapping, CRUD, Attributes, Query Builder, and Usage Patterns*
---
## 1. What V ORM Is
* Built-in ORM for **SQLite**, **MySQL**, **PostgreSQL**
* Unified V-syntax; no SQL string building
* Automatic query sanitization
* Compile-time type & field checks
* Structs map directly to tables
---
## 2. Define Models (Struct ↔ Table)
### Basic Example
```v
struct User {
	id    int    @[primary; sql: serial]
	name  string
	email string @[unique]
}
```
### Nullable Fields
```v
age ?int // allows NULL
```
---
## 3. Struct Attributes
### Table-level
| Attribute | Meaning |
| ---------------------------- | ------------------------- |
| `@[table: 'custom_name']` | Override table name |
| `@[comment: '...']` | Table comment |
| `@[index: 'field1, field2']` | Creates multi-field index |
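A minimal sketch combining the table-level attributes above (the struct, table, and column names are illustrative):

```v
@[comment: 'Registered application users']
@[index: 'last_name, first_name']
@[table: 'app_users']
struct AppUser {
	id         int @[primary; sql: serial]
	first_name string
	last_name  string
}
```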
---
## 4. Field Attributes
| Attribute | Description |
| ------------------------------------------------ | ---------------------------- |
| `@[primary]` | Primary key |
| `@[unique]` | UNIQUE constraint |
| `@[unique: 'group']` | Composite unique group |
| `@[skip]` / `@[sql: '-']` | Ignore field |
| `@[sql: serial]` | Auto-increment key |
| `@[sql: 'col_name']` | Rename column |
| `@[sql_type: 'BIGINT']` | Force SQL type |
| `@[default: 'CURRENT_TIMESTAMP']` | Raw SQL default |
| `@[fkey: 'field']` | Foreign key on a child array |
| `@[references]`, `@[references: 'table(field)']` | FK relationship |
| `@[index]` | Index on field |
| `@[comment: '...']` | Column comment |
### Example
```v
struct Post {
	id        int @[primary; sql: serial]
	title     string
	body      string
	author_id int @[references: 'users(id)']
}
```
---
## 5. ORM SQL Block (Primary API)
### Create Table
```v
sql db {
	create table User
}!
```
### Drop Table
```v
sql db {
	drop table User
}!
```
### Insert
```v
id := sql db {
	insert new_user into User
}!
```
### Select
```v
users := sql db {
	select from User where age > 18 && name != 'Tom'
	order by id desc
	limit 10
}!
```
### Update
```v
sql db {
	update User set name = 'Alice' where id == 1
}!
```
### Delete
```v
sql db {
	delete from User where id > 100
}!
```
---
## 6. Relationships
### One-to-Many
```v
struct Parent {
	id       int     @[primary; sql: serial]
	children []Child @[fkey: 'parent_id']
}

struct Child {
	id        int @[primary; sql: serial]
	parent_id int
}
```
---
## 7. Notes on `time.Time`
* Stored as integer timestamps
* SQL defaults like `NOW()` / `CURRENT_TIMESTAMP` **don't work** for `time.Time` with V ORM defaults
* Use `@[default: 'CURRENT_TIMESTAMP']` only with custom SQL types
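Given the notes above, a common workaround is to set timestamps from V code rather than rely on SQL defaults. A hedged sketch (struct and function names are illustrative):

```v
import time

struct Event {
	id         int @[primary; sql: serial]
	name       string
	created_at time.Time // stored as an integer timestamp
}

fn new_event(name string) Event {
	return Event{
		name:       name
		created_at: time.now() // set in code; SQL NOW() defaults don't apply
	}
}
```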
---
## 8. Query Builder API (Dynamic Queries)
### Create Builder
```v
mut qb := orm.new_query[User](db)
```
### Create Table
```v
qb.create()!
```
### Insert Many
```v
qb.insert_many(users)!
```
### Select
```v
results := qb
	.select('id, name')!
	.where('age > ?', 18)!
	.order('id DESC')!
	.limit(20)!
	.query()!
```
### Update
```v
qb
	.set('name = ?', 'NewName')!
	.where('id = ?', 1)!
	.update()!
```
### Delete
```v
qb.where('created_at IS NULL')!.delete()!
```
### Complex WHERE
```v
qb.where(
	'(salary > ? AND age < ?) OR (role LIKE ?)',
	3000, 40, '%engineer%'
)!
```
---
## 9. Connecting to Databases
### SQLite
```v
import db.sqlite
db := sqlite.connect('db.sqlite')!
```
### MySQL
```v
import db.mysql
db := mysql.connect(host: 'localhost', user: 'root', password: '', dbname: 'test')!
```
### PostgreSQL
```v
import db.pg
db := pg.connect(conn_str)!
```
---
## 10. Full Example (Complete CRUD)
```v
import db.sqlite

struct Customer {
	id    int    @[primary; sql: serial]
	name  string
	email string @[unique]
}

fn main() {
	db := sqlite.connect('customers.db')!
	sql db { create table Customer }!

	new_c := Customer{name: 'Alice', email: 'alice@x.com'}
	new_id := sql db { insert new_c into Customer }!
	println(new_id)

	list := sql db { select from Customer where name == 'Alice' }!
	println(list)

	sql db { update Customer set name = 'Alicia' where id == new_id }!
	sql db { delete from Customer where id == new_id }!
}
```
---
## 11. Best Practices
* Always use `sql db { ... }` for static queries
* Use QueryBuilder for dynamic conditions
* Prefer `sql: serial` for primary keys
* Explicitly define foreign keys
* Use `?T` for nullable fields
* Keep struct names identical to table names unless overridden
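The practices above can be sketched in one pair of models (illustrative names, not a definitive schema):

```v
// Serial primary keys, explicit foreign key, nullable field via ?T
struct Team {
	id      int      @[primary; sql: serial]
	name    string   @[unique]
	members []Member @[fkey: 'team_id']
}

struct Member {
	id      int    @[primary; sql: serial]
	team_id int    @[references: 'teams(id)']
	name    string
	age     ?int // nullable column
}
```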

View File

@@ -122,12 +122,12 @@ pub fn play(mut plbook PlayBook) ! {
if plbook.exists_once(filter: 'docusaurus.define') {
mut action := plbook.get(filter: 'docusaurus.define')!
mut p := action.params
//example how we get parameters from the action see aiprompts/herolib_core/core_params.md for more details
path_build := p.get_default('path_build', '')!
path_publish := p.get_default('path_publish', '')!
reset := p.get_default_false('reset')
use_doctree := p.get_default_false('use_doctree')
}
//example how we get parameters from the action see core_params.md for more details
ds = new(
path: p.get_default('path_publish', '')!
production: p.get_default_false('production')
)!
}
// Process 'docusaurus.add' actions to configure individual Docusaurus sites
actions := plbook.find(filter: 'docusaurus.add')!
@@ -138,7 +138,7 @@ pub fn play(mut plbook PlayBook) ! {
}
```
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/herolib_core/core_params.md`.
For detailed information on parameter retrieval methods (e.g., `p.get()`, `p.get_int()`, `p.get_default_true()`), refer to `aiprompts/ai_core/core_params.md`.
# PlayBook, process heroscripts

View File

@@ -10,7 +10,6 @@ fp.version('v0.1.0')
fp.description('Compile hero binary in debug or production mode')
fp.skip_executable()
prod_mode := fp.bool('prod', `p`, false, 'Build production version (optimized)')
help_requested := fp.bool('help', `h`, false, 'Show help message')
@@ -62,8 +61,6 @@ compile_cmd := if os.user_os() == 'macos' {
'v -enable-globals -g -w -n -prod hero.v'
} else {
'v -n -g -w -cg -gc none -cc tcc -d use_openssl -enable-globals hero.v'
// 'v -n -g -w -cg -gc none -cc tcc -d use_openssl -enable-globals hero.v'
// 'v -cg -enable-globals -parallel-cc -w -n -d use_openssl hero.v'
}
} else {
if prod_mode {

View File

@@ -53,7 +53,7 @@ fn do() ! {
mut cmd := Command{
name: 'hero'
description: 'Your HERO toolset.'
version: '1.0.38'
version: '1.0.36'
}
mut toinstall := false
@@ -103,4 +103,4 @@ fn main() {
print_backtrace()
exit(1)
}
}
}

View File

@@ -40,3 +40,4 @@ RUN /tmp/install_herolib.vsh && \
ENTRYPOINT ["/bin/bash"]
CMD ["/bin/bash"]

View File

@@ -1,17 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.ai.client
mut cl := client.new()!
// response := cl.llms.llm_local.chat_completion(
// message: 'Explain quantum computing in simple terms'
// temperature: 0.5
// max_completion_tokens: 1024
// )!
response := cl.llms.llm_embed.chat_completion(
message: 'Explain quantum computing in simple terms'
)!
println(response)

View File

@@ -1,12 +1,12 @@
#!/usr/bin/env hero
!!doctree.scan
!!atlas.scan
git_url: 'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/mycelium_economics'
!!doctree.scan
!!atlas.scan
git_url: 'https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/authentic_web'
// !!doctree.scan
// !!atlas.scan
// git_url: 'https://git.ourworld.tf/geomind/docs_geomind/src/branch/main/collections/usecases'
!!doctree.export destination: '/tmp/doctree_export'
!!atlas.export destination: '/tmp/atlas_export'

View File

@@ -1,15 +1,15 @@
#!/usr/bin/env hero
!!doctree.scan
git_url: 'https://git.ourworld.tf/geomind/doctree_geomind/src/branch/main/content'
meta_path: '/tmp/doctree_export_meta'
!!atlas.scan
git_url: 'https://git.ourworld.tf/geomind/atlas_geomind/src/branch/main/content'
meta_path: '/tmp/atlas_export_meta'
!!doctree.scan
git_url: 'https://git.ourworld.tf/tfgrid/doctree_threefold/src/branch/main/content'
meta_path: '/tmp/doctree_export_meta'
!!atlas.scan
git_url: 'https://git.ourworld.tf/tfgrid/atlas_threefold/src/branch/main/content'
meta_path: '/tmp/atlas_export_meta'
ignore3: 'static,templates,groups'
!!doctree.export
destination: '/tmp/doctree_export_test'
!!atlas.export
destination: '/tmp/atlas_export_test'
include: true
redis: true

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env hero
!!doctree.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/tests"
!!atlas.scan git_url:"https://git.ourworld.tf/tfgrid/docs_tfgrid4/src/branch/main/collections/tests"
!!doctree.export destination: '/tmp/doctree_export'
!!atlas.export destination: '/tmp/atlas_export'

View File

@@ -1,308 +0,0 @@
#!/usr/bin/env -S vrun
import incubaid.herolib.data.doctree
import incubaid.herolib.ui.console
import os
fn main() {
println('=== ATLAS DEBUG SCRIPT ===\n')
// Create and scan doctree
mut a := doctree.new(name: 'main')!
// Scan the collections
println('Scanning collections...\n')
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/mycelium_nodes_tiers'
)!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/geomind_compare'
)!
a.scan(path: '/Users/despiegk/code/git.ourworld.tf/geomind/docs_geomind/collections/geoaware')!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_economics'
)!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_concepts'
)!
a.scan(
path: '/Users/despiegk/code/git.ourworld.tf/tfgrid/docs_tfgrid4/collections/mycelium_cloud_tech'
)!
// Initialize doctree (post-scanning validation)
a.init_post()!
// Print all pages per collection
println('\n=== COLLECTIONS & PAGES ===\n')
for col_name, col in a.collections {
println('Collection: ${col_name}')
println(' Pages (${col.pages.len}):')
if col.pages.len > 0 {
for page_name, _ in col.pages {
println(' - ${page_name}')
}
} else {
println(' (empty)')
}
println(' Files/Images (${col.files.len}):')
if col.files.len > 0 {
for file_name, _ in col.files {
println(' - ${file_name}')
}
} else {
println(' (empty)')
}
}
// Validate links (this will recursively find links across collections)
println('\n=== VALIDATING LINKS (RECURSIVE) ===\n')
a.validate_links()!
println(' Link validation complete\n')
// Check for broken links
println('\n=== BROKEN LINKS ===\n')
mut total_errors := 0
for col_name, col in a.collections {
if col.has_errors() {
println('Collection: ${col_name} (${col.errors.len} errors)')
for err in col.errors {
println(' [${err.category_str()}] Page: ${err.page_key}')
println(' Message: ${err.message}')
println('')
total_errors++
}
}
}
if total_errors == 0 {
println(' No broken links found!')
} else {
println('\n Total broken link errors: ${total_errors}')
}
// Show discovered links per page (validates recursive discovery)
println('\n\n=== DISCOVERED LINKS (RECURSIVE RESOLUTION) ===\n')
println('Checking for files referenced by cross-collection pages...\n')
mut total_links := 0
for col_name, col in a.collections {
mut col_has_links := false
for page_name, page in col.pages {
if page.links.len > 0 {
if !col_has_links {
println('Collection: ${col_name}')
col_has_links = true
}
println(' Page: ${page_name} (${page.links.len} links)')
for link in page.links {
target_col := if link.target_collection_name != '' {
link.target_collection_name
} else {
col_name
}
println(' ${target_col}:${link.target_item_name} [${link.file_type}]')
total_links++
}
}
}
}
println('\n Total links discovered: ${total_links}')
// List pages that need investigation
println('\n=== CHECKING SPECIFIC MISSING PAGES ===\n')
missing_pages := [
'compare_electricity',
'internet_basics',
'centralization_risk',
'gdp_negative',
]
// Check in geoaware collection
if 'geoaware' in a.collections {
mut geoaware := a.get_collection('geoaware')!
println('Collection: geoaware')
if geoaware.pages.len > 0 {
println(' All pages in collection:')
for page_name, _ in geoaware.pages {
println(' - ${page_name}')
}
} else {
println(' (No pages found)')
}
println('\n Checking for specific missing pages:')
for page_name in missing_pages {
exists := page_name in geoaware.pages
status := if exists { '✓' } else { '✗' }
println(' ${status} ${page_name}')
}
}
// Check for pages across all collections
println('\n\n=== LOOKING FOR MISSING PAGES ACROSS ALL COLLECTIONS ===\n')
for missing_page in missing_pages {
println('Searching for "${missing_page}":')
mut found := false
for col_name, col in a.collections {
if missing_page in col.pages {
println(' Found in: ${col_name}')
found = true
}
}
if !found {
println(' Not found in any collection')
}
}
// Check for the solution page
println('\n\n=== CHECKING FOR "solution" PAGE ===\n')
for col_name in ['mycelium_nodes_tiers', 'geomind_compare', 'geoaware', 'mycelium_economics',
'mycelium_concepts', 'mycelium_cloud_tech'] {
if col_name in a.collections {
mut col := a.get_collection(col_name)!
exists := col.page_exists('solution')!
status := if exists { '✓' } else { '✗' }
println('${status} ${col_name}: "solution" page')
}
}
// Print error summary
println('\n\n=== ERROR SUMMARY BY CATEGORY ===\n')
mut category_counts := map[string]int{}
for _, col in a.collections {
for err in col.errors {
cat_str := err.category_str()
category_counts[cat_str]++
}
}
if category_counts.len == 0 {
println(' No errors found!')
} else {
for cat, count in category_counts {
println('${cat}: ${count}')
}
}
// ===== EXPORT AND FILE VERIFICATION TEST =====
println('\n\n=== EXPORT AND FILE VERIFICATION TEST ===\n')
// Create export directory
export_path := '/tmp/doctree_debug_export'
if os.exists(export_path) {
os.rmdir_all(export_path)!
}
os.mkdir_all(export_path)!
println('Exporting to: ${export_path}\n')
a.export(destination: export_path)!
println(' Export completed\n')
// Collect all files found during link validation
mut expected_files := map[string]string{} // key: file_name, value: collection_name
mut file_count := 0
for col_name, col in a.collections {
for page_name, page in col.pages {
for link in page.links {
if link.status == .found && (link.file_type == .file || link.file_type == .image) {
file_key := link.target_item_name
expected_files[file_key] = link.target_collection_name
file_count++
}
}
}
}
println('Expected to find ${file_count} file references in links\n')
println('=== VERIFYING FILES IN EXPORT DIRECTORY ===\n')
// Get the first collection name (the primary exported collection)
mut primary_col_name := ''
for col_name, _ in a.collections {
primary_col_name = col_name
break
}
if primary_col_name == '' {
println(' No collections found')
} else {
mut verified_count := 0
mut missing_count := 0
mut found_files := map[string]bool{}
// Check both img and files directories
img_dir := '${export_path}/content/${primary_col_name}/img'
files_dir := '${export_path}/content/${primary_col_name}/files'
// Scan img directory
if os.exists(img_dir) {
img_files := os.ls(img_dir) or { []string{} }
for img_file in img_files {
found_files[img_file] = true
}
}
// Scan files directory
if os.exists(files_dir) {
file_list := os.ls(files_dir) or { []string{} }
for file in file_list {
found_files[file] = true
}
}
println('Files/Images found in export directory:')
if found_files.len > 0 {
for file_name, _ in found_files {
println(' ${file_name}')
if file_name in expected_files {
verified_count++
}
}
} else {
println(' (none found)')
}
println('\n=== FILE VERIFICATION RESULTS ===\n')
println('Expected files from links: ${file_count}')
println('Files found in export: ${found_files.len}')
println('Files verified (present in export): ${verified_count}')
// Check for missing expected files
for expected_file, source_col in expected_files {
if expected_file !in found_files {
missing_count++
println(' Missing: ${expected_file} (from ${source_col})')
}
}
if missing_count > 0 {
println('\n ${missing_count} expected files are MISSING from export!')
} else if verified_count == file_count && file_count > 0 {
println('\n All expected files are present in export directory!')
} else if file_count == 0 {
println('\n No file links were found during validation (check if pages have file references)')
}
// Show directory structure
println('\n=== EXPORT DIRECTORY STRUCTURE ===\n')
if os.exists('${export_path}/content/${primary_col_name}') {
println('${export_path}/content/${primary_col_name}/')
content_files := os.ls('${export_path}/content/${primary_col_name}') or { []string{} }
for item in content_files {
full_path := '${export_path}/content/${primary_col_name}/${item}'
if os.is_dir(full_path) {
sub_items := os.ls(full_path) or { []string{} }
println(' ${item}/ (${sub_items.len} items)')
for sub_item in sub_items {
println(' - ${sub_item}')
}
} else {
println(' - ${item}')
}
}
}
}
}

View File

@@ -1,18 +1,18 @@
 #!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
-import incubaid.herolib.data.doctree
+import incubaid.herolib.data.atlas
 import incubaid.herolib.core.pathlib
-import incubaid.herolib.web.doctree_client
+import incubaid.herolib.web.atlas_client
 import os
-// Example: DocTree Export and AtlasClient Usage
+// Example: Atlas Export and AtlasClient Usage
-println('DocTree Export & Client Example')
+println('Atlas Export & Client Example')
 println('============================================================')
 // Setup test directory
-test_dir := '/tmp/doctree_example'
-export_dir := '/tmp/doctree_export'
+test_dir := '/tmp/atlas_example'
+export_dir := '/tmp/atlas_export'
 os.rmdir_all(test_dir) or {}
 os.rmdir_all(export_dir) or {}
 os.mkdir_all(test_dir)!
@@ -30,9 +30,9 @@ page1.write('# Introduction\n\nWelcome to the docs!')!
 mut page2 := pathlib.get_file(path: '${col_path}/guide.md', create: true)!
 page2.write('# Guide\n\n!!include docs:intro\n\nMore content here.')!
-// Create and scan doctree
-println('\n1. Creating DocTree and scanning...')
-mut a := doctree.new(name: 'my_docs')!
+// Create and scan atlas
+println('\n1. Creating Atlas and scanning...')
+mut a := atlas.new(name: 'my_docs')!
 a.scan(path: test_dir)!
 println(' Found ${a.collections.len} collection(s)')
@@ -60,7 +60,7 @@ println(' ✓ Export complete')
 // Use AtlasClient to access exported content
 println('\n4. Using AtlasClient to read exported content...')
-mut client := doctree_client.new(export_dir: export_dir)!
+mut client := atlas_client.new(export_dir: export_dir)!
 // List collections
 collections := client.list_collections()!

View File

@@ -0,0 +1,198 @@
# HeroPrompt Example
Generate structured AI prompts from your codebase with file selection and workspace management.
## Quick Start
Run the example:
```bash
./examples/develop/heroprompt/prompt_example.vsh
```
This example demonstrates the complete workflow from creating a workspace to generating AI prompts.
---
## What is HeroPrompt?
HeroPrompt helps you organize code files and generate structured prompts for AI analysis:
- **Workspace Management**: Organize files into logical workspaces
- **File Selection**: Select specific files or entire directories
- **Prompt Generation**: Generate formatted prompts with file trees and contents
- **Redis Persistence**: All data persists across sessions
- **Active Workspace**: Easily switch between different workspaces
---
## Basic Usage
### 1. Create Instance and Workspace
```v
import incubaid.herolib.develop.heroprompt
// Create or get instance
mut hp := heroprompt.get(name: 'my_project', create: true)!
// Create workspace (first workspace is automatically active)
mut workspace := hp.new_workspace(
name: 'my_workspace'
description: 'My project workspace'
)!
```
### 2. Add Directories
```v
// Add directory and scan all files
mut dir := workspace.add_directory(
path: '/path/to/your/code'
name: 'my_code'
scan: true // Automatically scans all files and subdirectories
)!
```
### 3. Select Files
```v
// Select specific files
dir.select_file(path: '/path/to/file1.v')!
dir.select_file(path: '/path/to/file2.v')!
// Or select all files in directory
dir.select_all()!
```
### 4. Generate Prompt
```v
// Generate AI prompt with selected files
prompt := workspace.generate_prompt(
instruction: 'Review these files and suggest improvements'
)!
println(prompt)
```
---
## Generated Prompt Format
The generated prompt includes three sections:
````
<user_instructions>
Review these files and suggest improvements
</user_instructions>
<file_map>
my_project/
├── src/
│   ├── main.v *
│   └── utils.v *
└── README.md *
</file_map>
<file_contents>
File: /path/to/src/main.v
```v
module main
...
```
</file_contents>
````
---
## API Reference
### Factory Functions
```v
heroprompt.get(name: 'my_project', create: true)! // Get or create
heroprompt.delete(name: 'my_project')! // Delete instance
heroprompt.exists(name: 'my_project')! // Check if exists
heroprompt.list()! // List all instances
```
### HeroPrompt Methods
```v
hp.new_workspace(name: 'ws', description: 'desc')! // Create workspace
hp.get_workspace('ws')! // Get workspace by name
hp.list_workspaces() // List all workspaces
hp.delete_workspace('ws')! // Delete workspace
hp.get_active_workspace()! // Get active workspace
hp.set_active_workspace('ws')! // Set active workspace
```
### Workspace Methods
```v
ws.add_directory(path: '/path', name: 'dir', scan: true)! // Add directory
ws.list_directories() // List directories
ws.remove_directory(id: 'dir_id')! // Remove directory
ws.generate_prompt(instruction: 'Review')! // Generate prompt
ws.generate_file_map()! // Generate file tree
ws.generate_file_contents()! // Generate contents
```
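The map and contents can also be generated separately when you only need part of the prompt — a minimal sketch, reusing the workspace from the examples above:
```v
import incubaid.herolib.develop.heroprompt

mut hp := heroprompt.get(name: 'my_project', create: true)!
mut ws := hp.get_active_workspace()!
// Build just the tree view of the selected files
file_map := ws.generate_file_map()!
// Build just the concatenated file contents
file_contents := ws.generate_file_contents()!
println(file_map)
```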
### Directory Methods
```v
dir.select_file(path: '/path/to/file')! // Select file
dir.select_all()! // Select all files
dir.deselect_file(path: '/path/to/file')! // Deselect file
dir.deselect_all()! // Deselect all files
```
---
## Features
### Active Workspace
```v
// Get the currently active workspace
mut active := hp.get_active_workspace()!
// Switch to a different workspace
hp.set_active_workspace('other_workspace')!
```
### Multiple Workspaces
```v
// Create multiple workspaces for different purposes
mut backend := hp.new_workspace(name: 'backend')!
mut frontend := hp.new_workspace(name: 'frontend')!
mut docs := hp.new_workspace(name: 'documentation')!
```
### File Selection
```v
// Select individual files
dir.select_file(path: '/path/to/file.v')!
// Select all files in directory
dir.select_all()!
// Deselect files
dir.deselect_file(path: '/path/to/file.v')!
dir.deselect_all()!
```
---
## Tips
- Always start with cleanup (`heroprompt.delete()`) in examples to ensure a fresh state
- The first workspace created is automatically set as active
- File selection persists to Redis automatically
- Use `scan: true` when adding directories to automatically scan all files
- Selected files are tracked per directory for efficient management
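Putting the pieces together, a complete session looks like this — a sketch in which `/tmp/mycode` is a placeholder path you should replace with a real directory:
```v
import incubaid.herolib.develop.heroprompt

// Start from a clean state so the run is reproducible
heroprompt.delete(name: 'demo') or {}
mut hp := heroprompt.get(name: 'demo', create: true)!
mut ws := hp.new_workspace(name: 'review', description: 'Code review workspace')!
// Add a directory and scan it; then select everything in it
mut dir := ws.add_directory(path: '/tmp/mycode', name: 'code', scan: true)!
dir.select_all()!
// Generate the prompt and print it
prompt := ws.generate_prompt(instruction: 'Summarize what this code does')!
println(prompt)
// Clean up when done
heroprompt.delete(name: 'demo')!
```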

View File

@@ -1,50 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cg -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.develop.heroprompt
import os
// mut workspace := heroprompt.new(
// path: '${os.home_dir()}/code/github/incubaid/herolib'
// name: 'workspace'
// )!
mut workspace := heroprompt.get(
name: 'example_ws'
path: '${os.home_dir()}/code/github/incubaid/herolib'
create: true
)!
println('workspace (initial): ${workspace}')
println('selected (initial): ${workspace.selected_children()}')
// Add a directory and a file
workspace.add_dir(path: '${os.home_dir()}/code/github/incubaid/herolib/docker')!
workspace.add_file(
path: '${os.home_dir()}/code/github/incubaid/herolib/docker/docker_ubuntu_install.sh'
)!
println('selected (after add): ${workspace.selected_children()}')
// Build a prompt from current selection (should be empty now)
mut prompt := workspace.prompt(
text: 'Using the selected files, I want you to extract all print statements'
)
println('--- PROMPT START ---')
println(prompt)
println('--- PROMPT END ---')
// Remove the file by name, then the directory by name
workspace.remove_file(name: 'docker_ubuntu_install.sh') or { println('remove_file: ${err}') }
workspace.remove_dir(name: 'docker') or { println('remove_dir: ${err}') }
println('selected (after remove): ${workspace.selected_children()}')
// List workspaces (names only)
mut all := heroprompt.list_workspaces() or { []&heroprompt.Workspace{} }
mut names := []string{}
for w in all {
names << w.name
}
println('workspaces: ${names}')
// Optionally delete the example workspace
workspace.delete_workspace() or { println('delete_workspace: ${err}') }

View File

@@ -0,0 +1,145 @@
#!/usr/bin/env -S v -n -w -gc none -cg -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.develop.heroprompt
import os
println('=== HeroPrompt: AI Prompt Generation Example ===\n')
// ============================================================================
// STEP 1: Cleanup and Setup
// ============================================================================
// Always start fresh - delete any existing instance
println('Step 1: Cleaning up any existing instance...')
heroprompt.delete(name: 'prompt_demo') or {}
println(' Cleanup complete\n')
// ============================================================================
// STEP 2: Create HeroPrompt Instance
// ============================================================================
// Get or create a new HeroPrompt instance
// The 'create: true' parameter will create it if it doesn't exist
println('Step 2: Creating HeroPrompt instance...')
mut hp := heroprompt.get(name: 'prompt_demo', create: true)!
println(' Created instance: ${hp.name}\n')
// ============================================================================
// STEP 3: Create Workspace
// ============================================================================
// A workspace is a collection of directories and files
// The first workspace is automatically set as active
println('Step 3: Creating workspace...')
mut workspace := hp.new_workspace(
name: 'my_project'
description: 'Example project workspace'
)!
println(' Created workspace: ${workspace.name}')
println(' Active: ${workspace.is_active}')
println(' Description: ${workspace.description}\n')
// ============================================================================
// STEP 4: Add Directories to Workspace
// ============================================================================
// Add directories containing code you want to analyze
// The 'scan: true' parameter automatically scans all files and subdirectories
println('Step 4: Adding directories to workspace...')
homepath := os.home_dir()
// Add the examples directory
mut examples_dir := workspace.add_directory(
path: '${homepath}/code/github/incubaid/herolib/examples/develop/heroprompt'
name: 'examples'
scan: true
)!
println(' Added directory: examples')
// Add the library directory
mut lib_dir := workspace.add_directory(
path: '${homepath}/code/github/incubaid/herolib/lib/develop/heroprompt'
name: 'library'
scan: true
)!
println(' Added directory: library\n')
// ============================================================================
// STEP 5: Select Specific Files
// ============================================================================
// You can select specific files from directories for prompt generation
// This is useful when you only want to analyze certain files
println('Step 5: Selecting specific files...')
// Select individual files from the examples directory
examples_dir.select_file(
path: '${homepath}/code/github/incubaid/herolib/examples/develop/heroprompt/README.md'
)!
println(' Selected: README.md')
examples_dir.select_file(
path: '${homepath}/code/github/incubaid/herolib/examples/develop/heroprompt/prompt_example.vsh'
)!
println(' Selected: prompt_example.vsh')
// Select all files from the library directory
lib_dir.select_all()!
println(' Selected all files in library directory\n')
// ============================================================================
// STEP 6: Generate AI Prompt
// ============================================================================
// Generate a complete prompt with file map, file contents, and instructions
// The prompt automatically includes only the selected files
println('Step 6: Generating AI prompt...')
prompt := workspace.generate_prompt(
instruction: 'Review the selected files and provide suggestions for improvements.'
)!
println(' Generated prompt')
println(' Total length: ${prompt.len} characters\n')
// ============================================================================
// STEP 7: Display Prompt Preview
// ============================================================================
println('Step 7: Prompt preview (first 800 characters)...')
preview_len := if prompt.len > 800 { 800 } else { prompt.len }
println(prompt[..preview_len])
// ============================================================================
// STEP 8: Alternative - Get Active Workspace
// ============================================================================
// You can retrieve the active workspace without knowing its name
println('Step 8: Working with active workspace...')
mut active_ws := hp.get_active_workspace()!
println(' Retrieved active workspace: ${active_ws.name}')
println(' Directories: ${active_ws.directories.len}')
println(' Files: ${active_ws.files.len}\n')
// ============================================================================
// STEP 9: Set Different Active Workspace
// ============================================================================
// You can create multiple workspaces and switch between them
println('Step 9: Creating and switching workspaces...')
// Create a second workspace
mut workspace2 := hp.new_workspace(
name: 'documentation'
description: 'Documentation workspace'
is_active: false
)!
println(' Created workspace: ${workspace2.name}')
// Switch active workspace
hp.set_active_workspace('documentation')!
println(' Set active workspace to: documentation')
// Verify the switch
active_ws = hp.get_active_workspace()!
println(' Current active workspace: ${active_ws.name}\n')
// ============================================================================
// STEP 10: Cleanup
// ============================================================================
println('Step 10: Cleanup...')
heroprompt.delete(name: 'prompt_demo')!
println(' Deleted instance\n')

View File

@@ -1,4 +1 @@
hetzner_kristof1
hetzner_kristof2
hetzner_kristof3
hetzner_test1
hetzner_example

View File

@@ -1,3 +0,0 @@
export HETZNER_USER="#ws+JdQtGCdL"
export HETZNER_PASSWORD="Kds007kds!"
export HETZNER_SSHKEY_NAME="mahmoud"

View File

@@ -1,34 +1,37 @@
#!/usr/bin/env hero
// # Configure HetznerManager, replace with your own credentials, server id's and ssh key name and all other parameters
!!hetznermanager.configure
user:"user_name"
whitelist:"server_id"
password:"password"
sshkey:"ssh_key_name"
// !!hetznermanager.configure
// name:"main"
// user:"krist"
// whitelist:"2111181, 2392178, 2545053, 2542166, 2550508, 2550378,2550253"
// password:"wontsethere"
// sshkey:"kristof"
!!hetznermanager.server_rescue
server_name: 'server_name' // The name of the server to manage (or use `id`)
wait: true // Wait for the operation to complete
hero_install: true // Automatically install Herolib in the rescue system
// !!hetznermanager.server_rescue
// server_name: 'kristof21' // The name of the server to manage (or use `id`)
// wait: true // Wait for the operation to complete
// hero_install: true // Automatically install Herolib in the rescue system
// # Reset a server
!!hetznermanager.server_reset
instance: 'main'
server_name: 'server_name'
wait: true
// !!hetznermanager.server_reset
// instance: 'main'
// server_name: 'your-server-name'
// wait: true
// # Add a new SSH key to your Hetzner account
!!hetznermanager.key_create
instance: 'main'
key_name: 'ssh_key_name'
data: 'ssh-rsa AAAA...'
// !!hetznermanager.key_create
// instance: 'main'
// key_name: 'my-laptop-key'
// data: 'ssh-rsa AAAA...'
// Install Ubuntu 24.04 on a server
!!hetznermanager.ubuntu_install
server_name: 'server_name'
server_name: 'kristof2'
wait: true
hero_install: true // Install Herolib on the new OS

View File

@@ -0,0 +1,68 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds
user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hs := '
!!hetznermanager.configure
user:"${user}"
whitelist:"2111181, 2392178, 2545053, 2542166, 2550508, 2550378,2550253"
password:"${passwd}"
sshkey:"kristof"
'
println(hs)
playcmds.run(heroscript: hs)!
console.print_header('Hetzner Test.')
mut cl := hetznermanager.get()!
// println(cl)
// for i in 0 .. 5 {
// println('test cache, first time slow then fast')
// }
println(cl.servers_list()!)
// mut serverinfo := cl.server_info_get(name: 'kristof2')!
// println(serverinfo)
// cl.server_reset(name:"kristof2",wait:true)!
// don't forget to specify the keyname needed
// cl.server_rescue(name:"kristof2",wait:true, hero_install:true,sshkey_name:"kristof")!
// mut ks:=cl.keys_get()!
// println(ks)
// console.print_header('SSH login')
// mut b := builder.new()!
// mut n := b.node_new(ipaddr: serverinfo.server_ip)!
// this will put hero in debug mode on the system
// n.hero_install(compile:true)!
// n.shell("")!
// cl.ubuntu_install(name: 'kristof2', wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!

View File

@@ -1,79 +0,0 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds
// Server-specific configuration
const server_name = 'kristof1'
const server_whitelist = '2521602'
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hs := '
!!hetznermanager.configure
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
println(hs)
playcmds.run(heroscript: hs)!
console.print_header('Hetzner Test.')
mut cl := hetznermanager.get()!
// println(cl)
// for i in 0 .. 5 {
// println('test cache, first time slow then fast')
// }
println(cl.servers_list()!)
mut serverinfo := cl.server_info_get(name: server_name)!
println(serverinfo)
// cl.server_reset(name: 'kristof2', wait: true)!
// cl.server_rescue(name: name, wait: true, hero_install: true)!
// mut ks := cl.keys_get()!
// println(ks)
// console.print_header('SSH login')
cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
// cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!
// this will put hero in debug mode on the system
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
n.hero_install(compile: true)!
n.shell('')!

View File

@@ -1,54 +0,0 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds
// Server-specific configuration
const server_name = 'kristof2'
const server_whitelist = '2555487'
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hero_script := '
!!hetznermanager.configure
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
playcmds.run(heroscript: hero_script)!
mut hetznermanager_ := hetznermanager.get()!
mut serverinfo := hetznermanager_.server_info_get(name: server_name)!
println('${server_name} ${serverinfo.server_ip}')
hetznermanager_.server_rescue(name: server_name, wait: true, hero_install: true)!
mut keys := hetznermanager_.keys_get()!
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
hetznermanager_.ubuntu_install(name: server_name, wait: true, hero_install: true)!
n.shell('')!

View File

@@ -1,79 +0,0 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds
// Server-specific configuration
const server_name = 'kristof3'
const server_whitelist = '2573047'
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hs := '
!!hetznermanager.configure
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
println(hs)
playcmds.run(heroscript: hs)!
console.print_header('Hetzner Test.')
mut cl := hetznermanager.get()!
// println(cl)
// for i in 0 .. 5 {
// println('test cache, first time slow then fast')
// }
println(cl.servers_list()!)
mut serverinfo := cl.server_info_get(name: server_name)!
println(serverinfo)
// cl.server_reset(name: 'kristof2', wait: true)!
// cl.server_rescue(name: name, wait: true, hero_install: true)!
// mut ks := cl.keys_get()!
// println(ks)
// console.print_header('SSH login')
cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
// cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!
// this will put hero in debug mode on the system
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
n.hero_install(compile: true)!
n.shell('')!

View File

@@ -1,79 +0,0 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.ui.console
import incubaid.herolib.core.base
import incubaid.herolib.builder
import time
import os
import incubaid.herolib.core.playcmds
// Server-specific configuration
const server_name = 'test1'
const server_whitelist = '2575034'
// Load credentials from environment variables
// Source hetzner_env.sh before running: source examples/virt/hetzner/hetzner_env.sh
hetzner_user := os.environ()['HETZNER_USER'] or {
println('HETZNER_USER not set')
exit(1)
}
hetzner_passwd := os.environ()['HETZNER_PASSWORD'] or {
println('HETZNER_PASSWORD not set')
exit(1)
}
hetzner_sshkey_name := os.environ()['HETZNER_SSHKEY_NAME'] or {
println('HETZNER_SSHKEY_NAME not set')
exit(1)
}
hs := '
!!hetznermanager.configure
user:"${hetzner_user}"
whitelist:"${server_whitelist}"
password:"${hetzner_passwd}"
sshkey:"${hetzner_sshkey_name}"
'
println(hs)
playcmds.run(heroscript: hs)!
console.print_header('Hetzner Test.')
mut cl := hetznermanager.get()!
// println(cl)
// for i in 0 .. 5 {
// println('test cache, first time slow then fast')
// }
println(cl.servers_list()!)
mut serverinfo := cl.server_info_get(name: server_name)!
println(serverinfo)
// cl.server_reset(name: 'kristof2', wait: true)!
// cl.server_rescue(name: name, wait: true, hero_install: true)!
// mut ks := cl.keys_get()!
// println(ks)
// console.print_header('SSH login')
cl.ubuntu_install(name: server_name, wait: true, hero_install: true)!
// cl.ubuntu_install(name: 'kristof20', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550378, name: 'kristof21', wait: true, hero_install: true)!
// cl.ubuntu_install(id:2550508, name: 'kristof22', wait: true, hero_install: true)!
// cl.ubuntu_install(id: 2550253, name: 'kristof23', wait: true, hero_install: true)!
// this will put hero in debug mode on the system
mut b := builder.new()!
mut n := b.node_new(ipaddr: serverinfo.server_ip)!
n.hero_install(compile: true)!
n.shell('')!

View File

@@ -1,57 +1,9 @@
# Hetzner Examples
## Quick Start
### 1. Configure Environment Variables
Copy `hetzner_env.sh` and fill in your credentials. Get the login password from:
https://robot.hetzner.com/preferences/index
```bash
export HETZNER_USER="your-robot-username" # Hetzner Robot API username
export HETZNER_PASSWORD="your-password" # Hetzner Robot API password
export HETZNER_SSHKEY_NAME="my-key" # Name of SSH key registered in Hetzner
```
Each script has its own server name and whitelist ID defined at the top.
### 2. Run a Script
```bash
source hetzner_env.sh
./hetzner_kristof2.vsh
```
## SSH Keys
The `HETZNER_SSHKEY_NAME` must be the **name** of an SSH key already registered in your Hetzner Robot account.
Available keys in our Hetzner account:
- hossnys (RSA 2048)
- Jan De Landtsheer (ED25519 256)
- mahmoud (ED25519 256)
- kristof (ED25519 256)
- maxime (ED25519 256)
To add a new key, use `key_create` in your script or the Hetzner Robot web interface.
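For reference, the `key_create` action in HeroScript takes the key name and the public-key data (the values here are placeholders):
```heroscript
!!hetznermanager.key_create
    instance: 'main'
    key_name: 'my-laptop-key'
    data: 'ssh-rsa AAAA...'
```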
## Alternative: Using hero_secrets
You can also use the shared secrets repository:
```bash
hero git pull https://git.threefold.info/despiegk/hero_secrets
source ~/code/git.ourworld.tf/despiegk/hero_secrets/mysecrets.sh
```
## Troubleshooting
### Get Robot API credentials
Get your login credentials from: https://robot.hetzner.com/preferences/index
### Test API access
```bash
curl -u "your-username:your-password" https://robot-ws.your-server.de/server
```

View File

@@ -1,208 +0,0 @@
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.web.doctree.meta
import incubaid.herolib.core.playbook
import incubaid.herolib.ui.console
// Comprehensive HeroScript for testing multi-level navigation depths
const test_heroscript_nav_depth = '
!!site.config
name: "nav_depth_test"
title: "Navigation Depth Test Site"
description: "Testing multi-level nested navigation"
tagline: "Deep navigation structures"
!!site.navbar
title: "Nav Depth Test"
!!site.navbar_item
label: "Home"
to: "/"
position: "left"
// ============================================================
// LEVEL 1: Simple top-level category
// ============================================================
!!site.page_category
path: "Why"
collapsible: true
collapsed: false
//COLLECTION WILL BE REPEATED, HAS NO INFLUENCE ON NAVIGATION LEVELS
!!site.page src: "mycollection:intro"
label: "Why Choose Us"
title: "Why Choose Us"
description: "Reasons to use this platform"
!!site.page src: "benefits"
label: "Key Benefits"
title: "Key Benefits"
description: "Main benefits overview"
// ============================================================
// LEVEL 1: Simple top-level category
// ============================================================
!!site.page_category
path: "Tutorials"
collapsible: true
collapsed: false
!!site.page src: "getting_started"
label: "Getting Started"
title: "Getting Started"
description: "Basic tutorial to get started"
!!site.page src: "first_steps"
label: "First Steps"
title: "First Steps"
description: "Your first steps with the platform"
// ============================================================
// LEVEL 3: Three-level nested category (Tutorials > Operations > Urgent)
// ============================================================
!!site.page_category
path: "Tutorials/Operations/Urgent"
collapsible: true
collapsed: false
!!site.page src: "emergency_restart"
label: "Emergency Restart"
title: "Emergency Restart"
description: "How to emergency restart the system"
!!site.page src: "critical_fixes"
label: "Critical Fixes"
title: "Critical Fixes"
description: "Apply critical fixes immediately"
!!site.page src: "incident_response"
label: "Incident Response"
title: "Incident Response"
description: "Handle incidents in real-time"
// ============================================================
// LEVEL 2: Two-level nested category (Tutorials > Operations)
// ============================================================
!!site.page_category
path: "Tutorials/Operations"
collapsible: true
collapsed: false
!!site.page src: "daily_checks"
label: "Daily Checks"
title: "Daily Checks"
description: "Daily maintenance checklist"
!!site.page src: "monitoring"
label: "Monitoring"
title: "Monitoring"
description: "System monitoring procedures"
!!site.page src: "backups"
label: "Backups"
title: "Backups"
description: "Backup and restore procedures"
// ============================================================
// LEVEL 1: One-to-two level (Tutorials)
// ============================================================
// Note: This creates a sibling at the Tutorials level (not nested deeper)
!!site.page src: "advanced_concepts"
label: "Advanced Concepts"
title: "Advanced Concepts"
description: "Deep dive into advanced concepts"
!!site.page src: "troubleshooting"
label: "Troubleshooting"
title: "Troubleshooting"
description: "Troubleshooting guide"
// ============================================================
// LEVEL 2: Two-level nested category (Why > FAQ)
// ============================================================
!!site.page_category
path: "Why/FAQ"
collapsible: true
collapsed: false
!!site.page src: "general"
label: "General Questions"
title: "General Questions"
description: "Frequently asked questions"
!!site.page src: "pricing_questions"
label: "Pricing"
title: "Pricing Questions"
description: "Questions about pricing"
!!site.page src: "technical_faq"
label: "Technical FAQ"
title: "Technical FAQ"
description: "Technical frequently asked questions"
!!site.page src: "support_faq"
label: "Support"
title: "Support FAQ"
description: "Support-related FAQ"
// ============================================================
// LEVEL 4: Four-level nested category (Tutorials > Operations > Database > Optimization)
// ============================================================
!!site.page_category
path: "Tutorials/Operations/Database/Optimization"
collapsible: true
collapsed: false
!!site.page src: "query_optimization"
label: "Query Optimization"
title: "Query Optimization"
description: "Optimize your database queries"
!!site.page src: "indexing_strategy"
label: "Indexing Strategy"
title: "Indexing Strategy"
description: "Effective indexing strategies"
!!site.page_category
path: "Tutorials/Operations/Database"
collapsible: true
collapsed: false
!!site.page src: "configuration"
label: "Configuration"
title: "Database Configuration"
description: "Configure your database"
!!site.page src: "replication"
label: "Replication"
title: "Database Replication"
description: "Set up database replication"
'
fn check(s2 meta.Site) {
// assert s == s2
}
// ========================================================
// SETUP: Create and process playbook
// ========================================================
console.print_item('Creating playbook from HeroScript')
mut plbook := playbook.new(text: test_heroscript_nav_depth)!
console.print_green(' Playbook created')
console.lf()
console.print_item('Processing site configuration')
meta.play(mut plbook)!
console.print_green(' Site processed')
console.lf()
console.print_item('Retrieving configured site')
mut nav_site := meta.get(name: 'nav_depth_test')!
console.print_green(' Site retrieved')
console.lf()
// check(nav_site)

View File

@@ -1,201 +0,0 @@
# Site Module Usage Guide
## Quick Examples
### 1. Run Basic Example
```bash
cd examples/web/site
vrun process_site.vsh ./
```
Example output:
```
=== Site Configuration Processor ===
Processing HeroScript files from: ./
Found 1 HeroScript file(s):
- basic.heroscript
Processing: basic.heroscript
=== Configuration Complete ===
Site: simple_docs
Title: Simple Documentation
Pages: 4
Description: A basic documentation site
Navigation structure:
- [Page] Getting Started
- [Page] Installation
- [Page] Usage Guide
- [Page] FAQ
✓ Site configuration ready for deployment
```
### 2. Run Multi-Section Example
```bash
vrun process_site.vsh ./
# Edit process_site.vsh to use multi_section.heroscript instead
```
### 3. Process Custom Directory
```bash
vrun process_site.vsh /path/to/your/site/config
```
## File Structure
```
docs/
├── 0_config.heroscript # Basic config
├── 1_menu.heroscript # Navigation
├── 2_pages.heroscript # Pages and categories
└── process.vsh # Your processing script
```
## Creating Your Own Site
1. **Create a config directory:**
```bash
mkdir my_site
cd my_site
```
2. **Create config file (0_config.heroscript):**
```heroscript
!!site.config
name: "my_site"
title: "My Site"
```
3. **Create pages file (1_pages.heroscript):**
```heroscript
!!site.page src: "docs:intro"
title: "Getting Started"
```
4. **Process with script:**
```bash
vrun ../process_site.vsh ./
```
## Common Workflows
### Workflow 1: Documentation Site
```
docs/
├── 0_config.heroscript
│ └── Basic config + metadata
├── 1_menu.heroscript
│ └── Navbar + footer
├── 2_getting_started.heroscript
│ └── Getting started pages
├── 3_api.heroscript
│ └── API reference pages
└── 4_advanced.heroscript
└── Advanced topic pages
```
### Workflow 2: Internal Knowledge Base
```
kb/
├── 0_config.heroscript
├── 1_navigation.heroscript
└── 2_articles.heroscript
```
### Workflow 3: Product Documentation with Imports
```
product_docs/
├── 0_config.heroscript
├── 1_imports.heroscript
│ └── Import shared templates
├── 2_menu.heroscript
└── 3_pages.heroscript
```
## Tips & Tricks
### Tip 1: Reuse Collections
```heroscript
# Specify once, reuse multiple times
!!site.page src: "guides:intro"
!!site.page src: "setup" # Reuses "guides"
!!site.page src: "deployment" # Still "guides"
# Switch to new collection
!!site.page src: "api:reference"
!!site.page src: "examples" # Now "api"
```
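The sticky-collection rule above could be implemented roughly as follows. This is a minimal sketch of the assumed logic, not the module's actual code; `resolve_src` is a hypothetical helper name:

```v
// Sketch of the "sticky collection" rule for src: values (assumed logic).
// "guides:intro" selects collection "guides"; a bare "setup" reuses
// whatever collection was seen last.
fn resolve_src(src string, last_collection string) (string, string) {
	if src.contains(':') {
		parts := src.split_nth(':', 2)
		// the explicit prefix becomes the new sticky collection
		return parts[0], parts[1]
	}
	// no prefix: keep the previous collection
	return last_collection, src
}
```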
### Tip 2: Auto-Increment Categories
```heroscript
# Automatically positioned at 100, 200, 300...
!!site.page_category name: "basics"
!!site.page_category name: "advanced"
!!site.page_category name: "expert"
# Or specify explicit positions
!!site.page_category name: "basics" position: 10
!!site.page_category name: "advanced" position: 20
```
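The auto-increment positioning could be sketched like this. The counter struct and the step of 100 are assumptions for illustration, not the module's actual implementation:

```v
// Minimal sketch of auto-increment category positions (assumed logic).
struct CategoryCounter {
mut:
	next int
}

// Returns the explicit position when one is given, otherwise 100, 200, 300...
fn (mut c CategoryCounter) position_for(explicit int) int {
	if explicit > 0 {
		return explicit
	}
	c.next += 100
	return c.next
}
```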
### Tip 3: Title Extraction
Let titles come from markdown files:
```heroscript
# Don't specify title
!!site.page src: "docs:introduction"
# Title will be extracted from the first `#` heading in introduction.md
```
### Tip 4: Draft Pages
Hide pages while working on them:
```heroscript
!!site.page src: "docs:work_in_progress"
draft: true
title: "Work in Progress"
```
## Debugging
### Debug: Check What Got Configured
```v
mut s := site.get(name: 'my_site')!
println(s.pages) // All pages
println(s.nav) // Navigation structure
println(s.siteconfig) // Configuration
```
### Debug: List All Sites
```v
sites := site.list()
for site_name in sites {
println('Site: ${site_name}')
}
```
### Debug: Enable Verbose Output
Add `console.print_debug()` calls in your HeroScript processing.
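For example, a debug call could be placed before processing the playbook. The message content below is hypothetical; only `console.print_debug` itself appears in this codebase:

```v
import incubaid.herolib.ui.console

// Hypothetical placement: just before site.play(mut plbook)
console.print_debug('processing playbook from ${config_dir}')
```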
## Next Steps
- Customize `process_site.vsh` for your needs
- Add your existing pages (in markdown)
- Export to Docusaurus
- Deploy to production
For more info, see the main [Site Module README](./readme.md).

View File

@@ -1,53 +0,0 @@
#!/usr/bin/env hero
# Basic single-section documentation site
!!site.config
name: "simple_docs"
title: "Simple Documentation"
description: "A basic documentation site"
copyright: "© 2024 Example"
url: "https://docs.example.com"
base_url: "/"
!!site.navbar
title: "Simple Docs"
logo_src: "img/logo.png"
!!site.navbar_item
label: "Docs"
to: "/"
position: "left"
!!site.navbar_item
label: "GitHub"
href: "https://github.com/example/repo"
position: "right"
!!site.footer
style: "dark"
!!site.footer_item
title: "Documentation"
label: "Getting Started"
to: "getting-started"
!!site.footer_item
title: "Community"
label: "Discord"
href: "https://discord.gg/example"
!!site.page src: "docs:introduction"
title: "Getting Started"
description: "Learn the basics"
!!site.page src: "installation"
title: "Installation"
description: "How to install"
!!site.page src: "usage"
title: "Usage Guide"
description: "How to use the system"
!!site.page src: "faq"
title: "FAQ"
description: "Frequently asked questions"

View File

@@ -1,155 +0,0 @@
#!/usr/bin/env hero
# Multi-section documentation with categories
!!site.config
name: "multi_docs"
title: "Complete Documentation"
description: "Comprehensive documentation with multiple sections"
tagline: "Everything you need to know"
copyright: "© 2024 Tech Company"
url: "https://docs.techcompany.com"
base_url: "/docs"
!!site.navbar
title: "Tech Documentation"
logo_src: "img/logo.svg"
!!site.navbar_item
label: "Documentation"
to: "/"
position: "left"
!!site.navbar_item
label: "API"
to: "api"
position: "left"
!!site.navbar_item
label: "GitHub"
href: "https://github.com/techcompany"
position: "right"
!!site.footer
style: "dark"
!!site.footer_item
title: "Guides"
label: "Getting Started"
to: "getting-started"
!!site.footer_item
title: "Guides"
label: "Installation"
to: "installation"
!!site.footer_item
title: "Company"
label: "Website"
href: "https://techcompany.com"
!!site.footer_item
title: "Legal"
label: "Privacy"
href: "https://techcompany.com/privacy"
# ==================================================
# Getting Started Section
# ==================================================
!!site.page_category
name: "getting_started"
label: "Getting Started"
position: 100
!!site.page src: "docs:introduction"
title: "Introduction"
description: "What is this project?"
!!site.page src: "installation"
title: "Installation"
description: "Get up and running"
!!site.page src: "quickstart"
title: "Quick Start"
description: "Your first steps"
# ==================================================
# Core Concepts Section
# ==================================================
!!site.page_category
name: "concepts"
label: "Core Concepts"
position: 200
!!site.page src: "concepts:architecture"
title: "Architecture"
description: "System design and architecture"
!!site.page src: "components"
title: "Components"
description: "Main system components"
!!site.page src: "data_flow"
title: "Data Flow"
description: "How data flows through the system"
!!site.page src: "security"
title: "Security"
description: "Security considerations"
# ==================================================
# Advanced Topics Section
# ==================================================
!!site.page_category
name: "advanced"
label: "Advanced Topics"
position: 300
!!site.page src: "advanced:performance"
title: "Performance Tuning"
description: "Optimize your system"
!!site.page src: "scaling"
title: "Scaling"
description: "Scale to millions of users"
!!site.page src: "deployment"
title: "Deployment"
description: "Deploy to production"
# ==================================================
# API Reference Section
# ==================================================
!!site.page_category
name: "api"
label: "API Reference"
position: 400
!!site.page src: "api:overview"
title: "API Overview"
description: "API capabilities and base URLs"
!!site.page src: "rest_api"
title: "REST API"
description: "Complete REST API documentation"
!!site.page src: "graphql_api"
title: "GraphQL"
description: "GraphQL API documentation"
!!site.page src: "webhooks"
title: "Webhooks"
description: "Implement webhooks in your app"
# ==================================================
# Publishing
# ==================================================
!!site.publish
path: "/var/www/html/docs"
!!site.publish_dev
path: "/tmp/docs-preview"

View File

@@ -1,116 +0,0 @@
#!/usr/bin/env -S v -n -w -gc none -cg -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.core.playbook
import incubaid.herolib.web.site
import incubaid.herolib.ui.console
import os
// Process a site configuration from HeroScript files
println(console.color_fg(.green) + '=== Site Configuration Processor ===' + console.reset())
// Get directory from command line or use default
mut config_dir := './docs'
if os.args.len > 1 {
config_dir = os.args[1]
}
if !os.exists(config_dir) {
console.print_stderr('Error: Directory not found: ${config_dir}')
exit(1)
}
console.print_item('Processing HeroScript files from: ${config_dir}')
// Find all heroscript files
mut heroscript_files := []string{}
entries := os.ls(config_dir) or {
console.print_stderr('Error reading directory: ${err}')
exit(1)
}
for entry in entries {
if entry.ends_with('.heroscript') {
heroscript_files << entry
}
}
// Sort files (to ensure numeric prefix order)
heroscript_files.sort()
if heroscript_files.len == 0 {
console.print_stderr('No .heroscript files found in ${config_dir}')
exit(1)
}
console.print_item('Found ${heroscript_files.len} HeroScript file(s):')
for file in heroscript_files {
console.print_item(' - ${file}')
}
// Process each file
mut site_names := []string{}
for file in heroscript_files {
full_path := os.join_path(config_dir, file)
console.print_lf(1)
console.print_header('Processing: ${file}')
mut plbook := playbook.new(path: full_path) or {
console.print_stderr('Error loading ${file}: ${err}')
continue
}
site.play(mut plbook) or {
console.print_stderr('Error processing ${file}: ${err}')
continue
}
}
// Get all configured sites
site_names = site.list()
if site_names.len == 0 {
console.print_stderr('No sites were configured')
exit(1)
}
console.print_lf(2)
console.print_green('=== Configuration Complete ===')
// Display configured sites
for site_name in site_names {
mut configured_site := site.get(name: site_name) or { continue }
console.print_header('Site: ${site_name}')
console.print_item('Title: ${configured_site.siteconfig.title}')
console.print_item('Pages: ${configured_site.pages.len}')
console.print_item('Description: ${configured_site.siteconfig.description}')
// Show pages organized by category
if configured_site.nav.my_sidebar.len > 0 {
console.print_item('Navigation structure:')
for nav_item in configured_site.nav.my_sidebar {
match nav_item {
site.NavDoc {
console.print_item(' - [Page] ${nav_item.label}')
}
site.NavCat {
console.print_item(' - [Category] ${nav_item.label}')
for sub_item in nav_item.items {
match sub_item {
site.NavDoc {
console.print_item(' - ${sub_item.label}')
}
else {}
}
}
}
else {}
}
}
}
console.print_lf(1)
}
println(console.color_fg(.green) + ' Site configuration ready for deployment' + console.reset())

View File

@@ -45,7 +45,7 @@ fn addtoscript(tofind string, toadd string) ! {
// Reset symlinks (cleanup)
println('Resetting all symlinks...')
os.rm('${os.home_dir()}/.vmodules/incubaid/herolib') or {}
os.rm('${os.home_dir()}/.vmodules/freeflowuniverse/herolib') or {}
os.rm('${os.home_dir()}/.vmodules/incubaid/herolib') or {}
// Create necessary directories
os.mkdir_all('${os.home_dir()}/.vmodules/incubaid') or {

View File

@@ -13,7 +13,7 @@ import incubaid.herolib.installers.lang.python
import os
fn startupcmd() ![]startupmanager.ZProcessNewArgs {
_ := get()!
mut installer := get()!
mut res := []startupmanager.ZProcessNewArgs{}
// THIS IS EXAMPLE CODE AND NEEDS TO BE CHANGED
// res << startupmanager.ZProcessNewArgs{
@@ -28,7 +28,7 @@ fn startupcmd() ![]startupmanager.ZProcessNewArgs {
}
fn running() !bool {
_ := get()!
mut installer := get()!
// THIS IS EXAMPLE CODE AND NEEDS TO BE CHANGED
// this checks health of erpnext
// curl http://localhost:3333/api/v1/s --oauth2-bearer 1234 works

View File

@@ -16,7 +16,7 @@ pub mut:
pub fn (b BizModel) export(args ExportArgs) ! {
name := if args.name != '' { args.name } else { texttools.snake_case(args.title) }
pathlib.get_dir(
path := pathlib.get_dir(
path: os.join_path(os.home_dir(), 'hero/var/bizmodel/exports/${name}')
create: true
empty: true
@@ -52,7 +52,7 @@ pub fn (model BizModel) write_operational_plan(args ExportArgs) ! {
mut hr_page := pathlib.get_file(path: '${hr_dir.path}/human_resources.md')!
hr_page.template_write($tmpl('./templates/human_resources.md'), true)!
for _, employee in model.employees {
for key, employee in model.employees {
mut employee_page := pathlib.get_file(
path: '${hr_dir.path}/${texttools.snake_case(employee.name)}.md'
)!
@@ -73,7 +73,7 @@ pub fn (model BizModel) write_operational_plan(args ExportArgs) ! {
}
}
for _, department in model.departments {
for key, department in model.departments {
dept := department
mut dept_page := pathlib.get_file(
path: '${depts_dir.path}/${texttools.snake_case(department.name)}.md'
@@ -94,7 +94,7 @@ pub fn (model BizModel) write_revenue_model(args ExportArgs) ! {
products_page.template_write('# Products', true)!
name1 := 'example'
for _, product in model.products {
for key, product in model.products {
mut product_page := pathlib.get_file(
path: '${products_dir.path}/${texttools.snake_case(product.name)}.md'
)!

View File

@@ -7,7 +7,7 @@ import incubaid.herolib.core.pathlib
pub struct ExportCSVArgs {
pub mut:
path string
include_empty bool // whether to include empty cells or not
include_empty bool = false // whether to include empty cells or not
separator string = '|' // separator character for CSV
}

View File

@@ -22,7 +22,7 @@ pub fn play(mut plbook PlayBook) ! {
})
// play actions for each biz in plbook
for biz, _ in actions_by_biz {
for biz, actions in actions_by_biz {
mut model := getset(biz)!
model.play(mut plbook)!
}

View File

@@ -8,7 +8,7 @@ import incubaid.herolib.core.playbook { Action }
// title:'Engineering Division'
// avg_monthly_cost:'6000USD' avg_indexation:'5%'
fn (mut m BizModel) department_define_action(action Action) !Action {
_ := action.params.get_default('bizname', '')!
bizname := action.params.get_default('bizname', '')!
mut name := action.params.get('name') or { return error('department name is required') }
mut descr := action.params.get_default('descr', '')!
if descr.len == 0 {

View File

@@ -74,7 +74,7 @@ fn (mut m BizModel) employee_define_action(action Action) !Action {
mut curcost := -costpeople_row.cells[x].val
mut curpeople := nrpeople_row.cells[x].val
mut currev := revtotal.cells[x].val
println("currev: ${currev}, curcost: ${curcost}, curpeople: ${curpeople}, costpercent_revenue: ${cost_percent_revenue}")
// println("currev: ${currev}, curcost: ${curcost}, curpeople: ${curpeople}, costpercent_revenue: ${cost_percent_revenue}")
if currev * cost_percent_revenue > curcost {
costpeople_row.cells[x].val = -currev * cost_percent_revenue
nrpeople_row.cells[x].val = f64(currev * cost_percent_revenue / costperson_default.usd())

View File

@@ -10,7 +10,7 @@ fn (mut sim BizModel) pl_total() ! {
// sheet.pprint(nr_columns: 10)!
_ := sheet.group2row(
mut pl_total := sheet.group2row(
name: 'pl_summary'
include: ['pl']
tags: 'summary'

View File

@@ -77,7 +77,7 @@ fn (mut m BizModel) revenue_action(action Action) !Action {
product.has_revenue = true
}
_ := revenue.action(
mut margin := revenue.action(
name: '${r.name}_margin'
descr: 'Margin for ${r.name}'
action: .substract

View File

@@ -6,7 +6,7 @@ import incubaid.herolib.core.texttools
// see lib/biz/bizmodel/docs/revenue.md
fn (mut m BizModel) revenue_item_action(action Action) !Action {
mut r := get_action_descr(action)!
mut product := m.products[r.name] or { return error('Product "${r.name}" not found for revenue item action') }
mut product := m.products[r.name]
mut nr_sold := m.sheet.row_new(
name: '${r.name}_nr_sold'
@@ -193,7 +193,7 @@ fn (mut m BizModel) revenue_item_action(action Action) !Action {
tags: 'name:${r.name}'
)!
_ := margin_setup.action(
mut margin := margin_setup.action(
name: '${r.name}_margin'
descr: 'Margin for ${r.name}'
action: .add

View File

@@ -6,19 +6,19 @@ import incubaid.herolib.core.playbook
fn (mut sim BizModel) revenue_total() ! {
mut sheet := sim.sheet
_ := sheet.group2row(
mut revenue_total := sheet.group2row(
name: 'revenue_total'
include: ['rev']
tags: 'total revtotal pl'
descr: 'Revenue Total'
)!
_ := sheet.group2row(
mut cogs_total := sheet.group2row(
name: 'cogs_total'
include: ['cogs']
tags: 'total cogstotal pl'
descr: 'Cost of Goods Total.'
)!
_ := sheet.group2row(
mut margin_total := sheet.group2row(
name: 'margin_total'
include: ['margin']
tags: 'total margintotal'

View File

@@ -7,7 +7,7 @@ import incubaid.herolib.core.pathlib
pub struct ExportCSVArgs {
pub mut:
path string
include_empty bool // whether to include empty cells or not
include_empty bool = false // whether to include empty cells or not
separator string = '|' // separator character for CSV
}

View File

@@ -118,23 +118,23 @@ pub fn (s Sheet) data_get_as_string(args RowGetArgs) !string {
}
nryears := 5
err_pre := "Can't get data for sheet:${s.name} row:${args.rowname}.\n"
mut s2 := s
mut s2 := s
if args.period_type == .year {
s2 = *s.toyear(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
s2 = s.toyear(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
}
if args.period_type == .quarter {
s2 = *s.toquarter(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
s2 = s.toquarter(
name: args.rowname
namefilter: args.namefilter
includefilter: args.includefilter
excludefilter: args.excludefilter
)!
}
mut out := ''

View File

@@ -20,7 +20,7 @@ fn pad_right(s string, length int) string {
pub struct PPrintArgs {
pub mut:
group_months int = 1 // e.g. if 2 then will group by 2 months
nr_columns int // number of columns to show in the table, 0 is all
nr_columns int = 0 // number of columns to show in the table, 0 is all
description bool // show description in the table
aggrtype bool = true // show aggregate type in the table
tags bool = true // show tags in the table
@@ -151,7 +151,7 @@ pub fn (mut s Sheet) pprint(args PPrintArgs) ! {
}
max_cols := data_start_index + args.nr_columns
mut new_all_rows := [][]string{}
for _, row in all_rows {
for i, row in all_rows {
if row.len > max_cols {
new_all_rows << row[0..max_cols]
} else {

View File

@@ -67,9 +67,7 @@ pub fn (mut node Node) hero_install(args HeroInstallArgs) ! {
todo << 'bash /tmp/install_v.sh --herolib '
}
}
// Use exec instead of exec_interactive since user interaction is not needed
// exec_interactive uses shell mode which replaces the process and never returns
node.exec(cmd: todo.join('\n'), stdout: true)!
node.exec_interactive(todo.join('\n'))!
}
@[params]

View File

@@ -99,11 +99,8 @@ pub fn (mut executor ExecutorLocal) download(args SyncArgs) ! {
}
pub fn (mut executor ExecutorLocal) shell(cmd string) ! {
// Note: os.execvp replaces the current process and never returns.
// This is intentional - shell() is designed to hand over control to the shell.
// Do not put shell() before any other code that needs to execute.
if cmd.len > 0 {
os.execvp('/bin/bash', ['-c', cmd])!
os.execvp('/bin/bash', ["-c '${cmd}'"])!
} else {
os.execvp('/bin/bash', [])!
}

View File

@@ -235,12 +235,11 @@ pub fn (mut executor ExecutorSSH) info() map[string]string {
// forwarding ssh traffic to certain container
pub fn (mut executor ExecutorSSH) shell(cmd string) ! {
mut args := ['-o', 'StrictHostKeyChecking=no', '-o', 'UserKnownHostsFile=/dev/null',
'${executor.user}@${executor.ipaddr.addr}', '-p', '${executor.ipaddr.port}']
if cmd.len > 0 {
args << cmd
panic('TODO IMPLEMENT SHELL EXEC OVER SSH')
}
os.execvp('ssh', args)!
os.execvp('ssh', ['-o StrictHostKeyChecking=no', '${executor.user}@${executor.ipaddr.addr}',
'-p ${executor.ipaddr.port}'])!
}
pub fn (mut executor ExecutorSSH) list(path string) ![]string {

View File

@@ -228,7 +228,7 @@ pub fn (mut client MeilisearchClient) similar_documents(uid string, args Similar
method: .post
data: json.encode(args)
}
client.enable_eperimental_feature(vector_store: true)! // Enable the feature first.
res := client.enable_eperimental_feature(vector_store: true)! // Enable the feature first.
mut http := client.httpclient()!
rsponse := http.post_json_str(req)!
println('rsponse: ${rsponse}')

View File

@@ -19,7 +19,7 @@ pub mut:
user string = 'root'
port int = 5432
host string = 'localhost'
password string
password string = ''
dbname string = 'postgres'
}
@@ -52,7 +52,8 @@ pub fn heroscript_dumps(obj PostgresqlClient) !string {
}
pub fn heroscript_loads(heroscript string) !PostgresqlClient {
mut client := encoderhero.decode[PostgresqlClient](heroscript)!
client.db_ = pg.DB{}
return client
mut obj := encoderhero.decode[PostgresqlClient](heroscript)!
return PostgresqlClient{
db_: pg.DB{}
}
}

View File

@@ -114,5 +114,5 @@ fn (q QueryBuilder) build_query(args BuildQueryArgs) string {
fn type_to_map[T](t T) !map[string]json2.Any {
encoded_input := json2.encode(t)
return json2.decode[json2.Any](encoded_input)!.as_map()
return json2.raw_decode(encoded_input)!.as_map()
}

View File

@@ -11,13 +11,8 @@ pub fn parse_const(code_ string) !Const {
if !code.contains('=') {
return error('code <${code_}> is not of const')
}
mut name := code.split('=')[0].trim_space()
// Strip 'const ' prefix if present
if name.starts_with('const ') {
name = name.trim_string_left('const ').trim_space()
}
return Const{
name: name
name: code.split('=')[0].trim_space()
value: code.split('=')[1].trim_space()
}
}

View File

@@ -44,11 +44,6 @@ pub fn (p Param) typescript() string {
pub fn parse_param(code_ string) !Param {
mut code := code_.trim_space()
// Handle empty string (void return type)
if code == '' {
return Param{}
}
if code == '!' {
return Param{
is_result: true
@@ -65,13 +60,6 @@ pub fn parse_param(code_ string) !Param {
}
split := code.split(' ').filter(it != '')
// Handle empty split (void return type after mut check)
if split.len == 0 {
return Param{
mutable: is_mut
}
}
if split.len == 1 {
// means anonymous param
return Param{

View File

@@ -15,14 +15,14 @@ fn test_comprehensive_code_parsing() {
console.print_lf(1)
// Run all tests
check_module_parsing()!
check_struct_parsing()
check_function_parsing()!
check_imports_and_modules()
check_type_system()
check_visibility_modifiers()
check_method_parsing()!
check_constants_parsing()
test_module_parsing()
test_struct_parsing()
test_function_parsing()
test_imports_and_modules()
test_type_system()
test_visibility_modifiers()
test_method_parsing()
test_constants_parsing()
console.print_green(' All comprehensive tests passed!')
console.print_lf(1)
@@ -74,7 +74,7 @@ fn copy_directory(src string, dst string) ! {
}
}
fn check_module_parsing() ! {
fn test_module_parsing() {
console.print_header('Test 1: Module and File Parsing')
mut myparser := new(path: '/tmp/codeparsertest', recursive: true)!
@@ -98,7 +98,7 @@ fn check_module_parsing() ! {
console.print_lf(1)
}
fn check_struct_parsing() {
fn test_struct_parsing() {
console.print_header('Test 2: Struct Parsing')
models_file := os.join_path('/tmp/codeparsertest', 'models.v')
@@ -145,7 +145,7 @@ fn check_struct_parsing() {
console.print_lf(1)
}
fn check_function_parsing() ! {
fn test_function_parsing() {
console.print_header('Test 3: Function Parsing')
mut myparser := new(path: '/tmp/codeparsertest', recursive: true)!
@@ -191,7 +191,7 @@ fn check_function_parsing() ! {
console.print_lf(1)
}
fn check_imports_and_modules() {
fn test_imports_and_modules() {
console.print_header('Test 4: Imports and Module Names')
models_file := os.join_path('/tmp/codeparsertest', 'models.v')
@@ -222,7 +222,7 @@ fn check_imports_and_modules() {
console.print_lf(1)
}
fn check_type_system() {
fn test_type_system() {
console.print_header('Test 5: Type System')
models_file := os.join_path('/tmp/codeparsertest', 'models.v')
@@ -257,7 +257,7 @@ fn check_type_system() {
console.print_lf(1)
}
fn check_visibility_modifiers() {
fn test_visibility_modifiers() {
console.print_header('Test 6: Visibility Modifiers')
models_file := os.join_path('/tmp/codeparsertest', 'models.v')
@@ -293,7 +293,7 @@ fn check_visibility_modifiers() {
console.print_lf(1)
}
fn check_method_parsing() ! {
fn test_method_parsing() {
console.print_header('Test 7: Method Parsing')
mut myparser := new(path: '/tmp/codeparsertest', recursive: true)!
@@ -327,7 +327,7 @@ fn check_method_parsing() ! {
console.print_lf(1)
}
fn check_constants_parsing() {
fn test_constants_parsing() {
console.print_header('Test 8: Constants Parsing')
models_file := os.join_path('/tmp/codeparsertest', 'models.v')

View File

@@ -217,7 +217,7 @@ fn cmd_git_execute(cmd Command) ! {
mut gs := gittools.new(coderoot: coderoot)!
// create the filter for doing group actions, or action on 1 repo
_ := ''
mut filter := ''
mut url := ''
mut path := ''

View File

@@ -164,7 +164,7 @@ pub fn plbook_run(cmd Command) !(&playbook.PlayBook, string) {
playbook.new(path: path)!
}
_ := cmd.flags.get_bool('dagu') or { false }
dagu := cmd.flags.get_bool('dagu') or { false }
playcmds.run(plbook: plbook)!

View File

@@ -3,7 +3,7 @@ module playcmds
import incubaid.herolib.core.playbook { PlayBook }
import incubaid.herolib.data.atlas
import incubaid.herolib.biz.bizmodel
import incubaid.herolib.mycelium.incatokens
import incubaid.herolib.threefold.incatokens
import incubaid.herolib.web.site
import incubaid.herolib.virt.hetznermanager
import incubaid.herolib.virt.heropods
@@ -20,7 +20,6 @@ import incubaid.herolib.installers.horus.herorunner
import incubaid.herolib.installers.horus.osirisrunner
import incubaid.herolib.installers.horus.salrunner
import incubaid.herolib.installers.virt.podman
import incubaid.herolib.installers.virt.kubernetes_installer
import incubaid.herolib.installers.infra.gitea
import incubaid.herolib.builder
@@ -81,7 +80,6 @@ pub fn run(args_ PlayArgs) ! {
herolib.play(mut plbook)!
vlang.play(mut plbook)!
podman.play(mut plbook)!
kubernetes_installer.play(mut plbook)!
gitea.play(mut plbook)!
giteaclient.play(mut plbook)!

View File

@@ -11,7 +11,7 @@ pub fn play_ssh(mut plbook PlayBook) ! {
}
// Get or create a single SSH agent instance
_ := sshagent.new_single(sshagent.SSHAgentNewArgs{})!
mut agent := sshagent.new_single(sshagent.SSHAgentNewArgs{})!
// TO IMPLEMENT:

View File

@@ -2,8 +2,8 @@ module playmacros
import incubaid.herolib.ui.console
import incubaid.herolib.core.playbook { Action, PlayBook }
import incubaid.herolib.mycelium.grid4.gridsimulator
import incubaid.herolib.mycelium.grid4.farmingsimulator
import incubaid.herolib.threefold.grid4.gridsimulator
import incubaid.herolib.threefold.grid4.farmingsimulator
import incubaid.herolib.biz.bizmodel
import incubaid.herolib.biz.spreadsheet

View File

@@ -23,7 +23,7 @@ pub fn escape_regex_chars(s string) string {
// This function does not add implicit ^ and $ anchors, allowing for substring matches.
fn wildcard_to_regex(wildcard_pattern string) string {
mut regex_pattern := ''
for _, r in wildcard_pattern.runes() {
for i, r in wildcard_pattern.runes() {
match r {
`*` {
regex_pattern += '.*'

View File

@@ -381,48 +381,3 @@ fn test_get_edit_url() {
// Assert the URLs are correct
// assert edit_url == 'https://github.com/test/repo/edit/main/test_page.md'
}
fn test_export_recursive_links() {
// Create 3 collections with chained links
col_a_path := '${test_base}/recursive_export/col_a'
col_b_path := '${test_base}/recursive_export/col_b'
col_c_path := '${test_base}/recursive_export/col_c'
os.mkdir_all(col_a_path)!
os.mkdir_all(col_b_path)!
os.mkdir_all(col_c_path)!
// Collection A
mut cfile_a := pathlib.get_file(path: '${col_a_path}/.collection', create: true)!
cfile_a.write('name:col_a')!
mut page_a := pathlib.get_file(path: '${col_a_path}/page_a.md', create: true)!
page_a.write('# Page A\n\n[Link to B](col_b:page_b)')!
// Collection B
mut cfile_b := pathlib.get_file(path: '${col_b_path}/.collection', create: true)!
cfile_b.write('name:col_b')!
mut page_b := pathlib.get_file(path: '${col_b_path}/page_b.md', create: true)!
page_b.write('# Page B\n\n[Link to C](col_c:page_c)')!
// Collection C
mut cfile_c := pathlib.get_file(path: '${col_c_path}/.collection', create: true)!
cfile_c.write('name:col_c')!
mut page_c := pathlib.get_file(path: '${col_c_path}/page_c.md', create: true)!
page_c.write('# Page C\n\nFinal content')!
// Export
mut a := new()!
a.add_collection(mut pathlib.get_dir(path: col_a_path)!)!
a.add_collection(mut pathlib.get_dir(path: col_b_path)!)!
a.add_collection(mut pathlib.get_dir(path: col_c_path)!)!
export_path := '${test_base}/export_recursive'
a.export(destination: export_path)!
// Verify all pages were exported
assert os.exists('${export_path}/content/col_a/page_a.md')
assert os.exists('${export_path}/content/col_a/page_b.md') // From Collection B
assert os.exists('${export_path}/content/col_a/page_c.md') // From Collection C
// TODO: test not complete
}

View File

@@ -18,7 +18,7 @@ AtlasClient provides methods to:
import incubaid.herolib.web.atlas_client
// Create client
mut client := atlas_client.new(export_dir: '${os.home_dir()}/hero/var/atlas_export')!
mut client := atlas_client.new(export_dir: '/tmp/atlas_export')!
// List collections
collections := client.list_collections()!

View File

@@ -44,7 +44,6 @@ pub mut:
}
// Export a single collection
// Export a single collection with recursive link processing
pub fn (mut c Collection) export(args CollectionExportArgs) ! {
// Create collection directory
mut col_dir := pathlib.get_dir(
@@ -67,14 +66,11 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
)!
json_file.write(meta)!
// Track all cross-collection pages and files that need to be exported
// Use maps with collection:name as key to track globally across all resolutions
mut cross_collection_pages := map[string]&Page{} // key: "collection:page_name"
mut cross_collection_files := map[string]&File{} // key: "collection:file_name"
mut processed_local_pages := map[string]bool{} // Track which local pages we've already processed
mut processed_cross_pages := map[string]bool{} // Track which cross-collection pages we've processed for links
// Track cross-collection pages and files that need to be copied for self-contained export
mut cross_collection_pages := map[string]&Page{} // key: page.name, value: &Page
mut cross_collection_files := map[string]&File{} // key: file.name, value: &File
// First pass: export all pages in this collection and recursively collect ALL cross-collection references
// First pass: export all pages in this collection and collect cross-collection references
for _, mut page in c.pages {
// Get content with includes processed and links transformed for export
content := page.content_with_fixed_links(
@@ -86,11 +82,33 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
mut dest_file := pathlib.get_file(path: '${col_dir.path}/${page.name}.md', create: true)!
dest_file.write(content)!
// Recursively collect cross-collection references from this page
c.collect_cross_collection_references(mut page, mut cross_collection_pages, mut
cross_collection_files, mut processed_cross_pages)!
// Collect cross-collection references for copying (pages and files/images)
// IMPORTANT: Use cached links from validation (before transformation) to preserve collection info
for mut link in page.links {
if link.status != .found {
continue
}
processed_local_pages[page.name] = true
// Collect cross-collection page references
is_local := link.target_collection_name == c.name
if link.file_type == .page && !is_local {
mut target_page := link.target_page() or { continue }
// Use page name as key to avoid duplicates
if target_page.name !in cross_collection_pages {
cross_collection_pages[target_page.name] = target_page
}
}
// Collect cross-collection file/image references
if (link.file_type == .file || link.file_type == .image) && !is_local {
mut target_file := link.target_file() or { continue }
// Use file name as key to avoid duplicates
file_key := target_file.name
if file_key !in cross_collection_files {
cross_collection_files[file_key] = target_file
}
}
}
// Redis operations...
if args.redis {
@@ -118,48 +136,21 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
src_file.copy(dest: dest_file.path)!
}
// Second pass: copy all collected cross-collection pages and process their links recursively
// Keep iterating until no new cross-collection references are found
for {
mut found_new_references := false
// Second pass: copy cross-collection referenced pages to make collection self-contained
for _, mut ref_page in cross_collection_pages {
// Get the referenced page content with includes processed
ref_content := ref_page.content_with_fixed_links(
include: args.include
cross_collection: true
export_mode: true
)!
// Process all cross-collection pages we haven't processed yet
for page_key, mut ref_page in cross_collection_pages {
if page_key in processed_cross_pages {
continue // Already processed this page's links
}
// Mark as processed to avoid infinite loops
processed_cross_pages[page_key] = true
found_new_references = true
// Get the referenced page content with includes processed
ref_content := ref_page.content_with_fixed_links(
include: args.include
cross_collection: true
export_mode: true
)!
// Write the referenced page to this collection's directory
mut dest_file := pathlib.get_file(
path: '${col_dir.path}/${ref_page.name}.md'
create: true
)!
dest_file.write(ref_content)!
// CRITICAL: Recursively process links in this cross-collection page
// This ensures we get pages/files/images referenced by ref_page
c.collect_cross_collection_references(mut ref_page, mut cross_collection_pages, mut
cross_collection_files, mut processed_cross_pages)!
}
// If we didn't find any new references, we're done with the recursive pass
if !found_new_references {
break
}
// Write the referenced page to this collection's directory
mut dest_file := pathlib.get_file(path: '${col_dir.path}/${ref_page.name}.md', create: true)!
dest_file.write(ref_content)!
}
// Third pass: copy ALL collected cross-collection referenced files/images
// Third pass: copy cross-collection referenced files/images to make collection self-contained
for _, mut ref_file in cross_collection_files {
mut src_file := ref_file.path()!
@@ -177,42 +168,3 @@ pub fn (mut c Collection) export(args CollectionExportArgs) ! {
src_file.copy(dest: dest_file.path)!
}
}
// Helper function to recursively collect cross-collection references
// This processes a page's links and adds all non-local references to the collections
fn (mut c Collection) collect_cross_collection_references(mut page Page,
mut all_cross_pages map[string]&Page,
mut all_cross_files map[string]&File,
mut processed_pages map[string]bool) ! {
// Use cached links from validation (before transformation) to preserve collection info
for mut link in page.links {
if link.status != .found {
continue
}
is_local := link.target_collection_name == c.name
// Collect cross-collection page references
if link.file_type == .page && !is_local {
page_key := '${link.target_collection_name}:${link.target_item_name}'
// Only add if not already collected
if page_key !in all_cross_pages {
mut target_page := link.target_page()!
all_cross_pages[page_key] = target_page
// Don't mark as processed yet - we'll do that when we actually process its links
}
}
// Collect cross-collection file/image references
if (link.file_type == .file || link.file_type == .image) && !is_local {
file_key := '${link.target_collection_name}:${link.target_item_name}'
// Only add if not already collected
if file_key !in all_cross_files {
mut target_file := link.target_file()!
all_cross_files[file_key] = target_file
}
}
}
}
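The two-pass export above, with `collect_cross_collection_references` feeding new pages back into the maps until nothing changes, is a worklist-style fixed point. A minimal sketch of that pattern in isolation (simplified to plain strings; `links_of` is a hypothetical stand-in for traversing `page.links`, not the actual Collection API):

```v
// Worklist sketch of the recursive reference collection above.
// `links_of` returns the keys a given page links to.
fn collect_all(start []string, links_of fn (string) []string) map[string]bool {
	mut collected := map[string]bool{}
	mut queue := start.clone()
	for queue.len > 0 {
		key := queue.pop()
		if key in collected {
			continue // already processed; this is what breaks cycles
		}
		collected[key] = true
		for target in links_of(key) {
			if target !in collected {
				queue << target
			}
		}
	}
	return collected
}
```

Marking a page processed before following its links is the same safeguard `processed_cross_pages` provides: mutually referencing pages terminate instead of looping forever.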

View File

@@ -3,7 +3,6 @@ module atlas
import incubaid.herolib.core.playbook { PlayBook }
import incubaid.herolib.develop.gittools
import incubaid.herolib.ui.console
import os
// Play function to process HeroScript actions for Atlas
pub fn play(mut plbook PlayBook) ! {
@@ -67,7 +66,7 @@ pub fn play(mut plbook PlayBook) ! {
for mut action in export_actions {
mut p := action.params
name = p.get_default('name', 'main')!
destination := p.get_default('destination', '${os.home_dir()}/hero/var/atlas_export')!
destination := p.get_default('destination', '/tmp/atlas_export')!
reset := p.get_default_true('reset')
include := p.get_default_true('include')
redis := p.get_default_true('redis')

View File

@@ -38,7 +38,7 @@ pub fn set_titles(page string, maxnr int) string {
for line in lines {
mut hash_count := 0
mut first_char_idx := 0
for _, r in line.runes() {
for char_idx, r in line.runes() {
if r == ` ` {
first_char_idx++
continue
@@ -89,7 +89,7 @@ pub fn set_titles(page string, maxnr int) string {
// Remove existing numbering (e.g., "1. ", "1.1. ")
mut skip_chars := 0
mut in_numbering := true
for _, r in original_title_text.runes() {
for r_idx, r in original_title_text.runes() {
if in_numbering {
if (r >= `0` && r <= `9`) || r == `.` || r == ` ` {
skip_chars++

View File

@@ -22,8 +22,8 @@ pub mut:
recursive bool
pull bool
reload bool // means reload the info into the cache
script bool // run non interactive
reset bool // means we will lose changes (only relevant for clone, pull)
script bool = true // run non interactive
reset bool = true // means we will lose changes (only relevant for clone, pull)
}
// do group actions on repo
@@ -38,12 +38,14 @@ pub mut:
// url string
// pull bool
// reload bool //means reload the info into the cache
// script bool // run non interactive
// reset bool// means we will lose changes (only relevant for clone, pull)
// script bool = true // run non interactive
// reset bool = true // means we will lose changes (only relevant for clone, pull)
//```
pub fn (mut gs GitStructure) do(args_ ReposActionsArgs) !string {
mut args := args_
console.print_debug('git do ${args.cmd}')
// println(args)
// $dbg;
if args.path.len > 0 && args.url.len > 0 {
panic('bug')
@@ -97,9 +99,7 @@ pub fn (mut gs GitStructure) do(args_ ReposActionsArgs) !string {
provider: args.provider
)!
// println(repos.map(it.name))
if repos.len < 4 || args.cmd in 'pull,push,commit'.split(',') {
if repos.len < 4 || args.cmd in 'pull,push,commit,delete'.split(',') {
args.reload = true
}
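With `script` and `reset` now defaulting to `true`, callers get non-interactive behavior and may lose local changes on pull unless they opt out explicitly. A hedged usage sketch (assumes an initialized `GitStructure` value `gs`; only fields visible in the struct above are used):

```v
// non-interactive by default now (script: true);
// pass reset: false to keep local changes during pull
res := gs.do(cmd: 'pull', reset: false)!
println(res)
```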

View File

@@ -19,7 +19,7 @@ pub fn (mut repo GitRepo) status_update(args StatusUpdateArgs) ! {
}
if args.reset || repo.last_load == 0 {
// console.print_debug('${repo.name} : Cache Get')
// console.print_debug('${repo.name} : Cache get')
repo.cache_get()!
}
@@ -30,8 +30,6 @@ pub fn (mut repo GitRepo) status_update(args StatusUpdateArgs) ! {
// Decide if a full load is needed.
if args.reset || repo.last_load == 0
|| current_time - repo.last_load >= repo.config.remote_check_period {
// console.print_debug("reload ${repo.name}:\n args reset:${args.reset}\n lastload:${repo.last_load}\n currtime-lastload:${current_time- repo.last_load}\n period:${repo.config.remote_check_period}")
// $dbg;
repo.load_internal() or {
// Persist the error state to the cache
console.print_stderr('Failed to load repository ${repo.name} at ${repo.path()}: ${err}')
@@ -53,8 +51,7 @@ fn (mut repo GitRepo) load_internal() ! {
repo.exec('fetch --all') or {
repo.status.error = 'Failed to fetch updates: ${err}'
console.print_stderr('Failed to fetch updates for ${repo.name} at ${repo.path()}: ${err}. \nPlease check git repo source, network connection and repository access.')
return
return error('Failed to fetch updates for ${repo.name} at ${repo.path()}: ${err}. Please check network connection and repository access.')
}
repo.load_branches()!
repo.load_tags()!

View File

@@ -1,8 +1,9 @@
!!hero_code.generate_client
name:'heroprompt'
classname:'Workspace'
singleton:0
classname:'HeroPrompt'
singleton:1
default:1
hasconfig:1
templates:
reset:0

View File

@@ -0,0 +1,126 @@
module heroprompt
import rand
import incubaid.herolib.data.ourtime
// HeroPrompt Methods - Workspace Management
@[params]
pub struct NewWorkspaceParams {
pub mut:
name string @[required] // Workspace name
description string // Optional description
is_active bool = false // Whether this should be the active workspace
}
// new_workspace creates a new workspace in this HeroPrompt instance
pub fn (mut hp HeroPrompt) new_workspace(args NewWorkspaceParams) !&Workspace {
hp.log(.info, 'Creating workspace: ${args.name}')
// Check if workspace already exists
if args.name in hp.workspaces {
hp.log(.error, 'Workspace already exists: ${args.name}')
return error('workspace already exists: ${args.name}')
}
// Determine if this should be the active workspace
// If it's the first workspace, make it active by default
// Or if explicitly requested via args.is_active
is_first_workspace := hp.workspaces.len == 0
should_be_active := args.is_active || is_first_workspace
// Create new workspace
mut ws := &Workspace{
id: rand.uuid_v4()
name: args.name
description: args.description
is_active: should_be_active
directories: map[string]&Directory{}
files: []HeropromptFile{}
created: ourtime.now()
updated: ourtime.now()
parent: &hp // Set parent reference for auto-save
}
// Add to heroprompt instance
hp.workspaces[args.name] = ws
hp.updated = ourtime.now()
// Save to Redis
hp.save()!
hp.log(.info, 'Workspace created: ${args.name}')
return ws
}
// get_workspace retrieves an existing workspace by name
pub fn (hp &HeroPrompt) get_workspace(name string) !&Workspace {
if name !in hp.workspaces {
return error('workspace not found: ${name}')
}
return hp.workspaces[name]
}
// list_workspaces returns all workspaces in this HeroPrompt instance
pub fn (hp &HeroPrompt) list_workspaces() []&Workspace {
mut workspaces := []&Workspace{}
for _, ws in hp.workspaces {
workspaces << ws
}
return workspaces
}
// delete_workspace removes a workspace from this HeroPrompt instance
pub fn (mut hp HeroPrompt) delete_workspace(name string) ! {
if name !in hp.workspaces {
hp.log(.error, 'Workspace not found: ${name}')
return error('workspace not found: ${name}')
}
hp.workspaces.delete(name)
hp.updated = ourtime.now()
hp.save()!
hp.log(.info, 'Workspace deleted: ${name}')
}
// save persists the HeroPrompt instance to Redis
pub fn (mut hp HeroPrompt) save() ! {
hp.updated = ourtime.now()
set(hp)!
}
// get_active_workspace returns the currently active workspace
pub fn (mut hp HeroPrompt) get_active_workspace() !&Workspace {
for name, ws in hp.workspaces {
if ws.is_active {
// Return the actual reference from the map, not a copy
return hp.workspaces[name] or { return error('workspace not found: ${name}') }
}
}
return error('no active workspace found')
}
// set_active_workspace sets the specified workspace as active and deactivates all others
pub fn (mut hp HeroPrompt) set_active_workspace(name string) ! {
// Check if workspace exists
if name !in hp.workspaces {
hp.log(.error, 'Workspace not found: ${name}')
return error('workspace not found: ${name}')
}
// Deactivate all workspaces
for _, mut ws in hp.workspaces {
ws.is_active = false
}
// Activate the specified workspace
mut ws := hp.workspaces[name] or { return error('workspace not found: ${name}') }
ws.is_active = true
hp.updated = ourtime.now()
// Save to Redis
hp.save()!
hp.log(.info, 'Active workspace set to: ${name}')
}
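Taken together, the methods above form a simple workspace lifecycle. A usage sketch under stated assumptions (a configured Redis context is available, since `save()` persists there; the workspace name is illustrative):

```v
mut hp := heroprompt.get(name: 'default', create: true)!
mut ws := hp.new_workspace(name: 'docs', description: 'documentation work')!
println(ws.id) // first workspace becomes active automatically
hp.set_active_workspace('docs')!
active := hp.get_active_workspace()!
assert active.name == 'docs'
hp.delete_workspace('docs')!
```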

View File

@@ -1,66 +0,0 @@
module heroprompt
import incubaid.herolib.core.pathlib
import os
pub struct HeropromptChild {
pub mut:
content string
path pathlib.Path
name string
include_tree bool // when true and this child is a dir, include full subtree in maps/contents
}
// Utility function to get file extension with special handling for common files
pub fn get_file_extension(filename string) string {
// Handle special cases for common files without extensions
special_files := {
'dockerfile': 'dockerfile'
'makefile': 'makefile'
'license': 'license'
'readme': 'readme'
'changelog': 'changelog'
'authors': 'authors'
'contributors': 'contributors'
'copying': 'copying'
'install': 'install'
'news': 'news'
'todo': 'todo'
'version': 'version'
'manifest': 'manifest'
'gemfile': 'gemfile'
'rakefile': 'rakefile'
'procfile': 'procfile'
'vagrantfile': 'vagrantfile'
}
lower_filename := filename.to_lower()
if lower_filename in special_files {
return special_files[lower_filename]
}
if filename.starts_with('.') && !filename.starts_with('..') {
if filename.contains('.') && filename.len > 1 {
parts := filename[1..].split('.')
if parts.len >= 2 {
return parts[parts.len - 1]
} else {
return filename[1..]
}
} else {
return filename[1..]
}
}
parts := filename.split('.')
if parts.len < 2 {
return ''
}
return parts[parts.len - 1]
}
// Read the file content
pub fn (chl HeropromptChild) read() !string {
if chl.path.cat != .file {
return error('cannot read content of a directory')
}
content := os.read_file(chl.path.path)!
return content
}

View File

@@ -0,0 +1,467 @@
module heroprompt
import os
import rand
import incubaid.herolib.core.pathlib
import incubaid.herolib.develop.codewalker
import incubaid.herolib.data.ourtime
// Directory represents a directory added to a workspace
// It contains metadata about the directory and its location
@[heap]
pub struct Directory {
pub mut:
id string = rand.uuid_v4() // Unique identifier for this directory
name string // Display name (can be customized by user)
path string // Absolute path to the directory
description string // Optional description
git_info GitInfo // Git directory information (if applicable)
created ourtime.OurTime // When this directory was added
updated ourtime.OurTime // Last update time
include_tree bool = true // Whether to include full tree in file maps
is_expanded bool // UI state: whether directory is expanded in tree view
is_selected bool // UI state: whether directory checkbox is checked
selected_files map[string]bool // Map of file paths to selection state (normalized paths)
}
// GitInfo contains git-specific metadata for a directory
pub struct GitInfo {
pub mut:
is_git_dir bool // Whether this is a git directory
current_branch string // Current git branch
remote_url string // Remote URL (if any)
last_commit string // Last commit hash
has_changes bool // Whether there are uncommitted changes
}
// Create a new directory from a directory path
@[params]
pub struct NewDirectoryParams {
pub mut:
path string @[required] // Absolute path to directory
name string // Optional custom name (defaults to directory name)
description string // Optional description
}
// Create a new directory instance
pub fn new_directory(args NewDirectoryParams) !Directory {
if args.path.len == 0 {
return error('directory path is required')
}
mut dir_path := pathlib.get(args.path)
if !dir_path.exists() || !dir_path.is_dir() {
return error('path is not an existing directory: ${args.path}')
}
abs_path := dir_path.realpath()
dir_name := dir_path.name()
// Detect git information
git_info := detect_git_info(abs_path)
return Directory{
id: rand.uuid_v4()
name: if args.name.len > 0 { args.name } else { dir_name }
path: abs_path
description: args.description
git_info: git_info
created: ourtime.now()
updated: ourtime.now()
include_tree: true
}
}
// Detect git information for a directory
fn detect_git_info(path string) GitInfo {
// TODO: use the gittools library to get this information;
// keep the simple detection below for now, maybe replace it in a next version
mut info := GitInfo{
is_git_dir: false
}
// Check if .git directory exists
git_dir := os.join_path(path, '.git')
if !os.exists(git_dir) {
return info
}
info.is_git_dir = true
// Try to detect current branch
head_file := os.join_path(git_dir, 'HEAD')
if os.exists(head_file) {
head_content := os.read_file(head_file) or { '' }
if head_content.contains('ref: refs/heads/') {
info.current_branch = head_content.replace('ref: refs/heads/', '').trim_space()
}
}
// Try to detect remote URL
config_file := os.join_path(git_dir, 'config')
if os.exists(config_file) {
config_content := os.read_file(config_file) or { '' }
// Simple parsing - look for url = line
for line in config_content.split_into_lines() {
trimmed := line.trim_space()
if trimmed.starts_with('url = ') {
info.remote_url = trimmed.replace('url = ', '')
break
}
}
}
// Uncommitted-change detection is not implemented yet; a real
// implementation would run `git status --porcelain`
info.has_changes = false // placeholder until a git command is executed
return info
}
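The branch detection above relies on the plain-text format of `.git/HEAD`: on a checked-out branch the file contains a single line such as `ref: refs/heads/main`, while a detached HEAD holds a raw commit hash. A standalone sketch of the same parsing (hypothetical helper, no repository required):

```v
// `head_content` is the raw text of a .git/HEAD file
fn branch_from_head(head_content string) string {
	trimmed := head_content.trim_space()
	if trimmed.starts_with('ref: refs/heads/') {
		return trimmed.all_after('ref: refs/heads/')
	}
	return '' // detached HEAD: the file holds a commit hash instead
}
```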
// Update directory metadata
@[params]
pub struct UpdateDirectoryParams {
pub mut:
name string
description string
}
pub fn (mut dir Directory) update(args UpdateDirectoryParams) {
if args.name.len > 0 {
dir.name = args.name
}
if args.description.len > 0 {
dir.description = args.description
}
dir.updated = ourtime.now()
}
// Refresh git information for this directory
pub fn (mut dir Directory) refresh_git_info() {
dir.git_info = detect_git_info(dir.path)
dir.updated = ourtime.now()
}
// Check if directory path still exists
pub fn (dir &Directory) exists() bool {
return os.exists(dir.path) && os.is_dir(dir.path)
}
// Get directory size (number of files)
pub fn (dir &Directory) file_count() !int {
if !dir.exists() {
return error('directory path no longer exists')
}
// Use codewalker to count files
mut cw := codewalker.new(codewalker.CodeWalkerArgs{})!
mut fm := cw.filemap_get(path: dir.path, content_read: false)!
return fm.content.len
}
// Get display name with git branch if available
pub fn (dir &Directory) display_name() string {
if dir.git_info.is_git_dir && dir.git_info.current_branch.len > 0 {
return '${dir.name} (${dir.git_info.current_branch})'
}
return dir.name
}
// Directory Management Methods
// DirectoryContent holds the scanned files and directories from a directory
pub struct DirectoryContent {
pub mut:
files []HeropromptFile // All files found in the directory
directories []string // All directories found in the directory
file_count int // Total number of files
dir_count int // Total number of directories
}
// get_contents scans the directory and returns all files and directories
// This method respects .gitignore and .heroignore files
// This is a public method that can be used to retrieve directory contents for prompt generation
pub fn (dir &Directory) get_contents() !DirectoryContent {
return dir.scan()
}
// scan scans the entire directory and returns all files and directories
// This method respects .gitignore and .heroignore files
// Note: This is a private method. Use add_dir() with scan parameter or get_contents() instead.
fn (dir &Directory) scan() !DirectoryContent {
if !dir.exists() {
return error('directory path does not exist: ${dir.path}')
}
// Use codewalker to scan the directory with gitignore support
mut cw := codewalker.new(codewalker.CodeWalkerArgs{})!
mut fm := cw.filemap_get(path: dir.path, content_read: true)!
mut files := []HeropromptFile{}
mut directories := map[string]bool{} // Use map to avoid duplicates
// Process each file from the filemap
for file_path, content in fm.content {
// Create HeropromptFile for each file
abs_path := os.join_path(dir.path, file_path)
file := HeropromptFile{
id: rand.uuid_v4()
name: os.base(file_path)
path: abs_path
content: content
created: ourtime.now()
updated: ourtime.now()
}
files << file
// Extract directory path
dir_path := os.dir(file_path)
if dir_path != '.' && dir_path.len > 0 {
// Add all parent directories
mut current_dir := dir_path
for current_dir != '.' && current_dir.len > 0 {
directories[current_dir] = true
current_dir = os.dir(current_dir)
}
}
}
// Convert directories map to array
mut dir_list := []string{}
for directory_path, _ in directories {
dir_list << directory_path
}
return DirectoryContent{
files: files
directories: dir_list
file_count: files.len
dir_count: dir_list.len
}
}
@[params]
pub struct AddFileParams {
pub mut:
path string @[required] // Path to file (relative to directory or absolute)
}
// add_file adds a specific file to the directory
// Returns the created HeropromptFile
pub fn (dir &Directory) add_file(args AddFileParams) !HeropromptFile {
mut file_path := args.path
// If path is relative, make it relative to directory path
if !os.is_abs_path(file_path) {
file_path = os.join_path(dir.path, file_path)
}
// Validate file exists
if !os.exists(file_path) {
return error('file does not exist: ${file_path}')
}
if os.is_dir(file_path) {
return error('path is a directory, not a file: ${file_path}')
}
// Read file content
content := os.read_file(file_path) or { return error('failed to read file: ${file_path}') }
// Create HeropromptFile
file := HeropromptFile{
id: rand.uuid_v4()
name: os.base(file_path)
path: file_path
content: content
created: ourtime.now()
updated: ourtime.now()
}
return file
}
@[params]
pub struct SelectFileParams {
pub mut:
path string @[required] // Path to file (relative to directory or absolute)
}
// select_file marks a file as selected within this directory
// The file path can be relative to the directory or absolute
pub fn (mut dir Directory) select_file(args SelectFileParams) ! {
// Normalize the path
mut file_path := args.path
if !os.is_abs_path(file_path) {
file_path = os.join_path(dir.path, file_path)
}
file_path = os.real_path(file_path)
// Verify file exists
if !os.exists(file_path) {
return error('file does not exist: ${args.path}')
}
// Verify file is within this directory
if !file_path.starts_with(os.real_path(dir.path)) {
return error('file is not within this directory: ${args.path}')
}
// Mark file as selected
dir.selected_files[file_path] = true
dir.updated = ourtime.now()
}
// select_all marks all files in this directory and subdirectories as selected
pub fn (mut dir Directory) select_all() ! {
// Verify directory exists
if !dir.exists() {
return error('directory does not exist: ${dir.path}')
}
// Get all files in directory
content := dir.get_contents()!
// Mark all files as selected
for file in content.files {
normalized_path := os.real_path(file.path)
dir.selected_files[normalized_path] = true
}
dir.updated = ourtime.now()
}
@[params]
pub struct DeselectFileParams {
pub mut:
path string @[required] // Path to file (relative to directory or absolute)
}
// deselect_file marks a file as not selected within this directory
pub fn (mut dir Directory) deselect_file(args DeselectFileParams) ! {
// Normalize the path
mut file_path := args.path
if !os.is_abs_path(file_path) {
file_path = os.join_path(dir.path, file_path)
}
file_path = os.real_path(file_path)
// Verify file exists
if !os.exists(file_path) {
return error('file does not exist: ${args.path}')
}
// Verify file is within this directory
if !file_path.starts_with(os.real_path(dir.path)) {
return error('file is not within this directory: ${args.path}')
}
// Mark file as not selected (remove from map or set to false)
dir.selected_files.delete(file_path)
dir.updated = ourtime.now()
}
// deselect_all marks all files in this directory as not selected
pub fn (mut dir Directory) deselect_all() ! {
// Verify directory exists
if !dir.exists() {
return error('directory does not exist: ${dir.path}')
}
// Clear all selections
dir.selected_files.clear()
dir.updated = ourtime.now()
}
// expand sets the directory as expanded in the UI
pub fn (mut dir Directory) expand() {
dir.is_expanded = true
dir.updated = ourtime.now()
}
// collapse sets the directory as collapsed in the UI
pub fn (mut dir Directory) collapse() {
dir.is_expanded = false
dir.updated = ourtime.now()
}
@[params]
pub struct AddDirParams {
pub mut:
path string @[required] // Path to directory (relative to directory or absolute)
scan bool = true // Whether to automatically scan the directory (default: true)
}
// add_dir adds all files from a specific directory
// Returns DirectoryContent with files from that directory
// If scan=true (default), automatically scans the directory respecting .gitignore
// If scan=false, returns empty DirectoryContent (manual mode)
pub fn (dir &Directory) add_dir(args AddDirParams) !DirectoryContent {
mut dir_path := args.path
// If path is relative, make it relative to directory path
if !os.is_abs_path(dir_path) {
dir_path = os.join_path(dir.path, dir_path)
}
// Validate directory exists
if !os.exists(dir_path) {
return error('directory does not exist: ${dir_path}')
}
if !os.is_dir(dir_path) {
return error('path is not a directory: ${dir_path}')
}
// If scan is false, return empty content (manual mode)
if !args.scan {
return DirectoryContent{
files: []HeropromptFile{}
file_count: 0
dir_count: 0
}
}
// Use codewalker to scan the directory
mut cw := codewalker.new(codewalker.CodeWalkerArgs{})!
mut fm := cw.filemap_get(path: dir_path, content_read: true)!
mut files := []HeropromptFile{}
mut directories := map[string]bool{} // Track directories
// Process each file
for file_path, content in fm.content {
abs_path := os.join_path(dir_path, file_path)
file := HeropromptFile{
id: rand.uuid_v4()
name: os.base(file_path)
path: abs_path
content: content
created: ourtime.now()
updated: ourtime.now()
}
files << file
// Extract directory path
dir_part := os.dir(file_path)
if dir_part != '.' && dir_part.len > 0 {
// Add all parent directories
mut current_dir := dir_part
for current_dir != '.' && current_dir.len > 0 {
directories[current_dir] = true
current_dir = os.dir(current_dir)
}
}
}
// Convert directories map to array
mut dir_list := []string{}
for directory_path, _ in directories {
dir_list << directory_path
}
return DirectoryContent{
files: files
directories: dir_list
file_count: files.len
dir_count: dir_list.len
}
}
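A short sketch tying the directory methods above together (the path is illustrative; `get_contents` delegates to the codewalker-backed `scan`, so `.gitignore` and `.heroignore` are honored):

```v
mut dir := new_directory(path: '/tmp/myproject')!
content := dir.get_contents()! // respects .gitignore via codewalker
dir.select_file(path: 'src/main.v')!
println('selected ${dir.selected_files.len} of ${content.file_count} files')
dir.deselect_all()!
```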

View File

@@ -0,0 +1,288 @@
module heroprompt
import os
// Test directory: scan entire directory
fn test_directory_scan() ! {
// Create temp directory with files
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_scan')
sub_dir := os.join_path(test_dir, 'src')
os.mkdir_all(sub_dir)!
os.write_file(os.join_path(test_dir, 'readme.md'), '# Test')!
os.write_file(os.join_path(test_dir, 'main.v'), 'fn main() {}')!
os.write_file(os.join_path(sub_dir, 'utils.v'), 'pub fn hello() {}')!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Scan directory
content := repo.scan()!
// Should find all files
assert content.file_count >= 3
assert content.files.len >= 3
// Should find directories
assert content.dir_count >= 1
// Verify files have content
mut found_readme := false
for file in content.files {
if file.name == 'readme.md' {
found_readme = true
assert file.content.contains('# Test')
}
}
assert found_readme, 'readme.md not found in scanned files'
}
// Test directory: scan respects gitignore
fn test_directory_scan_gitignore() ! {
// Create temp directory with .gitignore
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_gitignore')
os.mkdir_all(test_dir)!
// Create .gitignore
os.write_file(os.join_path(test_dir, '.gitignore'), 'ignored.txt\n*.log')!
// Create files
os.write_file(os.join_path(test_dir, 'included.txt'), 'included')!
os.write_file(os.join_path(test_dir, 'ignored.txt'), 'ignored')!
os.write_file(os.join_path(test_dir, 'test.log'), 'log')!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Scan directory
content := repo.scan()!
// Should include included.txt
mut found_included := false
mut found_ignored := false
mut found_log := false
for file in content.files {
if file.name == 'included.txt' {
found_included = true
}
if file.name == 'ignored.txt' {
found_ignored = true
}
if file.name == 'test.log' {
found_log = true
}
}
assert found_included, 'included.txt should be found'
// Note: gitignore behavior depends on codewalker implementation
// These assertions might need adjustment based on actual behavior
}
// Test directory: add_file with relative path
fn test_directory_add_file_relative() ! {
// Create temp directory with file
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_add_file_rel')
os.mkdir_all(test_dir)!
os.write_file(os.join_path(test_dir, 'test.txt'), 'test content')!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Add file with relative path
file := repo.add_file(path: 'test.txt')!
assert file.name == 'test.txt'
assert file.content == 'test content'
assert file.path.contains('test.txt')
}
// Test directory: add_file with absolute path
fn test_directory_add_file_absolute() ! {
// Create temp directory with file
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_add_file_abs')
os.mkdir_all(test_dir)!
test_file := os.join_path(test_dir, 'test.txt')
os.write_file(test_file, 'test content')!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Add file with absolute path
file := repo.add_file(path: test_file)!
assert file.name == 'test.txt'
assert file.content == 'test content'
}
// Test directory: add_file non-existent file
fn test_directory_add_file_nonexistent() ! {
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_add_nonexist')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Try to add non-existent file
repo.add_file(path: 'nonexistent.txt') or {
assert err.msg().contains('does not exist')
return
}
assert false, 'Expected error when adding non-existent file'
}
// Test directory: add_dir with relative path
fn test_directory_add_dir_relative() ! {
// Create temp directory with subdirectory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_add_dir_rel')
sub_dir := os.join_path(test_dir, 'src')
os.mkdir_all(sub_dir)!
os.write_file(os.join_path(sub_dir, 'file1.txt'), 'content1')!
os.write_file(os.join_path(sub_dir, 'file2.txt'), 'content2')!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Add directory with relative path
content := repo.add_dir(path: 'src')!
assert content.file_count >= 2
assert content.files.len >= 2
}
// Test directory: add_dir with absolute path
fn test_directory_add_dir_absolute() ! {
// Create temp directory with subdirectory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_add_dir_abs')
sub_dir := os.join_path(test_dir, 'src')
os.mkdir_all(sub_dir)!
os.write_file(os.join_path(sub_dir, 'file1.txt'), 'content1')!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Add directory with absolute path
content := repo.add_dir(path: sub_dir)!
assert content.file_count >= 1
}
// Test directory: add_dir non-existent directory
fn test_directory_add_dir_nonexistent() ! {
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_add_dir_nonexist')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Try to add non-existent directory
repo.add_dir(path: 'nonexistent_dir') or {
assert err.msg().contains('does not exist')
return
}
assert false, 'Expected error when adding non-existent directory'
}
// Test directory: file_count
fn test_directory_file_count() ! {
// Create temp directory with files
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_file_count')
os.mkdir_all(test_dir)!
os.write_file(os.join_path(test_dir, 'file1.txt'), 'content1')!
os.write_file(os.join_path(test_dir, 'file2.txt'), 'content2')!
os.write_file(os.join_path(test_dir, 'file3.txt'), 'content3')!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir)!
// Get file count
count := repo.file_count()!
assert count >= 3
}
// Test directory: display_name with git info
fn test_directory_display_name() ! {
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_display_name')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
}
// Create directory
repo := new_directory(path: test_dir, name: 'Test Repo')!
// Display name should be the custom name
display_name := repo.display_name()
assert display_name == 'Test Repo'
}
// Test directory: exists check
fn test_directory_exists() ! {
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_exists')
os.mkdir_all(test_dir)!
// Create directory
repo := new_directory(path: test_dir)!
// Should exist
assert repo.exists() == true
// Remove directory
os.rmdir_all(test_dir)!
// Should not exist
assert repo.exists() == false
}

View File

@@ -2,11 +2,10 @@ module heroprompt
import incubaid.herolib.core.base
import incubaid.herolib.core.playbook { PlayBook }
import incubaid.herolib.ui.console
import json
__global (
heroprompt_global map[string]&Workspace
heroprompt_global shared map[string]&HeroPrompt
heroprompt_default string
)
@@ -18,38 +17,68 @@ pub mut:
name string = 'default'
fromdb bool // will load from db (Redis), not from memory
create bool // default will not create if not exist
reset bool // will delete and recreate if exists
}
pub fn new(args ArgsGet) !&Workspace {
mut obj := Workspace{
pub fn new(args ArgsGet) !&HeroPrompt {
mut obj := HeroPrompt{
name: args.name
}
set(obj)!
return get(name: args.name)!
}
pub fn get(args ArgsGet) !&Workspace {
pub fn get(args ArgsGet) !&HeroPrompt {
mut context := base.context()!
heroprompt_default = args.name
if args.fromdb || args.name !in heroprompt_global {
mut r := context.redis()!
mut r := context.redis()!
// Handle reset: delete existing instance and create fresh
if args.reset {
// Delete from Redis and memory
r.hdel('context:heroprompt', args.name) or {}
lock heroprompt_global {
heroprompt_global.delete(args.name)
}
// Create new instance
mut obj := HeroPrompt{
name: args.name
}
set(obj)!
return get(name: args.name)! // Recursive call to load the new instance
}
// Check if we need to load from DB
needs_load := rlock heroprompt_global {
args.fromdb || args.name !in heroprompt_global
}
if needs_load {
heroprompt_default = args.name
if r.hexists('context:heroprompt', args.name)! {
// Load existing instance from Redis
data := r.hget('context:heroprompt', args.name)!
if data.len == 0 {
print_backtrace()
return error('Workspace with name: ${args.name} does not exist, probably a bug.')
}
mut obj := json.decode(Workspace, data)!
mut obj := json.decode(HeroPrompt, data)!
set_in_mem(obj)!
} else {
// Instance doesn't exist in Redis
if args.create {
new(args)!
// Create new instance
mut obj := HeroPrompt{
name: args.name
}
set(obj)!
} else {
print_backtrace()
return error("Workspace with name '${args.name}' does not exist")
}
}
return get(name: args.name)! // no longer from db nor create
return get(name: args.name)! // Recursive call to return the instance
}
return heroprompt_global[args.name] or {
print_backtrace()
@@ -58,9 +87,10 @@ pub fn get(args ArgsGet) !&Workspace {
}
// register the config for the future
pub fn set(o Workspace) ! {
pub fn set(o HeroPrompt) ! {
mut o2 := set_in_mem(o)!
heroprompt_default = o2.name
mut context := base.context()!
mut r := context.redis()!
r.hset('context:heroprompt', o2.name, json.encode(o2))!
@@ -77,6 +107,11 @@ pub fn delete(args ArgsGet) ! {
mut context := base.context()!
mut r := context.redis()!
r.hdel('context:heroprompt', args.name)!
// Also remove from memory
lock heroprompt_global {
heroprompt_global.delete(args.name)
}
}
@[params]
@@ -86,15 +121,17 @@ pub mut:
}
// if fromdb is set: load from db (Redis) instead of memory; will also reset what is in memory
pub fn list(args ArgsList) ![]&Workspace {
mut res := []&Workspace{}
pub fn list(args ArgsList) ![]&HeroPrompt {
mut res := []&HeroPrompt{}
mut context := base.context()!
if args.fromdb {
// reset what is in mem
heroprompt_global = map[string]&Workspace{}
lock heroprompt_global {
heroprompt_global = map[string]&HeroPrompt{}
}
heroprompt_default = ''
}
if args.fromdb {
mut r := context.redis()!
mut l := r.hkeys('context:heroprompt')!
@@ -104,18 +141,34 @@ pub fn list(args ArgsList) ![]&Workspace {
return res
} else {
// load from memory
for _, client in heroprompt_global {
res << client
rlock heroprompt_global {
for _, client in heroprompt_global {
res << client
}
}
}
return res
}
// only sets in mem, does not set as config
fn set_in_mem(o Workspace) !Workspace {
fn set_in_mem(o HeroPrompt) !HeroPrompt {
mut o2 := obj_init(o)!
heroprompt_global[o2.name] = &o2
// Restore parent references for all workspaces AFTER storing in global
// This ensures the parent pointer points to the actual instance in memory
lock heroprompt_global {
heroprompt_global[o2.name] = &o2
// Now restore parent references using the stored instance
mut stored := heroprompt_global[o2.name] or {
return error('failed to store heroprompt instance in memory')
}
for _, mut ws in stored.workspaces {
ws.parent = stored
}
}
heroprompt_default = o2.name
return o2
}
@@ -136,5 +189,4 @@ pub fn play(mut plbook PlayBook) ! {
// switch instance to be used for heroprompt
pub fn switch(name string) {
heroprompt_default = name
}

View File

@@ -0,0 +1,118 @@
module heroprompt
import rand
import incubaid.herolib.core.pathlib
import incubaid.herolib.data.ourtime
// HeropromptFile represents a standalone file added to a workspace
// (not part of a directory)
@[heap]
pub struct HeropromptFile {
pub mut:
id string = rand.uuid_v4() // Unique identifier
name string // File name
path string // Absolute path to file
content string // File content (cached)
created ourtime.OurTime // When added to workspace
updated ourtime.OurTime // Last update time
is_selected bool // UI state: whether file checkbox is checked
}
// Create a new file instance
@[params]
pub struct NewFileParams {
pub mut:
path string @[required] // Absolute path to file
}
pub fn new_file(args NewFileParams) !HeropromptFile {
if args.path.len == 0 {
return error('file path is required')
}
mut file_path := pathlib.get(args.path)
if !file_path.exists() || !file_path.is_file() {
return error('path is not an existing file: ${args.path}')
}
abs_path := file_path.realpath()
file_name := file_path.name()
// Read file content
content := file_path.read() or { '' }
return HeropromptFile{
id: rand.uuid_v4()
name: file_name
path: abs_path
content: content
created: ourtime.now()
updated: ourtime.now()
}
}
// Refresh file content from disk
pub fn (mut file HeropromptFile) refresh() ! {
mut file_path := pathlib.get(file.path)
if !file_path.exists() {
return error('file no longer exists: ${file.path}')
}
file.content = file_path.read()!
file.updated = ourtime.now()
}
// Check if file still exists
pub fn (file &HeropromptFile) exists() bool {
mut file_path := pathlib.get(file.path)
return file_path.exists() && file_path.is_file()
}
// Get file extension
pub fn (file &HeropromptFile) extension() string {
return get_file_extension(file.name)
}
// Utility function to get file extension with special handling for common files
pub fn get_file_extension(filename string) string {
// Handle special cases for common files without extensions
special_files := {
'dockerfile': 'dockerfile'
'makefile': 'makefile'
'license': 'license'
'readme': 'readme'
'changelog': 'changelog'
'authors': 'authors'
'contributors': 'contributors'
'copying': 'copying'
'install': 'install'
'news': 'news'
'todo': 'todo'
'version': 'version'
'manifest': 'manifest'
'gemfile': 'gemfile'
'rakefile': 'rakefile'
'procfile': 'procfile'
'vagrantfile': 'vagrantfile'
}
lower_filename := filename.to_lower()
if lower_filename in special_files {
return special_files[lower_filename]
}
if filename.starts_with('.') && !filename.starts_with('..') {
if filename.contains('.') && filename.len > 1 {
parts := filename[1..].split('.')
if parts.len >= 2 {
return parts[parts.len - 1]
} else {
return filename[1..]
}
} else {
return filename[1..]
}
}
parts := filename.split('.')
if parts.len < 2 {
return ''
}
return parts[parts.len - 1]
}

View File

@@ -0,0 +1,150 @@
module heroprompt
import os
// Test file: create new file
fn test_new_file() ! {
// Create temp file
temp_base := os.temp_dir()
test_file := os.join_path(temp_base, 'test_heroprompt_new_file.txt')
os.write_file(test_file, 'test content')!
defer {
os.rm(test_file) or {}
}
// Create HeropromptFile
file := new_file(path: test_file)!
assert file.name == 'test_heroprompt_new_file.txt'
assert file.content == 'test content'
assert file.path == os.real_path(test_file)
assert file.id.len > 0
}
// Test file: create file with non-existent path
fn test_new_file_nonexistent() ! {
// Try to create file with non-existent path
new_file(path: '/nonexistent/path/file.txt') or {
assert err.msg().contains('not an existing file')
return
}
assert false, 'Expected error when creating file with non-existent path'
}
// Test file: refresh content
fn test_file_refresh() ! {
// Create temp file
temp_base := os.temp_dir()
test_file := os.join_path(temp_base, 'test_heroprompt_refresh.txt')
os.write_file(test_file, 'initial content')!
defer {
os.rm(test_file) or {}
}
// Create HeropromptFile
mut file := new_file(path: test_file)!
assert file.content == 'initial content'
// Modify file on disk
os.write_file(test_file, 'updated content')!
// Refresh file
file.refresh()!
assert file.content == 'updated content'
}
// Test file: exists check
fn test_file_exists() ! {
// Create temp file
temp_base := os.temp_dir()
test_file := os.join_path(temp_base, 'test_heroprompt_file_exists.txt')
os.write_file(test_file, 'content')!
// Create HeropromptFile
file := new_file(path: test_file)!
// Should exist
assert file.exists() == true
// Remove file
os.rm(test_file)!
// Should not exist
assert file.exists() == false
}
// Test file: get extension
fn test_file_extension() ! {
// Create temp files with different extensions
temp_base := os.temp_dir()
test_files := {
'test.txt': 'txt'
'test.v': 'v'
'test.py': 'py'
'archive.tar.gz': 'gz'
'.gitignore': 'gitignore'
'Dockerfile': 'dockerfile'
'Makefile': 'makefile'
'README': 'readme'
}
for filename, expected_ext in test_files {
test_file := os.join_path(temp_base, filename)
os.write_file(test_file, 'content')!
file := new_file(path: test_file)!
actual_ext := file.extension()
assert actual_ext == expected_ext, 'Expected ${expected_ext} for ${filename}, got ${actual_ext}'
os.rm(test_file)!
}
}
// Test get_file_extension utility function
fn test_get_file_extension() ! {
// Regular files
assert get_file_extension('test.txt') == 'txt'
assert get_file_extension('main.v') == 'v'
assert get_file_extension('script.py') == 'py'
// Files with multiple dots
assert get_file_extension('archive.tar.gz') == 'gz'
assert get_file_extension('config.test.js') == 'js'
// Dotfiles
assert get_file_extension('.gitignore') == 'gitignore'
assert get_file_extension('.env') == 'env'
// Special files
assert get_file_extension('Dockerfile') == 'dockerfile'
assert get_file_extension('Makefile') == 'makefile'
assert get_file_extension('README') == 'readme'
assert get_file_extension('LICENSE') == 'license'
// Files without extension
assert get_file_extension('noextension') == ''
}
// Test file: generate UUIDs
fn test_file_unique_ids() ! {
// Create temp file
temp_base := os.temp_dir()
test_file := os.join_path(temp_base, 'test_heroprompt_unique_id.txt')
os.write_file(test_file, 'content')!
defer {
os.rm(test_file) or {}
}
// Create HeropromptFile instance
file1 := new_file(path: test_file)!
// ID should be a UUID (36 characters with dashes)
assert file1.id.len == 36
assert file1.id.contains('-')
}

View File

@@ -0,0 +1,48 @@
module heroprompt
import incubaid.herolib.ui.console
import incubaid.herolib.core.logger
// log writes a log message with the specified level
// Outputs to both console and log file (unless run_in_tests is true)
pub fn (mut hp HeroPrompt) log(level LogLevel, text string) {
// Skip logging if running in tests
if hp.run_in_tests {
return
}
// Console output with appropriate colors
match level {
.error {
console.print_stderr('ERROR: ${text}')
}
.warning {
console.print_warn('WARNING: ${text}')
}
.info {
console.print_info(text)
}
.debug {
console.print_debug(text)
}
}
// File logging - use the stored logger instance (no resource leak)
level_str := match level {
.error { 'ERROR' }
.warning { 'WARNING' }
.info { 'INFO' }
.debug { 'DEBUG' }
}
logtype := match level {
.error { logger.LogType.error }
else { logger.LogType.stdout }
}
hp.logger.log(
cat: level_str
log: text
logtype: logtype
) or { console.print_stderr('Failed to write log: ${err}') }
}

View File

@@ -1,50 +1,94 @@
module heroprompt
import time
import incubaid.herolib.core.playbook
import incubaid.herolib.data.ourtime
import incubaid.herolib.data.encoderhero
import incubaid.herolib.core.logger
import rand
pub const version = '0.0.0'
pub const version = '1.0.0'
const singleton = false
const default = true
// Workspace represents a workspace containing multiple directories
// and their selected files for AI prompt generation
// LogLevel represents the severity level of a log message
pub enum LogLevel {
error
warning
info
debug
}
// HeroPrompt is the main factory instance that manages workspaces
@[heap]
pub struct HeroPrompt {
pub mut:
id string = rand.uuid_v4() // Unique identifier
name string = 'default' // Instance name
workspaces map[string]&Workspace // Map of workspace name to workspace
created ourtime.OurTime // Time of creation
updated ourtime.OurTime // Time of last update
log_path string // Path to log file
run_in_tests bool // Flag to suppress logging during tests
logger logger.Logger @[skip; str: skip] // Logger instance (reused, not serialized)
}
// Workspace represents a collection of directories and files for prompt generation
@[heap]
pub struct Workspace {
pub mut:
name string = 'default' // Workspace name
base_path string // Base path of the workspace
children []HeropromptChild // List of directories and files in this workspace
created time.Time // Time of creation
updated time.Time // Time of last update
is_saved bool
id string // Unique identifier
name string // Workspace name (required)
description string // Workspace description (optional)
is_active bool // Whether this is the active workspace
directories map[string]&Directory // Map of directory ID to directory
files []HeropromptFile // Standalone files in this workspace
created ourtime.OurTime // Time of creation
updated ourtime.OurTime // Time of last update
parent ?&HeroPrompt @[skip; str: skip] // Reference to parent HeroPrompt (not serialized)
}
// your checking & initialization code if needed
fn obj_init(mycfg_ Workspace) !Workspace {
return mycfg_
// obj_init validates and initializes the HeroPrompt instance
fn obj_init(mycfg_ HeroPrompt) !HeroPrompt {
mut mycfg := mycfg_
// Initialize workspaces map if nil
if mycfg.workspaces.len == 0 {
mycfg.workspaces = map[string]&Workspace{}
}
// Set ID if not set
if mycfg.id.len == 0 {
mycfg.id = rand.uuid_v4()
}
// Set timestamps if not set
if mycfg.created.unixt == 0 {
mycfg.created = ourtime.now()
}
if mycfg.updated.unixt == 0 {
mycfg.updated = ourtime.now()
}
// Set default log path if not set
if mycfg.log_path.len == 0 {
mycfg.log_path = '/tmp/heroprompt_logs'
}
// Initialize logger instance (create directory if needed)
mycfg.logger = logger.new(path: mycfg.log_path, console_output: false) or {
// If logger creation fails, create a dummy logger to avoid nil reference
// This can happen if path is invalid, but we'll handle it gracefully
logger.new(path: '/tmp/heroprompt_logs', console_output: false) or {
panic('Failed to initialize logger: ${err}')
}
}
// Restore parent references for all workspaces
// This is critical because parent references are not serialized to Redis
for _, mut ws in mycfg.workspaces {
ws.parent = &mycfg
}
return mycfg
}
/////////////NORMALLY NO NEED TO TOUCH
pub fn heroscript_loads(heroscript string) !Workspace {
mut pb := playbook.new(text: heroscript)!
// Accept either define or configure; prefer define if present
mut action_name := 'heroprompt.define'
if !pb.exists_once(filter: action_name) {
action_name = 'heroprompt.configure'
if !pb.exists_once(filter: action_name) {
return error("heroprompt: missing 'heroprompt.define' or 'heroprompt.configure' action")
}
}
mut action := pb.get(filter: action_name)!
mut p := action.params
return Workspace{
name: p.get_default('name', 'default')!
base_path: p.get_default('base_path', '')!
created: time.now()
updated: time.now()
children: []HeropromptChild{}
}
pub fn heroscript_loads(heroscript string) !HeroPrompt {
mut obj := encoderhero.decode[HeroPrompt](heroscript)!
return obj
}

View File

@@ -0,0 +1,330 @@
module heroprompt
import incubaid.herolib.develop.codewalker
import os
// Prompt generation functionality for HeroPrompt workspaces
// HeropromptTmpPrompt is the template struct for prompt generation
struct HeropromptTmpPrompt {
pub mut:
user_instructions string
file_map string
file_contents string
}
@[params]
pub struct GenerateFileMapParams {
pub mut:
selected_files []string // List of file paths to mark as selected (optional, if empty all files are selected)
show_all bool // If true, show all files in directories; if false, show only selected files
}
// generate_file_map generates a hierarchical tree structure of the workspace
// with selected files marked with '*'
pub fn (ws &Workspace) generate_file_map(args GenerateFileMapParams) !string {
mut all_files := []string{}
mut selected_set := map[string]bool{}
// Build set of selected files for quick lookup
for path in args.selected_files {
selected_set[path] = true
}
// Collect all files from directories
for _, dir in ws.directories {
content := dir.get_contents() or { continue }
for file in content.files {
all_files << file.path
}
}
// Add standalone files
for file in ws.files {
all_files << file.path
}
// If no specific files selected, select all
mut files_to_show := if args.selected_files.len > 0 {
args.selected_files.clone()
} else {
all_files.clone()
}
// Find common base path
mut base_path := ''
if files_to_show.len > 0 {
base_path = find_common_base_path(files_to_show)
}
// Generate tree using codewalker
mut tree := ''
if args.show_all {
// Show full directory tree with selected files marked
tree = generate_full_tree_with_selection(files_to_show, all_files, base_path)
} else {
// Show minimal tree with only selected files
tree = codewalker.build_file_tree_selected(files_to_show, base_path)
}
// Add config note
mut output := 'Config: directory-only view; selected files shown.\n\n'
// Add base path if available
if base_path.len > 0 {
output += '${os.base(base_path)}\n'
}
output += tree
return output
}
// find_common_base_path finds the common base directory for a list of file paths
fn find_common_base_path(paths []string) string {
if paths.len == 0 {
return ''
}
if paths.len == 1 {
return os.dir(paths[0])
}
// Split all paths into components
mut path_parts := [][]string{}
for path in paths {
parts := path.split(os.path_separator)
path_parts << parts
}
// Find common prefix
mut common := []string{}
// Find minimum length
mut min_len := path_parts[0].len
for parts in path_parts {
if parts.len < min_len {
min_len = parts.len
}
}
for i in 0 .. min_len - 1 { // -1 to exclude filename
part := path_parts[0][i]
mut all_match := true
for j in 1 .. path_parts.len {
if path_parts[j][i] != part {
all_match = false
break
}
}
if all_match {
common << part
} else {
break
}
}
if common.len == 0 {
return ''
}
return common.join(os.path_separator)
}
// generate_full_tree_with_selection generates a full directory tree with selected files marked
fn generate_full_tree_with_selection(selected_files []string, all_files []string, base_path string) string {
// For now, use the minimal tree approach
// TODO: Implement full tree with selective marking
return codewalker.build_file_tree_selected(selected_files, base_path)
}
@[params]
pub struct GenerateFileContentsParams {
pub mut:
selected_files []string // List of file paths to include (optional, if empty all files are included)
include_path bool = true // Include file path as header
}
// generate_file_contents generates formatted file contents section
pub fn (ws &Workspace) generate_file_contents(args GenerateFileContentsParams) !string {
mut output := ''
mut files_to_include := map[string]HeropromptFile{}
// Collect all files from directories
for _, dir in ws.directories {
content := dir.get_contents() or { continue }
for file in content.files {
// Normalize path for consistent comparison
normalized_path := os.real_path(file.path)
// Create a mutable copy to update selection state
mut file_copy := file
// Check if file is selected in directory's selection map
if normalized_path in dir.selected_files {
file_copy.is_selected = dir.selected_files[normalized_path]
}
files_to_include[normalized_path] = file_copy
}
}
// Add standalone files
for file in ws.files {
// Normalize path for consistent comparison
normalized_path := os.real_path(file.path)
files_to_include[normalized_path] = file
}
// Filter by selected files if specified
mut files_to_output := []HeropromptFile{}
if args.selected_files.len > 0 {
for path in args.selected_files {
// Normalize the selected path for comparison
normalized_path := os.real_path(path)
if normalized_path in files_to_include {
files_to_output << files_to_include[normalized_path]
}
}
} else {
for _, file in files_to_include {
files_to_output << file
}
}
// Sort files by path for consistent output
files_to_output.sort(a.path < b.path)
// Generate content for each file
for file in files_to_output {
if args.include_path {
output += 'File: ${file.path}\n'
}
// Determine language for syntax highlighting
ext := file.extension()
lang := if ext.len > 0 { ext } else { 'text' }
output += '```${lang}\n'
output += file.content
if !file.content.ends_with('\n') {
output += '\n'
}
output += '```\n\n'
}
return output
}
@[params]
pub struct GeneratePromptParams {
pub mut:
instruction string // User's instruction/question
selected_files []string // List of file paths to include (optional, if empty all files are included)
show_all_files bool // If true, show all files in file_map; if false, show only selected
}
// generate_prompt generates a complete AI prompt combining file_map, file_contents, and user instructions
// If selected_files is empty, automatically uses files marked as selected in the workspace
pub fn (ws &Workspace) generate_prompt(args GeneratePromptParams) !string {
// Determine which files to include
mut files_to_include := args.selected_files.clone()
// If no files specified, use selected files from workspace
if files_to_include.len == 0 {
files_to_include = ws.get_selected_files() or { []string{} }
// If still no files, return error with helpful message
if files_to_include.len == 0 {
return error('no files selected for prompt generation. Use select_file() to select files or provide selected_files parameter')
}
}
// Generate file map
file_map := ws.generate_file_map(
selected_files: files_to_include
show_all: args.show_all_files
)!
// Generate file contents
file_contents := ws.generate_file_contents(
selected_files: files_to_include
include_path: true
)!
// Build user instructions
mut user_instructions := args.instruction
if user_instructions.len > 0 && !user_instructions.ends_with('\n') {
user_instructions += '\n'
}
// Use template to generate prompt
prompt := HeropromptTmpPrompt{
user_instructions: user_instructions
file_map: file_map
file_contents: file_contents
}
result := $tmpl('./templates/prompt.template')
return result
}
// generate_prompt_simple is a convenience method that generates a prompt with just an instruction
// and includes all files from the workspace
pub fn (ws &Workspace) generate_prompt_simple(instruction string) !string {
return ws.generate_prompt(
instruction: instruction
selected_files: []
show_all_files: false
)
}
// get_all_file_paths returns all file paths in the workspace
pub fn (ws &Workspace) get_all_file_paths() ![]string {
mut paths := []string{}
// Collect from directories
for _, dir in ws.directories {
content := dir.get_contents() or { continue }
for file in content.files {
paths << file.path
}
}
// Add standalone files
for file in ws.files {
paths << file.path
}
return paths
}
// filter_files_by_extension filters file paths by extension
pub fn (ws &Workspace) filter_files_by_extension(extensions []string) ![]string {
all_paths := ws.get_all_file_paths()!
mut filtered := []string{}
for path in all_paths {
ext := os.file_ext(path).trim_left('.')
if ext in extensions {
filtered << path
}
}
return filtered
}
// filter_files_by_pattern filters file paths by pattern (simple substring match)
pub fn (ws &Workspace) filter_files_by_pattern(pattern string) ![]string {
all_paths := ws.get_all_file_paths()!
mut filtered := []string{}
pattern_lower := pattern.to_lower()
for path in all_paths {
if path.to_lower().contains(pattern_lower) {
filtered << path
}
}
return filtered
}

View File

@@ -0,0 +1,192 @@
module heroprompt
import incubaid.herolib.core.base
// Test HeroPrompt: new_workspace
fn test_heroprompt_new_workspace() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_new_ws_hp') or {}
defer {
delete(name: 'test_new_ws_hp') or {}
}
// Create heroprompt instance
mut hp := get(name: 'test_new_ws_hp', create: true)!
hp.run_in_tests = true
// Create workspace
ws := hp.new_workspace(name: 'test_workspace', description: 'Test workspace')!
assert ws.name == 'test_workspace'
assert ws.description == 'Test workspace'
assert ws.id.len == 36 // UUID length
assert hp.workspaces.len == 1
}
// Test HeroPrompt: get_workspace
fn test_heroprompt_get_workspace() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_get_ws_hp') or {}
defer {
delete(name: 'test_get_ws_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_get_ws_hp', create: true)!
hp.run_in_tests = true
hp.new_workspace(name: 'test_workspace')!
// Get workspace
ws := hp.get_workspace('test_workspace')!
assert ws.name == 'test_workspace'
// Try to get non-existent workspace
hp.get_workspace('nonexistent') or {
assert err.msg().contains('workspace not found')
return
}
assert false, 'Expected error when getting non-existent workspace'
}
// Test HeroPrompt: list_workspaces
fn test_heroprompt_list_workspaces() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_list_ws_hp') or {}
defer {
delete(name: 'test_list_ws_hp') or {}
}
// Create heroprompt instance
mut hp := get(name: 'test_list_ws_hp', create: true)!
hp.run_in_tests = true
// Initially empty
assert hp.list_workspaces().len == 0
// Create workspaces
hp.new_workspace(name: 'workspace1')!
hp.new_workspace(name: 'workspace2')!
hp.new_workspace(name: 'workspace3')!
// List workspaces
workspaces := hp.list_workspaces()
assert workspaces.len == 3
}
// Test HeroPrompt: delete_workspace
fn test_heroprompt_delete_workspace() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_del_ws_hp') or {}
defer {
delete(name: 'test_del_ws_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_del_ws_hp', create: true)!
hp.run_in_tests = true
hp.new_workspace(name: 'test_workspace')!
assert hp.workspaces.len == 1
// Delete workspace
hp.delete_workspace('test_workspace')!
assert hp.workspaces.len == 0
// Try to delete non-existent workspace
hp.delete_workspace('nonexistent') or {
assert err.msg().contains('workspace not found')
return
}
assert false, 'Expected error when deleting non-existent workspace'
}
// Test HeroPrompt: duplicate workspace
fn test_heroprompt_duplicate_workspace() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_dup_ws_hp') or {}
defer {
delete(name: 'test_dup_ws_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_dup_ws_hp', create: true)!
hp.run_in_tests = true
hp.new_workspace(name: 'test_workspace')!
// Try to create duplicate workspace
hp.new_workspace(name: 'test_workspace') or {
assert err.msg().contains('workspace already exists')
return
}
assert false, 'Expected error when creating duplicate workspace'
}
// Test HeroPrompt: auto-save after workspace operations
fn test_heroprompt_auto_save() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_autosave_hp') or {}
defer {
delete(name: 'test_autosave_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_autosave_hp', create: true)!
hp.run_in_tests = true
mut ws := hp.new_workspace(name: 'test_workspace')!
// Get fresh instance from Redis to verify save
hp2 := get(name: 'test_autosave_hp', fromdb: true)!
assert hp2.workspaces.len == 1
assert 'test_workspace' in hp2.workspaces
}
// Test HeroPrompt: logging suppression in tests
fn test_heroprompt_logging_suppression() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_logging_hp') or {}
defer {
delete(name: 'test_logging_hp') or {}
}
// Create heroprompt instance
mut hp := get(name: 'test_logging_hp', create: true)!
// Test with logging enabled (should not crash)
hp.run_in_tests = false
hp.log(.info, 'Test log message')
// Test with logging disabled
hp.run_in_tests = true
hp.log(.info, 'This should be suppressed')
// If we get here without crashing, logging works
assert true
}

View File

@@ -466,7 +466,7 @@ pub fn generate_random_workspace_name() string {
'script',
'ocean',
'phoenix',
'doctree',
'atlas',
'quest',
'shield',
'dragon',

View File

@@ -0,0 +1,302 @@
module heroprompt
import os
import incubaid.herolib.core.base
// Test workspace: add directory
fn test_workspace_add_directory() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_add_repo_hp') or {}
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_add_repo')
os.mkdir_all(test_dir)!
os.write_file(os.join_path(test_dir, 'test.txt'), 'test content')!
defer {
os.rmdir_all(test_dir) or {}
delete(name: 'test_add_repo_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_add_repo_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add directory
repo := ws.add_directory(path: test_dir, name: 'Test Repo')!
assert ws.directories.len == 1
assert repo.name == 'Test Repo'
assert repo.path == os.real_path(test_dir)
assert repo.id.len > 0
}
// Test workspace: add directory without custom name
fn test_workspace_add_directory_auto_name() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_auto_name_hp') or {}
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'my_custom_dir_name')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
delete(name: 'test_auto_name_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_auto_name_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add directory without custom name
repo := ws.add_directory(path: test_dir)!
// Name should be extracted from directory name
assert repo.name == 'my_custom_dir_name'
}
// Test workspace: add duplicate directory
fn test_workspace_add_duplicate_directory() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_dup_repo_hp') or {}
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_dup_repo')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
delete(name: 'test_dup_repo_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_dup_repo_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add directory
ws.add_directory(path: test_dir)!
// Try to add same directory again
ws.add_directory(path: test_dir) or {
assert err.msg().contains('already added')
return
}
assert false, 'Expected error when adding duplicate directory'
}
// Test workspace: remove directory by ID
fn test_workspace_remove_directory_by_id() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_remove_repo_hp') or {}
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_remove_repo')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
delete(name: 'test_remove_repo_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_remove_repo_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add directory
repo := ws.add_directory(path: test_dir)!
assert ws.directories.len == 1
// Remove by ID
ws.remove_directory(id: repo.id)!
assert ws.directories.len == 0
}
// Test workspace: remove directory by path
fn test_workspace_remove_directory_by_path() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_remove_path_hp') or {}
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_remove_path')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
delete(name: 'test_remove_path_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_remove_path_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add directory
ws.add_directory(path: test_dir)!
assert ws.directories.len == 1
// Remove by path
ws.remove_directory(path: test_dir)!
assert ws.directories.len == 0
}
// Test workspace: get directory
fn test_workspace_get_directory() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_get_repo_hp') or {}
// Create temp directory
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_get_repo')
os.mkdir_all(test_dir)!
defer {
os.rmdir_all(test_dir) or {}
delete(name: 'test_get_repo_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_get_repo_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add directory
repo := ws.add_directory(path: test_dir)!
// Get directory
retrieved_repo := ws.get_directory(repo.id)!
assert retrieved_repo.id == repo.id
assert retrieved_repo.path == repo.path
}
// Test workspace: list directories
fn test_workspace_list_directories() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_list_repos_hp') or {}
// Create temp directories
temp_base := os.temp_dir()
test_dir1 := os.join_path(temp_base, 'test_heroprompt_list_1')
test_dir2 := os.join_path(temp_base, 'test_heroprompt_list_2')
os.mkdir_all(test_dir1)!
os.mkdir_all(test_dir2)!
defer {
os.rmdir_all(test_dir1) or {}
os.rmdir_all(test_dir2) or {}
delete(name: 'test_list_repos_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_list_repos_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add directories
ws.add_directory(path: test_dir1)!
ws.add_directory(path: test_dir2)!
// List directories
repos := ws.list_directories()
assert repos.len == 2
}
// Test workspace: add file
fn test_workspace_add_file() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_add_file_hp') or {}
// Create temp file
temp_base := os.temp_dir()
test_file := os.join_path(temp_base, 'test_heroprompt_add_file.txt')
os.write_file(test_file, 'test file content')!
defer {
os.rm(test_file) or {}
delete(name: 'test_add_file_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_add_file_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Add file
file := ws.add_file(path: test_file)!
assert ws.files.len == 1
assert file.content == 'test file content'
assert file.path == os.real_path(test_file)
}
// Test workspace: item count
fn test_workspace_item_count() ! {
mut ctx := base.context()!
mut r := ctx.redis()!
// Clean up
r.hdel('context:heroprompt', 'test_count_hp') or {}
// Create temp directory and file
temp_base := os.temp_dir()
test_dir := os.join_path(temp_base, 'test_heroprompt_count_dir')
test_file := os.join_path(temp_base, 'test_heroprompt_count_file.txt')
os.mkdir_all(test_dir)!
os.write_file(test_file, 'content')!
defer {
os.rmdir_all(test_dir) or {}
os.rm(test_file) or {}
delete(name: 'test_count_hp') or {}
}
// Create heroprompt instance and workspace
mut hp := get(name: 'test_count_hp', create: true)!
hp.run_in_tests = true // Suppress logging during tests
mut ws := hp.new_workspace(name: 'test_ws')!
// Initially empty
assert ws.item_count() == 0
// Add directory
ws.add_directory(path: test_dir)!
assert ws.item_count() == 1
// Add file
ws.add_file(path: test_file)!
assert ws.item_count() == 2
}


@@ -1,28 +1,167 @@
# Heroprompt Module

A hierarchical workspace-based system for organizing code files and directories for AI code analysis. The module follows a clean hierarchical API structure: HeroPrompt → Workspace → Directory.

## Features
- **Hierarchical API**: Clean three-level structure (HeroPrompt → Workspace → Directory)
- **Workspace Management**: Create and manage multiple workspaces within a HeroPrompt instance
- **Directory Support**: Add entire directories with automatic file discovery
- **Flexible Scanning**: Support both automatic scanning (with gitignore) and manual file selection
- **Optional Naming**: Directory names are optional - automatically extracted from directory names
- **GitIgnore Support**: Automatic respect for `.gitignore` and `.heroignore` files during scanning
- **Redis Storage**: Persistent storage using Redis with automatic save on mutations
- **Centralized Logging**: Dual output to console and log file with configurable log levels
- **Concurrency Safe**: Thread-safe global state management with proper locking
## Hierarchical API Structure
```v
import incubaid.herolib.develop.heroprompt
// Example Usage:
// 1. Get or create a heroprompt instance
mut hp := heroprompt.get(name: 'my_heroprompt', create: true)!
// 2. Create workspaces from the heroprompt instance
mut workspace1 := hp.new_workspace(name: 'workspace1', description: 'First workspace')!
mut workspace2 := hp.new_workspace(name: 'workspace2', description: 'Second workspace')!
// 3. Add directories to workspaces
// Name parameter is optional - if not provided, extracts from path (last directory name)
dir1 := workspace1.add_directory(
path: '/path/to/directory'
name: 'optional_custom_name' // Optional: defaults to last part of path
)!
// 4. Directory operations - Mode A: Automatic scanning (default)
// add_dir() automatically scans the directory respecting .gitignore
content := dir1.add_dir(path: 'src')! // Automatically scans by default
println('Scanned ${content.file_count} files, ${content.dir_count} directories')
// 5. Directory operations - Mode B: Manual selective addition
dir2 := workspace1.add_directory(path: '/path/to/dir2')!
file := dir2.add_file(path: 'specific_file.v')! // Add specific file
dir_content := dir2.add_dir(path: 'src', scan: false)! // Manual mode (no auto-scan)
// Note: Workspace operations (add_directory, remove_directory, add_file, remove_file)
// automatically save to Redis. No manual hp.save() call needed!
```
## API Overview
### HeroPrompt Factory Functions
- `get(name, fromdb, create, reset)` - Get or create a HeroPrompt instance (main entry point)
- `name: string` - Instance name (default: 'default')
- `fromdb: bool` - Force reload from Redis (default: false, uses cache)
- `create: bool` - Create if it doesn't exist (default: false, returns an error if not found)
- `reset: bool` - Delete and recreate if exists (default: false)
- `exists(name)` - Check if HeroPrompt instance exists
- `delete(name)` - Delete a HeroPrompt instance
- `list(fromdb)` - List all HeroPrompt instances
**Usage Examples:**
```v
// Get existing instance (error if not found)
hp := heroprompt.get(name: 'my_heroprompt')!
// Get or create instance
hp := heroprompt.get(name: 'my_heroprompt', create: true)!
// Force reload from Redis (bypass cache)
hp := heroprompt.get(name: 'my_heroprompt', fromdb: true)!
// Reset instance (delete and recreate)
hp := heroprompt.get(name: 'my_heroprompt', reset: true)!
```
### HeroPrompt Methods
- `new_workspace(name, description)` - Create a new workspace
- `get_workspace(name)` - Get an existing workspace
- `list_workspaces()` - List all workspaces
- `delete_workspace(name)` - Delete a workspace
- `save()` - Save HeroPrompt instance to Redis
### Workspace Methods
- `add_directory(path, name, description)` - Add a directory (name is optional)
- `remove_directory(id/path/name)` - Remove a directory
- `get_directory(id)` - Get a directory by ID
- `list_directories()` - List all directories
- `add_file(path)` - Add a standalone file
- `remove_file(id/path/name)` - Remove a file
- `get_file(id)` - Get a file by ID
- `list_files()` - List all standalone files
- `item_count()` - Get total number of items (directories + files)
### Directory Methods
- `add_file(path)` - Add a specific file (relative or absolute path)
- `add_dir(path, scan: bool = true)` - Add all files from a specific directory
- `scan: true` (default) - Automatically scans the directory respecting .gitignore
- `scan: false` - Manual mode, returns empty content (for selective file addition)
- `file_count()` - Get number of files in directory
- `display_name()` - Get display name (includes git branch if available)
- `exists()` - Check if directory path still exists
- `refresh_git_info()` - Refresh git metadata
**Note:** The `scan()` method is now private and called automatically by `add_dir()` when `scan=true`.
## Testing
Run all tests:
```bash
v -enable-globals -no-skip-unused -stats test lib/develop/heroprompt
```
Run specific test files:
```bash
v -enable-globals -no-skip-unused test lib/develop/heroprompt/heroprompt_workspace_test.v
v -enable-globals -no-skip-unused test lib/develop/heroprompt/heroprompt_directory_test.v
v -enable-globals -no-skip-unused test lib/develop/heroprompt/heroprompt_file_test.v
```
## Logging Configuration
The module includes centralized logging with dual output (console + file):
```v
mut hp := heroprompt.get(name: 'my_heroprompt', create: true)!
// Configure log path (default: '/tmp/heroprompt_logs')
hp.log_path = '/custom/log/path'
// Suppress logging during tests
hp.run_in_tests = true
```
Log levels: `.error`, `.warning`, `.info`, `.debug`
## Breaking Changes
### v1.0.0
- **Repository → Directory**: The `Repository` struct and all related methods have been renamed to `Directory` to better reflect their purpose
  - `add_repository()` → `add_directory()`
  - `remove_repository()` → `remove_directory()`
  - `get_repository()` → `get_directory()`
  - `list_repositories()` → `list_directories()`
- **Auto-save**: Workspace mutation methods now automatically save to Redis. Manual `hp.save()` calls are no longer required after workspace operations
- **Time Fields**: All time fields now use `ourtime.OurTime` instead of `time.Time`
- **UUID IDs**: All entities now use UUID v4 for unique identifiers instead of custom hash-based IDs
## Examples
See the `examples/develop/heroprompt/` directory for comprehensive examples:
- **01_heroprompt_create_example.vsh** - Creating HeroPrompt instances with workspaces and directories
- **02_heroprompt_get_example.vsh** - Retrieving and working with existing instances
- **README.md** - Detailed guide on running the examples
Run the examples in order (create first, then get) to see the full workflow.


@@ -8,4 +8,4 @@
<file_contents>
@{prompt.file_contents}
</file_contents>


@@ -18,6 +18,7 @@ pub fn build_selected_tree(files []string, base_root string) string {
}
rels << rp
}
rels.sort()
return tree_from_rel_paths(rels, '')
}
@@ -84,6 +85,13 @@ pub:
// build_file_tree_fs builds a file system tree for given root directories
pub fn build_file_tree_fs(roots []string, prefix string) string {
// Create ignore matcher with default patterns
ignore_matcher := gitignore_matcher_new()
return build_file_tree_fs_with_ignore(roots, prefix, &ignore_matcher)
}
// build_file_tree_fs_with_ignore builds a file system tree with ignore pattern filtering
pub fn build_file_tree_fs_with_ignore(roots []string, prefix string, ignore_matcher &IgnoreMatcher) string {
mut out := ''
for i, root in roots {
if !os.is_dir(root) {
@@ -92,41 +100,91 @@ pub fn build_file_tree_fs(roots []string, prefix string) string {
	connector := if i == roots.len - 1 { '└── ' } else { '├── ' }
	out += '${prefix}${connector}${os.base(root)}\n'
	child_prefix := if i == roots.len - 1 { prefix + '    ' } else { prefix + '│   ' }
	out += build_file_tree_fs_recursive_with_ignore(root, child_prefix, '', ignore_matcher)
	}
	return out
}
// build_file_tree_fs_recursive builds the contents of a directory without adding the directory name itself
fn build_file_tree_fs_recursive(root string, prefix string) string {
	// Create ignore matcher with default patterns for backward compatibility
	ignore_matcher := gitignore_matcher_new()
	return build_file_tree_fs_recursive_with_ignore(root, prefix, '', &ignore_matcher)
}
// build_file_tree_fs_recursive_with_ignore builds the contents of a directory with ignore pattern filtering
fn build_file_tree_fs_recursive_with_ignore(root string, prefix string, base_rel_path string, ignore_matcher &IgnoreMatcher) string {
	mut out := ''
	// list children under root
	entries := os.ls(root) or { []string{} }
	// sort: dirs first then files
	mut dirs := []string{}
	mut files := []string{}
	for e in entries {
		fp := os.join_path(root, e)
		// Calculate relative path for ignore checking
		rel_path := if base_rel_path.len > 0 {
			if base_rel_path.ends_with('/') { base_rel_path + e } else { base_rel_path + '/' + e }
		} else {
			e
		}
		// Check if this entry should be ignored
		mut should_ignore := ignore_matcher.is_ignored(rel_path)
		if os.is_dir(fp) && !should_ignore {
			// Also check directory pattern with trailing slash
			should_ignore = ignore_matcher.is_ignored(rel_path + '/')
		}
		if should_ignore {
			continue
		}
		if os.is_dir(fp) {
			dirs << fp
		} else if os.is_file(fp) {
			files << fp
		}
	}
	dirs.sort()
	files.sort()
	// files
	for j, f in files {
		file_connector := if j == files.len - 1 && dirs.len == 0 {
			'└── '
		} else {
			'├── '
		}
		out += '${prefix}${file_connector}${os.base(f)} *\n'
	}
	// subdirectories
	for j, d in dirs {
		sub_connector := if j == dirs.len - 1 { '└── ' } else { '├── ' }
		out += '${prefix}${sub_connector}${os.base(d)}\n'
		sub_prefix := if j == dirs.len - 1 {
			prefix + '    '
		} else {
			prefix + '│   '
		}
		// Calculate new relative path for subdirectory
		dir_name := os.base(d)
		new_rel_path := if base_rel_path.len > 0 {
			if base_rel_path.ends_with('/') {
				base_rel_path + dir_name
			} else {
				base_rel_path + '/' + dir_name
			}
		} else {
			dir_name
		}
		out += build_file_tree_fs_recursive_with_ignore(d, sub_prefix, new_rel_path, ignore_matcher)
	}
	return out
}
@@ -149,3 +207,72 @@ pub fn build_file_tree_selected(files []string, base_root string) string {
rels.sort()
return tree_from_rel_paths(rels, '')
}
// SearchResult represents a search result item
pub struct SearchResult {
pub:
name string // filename
path string // relative path from base
full_path string // absolute path
typ string // 'file' or 'directory'
}
// search_files searches for files and directories matching a query string
// - base_path: the root directory to search from
// - query: the search query (case-insensitive substring match)
// - ignore_matcher: optional ignore matcher to filter results (can be null)
// Returns a list of SearchResult items
pub fn search_files(base_path string, query string, ignore_matcher &IgnoreMatcher) ![]SearchResult {
if query.len == 0 {
return []SearchResult{}
}
query_lower := query.to_lower()
mut results := []SearchResult{}
search_directory_recursive(base_path, base_path, query_lower, ignore_matcher, mut
results)!
return results
}
// search_directory_recursive recursively searches directories for matching files
fn search_directory_recursive(dir_path string, base_path string, query_lower string, ignore_matcher &IgnoreMatcher, mut results []SearchResult) ! {
entries := os.ls(dir_path) or { return }
for entry in entries {
full_path := os.join_path(dir_path, entry)
// Calculate relative path from base_path
mut rel_path := full_path
if full_path.starts_with(base_path) {
rel_path = full_path[base_path.len..]
if rel_path.starts_with('/') {
rel_path = rel_path[1..]
}
}
// Check if this entry should be ignored
if unsafe { ignore_matcher != 0 } {
if ignore_matcher.is_ignored(rel_path) {
continue
}
}
// Check if filename or path matches search query
if entry.to_lower().contains(query_lower) || rel_path.to_lower().contains(query_lower) {
results << SearchResult{
name: entry
path: rel_path
full_path: full_path
typ: if os.is_dir(full_path) { 'directory' } else { 'file' }
}
}
// Recursively search subdirectories
if os.is_dir(full_path) {
search_directory_recursive(full_path, base_path, query_lower, ignore_matcher, mut
results)!
}
}
}


@@ -2,7 +2,6 @@ module db
import x.json2
import json
import incubaid.herolib.data.ourtime
import strconv
pub fn decode_int(data string) !int {
@@ -31,17 +30,6 @@ pub fn decode_u32(data string) !u32 {
return u32(parsed_uint)
}
pub fn u32_ourtime(t u32) ourtime.OurTime {
return ourtime.OurTime{
unixt: i64(t)
}
}
pub fn ourtime_u32(t ourtime.OurTime) u32 {
return u32(t.unixt) // Convert unix time to u32
}
pub fn decode_string(data string) !string {
// Try JSON decode first (for proper JSON strings)
// if result := json2.decode[string](data) {


@@ -498,6 +498,7 @@ pub fn calendar_event_handle(mut f ModelsFactory, rpcid int, servercontext map[s
}
else {
println('Method not found on calendar_event: ${method}')
$dbg;
return new_error(rpcid,
code: 32601
message: 'Method ${method} not found on calendar_event'


@@ -1,219 +0,0 @@
module ledger
import incubaid.herolib.hero.db
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.schemas.jsonrpc { Response, new_error, new_response, new_response_false, new_response_int, new_response_true }
import json
pub struct DBAccount {
pub mut:
db &db.DB @[skip; str: skip]
}
pub fn (self Account) type_name() string {
return 'Account'
}
@[params]
pub struct AccountPolicyArg {
pub mut:
policy_id u32
admins []u32
min_signatures u8
limits []AccountLimit
whitelist_out []u32
whitelist_in []u32
lock_till u32
admin_lock_type LockType
admin_lock_till u32
admin_unlock []u32
admin_unlock_min_signature u8
clawback_accounts []u32
clawback_min_signatures u8
clawback_from u32
clawback_till u32
}
pub struct AccountArg {
pub mut:
name string
description string
owner_id u32
location_id u32
accountpolicies []AccountPolicyArg
assets []AccountAsset
assetid u32
last_activity u32 // timestamp
administrators []u32
}
pub fn (mut self DBAccount) new(args AccountArg) !Account {
mut accountpolicies := []AccountPolicy{}
for policy_arg in args.accountpolicies {
accountpolicies << AccountPolicy{
policy_id: policy_arg.policy_id
admins: policy_arg.admins
min_signatures: policy_arg.min_signatures
limits: policy_arg.limits
whitelist_out: policy_arg.whitelist_out
whitelist_in: policy_arg.whitelist_in
lock_till: policy_arg.lock_till
admin_lock_type: policy_arg.admin_lock_type
admin_lock_till: policy_arg.admin_lock_till
admin_unlock: policy_arg.admin_unlock
admin_unlock_min_signature: policy_arg.admin_unlock_min_signature
clawback_accounts: policy_arg.clawback_accounts
clawback_min_signatures: policy_arg.clawback_min_signatures
clawback_from: policy_arg.clawback_from
clawback_till: policy_arg.clawback_till
}
}
mut o := Account{
owner_id: args.owner_id
location_id: args.location_id
accountpolicies: accountpolicies
assets: args.assets
assetid: args.assetid
last_activity: args.last_activity
administrators: args.administrators
}
o.name = args.name
o.description = args.description
o.updated_at = u32(ourtime.now().unix())
return o
}
pub fn (mut self DBAccount) set(o Account) !Account {
return self.db.set[Account](o)!
}
pub fn (mut self DBAccount) delete(id u32) !bool {
if !self.db.exists[Account](id)! {
return false
}
self.db.delete[Account](id)!
return true
}
pub fn (mut self DBAccount) exist(id u32) !bool {
return self.db.exists[Account](id)!
}
pub fn (mut self DBAccount) get(id u32) !Account {
mut o, data := self.db.get_data[Account](id)!
mut e_decoder := encoder.decoder_new(data)
self.load(mut o, mut e_decoder)!
return o
}
@[params]
pub struct AccountListArg {
pub mut:
filter string
status int = -1
limit int = 20
offset int = 0
}
pub fn (mut self DBAccount) list(args AccountListArg) ![]Account {
mut all_accounts := self.db.list[Account]()!.map(self.get(it)!)
mut filtered_accounts := []Account{}
for account in all_accounts {
// Add filter logic based on Account properties
if args.filter != '' && !account.name.contains(args.filter)
&& !account.description.contains(args.filter) {
continue
}
// We could add more filters based on status if the Account struct has a status field
filtered_accounts << account
}
// Apply pagination
mut start := args.offset
if start >= filtered_accounts.len {
start = 0
}
mut limit := args.limit
if limit > 100 {
limit = 100
}
if start + limit > filtered_accounts.len {
limit = filtered_accounts.len - start
}
if limit <= 0 {
return []Account{}
}
return if filtered_accounts.len > 0 {
filtered_accounts[start..start + limit]
} else {
[]Account{}
}
}
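The pagination at the end of `list` above (reset an out-of-range offset to 0, cap the limit at 100, then trim the limit to the remaining length) can be sketched as a small Python function for illustration; `paginate` is a hypothetical name, not part of the module:

```python
def paginate(items, offset=0, limit=20):
    """Slice items the way DBAccount.list clamps its offset and limit."""
    start = offset if offset < len(items) else 0  # out-of-range offset falls back to 0
    limit = min(limit, 100)                       # hard cap on page size
    if start + limit > len(items):
        limit = len(items) - start                # trim to what is left
    if limit <= 0:
        return []
    return items[start:start + limit]
```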
pub fn (mut self DBAccount) list_all() ![]Account {
return self.db.list[Account]()!.map(self.get(it)!)
}
pub struct UserRef {
pub mut:
id u32
}
pub fn account_handle(mut f ModelsFactory, rpcid int, servercontext map[string]string, userref UserRef, method string, params string) !Response {
match method {
'get' {
id := db.decode_u32(params)!
res := f.account.get(id)!
return new_response(rpcid, json.encode_pretty(res))
}
'set' {
mut args := db.decode_generic[AccountArg](params)!
mut o := f.account.new(args)!
if args.id != 0 {
o.id = args.id
}
o = f.account.set(o)!
return new_response_int(rpcid, int(o.id))
}
'delete' {
id := db.decode_u32(params)!
success := f.account.delete(id)!
if success {
return new_response_true(rpcid)
} else {
return new_response_false(rpcid)
}
}
'exist' {
id := db.decode_u32(params)!
if f.account.exist(id)! {
return new_response_true(rpcid)
} else {
return new_response_false(rpcid)
}
}
'list' {
args := db.decode_generic_or_default[AccountListArg](params, AccountListArg{})!
result := f.account.list(args)!
return new_response(rpcid, json.encode_pretty(result))
}
else {
return new_error(
rpcid: rpcid
code: 32601
message: 'Method ${method} not found on Account'
)
}
}
}


@@ -1,72 +0,0 @@
module ledger
import incubaid.herolib.hero.db
// Account represents an account in the financial system
@[heap]
pub struct Account {
db.Base
pub mut:
owner_id u32 // link to user, 0 is not defined
location_id u32 // link to location, 0 is none
accountpolicies []AccountPolicy
assets []AccountAsset
assetid u32
last_activity u32
administrators []u32
status AccountStatus
}
// AccountStatus represents the status of an account
pub enum AccountStatus {
active
inactive
suspended
archived
}
// AccountPolicy represents a set of rules for an account
pub struct AccountPolicy {
pub mut:
policy_id u32 @[index]
admins []u32 // people who can transfer money out
min_signatures u8 // nr of people who need to sign
limits []AccountLimit
whitelist_out []u32 // where money can go to
whitelist_in []u32 // where money can come from
lock_till u32 // date in epoch till no money can be transfered, only after
admin_lock_type LockType
admin_lock_till u32 // date in epoch when admin can unlock (0 means its free), this is unlock for changing this policy
admin_unlock []u32 // users who can unlock the admin policy
admin_unlock_min_signature u8 // nr of signatures from the adminunlock
clawback_accounts []u32 // account(s) which can clawback
clawback_min_signatures u8
clawback_from u32 // from epoch money can be clawed back, 0 is always
clawback_till u32 // till which date
}
pub enum LockType {
locked_till
locked
free
}
pub struct AccountLimit {
pub mut:
amount u64 // in smallest unit
asset_id u32
period AccountLimitPeriodLimit
}
pub enum AccountLimitPeriodLimit {
daily
weekly
monthly
}
pub struct AccountAsset {
pub mut:
assetid u32
balance u64
metadata map[string]string
}


@@ -1,122 +0,0 @@
module ledger
import incubaid.herolib.data.encoder
pub fn (self AccountPolicy) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u32(self.policy_id)
e.add_list_u32(self.admins)
e.add_u8(self.min_signatures)
e.add_u32(u32(self.limits.len))
for limit in self.limits {
limit.dump(mut e)!
}
e.add_list_u32(self.whitelist_out)
e.add_list_u32(self.whitelist_in)
e.add_u32(self.lock_till)
e.add_u8(u8(self.admin_lock_type))
e.add_u32(self.admin_lock_till)
e.add_list_u32(self.admin_unlock)
e.add_u8(self.admin_unlock_min_signature)
e.add_list_u32(self.clawback_accounts)
e.add_u8(self.clawback_min_signatures)
e.add_u32(self.clawback_from)
e.add_u32(self.clawback_till)
}
fn (mut self AccountPolicy) load(mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported AccountPolicy version: ${version}'
self.policy_id = e.get_u32()!
self.admins = e.get_list_u32()!
self.min_signatures = e.get_u8()!
limits_len := e.get_u32()!
self.limits = []AccountLimit{cap: int(limits_len)}
for _ in 0 .. limits_len {
mut limit := AccountLimit{}
limit.load(mut e)!
self.limits << limit
}
self.whitelist_out = e.get_list_u32()!
self.whitelist_in = e.get_list_u32()!
self.lock_till = e.get_u32()!
self.admin_lock_type = unsafe { LockType(e.get_u8()!) }
self.admin_lock_till = e.get_u32()!
self.admin_unlock = e.get_list_u32()!
self.admin_unlock_min_signature = e.get_u8()!
self.clawback_accounts = e.get_list_u32()!
self.clawback_min_signatures = e.get_u8()!
self.clawback_from = e.get_u32()!
self.clawback_till = e.get_u32()!
}
pub fn (self AccountLimit) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u64(self.amount)
e.add_u32(self.asset_id)
e.add_u8(u8(self.period))
}
fn (mut self AccountLimit) load(mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported AccountLimit version: ${version}'
self.amount = e.get_u64()! // amount is u64, matching add_u64 in dump
self.asset_id = e.get_u32()!
self.period = unsafe { AccountLimitPeriodLimit(e.get_u8()!) }
}
pub fn (self AccountAsset) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u32(self.assetid)
e.add_u64(self.balance)
e.add_map_string(self.metadata)
}
fn (mut self AccountAsset) load(mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported AccountAsset version: ${version}'
self.assetid = e.get_u32()!
self.balance = e.get_u64()! // balance is u64, matching add_u64 in dump
self.metadata = e.get_map_string()!
}
pub fn (self Account) dump(mut e encoder.Encoder) ! {
e.add_u16(1) // version
e.add_u32(self.owner_id)
e.add_u32(self.location_id)
e.add_u32(u32(self.accountpolicies.len))
for policy in self.accountpolicies {
policy.dump(mut e)!
}
e.add_u32(u32(self.assets.len))
for asset in self.assets {
asset.dump(mut e)!
}
e.add_u32(self.assetid)
e.add_u32(self.last_activity)
e.add_list_u32(self.administrators)
}
fn (mut self DBAccount) load(mut o Account, mut e encoder.Decoder) ! {
version := e.get_u16()!
assert version == 1, 'Unsupported Account version: ${version}'
o.owner_id = e.get_u32()!
o.location_id = e.get_u32()!
policies_len := e.get_u32()!
o.accountpolicies = []AccountPolicy{cap: int(policies_len)}
for _ in 0 .. policies_len {
mut policy := AccountPolicy{}
policy.load(mut e)!
o.accountpolicies << policy
}
assets_len := e.get_u32()!
o.assets = []AccountAsset{cap: int(assets_len)}
for _ in 0 .. assets_len {
mut asset := AccountAsset{}
asset.load(mut e)!
o.assets << asset
}
o.assetid = e.get_u32()!
o.last_activity = e.get_u32()!
o.administrators = e.get_list_u32()!
}


@@ -1,61 +0,0 @@
module ledger
import incubaid.herolib.hero.db
import json
pub struct ModelsFactory {
pub mut:
db &db.DB
account &DBAccount
// asset &DBAsset
// dnszone &DBDNSZone
// group &DBGroup
// member &DBMember
// notary &DBNotary
// signature &DBSignature
// transaction &DBTransaction
// user &DBUser
// userkvs &DBUserKVS
// userkvsitem &DBUserKVSItem
}
pub fn new_models_factory(mut database db.DB) !&ModelsFactory {
mut factory := &ModelsFactory{
db: database
account: &DBAccount{
db: database
}
}
// factory.asset = &DBAsset{
// db: database
// }
// factory.dnszone = &DBDNSZone{
// db: database
// }
// factory.group = &DBGroup{
// db: database
// }
// factory.member = &DBMember{
// db: database
// }
// factory.notary = &DBNotary{
// db: database
// }
// factory.signature = &DBSignature{
// db: database
// }
// factory.transaction = &DBTransaction{
// db: database
// }
// factory.user = &DBUser{
// db: database
// }
// factory.userkvs = &DBUserKVS{
// db: database
// }
// factory.userkvsitem = &DBUserKVSItem{
// db: database
// }
return factory
}


@@ -1,77 +0,0 @@
module ledger
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
import json
// Account represents an account in the financial system
@[heap]
pub struct Account {
db.Base
pub mut:
owner_id u32 // link to user, 0 is not defined
location_id u32 // link to location, 0 is none
accountpolicies []AccountPolicy
assets []AccountAsset
assetid u32
last_activity u64
administrators []u32
status AccountStatus
}
// AccountStatus represents the status of an account
pub enum AccountStatus {
active
inactive
suspended
archived
}
// AccountPolicy represents a set of rules for an account
pub struct AccountPolicy {
pub mut:
policy_id u32 @[index]
admins []u32 // people who can transfer money out
min_signatures u8 // nr of people who need to sign
limits []AccountLimit
whitelist_out []u32 // where money can go to
whitelist_in []u32 // where money can come from
lock_till u32 // date in epoch till no money can be transfered, only after
admin_lock_type LockType
admin_lock_till u32 // date in epoch when admin can unlock (0 means its free), this is unlock for changing this policy
admin_unlock []u32 // users who can unlock the admin policy
admin_unlock_min_signature u8 // nr of signatures from the adminunlock
clawback_accounts []u32 // account(s) which can clawback
clawback_min_signatures u8
clawback_from u32 // from epoch money can be clawed back, 0 is always
clawback_till u32 // till which date
}
pub enum LockType {
locked_till
locked
free
}
pub struct AccountLimit {
pub mut:
amount u64 //in smallest unit
asset_id u32
period AccountLimitPeriodLimit
}
pub enum AccountLimitPeriodLimit {
daily
weekly
monthly
}
pub struct AccountAsset {
pub mut:
assetid u32
balance u64
metadata map[string]string
}

Some files were not shown because too many files have changed in this diff.