<file_map> /Users/despiegk/code/github/incubaid/herolib ├── .github │ └── workflows ├── .zed ├── aiprompts │ ├── .openhands │ ├── bizmodel │ ├── documentor │ ├── docusaurus │ ├── herolib_advanced │ ├── herolib_core │ ├── instructions │ ├── instructions_archive │ │ ├── models_from_v │ │ └── processing │ ├── todo │ ├── v_advanced │ ├── v_core │ │ ├── array │ │ ├── benchmark │ │ ├── builtin │ │ ├── crypto │ │ ├── encoding │ │ ├── io │ │ ├── json │ │ ├── json2 │ │ ├── maps │ │ ├── net │ │ ├── orm │ │ ├── regex │ │ ├── string │ │ ├── time │ │ ├── toml │ │ └── veb │ └── v_veb_webserver ├── cli ├── docker │ ├── herolib │ │ └── scripts │ └── postgresql ├── examples │ ├── aiexamples │ ├── biztools │ │ ├── _archive │ │ ├── bizmodel_docusaurus │ │ │ └── archive │ │ │ └── img │ │ └── examples │ │ └── full │ ├── builder │ │ └── remote_executor │ ├── clients │ ├── core │ │ ├── base │ │ ├── db │ │ ├── logger │ │ ├── openapi │ │ │ └── gitea │ │ ├── openrpc │ │ │ └── examples │ │ │ ├── openrpc_client │ │ │ ├── openrpc_docs │ │ │ └── petstore_client │ │ └── pathlib │ │ └── examples │ │ ├── list │ │ ├── md5 │ │ ├── scanner │ │ └── sha256 │ ├── data │ │ ├── location │ │ ├── ourdb_syncer │ │ ├── params │ │ │ ├── args │ │ │ │ └── data │ │ │ └── paramsfilter │ │ └── resp │ ├── develop │ │ ├── codewalker │ │ ├── gittools │ │ ├── heroprompt │ │ ├── ipapi │ │ ├── juggler │ │ │ └── hero │ │ │ └── playbook │ │ ├── luadns │ │ ├── openai │ │ ├── runpod │ │ ├── vastai │ │ └── wireguard │ ├── hero │ │ ├── crypt │ │ ├── db │ │ ├── herofs │ │ ├── heromodels │ │ ├── herorpc │ │ └── heroserver │ ├── installers │ │ ├── db │ │ ├── infra │ │ ├── lang │ │ ├── net │ │ ├── sysadmintools │ │ ├── threefold │ │ └── virt │ ├── installers_remote │ ├── jobs │ ├── lang │ │ └── python │ ├── mcp │ │ ├── http_demo │ │ ├── http_server │ │ ├── inspector │ │ └── simple_http │ ├── osal │ │ ├── coredns │ │ ├── download │ │ ├── ping │ │ ├── process │ │ │ ├── process_bash │ │ │ └── process_python │ │ ├── rsync │ │ ├── 
sandbox │ │ │ └── examples │ │ ├── sshagent │ │ ├── tmux │ │ │ └── heroscripts │ │ ├── ubuntu │ │ └── zinit │ │ ├── rpc │ │ └── simple │ ├── schemas │ │ ├── example │ │ │ └── testdata │ │ ├── openapi │ │ │ └── codegen │ │ └── openrpc │ ├── sshagent │ ├── threefold │ │ ├── grid │ │ │ ├── deploy │ │ │ └── utils │ │ ├── gridproxy │ │ ├── holochain │ │ ├── incatokens │ │ │ └── data │ │ ├── solana │ │ └── tfgrid3deployer │ │ ├── gw_over_wireguard │ │ ├── heroscript │ │ ├── hetzner │ │ ├── open_webui_gw │ │ └── vm_gw_caddy │ ├── tools │ │ └── imagemagick │ │ └── .backup │ ├── ui │ │ ├── console │ │ │ ├── console2 │ │ │ └── flow1 │ │ └── telegram │ ├── vfs │ │ └── vfs_db │ ├── virt │ │ ├── daguserver │ │ ├── docker │ │ │ └── ai_web_ui │ │ ├── heropods │ │ ├── hetzner │ │ ├── lima │ │ ├── podman │ │ └── windows │ ├── web │ │ ├── doctree │ │ │ └── content │ │ └── markdown_renderer │ └── webdav ├── lib │ ├── ai │ │ ├── escalayer │ │ ├── mcp │ │ │ ├── baobab │ │ │ ├── cmd │ │ │ ├── mcpgen │ │ │ │ ├── schemas │ │ │ │ └── templates │ │ │ ├── pugconvert │ │ │ │ ├── cmd │ │ │ │ ├── logic │ │ │ │ │ └── templates │ │ │ │ └── mcp │ │ │ ├── rhai │ │ │ │ ├── cmd │ │ │ │ ├── example │ │ │ │ ├── logic │ │ │ │ │ ├── prompts │ │ │ │ │ └── templates │ │ │ │ └── mcp │ │ │ ├── rust │ │ │ └── vcode │ │ │ ├── cmd │ │ │ ├── logic │ │ │ └── mcp │ │ └── utils │ ├── biz │ │ ├── bizmodel │ │ │ ├── docu │ │ │ ├── exampledata │ │ │ └── templates │ │ ├── investortool │ │ │ └── simulator │ │ │ └── templates │ │ ├── planner │ │ │ ├── examples │ │ │ └── models │ │ └── spreadsheet │ │ └── docu │ ├── builder │ ├── clients │ │ ├── giteaclient │ │ ├── ipapi │ │ ├── jina │ │ │ └── py_specs │ │ ├── livekit │ │ ├── mailclient │ │ ├── meilisearch │ │ ├── mycelium │ │ ├── mycelium_rpc │ │ ├── openai │ │ │ ├── audio │ │ │ ├── embeddings │ │ │ ├── files │ │ │ ├── finetune │ │ │ ├── images │ │ │ └── moderation │ │ ├── postgresql_client │ │ ├── qdrant │ │ ├── rclone │ │ ├── runpod │ │ ├── sendgrid │ │ ├── traefik │ │ 
├── vastai │ │ ├── wireguard │ │ ├── zerodb_client │ │ └── zinit │ ├── conversiontools │ │ ├── docsorter │ │ │ └── pythonscripts │ │ ├── imagemagick │ │ ├── pdftotext │ │ └── text_extractor │ ├── core │ │ ├── base │ │ ├── code │ │ │ └── templates │ │ │ ├── comment │ │ │ ├── function │ │ │ ├── interface │ │ │ └── struct │ │ ├── generator │ │ │ └── generic │ │ │ └── templates │ │ ├── herocmds │ │ ├── httpconnection │ │ ├── logger │ │ ├── openrpc_remove │ │ │ ├── examples │ │ │ └── specs │ │ ├── pathlib │ │ ├── playbook │ │ ├── playcmds │ │ ├── playmacros │ │ ├── redisclient │ │ ├── rootpath │ │ ├── smartid │ │ ├── texttools │ │ │ └── regext │ │ │ └── testdata │ │ └── vexecutor │ ├── crypt │ │ ├── aes_symmetric │ │ ├── crpgp │ │ ├── ed25519 │ │ ├── keychain │ │ ├── keysafe │ │ ├── openssl │ │ ├── pgp │ │ └── secrets │ ├── data │ │ ├── cache │ │ ├── countries │ │ │ └── data │ │ ├── currency │ │ ├── dbfs │ │ ├── dedupestor │ │ │ └── dedupe_ourdb │ │ ├── doctree │ │ │ ├── collection │ │ │ │ ├── data │ │ │ │ ├── template │ │ │ │ └── testdata │ │ │ │ └── export_test │ │ │ │ ├── export_expected │ │ │ │ │ └── src │ │ │ │ │ └── col1 │ │ │ │ │ └── img │ │ │ │ └── mytree │ │ │ │ └── dir1 │ │ │ │ └── dir2 │ │ │ ├── pointer │ │ │ └── testdata │ │ │ ├── actions │ │ │ │ └── functionality │ │ │ ├── export_test │ │ │ │ ├── export_expected │ │ │ │ │ ├── col1 │ │ │ │ │ │ └── img │ │ │ │ │ └── col2 │ │ │ │ └── mytree │ │ │ │ ├── dir1 │ │ │ │ │ └── dir2 │ │ │ │ └── dir3 │ │ │ ├── process_defs_test │ │ │ │ ├── col1 │ │ │ │ └── col2 │ │ │ ├── process_includes_test │ │ │ │ ├── col1 │ │ │ │ └── col2 │ │ │ ├── rpc │ │ │ └── tree_test │ │ │ ├── fruits │ │ │ │ └── berries │ │ │ │ └── img │ │ │ └── vegetables │ │ │ └── cruciferous │ │ ├── encoder │ │ ├── encoderhero │ │ ├── flist │ │ ├── gid │ │ ├── graphdb │ │ ├── ipaddress │ │ ├── location │ │ ├── markdown │ │ │ ├── elements │ │ │ ├── parsers │ │ │ ├── testdata │ │ │ └── tools │ │ ├── markdownparser2 │ │ ├── markdownrenderer │ │ ├── mnemonic │ 
│ ├── models │ │ │ └── hr │ │ ├── ourdb │ │ ├── ourdb_syncer │ │ │ ├── http │ │ │ └── streamer │ │ ├── ourjson │ │ ├── ourtime │ │ ├── paramsparser │ │ ├── radixtree │ │ ├── resp │ │ ├── serializers │ │ ├── tst │ │ ├── verasure │ │ └── vstor │ ├── dav │ │ └── webdav │ │ ├── bin │ │ ├── specs │ │ └── templates │ ├── develop │ │ ├── codewalker │ │ ├── gittools │ │ │ └── tests │ │ ├── heroprompt │ │ │ └── templates │ │ ├── luadns │ │ ├── performance │ │ │ └── cmd │ │ ├── sourcetree │ │ ├── vscode │ │ └── vscode_extensions │ │ └── ourdb │ │ └── templates │ ├── hero │ │ ├── crypt │ │ ├── db │ │ ├── herocluster │ │ │ └── example │ │ ├── herofs │ │ │ └── rpc │ │ ├── heromodels │ │ │ ├── beta │ │ │ └── rpc │ │ └── heroserver │ │ └── templates │ ├── installers │ │ ├── base │ │ │ └── templates │ │ ├── db │ │ │ ├── cometbft │ │ │ │ └── templates │ │ │ ├── meilisearch_installer │ │ │ ├── postgresql │ │ │ │ └── templates │ │ │ ├── qdrant_installer │ │ │ │ └── templates │ │ │ ├── zerodb │ │ │ └── zerofs │ │ ├── develapps │ │ │ ├── chrome │ │ │ └── vscode │ │ ├── infra │ │ │ ├── coredns │ │ │ │ └── templates │ │ │ ├── gitea │ │ │ │ └── templates │ │ │ ├── livekit │ │ │ │ └── templates │ │ │ └── zinit_installer │ │ ├── lang │ │ │ ├── golang │ │ │ ├── herolib │ │ │ ├── nodejs │ │ │ ├── python │ │ │ ├── rust │ │ │ └── vlang │ │ ├── net │ │ │ ├── mycelium_installer │ │ │ ├── wireguard_installer │ │ │ └── yggdrasil │ │ ├── sysadmintools │ │ │ ├── actrunner │ │ │ │ └── templates │ │ │ ├── b2 │ │ │ ├── fungistor │ │ │ ├── garage_s3 │ │ │ │ └── templates │ │ │ ├── grafana │ │ │ ├── prometheus │ │ │ │ └── templates │ │ │ ├── rclone │ │ │ │ └── templates │ │ │ ├── restic │ │ │ └── s3 │ │ ├── threefold │ │ │ ├── griddriver │ │ │ └── tfrobot │ │ ├── ulist │ │ ├── virt │ │ │ ├── cloudhypervisor │ │ │ ├── docker │ │ │ ├── herorunner │ │ │ ├── lima │ │ │ │ └── templates │ │ │ ├── pacman │ │ │ │ └── templates │ │ │ ├── podman │ │ │ ├── qemu │ │ │ └── youki │ │ └── web │ │ ├── bun │ │ ├── 
imagemagick │ │ ├── lighttpd │ │ │ └── templates │ │ ├── tailwind │ │ ├── tailwind4 │ │ ├── traefik │ │ │ └── templates │ │ └── zola │ ├── lang │ │ ├── python │ │ │ └── templates │ │ └── rust │ ├── mcp │ │ ├── baobab │ │ ├── cmd │ │ ├── mcpgen │ │ │ ├── schemas │ │ │ └── templates │ │ ├── pugconvert │ │ │ ├── cmd │ │ │ ├── logic │ │ │ │ └── templates │ │ │ └── mcp │ │ ├── rhai │ │ │ ├── cmd │ │ │ ├── example │ │ │ ├── logic │ │ │ │ ├── prompts │ │ │ │ └── templates │ │ │ └── mcp │ │ ├── transport │ │ └── vcode │ │ ├── cmd │ │ ├── logic │ │ └── mcp │ ├── osal │ │ ├── core │ │ ├── coredns │ │ ├── hostsfile │ │ ├── linux │ │ │ └── templates │ │ ├── netns │ │ ├── notifier │ │ ├── osinstaller │ │ ├── rsync │ │ │ └── templates │ │ ├── screen │ │ ├── sshagent │ │ ├── startupmanager │ │ ├── systemd │ │ │ └── templates │ │ ├── tmux │ │ │ └── bin │ │ ├── traefik │ │ │ └── specs │ │ ├── tun │ │ ├── ubuntu │ │ └── ufw │ ├── schemas │ │ ├── jsonrpc │ │ │ ├── reflection │ │ │ └── testdata │ │ │ ├── testmodule │ │ │ └── testserver │ │ ├── jsonschema │ │ │ ├── codegen │ │ │ │ └── templates │ │ │ └── testdata │ │ ├── openapi │ │ │ ├── codegen │ │ │ ├── templates │ │ │ └── testdata │ │ └── openrpc │ │ ├── _archive │ │ │ ├── codegen │ │ │ │ ├── templates │ │ │ │ └── testdata │ │ │ ├── server │ │ │ └── testdata │ │ │ └── petstore_client │ │ └── testdata │ ├── security │ │ ├── authentication │ │ │ └── templates │ │ └── jwt │ ├── threefold │ │ ├── grid3 │ │ │ ├── deploy_tosort │ │ │ ├── deployer │ │ │ ├── deployer2_sort │ │ │ ├── griddriver │ │ │ ├── gridproxy │ │ │ │ └── model │ │ │ ├── models │ │ │ ├── rmb │ │ │ ├── tfrobot │ │ │ │ └── templates │ │ │ ├── tokens │ │ │ └── zerohub │ │ ├── grid4 │ │ │ ├── datamodel │ │ │ ├── datamodelsimulator │ │ │ ├── farmingsimulator │ │ │ │ └── templates │ │ │ └── gridsimulator │ │ │ └── manual │ │ ├── incatokens │ │ │ └── templates │ │ └── models │ │ ├── business │ │ ├── core │ │ ├── finance │ │ ├── flow │ │ ├── identity │ │ ├── legal │ │ ├── 
library │ │ ├── location │ │ └── payment │ ├── ui │ │ ├── console │ │ ├── generic │ │ ├── logger │ │ ├── telegram │ │ │ └── client │ │ ├── template │ │ └── uimodel │ ├── vfs │ │ ├── vfs_calendar │ │ ├── vfs_contacts │ │ ├── vfs_db │ │ ├── vfs_local │ │ ├── vfs_mail │ │ └── vfs_nested │ ├── virt │ │ ├── cloudhypervisor │ │ ├── crun │ │ ├── docker │ │ ├── heropods │ │ ├── herorun │ │ ├── herorun2 │ │ ├── hetznermanager │ │ ├── lima │ │ │ ├── raw │ │ │ └── templates │ │ ├── podman │ │ └── qemu │ │ └── templates │ └── web │ ├── doctreeclient │ ├── docusaurus │ │ └── example │ ├── echarts │ ├── site │ │ └── example │ └── ui │ ├── static │ │ ├── css │ │ └── js │ └── templates │ └── admin ├── libarchive │ ├── baobab │ │ ├── actor │ │ ├── generator │ │ │ ├── _archive │ │ │ ├── templates │ │ │ └── testdata │ │ ├── osis │ │ ├── specification │ │ └── stage │ │ └── interfaces │ ├── buildah │ ├── daguserver │ │ └── templates │ ├── dify │ │ └── templates │ ├── examples │ │ └── baobab │ │ ├── generator │ │ │ ├── basic │ │ │ ├── geomind_poc │ │ │ └── openapi_e2e │ │ └── specification │ ├── installers │ │ └── web │ │ └── caddy2 │ │ └── templates │ ├── rhai │ │ ├── prompts │ │ ├── templates │ │ └── testdata │ ├── starlight │ │ └── templates │ └── zinit │ └── zinit ├── manual │ ├── best_practices │ │ ├── osal │ │ └── scripts │ ├── core │ │ └── concepts │ ├── documentation │ └── playcmds ├── research │ ├── globals │ └── openrpc ├── tests │ └── data └── vscodeplugin └── heroscrypt-syntax └── syntaxes
</file_map>
<file_contents> File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/db/helpers_tags.v
module db
import crypto.md5
pub fn (mut self DB) tags_get(tags []string) !u32 {
	if tags.len == 0 {
		return 0
	}
	mut tags_fixed := tags.map(it.to_lower_ascii().trim_space()).filter(it != '')
	tags_fixed.sort_ignore_case()
	hash := md5.hexhash(tags_fixed.join(','))
	tags_found := self.redis.hget('db:tags', hash)!
	if tags_found != '' {
		return tags_found.u32()
	}
	println('tags_get: new tags: ${tags_fixed.join(',')}')
	id := self.new_id()!
	self.redis.hset('db:tags', hash, id.str())!
	self.redis.hset('db:tags', id.str(), tags_fixed.join(','))!
	return id
}
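The dedup scheme above (normalize, sort, md5, map hash to id) is language-neutral. As an illustration only, this Python sketch mirrors it with a plain dict standing in for the `db:tags` Redis hash; note it also deduplicates repeated tags, which the V version leaves to the caller:

```python
import hashlib

store: dict[str, str] = {}  # stands in for the Redis hash 'db:tags'
next_id = 0                 # stands in for the 'db:id' counter

def tags_get(tags: list[str]) -> int:
    """Return the stable id for a set of tags, creating it on first use."""
    global next_id
    # normalize: trim, lowercase, drop empties, dedupe, sort
    fixed = sorted({t.strip().lower() for t in tags if t.strip()})
    if not fixed:
        return 0
    key = hashlib.md5(','.join(fixed).encode()).hexdigest()
    if key not in store:
        next_id += 1
        store[key] = str(next_id)           # hash -> id
        store[str(next_id)] = ','.join(fixed)  # id -> canonical tag list
    return int(store[key])

# Same tags in any order or casing resolve to the same id:
a = tags_get(['Infra', ' db '])
b = tags_get(['DB', 'infra'])
assert a == b == 1
```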
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/db/helpers_comments.v
module db
import crypto.md5
@[params]
pub struct CommentArg {
pub mut:
comment string
parent u32
author u32
}
pub fn (mut self DB) comments_get(args []CommentArg) ![]u32 {
	// error propagation is not supported inside a map closure, so loop explicitly
	mut ids := []u32{}
	for arg in args {
		ids << self.comment_get(arg.comment)!
	}
	return ids
}
pub fn (mut self DB) comment_get(comment string) !u32 {
	comment_fixed := comment.to_lower_ascii().trim_space()
	if comment_fixed.len == 0 {
		return 0
	}
	hash := md5.hexhash(comment_fixed)
	comment_found := self.redis.hget('db:comments', hash)!
	if comment_found != '' {
		return comment_found.u32()
	}
	id := self.new_id()!
	self.redis.hset('db:comments', hash, id.str())!
	self.redis.hset('db:comments', id.str(), comment_fixed)!
	return id
}
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/db/factory.v
module db
import incubaid.herolib.core.redisclient
// import incubaid.herolib.data.encoder
pub struct DB {
pub mut:
redis &redisclient.Redis @[skip; str: skip]
}
pub fn new() !DB {
mut redisconnection := redisclient.core_get()!
return DB{
redis: redisconnection
}
}
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/db/core_models.v
module db
// Base provides the common fields shared by all HeroDB models
@[heap]
pub struct Base {
pub mut:
id u32
name string
description string
created_at i64
updated_at i64
securitypolicy u32
tags u32 // set/get always as []string; the list is sorted and md5ed, which gives the unique id of the tag set
comments []u32
}
@[heap]
pub struct SecurityPolicy {
pub mut:
id u32
read []u32 // links to users & groups
write []u32 // links to users & groups
delete []u32 // links to users & groups
public bool
md5 string // md5 over the sorted read/write/delete ids plus the public flag; maps any read/write/delete/public config to a unique hash
}
@[heap]
pub struct Tags {
pub mut:
id u32
names []string // unique per id
md5 string // of sorted names, to make easy to find unique id, each name lowercased and made ascii
}
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/db/core_methods.v
module db
import incubaid.herolib.data.ourtime
import incubaid.herolib.data.encoder
pub fn (mut self DB) set[T](obj_ T) !u32 {
// Get the next ID
mut obj := obj_
if obj.id == 0 {
obj.id = self.new_id()!
}
mut t := ourtime.now().unix()
if obj.created_at == 0 {
obj.created_at = t
}
obj.updated_at = t
// encode the Base fields in a fixed order (must match get_data below)
mut e := encoder.new()
e.add_u8(1)
e.add_u32(obj.id)
e.add_string(obj.name)
e.add_string(obj.description)
e.add_i64(obj.created_at)
e.add_i64(obj.updated_at)
e.add_u32(obj.securitypolicy)
e.add_u32(obj.tags)
e.add_u16(u16(obj.comments.len))
for comment in obj.comments {
e.add_u32(comment)
}
// println('set: before dump, e.data.len: ${e.data.len}')
obj.dump(mut e)!
// println('set: after dump, e.data.len: ${e.data.len}')
self.redis.hset(self.db_name[T](), obj.id.str(), e.data.bytestr())!
return obj.id
}
// returns the object with its Base fields decoded, plus the remaining encoded bytes so the caller can decode the type-specific fields
pub fn (mut self DB) get_data[T](id u32) !(T, []u8) {
data := self.redis.hget(self.db_name[T](), id.str())!
if data.len == 0 {
return error('herodb:${self.db_name[T]()} not found for ${id}')
}
// println('get_data: data.len: ${data.len}')
mut e := encoder.decoder_new(data.bytes())
version := e.get_u8()!
if version != 1 {
	return error('unsupported encoding version ${version} in base load')
}
mut base := T{}
base.id = e.get_u32()!
base.name = e.get_string()!
base.description = e.get_string()!
base.created_at = e.get_i64()!
base.updated_at = e.get_i64()!
base.securitypolicy = e.get_u32()!
base.tags = e.get_u32()!
for _ in 0 .. e.get_u16()! {
base.comments << e.get_u32()!
}
return base, e.data
}
pub fn (mut self DB) exists[T](id u32) !bool {
return self.redis.hexists(self.db_name[T](), id.str())!
}
pub fn (mut self DB) delete[T](id u32) ! {
self.redis.hdel(self.db_name[T](), id.str())!
}
pub fn (mut self DB) list[T]() ![]u32 {
ids := self.redis.hkeys(self.db_name[T]())!
return ids.map(it.u32())
}
// make it easy to create an object of type T with its Base initialized
pub fn (mut self DB) new_from_base[T](args BaseArgs) !T {
	return T{
		Base: new_base(args)!
	}
}
fn (mut self DB) db_name[T]() string {
// get the name of the type T
mut name := T.name.to_lower_ascii().split('.').last()
// println("db_name rediskey: '${name}'")
return 'db:${name}'
}
pub fn (mut self DB) new_id() !u32 {
return u32(self.redis.incr('db:id')!)
}
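The wire format used by `set`/`get_data` above is: a version byte, the Base fields in a fixed order, then the model's type-specific payload appended by `dump`. A minimal sketch of that versioned framing, assuming a hypothetical u16 length prefix for strings (not necessarily what the `encoder` module actually emits):

```python
import struct

def encode(version: int, obj_id: int, name: str) -> bytes:
    """Versioned framing: u8 version, u32 id, u16-length-prefixed string."""
    name_b = name.encode()
    return struct.pack('<BIH', version, obj_id, len(name_b)) + name_b

def decode(data: bytes) -> tuple[int, str]:
    """Check the version byte first, then decode the fields in the same order."""
    version, obj_id, n = struct.unpack_from('<BIH', data, 0)
    if version != 1:
        raise ValueError(f'unsupported version {version}')
    off = struct.calcsize('<BIH')
    name = data[off:off + n].decode()
    return obj_id, name

oid, name = decode(encode(1, 42, 'main'))
assert (oid, name) == (42, 'main')
```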
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/db/ai_instructions_hero_models.md
# HeroDB Model Creation Instructions for AI
## Overview
This document provides clear instructions for AI agents to create new HeroDB models similar to `comment.v`. These models are used to store structured data in Redis using the HeroDB system.
## Key Concepts
- Each model represents a data type stored in Redis hash sets
- Models must implement serialization/deserialization using the `encoder` module
- Models inherit from the `Base` struct which provides common fields
- The database uses a factory pattern for model access
## File Structure
Create a new file in `lib/hero/heromodels/` with the model name (e.g., `calendar.v`).
## Required Components
### 1. Model Struct Definition
Define your model struct with the following pattern:
```v
@[heap]
pub struct Calendar {
	db.Base // Inherit from Base struct
pub mut:
	// Add your specific fields here
	title      string
	start_time i64
	end_time   i64
	location   string
	attendees  []string
}
```
### 2. Type Name Method

Implement a method to return the model's type name:

```v
pub fn (self Calendar) type_name() string {
	return 'calendar'
}
```
### 3. Serialization (dump) Method

Implement the `dump` method to serialize your struct's fields using the encoder:

```v
pub fn (self Calendar) dump(mut e encoder.Encoder) ! {
	e.add_string(self.title)
	e.add_i64(self.start_time)
	e.add_i64(self.end_time)
	e.add_string(self.location)
	e.add_list_string(self.attendees)
}
```
### 4. Deserialization (load) Method

Implement the `load` method to deserialize your struct's fields:

```v
fn (mut self DBCalendar) load(mut o Calendar, mut e encoder.Decoder) ! {
	o.title = e.get_string()!
	o.start_time = e.get_i64()!
	o.end_time = e.get_i64()!
	o.location = e.get_string()!
	o.attendees = e.get_list_string()!
}
```
### 5. Model Arguments Struct

Define a struct for creating new instances of your model:

```v
@[params]
pub struct CalendarArg {
pub mut:
	title      string @[required]
	start_time i64
	end_time   i64
	location   string
	attendees  []string
}
```
### 6. Database Wrapper Struct

Create a database wrapper struct for your model:

```v
pub struct DBCalendar {
pub mut:
	db &db.DB @[skip; str: skip]
}
```
### 7. Factory Integration

Add your model to the `ModelsFactory` struct in `factory.v`:

```v
pub struct ModelsFactory {
pub mut:
	calendar DBCalendar
	// ... other models
}
```

And initialize it in the `new()` function:

```v
pub fn new() !ModelsFactory {
	mut mydb := db.new()!
	return ModelsFactory{
		calendar: DBCalendar{
			db: &mydb
		}
		// ... initialize other models
	}
}
```
## Encoder Methods Reference

Use these methods for serialization/deserialization:

### Encoder (Serialization)

```v
e.add_bool(val bool)
e.add_u8(val u8)
e.add_u16(val u16)
e.add_u32(val u32)
e.add_u64(val u64)
e.add_i8(val i8)
e.add_i16(val i16)
e.add_i32(val i32)
e.add_i64(val i64)
e.add_f32(val f32)
e.add_f64(val f64)
e.add_string(val string)
e.add_list_bool(val []bool)
e.add_list_u8(val []u8)
e.add_list_u16(val []u16)
e.add_list_u32(val []u32)
e.add_list_u64(val []u64)
e.add_list_i8(val []i8)
e.add_list_i16(val []i16)
e.add_list_i32(val []i32)
e.add_list_i64(val []i64)
e.add_list_f32(val []f32)
e.add_list_f64(val []f64)
e.add_list_string(val []string)
```

### Decoder (Deserialization)

```v
e.get_bool()!
e.get_u8()!
e.get_u16()!
e.get_u32()!
e.get_u64()!
e.get_i8()!
e.get_i16()!
e.get_i32()!
e.get_i64()!
e.get_f32()!
e.get_f64()!
e.get_string()!
e.get_list_bool()!
e.get_list_u8()!
e.get_list_u16()!
e.get_list_u32()!
e.get_list_u64()!
e.get_list_i8()!
e.get_list_i16()!
e.get_list_i32()!
e.get_list_i64()!
e.get_list_f32()!
e.get_list_f64()!
e.get_list_string()!
```
## CRUD Methods Implementation

### Create New Instance

```v
pub fn (mut self DBCalendar) new(args CalendarArg) !Calendar {
	mut o := Calendar{
		title: args.title
		start_time: args.start_time
		end_time: args.end_time
		location: args.location
		attendees: args.attendees
		updated_at: ourtime.now().unix()
	}
	return o
}
```

### Save to Database

```v
pub fn (mut self DBCalendar) set(o Calendar) !u32 {
	return self.db.set[Calendar](o)!
}
```

### Retrieve from Database

```v
pub fn (mut self DBCalendar) get(id u32) !Calendar {
	mut o, data := self.db.get_data[Calendar](id)!
	mut e_decoder := encoder.decoder_new(data)
	self.load(mut o, mut e_decoder)!
	return o
}
```

### Delete from Database

```v
pub fn (mut self DBCalendar) delete(id u32) ! {
	self.db.delete[Calendar](id)!
}
```

### Check Existence

```v
pub fn (mut self DBCalendar) exist(id u32) !bool {
	return self.db.exists[Calendar](id)!
}
```

### List All Objects

```v
pub fn (mut self DBCalendar) list() ![]Calendar {
	// error propagation is not supported inside a map closure, so loop explicitly
	mut objs := []Calendar{}
	for id in self.db.list[Calendar]()! {
		objs << self.get(id)!
	}
	return objs
}
```
## Example Usage Script

Create a `.vsh` script in `examples/hero/heromodels/` to demonstrate usage:

```v
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run

import incubaid.herolib.hero.heromodels

mut mydb := heromodels.new()!

// Create a new object
mut o := mydb.calendar.new(
	title: 'Meeting'
	start_time: 1672531200
	end_time: 1672534800
	location: 'Conference Room'
	attendees: ['john@example.com', 'jane@example.com']
)!

// Save to database
oid := mydb.calendar.set(o)!
println('Created object with ID: ${oid}')

// Retrieve from database
mut o2 := mydb.calendar.get(oid)!
println('Retrieved object: ${o2}')

// List all objects
mut objects := mydb.calendar.list()!
println('All objects: ${objects}')
```
## Best Practices

- Always inherit from the `db.Base` struct
- Implement all required methods (`type_name`, `dump`, `load`)
- Use the encoder methods for consistent serialization
- Handle errors appropriately with `!` or `or` blocks
- Keep field ordering consistent between the `dump` and `load` methods
- Use snake_case for field names
- Add the `@[required]` attribute to mandatory fields in argument structs
- Initialize timestamps using `ourtime.now().unix()`
## Implementation Steps Summary

1. Create the model struct inheriting from `db.Base`
2. Implement the `type_name()` method
3. Implement the `dump()` method using the encoder
4. Implement the `load()` method using the decoder
5. Create the argument struct with the `@[params]` attribute
6. Create the database wrapper struct
7. Add the model to `ModelsFactory` in `factory.v`
8. Implement the CRUD methods
9. Create an example usage script
10. Test the implementation with the example script
File: /Users/despiegk/code/github/incubaid/herolib/examples/hero/herofs/herofs_advanced.vsh
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.hero.herofs
// Advanced example of using HeroFS - the Hero Filesystem
// Demonstrates more complex operations including:
// - File operations (move, rename, metadata)
// - Symlinks
// - Binary data handling
// - Directory hierarchies
// - Searching and filtering
fn main() {
// Initialize the HeroFS factory
mut fs_factory := herofs.new()!
println('HeroFS factory initialized')
// Create a new filesystem
mut my_fs := fs_factory.fs.new(
name: 'project_workspace'
description: 'Project development workspace'
quota_bytes: 5 * 1024 * 1024 * 1024 // 5GB quota
)!
// Save the filesystem to get an ID
fs_id := fs_factory.fs.set(my_fs)!
println('Created filesystem: ${my_fs.name} with ID: ${fs_id}')
// Create root directory
mut root_dir := fs_factory.fs_dir.new(
name: 'root'
fs_id: fs_id
parent_id: 0 // Root has no parent
description: 'Root directory'
)!
// Save the root directory
root_dir_id := fs_factory.fs_dir.set(root_dir)!
println('Created root directory with ID: ${root_dir_id}')
// Update the filesystem with the root directory ID
my_fs.root_dir_id = root_dir_id
fs_factory.fs.set(my_fs)!
// Create a directory hierarchy
println('\nCreating directory hierarchy...')
// Main project directories
mut src_dir := fs_factory.fs_dir.new(
name: 'src'
fs_id: fs_id
parent_id: root_dir_id
description: 'Source code'
)!
src_dir_id := fs_factory.fs_dir.set(src_dir)!
mut docs_dir := fs_factory.fs_dir.new(
name: 'docs'
fs_id: fs_id
parent_id: root_dir_id
description: 'Documentation'
)!
docs_dir_id := fs_factory.fs_dir.set(docs_dir)!
mut assets_dir := fs_factory.fs_dir.new(
name: 'assets'
fs_id: fs_id
parent_id: root_dir_id
description: 'Project assets'
)!
assets_dir_id := fs_factory.fs_dir.set(assets_dir)!
// Subdirectories
mut images_dir := fs_factory.fs_dir.new(
name: 'images'
fs_id: fs_id
parent_id: assets_dir_id
description: 'Image assets'
)!
images_dir_id := fs_factory.fs_dir.set(images_dir)!
mut api_docs_dir := fs_factory.fs_dir.new(
name: 'api'
fs_id: fs_id
parent_id: docs_dir_id
description: 'API documentation'
)!
api_docs_dir_id := fs_factory.fs_dir.set(api_docs_dir)!
println('Directory hierarchy created successfully')
// Create some files with different content types
println('\nCreating various files...')
// Text file for source code
code_content := 'fn main() {\n println("Hello, HeroFS!")\n}\n'.bytes()
mut code_blob := fs_factory.fs_blob.new(
data: code_content
mime_type: 'text/plain'
name: 'main.v blob'
)!
code_blob_id := fs_factory.fs_blob.set(code_blob)!
mut code_file := fs_factory.fs_file.new(
name: 'main.v'
fs_id: fs_id
directories: [src_dir_id]
blobs: [code_blob_id]
mime_type: 'text/plain'
metadata: {
'language': 'vlang'
'version': '0.3.3'
}
)!
code_file_id := fs_factory.fs_file.set(code_file)!
// Markdown documentation file
docs_content := '# API Documentation\n\n## Endpoints\n\n- GET /api/v1/users\n- POST /api/v1/users\n'.bytes()
mut docs_blob := fs_factory.fs_blob.new(
data: docs_content
mime_type: 'text/markdown'
name: 'api.md blob'
)!
docs_blob_id := fs_factory.fs_blob.set(docs_blob)!
mut docs_file := fs_factory.fs_file.new(
name: 'api.md'
fs_id: fs_id
directories: [api_docs_dir_id]
blobs: [docs_blob_id]
mime_type: 'text/markdown'
)!
docs_file_id := fs_factory.fs_file.set(docs_file)!
// Create a binary file (sample image)
// For this example, we'll just create random bytes
mut image_data := []u8{len: 1024, init: u8(index % 256)}
mut image_blob := fs_factory.fs_blob.new(
data: image_data
mime_type: 'image/png'
name: 'logo.png blob'
)!
image_blob_id := fs_factory.fs_blob.set(image_blob)!
mut image_file := fs_factory.fs_file.new(
name: 'logo.png'
fs_id: fs_id
directories: [images_dir_id]
blobs: [image_blob_id]
mime_type: 'image/png'
metadata: {
'width': '200'
'height': '100'
'format': 'PNG'
}
)!
image_file_id := fs_factory.fs_file.set(image_file)!
println('Files created successfully')
// Create symlinks
println('\nCreating symlinks...')
// Symlink to the API docs from the root directory
mut api_symlink := fs_factory.fs_symlink.new(
name: 'api-docs'
fs_id: fs_id
parent_id: root_dir_id
target_id: api_docs_dir_id
target_type: .directory
description: 'Shortcut to API documentation'
)!
api_symlink_id := fs_factory.fs_symlink.set(api_symlink)!
// Symlink to the logo from the docs directory
mut logo_symlink := fs_factory.fs_symlink.new(
name: 'logo.png'
fs_id: fs_id
parent_id: docs_dir_id
target_id: image_file_id
target_type: .file
description: 'Shortcut to project logo'
)!
logo_symlink_id := fs_factory.fs_symlink.set(logo_symlink)!
println('Symlinks created successfully')
// Demonstrate file operations
println('\nDemonstrating file operations...')
// 1. Move a file to multiple directories (hard link-like behavior)
println('Moving logo.png to both images and docs directories...')
image_file = fs_factory.fs_file.get(image_file_id)!
fs_factory.fs_file.move(image_file_id, [images_dir_id, docs_dir_id])!
image_file = fs_factory.fs_file.get(image_file_id)!
// 2. Rename a file
println('Renaming main.v to app.v...')
fs_factory.fs_file.rename(code_file_id, 'app.v')!
code_file = fs_factory.fs_file.get(code_file_id)!
// 3. Update file metadata
println('Updating file metadata...')
fs_factory.fs_file.update_metadata(docs_file_id, 'status', 'draft')!
fs_factory.fs_file.update_metadata(docs_file_id, 'author', 'HeroFS Team')!
// 4. Update file access time when "reading" it
println('Updating file access time...')
fs_factory.fs_file.update_accessed(docs_file_id)!
// 5. Add additional content to a file (append a blob)
println('Appending content to API docs...')
additional_content := '\n## Authentication\n\nUse Bearer token for authentication.\n'.bytes()
mut additional_blob := fs_factory.fs_blob.new(
data: additional_content
mime_type: 'text/markdown'
name: 'api_append.md blob'
)!
additional_blob_id := fs_factory.fs_blob.set(additional_blob)!
fs_factory.fs_file.append_blob(docs_file_id, additional_blob_id)!
// Demonstrate directory operations
println('\nDemonstrating directory operations...')
// 1. Create a new directory and move it
mut temp_dir := fs_factory.fs_dir.new(
name: 'temp'
fs_id: fs_id
parent_id: root_dir_id
description: 'Temporary directory'
)!
temp_dir_id := fs_factory.fs_dir.set(temp_dir)!
println('Moving temp directory to be under docs...')
fs_factory.fs_dir.move(temp_dir_id, docs_dir_id)!
// 2. Rename a directory
println('Renaming temp directory to drafts...')
fs_factory.fs_dir.rename(temp_dir_id, 'drafts')!
// 3. Check if a directory has children
has_children := fs_factory.fs_dir.has_children(docs_dir_id)!
println('Does docs directory have children? ${has_children}')
// Demonstrate searching and filtering
println('\nDemonstrating searching and filtering...')
// 1. List all files in the filesystem
all_files := fs_factory.fs_file.list_by_filesystem(fs_id)!
println('All files in filesystem (${all_files.len}):')
for file in all_files {
println('- ${file.name} (ID: ${file.id})')
}
// 2. List files by MIME type
markdown_files := fs_factory.fs_file.list_by_mime_type('text/markdown')!
println('\nMarkdown files (${markdown_files.len}):')
for file in markdown_files {
println('- ${file.name} (ID: ${file.id})')
}
// 3. List all symlinks
all_symlinks := fs_factory.fs_symlink.list_by_filesystem(fs_id)!
println('\nAll symlinks (${all_symlinks.len}):')
for symlink in all_symlinks {
target_type_str := if symlink.target_type == .file { 'file' } else { 'directory' }
println('- ${symlink.name} -> ${symlink.target_id} (${target_type_str})')
}
// 4. Check for broken symlinks
println('\nChecking for broken symlinks:')
for symlink in all_symlinks {
is_broken := fs_factory.fs_symlink.is_broken(symlink.id)!
println('- ${symlink.name}: ${if is_broken { 'BROKEN' } else { 'OK' }}')
}
// Demonstrate file content retrieval
println('\nDemonstrating file content retrieval:')
// Get the updated API docs file and print its content
docs_file = fs_factory.fs_file.get(docs_file_id)!
println('Content of ${docs_file.name}:')
mut full_content := ''
for blob_id in docs_file.blobs {
blob := fs_factory.fs_blob.get(blob_id)!
full_content += blob.data.bytestr()
}
println('---BEGIN CONTENT---')
println(full_content)
println('---END CONTENT---')
// Print filesystem usage
println('\nFilesystem usage:')
my_fs = fs_factory.fs.get(fs_id)!
println('Used: ${my_fs.used_bytes} bytes')
println('Quota: ${my_fs.quota_bytes} bytes')
println('Available: ${my_fs.quota_bytes - my_fs.used_bytes} bytes')
println('\nHeroFS advanced example completed successfully!')
}
File: /Users/despiegk/code/github/incubaid/herolib/examples/hero/herofs/herofs_basic.vsh
#!/usr/bin/env -S v -n -w -cg -gc none -cc tcc -d use_openssl -enable-globals run
import incubaid.herolib.core.redisclient
import incubaid.herolib.hero.herofs
// Basic example of using HeroFS - the Hero Filesystem
// Demonstrates creating a filesystem, directories, and files
fn main() {
// Initialize the HeroFS factory
mut fs_factory := herofs.new()!
println('HeroFS factory initialized')
// Create a new filesystem
mut my_fs := fs_factory.fs.new(
name: 'my_documents'
description: 'Personal documents filesystem'
quota_bytes: 1024 * 1024 * 1024 // 1GB quota
)!
// Save the filesystem to get an ID
fs_id := fs_factory.fs.set(my_fs)!
println('Created filesystem: ${my_fs.name} with ID: ${fs_id}')
// Create root directory
mut root_dir := fs_factory.fs_dir.new(
name: 'root'
fs_id: fs_id
parent_id: 0 // Root has no parent
description: 'Root directory'
)!
// Save the root directory
root_dir_id := fs_factory.fs_dir.set(root_dir)!
println('Created root directory with ID: ${root_dir_id}')
// Update the filesystem with the root directory ID
my_fs.root_dir_id = root_dir_id
fs_factory.fs.set(my_fs)!
// Create some subdirectories
mut docs_dir := fs_factory.fs_dir.new(
name: 'documents'
fs_id: fs_id
parent_id: root_dir_id
description: 'Documents directory'
)!
mut pics_dir := fs_factory.fs_dir.new(
name: 'pictures'
fs_id: fs_id
parent_id: root_dir_id
description: 'Pictures directory'
)!
// Save the subdirectories
docs_dir_id := fs_factory.fs_dir.set(docs_dir)!
pics_dir_id := fs_factory.fs_dir.set(pics_dir)!
println('Created documents directory with ID: ${docs_dir_id}')
println('Created pictures directory with ID: ${pics_dir_id}')
// Create a text file blob
text_content := 'Hello, world! This is a test file in HeroFS.'.bytes()
mut text_blob := fs_factory.fs_blob.new(
data: text_content
mime_type: 'text/plain'
name: 'hello.txt blob'
)!
// Save the blob
blob_id := fs_factory.fs_blob.set(text_blob)!
println('Created text blob with ID: ${blob_id}')
// Create a file referencing the blob
mut text_file := fs_factory.fs_file.new(
name: 'hello.txt'
fs_id: fs_id
directories: [docs_dir_id]
blobs: [blob_id]
mime_type: 'text/plain'
)!
// Save the file
file_id := fs_factory.fs_file.set(text_file)!
println('Created text file with ID: ${file_id}')
// List all directories in the filesystem
dirs := fs_factory.fs_dir.list_by_filesystem(fs_id)!
println('\nAll directories in filesystem:')
for dir in dirs {
println('- ${dir.name} (ID: ${dir.id})')
}
// List all files in the documents directory
files := fs_factory.fs_file.list_by_directory(docs_dir_id)!
println('\nFiles in documents directory:')
for file in files {
println('- ${file.name} (ID: ${file.id}, Size: ${file.size_bytes} bytes)')
// Get the file's content from its blobs
if file.blobs.len > 0 {
blob := fs_factory.fs_blob.get(file.blobs[0])!
content := blob.data.bytestr()
println(' Content: "${content}"')
}
}
println('\nHeroFS basic example completed successfully!')
}
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/factory.v
module herofs
import incubaid.herolib.hero.db
pub struct FsFactory {
pub mut:
fs DBFs
fs_blob DBFsBlob
fs_dir DBFsDir
fs_file DBFsFile
fs_symlink DBFsSymlink
}
pub fn new() !FsFactory {
mut mydb := db.new()!
return FsFactory{
fs: DBFs{
db: &mydb
}
fs_blob: DBFsBlob{
db: &mydb
}
fs_dir: DBFsDir{
db: &mydb
}
fs_file: DBFsFile{
db: &mydb
}
fs_symlink: DBFsSymlink{
db: &mydb
}
}
}
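A minimal usage sketch (names and values hypothetical, assumes a reachable hero.db backend): all five handlers returned by `new()` share one underlying `db.DB` reference, so objects created through one handler are immediately visible to the others:

```v
import incubaid.herolib.hero.herofs

fn factory_example() ! {
	mut f := herofs.new()!
	// create a filesystem through the fs handler
	mut fs := f.fs.new(name: 'scratch', quota_bytes: 1024 * 1024)!
	fs_id := f.fs.set(fs)!
	// the fs_dir handler sees the same db, so fs_id resolves here too
	mut root := f.fs_dir.new(name: 'root', fs_id: fs_id, parent_id: 0)!
	root_id := f.fs_dir.set(root)!
	println('fs ${fs_id} root ${root_id}')
}
```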
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/fs_blob.v
module herofs
import crypto.blake3
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
// FsBlob represents binary data up to 1MB
@[heap]
pub struct FsBlob {
db.Base
pub mut:
hash string // blake192 hash of content
data []u8 // Binary data (max 1MB)
size_bytes int // Size in bytes
created_at i64
mime_type string // MIME type
encoding string // Encoding type
}
pub struct DBFsBlob {
pub mut:
db &db.DB @[skip; str: skip]
}
pub fn (self FsBlob) type_name() string {
return 'fs_blob'
}
pub fn (self FsBlob) dump(mut e encoder.Encoder) ! {
e.add_string(self.hash)
e.add_list_u8(self.data)
e.add_int(self.size_bytes)
e.add_i64(self.created_at)
e.add_string(self.mime_type)
e.add_string(self.encoding)
}
fn (mut self DBFsBlob) load(mut o FsBlob, mut e encoder.Decoder) ! {
o.hash = e.get_string()!
o.data = e.get_list_u8()!
o.size_bytes = e.get_int()!
o.created_at = e.get_i64()!
o.mime_type = e.get_string()!
o.encoding = e.get_string()!
}
@[params]
pub struct FsBlobArg {
pub mut:
data []u8 @[required]
mime_type string
encoding string
name string
description string
tags []string
comments []db.CommentArg
}
pub fn (mut blob FsBlob) calculate_hash() {
hash := blake3.sum256(blob.data)
blob.hash = hash.hex()[..48] // blake192 = first 192 bits = 48 hex chars
}
// Create a new FsBlob in memory; it is not persisted until set() is called
pub fn (mut self DBFsBlob) new(args FsBlobArg) !FsBlob {
if args.data.len > 1024 * 1024 { // 1MB limit
return error('Blob size exceeds 1MB limit')
}
mut o := FsBlob{
data: args.data
size_bytes: args.data.len
created_at: ourtime.now().unix()
mime_type: args.mime_type
encoding: if args.encoding == '' { 'none' } else { args.encoding }
}
// Calculate hash
o.calculate_hash()
// Set base fields
o.name = args.name
o.description = args.description
o.tags = self.db.tags_get(args.tags)!
o.comments = self.db.comments_get(args.comments)!
o.updated_at = ourtime.now().unix()
return o
}
pub fn (mut self DBFsBlob) set(o FsBlob) !u32 {
// Check if a blob with this hash already exists
hash_id := self.db.redis.hget('fsblob:hashes', o.hash)!
if hash_id != '' {
// Blob already exists, return existing ID
return hash_id.u32()
}
// Use db set function which now returns the ID
id := self.db.set[FsBlob](o)!
// Store the hash -> id mapping for lookup
self.db.redis.hset('fsblob:hashes', o.hash, id.str())!
return id
}
pub fn (mut self DBFsBlob) delete(id u32) ! {
// Get the blob to retrieve its hash
mut blob := self.get(id)!
// Remove hash -> id mapping
self.db.redis.hdel('fsblob:hashes', blob.hash)!
// Delete the blob
self.db.delete[FsBlob](id)!
}
pub fn (mut self DBFsBlob) exist(id u32) !bool {
return self.db.exists[FsBlob](id)!
}
pub fn (mut self DBFsBlob) get(id u32) !FsBlob {
mut o, data := self.db.get_data[FsBlob](id)!
mut e_decoder := encoder.decoder_new(data)
self.load(mut o, mut e_decoder)!
return o
}
pub fn (mut self DBFsBlob) list() ![]FsBlob {
return self.db.list[FsBlob]()!.map(self.get(it)!)
}
pub fn (mut self DBFsBlob) get_by_hash(hash string) !FsBlob {
id_str := self.db.redis.hget('fsblob:hashes', hash)!
if id_str == '' {
return error('Blob with hash "${hash}" not found')
}
return self.get(id_str.u32())!
}
pub fn (mut self DBFsBlob) exists_by_hash(hash string) !bool {
id_str := self.db.redis.hget('fsblob:hashes', hash)!
return id_str != ''
}
pub fn (blob FsBlob) verify_integrity() bool {
hash := blake3.sum256(blob.data)
return hash.hex()[..48] == blob.hash
}
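Because `set()` deduplicates on the blake192 content hash via the `fsblob:hashes` index, writing the same bytes twice returns the existing ID instead of storing a second copy. A sketch of that behavior (assumes a running hero.db backend):

```v
import incubaid.herolib.hero.herofs

fn dedup_example() ! {
	mut f := herofs.new()!
	a := f.fs_blob.new(data: 'same bytes'.bytes())!
	id1 := f.fs_blob.set(a)!
	b := f.fs_blob.new(data: 'same bytes'.bytes())!
	id2 := f.fs_blob.set(b)! // hash already in fsblob:hashes, existing ID returned
	assert id1 == id2
	// verify_integrity recomputes blake3 over the stored data
	assert f.fs_blob.get(id1)!.verify_integrity()
}
```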
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/fs_dir.v
module herofs
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
// FsDir represents a directory in a filesystem
@[heap]
pub struct FsDir {
db.Base
pub mut:
name string
fs_id u32 // Associated filesystem
parent_id u32 // Parent directory ID (0 for root)
}
// DirectoryContents represents the contents of a directory
pub struct DirectoryContents {
pub mut:
directories []FsDir
files []FsFile
symlinks []FsSymlink
}
// ListContentsOptions defines options for listing directory contents
@[params]
pub struct ListContentsOptions {
pub mut:
recursive bool
include_patterns []string // File/directory name patterns to include (e.g. *.py, doc*)
exclude_patterns []string // File/directory name patterns to exclude
}
// We only store the parent reference on each directory, not a child list:
// children are found by querying on parent_id, backed by Redis hsets
// (fsdir:children:<id>) to keep that lookup fast and efficient
pub struct DBFsDir {
pub mut:
db &db.DB @[skip; str: skip]
}
pub fn (self FsDir) type_name() string {
return 'fs_dir'
}
pub fn (self FsDir) dump(mut e encoder.Encoder) ! {
e.add_string(self.name)
e.add_u32(self.fs_id)
e.add_u32(self.parent_id)
}
fn (mut self DBFsDir) load(mut o FsDir, mut e encoder.Decoder) ! {
o.name = e.get_string()!
o.fs_id = e.get_u32()!
o.parent_id = e.get_u32()!
}
@[params]
pub struct FsDirArg {
pub mut:
name string @[required]
description string
fs_id u32 @[required]
parent_id u32
tags []string
comments []db.CommentArg
}
// Create a new FsDir in memory; it is not persisted until set() is called
pub fn (mut self DBFsDir) new(args FsDirArg) !FsDir {
mut o := FsDir{
name: args.name
fs_id: args.fs_id
parent_id: args.parent_id
}
// Set base fields
o.description = args.description
o.tags = self.db.tags_get(args.tags)!
o.comments = self.db.comments_get(args.comments)!
o.updated_at = ourtime.now().unix()
return o
}
pub fn (mut self DBFsDir) set(o FsDir) !u32 {
id := self.db.set[FsDir](o)!
// Store directory in filesystem's directory index
path_key := '${o.fs_id}:${o.parent_id}:${o.name}'
self.db.redis.hset('fsdir:paths', path_key, id.str())!
// Store in filesystem's directory list using hset
self.db.redis.hset('fsdir:fs:${o.fs_id}', id.str(), id.str())!
// Store in parent's children list using hset
if o.parent_id > 0 {
self.db.redis.hset('fsdir:children:${o.parent_id}', id.str(), id.str())!
}
return id
}
pub fn (mut self DBFsDir) delete(id u32) ! {
// Get the directory info before deleting
dir := self.get(id)!
// Check if directory has children using hkeys
children := self.db.redis.hkeys('fsdir:children:${id}')!
if children.len > 0 {
return error('Cannot delete directory ${dir.name} (ID: ${id}) because it has ${children.len} children')
}
// Remove from path index
path_key := '${dir.fs_id}:${dir.parent_id}:${dir.name}'
self.db.redis.hdel('fsdir:paths', path_key)!
// Remove from filesystem's directory list using hdel
self.db.redis.hdel('fsdir:fs:${dir.fs_id}', id.str())!
// Remove from parent's children list using hdel
if dir.parent_id > 0 {
self.db.redis.hdel('fsdir:children:${dir.parent_id}', id.str())!
}
// Delete the directory itself
self.db.delete[FsDir](id)!
}
pub fn (mut self DBFsDir) exist(id u32) !bool {
return self.db.exists[FsDir](id)!
}
pub fn (mut self DBFsDir) get(id u32) !FsDir {
mut o, data := self.db.get_data[FsDir](id)!
mut e_decoder := encoder.decoder_new(data)
self.load(mut o, mut e_decoder)!
return o
}
pub fn (mut self DBFsDir) list() ![]FsDir {
return self.db.list[FsDir]()!.map(self.get(it)!)
}
// Get directory by path components
pub fn (mut self DBFsDir) get_by_path(fs_id u32, parent_id u32, name string) !FsDir {
path_key := '${fs_id}:${parent_id}:${name}'
id_str := self.db.redis.hget('fsdir:paths', path_key)!
if id_str == '' {
return error('Directory "${name}" not found in filesystem ${fs_id} under parent ${parent_id}')
}
return self.get(id_str.u32())!
}
// Get all directories in a filesystem
pub fn (mut self DBFsDir) list_by_filesystem(fs_id u32) ![]FsDir {
dir_ids := self.db.redis.hkeys('fsdir:fs:${fs_id}')!
mut dirs := []FsDir{}
for id_str in dir_ids {
dirs << self.get(id_str.u32())!
}
return dirs
}
// Get directory by absolute path
pub fn (mut self DBFsDir) get_by_absolute_path(fs_id u32, path string) !FsDir {
// Normalize path (remove trailing slashes, handle empty path)
normalized_path := if path == '' || path == '/' { '/' } else { path.trim_right('/') }
if normalized_path == '/' {
// Special case for root directory
dirs := self.list_by_filesystem(fs_id)!
for dir in dirs {
if dir.parent_id == 0 {
return dir
}
}
return error('Root directory not found for filesystem ${fs_id}')
}
// Split path into components
components := normalized_path.trim_left('/').split('/')
// Start from the root directory
mut current_dir_id := u32(0)
mut dirs := self.list_by_filesystem(fs_id)!
// Find root directory
for dir in dirs {
if dir.parent_id == 0 {
current_dir_id = dir.id
break
}
}
if current_dir_id == 0 {
return error('Root directory not found for filesystem ${fs_id}')
}
// Navigate through path components
for component in components {
mut found := false
for dir in dirs {
if dir.parent_id == current_dir_id && dir.name == component {
current_dir_id = dir.id
found = true
break
}
}
if !found {
return error('Directory "${component}" not found in path "${normalized_path}"')
}
// Update dirs for next iteration
dirs = self.list_children(current_dir_id)!
}
return self.get(current_dir_id)!
}
// Create a directory by absolute path, creating parent directories as needed
pub fn (mut self DBFsDir) create_path(fs_id u32, path string) !u32 {
// Normalize path
normalized_path := if path == '' || path == '/' { '/' } else { path.trim_right('/') }
if normalized_path == '/' {
// Special case for root directory
dirs := self.list_by_filesystem(fs_id)!
for dir in dirs {
if dir.parent_id == 0 {
return dir.id
}
}
// Create root directory if it doesn't exist
mut root_dir := self.new(
name: 'root'
fs_id: fs_id
parent_id: 0
description: 'Root directory'
)!
return self.set(root_dir)!
}
// Split path into components
components := normalized_path.trim_left('/').split('/')
// Start from the root directory
mut current_dir_id := u32(0)
mut dirs := self.list_by_filesystem(fs_id)!
// Find or create root directory
for dir in dirs {
if dir.parent_id == 0 {
current_dir_id = dir.id
break
}
}
if current_dir_id == 0 {
// Create root directory
mut root_dir := self.new(
name: 'root'
fs_id: fs_id
parent_id: 0
description: 'Root directory'
)!
current_dir_id = self.set(root_dir)!
}
// Navigate/create through path components
for component in components {
mut found := false
for dir in dirs {
if dir.parent_id == current_dir_id && dir.name == component {
current_dir_id = dir.id
found = true
break
}
}
if !found {
// Create this directory component
mut new_dir := self.new(
name: component
fs_id: fs_id
parent_id: current_dir_id
description: 'Directory created as part of path ${normalized_path}'
)!
current_dir_id = self.set(new_dir)!
}
// Update directory list for next iteration
dirs = self.list_children(current_dir_id)!
}
return current_dir_id
}
// Delete a directory by absolute path
pub fn (mut self DBFsDir) delete_by_path(fs_id u32, path string) ! {
dir := self.get_by_absolute_path(fs_id, path)!
self.delete(dir.id)!
}
// Move a directory using source and destination paths
pub fn (mut self DBFsDir) move_by_path(fs_id u32, source_path string, dest_path string) !u32 {
// Get the source directory
source_dir := self.get_by_absolute_path(fs_id, source_path)!
// For the destination, we need the parent directory
dest_dir_path := dest_path.all_before_last('/')
dest_dir_name := dest_path.all_after_last('/')
dest_parent_dir := if dest_dir_path == '' || dest_dir_path == '/' {
// Moving to the root
self.get_by_absolute_path(fs_id, '/')!
} else {
self.get_by_absolute_path(fs_id, dest_dir_path)!
}
// First rename if the destination name is different
if source_dir.name != dest_dir_name {
self.rename(source_dir.id, dest_dir_name)!
}
// Then move to the new parent
return self.move(source_dir.id, dest_parent_dir.id)!
}
// Get children of a directory
pub fn (mut self DBFsDir) list_children(dir_id u32) ![]FsDir {
child_ids := self.db.redis.hkeys('fsdir:children:${dir_id}')!
mut dirs := []FsDir{}
for id_str in child_ids {
dirs << self.get(id_str.u32())!
}
return dirs
}
// Check if a directory has children
pub fn (mut self DBFsDir) has_children(dir_id u32) !bool {
keys := self.db.redis.hkeys('fsdir:children:${dir_id}')!
return keys.len > 0
}
// Rename a directory
pub fn (mut self DBFsDir) rename(id u32, new_name string) !u32 {
mut dir := self.get(id)!
// Remove old path index
old_path_key := '${dir.fs_id}:${dir.parent_id}:${dir.name}'
self.db.redis.hdel('fsdir:paths', old_path_key)!
// Update name
dir.name = new_name
// Save with new name
return self.set(dir)!
}
// Move a directory to a new parent
pub fn (mut self DBFsDir) move(id u32, new_parent_id u32) !u32 {
mut dir := self.get(id)!
// Check that new parent exists and is in the same filesystem
if new_parent_id > 0 {
parent := self.get(new_parent_id)!
if parent.fs_id != dir.fs_id {
return error('Cannot move directory across filesystems')
}
}
// Remove old path index
old_path_key := '${dir.fs_id}:${dir.parent_id}:${dir.name}'
self.db.redis.hdel('fsdir:paths', old_path_key)!
// Remove from old parent's children list
if dir.parent_id > 0 {
self.db.redis.hdel('fsdir:children:${dir.parent_id}', id.str())!
}
// Update parent
dir.parent_id = new_parent_id
// Save with new parent
return self.set(dir)!
}
// List contents of a directory with filtering capabilities
pub fn (mut self DBFsDir) list_contents(fs_factory &FsFactory, dir_id u32, opts ListContentsOptions) !DirectoryContents {
mut result := DirectoryContents{}
// Helper function to check if name matches include/exclude patterns
matches_pattern := fn (name string, patterns []string) bool {
if patterns.len == 0 {
return true // No patterns means include everything
}
for pattern in patterns {
if pattern.contains('*') {
prefix := pattern.all_before('*')
suffix := pattern.all_after('*')
if prefix == '' && suffix == '' {
return true // Pattern is just "*"
} else if prefix == '' {
if name.ends_with(suffix) {
return true
}
} else if suffix == '' {
if name.starts_with(prefix) {
return true
}
} else {
if name.starts_with(prefix) && name.ends_with(suffix) {
return true
}
}
} else if name == pattern {
return true // Exact match
}
}
return false
}
// Check if item should be included based on patterns
should_include := fn [matches_pattern] (name string, include_patterns []string, exclude_patterns []string) bool {
// First apply include patterns (if empty, include everything)
if !matches_pattern(name, include_patterns) && include_patterns.len > 0 {
return false
}
// Then apply exclude patterns
if matches_pattern(name, exclude_patterns) && exclude_patterns.len > 0 {
return false
}
return true
}
// Get directories, files, and symlinks in the current directory
dirs := self.list_children(dir_id)!
for dir in dirs {
if should_include(dir.name, opts.include_patterns, opts.exclude_patterns) {
result.directories << dir
}
// If recursive, process subdirectories
if opts.recursive {
sub_contents := self.list_contents(fs_factory, dir.id, opts)!
result.directories << sub_contents.directories
result.files << sub_contents.files
result.symlinks << sub_contents.symlinks
}
}
// Get files in the directory
files := fs_factory.fs_file.list_by_directory(dir_id)!
for file in files {
if should_include(file.name, opts.include_patterns, opts.exclude_patterns) {
result.files << file
}
}
// Get symlinks in the directory
symlinks := fs_factory.fs_symlink.list_by_parent(dir_id)!
for symlink in symlinks {
if should_include(symlink.name, opts.include_patterns, opts.exclude_patterns) {
result.symlinks << symlink
}
}
return result
}
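The two path helpers complement each other: `create_path` walks the components and creates any missing directories (including root), while `get_by_absolute_path` resolves component by component through the `fsdir:children:<id>` indexes. A sketch (assumes `fs_id` refers to an existing filesystem):

```v
import incubaid.herolib.hero.herofs

fn path_example(fs_id u32) ! {
	mut f := herofs.new()!
	// creates root, 'projects' and 'herolib' as needed, returns the leaf ID
	leaf_id := f.fs_dir.create_path(fs_id, '/projects/herolib')!
	// resolves the same path component by component
	dir := f.fs_dir.get_by_absolute_path(fs_id, '/projects/herolib')!
	assert dir.id == leaf_id
}
```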
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/fs_file.v
module herofs
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
// FsFile represents a file in a filesystem
@[heap]
pub struct FsFile {
db.Base
pub mut:
name string
fs_id u32 // Associated filesystem
directories []u32 // Directory IDs this file belongs to; a file can live in multiple directories (like hard links in Linux)
blobs []u32 // IDs of file content blobs
size_bytes u64
mime_type string // e.g., "image/png"
checksum string // e.g., SHA256 checksum of the file
accessed_at i64
metadata map[string]string // Custom metadata
}
pub struct DBFsFile {
pub mut:
db &db.DB @[skip; str: skip]
}
pub fn (self FsFile) type_name() string {
return 'fs_file'
}
pub fn (self FsFile) dump(mut e encoder.Encoder) ! {
e.add_string(self.name)
e.add_u32(self.fs_id)
// Handle directories
e.add_u16(u16(self.directories.len))
for dir_id in self.directories {
e.add_u32(dir_id)
}
// Handle blobs
e.add_u16(u16(self.blobs.len))
for blob_id in self.blobs {
e.add_u32(blob_id)
}
e.add_u64(self.size_bytes)
e.add_string(self.mime_type)
e.add_string(self.checksum)
e.add_i64(self.accessed_at)
// Handle metadata map
e.add_u16(u16(self.metadata.len))
for key, value in self.metadata {
e.add_string(key)
e.add_string(value)
}
}
fn (mut self DBFsFile) load(mut o FsFile, mut e encoder.Decoder) ! {
o.name = e.get_string()!
o.fs_id = e.get_u32()!
// Load directories
dirs_count := e.get_u16()!
o.directories = []u32{cap: int(dirs_count)}
for _ in 0 .. dirs_count {
o.directories << e.get_u32()!
}
// Load blobs
blobs_count := e.get_u16()!
o.blobs = []u32{cap: int(blobs_count)}
for _ in 0 .. blobs_count {
o.blobs << e.get_u32()!
}
o.size_bytes = e.get_u64()!
o.mime_type = e.get_string()!
o.checksum = e.get_string()!
o.accessed_at = e.get_i64()!
// Load metadata map
metadata_count := e.get_u16()!
o.metadata = map[string]string{}
for _ in 0 .. metadata_count {
key := e.get_string()!
value := e.get_string()!
o.metadata[key] = value
}
}
@[params]
pub struct FsFileArg {
pub mut:
name string @[required]
description string
fs_id u32 @[required]
directories []u32 @[required]
blobs []u32
size_bytes u64
mime_type string
checksum string
metadata map[string]string
tags []string
comments []db.CommentArg
}
// Create a new FsFile in memory; it is not persisted until set() is called
pub fn (mut self DBFsFile) new(args FsFileArg) !FsFile {
// Calculate size based on blobs if not provided
mut size := args.size_bytes
if size == 0 && args.blobs.len > 0 {
// We'll need to sum the sizes of all blobs
for blob_id in args.blobs {
blob_exists := self.db.exists[FsBlob](blob_id)!
if !blob_exists {
return error('Blob with ID ${blob_id} does not exist')
}
// Get blob data
_, blob_data := self.db.get_data[FsBlob](blob_id)!
mut e_decoder := encoder.decoder_new(blob_data)
// Skip hash
e_decoder.get_string()!
// Skip data, get size directly
e_decoder.get_list_u8()!
size += u64(e_decoder.get_int()!)
}
}
mut o := FsFile{
name: args.name
fs_id: args.fs_id
directories: args.directories
blobs: args.blobs
size_bytes: size
mime_type: args.mime_type
checksum: args.checksum
accessed_at: ourtime.now().unix()
metadata: args.metadata
}
// Set base fields
o.description = args.description
o.tags = self.db.tags_get(args.tags)!
o.comments = self.db.comments_get(args.comments)!
o.updated_at = ourtime.now().unix()
return o
}
pub fn (mut self DBFsFile) set(o FsFile) !u32 {
// Check that directories exist
for dir_id in o.directories {
dir_exists := self.db.exists[FsDir](dir_id)!
if !dir_exists {
return error('Directory with ID ${dir_id} does not exist')
}
}
// Check that blobs exist
for blob_id in o.blobs {
blob_exists := self.db.exists[FsBlob](blob_id)!
if !blob_exists {
return error('Blob with ID ${blob_id} does not exist')
}
}
id := self.db.set[FsFile](o)!
// Store file in each directory's file index
for dir_id in o.directories {
// Store by name in each directory
path_key := '${dir_id}:${o.name}'
self.db.redis.hset('fsfile:paths', path_key, id.str())!
// Add to directory's file list using hset
self.db.redis.hset('fsfile:dir:${dir_id}', id.str(), id.str())!
}
// Store in filesystem's file list using hset
self.db.redis.hset('fsfile:fs:${o.fs_id}', id.str(), id.str())!
// Store by mimetype using hset
if o.mime_type != '' {
self.db.redis.hset('fsfile:mime:${o.mime_type}', id.str(), id.str())!
}
return id
}
pub fn (mut self DBFsFile) delete(id u32) ! {
// Get the file info before deleting
file := self.get(id)!
// Remove from each directory's file index
for dir_id in file.directories {
// Remove from path index
path_key := '${dir_id}:${file.name}'
self.db.redis.hdel('fsfile:paths', path_key)!
// Remove from directory's file list using hdel
self.db.redis.hdel('fsfile:dir:${dir_id}', id.str())!
}
// Remove from filesystem's file list using hdel
self.db.redis.hdel('fsfile:fs:${file.fs_id}', id.str())!
// Remove from mimetype index using hdel
if file.mime_type != '' {
self.db.redis.hdel('fsfile:mime:${file.mime_type}', id.str())!
}
// Delete the file itself
self.db.delete[FsFile](id)!
}
pub fn (mut self DBFsFile) exist(id u32) !bool {
return self.db.exists[FsFile](id)!
}
pub fn (mut self DBFsFile) get(id u32) !FsFile {
mut o, data := self.db.get_data[FsFile](id)!
mut e_decoder := encoder.decoder_new(data)
self.load(mut o, mut e_decoder)!
return o
}
pub fn (mut self DBFsFile) list() ![]FsFile {
return self.db.list[FsFile]()!.map(self.get(it)!)
}
// Get file by path in a specific directory
pub fn (mut self DBFsFile) get_by_path(dir_id u32, name string) !FsFile {
path_key := '${dir_id}:${name}'
id_str := self.db.redis.hget('fsfile:paths', path_key)!
if id_str == '' {
return error('File "${name}" not found in directory ${dir_id}')
}
return self.get(id_str.u32())!
}
// List files in a directory
pub fn (mut self DBFsFile) list_by_directory(dir_id u32) ![]FsFile {
file_ids := self.db.redis.hkeys('fsfile:dir:${dir_id}')!
mut files := []FsFile{}
for id_str in file_ids {
files << self.get(id_str.u32())!
}
return files
}
// List files in a filesystem
pub fn (mut self DBFsFile) list_by_filesystem(fs_id u32) ![]FsFile {
file_ids := self.db.redis.hkeys('fsfile:fs:${fs_id}')!
mut files := []FsFile{}
for id_str in file_ids {
files << self.get(id_str.u32())!
}
return files
}
// List files by mime type
pub fn (mut self DBFsFile) list_by_mime_type(mime_type string) ![]FsFile {
file_ids := self.db.redis.hkeys('fsfile:mime:${mime_type}')!
mut files := []FsFile{}
for id_str in file_ids {
files << self.get(id_str.u32())!
}
return files
}
// Update file with a new blob (append)
pub fn (mut self DBFsFile) append_blob(id u32, blob_id u32) !u32 {
// Check blob exists
blob_exists := self.db.exists[FsBlob](blob_id)!
if !blob_exists {
return error('Blob with ID ${blob_id} does not exist')
}
// Get blob size
_, blob_data := self.db.get_data[FsBlob](blob_id)!
mut e_decoder := encoder.decoder_new(blob_data)
// Skip hash
e_decoder.get_string()!
// Skip data, get size directly
e_decoder.get_list_u8()!
blob_size := e_decoder.get_int()!
// Get file
mut file := self.get(id)!
// Add blob if not already in the list
if blob_id !in file.blobs {
file.blobs << blob_id
file.size_bytes += u64(blob_size)
file.updated_at = ourtime.now().unix()
}
// Save file
return self.set(file)!
}
// Update file accessed timestamp
pub fn (mut self DBFsFile) update_accessed(id u32) !u32 {
mut file := self.get(id)!
file.accessed_at = ourtime.now().unix()
return self.set(file)!
}
// Update file metadata
pub fn (mut self DBFsFile) update_metadata(id u32, key string, value string) !u32 {
mut file := self.get(id)!
file.metadata[key] = value
file.updated_at = ourtime.now().unix()
return self.set(file)!
}
// Rename a file
pub fn (mut self DBFsFile) rename(id u32, new_name string) !u32 {
mut file := self.get(id)!
// Remove old path indexes
for dir_id in file.directories {
old_path_key := '${dir_id}:${file.name}'
self.db.redis.hdel('fsfile:paths', old_path_key)!
}
// Update name
file.name = new_name
// Save with new name
return self.set(file)!
}
// Move file to different directories
pub fn (mut self DBFsFile) move(id u32, new_directories []u32) !u32 {
mut file := self.get(id)!
// Check that all new directories exist
for dir_id in new_directories {
dir_exists := self.db.exists[FsDir](dir_id)!
if !dir_exists {
return error('Directory with ID ${dir_id} does not exist')
}
}
// Remove from old directories
for dir_id in file.directories {
path_key := '${dir_id}:${file.name}'
self.db.redis.hdel('fsfile:paths', path_key)!
self.db.redis.hdel('fsfile:dir:${dir_id}', id.str())!
}
// Update directories
file.directories = new_directories
// Save with new directories
return self.set(file)!
}
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/fs_symlink.v
module herofs
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
// FsSymlink represents a symbolic link in a filesystem
@[heap]
pub struct FsSymlink {
db.Base
pub mut:
name string
fs_id u32 // Associated filesystem
parent_id u32 // Parent directory ID
target_id u32 // ID of target file or directory
target_type SymlinkTargetType
}
pub enum SymlinkTargetType {
file
directory
}
pub struct DBFsSymlink {
pub mut:
db &db.DB @[skip; str: skip]
}
pub fn (self FsSymlink) type_name() string {
return 'fs_symlink'
}
pub fn (self FsSymlink) dump(mut e encoder.Encoder) ! {
e.add_string(self.name)
e.add_u32(self.fs_id)
e.add_u32(self.parent_id)
e.add_u32(self.target_id)
e.add_u8(u8(self.target_type))
}
fn (mut self DBFsSymlink) load(mut o FsSymlink, mut e encoder.Decoder) ! {
o.name = e.get_string()!
o.fs_id = e.get_u32()!
o.parent_id = e.get_u32()!
o.target_id = e.get_u32()!
o.target_type = unsafe { SymlinkTargetType(e.get_u8()!) }
}
@[params]
pub struct FsSymlinkArg {
pub mut:
name string @[required]
description string
fs_id u32 @[required]
parent_id u32 @[required]
target_id u32 @[required]
target_type SymlinkTargetType @[required]
tags []string
comments []db.CommentArg
}
// Create a new FsSymlink in memory; it is not persisted until set() is called
pub fn (mut self DBFsSymlink) new(args FsSymlinkArg) !FsSymlink {
mut o := FsSymlink{
name: args.name
fs_id: args.fs_id
parent_id: args.parent_id
target_id: args.target_id
target_type: args.target_type
}
// Set base fields
o.description = args.description
o.tags = self.db.tags_get(args.tags)!
o.comments = self.db.comments_get(args.comments)!
o.updated_at = ourtime.now().unix()
return o
}
pub fn (mut self DBFsSymlink) set(o FsSymlink) !u32 {
// Check parent directory exists
if o.parent_id > 0 {
parent_exists := self.db.exists[FsDir](o.parent_id)!
if !parent_exists {
return error('Parent directory with ID ${o.parent_id} does not exist')
}
}
// Check target exists based on target type
if o.target_type == .file {
target_exists := self.db.exists[FsFile](o.target_id)!
if !target_exists {
return error('Target file with ID ${o.target_id} does not exist')
}
} else if o.target_type == .directory {
target_exists := self.db.exists[FsDir](o.target_id)!
if !target_exists {
return error('Target directory with ID ${o.target_id} does not exist')
}
}
id := self.db.set[FsSymlink](o)!
// Store symlink in parent directory's symlink index
path_key := '${o.parent_id}:${o.name}'
self.db.redis.hset('fssymlink:paths', path_key, id.str())!
// Add to parent's symlinks list using hset
self.db.redis.hset('fssymlink:parent:${o.parent_id}', id.str(), id.str())!
// Store in filesystem's symlink list using hset
self.db.redis.hset('fssymlink:fs:${o.fs_id}', id.str(), id.str())!
// Store in target's referrers list using hset
target_key := '${o.target_type}:${o.target_id}'
self.db.redis.hset('fssymlink:target:${target_key}', id.str(), id.str())!
return id
}
pub fn (mut self DBFsSymlink) delete(id u32) ! {
// Get the symlink info before deleting
symlink := self.get(id)!
// Remove from path index
path_key := '${symlink.parent_id}:${symlink.name}'
self.db.redis.hdel('fssymlink:paths', path_key)!
// Remove from parent's symlinks list using hdel
self.db.redis.hdel('fssymlink:parent:${symlink.parent_id}', id.str())!
// Remove from filesystem's symlink list using hdel
self.db.redis.hdel('fssymlink:fs:${symlink.fs_id}', id.str())!
// Remove from target's referrers list using hdel
target_key := '${symlink.target_type}:${symlink.target_id}'
self.db.redis.hdel('fssymlink:target:${target_key}', id.str())!
// Delete the symlink itself
self.db.delete[FsSymlink](id)!
}
pub fn (mut self DBFsSymlink) exist(id u32) !bool {
return self.db.exists[FsSymlink](id)!
}
pub fn (mut self DBFsSymlink) get(id u32) !FsSymlink {
mut o, data := self.db.get_data[FsSymlink](id)!
mut e_decoder := encoder.decoder_new(data)
self.load(mut o, mut e_decoder)!
return o
}
pub fn (mut self DBFsSymlink) list() ![]FsSymlink {
return self.db.list[FsSymlink]()!.map(self.get(it)!)
}
// Get symlink by path in a parent directory
pub fn (mut self DBFsSymlink) get_by_path(parent_id u32, name string) !FsSymlink {
path_key := '${parent_id}:${name}'
id_str := self.db.redis.hget('fssymlink:paths', path_key)!
if id_str == '' {
return error('Symlink "${name}" not found in parent directory ${parent_id}')
}
return self.get(id_str.u32())!
}
// List symlinks in a parent directory
pub fn (mut self DBFsSymlink) list_by_parent(parent_id u32) ![]FsSymlink {
symlink_ids := self.db.redis.hkeys('fssymlink:parent:${parent_id}')!
mut symlinks := []FsSymlink{}
for id_str in symlink_ids {
symlinks << self.get(id_str.u32())!
}
return symlinks
}
// List symlinks in a filesystem
pub fn (mut self DBFsSymlink) list_by_filesystem(fs_id u32) ![]FsSymlink {
symlink_ids := self.db.redis.hkeys('fssymlink:fs:${fs_id}')!
mut symlinks := []FsSymlink{}
for id_str in symlink_ids {
symlinks << self.get(id_str.u32())!
}
return symlinks
}
// List symlinks pointing to a target
pub fn (mut self DBFsSymlink) list_by_target(target_type SymlinkTargetType, target_id u32) ![]FsSymlink {
target_key := '${target_type}:${target_id}'
symlink_ids := self.db.redis.hkeys('fssymlink:target:${target_key}')!
mut symlinks := []FsSymlink{}
for id_str in symlink_ids {
symlinks << self.get(id_str.u32())!
}
return symlinks
}
// Rename a symlink
pub fn (mut self DBFsSymlink) rename(id u32, new_name string) !u32 {
mut symlink := self.get(id)!
// Remove old path index
old_path_key := '${symlink.parent_id}:${symlink.name}'
self.db.redis.hdel('fssymlink:paths', old_path_key)!
// Update name
symlink.name = new_name
// Save with new name
return self.set(symlink)!
}
// Move symlink to a new parent directory
pub fn (mut self DBFsSymlink) move(id u32, new_parent_id u32) !u32 {
mut symlink := self.get(id)!
// Check that new parent exists and is in the same filesystem.
// Note: get_data only fills the Base fields; fs_id lives in the encoded
// payload. FsDir serializes name first, then fs_id (struct field order),
// so decode just enough of the payload to compare.
if new_parent_id > 0 {
_, parent_data := self.db.get_data[FsDir](new_parent_id)!
mut parent_decoder := encoder.decoder_new(parent_data)
parent_decoder.get_string()! // skip name
if parent_decoder.get_u32()! != symlink.fs_id {
return error('Cannot move symlink across filesystems')
}
}
// Remove old path index
old_path_key := '${symlink.parent_id}:${symlink.name}'
self.db.redis.hdel('fssymlink:paths', old_path_key)!
// Remove from old parent's symlinks list using hdel
self.db.redis.hdel('fssymlink:parent:${symlink.parent_id}', id.str())!
// Update parent
symlink.parent_id = new_parent_id
// Save with new parent
return self.set(symlink)!
}
// Redirect symlink to a new target
pub fn (mut self DBFsSymlink) redirect(id u32, new_target_id u32, new_target_type SymlinkTargetType) !u32 {
mut symlink := self.get(id)!
// Check new target exists
if new_target_type == .file {
target_exists := self.db.exists[FsFile](new_target_id)!
if !target_exists {
return error('Target file with ID ${new_target_id} does not exist')
}
} else if new_target_type == .directory {
target_exists := self.db.exists[FsDir](new_target_id)!
if !target_exists {
return error('Target directory with ID ${new_target_id} does not exist')
}
}
// Remove from old target's referrers list
old_target_key := '${symlink.target_type}:${symlink.target_id}'
self.db.redis.hdel('fssymlink:target:${old_target_key}', id.str())!
// Update target
symlink.target_id = new_target_id
symlink.target_type = new_target_type
// Save with new target
return self.set(symlink)!
}
// Resolve a symlink to get its target
pub fn (mut self DBFsSymlink) resolve(id u32) !u32 {
symlink := self.get(id)!
return symlink.target_id
}
// Check if a symlink is broken (target doesn't exist)
pub fn (mut self DBFsSymlink) is_broken(id u32) !bool {
symlink := self.get(id)!
if symlink.target_type == .file {
return !self.db.exists[FsFile](symlink.target_id)!
} else if symlink.target_type == .directory {
return !self.db.exists[FsDir](symlink.target_id)!
}
return true // Unknown target type is considered broken
}
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/fs.v
module herofs
import incubaid.herolib.data.encoder
import incubaid.herolib.data.ourtime
import incubaid.herolib.hero.db
// Fs represents a filesystem: the top-level container for files, directories and symlinks; blobs are shared across filesystems
@[heap]
pub struct Fs {
db.Base
pub mut:
name string
group_id u32 // Associated group for permissions
root_dir_id u32 // ID of root directory
quota_bytes u64 // Storage quota in bytes
used_bytes u64 // Current usage in bytes
}
// We only keep the root directory ID here; other directories can be found by querying parent_id in FsDir
pub struct DBFs {
pub mut:
db &db.DB @[skip; str: skip]
}
pub fn (self Fs) type_name() string {
return 'fs'
}
pub fn (self Fs) dump(mut e encoder.Encoder) ! {
e.add_string(self.name)
e.add_u32(self.group_id)
e.add_u32(self.root_dir_id)
e.add_u64(self.quota_bytes)
e.add_u64(self.used_bytes)
}
fn (mut self DBFs) load(mut o Fs, mut e encoder.Decoder) ! {
o.name = e.get_string()!
o.group_id = e.get_u32()!
o.root_dir_id = e.get_u32()!
o.quota_bytes = e.get_u64()!
o.used_bytes = e.get_u64()!
}
@[params]
pub struct FsArg {
pub mut:
name string @[required]
description string
group_id u32
root_dir_id u32
quota_bytes u64
used_bytes u64
tags []string
comments []db.CommentArg
}
// new creates an in-memory Fs instance; it is not stored in the DB until set() is called
pub fn (mut self DBFs) new(args FsArg) !Fs {
mut o := Fs{
name: args.name
group_id: args.group_id
root_dir_id: args.root_dir_id
quota_bytes: args.quota_bytes
used_bytes: args.used_bytes
}
// Set base fields
o.description = args.description
o.tags = self.db.tags_get(args.tags)!
o.comments = self.db.comments_get(args.comments)!
o.updated_at = ourtime.now().unix()
return o
}
pub fn (mut self DBFs) set(o Fs) !u32 {
id := self.db.set[Fs](o)!
// Store name -> id mapping for lookups
self.db.redis.hset('fs:names', o.name, id.str())!
return id
}
pub fn (mut self DBFs) delete(id u32) ! {
// Get the filesystem to retrieve its name
fs := self.get(id)!
// Remove name -> id mapping
self.db.redis.hdel('fs:names', fs.name)!
// Delete the filesystem
self.db.delete[Fs](id)!
}
pub fn (mut self DBFs) exist(id u32) !bool {
return self.db.exists[Fs](id)!
}
pub fn (mut self DBFs) get(id u32) !Fs {
mut o, data := self.db.get_data[Fs](id)!
mut e_decoder := encoder.decoder_new(data)
self.load(mut o, mut e_decoder)!
return o
}
pub fn (mut self DBFs) list() ![]Fs {
return self.db.list[Fs]()!.map(self.get(it)!)
}
// Additional hset operations for efficient lookups
pub fn (mut self DBFs) get_by_name(name string) !Fs {
// The name -> id mapping is maintained in the 'fs:names' hash by set()
id_str := self.db.redis.hget('fs:names', name)!
if id_str == '' {
return error('Filesystem with name "${name}" not found')
}
return self.get(id_str.u32())!
}
// Custom method to increase used_bytes
pub fn (mut self DBFs) increase_usage(id u32, bytes u64) !u64 {
mut fs := self.get(id)!
fs.used_bytes += bytes
self.set(fs)!
return fs.used_bytes
}
// Custom method to decrease used_bytes
pub fn (mut self DBFs) decrease_usage(id u32, bytes u64) !u64 {
mut fs := self.get(id)!
if bytes > fs.used_bytes {
fs.used_bytes = 0
} else {
fs.used_bytes -= bytes
}
self.set(fs)!
return fs.used_bytes
}
// Check if quota is exceeded
pub fn (mut self DBFs) check_quota(id u32, additional_bytes u64) !bool {
fs := self.get(id)!
return (fs.used_bytes + additional_bytes) <= fs.quota_bytes
}
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/README.md
# HeroFS - Distributed Filesystem for HeroLib
HeroFS is a distributed filesystem implementation built on top of HeroDB (Redis-based storage). It provides a virtual filesystem with support for files, directories, symbolic links, and binary data blobs.
## Overview
HeroFS implements a filesystem structure where:
- **Fs**: Represents a filesystem as a top-level container
- **FsDir**: Represents directories within a filesystem
- **FsFile**: Represents files with support for multiple directory associations
- **FsSymlink**: Represents symbolic links pointing to files or directories
- **FsBlob**: Represents binary data chunks (up to 1MB) used as file content
## Features
- Distributed storage using Redis
- Support for files, directories, and symbolic links
- Blob-based file content storage with integrity verification
- Multiple directory associations for files (similar to hard links)
- Filesystem quotas and usage tracking
- Metadata support for files
- Efficient lookup mechanisms using Redis hash sets
## Installation
HeroFS is part of HeroLib and is automatically available when using HeroLib.
## Usage
To use HeroFS, you need to create a filesystem factory:
```v
import incubaid.herolib.hero.herofs

mut fs_factory := herofs.new()!
```

### Creating a Filesystem

```v
fs_id := fs_factory.fs.set(fs_factory.fs.new(
	name:        'my_filesystem'
	quota_bytes: 1000000000 // 1GB quota
)!)!
```

### Working with Directories

```v
// Create root directory
root_dir_id := fs_factory.fs_dir.set(fs_factory.fs_dir.new(
	name:      'root'
	fs_id:     fs_id
	parent_id: 0
)!)!

// Create subdirectory
sub_dir_id := fs_factory.fs_dir.set(fs_factory.fs_dir.new(
	name:      'documents'
	fs_id:     fs_id
	parent_id: root_dir_id
)!)!
```

### Working with Blobs

```v
// Create a blob with binary data
blob_id := fs_factory.fs_blob.set(fs_factory.fs_blob.new(
	data:      content_bytes
	mime_type: 'text/plain'
)!)!
```

### Working with Files

```v
// Create a file
file_id := fs_factory.fs_file.set(fs_factory.fs_file.new(
	name:        'example.txt'
	fs_id:       fs_id
	directories: [root_dir_id]
	blobs:       [blob_id]
)!)!
```

### Working with Symbolic Links

```v
// Create a symbolic link to a file
symlink_id := fs_factory.fs_symlink.set(fs_factory.fs_symlink.new(
	name:        'example_link.txt'
	fs_id:       fs_id
	parent_id:   root_dir_id
	target_id:   file_id
	target_type: .file
)!)!
```

## API Reference

The HeroFS module provides the following main components:

- `FsFactory` - Main factory for accessing all filesystem components
- `DBFs` - Filesystem operations
- `DBFsDir` - Directory operations
- `DBFsFile` - File operations
- `DBFsSymlink` - Symbolic link operations
- `DBFsBlob` - Binary data blob operations

Each component provides CRUD operations and specialized methods for filesystem management.

## Examples

Check the `examples/hero/herofs/` directory for detailed usage examples.
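Reading content back is the reverse of the creation flow above: look the file up, then concatenate its blobs. This is a hedged sketch, not a verbatim API: it assumes `DBFsFile.get_by_path(dir_id, name)` and the `FsBlob.data` field as described in specs.md.

```v
// Sketch: read 'example.txt' back out of the root directory.
// get_by_path signature and blob layout are assumptions from specs.md.
file := fs_factory.fs_file.get_by_path(root_dir_id, 'example.txt')!
mut content := []u8{}
for b_id in file.blobs {
	blob := fs_factory.fs_blob.get(b_id)!
	content << blob.data // blobs are ordered chunks of file content
}
println(content.bytestr())
```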
File: /Users/despiegk/code/github/incubaid/herolib/lib/hero/herofs/specs.md
# HeroFS Specifications
This document provides detailed specifications for the HeroFS distributed filesystem implementation.
## Architecture Overview
HeroFS is built on top of HeroDB, which uses Redis as its storage backend. The filesystem is implemented as a collection of interconnected data structures that represent the various components of a filesystem:
1. **Fs** - Filesystem container
2. **FsDir** - Directories
3. **FsFile** - Files
4. **FsSymlink** - Symbolic links
5. **FsBlob** - Binary data chunks
All components inherit from the `Base` struct, which provides common fields like ID, name, description, timestamps, security policies, tags, and comments.
## Filesystem (Fs)
The `Fs` struct represents a filesystem as a top-level container:
```v
@[heap]
pub struct Fs {
	db.Base
pub mut:
	name        string
	group_id    u32 // Associated group for permissions
	root_dir_id u32 // ID of root directory
	quota_bytes u64 // Storage quota in bytes
	used_bytes  u64 // Current usage in bytes
}
```

### Key Features

- Name-based identification: Filesystems can be retrieved by name using efficient Redis hash sets
- Quota management: Each filesystem has a storage quota and tracks current usage
- Root directory: Each filesystem has a root directory ID that serves as the entry point
- Group association: Filesystems can be associated with groups for permission management

### Methods

- `new()`: Create a new filesystem instance
- `set()`: Save filesystem to database
- `get()`: Retrieve filesystem by ID
- `get_by_name()`: Retrieve filesystem by name
- `delete()`: Remove filesystem from database
- `exist()`: Check if filesystem exists
- `list()`: List all filesystems
- `increase_usage()`: Increase used bytes counter
- `decrease_usage()`: Decrease used bytes counter
- `check_quota()`: Verify if additional bytes would exceed quota
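The quota methods compose into a reserve-then-record pattern. A hedged sketch (`check_quota` and `increase_usage` are from fs.v; `fs_factory` and `data` are assumed from the README usage examples):

```v
// Sketch: check quota before writing, record the usage afterwards.
if !fs_factory.fs.check_quota(fs_id, u64(data.len))! {
	return error('quota exceeded for filesystem ${fs_id}')
}
// ... write blob(s) and the file ...
fs_factory.fs.increase_usage(fs_id, u64(data.len))!
```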
## Directory (FsDir)

The `FsDir` struct represents a directory in a filesystem:

```v
@[heap]
pub struct FsDir {
	db.Base
pub mut:
	name      string
	fs_id     u32 // Associated filesystem
	parent_id u32 // Parent directory ID (0 for root)
}
```

### Key Features

- Hierarchical structure: Directories form a tree structure with parent-child relationships
- Path-based identification: Efficient lookup by filesystem ID, parent ID, and name
- Children management: Directories automatically track their children through Redis hash sets
- Cross-filesystem isolation: Directories are bound to a specific filesystem

### Methods

- `new()`: Create a new directory instance
- `set()`: Save directory to database and update indices
- `get()`: Retrieve directory by ID
- `delete()`: Remove directory (fails if it has children)
- `exist()`: Check if directory exists
- `list()`: List all directories
- `get_by_path()`: Retrieve directory by path components
- `list_by_filesystem()`: List directories in a filesystem
- `list_children()`: List child directories
- `has_children()`: Check if directory has children
- `rename()`: Rename directory
- `move()`: Move directory to a new parent
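Since children are tracked per directory, a tree traversal is a simple recursion over `list_children()`. A hedged sketch: the `list_children(dir_id)` signature and the `id` field on `db.Base` are assumptions based on the method list above.

```v
// Hedged sketch: depth-first walk over a directory tree.
fn walk(mut dirs herofs.DBFsDir, dir_id u32, depth int) ! {
	for child in dirs.list_children(dir_id)! {
		println('${'  '.repeat(depth)}${child.name}/')
		walk(mut dirs, child.id, depth + 1)! // recurse into subdirectory
	}
}
```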
## File (FsFile)

The `FsFile` struct represents a file in a filesystem:

```v
@[heap]
pub struct FsFile {
	db.Base
pub mut:
	name        string
	fs_id       u32   // Associated filesystem
	directories []u32 // Directory IDs where this file exists
	blobs       []u32 // IDs of file content blobs
	size_bytes  u64
	mime_type   string // e.g., "image/png"
	checksum    string // e.g., SHA256 checksum of the file
	accessed_at i64
	metadata    map[string]string // Custom metadata
}
```

### Key Features

- Multiple directory associations: Files can exist in multiple directories (similar to hard links in Linux)
- Blob-based content: File content is stored as references to FsBlob objects
- Size tracking: Files track their total size in bytes
- MIME type support: Files store their MIME type for content identification
- Checksum verification: Files can store checksums for integrity verification
- Access timestamp: Tracks when the file was last accessed
- Custom metadata: Files support custom key-value metadata

### Methods

- `new()`: Create a new file instance
- `set()`: Save file to database and update indices
- `get()`: Retrieve file by ID
- `delete()`: Remove file and update all indices
- `exist()`: Check if file exists
- `list()`: List all files
- `get_by_path()`: Retrieve file by directory and name
- `list_by_directory()`: List files in a directory
- `list_by_filesystem()`: List files in a filesystem
- `list_by_mime_type()`: List files by MIME type
- `append_blob()`: Add a new blob to the file
- `update_accessed()`: Update accessed timestamp
- `update_metadata()`: Update file metadata
- `rename()`: Rename file (affects all directories)
- `move()`: Move file to different directories
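The metadata and timestamp helpers are meant for small attribute updates without rewriting the whole file record. A hedged sketch: the `update_metadata(id, map)` and `update_accessed(id)` signatures are assumptions inferred from the method list above, and `fs_factory`/`file_id` come from the README examples.

```v
// Sketch: attach custom metadata and bump the access time.
fs_factory.fs_file.update_metadata(file_id, {
	'author': 'alice'
	'state':  'draft'
})!
fs_factory.fs_file.update_accessed(file_id)!
```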
## Symbolic Link (FsSymlink)

The `FsSymlink` struct represents a symbolic link in a filesystem:

```v
@[heap]
pub struct FsSymlink {
	db.Base
pub mut:
	name        string
	fs_id       u32 // Associated filesystem
	parent_id   u32 // Parent directory ID
	target_id   u32 // ID of target file or directory
	target_type SymlinkTargetType
}

pub enum SymlinkTargetType {
	file
	directory
}
```

### Key Features

- Target type specification: Symlinks can point to either files or directories
- Cross-filesystem protection: Symlinks cannot point to targets in different filesystems
- Referrer tracking: Targets know which symlinks point to them
- Broken link detection: Symlinks can be checked for validity

### Methods

- `new()`: Create a new symbolic link instance
- `set()`: Save symlink to database and update indices
- `get()`: Retrieve symlink by ID
- `delete()`: Remove symlink and update all indices
- `exist()`: Check if symlink exists
- `list()`: List all symlinks
- `get_by_path()`: Retrieve symlink by parent directory and name
- `list_by_parent()`: List symlinks in a parent directory
- `list_by_filesystem()`: List symlinks in a filesystem
- `list_by_target()`: List symlinks pointing to a target
- `rename()`: Rename symlink
- `move()`: Move symlink to a new parent directory
- `redirect()`: Change symlink target
- `resolve()`: Get the target ID of a symlink
- `is_broken()`: Check if symlink target exists
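`resolve()` and `is_broken()` (both implemented in fs_symlink.v) cover the common follow-a-link case; `fs_factory` and `symlink_id` are assumed from the README examples:

```v
// resolve() returns the target ID; is_broken() reports whether the
// target record still exists.
target_id := fs_factory.fs_symlink.resolve(symlink_id)!
if fs_factory.fs_symlink.is_broken(symlink_id)! {
	println('symlink ${symlink_id} dangles; target ${target_id} is gone')
}
```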
## Binary Data Blob (FsBlob)

The `FsBlob` struct represents binary data chunks:

```v
@[heap]
pub struct FsBlob {
	db.Base
pub mut:
	hash       string // blake192 hash of content
	data       []u8   // Binary data (max 1MB)
	size_bytes int    // Size in bytes
	created_at i64
	mime_type  string // MIME type
	encoding   string // Encoding type
}
```

### Key Features

- Content-based addressing: Blobs are identified by their BLAKE3 hash (first 192 bits)
- Size limit: Blobs are limited to 1MB to ensure efficient storage and retrieval
- Integrity verification: Built-in hash verification for data integrity
- MIME type and encoding: Blobs store their content type information
- Deduplication: Identical content blobs are automatically deduplicated

### Methods

- `new()`: Create a new blob instance
- `set()`: Save blob to database (returns existing ID if content already exists)
- `get()`: Retrieve blob by ID
- `delete()`: Remove blob from database
- `exist()`: Check if blob exists
- `list()`: List all blobs
- `get_by_hash()`: Retrieve blob by content hash
- `exists_by_hash()`: Check if blob exists by content hash
- `verify_integrity()`: Verify blob data integrity against stored hash
- `calculate_hash()`: Calculate BLAKE3 hash of blob data
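Because `set()` returns the existing ID when the content hash matches, storing the same bytes twice yields one blob. A hedged sketch (the `data:` parameter of `fs_blob.new` follows the README example; `fs_factory` is assumed from there too):

```v
// Sketch: content-addressed deduplication in action.
id1 := fs_factory.fs_blob.set(fs_factory.fs_blob.new(data: 'hello'.bytes())!)!
id2 := fs_factory.fs_blob.set(fs_factory.fs_blob.new(data: 'hello'.bytes())!)!
assert id1 == id2 // same content hash -> same blob ID (per the spec above)
```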
## Storage Mechanisms

HeroFS uses Redis hash sets extensively for efficient indexing and lookup:

### Filesystem Indices

- `fs:names` - Maps filesystem names to IDs
- `fsdir:paths` - Maps directory path components to IDs
- `fsdir:fs:${fs_id}` - Lists directories in a filesystem
- `fsdir:children:${dir_id}` - Lists children of a directory
- `fsfile:paths` - Maps file paths (directory:name) to IDs
- `fsfile:dir:${dir_id}` - Lists files in a directory
- `fsfile:fs:${fs_id}` - Lists files in a filesystem
- `fsfile:mime:${mime_type}` - Lists files by MIME type
- `fssymlink:paths` - Maps symlink paths (parent:name) to IDs
- `fssymlink:parent:${parent_id}` - Lists symlinks in a parent directory
- `fssymlink:fs:${fs_id}` - Lists symlinks in a filesystem
- `fssymlink:target:${target_type}:${target_id}` - Lists symlinks pointing to a target
- `fsblob:hashes` - Maps content hashes to blob IDs
## Data Serialization
All HeroFS components use the HeroLib encoder for serialization:
- Version tag (u8) is stored first
- All fields are serialized in a consistent order
- Deserialization follows the exact same order
- Type safety is maintained through V's type system
## Special Features

### Hard Links

Files can be associated with multiple directories through the `directories` field, allowing for hard-link-like behavior.
### Deduplication
Blobs are automatically deduplicated based on their content hash. When creating a new blob with identical content to an existing one, the existing ID is returned.
### Quota Management
Filesystems track their storage usage and can enforce quotas to prevent overconsumption.
### Metadata Support
Files support custom metadata as key-value pairs, allowing for flexible attribute storage.
## Cross-Component Validation
When creating or modifying components, HeroFS validates references to other components:
- Directory parent must exist
- File directories must exist
- File blobs must exist
- Symlink parent must exist
- Symlink target must exist and match target type
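Validation failures surface as ordinary V errors. For example, `DBFsSymlink.move` (shown in fs_symlink.v) rejects a parent in another filesystem; `foreign_dir_id` here is a hypothetical directory in a different filesystem:

```v
// Sketch: handle a cross-filesystem move rejection with if-unwrapping.
if moved_id := fs_factory.fs_symlink.move(symlink_id, foreign_dir_id) {
	println('moved, id ${moved_id}')
} else {
	println('move rejected: ${err}') // e.g. 'Cannot move symlink across filesystems'
}
```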
## Security Model

HeroFS inherits the security model from HeroDB:

- Each component has a `securitypolicy` field referencing a `SecurityPolicy` object
- Components can have associated tags for categorization
- Components can have associated comments for documentation
## Performance Considerations
- All indices are stored as Redis hash sets for O(1) lookup performance
- Blob deduplication reduces storage requirements
- Multiple directory associations allow efficient file organization
- Content-based addressing enables easy integrity verification
- Factory pattern provides easy access to all filesystem components
</file_contents>
<user_instructions>
for lib/hero/herofs
we need to do some refactoring
- we should create a fs_tools.v and this has functions more high level to walk over the filesystem like you would expect from an fs and other tools
- find with include/exclude starting from a path, returning []FindResult and FindResult has 2 props: .enum for file,dir or link, and the u32 id to the obj
- can be recursive and not
- a cp, rm, move all starting from path
basically make it easy to manipulate the fs
when doing a rm, we need to say if we have flag to say if we want to delete the blobs or not, default not
make clear steps and then do them one by one
report well on what you are doing
</user_instructions>