Compare commits: v1.0.38...herodb_cli (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | 55ebaa4d68 |  |
32 .github/workflows/README.md (vendored)
@@ -1,32 +0,0 @@
# Building Hero for release

Generally speaking, our scripts and docs for building Hero produce non-portable binaries for Linux. While that's fine for development purposes, statically linked binaries are much more convenient for releases and distribution.

The release workflow here creates a static binary for Linux using an Alpine container. A few notes follow about how that's done.

## Static builds in vlang

Since V compiles to C in our case, we are really concerned with how to produce static C builds. The V project provides [some guidance](https://github.com/vlang/v?tab=readme-ov-file#docker-with-alpinemusl) on using an Alpine container and passing `-cflags -static` to the V compiler.

That's fine for some projects. Hero, however, depends on the `libpq` C library for Postgres functionality, and this creates a complication.

## Static linking libpq

In order to create a static build of Hero on Alpine, we need to install some additional packages:

* openssl-libs-static
* postgresql-dev

The full `apk` command to prepare the container for building looks like this:

```bash
apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
```

Then we also need to instruct the C compiler to link against the static archives of Postgres's support libraries (`libpgcommon_shlib` and `libpgport_shlib`). Here's the build command:

```bash
v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v
```

Note that gcc is also the preferred compiler for static builds.
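As a quick sanity check, the resulting binary can be inspected to confirm it really is static, for example:

```bash
# cli/hero is the default output name for `v ... cli/hero.v`; adjust if -o was used.
file cli/hero   # expect "... statically linked" in the output
ldd cli/hero    # expect "not a dynamic executable" (or an error) rather than a library list
```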
35 .github/workflows/hero_build.yml (vendored)
@@ -35,6 +35,9 @@ jobs:
      - name: Checkout code
        uses: actions/checkout@v4

      # We do the workaround as described here https://github.com/Incubaid/herolib?tab=readme-ov-file#tcc-compiler-error-on-macos
      # gcc and clang also don't work on macOS due to https://github.com/vlang/v/issues/25467
      # We can change the compiler or remove this when one is fixed
      - name: Setup V & Herolib
        id: setup
        shell: bash
@@ -50,34 +53,52 @@ jobs:
            echo "Herolib symlink created to $(pwd)/lib"
        timeout-minutes: 10

      # For Linux, we build a static binary linked against musl on Alpine. For
      # static linking, gcc is preferred
      # We can't make static builds for Linux easily, since we link to libpq
      # (Postgres) and this has no static version available in the Alpine
      # repos. Therefore we build dynamic binaries for both glibc and musl.
      #
      # Again we work around a bug limiting our choice of C compiler: tcc won't
      # work on Alpine due to https://github.com/vlang/v/issues/24866
      # So always use gcc for Linux
      #
      # For macOS, we can only use tcc (see above), but then we hit issues using
      # the garbage collector, so disable that
      - name: Build Hero
        timeout-minutes: 15
        run: |
          set -ex
          set -e
          if [ "${{ runner.os }}" = "Linux" ]; then
            sudo apt-get install libpq-dev
            # Build for glibc
            v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}

            # Build for musl using Alpine in Docker
            docker run --rm \
              -v ${{ github.workspace }}/lib:/root/.vmodules/incubaid/herolib \
              -v ${{ github.workspace }}:/herolib \
              -w /herolib \
              alpine:3.22 \
              alpine \
              sh -c '
                set -ex
                apk add --no-cache bash git build-base openssl-dev libpq-dev postgresql-dev openssl-libs-static
                apk add --no-cache bash git build-base openssl-dev libpq-dev
                cd v
                make clean
                make
                ./v symlink
                cd ..
                v -w -d use_openssl -enable-globals -cc gcc -cflags -static -ldflags "-lpgcommon_shlib -lpgport_shlib" cli/hero.v -o cli/hero-${{ matrix.target }}-musl
                v -w -d use_openssl -enable-globals -cc gcc cli/hero.v -o cli/hero-${{ matrix.target }}-musl
              '

          else
            v -w -d use_openssl -enable-globals -cc clang cli/hero.v -o cli/hero-${{ matrix.target }}
          fi

      - name: Upload glibc binary
        if: runner.os == 'Linux'
        uses: actions/upload-artifact@v4
        with:
          name: hero-${{ matrix.target }}
          path: cli/hero-${{ matrix.target }}

      - name: Upload musl binary
        if: runner.os == 'Linux'
        uses: actions/upload-artifact@v4
@@ -53,7 +53,7 @@ fn do() ! {
	mut cmd := Command{
		name:        'hero'
		description: 'Your HERO toolset.'
		version:     '1.0.38'
		version:     '1.0.36'
	}

	mut toinstall := false
@@ -103,4 +103,4 @@ fn main() {
			print_backtrace()
			exit(1)
		}
	}
}
34 examples/clients/herodb.vsh (Normal file)
@@ -0,0 +1,34 @@
#!/usr/bin/env -S v -n -w -enable-globals run

import incubaid.herolib.clients.herodb

// Initialize the client
mut client := herodb.new(herodb.Config{
	url: 'http://localhost:3000'
})!

println('Connecting to HeroDB at ${client.server_url}...')

// List instances
instances := client.list_instances()!

println('Found ${instances.len} instances:')

for instance in instances {
	println('----------------------------------------')
	println('Index: ${instance.index}')
	println('Name: ${instance.name}')
	println('Created At: ${instance.created_at}')

	// Parse backend info
	backend := instance.get_backend_info() or {
		println('Backend: Unknown/Error (${err})')
		continue
	}

	println('Backend Type: ${backend.type_name}')
	if backend.path != '' {
		println('Backend Path: ${backend.path}')
	}
}
println('----------------------------------------')
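The shebang line above lets the script run directly once V is on the PATH and herolib is resolvable as a module (plus a HeroDB server listening on the configured URL); a minimal sketch, assuming herolib lives under `~/.vmodules/incubaid/herolib`:

```bash
# Run the example script directly via its shebang...
chmod +x examples/clients/herodb.vsh
./examples/clients/herodb.vsh

# ...or invoke it explicitly with the same flags the shebang passes to V.
v -n -w -enable-globals run examples/clients/herodb.vsh
```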
52 lib/clients/herodb/README.md (Normal file)
@@ -0,0 +1,52 @@
# HeroDB Client

A V client library for interacting with the HeroDB JSON-RPC API.

## Features

- Connects to HeroDB's JSON-RPC server (default port 3000).
- Lists running database instances.
- Parses polymorphic backend types (InMemory, Redb, LanceDb).

## Usage

```v
import incubaid.herolib.clients.herodb

fn main() {
	// Initialize the client
	mut client := herodb.new(herodb.Config{
		url: 'http://localhost:3000'
	})!

	// List instances
	instances := client.list_instances()!

	for instance in instances {
		println('Index: ${instance.index}')
		println('Name: ${instance.name}')

		// Parse backend info
		backend := instance.get_backend_info()!
		println('Backend: ${backend.type_name}')
		if backend.path != '' {
			println('Path: ${backend.path}')
		}
		println('---')
	}
}
```

## API Reference

### `fn new(cfg Config) !HeroDB`

Creates a new HeroDB client instance.

### `fn (mut self HeroDB) list_instances() ![]InstanceMetadata`

Retrieves a list of all currently loaded database instances.

### `fn (m InstanceMetadata) get_backend_info() !BackendInfo`

Helper method to parse the `backend_type` field from `InstanceMetadata` into a structured `BackendInfo` object.
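Under the hood, `list_instances()` issues a plain JSON-RPC 2.0 request over HTTP POST (see `rpc_call` in `herodb.v`). A rough manual equivalent with curl, assuming the server accepts requests at the root path as the client's empty `prefix` suggests:

```bash
# Hypothetical manual call; the commented response below is illustrative only.
curl -s -X POST http://localhost:3000 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"db_listInstances","params":[],"id":1}'

# Expected shape of a successful reply, with backend_type either a plain string
# ("InMemory") or a single-key object such as {"Redb": "/path/to/db"}:
# {"jsonrpc":"2.0","result":[{"index":0,"name":"mydb","backend_type":{"Redb":"/path/to/db"},"created_at":"..."}],"id":1}
```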
127 lib/clients/herodb/herodb.v (Normal file)
@@ -0,0 +1,127 @@
module herodb

import json
import incubaid.herolib.core.httpconnection

// JSON-RPC 2.0 Structures

struct JsonRpcRequest {
	jsonrpc string = '2.0'
	method  string
	params  []string
	id      int
}

struct JsonRpcResponse[T] {
	jsonrpc string
	result  T
	error   ?JsonRpcError
	id      int
}

struct JsonRpcError {
	code    int
	message string
	data    string
}

// HeroDB Specific Structures

pub struct InstanceMetadata {
pub:
	index int
	name  string
	// backend_type can be a string ("InMemory") or an object ({"Redb": "path"}).
	// We use the `raw` attribute to capture the raw JSON and parse it manually.
	backend_type string @[raw]
	created_at   string
}

// Helper struct to represent the parsed backend info in a usable way
pub struct BackendInfo {
pub:
	type_name string // "InMemory", "Redb", "LanceDb"
	path      string // Empty for InMemory
}

pub struct HeroDB {
pub:
	server_url string
pub mut:
	conn ?&httpconnection.HTTPConnection
}

pub struct Config {
pub:
	url string = 'http://localhost:3000'
}

pub fn new(cfg Config) !HeroDB {
	return HeroDB{
		server_url: cfg.url
	}
}

pub fn (mut self HeroDB) connection() !&httpconnection.HTTPConnection {
	if mut conn := self.conn {
		return conn
	}

	mut new_conn := httpconnection.new(
		name:  'herodb'
		url:   self.server_url
		retry: 3
	)!
	self.conn = new_conn
	return new_conn
}

fn (mut self HeroDB) rpc_call[T](method string) !T {
	mut conn := self.connection()!

	req := JsonRpcRequest{
		method: method
		id:     1
		params: []
	}

	response := conn.post_json_generic[JsonRpcResponse[T]](
		method:     .post
		prefix:     ''
		data:       json.encode(req)
		dataformat: .json
	)!

	if err := response.error {
		return error('RPC Error ${err.code}: ${err.message}')
	}

	return response.result
}

pub fn (mut self HeroDB) list_instances() ![]InstanceMetadata {
	return self.rpc_call[[]InstanceMetadata]('db_listInstances')!
}

pub fn (m InstanceMetadata) get_backend_info() !BackendInfo {
	if m.backend_type.len == 0 {
		return error('empty backend_type')
	}
	if m.backend_type[0] == `"` {
		// It's a string
		val := json.decode(string, m.backend_type)!
		return BackendInfo{
			type_name: val
		}
	} else if m.backend_type[0] == `{` {
		// It's an object
		val := json.decode(map[string]string, m.backend_type)!
		for k, v in val {
			return BackendInfo{
				type_name: k
				path:      v
			}
		}
	}
	return error('unknown backend_type format: ${m.backend_type}')
}
@@ -149,19 +149,19 @@ pub fn (mut config CrunConfig) set_hostname(hostname string) &CrunConfig {
}

pub fn (mut config CrunConfig) set_memory_limit(limit_bytes u64) &CrunConfig {
	config.spec.linux_config.resources.memory.limit = limit_bytes
	config.spec.linux.resources.memory.limit = limit_bytes
	return config
}

pub fn (mut config CrunConfig) set_cpu_limits(period u64, quota i64, shares u64) &CrunConfig {
	config.spec.linux_config.resources.cpu.period = period
	config.spec.linux_config.resources.cpu.quota = quota
	config.spec.linux_config.resources.cpu.shares = shares
	config.spec.linux.resources.cpu.period = period
	config.spec.linux.resources.cpu.quota = quota
	config.spec.linux.resources.cpu.shares = shares
	return config
}

pub fn (mut config CrunConfig) set_pids_limit(limit i64) &CrunConfig {
	config.spec.linux_config.resources.pids.limit = limit
	config.spec.linux.resources.pids.limit = limit
	return config
}
@@ -222,15 +222,15 @@ pub fn (mut config CrunConfig) set_terminal(value bool) &CrunConfig {
}

pub fn (mut config CrunConfig) add_masked_path(path string) &CrunConfig {
	if path !in config.spec.linux_config.masked_paths {
		config.spec.linux_config.masked_paths << path
	if path !in config.spec.linux.masked_paths {
		config.spec.linux.masked_paths << path
	}
	return config
}

pub fn (mut config CrunConfig) add_readonly_path(path string) &CrunConfig {
	if path !in config.spec.linux_config.readonly_paths {
		config.spec.linux_config.readonly_paths << path
	if path !in config.spec.linux.readonly_paths {
		config.spec.linux.readonly_paths << path
	}
	return config
}
@@ -293,7 +293,7 @@ fn create_default_spec() Spec {
		}
		hostname: 'container'
		mounts: create_default_mounts()
		linux_config: LinuxConfig{
		linux: Linux{
			namespaces: create_default_namespaces()
			masked_paths: [
				'/proc/acpi',
@@ -3,14 +3,14 @@ module crun
// OCI Runtime Spec structures that can be directly encoded to JSON
pub struct Spec {
pub mut:
	oci_version  string @[json: 'ociVersion']
	platform     Platform
	process      Process
	root         Root
	hostname     string
	mounts       []Mount
	linux_config LinuxConfig
	hooks        Hooks
	oci_version  string @[json: 'ociVersion']
	platform     Platform
	process      Process
	root         Root
	hostname     string
	mounts       []Mount
	linux        Linux
	hooks        Hooks
}

pub struct Platform {
@@ -68,7 +68,7 @@ pub mut:
	options []string
}

pub struct LinuxConfig {
pub struct Linux {
pub mut:
	namespaces []LinuxNamespace
	resources  LinuxResources
@@ -88,7 +88,7 @@ os.write_file(hero_v_path, lines.join_lines()) or {
os.write_file(hero_v_path, lines.join_lines()) or {
os.rm('${hero_v_path}.backup') or { eprintln('Warning: Could not remove backup file: ${err}') }

// Update version in install_hero.sh
install_hero_path := '${ourdir}/scripts/install_hero.sh'
install_hero_path := '${ourdir}/install_hero.sh'
install_hero_content := os.read_file(install_hero_path) or {
	eprintln('Error reading ${install_hero_path}: ${err}')
	exit(1)
@@ -4,7 +4,7 @@ set -e

os_name="$(uname -s)"
arch_name="$(uname -m)"
version='1.0.38'
version='1.0.36'

# Detect Linux distribution type
linux_type=""
@@ -17,11 +17,19 @@ fi
# Base URL for GitHub releases
base_url="https://github.com/incubaid/herolib/releases/download/v${version}"

# Select the URL based on the platform. For Linux we have a single static binary
# Select the URL based on the platform
if [[ "$os_name" == "Linux" && "$arch_name" == "x86_64" ]]; then
    url="$base_url/hero-x86_64-linux-musl"
    if [[ "$linux_type" == "alpine" ]]; then
        url="$base_url/hero-x86_64-linux-musl"
    else
        url="$base_url/hero-x86_64-linux"
    fi
elif [[ "$os_name" == "Linux" && "$arch_name" == "aarch64" ]]; then
    url="$base_url/hero-aarch64-linux-musl"
    if [[ "$linux_type" == "alpine" ]]; then
        url="$base_url/hero-aarch64-linux-musl"
    else
        url="$base_url/hero-aarch64-linux"
    fi
elif [[ "$os_name" == "Darwin" && "$arch_name" == "arm64" ]]; then
    url="$base_url/hero-aarch64-apple-darwin"
# elif [[ "$os_name" == "Darwin" && "$arch_name" == "x86_64" ]]; then
@@ -138,4 +146,4 @@ if [ "$file_size" -ge 2 ]; then
    else
        echo "Downloaded file is less than 2 MB. Process aborted."
        exit 1
    fi
fi