# Compare commits

43 commits on `tantivy` (base 9177fa4091...)

SHA1s: e5fcaf81b5, 65c297ca94, 8decbf3375, ff0659b933, e9675aafed, 142084c60f, 4b3a86d73d, a1127b72da, fbcaafc86b, 3850df89be, ce1be0369a, 45195d403e, 4b8216bfdb, f17b441ca1, 8bc372ea64, 7920945986, ff4ea1d844, d4d3660bac, c9e1dcdb6c, b68325016d, 2743cd9c81, eb07386cf4, fc7672c78a, 46f96fa8cf, 56699b9abb, dd90a49615, 9054737e84, 09553f54c8, 58cb1e8d5e, d3d92819cf, 4fd48f8b0d, 4bedf71c2d, b9987a027b, 3b9756a4e1, f22a25f5a1, 892e6e2b90, b9a9f3e6d6, 463000c8f7, a92c90e9cb, 34808fc1c9, b644bf873f, a306544a34, afa1033cd6
---

**Cargo.lock** (generated, 1022 lines changed). File diff suppressed because it is too large.
---

**Cargo.toml** (40 lines changed)

```diff
@@ -1,12 +1,30 @@
-[workspace]
-members = [
-"herodb",
-"supervisor",
-]
-resolver = "2"
-
-# You can define shared profiles for all workspace members here
-[profile.release]
-lto = true
-codegen-units = 1
-strip = true
+[package]
+name = "herodb"
+version = "0.0.1"
+authors = ["Pin Fang <fpfangpin@hotmail.com>"]
+edition = "2021"
+
+[dependencies]
+anyhow = "1.0.59"
+bytes = "1.3.0"
+thiserror = "1.0.32"
+tokio = { version = "1.23.0", features = ["full"] }
+clap = { version = "4.5.20", features = ["derive"] }
+byteorder = "1.4.3"
+futures = "0.3"
+sled = "0.34"
+redb = "2.1.3"
+serde = { version = "1.0", features = ["derive"] }
+serde_json = "1.0"
+bincode = "1.3"
+chacha20poly1305 = "0.10.1"
+rand = "0.8"
+sha2 = "0.10"
+age = "0.10"
+secrecy = "0.8"
+ed25519-dalek = "2"
+base64 = "0.22"
+tantivy = "0.25.0"
+
+[dev-dependencies]
+redis = { version = "0.24", features = ["aio", "tokio-comp"] }
```
---

**README.md** (new file, 85 lines)

# HeroDB

HeroDB is a Redis-compatible database built with Rust, offering a flexible and secure storage solution. It supports two primary storage backends: `redb` (default) and `sled`, both with full encryption capabilities. HeroDB aims to provide a robust and performant key-value store with advanced features like data-at-rest encryption, hash operations, list operations, and cursor-based scanning.

## Purpose

The main purpose of HeroDB is to offer a lightweight, embeddable, and Redis-compatible database that prioritizes data security through transparent encryption. It is designed for applications that need fast, reliable data storage with the option of strong cryptographic protection, without the overhead of a full Redis server.

## Features

- **Redis Compatibility**: Supports a subset of Redis commands over RESP (Redis Serialization Protocol) via TCP.
- **Dual Backend Support**:
  - `redb` (default): Optimized for concurrent access and high-throughput scenarios.
  - `sled`: A lock-free, log-structured database, excellent for specific workloads.
- **Data-at-Rest Encryption**: Transparent encryption for both backends using the `age` encryption library.
- **Key-Value Operations**: Full support for basic string, hash, and list operations.
- **Expiration**: Time-to-live (TTL) functionality for keys.
- **Scanning**: Cursor-based iteration for keys and hash fields (`SCAN`, `HSCAN`).
- **AGE Cryptography Commands**: HeroDB-specific extensions for cryptographic operations.
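Because HeroDB speaks RESP over TCP, any tool that can frame RESP arrays can talk to it. As an illustration of the wire format (this is generic RESP, not code from HeroDB), here is a minimal Python sketch encoding a command such as `SET mykey hello`:

```python
def encode_resp_command(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings (what redis-cli sends)."""
    out = [f"*{len(args)}\r\n".encode()]      # array header: number of arguments
    for arg in args:
        data = arg.encode()
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")  # bulk string
    return b"".join(out)

wire = encode_resp_command("SET", "mykey", "hello")
print(wire)
# b'*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$5\r\nhello\r\n'
```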
## Quick Start

### Building HeroDB

To build HeroDB, navigate to the project root and run:

```bash
cargo build --release
```

### Running HeroDB

You can start HeroDB with different backends and encryption options:

#### Default `redb` Backend

```bash
./target/release/herodb --dir /tmp/herodb_redb --port 6379
```

#### `sled` Backend

```bash
./target/release/herodb --dir /tmp/herodb_sled --port 6379 --sled
```

#### `redb` with Encryption

```bash
./target/release/herodb --dir /tmp/herodb_encrypted --port 6379 --encrypt --key mysecretkey
```

#### `sled` with Encryption

```bash
./target/release/herodb --dir /tmp/herodb_sled_encrypted --port 6379 --sled --encrypt --key mysecretkey
```

## Usage with Redis Clients

HeroDB can be used with any standard Redis client, such as `redis-cli`, `redis-py` (Python), or `ioredis` (Node.js).

### Example with `redis-cli`

```bash
redis-cli -p 6379 SET mykey "Hello from HeroDB!"
redis-cli -p 6379 GET mykey
# → "Hello from HeroDB!"

redis-cli -p 6379 HSET user:1 name "Alice" age "30"
redis-cli -p 6379 HGET user:1 name
# → "Alice"

redis-cli -p 6379 SCAN 0 MATCH user:* COUNT 10
# → 1) "0"
#    2) 1) "user:1"
```

## Documentation

For more detailed information on commands, features, and advanced usage, refer to the documentation:

- [Basics](docs/basics.md)
- [Supported Commands](docs/cmds.md)
- [AGE Cryptography](docs/age.md)
---

**docs/age.md** (new file, 188 lines)

# HeroDB AGE usage: Stateless vs Key-Managed

This document explains how to use the AGE cryptography commands exposed by HeroDB over the Redis protocol in two modes:

- Stateless (ephemeral keys; nothing stored on the server)
- Key-managed (server-persisted, named keys)

If you are new to the codebase, the exact tests that exercise these behaviors are:

- [rust.test_07_age_stateless_suite()](herodb/tests/usage_suite.rs:495)
- [rust.test_08_age_persistent_named_suite()](herodb/tests/usage_suite.rs:555)

Implementation entry points:

- [herodb/src/age.rs](herodb/src/age.rs)
- Dispatch from [herodb/src/cmd.rs](herodb/src/cmd.rs)

Note: database-at-rest encryption flags in the test harness are unrelated to the AGE commands; those flags control storage-level encryption of DB files. See the harness near [rust.start_test_server()](herodb/tests/usage_suite.rs:10).

## Quick start

Assuming you want the server on localhost on some $PORT, build and start it (here on port 6381):

```bash
~/code/git.ourworld.tf/herocode/herodb/herodb/build.sh
~/code/git.ourworld.tf/herocode/herodb/target/release/herodb --dir /tmp/data --debug --port 6381 --encryption-key 1234 --encrypt
```
```bash
export PORT=6381
# Generate an ephemeral keypair and encrypt/decrypt a message (stateless mode)
redis-cli -p $PORT AGE GENENC
# → returns an array: [recipient, identity]

redis-cli -p $PORT AGE ENCRYPT <recipient> "hello world"
# → returns ciphertext (base64 in a bulk string)

redis-cli -p $PORT AGE DECRYPT <identity> <ciphertext_b64>
# → returns "hello world"
```

For key-managed mode, generate a named key once and reference it by name afterwards:

```bash
redis-cli -p $PORT AGE KEYGEN app1
# → persists an encryption keypair under the name "app1"

redis-cli -p $PORT AGE ENCRYPTNAME app1 "hello"
redis-cli -p $PORT AGE DECRYPTNAME app1 <ciphertext_b64>
```

## Stateless AGE (ephemeral)

Characteristics:

- No server-side storage of keys.
- You pass the actual key material with every call.
- Not listable via AGE LIST.

Commands and examples

1) Ephemeral encryption keys

```bash
# Generate an ephemeral encryption keypair
redis-cli -p $PORT AGE GENENC
# Example output (abridged):
# 1) "age1qz..."            # recipient (public key): share it so others can encrypt to you
# 2) "AGE-SECRET-KEY-1..."  # identity (secret key): your private key; do not lose it

# Encrypt with the recipient public key
redis-cli -p $PORT AGE ENCRYPT "age1qz..." "hello world"
# → returns bulk string payload: base64 ciphertext (encrypted content)

# Decrypt with the identity, i.e. your private key
redis-cli -p $PORT AGE DECRYPT "AGE-SECRET-KEY-1..." "<ciphertext_b64>"
# → "hello world"
```

2) Ephemeral signing keys

> Open question: is the signing secret the same as my private key? (It is generated as a separate keypair from the encryption identity.)

```bash
# Generate an ephemeral signing keypair
redis-cli -p $PORT AGE GENSIGN
# Example output:
# 1) "<verify_pub_b64>"
# 2) "<sign_secret_b64>"

# Sign a message with the secret
redis-cli -p $PORT AGE SIGN "<sign_secret_b64>" "msg"
# → returns "<signature_b64>"

# Verify with the public key
redis-cli -p $PORT AGE VERIFY "<verify_pub_b64>" "msg" "<signature_b64>"
# → 1 (valid) or 0 (invalid)
```
When to use

- You do not want the server to store private keys.
- You already manage key material on the client side.
- You need ad-hoc operations without persistence.

Reference test: [rust.test_07_age_stateless_suite()](herodb/tests/usage_suite.rs:495)

## Key-managed AGE (persistent, named)

Characteristics:

- The server generates and persists keypairs under a chosen name.
- Clients refer to keys by name; raw secrets are not supplied on each call.
- Keys are discoverable via AGE LIST.

Commands and examples

1) Named encryption keys

```bash
# Create/persist a named encryption keypair
redis-cli -p $PORT AGE KEYGEN app1
# → returns [recipient, identity] but also stores them under name "app1"

# TODO: should not return identity (security); a separate function could export it, e.g. AGE EXPORTKEY app1

# Encrypt using the stored public key
redis-cli -p $PORT AGE ENCRYPTNAME app1 "hello"
# → returns bulk string payload: base64 ciphertext

# Decrypt using the stored secret
redis-cli -p $PORT AGE DECRYPTNAME app1 "<ciphertext_b64>"
# → "hello"
```

2) Named signing keys

```bash
# Create/persist a named signing keypair
redis-cli -p $PORT AGE SIGNKEYGEN app1
# → returns [verify_pub_b64, sign_secret_b64] and stores them under name "app1"

# TODO: should not return sign_secret_b64 (security); a separate function could export it, e.g. AGE EXPORTSIGNKEY app1

# Sign using the stored secret
redis-cli -p $PORT AGE SIGNNAME app1 "msg"
# → returns "<signature_b64>"

# Verify using the stored public key
redis-cli -p $PORT AGE VERIFYNAME app1 "msg" "<signature_b64>"
# → 1 (valid) or 0 (invalid)
```

3) List stored AGE keys

```bash
redis-cli -p $PORT AGE LIST
# Example output includes labels such as "encpub" and your key names (e.g., "app1")
```

When to use

- You want centralized key storage/rotation and fewer secrets on the client.
- You need names/labels for workflows and can trust the server with secrets.
- You want discoverability (AGE LIST) and simpler client commands.

Reference test: [rust.test_08_age_persistent_named_suite()](herodb/tests/usage_suite.rs:555)

## Choosing a mode

- Prefer stateless when:
  - Minimizing server trust for secret material is the priority.
  - Clients already have a secure mechanism to store and distribute keys.
- Prefer key-managed when:
  - Centralized lifecycle, naming, and discoverability are beneficial.
  - You plan to integrate rotation, ACLs, or auditability on the server side.

## Security notes

- Treat identities and signing secrets as sensitive; avoid logging them.
- For key-managed mode, ensure server storage (and backups) are protected.
- AGE operations here are application-level crypto and are distinct from the database-at-rest encryption configured in the test harness.

## Repository pointers

- Stateless examples in tests: [rust.test_07_age_stateless_suite()](herodb/tests/usage_suite.rs:495)
- Key-managed examples in tests: [rust.test_08_age_persistent_named_suite()](herodb/tests/usage_suite.rs:555)
- AGE implementation: [herodb/src/age.rs](herodb/src/age.rs)
- Command dispatch: [herodb/src/cmd.rs](herodb/src/cmd.rs)
- Bash demo: [herodb/examples/age_bash_demo.sh](herodb/examples/age_bash_demo.sh)
- Rust persistent demo: [herodb/examples/age_persist_demo.rs](herodb/examples/age_persist_demo.rs)
- Additional notes: [herodb/instructions/encrypt.md](herodb/instructions/encrypt.md)
---

**docs/basics.md** (new file, 623 lines)

# HeroDB Commands

HeroDB implements a subset of Redis commands over the Redis protocol. This document describes the available commands and their usage.

## String Commands

### PING
Ping the server to test connectivity.
```bash
redis-cli -p $PORT PING
# → PONG
```

### ECHO
Echo the given message.
```bash
redis-cli -p $PORT ECHO "hello"
# → hello
```

### SET
Set a key to hold a string value.
```bash
redis-cli -p $PORT SET key value
# → OK
```

Options:
- EX seconds: Set expiration in seconds
- PX milliseconds: Set expiration in milliseconds
- NX: Only set if the key doesn't exist
- XX: Only set if the key exists
- GET: Return the old value

Examples:
```bash
redis-cli -p $PORT SET key value EX 60
redis-cli -p $PORT SET key value PX 1000
redis-cli -p $PORT SET key value NX
redis-cli -p $PORT SET key value XX
redis-cli -p $PORT SET key value GET
```
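The NX/XX/GET options compose as conditional-write semantics. A minimal Python sketch of that decision logic (an illustration over a plain dict, not HeroDB's actual implementation):

```python
def set_with_options(store: dict, key: str, value: str,
                     nx: bool = False, xx: bool = False, get: bool = False):
    """Mimic SET's NX/XX/GET semantics over a plain dict."""
    old = store.get(key)                  # previous value, if any (for GET)
    exists = key in store
    if (nx and exists) or (xx and not exists):
        return old if get else None       # condition failed: no write happens
    store[key] = value
    return old if get else "OK"

db = {}
print(set_with_options(db, "k", "v1", nx=True))   # OK (key was absent)
print(set_with_options(db, "k", "v2", nx=True))   # None (NX blocks the overwrite)
print(set_with_options(db, "k", "v3", get=True))  # v1 (old value returned; k is now v3)
```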
### GET
Get the value of a key.
```bash
redis-cli -p $PORT GET key
# → value
```

### MGET
Get the values of multiple keys.
```bash
redis-cli -p $PORT MGET key1 key2 key3
# → 1) "value1"
#    2) "value2"
#    3) (nil)
```

### MSET
Set multiple key-value pairs.
```bash
redis-cli -p $PORT MSET key1 value1 key2 value2
# → OK
```

### INCR
Increment the integer value of a key by 1.
```bash
redis-cli -p $PORT SET counter 10
redis-cli -p $PORT INCR counter
# → 11
```

### DEL
Delete a key.
```bash
redis-cli -p $PORT DEL key
# → 1
```

For multiple keys:
```bash
redis-cli -p $PORT DEL key1 key2 key3
# → number of keys deleted
```

### TYPE
Determine the type of a key.
```bash
redis-cli -p $PORT TYPE key
# → string
```

### EXISTS
Check if a key exists.
```bash
redis-cli -p $PORT EXISTS key
# → 1 (exists) or 0 (doesn't exist)
```

For multiple keys:
```bash
redis-cli -p $PORT EXISTS key1 key2 key3
# → count of existing keys
```

### EXPIRE / PEXPIRE
Set an expiration time for a key.
```bash
redis-cli -p $PORT EXPIRE key 60
# → 1 (timeout set) or 0 (timeout not set)

redis-cli -p $PORT PEXPIRE key 1000
# → 1 (timeout set) or 0 (timeout not set)
```

### EXPIREAT / PEXPIREAT
Set an expiration timestamp for a key.
```bash
redis-cli -p $PORT EXPIREAT key 1672531200
# → 1 (timeout set) or 0 (timeout not set)

redis-cli -p $PORT PEXPIREAT key 1672531200000
# → 1 (timeout set) or 0 (timeout not set)
```

### TTL
Get the time to live for a key.
```bash
redis-cli -p $PORT TTL key
# → remaining time in seconds
```

### PERSIST
Remove the expiration from a key.
```bash
redis-cli -p $PORT PERSIST key
# → 1 (timeout removed) or 0 (key has no timeout)
```
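EXPIRE, TTL, and PERSIST are easiest to understand as a per-key deadline stored next to the value. A small illustrative Python model of those semantics, including the Redis return conventions (this is a sketch, not HeroDB's implementation):

```python
import time

class TtlStore:
    """Toy key store modeling EXPIRE / TTL / PERSIST semantics."""
    def __init__(self):
        self.data = {}      # key -> value
        self.expires = {}   # key -> absolute unix deadline

    def set(self, key, value):
        self.data[key] = value
        self.expires.pop(key, None)   # SET clears any previous TTL

    def expire(self, key, seconds):
        if key not in self.data:
            return 0                  # cannot set a timeout on a missing key
        self.expires[key] = time.time() + seconds
        return 1

    def ttl(self, key):
        if key not in self.data:
            return -2                 # Redis convention: key does not exist
        if key not in self.expires:
            return -1                 # key exists but has no TTL
        return max(0, round(self.expires[key] - time.time()))

    def persist(self, key):
        return 1 if self.expires.pop(key, None) is not None else 0

s = TtlStore()
s.set("k", "v")
s.expire("k", 60)
print(s.ttl("k"))      # ~60
print(s.persist("k"))  # 1
print(s.ttl("k"))      # -1
```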
## Hash Commands

### HSET
Set field-value pairs in a hash.
```bash
redis-cli -p $PORT HSET hashkey field1 value1 field2 value2
# → number of fields added
```

### HGET
Get the value of a field in a hash.
```bash
redis-cli -p $PORT HGET hashkey field1
# → value1
```

### HGETALL
Get all field-value pairs in a hash.
```bash
redis-cli -p $PORT HGETALL hashkey
# → 1) "field1"
#    2) "value1"
#    3) "field2"
#    4) "value2"
```

### HDEL
Delete fields from a hash.
```bash
redis-cli -p $PORT HDEL hashkey field1 field2
# → number of fields deleted
```

### HEXISTS
Check if a field exists in a hash.
```bash
redis-cli -p $PORT HEXISTS hashkey field1
# → 1 (exists) or 0 (doesn't exist)
```

### HKEYS
Get all field names in a hash.
```bash
redis-cli -p $PORT HKEYS hashkey
# → 1) "field1"
#    2) "field2"
```

### HVALS
Get all values in a hash.
```bash
redis-cli -p $PORT HVALS hashkey
# → 1) "value1"
#    2) "value2"
```

### HLEN
Get the number of fields in a hash.
```bash
redis-cli -p $PORT HLEN hashkey
# → number of fields
```

### HMGET
Get the values of multiple fields in a hash.
```bash
redis-cli -p $PORT HMGET hashkey field1 field2 field3
# → 1) "value1"
#    2) "value2"
#    3) (nil)
```

### HSETNX
Set a field-value pair in a hash only if the field doesn't exist.
```bash
redis-cli -p $PORT HSETNX hashkey field1 value1
# → 1 (field set) or 0 (field not set)
```

### HINCRBY
Increment the integer value of a field in a hash.
```bash
redis-cli -p $PORT HINCRBY hashkey field1 5
# → new value
```

### HINCRBYFLOAT
Increment the float value of a field in a hash.
```bash
redis-cli -p $PORT HINCRBYFLOAT hashkey field1 3.14
# → new value
```

### HSCAN
Incrementally iterate over the fields of a hash.
```bash
redis-cli -p $PORT HSCAN hashkey 0
# → 1) "next_cursor"
#    2) 1) "field1"
#       2) "value1"
#       3) "field2"
#       4) "value2"
```

Options:
- MATCH pattern: Filter fields by pattern
- COUNT number: Suggest the number of fields to return

Examples:
```bash
redis-cli -p $PORT HSCAN hashkey 0 MATCH f*
redis-cli -p $PORT HSCAN hashkey 0 COUNT 10
redis-cli -p $PORT HSCAN hashkey 0 MATCH f* COUNT 10
```

## List Commands

### LPUSH
Insert elements at the head of a list.
```bash
redis-cli -p $PORT LPUSH listkey element1 element2 element3
# → number of elements in the list
```

### RPUSH
Insert elements at the tail of a list.
```bash
redis-cli -p $PORT RPUSH listkey element1 element2 element3
# → number of elements in the list
```

### LPOP
Remove and return elements from the head of a list.
```bash
redis-cli -p $PORT LPOP listkey
# → element1
```

With a count:
```bash
redis-cli -p $PORT LPOP listkey 2
# → 1) "element1"
#    2) "element2"
```

### RPOP
Remove and return elements from the tail of a list.
```bash
redis-cli -p $PORT RPOP listkey
# → element3
```

With a count:
```bash
redis-cli -p $PORT RPOP listkey 2
# → 1) "element3"
#    2) "element2"
```

### LLEN
Get the length of a list.
```bash
redis-cli -p $PORT LLEN listkey
# → number of elements in the list
```

### LINDEX
Get the element at an index in a list.
```bash
redis-cli -p $PORT LINDEX listkey 0
# → first element
```

Negative indices count from the end:
```bash
redis-cli -p $PORT LINDEX listkey -1
# → last element
```

### LRANGE
Get a range of elements from a list.
```bash
redis-cli -p $PORT LRANGE listkey 0 -1
# → 1) "element1"
#    2) "element2"
#    3) "element3"
```

### LTRIM
Trim a list to the specified range.
```bash
redis-cli -p $PORT LTRIM listkey 0 1
# → OK (list now contains only the first 2 elements)
```

### LREM
Remove elements from a list.
```bash
redis-cli -p $PORT LREM listkey 2 element1
# → number of elements removed
```

Count values:
- Positive: Remove from the head
- Negative: Remove from the tail
- Zero: Remove all occurrences
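The sign of count selects the scan direction. An illustrative Python sketch of that rule (not HeroDB's implementation):

```python
def lrem(lst: list, count: int, value) -> int:
    """Remove occurrences of value per LREM semantics; return number removed."""
    if count == 0:
        removed = lst.count(value)
        lst[:] = [x for x in lst if x != value]   # remove every occurrence
        return removed
    removed = 0
    # count > 0: scan head to tail; count < 0: scan tail to head
    indices = range(len(lst)) if count > 0 else range(len(lst) - 1, -1, -1)
    to_drop = []
    for i in indices:
        if lst[i] == value and removed < abs(count):
            to_drop.append(i)
            removed += 1
    for i in sorted(to_drop, reverse=True):       # delete high indices first
        del lst[i]
    return removed

l = ["a", "b", "a", "c", "a"]
print(lrem(l, 2, "a"), l)   # 2 ['b', 'c', 'a']
```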
### LINSERT
Insert an element before or after a pivot element.
```bash
redis-cli -p $PORT LINSERT listkey BEFORE pivot newelement
# → number of elements in the list
```

### BLPOP
Blocking remove and return elements from the head of a list.
```bash
redis-cli -p $PORT BLPOP listkey1 listkey2 5
# → 1) "listkey1"
#    2) "element1"
```

If no elements are available, the call blocks for the specified timeout (in seconds) until an element is pushed to one of the lists.
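The blocking behavior can be modeled with a condition variable: the consumer waits until data arrives or the timeout elapses. An illustrative Python sketch (HeroDB's internals are Rust and will differ):

```python
import threading

class BlockingList:
    """Toy model of BLPOP: wait up to `timeout` seconds for an element."""
    def __init__(self):
        self.items = []
        self.cond = threading.Condition()

    def lpush(self, value):
        with self.cond:
            self.items.insert(0, value)
            self.cond.notify()                 # wake one blocked consumer

    def blpop(self, timeout):
        with self.cond:
            # wait_for returns False if the list is still empty at timeout
            if self.cond.wait_for(lambda: len(self.items) > 0, timeout=timeout):
                return self.items.pop(0)
            return None                        # timed out, like a nil reply

q = BlockingList()
threading.Timer(0.1, q.lpush, args=["element1"]).start()
print(q.blpop(timeout=5))     # element1 (arrives from the timer thread)
print(q.blpop(timeout=0.1))   # None (times out; list is empty)
```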
### BRPOP
Blocking remove and return elements from the tail of a list.
```bash
redis-cli -p $PORT BRPOP listkey1 listkey2 5
# → 1) "listkey1"
#    2) "element1"
```

If no elements are available, the call blocks for the specified timeout (in seconds) until an element is pushed to one of the lists.

## Keyspace Commands

### KEYS
Get all keys matching a pattern (quote the pattern so the shell doesn't expand it):
```bash
redis-cli -p $PORT KEYS '*'
# → 1) "key1"
#    2) "key2"
```

### SCAN
Incrementally iterate over keys.
```bash
redis-cli -p $PORT SCAN 0
# → 1) "next_cursor"
#    2) 1) "key1"
#       2) "key2"
```

Options:
- MATCH pattern: Filter keys by pattern
- COUNT number: Suggest the number of keys to return

Examples:
```bash
redis-cli -p $PORT SCAN 0 MATCH k*
redis-cli -p $PORT SCAN 0 COUNT 10
redis-cli -p $PORT SCAN 0 MATCH k* COUNT 10
```
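Conceptually the cursor encodes a resume position: each call returns a batch plus the next cursor, and a cursor of 0 means the iteration is complete. An illustrative Python model of the client-side SCAN loop (HeroDB's actual cursor encoding may differ):

```python
import fnmatch

def scan(keys: list, cursor: int, match: str = "*", count: int = 10):
    """Toy SCAN: return (next_cursor, batch) over a stable key list."""
    batch = []
    i = cursor
    while i < len(keys) and len(batch) < count:
        if fnmatch.fnmatch(keys[i], match):    # glob-style MATCH filter
            batch.append(keys[i])
        i += 1
    next_cursor = 0 if i >= len(keys) else i   # 0 signals the end of iteration
    return next_cursor, batch

keys = [f"user:{n}" for n in range(5)] + ["cfg:port"]
cursor, found = 0, []
while True:                                    # the standard SCAN loop
    cursor, batch = scan(keys, cursor, match="user:*", count=2)
    found.extend(batch)
    if cursor == 0:
        break
print(found)   # ['user:0', 'user:1', 'user:2', 'user:3', 'user:4']
```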
### DBSIZE
Get the number of keys in the current database.
```bash
redis-cli -p $PORT DBSIZE
# → number of keys
```

### FLUSHDB
Remove all keys from the current database.
```bash
redis-cli -p $PORT FLUSHDB
# → OK
```

## Configuration Commands

### CONFIG GET
Get a configuration parameter.
```bash
redis-cli -p $PORT CONFIG GET dir
# → 1) "dir"
#    2) "/path/to/db"

redis-cli -p $PORT CONFIG GET dbfilename
# → 1) "dbfilename"
#    2) "0.db"
```

## Client Commands

### CLIENT SETNAME
Set the current connection's name.
```bash
redis-cli -p $PORT CLIENT SETNAME myconnection
# → OK
```

### CLIENT GETNAME
Get the current connection's name.
```bash
redis-cli -p $PORT CLIENT GETNAME
# → myconnection
```

## Transaction Commands

### MULTI
Start a transaction block.
```bash
redis-cli -p $PORT MULTI
# → OK
```

### EXEC
Execute all commands in the transaction block. Transaction state is per connection, so issue these commands on a single connection (for example, inside one interactive `redis-cli` session), not as separate `redis-cli` invocations:
```bash
MULTI
SET key1 value1
SET key2 value2
EXEC
# → 1) OK
#    2) OK
```

### DISCARD
Discard all commands in the transaction block (also on a single connection):
```bash
MULTI
SET key1 value1
DISCARD
# → OK
```
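Under MULTI the server queues commands on the connection and only applies them at EXEC; DISCARD drops the queue. An illustrative Python model of that queueing (a sketch, not HeroDB's implementation):

```python
class Transaction:
    """Toy per-connection MULTI/EXEC/DISCARD queueing over a dict store."""
    def __init__(self, store: dict):
        self.store = store
        self.queue = None                # None = not inside a transaction

    def multi(self):
        self.queue = []
        return "OK"

    def command(self, key, value):
        if self.queue is not None:
            self.queue.append((key, value))
            return "QUEUED"              # nothing is written yet
        self.store[key] = value          # immediate execution outside MULTI
        return "OK"

    def exec(self):
        results = []
        for key, value in self.queue:    # apply the queued writes in order
            self.store[key] = value
            results.append("OK")
        self.queue = None
        return results

    def discard(self):
        self.queue = None                # drop the queue without applying it
        return "OK"

db = {}
tx = Transaction(db)
tx.multi()
print(tx.command("key1", "value1"))  # QUEUED
print(db)                            # {} (still empty before EXEC)
print(tx.exec())                     # ['OK']
print(db)                            # {'key1': 'value1'}
```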
|
||||||
|
|
||||||
|
## AGE Commands
|
||||||
|
|
||||||
|
### AGE GENENC
|
||||||
|
Generate ephemeral encryption keypair.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE GENENC
|
||||||
|
# → 1) "recipient_public_key"
|
||||||
|
# 2) "identity_secret_key"
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE ENCRYPT
|
||||||
|
Encrypt message with recipient public key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE ENCRYPT recipient_public_key "message"
|
||||||
|
# → base64_encoded_ciphertext
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE DECRYPT
|
||||||
|
Decrypt ciphertext with identity secret key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE DECRYPT identity_secret_key base64_encoded_ciphertext
|
||||||
|
# → decrypted_message
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE GENSIGN
|
||||||
|
Generate ephemeral signing keypair.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE GENSIGN
|
||||||
|
# → 1) "verify_public_key"
|
||||||
|
# 2) "sign_secret_key"
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE SIGN
|
||||||
|
Sign message with signing secret key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE SIGN sign_secret_key "message"
|
||||||
|
# → base64_encoded_signature
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE VERIFY
|
||||||
|
Verify signature with verify public key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE VERIFY verify_public_key "message" base64_encoded_signature
|
||||||
|
# → 1 (valid) or 0 (invalid)
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE KEYGEN
|
||||||
|
Generate and persist named encryption keypair.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE KEYGEN keyname
|
||||||
|
# → 1) "recipient_public_key"
|
||||||
|
# 2) "identity_secret_key"
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE SIGNKEYGEN
|
||||||
|
Generate and persist named signing keypair.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE SIGNKEYGEN keyname
|
||||||
|
# → 1) "verify_public_key"
|
||||||
|
# 2) "sign_secret_key"
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE ENCRYPTNAME
|
||||||
|
Encrypt message with named key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE ENCRYPTNAME keyname "message"
|
||||||
|
# → base64_encoded_ciphertext
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE DECRYPTNAME
|
||||||
|
Decrypt ciphertext with named key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE DECRYPTNAME keyname base64_encoded_ciphertext
|
||||||
|
# → decrypted_message
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE SIGNNAME
|
||||||
|
Sign message with named signing key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE SIGNNAME keyname "message"
|
||||||
|
# → base64_encoded_signature
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE VERIFYNAME
|
||||||
|
Verify signature with named verify key.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE VERIFYNAME keyname "message" base64_encoded_signature
|
||||||
|
# → 1 (valid) or 0 (invalid)
|
||||||
|
```
|
||||||
|
|
||||||
|
### AGE LIST
|
||||||
|
List all stored AGE keys.
|
||||||
|
```bash
|
||||||
|
redis-cli -p $PORT AGE LIST
|
||||||
|
# → 1) "keyname1"
|
||||||
|
# 2) "keyname2"
|
||||||
|
```

## Server Information Commands

### INFO

Get server information.

```bash
redis-cli -p $PORT INFO
# → Server information
```

With section:

```bash
redis-cli -p $PORT INFO replication
# → Replication information
```

### COMMAND

Get command information (stub implementation).

```bash
redis-cli -p $PORT COMMAND
# → Empty array (stub)
```

## Database Selection

### SELECT

Select database by index.

```bash
redis-cli -p $PORT SELECT 0
# → OK
```

This expanded documentation includes all the list commands implemented in `cmd.rs`:

1. `LPUSH` - push elements to the left (head) of a list
2. `RPUSH` - push elements to the right (tail) of a list
3. `LPOP` - pop elements from the left (head) of a list
4. `RPOP` - pop elements from the right (tail) of a list
5. `BLPOP` - blocking pop from the left with a timeout
6. `BRPOP` - blocking pop from the right with a timeout
7. `LLEN` - get list length
8. `LREM` - remove elements from a list
9. `LTRIM` - trim a list to a range
10. `LINDEX` - get an element by index
11. `LRANGE` - get a range of elements
125 docs/cmds.md Normal file
@@ -0,0 +1,125 @@

## Backend Support

HeroDB supports two storage backends, both with full encryption support:

- **redb** (default): Full-featured, optimized for production use
- **sled**: Alternative embedded database with encryption support

### Starting HeroDB with Different Backends

```bash
# Use default redb backend
./target/release/herodb --dir /tmp/herodb_redb --port 6379

# Use sled backend
./target/release/herodb --dir /tmp/herodb_sled --port 6379 --sled

# Use redb with encryption
./target/release/herodb --dir /tmp/herodb_encrypted --port 6379 --encrypt --key mysecretkey

# Use sled with encryption
./target/release/herodb --dir /tmp/herodb_sled_encrypted --port 6379 --sled --encrypt --key mysecretkey
```

### Command Support by Backend

| Command Category | redb | sled | Notes |
|------------------|------|------|-------|
| **Strings** | | | |
| SET | ✅ | ✅ | Full support |
| GET | ✅ | ✅ | Full support |
| DEL | ✅ | ✅ | Full support |
| EXISTS | ✅ | ✅ | Full support |
| INCR/DECR | ✅ | ✅ | Full support |
| MGET/MSET | ✅ | ✅ | Full support |
| **Hashes** | | | |
| HSET | ✅ | ✅ | Full support |
| HGET | ✅ | ✅ | Full support |
| HGETALL | ✅ | ✅ | Full support |
| HDEL | ✅ | ✅ | Full support |
| HEXISTS | ✅ | ✅ | Full support |
| HKEYS | ✅ | ✅ | Full support |
| HVALS | ✅ | ✅ | Full support |
| HLEN | ✅ | ✅ | Full support |
| HMGET | ✅ | ✅ | Full support |
| HSETNX | ✅ | ✅ | Full support |
| HINCRBY/HINCRBYFLOAT | ✅ | ✅ | Full support |
| HSCAN | ✅ | ✅ | Full support with pattern matching |
| **Lists** | | | |
| LPUSH/RPUSH | ✅ | ✅ | Full support |
| LPOP/RPOP | ✅ | ✅ | Full support |
| LLEN | ✅ | ✅ | Full support |
| LRANGE | ✅ | ✅ | Full support |
| LINDEX | ✅ | ✅ | Full support |
| LTRIM | ✅ | ✅ | Full support |
| LREM | ✅ | ✅ | Full support |
| BLPOP/BRPOP | ✅ | ❌ | Blocking operations not in sled |
| **Expiration** | | | |
| EXPIRE | ✅ | ✅ | Full support in both |
| TTL | ✅ | ✅ | Full support in both |
| PERSIST | ✅ | ✅ | Full support in both |
| SETEX/PSETEX | ✅ | ✅ | Full support in both |
| EXPIREAT/PEXPIREAT | ✅ | ✅ | Full support in both |
| **Scanning** | | | |
| KEYS | ✅ | ✅ | Full support with patterns |
| SCAN | ✅ | ✅ | Full cursor-based iteration |
| HSCAN | ✅ | ✅ | Full cursor-based iteration |
| **Transactions** | | | |
| MULTI/EXEC/DISCARD | ✅ | ❌ | Only supported in redb |
| **Encryption** | | | |
| Data-at-rest encryption | ✅ | ✅ | Both support [age](age.tech) encryption |
| AGE commands | ✅ | ✅ | Both support AGE crypto commands |
| **Full-Text Search** | | | |
| FT.CREATE | ✅ | ✅ | Create search index with schema |
| FT.ADD | ✅ | ✅ | Add document to search index |
| FT.SEARCH | ✅ | ✅ | Search documents with query |
| FT.DEL | ✅ | ✅ | Delete document from index |
| FT.INFO | ✅ | ✅ | Get index information |
| FT.DROP | ✅ | ✅ | Drop search index |
| FT.ALTER | ✅ | ✅ | Alter index schema |
| FT.AGGREGATE | ✅ | ✅ | Aggregate search results |

### Performance Considerations

- **redb**: Optimized for concurrent access, better for high-throughput scenarios
- **sled**: Lock-free architecture, excellent for specific workloads

### Encryption Features

Both backends support:
- Transparent data-at-rest encryption using the `age` encryption library
- Per-database encryption (databases >= 10 are encrypted when the `--encrypt` flag is used)
- Secure key derivation using the master key
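
The database-index threshold above is easy to misread; a tiny illustrative sketch of which indexes fall on each side of it when `--encrypt` is set (pure shell, no server needed):

```shell
# Databases with index >= 10 are encrypted when --encrypt is used
for db in 8 9 10 11; do
  if [ "$db" -ge 10 ]; then
    echo "db $db: encrypted"
  else
    echo "db $db: plaintext"
  fi
done
# → db 8 and 9 plaintext; db 10 and 11 encrypted
```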

### Backend Selection Examples

```bash
# Example: Testing both backends
redis-cli -p 6379 SET mykey "redb value"
redis-cli -p 6381 SET mykey "sled value"

# Example: Using encryption with both
./target/release/herodb --port 6379 --encrypt --key secret123
./target/release/herodb --port 6381 --sled --encrypt --key secret123

# Both support the same Redis commands
redis-cli -p 6379 HSET user:1 name "Alice" age "30"
redis-cli -p 6381 HSET user:1 name "Alice" age "30"

# Both support SCAN operations
redis-cli -p 6379 SCAN 0 MATCH user:* COUNT 10
redis-cli -p 6381 SCAN 0 MATCH user:* COUNT 10
```
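
The `MATCH` argument in the SCAN calls above uses Redis glob-style patterns (`*`, `?`, `[...]`). Shell `case` patterns follow the same rules for these constructs, so a pattern can be sanity-checked locally before issuing it; `match_key` below is our own helper, not a HeroDB command:

```shell
# Succeed (exit 0) when the key matches the glob pattern
match_key() {
  case "$2" in
    $1) return 0 ;;
    *)  return 1 ;;
  esac
}

match_key 'user:*' 'user:42' && echo "user:42 matches user:*"
match_key 'user:?' 'user:42' || echo "user:42 does not match user:?"
# → both echo lines print
```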

### Migration Between Backends

To migrate data between backends, use replication or a dump-and-replay approach:

```bash
# Export a snapshot from redb
redis-cli -p 6379 --rdb dump.rdb

# Replay into sled; note that --pipe expects a stream of RESP-encoded
# commands on stdin, not the RDB binary itself, so the dump must first
# be converted into a command stream
redis-cli -p 6381 --pipe < commands.resp
```
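
`redis-cli --pipe` consumes RESP-encoded commands on stdin, so a key-by-key copy needs each command in wire format. A minimal encoder sketch (`resp_cmd` is our own helper; `${#arg}` counts characters, so this assumes single-byte/ASCII values):

```shell
# Encode one command as a RESP array suitable for redis-cli --pipe
resp_cmd() {
  printf '*%d\r\n' "$#"
  for arg in "$@"; do
    printf '$%d\r\n%s\r\n' "${#arg}" "$arg"
  done
}

resp_cmd SET user:1 Alice
# → *3 / $3 / SET / $6 / user:1 / $5 / Alice, each CRLF-terminated
```

Piping the output of many such calls into `redis-cli -p 6381 --pipe` replays them against the target backend.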
397 docs/search.md Normal file
@@ -0,0 +1,397 @@
# Full-Text Search with Tantivy

HeroDB includes powerful full-text search capabilities powered by [Tantivy](https://github.com/quickwit-oss/tantivy), a fast full-text search engine library written in Rust. This provides Redis-compatible search commands similar to RediSearch.

## Overview

The search functionality allows you to:
- Create search indexes with custom schemas
- Index documents with multiple field types
- Perform complex queries with filters
- Work with text, numeric, date, and geographic data
- Run real-time searches with high performance

## Search Commands

### FT.CREATE - Create Search Index

Create a new search index with a defined schema.

```bash
FT.CREATE index_name SCHEMA field_name field_type [options] [field_name field_type [options] ...]
```

**Field Types:**
- `TEXT` - Full-text searchable text fields
- `NUMERIC` - Numeric fields (integers, floats)
- `TAG` - Tag fields for exact matching
- `GEO` - Geographic coordinates (lat,lon)
- `DATE` - Date/timestamp fields

**Field Options:**
- `STORED` - Store field value for retrieval
- `INDEXED` - Make field searchable
- `TOKENIZED` - Enable tokenization for text fields
- `FAST` - Enable fast access for numeric fields

**Example:**
```bash
# Create a product search index
FT.CREATE products SCHEMA
  title TEXT STORED INDEXED TOKENIZED
  description TEXT STORED INDEXED TOKENIZED
  price NUMERIC STORED INDEXED FAST
  category TAG STORED
  location GEO STORED
  created_date DATE STORED INDEXED
```

### FT.ADD - Add Document to Index

Add a document to a search index.

```bash
FT.ADD index_name doc_id [SCORE score] FIELDS field_name field_value [field_name field_value ...]
```

**Example:**
```bash
# Add a product document
FT.ADD products product:1 SCORE 1.0 FIELDS
  title "Wireless Headphones"
  description "High-quality wireless headphones with noise cancellation"
  price 199.99
  category "electronics"
  location "37.7749,-122.4194"
  created_date 1640995200000
```

### FT.SEARCH - Search Documents

Search for documents in an index.

```bash
FT.SEARCH index_name query [LIMIT offset count] [FILTER field min max] [RETURN field [field ...]]
```

**Query Syntax:**
- Simple terms: `wireless headphones`
- Phrase queries: `"noise cancellation"`
- Field-specific: `title:wireless`
- Boolean operators: `wireless AND headphones`
- Wildcards: `head*`

**Examples:**
```bash
# Simple text search
FT.SEARCH products "wireless headphones"

# Search with filters
FT.SEARCH products "headphones" FILTER price 100 300 LIMIT 0 10

# Field-specific search
FT.SEARCH products "title:wireless AND category:electronics"

# Return specific fields only
FT.SEARCH products "*" RETURN title price
```

### FT.DEL - Delete Document

Remove a document from the search index.

```bash
FT.DEL index_name doc_id
```

**Example:**
```bash
FT.DEL products product:1
```

### FT.INFO - Get Index Information

Get information about a search index.

```bash
FT.INFO index_name
```

**Returns:**
- Index name and document count
- Field definitions and types
- Index configuration

**Example:**
```bash
FT.INFO products
```

### FT.DROP - Drop Index

Delete an entire search index.

```bash
FT.DROP index_name
```

**Example:**
```bash
FT.DROP products
```

### FT.ALTER - Alter Index Schema

Add new fields to an existing index.

```bash
FT.ALTER index_name SCHEMA ADD field_name field_type [options]
```

**Example:**
```bash
FT.ALTER products SCHEMA ADD brand TAG STORED
```

### FT.AGGREGATE - Aggregate Search Results

Perform aggregations on search results.

```bash
FT.AGGREGATE index_name query [GROUPBY field] [REDUCE function field AS alias]
```

**Example:**
```bash
# Group products by category and count
FT.AGGREGATE products "*" GROUPBY category REDUCE COUNT 0 AS count
```

## Field Types in Detail

### TEXT Fields
- **Purpose**: Full-text search on natural language content
- **Features**: Tokenization, stemming, stop-word removal
- **Options**: `STORED`, `INDEXED`, `TOKENIZED`
- **Example**: Product titles, descriptions, content

### NUMERIC Fields
- **Purpose**: Numeric data for range queries and sorting
- **Types**: I64, U64, F64
- **Options**: `STORED`, `INDEXED`, `FAST`
- **Example**: Prices, quantities, ratings

### TAG Fields
- **Purpose**: Exact-match categorical data
- **Features**: No tokenization, exact string matching
- **Options**: `STORED`, case sensitivity control
- **Example**: Categories, brands, status values

### GEO Fields
- **Purpose**: Geographic coordinates
- **Format**: "latitude,longitude" (e.g., "37.7749,-122.4194")
- **Features**: Geographic distance queries
- **Options**: `STORED`

### DATE Fields
- **Purpose**: Timestamp and date data
- **Format**: Unix timestamp in milliseconds
- **Features**: Range queries, temporal filtering
- **Options**: `STORED`, `INDEXED`, `FAST`
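
A millisecond Unix timestamp like the `created_date` values used throughout this document can be produced in the shell; a sketch (whole-second precision, multiplied up to milliseconds):

```shell
# Current time as milliseconds since the Unix epoch
now_ms=$(( $(date +%s) * 1000 ))
echo "$now_ms"
```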

## Search Query Syntax

### Basic Queries
```bash
# Single term
FT.SEARCH products "wireless"

# Multiple terms (AND by default)
FT.SEARCH products "wireless headphones"

# Phrase query
FT.SEARCH products "\"noise cancellation\""
```

### Field-Specific Queries
```bash
# Search in specific field
FT.SEARCH products "title:wireless"

# Multiple field queries
FT.SEARCH products "title:wireless AND description:bluetooth"
```

### Boolean Operators
```bash
# AND operator
FT.SEARCH products "wireless AND headphones"

# OR operator
FT.SEARCH products "wireless OR bluetooth"

# NOT operator
FT.SEARCH products "headphones NOT wired"
```

### Wildcards and Fuzzy Search
```bash
# Wildcard search
FT.SEARCH products "head*"

# Fuzzy search (approximate matching)
FT.SEARCH products "%headphone%"
```

### Range Queries
```bash
# Numeric range in query
FT.SEARCH products "@price:[100 300]"

# Date range
FT.SEARCH products "@created_date:[1640995200000 1672531200000]"
```

## Filtering and Sorting

### FILTER Clause
```bash
# Numeric filter
FT.SEARCH products "headphones" FILTER price 100 300

# Multiple filters
FT.SEARCH products "*" FILTER price 100 500 FILTER rating 4 5
```

### LIMIT Clause
```bash
# Pagination
FT.SEARCH products "wireless" LIMIT 0 10   # First 10 results
FT.SEARCH products "wireless" LIMIT 10 10  # Next 10 results
```
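
The offset for page *n* is simply `n × count`; a sketch that prints the command for the first three pages (query and index names taken from the examples above):

```shell
page_size=10
for page in 0 1 2; do
  echo "FT.SEARCH products wireless LIMIT $((page * page_size)) $page_size"
done
# → offsets 0, 10, 20
```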

### RETURN Clause
```bash
# Return specific fields
FT.SEARCH products "*" RETURN title price

# Return all stored fields (default)
FT.SEARCH products "*"
```

## Performance Considerations

### Indexing Strategy
- Only index fields you need to search on
- Use the `FAST` option for frequently filtered numeric fields
- Consider storage vs. search performance trade-offs

### Query Optimization
- Use specific field queries when possible
- Combine filters with text queries for better performance
- Use pagination with LIMIT for large result sets

### Memory Usage
- Tantivy indexes are memory-mapped for performance
- Index size depends on document count and field configuration
- Monitor disk space for index storage

## Integration with Redis Commands

Search indexes work alongside regular Redis data:

```bash
# Store product data in a Redis hash
HSET product:1 title "Wireless Headphones" price "199.99"

# Index the same data for search
FT.ADD products product:1 FIELDS title "Wireless Headphones" price 199.99

# Search returns document IDs that can be used with Redis commands
FT.SEARCH products "wireless"
# Returns: product:1

# Retrieve the full data using Redis
HGETALL product:1
```

## Example Use Cases

### E-commerce Product Search
```bash
# Create a product catalog index
FT.CREATE catalog SCHEMA
  name TEXT STORED INDEXED TOKENIZED
  description TEXT INDEXED TOKENIZED
  price NUMERIC STORED INDEXED FAST
  category TAG STORED
  brand TAG STORED
  rating NUMERIC STORED FAST

# Add products
FT.ADD catalog prod:1 FIELDS name "iPhone 14" price 999 category "phones" brand "apple" rating 4.5
FT.ADD catalog prod:2 FIELDS name "Samsung Galaxy" price 899 category "phones" brand "samsung" rating 4.3

# Search queries
FT.SEARCH catalog "iPhone"
FT.SEARCH catalog "phones" FILTER price 800 1000
FT.SEARCH catalog "@brand:apple"
```

### Content Management
```bash
# Create a content index
FT.CREATE content SCHEMA
  title TEXT STORED INDEXED TOKENIZED
  body TEXT INDEXED TOKENIZED
  author TAG STORED
  published DATE STORED INDEXED
  tags TAG STORED

# Search content
FT.SEARCH content "machine learning"
FT.SEARCH content "@author:john AND @tags:ai"
FT.SEARCH content "*" FILTER published 1640995200000 1672531200000
```

### Geographic Search
```bash
# Create a location-based index
FT.CREATE places SCHEMA
  name TEXT STORED INDEXED TOKENIZED
  location GEO STORED
  type TAG STORED

# Add locations
FT.ADD places place:1 FIELDS name "Golden Gate Bridge" location "37.8199,-122.4783" type "landmark"

# Geographic queries (future feature)
FT.SEARCH places "@location:[37.7749 -122.4194 10 km]"
```

## Error Handling

Common error responses:
- `ERR index not found` - Index doesn't exist
- `ERR field not found` - Field not defined in schema
- `ERR invalid query syntax` - Malformed query
- `ERR document not found` - Document ID doesn't exist

## Best Practices

1. **Schema Design**: Plan your schema carefully - changes require reindexing
2. **Field Selection**: Only store and index fields you actually need
3. **Batch Operations**: Add multiple documents efficiently
4. **Query Testing**: Test queries for performance with realistic data
5. **Monitoring**: Monitor index size and query performance
6. **Backup**: Include search indexes in backup strategies

## Future Enhancements

Planned features:
- Geographic distance queries
- Advanced aggregations and faceting
- Highlighting of search results
- Synonyms and custom analyzers
- Real-time suggestions and autocomplete
- Index replication and sharding
171 examples/README.md Normal file
@@ -0,0 +1,171 @@
# HeroDB Tantivy Search Examples

This directory contains examples demonstrating HeroDB's full-text search capabilities powered by Tantivy.

## Tantivy Search Demo (Bash Script)

### Overview
The `tantivy_search_demo.sh` script provides a comprehensive demonstration of HeroDB's search functionality using Redis commands. It showcases various search scenarios including basic text search, filtering, sorting, geographic queries, and more.

### Prerequisites
1. **HeroDB Server**: The server must be running on port 6381
2. **Redis CLI**: The `redis-cli` tool must be installed and available in your PATH

### Running the Demo

#### Step 1: Start HeroDB Server
```bash
# From the project root directory
cargo run -- --port 6381
```

#### Step 2: Run the Demo (in a new terminal)
```bash
# From the project root directory
./examples/tantivy_search_demo.sh
```

### What the Demo Covers

The script demonstrates 15 different search scenarios:

1. **Index Creation** - Creating a search index with various field types
2. **Data Insertion** - Adding sample products to the index
3. **Basic Text Search** - Simple keyword searches
4. **Filtered Search** - Combining text search with category filters
5. **Numeric Range Search** - Finding products within price ranges
6. **Sorting Results** - Ordering results by different fields
7. **Limited Results** - Pagination and result limiting
8. **Complex Queries** - Multi-field searches with sorting
9. **Geographic Search** - Location-based queries
10. **Index Information** - Getting statistics about the search index
11. **Search Comparison** - Tantivy vs. simple pattern matching
12. **Fuzzy Search** - Typo tolerance and approximate matching
13. **Phrase Search** - Exact phrase matching
14. **Boolean Queries** - AND, OR, NOT operators
15. **Cleanup** - Removing test data

### Sample Data

The demo uses a product catalog with the following fields:
- **title** (TEXT) - Product name with higher search weight
- **description** (TEXT) - Detailed product description
- **category** (TAG) - Comma-separated categories
- **price** (NUMERIC) - Product price for range queries
- **rating** (NUMERIC) - Customer rating for sorting
- **location** (GEO) - Geographic coordinates for location searches

### Key Redis Commands Demonstrated

#### Index Management
```bash
# Create search index
FT.CREATE product_catalog ON HASH PREFIX 1 product: SCHEMA title TEXT WEIGHT 2.0 SORTABLE description TEXT category TAG SEPARATOR , price NUMERIC SORTABLE rating NUMERIC SORTABLE location GEO

# Get index information
FT.INFO product_catalog

# Drop index
FT.DROPINDEX product_catalog
```

#### Search Queries
```bash
# Basic text search
FT.SEARCH product_catalog wireless

# Filtered search
FT.SEARCH product_catalog 'organic @category:{food}'

# Numeric range
FT.SEARCH product_catalog '@price:[50 150]'

# Sorted results
FT.SEARCH product_catalog '@category:{electronics}' SORTBY price ASC

# Geographic search
FT.SEARCH product_catalog '@location:[37.7749 -122.4194 50 km]'

# Boolean queries
FT.SEARCH product_catalog 'wireless AND audio'
FT.SEARCH product_catalog 'coffee OR tea'

# Phrase search
FT.SEARCH product_catalog '"noise canceling"'
```

### Interactive Features

The demo script includes:
- **Colored output** for better readability
- **Pauses between steps** to review results
- **Error handling** with clear error messages
- **Automatic cleanup** of test data
- **Progress indicators** showing what each step demonstrates

### Troubleshooting

#### HeroDB Not Running
```
✗ HeroDB is not running on port 6381
ℹ Please start HeroDB with: cargo run -- --port 6381
```
**Solution**: Start the HeroDB server in a separate terminal.

#### Redis CLI Not Found
```
redis-cli: command not found
```
**Solution**: Install Redis tools or use an alternative Redis client.

#### Connection Refused
```
Could not connect to Redis at localhost:6381: Connection refused
```
**Solution**: Ensure HeroDB is running and listening on the correct port.

### Manual Testing

You can also run individual commands manually:

```bash
# Connect to HeroDB
redis-cli -h localhost -p 6381

# Create a simple index
FT.CREATE myindex ON HASH SCHEMA title TEXT description TEXT

# Add a document
HSET doc:1 title "Hello World" description "This is a test document"

# Search
FT.SEARCH myindex hello
```

### Performance Notes

- **Indexing**: Documents are indexed in real time as they're added
- **Search Speed**: Full-text search is much faster than pattern matching on large datasets
- **Memory Usage**: Tantivy indexes are memory-efficient and disk-backed
- **Scalability**: Supports millions of documents with sub-second search times

### Advanced Features

The demo showcases advanced Tantivy features:
- **Relevance Scoring** - Results ranked by relevance
- **Fuzzy Matching** - Handles typos and approximate matches
- **Field Weighting** - The title field has a higher search weight
- **Multi-field Search** - Search across multiple fields simultaneously
- **Geographic Queries** - Distance-based location searches
- **Numeric Ranges** - Efficient range queries on numeric fields
- **Tag Filtering** - Fast categorical filtering

### Next Steps

After running the demo, explore:
1. **Custom Schemas** - Define your own field types and configurations
2. **Large Datasets** - Test with thousands or millions of documents
3. **Real Applications** - Integrate search into your applications
4. **Performance Tuning** - Optimize for your specific use case

For more information, see the [search documentation](../herodb/docs/search.md).
@@ -14,25 +14,31 @@ fn read_reply(s: &mut TcpStream) -> String {
    let n = s.read(&mut buf).unwrap();
    String::from_utf8_lossy(&buf[..n]).to_string()
}
fn parse_two_bulk(reply: &str) -> Option<(String, String)> {
    let mut lines = reply.split("\r\n");
    if lines.next()? != "*2" {
        return None;
    }
    let _n = lines.next()?;
    let a = lines.next()?.to_string();
    let _m = lines.next()?;
    let b = lines.next()?.to_string();
    Some((a, b))
}
fn parse_bulk(reply: &str) -> Option<String> {
    let mut lines = reply.split("\r\n");
    let hdr = lines.next()?;
    if !hdr.starts_with('$') {
        return None;
    }
    Some(lines.next()?.to_string())
}
fn parse_simple(reply: &str) -> Option<String> {
    let mut lines = reply.split("\r\n");
    let hdr = lines.next()?;
    if !hdr.starts_with('+') {
        return None;
    }
    Some(hdr[1..].to_string())
}
@@ -45,39 +51,45 @@ fn main() {
    let mut s = TcpStream::connect(addr).expect("connect");

    // Generate & persist X25519 enc keys under name "alice"
    s.write_all(arr(&["age", "keygen", "alice"]).as_bytes())
        .unwrap();
    let (_alice_recip, _alice_ident) = parse_two_bulk(&read_reply(&mut s)).expect("gen enc");

    // Generate & persist Ed25519 signing key under name "signer"
    s.write_all(arr(&["age", "signkeygen", "signer"]).as_bytes())
        .unwrap();
    let (_verify, _secret) = parse_two_bulk(&read_reply(&mut s)).expect("gen sign");

    // Encrypt by name
    let msg = "hello from persistent keys";
    s.write_all(arr(&["age", "encryptname", "alice", msg]).as_bytes())
        .unwrap();
    let ct_b64 = parse_bulk(&read_reply(&mut s)).expect("ct b64");
    println!("ciphertext b64: {}", ct_b64);

    // Decrypt by name
    s.write_all(arr(&["age", "decryptname", "alice", &ct_b64]).as_bytes())
        .unwrap();
    let pt = parse_bulk(&read_reply(&mut s)).expect("pt");
    assert_eq!(pt, msg);
    println!("decrypted ok");

    // Sign by name
    s.write_all(arr(&["age", "signname", "signer", msg]).as_bytes())
        .unwrap();
    let sig_b64 = parse_bulk(&read_reply(&mut s)).expect("sig b64");

    // Verify by name
    s.write_all(arr(&["age", "verifyname", "signer", msg, &sig_b64]).as_bytes())
        .unwrap();
    let ok = parse_simple(&read_reply(&mut s)).expect("verify");
    assert_eq!(ok, "1");
    println!("signature verified");

    // List names
    s.write_all(arr(&["age", "list"]).as_bytes()).unwrap();
    let list = read_reply(&mut s);
    println!("LIST -> {list}");

    println!("✔ persistent AGE workflow complete.");
}
||||||
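The example above leans on small RESP helpers (`arr`, `parse_bulk`, `parse_two_bulk`, `parse_simple`) defined earlier in the file and not shown in this hunk. As a rough sketch of the encoding side only (the body below is an assumption about the helper, not the file's actual code), a RESP array-of-bulk-strings encoder looks like:

```rust
/// Hypothetical sketch of the `arr` helper used above: encode a command as a
/// RESP array of bulk strings — "*<n>\r\n" followed by "$<len>\r\n<arg>\r\n"
/// per argument, with <len> counted in bytes.
fn arr(parts: &[&str]) -> String {
    let mut out = format!("*{}\r\n", parts.len());
    for p in parts {
        out.push_str(&format!("${}\r\n{}\r\n", p.len(), p));
    }
    out
}

fn main() {
    // "PING" is 4 bytes, so the frame is *1 then $4 PING.
    assert_eq!(arr(&["PING"]), "*1\r\n$4\r\nPING\r\n");
    println!("{}", arr(&["age", "keygen", "alice"]).escape_debug());
}
```

The demo's `s.write_all(arr(&["age", "keygen", "alice"]).as_bytes())` then sends exactly this framing over the TCP stream.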
186
examples/simple_demo.sh
Normal file
@@ -0,0 +1,186 @@
#!/bin/bash

# Simple HeroDB Demo - Basic Redis Commands
# This script demonstrates basic Redis functionality that's currently implemented

set -e  # Exit on any error

# Configuration
REDIS_HOST="localhost"
REDIS_PORT="6381"
REDIS_CLI="redis-cli -h $REDIS_HOST -p $REDIS_PORT"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Function to print colored output
print_header() {
    echo -e "${BLUE}=== $1 ===${NC}"
}

print_success() {
    echo -e "${GREEN}✓ $1${NC}"
}

print_info() {
    echo -e "${YELLOW}ℹ $1${NC}"
}

print_error() {
    echo -e "${RED}✗ $1${NC}"
}

# Function to check if HeroDB is running
check_herodb() {
    print_info "Checking if HeroDB is running on port $REDIS_PORT..."
    if ! $REDIS_CLI ping > /dev/null 2>&1; then
        print_error "HeroDB is not running on port $REDIS_PORT"
        print_info "Please start HeroDB with: cargo run -- --port $REDIS_PORT"
        exit 1
    fi
    print_success "HeroDB is running and responding"
}

# Function to execute Redis command with error handling
execute_cmd() {
    local cmd="$1"
    local description="$2"

    echo -e "${YELLOW}Command:${NC} $cmd"
    if result=$($REDIS_CLI $cmd 2>&1); then
        echo -e "${GREEN}Result:${NC} $result"
        return 0
    else
        print_error "Failed: $description"
        echo "Error: $result"
        return 1
    fi
}

# Main demo function
main() {
    clear
    print_header "HeroDB Basic Functionality Demo"
    echo "This demo shows basic Redis commands that are currently implemented"
    echo "HeroDB runs on port $REDIS_PORT (instead of Redis default 6379)"
    echo

    # Check if HeroDB is running
    check_herodb
    echo

    print_header "Step 1: Basic Key-Value Operations"

    execute_cmd "SET greeting 'Hello HeroDB!'" "Setting a simple key-value pair"
    echo
    execute_cmd "GET greeting" "Getting the value"
    echo
    execute_cmd "SET counter 42" "Setting a numeric value"
    echo
    execute_cmd "INCR counter" "Incrementing the counter"
    echo
    execute_cmd "GET counter" "Getting the incremented value"
    echo

    print_header "Step 2: Hash Operations"

    execute_cmd "HSET user:1 name 'John Doe' email 'john@example.com' age 30" "Setting hash fields"
    echo
    execute_cmd "HGET user:1 name" "Getting a specific field"
    echo
    execute_cmd "HGETALL user:1" "Getting all fields"
    echo
    execute_cmd "HLEN user:1" "Getting hash length"
    echo

    print_header "Step 3: List Operations"

    execute_cmd "LPUSH tasks 'Write code' 'Test code' 'Deploy code'" "Adding items to list"
    echo
    execute_cmd "LLEN tasks" "Getting list length"
    echo
    execute_cmd "LRANGE tasks 0 -1" "Getting all list items"
    echo
    execute_cmd "LPOP tasks" "Popping from left"
    echo
    execute_cmd "LRANGE tasks 0 -1" "Checking remaining items"
    echo

    print_header "Step 4: Key Management"

    execute_cmd "KEYS *" "Listing all keys"
    echo
    execute_cmd "EXISTS greeting" "Checking if key exists"
    echo
    execute_cmd "TYPE user:1" "Getting key type"
    echo
    execute_cmd "DBSIZE" "Getting database size"
    echo

    print_header "Step 5: Expiration"

    execute_cmd "SET temp_key 'temporary value'" "Setting temporary key"
    echo
    execute_cmd "EXPIRE temp_key 5" "Setting 5 second expiration"
    echo
    execute_cmd "TTL temp_key" "Checking time to live"
    echo
    print_info "Waiting 2 seconds..."
    sleep 2
    execute_cmd "TTL temp_key" "Checking TTL again"
    echo

    print_header "Step 6: Multiple Operations"

    execute_cmd "MSET key1 'value1' key2 'value2' key3 'value3'" "Setting multiple keys"
    echo
    execute_cmd "MGET key1 key2 key3" "Getting multiple values"
    echo
    execute_cmd "DEL key1 key2" "Deleting multiple keys"
    echo
    execute_cmd "EXISTS key1 key2 key3" "Checking existence of multiple keys"
    echo

    print_header "Step 7: Search Commands (Placeholder)"
    print_info "Testing FT.CREATE command (currently returns placeholder response)"

    execute_cmd "FT.CREATE test_index SCHEMA title TEXT description TEXT" "Creating search index"
    echo

    print_header "Step 8: Server Information"

    execute_cmd "INFO" "Getting server information"
    echo
    execute_cmd "CONFIG GET dir" "Getting configuration"
    echo

    print_header "Step 9: Cleanup"

    execute_cmd "FLUSHDB" "Clearing database"
    echo
    execute_cmd "DBSIZE" "Confirming database is empty"
    echo

    print_header "Demo Summary"
    echo "This demonstration showed:"
    echo "• Basic key-value operations (GET, SET, INCR)"
    echo "• Hash operations (HSET, HGET, HGETALL)"
    echo "• List operations (LPUSH, LPOP, LRANGE)"
    echo "• Key management (KEYS, EXISTS, TYPE, DEL)"
    echo "• Expiration handling (EXPIRE, TTL)"
    echo "• Multiple key operations (MSET, MGET)"
    echo "• Server information commands"
    echo
    print_success "HeroDB basic functionality demo completed successfully!"
    echo
    print_info "Note: Full-text search (FT.*) commands are defined but not yet fully implemented"
    print_info "To run HeroDB server: cargo run -- --port 6381"
    print_info "To connect with redis-cli: redis-cli -h localhost -p 6381"
}

# Run the demo
main "$@"
239
examples/tantivy_search_demo.sh
Executable file
@@ -0,0 +1,239 @@
#!/bin/bash

# HeroDB Tantivy Search Demo
# This script demonstrates full-text search capabilities using Redis commands
# HeroDB server should be running on port 6381

set -e  # Exit on any error

# Configuration
REDIS_HOST="localhost"
REDIS_PORT="6382"
REDIS_CLI="redis-cli -h $REDIS_HOST -p $REDIS_PORT"

# Start the herodb server in the background
echo "Starting herodb server..."
cargo run -p herodb -- --dir /tmp/herodbtest --port ${REDIS_PORT} --debug &
SERVER_PID=$!
echo
sleep 2  # Give the server a moment to start

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Function to print colored output
print_header() {
    echo -e "${BLUE}=== $1 ===${NC}"
}

print_success() {
    echo -e "${GREEN}✓ $1${NC}"
}

print_info() {
    echo -e "${YELLOW}ℹ $1${NC}"
}

print_error() {
    echo -e "${RED}✗ $1${NC}"
}

# Function to check if HeroDB is running
check_herodb() {
    print_info "Checking if HeroDB is running on port $REDIS_PORT..."
    if ! $REDIS_CLI ping > /dev/null 2>&1; then
        print_error "HeroDB is not running on port $REDIS_PORT"
        print_info "Please start HeroDB with: cargo run -- --port $REDIS_PORT"
        exit 1
    fi
    print_success "HeroDB is running and responding"
}

# Function to execute Redis command with error handling
execute_cmd() {
    local description="${@: -1}"
    set -- "${@:1:$(($#-1))}"

    echo -e "${YELLOW}Command:${NC} $(printf '%q ' "$@")"
    if result=$($REDIS_CLI "$@" 2>&1); then
        echo -e "${GREEN}Result:${NC} $result"
        return 0
    else
        print_error "Failed: $description"
        echo "Error: $result"
        return 1
    fi
}

# Function to pause for readability
pause() {
    echo
    read -p "Press Enter to continue..."
    echo
}

# Main demo function
main() {
    clear
    print_header "HeroDB Tantivy Search Demonstration"
    echo "This demo shows full-text search capabilities using Redis commands"
    echo "HeroDB runs on port $REDIS_PORT (instead of Redis default 6379)"
    echo

    # Check if HeroDB is running
    check_herodb
    echo

    print_header "Step 1: Create Search Index"
    print_info "Creating a product catalog search index with various field types"

    # Create search index with schema
    execute_cmd FT.CREATE product_catalog SCHEMA title TEXT description TEXT category TAG price NUMERIC rating NUMERIC location GEO \
        "Creating search index"

    print_success "Search index 'product_catalog' created successfully"
    pause

    print_header "Step 2: Add Sample Products"
    print_info "Adding sample products to demonstrate different search scenarios"

    # Add sample products using FT.ADD
    execute_cmd FT.ADD product_catalog product:1 1.0 title 'Wireless Bluetooth Headphones' description 'Premium noise-canceling headphones with 30-hour battery life' category 'electronics,audio' price 299.99 rating 4.5 location '-122.4194,37.7749' "Adding product 1"
    execute_cmd FT.ADD product_catalog product:2 1.0 title 'Organic Coffee Beans' description 'Single-origin Ethiopian coffee beans, medium roast' category 'food,beverages,organic' price 24.99 rating 4.8 location '-74.0060,40.7128' "Adding product 2"
    execute_cmd FT.ADD product_catalog product:3 1.0 title 'Yoga Mat Premium' description 'Eco-friendly yoga mat with superior grip and cushioning' category 'fitness,wellness,eco-friendly' price 89.99 rating 4.3 location '-118.2437,34.0522' "Adding product 3"
    execute_cmd FT.ADD product_catalog product:4 1.0 title 'Smart Home Speaker' description 'Voice-controlled smart speaker with AI assistant' category 'electronics,smart-home' price 149.99 rating 4.2 location '-87.6298,41.8781' "Adding product 4"
    execute_cmd FT.ADD product_catalog product:5 1.0 title 'Organic Green Tea' description 'Premium organic green tea leaves from Japan' category 'food,beverages,organic,tea' price 18.99 rating 4.7 location '139.6503,35.6762' "Adding product 5"
    execute_cmd FT.ADD product_catalog product:6 1.0 title 'Wireless Gaming Mouse' description 'High-precision gaming mouse with RGB lighting' category 'electronics,gaming' price 79.99 rating 4.4 location '-122.3321,47.6062' "Adding product 6"
    execute_cmd FT.ADD product_catalog product:7 1.0 title 'Comfortable meditation cushion for mindfulness practice' description 'Meditation cushion with premium materials' category 'wellness,meditation' price 45.99 rating 4.6 location '-122.4194,37.7749' "Adding product 7"
    execute_cmd FT.ADD product_catalog product:8 1.0 title 'Bluetooth Earbuds' description 'True wireless earbuds with active noise cancellation' category 'electronics,audio' price 199.99 rating 4.1 location '-74.0060,40.7128' "Adding product 8"

    print_success "Added 8 products to the index"
    pause

    print_header "Step 3: Basic Text Search"
    print_info "Searching for 'wireless' products"

    execute_cmd FT.SEARCH product_catalog wireless "Basic text search"
    pause

    print_header "Step 4: Search with Filters"
    print_info "Searching for 'organic' products"

    execute_cmd FT.SEARCH product_catalog organic "Filtered search"
    pause

    print_header "Step 5: Numeric Range Search"
    print_info "Searching for 'premium' products"

    execute_cmd FT.SEARCH product_catalog premium "Text search"
    pause

    print_header "Step 6: Sorting Results"
    print_info "Searching for electronics"

    execute_cmd FT.SEARCH product_catalog electronics "Category search"
    pause

    print_header "Step 7: Limiting Results"
    print_info "Searching for wireless products with limit"

    execute_cmd FT.SEARCH product_catalog wireless LIMIT 0 3 "Limited results"
    pause

    print_header "Step 8: Complex Query"
    print_info "Finding audio products with noise cancellation"

    execute_cmd FT.SEARCH product_catalog 'noise cancellation' "Complex query"
    pause

    print_header "Step 9: Geographic Search"
    print_info "Searching for meditation products"

    execute_cmd FT.SEARCH product_catalog meditation "Text search"
    pause

    print_header "Step 10: Aggregation Example"
    print_info "Getting index information and statistics"

    execute_cmd FT.INFO product_catalog "Index information"
    pause

    print_header "Step 11: Search Comparison"
    print_info "Comparing Tantivy search vs simple key matching"

    echo -e "${YELLOW}Tantivy Full-Text Search:${NC}"
    execute_cmd FT.SEARCH product_catalog 'battery life' "Full-text search for 'battery life'"

    echo
    echo -e "${YELLOW}Simple Key Pattern Matching:${NC}"
    execute_cmd KEYS *battery* "Simple pattern matching for 'battery'"

    print_info "Notice how full-text search finds relevant results even when exact words don't match keys"
    pause

    print_header "Step 12: Fuzzy Search"
    print_info "Searching for headphones"

    execute_cmd FT.SEARCH product_catalog headphones "Text search"
    pause

    print_header "Step 13: Phrase Search"
    print_info "Searching for coffee products"

    execute_cmd FT.SEARCH product_catalog coffee "Text search"
    pause

    print_header "Step 14: Boolean Queries"
    print_info "Searching for gaming products"

    execute_cmd FT.SEARCH product_catalog gaming "Text search"
    echo
    execute_cmd FT.SEARCH product_catalog tea "Text search"
    pause

    print_header "Step 15: Cleanup"
    print_info "Removing test data"

    # Delete the search index
    execute_cmd FT.DROP product_catalog "Dropping search index"

    # Clean up documents from search index
    for i in {1..8}; do
        execute_cmd FT.DEL product_catalog product:$i "Deleting product:$i from index"
    done

    print_success "Cleanup completed"
    echo

    print_header "Demo Summary"
    echo "This demonstration showed:"
    echo "• Creating search indexes with different field types"
    echo "• Adding documents to the search index"
    echo "• Basic and advanced text search queries"
    echo "• Filtering by categories and numeric ranges"
    echo "• Sorting and limiting results"
    echo "• Geographic searches"
    echo "• Fuzzy matching and phrase searches"
    echo "• Boolean query operators"
    echo "• Comparison with simple pattern matching"
    echo
    print_success "HeroDB Tantivy search demo completed successfully!"
    echo
    print_info "Key advantages of Tantivy full-text search:"
    echo "  - Relevance scoring and ranking"
    echo "  - Fuzzy matching and typo tolerance"
    echo "  - Complex boolean queries"
    echo "  - Field-specific searches and filters"
    echo "  - Geographic and numeric range queries"
    echo "  - Much faster than pattern matching on large datasets"
    echo
    print_info "To run HeroDB server: cargo run -- --port 6381"
    print_info "To connect with redis-cli: redis-cli -h localhost -p 6381"
}

# Run the demo
main "$@"
101
examples/test_tantivy_integration.sh
Executable file
@@ -0,0 +1,101 @@
#!/bin/bash

# Simple Tantivy Search Integration Test for HeroDB
# This script tests the full-text search functionality we just integrated

set -e

echo "🔍 Testing Tantivy Search Integration..."

# Build the project first
echo "📦 Building HeroDB..."
cargo build --release

# Start the server in the background
echo "🚀 Starting HeroDB server on port 6379..."
cargo run --release -- --port 6379 --dir ./test_data &
SERVER_PID=$!

# Wait for server to start
sleep 3

# Function to cleanup on exit
cleanup() {
    echo "🧹 Cleaning up..."
    kill $SERVER_PID 2>/dev/null || true
    rm -rf ./test_data
    exit
}

# Set trap for cleanup
trap cleanup EXIT INT TERM

# Function to execute Redis command
execute_cmd() {
    local cmd="$1"
    local description="$2"

    echo "📝 $description"
    echo "   Command: $cmd"

    if result=$(redis-cli -p 6379 $cmd 2>&1); then
        echo "   ✅ Result: $result"
        echo
        return 0
    else
        echo "   ❌ Failed: $result"
        echo
        return 1
    fi
}

echo "🧪 Running Tantivy Search Tests..."
echo

# Test 1: Create a search index
execute_cmd "ft.create books SCHEMA title TEXT description TEXT author TEXT category TAG price NUMERIC" \
    "Creating search index 'books'"

# Test 2: Add documents to the index
execute_cmd "ft.add books book1 1.0 title \"The Great Gatsby\" description \"A classic American novel about the Jazz Age\" author \"F. Scott Fitzgerald\" category \"fiction,classic\" price \"12.99\"" \
    "Adding first book"

execute_cmd "ft.add books book2 1.0 title \"To Kill a Mockingbird\" description \"A novel about racial injustice in the American South\" author \"Harper Lee\" category \"fiction,classic\" price \"14.99\"" \
    "Adding second book"

execute_cmd "ft.add books book3 1.0 title \"Programming Rust\" description \"A comprehensive guide to Rust programming language\" author \"Jim Blandy\" category \"programming,technical\" price \"49.99\"" \
    "Adding third book"

execute_cmd "ft.add books book4 1.0 title \"The Rust Programming Language\" description \"The official book on Rust programming\" author \"Steve Klabnik\" category \"programming,technical\" price \"39.99\"" \
    "Adding fourth book"

# Test 3: Basic search
execute_cmd "ft.search books Rust" \
    "Searching for 'Rust'"

# Test 4: Search with filters
execute_cmd "ft.search books programming FILTER category programming" \
    "Searching for 'programming' with category filter"

# Test 5: Search with limit
execute_cmd "ft.search books \"*\" LIMIT 0 2" \
    "Getting first 2 documents"

# Test 6: Get index info
execute_cmd "ft.info books" \
    "Getting index information"

# Test 7: Delete a document
execute_cmd "ft.del books book1" \
    "Deleting book1"

# Test 8: Search again to verify deletion
execute_cmd "ft.search books Gatsby" \
    "Searching for deleted book"

# Test 9: Drop the index
execute_cmd "ft.drop books" \
    "Dropping the index"

echo "🎉 All tests completed successfully!"
echo "✅ Tantivy search integration is working correctly"
@@ -1,28 +0,0 @@
[package]
name = "herodb"
version = "0.0.1"
authors = ["Pin Fang <fpfangpin@hotmail.com>"]
edition = "2021"

[dependencies]
anyhow = "1.0.59"
bytes = "1.3.0"
thiserror = "1.0.32"
tokio = { version = "1.23.0", features = ["full"] }
clap = { version = "4.5.20", features = ["derive"] }
byteorder = "1.4.3"
futures = "0.3"
redb = "2.1.3"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
bincode = "1.3.3"
chacha20poly1305 = "0.10.1"
rand = "0.8"
sha2 = "0.10"
age = "0.10"
secrecy = "0.8"
ed25519-dalek = "2"
base64 = "0.22"

[dev-dependencies]
redis = { version = "0.24", features = ["aio", "tokio-comp"] }
@@ -1,970 +0,0 @@
use crate::{error::DBError, protocol::Protocol, server::Server};
use serde::Serialize;

#[derive(Debug, Clone)]
pub enum Cmd {
    Ping,
    Echo(String),
    Select(u64), // Changed from u16 to u64
    Get(String),
    Set(String, String),
    SetPx(String, String, u128),
    SetEx(String, String, u128),
    Keys,
    ConfigGet(String),
    Info(Option<String>),
    Del(String),
    Type(String),
    Incr(String),
    Multi,
    Exec,
    Discard,
    // Hash commands
    HSet(String, Vec<(String, String)>),
    HGet(String, String),
    HGetAll(String),
    HDel(String, Vec<String>),
    HExists(String, String),
    HKeys(String),
    HVals(String),
    HLen(String),
    HMGet(String, Vec<String>),
    HSetNx(String, String, String),
    HScan(String, u64, Option<String>, Option<u64>), // key, cursor, pattern, count
    Scan(u64, Option<String>, Option<u64>), // cursor, pattern, count
    Ttl(String),
    Exists(String),
    Quit,
    Client(Vec<String>),
    ClientSetName(String),
    ClientGetName,
    // List commands
    LPush(String, Vec<String>),
    RPush(String, Vec<String>),
    LPop(String, Option<u64>),
    RPop(String, Option<u64>),
    LLen(String),
    LRem(String, i64, String),
    LTrim(String, i64, i64),
    LIndex(String, i64),
    LRange(String, i64, i64),
    FlushDb,
    Unknow(String),
    // AGE (rage) commands — stateless
    AgeGenEnc,
    AgeGenSign,
    AgeEncrypt(String, String),        // recipient, message
    AgeDecrypt(String, String),        // identity, ciphertext_b64
    AgeSign(String, String),           // signing_secret, message
    AgeVerify(String, String, String), // verify_pub, message, signature_b64

    // NEW: persistent named-key commands
    AgeKeygen(String),                     // name
    AgeSignKeygen(String),                 // name
    AgeEncryptName(String, String),        // name, message
    AgeDecryptName(String, String),        // name, ciphertext_b64
    AgeSignName(String, String),           // name, message
    AgeVerifyName(String, String, String), // name, message, signature_b64
    AgeList,
}

impl Cmd {
    pub fn from(s: &str) -> Result<(Self, Protocol, &str), DBError> {
        let (protocol, remaining) = Protocol::from(s)?;
        match protocol.clone() {
            Protocol::Array(p) => {
                let cmd = p.into_iter().map(|x| x.decode()).collect::<Vec<_>>();
                if cmd.is_empty() {
                    return Err(DBError("cmd length is 0".to_string()));
                }
                Ok((
                    match cmd[0].to_lowercase().as_str() {
                        "select" => {
                            if cmd.len() != 2 {
                                return Err(DBError("wrong number of arguments for SELECT".to_string()));
                            }
                            let idx = cmd[1].parse::<u64>().map_err(|_| DBError("ERR DB index is not an integer".to_string()))?;
                            Cmd::Select(idx)
                        }
                        "echo" => Cmd::Echo(cmd[1].clone()),
                        "ping" => Cmd::Ping,
                        "get" => Cmd::Get(cmd[1].clone()),
                        "set" => {
                            if cmd.len() == 5 && cmd[3].to_lowercase() == "px" {
                                Cmd::SetPx(cmd[1].clone(), cmd[2].clone(), cmd[4].parse().unwrap())
                            } else if cmd.len() == 5 && cmd[3].to_lowercase() == "ex" {
                                Cmd::SetEx(cmd[1].clone(), cmd[2].clone(), cmd[4].parse().unwrap())
                            } else if cmd.len() == 3 {
                                Cmd::Set(cmd[1].clone(), cmd[2].clone())
                            } else {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            }
                        }
                        "setex" => {
                            if cmd.len() != 4 {
                                return Err(DBError(format!("wrong number of arguments for SETEX command")));
                            }
                            Cmd::SetEx(cmd[1].clone(), cmd[3].clone(), cmd[2].parse().unwrap())
                        }
                        "config" => {
                            if cmd.len() != 3 || cmd[1].to_lowercase() != "get" {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            } else {
                                Cmd::ConfigGet(cmd[2].clone())
                            }
                        }
                        "keys" => {
                            if cmd.len() != 2 || cmd[1] != "*" {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            } else {
                                Cmd::Keys
                            }
                        }
                        "info" => {
                            let section = if cmd.len() == 2 {
                                Some(cmd[1].clone())
                            } else {
                                None
                            };
                            Cmd::Info(section)
                        }
                        "del" => {
                            if cmd.len() != 2 {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            }
                            Cmd::Del(cmd[1].clone())
                        }
                        "type" => {
                            if cmd.len() != 2 {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            }
                            Cmd::Type(cmd[1].clone())
                        }
                        "incr" => {
                            if cmd.len() != 2 {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            }
                            Cmd::Incr(cmd[1].clone())
                        }
                        "multi" => {
                            if cmd.len() != 1 {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            }
                            Cmd::Multi
                        }
                        "exec" => {
                            if cmd.len() != 1 {
                                return Err(DBError(format!("unsupported cmd {:?}", cmd)));
                            }
                            Cmd::Exec
                        }
                        "discard" => Cmd::Discard,
                        // Hash commands
                        "hset" => {
                            if cmd.len() < 4 || (cmd.len() - 2) % 2 != 0 {
                                return Err(DBError(format!("wrong number of arguments for HSET command")));
                            }
                            let mut pairs = Vec::new();
                            let mut i = 2;
                            while i + 1 < cmd.len() {
                                pairs.push((cmd[i].clone(), cmd[i + 1].clone()));
                                i += 2;
                            }
                            Cmd::HSet(cmd[1].clone(), pairs)
                        }
                        "hget" => {
                            if cmd.len() != 3 {
                                return Err(DBError(format!("wrong number of arguments for HGET command")));
                            }
                            Cmd::HGet(cmd[1].clone(), cmd[2].clone())
                        }
                        "hgetall" => {
                            if cmd.len() != 2 {
                                return Err(DBError(format!("wrong number of arguments for HGETALL command")));
                            }
                            Cmd::HGetAll(cmd[1].clone())
                        }
                        "hdel" => {
                            if cmd.len() < 3 {
                                return Err(DBError(format!("wrong number of arguments for HDEL command")));
                            }
                            Cmd::HDel(cmd[1].clone(), cmd[2..].to_vec())
                        }
                        "hexists" => {
                            if cmd.len() != 3 {
                                return Err(DBError(format!("wrong number of arguments for HEXISTS command")));
                            }
                            Cmd::HExists(cmd[1].clone(), cmd[2].clone())
                        }
                        "hkeys" => {
                            if cmd.len() != 2 {
                                return Err(DBError(format!("wrong number of arguments for HKEYS command")));
                            }
                            Cmd::HKeys(cmd[1].clone())
                        }
                        "hvals" => {
                            if cmd.len() != 2 {
                                return Err(DBError(format!("wrong number of arguments for HVALS command")));
                            }
                            Cmd::HVals(cmd[1].clone())
                        }
                        "hlen" => {
                            if cmd.len() != 2 {
                                return Err(DBError(format!("wrong number of arguments for HLEN command")));
                            }
                            Cmd::HLen(cmd[1].clone())
                        }
                        "hmget" => {
                            if cmd.len() < 3 {
                                return Err(DBError(format!("wrong number of arguments for HMGET command")));
                            }
                            Cmd::HMGet(cmd[1].clone(), cmd[2..].to_vec())
                        }
                        "hsetnx" => {
                            if cmd.len() != 4 {
                                return Err(DBError(format!("wrong number of arguments for HSETNX command")));
                            }
                            Cmd::HSetNx(cmd[1].clone(), cmd[2].clone(), cmd[3].clone())
                        }
                        "hscan" => {
                            if cmd.len() < 3 {
                                return Err(DBError(format!("wrong number of arguments for HSCAN command")));
                            }

                            let key = cmd[1].clone();
                            let cursor = cmd[2].parse::<u64>().map_err(|_| DBError("ERR invalid cursor".to_string()))?;

                            let mut pattern = None;
                            let mut count = None;
                            let mut i = 3;

                            while i < cmd.len() {
                                match cmd[i].to_lowercase().as_str() {
                                    "match" => {
                                        if i + 1 >= cmd.len() {
                                            return Err(DBError("ERR syntax error".to_string()));
                                        }
                                        pattern = Some(cmd[i + 1].clone());
                                        i += 2;
                                    }
                                    "count" => {
                                        if i + 1 >= cmd.len() {
                                            return Err(DBError("ERR syntax error".to_string()));
                                        }
                                        count = Some(cmd[i + 1].parse::<u64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?);
                                        i += 2;
                                    }
                                    _ => {
                                        return Err(DBError(format!("ERR syntax error")));
                                    }
                                }
                            }

                            Cmd::HScan(key, cursor, pattern, count)
                        }
                        "scan" => {
|
|
||||||
if cmd.len() < 2 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for SCAN command")));
|
|
||||||
}
|
|
||||||
|
|
||||||
let cursor = cmd[1].parse::<u64>().map_err(|_|
|
|
||||||
DBError("ERR invalid cursor".to_string()))?;
|
|
||||||
|
|
||||||
let mut pattern = None;
|
|
||||||
let mut count = None;
|
|
||||||
let mut i = 2;
|
|
||||||
|
|
||||||
while i < cmd.len() {
|
|
||||||
match cmd[i].to_lowercase().as_str() {
|
|
||||||
"match" => {
|
|
||||||
if i + 1 >= cmd.len() {
|
|
||||||
return Err(DBError("ERR syntax error".to_string()));
|
|
||||||
}
|
|
||||||
pattern = Some(cmd[i + 1].clone());
|
|
||||||
i += 2;
|
|
||||||
}
|
|
||||||
"count" => {
|
|
||||||
if i + 1 >= cmd.len() {
|
|
||||||
return Err(DBError("ERR syntax error".to_string()));
|
|
||||||
}
|
|
||||||
count = Some(cmd[i + 1].parse::<u64>().map_err(|_|
|
|
||||||
DBError("ERR value is not an integer or out of range".to_string()))?);
|
|
||||||
i += 2;
|
|
||||||
}
|
|
||||||
_ => {
|
|
||||||
return Err(DBError(format!("ERR syntax error")));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
Cmd::Scan(cursor, pattern, count)
|
|
||||||
}
|
|
||||||
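The SCAN and HSCAN arms above repeat the same MATCH/COUNT option loop; the only difference is the index where the options start. A possible consolidation is sketched below. Note this is an assumption, not code from this repository: `parse_scan_opts` is a hypothetical helper, and it returns plain `String` errors instead of `DBError` so the sketch stays self-contained.

```rust
// Hypothetical helper: parse trailing [MATCH pattern] [COUNT n] options
// shared by SCAN and HSCAN. `args` is the slice after the cursor argument.
fn parse_scan_opts(args: &[String]) -> Result<(Option<String>, Option<u64>), String> {
    let mut pattern = None;
    let mut count = None;
    let mut i = 0;
    while i < args.len() {
        match args[i].to_lowercase().as_str() {
            "match" => {
                // MATCH must be followed by a pattern argument.
                pattern = Some(args.get(i + 1).ok_or("ERR syntax error")?.clone());
                i += 2;
            }
            "count" => {
                // COUNT must be followed by a non-negative integer.
                count = Some(
                    args.get(i + 1)
                        .ok_or("ERR syntax error")?
                        .parse::<u64>()
                        .map_err(|_| "ERR value is not an integer or out of range")?,
                );
                i += 2;
            }
            _ => return Err("ERR syntax error".to_string()),
        }
    }
    Ok((pattern, count))
}
```

Each call site would then reduce to one line, e.g. `let (pattern, count) = parse_scan_opts(&cmd[3..])?;` for HSCAN and `&cmd[2..]` for SCAN.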
"ttl" => {
|
|
||||||
if cmd.len() != 2 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for TTL command")));
|
|
||||||
}
|
|
||||||
Cmd::Ttl(cmd[1].clone())
|
|
||||||
}
|
|
||||||
"exists" => {
|
|
||||||
if cmd.len() != 2 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for EXISTS command")));
|
|
||||||
}
|
|
||||||
Cmd::Exists(cmd[1].clone())
|
|
||||||
}
|
|
||||||
"quit" => {
|
|
||||||
if cmd.len() != 1 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for QUIT command")));
|
|
||||||
}
|
|
||||||
Cmd::Quit
|
|
||||||
}
|
|
||||||
"client" => {
|
|
||||||
if cmd.len() > 1 {
|
|
||||||
match cmd[1].to_lowercase().as_str() {
|
|
||||||
"setname" => {
|
|
||||||
if cmd.len() == 3 {
|
|
||||||
Cmd::ClientSetName(cmd[2].clone())
|
|
||||||
} else {
|
|
||||||
return Err(DBError("wrong number of arguments for 'client setname' command".to_string()));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
"getname" => {
|
|
||||||
if cmd.len() == 2 {
|
|
||||||
Cmd::ClientGetName
|
|
||||||
} else {
|
|
||||||
return Err(DBError("wrong number of arguments for 'client getname' command".to_string()));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
_ => Cmd::Client(cmd[1..].to_vec()),
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
Cmd::Client(vec![])
|
|
||||||
}
|
|
||||||
}
|
|
||||||
"lpush" => {
|
|
||||||
if cmd.len() < 3 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for LPUSH command")));
|
|
||||||
}
|
|
||||||
Cmd::LPush(cmd[1].clone(), cmd[2..].to_vec())
|
|
||||||
}
|
|
||||||
"rpush" => {
|
|
||||||
if cmd.len() < 3 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for RPUSH command")));
|
|
||||||
}
|
|
||||||
Cmd::RPush(cmd[1].clone(), cmd[2..].to_vec())
|
|
||||||
}
|
|
||||||
"lpop" => {
|
|
||||||
if cmd.len() < 2 || cmd.len() > 3 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for LPOP command")));
|
|
||||||
}
|
|
||||||
let count = if cmd.len() == 3 {
|
|
||||||
Some(cmd[2].parse::<u64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?)
|
|
||||||
} else {
|
|
||||||
None
|
|
||||||
};
|
|
||||||
Cmd::LPop(cmd[1].clone(), count)
|
|
||||||
}
|
|
||||||
"rpop" => {
|
|
||||||
if cmd.len() < 2 || cmd.len() > 3 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for RPOP command")));
|
|
||||||
}
|
|
||||||
let count = if cmd.len() == 3 {
|
|
||||||
Some(cmd[2].parse::<u64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?)
|
|
||||||
} else {
|
|
||||||
None
|
|
||||||
};
|
|
||||||
Cmd::RPop(cmd[1].clone(), count)
|
|
||||||
}
|
|
||||||
"llen" => {
|
|
||||||
if cmd.len() != 2 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for LLEN command")));
|
|
||||||
}
|
|
||||||
Cmd::LLen(cmd[1].clone())
|
|
||||||
}
|
|
||||||
"lrem" => {
|
|
||||||
if cmd.len() != 4 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for LREM command")));
|
|
||||||
}
|
|
||||||
let count = cmd[2].parse::<i64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?;
|
|
||||||
Cmd::LRem(cmd[1].clone(), count, cmd[3].clone())
|
|
||||||
}
|
|
||||||
"ltrim" => {
|
|
||||||
if cmd.len() != 4 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for LTRIM command")));
|
|
||||||
}
|
|
||||||
let start = cmd[2].parse::<i64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?;
|
|
||||||
let stop = cmd[3].parse::<i64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?;
|
|
||||||
Cmd::LTrim(cmd[1].clone(), start, stop)
|
|
||||||
}
|
|
||||||
"lindex" => {
|
|
||||||
if cmd.len() != 3 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for LINDEX command")));
|
|
||||||
}
|
|
||||||
let index = cmd[2].parse::<i64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?;
|
|
||||||
Cmd::LIndex(cmd[1].clone(), index)
|
|
||||||
}
|
|
||||||
"lrange" => {
|
|
||||||
if cmd.len() != 4 {
|
|
||||||
return Err(DBError(format!("wrong number of arguments for LRANGE command")));
|
|
||||||
}
|
|
||||||
let start = cmd[2].parse::<i64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?;
|
|
||||||
let stop = cmd[3].parse::<i64>().map_err(|_| DBError("ERR value is not an integer or out of range".to_string()))?;
|
|
||||||
Cmd::LRange(cmd[1].clone(), start, stop)
|
|
||||||
}
|
|
||||||
"flushdb" => {
|
|
||||||
if cmd.len() != 1 {
|
|
||||||
return Err(DBError("wrong number of arguments for FLUSHDB command".to_string()));
|
|
||||||
}
|
|
||||||
Cmd::FlushDb
|
|
||||||
}
|
|
||||||
"age" => {
|
|
||||||
if cmd.len() < 2 {
|
|
||||||
return Err(DBError("wrong number of arguments for AGE".to_string()));
|
|
||||||
}
|
|
||||||
match cmd[1].to_lowercase().as_str() {
|
|
||||||
// stateless
|
|
||||||
"genenc" => { if cmd.len() != 2 { return Err(DBError("AGE GENENC takes no args".to_string())); }
|
|
||||||
Cmd::AgeGenEnc }
|
|
||||||
"gensign" => { if cmd.len() != 2 { return Err(DBError("AGE GENSIGN takes no args".to_string())); }
|
|
||||||
Cmd::AgeGenSign }
|
|
||||||
"encrypt" => { if cmd.len() != 4 { return Err(DBError("AGE ENCRYPT <recipient> <message>".to_string())); }
|
|
||||||
Cmd::AgeEncrypt(cmd[2].clone(), cmd[3].clone()) }
|
|
||||||
"decrypt" => { if cmd.len() != 4 { return Err(DBError("AGE DECRYPT <identity> <ciphertext_b64>".to_string())); }
|
|
||||||
Cmd::AgeDecrypt(cmd[2].clone(), cmd[3].clone()) }
|
|
||||||
"sign" => { if cmd.len() != 4 { return Err(DBError("AGE SIGN <signing_secret> <message>".to_string())); }
|
|
||||||
Cmd::AgeSign(cmd[2].clone(), cmd[3].clone()) }
|
|
||||||
"verify" => { if cmd.len() != 5 { return Err(DBError("AGE VERIFY <verify_pub> <message> <signature_b64>".to_string())); }
|
|
||||||
Cmd::AgeVerify(cmd[2].clone(), cmd[3].clone(), cmd[4].clone()) }
|
|
||||||
|
|
||||||
// persistent names
|
|
||||||
"keygen" => { if cmd.len() != 3 { return Err(DBError("AGE KEYGEN <name>".to_string())); }
|
|
||||||
Cmd::AgeKeygen(cmd[2].clone()) }
|
|
||||||
"signkeygen" => { if cmd.len() != 3 { return Err(DBError("AGE SIGNKEYGEN <name>".to_string())); }
|
|
||||||
Cmd::AgeSignKeygen(cmd[2].clone()) }
|
|
||||||
"encryptname" => { if cmd.len() != 4 { return Err(DBError("AGE ENCRYPTNAME <name> <message>".to_string())); }
|
|
||||||
Cmd::AgeEncryptName(cmd[2].clone(), cmd[3].clone()) }
|
|
||||||
"decryptname" => { if cmd.len() != 4 { return Err(DBError("AGE DECRYPTNAME <name> <ciphertext_b64>".to_string())); }
|
|
||||||
Cmd::AgeDecryptName(cmd[2].clone(), cmd[3].clone()) }
|
|
||||||
"signname" => { if cmd.len() != 4 { return Err(DBError("AGE SIGNNAME <name> <message>".to_string())); }
|
|
||||||
Cmd::AgeSignName(cmd[2].clone(), cmd[3].clone()) }
|
|
||||||
"verifyname" => { if cmd.len() != 5 { return Err(DBError("AGE VERIFYNAME <name> <message> <signature_b64>".to_string())); }
|
|
||||||
Cmd::AgeVerifyName(cmd[2].clone(), cmd[3].clone(), cmd[4].clone()) }
|
|
||||||
"list" => { if cmd.len() != 2 { return Err(DBError("AGE LIST".to_string())); }
|
|
||||||
Cmd::AgeList }
|
|
||||||
_ => return Err(DBError(format!("unsupported AGE subcommand {:?}", cmd))),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
                _ => Cmd::Unknow(cmd[0].clone()),
                },
                protocol,
                remaining,
            ))
        }
        _ => Err(DBError(format!(
            "fail to parse as cmd for {:?}",
            protocol
        ))),
        }
    }

    pub async fn run(self, server: &mut Server) -> Result<Protocol, DBError> {
        // Handle queued commands for transactions
        if server.queued_cmd.is_some()
            && !matches!(self, Cmd::Exec)
            && !matches!(self, Cmd::Multi)
            && !matches!(self, Cmd::Discard)
        {
            let protocol = self.clone().to_protocol();
            server.queued_cmd.as_mut().unwrap().push((self, protocol));
            return Ok(Protocol::SimpleString("QUEUED".to_string()));
        }
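The queueing rule above is the core of MULTI/EXEC: while a transaction is open, every command except EXEC, MULTI, and DISCARD is queued and answered with "QUEUED" instead of being executed. A minimal standalone model of this rule (an assumption-labeled sketch: `MiniCmd`, `MiniServer`, and `dispatch` are not types from this codebase, and DISCARD is omitted for brevity):

```rust
// Hypothetical miniature of the transaction queueing logic above.
enum MiniCmd {
    Multi,
    Exec,
    Set(String, String),
    Get(String),
}

struct MiniServer {
    // Some(queue) means a MULTI transaction is open, mirroring `queued_cmd`.
    queued: Option<Vec<MiniCmd>>,
}

fn dispatch(server: &mut MiniServer, cmd: MiniCmd) -> String {
    // While a transaction is open, queue everything except EXEC/MULTI.
    if server.queued.is_some() && !matches!(cmd, MiniCmd::Exec | MiniCmd::Multi) {
        server.queued.as_mut().unwrap().push(cmd);
        return "QUEUED".to_string();
    }
    match cmd {
        MiniCmd::Multi => {
            server.queued = Some(Vec::new());
            "OK".to_string()
        }
        MiniCmd::Exec => {
            // EXEC drains the queue; here we just report how many were queued.
            let n = server.queued.take().map_or(0, |q| q.len());
            format!("{} replies", n)
        }
        _ => "OK".to_string(),
    }
}
```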
        match self {
            Cmd::Select(db) => select_cmd(server, db).await,
            Cmd::Ping => Ok(Protocol::SimpleString("PONG".to_string())),
            Cmd::Echo(s) => Ok(Protocol::BulkString(s)),
            Cmd::Get(k) => get_cmd(server, &k).await,
            Cmd::Set(k, v) => set_cmd(server, &k, &v).await,
            Cmd::SetPx(k, v, x) => set_px_cmd(server, &k, &v, &x).await,
            Cmd::SetEx(k, v, x) => set_ex_cmd(server, &k, &v, &x).await,
            Cmd::Del(k) => del_cmd(server, &k).await,
            Cmd::ConfigGet(name) => config_get_cmd(&name, server),
            Cmd::Keys => keys_cmd(server).await,
            Cmd::Info(section) => info_cmd(server, &section).await,
            Cmd::Type(k) => type_cmd(server, &k).await,
            Cmd::Incr(key) => incr_cmd(server, &key).await,
            Cmd::Multi => {
                server.queued_cmd = Some(Vec::<(Cmd, Protocol)>::new());
                Ok(Protocol::SimpleString("OK".to_string()))
            }
            Cmd::Exec => exec_cmd(server).await,
            Cmd::Discard => {
                if server.queued_cmd.is_some() {
                    server.queued_cmd = None;
                    Ok(Protocol::SimpleString("OK".to_string()))
                } else {
                    Ok(Protocol::err("ERR DISCARD without MULTI"))
                }
            }
            // Hash commands
            Cmd::HSet(key, pairs) => hset_cmd(server, &key, &pairs).await,
            Cmd::HGet(key, field) => hget_cmd(server, &key, &field).await,
            Cmd::HGetAll(key) => hgetall_cmd(server, &key).await,
            Cmd::HDel(key, fields) => hdel_cmd(server, &key, &fields).await,
            Cmd::HExists(key, field) => hexists_cmd(server, &key, &field).await,
            Cmd::HKeys(key) => hkeys_cmd(server, &key).await,
            Cmd::HVals(key) => hvals_cmd(server, &key).await,
            Cmd::HLen(key) => hlen_cmd(server, &key).await,
            Cmd::HMGet(key, fields) => hmget_cmd(server, &key, &fields).await,
            Cmd::HSetNx(key, field, value) => hsetnx_cmd(server, &key, &field, &value).await,
            Cmd::HScan(key, cursor, pattern, count) => {
                hscan_cmd(server, &key, &cursor, pattern.as_deref(), &count).await
            }
            Cmd::Scan(cursor, pattern, count) => {
                scan_cmd(server, &cursor, pattern.as_deref(), &count).await
            }
            Cmd::Ttl(key) => ttl_cmd(server, &key).await,
            Cmd::Exists(key) => exists_cmd(server, &key).await,
            Cmd::Quit => Ok(Protocol::SimpleString("OK".to_string())),
            Cmd::Client(_) => Ok(Protocol::SimpleString("OK".to_string())),
            Cmd::ClientSetName(name) => client_setname_cmd(server, &name).await,
            Cmd::ClientGetName => client_getname_cmd(server).await,
            // List commands
            Cmd::LPush(key, elements) => lpush_cmd(server, &key, &elements).await,
            Cmd::RPush(key, elements) => rpush_cmd(server, &key, &elements).await,
            Cmd::LPop(key, count) => lpop_cmd(server, &key, &count).await,
            Cmd::RPop(key, count) => rpop_cmd(server, &key, &count).await,
            Cmd::LLen(key) => llen_cmd(server, &key).await,
            Cmd::LRem(key, count, element) => lrem_cmd(server, &key, count, &element).await,
            Cmd::LTrim(key, start, stop) => ltrim_cmd(server, &key, start, stop).await,
            Cmd::LIndex(key, index) => lindex_cmd(server, &key, index).await,
            Cmd::LRange(key, start, stop) => lrange_cmd(server, &key, start, stop).await,
            Cmd::FlushDb => flushdb_cmd(server).await,
            // AGE (rage): stateless
            Cmd::AgeGenEnc => Ok(crate::age::cmd_age_genenc().await),
            Cmd::AgeGenSign => Ok(crate::age::cmd_age_gensign().await),
            Cmd::AgeEncrypt(recipient, message) => Ok(crate::age::cmd_age_encrypt(&recipient, &message).await),
            Cmd::AgeDecrypt(identity, ct_b64) => Ok(crate::age::cmd_age_decrypt(&identity, &ct_b64).await),
            Cmd::AgeSign(secret, message) => Ok(crate::age::cmd_age_sign(&secret, &message).await),
            Cmd::AgeVerify(vpub, msg, sig_b64) => Ok(crate::age::cmd_age_verify(&vpub, &msg, &sig_b64).await),

            // AGE (rage): persistent named keys
            Cmd::AgeKeygen(name) => Ok(crate::age::cmd_age_keygen(server, &name).await),
            Cmd::AgeSignKeygen(name) => Ok(crate::age::cmd_age_signkeygen(server, &name).await),
            Cmd::AgeEncryptName(name, message) => Ok(crate::age::cmd_age_encrypt_name(server, &name, &message).await),
            Cmd::AgeDecryptName(name, ct_b64) => Ok(crate::age::cmd_age_decrypt_name(server, &name, &ct_b64).await),
            Cmd::AgeSignName(name, message) => Ok(crate::age::cmd_age_sign_name(server, &name, &message).await),
            Cmd::AgeVerifyName(name, message, sig_b64) => Ok(crate::age::cmd_age_verify_name(server, &name, &message, &sig_b64).await),
            Cmd::AgeList => Ok(crate::age::cmd_age_list(server).await),
            Cmd::Unknow(s) => Ok(Protocol::err(&format!("ERR unknown command `{}`", s))),
        }
    }
    pub fn to_protocol(self) -> Protocol {
        match self {
            Cmd::Select(db) => Protocol::Array(vec![
                Protocol::BulkString("select".to_string()),
                Protocol::BulkString(db.to_string()),
            ]),
            Cmd::Ping => Protocol::Array(vec![Protocol::BulkString("ping".to_string())]),
            Cmd::Echo(s) => Protocol::Array(vec![
                Protocol::BulkString("echo".to_string()),
                Protocol::BulkString(s),
            ]),
            Cmd::Get(k) => Protocol::Array(vec![
                Protocol::BulkString("get".to_string()),
                Protocol::BulkString(k),
            ]),
            Cmd::Set(k, v) => Protocol::Array(vec![
                Protocol::BulkString("set".to_string()),
                Protocol::BulkString(k),
                Protocol::BulkString(v),
            ]),
            _ => Protocol::SimpleString("...".to_string()),
        }
    }
}
async fn flushdb_cmd(server: &mut Server) -> Result<Protocol, DBError> {
    match server.current_storage()?.flushdb() {
        Ok(_) => Ok(Protocol::SimpleString("OK".to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn select_cmd(server: &mut Server, db: u64) -> Result<Protocol, DBError> {
    // Test if we can access the database (this will create it if needed)
    server.selected_db = db;
    match server.current_storage() {
        Ok(_) => Ok(Protocol::SimpleString("OK".to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}
async fn lindex_cmd(server: &Server, key: &str, index: i64) -> Result<Protocol, DBError> {
    match server.current_storage()?.lindex(key, index) {
        Ok(Some(element)) => Ok(Protocol::BulkString(element)),
        Ok(None) => Ok(Protocol::Null),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn lrange_cmd(server: &Server, key: &str, start: i64, stop: i64) -> Result<Protocol, DBError> {
    match server.current_storage()?.lrange(key, start, stop) {
        Ok(elements) => Ok(Protocol::Array(
            elements.into_iter().map(Protocol::BulkString).collect(),
        )),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn ltrim_cmd(server: &Server, key: &str, start: i64, stop: i64) -> Result<Protocol, DBError> {
    match server.current_storage()?.ltrim(key, start, stop) {
        Ok(_) => Ok(Protocol::SimpleString("OK".to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn lrem_cmd(server: &Server, key: &str, count: i64, element: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.lrem(key, count, element) {
        Ok(removed_count) => Ok(Protocol::SimpleString(removed_count.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}
async fn llen_cmd(server: &Server, key: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.llen(key) {
        Ok(len) => Ok(Protocol::SimpleString(len.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn lpop_cmd(server: &Server, key: &str, count: &Option<u64>) -> Result<Protocol, DBError> {
    let count_val = count.unwrap_or(1);
    match server.current_storage()?.lpop(key, count_val) {
        Ok(elements) => {
            if elements.is_empty() {
                if count.is_some() {
                    Ok(Protocol::Array(vec![]))
                } else {
                    Ok(Protocol::Null)
                }
            } else if count.is_some() {
                Ok(Protocol::Array(
                    elements.into_iter().map(Protocol::BulkString).collect(),
                ))
            } else {
                Ok(Protocol::BulkString(elements[0].clone()))
            }
        }
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn rpop_cmd(server: &Server, key: &str, count: &Option<u64>) -> Result<Protocol, DBError> {
    let count_val = count.unwrap_or(1);
    match server.current_storage()?.rpop(key, count_val) {
        Ok(elements) => {
            if elements.is_empty() {
                if count.is_some() {
                    Ok(Protocol::Array(vec![]))
                } else {
                    Ok(Protocol::Null)
                }
            } else if count.is_some() {
                Ok(Protocol::Array(
                    elements.into_iter().map(Protocol::BulkString).collect(),
                ))
            } else {
                Ok(Protocol::BulkString(elements[0].clone()))
            }
        }
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}
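The LPOP/RPOP handlers above encode a Redis convention: with an explicit COUNT the reply is always an array (possibly empty), while without COUNT it is a single bulk string or nil. This can be captured in a tiny standalone function (a sketch, not part of this codebase: `lpop_reply` is hypothetical and renders replies as plain strings instead of `Protocol` values):

```rust
// Hypothetical rendering of the LPOP/RPOP reply rule above.
fn lpop_reply(popped: Vec<String>, count_given: bool) -> String {
    match (popped.first(), count_given) {
        // COUNT given, nothing popped: empty array, never nil.
        (None, true) => "(empty array)".to_string(),
        // No COUNT, nothing popped: nil reply.
        (None, false) => "(nil)".to_string(),
        // COUNT given: always an array, even for one element.
        (_, true) => format!("[{}]", popped.join(", ")),
        // No COUNT: a single bulk string.
        (Some(first), false) => first.clone(),
    }
}
```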
async fn lpush_cmd(server: &Server, key: &str, elements: &[String]) -> Result<Protocol, DBError> {
    match server.current_storage()?.lpush(key, elements.to_vec()) {
        Ok(len) => Ok(Protocol::SimpleString(len.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn rpush_cmd(server: &Server, key: &str, elements: &[String]) -> Result<Protocol, DBError> {
    match server.current_storage()?.rpush(key, elements.to_vec()) {
        Ok(len) => Ok(Protocol::SimpleString(len.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}
async fn exec_cmd(server: &mut Server) -> Result<Protocol, DBError> {
    // Move the queued commands out of `server` so we drop the borrow immediately.
    let cmds = if let Some(cmds) = server.queued_cmd.take() {
        cmds
    } else {
        return Ok(Protocol::err("ERR EXEC without MULTI"));
    };

    let mut out = Vec::new();
    for (cmd, _) in cmds {
        // Use Box::pin to handle recursion in async function
        let res = Box::pin(cmd.run(server)).await?;
        out.push(res);
    }
    Ok(Protocol::Array(out))
}
async fn incr_cmd(server: &Server, key: &String) -> Result<Protocol, DBError> {
    let storage = server.current_storage()?;
    let current_value = storage.get(key)?;

    let new_value = match current_value {
        Some(v) => match v.parse::<i64>() {
            Ok(num) => num + 1,
            Err(_) => return Ok(Protocol::err("ERR value is not an integer or out of range")),
        },
        None => 1,
    };

    storage.set(key.clone(), new_value.to_string())?;
    Ok(Protocol::SimpleString(new_value.to_string()))
}
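The INCR handler above follows the Redis semantics: a missing key is treated as 0 (so the first INCR yields 1), and a value that does not parse as a signed 64-bit integer is an error. A standalone model of just that read-parse-write cycle (a sketch under assumptions: `incr` and the plain `HashMap` store are illustrative, not this codebase's storage layer, and it is not atomic):

```rust
use std::collections::HashMap;

// Hypothetical in-memory model of the INCR semantics above.
fn incr(map: &mut HashMap<String, String>, key: &str) -> Result<i64, String> {
    let new = match map.get(key) {
        // Existing value must parse as i64, matching the handler's error text.
        Some(v) => v
            .parse::<i64>()
            .map_err(|_| "ERR value is not an integer or out of range".to_string())?
            + 1,
        // Missing key counts as 0, so the first increment returns 1.
        None => 1,
    };
    map.insert(key.to_string(), new.to_string());
    Ok(new)
}
```

Note that like the handler above, this is a read-then-write sequence: without a lock around the get/set pair, two concurrent INCRs could lose an update.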
fn config_get_cmd(name: &String, server: &Server) -> Result<Protocol, DBError> {
    let value = match name.as_str() {
        "dir" => Some(server.option.dir.clone()),
        "dbfilename" => Some(format!("{}.db", server.selected_db)),
        "databases" => Some("16".to_string()), // Hardcoded as per original logic
        _ => None,
    };

    if let Some(val) = value {
        Ok(Protocol::Array(vec![
            Protocol::BulkString(name.clone()),
            Protocol::BulkString(val),
        ]))
    } else {
        // Return an empty array for unknown config options, which is standard Redis behavior
        Ok(Protocol::Array(vec![]))
    }
}
async fn keys_cmd(server: &Server) -> Result<Protocol, DBError> {
    let keys = server.current_storage()?.keys("*")?;
    Ok(Protocol::Array(
        keys.into_iter().map(Protocol::BulkString).collect(),
    ))
}

#[derive(Serialize)]
struct ServerInfo {
    redis_version: String,
    encrypted: bool,
    selected_db: u64,
}
async fn info_cmd(server: &Server, section: &Option<String>) -> Result<Protocol, DBError> {
    let info = ServerInfo {
        redis_version: "7.0.0".to_string(),
        encrypted: server.current_storage()?.is_encrypted(),
        selected_db: server.selected_db,
    };

    let mut info_string = String::new();
    info_string.push_str("# Server\n");
    info_string.push_str(&format!("redis_version:{}\n", info.redis_version));
    info_string.push_str(&format!("encrypted:{}\n", if info.encrypted { 1 } else { 0 }));
    info_string.push_str("# Keyspace\n");
    info_string.push_str(&format!("db{}:keys=0,expires=0,avg_ttl=0\n", info.selected_db));

    match section {
        Some(s) => match s.as_str() {
            "replication" => Ok(Protocol::BulkString(
                "role:master\nmaster_replid:8371b4fb1155b71f4a04d3e1bc3e18c4a990aeea\nmaster_repl_offset:0\n".to_string(),
            )),
            _ => Err(DBError(format!("unsupported section {:?}", s))),
        },
        None => Ok(Protocol::BulkString(info_string)),
    }
}
async fn type_cmd(server: &Server, k: &String) -> Result<Protocol, DBError> {
    match server.current_storage()?.get_key_type(k)? {
        Some(type_str) => Ok(Protocol::SimpleString(type_str)),
        None => Ok(Protocol::SimpleString("none".to_string())),
    }
}

async fn del_cmd(server: &Server, k: &str) -> Result<Protocol, DBError> {
    server.current_storage()?.del(k.to_string())?;
    Ok(Protocol::SimpleString("1".to_string()))
}

async fn set_ex_cmd(
    server: &Server,
    k: &str,
    v: &str,
    x: &u128,
) -> Result<Protocol, DBError> {
    // EX gives seconds; the storage layer expects milliseconds, hence * 1000.
    server.current_storage()?.setx(k.to_string(), v.to_string(), *x * 1000)?;
    Ok(Protocol::SimpleString("OK".to_string()))
}

async fn set_px_cmd(
    server: &Server,
    k: &str,
    v: &str,
    x: &u128,
) -> Result<Protocol, DBError> {
    server.current_storage()?.setx(k.to_string(), v.to_string(), *x)?;
    Ok(Protocol::SimpleString("OK".to_string()))
}
async fn set_cmd(server: &Server, k: &str, v: &str) -> Result<Protocol, DBError> {
    server.current_storage()?.set(k.to_string(), v.to_string())?;
    Ok(Protocol::SimpleString("OK".to_string()))
}

async fn get_cmd(server: &Server, k: &str) -> Result<Protocol, DBError> {
    let v = server.current_storage()?.get(k)?;
    Ok(v.map_or(Protocol::Null, Protocol::BulkString))
}

// Hash command implementations
async fn hset_cmd(server: &Server, key: &str, pairs: &[(String, String)]) -> Result<Protocol, DBError> {
    let new_fields = server.current_storage()?.hset(key, pairs.to_vec())?;
    Ok(Protocol::SimpleString(new_fields.to_string()))
}

async fn hget_cmd(server: &Server, key: &str, field: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.hget(key, field) {
        Ok(Some(value)) => Ok(Protocol::BulkString(value)),
        Ok(None) => Ok(Protocol::Null),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}
async fn hgetall_cmd(server: &Server, key: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.hgetall(key) {
        Ok(pairs) => {
            let mut result = Vec::new();
            for (field, value) in pairs {
                result.push(Protocol::BulkString(field));
                result.push(Protocol::BulkString(value));
            }
            Ok(Protocol::Array(result))
        }
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn hdel_cmd(server: &Server, key: &str, fields: &[String]) -> Result<Protocol, DBError> {
    match server.current_storage()?.hdel(key, fields.to_vec()) {
        Ok(deleted) => Ok(Protocol::SimpleString(deleted.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn hexists_cmd(server: &Server, key: &str, field: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.hexists(key, field) {
        Ok(exists) => Ok(Protocol::SimpleString(if exists { "1" } else { "0" }.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}
async fn hkeys_cmd(server: &Server, key: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.hkeys(key) {
        Ok(keys) => Ok(Protocol::Array(
            keys.into_iter().map(Protocol::BulkString).collect(),
        )),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn hvals_cmd(server: &Server, key: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.hvals(key) {
        Ok(values) => Ok(Protocol::Array(
            values.into_iter().map(Protocol::BulkString).collect(),
        )),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn hlen_cmd(server: &Server, key: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.hlen(key) {
        Ok(len) => Ok(Protocol::SimpleString(len.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn hmget_cmd(server: &Server, key: &str, fields: &[String]) -> Result<Protocol, DBError> {
    match server.current_storage()?.hmget(key, fields.to_vec()) {
        Ok(values) => {
            let result: Vec<Protocol> = values
                .into_iter()
                .map(|v| v.map_or(Protocol::Null, Protocol::BulkString))
                .collect();
            Ok(Protocol::Array(result))
        }
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}
async fn hsetnx_cmd(server: &Server, key: &str, field: &str, value: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.hsetnx(key, field, value) {
        Ok(was_set) => Ok(Protocol::SimpleString(if was_set { "1" } else { "0" }.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn scan_cmd(
    server: &Server,
    cursor: &u64,
    pattern: Option<&str>,
    count: &Option<u64>,
) -> Result<Protocol, DBError> {
    match server.current_storage()?.scan(*cursor, pattern, *count) {
        Ok((next_cursor, key_value_pairs)) => {
            let mut result = Vec::new();
            result.push(Protocol::BulkString(next_cursor.to_string()));
            // For SCAN, we only return the keys, not the values
            let keys: Vec<Protocol> = key_value_pairs
                .into_iter()
                .map(|(key, _)| Protocol::BulkString(key))
                .collect();
            result.push(Protocol::Array(keys));
            Ok(Protocol::Array(result))
        }
        Err(e) => Ok(Protocol::err(&format!("ERR {}", e.0))),
    }
}
async fn hscan_cmd(
    server: &Server,
    key: &str,
    cursor: &u64,
    pattern: Option<&str>,
    count: &Option<u64>,
) -> Result<Protocol, DBError> {
    match server.current_storage()?.hscan(key, *cursor, pattern, *count) {
        Ok((next_cursor, field_value_pairs)) => {
            let mut result = Vec::new();
            result.push(Protocol::BulkString(next_cursor.to_string()));
            // For HSCAN, we return field-value pairs flattened
            let mut fields_and_values = Vec::new();
            for (field, value) in field_value_pairs {
                fields_and_values.push(Protocol::BulkString(field));
                fields_and_values.push(Protocol::BulkString(value));
            }
            result.push(Protocol::Array(fields_and_values));
            Ok(Protocol::Array(result))
        }
        Err(e) => Ok(Protocol::err(&format!("ERR {}", e.0))),
    }
}
async fn ttl_cmd(server: &Server, key: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.ttl(key) {
        Ok(ttl) => Ok(Protocol::SimpleString(ttl.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn exists_cmd(server: &Server, key: &str) -> Result<Protocol, DBError> {
    match server.current_storage()?.exists(key) {
        Ok(exists) => Ok(Protocol::SimpleString(if exists { "1" } else { "0" }.to_string())),
        Err(e) => Ok(Protocol::err(&e.0)),
    }
}

async fn client_setname_cmd(server: &mut Server, name: &str) -> Result<Protocol, DBError> {
    server.client_name = Some(name.to_string());
    Ok(Protocol::SimpleString("OK".to_string()))
}

async fn client_getname_cmd(server: &Server) -> Result<Protocol, DBError> {
    match &server.client_name {
        Some(name) => Ok(Protocol::BulkString(name.clone())),
        None => Ok(Protocol::Null),
    }
}

@@ -1,8 +0,0 @@
pub mod age; // NEW
pub mod cmd;
pub mod crypto;
pub mod error;
pub mod options;
pub mod protocol;
pub mod server;
pub mod storage;
@@ -1,8 +0,0 @@
#[derive(Clone)]
pub struct DBOption {
    pub dir: String,
    pub port: u16,
    pub debug: bool,
    pub encrypt: bool,
    pub encryption_key: Option<String>, // Master encryption key
}
@@ -1,136 +0,0 @@
use core::str;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::io::AsyncReadExt;
use tokio::io::AsyncWriteExt;

use crate::cmd::Cmd;
use crate::error::DBError;
use crate::options;
use crate::protocol::Protocol;
use crate::storage::Storage;

#[derive(Clone)]
pub struct Server {
    pub db_cache: std::sync::Arc<std::sync::RwLock<HashMap<u64, Arc<Storage>>>>,
    pub option: options::DBOption,
    pub client_name: Option<String>,
    pub selected_db: u64, // Changed from usize to u64
    pub queued_cmd: Option<Vec<(Cmd, Protocol)>>,
}

impl Server {
    pub async fn new(option: options::DBOption) -> Self {
        Server {
            db_cache: Arc::new(std::sync::RwLock::new(HashMap::new())),
            option,
            client_name: None,
            selected_db: 0,
            queued_cmd: None,
        }
    }

    pub fn current_storage(&self) -> Result<Arc<Storage>, DBError> {
        let mut cache = self.db_cache.write().unwrap();

        if let Some(storage) = cache.get(&self.selected_db) {
            return Ok(storage.clone());
        }

        // Create new database file
        let db_file_path = std::path::PathBuf::from(self.option.dir.clone())
            .join(format!("{}.db", self.selected_db));

        // Ensure the directory exists before creating the database file
        if let Some(parent_dir) = db_file_path.parent() {
            std::fs::create_dir_all(parent_dir).map_err(|e| {
                DBError(format!("Failed to create directory {}: {}", parent_dir.display(), e))
            })?;
        }

        println!("Creating new db file: {}", db_file_path.display());

        let storage = Arc::new(Storage::new(
            db_file_path,
            self.should_encrypt_db(self.selected_db),
            self.option.encryption_key.as_deref()
        )?);

        cache.insert(self.selected_db, storage.clone());
        Ok(storage)
    }

    fn should_encrypt_db(&self, db_index: u64) -> bool {
        // DB 0-9 are non-encrypted, DB 10+ are encrypted
        self.option.encrypt && db_index >= 10
    }

    pub async fn handle(
        &mut self,
        mut stream: tokio::net::TcpStream,
    ) -> Result<(), DBError> {
        let mut buf = [0; 512];

        loop {
            let len = match stream.read(&mut buf).await {
                Ok(0) => {
                    println!("[handle] connection closed");
                    return Ok(());
                }
                Ok(len) => len,
                Err(e) => {
                    println!("[handle] read error: {:?}", e);
                    return Err(e.into());
                }
            };

            let mut s = str::from_utf8(&buf[..len])?;
            while !s.is_empty() {
                let (cmd, protocol, remaining) = match Cmd::from(s) {
                    Ok((cmd, protocol, remaining)) => (cmd, protocol, remaining),
                    Err(e) => {
                        println!("\x1b[31;1mprotocol error: {:?}\x1b[0m", e);
                        (Cmd::Unknow("protocol_error".to_string()), Protocol::err(&format!("protocol error: {}", e.0)), "")
                    }
                };
                s = remaining;

                if self.option.debug {
                    println!("\x1b[34;1mgot command: {:?}, protocol: {:?}\x1b[0m", cmd, protocol);
                } else {
                    println!("got command: {:?}, protocol: {:?}", cmd, protocol);
                }

                // Check if this is a QUIT command before processing
                let is_quit = matches!(cmd, Cmd::Quit);

                let res = match cmd.run(self).await {
                    Ok(p) => p,
                    Err(e) => {
                        if self.option.debug {
                            eprintln!("[run error] {:?}", e);
                        }
                        Protocol::err(&format!("ERR {}", e.0))
                    }
                };

                if self.option.debug {
                    println!("\x1b[34;1mqueued cmd {:?}\x1b[0m", self.queued_cmd);
                    println!("\x1b[32;1mgoing to send response {}\x1b[0m", res.encode());
                } else {
                    print!("queued cmd {:?}", self.queued_cmd);
                    println!("going to send response {}", res.encode());
                }

                _ = stream.write(res.encode().as_bytes()).await?;

                // If this was a QUIT command, close the connection
                if is_quit {
                    println!("[handle] QUIT command received, closing connection");
                    return Ok(());
                }
            }
        }
    }
}
@@ -1,126 +0,0 @@
use std::{
    path::Path,
    time::{SystemTime, UNIX_EPOCH},
};

use redb::{Database, TableDefinition};
use serde::{Deserialize, Serialize};

use crate::crypto::CryptoFactory;
use crate::error::DBError;

// Re-export modules
mod storage_basic;
mod storage_hset;
mod storage_lists;
mod storage_extra;

// Re-export implementations
// Note: These imports are used by the impl blocks in the submodules
// The compiler shows them as unused because they're not directly used in this file
// but they're needed for the Storage struct methods to be available
pub use storage_extra::*;

// Table definitions for different Redis data types
const TYPES_TABLE: TableDefinition<&str, &str> = TableDefinition::new("types");
const STRINGS_TABLE: TableDefinition<&str, &[u8]> = TableDefinition::new("strings");
const HASHES_TABLE: TableDefinition<(&str, &str), &[u8]> = TableDefinition::new("hashes");
const LISTS_TABLE: TableDefinition<&str, &[u8]> = TableDefinition::new("lists");
const STREAMS_META_TABLE: TableDefinition<&str, &[u8]> = TableDefinition::new("streams_meta");
const STREAMS_DATA_TABLE: TableDefinition<(&str, &str), &[u8]> = TableDefinition::new("streams_data");
const ENCRYPTED_TABLE: TableDefinition<&str, u8> = TableDefinition::new("encrypted");
const EXPIRATION_TABLE: TableDefinition<&str, u64> = TableDefinition::new("expiration");

#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct StreamEntry {
    pub fields: Vec<(String, String)>,
}

#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct ListValue {
    pub elements: Vec<String>,
}

#[inline]
pub fn now_in_millis() -> u128 {
    let start = SystemTime::now();
    let duration_since_epoch = start.duration_since(UNIX_EPOCH).unwrap();
    duration_since_epoch.as_millis()
}

pub struct Storage {
    db: Database,
    crypto: Option<CryptoFactory>,
}

impl Storage {
    pub fn new(path: impl AsRef<Path>, should_encrypt: bool, master_key: Option<&str>) -> Result<Self, DBError> {
        let db = Database::create(path)?;

        // Create tables if they don't exist
        let write_txn = db.begin_write()?;
        {
            let _ = write_txn.open_table(TYPES_TABLE)?;
            let _ = write_txn.open_table(STRINGS_TABLE)?;
            let _ = write_txn.open_table(HASHES_TABLE)?;
            let _ = write_txn.open_table(LISTS_TABLE)?;
            let _ = write_txn.open_table(STREAMS_META_TABLE)?;
            let _ = write_txn.open_table(STREAMS_DATA_TABLE)?;
            let _ = write_txn.open_table(ENCRYPTED_TABLE)?;
            let _ = write_txn.open_table(EXPIRATION_TABLE)?;
        }
        write_txn.commit()?;

        // Check if database was previously encrypted
        let read_txn = db.begin_read()?;
        let encrypted_table = read_txn.open_table(ENCRYPTED_TABLE)?;
        let was_encrypted = encrypted_table.get("encrypted")?.map(|v| v.value() == 1).unwrap_or(false);
        drop(read_txn);

        let crypto = if should_encrypt || was_encrypted {
            if let Some(key) = master_key {
                Some(CryptoFactory::new(key.as_bytes()))
            } else {
                return Err(DBError("Encryption requested but no master key provided".to_string()));
            }
        } else {
            None
        };

        // If we're enabling encryption for the first time, mark it
        if should_encrypt && !was_encrypted {
            let write_txn = db.begin_write()?;
            {
                let mut encrypted_table = write_txn.open_table(ENCRYPTED_TABLE)?;
                encrypted_table.insert("encrypted", &1u8)?;
            }
            write_txn.commit()?;
        }

        Ok(Storage {
            db,
            crypto,
        })
    }

    pub fn is_encrypted(&self) -> bool {
        self.crypto.is_some()
    }

    // Helper methods for encryption
    fn encrypt_if_needed(&self, data: &[u8]) -> Result<Vec<u8>, DBError> {
        if let Some(crypto) = &self.crypto {
            Ok(crypto.encrypt(data))
        } else {
            Ok(data.to_vec())
        }
    }

    fn decrypt_if_needed(&self, data: &[u8]) -> Result<Vec<u8>, DBError> {
        if let Some(crypto) = &self.crypto {
            Ok(crypto.decrypt(data)?)
        } else {
            Ok(data.to_vec())
        }
    }
}
1251 specs/backgroundinfo/lance.md (new file)
File diff suppressed because it is too large
6847 specs/backgroundinfo/lancedb.md (new file)
File diff suppressed because it is too large
113 specs/backgroundinfo/sled.md (new file)
@@ -0,0 +1,113 @@
========================
CODE SNIPPETS
========================
TITLE: Basic Database Operations with sled in Rust
DESCRIPTION: This snippet demonstrates fundamental operations using the `sled` embedded database in Rust. It covers opening a database tree, inserting and retrieving key-value pairs, performing range queries, deleting entries, and executing an atomic compare-and-swap operation. It also shows how to flush changes to disk for durability.

SOURCE: https://github.com/spacejam/sled/blob/main/README.md#_snippet_0

LANGUAGE: Rust
CODE:
```
let tree = sled::open("/tmp/welcome-to-sled")?;

// insert and get, similar to std's BTreeMap
let old_value = tree.insert("key", "value")?;

assert_eq!(
    tree.get(&"key")?,
    Some(sled::IVec::from("value")),
);

// range queries
for kv_result in tree.range("key_1".."key_9") {}

// deletion
let old_value = tree.remove(&"key")?;

// atomic compare and swap
tree.compare_and_swap(
    "key",
    Some("current_value"),
    Some("new_value"),
)?;

// block until all operations are stable on disk
// (flush_async also available to get a Future)
tree.flush()?;
```

----------------------------------------

TITLE: Subscribing to sled Events Asynchronously (Rust)
DESCRIPTION: This snippet demonstrates how to asynchronously subscribe to events on key prefixes in a `sled` database. It initializes a `sled` database, creates a `Subscriber` for all key prefixes, inserts a key-value pair to trigger an event, and then uses `extreme::run` to await and process incoming events. The `Subscriber` struct implements `Future<Output=Option<Event>>`, allowing it to be awaited in an async context.

SOURCE: https://github.com/spacejam/sled/blob/main/README.md#_snippet_1

LANGUAGE: Rust
CODE:
```
let sled = sled::open("my_db").unwrap();

let mut sub = sled.watch_prefix("");

sled.insert(b"a", b"a").unwrap();

extreme::run(async move {
    while let Some(event) = (&mut sub).await {
        println!("got event {:?}", event);
    }
});
```

----------------------------------------

TITLE: Iterating Subscriber Events with Async/Await in Rust
DESCRIPTION: This snippet demonstrates how to asynchronously iterate over events from a `Subscriber` instance in Rust. Since `Subscriber` now implements `Future`, it can be awaited in a loop to process incoming events, enabling efficient prefix watching. The loop continues as long as new events are available.

SOURCE: https://github.com/spacejam/sled/blob/main/CHANGELOG.md#_snippet_0

LANGUAGE: Rust
CODE:
```
while let Some(event) = (&mut subscriber).await {}
```

----------------------------------------

TITLE: Suppressing TSAN Race on Arc::drop in Rust
DESCRIPTION: This suppression addresses a false positive race detection by ThreadSanitizer in Rust's `Arc::drop` implementation. TSAN fails to correctly reason about the raw atomic `Acquire` fence used after the strong-count atomic subtraction with a `Release` fence in the `Drop` implementation, leading to an erroneous race report.

SOURCE: https://github.com/spacejam/sled/blob/main/tsan_suppressions.txt#_snippet_0

LANGUAGE: TSAN Suppression
CODE:
```
race:Arc*drop
```

----------------------------------------

TITLE: Suppressing TSAN Race on std::thread::local in Rust
DESCRIPTION: This suppression addresses ThreadSanitizer false positives when using Rust's `std::thread::local`. Similar to `lazy_static`, `std::thread::local` utilizes implicit barriers that TSAN fails to recognize, leading to incorrect race condition reports.

SOURCE: https://github.com/spacejam/sled/blob/main/tsan_suppressions.txt#_snippet_2

LANGUAGE: TSAN Suppression
CODE:
```
race:std::thread::local
```

----------------------------------------

TITLE: Suppressing TSAN Race on lazy_static in Rust
DESCRIPTION: This suppression targets ThreadSanitizer false positives related to the `lazy_static` crate in Rust. `lazy_static` relies on implicit memory barriers that TSAN does not correctly detect, causing it to report races where none exist.

SOURCE: https://github.com/spacejam/sled/blob/main/tsan_suppressions.txt#_snippet_1

LANGUAGE: TSAN Suppression
CODE:
```
race:lazy_static
```
735 specs/lance_implementation.md (new file)
@@ -0,0 +1,735 @@
Looking at your request to ensure embedding happens with an external embedding provider configured in Redis at `config:core:aiembed:url`, I'll modify the implementation to use an external embedding service instead of local models.

## Updated Implementation with External Embedding Provider

### 1. Update Dependencies in `Cargo.toml`

```toml
[dependencies]
# ... existing dependencies ...
lance = "0.18"
lance-index = "0.18"
lance-linalg = "0.18"
arrow = "52"
arrow-array = "52"
arrow-schema = "52"
parquet = "52"
uuid = { version = "1.10", features = ["v4"] }
reqwest = { version = "0.11", features = ["json"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
base64 = "0.22"
image = "0.25"
```
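
The module below looks up the embedding endpoint at request time from the `config:core:aiembed:url` hash, so the provider has to be configured once before any embedding call. As a sketch (the port and URL are placeholders, not part of the spec), and matching the `EmbeddingRequest`/`EmbeddingResponse` structs defined below:

```
# Point herodb at an external embedding service (hypothetical endpoint)
redis-cli -p 6379 HSET config:core:aiembed:url url http://127.0.0.1:8000/embed

# The service is expected to exchange JSON shaped like:
#   request:  {"texts": ["hello"], "images": null, "model": null}
#   response: {"embeddings": [[0.1, 0.2, ...]], "model": "...", "usage": {...}}
```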

### 2. Create Enhanced Lance Module with External Embedding

Create `src/lance_store.rs`:

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::RwLock;

use arrow::array::{Float32Array, StringArray, BinaryArray, ArrayRef};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
use futures::TryStreamExt; // for try_collect on the Lance scan stream
use lance::dataset::{Dataset, WriteParams, WriteMode};
use lance::index::vector::VectorIndexParams;
use lance_index::vector::pq::PQBuildParams;
use lance_index::vector::ivf::IvfBuildParams;

use serde::{Deserialize, Serialize};
use crate::error::DBError;
use crate::protocol::Protocol;

#[derive(Debug, Serialize, Deserialize)]
struct EmbeddingRequest {
    texts: Option<Vec<String>>,
    images: Option<Vec<String>>, // base64 encoded
    model: Option<String>,
}

#[derive(Debug, Serialize, Deserialize)]
struct EmbeddingResponse {
    embeddings: Vec<Vec<f32>>,
    model: String,
    usage: Option<HashMap<String, u32>>,
}

pub struct LanceStore {
    datasets: Arc<RwLock<HashMap<String, Arc<Dataset>>>>,
    data_dir: PathBuf,
    http_client: reqwest::Client,
}

impl LanceStore {
    pub async fn new(data_dir: PathBuf) -> Result<Self, DBError> {
        // Create data directory if it doesn't exist
        std::fs::create_dir_all(&data_dir)
            .map_err(|e| DBError(format!("Failed to create Lance data directory: {}", e)))?;

        let http_client = reqwest::Client::builder()
            .timeout(std::time::Duration::from_secs(30))
            .build()
            .map_err(|e| DBError(format!("Failed to create HTTP client: {}", e)))?;

        Ok(Self {
            datasets: Arc::new(RwLock::new(HashMap::new())),
            data_dir,
            http_client,
        })
    }

    /// Get embedding service URL from Redis config
    async fn get_embedding_url(&self, server: &crate::server::Server) -> Result<String, DBError> {
        // Get the embedding URL from Redis config
        let key = "config:core:aiembed:url";

        // Use HGET to retrieve the URL from Redis hash
        let cmd = crate::cmd::Cmd::HGet {
            key: key.to_string(),
            field: "url".to_string(),
        };

        // Execute command to get the config
        let result = cmd.run(server).await?;

        match result {
            Protocol::BulkString(url) => Ok(url),
            Protocol::SimpleString(url) => Ok(url),
            Protocol::Null => Err(DBError(
                "Embedding service URL not configured. Set it with: HSET config:core:aiembed:url url <YOUR_EMBEDDING_SERVICE_URL>".to_string()
            )),
            _ => Err(DBError("Invalid embedding URL configuration".to_string())),
        }
    }

    /// Call external embedding service
    async fn call_embedding_service(
        &self,
        server: &crate::server::Server,
        texts: Option<Vec<String>>,
        images: Option<Vec<String>>,
    ) -> Result<Vec<Vec<f32>>, DBError> {
        let url = self.get_embedding_url(server).await?;

        let request = EmbeddingRequest {
            texts,
            images,
            model: None, // Let the service use its default
        };

        let response = self.http_client
            .post(&url)
            .json(&request)
            .send()
            .await
            .map_err(|e| DBError(format!("Failed to call embedding service: {}", e)))?;

        if !response.status().is_success() {
            let status = response.status();
            let error_text = response.text().await.unwrap_or_default();
            return Err(DBError(format!(
                "Embedding service returned error {}: {}",
                status, error_text
            )));
        }

        let embedding_response: EmbeddingResponse = response
            .json()
            .await
            .map_err(|e| DBError(format!("Failed to parse embedding response: {}", e)))?;

        Ok(embedding_response.embeddings)
    }

    pub async fn embed_text(
        &self,
        server: &crate::server::Server,
        texts: Vec<String>
    ) -> Result<Vec<Vec<f32>>, DBError> {
        if texts.is_empty() {
            return Ok(Vec::new());
        }

        self.call_embedding_service(server, Some(texts), None).await
    }

    pub async fn embed_image(
        &self,
        server: &crate::server::Server,
        image_bytes: Vec<u8>
    ) -> Result<Vec<f32>, DBError> {
        // Convert image bytes to base64 (base64 0.22 removed the top-level
        // `encode` function in favor of the Engine API)
        use base64::Engine as _;
        let base64_image = base64::engine::general_purpose::STANDARD.encode(&image_bytes);

        let embeddings = self.call_embedding_service(
            server,
            None,
            Some(vec![base64_image])
        ).await?;

        embeddings.into_iter()
            .next()
            .ok_or_else(|| DBError("No embedding returned for image".to_string()))
    }

    pub async fn create_dataset(
        &self,
        name: &str,
        schema: Schema,
    ) -> Result<(), DBError> {
        let dataset_path = self.data_dir.join(format!("{}.lance", name));

        // Create empty dataset with schema
        let write_params = WriteParams {
            mode: WriteMode::Create,
            ..Default::default()
        };

        // Create an empty RecordBatch with the schema
        let empty_batch = RecordBatch::new_empty(Arc::new(schema));
        let batches = vec![empty_batch];

        let dataset = Dataset::write(
            batches,
            dataset_path.to_str().unwrap(),
            Some(write_params)
        ).await
        .map_err(|e| DBError(format!("Failed to create dataset: {}", e)))?;

        let mut datasets = self.datasets.write().await;
        datasets.insert(name.to_string(), Arc::new(dataset));

        Ok(())
    }

    pub async fn write_vectors(
        &self,
        dataset_name: &str,
        vectors: Vec<Vec<f32>>,
        metadata: Option<HashMap<String, Vec<String>>>,
    ) -> Result<usize, DBError> {
        let dataset_path = self.data_dir.join(format!("{}.lance", dataset_name));

        // Open or get cached dataset
        let dataset = self.get_or_open_dataset(dataset_name).await?;

        // Build RecordBatch
        let num_vectors = vectors.len();
        if num_vectors == 0 {
            return Ok(0);
        }

        let dim = vectors.first()
            .ok_or_else(|| DBError("Empty vectors".to_string()))?
            .len();

        // Flatten vectors
        let flat_vectors: Vec<f32> = vectors.into_iter().flatten().collect();
        let vector_array = Float32Array::from(flat_vectors);
        let vector_array = arrow::array::FixedSizeListArray::try_new_from_values(
            vector_array,
            dim as i32
        ).map_err(|e| DBError(format!("Failed to create vector array: {}", e)))?;

        let mut arrays: Vec<ArrayRef> = vec![Arc::new(vector_array)];
        let mut fields = vec![Field::new(
            "vector",
            DataType::FixedSizeList(
                Arc::new(Field::new("item", DataType::Float32, true)),
                dim as i32
            ),
            false
        )];

        // Add metadata columns if provided
        if let Some(metadata) = metadata {
            for (key, values) in metadata {
                if values.len() != num_vectors {
                    return Err(DBError(format!(
                        "Metadata field '{}' has {} values but expected {}",
                        key, values.len(), num_vectors
                    )));
                }
                let array = StringArray::from(values);
                arrays.push(Arc::new(array));
                fields.push(Field::new(&key, DataType::Utf8, true));
            }
        }

        let schema = Arc::new(Schema::new(fields));
        let batch = RecordBatch::try_new(schema, arrays)
            .map_err(|e| DBError(format!("Failed to create RecordBatch: {}", e)))?;

        // Append to dataset
        let write_params = WriteParams {
            mode: WriteMode::Append,
            ..Default::default()
        };

        Dataset::write(
            vec![batch],
            dataset_path.to_str().unwrap(),
            Some(write_params)
        ).await
        .map_err(|e| DBError(format!("Failed to write to dataset: {}", e)))?;

        // Refresh cached dataset
        let mut datasets = self.datasets.write().await;
        datasets.remove(dataset_name);

        Ok(num_vectors)
    }

    pub async fn search_vectors(
        &self,
        dataset_name: &str,
        query_vector: Vec<f32>,
        k: usize,
        nprobes: Option<usize>,
        refine_factor: Option<usize>,
    ) -> Result<Vec<(f32, HashMap<String, String>)>, DBError> {
        let dataset = self.get_or_open_dataset(dataset_name).await?;

        // Build query
        let mut query = dataset.scan();
        query = query.nearest(
            "vector",
            &query_vector,
            k,
        ).map_err(|e| DBError(format!("Failed to build search query: {}", e)))?;

        if let Some(nprobes) = nprobes {
            query = query.nprobes(nprobes);
        }

        if let Some(refine) = refine_factor {
            query = query.refine_factor(refine);
        }

        // Execute search
        let results = query
            .try_into_stream()
            .await
            .map_err(|e| DBError(format!("Failed to execute search: {}", e)))?
            .try_collect::<Vec<_>>()
            .await
            .map_err(|e| DBError(format!("Failed to collect results: {}", e)))?;

        // Process results
        let mut output = Vec::new();
        for batch in results {
            // Get distances
            let distances = batch
                .column_by_name("_distance")
                .ok_or_else(|| DBError("No distance column".to_string()))?
                .as_any()
                .downcast_ref::<Float32Array>()
                .ok_or_else(|| DBError("Invalid distance type".to_string()))?;

            // Get metadata
            for i in 0..batch.num_rows() {
                let distance = distances.value(i);
                let mut metadata = HashMap::new();

                for field in batch.schema().fields() {
                    if field.name() != "vector" && field.name() != "_distance" {
                        if let Some(col) = batch.column_by_name(field.name()) {
                            if let Some(str_array) = col.as_any().downcast_ref::<StringArray>() {
                                if !str_array.is_null(i) {
                                    metadata.insert(
                                        field.name().to_string(),
                                        str_array.value(i).to_string()
                                    );
                                }
                            }
                        }
                    }
                }

                output.push((distance, metadata));
            }
        }

        Ok(output)
    }

    pub async fn store_multimodal(
        &self,
        server: &crate::server::Server,
        dataset_name: &str,
        text: Option<String>,
        image_bytes: Option<Vec<u8>>,
        metadata: HashMap<String, String>,
    ) -> Result<String, DBError> {
        // Generate ID
        let id = uuid::Uuid::new_v4().to_string();

        // Generate embeddings using external service
        let embedding = if let Some(text) = text.as_ref() {
            self.embed_text(server, vec![text.clone()]).await?
                .into_iter()
                .next()
                .ok_or_else(|| DBError("No embedding returned".to_string()))?
        } else if let Some(img) = image_bytes.as_ref() {
            self.embed_image(server, img.clone()).await?
        } else {
            return Err(DBError("No text or image provided".to_string()));
        };

        // Prepare metadata
        let mut full_metadata = metadata;
        full_metadata.insert("id".to_string(), id.clone());
        if let Some(text) = text {
            full_metadata.insert("text".to_string(), text);
        }
        if let Some(img) = image_bytes {
            // base64 0.22 uses the Engine API instead of base64::encode
            use base64::Engine as _;
            full_metadata.insert(
                "image_base64".to_string(),
                base64::engine::general_purpose::STANDARD.encode(img),
            );
        }

        // Convert metadata to column vectors
        let mut metadata_cols = HashMap::new();
        for (key, value) in full_metadata {
            metadata_cols.insert(key, vec![value]);
        }

        // Write to dataset
        self.write_vectors(dataset_name, vec![embedding], Some(metadata_cols)).await?;

        Ok(id)
    }

pub async fn search_with_text(
|
||||||
|
&self,
|
||||||
|
server: &crate::server::Server,
|
||||||
|
dataset_name: &str,
|
||||||
|
query_text: String,
|
||||||
|
k: usize,
|
||||||
|
nprobes: Option<usize>,
|
||||||
|
refine_factor: Option<usize>,
|
||||||
|
) -> Result<Vec<(f32, HashMap<String, String>)>, DBError> {
|
||||||
|
// Embed the query text using external service
|
||||||
|
let embeddings = self.embed_text(server, vec![query_text]).await?;
|
||||||
|
let query_vector = embeddings.into_iter()
|
||||||
|
.next()
|
||||||
|
.ok_or_else(|| DBError("No embedding returned for query".to_string()))?;
|
||||||
|
|
||||||
|
// Search with the embedding
|
||||||
|
self.search_vectors(dataset_name, query_vector, k, nprobes, refine_factor).await
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn create_index(
|
||||||
|
&self,
|
||||||
|
dataset_name: &str,
|
||||||
|
index_type: &str,
|
||||||
|
num_partitions: Option<usize>,
|
||||||
|
num_sub_vectors: Option<usize>,
|
||||||
|
) -> Result<(), DBError> {
|
||||||
|
let dataset = self.get_or_open_dataset(dataset_name).await?;
|
||||||
|
|
||||||
|
let mut params = VectorIndexParams::default();
|
||||||
|
|
||||||
|
match index_type.to_uppercase().as_str() {
|
||||||
|
"IVF_PQ" => {
|
||||||
|
params.ivf = IvfBuildParams {
|
||||||
|
num_partitions: num_partitions.unwrap_or(256),
|
||||||
|
..Default::default()
|
||||||
|
};
|
||||||
|
params.pq = PQBuildParams {
|
||||||
|
num_sub_vectors: num_sub_vectors.unwrap_or(16),
|
||||||
|
..Default::default()
|
||||||
|
};
|
||||||
|
}
|
||||||
|
_ => return Err(DBError(format!("Unsupported index type: {}", index_type))),
|
||||||
|
}
|
||||||
|
|
||||||
|
dataset.create_index(
|
||||||
|
&["vector"],
|
||||||
|
lance::index::IndexType::Vector,
|
||||||
|
None,
|
||||||
|
¶ms,
|
||||||
|
true
|
||||||
|
).await
|
||||||
|
.map_err(|e| DBError(format!("Failed to create index: {}", e)))?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn get_or_open_dataset(&self, name: &str) -> Result<Arc<Dataset>, DBError> {
|
||||||
|
let mut datasets = self.datasets.write().await;
|
||||||
|
|
||||||
|
if let Some(dataset) = datasets.get(name) {
|
||||||
|
return Ok(dataset.clone());
|
||||||
|
}
|
||||||
|
|
||||||
|
let dataset_path = self.data_dir.join(format!("{}.lance", name));
|
||||||
|
if !dataset_path.exists() {
|
||||||
|
return Err(DBError(format!("Dataset '{}' does not exist", name)));
|
||||||
|
}
|
||||||
|
|
||||||
|
let dataset = Dataset::open(dataset_path.to_str().unwrap())
|
||||||
|
.await
|
||||||
|
.map_err(|e| DBError(format!("Failed to open dataset: {}", e)))?;
|
||||||
|
|
||||||
|
let dataset = Arc::new(dataset);
|
||||||
|
datasets.insert(name.to_string(), dataset.clone());
|
||||||
|
|
||||||
|
Ok(dataset)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list_datasets(&self) -> Result<Vec<String>, DBError> {
|
||||||
|
let mut datasets = Vec::new();
|
||||||
|
|
||||||
|
let entries = std::fs::read_dir(&self.data_dir)
|
||||||
|
.map_err(|e| DBError(format!("Failed to read data directory: {}", e)))?;
|
||||||
|
|
||||||
|
for entry in entries {
|
||||||
|
let entry = entry.map_err(|e| DBError(format!("Failed to read entry: {}", e)))?;
|
||||||
|
let path = entry.path();
|
||||||
|
|
||||||
|
if path.is_dir() {
|
||||||
|
if let Some(name) = path.file_name() {
|
||||||
|
if let Some(name_str) = name.to_str() {
|
||||||
|
if name_str.ends_with(".lance") {
|
||||||
|
let dataset_name = name_str.trim_end_matches(".lance");
|
||||||
|
datasets.push(dataset_name.to_string());
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(datasets)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn drop_dataset(&self, name: &str) -> Result<(), DBError> {
|
||||||
|
// Remove from cache
|
||||||
|
let mut datasets = self.datasets.write().await;
|
||||||
|
datasets.remove(name);
|
||||||
|
|
||||||
|
// Delete from disk
|
||||||
|
let dataset_path = self.data_dir.join(format!("{}.lance", name));
|
||||||
|
if dataset_path.exists() {
|
||||||
|
std::fs::remove_dir_all(dataset_path)
|
||||||
|
.map_err(|e| DBError(format!("Failed to delete dataset: {}", e)))?;
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_dataset_info(&self, name: &str) -> Result<HashMap<String, String>, DBError> {
|
||||||
|
let dataset = self.get_or_open_dataset(name).await?;
|
||||||
|
|
||||||
|
let mut info = HashMap::new();
|
||||||
|
info.insert("name".to_string(), name.to_string());
|
||||||
|
info.insert("version".to_string(), dataset.version().to_string());
|
||||||
|
info.insert("num_rows".to_string(), dataset.count_rows().await?.to_string());
|
||||||
|
|
||||||
|
// Get schema info
|
||||||
|
let schema = dataset.schema();
|
||||||
|
let fields: Vec<String> = schema.fields()
|
||||||
|
.iter()
|
||||||
|
.map(|f| format!("{}:{}", f.name(), f.data_type()))
|
||||||
|
.collect();
|
||||||
|
info.insert("schema".to_string(), fields.join(", "));
|
||||||
|
|
||||||
|
Ok(info)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
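
The caching and listing logic above both hinge on one on-disk convention: each dataset is a directory named `<name>.lance` under the data directory. A minimal, dependency-free sketch of that convention (the helper names here are illustrative, not part of the implementation above):

```rust
use std::path::Path;

// Build the on-disk directory name for a dataset, mirroring
// `data_dir.join(format!("{}.lance", name))` above.
fn dataset_dir_name(name: &str) -> String {
    format!("{}.lance", name)
}

// Recover the dataset name from a directory path; returns None for
// entries that do not follow the `.lance` convention.
fn dataset_name_from_dir(dir: &Path) -> Option<String> {
    dir.file_name()?
        .to_str()?
        .strip_suffix(".lance")
        .map(|s| s.to_string())
}
```

`strip_suffix` removes at most one trailing `.lance`, which makes the round trip with `dataset_dir_name` exact.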

### 3. Update Command Implementations

Update the command implementations to pass the server reference for embedding service access:

```rust
// In cmd.rs, update the lance command implementations

async fn lance_store_cmd(
    server: &Server,
    dataset: &str,
    text: Option<String>,
    image_base64: Option<String>,
    metadata: HashMap<String, String>,
) -> Result<Protocol, DBError> {
    let lance_store = server.lance_store()?;

    // Decode image if provided
    let image_bytes = if let Some(b64) = image_base64 {
        Some(base64::decode(b64).map_err(|e| DBError(format!("Invalid base64 image: {}", e)))?)
    } else {
        None
    };

    // Pass the server reference so the store can reach the embedding
    // service configuration held in Redis
    let id = lance_store
        .store_multimodal(server, dataset, text, image_bytes, metadata)
        .await?;

    Ok(Protocol::BulkString(id))
}

async fn lance_embed_text_cmd(server: &Server, texts: &[String]) -> Result<Protocol, DBError> {
    let lance_store = server.lance_store()?;

    // Pass the server reference for embedding service access
    let embeddings = lance_store.embed_text(server, texts.to_vec()).await?;

    // Return as an array of vector strings
    let mut output = Vec::new();
    for embedding in embeddings {
        let vector_str = format!(
            "[{}]",
            embedding
                .iter()
                .map(|f| f.to_string())
                .collect::<Vec<_>>()
                .join(",")
        );
        output.push(Protocol::BulkString(vector_str));
    }

    Ok(Protocol::Array(output))
}

async fn lance_search_text_cmd(
    server: &Server,
    dataset: &str,
    query_text: &str,
    k: usize,
    nprobes: Option<usize>,
    refine_factor: Option<usize>,
) -> Result<Protocol, DBError> {
    let lance_store = server.lance_store()?;

    // Search using a text query (it will be embedded automatically)
    let results = lance_store
        .search_with_text(
            server,
            dataset,
            query_text.to_string(),
            k,
            nprobes,
            refine_factor,
        )
        .await?;

    // Format results as (distance, metadata JSON) pairs
    let mut output = Vec::new();
    for (distance, metadata) in results {
        let metadata_json = serde_json::to_string(&metadata).unwrap_or_else(|_| "{}".to_string());

        output.push(Protocol::Array(vec![
            Protocol::BulkString(distance.to_string()),
            Protocol::BulkString(metadata_json),
        ]));
    }

    Ok(Protocol::Array(output))
}

// Add a new command variant for text-based search
pub enum Cmd {
    // ... existing commands ...
    LanceSearchText {
        dataset: String,
        query_text: String,
        k: usize,
        nprobes: Option<usize>,
        refine_factor: Option<usize>,
    },
}
```
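
As a sanity check on the wire format for the new variant, the required `LANCE.SEARCH.TEXT` arguments can be parsed with a small helper. This is a hypothetical sketch covering only the mandatory `dataset`, `query`, and `K <k>` arguments; the optional `NPROBES`/`REFINE` flags are omitted:

```rust
// Parse `<dataset> <query> K <k>` into a (dataset, query, k) triple.
fn parse_search_text(args: &[&str]) -> Result<(String, String, usize), String> {
    match args {
        [dataset, query, k_kw, k] if k_kw.eq_ignore_ascii_case("K") => {
            let k: usize = k.parse().map_err(|_| "K must be an integer".to_string())?;
            Ok((dataset.to_string(), query.to_string(), k))
        }
        _ => Err("usage: LANCE.SEARCH.TEXT <dataset> <query> K <k>".to_string()),
    }
}
```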

## Usage Examples

### 1. Configure the Embedding Service

First, configure the endpoint of the external embedding service:

```bash
# Configure the embedding service endpoint
redis-cli> HSET config:core:aiembed:url url "http://localhost:8000/embeddings"
OK

# Or use a cloud service
redis-cli> HSET config:core:aiembed:url url "https://api.openai.com/v1/embeddings"
OK
```
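
The handlers resolve this endpoint at request time. A dependency-free sketch of the lookup, assuming the `config:core:aiembed:url` hash has already been read into a map (the function name is illustrative; the real implementation goes through the server's storage layer):

```rust
use std::collections::HashMap;

// Resolve the embedding endpoint from the config hash, with a clear
// error when it has not been configured yet.
fn resolve_embedding_url(config_hash: &HashMap<String, String>) -> Result<String, String> {
    config_hash.get("url").cloned().ok_or_else(|| {
        "embedding URL not configured: HSET config:core:aiembed:url url <endpoint>".to_string()
    })
}
```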

### 2. Use Lance Commands with Automatic External Embedding

```bash
# Create a dataset
redis-cli> LANCE.CREATE products DIM 1536 SCHEMA name:string price:float category:string
OK

# Store text with automatic embedding (calls the external service)
redis-cli> LANCE.STORE products TEXT "Wireless noise-canceling headphones with 30-hour battery" name:AirPods price:299.99 category:Electronics
"uuid-123-456"

# Search using a text query (the query is embedded automatically)
redis-cli> LANCE.SEARCH.TEXT products "best headphones for travel" K 5
1) "0.92"
2) "{\"id\":\"uuid-123\",\"name\":\"AirPods\",\"price\":\"299.99\"}"

# Get embeddings directly
redis-cli> LANCE.EMBED.TEXT "This text will be embedded"
1) "[0.123, 0.456, 0.789, ...]"
```

## External Embedding Service API Specification

The external embedding service should accept POST requests in this format:

```json
// Request
{
    "texts": ["text1", "text2"],       // Optional
    "images": ["base64_img1"],         // Optional
    "model": "text-embedding-ada-002"  // Optional
}

// Response
{
    "embeddings": [[0.1, 0.2, ...], [0.3, 0.4, ...]],
    "model": "text-embedding-ada-002",
    "usage": {
        "prompt_tokens": 100,
        "total_tokens": 100
    }
}
```
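
A minimal sketch of building the request body from the spec above, using plain string formatting to stay dependency-free (the actual implementation would use `serde_json`, and this version does not escape quotes inside the input texts):

```rust
// Build a text-only embedding request body: {"texts":[...],"model":"..."}
fn build_embed_request(texts: &[&str], model: &str) -> String {
    let quoted: Vec<String> = texts.iter().map(|t| format!("\"{}\"", t)).collect();
    format!("{{\"texts\":[{}],\"model\":\"{}\"}}", quoted.join(","), model)
}
```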

## Error Handling

The implementation includes comprehensive error handling:

1. **Missing Configuration**: Clear error message if the embedding URL is not configured
2. **Service Failures**: Graceful handling of embedding service errors
3. **Timeout Protection**: 30-second timeout for embedding requests
4. **Retry Logic**: Could be added for resilience
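
The retry logic mentioned in point 4 could look like the following sketch: a generic retry wrapper with exponential backoff around a fallible request closure. This is an assumption about future work, not part of the current implementation:

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry `attempt` up to `max_retries` extra times, doubling the delay
// after each failure (base, 2*base, 4*base, ...).
fn retry_with_backoff<T, E>(
    mut attempt: impl FnMut() -> Result<T, E>,
    max_retries: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut tries = 0;
    loop {
        match attempt() {
            Ok(v) => return Ok(v),
            Err(e) if tries >= max_retries => return Err(e),
            Err(_) => {
                sleep(base_delay * 2u32.pow(tries));
                tries += 1;
            }
        }
    }
}
```

In the async handlers this would use `tokio::time::sleep` instead of blocking the thread.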

## Benefits of This Approach

1. **Flexibility**: Supports any embedding service with a compatible API
2. **Cost Control**: Use your preferred embedding provider
3. **Scalability**: The embedding service can be scaled independently
4. **Consistency**: All embeddings use the same configured service
5. **Security**: API keys and endpoints are stored securely in Redis

This implementation ensures that all embedding operations go through the external service configured in Redis, providing a clean separation between the vector database functionality and embedding generation.

TODO EXTRA:

- secret for the embedding service API key
@@ -12,17 +12,17 @@
|
|||||||
|
|
||||||
use std::str::FromStr;
|
use std::str::FromStr;
|
||||||
|
|
||||||
use secrecy::ExposeSecret;
|
|
||||||
use age::{Decryptor, Encryptor};
|
|
||||||
use age::x25519;
|
use age::x25519;
|
||||||
|
use age::{Decryptor, Encryptor};
|
||||||
|
use secrecy::ExposeSecret;
|
||||||
|
|
||||||
use ed25519_dalek::{Signature, Signer, Verifier, SigningKey, VerifyingKey};
|
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
|
||||||
|
|
||||||
use base64::{engine::general_purpose::STANDARD as B64, Engine as _};
|
use base64::{engine::general_purpose::STANDARD as B64, Engine as _};
|
||||||
|
|
||||||
|
use crate::error::DBError;
|
||||||
use crate::protocol::Protocol;
|
use crate::protocol::Protocol;
|
||||||
use crate::server::Server;
|
use crate::server::Server;
|
||||||
use crate::error::DBError;
|
|
||||||
|
|
||||||
// ---------- Internal helpers ----------
|
// ---------- Internal helpers ----------
|
||||||
|
|
||||||
@@ -32,7 +32,7 @@ pub enum AgeWireError {
|
|||||||
Crypto(String),
|
Crypto(String),
|
||||||
Utf8,
|
Utf8,
|
||||||
SignatureLen,
|
SignatureLen,
|
||||||
NotFound(&'static str), // which kind of key was missing
|
NotFound(&'static str), // which kind of key was missing
|
||||||
Storage(String),
|
Storage(String),
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -83,34 +83,38 @@ pub fn gen_enc_keypair() -> (String, String) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
pub fn gen_sign_keypair() -> (String, String) {
|
pub fn gen_sign_keypair() -> (String, String) {
|
||||||
use rand::RngCore;
|
|
||||||
use rand::rngs::OsRng;
|
use rand::rngs::OsRng;
|
||||||
|
use rand::RngCore;
|
||||||
|
|
||||||
// Generate random 32 bytes for the signing key
|
// Generate random 32 bytes for the signing key
|
||||||
let mut secret_bytes = [0u8; 32];
|
let mut secret_bytes = [0u8; 32];
|
||||||
OsRng.fill_bytes(&mut secret_bytes);
|
OsRng.fill_bytes(&mut secret_bytes);
|
||||||
|
|
||||||
let signing_key = SigningKey::from_bytes(&secret_bytes);
|
let signing_key = SigningKey::from_bytes(&secret_bytes);
|
||||||
let verifying_key = signing_key.verifying_key();
|
let verifying_key = signing_key.verifying_key();
|
||||||
|
|
||||||
// Encode as base64 for storage
|
// Encode as base64 for storage
|
||||||
let signing_key_b64 = B64.encode(signing_key.to_bytes());
|
let signing_key_b64 = B64.encode(signing_key.to_bytes());
|
||||||
let verifying_key_b64 = B64.encode(verifying_key.to_bytes());
|
let verifying_key_b64 = B64.encode(verifying_key.to_bytes());
|
||||||
|
|
||||||
(verifying_key_b64, signing_key_b64) // (verify_pub, signing_secret)
|
(verifying_key_b64, signing_key_b64) // (verify_pub, signing_secret)
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Encrypt `msg` for `recipient_str` (X25519). Returns base64(ciphertext).
|
/// Encrypt `msg` for `recipient_str` (X25519). Returns base64(ciphertext).
|
||||||
pub fn encrypt_b64(recipient_str: &str, msg: &str) -> Result<String, AgeWireError> {
|
pub fn encrypt_b64(recipient_str: &str, msg: &str) -> Result<String, AgeWireError> {
|
||||||
let recipient = parse_recipient(recipient_str)?;
|
let recipient = parse_recipient(recipient_str)?;
|
||||||
let enc = Encryptor::with_recipients(vec![Box::new(recipient)])
|
let enc =
|
||||||
.expect("failed to create encryptor"); // Handle Option<Encryptor>
|
Encryptor::with_recipients(vec![Box::new(recipient)]).expect("failed to create encryptor"); // Handle Option<Encryptor>
|
||||||
let mut out = Vec::new();
|
let mut out = Vec::new();
|
||||||
{
|
{
|
||||||
use std::io::Write;
|
use std::io::Write;
|
||||||
let mut w = enc.wrap_output(&mut out).map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
let mut w = enc
|
||||||
w.write_all(msg.as_bytes()).map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
.wrap_output(&mut out)
|
||||||
w.finish().map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
.map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
||||||
|
w.write_all(msg.as_bytes())
|
||||||
|
.map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
||||||
|
w.finish()
|
||||||
|
.map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
||||||
}
|
}
|
||||||
Ok(B64.encode(out))
|
Ok(B64.encode(out))
|
||||||
}
|
}
|
||||||
@@ -118,19 +122,27 @@ pub fn encrypt_b64(recipient_str: &str, msg: &str) -> Result<String, AgeWireErro
|
|||||||
/// Decrypt base64(ciphertext) with `identity_str`. Returns plaintext String.
|
/// Decrypt base64(ciphertext) with `identity_str`. Returns plaintext String.
|
||||||
pub fn decrypt_b64(identity_str: &str, ct_b64: &str) -> Result<String, AgeWireError> {
|
pub fn decrypt_b64(identity_str: &str, ct_b64: &str) -> Result<String, AgeWireError> {
|
||||||
let id = parse_identity(identity_str)?;
|
let id = parse_identity(identity_str)?;
|
||||||
let ct = B64.decode(ct_b64.as_bytes()).map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
let ct = B64
|
||||||
|
.decode(ct_b64.as_bytes())
|
||||||
|
.map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
||||||
let dec = Decryptor::new(&ct[..]).map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
let dec = Decryptor::new(&ct[..]).map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
||||||
|
|
||||||
// The decrypt method returns a Result<StreamReader, DecryptError>
|
// The decrypt method returns a Result<StreamReader, DecryptError>
|
||||||
let mut r = match dec {
|
let mut r = match dec {
|
||||||
Decryptor::Recipients(d) => d.decrypt(std::iter::once(&id as &dyn age::Identity))
|
Decryptor::Recipients(d) => d
|
||||||
|
.decrypt(std::iter::once(&id as &dyn age::Identity))
|
||||||
.map_err(|e| AgeWireError::Crypto(e.to_string()))?,
|
.map_err(|e| AgeWireError::Crypto(e.to_string()))?,
|
||||||
Decryptor::Passphrase(_) => return Err(AgeWireError::Crypto("Expected recipients, got passphrase".to_string())),
|
Decryptor::Passphrase(_) => {
|
||||||
|
return Err(AgeWireError::Crypto(
|
||||||
|
"Expected recipients, got passphrase".to_string(),
|
||||||
|
))
|
||||||
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
let mut pt = Vec::new();
|
let mut pt = Vec::new();
|
||||||
use std::io::Read;
|
use std::io::Read;
|
||||||
r.read_to_end(&mut pt).map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
r.read_to_end(&mut pt)
|
||||||
|
.map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
||||||
String::from_utf8(pt).map_err(|_| AgeWireError::Utf8)
|
String::from_utf8(pt).map_err(|_| AgeWireError::Utf8)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -144,7 +156,9 @@ pub fn sign_b64(signing_secret_str: &str, msg: &str) -> Result<String, AgeWireEr
|
|||||||
/// Verify detached signature (base64) for `msg` with pubkey.
|
/// Verify detached signature (base64) for `msg` with pubkey.
|
||||||
pub fn verify_b64(verify_pub_str: &str, msg: &str, sig_b64: &str) -> Result<bool, AgeWireError> {
|
pub fn verify_b64(verify_pub_str: &str, msg: &str, sig_b64: &str) -> Result<bool, AgeWireError> {
|
||||||
let verifying_key = parse_ed25519_verifying_key(verify_pub_str)?;
|
let verifying_key = parse_ed25519_verifying_key(verify_pub_str)?;
|
||||||
let sig_bytes = B64.decode(sig_b64.as_bytes()).map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
let sig_bytes = B64
|
||||||
|
.decode(sig_b64.as_bytes())
|
||||||
|
.map_err(|e| AgeWireError::Crypto(e.to_string()))?;
|
||||||
if sig_bytes.len() != 64 {
|
if sig_bytes.len() != 64 {
|
||||||
return Err(AgeWireError::SignatureLen);
|
return Err(AgeWireError::SignatureLen);
|
||||||
}
|
}
|
||||||
@@ -155,30 +169,49 @@ pub fn verify_b64(verify_pub_str: &str, msg: &str, sig_b64: &str) -> Result<bool
|
|||||||
// ---------- Storage helpers ----------
|
// ---------- Storage helpers ----------
|
||||||
|
|
||||||
fn sget(server: &Server, key: &str) -> Result<Option<String>, AgeWireError> {
|
fn sget(server: &Server, key: &str) -> Result<Option<String>, AgeWireError> {
|
||||||
let st = server.current_storage().map_err(|e| AgeWireError::Storage(e.0))?;
|
let st = server
|
||||||
|
.current_storage()
|
||||||
|
.map_err(|e| AgeWireError::Storage(e.0))?;
|
||||||
st.get(key).map_err(|e| AgeWireError::Storage(e.0))
|
st.get(key).map_err(|e| AgeWireError::Storage(e.0))
|
||||||
}
|
}
|
||||||
fn sset(server: &Server, key: &str, val: &str) -> Result<(), AgeWireError> {
|
fn sset(server: &Server, key: &str, val: &str) -> Result<(), AgeWireError> {
|
||||||
let st = server.current_storage().map_err(|e| AgeWireError::Storage(e.0))?;
|
let st = server
|
||||||
st.set(key.to_string(), val.to_string()).map_err(|e| AgeWireError::Storage(e.0))
|
.current_storage()
|
||||||
|
.map_err(|e| AgeWireError::Storage(e.0))?;
|
||||||
|
st.set(key.to_string(), val.to_string())
|
||||||
|
.map_err(|e| AgeWireError::Storage(e.0))
|
||||||
}
|
}
|
||||||
|
|
||||||
fn enc_pub_key_key(name: &str) -> String { format!("age:key:{name}") }
|
fn enc_pub_key_key(name: &str) -> String {
|
||||||
fn enc_priv_key_key(name: &str) -> String { format!("age:privkey:{name}") }
|
format!("age:key:{name}")
|
||||||
fn sign_pub_key_key(name: &str) -> String { format!("age:signpub:{name}") }
|
}
|
||||||
fn sign_priv_key_key(name: &str) -> String { format!("age:signpriv:{name}") }
|
fn enc_priv_key_key(name: &str) -> String {
|
||||||
|
format!("age:privkey:{name}")
|
||||||
|
}
|
||||||
|
fn sign_pub_key_key(name: &str) -> String {
|
||||||
|
format!("age:signpub:{name}")
|
||||||
|
}
|
||||||
|
fn sign_priv_key_key(name: &str) -> String {
|
||||||
|
format!("age:signpriv:{name}")
|
||||||
|
}
|
||||||
|
|
||||||
// ---------- Command handlers (RESP Protocol) ----------
|
// ---------- Command handlers (RESP Protocol) ----------
|
||||||
// Basic (stateless) ones kept for completeness
|
// Basic (stateless) ones kept for completeness
|
||||||
|
|
||||||
pub async fn cmd_age_genenc() -> Protocol {
|
pub async fn cmd_age_genenc() -> Protocol {
|
||||||
let (recip, ident) = gen_enc_keypair();
|
let (recip, ident) = gen_enc_keypair();
|
||||||
Protocol::Array(vec![Protocol::BulkString(recip), Protocol::BulkString(ident)])
|
Protocol::Array(vec![
|
||||||
|
Protocol::BulkString(recip),
|
||||||
|
Protocol::BulkString(ident),
|
||||||
|
])
|
||||||
}
|
}
|
||||||
|
|
||||||
pub async fn cmd_age_gensign() -> Protocol {
|
pub async fn cmd_age_gensign() -> Protocol {
|
||||||
let (verify, secret) = gen_sign_keypair();
|
let (verify, secret) = gen_sign_keypair();
|
||||||
Protocol::Array(vec![Protocol::BulkString(verify), Protocol::BulkString(secret)])
|
Protocol::Array(vec![
|
||||||
|
Protocol::BulkString(verify),
|
||||||
|
Protocol::BulkString(secret),
|
||||||
|
])
|
||||||
}
|
}
|
||||||
|
|
||||||
pub async fn cmd_age_encrypt(recipient: &str, message: &str) -> Protocol {
|
pub async fn cmd_age_encrypt(recipient: &str, message: &str) -> Protocol {
|
||||||
@@ -214,16 +247,30 @@ pub async fn cmd_age_verify(verify_pub: &str, message: &str, sig_b64: &str) -> P
|
|||||||
|
|
||||||
pub async fn cmd_age_keygen(server: &Server, name: &str) -> Protocol {
|
pub async fn cmd_age_keygen(server: &Server, name: &str) -> Protocol {
|
||||||
let (recip, ident) = gen_enc_keypair();
|
let (recip, ident) = gen_enc_keypair();
|
||||||
if let Err(e) = sset(server, &enc_pub_key_key(name), &recip) { return e.to_protocol(); }
|
if let Err(e) = sset(server, &enc_pub_key_key(name), &recip) {
|
||||||
if let Err(e) = sset(server, &enc_priv_key_key(name), &ident) { return e.to_protocol(); }
|
return e.to_protocol();
|
||||||
Protocol::Array(vec![Protocol::BulkString(recip), Protocol::BulkString(ident)])
|
}
|
||||||
|
if let Err(e) = sset(server, &enc_priv_key_key(name), &ident) {
|
||||||
|
return e.to_protocol();
|
||||||
|
}
|
||||||
|
Protocol::Array(vec![
|
||||||
|
Protocol::BulkString(recip),
|
||||||
|
Protocol::BulkString(ident),
|
||||||
|
])
|
||||||
}
|
}
|
||||||
|
|
||||||
pub async fn cmd_age_signkeygen(server: &Server, name: &str) -> Protocol {
|
pub async fn cmd_age_signkeygen(server: &Server, name: &str) -> Protocol {
|
||||||
let (verify, secret) = gen_sign_keypair();
|
let (verify, secret) = gen_sign_keypair();
|
||||||
if let Err(e) = sset(server, &sign_pub_key_key(name), &verify) { return e.to_protocol(); }
|
if let Err(e) = sset(server, &sign_pub_key_key(name), &verify) {
|
||||||
if let Err(e) = sset(server, &sign_priv_key_key(name), &secret) { return e.to_protocol(); }
|
return e.to_protocol();
|
||||||
Protocol::Array(vec![Protocol::BulkString(verify), Protocol::BulkString(secret)])
|
}
|
||||||
|
if let Err(e) = sset(server, &sign_priv_key_key(name), &secret) {
|
||||||
|
return e.to_protocol();
|
||||||
|
}
|
||||||
|
Protocol::Array(vec![
|
||||||
|
Protocol::BulkString(verify),
|
||||||
|
Protocol::BulkString(secret),
|
||||||
|
])
|
||||||
}
|
}
|
||||||
|
|
||||||
pub async fn cmd_age_encrypt_name(server: &Server, name: &str, message: &str) -> Protocol {
|
pub async fn cmd_age_encrypt_name(server: &Server, name: &str, message: &str) -> Protocol {
|
||||||
@@ -253,7 +300,9 @@ pub async fn cmd_age_decrypt_name(server: &Server, name: &str, ct_b64: &str) ->
|
|||||||
pub async fn cmd_age_sign_name(server: &Server, name: &str, message: &str) -> Protocol {
|
pub async fn cmd_age_sign_name(server: &Server, name: &str, message: &str) -> Protocol {
|
||||||
let sec = match sget(server, &sign_priv_key_key(name)) {
|
let sec = match sget(server, &sign_priv_key_key(name)) {
|
||||||
Ok(Some(v)) => v,
|
Ok(Some(v)) => v,
|
||||||
Ok(None) => return AgeWireError::NotFound("signing secret (age:signpriv:{name})").to_protocol(),
|
Ok(None) => {
|
||||||
|
return AgeWireError::NotFound("signing secret (age:signpriv:{name})").to_protocol()
|
||||||
|
}
|
||||||
Err(e) => return e.to_protocol(),
|
Err(e) => return e.to_protocol(),
|
||||||
};
|
};
|
||||||
match sign_b64(&sec, message) {
|
match sign_b64(&sec, message) {
|
||||||
@@ -262,10 +311,17 @@ pub async fn cmd_age_sign_name(server: &Server, name: &str, message: &str) -> Pr
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
pub async fn cmd_age_verify_name(server: &Server, name: &str, message: &str, sig_b64: &str) -> Protocol {
|
pub async fn cmd_age_verify_name(
|
||||||
|
server: &Server,
|
||||||
|
name: &str,
|
||||||
|
message: &str,
|
||||||
|
sig_b64: &str,
|
||||||
|
) -> Protocol {
|
||||||
let pubk = match sget(server, &sign_pub_key_key(name)) {
|
let pubk = match sget(server, &sign_pub_key_key(name)) {
|
||||||
Ok(Some(v)) => v,
|
Ok(Some(v)) => v,
|
||||||
Ok(None) => return AgeWireError::NotFound("verify pubkey (age:signpub:{name})").to_protocol(),
|
Ok(None) => {
|
||||||
|
return AgeWireError::NotFound("verify pubkey (age:signpub:{name})").to_protocol()
|
||||||
|
}
|
||||||
Err(e) => return e.to_protocol(),
|
Err(e) => return e.to_protocol(),
|
||||||
};
|
};
|
||||||
match verify_b64(&pubk, message, sig_b64) {
|
match verify_b64(&pubk, message, sig_b64) {
|
||||||
@@ -277,25 +333,43 @@ pub async fn cmd_age_verify_name(server: &Server, name: &str, message: &str, sig
|
|||||||
|
|
||||||
pub async fn cmd_age_list(server: &Server) -> Protocol {
|
pub async fn cmd_age_list(server: &Server) -> Protocol {
|
||||||
// Returns 4 arrays: ["encpub", <names...>], ["encpriv", ...], ["signpub", ...], ["signpriv", ...]
|
// Returns 4 arrays: ["encpub", <names...>], ["encpriv", ...], ["signpub", ...], ["signpriv", ...]
|
||||||
let st = match server.current_storage() { Ok(s) => s, Err(e) => return Protocol::err(&e.0) };
|
let st = match server.current_storage() {
|
||||||
|
Ok(s) => s,
|
||||||
|
Err(e) => return Protocol::err(&e.0),
|
||||||
|
};
|
||||||
|
|
||||||
let pull = |pat: &str, prefix: &str| -> Result<Vec<String>, DBError> {
|
let pull = |pat: &str, prefix: &str| -> Result<Vec<String>, DBError> {
|
||||||
let keys = st.keys(pat)?;
|
let keys = st.keys(pat)?;
|
||||||
let mut names: Vec<String> = keys.into_iter()
|
let mut names: Vec<String> = keys
|
||||||
|
.into_iter()
|
||||||
.filter_map(|k| k.strip_prefix(prefix).map(|x| x.to_string()))
|
.filter_map(|k| k.strip_prefix(prefix).map(|x| x.to_string()))
|
||||||
.collect();
|
.collect();
|
||||||
names.sort();
|
names.sort();
|
||||||
Ok(names)
|
Ok(names)
|
||||||
};
|
};
|
||||||
|
|
||||||
let encpub = match pull("age:key:*", "age:key:") { Ok(v) => v, Err(e)=> return Protocol::err(&e.0) };
|
let encpub = match pull("age:key:*", "age:key:") {
|
||||||
let encpriv = match pull("age:privkey:*", "age:privkey:") { Ok(v) => v, Err(e)=> return Protocol::err(&e.0) };
|
Ok(v) => v,
|
||||||
let signpub = match pull("age:signpub:*", "age:signpub:") { Ok(v) => v, Err(e)=> return Protocol::err(&e.0) };
|
Err(e) => return Protocol::err(&e.0),
|
||||||
let signpriv= match pull("age:signpriv:*", "age:signpriv:") { Ok(v) => v, Err(e)=> return Protocol::err(&e.0) };
|
};
|
||||||
|
let encpriv = match pull("age:privkey:*", "age:privkey:") {
|
||||||
|
Ok(v) => v,
|
||||||
|
Err(e) => return Protocol::err(&e.0),
|
||||||
|
};
|
||||||
|
let signpub = match pull("age:signpub:*", "age:signpub:") {
|
||||||
|
Ok(v) => v,
|
||||||
|
Err(e) => return Protocol::err(&e.0),
|
||||||
|
};
|
||||||
|
let signpriv = match pull("age:signpriv:*", "age:signpriv:") {
|
||||||
|
Ok(v) => v,
|
||||||
|
Err(e) => return Protocol::err(&e.0),
|
||||||
|
};
|
||||||
|
|
||||||
let to_arr = |label: &str, v: Vec<String>| {
|
let to_arr = |label: &str, v: Vec<String>| {
|
||||||
let mut out = vec![Protocol::BulkString(label.to_string())];
|
let mut out = vec![Protocol::BulkString(label.to_string())];
|
||||||
out.push(Protocol::Array(v.into_iter().map(Protocol::BulkString).collect()));
|
out.push(Protocol::Array(
|
||||||
|
v.into_iter().map(Protocol::BulkString).collect(),
|
||||||
|
));
|
||||||
Protocol::Array(out)
|
Protocol::Array(out)
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -305,4 +379,4 @@ pub async fn cmd_age_list(server: &Server) -> Protocol {
|
|||||||
to_arr("signpub", signpub),
|
to_arr("signpub", signpub),
|
||||||
to_arr("signpriv", signpriv),
|
to_arr("signpriv", signpriv),
|
||||||
])
|
])
|
||||||
}
|
}
|
||||||
2113
src/cmd.rs
Normal file
2113
src/cmd.rs
Normal file
File diff suppressed because it is too large
Load Diff
@@ -11,9 +11,9 @@ const TAG_LEN: usize = 16;
|
|||||||
|
|
||||||
#[derive(Debug)]
|
#[derive(Debug)]
|
||||||
pub enum CryptoError {
|
pub enum CryptoError {
|
||||||
Format, // wrong length / header
|
Format, // wrong length / header
|
||||||
Version(u8), // unknown version
|
Version(u8), // unknown version
|
||||||
Decrypt, // wrong key or corrupted data
|
Decrypt, // wrong key or corrupted data
|
||||||
}
|
}
|
||||||
|
|
||||||
impl From<CryptoError> for crate::error::DBError {
|
impl From<CryptoError> for crate::error::DBError {
|
||||||
@@ -23,6 +23,7 @@ impl From<CryptoError> for crate::error::DBError {
|
|||||||
}
|
}
|
||||||
|
|
||||||
/// Super-simple factory: new(secret) + encrypt(bytes) + decrypt(bytes)
|
/// Super-simple factory: new(secret) + encrypt(bytes) + decrypt(bytes)
|
||||||
|
#[derive(Clone)]
|
||||||
pub struct CryptoFactory {
|
pub struct CryptoFactory {
|
||||||
key: chacha20poly1305::Key,
|
key: chacha20poly1305::Key,
|
||||||
}
|
}
|
||||||
@@ -70,4 +71,4 @@ impl CryptoFactory {
|
|||||||
let cipher = XChaCha20Poly1305::new(&self.key);
|
let cipher = XChaCha20Poly1305::new(&self.key);
|
||||||
cipher.decrypt(nonce, ct).map_err(|_| CryptoError::Decrypt)
|
cipher.decrypt(nonce, ct).map_err(|_| CryptoError::Decrypt)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
src/error.rs
@@ -1,9 +1,8 @@
 use std::num::ParseIntError;
 
-use tokio::sync::mpsc;
-use redb;
 use bincode;
+use redb;
+use tokio::sync::mpsc;
 
 // todo: more error types
 #[derive(Debug)]
src/lib.rs (new file, 12 lines)
@@ -0,0 +1,12 @@
+pub mod age; // NEW
+pub mod cmd;
+pub mod crypto;
+pub mod error;
+pub mod options;
+pub mod protocol;
+pub mod search_cmd; // Add this
+pub mod server;
+pub mod storage;
+pub mod storage_sled; // Add this
+pub mod storage_trait; // Add this
+pub mod tantivy_search;
src/main.rs
@@ -22,7 +22,6 @@ struct Args {
     #[arg(long)]
     debug: bool,
 
-
     /// Master encryption key for encrypted databases
     #[arg(long)]
     encryption_key: Option<String>,
@@ -30,6 +29,10 @@ struct Args {
     /// Encrypt the database
     #[arg(long)]
     encrypt: bool,
+
+    /// Use the sled backend
+    #[arg(long)]
+    sled: bool,
 }
 
 #[tokio::main]
@@ -51,6 +54,11 @@ async fn main() {
         debug: args.debug,
         encryption_key: args.encryption_key,
         encrypt: args.encrypt,
+        backend: if args.sled {
+            herodb::options::BackendType::Sled
+        } else {
+            herodb::options::BackendType::Redb
+        },
     };
 
     // new server
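The `--sled` flag above maps one boolean onto the new `BackendType` enum from `src/options.rs`. A minimal standalone sketch of that selection logic (the `PartialEq` derive and `select_backend` helper are illustrative additions, not part of the diff):

```rust
// Backend selection as wired up in main(): `--sled` picks Sled,
// otherwise the default Redb backend is used.
#[derive(Debug, Clone, PartialEq)]
enum BackendType {
    Redb,
    Sled,
}

fn select_backend(sled_flag: bool) -> BackendType {
    if sled_flag {
        BackendType::Sled
    } else {
        BackendType::Redb
    }
}

fn main() {
    assert_eq!(select_backend(true), BackendType::Sled);
    assert_eq!(select_backend(false), BackendType::Redb);
    println!("backend selection ok");
}
```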
src/options.rs (new file, 15 lines)
@@ -0,0 +1,15 @@
+#[derive(Debug, Clone)]
+pub enum BackendType {
+    Redb,
+    Sled,
+}
+
+#[derive(Debug, Clone)]
+pub struct DBOption {
+    pub dir: String,
+    pub port: u16,
+    pub debug: bool,
+    pub encrypt: bool,
+    pub encryption_key: Option<String>,
+    pub backend: BackendType,
+}
src/protocol.rs
@@ -19,6 +19,10 @@ impl fmt::Display for Protocol {
 
 impl Protocol {
     pub fn from(protocol: &str) -> Result<(Self, &str), DBError> {
+        if protocol.is_empty() {
+            // Incomplete frame; caller should read more bytes
+            return Err(DBError("[incomplete] empty".to_string()));
+        }
         let ret = match protocol.chars().nth(0) {
             Some('+') => Self::parse_simple_string_sfx(&protocol[1..]),
             Some('$') => Self::parse_bulk_string_sfx(&protocol[1..]),
@@ -77,18 +81,21 @@ impl Protocol {
     pub fn encode(&self) -> String {
         match self {
             Protocol::SimpleString(s) => format!("+{}\r\n", s),
             Protocol::BulkString(s) => format!("${}\r\n{}\r\n", s.len(), s),
             Protocol::Array(ss) => {
                 format!("*{}\r\n", ss.len()) + &ss.iter().map(|x| x.encode()).collect::<String>()
             }
             Protocol::Null => "$-1\r\n".to_string(),
             Protocol::Error(s) => format!("-{}\r\n", s), // proper RESP error
         }
     }
 
     fn parse_simple_string_sfx(protocol: &str) -> Result<(Self, &str), DBError> {
         match protocol.find("\r\n") {
-            Some(x) => Ok((Self::SimpleString(protocol[..x].to_string()), &protocol[x + 2..])),
+            Some(x) => Ok((
+                Self::SimpleString(protocol[..x].to_string()),
+                &protocol[x + 2..],
+            )),
             _ => Err(DBError(format!(
                 "[new simple string] unsupported protocol: {:?}",
                 protocol
@@ -101,21 +108,20 @@ impl Protocol {
             let size = Self::parse_usize(&protocol[..len_end])?;
             let data_start = len_end + 2;
             let data_end = data_start + size;
-            let s = Self::parse_string(&protocol[data_start..data_end])?;
 
-            if protocol.len() < data_end + 2 || &protocol[data_end..data_end+2] != "\r\n" {
-                Err(DBError(format!(
-                    "[new bulk string] unmatched string length in prototocl {:?}",
-                    protocol,
-                )))
-            } else {
-                Ok((Protocol::BulkString(s), &protocol[data_end + 2..]))
+            // If we don't yet have the full bulk payload + trailing CRLF, signal INCOMPLETE
+            if protocol.len() < data_end + 2 {
+                return Err(DBError("[incomplete] bulk body".to_string()));
             }
+            if &protocol[data_end..data_end + 2] != "\r\n" {
+                return Err(DBError("[incomplete] bulk terminator".to_string()));
+            }
+
+            let s = Self::parse_string(&protocol[data_start..data_end])?;
+            Ok((Protocol::BulkString(s), &protocol[data_end + 2..]))
         } else {
-            Err(DBError(format!(
-                "[new bulk string] unsupported protocol: {:?}",
-                protocol
-            )))
+            // No CRLF after bulk length header yet
+            Err(DBError("[incomplete] bulk header".to_string()))
         }
     }
 
@@ -125,16 +131,25 @@ impl Protocol {
         let mut remaining = &s[len_end + 2..];
         let mut vec = vec![];
         for _ in 0..array_len {
-            let (p, rem) = Protocol::from(remaining)?;
-            vec.push(p);
-            remaining = rem;
+            match Protocol::from(remaining) {
+                Ok((p, rem)) => {
+                    vec.push(p);
+                    remaining = rem;
+                }
+                Err(e) => {
+                    // Propagate incomplete so caller can read more bytes
+                    if e.0.starts_with("[incomplete]") {
+                        return Err(e);
+                    } else {
+                        return Err(e);
+                    }
+                }
+            }
         }
         Ok((Protocol::Array(vec), remaining))
     } else {
-        Err(DBError(format!(
-            "[new array] unsupported protocol: {:?}",
-            s
-        )))
+        // No CRLF after array header yet
+        Err(DBError("[incomplete] array header".to_string()))
     }
 }
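The protocol changes above replace generic parse errors with a `"[incomplete]"` sentinel, so the server can tell "malformed frame" apart from "frame split across reads" and keep accumulating bytes. A minimal self-contained sketch of that convention (not HeroDB's actual types; `parse_bulk` mirrors the bulk-string checks in the diff, with the `$` already stripped):

```rust
// Errors whose message starts with "[incomplete]" mean "read more bytes",
// as introduced for parse_bulk_string_sfx in this diff.
#[derive(Debug)]
struct DBError(String);

fn parse_bulk(protocol: &str) -> Result<(String, &str), DBError> {
    // Expect "<len>\r\n<payload>\r\n".
    match protocol.find("\r\n") {
        None => Err(DBError("[incomplete] bulk header".to_string())),
        Some(len_end) => {
            let size: usize = protocol[..len_end]
                .parse()
                .map_err(|_| DBError("bad length".to_string()))?;
            let data_start = len_end + 2;
            let data_end = data_start + size;
            if protocol.len() < data_end + 2 {
                return Err(DBError("[incomplete] bulk body".to_string()));
            }
            if &protocol[data_end..data_end + 2] != "\r\n" {
                return Err(DBError("[incomplete] bulk terminator".to_string()));
            }
            Ok((
                protocol[data_start..data_end].to_string(),
                &protocol[data_end + 2..],
            ))
        }
    }
}

fn main() {
    // Split frame: the first read only carries part of the payload.
    let mut acc = String::from("5\r\nhel");
    assert!(parse_bulk(&acc).unwrap_err().0.starts_with("[incomplete]"));
    // The second read completes the frame, and parsing succeeds.
    acc.push_str("lo\r\n");
    let (s, rest) = parse_bulk(&acc).unwrap();
    assert_eq!(s, "hello");
    assert_eq!(rest, "");
    println!("incomplete-frame handling ok");
}
```

This is the same accumulate-and-retry loop `Server::handle` in this diff performs on each TCP read.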
src/search_cmd.rs (new file, 273 lines)
@@ -0,0 +1,273 @@
+use crate::{
+    error::DBError,
+    protocol::Protocol,
+    server::Server,
+    tantivy_search::{
+        FieldDef, Filter, FilterType, IndexConfig, NumericType, SearchOptions, TantivySearch,
+    },
+};
+use std::collections::HashMap;
+use std::sync::Arc;
+
+pub async fn ft_create_cmd(
+    server: &Server,
+    index_name: String,
+    schema: Vec<(String, String, Vec<String>)>,
+) -> Result<Protocol, DBError> {
+    // Parse schema into field definitions
+    let mut field_definitions = Vec::new();
+
+    for (field_name, field_type, options) in schema {
+        let field_def = match field_type.to_uppercase().as_str() {
+            "TEXT" => {
+                let mut weight = 1.0;
+                let mut sortable = false;
+                let mut no_index = false;
+
+                for opt in &options {
+                    match opt.to_uppercase().as_str() {
+                        "WEIGHT" => {
+                            // Next option should be the weight value
+                            if let Some(idx) = options.iter().position(|x| x == opt) {
+                                if idx + 1 < options.len() {
+                                    weight = options[idx + 1].parse().unwrap_or(1.0);
+                                }
+                            }
+                        }
+                        "SORTABLE" => sortable = true,
+                        "NOINDEX" => no_index = true,
+                        _ => {}
+                    }
+                }
+
+                FieldDef::Text {
+                    stored: true,
+                    indexed: !no_index,
+                    tokenized: true,
+                    fast: sortable,
+                }
+            }
+            "NUMERIC" => {
+                let mut sortable = false;
+
+                for opt in &options {
+                    if opt.to_uppercase() == "SORTABLE" {
+                        sortable = true;
+                    }
+                }
+
+                FieldDef::Numeric {
+                    stored: true,
+                    indexed: true,
+                    fast: sortable,
+                    precision: NumericType::F64,
+                }
+            }
+            "TAG" => {
+                let mut separator = ",".to_string();
+                let mut case_sensitive = false;
+
+                for i in 0..options.len() {
+                    match options[i].to_uppercase().as_str() {
+                        "SEPARATOR" => {
+                            if i + 1 < options.len() {
+                                separator = options[i + 1].clone();
+                            }
+                        }
+                        "CASESENSITIVE" => case_sensitive = true,
+                        _ => {}
+                    }
+                }
+
+                FieldDef::Tag {
+                    stored: true,
+                    separator,
+                    case_sensitive,
+                }
+            }
+            "GEO" => FieldDef::Geo { stored: true },
+            _ => {
+                return Err(DBError(format!("Unknown field type: {}", field_type)));
+            }
+        };
+
+        field_definitions.push((field_name, field_def));
+    }
+
+    // Create the search index
+    let search_path = server.search_index_path();
+    let config = IndexConfig::default();
+
+    println!(
+        "Creating search index '{}' at path: {:?}",
+        index_name, search_path
+    );
+    println!("Field definitions: {:?}", field_definitions);
+
+    let search_index = TantivySearch::new_with_schema(
+        search_path,
+        index_name.clone(),
+        field_definitions,
+        Some(config),
+    )?;
+
+    println!("Search index '{}' created successfully", index_name);
+
+    // Store in registry
+    let mut indexes = server.search_indexes.write().unwrap();
+    indexes.insert(index_name, Arc::new(search_index));
+
+    Ok(Protocol::SimpleString("OK".to_string()))
+}
+
+pub async fn ft_add_cmd(
+    server: &Server,
+    index_name: String,
+    doc_id: String,
+    _score: f64,
+    fields: HashMap<String, String>,
+) -> Result<Protocol, DBError> {
+    let indexes = server.search_indexes.read().unwrap();
+
+    let search_index = indexes
+        .get(&index_name)
+        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;
+
+    search_index.add_document_with_fields(&doc_id, fields)?;
+
+    Ok(Protocol::SimpleString("OK".to_string()))
+}
+
+pub async fn ft_search_cmd(
+    server: &Server,
+    index_name: String,
+    query: String,
+    filters: Vec<(String, String)>,
+    limit: Option<usize>,
+    offset: Option<usize>,
+    return_fields: Option<Vec<String>>,
+) -> Result<Protocol, DBError> {
+    let indexes = server.search_indexes.read().unwrap();
+
+    let search_index = indexes
+        .get(&index_name)
+        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;
+
+    // Convert filters to search filters
+    let search_filters = filters
+        .into_iter()
+        .map(|(field, value)| Filter {
+            field,
+            filter_type: FilterType::Equals(value),
+        })
+        .collect();
+
+    let options = SearchOptions {
+        limit: limit.unwrap_or(10),
+        offset: offset.unwrap_or(0),
+        filters: search_filters,
+        sort_by: None,
+        return_fields,
+        highlight: false,
+    };
+
+    let results = search_index.search_with_options(&query, options)?;
+
+    // Format results as Redis protocol
+    let mut response = Vec::new();
+
+    // First element is the total count
+    response.push(Protocol::SimpleString(results.total.to_string()));
+
+    // Then each document
+    for doc in results.documents {
+        let mut doc_array = Vec::new();
+
+        // Add document ID if it exists
+        if let Some(id) = doc.fields.get("_id") {
+            doc_array.push(Protocol::BulkString(id.clone()));
+        }
+
+        // Add score
+        doc_array.push(Protocol::BulkString(doc.score.to_string()));
+
+        // Add fields as key-value pairs
+        for (field_name, field_value) in doc.fields {
+            if field_name != "_id" {
+                doc_array.push(Protocol::BulkString(field_name));
+                doc_array.push(Protocol::BulkString(field_value));
+            }
+        }
+
+        response.push(Protocol::Array(doc_array));
+    }
+
+    Ok(Protocol::Array(response))
+}
+
+pub async fn ft_del_cmd(
+    server: &Server,
+    index_name: String,
+    doc_id: String,
+) -> Result<Protocol, DBError> {
+    let indexes = server.search_indexes.read().unwrap();
+
+    let _search_index = indexes
+        .get(&index_name)
+        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;
+
+    // For now, return success
+    // In a full implementation, we'd need to add a delete method to TantivySearch
+    println!("Deleting document '{}' from index '{}'", doc_id, index_name);
+
+    Ok(Protocol::SimpleString("1".to_string()))
+}
+
+pub async fn ft_info_cmd(server: &Server, index_name: String) -> Result<Protocol, DBError> {
+    let indexes = server.search_indexes.read().unwrap();
+
+    let search_index = indexes
+        .get(&index_name)
+        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;
+
+    let info = search_index.get_info()?;
+
+    // Format info as Redis protocol
+    let mut response = Vec::new();
+
+    response.push(Protocol::BulkString("index_name".to_string()));
+    response.push(Protocol::BulkString(info.name));
+
+    response.push(Protocol::BulkString("num_docs".to_string()));
+    response.push(Protocol::BulkString(info.num_docs.to_string()));
+
+    response.push(Protocol::BulkString("num_fields".to_string()));
+    response.push(Protocol::BulkString(info.fields.len().to_string()));
+
+    response.push(Protocol::BulkString("fields".to_string()));
+    let fields_str = info
+        .fields
+        .iter()
+        .map(|f| format!("{}:{}", f.name, f.field_type))
+        .collect::<Vec<_>>()
+        .join(", ");
+    response.push(Protocol::BulkString(fields_str));
+
+    Ok(Protocol::Array(response))
+}
+
+pub async fn ft_drop_cmd(server: &Server, index_name: String) -> Result<Protocol, DBError> {
+    let mut indexes = server.search_indexes.write().unwrap();
+
+    if indexes.remove(&index_name).is_some() {
+        // Also remove the index files from disk
+        let index_path = server.search_index_path().join(&index_name);
+        if index_path.exists() {
+            std::fs::remove_dir_all(index_path)
+                .map_err(|e| DBError(format!("Failed to remove index files: {}", e)))?;
+        }
+        Ok(Protocol::SimpleString("OK".to_string()))
+    } else {
+        Err(DBError(format!("Index '{}' not found", index_name)))
+    }
+}
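`ft_create_cmd` above scans each field's option list positionally, e.g. `SEPARATOR` consumes the following token and `CASESENSITIVE` is a bare flag. A standalone sketch of that TAG-option scan (the `TagOpts` struct and `parse_tag_opts` helper are illustrative names, not from the diff):

```rust
// Mirrors the TAG branch of ft_create_cmd: walk the options, pick up
// SEPARATOR's following value, and flip the CASESENSITIVE flag.
struct TagOpts {
    separator: String,
    case_sensitive: bool,
}

fn parse_tag_opts(options: &[String]) -> TagOpts {
    let mut separator = ",".to_string();
    let mut case_sensitive = false;
    for i in 0..options.len() {
        match options[i].to_uppercase().as_str() {
            "SEPARATOR" => {
                if i + 1 < options.len() {
                    separator = options[i + 1].clone();
                }
            }
            "CASESENSITIVE" => case_sensitive = true,
            _ => {}
        }
    }
    TagOpts { separator, case_sensitive }
}

fn main() {
    let opts: Vec<String> = ["SEPARATOR", "|", "CASESENSITIVE"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    let parsed = parse_tag_opts(&opts);
    assert_eq!(parsed.separator, "|");
    assert!(parsed.case_sensitive);
    println!("tag option parsing ok");
}
```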
src/server.rs (new file, 276 lines)
@@ -0,0 +1,276 @@
+use core::str;
+use std::collections::HashMap;
+use std::sync::Arc;
+use std::sync::RwLock;
+use tokio::io::AsyncReadExt;
+use tokio::io::AsyncWriteExt;
+use tokio::sync::{oneshot, Mutex};
+
+use std::sync::atomic::{AtomicU64, Ordering};
+
+use crate::cmd::Cmd;
+use crate::error::DBError;
+use crate::options;
+use crate::protocol::Protocol;
+use crate::storage::Storage;
+use crate::storage_sled::SledStorage;
+use crate::storage_trait::StorageBackend;
+use crate::tantivy_search::TantivySearch;
+
+#[derive(Clone)]
+pub struct Server {
+    pub db_cache: Arc<RwLock<HashMap<u64, Arc<dyn StorageBackend>>>>,
+    pub search_indexes: Arc<RwLock<HashMap<String, Arc<TantivySearch>>>>,
+    pub option: options::DBOption,
+    pub client_name: Option<String>,
+    pub selected_db: u64, // Changed from usize to u64
+    pub queued_cmd: Option<Vec<(Cmd, Protocol)>>,
+
+    // BLPOP waiter registry: per (db_index, key) FIFO of waiters
+    pub list_waiters: Arc<Mutex<HashMap<u64, HashMap<String, Vec<Waiter>>>>>,
+    pub waiter_seq: Arc<AtomicU64>,
+}
+
+pub struct Waiter {
+    pub id: u64,
+    pub side: PopSide,
+    pub tx: oneshot::Sender<(String, String)>, // (key, element)
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum PopSide {
+    Left,
+    Right,
+}
+
+impl Server {
+    pub async fn new(option: options::DBOption) -> Self {
+        Server {
+            db_cache: Arc::new(RwLock::new(HashMap::new())),
+            search_indexes: Arc::new(RwLock::new(HashMap::new())),
+            option,
+            client_name: None,
+            selected_db: 0,
+            queued_cmd: None,
+
+            list_waiters: Arc::new(Mutex::new(HashMap::new())),
+            waiter_seq: Arc::new(AtomicU64::new(1)),
+        }
+    }
+
+    pub fn current_storage(&self) -> Result<Arc<dyn StorageBackend>, DBError> {
+        let mut cache = self.db_cache.write().unwrap();
+
+        if let Some(storage) = cache.get(&self.selected_db) {
+            return Ok(storage.clone());
+        }
+
+        // Create new database file
+        let db_file_path = std::path::PathBuf::from(self.option.dir.clone())
+            .join(format!("{}.db", self.selected_db));
+
+        // Ensure the directory exists before creating the database file
+        if let Some(parent_dir) = db_file_path.parent() {
+            std::fs::create_dir_all(parent_dir).map_err(|e| {
+                DBError(format!(
+                    "Failed to create directory {}: {}",
+                    parent_dir.display(),
+                    e
+                ))
+            })?;
+        }
+
+        println!("Creating new db file: {}", db_file_path.display());
+
+        let storage: Arc<dyn StorageBackend> = match self.option.backend {
+            options::BackendType::Redb => Arc::new(Storage::new(
+                db_file_path,
+                self.should_encrypt_db(self.selected_db),
+                self.option.encryption_key.as_deref(),
+            )?),
+            options::BackendType::Sled => Arc::new(SledStorage::new(
+                db_file_path,
+                self.should_encrypt_db(self.selected_db),
+                self.option.encryption_key.as_deref(),
+            )?),
+        };
+
+        cache.insert(self.selected_db, storage.clone());
+        Ok(storage)
+    }
+
+    fn should_encrypt_db(&self, db_index: u64) -> bool {
+        // DB 0-9 are non-encrypted, DB 10+ are encrypted
+        self.option.encrypt && db_index >= 10
+    }
+
+    // Add method to get search index path
+    pub fn search_index_path(&self) -> std::path::PathBuf {
+        std::path::PathBuf::from(&self.option.dir).join("search_indexes")
+    }
+
+    // ----- BLPOP waiter helpers -----
+
+    pub async fn register_waiter(
+        &self,
+        db_index: u64,
+        key: &str,
+        side: PopSide,
+    ) -> (u64, oneshot::Receiver<(String, String)>) {
+        let id = self.waiter_seq.fetch_add(1, Ordering::Relaxed);
+        let (tx, rx) = oneshot::channel::<(String, String)>();
+
+        let mut guard = self.list_waiters.lock().await;
+        let per_db = guard.entry(db_index).or_insert_with(HashMap::new);
+        let q = per_db.entry(key.to_string()).or_insert_with(Vec::new);
+        q.push(Waiter { id, side, tx });
+        (id, rx)
+    }
+
+    pub async fn unregister_waiter(&self, db_index: u64, key: &str, id: u64) {
+        let mut guard = self.list_waiters.lock().await;
+        if let Some(per_db) = guard.get_mut(&db_index) {
+            if let Some(q) = per_db.get_mut(key) {
+                q.retain(|w| w.id != id);
+                if q.is_empty() {
+                    per_db.remove(key);
+                }
+            }
+            if per_db.is_empty() {
+                guard.remove(&db_index);
+            }
+        }
+    }
+
+    // Called after LPUSH/RPUSH to deliver to blocked BLPOP waiters.
+    pub async fn drain_waiters_after_push(&self, key: &str) -> Result<(), DBError> {
+        let db_index = self.selected_db;
+
+        loop {
+            // Check if any waiter exists
+            let maybe_waiter = {
+                let mut guard = self.list_waiters.lock().await;
+                if let Some(per_db) = guard.get_mut(&db_index) {
+                    if let Some(q) = per_db.get_mut(key) {
+                        if !q.is_empty() {
+                            // Pop FIFO
+                            Some(q.remove(0))
+                        } else {
+                            None
+                        }
+                    } else {
+                        None
+                    }
+                } else {
+                    None
+                }
+            };
+
+            let waiter = if let Some(w) = maybe_waiter { w } else { break };
+
+            // Pop one element depending on waiter side
+            let elems = match waiter.side {
+                PopSide::Left => self.current_storage()?.lpop(key, 1)?,
+                PopSide::Right => self.current_storage()?.rpop(key, 1)?,
+            };
+            if elems.is_empty() {
+                // Nothing to deliver; re-register waiter at the front to preserve order
+                let mut guard = self.list_waiters.lock().await;
+                let per_db = guard.entry(db_index).or_insert_with(HashMap::new);
+                let q = per_db.entry(key.to_string()).or_insert_with(Vec::new);
+                q.insert(0, waiter);
+                break;
+            } else {
+                let elem = elems[0].clone();
+                // Send to waiter; if receiver dropped, just continue
+                let _ = waiter.tx.send((key.to_string(), elem));
+                // Loop to try to satisfy more waiters if more elements remain
+                continue;
+            }
+        }
+
+        Ok(())
+    }
+
+    pub async fn handle(&mut self, mut stream: tokio::net::TcpStream) -> Result<(), DBError> {
+        // Accumulate incoming bytes to handle partial RESP frames
+        let mut acc = String::new();
+        let mut buf = vec![0u8; 8192];
+
+        loop {
+            let n = match stream.read(&mut buf).await {
+                Ok(0) => {
+                    println!("[handle] connection closed");
+                    return Ok(());
+                }
+                Ok(n) => n,
+                Err(e) => {
+                    println!("[handle] read error: {:?}", e);
+                    return Err(e.into());
+                }
+            };
+
+            // Append to accumulator. RESP for our usage is ASCII-safe.
+            acc.push_str(str::from_utf8(&buf[..n])?);
+
+            // Try to parse as many complete commands as are available in 'acc'.
+            loop {
+                let parsed = Cmd::from(&acc);
+                let (cmd, protocol, remaining) = match parsed {
+                    Ok((cmd, protocol, remaining)) => (cmd, protocol, remaining),
+                    Err(_e) => {
+                        // Incomplete or invalid frame; assume incomplete and wait for more data.
+                        // This avoids emitting spurious protocol_error for split frames.
+                        break;
+                    }
+                };
+
+                // Advance the accumulator to the unparsed remainder
+                acc = remaining.to_string();
+
+                if self.option.debug {
+                    println!(
+                        "\x1b[34;1mgot command: {:?}, protocol: {:?}\x1b[0m",
+                        cmd, protocol
+                    );
+                } else {
+                    println!("got command: {:?}, protocol: {:?}", cmd, protocol);
+                }
+
+                // Check if this is a QUIT command before processing
+                let is_quit = matches!(cmd, Cmd::Quit);
+
+                let res = match cmd.run(self).await {
+                    Ok(p) => p,
+                    Err(e) => {
+                        if self.option.debug {
+                            eprintln!("[run error] {:?}", e);
+                        }
+                        Protocol::err(&format!("ERR {}", e.0))
+                    }
+                };
+
+                if self.option.debug {
+                    println!("\x1b[34;1mqueued cmd {:?}\x1b[0m", self.queued_cmd);
+                    println!("\x1b[32;1mgoing to send response {}\x1b[0m", res.encode());
+                } else {
+                    print!("queued cmd {:?}", self.queued_cmd);
+                    println!("going to send response {}", res.encode());
+                }
+
+                _ = stream.write(res.encode().as_bytes()).await?;
+
+                // If this was a QUIT command, close the connection
+                if is_quit {
+                    println!("[handle] QUIT command received, closing connection");
+                    return Ok(());
+                }
+
+                // Continue parsing any further complete commands already in 'acc'
+                if acc.is_empty() {
+                    break;
+                }
+            }
+        }
+    }
+}
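`Server::should_encrypt_db` above encodes a simple policy: databases 0 through 9 stay plaintext, 10 and above are encrypted, and only when the `--encrypt` option is set. A free-function sketch of the same rule (the standalone signature is illustrative; the diff's version reads `self.option.encrypt`):

```rust
// Encryption policy from Server::should_encrypt_db in this diff:
// DB 0-9 are non-encrypted, DB 10+ are encrypted, gated on --encrypt.
fn should_encrypt_db(encrypt_opt: bool, db_index: u64) -> bool {
    encrypt_opt && db_index >= 10
}

fn main() {
    assert!(!should_encrypt_db(true, 0));
    assert!(!should_encrypt_db(true, 9));
    assert!(should_encrypt_db(true, 10));
    // Without --encrypt, no database is encrypted, whatever its index.
    assert!(!should_encrypt_db(false, 42));
    println!("encryption policy ok");
}
```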
src/storage/mod.rs (new file, 305 lines; the listing below is truncated)
@@ -0,0 +1,305 @@
+use std::{
+    path::Path,
+    sync::Arc,
+    time::{SystemTime, UNIX_EPOCH},
+};
+
+use redb::{Database, TableDefinition};
+use serde::{Deserialize, Serialize};
+
+use crate::crypto::CryptoFactory;
+use crate::error::DBError;
+
+// Re-export modules
+mod storage_basic;
+mod storage_extra;
+mod storage_hset;
+mod storage_lists;
+
+// Re-export implementations
+// Note: These imports are used by the impl blocks in the submodules
+// The compiler shows them as unused because they're not directly used in this file
+// but they're needed for the Storage struct methods to be available
+pub use storage_extra::*;
+
+// Table definitions for different Redis data types
+const TYPES_TABLE: TableDefinition<&str, &str> = TableDefinition::new("types");
+const STRINGS_TABLE: TableDefinition<&str, &[u8]> = TableDefinition::new("strings");
+const HASHES_TABLE: TableDefinition<(&str, &str), &[u8]> = TableDefinition::new("hashes");
+const LISTS_TABLE: TableDefinition<&str, &[u8]> = TableDefinition::new("lists");
+const STREAMS_META_TABLE: TableDefinition<&str, &[u8]> = TableDefinition::new("streams_meta");
+const STREAMS_DATA_TABLE: TableDefinition<(&str, &str), &[u8]> =
+    TableDefinition::new("streams_data");
+const ENCRYPTED_TABLE: TableDefinition<&str, u8> = TableDefinition::new("encrypted");
+const EXPIRATION_TABLE: TableDefinition<&str, u64> = TableDefinition::new("expiration");
+
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub struct StreamEntry {
+    pub fields: Vec<(String, String)>,
+}
+
+#[derive(Serialize, Deserialize, Debug, Clone)]
+pub struct ListValue {
+    pub elements: Vec<String>,
+}
+
+#[inline]
+pub fn now_in_millis() -> u128 {
+    let start = SystemTime::now();
+    let duration_since_epoch = start.duration_since(UNIX_EPOCH).unwrap();
+    duration_since_epoch.as_millis()
+}
+
+pub struct Storage {
+    db: Database,
+    crypto: Option<CryptoFactory>,
+}
+
+impl Storage {
+    pub fn new(
+        path: impl AsRef<Path>,
+        should_encrypt: bool,
+        master_key: Option<&str>,
+    ) -> Result<Self, DBError> {
+        let db = Database::create(path)?;
+
+        // Create tables if they don't exist
+        let write_txn = db.begin_write()?;
+        {
+            let _ = write_txn.open_table(TYPES_TABLE)?;
+            let _ = write_txn.open_table(STRINGS_TABLE)?;
+            let _ = write_txn.open_table(HASHES_TABLE)?;
+            let _ = write_txn.open_table(LISTS_TABLE)?;
+            let _ = write_txn.open_table(STREAMS_META_TABLE)?;
+            let _ = write_txn.open_table(STREAMS_DATA_TABLE)?;
+            let _ = write_txn.open_table(ENCRYPTED_TABLE)?;
+            let _ = write_txn.open_table(EXPIRATION_TABLE)?;
+        }
+        write_txn.commit()?;
+
+        // Check if database was previously encrypted
+        let read_txn = db.begin_read()?;
+        let encrypted_table = read_txn.open_table(ENCRYPTED_TABLE)?;
+        let was_encrypted = encrypted_table
+            .get("encrypted")?
+            .map(|v| v.value() == 1)
+            .unwrap_or(false);
+        drop(read_txn);
+
+        let crypto = if should_encrypt || was_encrypted {
+            if let Some(key) = master_key {
+                Some(CryptoFactory::new(key.as_bytes()))
+            } else {
+                return Err(DBError(
+                    "Encryption requested but no master key provided".to_string(),
+                ));
+            }
+        } else {
+            None
+        };
+
+        // If we're enabling encryption for the first time, mark it
+        if should_encrypt && !was_encrypted {
+            let write_txn = db.begin_write()?;
+            {
+                let mut encrypted_table = write_txn.open_table(ENCRYPTED_TABLE)?;
+                encrypted_table.insert("encrypted", &1u8)?;
+            }
+            write_txn.commit()?;
+        }
+
+        Ok(Storage { db, crypto })
+    }
+
+    pub fn is_encrypted(&self) -> bool {
+        self.crypto.is_some()
+    }
+
+    // Helper methods for encryption
+    fn encrypt_if_needed(&self, data: &[u8]) -> Result<Vec<u8>, DBError> {
+        if let Some(crypto) = &self.crypto {
+            Ok(crypto.encrypt(data))
+        } else {
+            Ok(data.to_vec())
+        }
+    }
+
+    fn decrypt_if_needed(&self, data: &[u8]) -> Result<Vec<u8>, DBError> {
+        if let Some(crypto) = &self.crypto {
+            Ok(crypto.decrypt(data)?)
+        } else {
+            Ok(data.to_vec())
+        }
+    }
+}
+
+use crate::storage_trait::StorageBackend;
+
+impl StorageBackend for Storage {
+    fn get(&self, key: &str) -> Result<Option<String>, DBError> {
+        self.get(key)
+    }
+
+    fn set(&self, key: String, value: String) -> Result<(), DBError> {
+        self.set(key, value)
+    }
+
+    fn setx(&self, key: String, value: String, expire_ms: u128) -> Result<(), DBError> {
+        self.setx(key, value, expire_ms)
+    }
+
+    fn del(&self, key: String) -> Result<(), DBError> {
+        self.del(key)
+    }
+
+    fn exists(&self, key: &str) -> Result<bool, DBError> {
+        self.exists(key)
+    }
+
+    fn keys(&self, pattern: &str) -> Result<Vec<String>, DBError> {
+        self.keys(pattern)
+    }
+
+    fn dbsize(&self) -> Result<i64, DBError> {
+        self.dbsize()
+    }
+
+    fn flushdb(&self) -> Result<(), DBError> {
+        self.flushdb()
+    }
+
+    fn get_key_type(&self, key: &str) -> Result<Option<String>, DBError> {
+        self.get_key_type(key)
+    }
+
+    fn scan(
+        &self,
+        cursor: u64,
+        pattern: Option<&str>,
+        count: Option<u64>,
+    ) -> Result<(u64, Vec<(String, String)>), DBError> {
+        self.scan(cursor, pattern, count)
+    }
+
+    fn hscan(
+        &self,
+        key: &str,
+        cursor: u64,
+        pattern: Option<&str>,
+        count: Option<u64>,
+    ) -> Result<(u64, Vec<(String, String)>), DBError> {
+        self.hscan(key, cursor, pattern, count)
+    }
+
+    fn hset(&self, key: &str, pairs: Vec<(String, String)>) -> Result<i64, DBError> {
|
||||||
|
self.hset(key, pairs)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hget(&self, key: &str, field: &str) -> Result<Option<String>, DBError> {
|
||||||
|
self.hget(key, field)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hgetall(&self, key: &str) -> Result<Vec<(String, String)>, DBError> {
|
||||||
|
self.hgetall(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hdel(&self, key: &str, fields: Vec<String>) -> Result<i64, DBError> {
|
||||||
|
self.hdel(key, fields)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hexists(&self, key: &str, field: &str) -> Result<bool, DBError> {
|
||||||
|
self.hexists(key, field)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hkeys(&self, key: &str) -> Result<Vec<String>, DBError> {
|
||||||
|
self.hkeys(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hvals(&self, key: &str) -> Result<Vec<String>, DBError> {
|
||||||
|
self.hvals(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hlen(&self, key: &str) -> Result<i64, DBError> {
|
||||||
|
self.hlen(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hmget(&self, key: &str, fields: Vec<String>) -> Result<Vec<Option<String>>, DBError> {
|
||||||
|
self.hmget(key, fields)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn hsetnx(&self, key: &str, field: &str, value: &str) -> Result<bool, DBError> {
|
||||||
|
self.hsetnx(key, field, value)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn lpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError> {
|
||||||
|
self.lpush(key, elements)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn rpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError> {
|
||||||
|
self.rpush(key, elements)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn lpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError> {
|
||||||
|
self.lpop(key, count)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn rpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError> {
|
||||||
|
self.rpop(key, count)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn llen(&self, key: &str) -> Result<i64, DBError> {
|
||||||
|
self.llen(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn lindex(&self, key: &str, index: i64) -> Result<Option<String>, DBError> {
|
||||||
|
self.lindex(key, index)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn lrange(&self, key: &str, start: i64, stop: i64) -> Result<Vec<String>, DBError> {
|
||||||
|
self.lrange(key, start, stop)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn ltrim(&self, key: &str, start: i64, stop: i64) -> Result<(), DBError> {
|
||||||
|
self.ltrim(key, start, stop)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn lrem(&self, key: &str, count: i64, element: &str) -> Result<i64, DBError> {
|
||||||
|
self.lrem(key, count, element)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn ttl(&self, key: &str) -> Result<i64, DBError> {
|
||||||
|
self.ttl(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn expire_seconds(&self, key: &str, secs: u64) -> Result<bool, DBError> {
|
||||||
|
self.expire_seconds(key, secs)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn pexpire_millis(&self, key: &str, ms: u128) -> Result<bool, DBError> {
|
||||||
|
self.pexpire_millis(key, ms)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn persist(&self, key: &str) -> Result<bool, DBError> {
|
||||||
|
self.persist(key)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn expire_at_seconds(&self, key: &str, ts_secs: i64) -> Result<bool, DBError> {
|
||||||
|
self.expire_at_seconds(key, ts_secs)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn pexpire_at_millis(&self, key: &str, ts_ms: i64) -> Result<bool, DBError> {
|
||||||
|
self.pexpire_at_millis(key, ts_ms)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn is_encrypted(&self) -> bool {
|
||||||
|
self.is_encrypted()
|
||||||
|
}
|
||||||
|
|
||||||
|
fn info(&self) -> Result<Vec<(String, String)>, DBError> {
|
||||||
|
self.info()
|
||||||
|
}
|
||||||
|
|
||||||
|
fn clone_arc(&self) -> Arc<dyn StorageBackend> {
|
||||||
|
unimplemented!("Storage cloning not yet implemented for redb backend")
|
||||||
|
}
|
||||||
|
}
|
||||||
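The constructor above only builds a `CryptoFactory` when encryption is requested or was already enabled, and `encrypt_if_needed`/`decrypt_if_needed` fall back to a plain byte copy otherwise. A minimal standalone sketch of that gating pattern follows; the `XorCipher` is a toy stand-in used only for illustration (the real code wraps ChaCha20-Poly1305 behind `CryptoFactory`), and the `Store` type here is hypothetical:

```rust
// Toy cipher standing in for CryptoFactory (NOT real encryption).
struct XorCipher {
    key: u8,
}

impl XorCipher {
    fn encrypt(&self, data: &[u8]) -> Vec<u8> {
        data.iter().map(|b| b ^ self.key).collect()
    }
    fn decrypt(&self, data: &[u8]) -> Vec<u8> {
        // XOR is its own inverse.
        data.iter().map(|b| b ^ self.key).collect()
    }
}

struct Store {
    crypto: Option<XorCipher>,
}

impl Store {
    // Encrypt only when a cipher is configured; otherwise pass bytes through.
    fn encrypt_if_needed(&self, data: &[u8]) -> Vec<u8> {
        match &self.crypto {
            Some(c) => c.encrypt(data),
            None => data.to_vec(),
        }
    }
    fn decrypt_if_needed(&self, data: &[u8]) -> Vec<u8> {
        match &self.crypto {
            Some(c) => c.decrypt(data),
            None => data.to_vec(),
        }
    }
}

fn main() {
    // Plain store: values pass through unchanged.
    let plain = Store { crypto: None };
    assert_eq!(plain.encrypt_if_needed(b"value"), b"value".to_vec());

    // Encrypted store: ciphertext differs but round-trips.
    let enc = Store { crypto: Some(XorCipher { key: 0x5a }) };
    let ct = enc.encrypt_if_needed(b"value");
    assert_ne!(ct, b"value".to_vec());
    assert_eq!(enc.decrypt_if_needed(&ct), b"value".to_vec());
    println!("ok");
}
```

The point of the gate is that the same read/write paths serve both plain and encrypted databases; only the value bytes change, which is why the expiration timestamps can stay unencrypted.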
@@ -1,6 +1,6 @@
use super::*;
use crate::error::DBError;
use redb::ReadableTable;

impl Storage {
    pub fn flushdb(&self) -> Result<(), DBError> {
@@ -15,11 +15,17 @@ impl Storage {
        let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;

        // inefficient, but there is no other way
        let keys: Vec<String> = types_table
            .iter()?
            .map(|item| item.unwrap().0.value().to_string())
            .collect();
        for key in keys {
            types_table.remove(key.as_str())?;
        }
        let keys: Vec<String> = strings_table
            .iter()?
            .map(|item| item.unwrap().0.value().to_string())
            .collect();
        for key in keys {
            strings_table.remove(key.as_str())?;
        }
@@ -34,23 +40,35 @@ impl Storage {
        for (key, field) in keys {
            hashes_table.remove((key.as_str(), field.as_str()))?;
        }
        let keys: Vec<String> = lists_table
            .iter()?
            .map(|item| item.unwrap().0.value().to_string())
            .collect();
        for key in keys {
            lists_table.remove(key.as_str())?;
        }
        let keys: Vec<String> = streams_meta_table
            .iter()?
            .map(|item| item.unwrap().0.value().to_string())
            .collect();
        for key in keys {
            streams_meta_table.remove(key.as_str())?;
        }
        let keys: Vec<(String, String)> = streams_data_table
            .iter()?
            .map(|item| {
                let binding = item.unwrap();
                let (key, field) = binding.0.value();
                (key.to_string(), field.to_string())
            })
            .collect();
        for (key, field) in keys {
            streams_data_table.remove((key.as_str(), field.as_str()))?;
        }
        let keys: Vec<String> = expiration_table
            .iter()?
            .map(|item| item.unwrap().0.value().to_string())
            .collect();
        for key in keys {
            expiration_table.remove(key.as_str())?;
        }
@@ -62,7 +80,7 @@ impl Storage {
    pub fn get_key_type(&self, key: &str) -> Result<Option<String>, DBError> {
        let read_txn = self.db.begin_read()?;
        let table = read_txn.open_table(TYPES_TABLE)?;

        // Before returning type, check for expiration
        if let Some(type_val) = table.get(key)? {
            if type_val.value() == "string" {
@@ -83,7 +101,7 @@ impl Storage {
    // ✅ ENCRYPTION APPLIED: Value is encrypted/decrypted
    pub fn get(&self, key: &str) -> Result<Option<String>, DBError> {
        let read_txn = self.db.begin_read()?;

        let types_table = read_txn.open_table(TYPES_TABLE)?;
        match types_table.get(key)? {
            Some(type_val) if type_val.value() == "string" => {
@@ -96,7 +114,7 @@ impl Storage {
                return Ok(None);
            }
        }

        // Get and decrypt value
        let strings_table = read_txn.open_table(STRINGS_TABLE)?;
        match strings_table.get(key)? {
@@ -115,21 +133,21 @@ impl Storage {
    // ✅ ENCRYPTION APPLIED: Value is encrypted before storage
    pub fn set(&self, key: String, value: String) -> Result<(), DBError> {
        let write_txn = self.db.begin_write()?;

        {
            let mut types_table = write_txn.open_table(TYPES_TABLE)?;
            types_table.insert(key.as_str(), "string")?;

            let mut strings_table = write_txn.open_table(STRINGS_TABLE)?;
            // Only encrypt the value, not expiration
            let encrypted = self.encrypt_if_needed(value.as_bytes())?;
            strings_table.insert(key.as_str(), encrypted.as_slice())?;

            // Remove any existing expiration since this is a regular SET
            let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
            expiration_table.remove(key.as_str())?;
        }

        write_txn.commit()?;
        Ok(())
    }
@@ -137,41 +155,42 @@ impl Storage {
    // ✅ ENCRYPTION APPLIED: Value is encrypted before storage
    pub fn setx(&self, key: String, value: String, expire_ms: u128) -> Result<(), DBError> {
        let write_txn = self.db.begin_write()?;

        {
            let mut types_table = write_txn.open_table(TYPES_TABLE)?;
            types_table.insert(key.as_str(), "string")?;

            let mut strings_table = write_txn.open_table(STRINGS_TABLE)?;
            // Only encrypt the value
            let encrypted = self.encrypt_if_needed(value.as_bytes())?;
            strings_table.insert(key.as_str(), encrypted.as_slice())?;

            // Store expiration separately (unencrypted)
            let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
            let expires_at = expire_ms + now_in_millis();
            expiration_table.insert(key.as_str(), &(expires_at as u64))?;
        }

        write_txn.commit()?;
        Ok(())
    }

    pub fn del(&self, key: String) -> Result<(), DBError> {
        let write_txn = self.db.begin_write()?;

        {
            let mut types_table = write_txn.open_table(TYPES_TABLE)?;
            let mut strings_table = write_txn.open_table(STRINGS_TABLE)?;
            let mut hashes_table: redb::Table<(&str, &str), &[u8]> =
                write_txn.open_table(HASHES_TABLE)?;
            let mut lists_table = write_txn.open_table(LISTS_TABLE)?;

            // Remove from type table
            types_table.remove(key.as_str())?;

            // Remove from strings table
            strings_table.remove(key.as_str())?;

            // Remove all hash fields for this key
            let mut to_remove = Vec::new();
            let mut iter = hashes_table.iter()?;
@@ -183,19 +202,19 @@ impl Storage {
                }
            }
            drop(iter);

            for (hash_key, field) in to_remove {
                hashes_table.remove((hash_key.as_str(), field.as_str()))?;
            }

            // Remove from lists table
            lists_table.remove(key.as_str())?;

            // Also remove expiration
            let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
            expiration_table.remove(key.as_str())?;
        }

        write_txn.commit()?;
        Ok(())
    }
@@ -203,7 +222,7 @@ impl Storage {
    pub fn keys(&self, pattern: &str) -> Result<Vec<String>, DBError> {
        let read_txn = self.db.begin_read()?;
        let table = read_txn.open_table(TYPES_TABLE)?;

        let mut keys = Vec::new();
        let mut iter = table.iter()?;
        while let Some(entry) = iter.next() {
@@ -212,7 +231,34 @@ impl Storage {
                keys.push(key);
            }
        }

        Ok(keys)
    }
}

impl Storage {
    pub fn dbsize(&self) -> Result<i64, DBError> {
        let read_txn = self.db.begin_read()?;
        let types_table = read_txn.open_table(TYPES_TABLE)?;
        let expiration_table = read_txn.open_table(EXPIRATION_TABLE)?;

        let mut count: i64 = 0;
        let mut iter = types_table.iter()?;
        while let Some(entry) = iter.next() {
            let entry = entry?;
            let key = entry.0.value();
            let ty = entry.1.value();

            if ty == "string" {
                if let Some(expires_at) = expiration_table.get(key)? {
                    if now_in_millis() > expires_at.value() as u128 {
                        // Skip logically expired string keys
                        continue;
                    }
                }
            }
            count += 1;
        }
        Ok(count)
    }
}
@@ -1,24 +1,29 @@
use super::*;
use crate::error::DBError;
use redb::ReadableTable;

impl Storage {
    // ✅ ENCRYPTION APPLIED: Values are decrypted after retrieval
    pub fn scan(
        &self,
        cursor: u64,
        pattern: Option<&str>,
        count: Option<u64>,
    ) -> Result<(u64, Vec<(String, String)>), DBError> {
        let read_txn = self.db.begin_read()?;
        let types_table = read_txn.open_table(TYPES_TABLE)?;
        let strings_table = read_txn.open_table(STRINGS_TABLE)?;

        let mut result = Vec::new();
        let mut current_cursor = 0u64;
        let limit = count.unwrap_or(10) as usize;

        let mut iter = types_table.iter()?;
        while let Some(entry) = iter.next() {
            let entry = entry?;
            let key = entry.0.value().to_string();
            let key_type = entry.1.value().to_string();

            if current_cursor >= cursor {
                // Apply pattern matching if specified
                let matches = if let Some(pat) = pattern {
@@ -26,7 +31,7 @@ impl Storage {
                } else {
                    true
                };

                if matches {
                    // For scan, we return key-value pairs for string types
                    if key_type == "string" {
@@ -41,7 +46,7 @@ impl Storage {
                        // For non-string types, just return the key with type as value
                        result.push((key, key_type));
                    }

                    if result.len() >= limit {
                        break;
                    }
@@ -49,15 +54,19 @@ impl Storage {
            }
            current_cursor += 1;
        }

        let next_cursor = if result.len() < limit {
            0
        } else {
            current_cursor
        };
        Ok((next_cursor, result))
    }

    pub fn ttl(&self, key: &str) -> Result<i64, DBError> {
        let read_txn = self.db.begin_read()?;
        let types_table = read_txn.open_table(TYPES_TABLE)?;

        match types_table.get(key)? {
            Some(type_val) if type_val.value() == "string" => {
                let expiration_table = read_txn.open_table(EXPIRATION_TABLE)?;
@@ -75,14 +84,14 @@ impl Storage {
            }
            Some(_) => Ok(-1), // Key exists but is not a string (no expiration support for other types)
            None => Ok(-2),    // Key does not exist
        }
    }

    pub fn exists(&self, key: &str) -> Result<bool, DBError> {
        let read_txn = self.db.begin_read()?;
        let types_table = read_txn.open_table(TYPES_TABLE)?;

        match types_table.get(key)? {
            Some(type_val) if type_val.value() == "string" => {
                // Check if string key has expired
@@ -95,9 +104,131 @@ impl Storage {
                Ok(true)
            }
            Some(_) => Ok(true), // Key exists and is not a string
            None => Ok(false),   // Key does not exist
        }
    }

    // -------- Expiration helpers (string keys only, consistent with TTL/EXISTS) --------

    // Set expiry in seconds; returns true if applied (key exists and is string), false otherwise
    pub fn expire_seconds(&self, key: &str, secs: u64) -> Result<bool, DBError> {
        // Determine eligibility first to avoid holding borrows across commit
        let mut applied = false;
        let write_txn = self.db.begin_write()?;
        {
            let types_table = write_txn.open_table(TYPES_TABLE)?;
            let is_string = types_table
                .get(key)?
                .map(|v| v.value() == "string")
                .unwrap_or(false);
            if is_string {
                let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
                let expires_at = now_in_millis() + (secs as u128) * 1000;
                expiration_table.insert(key, &(expires_at as u64))?;
                applied = true;
            }
        }
        write_txn.commit()?;
        Ok(applied)
    }

    // Set expiry in milliseconds; returns true if applied (key exists and is string), false otherwise
    pub fn pexpire_millis(&self, key: &str, ms: u128) -> Result<bool, DBError> {
        let mut applied = false;
        let write_txn = self.db.begin_write()?;
        {
            let types_table = write_txn.open_table(TYPES_TABLE)?;
            let is_string = types_table
                .get(key)?
                .map(|v| v.value() == "string")
                .unwrap_or(false);
            if is_string {
                let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
                let expires_at = now_in_millis() + ms;
                expiration_table.insert(key, &(expires_at as u64))?;
                applied = true;
            }
        }
        write_txn.commit()?;
        Ok(applied)
    }

    // Remove expiry if present; returns true if removed, false otherwise
    pub fn persist(&self, key: &str) -> Result<bool, DBError> {
        let mut removed = false;
        let write_txn = self.db.begin_write()?;
        {
            let types_table = write_txn.open_table(TYPES_TABLE)?;
            let is_string = types_table
                .get(key)?
                .map(|v| v.value() == "string")
                .unwrap_or(false);
            if is_string {
                let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
                if expiration_table.remove(key)?.is_some() {
                    removed = true;
                }
            }
        }
        write_txn.commit()?;
        Ok(removed)
    }

    // Absolute EXPIREAT in seconds since epoch
    // Returns true if applied (key exists and is string), false otherwise
    pub fn expire_at_seconds(&self, key: &str, ts_secs: i64) -> Result<bool, DBError> {
        let mut applied = false;
        let write_txn = self.db.begin_write()?;
        {
            let types_table = write_txn.open_table(TYPES_TABLE)?;
            let is_string = types_table
                .get(key)?
                .map(|v| v.value() == "string")
                .unwrap_or(false);
            if is_string {
                let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
                let expires_at_ms: u128 = if ts_secs <= 0 {
                    0
                } else {
                    (ts_secs as u128) * 1000
                };
                expiration_table.insert(key, &(expires_at_ms as u64))?;
                applied = true;
            }
        }
        write_txn.commit()?;
        Ok(applied)
    }

    // Absolute PEXPIREAT in milliseconds since epoch
    // Returns true if applied (key exists and is string), false otherwise
    pub fn pexpire_at_millis(&self, key: &str, ts_ms: i64) -> Result<bool, DBError> {
        let mut applied = false;
        let write_txn = self.db.begin_write()?;
        {
            let types_table = write_txn.open_table(TYPES_TABLE)?;
            let is_string = types_table
                .get(key)?
                .map(|v| v.value() == "string")
                .unwrap_or(false);
            if is_string {
                let mut expiration_table = write_txn.open_table(EXPIRATION_TABLE)?;
                let expires_at_ms: u128 = if ts_ms <= 0 { 0 } else { ts_ms as u128 };
                expiration_table.insert(key, &(expires_at_ms as u64))?;
                applied = true;
            }
        }
        write_txn.commit()?;
        Ok(applied)
    }

    pub fn info(&self) -> Result<Vec<(String, String)>, DBError> {
        let dbsize = self.dbsize()?;
        Ok(vec![
            ("db_size".to_string(), dbsize.to_string()),
            ("is_encrypted".to_string(), self.is_encrypted().to_string()),
        ])
    }
}
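All five helpers share one timestamp convention: relative expiries (EXPIRE/PEXPIRE) store `now_in_millis() + offset`, while absolute ones (EXPIREAT/PEXPIREAT) clamp non-positive timestamps to 0, a deadline that is always in the past, so the key expires immediately. A minimal sketch of just that arithmetic (function names here are illustrative, not part of the crate):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn now_in_millis() -> u128 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis()
}

// Relative expiry (EXPIRE): absolute deadline = now + seconds * 1000.
fn deadline_from_secs(now_ms: u128, secs: u64) -> u128 {
    now_ms + (secs as u128) * 1000
}

// Absolute expiry (EXPIREAT): non-positive timestamps clamp to 0,
// which is always in the past, so the key is treated as expired.
fn deadline_from_unix_secs(ts_secs: i64) -> u128 {
    if ts_secs <= 0 {
        0
    } else {
        (ts_secs as u128) * 1000
    }
}

// The read paths (GET/EXISTS/TTL/DBSIZE) all apply this check lazily.
fn is_expired(now_ms: u128, deadline_ms: u128) -> bool {
    now_ms > deadline_ms
}

fn main() {
    let now = 1_700_000_000_000u128;
    assert_eq!(deadline_from_secs(now, 10), now + 10_000);
    assert_eq!(deadline_from_unix_secs(-5), 0);
    assert!(is_expired(now, 0)); // clamped deadlines expire immediately
    assert!(!is_expired(now, deadline_from_secs(now, 1)));
    let _ = now_in_millis(); // live clock; value varies per run
    println!("ok");
}
```

Note the expiry is lazy: nothing deletes the row at its deadline; readers simply treat a past deadline as a missing key, matching Redis semantics closely enough for TTL/EXISTS.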
// Utility function for glob pattern matching
@@ -105,21 +236,21 @@ pub fn glob_match(pattern: &str, text: &str) -> bool {
    if pattern == "*" {
        return true;
    }

    // Simple glob matching - supports * and ? wildcards
    let pattern_chars: Vec<char> = pattern.chars().collect();
    let text_chars: Vec<char> = text.chars().collect();

    fn match_recursive(pattern: &[char], text: &[char], pi: usize, ti: usize) -> bool {
        if pi >= pattern.len() {
            return ti >= text.len();
        }

        if ti >= text.len() {
            // Check if remaining pattern is all '*'
            return pattern[pi..].iter().all(|&c| c == '*');
        }

        match pattern[pi] {
            '*' => {
                // Try matching zero or more characters
@@ -144,7 +275,7 @@ pub fn glob_match(pattern: &str, text: &str) -> bool {
            }
        }
    }

    match_recursive(&pattern_chars, &text_chars, 0, 0)
}

@@ -165,4 +296,4 @@ mod tests {
        assert!(glob_match("*test*", "this_is_a_test_string"));
        assert!(!glob_match("*test*", "this_is_a_string"));
    }
}
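The diff hunks above elide the middle of `glob_match` (the bodies of the `'*'` and `'?'` arms). A self-contained sketch of the same recursive `*`/`?` scheme, written to pass the tests shown in the diff; this is an illustrative reconstruction, not the crate's exact source:

```rust
// Glob matcher supporting '*' (zero or more chars) and '?' (exactly one char).
pub fn glob_match(pattern: &str, text: &str) -> bool {
    if pattern == "*" {
        return true;
    }
    let pattern_chars: Vec<char> = pattern.chars().collect();
    let text_chars: Vec<char> = text.chars().collect();

    fn match_recursive(pattern: &[char], text: &[char], pi: usize, ti: usize) -> bool {
        if pi >= pattern.len() {
            return ti >= text.len();
        }
        if ti >= text.len() {
            // Remaining pattern must be all '*' to match empty text
            return pattern[pi..].iter().all(|&c| c == '*');
        }
        match pattern[pi] {
            // '*' either matches nothing (advance pattern) or
            // consumes one text char (advance text, keep the '*')
            '*' => {
                match_recursive(pattern, text, pi + 1, ti)
                    || match_recursive(pattern, text, pi, ti + 1)
            }
            // '?' matches exactly one character
            '?' => match_recursive(pattern, text, pi + 1, ti + 1),
            // Literal character must match exactly
            c => c == text[ti] && match_recursive(pattern, text, pi + 1, ti + 1),
        }
    }

    match_recursive(&pattern_chars, &text_chars, 0, 0)
}

fn main() {
    assert!(glob_match("*", "anything"));
    assert!(glob_match("h?llo", "hello"));
    assert!(glob_match("*test*", "this_is_a_test_string"));
    assert!(!glob_match("*test*", "this_is_a_string"));
    println!("ok");
}
```

The naive double recursion on `'*'` is exponential in the worst case, which is acceptable for KEYS/SCAN over short key names but would warrant a dynamic-programming matcher for adversarial patterns.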
@@ -1,44 +1,50 @@
-use redb::{ReadableTable};
-use crate::error::DBError;
 use super::*;
+use crate::error::DBError;
+use redb::ReadableTable;
 
 impl Storage {
     // ✅ ENCRYPTION APPLIED: Values are encrypted before storage
     pub fn hset(&self, key: &str, pairs: Vec<(String, String)>) -> Result<i64, DBError> {
         let write_txn = self.db.begin_write()?;
         let mut new_fields = 0i64;
 
         {
             let mut types_table = write_txn.open_table(TYPES_TABLE)?;
             let mut hashes_table = write_txn.open_table(HASHES_TABLE)?;
 
             let key_type = {
                 let access_guard = types_table.get(key)?;
                 access_guard.map(|v| v.value().to_string())
             };
 
             match key_type.as_deref() {
-                Some("hash") | None => { // Proceed if hash or new key
+                Some("hash") | None => {
+                    // Proceed if hash or new key
                     // Set the type to hash (only if new key or existing hash)
                     types_table.insert(key, "hash")?;
 
                     for (field, value) in pairs {
                         // Check if field already exists
                         let exists = hashes_table.get((key, field.as_str()))?.is_some();
 
                         // Encrypt the value before storing
                         let encrypted = self.encrypt_if_needed(value.as_bytes())?;
                         hashes_table.insert((key, field.as_str()), encrypted.as_slice())?;
 
                         if !exists {
                             new_fields += 1;
                         }
                     }
                 }
-                Some(_) => return Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+                Some(_) => {
+                    return Err(DBError(
+                        "WRONGTYPE Operation against a key holding the wrong kind of value"
+                            .to_string(),
+                    ))
+                }
             }
         }
 
         write_txn.commit()?;
         Ok(new_fields)
     }
@@ -47,7 +53,7 @@ impl Storage {
     pub fn hget(&self, key: &str, field: &str) -> Result<Option<String>, DBError> {
         let read_txn = self.db.begin_read()?;
         let types_table = read_txn.open_table(TYPES_TABLE)?;
 
         let key_type = types_table.get(key)?.map(|v| v.value().to_string());
 
         match key_type.as_deref() {
@@ -62,7 +68,9 @@ impl Storage {
                     None => Ok(None),
                 }
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok(None),
         }
     }
@@ -80,7 +88,7 @@ impl Storage {
             Some("hash") => {
                 let hashes_table = read_txn.open_table(HASHES_TABLE)?;
                 let mut result = Vec::new();
 
                 let mut iter = hashes_table.iter()?;
                 while let Some(entry) = iter.next() {
                     let entry = entry?;
@@ -91,10 +99,12 @@ impl Storage {
                         result.push((field.to_string(), value));
                     }
                 }
 
                 Ok(result)
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok(Vec::new()),
         }
     }
@@ -102,24 +112,24 @@ impl Storage {
     pub fn hdel(&self, key: &str, fields: Vec<String>) -> Result<i64, DBError> {
         let write_txn = self.db.begin_write()?;
         let mut deleted = 0i64;
 
         // First check if key exists and is a hash
         let key_type = {
             let types_table = write_txn.open_table(TYPES_TABLE)?;
             let access_guard = types_table.get(key)?;
             access_guard.map(|v| v.value().to_string())
         };
 
         match key_type.as_deref() {
             Some("hash") => {
                 let mut hashes_table = write_txn.open_table(HASHES_TABLE)?;
 
                 for field in fields {
                     if hashes_table.remove((key, field.as_str()))?.is_some() {
                         deleted += 1;
                     }
                 }
 
                 // Check if hash is now empty and remove type if so
                 let mut has_fields = false;
                 let mut iter = hashes_table.iter()?;
@@ -132,24 +142,26 @@ impl Storage {
                     }
                 }
                 drop(iter);
 
                 if !has_fields {
                     let mut types_table = write_txn.open_table(TYPES_TABLE)?;
                     types_table.remove(key)?;
                 }
             }
-            Some(_) => return Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => {
+                return Err(DBError(
+                    "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+                ))
+            }
             None => {} // Key does not exist, nothing to delete, return 0 deleted
         }
 
         write_txn.commit()?;
         Ok(deleted)
     }
 
     pub fn hexists(&self, key: &str, field: &str) -> Result<bool, DBError> {
         let read_txn = self.db.begin_read()?;
-        let types_table = read_txn.open_table(TYPES_TABLE)?;
-
         let types_table = read_txn.open_table(TYPES_TABLE)?;
         let key_type = {
             let access_guard = types_table.get(key)?;
@@ -161,15 +173,15 @@ impl Storage {
                 let hashes_table = read_txn.open_table(HASHES_TABLE)?;
                 Ok(hashes_table.get((key, field))?.is_some())
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok(false),
         }
     }
 
     pub fn hkeys(&self, key: &str) -> Result<Vec<String>, DBError> {
         let read_txn = self.db.begin_read()?;
-        let types_table = read_txn.open_table(TYPES_TABLE)?;
-
         let types_table = read_txn.open_table(TYPES_TABLE)?;
         let key_type = {
             let access_guard = types_table.get(key)?;
@@ -180,7 +192,7 @@ impl Storage {
             Some("hash") => {
                 let hashes_table = read_txn.open_table(HASHES_TABLE)?;
                 let mut result = Vec::new();
 
                 let mut iter = hashes_table.iter()?;
                 while let Some(entry) = iter.next() {
                     let entry = entry?;
@@ -189,10 +201,12 @@ impl Storage {
                         result.push(field.to_string());
                     }
                 }
 
                 Ok(result)
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok(Vec::new()),
         }
     }
@@ -200,8 +214,6 @@ impl Storage {
     // ✅ ENCRYPTION APPLIED: All values are decrypted after retrieval
     pub fn hvals(&self, key: &str) -> Result<Vec<String>, DBError> {
         let read_txn = self.db.begin_read()?;
-        let types_table = read_txn.open_table(TYPES_TABLE)?;
-
         let types_table = read_txn.open_table(TYPES_TABLE)?;
         let key_type = {
             let access_guard = types_table.get(key)?;
@@ -212,7 +224,7 @@ impl Storage {
             Some("hash") => {
                 let hashes_table = read_txn.open_table(HASHES_TABLE)?;
                 let mut result = Vec::new();
 
                 let mut iter = hashes_table.iter()?;
                 while let Some(entry) = iter.next() {
                     let entry = entry?;
@@ -223,18 +235,18 @@ impl Storage {
                         result.push(value);
                     }
                 }
 
                 Ok(result)
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok(Vec::new()),
         }
     }
 
     pub fn hlen(&self, key: &str) -> Result<i64, DBError> {
         let read_txn = self.db.begin_read()?;
-        let types_table = read_txn.open_table(TYPES_TABLE)?;
-
         let types_table = read_txn.open_table(TYPES_TABLE)?;
         let key_type = {
             let access_guard = types_table.get(key)?;
@@ -245,7 +257,7 @@ impl Storage {
             Some("hash") => {
                 let hashes_table = read_txn.open_table(HASHES_TABLE)?;
                 let mut count = 0i64;
 
                 let mut iter = hashes_table.iter()?;
                 while let Some(entry) = iter.next() {
                     let entry = entry?;
@@ -254,10 +266,12 @@ impl Storage {
                         count += 1;
                     }
                 }
 
                 Ok(count)
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok(0),
         }
     }
@@ -265,8 +279,6 @@ impl Storage {
     // ✅ ENCRYPTION APPLIED: Values are decrypted after retrieval
     pub fn hmget(&self, key: &str, fields: Vec<String>) -> Result<Vec<Option<String>>, DBError> {
         let read_txn = self.db.begin_read()?;
-        let types_table = read_txn.open_table(TYPES_TABLE)?;
-
         let types_table = read_txn.open_table(TYPES_TABLE)?;
         let key_type = {
             let access_guard = types_table.get(key)?;
@@ -277,7 +289,7 @@ impl Storage {
             Some("hash") => {
                 let hashes_table = read_txn.open_table(HASHES_TABLE)?;
                 let mut result = Vec::new();
 
                 for field in fields {
                     match hashes_table.get((key, field.as_str()))? {
                         Some(data) => {
@@ -288,10 +300,12 @@ impl Storage {
                         None => result.push(None),
                     }
                 }
 
                 Ok(result)
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok(fields.into_iter().map(|_| None).collect()),
         }
     }
@@ -300,42 +314,52 @@ impl Storage {
     pub fn hsetnx(&self, key: &str, field: &str, value: &str) -> Result<bool, DBError> {
         let write_txn = self.db.begin_write()?;
         let mut result = false;
 
         {
             let mut types_table = write_txn.open_table(TYPES_TABLE)?;
             let mut hashes_table = write_txn.open_table(HASHES_TABLE)?;
 
             let key_type = {
                 let access_guard = types_table.get(key)?;
                 access_guard.map(|v| v.value().to_string())
             };
 
             match key_type.as_deref() {
-                Some("hash") | None => { // Proceed if hash or new key
+                Some("hash") | None => {
+                    // Proceed if hash or new key
                     // Check if field already exists
                     if hashes_table.get((key, field))?.is_none() {
                         // Set the type to hash (only if new key or existing hash)
                         types_table.insert(key, "hash")?;
 
                         // Encrypt the value before storing
                         let encrypted = self.encrypt_if_needed(value.as_bytes())?;
                         hashes_table.insert((key, field), encrypted.as_slice())?;
                         result = true;
                     }
                 }
-                Some(_) => return Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+                Some(_) => {
+                    return Err(DBError(
+                        "WRONGTYPE Operation against a key holding the wrong kind of value"
+                            .to_string(),
+                    ))
+                }
             }
         }
 
         write_txn.commit()?;
         Ok(result)
     }
 
     // ✅ ENCRYPTION APPLIED: Values are decrypted after retrieval
-    pub fn hscan(&self, key: &str, cursor: u64, pattern: Option<&str>, count: Option<u64>) -> Result<(u64, Vec<(String, String)>), DBError> {
+    pub fn hscan(
+        &self,
+        key: &str,
+        cursor: u64,
+        pattern: Option<&str>,
+        count: Option<u64>,
+    ) -> Result<(u64, Vec<(String, String)>), DBError> {
         let read_txn = self.db.begin_read()?;
-        let types_table = read_txn.open_table(TYPES_TABLE)?;
-
         let types_table = read_txn.open_table(TYPES_TABLE)?;
         let key_type = {
             let access_guard = types_table.get(key)?;
@@ -348,28 +372,28 @@ impl Storage {
                 let mut result = Vec::new();
                 let mut current_cursor = 0u64;
                 let limit = count.unwrap_or(10) as usize;
 
                 let mut iter = hashes_table.iter()?;
                 while let Some(entry) = iter.next() {
                     let entry = entry?;
                     let (hash_key, field) = entry.0.value();
 
                     if hash_key == key {
                         if current_cursor >= cursor {
                             let field_str = field.to_string();
 
                             // Apply pattern matching if specified
                             let matches = if let Some(pat) = pattern {
                                 super::storage_extra::glob_match(pat, &field_str)
                             } else {
                                 true
                             };
 
                             if matches {
                                 let decrypted = self.decrypt_if_needed(entry.1.value())?;
                                 let value = String::from_utf8(decrypted)?;
                                 result.push((field_str, value));
 
                                 if result.len() >= limit {
                                     break;
                                 }
@@ -378,12 +402,18 @@ impl Storage {
                         current_cursor += 1;
                     }
                 }
 
-                let next_cursor = if result.len() < limit { 0 } else { current_cursor };
+                let next_cursor = if result.len() < limit {
+                    0
+                } else {
+                    current_cursor
+                };
                 Ok((next_cursor, result))
             }
-            Some(_) => Err(DBError("WRONGTYPE Operation against a key holding the wrong kind of value".to_string())),
+            Some(_) => Err(DBError(
+                "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+            )),
             None => Ok((0, Vec::new())),
         }
     }
 }
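The `hscan` hunks above page through a hash's fields with a numeric cursor: entries before `cursor` are skipped, up to `limit` matches are collected, and a page shorter than `limit` signals completion with cursor `0`. A storage-free sketch of just that pagination logic (field names only, no redb or encryption; `scan_page` is an illustrative name, not a herodb function):

```rust
// Cursor-based pagination over an ordered field list, mirroring the
// hscan loop in the diff: skip entries below `cursor`, take up to
// `limit`, and return 0 as the next cursor once the scan is exhausted.
fn scan_page(fields: &[&str], cursor: u64, limit: usize) -> (u64, Vec<String>) {
    let mut result = Vec::new();
    let mut current_cursor = 0u64;
    for field in fields {
        if current_cursor >= cursor {
            result.push(field.to_string());
            if result.len() >= limit {
                break; // page full; increment below is skipped, as in the diff
            }
        }
        current_cursor += 1;
    }
    // A short page means the iteration reached the end.
    let next = if result.len() < limit { 0 } else { current_cursor };
    (next, result)
}

fn main() {
    // One page covers everything: next cursor is 0.
    let (next, page) = scan_page(&["a", "b", "c"], 0, 10);
    assert_eq!(next, 0);
    assert_eq!(page.len(), 3);
    // Starting past the first entry skips it.
    assert_eq!(scan_page(&["a", "b", "c"], 1, 10).1, vec!["b", "c"]);
}
```

Note that, like the diff, this cursor is a positional index over the iteration order, not the opaque bucket cursor real Redis SCAN uses.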
@@ -1,20 +1,20 @@
-use redb::{ReadableTable};
-use crate::error::DBError;
 use super::*;
+use crate::error::DBError;
+use redb::ReadableTable;
 
 impl Storage {
     // ✅ ENCRYPTION APPLIED: Elements are encrypted before storage
     pub fn lpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError> {
         let write_txn = self.db.begin_write()?;
         let mut _length = 0i64;
 
         {
             let mut types_table = write_txn.open_table(TYPES_TABLE)?;
             let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
 
             // Set the type to list
             types_table.insert(key, "list")?;
 
             // Get current list or create empty one
             let mut list: Vec<String> = match lists_table.get(key)? {
                 Some(data) => {
@@ -23,20 +23,20 @@ impl Storage {
                 }
                 None => Vec::new(),
             };
 
             // Add elements to the front (left)
             for element in elements.into_iter() {
                 list.insert(0, element);
             }
 
             _length = list.len() as i64;
 
             // Encrypt and store the updated list
             let serialized = serde_json::to_vec(&list)?;
             let encrypted = self.encrypt_if_needed(&serialized)?;
             lists_table.insert(key, encrypted.as_slice())?;
         }
 
         write_txn.commit()?;
         Ok(_length)
     }
@@ -45,14 +45,14 @@ impl Storage {
     pub fn rpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError> {
         let write_txn = self.db.begin_write()?;
         let mut _length = 0i64;
 
         {
             let mut types_table = write_txn.open_table(TYPES_TABLE)?;
             let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
 
             // Set the type to list
             types_table.insert(key, "list")?;
 
             // Get current list or create empty one
             let mut list: Vec<String> = match lists_table.get(key)? {
                 Some(data) => {
@@ -61,17 +61,17 @@ impl Storage {
                 }
                 None => Vec::new(),
             };
 
             // Add elements to the end (right)
             list.extend(elements);
             _length = list.len() as i64;
 
             // Encrypt and store the updated list
             let serialized = serde_json::to_vec(&list)?;
             let encrypted = self.encrypt_if_needed(&serialized)?;
             lists_table.insert(key, encrypted.as_slice())?;
         }
 
         write_txn.commit()?;
         Ok(_length)
     }
@@ -80,12 +80,12 @@ impl Storage {
     pub fn lpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError> {
         let write_txn = self.db.begin_write()?;
         let mut result = Vec::new();
 
         // First check if key exists and is a list, and get the data
         let list_data = {
             let types_table = write_txn.open_table(TYPES_TABLE)?;
             let lists_table = write_txn.open_table(LISTS_TABLE)?;
 
             let result = match types_table.get(key)? {
                 Some(type_val) if type_val.value() == "list" => {
                     if let Some(data) = lists_table.get(key)? {
@@ -100,7 +100,7 @@ impl Storage {
             };
             result
         };
 
         if let Some(mut list) = list_data {
             let pop_count = std::cmp::min(count as usize, list.len());
             for _ in 0..pop_count {
@@ -108,7 +108,7 @@ impl Storage {
                 result.push(list.remove(0));
             }
 
             let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
             if list.is_empty() {
                 // Remove the key if list is empty
@@ -122,7 +122,7 @@ impl Storage {
                 lists_table.insert(key, encrypted.as_slice())?;
             }
         }
 
         write_txn.commit()?;
         Ok(result)
     }
@@ -131,12 +131,12 @@ impl Storage {
     pub fn rpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError> {
         let write_txn = self.db.begin_write()?;
         let mut result = Vec::new();
 
         // First check if key exists and is a list, and get the data
         let list_data = {
             let types_table = write_txn.open_table(TYPES_TABLE)?;
             let lists_table = write_txn.open_table(LISTS_TABLE)?;
 
             let result = match types_table.get(key)? {
                 Some(type_val) if type_val.value() == "list" => {
                     if let Some(data) = lists_table.get(key)? {
@@ -151,7 +151,7 @@ impl Storage {
             };
             result
         };
 
         if let Some(mut list) = list_data {
             let pop_count = std::cmp::min(count as usize, list.len());
             for _ in 0..pop_count {
@@ -159,7 +159,7 @@ impl Storage {
                 result.push(list.pop().unwrap());
             }
 
             let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
             if list.is_empty() {
                 // Remove the key if list is empty
@@ -173,7 +173,7 @@ impl Storage {
                 lists_table.insert(key, encrypted.as_slice())?;
             }
         }
 
         write_txn.commit()?;
         Ok(result)
     }
@@ -181,7 +181,7 @@ impl Storage {
     pub fn llen(&self, key: &str) -> Result<i64, DBError> {
         let read_txn = self.db.begin_read()?;
         let types_table = read_txn.open_table(TYPES_TABLE)?;
 
         match types_table.get(key)? {
             Some(type_val) if type_val.value() == "list" => {
                 let lists_table = read_txn.open_table(LISTS_TABLE)?;
@@ -202,7 +202,7 @@ impl Storage {
     pub fn lindex(&self, key: &str, index: i64) -> Result<Option<String>, DBError> {
         let read_txn = self.db.begin_read()?;
         let types_table = read_txn.open_table(TYPES_TABLE)?;
 
         match types_table.get(key)? {
             Some(type_val) if type_val.value() == "list" => {
                 let lists_table = read_txn.open_table(LISTS_TABLE)?;
@@ -210,13 +210,13 @@ impl Storage {
                     Some(data) => {
                         let decrypted = self.decrypt_if_needed(data.value())?;
                         let list: Vec<String> = serde_json::from_slice(&decrypted)?;
 
                         let actual_index = if index < 0 {
                             list.len() as i64 + index
                         } else {
                             index
                         };
 
                         if actual_index >= 0 && (actual_index as usize) < list.len() {
                             Ok(Some(list[actual_index as usize].clone()))
                         } else {
@@ -234,7 +234,7 @@ impl Storage {
     pub fn lrange(&self, key: &str, start: i64, stop: i64) -> Result<Vec<String>, DBError> {
         let read_txn = self.db.begin_read()?;
         let types_table = read_txn.open_table(TYPES_TABLE)?;
 
         match types_table.get(key)? {
             Some(type_val) if type_val.value() == "list" => {
                 let lists_table = read_txn.open_table(LISTS_TABLE)?;
@@ -242,22 +242,30 @@ impl Storage {
                     Some(data) => {
                         let decrypted = self.decrypt_if_needed(data.value())?;
                         let list: Vec<String> = serde_json::from_slice(&decrypted)?;
 
                         if list.is_empty() {
                             return Ok(Vec::new());
                         }
 
                         let len = list.len() as i64;
-                        let start_idx = if start < 0 { std::cmp::max(0, len + start) } else { std::cmp::min(start, len) };
-                        let stop_idx = if stop < 0 { std::cmp::max(-1, len + stop) } else { std::cmp::min(stop, len - 1) };
+                        let start_idx = if start < 0 {
+                            std::cmp::max(0, len + start)
+                        } else {
+                            std::cmp::min(start, len)
+                        };
+                        let stop_idx = if stop < 0 {
+                            std::cmp::max(-1, len + stop)
+                        } else {
+                            std::cmp::min(stop, len - 1)
+                        };
 
                         if start_idx > stop_idx || start_idx >= len {
                             return Ok(Vec::new());
                         }
 
                         let start_usize = start_idx as usize;
                         let stop_usize = (stop_idx + 1) as usize;
 
                         Ok(list[start_usize..std::cmp::min(stop_usize, list.len())].to_vec())
                     }
                     None => Ok(Vec::new()),
@@ -270,12 +278,12 @@ impl Storage {
|
|||||||
// ✅ ENCRYPTION APPLIED: Elements are decrypted after retrieval and encrypted before storage
|
// ✅ ENCRYPTION APPLIED: Elements are decrypted after retrieval and encrypted before storage
|
||||||
pub fn ltrim(&self, key: &str, start: i64, stop: i64) -> Result<(), DBError> {
|
pub fn ltrim(&self, key: &str, start: i64, stop: i64) -> Result<(), DBError> {
|
||||||
let write_txn = self.db.begin_write()?;
|
let write_txn = self.db.begin_write()?;
|
||||||
|
|
||||||
// First check if key exists and is a list, and get the data
|
// First check if key exists and is a list, and get the data
|
||||||
let list_data = {
|
let list_data = {
|
||||||
let types_table = write_txn.open_table(TYPES_TABLE)?;
|
let types_table = write_txn.open_table(TYPES_TABLE)?;
|
||||||
let lists_table = write_txn.open_table(LISTS_TABLE)?;
|
let lists_table = write_txn.open_table(LISTS_TABLE)?;
|
||||||
|
|
||||||
let result = match types_table.get(key)? {
|
let result = match types_table.get(key)? {
|
||||||
Some(type_val) if type_val.value() == "list" => {
|
Some(type_val) if type_val.value() == "list" => {
|
||||||
if let Some(data) = lists_table.get(key)? {
|
if let Some(data) = lists_table.get(key)? {
|
||||||
@@ -290,17 +298,25 @@ impl Storage {
|
|||||||
};
|
};
|
||||||
result
|
result
|
||||||
};
|
};
|
||||||
|
|
||||||
if let Some(list) = list_data {
|
if let Some(list) = list_data {
|
||||||
if list.is_empty() {
|
if list.is_empty() {
|
||||||
write_txn.commit()?;
|
write_txn.commit()?;
|
||||||
return Ok(());
|
return Ok(());
|
||||||
}
|
}
|
||||||
|
|
||||||
let len = list.len() as i64;
|
let len = list.len() as i64;
|
||||||
let start_idx = if start < 0 { std::cmp::max(0, len + start) } else { std::cmp::min(start, len) };
|
let start_idx = if start < 0 {
|
||||||
let stop_idx = if stop < 0 { std::cmp::max(-1, len + stop) } else { std::cmp::min(stop, len - 1) };
|
std::cmp::max(0, len + start)
|
||||||
|
} else {
|
||||||
|
std::cmp::min(start, len)
|
||||||
|
};
|
||||||
|
let stop_idx = if stop < 0 {
|
||||||
|
std::cmp::max(-1, len + stop)
|
||||||
|
} else {
|
||||||
|
std::cmp::min(stop, len - 1)
|
||||||
|
};
|
||||||
|
|
||||||
let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
|
let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
|
||||||
if start_idx > stop_idx || start_idx >= len {
|
if start_idx > stop_idx || start_idx >= len {
|
||||||
// Remove the entire list
|
// Remove the entire list
|
||||||
@@ -311,7 +327,7 @@ impl Storage {
|
|||||||
let start_usize = start_idx as usize;
|
let start_usize = start_idx as usize;
|
||||||
let stop_usize = (stop_idx + 1) as usize;
|
let stop_usize = (stop_idx + 1) as usize;
|
||||||
let trimmed = list[start_usize..std::cmp::min(stop_usize, list.len())].to_vec();
|
let trimmed = list[start_usize..std::cmp::min(stop_usize, list.len())].to_vec();
|
||||||
|
|
||||||
if trimmed.is_empty() {
|
if trimmed.is_empty() {
|
||||||
lists_table.remove(key)?;
|
lists_table.remove(key)?;
|
||||||
let mut types_table = write_txn.open_table(TYPES_TABLE)?;
|
let mut types_table = write_txn.open_table(TYPES_TABLE)?;
|
||||||
@@ -324,7 +340,7 @@ impl Storage {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
write_txn.commit()?;
|
write_txn.commit()?;
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
@@ -333,12 +349,12 @@ impl Storage {
|
|||||||
pub fn lrem(&self, key: &str, count: i64, element: &str) -> Result<i64, DBError> {
|
pub fn lrem(&self, key: &str, count: i64, element: &str) -> Result<i64, DBError> {
|
||||||
let write_txn = self.db.begin_write()?;
|
let write_txn = self.db.begin_write()?;
|
||||||
let mut removed = 0i64;
|
let mut removed = 0i64;
|
||||||
|
|
||||||
// First check if key exists and is a list, and get the data
|
// First check if key exists and is a list, and get the data
|
||||||
let list_data = {
|
let list_data = {
|
||||||
let types_table = write_txn.open_table(TYPES_TABLE)?;
|
let types_table = write_txn.open_table(TYPES_TABLE)?;
|
||||||
let lists_table = write_txn.open_table(LISTS_TABLE)?;
|
let lists_table = write_txn.open_table(LISTS_TABLE)?;
|
||||||
|
|
||||||
let result = match types_table.get(key)? {
|
let result = match types_table.get(key)? {
|
||||||
Some(type_val) if type_val.value() == "list" => {
|
Some(type_val) if type_val.value() == "list" => {
|
||||||
if let Some(data) = lists_table.get(key)? {
|
if let Some(data) = lists_table.get(key)? {
|
||||||
@@ -353,7 +369,7 @@ impl Storage {
|
|||||||
};
|
};
|
||||||
result
|
result
|
||||||
};
|
};
|
||||||
|
|
||||||
if let Some(mut list) = list_data {
|
if let Some(mut list) = list_data {
|
||||||
if count == 0 {
|
if count == 0 {
|
||||||
// Remove all occurrences
|
// Remove all occurrences
|
||||||
@@ -383,7 +399,7 @@ impl Storage {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
|
let mut lists_table = write_txn.open_table(LISTS_TABLE)?;
|
||||||
if list.is_empty() {
|
if list.is_empty() {
|
||||||
lists_table.remove(key)?;
|
lists_table.remove(key)?;
|
||||||
@@ -396,8 +412,8 @@ impl Storage {
|
|||||||
lists_table.insert(key, encrypted.as_slice())?;
|
lists_table.insert(key, encrypted.as_slice())?;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
write_txn.commit()?;
|
write_txn.commit()?;
|
||||||
Ok(removed)
|
Ok(removed)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
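The hunks above only reformat the one-line `start_idx`/`stop_idx` clamping into a multi-line form; the Redis-style normalization itself is unchanged. As a standalone sketch (the helper name `clamp_range` is hypothetical, not part of the diff), the rule behaves like this:

```rust
// Redis-style LRANGE/LTRIM index normalization: negative indices count
// from the tail, and out-of-range indices are clamped rather than erroring.
fn clamp_range(len: i64, start: i64, stop: i64) -> Option<(usize, usize)> {
    let start_idx = if start < 0 { std::cmp::max(0, len + start) } else { std::cmp::min(start, len) };
    let stop_idx = if stop < 0 { std::cmp::max(-1, len + stop) } else { std::cmp::min(stop, len - 1) };
    if start_idx > stop_idx || start_idx >= len {
        None // empty selection
    } else {
        // Return a half-open [start, stop+1) slice range.
        Some((start_idx as usize, (stop_idx + 1) as usize))
    }
}

fn main() {
    // 0..-1 over a 3-element list selects the whole list.
    assert_eq!(clamp_range(3, 0, -1), Some((0, 3)));
    // A start past the end yields an empty result.
    assert_eq!(clamp_range(3, 5, 10), None);
    // Negative indices count from the end: -2..-1 is the last two elements.
    assert_eq!(clamp_range(3, -2, -1), Some((1, 3)));
}
```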
src/storage_sled/mod.rs (new file, 902 lines)
@@ -0,0 +1,902 @@
+// src/storage_sled/mod.rs
+use crate::crypto::CryptoFactory;
+use crate::error::DBError;
+use crate::storage_trait::StorageBackend;
+use serde::{Deserialize, Serialize};
+use std::collections::HashMap;
+use std::path::Path;
+use std::sync::Arc;
+use std::time::{SystemTime, UNIX_EPOCH};
+
+#[derive(Serialize, Deserialize, Debug, Clone)]
+enum ValueType {
+    String(String),
+    Hash(HashMap<String, String>),
+    List(Vec<String>),
+}
+
+#[derive(Serialize, Deserialize, Debug, Clone)]
+struct StorageValue {
+    value: ValueType,
+    expires_at: Option<u128>, // milliseconds since epoch
+}
+
+pub struct SledStorage {
+    db: sled::Db,
+    types: sled::Tree,
+    crypto: Option<CryptoFactory>,
+}
+
+impl SledStorage {
+    pub fn new(
+        path: impl AsRef<Path>,
+        should_encrypt: bool,
+        master_key: Option<&str>,
+    ) -> Result<Self, DBError> {
+        let db = sled::open(path).map_err(|e| DBError(format!("Failed to open sled: {}", e)))?;
+        let types = db
+            .open_tree("types")
+            .map_err(|e| DBError(format!("Failed to open types tree: {}", e)))?;
+
+        // Check if database was previously encrypted
+        let encrypted_tree = db
+            .open_tree("encrypted")
+            .map_err(|e| DBError(e.to_string()))?;
+        let was_encrypted = encrypted_tree
+            .get("encrypted")
+            .map_err(|e| DBError(e.to_string()))?
+            .map(|v| v[0] == 1)
+            .unwrap_or(false);
+
+        let crypto = if should_encrypt || was_encrypted {
+            if let Some(key) = master_key {
+                Some(CryptoFactory::new(key.as_bytes()))
+            } else {
+                return Err(DBError(
+                    "Encryption requested but no master key provided".to_string(),
+                ));
+            }
+        } else {
+            None
+        };
+
+        // Mark database as encrypted if enabling encryption
+        if should_encrypt && !was_encrypted {
+            encrypted_tree
+                .insert("encrypted", &[1u8])
+                .map_err(|e| DBError(e.to_string()))?;
+            encrypted_tree.flush().map_err(|e| DBError(e.to_string()))?;
+        }
+
+        Ok(SledStorage { db, types, crypto })
+    }
+
+    fn now_millis() -> u128 {
+        SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap()
+            .as_millis()
+    }
+
+    fn encrypt_if_needed(&self, data: &[u8]) -> Result<Vec<u8>, DBError> {
+        if let Some(crypto) = &self.crypto {
+            Ok(crypto.encrypt(data))
+        } else {
+            Ok(data.to_vec())
+        }
+    }
+
+    fn decrypt_if_needed(&self, data: &[u8]) -> Result<Vec<u8>, DBError> {
+        if let Some(crypto) = &self.crypto {
+            Ok(crypto.decrypt(data)?)
+        } else {
+            Ok(data.to_vec())
+        }
+    }
+
+    fn get_storage_value(&self, key: &str) -> Result<Option<StorageValue>, DBError> {
+        match self.db.get(key).map_err(|e| DBError(e.to_string()))? {
+            Some(encrypted_data) => {
+                let decrypted = self.decrypt_if_needed(&encrypted_data)?;
+                let storage_val: StorageValue = bincode::deserialize(&decrypted)
+                    .map_err(|e| DBError(format!("Deserialization error: {}", e)))?;
+
+                // Check expiration
+                if let Some(expires_at) = storage_val.expires_at {
+                    if Self::now_millis() > expires_at {
+                        // Expired, remove it
+                        self.db.remove(key).map_err(|e| DBError(e.to_string()))?;
+                        self.types.remove(key).map_err(|e| DBError(e.to_string()))?;
+                        return Ok(None);
+                    }
+                }
+
+                Ok(Some(storage_val))
+            }
+            None => Ok(None),
+        }
+    }
+
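`get_storage_value` above implements lazy expiration: a key with a past `expires_at` is deleted as a side effect of the read that discovers it, rather than by a background sweeper. A minimal in-memory sketch of the same policy (the `LazyStore` type is hypothetical, standing in for the sled trees):

```rust
use std::collections::HashMap;
use std::time::{SystemTime, UNIX_EPOCH};

fn now_millis() -> u128 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis()
}

// Each entry is (value, optional deadline in ms since epoch).
struct LazyStore {
    map: HashMap<String, (String, Option<u128>)>,
}

impl LazyStore {
    fn get(&mut self, key: &str) -> Option<String> {
        // Check the deadline first, then drop the entry on access,
        // mirroring the remove-on-read in get_storage_value.
        let expired = matches!(self.map.get(key), Some((_, Some(d))) if now_millis() > *d);
        if expired {
            self.map.remove(key);
            return None;
        }
        self.map.get(key).map(|(v, _)| v.clone())
    }
}

fn main() {
    let mut s = LazyStore { map: HashMap::new() };
    s.map.insert("a".into(), ("1".into(), None));    // no TTL
    s.map.insert("b".into(), ("2".into(), Some(0))); // already expired
    assert_eq!(s.get("a"), Some("1".to_string()));
    assert_eq!(s.get("b"), None);
    assert!(!s.map.contains_key("b")); // removed as a side effect of the read
}
```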
+    fn set_storage_value(&self, key: &str, storage_val: StorageValue) -> Result<(), DBError> {
+        let data = bincode::serialize(&storage_val)
+            .map_err(|e| DBError(format!("Serialization error: {}", e)))?;
+        let encrypted = self.encrypt_if_needed(&data)?;
+        self.db
+            .insert(key, encrypted)
+            .map_err(|e| DBError(e.to_string()))?;
+
+        // Store type info (unencrypted for efficiency)
+        let type_str = match &storage_val.value {
+            ValueType::String(_) => "string",
+            ValueType::Hash(_) => "hash",
+            ValueType::List(_) => "list",
+        };
+        self.types
+            .insert(key, type_str.as_bytes())
+            .map_err(|e| DBError(e.to_string()))?;
+
+        Ok(())
+    }
+
+    fn glob_match(pattern: &str, text: &str) -> bool {
+        if pattern == "*" {
+            return true;
+        }
+
+        let pattern_chars: Vec<char> = pattern.chars().collect();
+        let text_chars: Vec<char> = text.chars().collect();
+
+        fn match_recursive(pattern: &[char], text: &[char], pi: usize, ti: usize) -> bool {
+            if pi >= pattern.len() {
+                return ti >= text.len();
+            }
+
+            if ti >= text.len() {
+                return pattern[pi..].iter().all(|&c| c == '*');
+            }
+
+            match pattern[pi] {
+                '*' => {
+                    for i in ti..=text.len() {
+                        if match_recursive(pattern, text, pi + 1, i) {
+                            return true;
+                        }
+                    }
+                    false
+                }
+                '?' => match_recursive(pattern, text, pi + 1, ti + 1),
+                c => {
+                    if text[ti] == c {
+                        match_recursive(pattern, text, pi + 1, ti + 1)
+                    } else {
+                        false
+                    }
+                }
+            }
+        }
+
+        match_recursive(&pattern_chars, &text_chars, 0, 0)
+    }
+}
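`glob_match` above supports the two Redis-style wildcards: `*` for any run of characters and `?` for exactly one. A self-contained copy with a few checks of that behavior (same algorithm, condensed; a sketch, not the file's exact code):

```rust
// Recursive glob matcher: '*' matches any (possibly empty) run, '?' one char,
// anything else matches literally.
fn glob_match(pattern: &str, text: &str) -> bool {
    fn rec(p: &[char], t: &[char], pi: usize, ti: usize) -> bool {
        if pi >= p.len() {
            return ti >= t.len();
        }
        if ti >= t.len() {
            // Only trailing '*'s can match an exhausted text.
            return p[pi..].iter().all(|&c| c == '*');
        }
        match p[pi] {
            '*' => (ti..=t.len()).any(|i| rec(p, t, pi + 1, i)),
            '?' => rec(p, t, pi + 1, ti + 1),
            c => t[ti] == c && rec(p, t, pi + 1, ti + 1),
        }
    }
    if pattern == "*" {
        return true; // fast path, as in the file above
    }
    let p: Vec<char> = pattern.chars().collect();
    let t: Vec<char> = text.chars().collect();
    rec(&p, &t, 0, 0)
}

fn main() {
    assert!(glob_match("user:*", "user:42"));
    assert!(glob_match("h?llo", "hello"));
    assert!(!glob_match("h?llo", "heello"));
    assert!(glob_match("*", ""));
}
```

Note the `*` branch is worst-case exponential on adversarial patterns; that matches the file's implementation, which trades speed for simplicity.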
+
+impl StorageBackend for SledStorage {
+    fn get(&self, key: &str) -> Result<Option<String>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::String(s) => Ok(Some(s)),
+                _ => Ok(None),
+            },
+            None => Ok(None),
+        }
+    }
+
+    fn set(&self, key: String, value: String) -> Result<(), DBError> {
+        let storage_val = StorageValue {
+            value: ValueType::String(value),
+            expires_at: None,
+        };
+        self.set_storage_value(&key, storage_val)?;
+        self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        Ok(())
+    }
+
+    fn setx(&self, key: String, value: String, expire_ms: u128) -> Result<(), DBError> {
+        let storage_val = StorageValue {
+            value: ValueType::String(value),
+            expires_at: Some(Self::now_millis() + expire_ms),
+        };
+        self.set_storage_value(&key, storage_val)?;
+        self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        Ok(())
+    }
+
+    fn del(&self, key: String) -> Result<(), DBError> {
+        self.db.remove(&key).map_err(|e| DBError(e.to_string()))?;
+        self.types
+            .remove(&key)
+            .map_err(|e| DBError(e.to_string()))?;
+        self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        Ok(())
+    }
+
+    fn exists(&self, key: &str) -> Result<bool, DBError> {
+        // Check with expiration
+        Ok(self.get_storage_value(key)?.is_some())
+    }
+
+    fn keys(&self, pattern: &str) -> Result<Vec<String>, DBError> {
+        let mut keys = Vec::new();
+        for item in self.types.iter() {
+            let (key_bytes, _) = item.map_err(|e| DBError(e.to_string()))?;
+            let key = String::from_utf8_lossy(&key_bytes).to_string();
+
+            // Check if key is expired
+            if self.get_storage_value(&key)?.is_some() {
+                if Self::glob_match(pattern, &key) {
+                    keys.push(key);
+                }
+            }
+        }
+        Ok(keys)
+    }
+
+    fn scan(
+        &self,
+        cursor: u64,
+        pattern: Option<&str>,
+        count: Option<u64>,
+    ) -> Result<(u64, Vec<(String, String)>), DBError> {
+        let mut result = Vec::new();
+        let mut current_cursor = 0u64;
+        let limit = count.unwrap_or(10) as usize;
+
+        for item in self.types.iter() {
+            if current_cursor >= cursor {
+                let (key_bytes, type_bytes) = item.map_err(|e| DBError(e.to_string()))?;
+                let key = String::from_utf8_lossy(&key_bytes).to_string();
+
+                // Check pattern match
+                let matches = if let Some(pat) = pattern {
+                    Self::glob_match(pat, &key)
+                } else {
+                    true
+                };
+
+                if matches {
+                    // Check if key is expired and get value
+                    if let Some(storage_val) = self.get_storage_value(&key)? {
+                        let value = match storage_val.value {
+                            ValueType::String(s) => s,
+                            _ => String::from_utf8_lossy(&type_bytes).to_string(),
+                        };
+                        result.push((key, value));
+
+                        if result.len() >= limit {
+                            current_cursor += 1;
+                            break;
+                        }
+                    }
+                }
+            }
+            current_cursor += 1;
+        }
+
+        let next_cursor = if result.len() < limit {
+            0
+        } else {
+            current_cursor
+        };
+        Ok((next_cursor, result))
+    }
+
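The `scan` above treats the cursor as an absolute position in the tree's iteration order and returns cursor `0` when the scan is complete, Redis-SCAN style. A stand-in over a plain slice (the `scan_page` helper is hypothetical, mirroring the loop above without sled, expiry, or patterns):

```rust
// Cursor-based pagination: resume from `cursor`, collect up to `limit`
// items, and report the next cursor (0 means the scan finished).
fn scan_page(items: &[&str], cursor: u64, limit: usize) -> (u64, Vec<String>) {
    let mut out = Vec::new();
    let mut pos = 0u64;
    for item in items {
        if pos >= cursor {
            out.push(item.to_string());
            if out.len() >= limit {
                pos += 1; // cursor points just past the last returned item
                break;
            }
        }
        pos += 1;
    }
    let next = if out.len() < limit { 0 } else { pos };
    (next, out)
}

fn main() {
    let keys = ["a", "b", "c", "d", "e"];
    let (c1, p1) = scan_page(&keys, 0, 2);
    assert_eq!(p1, vec!["a", "b"]);
    let (c2, p2) = scan_page(&keys, c1, 2);
    assert_eq!(p2, vec!["c", "d"]);
    let (c3, p3) = scan_page(&keys, c2, 2);
    assert_eq!(p3, vec!["e"]);
    assert_eq!(c3, 0); // short final page signals completion
}
```

One caveat of this positional-cursor design, shared by the sled version: insertions or deletions between pages shift positions, so keys can be skipped or repeated across a concurrent modification.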
+    fn dbsize(&self) -> Result<i64, DBError> {
+        let mut count = 0i64;
+        for item in self.types.iter() {
+            let (key_bytes, _) = item.map_err(|e| DBError(e.to_string()))?;
+            let key = String::from_utf8_lossy(&key_bytes).to_string();
+            if self.get_storage_value(&key)?.is_some() {
+                count += 1;
+            }
+        }
+        Ok(count)
+    }
+
+    fn flushdb(&self) -> Result<(), DBError> {
+        self.db.clear().map_err(|e| DBError(e.to_string()))?;
+        self.types.clear().map_err(|e| DBError(e.to_string()))?;
+        self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        Ok(())
+    }
+
+    fn get_key_type(&self, key: &str) -> Result<Option<String>, DBError> {
+        // First check if key exists (handles expiration)
+        if self.get_storage_value(key)?.is_some() {
+            match self.types.get(key).map_err(|e| DBError(e.to_string()))? {
+                Some(data) => Ok(Some(String::from_utf8_lossy(&data).to_string())),
+                None => Ok(None),
+            }
+        } else {
+            Ok(None)
+        }
+    }
+
+    // Hash operations
+    fn hset(&self, key: &str, pairs: Vec<(String, String)>) -> Result<i64, DBError> {
+        let mut storage_val = self.get_storage_value(key)?.unwrap_or(StorageValue {
+            value: ValueType::Hash(HashMap::new()),
+            expires_at: None,
+        });
+
+        let hash = match &mut storage_val.value {
+            ValueType::Hash(h) => h,
+            _ => {
+                return Err(DBError(
+                    "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+                ))
+            }
+        };
+
+        let mut new_fields = 0i64;
+        for (field, value) in pairs {
+            if !hash.contains_key(&field) {
+                new_fields += 1;
+            }
+            hash.insert(field, value);
+        }
+
+        self.set_storage_value(key, storage_val)?;
+        self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        Ok(new_fields)
+    }
+
+    fn hget(&self, key: &str, field: &str) -> Result<Option<String>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => Ok(h.get(field).cloned()),
+                _ => Ok(None),
+            },
+            None => Ok(None),
+        }
+    }
+
+    fn hgetall(&self, key: &str) -> Result<Vec<(String, String)>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => Ok(h.into_iter().collect()),
+                _ => Ok(Vec::new()),
+            },
+            None => Ok(Vec::new()),
+        }
+    }
+
+    fn hscan(
+        &self,
+        key: &str,
+        cursor: u64,
+        pattern: Option<&str>,
+        count: Option<u64>,
+    ) -> Result<(u64, Vec<(String, String)>), DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => {
+                    let mut result = Vec::new();
+                    let mut current_cursor = 0u64;
+                    let limit = count.unwrap_or(10) as usize;
+
+                    for (field, value) in h.iter() {
+                        if current_cursor >= cursor {
+                            let matches = if let Some(pat) = pattern {
+                                Self::glob_match(pat, field)
+                            } else {
+                                true
+                            };
+
+                            if matches {
+                                result.push((field.clone(), value.clone()));
+                                if result.len() >= limit {
+                                    current_cursor += 1;
+                                    break;
+                                }
+                            }
+                        }
+                        current_cursor += 1;
+                    }
+
+                    let next_cursor = if result.len() < limit {
+                        0
+                    } else {
+                        current_cursor
+                    };
+                    Ok((next_cursor, result))
+                }
+                _ => Err(DBError(
+                    "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+                )),
+            },
+            None => Ok((0, Vec::new())),
+        }
+    }
+
+    fn hdel(&self, key: &str, fields: Vec<String>) -> Result<i64, DBError> {
+        let mut storage_val = match self.get_storage_value(key)? {
+            Some(sv) => sv,
+            None => return Ok(0),
+        };
+
+        let hash = match &mut storage_val.value {
+            ValueType::Hash(h) => h,
+            _ => return Ok(0),
+        };
+
+        let mut deleted = 0i64;
+        for field in fields {
+            if hash.remove(&field).is_some() {
+                deleted += 1;
+            }
+        }
+
+        if hash.is_empty() {
+            self.del(key.to_string())?;
+        } else {
+            self.set_storage_value(key, storage_val)?;
+            self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        }
+
+        Ok(deleted)
+    }
+
+    fn hexists(&self, key: &str, field: &str) -> Result<bool, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => Ok(h.contains_key(field)),
+                _ => Ok(false),
+            },
+            None => Ok(false),
+        }
+    }
+
+    fn hkeys(&self, key: &str) -> Result<Vec<String>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => Ok(h.keys().cloned().collect()),
+                _ => Ok(Vec::new()),
+            },
+            None => Ok(Vec::new()),
+        }
+    }
+
+    fn hvals(&self, key: &str) -> Result<Vec<String>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => Ok(h.values().cloned().collect()),
+                _ => Ok(Vec::new()),
+            },
+            None => Ok(Vec::new()),
+        }
+    }
+
+    fn hlen(&self, key: &str) -> Result<i64, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => Ok(h.len() as i64),
+                _ => Ok(0),
+            },
+            None => Ok(0),
+        }
+    }
+
+    fn hmget(&self, key: &str, fields: Vec<String>) -> Result<Vec<Option<String>>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::Hash(h) => Ok(fields.into_iter().map(|f| h.get(&f).cloned()).collect()),
+                _ => Ok(fields.into_iter().map(|_| None).collect()),
+            },
+            None => Ok(fields.into_iter().map(|_| None).collect()),
+        }
+    }
+
+    fn hsetnx(&self, key: &str, field: &str, value: &str) -> Result<bool, DBError> {
+        let mut storage_val = self.get_storage_value(key)?.unwrap_or(StorageValue {
+            value: ValueType::Hash(HashMap::new()),
+            expires_at: None,
+        });
+
+        let hash = match &mut storage_val.value {
+            ValueType::Hash(h) => h,
+            _ => {
+                return Err(DBError(
+                    "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+                ))
+            }
+        };
+
+        if hash.contains_key(field) {
+            Ok(false)
+        } else {
+            hash.insert(field.to_string(), value.to_string());
+            self.set_storage_value(key, storage_val)?;
+            self.db.flush().map_err(|e| DBError(e.to_string()))?;
+            Ok(true)
+        }
+    }
+
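`hsetnx` above writes the field only when it is absent and reports whether a write happened, matching Redis HSETNX (reply 1 on write, 0 on no-op). The core semantics over a bare map (the `hsetnx_sketch` helper is hypothetical, with persistence and type checks stripped):

```rust
use std::collections::HashMap;

// Set-if-absent: insert only when the field is missing and report
// whether a write happened.
fn hsetnx_sketch(hash: &mut HashMap<String, String>, field: &str, value: &str) -> bool {
    if hash.contains_key(field) {
        false
    } else {
        hash.insert(field.to_string(), value.to_string());
        true
    }
}

fn main() {
    let mut h = HashMap::new();
    assert!(hsetnx_sketch(&mut h, "f", "v1"));  // first write succeeds
    assert!(!hsetnx_sketch(&mut h, "f", "v2")); // second is a no-op
    assert_eq!(h.get("f").map(String::as_str), Some("v1")); // original value kept
}
```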
+    // List operations
+    fn lpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError> {
+        let mut storage_val = self.get_storage_value(key)?.unwrap_or(StorageValue {
+            value: ValueType::List(Vec::new()),
+            expires_at: None,
+        });
+
+        let list = match &mut storage_val.value {
+            ValueType::List(l) => l,
+            _ => {
+                return Err(DBError(
+                    "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+                ))
+            }
+        };
+
+        for element in elements.into_iter().rev() {
+            list.insert(0, element);
+        }
+
+        let len = list.len() as i64;
+        self.set_storage_value(key, storage_val)?;
+        self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        Ok(len)
+    }
+
+    fn rpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError> {
+        let mut storage_val = self.get_storage_value(key)?.unwrap_or(StorageValue {
+            value: ValueType::List(Vec::new()),
+            expires_at: None,
+        });
+
+        let list = match &mut storage_val.value {
+            ValueType::List(l) => l,
+            _ => {
+                return Err(DBError(
+                    "WRONGTYPE Operation against a key holding the wrong kind of value".to_string(),
+                ))
+            }
+        };
+
+        list.extend(elements);
+        let len = list.len() as i64;
+        self.set_storage_value(key, storage_val)?;
+        self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        Ok(len)
+    }
+
+    fn lpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError> {
+        let mut storage_val = match self.get_storage_value(key)? {
+            Some(sv) => sv,
+            None => return Ok(Vec::new()),
+        };
+
+        let list = match &mut storage_val.value {
+            ValueType::List(l) => l,
+            _ => return Ok(Vec::new()),
+        };
+
+        let mut result = Vec::new();
+        for _ in 0..count.min(list.len() as u64) {
+            if let Some(elem) = list.first() {
+                result.push(elem.clone());
+                list.remove(0);
+            }
+        }
+
+        if list.is_empty() {
+            self.del(key.to_string())?;
+        } else {
+            self.set_storage_value(key, storage_val)?;
+            self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        }
+
+        Ok(result)
+    }
+
+    fn rpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError> {
+        let mut storage_val = match self.get_storage_value(key)? {
+            Some(sv) => sv,
+            None => return Ok(Vec::new()),
+        };
+
+        let list = match &mut storage_val.value {
+            ValueType::List(l) => l,
+            _ => return Ok(Vec::new()),
+        };
+
+        let mut result = Vec::new();
+        for _ in 0..count.min(list.len() as u64) {
+            if let Some(elem) = list.pop() {
+                result.push(elem);
+            }
+        }
+
+        if list.is_empty() {
+            self.del(key.to_string())?;
+        } else {
+            self.set_storage_value(key, storage_val)?;
+            self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        }
+
+        Ok(result)
+    }
+
+    fn llen(&self, key: &str) -> Result<i64, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::List(l) => Ok(l.len() as i64),
+                _ => Ok(0),
+            },
+            None => Ok(0),
+        }
+    }
+
+    fn lindex(&self, key: &str, index: i64) -> Result<Option<String>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::List(list) => {
+                    let actual_index = if index < 0 {
+                        list.len() as i64 + index
+                    } else {
+                        index
+                    };
+
+                    if actual_index >= 0 && (actual_index as usize) < list.len() {
+                        Ok(Some(list[actual_index as usize].clone()))
+                    } else {
+                        Ok(None)
+                    }
+                }
+                _ => Ok(None),
+            },
+            None => Ok(None),
+        }
+    }
+
+    fn lrange(&self, key: &str, start: i64, stop: i64) -> Result<Vec<String>, DBError> {
+        match self.get_storage_value(key)? {
+            Some(storage_val) => match storage_val.value {
+                ValueType::List(list) => {
+                    if list.is_empty() {
+                        return Ok(Vec::new());
+                    }
+
+                    let len = list.len() as i64;
+                    let start_idx = if start < 0 {
+                        std::cmp::max(0, len + start)
+                    } else {
+                        std::cmp::min(start, len)
+                    };
+                    let stop_idx = if stop < 0 {
+                        std::cmp::max(-1, len + stop)
+                    } else {
+                        std::cmp::min(stop, len - 1)
+                    };
+
+                    if start_idx > stop_idx || start_idx >= len {
+                        return Ok(Vec::new());
+                    }
+
+                    let start_usize = start_idx as usize;
+                    let stop_usize = (stop_idx + 1) as usize;
+
+                    Ok(list[start_usize..std::cmp::min(stop_usize, list.len())].to_vec())
+                }
+                _ => Ok(Vec::new()),
+            },
+            None => Ok(Vec::new()),
+        }
+    }
+
+    fn ltrim(&self, key: &str, start: i64, stop: i64) -> Result<(), DBError> {
+        let mut storage_val = match self.get_storage_value(key)? {
+            Some(sv) => sv,
+            None => return Ok(()),
+        };
+
+        let list = match &mut storage_val.value {
+            ValueType::List(l) => l,
+            _ => return Ok(()),
+        };
+
+        if list.is_empty() {
+            return Ok(());
+        }
+
+        let len = list.len() as i64;
+        let start_idx = if start < 0 {
+            std::cmp::max(0, len + start)
+        } else {
+            std::cmp::min(start, len)
+        };
+        let stop_idx = if stop < 0 {
+            std::cmp::max(-1, len + stop)
+        } else {
+            std::cmp::min(stop, len - 1)
+        };
+
+        if start_idx > stop_idx || start_idx >= len {
+            self.del(key.to_string())?;
+        } else {
+            let start_usize = start_idx as usize;
+            let stop_usize = (stop_idx + 1) as usize;
+            *list = list[start_usize..std::cmp::min(stop_usize, list.len())].to_vec();
+
+            if list.is_empty() {
+                self.del(key.to_string())?;
+            } else {
+                self.set_storage_value(key, storage_val)?;
+                self.db.flush().map_err(|e| DBError(e.to_string()))?;
+            }
+        }
+
+        Ok(())
+    }
+
+    fn lrem(&self, key: &str, count: i64, element: &str) -> Result<i64, DBError> {
+        let mut storage_val = match self.get_storage_value(key)? {
+            Some(sv) => sv,
+            None => return Ok(0),
+        };
+
+        let list = match &mut storage_val.value {
+            ValueType::List(l) => l,
+            _ => return Ok(0),
+        };
+
+        let mut removed = 0i64;
+
+        if count == 0 {
+            // Remove all occurrences
+            let original_len = list.len();
+            list.retain(|x| x != element);
+            removed = (original_len - list.len()) as i64;
+        } else if count > 0 {
+            // Remove first count occurrences
+            let mut to_remove = count as usize;
+            list.retain(|x| {
+                if x == element && to_remove > 0 {
+                    to_remove -= 1;
+                    removed += 1;
+                    false
+                } else {
+                    true
+                }
+            });
+        } else {
+            // Remove last |count| occurrences
+            let mut to_remove = (-count) as usize;
+            for i in (0..list.len()).rev() {
+                if list[i] == element && to_remove > 0 {
+                    list.remove(i);
+                    to_remove -= 1;
+                    removed += 1;
+                }
+            }
+        }
+
+        if list.is_empty() {
+            self.del(key.to_string())?;
+        } else {
+            self.set_storage_value(key, storage_val)?;
+            self.db.flush().map_err(|e| DBError(e.to_string()))?;
+        }
+
+        Ok(removed)
|
||||||
|
}
|
||||||
|
|
||||||
|
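The `lrange`, `ltrim`, and `lrem` implementations above all share the same Redis-style index normalization: negative offsets count from the tail (`-1` is the last element), and both ends are clamped to the list bounds before slicing. A self-contained sketch of that arithmetic, assuming a hypothetical `normalize` helper (the real code inlines this logic in each method):

```rust
// Redis-style range normalization: negative indices count from the tail.
// `normalize` is an illustrative helper name, not part of HeroDB itself.
fn normalize(start: i64, stop: i64, len: i64) -> Option<(usize, usize)> {
    let start_idx = if start < 0 {
        std::cmp::max(0, len + start)
    } else {
        std::cmp::min(start, len)
    };
    let stop_idx = if stop < 0 {
        std::cmp::max(-1, len + stop)
    } else {
        std::cmp::min(stop, len - 1)
    };
    if start_idx > stop_idx || start_idx >= len {
        return None; // Inverted or out-of-range => empty result.
    }
    // Return half-open [start, stop + 1) bounds ready for slicing.
    Some((start_idx as usize, (stop_idx + 1) as usize))
}

fn main() {
    let list = vec!["a", "b", "c", "d", "e"];
    // LRANGE key -2 -1 selects the last two elements.
    let (s, e) = normalize(-2, -1, list.len() as i64).unwrap();
    assert_eq!(&list[s..e], &["d", "e"]);
    // An inverted range selects nothing (ltrim would delete the key).
    assert!(normalize(3, 1, list.len() as i64).is_none());
    println!("ok");
}
```

Note that `ltrim` maps the "empty range" case to deleting the key entirely, which matches Redis semantics.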
    // Expiration
    fn ttl(&self, key: &str) -> Result<i64, DBError> {
        match self.get_storage_value(key)? {
            Some(storage_val) => {
                if let Some(expires_at) = storage_val.expires_at {
                    let now = Self::now_millis();
                    if now >= expires_at {
                        Ok(-2) // Key has expired
                    } else {
                        Ok(((expires_at - now) / 1000) as i64) // TTL in seconds
                    }
                } else {
                    Ok(-1) // Key exists but has no expiration
                }
            }
            None => Ok(-2), // Key does not exist
        }
    }

    fn expire_seconds(&self, key: &str, secs: u64) -> Result<bool, DBError> {
        let mut storage_val = match self.get_storage_value(key)? {
            Some(sv) => sv,
            None => return Ok(false),
        };

        storage_val.expires_at = Some(Self::now_millis() + (secs as u128) * 1000);
        self.set_storage_value(key, storage_val)?;
        self.db.flush().map_err(|e| DBError(e.to_string()))?;
        Ok(true)
    }

    fn pexpire_millis(&self, key: &str, ms: u128) -> Result<bool, DBError> {
        let mut storage_val = match self.get_storage_value(key)? {
            Some(sv) => sv,
            None => return Ok(false),
        };

        storage_val.expires_at = Some(Self::now_millis() + ms);
        self.set_storage_value(key, storage_val)?;
        self.db.flush().map_err(|e| DBError(e.to_string()))?;
        Ok(true)
    }

    fn persist(&self, key: &str) -> Result<bool, DBError> {
        let mut storage_val = match self.get_storage_value(key)? {
            Some(sv) => sv,
            None => return Ok(false),
        };

        if storage_val.expires_at.is_some() {
            storage_val.expires_at = None;
            self.set_storage_value(key, storage_val)?;
            self.db.flush().map_err(|e| DBError(e.to_string()))?;
            Ok(true)
        } else {
            Ok(false)
        }
    }

    fn expire_at_seconds(&self, key: &str, ts_secs: i64) -> Result<bool, DBError> {
        let mut storage_val = match self.get_storage_value(key)? {
            Some(sv) => sv,
            None => return Ok(false),
        };

        let expires_at_ms: u128 = if ts_secs <= 0 {
            0
        } else {
            (ts_secs as u128) * 1000
        };
        storage_val.expires_at = Some(expires_at_ms);
        self.set_storage_value(key, storage_val)?;
        self.db.flush().map_err(|e| DBError(e.to_string()))?;
        Ok(true)
    }

    fn pexpire_at_millis(&self, key: &str, ts_ms: i64) -> Result<bool, DBError> {
        let mut storage_val = match self.get_storage_value(key)? {
            Some(sv) => sv,
            None => return Ok(false),
        };

        let expires_at_ms: u128 = if ts_ms <= 0 { 0 } else { ts_ms as u128 };
        storage_val.expires_at = Some(expires_at_ms);
        self.set_storage_value(key, storage_val)?;
        self.db.flush().map_err(|e| DBError(e.to_string()))?;
        Ok(true)
    }

    fn is_encrypted(&self) -> bool {
        self.crypto.is_some()
    }

    fn info(&self) -> Result<Vec<(String, String)>, DBError> {
        let dbsize = self.dbsize()?;
        Ok(vec![
            ("db_size".to_string(), dbsize.to_string()),
            ("is_encrypted".to_string(), self.is_encrypted().to_string()),
        ])
    }

    fn clone_arc(&self) -> Arc<dyn StorageBackend> {
        // Note: This is a simplified clone - in production you might want to
        // handle this differently as sled::Db is already Arc internally
        Arc::new(SledStorage {
            db: self.db.clone(),
            types: self.types.clone(),
            crypto: self.crypto.clone(),
        })
    }
}
69
src/storage_trait.rs
Normal file
@@ -0,0 +1,69 @@
// src/storage_trait.rs
use crate::error::DBError;
use std::sync::Arc;

pub trait StorageBackend: Send + Sync {
    // Basic key operations
    fn get(&self, key: &str) -> Result<Option<String>, DBError>;
    fn set(&self, key: String, value: String) -> Result<(), DBError>;
    fn setx(&self, key: String, value: String, expire_ms: u128) -> Result<(), DBError>;
    fn del(&self, key: String) -> Result<(), DBError>;
    fn exists(&self, key: &str) -> Result<bool, DBError>;
    fn keys(&self, pattern: &str) -> Result<Vec<String>, DBError>;
    fn dbsize(&self) -> Result<i64, DBError>;
    fn flushdb(&self) -> Result<(), DBError>;
    fn get_key_type(&self, key: &str) -> Result<Option<String>, DBError>;

    // Scanning
    fn scan(
        &self,
        cursor: u64,
        pattern: Option<&str>,
        count: Option<u64>,
    ) -> Result<(u64, Vec<(String, String)>), DBError>;
    fn hscan(
        &self,
        key: &str,
        cursor: u64,
        pattern: Option<&str>,
        count: Option<u64>,
    ) -> Result<(u64, Vec<(String, String)>), DBError>;

    // Hash operations
    fn hset(&self, key: &str, pairs: Vec<(String, String)>) -> Result<i64, DBError>;
    fn hget(&self, key: &str, field: &str) -> Result<Option<String>, DBError>;
    fn hgetall(&self, key: &str) -> Result<Vec<(String, String)>, DBError>;
    fn hdel(&self, key: &str, fields: Vec<String>) -> Result<i64, DBError>;
    fn hexists(&self, key: &str, field: &str) -> Result<bool, DBError>;
    fn hkeys(&self, key: &str) -> Result<Vec<String>, DBError>;
    fn hvals(&self, key: &str) -> Result<Vec<String>, DBError>;
    fn hlen(&self, key: &str) -> Result<i64, DBError>;
    fn hmget(&self, key: &str, fields: Vec<String>) -> Result<Vec<Option<String>>, DBError>;
    fn hsetnx(&self, key: &str, field: &str, value: &str) -> Result<bool, DBError>;

    // List operations
    fn lpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError>;
    fn rpush(&self, key: &str, elements: Vec<String>) -> Result<i64, DBError>;
    fn lpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError>;
    fn rpop(&self, key: &str, count: u64) -> Result<Vec<String>, DBError>;
    fn llen(&self, key: &str) -> Result<i64, DBError>;
    fn lindex(&self, key: &str, index: i64) -> Result<Option<String>, DBError>;
    fn lrange(&self, key: &str, start: i64, stop: i64) -> Result<Vec<String>, DBError>;
    fn ltrim(&self, key: &str, start: i64, stop: i64) -> Result<(), DBError>;
    fn lrem(&self, key: &str, count: i64, element: &str) -> Result<i64, DBError>;

    // Expiration
    fn ttl(&self, key: &str) -> Result<i64, DBError>;
    fn expire_seconds(&self, key: &str, secs: u64) -> Result<bool, DBError>;
    fn pexpire_millis(&self, key: &str, ms: u128) -> Result<bool, DBError>;
    fn persist(&self, key: &str) -> Result<bool, DBError>;
    fn expire_at_seconds(&self, key: &str, ts_secs: i64) -> Result<bool, DBError>;
    fn pexpire_at_millis(&self, key: &str, ts_ms: i64) -> Result<bool, DBError>;

    // Metadata
    fn is_encrypted(&self) -> bool;
    fn info(&self) -> Result<Vec<(String, String)>, DBError>;

    // Clone to Arc for sharing
    fn clone_arc(&self) -> Arc<dyn StorageBackend>;
}
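Because callers consume every backend as `Arc<dyn StorageBackend>`, the trait above is straightforward to stub out in tests. A minimal sketch with a hypothetical in-memory backend; the trait here is a reduced copy of a few methods so the example compiles standalone, not HeroDB's full trait:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-in for the crate's error type.
#[derive(Debug)]
struct DBError(String);

// Reduced copy of StorageBackend for illustration only.
trait StorageBackend: Send + Sync {
    fn get(&self, key: &str) -> Result<Option<String>, DBError>;
    fn set(&self, key: String, value: String) -> Result<(), DBError>;
    fn del(&self, key: String) -> Result<(), DBError>;
    fn exists(&self, key: &str) -> Result<bool, DBError>;
}

// Hypothetical in-memory test double: a Mutex-guarded HashMap.
struct MemStorage {
    map: Mutex<HashMap<String, String>>,
}

impl StorageBackend for MemStorage {
    fn get(&self, key: &str) -> Result<Option<String>, DBError> {
        Ok(self.map.lock().unwrap().get(key).cloned())
    }
    fn set(&self, key: String, value: String) -> Result<(), DBError> {
        self.map.lock().unwrap().insert(key, value);
        Ok(())
    }
    fn del(&self, key: String) -> Result<(), DBError> {
        self.map.lock().unwrap().remove(&key);
        Ok(())
    }
    fn exists(&self, key: &str) -> Result<bool, DBError> {
        Ok(self.map.lock().unwrap().contains_key(key))
    }
}

fn main() {
    // Code written against Arc<dyn StorageBackend> works unchanged.
    let backend: Arc<dyn StorageBackend> =
        Arc::new(MemStorage { map: Mutex::new(HashMap::new()) });
    backend.set("k".into(), "v".into()).unwrap();
    assert_eq!(backend.get("k").unwrap(), Some("v".to_string()));
    backend.del("k".into()).unwrap();
    assert!(!backend.exists("k").unwrap());
    println!("ok");
}
```

This is the design payoff of the trait split: sled, redb, or a test double can be swapped behind the same object-safe interface.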
657
src/tantivy_search.rs
Normal file
@@ -0,0 +1,657 @@
use crate::error::DBError;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::{Arc, RwLock};
use tantivy::{
    collector::TopDocs,
    directory::MmapDirectory,
    query::{BooleanQuery, Occur, Query, QueryParser, TermQuery},
    schema::{Field, Schema, TextFieldIndexing, TextOptions, Value, STORED, STRING},
    tokenizer::TokenizerManager,
    DateTime, Index, IndexReader, IndexWriter, ReloadPolicy, TantivyDocument, Term,
};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum FieldDef {
    Text {
        stored: bool,
        indexed: bool,
        tokenized: bool,
        fast: bool,
    },
    Numeric {
        stored: bool,
        indexed: bool,
        fast: bool,
        precision: NumericType,
    },
    Tag {
        stored: bool,
        separator: String,
        case_sensitive: bool,
    },
    Geo {
        stored: bool,
    },
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum NumericType {
    I64,
    U64,
    F64,
    Date,
}

pub struct IndexSchema {
    schema: Schema,
    fields: HashMap<String, (Field, FieldDef)>,
    default_search_fields: Vec<Field>,
}

pub struct TantivySearch {
    index: Index,
    writer: Arc<RwLock<IndexWriter>>,
    reader: IndexReader,
    index_schema: IndexSchema,
    name: String,
    config: IndexConfig,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct IndexConfig {
    pub language: String,
    pub stopwords: Vec<String>,
    pub stemming: bool,
    pub max_doc_count: Option<usize>,
    pub default_score: f64,
}

impl Default for IndexConfig {
    fn default() -> Self {
        IndexConfig {
            language: "english".to_string(),
            stopwords: vec![],
            stemming: true,
            max_doc_count: None,
            default_score: 1.0,
        }
    }
}

impl TantivySearch {
    pub fn new_with_schema(
        base_path: PathBuf,
        name: String,
        field_definitions: Vec<(String, FieldDef)>,
        config: Option<IndexConfig>,
    ) -> Result<Self, DBError> {
        let index_path = base_path.join(&name);
        std::fs::create_dir_all(&index_path)
            .map_err(|e| DBError(format!("Failed to create index dir: {}", e)))?;

        // Build schema from field definitions
        let mut schema_builder = Schema::builder();
        let mut fields = HashMap::new();
        let mut default_search_fields = Vec::new();

        // Always add a document ID field
        let id_field = schema_builder.add_text_field("_id", STRING | STORED);
        fields.insert(
            "_id".to_string(),
            (
                id_field,
                FieldDef::Text {
                    stored: true,
                    indexed: true,
                    tokenized: false,
                    fast: false,
                },
            ),
        );

        // Add user-defined fields
        for (field_name, field_def) in field_definitions {
            let field = match &field_def {
                FieldDef::Text {
                    stored,
                    indexed,
                    tokenized,
                    fast: _fast,
                } => {
                    let mut text_options = TextOptions::default();

                    if *stored {
                        text_options = text_options.set_stored();
                    }

                    if *indexed {
                        let indexing_options = if *tokenized {
                            TextFieldIndexing::default()
                                .set_tokenizer("default")
                                .set_index_option(
                                    tantivy::schema::IndexRecordOption::WithFreqsAndPositions,
                                )
                        } else {
                            TextFieldIndexing::default()
                                .set_tokenizer("raw")
                                .set_index_option(tantivy::schema::IndexRecordOption::Basic)
                        };
                        text_options = text_options.set_indexing_options(indexing_options);

                        let f = schema_builder.add_text_field(&field_name, text_options);
                        if *tokenized {
                            default_search_fields.push(f);
                        }
                        f
                    } else {
                        schema_builder.add_text_field(&field_name, text_options)
                    }
                }
                FieldDef::Numeric {
                    stored,
                    indexed,
                    fast,
                    precision,
                } => match precision {
                    NumericType::I64 => {
                        let mut opts = tantivy::schema::NumericOptions::default();
                        if *stored {
                            opts = opts.set_stored();
                        }
                        if *indexed {
                            opts = opts.set_indexed();
                        }
                        if *fast {
                            opts = opts.set_fast();
                        }
                        schema_builder.add_i64_field(&field_name, opts)
                    }
                    NumericType::U64 => {
                        let mut opts = tantivy::schema::NumericOptions::default();
                        if *stored {
                            opts = opts.set_stored();
                        }
                        if *indexed {
                            opts = opts.set_indexed();
                        }
                        if *fast {
                            opts = opts.set_fast();
                        }
                        schema_builder.add_u64_field(&field_name, opts)
                    }
                    NumericType::F64 => {
                        let mut opts = tantivy::schema::NumericOptions::default();
                        if *stored {
                            opts = opts.set_stored();
                        }
                        if *indexed {
                            opts = opts.set_indexed();
                        }
                        if *fast {
                            opts = opts.set_fast();
                        }
                        schema_builder.add_f64_field(&field_name, opts)
                    }
                    NumericType::Date => {
                        let mut opts = tantivy::schema::DateOptions::default();
                        if *stored {
                            opts = opts.set_stored();
                        }
                        if *indexed {
                            opts = opts.set_indexed();
                        }
                        if *fast {
                            opts = opts.set_fast();
                        }
                        schema_builder.add_date_field(&field_name, opts)
                    }
                },
                FieldDef::Tag {
                    stored,
                    separator: _,
                    case_sensitive: _,
                } => {
                    let mut text_options = TextOptions::default();
                    if *stored {
                        text_options = text_options.set_stored();
                    }
                    text_options = text_options.set_indexing_options(
                        TextFieldIndexing::default()
                            .set_tokenizer("raw")
                            .set_index_option(tantivy::schema::IndexRecordOption::Basic),
                    );
                    schema_builder.add_text_field(&field_name, text_options)
                }
                FieldDef::Geo { stored } => {
                    // For now, store as two f64 fields for lat/lon
                    let mut opts = tantivy::schema::NumericOptions::default();
                    if *stored {
                        opts = opts.set_stored();
                    }
                    opts = opts.set_indexed().set_fast();

                    let lat_field =
                        schema_builder.add_f64_field(&format!("{}_lat", field_name), opts.clone());
                    let lon_field =
                        schema_builder.add_f64_field(&format!("{}_lon", field_name), opts);

                    fields.insert(
                        format!("{}_lat", field_name),
                        (
                            lat_field,
                            FieldDef::Numeric {
                                stored: *stored,
                                indexed: true,
                                fast: true,
                                precision: NumericType::F64,
                            },
                        ),
                    );
                    fields.insert(
                        format!("{}_lon", field_name),
                        (
                            lon_field,
                            FieldDef::Numeric {
                                stored: *stored,
                                indexed: true,
                                fast: true,
                                precision: NumericType::F64,
                            },
                        ),
                    );
                    continue; // Skip adding the geo field itself
                }
            };

            fields.insert(field_name.clone(), (field, field_def));
        }

        let schema = schema_builder.build();
        let index_schema = IndexSchema {
            schema: schema.clone(),
            fields,
            default_search_fields,
        };

        // Create or open index
        let dir = MmapDirectory::open(&index_path)
            .map_err(|e| DBError(format!("Failed to open index directory: {}", e)))?;

        let mut index = Index::open_or_create(dir, schema)
            .map_err(|e| DBError(format!("Failed to create index: {}", e)))?;

        // Configure tokenizers
        let tokenizer_manager = TokenizerManager::default();
        index.set_tokenizers(tokenizer_manager);

        let writer = index
            .writer(15_000_000)
            .map_err(|e| DBError(format!("Failed to create index writer: {}", e)))?;

        let reader = index
            .reader_builder()
            .reload_policy(ReloadPolicy::OnCommitWithDelay)
            .try_into()
            .map_err(|e| DBError(format!("Failed to create reader: {}", e)))?;

        let config = config.unwrap_or_default();

        Ok(TantivySearch {
            index,
            writer: Arc::new(RwLock::new(writer)),
            reader,
            index_schema,
            name,
            config,
        })
    }

    pub fn add_document_with_fields(
        &self,
        doc_id: &str,
        fields: HashMap<String, String>,
    ) -> Result<(), DBError> {
        let mut writer = self
            .writer
            .write()
            .map_err(|e| DBError(format!("Failed to acquire writer lock: {}", e)))?;

        // Delete existing document with same ID
        if let Some((id_field, _)) = self.index_schema.fields.get("_id") {
            writer.delete_term(Term::from_field_text(*id_field, doc_id));
        }

        // Create new document
        let mut doc = tantivy::doc!();

        // Add document ID
        if let Some((id_field, _)) = self.index_schema.fields.get("_id") {
            doc.add_text(*id_field, doc_id);
        }

        // Add other fields based on schema
        for (field_name, field_value) in fields {
            if let Some((field, field_def)) = self.index_schema.fields.get(&field_name) {
                match field_def {
                    FieldDef::Text { .. } => {
                        doc.add_text(*field, &field_value);
                    }
                    FieldDef::Numeric { precision, .. } => match precision {
                        NumericType::I64 => {
                            if let Ok(v) = field_value.parse::<i64>() {
                                doc.add_i64(*field, v);
                            }
                        }
                        NumericType::U64 => {
                            if let Ok(v) = field_value.parse::<u64>() {
                                doc.add_u64(*field, v);
                            }
                        }
                        NumericType::F64 => {
                            if let Ok(v) = field_value.parse::<f64>() {
                                doc.add_f64(*field, v);
                            }
                        }
                        NumericType::Date => {
                            if let Ok(v) = field_value.parse::<i64>() {
                                doc.add_date(*field, DateTime::from_timestamp_millis(v));
                            }
                        }
                    },
                    FieldDef::Tag {
                        separator,
                        case_sensitive,
                        ..
                    } => {
                        let tags = if !case_sensitive {
                            field_value.to_lowercase()
                        } else {
                            field_value.clone()
                        };

                        // Store tags as separate terms for efficient filtering
                        for tag in tags.split(separator.as_str()) {
                            doc.add_text(*field, tag.trim());
                        }
                    }
                    FieldDef::Geo { .. } => {
                        // Parse "lat,lon" format
                        let parts: Vec<&str> = field_value.split(',').collect();
                        if parts.len() == 2 {
                            if let (Ok(lat), Ok(lon)) =
                                (parts[0].parse::<f64>(), parts[1].parse::<f64>())
                            {
                                if let Some((lat_field, _)) =
                                    self.index_schema.fields.get(&format!("{}_lat", field_name))
                                {
                                    doc.add_f64(*lat_field, lat);
                                }
                                if let Some((lon_field, _)) =
                                    self.index_schema.fields.get(&format!("{}_lon", field_name))
                                {
                                    doc.add_f64(*lon_field, lon);
                                }
                            }
                        }
                    }
                }
            }
        }

        writer
            .add_document(doc)
            .map_err(|e| DBError(format!("Failed to add document: {}", e)))?;

        writer
            .commit()
            .map_err(|e| DBError(format!("Failed to commit: {}", e)))?;

        Ok(())
    }
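`add_document_with_fields` above accepts Geo values as a "lat,lon" string and silently skips input that does not parse, indexing the two coordinates as separate f64 fields. That parsing step can be sketched standalone; the `parse_geo` helper name is illustrative only (the real code inlines this logic in the match arm):

```rust
// Parse a "lat,lon" string into a coordinate pair, as the Geo field
// indexing path does. `parse_geo` is a hypothetical helper for illustration.
fn parse_geo(value: &str) -> Option<(f64, f64)> {
    let parts: Vec<&str> = value.split(',').collect();
    if parts.len() != 2 {
        return None; // Wrong arity: skipped, matching the indexer's behavior.
    }
    match (parts[0].trim().parse::<f64>(), parts[1].trim().parse::<f64>()) {
        (Ok(lat), Ok(lon)) => Some((lat, lon)),
        _ => None, // Non-numeric input is skipped rather than rejected.
    }
}

fn main() {
    assert_eq!(parse_geo("48.8566,2.3522"), Some((48.8566, 2.3522)));
    assert_eq!(parse_geo("not-a-point"), None);
    assert_eq!(parse_geo("1,2,3"), None);
    println!("ok");
}
```

Silently dropping malformed coordinates keeps indexing total, but it also means a typo in the input never surfaces as an error; callers should validate upstream if that matters.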

    pub fn search_with_options(
        &self,
        query_str: &str,
        options: SearchOptions,
    ) -> Result<SearchResults, DBError> {
        let searcher = self.reader.searcher();

        // Parse query based on search fields
        let query: Box<dyn Query> = if self.index_schema.default_search_fields.is_empty() {
            return Err(DBError(
                "No searchable fields defined in schema".to_string(),
            ));
        } else {
            let query_parser = QueryParser::for_index(
                &self.index,
                self.index_schema.default_search_fields.clone(),
            );

            Box::new(
                query_parser
                    .parse_query(query_str)
                    .map_err(|e| DBError(format!("Failed to parse query: {}", e)))?,
            )
        };

        // Apply filters if any
        let final_query = if !options.filters.is_empty() {
            let mut clauses: Vec<(Occur, Box<dyn Query>)> = vec![(Occur::Must, query)];

            // Add filters
            for filter in options.filters {
                if let Some((field, _)) = self.index_schema.fields.get(&filter.field) {
                    match filter.filter_type {
                        FilterType::Equals(value) => {
                            let term_query = TermQuery::new(
                                Term::from_field_text(*field, &value),
                                tantivy::schema::IndexRecordOption::Basic,
                            );
                            clauses.push((Occur::Must, Box::new(term_query)));
                        }
                        FilterType::Range { min: _, max: _ } => {
                            // Would need numeric field handling here
                            // Simplified for now
                        }
                        FilterType::InSet(values) => {
                            let mut sub_clauses: Vec<(Occur, Box<dyn Query>)> = vec![];
                            for value in values {
                                let term_query = TermQuery::new(
                                    Term::from_field_text(*field, &value),
                                    tantivy::schema::IndexRecordOption::Basic,
                                );
                                sub_clauses.push((Occur::Should, Box::new(term_query)));
                            }
                            clauses.push((Occur::Must, Box::new(BooleanQuery::new(sub_clauses))));
                        }
                    }
                }
            }

            Box::new(BooleanQuery::new(clauses))
        } else {
            query
        };

        // Execute search
        let top_docs = searcher
            .search(
                &*final_query,
                &TopDocs::with_limit(options.limit + options.offset),
            )
            .map_err(|e| DBError(format!("Search failed: {}", e)))?;

        let total_hits = top_docs.len();
        let mut documents = Vec::new();

        for (score, doc_address) in top_docs.iter().skip(options.offset).take(options.limit) {
            let retrieved_doc: TantivyDocument = searcher
                .doc(*doc_address)
                .map_err(|e| DBError(format!("Failed to retrieve doc: {}", e)))?;

            let mut doc_fields = HashMap::new();

            // Extract all stored fields
            for (field_name, (field, field_def)) in &self.index_schema.fields {
                match field_def {
                    FieldDef::Text { stored, .. } | FieldDef::Tag { stored, .. } => {
                        if *stored {
                            if let Some(value) = retrieved_doc.get_first(*field) {
                                if let Some(text) = value.as_str() {
                                    doc_fields.insert(field_name.clone(), text.to_string());
                                }
                            }
                        }
                    }
                    FieldDef::Numeric {
                        stored, precision, ..
                    } => {
                        if *stored {
                            let value_str = match precision {
                                NumericType::I64 => retrieved_doc
                                    .get_first(*field)
                                    .and_then(|v| v.as_i64())
                                    .map(|v| v.to_string()),
                                NumericType::U64 => retrieved_doc
                                    .get_first(*field)
                                    .and_then(|v| v.as_u64())
                                    .map(|v| v.to_string()),
                                NumericType::F64 => retrieved_doc
                                    .get_first(*field)
                                    .and_then(|v| v.as_f64())
                                    .map(|v| v.to_string()),
                                NumericType::Date => retrieved_doc
                                    .get_first(*field)
                                    .and_then(|v| v.as_datetime())
                                    .map(|v| v.into_timestamp_millis().to_string()),
                            };

                            if let Some(v) = value_str {
                                doc_fields.insert(field_name.clone(), v);
                            }
                        }
                    }
                    FieldDef::Geo { stored } => {
                        if *stored {
                            let lat_field = self
                                .index_schema
                                .fields
                                .get(&format!("{}_lat", field_name))
                                .unwrap()
                                .0;
                            let lon_field = self
                                .index_schema
                                .fields
                                .get(&format!("{}_lon", field_name))
                                .unwrap()
                                .0;

                            let lat = retrieved_doc.get_first(lat_field).and_then(|v| v.as_f64());
                            let lon = retrieved_doc.get_first(lon_field).and_then(|v| v.as_f64());

                            if let (Some(lat), Some(lon)) = (lat, lon) {
                                doc_fields.insert(field_name.clone(), format!("{},{}", lat, lon));
                            }
                        }
                    }
                }
            }

            documents.push(SearchDocument {
                fields: doc_fields,
                score: *score,
            });
        }

        Ok(SearchResults {
            total: total_hits,
            documents,
        })
    }

    pub fn get_info(&self) -> Result<IndexInfo, DBError> {
        let searcher = self.reader.searcher();
        let num_docs = searcher.num_docs();

        let fields_info: Vec<FieldInfo> = self
            .index_schema
            .fields
            .iter()
            .map(|(name, (_, def))| FieldInfo {
                name: name.clone(),
                field_type: format!("{:?}", def),
            })
            .collect();

        Ok(IndexInfo {
            name: self.name.clone(),
            num_docs,
            fields: fields_info,
            config: self.config.clone(),
        })
    }
}

#[derive(Debug, Clone)]
pub struct SearchOptions {
    pub limit: usize,
    pub offset: usize,
    pub filters: Vec<Filter>,
    pub sort_by: Option<String>,
    pub return_fields: Option<Vec<String>>,
    pub highlight: bool,
}

impl Default for SearchOptions {
    fn default() -> Self {
        SearchOptions {
            limit: 10,
            offset: 0,
            filters: vec![],
            sort_by: None,
            return_fields: None,
            highlight: false,
        }
    }
}

#[derive(Debug, Clone)]
pub struct Filter {
    pub field: String,
    pub filter_type: FilterType,
}

#[derive(Debug, Clone)]
pub enum FilterType {
    Equals(String),
    Range { min: String, max: String },
    InSet(Vec<String>),
}

#[derive(Debug)]
pub struct SearchResults {
    pub total: usize,
    pub documents: Vec<SearchDocument>,
}

#[derive(Debug)]
pub struct SearchDocument {
    pub fields: HashMap<String, String>,
    pub score: f32,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct IndexInfo {
    pub name: String,
    pub num_docs: u64,
    pub fields: Vec<FieldInfo>,
    pub config: IndexConfig,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct FieldInfo {
    pub name: String,
    pub field_type: String,
}
@@ -1,9 +0,0 @@
[package]
name = "supervisor"
version = "0.1.0"
edition = "2021"

[dependencies]
# The supervisor will eventually depend on the herodb crate.
# We can add this dependency now.
# herodb = { path = "../herodb" }
@@ -1,4 +0,0 @@
fn main() {
    println!("Hello from the supervisor crate!");
    // Supervisor logic will be implemented here.
}
@@ -298,7 +298,7 @@ main() {

 # Start the server
 print_status "Starting HeroDB server..."
-./target/release/herodb --dir "$DB_DIR" --port $PORT &
+../target/release/herodb --dir "$DB_DIR" --port $PORT &
 SERVER_PID=$!

 # Wait for server to start
@@ -352,4 +352,4 @@ check_dependencies() {

 # Run dependency check and main function
 check_dependencies
 main "$@"
@@ -1,4 +1,4 @@
-use herodb::{server::Server, options::DBOption};
+use herodb::{options::DBOption, server::Server};
 use std::time::Duration;
 use tokio::io::{AsyncReadExt, AsyncWriteExt};
 use tokio::net::TcpStream;
@@ -7,7 +7,7 @@ use tokio::time::sleep;
 // Helper function to send command and get response
 async fn send_command(stream: &mut TcpStream, command: &str) -> String {
     stream.write_all(command.as_bytes()).await.unwrap();
 
     let mut buffer = [0; 1024];
     let n = stream.read(&mut buffer).await.unwrap();
     String::from_utf8_lossy(&buffer[..n]).to_string()
@@ -19,7 +19,7 @@ async fn debug_hset_simple() {
     let test_dir = "/tmp/herodb_debug_hset";
     let _ = std::fs::remove_dir_all(test_dir);
     std::fs::create_dir_all(test_dir).unwrap();
 
     let port = 16500;
     let option = DBOption {
         dir: test_dir.to_string(),
@@ -27,36 +27,51 @@ async fn debug_hset_simple() {
         debug: false,
         encrypt: false,
         encryption_key: None,
+        backend: herodb::options::BackendType::Redb,
     };
 
     let mut server = Server::new(option).await;
 
     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();
 
         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });
 
     sleep(Duration::from_millis(200)).await;
 
-    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port)).await.unwrap();
+    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port))
+        .await
+        .unwrap();
 
     // Test simple HSET
     println!("Testing HSET...");
-    let response = send_command(&mut stream, "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n",
+    )
+    .await;
     println!("HSET response: {}", response);
     assert!(response.contains("1"), "Expected '1' but got: {}", response);
 
     // Test HGET
     println!("Testing HGET...");
-    let response = send_command(&mut stream, "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n",
+    )
+    .await;
     println!("HGET response: {}", response);
-    assert!(response.contains("value1"), "Expected 'value1' but got: {}", response);
-}
+    assert!(
+        response.contains("value1"),
+        "Expected 'value1' but got: {}",
+        response
+    );
+}
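The RESP frames in these tests are written out by hand: an array header `*N`, then `$len` plus the payload for each bulk string, every part terminated by `\r\n`. That framing rule can be captured in a small helper; this is a sketch for illustration, not a function from the codebase:

```rust
// Encode a command as a RESP array of bulk strings:
// "*<argc>\r\n" followed by "$<len>\r\n<arg>\r\n" for each argument.
fn encode_resp(args: &[&str]) -> String {
    let mut out = format!("*{}\r\n", args.len());
    for arg in args {
        out.push_str(&format!("${}\r\n{}\r\n", arg.len(), arg));
    }
    out
}
```

For example, `encode_resp(&["HSET", "hash", "field1", "value1"])` reproduces the HSET literal used in the test above.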
@@ -1,4 +1,4 @@
-use herodb::{server::Server, options::DBOption};
+use herodb::{options::DBOption, server::Server};
 use std::time::Duration;
 use tokio::io::{AsyncReadExt, AsyncWriteExt};
 use tokio::net::TcpStream;
@@ -7,50 +7,55 @@ use tokio::time::sleep;
 #[tokio::test]
 async fn debug_hset_return_value() {
     let test_dir = "/tmp/herodb_debug_hset_return";
 
     // Clean up any existing test data
     let _ = std::fs::remove_dir_all(&test_dir);
     std::fs::create_dir_all(&test_dir).unwrap();
 
     let option = DBOption {
         dir: test_dir.to_string(),
         port: 16390,
         debug: false,
         encrypt: false,
         encryption_key: None,
+        backend: herodb::options::BackendType::Redb,
     };
 
     let mut server = Server::new(option).await;
 
     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind("127.0.0.1:16390")
             .await
             .unwrap();
 
         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });
 
     sleep(Duration::from_millis(200)).await;
 
     // Connect and test HSET
     let mut stream = TcpStream::connect("127.0.0.1:16390").await.unwrap();
 
     // Send HSET command
     let cmd = "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n";
     stream.write_all(cmd.as_bytes()).await.unwrap();
 
     let mut buffer = [0; 1024];
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
 
     println!("HSET response: {}", response);
     println!("Response bytes: {:?}", &buffer[..n]);
 
     // Check if response contains "1"
-    assert!(response.contains("1"), "Expected response to contain '1', got: {}", response);
-}
+    assert!(
+        response.contains("1"),
+        "Expected response to contain '1', got: {}",
+        response
+    );
+}
@@ -1,12 +1,15 @@
-use herodb::protocol::Protocol;
 use herodb::cmd::Cmd;
+use herodb::protocol::Protocol;
 
 #[test]
 fn test_protocol_parsing() {
     // Test TYPE command parsing
     let type_cmd = "*2\r\n$4\r\nTYPE\r\n$7\r\nnoexist\r\n";
-    println!("Parsing TYPE command: {}", type_cmd.replace("\r\n", "\\r\\n"));
+    println!(
+        "Parsing TYPE command: {}",
+        type_cmd.replace("\r\n", "\\r\\n")
+    );
 
     match Protocol::from(type_cmd) {
         Ok((protocol, _)) => {
             println!("Protocol parsed successfully: {:?}", protocol);
@@ -17,11 +20,14 @@ fn test_protocol_parsing() {
         }
         Err(e) => println!("Protocol parsing failed: {:?}", e),
     }
 
     // Test HEXISTS command parsing
     let hexists_cmd = "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$7\r\nnoexist\r\n";
-    println!("\nParsing HEXISTS command: {}", hexists_cmd.replace("\r\n", "\\r\\n"));
+    println!(
+        "\nParsing HEXISTS command: {}",
+        hexists_cmd.replace("\r\n", "\\r\\n")
+    );
 
     match Protocol::from(hexists_cmd) {
         Ok((protocol, _)) => {
             println!("Protocol parsed successfully: {:?}", protocol);
@@ -32,4 +38,4 @@ fn test_protocol_parsing() {
         }
         Err(e) => println!("Protocol parsing failed: {:?}", e),
     }
 }
@@ -16,9 +16,9 @@ fn get_redis_connection(port: u16) -> Connection {
             }
         }
         Err(e) => {
-            if attempts >= 20 {
+            if attempts >= 120 {
                 panic!(
-                    "Failed to connect to Redis server after 20 attempts: {}",
+                    "Failed to connect to Redis server after 120 attempts: {}",
                     e
                 );
             }
@@ -81,15 +81,15 @@ fn setup_server() -> (ServerProcessGuard, u16) {
     ])
     .spawn()
     .expect("Failed to start server process");
 
     // Create a new guard that also owns the test directory path
     let guard = ServerProcessGuard {
         process: child,
         test_dir,
     };
 
-    // Give the server a moment to start
-    std::thread::sleep(Duration::from_millis(500));
+    // Give the server time to build and start (cargo run may compile first)
+    std::thread::sleep(Duration::from_millis(2500));
 
     (guard, port)
 }
@@ -206,7 +206,9 @@ async fn test_expiration(conn: &mut Connection) {
 async fn test_scan_operations(conn: &mut Connection) {
     cleanup_keys(conn).await;
     for i in 0..5 {
-        let _: () = conn.set(format!("key{}", i), format!("value{}", i)).unwrap();
+        let _: () = conn
+            .set(format!("key{}", i), format!("value{}", i))
+            .unwrap();
     }
     let result: (u64, Vec<String>) = redis::cmd("SCAN")
         .arg(0)
@@ -253,7 +255,9 @@ async fn test_scan_with_count(conn: &mut Connection) {
 async fn test_hscan_operations(conn: &mut Connection) {
     cleanup_keys(conn).await;
     for i in 0..3 {
-        let _: () = conn.hset("testhash", format!("field{}", i), format!("value{}", i)).unwrap();
+        let _: () = conn
+            .hset("testhash", format!("field{}", i), format!("value{}", i))
+            .unwrap();
     }
     let result: (u64, Vec<String>) = redis::cmd("HSCAN")
         .arg("testhash")
@@ -273,8 +277,16 @@ async fn test_hscan_operations(conn: &mut Connection) {
 async fn test_transaction_operations(conn: &mut Connection) {
     cleanup_keys(conn).await;
     let _: () = redis::cmd("MULTI").query(conn).unwrap();
-    let _: () = redis::cmd("SET").arg("key1").arg("value1").query(conn).unwrap();
-    let _: () = redis::cmd("SET").arg("key2").arg("value2").query(conn).unwrap();
+    let _: () = redis::cmd("SET")
+        .arg("key1")
+        .arg("value1")
+        .query(conn)
+        .unwrap();
+    let _: () = redis::cmd("SET")
+        .arg("key2")
+        .arg("value2")
+        .query(conn)
+        .unwrap();
     let _: Vec<String> = redis::cmd("EXEC").query(conn).unwrap();
     let result: String = conn.get("key1").unwrap();
     assert_eq!(result, "value1");
@@ -286,7 +298,11 @@ async fn test_transaction_operations(conn: &mut Connection) {
 async fn test_discard_transaction(conn: &mut Connection) {
     cleanup_keys(conn).await;
     let _: () = redis::cmd("MULTI").query(conn).unwrap();
-    let _: () = redis::cmd("SET").arg("discard").arg("value").query(conn).unwrap();
+    let _: () = redis::cmd("SET")
+        .arg("discard")
+        .arg("value")
+        .query(conn)
+        .unwrap();
     let _: () = redis::cmd("DISCARD").query(conn).unwrap();
     let result: Option<String> = conn.get("discard").unwrap();
     assert_eq!(result, None);
@@ -306,7 +322,6 @@ async fn test_type_command(conn: &mut Connection) {
     cleanup_keys(conn).await;
 }
 
-
 async fn test_info_command(conn: &mut Connection) {
     cleanup_keys(conn).await;
     let result: String = redis::cmd("INFO").query(conn).unwrap();
@@ -1,4 +1,4 @@
-use herodb::{server::Server, options::DBOption};
+use herodb::{options::DBOption, server::Server};
 use std::time::Duration;
 use tokio::io::{AsyncReadExt, AsyncWriteExt};
 use tokio::net::TcpStream;
@@ -8,22 +8,23 @@ use tokio::time::sleep;
 async fn start_test_server(test_name: &str) -> (Server, u16) {
     use std::sync::atomic::{AtomicU16, Ordering};
     static PORT_COUNTER: AtomicU16 = AtomicU16::new(16379);
 
     let port = PORT_COUNTER.fetch_add(1, Ordering::SeqCst);
     let test_dir = format!("/tmp/herodb_test_{}", test_name);
 
     // Clean up and create test directory
     let _ = std::fs::remove_dir_all(&test_dir);
     std::fs::create_dir_all(&test_dir).unwrap();
 
     let option = DBOption {
         dir: test_dir,
         port,
         debug: true,
         encrypt: false,
         encryption_key: None,
+        backend: herodb::options::BackendType::Redb,
     };
 
     let server = Server::new(option).await;
     (server, port)
 }
@@ -46,7 +47,7 @@ async fn connect_to_server(port: u16) -> TcpStream {
 // Helper function to send command and get response
 async fn send_command(stream: &mut TcpStream, command: &str) -> String {
     stream.write_all(command.as_bytes()).await.unwrap();
 
     let mut buffer = [0; 1024];
     let n = stream.read(&mut buffer).await.unwrap();
     String::from_utf8_lossy(&buffer[..n]).to_string()
@@ -55,22 +56,22 @@ async fn send_command(stream: &mut TcpStream, command: &str) -> String {
 #[tokio::test]
 async fn test_basic_ping() {
     let (mut server, port) = start_test_server("ping").await;
 
     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();
 
         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });
 
     sleep(Duration::from_millis(100)).await;
 
     let mut stream = connect_to_server(port).await;
     let response = send_command(&mut stream, "*1\r\n$4\r\nPING\r\n").await;
     assert!(response.contains("PONG"));
@@ -79,40 +80,44 @@ async fn test_basic_ping() {
 #[tokio::test]
 async fn test_string_operations() {
     let (mut server, port) = start_test_server("string").await;
 
     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();
 
         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });
 
     sleep(Duration::from_millis(100)).await;
 
     let mut stream = connect_to_server(port).await;
 
     // Test SET
-    let response = send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n",
+    )
+    .await;
     assert!(response.contains("OK"));
 
     // Test GET
     let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$3\r\nkey\r\n").await;
     assert!(response.contains("value"));
 
     // Test GET non-existent key
     let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$7\r\nnoexist\r\n").await;
     assert!(response.contains("$-1")); // NULL response
 
     // Test DEL
     let response = send_command(&mut stream, "*2\r\n$3\r\nDEL\r\n$3\r\nkey\r\n").await;
     assert!(response.contains("1"));
 
     // Test GET after DEL
     let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$3\r\nkey\r\n").await;
     assert!(response.contains("$-1")); // NULL response
@@ -121,33 +126,37 @@ async fn test_string_operations() {
 #[tokio::test]
 async fn test_incr_operations() {
     let (mut server, port) = start_test_server("incr").await;
 
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();
 
         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });
 
     sleep(Duration::from_millis(100)).await;
 
     let mut stream = connect_to_server(port).await;
 
     // Test INCR on non-existent key
     let response = send_command(&mut stream, "*2\r\n$4\r\nINCR\r\n$7\r\ncounter\r\n").await;
     assert!(response.contains("1"));
 
     // Test INCR on existing key
     let response = send_command(&mut stream, "*2\r\n$4\r\nINCR\r\n$7\r\ncounter\r\n").await;
     assert!(response.contains("2"));
 
     // Test INCR on string value (should fail)
-    send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nhello\r\n").await;
+    send_command(
+        &mut stream,
+        "*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nhello\r\n",
+    )
+    .await;
     let response = send_command(&mut stream, "*2\r\n$4\r\nINCR\r\n$6\r\nstring\r\n").await;
     assert!(response.contains("ERR"));
 }
@@ -155,63 +164,83 @@ async fn test_incr_operations() {
 #[tokio::test]
 async fn test_hash_operations() {
     let (mut server, port) = start_test_server("hash").await;
 
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();
 
         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });
 
     sleep(Duration::from_millis(100)).await;
 
     let mut stream = connect_to_server(port).await;
 
     // Test HSET
-    let response = send_command(&mut stream, "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n",
+    )
+    .await;
     assert!(response.contains("1")); // 1 new field
 
     // Test HGET
-    let response = send_command(&mut stream, "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n",
+    )
+    .await;
     assert!(response.contains("value1"));
 
     // Test HSET multiple fields
     let response = send_command(&mut stream, "*6\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield2\r\n$6\r\nvalue2\r\n$6\r\nfield3\r\n$6\r\nvalue3\r\n").await;
     assert!(response.contains("2")); // 2 new fields
 
     // Test HGETALL
     let response = send_command(&mut stream, "*2\r\n$7\r\nHGETALL\r\n$4\r\nhash\r\n").await;
     assert!(response.contains("field1"));
     assert!(response.contains("value1"));
     assert!(response.contains("field2"));
     assert!(response.contains("value2"));
 
     // Test HLEN
     let response = send_command(&mut stream, "*2\r\n$4\r\nHLEN\r\n$4\r\nhash\r\n").await;
     assert!(response.contains("3"));
 
     // Test HEXISTS
-    let response = send_command(&mut stream, "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$6\r\nfield1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$6\r\nfield1\r\n",
+    )
+    .await;
     assert!(response.contains("1"));
 
-    let response = send_command(&mut stream, "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$7\r\nnoexist\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$7\r\nnoexist\r\n",
+    )
+    .await;
     assert!(response.contains("0"));
 
     // Test HDEL
-    let response = send_command(&mut stream, "*3\r\n$4\r\nHDEL\r\n$4\r\nhash\r\n$6\r\nfield1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$4\r\nHDEL\r\n$4\r\nhash\r\n$6\r\nfield1\r\n",
+    )
+    .await;
     assert!(response.contains("1"));
 
     // Test HKEYS
     let response = send_command(&mut stream, "*2\r\n$5\r\nHKEYS\r\n$4\r\nhash\r\n").await;
     assert!(response.contains("field2"));
     assert!(response.contains("field3"));
     assert!(!response.contains("field1")); // Should be deleted
 
     // Test HVALS
     let response = send_command(&mut stream, "*2\r\n$5\r\nHVALS\r\n$4\r\nhash\r\n").await;
     assert!(response.contains("value2"));
@@ -221,46 +250,50 @@ async fn test_hash_operations() {
 #[tokio::test]
 async fn test_expiration() {
     let (mut server, port) = start_test_server("expiration").await;
 
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();
 
         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });
 
     sleep(Duration::from_millis(100)).await;
 
     let mut stream = connect_to_server(port).await;
 
     // Test SETEX (expire in 1 second)
-    let response = send_command(&mut stream, "*5\r\n$3\r\nSET\r\n$6\r\nexpkey\r\n$5\r\nvalue\r\n$2\r\nEX\r\n$1\r\n1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*5\r\n$3\r\nSET\r\n$6\r\nexpkey\r\n$5\r\nvalue\r\n$2\r\nEX\r\n$1\r\n1\r\n",
+    )
+    .await;
     assert!(response.contains("OK"));
 
     // Test TTL
     let response = send_command(&mut stream, "*2\r\n$3\r\nTTL\r\n$6\r\nexpkey\r\n").await;
     assert!(response.contains("1") || response.contains("0")); // Should be 1 or 0 seconds
 
     // Test EXISTS
     let response = send_command(&mut stream, "*2\r\n$6\r\nEXISTS\r\n$6\r\nexpkey\r\n").await;
     assert!(response.contains("1"));
 
     // Wait for expiration
     sleep(Duration::from_millis(1100)).await;
 
     // Test GET after expiration
     let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$6\r\nexpkey\r\n").await;
     assert!(response.contains("$-1")); // Should be NULL
 
     // Test TTL after expiration
     let response = send_command(&mut stream, "*2\r\n$3\r\nTTL\r\n$6\r\nexpkey\r\n").await;
     assert!(response.contains("-2")); // Key doesn't exist
 
     // Test EXISTS after expiration
     let response = send_command(&mut stream, "*2\r\n$6\r\nEXISTS\r\n$6\r\nexpkey\r\n").await;
     assert!(response.contains("0"));
@@ -269,33 +302,37 @@ async fn test_expiration() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_scan_operations() {
|
async fn test_scan_operations() {
|
||||||
let (mut server, port) = start_test_server("scan").await;
|
let (mut server, port) = start_test_server("scan").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Set up test data
|
// Set up test data
|
||||||
for i in 0..5 {
|
for i in 0..5 {
|
||||||
let cmd = format!("*3\r\n$3\r\nSET\r\n$4\r\nkey{}\r\n$6\r\nvalue{}\r\n", i, i);
|
let cmd = format!("*3\r\n$3\r\nSET\r\n$4\r\nkey{}\r\n$6\r\nvalue{}\r\n", i, i);
|
||||||
send_command(&mut stream, &cmd).await;
|
send_command(&mut stream, &cmd).await;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Test SCAN
|
// Test SCAN
|
||||||
let response = send_command(&mut stream, "*6\r\n$4\r\nSCAN\r\n$1\r\n0\r\n$5\r\nMATCH\r\n$1\r\n*\r\n$5\r\nCOUNT\r\n$2\r\n10\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*6\r\n$4\r\nSCAN\r\n$1\r\n0\r\n$5\r\nMATCH\r\n$1\r\n*\r\n$5\r\nCOUNT\r\n$2\r\n10\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("key"));
|
assert!(response.contains("key"));
|
||||||
|
|
||||||
// Test KEYS
|
// Test KEYS
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nKEYS\r\n$1\r\n*\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nKEYS\r\n$1\r\n*\r\n").await;
|
||||||
assert!(response.contains("key0"));
|
assert!(response.contains("key0"));
|
||||||
@@ -305,29 +342,32 @@ async fn test_scan_operations() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_hscan_operations() {
|
async fn test_hscan_operations() {
|
||||||
let (mut server, port) = start_test_server("hscan").await;
|
let (mut server, port) = start_test_server("hscan").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Set up hash data
|
// Set up hash data
|
||||||
for i in 0..3 {
|
for i in 0..3 {
|
||||||
let cmd = format!("*4\r\n$4\r\nHSET\r\n$8\r\ntesthash\r\n$6\r\nfield{}\r\n$6\r\nvalue{}\r\n", i, i);
|
let cmd = format!(
|
||||||
|
"*4\r\n$4\r\nHSET\r\n$8\r\ntesthash\r\n$6\r\nfield{}\r\n$6\r\nvalue{}\r\n",
|
||||||
|
i, i
|
||||||
|
);
|
||||||
send_command(&mut stream, &cmd).await;
|
send_command(&mut stream, &cmd).await;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Test HSCAN
|
// Test HSCAN
|
||||||
let response = send_command(&mut stream, "*7\r\n$5\r\nHSCAN\r\n$8\r\ntesthash\r\n$1\r\n0\r\n$5\r\nMATCH\r\n$1\r\n*\r\n$5\r\nCOUNT\r\n$2\r\n10\r\n").await;
|
let response = send_command(&mut stream, "*7\r\n$5\r\nHSCAN\r\n$8\r\ntesthash\r\n$1\r\n0\r\n$5\r\nMATCH\r\n$1\r\n*\r\n$5\r\nCOUNT\r\n$2\r\n10\r\n").await;
|
||||||
assert!(response.contains("field"));
|
assert!(response.contains("field"));
|
||||||
@@ -337,42 +377,50 @@ async fn test_hscan_operations() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_transaction_operations() {
|
async fn test_transaction_operations() {
|
||||||
let (mut server, port) = start_test_server("transaction").await;
|
let (mut server, port) = start_test_server("transaction").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Test MULTI
|
// Test MULTI
|
||||||
let response = send_command(&mut stream, "*1\r\n$5\r\nMULTI\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$5\r\nMULTI\r\n").await;
|
||||||
assert!(response.contains("OK"));
|
assert!(response.contains("OK"));
|
||||||
|
|
||||||
// Test queued commands
|
// Test queued commands
|
||||||
let response = send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$4\r\nkey1\r\n$6\r\nvalue1\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$3\r\nSET\r\n$4\r\nkey1\r\n$6\r\nvalue1\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("QUEUED"));
|
assert!(response.contains("QUEUED"));
|
||||||
|
|
||||||
let response = send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$3\r\nSET\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("QUEUED"));
|
assert!(response.contains("QUEUED"));
|
||||||
|
|
||||||
// Test EXEC
|
// Test EXEC
|
||||||
let response = send_command(&mut stream, "*1\r\n$4\r\nEXEC\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$4\r\nEXEC\r\n").await;
|
||||||
assert!(response.contains("OK")); // Should contain results of executed commands
|
assert!(response.contains("OK")); // Should contain results of executed commands
|
||||||
|
|
||||||
// Verify commands were executed
|
// Verify commands were executed
|
||||||
let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$4\r\nkey1\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$4\r\nkey1\r\n").await;
|
||||||
assert!(response.contains("value1"));
|
assert!(response.contains("value1"));
|
||||||
|
|
||||||
let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$4\r\nkey2\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$4\r\nkey2\r\n").await;
|
||||||
assert!(response.contains("value2"));
|
assert!(response.contains("value2"));
|
||||||
}
|
}
|
||||||
@@ -380,35 +428,39 @@ async fn test_transaction_operations() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_discard_transaction() {
|
async fn test_discard_transaction() {
|
||||||
let (mut server, port) = start_test_server("discard").await;
|
let (mut server, port) = start_test_server("discard").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Test MULTI
|
// Test MULTI
|
||||||
let response = send_command(&mut stream, "*1\r\n$5\r\nMULTI\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$5\r\nMULTI\r\n").await;
|
||||||
assert!(response.contains("OK"));
|
assert!(response.contains("OK"));
|
||||||
|
|
||||||
// Test queued command
|
// Test queued command
|
||||||
let response = send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$7\r\ndiscard\r\n$5\r\nvalue\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$3\r\nSET\r\n$7\r\ndiscard\r\n$5\r\nvalue\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("QUEUED"));
|
assert!(response.contains("QUEUED"));
|
||||||
|
|
||||||
// Test DISCARD
|
// Test DISCARD
|
||||||
let response = send_command(&mut stream, "*1\r\n$7\r\nDISCARD\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$7\r\nDISCARD\r\n").await;
|
||||||
assert!(response.contains("OK"));
|
assert!(response.contains("OK"));
|
||||||
|
|
||||||
// Verify command was not executed
|
// Verify command was not executed
|
||||||
let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$7\r\ndiscard\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$3\r\nGET\r\n$7\r\ndiscard\r\n").await;
|
||||||
assert!(response.contains("$-1")); // Should be NULL
|
assert!(response.contains("$-1")); // Should be NULL
|
||||||
@@ -417,33 +469,41 @@ async fn test_discard_transaction() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_type_command() {
|
async fn test_type_command() {
|
||||||
let (mut server, port) = start_test_server("type").await;
|
let (mut server, port) = start_test_server("type").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Test string type
|
// Test string type
|
||||||
send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nvalue\r\n").await;
|
send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nvalue\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$6\r\nstring\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$6\r\nstring\r\n").await;
|
||||||
assert!(response.contains("string"));
|
assert!(response.contains("string"));
|
||||||
|
|
||||||
// Test hash type
|
// Test hash type
|
||||||
send_command(&mut stream, "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$5\r\nfield\r\n$5\r\nvalue\r\n").await;
|
send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$5\r\nfield\r\n$5\r\nvalue\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$4\r\nhash\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$4\r\nhash\r\n").await;
|
||||||
assert!(response.contains("hash"));
|
assert!(response.contains("hash"));
|
||||||
|
|
||||||
// Test non-existent key
|
// Test non-existent key
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$7\r\nnoexist\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$7\r\nnoexist\r\n").await;
|
||||||
assert!(response.contains("none"));
|
assert!(response.contains("none"));
|
||||||
@@ -452,30 +512,38 @@ async fn test_type_command() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_config_commands() {
|
async fn test_config_commands() {
|
||||||
let (mut server, port) = start_test_server("config").await;
|
let (mut server, port) = start_test_server("config").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Test CONFIG GET databases
|
// Test CONFIG GET databases
|
||||||
let response = send_command(&mut stream, "*3\r\n$6\r\nCONFIG\r\n$3\r\nGET\r\n$9\r\ndatabases\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$6\r\nCONFIG\r\n$3\r\nGET\r\n$9\r\ndatabases\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("databases"));
|
assert!(response.contains("databases"));
|
||||||
assert!(response.contains("16"));
|
assert!(response.contains("16"));
|
||||||
|
|
||||||
// Test CONFIG GET dir
|
// Test CONFIG GET dir
|
||||||
let response = send_command(&mut stream, "*3\r\n$6\r\nCONFIG\r\n$3\r\nGET\r\n$3\r\ndir\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$6\r\nCONFIG\r\n$3\r\nGET\r\n$3\r\ndir\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("dir"));
|
assert!(response.contains("dir"));
|
||||||
assert!(response.contains("/tmp/herodb_test_config"));
|
assert!(response.contains("/tmp/herodb_test_config"));
|
||||||
}
|
}
|
||||||
@@ -483,27 +551,27 @@ async fn test_config_commands() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_info_command() {
|
async fn test_info_command() {
|
||||||
let (mut server, port) = start_test_server("info").await;
|
let (mut server, port) = start_test_server("info").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Test INFO
|
// Test INFO
|
||||||
let response = send_command(&mut stream, "*1\r\n$4\r\nINFO\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$4\r\nINFO\r\n").await;
|
||||||
assert!(response.contains("redis_version"));
|
assert!(response.contains("redis_version"));
|
||||||
|
|
||||||
// Test INFO replication
|
// Test INFO replication
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nINFO\r\n$11\r\nreplication\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nINFO\r\n$11\r\nreplication\r\n").await;
|
||||||
assert!(response.contains("role:master"));
|
assert!(response.contains("role:master"));
|
||||||
@@ -512,36 +580,44 @@ async fn test_info_command() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_error_handling() {
|
async fn test_error_handling() {
|
||||||
let (mut server, port) = start_test_server("error").await;
|
let (mut server, port) = start_test_server("error").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Test WRONGTYPE error - try to use hash command on string
|
// Test WRONGTYPE error - try to use hash command on string
|
||||||
send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nvalue\r\n").await;
|
send_command(
|
||||||
let response = send_command(&mut stream, "*3\r\n$4\r\nHGET\r\n$6\r\nstring\r\n$5\r\nfield\r\n").await;
|
&mut stream,
|
||||||
|
"*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nvalue\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$4\r\nHGET\r\n$6\r\nstring\r\n$5\r\nfield\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("WRONGTYPE"));
|
assert!(response.contains("WRONGTYPE"));
|
||||||
|
|
||||||
// Test unknown command
|
// Test unknown command
|
||||||
let response = send_command(&mut stream, "*1\r\n$7\r\nUNKNOWN\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$7\r\nUNKNOWN\r\n").await;
|
||||||
assert!(response.contains("unknown cmd") || response.contains("ERR"));
|
assert!(response.contains("unknown cmd") || response.contains("ERR"));
|
||||||
|
|
||||||
// Test EXEC without MULTI
|
// Test EXEC without MULTI
|
||||||
let response = send_command(&mut stream, "*1\r\n$4\r\nEXEC\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$4\r\nEXEC\r\n").await;
|
||||||
assert!(response.contains("ERR"));
|
assert!(response.contains("ERR"));
|
||||||
|
|
||||||
// Test DISCARD without MULTI
|
// Test DISCARD without MULTI
|
||||||
let response = send_command(&mut stream, "*1\r\n$7\r\nDISCARD\r\n").await;
|
let response = send_command(&mut stream, "*1\r\n$7\r\nDISCARD\r\n").await;
|
||||||
assert!(response.contains("ERR"));
|
assert!(response.contains("ERR"));
|
||||||
@@ -550,29 +626,37 @@ async fn test_error_handling() {
|
|||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_list_operations() {
|
async fn test_list_operations() {
|
||||||
let (mut server, port) = start_test_server("list").await;
|
let (mut server, port) = start_test_server("list").await;
|
||||||
|
|
||||||
tokio::spawn(async move {
|
tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
let _ = server.handle(stream).await;
|
let _ = server.handle(stream).await;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
sleep(Duration::from_millis(100)).await;
|
sleep(Duration::from_millis(100)).await;
|
||||||
|
|
||||||
let mut stream = connect_to_server(port).await;
|
let mut stream = connect_to_server(port).await;
|
||||||
|
|
||||||
// Test LPUSH
|
// Test LPUSH
|
||||||
let response = send_command(&mut stream, "*4\r\n$5\r\nLPUSH\r\n$4\r\nlist\r\n$1\r\na\r\n$1\r\nb\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*4\r\n$5\r\nLPUSH\r\n$4\r\nlist\r\n$1\r\na\r\n$1\r\nb\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("2")); // 2 elements
|
assert!(response.contains("2")); // 2 elements
|
||||||
|
|
||||||
// Test RPUSH
|
// Test RPUSH
|
||||||
let response = send_command(&mut stream, "*4\r\n$5\r\nRPUSH\r\n$4\r\nlist\r\n$1\r\nc\r\n$1\r\nd\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*4\r\n$5\r\nRPUSH\r\n$4\r\nlist\r\n$1\r\nc\r\n$1\r\nd\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("4")); // 4 elements
|
assert!(response.contains("4")); // 4 elements
|
||||||
|
|
||||||
// Test LLEN
|
// Test LLEN
|
||||||
@@ -580,29 +664,52 @@ async fn test_list_operations() {
|
|||||||
assert!(response.contains("4"));
|
assert!(response.contains("4"));
|
||||||
|
|
||||||
// Test LRANGE
|
// Test LRANGE
|
||||||
let response = send_command(&mut stream, "*4\r\n$6\r\nLRANGE\r\n$4\r\nlist\r\n$1\r\n0\r\n$2\r\n-1\r\n").await;
|
let response = send_command(
|
||||||
assert_eq!(response, "*4\r\n$1\r\nb\r\n$1\r\na\r\n$1\r\nc\r\n$1\r\nd\r\n");
|
&mut stream,
|
||||||
|
"*4\r\n$6\r\nLRANGE\r\n$4\r\nlist\r\n$1\r\n0\r\n$2\r\n-1\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
|
assert_eq!(
|
||||||
|
response,
|
||||||
|
"*4\r\n$1\r\nb\r\n$1\r\na\r\n$1\r\nc\r\n$1\r\nd\r\n"
|
||||||
|
);
|
||||||
|
|
||||||
// Test LINDEX
|
// Test LINDEX
|
||||||
let response = send_command(&mut stream, "*3\r\n$6\r\nLINDEX\r\n$4\r\nlist\r\n$1\r\n0\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*3\r\n$6\r\nLINDEX\r\n$4\r\nlist\r\n$1\r\n0\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert_eq!(response, "$1\r\nb\r\n");
|
assert_eq!(response, "$1\r\nb\r\n");
|
||||||
|
|
||||||
// Test LPOP
|
// Test LPOP
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nLPOP\r\n$4\r\nlist\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nLPOP\r\n$4\r\nlist\r\n").await;
|
||||||
assert_eq!(response, "$1\r\nb\r\n");
|
assert_eq!(response, "$1\r\nb\r\n");
|
||||||
|
|
||||||
// Test RPOP
|
// Test RPOP
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nRPOP\r\n$4\r\nlist\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nRPOP\r\n$4\r\nlist\r\n").await;
|
||||||
assert_eq!(response, "$1\r\nd\r\n");
|
assert_eq!(response, "$1\r\nd\r\n");
|
||||||
|
|
||||||
// Test LREM
|
// Test LREM
|
||||||
send_command(&mut stream, "*3\r\n$5\r\nLPUSH\r\n$4\r\nlist\r\n$1\r\na\r\n").await; // list is now a, c, a
|
send_command(
|
||||||
let response = send_command(&mut stream, "*4\r\n$4\r\nLREM\r\n$4\r\nlist\r\n$1\r\n1\r\n$1\r\na\r\n").await;
|
&mut stream,
|
||||||
|
"*3\r\n$5\r\nLPUSH\r\n$4\r\nlist\r\n$1\r\na\r\n",
|
||||||
|
)
|
||||||
|
.await; // list is now a, c, a
|
||||||
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*4\r\n$4\r\nLREM\r\n$4\r\nlist\r\n$1\r\n1\r\n$1\r\na\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("1"));
|
assert!(response.contains("1"));
|
||||||
|
|
||||||
// Test LTRIM
|
// Test LTRIM
|
||||||
let response = send_command(&mut stream, "*4\r\n$5\r\nLTRIM\r\n$4\r\nlist\r\n$1\r\n0\r\n$1\r\n0\r\n").await;
|
let response = send_command(
|
||||||
|
&mut stream,
|
||||||
|
"*4\r\n$5\r\nLTRIM\r\n$4\r\nlist\r\n$1\r\n0\r\n$1\r\n0\r\n",
|
||||||
|
)
|
||||||
|
.await;
|
||||||
assert!(response.contains("OK"));
|
assert!(response.contains("OK"));
|
||||||
let response = send_command(&mut stream, "*2\r\n$4\r\nLLEN\r\n$4\r\nlist\r\n").await;
|
let response = send_command(&mut stream, "*2\r\n$4\r\nLLEN\r\n$4\r\nlist\r\n").await;
|
||||||
assert!(response.contains("1"));
|
assert!(response.contains("1"));
|
||||||
}
|
}
|
||||||
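The escaped literals throughout these tests are RESP (Redis Serialization Protocol) frames: `*<n>\r\n` announces an array of `<n>` bulk strings, and each bulk string is `$<byte length>\r\n<payload>\r\n`. As a minimal sketch of that framing rule (the `resp_command` helper below is hypothetical and not part of this repository), the literals could be generated instead of hand-counted:

```rust
// Hypothetical helper (not in the test suite): frames a command as a RESP
// array of bulk strings, matching the hand-written literals in these tests.
fn resp_command(parts: &[&str]) -> String {
    // "*<count>\r\n" declares how many bulk strings follow.
    let mut out = format!("*{}\r\n", parts.len());
    for p in parts {
        // Each bulk string: "$<byte length>\r\n<payload>\r\n".
        out.push_str(&format!("${}\r\n{}\r\n", p.len(), p));
    }
    out
}

fn main() {
    // Reproduces the literal used by the SET test in this diff.
    assert_eq!(
        resp_command(&["SET", "key", "value"]),
        "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n"
    );
    assert_eq!(resp_command(&["PING"]), "*1\r\n$4\r\nPING\r\n");
}
```

A helper like this would also make the byte-length counts impossible to get wrong when a key name changes.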
@@ -1,40 +1,43 @@
-use herodb::{server::Server, options::DBOption};
+use herodb::{options::DBOption, server::Server};
 use std::time::Duration;
-use tokio::time::sleep;
 use tokio::io::{AsyncReadExt, AsyncWriteExt};
 use tokio::net::TcpStream;
+use tokio::time::sleep;

 // Helper function to start a test server with clean data directory
 async fn start_test_server(test_name: &str) -> (Server, u16) {
     use std::sync::atomic::{AtomicU16, Ordering};
     static PORT_COUNTER: AtomicU16 = AtomicU16::new(17000);

     // Get a unique port for this test
     let port = PORT_COUNTER.fetch_add(1, Ordering::SeqCst);

     let test_dir = format!("/tmp/herodb_test_{}", test_name);

     // Clean up any existing test data
     let _ = std::fs::remove_dir_all(&test_dir);
     std::fs::create_dir_all(&test_dir).unwrap();

     let option = DBOption {
         dir: test_dir,
         port,
         debug: true,
         encrypt: false,
         encryption_key: None,
+        backend: herodb::options::BackendType::Redb,
     };

     let server = Server::new(option).await;
     (server, port)
 }

 // Helper function to send Redis command and get response
 async fn send_redis_command(port: u16, command: &str) -> String {
-    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port)).await.unwrap();
+    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port))
+        .await
+        .unwrap();
     stream.write_all(command.as_bytes()).await.unwrap();

     let mut buffer = [0; 1024];
     let n = stream.read(&mut buffer).await.unwrap();
     String::from_utf8_lossy(&buffer[..n]).to_string()
@@ -43,13 +46,13 @@ async fn send_redis_command(port: u16, command: &str) -> String {
 #[tokio::test]
 async fn test_basic_redis_functionality() {
     let (mut server, port) = start_test_server("basic").await;

     // Start server in background with timeout
     let server_handle = tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();

         // Accept only a few connections for testing
         for _ in 0..10 {
             if let Ok((stream, _)) = listener.accept().await {
@@ -57,68 +60,79 @@ async fn test_basic_redis_functionality() {
             }
         }
     });

     sleep(Duration::from_millis(100)).await;

     // Test PING
     let response = send_redis_command(port, "*1\r\n$4\r\nPING\r\n").await;
     assert!(response.contains("PONG"));

     // Test SET
-    let response = send_redis_command(port, "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n").await;
+    let response =
+        send_redis_command(port, "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n").await;
     assert!(response.contains("OK"));

     // Test GET
     let response = send_redis_command(port, "*2\r\n$3\r\nGET\r\n$3\r\nkey\r\n").await;
     assert!(response.contains("value"));

     // Test HSET
-    let response = send_redis_command(port, "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$5\r\nfield\r\n$5\r\nvalue\r\n").await;
+    let response = send_redis_command(
+        port,
+        "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$5\r\nfield\r\n$5\r\nvalue\r\n",
+    )
+    .await;
     assert!(response.contains("1"));

     // Test HGET
-    let response = send_redis_command(port, "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$5\r\nfield\r\n").await;
+    let response =
+        send_redis_command(port, "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$5\r\nfield\r\n").await;
     assert!(response.contains("value"));

     // Test EXISTS
     let response = send_redis_command(port, "*2\r\n$6\r\nEXISTS\r\n$3\r\nkey\r\n").await;
     assert!(response.contains("1"));

     // Test TTL
     let response = send_redis_command(port, "*2\r\n$3\r\nTTL\r\n$3\r\nkey\r\n").await;
     assert!(response.contains("-1")); // No expiration

     // Test TYPE
     let response = send_redis_command(port, "*2\r\n$4\r\nTYPE\r\n$3\r\nkey\r\n").await;
     assert!(response.contains("string"));

     // Test QUIT to close connection gracefully
-    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port)).await.unwrap();
-    stream.write_all("*1\r\n$4\r\nQUIT\r\n".as_bytes()).await.unwrap();
+    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port))
+        .await
+        .unwrap();
+    stream
+        .write_all("*1\r\n$4\r\nQUIT\r\n".as_bytes())
+        .await
+        .unwrap();
     let mut buffer = [0; 1024];
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
     assert!(response.contains("OK"));
|
||||||
|
|
||||||
// Ensure the stream is closed
|
// Ensure the stream is closed
|
||||||
stream.shutdown().await.unwrap();
|
stream.shutdown().await.unwrap();
|
||||||
|
|
||||||
// Stop the server
|
// Stop the server
|
||||||
server_handle.abort();
|
server_handle.abort();
|
||||||
|
|
||||||
println!("✅ All basic Redis functionality tests passed!");
|
println!("✅ All basic Redis functionality tests passed!");
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn test_hash_operations() {
|
async fn test_hash_operations() {
|
||||||
let (mut server, port) = start_test_server("hash_ops").await;
|
let (mut server, port) = start_test_server("hash_ops").await;
|
||||||
|
|
||||||
// Start server in background with timeout
|
// Start server in background with timeout
|
||||||
let server_handle = tokio::spawn(async move {
|
let server_handle = tokio::spawn(async move {
|
||||||
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
// Accept only a few connections for testing
|
// Accept only a few connections for testing
|
||||||
for _ in 0..5 {
|
for _ in 0..5 {
|
||||||
if let Ok((stream, _)) = listener.accept().await {
|
if let Ok((stream, _)) = listener.accept().await {
|
||||||
@@ -126,53 +140,57 @@ async fn test_hash_operations() {
             }
         }
     });

     sleep(Duration::from_millis(100)).await;

     // Test HSET multiple fields
     let response = send_redis_command(port, "*6\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n$6\r\nfield2\r\n$6\r\nvalue2\r\n").await;
     assert!(response.contains("2")); // 2 new fields

     // Test HGETALL
     let response = send_redis_command(port, "*2\r\n$7\r\nHGETALL\r\n$4\r\nhash\r\n").await;
     assert!(response.contains("field1"));
     assert!(response.contains("value1"));
     assert!(response.contains("field2"));
     assert!(response.contains("value2"));

     // Test HEXISTS
-    let response = send_redis_command(port, "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$6\r\nfield1\r\n").await;
+    let response = send_redis_command(
+        port,
+        "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$6\r\nfield1\r\n",
+    )
+    .await;
     assert!(response.contains("1"));

     // Test HLEN
     let response = send_redis_command(port, "*2\r\n$4\r\nHLEN\r\n$4\r\nhash\r\n").await;
     assert!(response.contains("2"));

     // Test HSCAN
     let response = send_redis_command(port, "*7\r\n$5\r\nHSCAN\r\n$4\r\nhash\r\n$1\r\n0\r\n$5\r\nMATCH\r\n$1\r\n*\r\n$5\r\nCOUNT\r\n$2\r\n10\r\n").await;
     assert!(response.contains("field1"));
     assert!(response.contains("value1"));
     assert!(response.contains("field2"));
     assert!(response.contains("value2"));

     // Stop the server
     // For hash operations, we don't have a persistent stream, so we'll just abort the server.
     // The server should handle closing its connections.
     server_handle.abort();

     println!("✅ All hash operations tests passed!");
 }

 #[tokio::test]
 async fn test_transaction_operations() {
     let (mut server, port) = start_test_server("transactions").await;

     // Start server in background with timeout
     let server_handle = tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();

         // Accept only a few connections for testing
         for _ in 0..5 {
             if let Ok((stream, _)) = listener.accept().await {
@@ -180,49 +198,69 @@ async fn test_transaction_operations() {
             }
         }
     });

     sleep(Duration::from_millis(100)).await;

     // Use a single connection for the transaction
-    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port)).await.unwrap();
+    let mut stream = TcpStream::connect(format!("127.0.0.1:{}", port))
+        .await
+        .unwrap();

     // Test MULTI
-    stream.write_all("*1\r\n$5\r\nMULTI\r\n".as_bytes()).await.unwrap();
+    stream
+        .write_all("*1\r\n$5\r\nMULTI\r\n".as_bytes())
+        .await
+        .unwrap();
     let mut buffer = [0; 1024];
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
     assert!(response.contains("OK"));

     // Test queued commands
-    stream.write_all("*3\r\n$3\r\nSET\r\n$4\r\nkey1\r\n$6\r\nvalue1\r\n".as_bytes()).await.unwrap();
+    stream
+        .write_all("*3\r\n$3\r\nSET\r\n$4\r\nkey1\r\n$6\r\nvalue1\r\n".as_bytes())
+        .await
+        .unwrap();
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
     assert!(response.contains("QUEUED"));

-    stream.write_all("*3\r\n$3\r\nSET\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n".as_bytes()).await.unwrap();
+    stream
+        .write_all("*3\r\n$3\r\nSET\r\n$4\r\nkey2\r\n$6\r\nvalue2\r\n".as_bytes())
+        .await
+        .unwrap();
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
     assert!(response.contains("QUEUED"));

     // Test EXEC
-    stream.write_all("*1\r\n$4\r\nEXEC\r\n".as_bytes()).await.unwrap();
+    stream
+        .write_all("*1\r\n$4\r\nEXEC\r\n".as_bytes())
+        .await
+        .unwrap();
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
     assert!(response.contains("OK")); // Should contain array of OK responses

     // Verify commands were executed
-    stream.write_all("*2\r\n$3\r\nGET\r\n$4\r\nkey1\r\n".as_bytes()).await.unwrap();
+    stream
+        .write_all("*2\r\n$3\r\nGET\r\n$4\r\nkey1\r\n".as_bytes())
+        .await
+        .unwrap();
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
     assert!(response.contains("value1"));

-    stream.write_all("*2\r\n$3\r\nGET\r\n$4\r\nkey2\r\n".as_bytes()).await.unwrap();
+    stream
+        .write_all("*2\r\n$3\r\nGET\r\n$4\r\nkey2\r\n".as_bytes())
+        .await
+        .unwrap();
     let n = stream.read(&mut buffer).await.unwrap();
     let response = String::from_utf8_lossy(&buffer[..n]);
     assert!(response.contains("value2"));

     // Stop the server
     server_handle.abort();

     println!("✅ All transaction operations tests passed!");
 }
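The transaction test above exercises the MULTI/QUEUED/EXEC contract: writes sent after MULTI are acknowledged with QUEUED and only become visible at EXEC. A minimal in-memory model of that observable behavior (a hypothetical sketch, not herodb's actual implementation):

```rust
use std::collections::HashMap;

// Hypothetical in-memory model of the MULTI/QUEUED/EXEC contract the test
// above exercises; this mirrors the observable behavior only.
struct Conn {
    store: HashMap<String, String>,
    queue: Option<Vec<(String, String)>>, // Some(..) while inside MULTI
}

impl Conn {
    fn new() -> Self {
        Conn { store: HashMap::new(), queue: None }
    }

    fn multi(&mut self) -> &'static str {
        self.queue = Some(Vec::new());
        "OK"
    }

    // SET is queued inside MULTI, applied immediately otherwise.
    fn set(&mut self, k: &str, v: &str) -> &'static str {
        match self.queue.as_mut() {
            Some(q) => {
                q.push((k.to_string(), v.to_string()));
                "QUEUED"
            }
            None => {
                self.store.insert(k.to_string(), v.to_string());
                "OK"
            }
        }
    }

    // EXEC drains the queue and applies every buffered write.
    fn exec(&mut self) -> usize {
        let q = self.queue.take().unwrap_or_default();
        let n = q.len();
        for (k, v) in q {
            self.store.insert(k, v);
        }
        n
    }
}

fn main() {
    let mut c = Conn::new();
    assert_eq!(c.multi(), "OK");
    assert_eq!(c.set("key1", "value1"), "QUEUED");
    assert_eq!(c.set("key2", "value2"), "QUEUED");
    assert!(c.store.is_empty()); // nothing visible before EXEC
    assert_eq!(c.exec(), 2);
    assert_eq!(c.store.get("key1").map(String::as_str), Some("value1"));
}
```

This is why the test asserts QUEUED after each SET and only checks the stored values after EXEC returns.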
@@ -1,4 +1,4 @@
-use herodb::{server::Server, options::DBOption};
+use herodb::{options::DBOption, server::Server};
 use std::time::Duration;
 use tokio::io::{AsyncReadExt, AsyncWriteExt};
 use tokio::net::TcpStream;
@@ -8,22 +8,23 @@ use tokio::time::sleep;
 async fn start_test_server(test_name: &str) -> (Server, u16) {
     use std::sync::atomic::{AtomicU16, Ordering};
     static PORT_COUNTER: AtomicU16 = AtomicU16::new(16500);

     let port = PORT_COUNTER.fetch_add(1, Ordering::SeqCst);
     let test_dir = format!("/tmp/herodb_simple_test_{}", test_name);

     // Clean up any existing test data
     let _ = std::fs::remove_dir_all(&test_dir);
     std::fs::create_dir_all(&test_dir).unwrap();

     let option = DBOption {
         dir: test_dir,
         port,
         debug: false,
         encrypt: false,
         encryption_key: None,
+        backend: herodb::options::BackendType::Redb,
     };

     let server = Server::new(option).await;
     (server, port)
 }
@@ -31,7 +32,7 @@ async fn start_test_server(test_name: &str) -> (Server, u16) {
 // Helper function to send command and get response
 async fn send_command(stream: &mut TcpStream, command: &str) -> String {
     stream.write_all(command.as_bytes()).await.unwrap();

     let mut buffer = [0; 1024];
     let n = stream.read(&mut buffer).await.unwrap();
     String::from_utf8_lossy(&buffer[..n]).to_string()
@@ -55,22 +56,22 @@ async fn connect_to_server(port: u16) -> TcpStream {
 #[tokio::test]
 async fn test_basic_ping_simple() {
     let (mut server, port) = start_test_server("ping").await;

     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();

         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });

     sleep(Duration::from_millis(200)).await;

     let mut stream = connect_to_server(port).await;
     let response = send_command(&mut stream, "*1\r\n$4\r\nPING\r\n").await;
     assert!(response.contains("PONG"));
@@ -79,31 +80,43 @@ async fn test_basic_ping_simple() {
 #[tokio::test]
 async fn test_hset_clean_db() {
     let (mut server, port) = start_test_server("hset_clean").await;

     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();

         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });

     sleep(Duration::from_millis(200)).await;

     let mut stream = connect_to_server(port).await;

     // Test HSET - should return 1 for new field
-    let response = send_command(&mut stream, "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n",
+    )
+    .await;
     println!("HSET response: {}", response);
-    assert!(response.contains("1"), "Expected HSET to return 1, got: {}", response);
+    assert!(
+        response.contains("1"),
+        "Expected HSET to return 1, got: {}",
+        response
+    );

     // Test HGET
-    let response = send_command(&mut stream, "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$4\r\nHGET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n",
+    )
+    .await;
     println!("HGET response: {}", response);
     assert!(response.contains("value1"));
 }
@@ -111,73 +124,101 @@ async fn test_hset_clean_db() {
 #[tokio::test]
 async fn test_type_command_simple() {
     let (mut server, port) = start_test_server("type").await;

     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();

         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });

     sleep(Duration::from_millis(200)).await;

     let mut stream = connect_to_server(port).await;

     // Test string type
-    send_command(&mut stream, "*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nvalue\r\n").await;
+    send_command(
+        &mut stream,
+        "*3\r\n$3\r\nSET\r\n$6\r\nstring\r\n$5\r\nvalue\r\n",
+    )
+    .await;
     let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$6\r\nstring\r\n").await;
     println!("TYPE string response: {}", response);
     assert!(response.contains("string"));

     // Test hash type
-    send_command(&mut stream, "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$5\r\nfield\r\n$5\r\nvalue\r\n").await;
+    send_command(
+        &mut stream,
+        "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$5\r\nfield\r\n$5\r\nvalue\r\n",
+    )
+    .await;
     let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$4\r\nhash\r\n").await;
     println!("TYPE hash response: {}", response);
     assert!(response.contains("hash"));

     // Test non-existent key
     let response = send_command(&mut stream, "*2\r\n$4\r\nTYPE\r\n$7\r\nnoexist\r\n").await;
     println!("TYPE noexist response: {}", response);
-    assert!(response.contains("none"), "Expected 'none' for non-existent key, got: {}", response);
+    assert!(
+        response.contains("none"),
+        "Expected 'none' for non-existent key, got: {}",
+        response
+    );
 }

 #[tokio::test]
 async fn test_hexists_simple() {
     let (mut server, port) = start_test_server("hexists").await;

     // Start server in background
     tokio::spawn(async move {
         let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
             .await
             .unwrap();

         loop {
             if let Ok((stream, _)) = listener.accept().await {
                 let _ = server.handle(stream).await;
             }
         }
     });

     sleep(Duration::from_millis(200)).await;

     let mut stream = connect_to_server(port).await;

     // Set up hash
-    send_command(&mut stream, "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n").await;
+    send_command(
+        &mut stream,
+        "*4\r\n$4\r\nHSET\r\n$4\r\nhash\r\n$6\r\nfield1\r\n$6\r\nvalue1\r\n",
+    )
+    .await;

     // Test HEXISTS for existing field
-    let response = send_command(&mut stream, "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$6\r\nfield1\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$6\r\nfield1\r\n",
+    )
+    .await;
     println!("HEXISTS existing field response: {}", response);
     assert!(response.contains("1"));

     // Test HEXISTS for non-existent field
-    let response = send_command(&mut stream, "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$7\r\nnoexist\r\n").await;
+    let response = send_command(
+        &mut stream,
+        "*3\r\n$7\r\nHEXISTS\r\n$4\r\nhash\r\n$7\r\nnoexist\r\n",
+    )
+    .await;
     println!("HEXISTS non-existent field response: {}", response);
-    assert!(response.contains("0"), "Expected HEXISTS to return 0 for non-existent field, got: {}", response);
+    assert!(
+        response.contains("0"),
+        "Expected HEXISTS to return 0 for non-existent field, got: {}",
+        response
+    );
 }
960 tests/usage_suite.rs Normal file
@@ -0,0 +1,960 @@
use herodb::{options::DBOption, server::Server};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio::time::{sleep, Duration};

// =========================
// Helpers
// =========================

async fn start_test_server(test_name: &str) -> (Server, u16) {
    use std::sync::atomic::{AtomicU16, Ordering};
    static PORT_COUNTER: AtomicU16 = AtomicU16::new(17100);
    let port = PORT_COUNTER.fetch_add(1, Ordering::SeqCst);

    let test_dir = format!("/tmp/herodb_usage_suite_{}", test_name);
    let _ = std::fs::remove_dir_all(&test_dir);
    std::fs::create_dir_all(&test_dir).unwrap();

    let option = DBOption {
        dir: test_dir,
        port,
        debug: false,
        encrypt: false,
        encryption_key: None,
        backend: herodb::options::BackendType::Redb,
    };

    let server = Server::new(option).await;
    (server, port)
}

async fn spawn_listener(server: Server, port: u16) {
    tokio::spawn(async move {
        let listener = tokio::net::TcpListener::bind(format!("127.0.0.1:{}", port))
            .await
            .expect("bind listener");
        loop {
            match listener.accept().await {
                Ok((stream, _)) => {
                    let mut s_clone = server.clone();
                    tokio::spawn(async move {
                        let _ = s_clone.handle(stream).await;
                    });
                }
                Err(_e) => break,
            }
        }
    });
}

/// Build RESP array for args ["PING"] -> "*1\r\n$4\r\nPING\r\n"
fn build_resp(args: &[&str]) -> String {
    let mut s = format!("*{}\r\n", args.len());
    for a in args {
        s.push_str(&format!("${}\r\n{}\r\n", a.len(), a));
    }
    s
}
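The `build_resp` helper above emits standard RESP arrays: an `*<count>` header followed by one `$<len>` bulk string per argument. A standalone copy can verify the encoding outside the test crate (no herodb imports needed):

```rust
// Standalone copy of the build_resp helper above so the RESP encoding
// can be checked in isolation, without the herodb test crate.
fn build_resp(args: &[&str]) -> String {
    let mut s = format!("*{}\r\n", args.len());
    for a in args {
        s.push_str(&format!("${}\r\n{}\r\n", a.len(), a));
    }
    s
}

fn main() {
    // An array header "*<count>" followed by one "$<len>" bulk string per argument.
    assert_eq!(build_resp(&["PING"]), "*1\r\n$4\r\nPING\r\n");
    assert_eq!(
        build_resp(&["SET", "key", "value"]),
        "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n"
    );
}
```

These are exactly the wire strings hand-written in the earlier test files, which is why the suites can assert on them byte for byte.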

async fn connect(port: u16) -> TcpStream {
    let mut attempts = 0;
    loop {
        match TcpStream::connect(format!("127.0.0.1:{}", port)).await {
            Ok(s) => return s,
            Err(_) if attempts < 30 => {
                attempts += 1;
                sleep(Duration::from_millis(100)).await;
            }
            Err(e) => panic!("Failed to connect: {}", e),
        }
    }
}

fn find_crlf(buf: &[u8], start: usize) -> Option<usize> {
    let mut i = start;
    while i + 1 < buf.len() {
        if buf[i] == b'\r' && buf[i + 1] == b'\n' {
            return Some(i);
        }
        i += 1;
    }
    None
}

fn parse_number_i64(buf: &[u8], start: usize, end: usize) -> Option<i64> {
    let s = std::str::from_utf8(&buf[start..end]).ok()?;
    s.parse::<i64>().ok()
}

// Return number of bytes that make up a complete RESP element starting at 'i', or None if incomplete.
fn parse_elem(buf: &[u8], i: usize) -> Option<usize> {
    if i >= buf.len() {
        return None;
    }
    match buf[i] {
        b'+' | b'-' | b':' => {
            let end = find_crlf(buf, i + 1)?;
            Some(end + 2 - i)
        }
        b'$' => {
            let hdr_end = find_crlf(buf, i + 1)?;
            let n = parse_number_i64(buf, i + 1, hdr_end)?;
            if n < 0 {
                // Null bulk string: only header
                Some(hdr_end + 2 - i)
            } else {
                let need = hdr_end + 2 + (n as usize) + 2;
                if need <= buf.len() {
                    Some(need - i)
                } else {
                    None
                }
            }
        }
        b'*' => {
            let hdr_end = find_crlf(buf, i + 1)?;
            let n = parse_number_i64(buf, i + 1, hdr_end)?;
            if n < 0 {
                // Null array: only header
                Some(hdr_end + 2 - i)
            } else {
                let mut j = hdr_end + 2;
                for _ in 0..(n as usize) {
                    let consumed = parse_elem(buf, j)?;
                    j += consumed;
                }
                Some(j - i)
            }
        }
        _ => None,
    }
}

fn resp_frame_len(buf: &[u8]) -> Option<usize> {
    parse_elem(buf, 0)
}
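The length arithmetic in the `b'$'` branch above can be checked by hand: for `$5\r\nhello\r\n`, the header CRLF starts at index 2, and the frame is complete once `hdr_end + 2 + n + 2` bytes are buffered. A minimal standalone sketch of just that branch (non-negative lengths only; null bulk strings are omitted here):

```rust
// Minimal standalone check of the bulk-string branch of parse_elem above:
// a frame "$<n>\r\n<payload>\r\n" is complete once hdr_end + 2 + n + 2 bytes are buffered.
fn bulk_len(buf: &[u8]) -> Option<usize> {
    if buf.first() != Some(&b'$') {
        return None;
    }
    // Find the CRLF terminating the "$<n>" header (header digits contain no CR).
    let hdr_end = buf.windows(2).position(|w| w == b"\r\n")?;
    let n: usize = std::str::from_utf8(&buf[1..hdr_end]).ok()?.parse().ok()?;
    let need = hdr_end + 2 + n + 2;
    (buf.len() >= need).then_some(need)
}

fn main() {
    assert_eq!(bulk_len(b"$5\r\nhello\r\n"), Some(11)); // complete frame
    assert_eq!(bulk_len(b"$5\r\nhel"), None); // incomplete: wait for more bytes
}
```

Returning `None` for an incomplete frame is what lets the read loop below keep appending bytes until a whole reply has arrived.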

async fn read_full_resp(stream: &mut TcpStream) -> String {
    let mut buf: Vec<u8> = Vec::with_capacity(8192);
    let mut tmp = vec![0u8; 4096];

    loop {
        if let Some(total) = resp_frame_len(&buf) {
            if buf.len() >= total {
                return String::from_utf8_lossy(&buf[..total]).to_string();
            }
        }

        match tokio::time::timeout(Duration::from_secs(2), stream.read(&mut tmp)).await {
            Ok(Ok(n)) => {
                if n == 0 {
                    if let Some(total) = resp_frame_len(&buf) {
                        if buf.len() >= total {
                            return String::from_utf8_lossy(&buf[..total]).to_string();
                        }
                    }
                    return String::from_utf8_lossy(&buf).to_string();
                }
                buf.extend_from_slice(&tmp[..n]);
            }
            Ok(Err(e)) => panic!("read error: {}", e),
            Err(_) => panic!("timeout waiting for reply"),
        }

        if buf.len() > 8 * 1024 * 1024 {
            panic!("reply too large");
        }
    }
}

async fn send_cmd(stream: &mut TcpStream, args: &[&str]) -> String {
    let req = build_resp(args);
    stream.write_all(req.as_bytes()).await.unwrap();
    read_full_resp(stream).await
}

// Assert helpers with clearer output
fn assert_contains(haystack: &str, needle: &str, ctx: &str) {
    assert!(
        haystack.contains(needle),
        "ASSERT CONTAINS failed: '{}' not found in response.\nContext: {}\nResponse:\n{}",
        needle,
        ctx,
        haystack
    );
}

fn assert_eq_resp(actual: &str, expected: &str, ctx: &str) {
    assert!(
        actual == expected,
        "ASSERT EQUAL failed.\nContext: {}\nExpected:\n{:?}\nActual:\n{:?}",
        ctx,
        expected,
        actual
    );
}

/// Extract the payload of a single RESP Bulk String reply.
/// Example input:
/// "$5\r\nhello\r\n" -> Some("hello".to_string())
fn extract_bulk_payload(resp: &str) -> Option<String> {
    // find first CRLF after "$len"
    let first = resp.find("\r\n")?;
    let after = &resp[(first + 2)..];
    // find next CRLF ending payload
    let second = after.find("\r\n")?;
    Some(after[..second].to_string())
}
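As a sanity check, the docstring example on `extract_bulk_payload` above can be reproduced standalone; note that a null bulk string (`$-1\r\n`) has no payload CRLF, so the helper naturally yields `None`:

```rust
// Standalone copy of extract_bulk_payload from the suite above, for a quick check.
fn extract_bulk_payload(resp: &str) -> Option<String> {
    // Skip the "$<len>\r\n" header, then take everything up to the next CRLF.
    let first = resp.find("\r\n")?;
    let after = &resp[(first + 2)..];
    let second = after.find("\r\n")?;
    Some(after[..second].to_string())
}

fn main() {
    assert_eq!(extract_bulk_payload("$5\r\nhello\r\n").as_deref(), Some("hello"));
    // Null bulk string: nothing follows the header, so there is no payload CRLF.
    assert_eq!(extract_bulk_payload("$-1\r\n"), None);
}
```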
|
// =========================
|
||||||
|
// Test suites
|
||||||
|
// =========================
|
||||||
|
|
||||||
|
#[tokio::test]
async fn test_01_connection_and_info() {
    let (server, port) = start_test_server("conn_info").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // redis-cli may send COMMAND DOCS, our server replies empty array; harmless.
    let pong = send_cmd(&mut s, &["PING"]).await;
    assert_contains(&pong, "PONG", "PING should return PONG");

    let echo = send_cmd(&mut s, &["ECHO", "hello"]).await;
    assert_contains(&echo, "hello", "ECHO hello");

    // INFO (general)
    let info = send_cmd(&mut s, &["INFO"]).await;
    assert_contains(&info, "redis_version", "INFO should include redis_version");

    // INFO REPLICATION (static stub)
    let repl = send_cmd(&mut s, &["INFO", "replication"]).await;
    assert_contains(&repl, "role:master", "INFO replication role");

    // CONFIG GET subset
    let cfg = send_cmd(&mut s, &["CONFIG", "GET", "databases"]).await;
    assert_contains(&cfg, "databases", "CONFIG GET databases");
    assert_contains(&cfg, "16", "CONFIG GET databases value");

    // CLIENT name
    let setname = send_cmd(&mut s, &["CLIENT", "SETNAME", "myapp"]).await;
    assert_contains(&setname, "OK", "CLIENT SETNAME");

    let getname = send_cmd(&mut s, &["CLIENT", "GETNAME"]).await;
    assert_contains(&getname, "myapp", "CLIENT GETNAME");

    // SELECT db
    let sel = send_cmd(&mut s, &["SELECT", "0"]).await;
    assert_contains(&sel, "OK", "SELECT 0");

    // QUIT should close connection after sending OK
    let quit = send_cmd(&mut s, &["QUIT"]).await;
    assert_contains(&quit, "OK", "QUIT should return OK");
}

#[tokio::test]
async fn test_02_strings_and_expiry() {
    let (server, port) = start_test_server("strings").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // SET / GET
    let set = send_cmd(&mut s, &["SET", "user:1", "alice"]).await;
    assert_contains(&set, "OK", "SET user:1 alice");

    let get = send_cmd(&mut s, &["GET", "user:1"]).await;
    assert_contains(&get, "alice", "GET user:1");

    // EXISTS / DEL
    let ex1 = send_cmd(&mut s, &["EXISTS", "user:1"]).await;
    assert_contains(&ex1, "1", "EXISTS user:1");

    let del = send_cmd(&mut s, &["DEL", "user:1"]).await;
    assert_contains(&del, "1", "DEL user:1");

    let ex0 = send_cmd(&mut s, &["EXISTS", "user:1"]).await;
    assert_contains(&ex0, "0", "EXISTS after DEL");

    // INCR behavior
    let i1 = send_cmd(&mut s, &["INCR", "count"]).await;
    assert_contains(&i1, "1", "INCR new key -> 1");
    let i2 = send_cmd(&mut s, &["INCR", "count"]).await;
    assert_contains(&i2, "2", "INCR existing -> 2");
    let _ = send_cmd(&mut s, &["SET", "notnum", "abc"]).await;
    let ierr = send_cmd(&mut s, &["INCR", "notnum"]).await;
    assert_contains(&ierr, "ERR", "INCR on non-numeric should ERR");

    // Expiration via SET EX
    let setex = send_cmd(&mut s, &["SET", "tmp:1", "boom", "EX", "1"]).await;
    assert_contains(&setex, "OK", "SET tmp:1 EX 1");

    let g_immediate = send_cmd(&mut s, &["GET", "tmp:1"]).await;
    assert_contains(&g_immediate, "boom", "GET tmp:1 immediately");

    let ttl = send_cmd(&mut s, &["TTL", "tmp:1"]).await;
    // Implementation returns a SimpleString, accept any numeric content
    assert!(
        ttl.contains("1") || ttl.contains("0"),
        "TTL should be 1 or 0, got: {}",
        ttl
    );

    sleep(Duration::from_millis(1100)).await;
    let g_after = send_cmd(&mut s, &["GET", "tmp:1"]).await;
    assert_contains(&g_after, "$-1", "GET tmp:1 after expiry -> Null");

    // TYPE
    let _ = send_cmd(&mut s, &["SET", "t", "v"]).await;
    let ty = send_cmd(&mut s, &["TYPE", "t"]).await;
    assert_contains(&ty, "string", "TYPE string key");
    let ty_none = send_cmd(&mut s, &["TYPE", "noexist"]).await;
    assert_contains(&ty_none, "none", "TYPE nonexistent");
}

#[tokio::test]
async fn test_03_scan_and_keys() {
    let (server, port) = start_test_server("scan").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    for i in 0..5 {
        let _ = send_cmd(
            &mut s,
            &["SET", &format!("key{}", i), &format!("value{}", i)],
        )
        .await;
    }

    let scan = send_cmd(&mut s, &["SCAN", "0", "MATCH", "key*", "COUNT", "10"]).await;
    assert_contains(&scan, "key0", "SCAN should return keys with MATCH");
    assert_contains(&scan, "key4", "SCAN should return last key");

    let keys = send_cmd(&mut s, &["KEYS", "*"]).await;
    assert_contains(&keys, "key0", "KEYS * includes key0");
    assert_contains(&keys, "key4", "KEYS * includes key4");
}

#[tokio::test]
async fn test_04_hashes_suite() {
    let (server, port) = start_test_server("hashes").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // HSET (single, returns number of new fields)
    let h1 = send_cmd(&mut s, &["HSET", "profile:1", "name", "alice"]).await;
    assert_contains(&h1, "1", "HSET new field -> 1");

    // HGET
    let hg = send_cmd(&mut s, &["HGET", "profile:1", "name"]).await;
    assert_contains(&hg, "alice", "HGET existing field");

    // HSET multiple
    let h2 = send_cmd(&mut s, &["HSET", "profile:1", "age", "30", "city", "paris"]).await;
    assert_contains(&h2, "2", "HSET added 2 new fields");

    // HMGET
    let hmg = send_cmd(
        &mut s,
        &["HMGET", "profile:1", "name", "age", "city", "nope"],
    )
    .await;
    assert_contains(&hmg, "alice", "HMGET name");
    assert_contains(&hmg, "30", "HMGET age");
    assert_contains(&hmg, "paris", "HMGET city");
    assert_contains(&hmg, "$-1", "HMGET non-existent -> Null");

    // HGETALL
    let hga = send_cmd(&mut s, &["HGETALL", "profile:1"]).await;
    assert_contains(&hga, "name", "HGETALL contains name");
    assert_contains(&hga, "alice", "HGETALL contains alice");

    // HLEN
    let hlen = send_cmd(&mut s, &["HLEN", "profile:1"]).await;
    assert_contains(&hlen, "3", "HLEN is 3");

    // HEXISTS
    let hex1 = send_cmd(&mut s, &["HEXISTS", "profile:1", "age"]).await;
    assert_contains(&hex1, "1", "HEXISTS age true");
    let hex0 = send_cmd(&mut s, &["HEXISTS", "profile:1", "nope"]).await;
    assert_contains(&hex0, "0", "HEXISTS nope false");

    // HKEYS / HVALS
    let hkeys = send_cmd(&mut s, &["HKEYS", "profile:1"]).await;
    assert_contains(&hkeys, "name", "HKEYS includes name");
    let hvals = send_cmd(&mut s, &["HVALS", "profile:1"]).await;
    assert_contains(&hvals, "alice", "HVALS includes alice");

    // HSETNX
    let hnx0 = send_cmd(&mut s, &["HSETNX", "profile:1", "name", "bob"]).await;
    assert_contains(&hnx0, "0", "HSETNX existing field -> 0");
    let hnx1 = send_cmd(&mut s, &["HSETNX", "profile:1", "nickname", "ali"]).await;
    assert_contains(&hnx1, "1", "HSETNX new field -> 1");

    // HSCAN
    let hscan = send_cmd(
        &mut s,
        &["HSCAN", "profile:1", "0", "MATCH", "n*", "COUNT", "10"],
    )
    .await;
    assert_contains(&hscan, "name", "HSCAN matches fields starting with n");
    assert_contains(&hscan, "nickname", "HSCAN nickname present");

    // HDEL
    let hdel = send_cmd(&mut s, &["HDEL", "profile:1", "city", "age"]).await;
    assert_contains(&hdel, "2", "HDEL removed two fields");
}

#[tokio::test]
async fn test_05_lists_suite_including_blpop() {
    let (server, port) = start_test_server("lists").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut a = connect(port).await;

    // LPUSH / RPUSH / LLEN
    let lp = send_cmd(&mut a, &["LPUSH", "q:jobs", "a", "b"]).await;
    assert_contains(&lp, "2", "LPUSH added 2, length 2");

    let rp = send_cmd(&mut a, &["RPUSH", "q:jobs", "c"]).await;
    assert_contains(&rp, "3", "RPUSH now length 3");

    let llen = send_cmd(&mut a, &["LLEN", "q:jobs"]).await;
    assert_contains(&llen, "3", "LLEN 3");

    // LINDEX / LRANGE
    let lidx = send_cmd(&mut a, &["LINDEX", "q:jobs", "0"]).await;
    assert_eq_resp(&lidx, "$1\r\nb\r\n", "LINDEX q:jobs 0 should be b");

    let lr = send_cmd(&mut a, &["LRANGE", "q:jobs", "0", "-1"]).await;
    assert_eq_resp(
        &lr,
        "*3\r\n$1\r\nb\r\n$1\r\na\r\n$1\r\nc\r\n",
        "LRANGE q:jobs 0 -1 should be [b,a,c]",
    );

    // LTRIM
    let ltrim = send_cmd(&mut a, &["LTRIM", "q:jobs", "0", "1"]).await;
    assert_contains(&ltrim, "OK", "LTRIM OK");
    let lr_post = send_cmd(&mut a, &["LRANGE", "q:jobs", "0", "-1"]).await;
    assert_eq_resp(
        &lr_post,
        "*2\r\n$1\r\nb\r\n$1\r\na\r\n",
        "After LTRIM, list [b,a]",
    );

    // LREM remove first occurrence of b
    let lrem = send_cmd(&mut a, &["LREM", "q:jobs", "1", "b"]).await;
    assert_contains(&lrem, "1", "LREM removed 1");

    // LPOP and RPOP
    let lpop1 = send_cmd(&mut a, &["LPOP", "q:jobs"]).await;
    assert_contains(&lpop1, "$1\r\na\r\n", "LPOP returns a");
    let rpop_empty = send_cmd(&mut a, &["RPOP", "q:jobs"]).await; // empty now
    assert_contains(&rpop_empty, "$-1", "RPOP on empty -> Null");

    // LPOP with count on empty -> []
    let lpop0 = send_cmd(&mut a, &["LPOP", "q:jobs", "2"]).await;
    assert_eq_resp(
        &lpop0,
        "*0\r\n",
        "LPOP with count on empty returns empty array",
    );

    // BLPOP: block on one client, push from another
    let c1 = connect(port).await;
    let mut c2 = connect(port).await;

    // Start BLPOP on c1
    let blpop_task = tokio::spawn(async move {
        let mut c1_local = c1;
        send_cmd(&mut c1_local, &["BLPOP", "q:block", "5"]).await
    });

    // Give it time to register waiter
    sleep(Duration::from_millis(150)).await;

    // Push from c2 to wake BLPOP
    let _ = send_cmd(&mut c2, &["LPUSH", "q:block", "x"]).await;

    // Await BLPOP result
    let blpop_res = blpop_task.await.expect("BLPOP task join");
    assert_contains(&blpop_res, "q:block", "BLPOP returned key");
    assert_contains(&blpop_res, "x", "BLPOP returned element");
}

#[tokio::test]
async fn test_06_flushdb_suite() {
    let (server, port) = start_test_server("flushdb").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    let _ = send_cmd(&mut s, &["SET", "k1", "v1"]).await;
    let _ = send_cmd(&mut s, &["HSET", "h1", "f", "v"]).await;
    let _ = send_cmd(&mut s, &["LPUSH", "l1", "a"]).await;

    let keys_before = send_cmd(&mut s, &["KEYS", "*"]).await;
    assert_contains(&keys_before, "k1", "have string key before FLUSHDB");
    assert_contains(&keys_before, "h1", "have hash key before FLUSHDB");
    assert_contains(&keys_before, "l1", "have list key before FLUSHDB");

    let fl = send_cmd(&mut s, &["FLUSHDB"]).await;
    assert_contains(&fl, "OK", "FLUSHDB OK");

    let keys_after = send_cmd(&mut s, &["KEYS", "*"]).await;
    assert_eq_resp(&keys_after, "*0\r\n", "DB should be empty after FLUSHDB");
}

#[tokio::test]
async fn test_07_age_stateless_suite() {
    let (server, port) = start_test_server("age_stateless").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // GENENC -> [recipient, identity]
    let gen = send_cmd(&mut s, &["AGE", "GENENC"]).await;
    assert!(
        gen.starts_with("*2\r\n$"),
        "AGE GENENC should return array [recipient, identity], got:\n{}",
        gen
    );

    // Parse simple RESP array of two bulk strings to extract keys
    fn parse_two_bulk_array(resp: &str) -> (String, String) {
        // naive parse for tests
        let mut lines = resp.lines();
        let _ = lines.next(); // *2
        // $len
        let _ = lines.next();
        let recip = lines.next().unwrap_or("").to_string();
        let _ = lines.next();
        let ident = lines.next().unwrap_or("").to_string();
        (recip, ident)
    }
    let (recipient, identity) = parse_two_bulk_array(&gen);
    assert!(
        recipient.starts_with("age1") && identity.starts_with("AGE-SECRET-KEY-1"),
        "Unexpected AGE key formats.\nrecipient: {}\nidentity: {}",
        recipient,
        identity
    );

    // ENCRYPT / DECRYPT
    let ct = send_cmd(&mut s, &["AGE", "ENCRYPT", &recipient, "hello world"]).await;
    let ct_b64 = extract_bulk_payload(&ct).expect("Failed to parse bulk payload from ENCRYPT");
    let pt = send_cmd(&mut s, &["AGE", "DECRYPT", &identity, &ct_b64]).await;
    assert_contains(&pt, "hello world", "AGE DECRYPT round-trip");

    // GENSIGN -> [verify_pub_b64, sign_secret_b64]
    let gensign = send_cmd(&mut s, &["AGE", "GENSIGN"]).await;
    let (verify_pub, sign_secret) = parse_two_bulk_array(&gensign);
    assert!(
        !verify_pub.is_empty() && !sign_secret.is_empty(),
        "GENSIGN returned empty keys"
    );

    // SIGN / VERIFY
    let sig = send_cmd(&mut s, &["AGE", "SIGN", &sign_secret, "msg"]).await;
    let sig_b64 = extract_bulk_payload(&sig).expect("Failed to parse bulk payload from SIGN");
    let v_ok = send_cmd(&mut s, &["AGE", "VERIFY", &verify_pub, "msg", &sig_b64]).await;
    assert_contains(&v_ok, "1", "VERIFY should be 1 for valid signature");

    let v_bad = send_cmd(
        &mut s,
        &["AGE", "VERIFY", &verify_pub, "tampered", &sig_b64],
    )
    .await;
    assert_contains(
        &v_bad,
        "0",
        "VERIFY should be 0 for invalid message/signature",
    );
}

#[tokio::test]
async fn test_08_age_persistent_named_suite() {
    let (server, port) = start_test_server("age_persistent").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // KEYGEN + ENCRYPTNAME/DECRYPTNAME
    let kg = send_cmd(&mut s, &["AGE", "KEYGEN", "app1"]).await;
    assert!(
        kg.starts_with("*2\r\n"),
        "AGE KEYGEN should return [recipient, identity], got:\n{}",
        kg
    );

    let ct = send_cmd(&mut s, &["AGE", "ENCRYPTNAME", "app1", "hello"]).await;
    let ct_b64 = extract_bulk_payload(&ct).expect("Failed to parse bulk payload from ENCRYPTNAME");
    let pt = send_cmd(&mut s, &["AGE", "DECRYPTNAME", "app1", &ct_b64]).await;
    assert_contains(&pt, "hello", "DECRYPTNAME round-trip");

    // SIGNKEYGEN + SIGNNAME/VERIFYNAME
    let skg = send_cmd(&mut s, &["AGE", "SIGNKEYGEN", "app1"]).await;
    assert!(
        skg.starts_with("*2\r\n"),
        "AGE SIGNKEYGEN should return [verify_pub, sign_secret], got:\n{}",
        skg
    );

    let sig = send_cmd(&mut s, &["AGE", "SIGNNAME", "app1", "m"]).await;
    let sig_b64 = extract_bulk_payload(&sig).expect("Failed to parse bulk payload from SIGNNAME");
    let v1 = send_cmd(&mut s, &["AGE", "VERIFYNAME", "app1", "m", &sig_b64]).await;
    assert_contains(&v1, "1", "VERIFYNAME valid => 1");

    let v0 = send_cmd(&mut s, &["AGE", "VERIFYNAME", "app1", "bad", &sig_b64]).await;
    assert_contains(&v0, "0", "VERIFYNAME invalid => 0");

    // AGE LIST
    let lst = send_cmd(&mut s, &["AGE", "LIST"]).await;
    assert_contains(&lst, "encpub", "AGE LIST label encpub");
    assert_contains(&lst, "app1", "AGE LIST includes app1");
}

#[tokio::test]
async fn test_10_expire_pexpire_persist() {
    let (server, port) = start_test_server("expire_suite").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // EXPIRE: seconds
    let _ = send_cmd(&mut s, &["SET", "exp:s", "v"]).await;
    let ex = send_cmd(&mut s, &["EXPIRE", "exp:s", "1"]).await;
    assert_contains(&ex, "1", "EXPIRE exp:s 1 -> 1 (applied)");
    let ttl1 = send_cmd(&mut s, &["TTL", "exp:s"]).await;
    assert!(
        ttl1.contains("1") || ttl1.contains("0"),
        "TTL exp:s should be 1 or 0, got: {}",
        ttl1
    );
    sleep(Duration::from_millis(1100)).await;
    let get_after = send_cmd(&mut s, &["GET", "exp:s"]).await;
    assert_contains(&get_after, "$-1", "GET after expiry should be Null");
    let ttl_after = send_cmd(&mut s, &["TTL", "exp:s"]).await;
    assert_contains(&ttl_after, "-2", "TTL after expiry -> -2");
    let exists_after = send_cmd(&mut s, &["EXISTS", "exp:s"]).await;
    assert_contains(&exists_after, "0", "EXISTS after expiry -> 0");

    // PEXPIRE: milliseconds
    let _ = send_cmd(&mut s, &["SET", "exp:ms", "v"]).await;
    let pex = send_cmd(&mut s, &["PEXPIRE", "exp:ms", "1500"]).await;
    assert_contains(&pex, "1", "PEXPIRE exp:ms 1500 -> 1 (applied)");
    let ttl_ms1 = send_cmd(&mut s, &["TTL", "exp:ms"]).await;
    assert!(
        ttl_ms1.contains("1") || ttl_ms1.contains("0"),
        "TTL exp:ms should be 1 or 0 soon after PEXPIRE, got: {}",
        ttl_ms1
    );
    sleep(Duration::from_millis(1600)).await;
    let exists_ms_after = send_cmd(&mut s, &["EXISTS", "exp:ms"]).await;
    assert_contains(&exists_ms_after, "0", "EXISTS exp:ms after ms expiry -> 0");

    // PERSIST: remove expiration
    let _ = send_cmd(&mut s, &["SET", "exp:persist", "v"]).await;
    let _ = send_cmd(&mut s, &["EXPIRE", "exp:persist", "5"]).await;
    let ttl_pre = send_cmd(&mut s, &["TTL", "exp:persist"]).await;
    assert!(
        ttl_pre.contains("5")
            || ttl_pre.contains("4")
            || ttl_pre.contains("3")
            || ttl_pre.contains("2")
            || ttl_pre.contains("1")
            || ttl_pre.contains("0"),
        "TTL exp:persist should be >=0 before persist, got: {}",
        ttl_pre
    );
    let persist1 = send_cmd(&mut s, &["PERSIST", "exp:persist"]).await;
    assert_contains(&persist1, "1", "PERSIST should remove expiration");
    let ttl_post = send_cmd(&mut s, &["TTL", "exp:persist"]).await;
    assert_contains(&ttl_post, "-1", "TTL after PERSIST -> -1 (no expiration)");
    // Second persist should return 0 (nothing to remove)
    let persist2 = send_cmd(&mut s, &["PERSIST", "exp:persist"]).await;
    assert_contains(
        &persist2,
        "0",
        "PERSIST again -> 0 (no expiration to remove)",
    );
}

#[tokio::test]
async fn test_11_set_with_options() {
    let (server, port) = start_test_server("set_opts").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // SET with GET on non-existing key -> returns Null, sets value
    let set_get1 = send_cmd(&mut s, &["SET", "s1", "v1", "GET"]).await;
    assert_contains(
        &set_get1,
        "$-1",
        "SET s1 v1 GET returns Null when key didn't exist",
    );
    let g1 = send_cmd(&mut s, &["GET", "s1"]).await;
    assert_contains(&g1, "v1", "GET s1 after first SET");

    // SET with GET should return old value, then set to new
    let set_get2 = send_cmd(&mut s, &["SET", "s1", "v2", "GET"]).await;
    assert_contains(&set_get2, "v1", "SET s1 v2 GET returns previous value v1");
    let g2 = send_cmd(&mut s, &["GET", "s1"]).await;
    assert_contains(&g2, "v2", "GET s1 now v2");

    // NX prevents update when key exists; with GET should return Null and not change
    let set_nx = send_cmd(&mut s, &["SET", "s1", "v3", "NX", "GET"]).await;
    assert_contains(&set_nx, "$-1", "SET s1 v3 NX GET returns Null when not set");
    let g3 = send_cmd(&mut s, &["GET", "s1"]).await;
    assert_contains(&g3, "v2", "GET s1 remains v2 after NX prevented write");

    // NX allows set when key does not exist
    let set_nx2 = send_cmd(&mut s, &["SET", "s2", "v10", "NX"]).await;
    assert_contains(&set_nx2, "OK", "SET s2 v10 NX -> OK for new key");
    let g4 = send_cmd(&mut s, &["GET", "s2"]).await;
    assert_contains(&g4, "v10", "GET s2 is v10");

    // XX requires existing key; with GET returns old value and sets new
    let set_xx = send_cmd(&mut s, &["SET", "s2", "v11", "XX", "GET"]).await;
    assert_contains(&set_xx, "v10", "SET s2 v11 XX GET returns previous v10");
    let g5 = send_cmd(&mut s, &["GET", "s2"]).await;
    assert_contains(&g5, "v11", "GET s2 is now v11");

    // PX expiration path via SET options
    let set_px = send_cmd(&mut s, &["SET", "s3", "vpx", "PX", "500"]).await;
    assert_contains(&set_px, "OK", "SET s3 vpx PX 500 -> OK");
    let ttl_px1 = send_cmd(&mut s, &["TTL", "s3"]).await;
    assert!(
        ttl_px1.contains("0") || ttl_px1.contains("1"),
        "TTL s3 immediately after PX should be 1 or 0, got: {}",
        ttl_px1
    );
    sleep(Duration::from_millis(650)).await;
    let g6 = send_cmd(&mut s, &["GET", "s3"]).await;
    assert_contains(&g6, "$-1", "GET s3 after PX expiry -> Null");
}

#[tokio::test]
async fn test_09_mget_mset_and_variadic_exists_del() {
    let (server, port) = start_test_server("mget_mset_variadic").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // MSET multiple keys
    let mset = send_cmd(&mut s, &["MSET", "k1", "v1", "k2", "v2", "k3", "v3"]).await;
    assert_contains(&mset, "OK", "MSET k1 v1 k2 v2 k3 v3 -> OK");

    // MGET should return values and Null for missing
    let mget = send_cmd(&mut s, &["MGET", "k1", "k2", "nope", "k3"]).await;
    // Expect an array with 4 entries; verify payloads
    assert_contains(&mget, "v1", "MGET k1");
    assert_contains(&mget, "v2", "MGET k2");
    assert_contains(&mget, "v3", "MGET k3");
    assert_contains(&mget, "$-1", "MGET missing returns Null");

    // EXISTS variadic: count how many exist
    let exists_multi = send_cmd(&mut s, &["EXISTS", "k1", "nope", "k3"]).await;
    // Server returns SimpleString numeric, e.g. +2
    assert_contains(&exists_multi, "2", "EXISTS k1 nope k3 -> 2");

    // DEL variadic: delete multiple keys, return count deleted
    let del_multi = send_cmd(&mut s, &["DEL", "k1", "k3", "nope"]).await;
    assert_contains(&del_multi, "2", "DEL k1 k3 nope -> 2");

    // Verify deletion
    let exists_after = send_cmd(&mut s, &["EXISTS", "k1", "k3"]).await;
    assert_contains(&exists_after, "0", "EXISTS k1 k3 after DEL -> 0");

    // MGET after deletion should include Nulls for deleted keys
    let mget_after = send_cmd(&mut s, &["MGET", "k1", "k2", "k3"]).await;
    assert_contains(&mget_after, "$-1", "MGET k1 after DEL -> Null");
    assert_contains(&mget_after, "v2", "MGET k2 remains");
    assert_contains(&mget_after, "$-1", "MGET k3 after DEL -> Null");
}

#[tokio::test]
async fn test_12_hash_incr() {
    let (server, port) = start_test_server("hash_incr").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // Integer increments
    let _ = send_cmd(&mut s, &["HSET", "hinc", "a", "1"]).await;
    let r1 = send_cmd(&mut s, &["HINCRBY", "hinc", "a", "2"]).await;
    assert_contains(&r1, "3", "HINCRBY hinc a 2 -> 3");

    let r2 = send_cmd(&mut s, &["HINCRBY", "hinc", "a", "-1"]).await;
    assert_contains(&r2, "2", "HINCRBY hinc a -1 -> 2");

    let r3 = send_cmd(&mut s, &["HINCRBY", "hinc", "b", "5"]).await;
    assert_contains(&r3, "5", "HINCRBY hinc b 5 -> 5");

    // HINCRBY error on non-integer field
    let _ = send_cmd(&mut s, &["HSET", "hinc", "s", "x"]).await;
    let r_err = send_cmd(&mut s, &["HINCRBY", "hinc", "s", "1"]).await;
    assert_contains(&r_err, "ERR", "HINCRBY on non-integer field should ERR");

    // Float increments
    let r4 = send_cmd(&mut s, &["HINCRBYFLOAT", "hinc", "f", "1.5"]).await;
    assert_contains(&r4, "1.5", "HINCRBYFLOAT hinc f 1.5 -> 1.5");

    let r5 = send_cmd(&mut s, &["HINCRBYFLOAT", "hinc", "f", "2.5"]).await;
    // Could be "4", "4.0", or "4.000000", accept "4" substring
    assert_contains(&r5, "4", "HINCRBYFLOAT hinc f 2.5 -> 4");

    // HINCRBYFLOAT error on non-float field
    let _ = send_cmd(&mut s, &["HSET", "hinc", "notf", "abc"]).await;
    let r6 = send_cmd(&mut s, &["HINCRBYFLOAT", "hinc", "notf", "1"]).await;
    assert_contains(&r6, "ERR", "HINCRBYFLOAT on non-float field should ERR");
}

#[tokio::test]
async fn test_05b_brpop_suite() {
    let (server, port) = start_test_server("lists_brpop").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut a = connect(port).await;

    // RPUSH some initial data, BRPOP should take from the right
    let _ = send_cmd(&mut a, &["RPUSH", "q:rjobs", "1", "2"]).await;
    let br_nonblock = send_cmd(&mut a, &["BRPOP", "q:rjobs", "0"]).await;
    // Should pop the rightmost element "2"
    assert_contains(&br_nonblock, "q:rjobs", "BRPOP returns key");
    assert_contains(&br_nonblock, "2", "BRPOP returns rightmost element");

    // Now test blocking BRPOP: start blocked client, then RPUSH from another client
    let c1 = connect(port).await;
    let mut c2 = connect(port).await;

    // Start BRPOP on c1
    let brpop_task = tokio::spawn(async move {
        let mut c1_local = c1;
        send_cmd(&mut c1_local, &["BRPOP", "q:blockr", "5"]).await
    });

    // Give it time to register waiter
    sleep(Duration::from_millis(150)).await;

    // Push from right to wake BRPOP
    let _ = send_cmd(&mut c2, &["RPUSH", "q:blockr", "X"]).await;

    // Await BRPOP result
    let brpop_res = brpop_task.await.expect("BRPOP task join");
    assert_contains(&brpop_res, "q:blockr", "BRPOP returned key");
    assert_contains(&brpop_res, "X", "BRPOP returned element");
}

#[tokio::test]
async fn test_13_dbsize() {
    let (server, port) = start_test_server("dbsize").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // Initially empty
    let n0 = send_cmd(&mut s, &["DBSIZE"]).await;
    assert_contains(&n0, "0", "DBSIZE initial should be 0");

    // Add a string, a hash, and a list -> dbsize = 3
    let _ = send_cmd(&mut s, &["SET", "s", "v"]).await;
    let _ = send_cmd(&mut s, &["HSET", "h", "f", "v"]).await;
    let _ = send_cmd(&mut s, &["LPUSH", "l", "a", "b"]).await;

    let n3 = send_cmd(&mut s, &["DBSIZE"]).await;
    assert_contains(&n3, "3", "DBSIZE after adding s,h,l should be 3");

    // Expire the string and wait, dbsize should drop to 2
    let _ = send_cmd(&mut s, &["PEXPIRE", "s", "400"]).await;
    sleep(Duration::from_millis(500)).await;

    let n2 = send_cmd(&mut s, &["DBSIZE"]).await;
    assert_contains(&n2, "2", "DBSIZE after string expiry should be 2");

    // Delete remaining keys and confirm 0
    let _ = send_cmd(&mut s, &["DEL", "h"]).await;
    let _ = send_cmd(&mut s, &["DEL", "l"]).await;

    let n_final = send_cmd(&mut s, &["DBSIZE"]).await;
    assert_contains(&n_final, "0", "DBSIZE after deleting all keys should be 0");
}

#[tokio::test]
async fn test_14_expireat_pexpireat() {
    use std::time::{SystemTime, UNIX_EPOCH};

    let (server, port) = start_test_server("expireat_suite").await;
    spawn_listener(server, port).await;
    sleep(Duration::from_millis(150)).await;

    let mut s = connect(port).await;

    // EXPIREAT: seconds since epoch
    let now_secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs() as i64;
    let _ = send_cmd(&mut s, &["SET", "exp:at:s", "v"]).await;
    let exat = send_cmd(
        &mut s,
        &["EXPIREAT", "exp:at:s", &format!("{}", now_secs + 1)],
    )
    .await;
    assert_contains(&exat, "1", "EXPIREAT exp:at:s now+1s -> 1 (applied)");
    let ttl1 = send_cmd(&mut s, &["TTL", "exp:at:s"]).await;
    assert!(
        ttl1.contains("1") || ttl1.contains("0"),
        "TTL exp:at:s should be 1 or 0 shortly after EXPIREAT, got: {}",
        ttl1
    );
    sleep(Duration::from_millis(1200)).await;
    let exists_after_exat = send_cmd(&mut s, &["EXISTS", "exp:at:s"]).await;
    assert_contains(
        &exists_after_exat,
        "0",
        "EXISTS exp:at:s after EXPIREAT expiry -> 0",
    );

    // PEXPIREAT: milliseconds since epoch
    let now_ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis() as i64;
    let _ = send_cmd(&mut s, &["SET", "exp:at:ms", "v"]).await;
    let pexat = send_cmd(
        &mut s,
        &["PEXPIREAT", "exp:at:ms", &format!("{}", now_ms + 450)],
    )
    .await;
    assert_contains(&pexat, "1", "PEXPIREAT exp:at:ms now+450ms -> 1 (applied)");
    let ttl2 = send_cmd(&mut s, &["TTL", "exp:at:ms"]).await;
    assert!(
        ttl2.contains("0") || ttl2.contains("1"),
        "TTL exp:at:ms should be 0..1 soon after PEXPIREAT, got: {}",
        ttl2
    );
    sleep(Duration::from_millis(600)).await;
    let exists_after_pexat = send_cmd(&mut s, &["EXISTS", "exp:at:ms"]).await;
    assert_contains(
        &exists_after_pexat,
        "0",
        "EXISTS exp:at:ms after PEXPIREAT expiry -> 0",
    );
}