Holochain Agent Skill
A comprehensive Agent Skills Open Standard skill for Holochain hApp development, compatible with Claude Code, GitHub Copilot, Cursor, Augment, and any other tool supporting the standard. Covers the full development spiral from architecture and design through scaffolding, implementation, testing, and deployment.
What It Covers
| Domain | Description |
|---|---|
| Architecture | Coordinator/integrity zome split, DNA structure, Cargo workspace, Nix dev environment, progenitor pattern, multi-DNA, private entries |
| Design | DHT data modeling, entry/link type design, discovery strategy, validation rules |
| Scaffold | Holonix setup, Nix flake, hc CLI, hc scaffold commands, new project and new domain workflows |
| Implement | Entry types, link types, CRUD patterns, cross-zome calls, signals, validation, HDK 0.6 API |
| Test | Tryorama + Vitest setup, two-agent scenarios, dhtSync, update/delete patterns, test organization |
| Deploy | Kangaroo-Electron packaging, .webhapp bundling, CI/CD, versioning semantics, auto-update |
Current version pins: hdk = "=0.6.1" | hdi = "=0.7.1" | holonix ref=main-0.6
Installation
This skill conforms to the Agent Skills Open Standard. The universal install path .claude/skills/holochain/ is recognized by all compatible tools.
Compatible Tools
| Tool | Supported | Invocation |
|---|---|---|
| Claude Code | ✅ | /holochain |
| GitHub Copilot | ✅ | via agent skills |
| Cursor | ✅ | via agent skills |
| Augment Code | ✅ | via agent skills |
| OpenAI Codex CLI | ✅ | via agent skills |
Claude Code
Option A: Global — available in all projects
cp -r holochain-agent-skill ~/.claude/skills/holochain
Option B: Project-local — scoped to one project
mkdir -p your-project/.claude/skills
cp -r holochain-agent-skill your-project/.claude/skills/holochain
Option C: Symlink (recommended — auto-updates with git pull)
git clone https://github.com/Soushi888/holochain-agent-skill ~/holochain-agent-skill
ln -s ~/holochain-agent-skill ~/.claude/skills/holochain
Once installed, invoke with /holochain or let Claude detect Holochain-related work automatically.
Cursor, GitHub Copilot, Augment, and others
Install to your project root’s .claude/skills/ directory — all Agent Skills-compatible tools search this path:
mkdir -p .claude/skills
cp -r holochain-agent-skill .claude/skills/holochain
The tool discovers the skill automatically on next launch. For global install paths specific to each tool, refer to the tool’s own documentation.
Upgrading from an older install? If you previously installed to `.claude/skills/Holochain/` (uppercase), rename the directory: `mv ~/.claude/skills/Holochain ~/.claude/skills/holochain`
Quick Start
# Design a new data model
/holochain design data model for a marketplace listing with status transitions
# Scaffold a new hApp from scratch
/holochain scaffold new happ called my-network
# Implement a full CRUD zome
/holochain implement zome for Profile entry type
# Debug a flaky test
/holochain my Tryorama test passes alone but fails when Bob reads Alice's entry
# Package for distribution
/holochain deploy package my happ for desktop distribution
Workflow Triggers
| Say… | Triggers |
|---|---|
| “design data model”, “model entries”, “what entries” | DesignDataModel workflow |
| “scaffold”, “new happ”, “new project”, “setup environment” | Scaffold workflow |
| “implement zome”, “create zome”, “write zome” | ImplementZome workflow |
| “design access control”, “cap grant”, “who can call” | DesignAccessControl workflow |
| “deploy”, “package”, “webhapp”, “kangaroo” | PackageAndDeploy workflow |
Ecosystem Roadmap
This skill follows a spiral from core to periphery:
v1 (current): Full development cycle — architecture, design, scaffold, implement, test, deploy
v2 (planned): Ecosystem expansion
- hREA / ValueFlows sub-skill
- holochain-open-dev patterns
- ADAM (coasys) integration
- Wind Tunnel performance testing
- unyt integration
- Cross-LLM portability
v3 (vision): GUI and visual tooling
- No-code workflow interface
- Visual DHT data model explorer
- Diagram generation for architecture
- Progressive disclosure (junior to senior)
Contributing
Contributions welcome. The skill follows this structure:
SKILL.md Entry point — routing table and quick reference
Architecture.md Core concepts: zome split, DNA, Nix, progenitor
Patterns.md Implementation patterns: entry types, links, CRUD, signals
Scaffold.md Dev environment and project scaffolding
AccessControl.md Capability grants system
CellCloning.md Partitioned data via clone cells
ErrorHandling.md thiserror + WasmError patterns
Testing.md Tryorama + Vitest patterns
TypeScript.md holochain-client, signals, Svelte integration
Deployment.md Kangaroo-Electron packaging and distribution
Workflows/ Step-by-step guided workflows
docs/ Requirements, roadmap, and design decisions
When updating for new Holochain versions, update the version pins in SKILL.md Quick Reference and in any code examples across all files.
License
Apache-2.0
Holochain Development Skill
Expert assistant for Holochain hApp development. Covers the full development spiral: architecture, design, scaffolding, implementation, testing, and deployment.
Proactive Invocation Rule
Always invoke this skill in the PLAN phase when the task touches a Holochain project. Do not wait to be asked explicitly.
Trigger conditions — any of these means the skill should be loaded before coding begins:
- Working directory is a Holochain project (contains `workdir/*.happ` or `dnas/*/zomes/`)
- Task involves `.rs` files inside `zomes/coordinator/` or `zomes/integrity/`
- Task involves entry types, link types, cross-DNA calls, or zome functions
- Task involves a PR on a Holochain project
When proactively invoked: load Architecture.md + Patterns.md, run the ReviewZome checklist against any files being modified, surface issues before implementation begins.
Workflow Routing
| Workflow | Trigger | File |
|---|---|---|
| ReviewZome | review zome, audit zome, check implementation, validate patterns, before implementing, PR review, code review on zome | Workflows/ReviewZome.md |
| DesignDataModel | design data model, model entries, what entries, what links, DHT schema | Workflows/DesignDataModel.md |
| Scaffold | scaffold, new happ, new project, setup environment, init project, Holonix, nix develop, hc scaffold | Workflows/Scaffold.md |
| ImplementZome | implement zome, create zome, scaffold zome, write zome | Workflows/ImplementZome.md |
| DesignAccessControl | design access control, who can call, cap grant design | Workflows/DesignAccessControl.md |
| PackageAndDeploy | deploy, package, distribute, kangaroo, installer, desktop app, webhapp | Workflows/PackageAndDeploy.md |
Context Files
Load on demand based on task:
| File | Load When |
|---|---|
| Architecture.md | Coordinator/integrity split, DNA structure, Cargo workspace, Nix, dna_info, network_seed, private entries, multi-DNA (multiple roles, bridge call, OtherRole) |
| Progenitor.md | Progenitor pattern, DnaProperties struct, check_if_progenitor, bootstrap mode, coordinator guard, integrity enforcement (Moss pattern), auto-registration in create_user, deploy-time injection (dna.yaml / Sweettest / Kangaroo / Moss) |
| Scaffold.md | New project setup, Holonix installation, Nix flake, hc CLI, hc scaffold commands, adding a new domain to existing project |
| Patterns.md | Entry types, link types, CRUD, cross-zome calls, validation, HDK 0.6 API (GetStrategy, LinkQuery, Local vs Network), must_get, signals (remote signal, init cap grant) |
| AccessControl.md | Cap grants, capability system, cap claim, recv_remote_signal setup, admin-only access |
| CellCloning.md | Cell cloning, partitioned data, clone roles, createCloneCell, clone_limit |
| ErrorHandling.md | Error types, WasmError, ExternResult patterns, thiserror |
| Testing.md | Four-layer strategy, Sweettest (Rust-native), E2E Playwright + AdminWebsocket, Wind-Tunnel performance |
| WindTunnel.md | Performance/load testing with wind-tunnel: ScenarioDefinitionBuilder, call_zome, ReportMetric, multi-agent roles, DHT sync lag measurement, InfluxDB metrics pipeline |
| TypeScript.md | holochain-client setup, callZome, signals, SvelteKit integration |
| Deployment.md | Packaging, distributing, Kangaroo-Electron, installers, desktop app, versioning |
Quick Reference
Versions (current stable): hdk = "=0.6.1" hdi = "=0.7.1" holonix ref=main-0.6
Dev commands: nix develop | hc s sandbox generate workdir/ | bun run test
Scaffold: hc scaffold entry-type MyEntry | hc scaffold link-type AgentToMyEntry
Common Pitfalls Checklist
Run this against any zome code being written or reviewed. Each item is a class of bug that has burned projects before.
Entry Schema Evolution
- `#[serde(default)]` on new optional fields — Any field added to an existing entry struct after initial deployment MUST have `#[serde(default)]`. Without it, existing entries serialized before the field existed will fail to deserialize. `Option<T>` alone is NOT sufficient.

  ```rust
  #[serde(default)] // ← REQUIRED for fields added post-deployment
  pub new_field: Option<ActionHash>,
  ```
Cross-DNA Calls
- `ZomeCallResponse` is exhaustive — HDK 0.6 has 5 variants: `Ok`, `Unauthorized`, `AuthenticationFailed`, `NetworkError`, `CountersigningSession`. A wildcard `_` is safe but hides new variants. An exhaustive match is preferred.
- Role name matches `happ.yaml` — `CallTargetCell::OtherRole("role_name")` must exactly match the role name in `workdir/happ.yaml`. Typos fail silently at runtime.
- Zome name matches coordinator crate name — `ZomeName("zome_name")` must match the coordinator's `name` in `Cargo.toml`. Check both.
- Local mirror structs for cross-DNA types — Avoid importing the remote DNA's Cargo crate. Define a local serialization mirror struct instead.
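The exhaustive-match point above can be sketched as a small coordinator helper. This is a hedged sketch against the HDK 0.6 API: `unwrap_bridge_response` is an illustrative name, and variant payloads whose exact shapes are not confirmed here are elided with `..`.

```rust
// Sketch: surface every ZomeCallResponse variant instead of hiding them behind `_`
fn unwrap_bridge_response(response: ZomeCallResponse) -> ExternResult<ExternIO> {
    match response {
        ZomeCallResponse::Ok(io) => Ok(io),
        ZomeCallResponse::Unauthorized(..) => Err(wasm_error!(WasmErrorInner::Guest(
            "Bridge call unauthorized: check cap grants and role/zome names".into()
        ))),
        ZomeCallResponse::AuthenticationFailed(..) => Err(wasm_error!(WasmErrorInner::Guest(
            "Bridge call authentication failed".into()
        ))),
        ZomeCallResponse::NetworkError(e) => Err(wasm_error!(WasmErrorInner::Guest(
            format!("Bridge call network error: {e}")
        ))),
        ZomeCallResponse::CountersigningSession(e) => Err(wasm_error!(WasmErrorInner::Guest(
            format!("Countersigning session error: {e}")
        ))),
    }
}
```

If a future HDK release adds a variant, this match stops compiling — which is exactly the early warning a wildcard arm would have swallowed.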
Validation Rules
- No DHT reads in `validate()` — `validate()` must be deterministic. No `get()`, `get_links()`, `agent_info()`, `sys_time()`. Only inspect the op itself.
- Use `op.flattened::<EntryTypes, LinkTypes>()` — Not the old `op.to_type()`. Patterns.md has the correct pattern.
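A minimal `validate()` skeleton using `op.flattened` — a sketch, assuming the `EntryTypes`/`LinkTypes` enums from this document's examples; `validate_create_my_entry` is an illustrative helper, not part of the skill:

```rust
#[hdk_extern]
pub fn validate(op: Op) -> ExternResult<ValidateCallbackResult> {
    // flattened() yields one typed enum to match on — deterministic, no DHT reads
    match op.flattened::<EntryTypes, LinkTypes>()? {
        FlatOp::StoreEntry(OpEntry::CreateEntry { app_entry, action }) => match app_entry {
            // Illustrative per-type validator; inspect only the entry and action
            EntryTypes::MyEntry(entry) => validate_create_my_entry(entry, action),
            _ => Ok(ValidateCallbackResult::Valid),
        },
        // Ops not explicitly validated pass through
        _ => Ok(ValidateCallbackResult::Valid),
    }
}
```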
HDK 0.6 API
- `delete_link()` requires `GetOptions` — `delete_link(hash, GetOptions::default())`, not `delete_link(hash)`.
- `get_links()` uses `LinkQuery::try_new()` — Not `GetLinksInputBuilder` for most cases.
- `GetStrategy::Local` vs `Network` — Use `Local` for own-data queries (fast, no network), `Network` for DHT queries (cross-agent data).
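Taken together, a hedged sketch of the three API points above (call shapes follow the HDK 0.6 notes in this checklist; `base_hash` is assumed to be in scope):

```rust
// get_links() with LinkQuery::try_new and an explicit GetStrategy
let links = get_links(
    LinkQuery::try_new(base_hash.clone(), LinkTypes::AgentToMyEntry)?,
    GetStrategy::Local, // own-data query: fast, no network round-trip
)?;

// delete_link() now takes GetOptions as its second argument
for link in links {
    delete_link(link.create_link_hash.clone(), GetOptions::default())?;
}
```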
Shared Utility Patterns (project-specific)
- `agent_pub_key` and `created_at` are NOT entry fields — They live in the action header. Remove them from entry structs.
- If using a shared utility crate — verify intra-DNA and cross-DNA call helpers are used consistently rather than raw `call()` inline.
Examples
Example 1: Design a new entry type for a marketplace listing
User: "I need to model a Listing entry with status transitions"
→ Loads Patterns.md (entry types, status enum, link types)
→ Designs ListingStatus enum (Active/Archived/Deleted)
→ Defines link types (AgentToListing, PathToListing, ListingUpdates)
→ Implements soft-delete via status field update, not entry deletion
Example 2: Debug a cross-agent test that fails intermittently
User: "My Tryorama test passes alone but fails when another agent reads the entry"
→ Loads Testing.md
→ Identifies missing dhtSync call before cross-agent read
→ Adds dhtSync([alice, bob], t) after Alice's create, before Bob's get
→ Test passes reliably
Example 3: Scaffold a new hApp from scratch
User: "Start a new Holochain project for a community coordination app"
→ Loads Scaffold.md + Workflows/Scaffold.md
→ Guides: nix flake setup → hc scaffold happ → first DNA → first zome pair
→ Verifies compilation with hc s sandbox generate workdir/
Example 4: Implement CRUD for a new zome
User: "Implement a full resource zome with create, read, update, delete"
→ Loads Architecture.md + Patterns.md
→ Invokes Workflows/ImplementZome.md
→ Creates integrity crate (entry struct, link enum, validation)
→ Creates coordinator crate (create/read/update/delete functions)
→ Writes Tryorama tests at foundation + integration layers
Holochain Architecture
Coordinator vs. Integrity Zomes
Every domain in a Holochain hApp is split into two crates:
| Layer | Crate type | Role |
|---|---|---|
| Integrity | hdi | Defines entry types, link types, and validation rules. Pure deterministic logic — no I/O. |
| Coordinator | hdk | Implements CRUD functions, calls other zomes, emits signals. Can be updated post-deployment. |
Why the split matters:
- Integrity code is committed to the DNA hash — it cannot change without forking the network
- Coordinator code can be hot-swapped without breaking agent data
- Validation runs in integrity (deterministic, no external calls allowed)
What belongs where
Integrity crate only:
- `#[hdk_entry_types]` enum
- `#[hdk_link_types]` enum
- `validate()` callback
- Entry structs with `#[hdk_entry_helper]`
- Status enums (e.g., `ListingStatus`)
Coordinator crate only:
- `create_*`, `get_*`, `update_*`, `delete_*` pub functions
- `recv_remote_signal` handler
- `post_commit` hook (signals)
- Cross-zome calls
DNA Structure
Each domain = one pair: {domain}_integrity + {domain} (coordinator).
dnas/
└── my_dna/
├── dna.yaml
└── zomes/
├── integrity/
│ ├── my_domain_integrity/
│ │ ├── Cargo.toml
│ │ └── src/
│ │ ├── lib.rs # Entry types, link types, validate()
│ │ └── types.rs # Entry structs
├── coordinator/
│ └── my_domain/
│ ├── Cargo.toml
│ └── src/
│ ├── lib.rs # pub extern "C" fn declarations
│ └── my_entry.rs # CRUD implementation
└── utils/ # Shared crate (optional)
├── Cargo.toml
└── src/
├── lib.rs
├── errors.rs # thiserror enums
└── cross_zome.rs # external_local_call helpers
Cargo Workspace
Root Cargo.toml — always pin HDK/HDI with exact versions (=):
[workspace]
resolver = "2"
members = [
"dnas/my_dna/zomes/integrity/my_domain_integrity",
"dnas/my_dna/zomes/coordinator/my_domain",
"dnas/my_dna/zomes/coordinator/utils",
]
[workspace.dependencies]
hdi = "=0.7.1"
hdk = "=0.6.1"
serde = { version = "1", features = ["derive"] }
thiserror = "1"
Why exact pins? Holochain zome compilation is extremely sensitive to minor version differences. Range deps (^) cause breakage when new patch releases change internal APIs.
Individual crate Cargo.toml:
[package]
name = "my_domain_integrity"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib", "rlib"]
name = "my_domain_integrity"
[dependencies]
hdi = { workspace = true }
serde = { workspace = true }
Nix Dev Environment
Standard flake.nix using holonix (pin to main-0.6 branch for HDK 0.6.x):
{
inputs = {
holonix.url = "github:holochain/holonix?ref=main-0.6";
nixpkgs.follows = "holonix/nixpkgs";
flake-parts.follows = "holonix/flake-parts";
};
outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } {
systems = builtins.attrNames inputs.holonix.devShells;
perSystem = { inputs', ... }: {
devShells.default = inputs'.holonix.devShells.default;
};
};
}
Enter dev shell: nix develop
Manifest Files
happ.yaml
manifest_version: "1"
name: my_happ
description: "My hApp"
roles:
- name: my_dna
provisioning:
strategy: create
deferred: false
dna:
bundled: "./my_dna.dna"
modifiers:
network_seed: ~
properties: ~
dna.yaml
manifest_version: "1"
name: my_dna
integrity:
network_seed: ~
properties: ~
origin_time: 1704067200000000
zomes:
- name: my_domain_integrity
bundled: "./zomes/integrity/my_domain_integrity.wasm"
coordinator:
zomes:
- name: my_domain
bundled: "./zomes/coordinator/my_domain.wasm"
dependencies:
- name: my_domain_integrity
Scaffolding Commands
# Generate entry type boilerplate (integrity + coordinator stubs)
hc scaffold entry-type MyEntry
# Generate link type
hc scaffold link-type AgentToMyEntry
# Build and verify compilation
hc s sandbox generate workdir/
# Run tests
bun run test
DNA Properties & Progenitor Pattern
DNA properties let you embed configuration into the DNA at deploy time. The progenitor pattern uses this to designate one agent as the permanent administrator of a DHT network — their pubkey is burned into the DNA at install time via modifiers.properties, making admin authority immutable and cryptographically verifiable.
Reading network info from DNA properties:
#![allow(unused)]
fn main() {
let info = dna_info()?;
let network_seed = info.modifiers.network_seed.to_string();
let dna_hash = info.hash;
}
For the full progenitor implementation — DnaProperties struct, check_if_progenitor(), coordinator guard, optional integrity enforcement, bootstrap auto-registration, deploy-time injection (dna.yaml / Sweettest / Kangaroo / Moss), and pitfalls — see Progenitor.md.
Cross-ref: AccessControl.md for capability grants and delegated admin patterns.
Private Entries
#![allow(unused)]
fn main() {
// In integrity crate — mark entry as private:
#[hdk_entry_types]
pub enum EntryTypes {
#[entry_type(visibility = "private")]
MyPrivateEntry(MyPrivateEntry),
MyPublicEntry(MyPublicEntry), // default is public
}
}
Key semantics:
- Private entries are stored on the author’s source chain only — never published to the DHT
- Other agents can see the action (action hash, author, timestamp) but cannot retrieve the entry content
- Private ≠ encrypted — other agents simply cannot fetch the entry, but if the data were leaked, it would be readable
- Use encryption (e.g.,
x_25519_x_salsa20_poly1305_encrypt) if you need genuine confidentiality beyond network-level privacy
When to use private entries:
- Personal notes or drafts not meant for others
- Intermediate state that should not be globally visible
- Data that only the agent and explicitly authorized parties should read
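Because private entries live only on the author's source chain, the author reads them back with `query()` rather than `get()`. A sketch, assuming a `UnitEntryTypes` unit enum is registered as in Patterns.md:

```rust
// Read own private entries from the local source chain — no DHT involved
let filter = ChainQueryFilter::new()
    .entry_type(UnitEntryTypes::MyPrivateEntry.try_into()?)
    .include_entries(true); // hydrate entry content, not just actions
let records: Vec<Record> = query(filter)?;
```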
Multi-DNA Architecture
Most hApps can use a single DNA. When to consider multiple DNAs (roles):
| Pattern | When to use |
|---|---|
| Single DNA | All agents share the same DHT network; simplest |
| Multiple roles | Separate concerns with different network boundaries (e.g., public + private data) |
| Clone cells | Partitioned data — separate instances per user, group, or time period |
Bridge Calls Between Roles
#![allow(unused)]
fn main() {
// Call a function in a different role within the same hApp:
let response = call(
CallTargetCell::OtherRole("other_role_name".into()),
"other_zome".into(),
"function_name".into(),
None,
input,
)?;
}
happ.yaml Multi-Role Structure
manifest_version: "1"
name: my_happ
roles:
- name: primary_role
provisioning:
strategy: create
deferred: false
dna:
bundled: "./primary.dna"
- name: secondary_role
provisioning:
strategy: create
deferred: true # provisioned later by the app
dna:
bundled: "./secondary.dna"
modifiers:
network_seed: ~
clone_limit: 10 # allow up to 10 clones of this role
deferred: true — the cell is not created on install; the app creates it programmatically when needed.
clone_limit — enables cell cloning for this role (see CellCloning.md).
Progenitor Pattern
The progenitor is a single agent whose public key is burned into the DNA at install time via DNA modifiers.properties. Every peer in the network can read the progenitor’s identity deterministically, making admin authority immutable and cryptographically verifiable without a centralized registry.
Two reference implementations inform this page:
- Requests & Offers (`happenings-community/requests-and-offers`) — coordinator-only enforcement, auto-registration via the first `create_user` call
- Moss (`lightningrodlabs/moss`) — opt-in at group creation, integrity-level enforcement in `validate()`, progenitor key transported via invite-link
1. DnaProperties struct
Place this in a shared utils crate consumed by all integrity and coordinator zomes. The SerializedBytes derive is required — Holochain serializes YAML properties to MessagePack at install time, and try_into() decodes it:
#![allow(unused)]
fn main() {
// dnas/my_dna/utils/src/dna_properties.rs
use hdi::prelude::*;
#[derive(Serialize, Deserialize, SerializedBytes, Debug, Clone)]
pub struct DnaProperties {
pub progenitor_pubkey: Option<String>, // null = dev / bootstrap mode
}
impl DnaProperties {
pub fn get() -> ExternResult<Self> {
dna_info()?
.modifiers
.properties
.try_into()
.map_err(|e| wasm_error!(WasmErrorInner::Guest(
format!("Failed to deserialize DnaProperties: {e}")
)))
}
pub fn get_progenitor_pubkey() -> ExternResult<Option<AgentPubKey>> {
match Self::get()?.progenitor_pubkey {
None => Ok(None),
Some(s) => AgentPubKey::try_from(s).map(Some).map_err(|e| {
wasm_error!(WasmErrorInner::Guest(
format!("Invalid progenitor pubkey in DNA properties: {e}")
))
}),
}
}
}
}
#![allow(unused)]
fn main() {
// dnas/my_dna/utils/src/lib.rs
pub fn check_if_progenitor() -> ExternResult<bool> {
match DnaProperties::get_progenitor_pubkey()? {
None => Ok(false), // no progenitor configured → bootstrap mode
Some(progenitor) => Ok(progenitor == agent_info()?.agent_initial_pubkey),
}
}
}
check_if_progenitor() returns false when no progenitor is configured. Bootstrap logic (who becomes the first admin in that case) lives in your application code — see section 4.
Moss variant: Moss uses `{ progenitor: AgentPubKeyB64 | null }` (field name `progenitor`, not `progenitor_pubkey`) with the same `Option<String>` Rust type and the same `SerializedBytes` deserialization pattern.
2. Coordinator guard
Expose is_progenitor as an hdk_extern for the UI, and guard admin functions with check_if_progenitor():
#![allow(unused)]
fn main() {
#[hdk_extern]
pub fn is_progenitor(_: ()) -> ExternResult<bool> {
check_if_progenitor()
}
#[hdk_extern]
pub fn add_administrator(input: EntityAgent) -> ExternResult<bool> {
let is_prog = check_if_progenitor()?;
let is_admin = check_if_agent_is_administrator(agent_info()?.agent_initial_pubkey)?;
let progenitor_configured = DnaProperties::get_progenitor_pubkey()?.is_some();
let is_bootstrap = !progenitor_configured
&& get_all_administrators_links(input.entity.clone())?.is_empty();
if !is_prog && !is_admin && !is_bootstrap {
return Err(wasm_error!(WasmErrorInner::Guest(
"Only the progenitor or an existing administrator can add administrators".into()
)));
}
register_administrator(input)?;
Ok(true)
}
}
The is_bootstrap branch handles dev mode: when no progenitor is configured and no admins exist yet, the first caller of add_administrator is allowed through.
3. Integrity enforcement (Moss pattern — optional hardening)
R&O enforces the progenitor check only in the coordinator. Moss additionally enforces it in validate() so that a malicious peer cannot bypass the coordinator by calling zome functions directly:
#![allow(unused)]
fn main() {
// In integrity validate() — dna_info() is safe here: deterministic, reads own DNA metadata
fn validate_create_admin_entry(
action: Create,
_entry: AdminEntry,
) -> ExternResult<ValidateCallbackResult> {
match DnaProperties::get()?.progenitor_pubkey {
None => Ok(ValidateCallbackResult::Valid), // bootstrap mode: no restriction
Some(progenitor_str) => {
let progenitor = AgentPubKey::try_from(progenitor_str)
.map_err(|e| wasm_error!(WasmErrorInner::Guest(format!("{e}"))))?;
if action.author != progenitor {
return Ok(ValidateCallbackResult::Invalid(
"Only the progenitor can author this entry".into(),
));
}
Ok(ValidateCallbackResult::Valid)
}
}
}
}
Rules for validation:
- `dna_info()` is safe — reads the DNA's own metadata, fully deterministic
- Use `action.author` — you are validating someone else's action, not checking yourself
- `get()` (DHT read) is forbidden in validation — breaks determinism; inspect only the op itself
Tradeoff: Coordinator-only (R&O) is simpler and sufficient for most apps. Integrity enforcement (Moss) is defense-in-depth for higher-security entries where you cannot trust peers to follow coordinator rules.
4. Bootstrap and auto-registration
init() runs on every agent at install time and has no DHT state to query. It is NOT the place to auto-register the progenitor. Instead, put auto-registration inside your first entity creation function (e.g. create_user, create_profile):
#![allow(unused)]
fn main() {
// In coordinator create_user / create_profile — after creating the entry:
let is_prog = check_if_progenitor()?;
let progenitor_configured = DnaProperties::get_progenitor_pubkey()?.is_some();
let should_auto_register = if progenitor_configured {
is_prog // production: only the progenitor auto-gets admin
} else {
// dev / bootstrap: first agent whose profile creation finds no existing admins
let existing_admins: Vec<Link> = external_local_call(
"get_all_administrators_links",
"administration",
"network".to_string(),
)?;
existing_admins.is_empty()
};
if should_auto_register {
external_local_call(
"add_administrator",
"administration",
EntityActionHashAgents {
entity: "network".to_string(),
entity_original_action_hash: OriginalActionHash(profile_hash.clone()),
agent_pubkeys: vec![agent_info()?.agent_initial_pubkey],
},
)?;
}
}
init() itself should only set up the unrestricted signal cap grant and return Pass:
#![allow(unused)]
fn main() {
#[hdk_extern]
pub fn init(_: ()) -> ExternResult<InitCallbackResult> {
let mut functions = HashSet::new();
functions.insert((zome_info()?.name, "recv_remote_signal".into()));
create_cap_grant(ZomeCallCapGrant {
tag: "recv_remote_signal".into(),
access: CapAccess::Unrestricted,
functions: GrantedFunctions::Listed(functions),
})?;
Ok(InitCallbackResult::Pass)
}
}
5. Setting properties at deploy time
Dev / CI — dna.yaml
Leave null for local development; the first-user bootstrap handles the admin seed:
# workdir/dna.yaml
integrity:
properties:
progenitor_pubkey: null # bootstrap mode; set a key for production tests
zomes:
- name: my_domain_integrity
bundled: "./zomes/integrity/my_domain_integrity.wasm"
Get an agent pubkey from a running sandbox:
hc sandbox call --running my-app my_zome get_agent_info '{}' \
| jq -r '.agent_initial_pubkey'
Sweettest
#![allow(unused)]
fn main() {
let props = DnaProperties { progenitor_pubkey: Some(alice_pubkey.to_string()) };
let props_bytes = SerializedBytes::try_from(props).unwrap();
let dna = SweetDnaFile::from_bundle_with_overrides(
Path::new(DNA_PATH),
DnaModifiersOpt::default().with_properties(props_bytes),
).await?;
}
Kangaroo / custom Electron
Make the installing agent the progenitor at runtime:
import { encode } from "@msgpack/msgpack";
import { encodeHashToBase64 } from "@holochain/client";
const agentPubKey = await adminWs.generateAgentPubKey();
await adminWs.installApp({
installed_app_id: "my-app",
agent_key: agentPubKey,
bundle: appBundle,
roles_settings: {
my_dna: {
type: "provisioned",
value: {
modifiers: {
properties: encode({ progenitor_pubkey: encodeHashToBase64(agentPubKey) }),
},
},
},
},
});
Note value wraps modifiers — this is required by the Holochain client RolesSettings type.
Moss (group DNA)
Moss treats progenitor as an opt-in per-group choice via a withProgenitor boolean in the group creation UI. Joiners receive the creator’s key via invite-link and install with it verbatim — they never substitute their own key, so all peers derive the same DNA hash:
// Creator (src/main/index.ts in lightningrodlabs/moss)
const properties = withProgenitor
? { progenitor: encodeHashToBase64(agentPubKey) }
: { progenitor: null };
await adminWebsocket.installApp({
...
roles_settings: {
group: {
type: "provisioned",
value: { modifiers: { properties } },
},
},
});
// Joiner: properties come verbatim from the invite-link (&progenitor=uhCAk... or "null")
// Joiners NEVER substitute their own key — DNA hashes must converge across all peers
Moss-specific conventions:
- Field name is
progenitor(notprogenitor_pubkey) - Progenitor injection is only for the
groupDNA — Moss applets must inject their own if needed - The invite-link carries
networkSeed+progenitortogether; validation confirms the key starts withuhCAkand decodes to 39 bytes
Common Pitfalls
| Pitfall | Fix |
|---|---|
| Registering progenitor as admin inside `init()` | Put auto-registration in your first entity creation fn (e.g. `create_user`) |
| Coordinator-only guard for high-security entries | Add integrity enforcement (Moss pattern) if peers must not bypass the coordinator |
| `agent_info()?.agent_initial_pubkey` used in `validate()` | Use `action.author` — you are checking the action author, not yourself |
| `get()` (DHT read) inside `validate()` | Forbidden — only `dna_info()`, `zome_info()`, and the op itself are safe |
| Missing `SerializedBytes` derive on `DnaProperties` | The `.try_into()` deserialization will fail at runtime without it |
| Missing `value` wrapper in `roles_settings` TypeScript | `{ type: "provisioned", value: { modifiers: { ... } } }` — `value` is required |
| Joiner substituting their own key as progenitor | Copy the creator’s key verbatim (invite-link / config); joiners must match DNA hash |
| Hardcoding a pubkey in source | Always read from dna_info().modifiers.properties |
| Progenitor key rotation | The pattern does not support it — use role-based access (AccessControl.md) for delegatable authority |
Cross-ref: AccessControl.md — delegating admin authority beyond the progenitor | Workflows/DesignAccessControl.md — choosing the right access model for your app
Holochain Patterns
Entry Types (Integrity Crate)
What NOT to put in entry fields — already in action headers:
Every committed action carries free metadata in its header. Never duplicate these as entry fields:
| Already in header | How to access (coordinator) |
|---|---|
| Author (agent pubkey) | record.action().author() |
| Timestamp | record.action().timestamp() |
| Entry hash | record.action().entry_hash() |
| Previous action hash | available on Update/Delete actions |
If you find yourself adding created_by: AgentPubKey or created_at: Timestamp to an entry struct, remove them — they’re already there.
#![allow(unused)]
fn main() {
use hdi::prelude::*;
// Entry struct — always derive these
#[hdk_entry_helper]
#[derive(Clone, PartialEq)]
pub struct MyEntry {
pub title: String,
pub description: String,
pub status: MyEntryStatus,
// Use #[serde(default)] for fields added after initial deployment
#[serde(default)]
pub tags: Vec<String>,
// DO NOT add: author, created_at, updated_at — those are in the action header
}
// Status enum for soft-delete pattern
#[derive(Serialize, Deserialize, Clone, PartialEq, Debug)]
pub enum MyEntryStatus {
Active,
Archived,
Deleted,
}
// Register all entry types in one enum (integrity crate)
#[hdk_entry_types]
#[unit_enum(UnitEntryTypes)]
pub enum EntryTypes {
MyEntry(MyEntry),
AnotherEntry(AnotherEntry),
}
}
Link Types (Integrity Crate)
#![allow(unused)]
fn main() {
// Register all link types in one enum (integrity crate)
#[hdk_link_types]
pub enum LinkTypes {
// Naming convention: BaseToTarget (PascalCase)
AgentToMyEntry,
PathToMyEntry,
MyEntryUpdates, // Update chain tracking
MyEntryToRelated, // Bidirectional: also RelatedToMyEntry
RelatedToMyEntry,
}
}
Naming convention: {Base}To{Target} — always PascalCase, always directional.
Implicit vs. Explicit Links
Holochain has two layers of navigable relationships. Understanding the distinction prevents over-engineering and redundant data.
Implicit — action metadata and DHT metadata (no create_link needed)
1. Action metadata — fields baked into every action header:
| Field | Type | How to access |
|---|---|---|
| `author` | `AgentPubKey` | `record.action().author()` |
| `timestamp` | `Timestamp` | `record.action().timestamp()` |
| `original_action_address` | `ActionHash` | only on `Action::Update` — the original creation action |
| `deletes_address` | `ActionHash` | only on `Action::Delete` — the action being deleted |
Walking backward through an update chain uses this — no links needed:
#![allow(unused)]
fn main() {
// From any update action hash → find the original
match record.action().clone() {
Action::Update(u) => current_hash = u.original_action_address, // go back one step
Action::Create(_) => return Ok(OriginalActionHash(current_hash)), // found it
_ => ...
}
}
2. DHT metadata — aggregated by the DHT automatically, returned by get_details:
pub struct RecordDetails {
    pub record: Record,
    pub validation_status: ValidationStatus,
    pub updates: Vec<SignedHashed<Action>>, // all Update actions on this record
    pub deletes: Vec<SignedHashed<Action>>, // all Delete actions on this record
}
pub struct EntryDetails {
    pub entry: Entry,
    pub actions: Vec<SignedHashed<Action>>, // all Create/Update actions for this entry
    pub updates: Vec<SignedHashed<Action>>,
    pub deletes: Vec<SignedHashed<Action>>,
}
3. Embedded ActionHash in entry fields — a relationship baked INTO the entry content
#[hdk_entry_helper]
#[derive(Clone, PartialEq)]
pub struct Offer {
    pub title: String,
    pub organization_hash: ActionHash, // embedded relationship — no create_link needed
}
Critical tradeoff: If organization_hash changes, the content changes → new entry hash → requires update_entry. Use embedded hashes when the reference is intrinsic to the entry’s identity. Use explicit links when the relationship may change independently.
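This tradeoff follows directly from content addressing: the hash covers every field, so repointing an embedded reference changes the entry's identity. A minimal, runnable sketch using std's DefaultHasher as a stand-in for Holochain's entry hashing (the Offer fields are simplified to String):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the real entry struct: String in place of ActionHash.
#[derive(Hash)]
struct Offer {
    title: String,
    organization_hash: String, // embedded reference
}

// Stand-in for content addressing: the hash is a function of ALL fields.
fn entry_hash(offer: &Offer) -> u64 {
    let mut h = DefaultHasher::new();
    offer.hash(&mut h);
    h.finish()
}

fn main() {
    let a = Offer { title: "Tutoring".into(), organization_hash: "org-1".into() };
    let b = Offer { title: "Tutoring".into(), organization_hash: "org-2".into() };
    // Same title, different embedded reference: different entry hash,
    // so repointing the reference forces an update_entry.
    assert_ne!(entry_hash(&a), entry_hash(&b));
}
```

This is why an explicit link keeps the entry hash stable when the relationship may change independently.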
Explicit links — you define, create, and query them
| Link type | Purpose |
|---|---|
| PathToMyEntry | Global discovery — browse all entries from a known path string |
| AgentToMyEntry | Per-agent listing — “show me this agent’s entries” |
| MyEntryUpdates | Forward traversal — original hash → latest version |
| MyEntryToRelated | Cross-domain relationship navigation |
Decision rule
| Question | Tool |
|---|---|
| “Who created this entry? When?” | record.action().author() / .timestamp() — no links |
| “Has this record been updated or deleted?” | get_details(action_hash) → .updates / .deletes |
| “What is the LATEST version of this entry?” | get_links(original_hash, UpdatesLinkType) → max timestamp |
| “Find entries without knowing any hash” | Explicit PathTo* or AgentTo* links |
| “Navigate from entry A to related entry B” | Explicit AToB link |
| “Link is intrinsic to entry identity?” | Embedded ActionHash field in entry struct |
| “Link may change independently of entry?” | Explicit link — keeps entry hash stable |
Create Pattern
pub fn create_my_entry(my_entry: MyEntry) -> ExternResult<Record> {
    let my_entry_hash = create_entry(&EntryTypes::MyEntry(my_entry.clone()))?;
    // 1. Discovery anchor (path)
    let path = Path::from("entries.active");
    create_link(
        path.path_entry_hash()?,
        my_entry_hash.clone(),
        LinkTypes::PathToMyEntry,
        (),
    )?;
    // 2. Agent index
    let agent_info = agent_info()?;
    create_link(
        agent_info.agent_initial_pubkey,
        my_entry_hash.clone(),
        LinkTypes::AgentToMyEntry,
        (),
    )?;
    // 3. Get and return the full record
    let record = get(my_entry_hash.clone(), GetOptions::default())?
        .ok_or(wasm_error!(WasmErrorInner::Guest("Entry not found after create".into())))?;
    Ok(record)
}
Read Latest Pattern (Walking Update Chain)
pub fn get_latest_my_entry(original_action_hash: ActionHash) -> ExternResult<Option<Record>> {
    // HDK 0.6: get_links takes a LinkQuery and a GetStrategy (see "HDK 0.6 API Changes")
    let links = get_links(
        LinkQuery::new(original_action_hash.clone(), LinkTypes::MyEntryUpdates),
        GetStrategy::Network,
    )?;
    let latest_link = links
        .into_iter()
        .max_by(|a, b| a.timestamp.cmp(&b.timestamp));
    let latest_hash = match latest_link {
        Some(link) => link
            .target
            .into_action_hash()
            .ok_or(wasm_error!(WasmErrorInner::Guest("Invalid target hash".into())))?,
        None => original_action_hash, // No updates — original is latest
    };
    get(latest_hash, GetOptions::default())
}
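The selection logic above (newest update wins, fall back to the original when there are no updates) is plain iterator code and can be exercised outside a conductor. A minimal sketch with a stand-in Link struct carrying only the fields the pattern needs:

```rust
// Stand-in for a Link returned by get_links: timestamp ordering + target hash.
#[derive(Clone)]
struct Link {
    timestamp: u64,
    target: String,
}

// Mirrors the zome logic: the update with the greatest timestamp wins,
// otherwise the original action hash is the latest version.
fn latest_hash(original: String, links: Vec<Link>) -> String {
    links
        .into_iter()
        .max_by(|a, b| a.timestamp.cmp(&b.timestamp))
        .map(|l| l.target)
        .unwrap_or(original)
}

fn main() {
    // No updates yet: the original is the latest.
    assert_eq!(latest_hash("orig".into(), vec![]), "orig");
    // Two updates: the later one wins.
    let links = vec![
        Link { timestamp: 10, target: "v1".into() },
        Link { timestamp: 20, target: "v2".into() },
    ];
    assert_eq!(latest_hash("orig".into(), links), "v2");
}
```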
Read Collection Pattern
pub fn get_all_my_entries() -> ExternResult<Vec<Record>> {
    let path = Path::from("entries.active");
    // HDK 0.6: get_links takes a LinkQuery and a GetStrategy (see "HDK 0.6 API Changes")
    let links = get_links(
        LinkQuery::new(path.path_entry_hash()?, LinkTypes::PathToMyEntry),
        GetStrategy::Network,
    )?;
    let get_inputs: Vec<GetInput> = links
        .into_iter()
        .filter_map(|link| link.target.into_action_hash())
        .map(|hash| GetInput::new(hash.into(), GetOptions::default()))
        .collect();
    let records = HDK.with(|hdk| hdk.borrow().get(get_inputs))?;
    Ok(records.into_iter().flatten().collect())
}
Update Pattern
pub fn update_my_entry(
    original_action_hash: ActionHash,
    previous_action_hash: ActionHash,
    updated_entry: MyEntry,
) -> ExternResult<Record> {
    // 1. Author check
    let original_record = get(original_action_hash.clone(), GetOptions::default())?
        .ok_or(wasm_error!(WasmErrorInner::Guest("Entry not found".into())))?;
    let action = original_record.action();
    let agent = agent_info()?.agent_initial_pubkey;
    if action.author() != &agent {
        return Err(wasm_error!(WasmErrorInner::Guest("Not authorized".into())));
    }
    // 2. Update entry
    let updated_action_hash =
        update_entry(previous_action_hash, &EntryTypes::MyEntry(updated_entry))?;
    // 3. Track update chain with link
    create_link(
        original_action_hash,
        updated_action_hash.clone(),
        LinkTypes::MyEntryUpdates,
        (),
    )?;
    let record = get(updated_action_hash, GetOptions::default())?
        .ok_or(wasm_error!(WasmErrorInner::Guest("Updated record not found".into())))?;
    Ok(record)
}
Delete Pattern
pub fn delete_my_entry(original_action_hash: ActionHash) -> ExternResult<ActionHash> {
    // Remove the discovery link(s) first, then the entry itself
    let path = Path::from("entries.active");
    let path_links = get_links(
        LinkQuery::new(path.path_entry_hash()?, LinkTypes::PathToMyEntry),
        GetStrategy::Network,
    )?;
    for link in path_links {
        if let Some(hash) = link.target.into_action_hash() {
            if hash == original_action_hash {
                // HDK 0.6: delete_link requires GetOptions
                delete_link(link.create_link_hash, GetOptions::default())?;
            }
        }
    }
    delete_entry(original_action_hash)
}
Status Transition (Soft Delete)
Prefer updating status over deleting for data that other agents may reference:
pub fn archive_my_entry(
    original_action_hash: ActionHash,
    previous_action_hash: ActionHash,
) -> ExternResult<Record> {
    let record = get_latest_my_entry(original_action_hash.clone())?
        .ok_or(wasm_error!(WasmErrorInner::Guest("Entry not found".into())))?;
    let mut entry: MyEntry = record.entry().to_app_option()?.ok_or(
        wasm_error!(WasmErrorInner::Guest("Expected MyEntry".into()))
    )?;
    if entry.status == MyEntryStatus::Deleted {
        return Err(wasm_error!(WasmErrorInner::Guest("Cannot archive deleted entry".into())));
    }
    entry.status = MyEntryStatus::Archived;
    update_my_entry(original_action_hash, previous_action_hash, entry)
}
Cross-Zome Calls
// In utils/src/cross_zome.rs
pub fn external_local_call<I, T>(zome_name: &str, fn_name: &str, input: I) -> ExternResult<T>
where
    I: serde::Serialize + std::fmt::Debug,
    T: serde::de::DeserializeOwned + std::fmt::Debug,
{
    let zome_call_response = call(
        CallTargetCell::Local,
        zome_name.into(),
        fn_name.into(),
        None,
        input,
    )?;
    match zome_call_response {
        ZomeCallResponse::Ok(result) => {
            let typed: T = result.decode().map_err(|e| {
                wasm_error!(WasmErrorInner::Guest(format!("Decode error: {:?}", e)))
            })?;
            Ok(typed)
        }
        ZomeCallResponse::Error(e) => {
            Err(wasm_error!(WasmErrorInner::Guest(format!("Zome call error: {:?}", e))))
        }
        _ => Err(wasm_error!(WasmErrorInner::Guest("Unexpected call response".into()))),
    }
}

// Usage:
// let result: MyOtherEntry = external_local_call("other_zome", "get_entry", hash)?;
Signals (post_commit)
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type")]
pub enum Signal {
    LinkCreated { action: SignedActionHashed, link_type: LinkTypes },
    LinkDeleted { action: SignedActionHashed, link_type: LinkTypes },
    EntryCreated { action: SignedActionHashed, app_entry: EntryTypes },
    EntryUpdated { action: SignedActionHashed, app_entry: EntryTypes, original_app_entry: EntryTypes },
    EntryDeleted { action: SignedActionHashed, original_app_entry: EntryTypes },
}

// NOTE: post_commit is infallible — use #[hdk_extern(infallible)] and log errors
#[hdk_extern(infallible)]
pub fn post_commit(committed_actions: Vec<SignedActionHashed>) {
    for action in committed_actions {
        if let Err(err) = signal_action(action) {
            error!("Error signaling new action: {:?}", err);
        }
    }
}
Remote signals — send signals to other agents:
// Sender — signal payload first, then the recipient list:
send_remote_signal(MySignal::Ping, vec![recipient_pubkey])?;

// Receiver callback:
#[hdk_extern]
pub fn recv_remote_signal(signal: SerializedBytes) -> ExternResult<()> {
    let sig: MySignal = signal.try_into()?;
    emit_signal(sig)?;
    Ok(())
}

// REQUIRED: cap grant in init() so any agent can call recv_remote_signal:
use std::collections::BTreeSet;

#[hdk_extern]
pub fn init(_: ()) -> ExternResult<InitCallbackResult> {
    // GrantedFunctions::Listed takes a BTreeSet of (zome, function) pairs
    let mut functions = BTreeSet::new();
    functions.insert((zome_info()?.name, "recv_remote_signal".into()));
    create_cap_grant(ZomeCallCapGrant {
        tag: "remote_signals".into(),
        access: CapAccess::Unrestricted,
        functions: GrantedFunctions::Listed(functions),
    })?;
    Ok(InitCallbackResult::Pass)
}
Note: send_remote_signal is fire-and-forget — it does not wait for confirmation and does not queue messages for offline agents.
HDK 0.6 API Changes (Breaking)
delete_link() — now requires GetOptions
// WRONG (pre-0.6):
delete_link(link.create_link_hash)?;

// CORRECT (0.6+):
delete_link(link.create_link_hash, GetOptions::default())?;
LinkQuery::new() + GetStrategy
let links = get_links(
    LinkQuery::new(original_action_hash.clone(), LinkTypes::MyEntryUpdates),
    GetStrategy::Local,
)?;
GetStrategy decision rule:
| Strategy | When to use |
|---|---|
| GetStrategy::Local | Source chain only — use for get_my_* (own authored data, fast, no network) |
| GetStrategy::Network | DHT — use for get_all_* (data authored by others, default behavior) |
Additional LinkQuery features:
// Tag prefix filter:
let query = LinkQuery::new(base, LinkTypes::MyLink)
    .tag_prefix(tag_bytes);

// Count without fetching records:
let count = count_links(query)?;

// Include deleted links:
let details = get_links_details(query)?;
HDK.with() Batch Gets
More efficient than N individual get() calls:
let get_inputs: Vec<GetInput> = links
    .into_iter()
    .filter_map(|link| link.target.into_action_hash())
    .map(|hash| GetInput::new(hash.into(), GetOptions::default()))
    .collect();
let records = HDK.with(|hdk| hdk.borrow().get(get_inputs))?;
let records: Vec<Record> = records.into_iter().flatten().collect();
must_get_* Family (Fail-Fast Gets)
Unlike get() which returns Option, these return an error immediately if the record is not found.
// In coordinator — authorship check before update:
let original_record = must_get_valid_record(input.original_action_hash.clone().into())?;
let author = original_record.action().author().clone();

// In integrity validation — authorship check:
let original_action_record = must_get_action(original_action_hash.clone())?;
if action.action().author() != original_action_record.action().author() {
    return Ok(ValidateCallbackResult::Invalid(
        "Only the original author can update this entry.".to_string(),
    ));
}
Full family:
- must_get_valid_record(action_hash) — record that passed validation
- must_get_action(action_hash) — raw action (use in validation)
- must_get_entry(entry_hash) — entry content
- must_get_agent_activity(agent, filter) — agent’s source chain slice
Validation (Integrity Crate)
// CORRECT: use op.flattened() — NOT the old op.to_type()
#[hdk_extern]
pub fn validate(op: Op) -> ExternResult<ValidateCallbackResult> {
    match op.flattened::<EntryTypes, LinkTypes>()? {
        FlatOp::StoreEntry(store_entry) => match store_entry {
            OpEntry::CreateEntry { app_entry, .. } => match app_entry {
                EntryTypes::MyEntry(entry) => validate_create_my_entry(entry),
                EntryTypes::AnotherEntry(entry) => validate_create_another_entry(entry),
            },
            OpEntry::UpdateEntry { app_entry, .. } => match app_entry {
                EntryTypes::MyEntry(entry) => validate_update_my_entry(entry),
                _ => Ok(ValidateCallbackResult::Valid),
            },
            _ => Ok(ValidateCallbackResult::Valid),
        },
        _ => Ok(ValidateCallbackResult::Valid),
    }
}

fn validate_create_my_entry(entry: MyEntry) -> ExternResult<ValidateCallbackResult> {
    if entry.title.is_empty() {
        return Ok(ValidateCallbackResult::Invalid("Title cannot be empty".into()));
    }
    Ok(ValidateCallbackResult::Valid)
}
Determinism rules for validation:
- No get(), get_links(), or any DHT reads
- No agent_info() (can vary by context)
- No sys_time() comparisons against current time
- Only inspect the op itself and its embedded data
Path Anchors
// Global discovery anchor
let path = Path::from("entries.active");
let path_hash = path.path_entry_hash()?;

// Hierarchical paths
let category_path = Path::from(format!("entries.{}.active", category));

// Ensure path exists (creates the path entry if not present).
// In current HDK, ensure() lives on TypedPath — a Path tied to a link type:
path.typed(LinkTypes::PathToMyEntry)?.ensure()?;
get_details() + Details::Record Deserialization
pub fn get_original_record(hash: ActionHash) -> ExternResult<Option<Record>> {
    let Some(details) = get_details(hash, GetOptions::default())? else {
        return Ok(None);
    };
    match details {
        Details::Record(d) => Ok(Some(d.record)),
        _ => Err(wasm_error!(WasmErrorInner::Guest("Expected record".into()))),
    }
}
In post_commit — extracting app entry type from a committed action:
let (zome_index, entry_index) = match record.action().entry_type() {
    Some(EntryType::App(AppEntryDef { zome_index, entry_index, .. })) => (zome_index, entry_index),
    _ => return Ok(None),
};
EntryTypes::deserialize_from_type(*zome_index, *entry_index, entry)
Update Chain Utilities
find_original_action_hash() — traverse backward to the Create action
Given any action hash in an update chain, loop back to the original Create:
pub fn find_original_action_hash(action_hash: ActionHash) -> ExternResult<OriginalActionHash> {
    let mut current_hash = action_hash;
    loop {
        let record = get(current_hash.clone(), GetOptions::default())?
            .ok_or(wasm_error!(WasmErrorInner::Guest("Record not found".into())))?;
        match record.action().clone() {
            Action::Create(_) => return Ok(OriginalActionHash(current_hash)),
            Action::Update(u) => { current_hash = u.original_action_address; }
            _ => return Err(wasm_error!(WasmErrorInner::Guest("Unexpected action type".into()))),
        }
    }
}
get_all_revisions_for_entry() — original + all updates chronologically
Use LinkQuery::new() + GetStrategy::Local over the {Entry}Updates link type, prepend the original record. Returns all versions in order from oldest to newest.
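The ordering step of that helper is plain Rust and can be sketched without a conductor. Below, a hypothetical Revision struct stands in for a Record paired with its link timestamp; the function prepends the original and sorts updates oldest to newest:

```rust
// Stand-in for (Record, link timestamp) pairs returned by the link query.
#[derive(Clone, Debug, PartialEq)]
struct Revision {
    hash: String,
    timestamp: u64,
}

// Original first, then updates sorted by link timestamp (oldest → newest).
fn all_revisions(original: Revision, mut updates: Vec<Revision>) -> Vec<Revision> {
    updates.sort_by_key(|r| r.timestamp);
    let mut out = vec![original];
    out.extend(updates);
    out
}

fn main() {
    let original = Revision { hash: "create".into(), timestamp: 1 };
    // Links arrive in arbitrary order from the DHT.
    let updates = vec![
        Revision { hash: "u2".into(), timestamp: 30 },
        Revision { hash: "u1".into(), timestamp: 20 },
    ];
    let order: Vec<String> = all_revisions(original, updates)
        .into_iter()
        .map(|r| r.hash)
        .collect();
    assert_eq!(order, vec!["create", "u1", "u2"]);
}
```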
Path Status Hierarchies
For status-filtered global collections, use hierarchical path strings rather than a single path + runtime filtering:
const PENDING_PATH: &str = "entries.status.pending";
const APPROVED_PATH: &str = "entries.status.approved";
const REJECTED_PATH: &str = "entries.status.rejected";

// On creation — add link to pending path:
let pending_hash = Path::from(PENDING_PATH).path_entry_hash()?;
create_link(pending_hash, entry_hash.clone(), LinkTypes::AllEntries, ())?;

// On approval — move from pending to approved:
let approved_hash = Path::from(APPROVED_PATH).path_entry_hash()?;
create_link(approved_hash, entry_hash, LinkTypes::AllEntries, ())?;
// (delete the pending link separately)
Enables get_links filtered by status without fetching all entries — queries only the relevant path.
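The partitioning effect can be simulated in plain Rust with a map from path strings to link targets (a stand-in for path anchors and their links, with hypothetical helper names):

```rust
use std::collections::HashMap;

// Path string → entry hashes linked under that anchor.
type Links = HashMap<&'static str, Vec<String>>;

// Stand-in for create_link on a status path anchor.
fn link(links: &mut Links, path: &'static str, entry: &str) {
    links.entry(path).or_default().push(entry.to_string());
}

// Status transition: create the new link, delete the old one.
fn move_status(links: &mut Links, from: &'static str, to: &'static str, entry: &str) {
    link(links, to, entry);
    if let Some(v) = links.get_mut(from) {
        v.retain(|h| h != entry);
    }
}

fn main() {
    let mut links = Links::new();
    // On creation: indexed under the pending path only.
    link(&mut links, "entries.status.pending", "entry-1");
    // On approval: a query on the pending path no longer sees it.
    move_status(&mut links, "entries.status.pending", "entries.status.approved", "entry-1");
    assert_eq!(links["entries.status.approved"], vec!["entry-1".to_string()]);
    assert!(links["entries.status.pending"].is_empty());
}
```

Each status path is its own link base, which is exactly why listing one status never fetches entries in another.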
Type-Safe Hash Wrappers
Prevent passing wrong hash type to functions:
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OriginalActionHash(pub ActionHash);
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PreviousActionHash(pub ActionHash);

// Function signature is self-documenting and compile-time safe
pub fn update_my_entry(
    original: OriginalActionHash,
    previous: PreviousActionHash,
    entry: MyEntry,
) -> ExternResult<Record> { ... }
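Since the real wrappers need ActionHash from the HDK, here is a runnable stand-in using String that demonstrates the compile-time guarantee the newtypes buy:

```rust
// Stand-ins: String in place of ActionHash. The wrappers are distinct types,
// so swapping arguments is a compile error, not a silent runtime bug.
#[derive(Debug, Clone, PartialEq)]
pub struct OriginalActionHash(pub String);
#[derive(Debug, Clone, PartialEq)]
pub struct PreviousActionHash(pub String);

fn update_my_entry(original: OriginalActionHash, previous: PreviousActionHash) -> String {
    format!("update {} (chain tip {})", original.0, previous.0)
}

fn main() {
    let orig = OriginalActionHash("hash-A".into());
    let prev = PreviousActionHash("hash-B".into());
    // update_my_entry(prev, orig) would not compile — the types don't match.
    assert_eq!(update_my_entry(orig, prev), "update hash-A (chain tip hash-B)");
}
```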
Holochain Access Control
Why Capability Grants Exist
Holochain zome functions are not open by default. When agent A wants to call a zome function on agent B’s cell (a “remote call”), B’s cell must have an explicit capability grant authorizing that call. Without a grant, the call is rejected.
This applies to:
- call_remote() — calling a zome function on another agent’s cell
- send_remote_signal → recv_remote_signal — the receiver needs a grant so the signal handler can be invoked
Calls from the same hApp’s UI (same agent, same cell) do not need grants.
Three CapAccess Tiers
1. CapAccess::Unrestricted — Any agent may call
use std::collections::BTreeSet;

#[hdk_extern]
pub fn init(_: ()) -> ExternResult<InitCallbackResult> {
    // GrantedFunctions::Listed takes a BTreeSet of (zome, function) pairs
    let mut functions = BTreeSet::new();
    functions.insert((zome_info()?.name, "recv_remote_signal".into()));
    create_cap_grant(ZomeCallCapGrant {
        tag: "open_to_all".into(),
        access: CapAccess::Unrestricted,
        functions: GrantedFunctions::Listed(functions),
    })?;
    Ok(InitCallbackResult::Pass)
}
Use when: the function should be callable by any agent (e.g., recv_remote_signal).
2. CapAccess::Transferable { secret } — Any agent with the secret may call
let secret = generate_cap_secret()?;
create_cap_grant(ZomeCallCapGrant {
    tag: "transferable_grant".into(),
    access: CapAccess::Transferable { secret },
    functions: GrantedFunctions::Listed(functions),
})?;
// Share `secret` with the grantee out-of-band (e.g., via a private entry or direct message)
Use when: you want to delegate access to anyone who holds the secret — like a token.
3. CapAccess::Assigned { secret, assignees } — Only specific agents with the secret
let secret = generate_cap_secret()?;
let mut assignees = BTreeSet::new();
assignees.insert(grantee_pubkey.clone());
create_cap_grant(ZomeCallCapGrant {
    tag: "assigned_grant".into(),
    access: CapAccess::Assigned { secret, assignees },
    functions: GrantedFunctions::Listed(functions),
})?;
Use when: access is explicitly scoped to one or more named agents.
Grant Lifecycle
Grantor side:                            Grantee side:
─────────────────────────────────        ──────────────────────────────────
1. generate_cap_secret()?            →   (receive secret out-of-band)
2. create_cap_grant(ZomeCallCapGrant →   3. create_cap_claim(CapClaim {
     { tag, access, functions })              tag, grantor, secret })?
                                     →   4. call_remote(
                                              grantor_pubkey,
                                              zome_name,
                                              fn_name,
                                              Some(secret),
                                              payload,
                                          )?
Step-by-step:
1. Grantor generates a secret: let secret = generate_cap_secret()?;
2. Grantor creates the grant on their source chain (stored locally, not on the DHT)
3. Grantee receives the secret via private entry, signal, or other channel
4. Grantee stores it as a cap claim: create_cap_claim(CapClaim { tag, grantor, secret })?;
5. Grantee calls with the secret: call_remote(grantor, zome, fn_name, Some(secret), payload)?;
Decision Table
| Scenario | Pattern |
|---|---|
| recv_remote_signal open to all agents | CapAccess::Unrestricted in init() |
| Delegate a specific function to one agent | CapAccess::Assigned + share secret via private entry |
| UI calling own zome (same agent, same cell) | No grant needed |
| Admin-only zome function | Progenitor check in coordinator (see Architecture.md § DNA Properties) |
| Public API any agent can call | CapAccess::Unrestricted in init() for that function |
Notes
- Cap grants are stored on the grantor’s source chain — they are private, not shared to the DHT
- Cap claims are stored on the grantee’s source chain
- Revoking a grant: use delete_cap_grant(grant_action_hash)?
- GrantedFunctions::All grants access to ALL functions in the zome — use with extreme caution
Reference: developer.holochain.org/build/capabilities/
Cell Cloning
What Is Cell Cloning?
Cell cloning creates new network instances from the same DNA code by varying the DNA hash modifier (network seed or properties). Each clone is a separate DHT network — agents in clone A cannot directly see data in clone B even though they run identical code.
This is distinct from having multiple roles in a happ — cloning is for partitioning data within a single role.
When to Use Cloning
| Use case | Pattern |
|---|---|
| Private group spaces (each group gets its own DHT) | Clone per group |
| Time-bounded archives (one clone per year) | Clone per time period |
| Community partitions (separate networks per community) | Clone per community |
| Single shared network for all users | No cloning — single provisioned cell |
happ.yaml Setup
roles:
- name: group_spaces
provisioning:
strategy: create
deferred: true # not created on install — app creates cells on demand
dna:
bundled: "./group_spaces.dna"
modifiers:
network_seed: ~
clone_limit: 50 # allow up to 50 clones of this role
clone_limit must be set to enable cloning. If clone_limit: 0 (default), cloning is not permitted.
TypeScript Client — Creating a Clone
import { AppClient } from '@holochain/client';
// Create a new clone cell with a unique network seed:
const cloneCell = await appClient.createCloneCell({
role_name: 'group_spaces',
modifiers: {
network_seed: `group-${groupId}`, // unique seed = unique network
properties: encode({ group_name: groupName }),
},
name: `Group: ${groupName}`,
});
const clonedCellId = cloneCell.cell_id;
Addressing Clone Cells
Clone cells use a composite role name format: "{role_name}.{clone_index}"
// First clone: "group_spaces.0"
// Second clone: "group_spaces.1"
// etc.
// Call a function on a specific clone:
const result = await appClient.callZome({
cell_id: clonedCellId, // or use role_name: "group_spaces.0"
zome_name: 'group_spaces',
fn_name: 'create_post',
payload: { content: 'Hello group!' },
});
Enabling / Disabling Clones
// Disable a clone (data preserved, cell not running):
await appClient.disableCloneCell({ clone_cell_id: clonedCellId });
// Re-enable a previously disabled clone:
await appClient.enableCloneCell({ clone_cell_id: clonedCellId });
Key Constraints
- The maximum number of clones is set by clone_limit in happ.yaml — plan capacity upfront
- Each clone’s network seed must be unique — using the same seed creates the same network
- Cloned cells share the same WASM binary but have separate source chains and DHTs
- deferred: true is required for clonable roles — they are not provisioned on install
Reference: developer.holochain.org/build/cell-cloning/
Holochain Error Handling
The Pattern: thiserror + WasmError
Every domain should have a typed error enum in the utils (or domain-specific) crate using thiserror. This prevents stringly-typed errors and gives callsites exhaustive match coverage.
Error Enum Definition (utils/src/errors.rs)
use hdk::prelude::*;
use thiserror::Error;

#[derive(Error, Debug)]
pub enum MyDomainError {
    #[error("Entry not found: {0}")]
    NotFound(String),
    #[error("Agent is not authorized to perform this action")]
    NotAuthorized,
    #[error("Cannot update entry with status: {0}")]
    CannotUpdateArchived(String),
    #[error("Cannot delete entry with status: {0}")]
    CannotDeleteNonActive(String),
    #[error("Serialization error: {0}")]
    SerializationError(String),
    #[error("Cross-zome call failed: {0}")]
    CrossZomeCallFailed(String),
    #[error("Invalid input: {0}")]
    InvalidInput(String),
}

// THE critical conversion — maps your typed error to WasmError
impl From<MyDomainError> for WasmError {
    fn from(err: MyDomainError) -> WasmError {
        wasm_error!(WasmErrorInner::Guest(err.to_string()))
    }
}
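To see roughly what thiserror generates, here is a stdlib-only equivalent for two of the variants, with String standing in for WasmError (the real target type needs the HDK):

```rust
use std::fmt;

// Hand-written version of what #[derive(Error)] + #[error("...")] produce:
// a Display impl for the message, plus a From impl into the host error type.
#[derive(Debug)]
enum MyDomainError {
    NotFound(String),
    NotAuthorized,
}

impl fmt::Display for MyDomainError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            MyDomainError::NotFound(what) => write!(f, "Entry not found: {what}"),
            MyDomainError::NotAuthorized => {
                write!(f, "Agent is not authorized to perform this action")
            }
        }
    }
}

// Stand-in for `impl From<MyDomainError> for WasmError`.
impl From<MyDomainError> for String {
    fn from(err: MyDomainError) -> String {
        err.to_string()
    }
}

fn main() {
    let msg: String = MyDomainError::NotFound("abc123".into()).into();
    assert_eq!(msg, "Entry not found: abc123");
}
```

thiserror just removes this boilerplate; the conversion chain that makes `?` work is the same.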
ExternResult and the ? Operator
All public zome functions return ExternResult<T>. The ? operator works throughout because:
- From<MyDomainError> for WasmError is implemented (above)
- ExternResult<T> is an alias for Result<T, WasmError>, so ? propagates any error convertible into WasmError
pub fn update_my_entry(
    original_hash: ActionHash,
    previous_hash: ActionHash,
    entry: MyEntry,
) -> ExternResult<Record> {
    // ? works on both MyDomainError and other ExternResult operations
    let record = get(original_hash.clone(), GetOptions::default())?
        .ok_or(MyDomainError::NotFound(original_hash.to_string()))?;
    let agent = agent_info()?.agent_initial_pubkey;
    if record.action().author() != &agent {
        return Err(MyDomainError::NotAuthorized.into());
    }
    let updated = update_entry(previous_hash, &EntryTypes::MyEntry(entry))?;
    let result = get(updated, GetOptions::default())?
        .ok_or(MyDomainError::NotFound("Updated entry".into()))?;
    Ok(result)
}
Ad-Hoc Errors (without thiserror)
For simple one-off error cases, use wasm_error! directly:
// Simple guest error — no dedicated type needed
return Err(wasm_error!(WasmErrorInner::Guest("Expected app entry type".into())));

// Wrapping serialization failures
let entry: MyEntry = record.entry()
    .to_app_option()
    .map_err(|e| wasm_error!(WasmErrorInner::Guest(format!("Deserialization failed: {e}"))))?
    .ok_or(wasm_error!(WasmErrorInner::Guest("Entry is not MyEntry type".into())))?;
When to use ad-hoc vs. typed:
- Ad-hoc: one-off cases in coordinators, unlikely to be matched by callers
- Typed enum: domain errors that cross-zome callers need to inspect or that validators need
Common Error Variants Checklist
When defining a domain error enum, cover these cases:
| Variant | When to use |
|---|---|
| NotFound(String) | DHT get returns None after expected create |
| NotAuthorized | Author check fails — agent is not the entry creator |
| CannotUpdateArchived(String) | Status guard on update — entry is archived/deleted |
| CannotDeleteNonActive(String) | Status guard on delete |
| SerializationError(String) | to_app_option() or decode() failure |
| CrossZomeCallFailed(String) | external_local_call returns error variant |
| InvalidInput(String) | Validation-style check in coordinator (before HDK calls) |
| EntryTypeMismatch | Retrieved entry is wrong type |
Validation Error Handling (Integrity)
Validation functions return ValidateCallbackResult, not ExternResult:
fn validate_create_my_entry(entry: MyEntry) -> ExternResult<ValidateCallbackResult> {
    if entry.title.trim().is_empty() {
        // Invalid — data is rejected, not a runtime error
        return Ok(ValidateCallbackResult::Invalid(
            "MyEntry title cannot be empty".into()
        ));
    }
    if entry.title.len() > 200 {
        return Ok(ValidateCallbackResult::Invalid(
            "MyEntry title exceeds 200 characters".into()
        ));
    }
    Ok(ValidateCallbackResult::Valid)
}
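Because validation must be a deterministic, pure function of its input, the same checks can be exercised outside the conductor. A runnable sketch with a local stub of ValidateCallbackResult (the real type comes from the HDI):

```rust
// Local stub: invalid data is a *value* the callback returns, not an Err.
#[derive(Debug, PartialEq)]
enum ValidateCallbackResult {
    Valid,
    Invalid(String),
}

struct MyEntry {
    title: String,
}

fn validate_create_my_entry(entry: &MyEntry) -> ValidateCallbackResult {
    if entry.title.trim().is_empty() {
        return ValidateCallbackResult::Invalid("MyEntry title cannot be empty".into());
    }
    if entry.title.len() > 200 {
        return ValidateCallbackResult::Invalid("MyEntry title exceeds 200 characters".into());
    }
    ValidateCallbackResult::Valid
}

fn main() {
    assert_eq!(
        validate_create_my_entry(&MyEntry { title: "ok".into() }),
        ValidateCallbackResult::Valid
    );
    // Whitespace-only titles are rejected by the trim() check.
    assert!(matches!(
        validate_create_my_entry(&MyEntry { title: "  ".into() }),
        ValidateCallbackResult::Invalid(_)
    ));
    // Over-long titles are rejected by the length check.
    assert!(matches!(
        validate_create_my_entry(&MyEntry { title: "x".repeat(201) }),
        ValidateCallbackResult::Invalid(_)
    ));
}
```

Keeping validation logic in pure helpers like this also makes it unit-testable without a DHT.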
Cargo.toml Setup for thiserror
In utils/Cargo.toml:
[dependencies]
hdk = { workspace = true }
thiserror = { workspace = true }
In workspace Cargo.toml:
[workspace.dependencies]
thiserror = "1"
Holochain Scaffold
Prerequisites
Holochain development requires Nix for a reproducible development environment. All tooling (Rust, hc CLI, holochain, lair-keystore) is managed through Holonix.
Install Nix
# Official Nix installer (recommended)
sh <(curl -L https://nixos.org/nix/install) --no-daemon
# Or with Determinate Systems installer (more reliable, adds uninstaller)
curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
Enable flakes (required for Holonix):
# Add to ~/.config/nix/nix.conf (or /etc/nix/nix.conf):
experimental-features = nix-command flakes
Standard flake.nix (Holonix)
Pin to main-0.6 for HDK 0.6.x stability:
{
inputs = {
holonix.url = "github:holochain/holonix?ref=main-0.6";
nixpkgs.follows = "holonix/nixpkgs";
flake-parts.follows = "holonix/flake-parts";
};
outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } {
systems = builtins.attrNames inputs.holonix.devShells;
perSystem = { inputs', ... }: {
devShells.default = inputs'.holonix.devShells.default;
};
};
}
Enter the dev shell:
nix develop
# Now hc, cargo, rustc, and all Holochain tooling are available
Why pin the branch? Holonix main tracks the latest dev version. main-0.6 pins all tooling to HDK 0.6.x compatibility. Mixing versions causes compilation failures.
hc Scaffold Commands
The hc scaffold CLI generates boilerplate that follows Holochain conventions. Always use it before writing by hand.
New hApp
# Create a complete new hApp project
hc scaffold happ
# Prompts for: app name, DNA name, coordinator zome name
# Generates: flake.nix, happ.yaml, dna.yaml, Cargo workspace, first zome pair
New DNA (for multi-DNA hApps)
# Add a new DNA to an existing hApp
hc scaffold dna
# Prompts for: DNA name
# Generates: dna.yaml, new zome pair stubs
New Zome Pair
# Add a coordinator/integrity zome pair to an existing DNA
hc scaffold zome
# Prompts for: zome name, DNA to add it to
# Generates: integrity crate + coordinator crate with Cargo.toml
Entry Type
# Add an entry type to an existing zome pair
hc scaffold entry-type MyEntry
# Generates: entry struct in integrity, create/get/update/delete stubs in coordinator
# Also generates basic Tryorama test file
Link Type
# Add a link type
hc scaffold link-type AgentToMyEntry
# Generates: link type variant in integrity, create/get/delete helpers in coordinator
Collection
# Add a collection (global path anchor) for an entry type
hc scaffold collection
# Prompts for: entry type to index, collection type (global or by-agent)
Verify Compilation
After any scaffold operation, always verify the project compiles:
# Generate and verify WASM compilation
hc sandbox generate workdir/
# Or using the build alias (if package.json scripts are set up)
bun run build
First build is slow (WASM compilation + wasm-opt). Subsequent builds use the Rust cache. Expect 2-5 minutes for a fresh build.
Project Structure After Scaffolding
my-happ/
├── flake.nix # Nix dev environment (Holonix)
├── Cargo.toml # Workspace root — pins hdk/hdi versions
├── happ.yaml # hApp manifest (roles, DNAs)
├── workdir/ # Build output directory
└── dnas/
└── my_dna/
├── dna.yaml # DNA manifest (zomes, properties)
└── zomes/
├── integrity/
│ └── my_domain_integrity/
│ ├── Cargo.toml
│ └── src/
│ ├── lib.rs # Entry types, link types, validate()
│ └── types.rs # Entry structs
└── coordinator/
└── my_domain/
├── Cargo.toml
└── src/
├── lib.rs # pub extern declarations
└── my_entry.rs # CRUD implementation
Tests live outside dnas:
tests/
├── package.json # @holochain/tryorama, vitest
├── vitest.config.ts # testTimeout: 60000
├── foundation/ # Single-agent happy-path CRUD
│ └── my_entry.test.ts
└── integration/ # Two-agent cross-propagation tests
└── my_entry.test.ts
Cargo Workspace Version Pins
Root Cargo.toml — always use exact pins (=):
[workspace]
resolver = "2"
members = [
"dnas/my_dna/zomes/integrity/my_domain_integrity",
"dnas/my_dna/zomes/coordinator/my_domain",
]
[workspace.dependencies]
hdi = "=0.7.1"
hdk = "=0.6.1"
serde = { version = "1", features = ["derive"] }
thiserror = "1"
Why exact pins (=)? Holochain zome compilation is extremely sensitive to minor version differences. Range deps (^) can silently pull in incompatible patch releases.
Add Domain to Existing Project
When adding a new feature domain to an existing hApp:
# 1. Enter Nix dev shell if not already in it
nix develop
# 2. Scaffold a new zome pair
hc scaffold zome
# Enter: domain name (e.g., "profiles"), select existing DNA
# 3. Scaffold entry types for the domain
hc scaffold entry-type Profile
hc scaffold link-type AgentToProfile
hc scaffold link-type PathToProfile
hc scaffold link-type ProfileUpdates
# 4. Add the new crates to workspace Cargo.toml members list
# 5. Verify compilation
hc sandbox generate workdir/
Proceed to Workflows/ImplementZome.md to fill in the implementation.
Common Setup Issues
| Problem | Cause | Fix |
|---|---|---|
| nix: command not found | Nix not installed or not in PATH | Restart shell after install; check ~/.nix-profile/bin in PATH |
| flakes not enabled | Missing experimental-features config | Add experimental-features = nix-command flakes to ~/.config/nix/nix.conf |
| hc: command not found inside nix develop | Wrong holonix branch | Check flake.nix ref — must be main-0.6, not main |
| wasm32 target not found | Rust toolchain outside Nix | Use nix develop; don’t use system Rust for Holochain builds |
| First build hangs at wasm-opt | wasm-opt is slow on first run | Normal — wait 5-10 min; subsequent builds are fast |
Reference: developer.holochain.org/get-started/
Holochain Testing
Four-Layer Testing Strategy
┌──────────────────────────────────────────────────────────────────┐
│ Layer 4 — Performance (Wind-Tunnel, load testing) │
│ "How fast, scalable, and resilient is this under load?" │
├──────────────────────────────────────────────────────────────────┤
│ Layer 3 — E2E UI (Playwright + real conductor) │
│ "Does the UI render real data and journeys work?" │
├──────────────────────────────────────────────────────────────────┤
│ Layer 2 — Integration (Sweettest, cargo test) │
│ "Do zomes, DHT sync, and validation work?" │
├──────────────────────────────────────────────────────────────────┤
│ Layer 1 — Unit (Vitest, stores/services/mappers) │
│ "Do computed values and business logic work?" │
└──────────────────────────────────────────────────────────────────┘
| Layer | Tool | Output | What it catches |
|---|---|---|---|
| Unit | Vitest | pass/fail | Store logic, mappers, computed values |
| Integration | Sweettest | pass/fail | Zome logic, validation, DHT sync, auth |
| E2E UI | Playwright + @holochain/client | pass/fail | Full user journeys, real data display |
| Performance | Wind-Tunnel | metrics (latency/throughput) | Regressions under load, DHT sync lag, soak issues |
Gap: Browser-side signal handling (recv_remote_signal) is not well covered by any layer — it requires a running UI receiving WebSocket push events from a real conductor.
Framework Overview
- Sweettest (`holochain::sweettest`) — Rust-native, in-process conductor. Official Holochain team recommendation. Run with `cargo test`.
- Playwright + `@holochain/client` — Browser automation against a real conductor. No mocks.
- Wind-Tunnel (`holochain_wind_tunnel_runner`) — Rust load testing. Separate repo. Measures latency, throughput, and DHT sync lag. Used for Holochain core CI performance regression. See `WindTunnel.md`.
Note on Tryorama: Deprecated for HDK 0.7+ by the Holochain team. Use Sweettest for integration tests and Playwright for E2E UI tests.
When to Use Which
| Use Case | Sweettest | Playwright | Wind-Tunnel |
|---|---|---|---|
| Zome logic, validation, CRUD | ✅ Preferred | No | No |
| DHT propagation, consistency | ✅ Preferred | No | No |
| Multi-agent scenarios | ✅ Preferred | No | No |
| Inline zomes (no WASM compile) | ✅ Yes | No | No |
| Direct DHT database inspection | ✅ Yes | No | No |
| Full UI user journeys | No | ✅ Yes | No |
| Real data rendered in browser | No | ✅ Yes | No |
| Latency / throughput metrics | No | No | ✅ Yes |
| DHT sync lag measurement | No | No | ✅ Yes |
| Soak / sustained load testing | No | No | ✅ Yes |
| Language | Rust | TypeScript | Rust |
Sweettest (Rust-Native)
Setup (Cargo.toml)
[dev-dependencies]
holochain = { version = "=0.6.1", features = ["test_utils"] }
tokio = { version = "1", features = ["full"] }
Core Types
| Type | Purpose |
|---|---|
| `SweetConductor` | Single conductor instance |
| `SweetConductorBatch` | Multiple conductors for multi-agent scenarios |
| `SweetApp` | Installed app with pre-built cells |
| `SweetCell` | Cell reference — access agent key, DNA hash, zome handles |
| `SweetZome` | `(CellId, ZomeName)` handle passed to `conductor.call()` |
| `SweetAgents` | Agent key generation utilities |
| `SweetDnaFile` | DNA construction helpers |
| `SweetInlineZomes` | Define zome functions directly in test code |
Standard Two-Agent Test
use holochain::sweettest::*;
use std::path::Path;

#[tokio::test(flavor = "multi_thread")]
async fn two_agents_can_share_entries() {
    // 1. Create two conductors
    let mut conductors = SweetConductorBatch::from_config(
        2,
        SweetConductorConfig::standard(),
    ).await;

    // 2. Load DNA bundle
    let dna = SweetDnaFile::from_bundle(Path::new("workdir/my.dna")).await.unwrap();

    // 3. Install app on both conductors
    let apps = conductors.setup_app("my-app", &[dna]).await.unwrap();
    let ((alice_cell,), (bob_cell,)) = apps.into_tuples();

    // 4. Exchange peer info so conductors can gossip
    conductors.exchange_peer_info().await;

    // 5. Alice creates an entry
    let alice_zome = alice_cell.zome("my_coordinator");
    let hash: ActionHash = conductors[0]
        .call(&alice_zome, "create_my_entry", my_payload)
        .await;

    // 6. Wait for DHT consistency (replaces Tryorama's dhtSync)
    await_consistency(&[&alice_cell, &bob_cell]).await.unwrap();

    // 7. Bob reads the entry
    let bob_zome = bob_cell.zome("my_coordinator");
    let record: Option<Record> = conductors[1]
        .call(&bob_zome, "get_my_entry", hash)
        .await;
    assert!(record.is_some());
}
CRITICAL: await_consistency is MANDATORY Before Cross-Agent Reads
The Rust equivalent of Tryorama’s dhtSync:
// After any write, before cross-agent reads:
await_consistency(&[&alice_cell, &bob_cell]).await.unwrap();

// Custom timeout in seconds (default is 60s):
await_consistency_s(30, &[&alice_cell, &bob_cell]).await.unwrap();

// Instant non-waiting check:
check_consistency(&[&alice_cell, &bob_cell]).await.unwrap();
await_consistency polls every 500ms, comparing all peers’ DHT databases at the op level until every op is integrated across all nodes.
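The polling mechanism can be sketched in plain Rust (std only). Here `poll_until`, the interval, and the predicate are illustrative stand-ins, not Sweettest API; the real implementation compares peers' DHT op sets instead of calling a closure:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Poll a predicate every `interval` until it returns true or `timeout` elapses.
/// This mirrors what `await_consistency` does: repeatedly check whether all
/// peers hold the same integrated ops, succeeding as soon as they match and
/// erroring on timeout.
fn poll_until(
    mut is_consistent: impl FnMut() -> bool,
    interval: Duration,
    timeout: Duration,
) -> Result<(), String> {
    let deadline = Instant::now() + timeout;
    loop {
        if is_consistent() {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err("timed out waiting for consistency".into());
        }
        sleep(interval);
    }
}

fn main() {
    // Simulate ops "integrating" after a few polls.
    let mut polls = 0;
    let result = poll_until(
        || {
            polls += 1;
            polls >= 3 // consistent on the third check
        },
        Duration::from_millis(10), // Sweettest polls every 500ms
        Duration::from_secs(1),
    );
    assert!(result.is_ok());
    println!("consistent after {polls} polls");
}
```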
Calling Zome Functions
// Standard call — panics on error, uses authorship cap automatically:
let result: MyOutputType = conductor.call(&cell.zome("my_zome"), "fn_name", payload).await;

// Fallible call — returns ConductorApiResult:
let result = conductor.call_fallible(&cell.zome("my_zome"), "fn_name", payload).await?;

// Cross-agent call — simulate another agent calling with a cap secret:
let result: MyOutputType = conductor.call_from(
    &other_agent_key,
    Some(cap_secret),
    &cell.zome("my_zome"),
    "restricted_fn",
    payload,
).await;
Agent Key Generation
// Named deterministic keys (same every run — useful for debugging):
let (alice, bob) = SweetAgents::alice_and_bob();
let alice = SweetAgents::alice();

// Random keys:
let agent = SweetAgents::one(conductor.keystore()).await;
let (a, b, c) = SweetAgents::three(conductor.keystore()).await;
let agents: Vec<AgentPubKey> = SweetAgents::get(conductor.keystore(), 5).await;
Inline Zomes (Quick Isolated Tests, No WASM Compile)
let zomes = SweetInlineZomes::new(vec![], 0) // (entry defs, num link types)
    .function("create_thing", |api, input: MyInput| {
        let hash = api.create(CreateInput::new(
            EntryDefLocation::app(0, 0),
            EntryVisibility::Public,
            Entry::app(SerializedBytes::try_from(input)?)?,
            ChainTopOrdering::default(),
        ))?;
        Ok(hash)
    });

let (dna, _, _) = SweetDnaFile::unique_from_inline_zomes(zomes).await;
Single-Conductor Pattern (Validation and Unit Tests)
#[tokio::test(flavor = "multi_thread")]
async fn validate_entry_on_create() {
    let mut conductor = SweetConductor::from_config(SweetConductorConfig::standard()).await;
    let dna = SweetDnaFile::from_bundle(Path::new("workdir/my.dna")).await.unwrap();
    let app = conductor.setup_app("my-app", &[dna]).await.unwrap();
    let (cell,) = app.into_tuple();
    let zome = cell.zome("my_coordinator");

    // Test validation rejection
    let result = conductor.call_fallible(&zome, "create_my_entry", invalid_payload).await;
    assert!(result.is_err());
}
Common Sweettest Failures
| Symptom | Root Cause | Fix |
|---|---|---|
| Bob can’t find Alice’s entry | Missing await_consistency | Add await_consistency(&[&alice_cell, &bob_cell]).await.unwrap() |
| Compilation error on `call()` | Missing feature flag | Add `features = ["test_utils"]` to `holochain` dev-dep |
| Timeout in `await_consistency` | Conductors not networked | Call `conductors.exchange_peer_info().await` after `setup_app` |
| Wrong type on `call()` | Type annotation missing | Add explicit type: `let result: MyType = conductor.call(...)` |
| `into_tuple()` fails | Wrong number of cells destructured | Match tuple arity to number of DNA roles |
SweetConductorConfig — Network Tuning
Most tests use SweetConductorConfig::standard(). Override for stress tests or timing-sensitive scenarios:
let mut config = SweetConductorConfig::standard();

// Tune gossip frequency (default: 1000ms)
config.tune_network_config(|net| {
    net.gossip_initiate_interval_ms = 500; // More frequent gossip
    net.gossip_round_timeout_ms = 20_000; // Longer timeout
    net.gossip_min_initiate_interval_ms = 500;
    net.gossip_initiate_jitter_ms = 50;
});

// Tune validation and countersigning
config.tune_conductor(|tune| {
    tune.sys_validation_retry_delay = Some(Duration::from_secs(3));
    tune.countersigning_resolution_retry_delay = Some(Duration::from_secs(5));
    tune.countersigning_resolution_retry_limit = Some(10);
});

let conductor = SweetConductor::from_config_rendezvous(
    config,
    SweetLocalRendezvous::new().await,
).await;
Installation Patterns
Single Conductor, Multiple Agents
// Install same app for N generated agents (app IDs: "{prefix}0", "{prefix}1", ...)
let apps: SweetAppBatch = conductor
    .setup_apps("my-app", 3, &[dna_file])
    .await.unwrap();
let cells: Vec<SweetCell> = apps.cells_flattened();

// Install for a specific pre-generated agent
let agent = SweetAgents::one(conductor.keystore()).await;
let app: SweetApp = conductor
    .setup_app_for_agent("my-app", agent.clone(), &[dna_file])
    .await.unwrap();

// Install for multiple pre-generated agents
let agents = SweetAgents::get(conductor.keystore(), 3).await;
let apps: SweetAppBatch = conductor
    .setup_app_for_agents("my-app", &agents, &[dna_file])
    .await.unwrap();
Explicit DNA Role Binding
// Bind DNA to a named role (required when role name differs from DNA hash)
let dna_with_role: (RoleName, DnaFile) = ("my_role".into(), dna_file);
let app = conductor.setup_app("my-app", &[dna_with_role]).await.unwrap();
Multi-Cell App (Multiple DNA Roles)
let role_a = ("role_a", dna_a);
let role_b = ("role_b", dna_b);
let app = conductor.setup_app("my-app", &[role_a, role_b]).await.unwrap();

// Destructure cells by role order
let (cell_a, cell_b) = app.into_tuple();
SweetAppBatch Destructuring
// Two apps, one cell each
let ((alice,), (bob,)) = conductors
    .setup_app("my-app", &[dna_file])
    .await.unwrap()
    .into_tuples();

// Two apps, two cells each
let ((alice_a, alice_b), (bob_a, bob_b)) = conductors
    .setup_app("my-app", &[dna_a, dna_b])
    .await.unwrap()
    .into_tuples();
SweetConductorBatch — Advanced Patterns
// From custom config applied to all conductors
let conductors = SweetConductorBatch::from_config_rendezvous(
    3,
    SweetConductorConfig::standard(),
).await;

// From different configs per conductor
let configs = vec![config_a, config_b, config_c];
let conductors = SweetConductorBatch::from_configs_rendezvous(configs).await;

// Force peer visibility between two specific conductors (unidirectional)
conductors.reveal_peer_info(0, 1).await; // conductor 0 sees conductor 1

// Persist databases for debugging
conductors[0].persist_dbs(); // must call BEFORE shutdown
App Lifecycle Management
// Disable then re-enable an app
conductor.disable_app("my-app".to_string(), DisabledAppReason::User).await.unwrap();
conductor.enable_app("my-app".to_string()).await.unwrap();

// Hot-reload coordinator zomes without restarting conductor
conductor.update_coordinators(
    cell.cell_id().clone(),
    updated_coordinator_zomes,
    vec![new_wasm],
).await.unwrap();

// Create a clone cell of an existing role
let cloned = conductor.create_clone_cell(
    &"my-app".to_string(),
    CreateCloneCellPayload {
        role_name: "clonable_role".into(),
        modifiers: DnaModifiersOpt::default().with_network_seed("clone-1"),
        membrane_proof: None,
        name: Some("My Clone".to_string()),
    },
).await.unwrap();

// Restart conductor
conductor.shutdown().await;
conductor.startup(false).await;
Database Access and Inspection
Use these for debugging or asserting internal state without going through zome calls:
// Access authored and DHT databases directly
let authored_db = cell.authored_db();
let dht_db = cell.dht_db();
let dht_db_from_conductor = conductor.get_dht_db(cell.dna_hash()).unwrap();

// Read the full source chain for an agent
let chain = conductor
    .get_agent_source_chain(&agent_key, cell.dna_hash())
    .await;

// Get invalid / rejected ops (validates your validation logic)
let invalid_ops = conductor.get_invalid_integrated_ops(&dht_db).await.unwrap();
assert!(invalid_ops.is_empty(), "Found invalid ops: {invalid_ops:?}");

// Persist databases to disk before shutdown (for debugging)
let path = conductor.persist_dbs();
println!("DB saved to: {}", path.display());
conductor.shutdown().await;
Network and Gossip Testing
// Wait for specific peers to become visible on this conductor
conductor.wait_for_peer_visible(
    vec![alice_pubkey.clone(), bob_pubkey.clone()],
    Some(cell.cell_id().clone()),
    Duration::from_secs(30),
).await.unwrap();

// Require at least N peers before gossip starts (avoids false positives)
conductor
    .require_initial_gossip_activity_for_cell(&cell, 2, Duration::from_secs(30))
    .await.unwrap();

// Declare this node holds the full DHT arc (affects peer routing)
conductor.declare_full_storage_arcs(cell.dna_hash()).await;

// Check consistency without blocking (instant snapshot)
check_consistency(&[&alice_cell, &bob_cell]).await.unwrap();

// Drop and restart signaling server (simulates network partition)
let rendezvous = SweetLocalRendezvous::new_raw().await;
rendezvous.drop_sig().await; // kill signal channel
// ... test behavior during outage ...
rendezvous.start_sig().await; // restore
Op Integration Verification
Assert that ops are fully integrated without using await_consistency:
// All ops in the DHT for this DNA are integrated
let integrated = conductor.all_ops_integrated(cell.dna_hash()).unwrap();
assert!(integrated, "Ops not yet integrated");

// All ops authored by a specific agent are integrated
let author_integrated = conductor
    .all_ops_of_author_integrated(cell.dna_hash(), cell.agent_pubkey())
    .unwrap();
Time-Based Testing (Scheduled Functions)
// Start scheduler with custom interval
conductor.start_scheduler(Duration::from_millis(100)).await.unwrap();

// Manually fire scheduled functions at a specific timestamp
let target_time = Timestamp::now() + Duration::from_secs(3600); // 1 hour in future
conductor.dispatch_scheduled_fns(target_time).await;

// Verify effects after scheduler fires
let result: Vec<Record> = conductor.call(&zome, "get_scheduled_entries", ()).await;
assert!(!result.is_empty());
SweetInlineZomes — Integrity and Coordinator Separation
The full pattern separates integrity (validation) from coordinator (business logic):
use holochain::sweettest::{SweetInlineZomes, SweetDnaFile};
use holochain_zome_types::{EntryDef, EntryVisibility};

let entry_def = EntryDef {
    id: "my_entry".into(),
    visibility: EntryVisibility::Public,
    required_validations: RequiredValidations::default(),
    cache_at_agent_activity: false,
    required_validation_type: Default::default(),
};

let zomes = SweetInlineZomes::new(vec![entry_def], /* num_link_types */ 0)
    // Integrity zome: validation callbacks
    .integrity_function("validate", |_api, _op: Op| {
        Ok(ValidateCallbackResult::Valid)
    })
    // Coordinator zome: zome functions
    .function("create_entry", |api, input: MyInput| {
        let hash = api.create(CreateInput::new(
            EntryDefLocation::app(0, 0),
            EntryVisibility::Public,
            Entry::app(SerializedBytes::try_from(input)?)?,
            ChainTopOrdering::default(),
        ))?;
        Ok(hash)
    })
    .function("get_entry", |api, hash: ActionHash| {
        api.get(vec![GetInput::new(hash.into(), GetOptions::default())])
            .map(|gets| gets.into_iter().next().flatten())
    });

let (dna, _, _) = SweetDnaFile::unique_from_inline_zomes(zomes).await;
Zome name constants: SweetInlineZomes::INTEGRITY = "integrity", SweetInlineZomes::COORDINATOR = "coordinator".
WebSocket Interface Testing
For tests that need to verify WebSocket behavior (signals, app interface):
// Get admin WebSocket client
let (admin_sender, _admin_recv) = conductor.admin_ws_client::<AdminResponse>().await;

// Get app WebSocket client (auto-authenticated)
let (app_sender, mut app_recv) = conductor
    .app_ws_client::<AppResponse>("my-app".to_string())
    .await;

// Or authenticate manually for custom setup
let (app_sender, _) = websocket_client_by_port(app_port).await.unwrap();
authenticate_app_ws_client(app_sender.clone(), admin_port, "my-app".to_string()).await;
Common Sweettest Failures (Extended)
| Symptom | Root Cause | Fix |
|---|---|---|
| Bob can’t find Alice’s entry | Missing await_consistency | Add await_consistency(&[&alice_cell, &bob_cell]).await.unwrap() |
| Compilation error on `call()` | Missing feature flag | Add `features = ["test_utils"]` to `holochain` dev-dep |
| Timeout in `await_consistency` | Conductors not networked | Call `conductors.exchange_peer_info().await` after `setup_app` |
| Wrong type on `call()` | Type annotation missing | Add explicit type: `let result: MyType = conductor.call(...)` |
| `into_tuple()` fails | Wrong number of cells destructured | Match tuple arity to number of DNA roles |
| Invalid ops present unexpectedly | Validation logic accepting bad data | Use `get_invalid_integrated_ops()` to inspect rejected ops |
| `reveal_peer_info` / gossip never starts | Full arc not declared | Call `declare_full_storage_arcs()` on test conductors |
| Scheduled fn never fires | Scheduler not started | Call start_scheduler() or dispatch_scheduled_fns(timestamp) |
| WebSocket auth fails in test | Using wrong port | Use admin_ws_client() for admin, app_ws_client() for app calls |
E2E UI Testing (Playwright + Real Conductor)
For full end-to-end tests that drive the UI against a real Holochain backend — no mocks. Use @holochain/client directly — Tryorama is deprecated.
Setup (package.json)
{
"devDependencies": {
"@playwright/test": "^1.40.0",
"@holochain/client": "^0.18.0"
}
}
Conductor Setup Pattern (globalSetup)
The critical pattern: use AdminWebsocket to install the app and get a proper auth token, then keep the conductor alive for all Playwright tests.
// tests/e2e/setup/global-setup.ts
import { AdminWebsocket, AppWebsocket } from '@holochain/client';
import { execSync, spawn } from 'child_process';
export default async function globalSetup() {
// 1. Start conductor via hc sandbox
const conductor = spawn('hc', ['sandbox', 'run', '--root', './test-workdir'], {
stdio: ['ignore', 'pipe', 'pipe']
});
// 2. Wait for conductor ready signal in stdout (not polling)
await new Promise<void>((resolve, reject) => {
conductor.stdout?.on('data', (data: Buffer) => {
if (data.toString().includes('Conductor ready')) resolve();
});
setTimeout(() => reject(new Error('Conductor startup timeout')), 30000);
});
// 3. Connect admin client (use admin port, not app port)
const admin = await AdminWebsocket.connect({
url: new URL('ws://localhost:8888')
});
// 4. Install and enable the happ properly
const agentKey = await admin.generateAgentPubKey();
await admin.installApp({
installed_app_id: 'my_happ',
agent_key: agentKey,
path: './workdir/my_happ.happ',
});
await admin.enableApp({ installed_app_id: 'my_happ' });
// 5. Open app interface on a free port
const { port } = await admin.attachAppInterface({ port: 0 });
// 6. Issue auth token
const { token } = await admin.issueAppAuthenticationToken({
installed_app_id: 'my_happ'
});
// 7. Connect app client and seed test data
const client = await AppWebsocket.connect({
url: new URL(`ws://localhost:${port}`),
token,
});
await seedTestData(client);
// 8. Store conductor process for teardown
process.env.E2E_CONDUCTOR_PID = String(conductor.pid);
process.env.E2E_APP_PORT = String(port);
}
Data Seeding
Seed data directly via AppWebsocket.callZome before Playwright opens the browser:
async function seedTestData(client: AppWebsocket) {
// Seed in dependency order
await client.callZome({
role_name: 'my_dna',
zome_name: 'my_coordinator',
fn_name: 'create_service_type',
payload: { name: 'Web Development', description: '...' },
});
await client.callZome({
role_name: 'my_dna',
zome_name: 'my_coordinator',
fn_name: 'create_offer',
payload: { title: 'Seed Offer', description: '...' },
});
}
Playwright Test Pattern
// tests/e2e/specs/offers.spec.ts
import { test, expect } from '@playwright/test';
test('user sees seeded offers on load', async ({ page }) => {
await page.goto('/offers');
// Wait for Holochain connection (not a mock — real loading time)
await expect(page.locator('[data-testid="offer-card"]'))
.toHaveCount(1, { timeout: 15000 });
await expect(page.locator('text=Seed Offer')).toBeVisible();
});
playwright.config.ts Key Settings
export default defineConfig({
globalSetup: './tests/e2e/setup/global-setup.ts',
globalTeardown: './tests/e2e/setup/global-teardown.ts',
workers: 1, // Single worker — one conductor, no conflicts
fullyParallel: false, // Holochain state is shared across tests
timeout: 60000, // Holochain operations are slow
use: {
baseURL: 'http://localhost:5173',
},
webServer: {
command: 'bun run dev',
url: 'http://localhost:5173',
reuseExistingServer: true,
},
});
Common E2E Failures
| Symptom | Root Cause | Fix |
|---|---|---|
| Conductor never ready | Polling instead of stdout | Listen for "Conductor ready" in stdout |
| `callZome` rejected | Using `AppWebsocket` on admin port | Use `AdminWebsocket` on admin port (8888), `AppWebsocket` on app port |
| Auth error on connect | Missing token | Call admin.issueAppAuthenticationToken() and pass token to AppWebsocket.connect |
| Tests interfere with each other | Shared conductor state | Run with workers: 1, reset data in beforeEach if needed |
| UI shows no data | Race — browser loads before seeding | Seed in globalSetup (runs before browser opens), not in beforeAll |
Wind-Tunnel — Performance and Load Testing
Wind-Tunnel is Holochain’s load testing framework. It applies user-defined load to running Holochain conductors and measures system response: latency, throughput, DHT sync lag, resource usage. It is completely separate from Sweettest (integration/correctness) and Playwright (E2E UI).
Repo: https://github.com/holochain/wind-tunnel
Version: 0.6.1
Used for: performance regression CI (every merge to holochain `main`), soak testing, benchmarking
Testing Layers Compared
| Layer | Tool | Purpose | Output |
|---|---|---|---|
| 1 | Sweettest (Rust) | Correctness — does the hApp work right? | pass/fail |
| 2 | Playwright (TypeScript) | E2E functional — does the UI+zome flow work? | pass/fail |
| 3 | Wind-Tunnel (Rust) | Performance — how fast/scalable is this? | metrics (latency, throughput) |
Rule: Use Wind-Tunnel when you need time-series performance data, not when you need correctness assertions.
Published Crates
| Crate | Purpose |
|---|---|
| `wind_tunnel_runner` | Core: `ScenarioDefinitionBuilder`, `run()`, `AgentContext`, `RunnerContext`, `Executor` |
| `wind_tunnel_instruments` | Metrics: `Reporter`, `ReportMetric`, `OperationRecord` |
| `wind_tunnel_instruments_derive` | Proc macro: `#[wind_tunnel_instrument]` |
| `wind_tunnel_core` | Core types: `AgentBailError`, `ShutdownHandle` |
| `holochain_wind_tunnel_runner` | Holochain bindings: `call_zome()`, `install_app()`, `HolochainAgentContext` |
| `holochain_client_instrumented` | Auto-instrumented `AdminWebsocket` / `AppWebsocket` |
Add to Cargo.toml:
[dev-dependencies]
holochain_wind_tunnel_runner = "0.6"
Core API
ScenarioDefinitionBuilder
use holochain_wind_tunnel_runner::prelude::*;
use holochain_wind_tunnel_runner::happ_path;

ScenarioDefinitionBuilder::<HolochainRunnerContext, HolochainAgentContext>::new_with_init(
    env!("CARGO_PKG_NAME")
)
.with_default_duration_s(60)            // seconds to run
.use_build_info(conductor_build_info)   // attach conductor metadata to reports
.use_agent_setup(fn)                    // called once per agent before loop
.use_agent_behaviour(fn)                // called repeatedly per agent for entire duration
.use_agent_teardown(fn)                 // called once per agent after duration ends
.use_named_agent_behaviour("write", fn) // named role for multi-behavior scenarios
.use_named_agent_behaviour("read", fn)  // multiple roles assigned via CLI
.use_setup(fn)                          // global setup (before any agents)
.use_teardown(fn)                       // global teardown (after all agents)
.add_capture_env("MY_ENV_VAR")          // include env vars in report metadata
Lifecycle Order
Global Setup
→ Agent Setup (each agent, once)
→ Agent Behaviour loop (each agent, repeated until duration/shutdown)
→ Agent Teardown (each agent, once)
Global Teardown
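The per-agent part of this ordering can be illustrated with a std-only toy runner (the `ToyScenario` type and its fields are invented for the sketch and loop a fixed number of iterations instead of a wall-clock duration; they are not Wind-Tunnel API):

```rust
/// Minimal stand-in for the scenario runner: each hook is a plain function
/// pointer, and `run` fires them in Wind-Tunnel's documented per-agent order.
struct ToyScenario<S> {
    setup: fn(&mut S),
    behaviour: fn(&mut S),
    teardown: fn(&mut S),
}

impl<S> ToyScenario<S> {
    fn run(&self, state: &mut S, iterations: usize) {
        (self.setup)(state); // once per agent, before the loop
        for _ in 0..iterations {
            (self.behaviour)(state); // repeated until duration/shutdown
        }
        (self.teardown)(state); // once per agent, after the loop
    }
}

fn main() {
    // Record the call order to confirm setup -> behaviour* -> teardown.
    let scenario = ToyScenario {
        setup: |log: &mut Vec<&str>| log.push("setup"),
        behaviour: |log| log.push("behaviour"),
        teardown: |log| log.push("teardown"),
    };
    let mut log = Vec::new();
    scenario.run(&mut log, 3);
    assert_eq!(log, ["setup", "behaviour", "behaviour", "behaviour", "teardown"]);
}
```

In the real runner, global setup/teardown wrap this whole sequence across all agents.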
Contexts
// Per-agent context — available inside every hook
impl AgentContext<HolochainRunnerContext, HolochainAgentContext<SV>> {
    fn agent_index(&self) -> usize;
    fn agent_name(&self) -> &str;
    fn runner_context(&self) -> &RunnerContext;
    fn get(&self) -> &HolochainAgentContext<SV>;             // read agent state
    fn get_mut(&mut self) -> &mut HolochainAgentContext<SV>; // write agent state
}

// Shared runner context
impl RunnerContext {
    fn reporter(&self) -> Arc<Reporter>; // metrics sink
    fn executor(&self) -> &Executor;     // async runtime
    fn get_connection_string(&self) -> Option<&str>;
    fn force_stop_scenario(&self);
}

// Async code inside sync hooks
ctx.runner_context().executor().execute_in_place(async {
    // Holochain client calls (async) go here
})?;
Holochain Conductor Helpers
// Setup helpers (call in agent_setup)
start_conductor_and_configure_urls(ctx)?; // start conductor + bind ports
install_app(ctx, happ_path!("my_happ"), &"my_happ".to_string())?;
use_installed_app(ctx, app_id)?; // connect to already-installed app

// Teardown helpers (call in agent_teardown)
uninstall_app(ctx, None).ok(); // None = use current app_id

// Peer coordination
try_wait_for_min_agents(ctx, 3, Duration::from_secs(30))?;
try_wait_until_full_arc_peer_discovered(ctx)?;
get_peer_list_randomized(ctx)?; // -> Vec<AgentPubKey>

// Zome calls
let result: MyType = call_zome(ctx, "zome_name", "fn_name", payload)?;
Custom Metrics (ReportMetric)
use wind_tunnel_runner::prelude::ReportMetric;

let metric = ReportMetric::new("sync_lag") // auto-prefixed: wt.custom.sync_lag
    .with_tag("agent", agent_pubkey.to_string())
    .with_field("value", lag_seconds); // f64
ctx.runner_context().reporter().clone().add_custom(metric);
Custom Per-Agent State
#[derive(Debug, Default)]
struct ScenarioValues {
    sent_count: u32,
    seen_hashes: HashSet<ActionHash>,
}

impl UserValuesConstraint for ScenarioValues {}

// Use HolochainAgentContext<ScenarioValues> everywhere
// Access via: ctx.get().scenario_values and ctx.get_mut().scenario_values
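The get/get_mut state-threading pattern can be sketched with std only (the `Toy*` types below are invented stand-ins for `AgentContext` and `HolochainAgentContext<SV>`, not real Wind-Tunnel types):

```rust
use std::collections::HashSet;

/// Stand-in for HolochainAgentContext<SV>: holds user-defined per-agent state
/// in a public `scenario_values` field, as the real type does.
#[derive(Debug, Default)]
struct ToyHolochainContext<SV: Default> {
    scenario_values: SV,
}

/// Stand-in for AgentContext: exposes the inner context via get()/get_mut().
#[derive(Debug, Default)]
struct ToyAgentContext<SV: Default> {
    inner: ToyHolochainContext<SV>,
}

impl<SV: Default> ToyAgentContext<SV> {
    fn get(&self) -> &ToyHolochainContext<SV> {
        &self.inner
    }
    fn get_mut(&mut self) -> &mut ToyHolochainContext<SV> {
        &mut self.inner
    }
}

#[derive(Debug, Default)]
struct ScenarioValues {
    sent_count: u32,
    seen_hashes: HashSet<String>, // String stands in for ActionHash
}

fn main() {
    let mut ctx = ToyAgentContext::<ScenarioValues>::default();
    // Write per-agent state in one behaviour iteration...
    ctx.get_mut().scenario_values.sent_count += 1;
    ctx.get_mut().scenario_values.seen_hashes.insert("hash-1".to_string());
    // ...and read it back in the next.
    assert_eq!(ctx.get().scenario_values.sent_count, 1);
    assert!(ctx.get().scenario_values.seen_hashes.contains("hash-1"));
}
```

Because each agent owns its context, no locking is needed; state written in one behaviour iteration is visible in the next.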
Scenario Patterns
Pattern 1: Simple Zome Call Benchmark
use holochain_wind_tunnel_runner::prelude::*;
use holochain_wind_tunnel_runner::happ_path;
fn agent_setup(
ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>,
) -> HookResult {
start_conductor_and_configure_urls(ctx)?;
install_app(ctx, happ_path!("my_happ"), &"my_happ".to_string())?;
Ok(())
}
fn agent_behaviour(
ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>,
) -> HookResult {
// Runs repeatedly. Zome call latency auto-captured.
let _: MyReturn = call_zome(ctx, "my_zome", "my_fn", ())?;
Ok(())
}
fn main() -> WindTunnelResult<()> {
let builder =
ScenarioDefinitionBuilder::<HolochainRunnerContext, HolochainAgentContext>::new_with_init(
env!("CARGO_PKG_NAME"),
)
.with_default_duration_s(60)
.use_build_info(conductor_build_info)
.use_agent_setup(agent_setup)
.use_agent_behaviour(agent_behaviour)
.use_agent_teardown(|ctx| { uninstall_app(ctx, None).ok(); Ok(()) });
run(builder)?;
Ok(())
}
Pattern 2: Write/Read CRUD Performance
fn agent_behaviour(
    ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>,
) -> HookResult {
    let action_hash: ActionHash = call_zome(
        ctx, "my_zome", "create_entry", MyEntry { value: "test".to_string() },
    )?;
    let record: Option<Record> = call_zome(
        ctx, "my_zome", "get_entry", action_hash,
    )?;
    assert!(record.is_some(), "Entry must be readable immediately after create");
    Ok(())
}
Pattern 3: DHT Sync Lag (Multi-Role)
#[derive(Debug, Default)]
struct ScenarioValues {
sent_actions: u32,
seen_actions: HashSet<ActionHash>,
}
impl UserValuesConstraint for ScenarioValues {}
// Writer: creates timestamped entries, records sent_count metric
fn agent_behaviour_write(
ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext<ScenarioValues>>,
) -> HookResult {
call_zome(ctx, "timed", "create_timed_entry", Timestamp::now())?;
ctx.get_mut().scenario_values.sent_actions += 1;
let metric = ReportMetric::new("sent_count")
.with_field("value", ctx.get().scenario_values.sent_actions);
ctx.runner_context().reporter().clone().add_custom(metric);
Ok(())
}
// Reader: queries locally, computes lag since creation, records sync_lag metric
fn agent_behaviour_record_lag(
ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext<ScenarioValues>>,
) -> HookResult {
let found: Vec<(ActionHash, Timestamp)> =
call_zome(ctx, "timed", "get_timed_entries_local", ())?;
let reporter = ctx.runner_context().reporter().clone();
for (hash, created_at) in found {
if !ctx.get().scenario_values.seen_actions.contains(&hash) {
let lag_s = (Timestamp::now().as_micros() - created_at.as_micros()) as f64 / 1e6;
reporter.add_custom(ReportMetric::new("sync_lag").with_field("value", lag_s));
ctx.get_mut().scenario_values.seen_actions.insert(hash);
}
}
Ok(())
}
fn main() -> WindTunnelResult<()> {
let builder = ScenarioDefinitionBuilder::<
HolochainRunnerContext, HolochainAgentContext<ScenarioValues>,
>::new_with_init(env!("CARGO_PKG_NAME"))
.with_default_duration_s(60)
.use_build_info(conductor_build_info)
.use_agent_setup(agent_setup)
.use_named_agent_behaviour("write", agent_behaviour_write)
.use_named_agent_behaviour("record_lag", agent_behaviour_record_lag)
.use_agent_teardown(|ctx| { uninstall_app(ctx, None).ok(); Ok(()) });
run(builder)?;
Ok(())
}
Run with: cargo run -- --behaviour write:2 --behaviour record_lag:2 --duration 120
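The `sync_lag` value in the reader behaviour is plain microsecond-timestamp arithmetic; as a standalone check (std-only sketch, with hard-coded `as_micros`-style values):

```rust
/// Convert a (now - created_at) microsecond delta into fractional seconds,
/// as computed in the record_lag behaviour above.
fn lag_seconds(now_micros: i64, created_micros: i64) -> f64 {
    (now_micros - created_micros) as f64 / 1e6
}

fn main() {
    // Entry authored at t = 0.5s, first seen locally at t = 2.0s
    // -> 1.5s of DHT sync lag.
    let lag = lag_seconds(2_000_000, 500_000);
    assert!((lag - 1.5).abs() < f64::EPSILON);
    println!("sync lag: {lag} s");
}
```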
Running Wind-Tunnel Tests
# Minimal run (in-memory reporter, single agent)
RUST_LOG=info cargo run -p my_scenario -- --duration 60
# Multiple agents, named roles
cargo run -p my_scenario -- --agents 4 --duration 120
cargo run -p my_scenario -- --behaviour write:2 --behaviour read:2 --duration 120
# Against external conductor (pre-running)
cargo run -p my_scenario -- --connection-string ws://localhost:8888 --duration 60
# With InfluxDB file reporter (for analysis)
cargo run -p my_scenario -- --reporter=influx-file --duration 300
# Soak test (no time limit)
cargo run -p my_scenario -- --soak --reporter=influx-file
Environment Variables
| Variable | Purpose |
|---|---|
| `WT_HOLOCHAIN_PATH` | Path to custom Holochain binary |
| `HOLOCHAIN_INFLUXIVE_FILE` | Enable conductor-level metrics to file |
| `WT_METRICS_DIR` | Directory for metrics output (set by Nix) |
| `RUST_LOG` | Log level (e.g. `RUST_LOG=info`) |
Metrics Architecture
Wind-Tunnel collects three simultaneous metric layers, enabling correlation:
| Layer | Source | What |
|---|---|---|
| OS | Telegraf (systemd) | CPU, memory, disk I/O, network, swap |
| Conductor | Holochain influxive | Internal conductor performance |
| Scenario | ReportMetric | Custom application metrics |
Reporter backends:
| Flag | Use |
|---|---|
| `--reporter=in-memory` | Console output (default, local dev) |
| `--reporter=influx-file` | Write InfluxDB line protocol for upload |
| `--reporter=noop` | Disable all metrics |
Cargo.toml Metadata for hApp Packaging
[package.metadata.required-dna]
name = "my_zome"
zomes = ["my_zome"]
[package.metadata.required-happ]
name = "my_happ"
dnas = ["my_zome"]
Use happ_path!("my_happ") macro in code to resolve the built hApp path.
A shared build.rs (build = "../scenario_build.rs") packages zomes into DNAs/hApps automatically.
Pre-Built Scenarios (Reference)
25 scenarios in the wind-tunnel repo cover common Holochain performance patterns:
| Scenario | Tests |
|---|---|
| `zome_call_single_value` | Baseline zome call latency |
| `write_read` | Create + immediate get throughput |
| `dht_sync_lag` | DHT propagation delay between agents |
| `app_install` | App installation latency (minimal vs large) |
| `remote_signals` | Remote signal round-trip latency |
| `remote_call_rate` | Remote zome call throughput |
| `two_party_countersigning` | Full countersigning session lifecycle |
| `single_write_many_read` | Write amplification pattern |
| `validation_receipts` | Validation receipt delivery timing |
| `local_signals` | Local signal handling performance |
| `full_arc_create_validated_zero_arc_read` | Mixed-arc topology |
| `zero_arc_create_data` | Zero-arc creation throughput |
Published results: https://holochain.github.io/wind-tunnel/
When to Write a Wind-Tunnel Scenario
Write a Wind-Tunnel scenario (not a Sweettest test) when you need:
- Continuous latency monitoring — track how long zome calls take under load over time
- Throughput measurement — ops/sec for entry creation, linking, querying
- DHT propagation timing — how long until another agent sees your entries
- Regression detection — catch performance regressions between Holochain versions
- Soak testing — sustained load over hours to detect memory leaks or degradation
- Multi-node topology testing — full-arc vs zero-arc behavior at scale
Do NOT use Wind-Tunnel for:
- Checking correctness (use Sweettest)
- Testing UI flows (use Playwright)
- Testing specific HDK behaviors (use Sweettest inline zomes)
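As a rough sketch of the kind of numbers a throughput or latency scenario reports (a hypothetical helper for illustration, not the Wind-Tunnel API):

```typescript
// Summarize latency samples (ms) the way a throughput/latency scenario might:
// mean, p95, and ops/sec over the sampled window.
function summarize(samples: number[], windowSeconds: number) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mean = sorted.reduce((sum, x) => sum + x, 0) / sorted.length;
  const p95 = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
  return { mean, p95, opsPerSec: samples.length / windowSeconds };
}
```

In a real scenario these values would be emitted via `ReportMetric` so they land in the Scenario metric layer alongside OS and conductor metrics.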
Holochain TypeScript Client
Package Versions
@holochain/client ^0.20.x (compatible with hdk 0.6.x / hdi 0.7.x)
Connection Setup
import { AppWebsocket, AppAgentWebsocket } from "@holochain/client";
// Basic connection (for simple apps)
const appWs = await AppWebsocket.connect(
new URL(`ws://localhost:${process.env.HC_PORT}`),
30000 // timeout ms
);
// Agent-aware connection (recommended — wraps calls with cell context)
const client = await AppAgentWebsocket.connect(
new URL(`ws://localhost:${process.env.HC_PORT}`),
"my-app-id" // Installed app ID
);
callZome Pattern
// Direct AppWebsocket (requires explicit cell_id)
const record = await appWs.callZome({
cell_id: [dnaHash, agentPubKey],
zome_name: "my_zome",
fn_name: "create_my_entry",
payload: {
title: "New Entry",
description: "Created from TypeScript",
status: "Active",
},
cap_secret: null,
provenance: agentPubKey,
});
// AppAgentWebsocket (cleaner — cell resolved by role name)
const record = await client.callZome({
role_name: "my_dna",
zome_name: "my_zome",
fn_name: "create_my_entry",
payload: { title: "New Entry", status: "Active" },
});
Signal Subscription
// Subscribe to all signals from the app
appWs.on("signal", (signal) => {
if (signal.type !== "App") return; // Filter system signals
const { zome_name, payload } = signal.data.payload;
// Discriminate by zome
if (zome_name === "my_zome") {
handleMyZomeSignal(payload);
}
});
// Signal payload matches Rust enum (serde tag = "type")
type MySignal =
| { type: "EntryCreated"; action: SignedActionHashed }
| { type: "EntryUpdated"; action: SignedActionHashed; original_action_hash: HoloHash }
| { type: "EntryDeleted"; action: SignedActionHashed; original_action_hash: HoloHash };
function handleMyZomeSignal(payload: MySignal) {
switch (payload.type) {
case "EntryCreated":
// Refresh entry list
break;
case "EntryUpdated":
// Update specific entry in store
break;
}
}
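One way to keep a switch like the one above honest as the signal enum grows is a `never`-typed exhaustiveness guard (a common TypeScript idiom, shown here with a simplified `MySignal`):

```typescript
// Simplified signal union for illustration (the real one carries action hashes).
type MySignal =
  | { type: "EntryCreated" }
  | { type: "EntryUpdated" }
  | { type: "EntryDeleted" };

function describeSignal(s: MySignal): string {
  switch (s.type) {
    case "EntryCreated": return "created";
    case "EntryUpdated": return "updated";
    case "EntryDeleted": return "deleted";
    default: {
      // If a new variant is added to MySignal and not handled above,
      // this assignment becomes a compile-time error.
      const _exhaustive: never = s;
      return _exhaustive;
    }
  }
}
```

Adding a variant to the Rust signal enum then fails TypeScript compilation until the handler is updated, instead of silently ignoring the new signal.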
Effect Library Pattern
The Effect library provides typed error handling and timeouts for zome calls:
import { Effect as E, pipe } from "effect";
// Typed error
class ZomeCallError {
readonly _tag = "ZomeCallError";
constructor(readonly message: string, readonly cause?: unknown) {}
}
// Wrapped zome call with timeout and error handling
function callZomeEffect<T>(params: CallZomeRequest) {
return pipe(
E.tryPromise({
try: () => client.callZome(params) as Promise<T>,
catch: (cause) => new ZomeCallError(`Zome call failed: ${params.fn_name}`, cause),
}),
E.timeout("10 seconds"),
E.mapError((e) =>
e._tag === "TimeoutException"
? new ZomeCallError(`Zome call timed out: ${params.fn_name}`)
: e
)
);
}
// Usage
const result = await E.runPromise(
callZomeEffect<MyEntry>({
role_name: "my_dna",
zome_name: "my_zome",
fn_name: "get_my_entry",
payload: actionHash,
})
);
Svelte 5 Reactive Store Integration
// stores/myEntry.svelte.ts
import { AppAgentWebsocket } from "@holochain/client";
export class MyEntryStore {
entries = $state<MyEntry[]>([]);
loading = $state(false);
error = $state<string | null>(null);
private client: AppAgentWebsocket;
constructor(client: AppAgentWebsocket) {
this.client = client;
// Subscribe to signals for real-time updates
client.on("signal", (signal) => {
if (signal.type !== "App") return;
const { zome_name, payload } = signal.data.payload;
if (zome_name === "my_zome") this.handleSignal(payload);
});
}
async loadAll() {
this.loading = true;
try {
const records = await this.client.callZome({
role_name: "my_dna",
zome_name: "my_zome",
fn_name: "get_all_my_entries",
payload: null,
});
this.entries = records.map(decodeEntry);
} catch (e) {
this.error = String(e);
} finally {
this.loading = false;
}
}
private handleSignal(signal: MySignal) {
switch (signal.type) {
case "EntryCreated":
this.loadAll();
break;
case "EntryDeleted":
this.entries = this.entries.filter(
(e) => e.originalHash !== signal.original_action_hash
);
break;
}
}
}
Type Utilities
import { decodeHashFromBase64, encodeHashToBase64, HoloHash, Record } from "@holochain/client";
import { decode } from "@msgpack/msgpack";
// Hash serialization (for URLs, localStorage)
const hashString = encodeHashToBase64(actionHash);
const hashBack = decodeHashFromBase64(hashString);
// Decode entry from record
function decodeEntry<T>(record: Record): T {
if (!("Present" in record.entry)) {
throw new Error("Expected Present entry");
}
return decode(record.entry.Present.entry) as T;
}
// Extract action hash from record
function getActionHash(record: Record): HoloHash {
return record.signed_action.hashed.hash;
}
Connection Context (SvelteKit)
// src/lib/holochainClient.ts
import { AppAgentWebsocket } from "@holochain/client";
import { getContext, setContext } from "svelte";
const CLIENT_KEY = Symbol("holochain-client");
export function setHolochainClient(client: AppAgentWebsocket) {
setContext(CLIENT_KEY, client);
}
export function getHolochainClient(): AppAgentWebsocket {
const client = getContext<AppAgentWebsocket>(CLIENT_KEY);
if (!client) throw new Error("Holochain client not initialized");
return client;
}
// In +layout.svelte:
// const client = await AppAgentWebsocket.connect(...);
// setHolochainClient(client);
Environment Variables
HC_PORT=8888 # Holochain conductor WebSocket port
HC_ADMIN_PORT=9000 # Admin port (for conductor management)
VITE_HC_PORT=8888 # Vite prefix for browser access
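A small helper can resolve the conductor URL from whichever of these variables is present (hypothetical; adjust the default port to your setup):

```typescript
// Resolve the conductor WebSocket URL from environment variables.
// VITE_HC_PORT is the browser-exposed variant; HC_PORT works in Node contexts.
function conductorUrl(env: Record<string, string | undefined>): string {
  const port = env.VITE_HC_PORT ?? env.HC_PORT ?? "8888";
  return `ws://localhost:${port}`;
}
```

In a Vite app you would pass `import.meta.env`; in Node, `process.env`.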
Holochain Deployment — Kangaroo-Electron
Reference for packaging and distributing Holochain hApps as standalone desktop applications using Kangaroo-Electron.
What is Kangaroo-Electron
Kangaroo-Electron (holochain/kangaroo-electron) is Holochain’s official framework for bundling a complete hApp into a standalone cross-platform desktop application. It packages together:
- The Holochain conductor
- lair-keystore (key management)
- Your hApp (`.webhapp` bundle with DNA + UI)
- An Electron shell
Users receive a single installer (.exe / .dmg / .AppImage) with no Holochain tooling required.
Official repo: https://github.com/holochain/kangaroo-electron
Multi-branch strategy: One branch per supported Holochain version. Always work from the branch matching your hApp’s Holochain version.
Platforms: Windows, macOS, Linux
Branch Selection
| Branch | Holochain version | Status | Use when |
|---|---|---|---|
| `main` | 0.7.0-dev.x | Development | Cutting edge / experimental only |
| `main-0.6` | 0.6.1 | Recommended | New production projects |
| `main-0.5` | 0.5.x | Legacy | Existing 0.5.x apps only |
| `main-0.3` | 0.3.x | Archived | Old apps only |
Default choice: main-0.6 unless you have a specific reason to use another.
Prerequisites
All platforms
- Rust toolchain (stable)
- Node.js + npm
Linux
- `webkit2gtk` (for Electron WebView)
- `libssl-dev`
# Ubuntu/Debian
sudo apt install libwebkit2gtk-4.1-dev libssl-dev
macOS
- Xcode Command Line Tools
xcode-select --install
Windows
- Visual Studio 2019+ with “Desktop development with C++” workload
Repository Setup
# Clone and checkout the right branch
git clone https://github.com/holochain/kangaroo-electron
cd kangaroo-electron
git checkout main-0.6
# Install dependencies (auto-fetches conductor binaries with SHA256 validation)
npm install
No manual compilation needed. Binaries (conductor, lair-keystore) are automatically fetched and verified via SHA256 checksums during npm install.
Artifact Structure
Production: .webhapp bundle in pouch/
A .webhapp is a single archive containing:
- Your `.happ` (DNA + zomes)
- Your UI assets (HTML/JS/CSS)
Key distinction: .webhapp ≠ .happ. The .happ is backend only. The .webhapp bundles both backend and frontend.
Your hApp build pipeline produces the .webhapp outside Kangaroo. Place the built file here:
pouch/
your-app.webhapp ← place your built bundle here
UI icon requirement: Include icon.png (≥ 256×256 px) at the UI root. Missing icon will cause build warnings or failures on some platforms.
Development mode
For dev mode you don’t need a .webhapp. Instead, kangaroo.config.ts accepts:
- `happPath` — path to your `.happ` file
- `uiPort` — port where your UI dev server is running
Configuration
package.json
{
"name": "your-app-name",
"version": "0.1.0"
}
electron/config.ts
export const HOLOCHAIN_VERSION = "holochain-0.6.1"; // match your branch
export const APP_ID = "com.yourorg.yourapp"; // reverse-domain identifier
export const PRODUCT_NAME = "Your App Name"; // alphanumeric + hyphens on Windows
export const HAPP_PATH = "pouch/your-app.webhapp";
Windows MSI warning:
`PRODUCT_NAME` must use only alphanumeric characters and hyphens. Spaces and special characters cause MSI packaging failures.
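The MSI constraint can be checked up front (a hypothetical validator, not part of Kangaroo):

```typescript
// Windows MSI constraint on PRODUCT_NAME: alphanumeric characters and hyphens only.
function isValidProductName(name: string): boolean {
  return /^[A-Za-z0-9-]+$/.test(name);
}
```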
Critical Versioning Semantics
Kangaroo uses a versioning convention that controls user data isolation:
| Version change | Data folder | Effect |
|---|---|---|
| Patch (0.1.0 → 0.1.1) | Shared | Safe upgrade — user keeps data |
| Minor (0.1.0 → 0.2.0) | Isolated | Breaking — user starts fresh |
| Major (0.1.0 → 1.0.0) | Isolated | Breaking — user starts fresh |
| Pre-release tag (any) | Isolated | Always isolated |
Rule of thumb: Only use patch bumps for backward-compatible updates. Reserve minor/major bumps for intentional breaking changes where data migration is not required (or is handled in-app).
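The data-folder rule can be sketched as a pure function (a hypothetical helper illustrating the table above, not Kangaroo's actual implementation):

```typescript
// Kangaroo's data-folder rule: patch bumps share the folder;
// minor/major bumps, or any pre-release tag, isolate it.
function sharesDataFolder(from: string, to: string): boolean {
  const parse = (v: string) => {
    const [core, pre] = v.split("-");
    const [major, minor] = core.split(".").map(Number);
    return { major, minor, pre: pre !== undefined };
  };
  const a = parse(from);
  const b = parse(to);
  if (a.pre || b.pre) return false; // pre-release tags always isolate
  return a.major === b.major && a.minor === b.minor; // only the patch may differ
}
```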
Network Transport (0.6+)
Holochain 0.6 replaced tx5 with iroh as the default network transport.
If your app targets 0.6+, your conductor config must include a relayUrl:
// In your Kangaroo conductor config
relayUrl: "wss://relay.holochain.org" // official relay, or run your own
Apps migrated from 0.5.x without this update will fail to establish peer connections.
CLI Commands
npm install # install dependencies + auto-fetch conductor binaries
npm run start # launch in development mode (hot reload, DevTools available)
npm run kangaroo # production build for all configured platforms
CI/CD via GitHub Actions
Kangaroo-Electron includes GitHub Actions workflows. Control builds via branch naming:
| Branch | Build type | Code signing |
|---|---|---|
| `release` | Cross-platform executables | Unsigned |
| `release-codesigned` | Cross-platform executables | Signed (requires secrets) |
Required secrets for code-signed builds:
- macOS: `APPLE_CERTIFICATE`, `APPLE_CERTIFICATE_PASSWORD`, `APPLE_ID`, etc.
- Windows: `WINDOWS_CERTIFICATE`, `WINDOWS_CERTIFICATE_PASSWORD`
Auto-Update
Kangaroo uses @matthme/electron-updater — a semver-aware fork of electron-updater.
- Checks GitHub releases on app startup
- Respects versioning semantics: patch updates install silently; minor/major present a breaking-change notice
- Requires your GitHub repo to have releases with attached installers (produced by CI)
Other Deployment Options
| Option | Description | When to use |
|---|---|---|
| p2p Shipyard | Community-maintained Tauri + Nix approach | Need Android support |
| Moss | Holochain groupware framework | App integrates into a shared workspace |
What NOT to Use
| Tool | Reason |
|---|---|
| Holochain Launcher | Officially deprecated; development paused. Do not build new projects for it. |
| Kangaroo-Tauri | Frozen at Holochain 0.3.2 (last update Aug 2024). Not maintained. |
Troubleshooting
| Error / Symptom | Cause | Fix |
|---|---|---|
| App won’t start after version bump | Minor/major bump → new isolated data folder | Expected behavior. User data not migrated automatically — implement migration if needed. |
| Network connectivity fails | iroh relay not configured (0.6+) | Add relayUrl to conductor config in electron/config.ts |
| Binary checksum mismatch | Corrupted or incomplete download | rm -rf node_modules/.cache && npm install |
| Windows MSI build fails | Special characters in PRODUCT_NAME | Use only alphanumeric characters and hyphens in PRODUCT_NAME |
Reference: developer.holochain.org/get-started/4-packaging-and-distribution/
Workflow: Design DHT Data Model
Use this workflow when designing the data model for a new domain or feature in a Holochain hApp. Produces: entry type definitions, link type definitions, discovery strategy, and validation rules ready for implementation.
Step 1: Identify Domains and Zome Pairs
Map the business domain to Holochain’s zome architecture:
For each distinct business domain:
→ 1 integrity crate: {domain}_integrity
→ 1 coordinator crate: {domain}
Questions to answer:
- What are the distinct nouns in this feature? (e.g., Request, Offer, Person, Resource)
- Which nouns belong together conceptually? (e.g., all marketplace data in one zome pair)
- Which nouns need to be queried independently at scale? (separate zome pairs)
Output: List of zome pairs with their domain responsibilities.
Step 2: Define Entry Types Per Domain
For each entry type, define:
Entry: {EntryName}
Fields:
- field_name: type (required)
- field_name: type (required)
- status: StatusEnum (if soft-delete needed)
- optional_field: type #[serde(default)] (if backward-compatible addition)
Visibility: Public | Private
Public: stored on DHT, visible to all agents
Private: stored locally only, not shared
State enum (if applicable):
enum {Entry}Status { Active, Archived, Deleted }
Decision criteria:
- Is this data meaningful to other agents? → Public
- Is this personal/sensitive? → Private
- Does this entry transition through states? → Add `status` field with enum
- Can this entry be “updated in place” or should old versions be preserved? → Update chain (links) vs overwrite
Step 3: Design Link Types
For every relationship between entries, define a directional link:
Link: {Base}To{Target}
Base: {what you start from}
Target: {what you navigate to}
Tag: bytes | () | typed data for filtering
Required links per entry:
┌─ PathTo{Entry} Discovery from global path anchor
├─ AgentTo{Entry} Discovery from agent's pubkey
└─ {Entry}Updates Update chain tracking (for get-latest)
Optional:
├─ {Entry}To{Related} Bidirectional relationship
└─ {Related}To{Entry} Reverse direction (add both)
Bidirectional rule: If you need to navigate A → B and B → A, create two link types. Never navigate backwards through a forward link.
Update chain rule: Every entry that supports update needs a {Entry}Updates link type that records the chain from original_action_hash → updated_action_hash.
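The get-latest walk this rule enables is simple: follow `{Entry}Updates` links from the original and pick the newest; with no links, the original is the latest. A sketch of the pure logic (TypeScript for illustration; the real version runs zome-side in Rust):

```typescript
// Hypothetical shape of an update-chain link for this sketch.
interface LinkInfo {
  target: string;    // updated_action_hash
  timestamp: number; // link creation time
}

// Latest version = link with the greatest timestamp, else the original.
function latestActionHash(original: string, updates: LinkInfo[]): string {
  if (updates.length === 0) return original;
  return updates.reduce((a, b) => (b.timestamp > a.timestamp ? b : a)).target;
}
```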
Step 4: Choose Discovery Strategy
How will agents find entries?
| Pattern | Link | Use When |
|---|---|---|
| Global path anchor | Path::from("entries.active") → Entry | All agents browse all entries |
| Status-scoped path | Path::from("entries.active") vs "entries.archived" | Browse by status |
| Agent-centric | AgentPubKey → Entry | Each agent manages their own entries |
| Both | Path + Agent links | Global browse AND per-agent listing |
| Hierarchical path | Path::from("category.{id}.entries") | Category/tag based grouping |
Decision: Almost always use Both (path + agent) unless the domain is strictly personal.
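Path strings like those in the table are plain dot-separated anchors; a small helper keeps them consistent between UI and zome code (hypothetical, for illustration):

```typescript
// Compose dot-separated path anchors like "entries.active" or
// "category.c42.entries", mirroring the Path::from(...) strings used zome-side.
function pathAnchor(...segments: string[]): string {
  return segments
    .map((s) => s.trim().toLowerCase())
    .filter((s) => s.length > 0)
    .join(".");
}
```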
Step 5: Write Validation Rules
For each entry type, define what makes it INVALID:
Validation rules for {EntryName}:
Field constraints:
- title: non-empty, max 200 chars
- description: max 2000 chars
- status: must be valid enum variant
Business rules (that can be checked deterministically):
- Cannot create entry with status = Deleted
- Fields X and Y cannot both be empty
- Tags: max 10 items, each max 50 chars
FORBIDDEN in validation (causes non-determinism):
- No DHT reads (get, get_links)
- No agent_info()
- No sys_time() comparisons to current time
- No randomness
Key rule: Validation runs in integrity. It must be pure and deterministic — same input always produces same result, regardless of when or where it runs.
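The field constraints from this step can be expressed as a pure function. A sketch (TypeScript for illustration; the real checks live in the Rust integrity zome's `validate()`):

```typescript
type ValidationResult = { valid: true } | { valid: false; reason: string };

// Pure, deterministic field checks mirroring the rules above:
// no DHT reads, no agent info, no time, no randomness.
function validateMyEntry(e: { title: string; description: string; tags: string[] }): ValidationResult {
  if (e.title.trim().length === 0) return { valid: false, reason: "title cannot be empty" };
  if (e.title.length > 200) return { valid: false, reason: "title exceeds 200 chars" };
  if (e.description.length > 2000) return { valid: false, reason: "description exceeds 2000 chars" };
  if (e.tags.length > 10) return { valid: false, reason: "max 10 tags" };
  if (e.tags.some((t) => t.length > 50)) return { valid: false, reason: "tag exceeds 50 chars" };
  return { valid: true };
}
```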
Step 6: Review — Apply the Splitting Test
Before finalizing, run each design decision through the splitting test:
Entry field review:
- Is every field necessary? (Remove if unused by UI or other zomes)
- Are there fields that could be derived? (Remove if computable)
- Are there fields that change independently? (May belong in a separate entry)
Link review:
- Does every link have a clear query use case?
- Are bidirectional links actually needed in both directions?
- Are `{Entry}Updates` links present for every updatable entry?
Validation review:
- Is every validation rule actually deterministic?
- Are validation error messages user-readable?
- Are there business rules that need to be enforced elsewhere (coordinator) because they require DHT reads?
Output Artifacts
After completing this workflow, you have:
- Zome pair list — domain to crate name mapping
- Entry structs (Rust) — ready to paste into integrity crate
- Link type enum — ready to paste into integrity crate
- Discovery strategy — path vs agent vs both, with path strings
- Validation checklist — rules ready for `validate()` callback
- Summary table:
| Entry | Links out | Update chain? | Discovery | Status enum? |
|-------|-----------|---------------|-----------|-------------|
| MyEntry | AgentToMyEntry, PathToMyEntry | Yes (MyEntryUpdates) | Path + Agent | Yes |
Proceed to Workflows/ImplementZome.md to implement.
Workflow: Scaffold a Holochain Project
Use this workflow to set up a new Holochain project from scratch, or to add a new domain to an existing hApp.
Reference: ../Scaffold.md for full details on any step.
Path A: New hApp From Scratch
Step 1 — Install Nix and Holonix
# Install Nix (Determinate Systems installer — recommended)
curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
# Enable flakes (add to ~/.config/nix/nix.conf)
echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
Restart your shell after installation. Verify: nix --version
Checkpoint: nix --version returns a version number.
Step 2 — Create the flake.nix
In your new project directory:
# flake.nix
{
inputs = {
holonix.url = "github:holochain/holonix?ref=main-0.6";
nixpkgs.follows = "holonix/nixpkgs";
flake-parts.follows = "holonix/flake-parts";
};
outputs = inputs: inputs.flake-parts.lib.mkFlake { inherit inputs; } {
systems = builtins.attrNames inputs.holonix.devShells;
perSystem = { inputs', ... }: {
devShells.default = inputs'.holonix.devShells.default;
};
};
}
Enter the dev shell:
nix develop
Checkpoint: hc --version returns a version number inside nix develop.
Step 3 — Scaffold the hApp
# Inside nix develop:
hc scaffold happ
The CLI will prompt for:
- App name — e.g., `my-community-app` (kebab-case)
- DNA name — e.g., `community` (the first domain)
- Coordinator zome name — e.g., `posts` (first feature)
This generates the complete project structure.
Checkpoint: ls shows happ.yaml, Cargo.toml, flake.nix, and dnas/ directory.
Step 4 — Verify Cargo Workspace
Check Cargo.toml at the root uses exact version pins:
[workspace.dependencies]
hdi = "=0.7.1"
hdk = "=0.6.1"
serde = { version = "1", features = ["derive"] }
If the scaffold generated range versions (^), replace them with exact pins (=).
Why: Holochain is sensitive to minor version differences. Range deps can silently break compilation.
Step 5 — Add Entry Types
For each data type in your domain:
# Inside nix develop, from project root
hc scaffold entry-type MyEntry
# Then add required link types
hc scaffold link-type AgentToMyEntry
hc scaffold link-type PathToMyEntry
hc scaffold link-type MyEntryUpdates
Step 6 — Verify Compilation
hc sandbox generate workdir/
Expected: Build succeeds (may take 5-10 minutes on first run due to WASM compilation).
Common issues:
- `wasm32` target not found — you’re outside `nix develop`; run `nix develop` first
- Slow first build — normal; wait for `wasm-opt` to complete
Step 7 — Set Up Tests
Create the test directory structure:
mkdir -p tests/{foundation,integration}
# Initialize package.json
cd tests
bun init # or npm init
# Install test dependencies
bun add -d @holochain/tryorama vitest
Create tests/vitest.config.ts:
import { defineConfig } from "vitest/config";
export default defineConfig({
test: { testTimeout: 60000, hookTimeout: 60000 },
});
Add test scripts to tests/package.json:
{
"scripts": {
"test": "vitest run",
"test:foundation": "vitest run foundation",
"test:integration": "vitest run integration"
}
}
Checkpoint: bun run test runs without errors (may have no tests yet — that’s fine).
Step 8 — Initial Commit
git init
git add .
git commit -m "feat: scaffold initial happ structure"
Proceed to Workflows/DesignDataModel.md to design your first domain’s data model, then Workflows/ImplementZome.md to implement.
Path B: Add Domain to Existing hApp
Use this path when your hApp already exists and you need to add a new feature domain.
Step 1 — Enter Dev Shell
nix develop
Step 2 — Scaffold New Zome Pair
hc scaffold zome
# Enter: domain name (e.g., "profiles")
# Select: existing DNA to add it to
Step 3 — Scaffold Entry Types
hc scaffold entry-type Profile
hc scaffold link-type AgentToProfile
hc scaffold link-type PathToProfile
hc scaffold link-type ProfileUpdates
Step 4 — Register in Cargo Workspace
Add new crates to root Cargo.toml members:
[workspace]
members = [
# ... existing members ...
"dnas/my_dna/zomes/integrity/profiles_integrity",
"dnas/my_dna/zomes/coordinator/profiles",
]
Step 5 — Verify Compilation
hc sandbox generate workdir/
Step 6 — Commit
git add .
git commit -m "feat(profiles): scaffold profiles zome pair"
Proceed to Workflows/ImplementZome.md to implement the domain.
Quick Reference
# Enter dev environment
nix develop
# New project
hc scaffold happ
# New domain
hc scaffold zome
hc scaffold entry-type MyEntry
hc scaffold link-type AgentToMyEntry
# Verify build
hc sandbox generate workdir/
# Run tests
bun run test:foundation
bun run test:integration
Reference: ../Scaffold.md for full setup details, troubleshooting, and workspace structure.
Workflow: Implement a Zome Pair
Use this workflow when implementing a new zome pair (integrity + coordinator) for a Holochain domain. Prerequisites: data model designed (see Workflows/DesignDataModel.md).
Step 1: Scaffold — Generate Boilerplate
Start from scaffold output to avoid blank-page overhead:
# Navigate to your DNA directory
cd dnas/my_dna
# Scaffold entry type (generates integrity + coordinator stubs)
hc scaffold entry-type MyEntry
# Scaffold link types
hc scaffold link-type AgentToMyEntry
hc scaffold link-type PathToMyEntry
hc scaffold link-type MyEntryUpdates
# Verify compilation after scaffolding
cd ../../
hc sandbox generate workdir/
What scaffolding generates:
- Integrity crate: entry type variant + link type variants + stub `validate()`
- Coordinator crate: stub `create_*`, `get_*`, `update_*`, `delete_*` functions
- Updated `happ.yaml` and `dna.yaml` (verify these are correct)
After scaffolding: READ the generated files before editing. Understand what’s there.
Step 2: Integrity Crate — Define Types and Validation
File: zomes/integrity/{domain}_integrity/src/lib.rs
use hdi::prelude::*;

// 1. Entry struct (from DesignDataModel output)
#[hdk_entry_helper]
#[derive(Clone, PartialEq)]
pub struct MyEntry {
    pub title: String,
    pub description: String,
    pub status: MyEntryStatus,
}

// 2. Status enum (if soft-delete pattern needed)
#[derive(Serialize, Deserialize, Clone, PartialEq, Debug)]
pub enum MyEntryStatus {
    Active,
    Archived,
    Deleted,
}

// 3. Entry types enum (register all entry types)
#[hdk_entry_types]
#[unit_enum(UnitEntryTypes)]
pub enum EntryTypes {
    MyEntry(MyEntry),
}

// 4. Link types enum (register all link types)
#[hdk_link_types]
pub enum LinkTypes {
    AgentToMyEntry,
    PathToMyEntry,
    MyEntryUpdates,
}

// 5. Validation callback — MUST use op.flattened() (NOT op.to_type())
#[hdk_extern]
pub fn validate(op: Op) -> ExternResult<ValidateCallbackResult> {
    match op.flattened::<EntryTypes, LinkTypes>()? {
        FlatOp::StoreEntry(store_entry) => match store_entry {
            OpEntry::CreateEntry { app_entry, .. } => match app_entry {
                EntryTypes::MyEntry(entry) => validate_create_my_entry(entry),
            },
            OpEntry::UpdateEntry { app_entry, .. } => match app_entry {
                EntryTypes::MyEntry(entry) => validate_update_my_entry(entry),
            },
            _ => Ok(ValidateCallbackResult::Valid),
        },
        _ => Ok(ValidateCallbackResult::Valid),
    }
}

fn validate_create_my_entry(entry: MyEntry) -> ExternResult<ValidateCallbackResult> {
    if entry.title.trim().is_empty() {
        return Ok(ValidateCallbackResult::Invalid(
            "MyEntry title cannot be empty".into(),
        ));
    }
    Ok(ValidateCallbackResult::Valid)
}

fn validate_update_my_entry(entry: MyEntry) -> ExternResult<ValidateCallbackResult> {
    validate_create_my_entry(entry)
}
Step 3: Coordinator Crate — Implement CRUD
File: zomes/coordinator/{domain}/src/my_entry.rs
Implement in this order: create → get_latest → get_all → update → delete
use hdk::prelude::*;
use {domain}_integrity::*;

// CREATE
#[hdk_extern]
pub fn create_my_entry(my_entry: MyEntry) -> ExternResult<Record> {
    let hash = create_entry(&EntryTypes::MyEntry(my_entry.clone()))?;
    // Path anchor
    let path = Path::from("entries.active");
    create_link(path.path_entry_hash()?, hash.clone(), LinkTypes::PathToMyEntry, ())?;
    // Agent index
    create_link(
        agent_info()?.agent_initial_pubkey,
        hash.clone(),
        LinkTypes::AgentToMyEntry,
        (),
    )?;
    get(hash, GetOptions::default())?
        .ok_or(wasm_error!(WasmErrorInner::Guest("Record not found after create".into())))
}

// GET LATEST (walks update chain)
#[hdk_extern]
pub fn get_latest_my_entry(original_action_hash: ActionHash) -> ExternResult<Option<Record>> {
    let links = get_links(
        GetLinksInputBuilder::try_new(original_action_hash.clone(), LinkTypes::MyEntryUpdates)?
            .build(),
    )?;
    let latest_hash = links
        .into_iter()
        .max_by(|a, b| a.timestamp.cmp(&b.timestamp))
        .and_then(|l| l.target.into_action_hash())
        .unwrap_or(original_action_hash);
    get(latest_hash, GetOptions::default())
}

// GET ALL (from path anchor)
#[hdk_extern]
pub fn get_all_my_entries(_: ()) -> ExternResult<Vec<Record>> {
    let path = Path::from("entries.active");
    let links = get_links(
        GetLinksInputBuilder::try_new(path.path_entry_hash()?, LinkTypes::PathToMyEntry)?.build(),
    )?;
    let inputs: Vec<GetInput> = links
        .into_iter()
        .filter_map(|l| l.target.into_action_hash())
        .map(|h| GetInput::new(h.into(), GetOptions::default()))
        .collect();
    let records = HDK.with(|hdk| hdk.borrow().get(inputs))?;
    Ok(records.into_iter().flatten().collect())
}

// UPDATE
#[hdk_extern]
pub fn update_my_entry(input: UpdateMyEntryInput) -> ExternResult<Record> {
    let original = get(input.original_action_hash.clone(), GetOptions::default())?
        .ok_or(wasm_error!(WasmErrorInner::Guest("Original not found".into())))?;
    if original.action().author() != &agent_info()?.agent_initial_pubkey {
        return Err(wasm_error!(WasmErrorInner::Guest("Not authorized".into())));
    }
    let updated = update_entry(input.previous_action_hash, &EntryTypes::MyEntry(input.updated_entry))?;
    create_link(input.original_action_hash, updated.clone(), LinkTypes::MyEntryUpdates, ())?;
    get(updated, GetOptions::default())?
        .ok_or(wasm_error!(WasmErrorInner::Guest("Record not found after update".into())))
}

// DELETE
#[hdk_extern]
pub fn delete_my_entry(original_action_hash: ActionHash) -> ExternResult<ActionHash> {
    // Clean path links
    let path = Path::from("entries.active");
    for link in get_links(
        GetLinksInputBuilder::try_new(path.path_entry_hash()?, LinkTypes::PathToMyEntry)?.build(),
    )? {
        if link.target.into_action_hash() == Some(original_action_hash.clone()) {
            delete_link(link.create_link_hash)?;
        }
    }
    delete_entry(original_action_hash)
}

// Input type for update (needed since update takes 3 params)
#[derive(Serialize, Deserialize, Debug)]
pub struct UpdateMyEntryInput {
    pub original_action_hash: ActionHash,
    pub previous_action_hash: ActionHash,
    pub updated_entry: MyEntry,
}
lib.rs — expose the module:
pub mod my_entry;
pub use my_entry::*;
// The functions in my_entry.rs already carry #[hdk_extern], so re-exporting the
// module is enough; adding wrapper functions here would register duplicate
// extern symbols and fail to compile.
Step 4: Utils Crate (if cross-zome calls needed)
Add to utils/src/errors.rs:
// (see ErrorHandling.md for full pattern)
Add to utils/src/cross_zome.rs:
// (see Patterns.md for external_local_call helper)
Update workspace Cargo.toml to include utils crate.
Step 5: Tests
Write tests in this order. Use Sweettest (Rust, cargo test) or Tryorama (TypeScript, bun run test) — see Testing.md for full patterns for both.
Foundation (single-agent):
1. Create an entry — assert record returned
2. Get latest — assert matches created entry
3. Get all — assert list contains created entry
4. Update — assert updated fields reflected
5. Delete — assert entry gone from list
Integration (two agents):
1. Alice creates → await_consistency / dhtSync → Bob reads — assert cross-agent read works
2. Alice creates → await_consistency / dhtSync → Bob gets all — assert entry in collection
3. Alice creates → updates → await_consistency / dhtSync → Bob gets latest — assert latest version
Sweettest (Rust) commands:
cargo test --package my_dna_tests
cargo test --package my_dna_tests two_agents # single test
Tryorama (TypeScript) commands:
bun run test:foundation
bun run test:integration
See Testing.md for full code patterns including await_consistency (Sweettest) and dhtSync (Tryorama) placement.
Step 6: Build and Verify
# Full build — verify no compile errors
hc sandbox generate workdir/
# If build succeeds, run tests
bun run test:foundation
# After foundation passes, run integration
bun run test:integration
Common build errors:
| Error | Cause | Fix |
|---|---|---|
| `cannot find type EntryTypes` | Missing import | Add `use {domain}_integrity::*;` |
| `op.to_type()` deprecated | Old API | Replace with `op.flattened()` |
| expected `ExternResult`, found `ValidateCallbackResult` | Wrong return | Use `Ok(ValidateCallbackResult::Valid)` |
| Link type not found | Unregistered link | Add to `#[hdk_link_types]` enum in integrity |
| `wasm-opt` timeout | Build too slow | Normal for first build; subsequent builds cache |
Workflow: Design Access Control
Use this workflow when you need to design who can call what zome functions, how remote signals are authorized, or how admin operations are gated.
Step 1: Identify Callers
Map every zome function to its caller type:
| Function | Caller | Notes |
|---|---|---|
| `create_post` | UI (same agent) | No grant needed |
| `recv_remote_signal` | Any remote agent | Needs Unrestricted grant |
| `update_admin_status` | Admin agent only | Progenitor check |
| `get_shared_resource` | Specific partner agent | Assigned grant |
Questions to answer:
- Is the caller the same agent as the cell owner? (No grant needed)
- Can any agent call this function? (Unrestricted)
- Can only a specific agent call this? (Assigned)
- Can anyone with a token call this? (Transferable)
Step 2: Choose Pattern per Function
| Caller scope | Pattern | Where |
|---|---|---|
| Same agent (UI) | No grant | N/A |
| Any agent | CapAccess::Unrestricted in init() | init() callback |
| Named agent(s) | CapAccess::Assigned | On-demand grant creation |
| Token holder | CapAccess::Transferable | On-demand grant creation |
| Admin-only | Progenitor check in coordinator | Coordinator function body |
Step 3: Design Cap Grants
For each function requiring a grant:
Function: recv_remote_signal
Grantor: self (init)
Grantee: all
Access: Unrestricted
Grant timing: init() on first run
Function: approve_member
Grantor: progenitor cell
Grantee: specific delegate agent
Access: Assigned { secret, assignees: [delegate_pubkey] }
Grant timing: progenitor creates grant on delegation
Secret distribution: progenitor sends via private entry to delegate
Step 4: Write the init() Function
For every Unrestricted grant, add to init():
#[hdk_extern]
pub fn init(_: ()) -> ExternResult<InitCallbackResult> {
    // GrantedFunctions::Listed takes a BTreeSet of (ZomeName, FunctionName)
    let mut functions = BTreeSet::new();
    // Add each function that needs an unrestricted grant:
    functions.insert((zome_info()?.name, "recv_remote_signal".into()));
    // functions.insert((zome_info()?.name, "another_open_fn".into()));
    create_cap_grant(ZomeCallCapGrant {
        tag: "open_functions".into(),
        access: CapAccess::Unrestricted,
        functions: GrantedFunctions::Listed(functions),
    })?;
    Ok(InitCallbackResult::Pass)
}
Step 5: Write Validation Constraints
For admin operations, the coordinator check is the enforcement point:
pub fn admin_only_function(input: AdminInput) -> ExternResult<ActionHash> {
    // Always check first — before any state mutation
    if !check_if_progenitor()? {
        return Err(wasm_error!(WasmErrorInner::Guest(
            "This function is restricted to the network progenitor.".into()
        )));
    }
    // Proceed with admin logic
}
For update/delete operations, also validate in the integrity zome using must_get_action():
// In integrity validate() for update ops:
let original = must_get_action(original_action_hash)?;
if action.author() != original.action().author() {
    return Ok(ValidateCallbackResult::Invalid("Not the original author".into()));
}
Reference
- Cap grant patterns: AccessControl.md
- Progenitor setup: Architecture.md § DNA Properties
- must_get_* authorship checks: Patterns.md § must_get
ReviewZome Workflow
Review existing zome code against Holochain best practices, HDK 0.6 patterns, and the project’s established conventions. Run proactively before implementing any zome changes, or explicitly when asked to audit code.
Step 1 — Load context files
Always load both:
- Architecture.md — coordinator/integrity split, DNA roles, cross-DNA patterns
- Patterns.md — HDK 0.6 API, entry types, link types, CRUD, validation rules
Step 2 — Identify files in scope
If invoked proactively (PLAN phase), scope = files identified in the task plan. If invoked explicitly, scope = files provided or the current PR diff.
For each file determine: integrity zome, coordinator zome, shared types, tests.
Step 3 — Run the checklist
Work through each category. Flag every issue with severity: BLOCK (must fix before merge), WARN (should fix), NOTE (informational).
Entry Schema
- New fields on existing entry structs have #[serde(default)] — required for schema evolution; prevents deserialization failures on existing entries
- No agent_pub_key, created_at, or updated_at fields on entry structs (those live in the action header — access via record.action().author() / .timestamp())
- Status enums use a dedicated enum type, not a raw String
Integrity / Validation
- validate() uses op.flattened::<EntryTypes, LinkTypes>()?, not the deprecated op.to_type()
- No DHT reads inside validate() — no get(), get_links(), agent_info(), sys_time()
- New entry types are registered in the #[hdk_entry_types] enum
- New link types are registered in the #[hdk_link_types] enum
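The integrity checks above lend themselves to a quick mechanical pre-screen before the manual pass. A hedged sketch — the grep patterns and the `zomes/` path are our own heuristics, not official tooling, and the sample file only exists to make the snippet self-contained:

```shell
# Heuristic pre-screen for the integrity checklist (sketch; patterns and
# paths are assumptions, not official tooling).
mkdir -p zomes/demo_integrity
cat > zomes/demo_integrity/lib.rs <<'EOF'
pub fn validate(op: Op) -> ExternResult<ValidateCallbackResult> {
    match op.to_type::<EntryTypes, LinkTypes>()? { _ => todo!() }
}
EOF

# Flag deprecated op.to_type() (HDK 0.6 uses op.flattened()):
findings=$(grep -rln "to_type" zomes --include="*.rs" || true)
echo "deprecated-api files: $findings"
```

Each hit becomes a WARN or BLOCK finding in Step 4; a clean run does not replace working through the checklist by hand.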
Coordinator — HDK 0.6 API
- delete_link(hash, GetOptions::default()) — not the pre-0.6 single-arg form
- LinkQuery::try_new() used for link queries (not the old GetLinksInputBuilder unless specifically needed)
- GetStrategy::Local for own-data queries; GetStrategy::Network for DHT queries
- must_get_valid_record() used for fail-fast gets in update/delete authorship checks
Cross-Zome / Cross-DNA Calls
- CallTargetCell::OtherRole("hrea") role name matches workdir/happ.yaml exactly
- ZomeName(...) matches the coordinator crate name in its Cargo.toml
- ZomeCallResponse match is exhaustive (5 variants in HDK 0.6: Ok, Unauthorized, AuthenticationFailed, NetworkError, CountersigningSession)
- No direct Cargo dependency on the remote DNA’s crate — use local mirror structs for serialization
- If using shared utility crates: verify intra-DNA and cross-DNA call helpers match the project’s established patterns (e.g., wrapper functions in a utils crate rather than raw call() everywhere)
Error Handling
- All fallible operations use ExternResult<T>; no .unwrap() or .expect() in zome functions
- wasm_error!(WasmErrorInner::Guest(...)) used for domain errors (not WasmErrorInner::Host)
- Custom error types implement From<MyError> for WasmError
Tests (Sweettest)
- await_consistency(&[&cell_a, &cell_b]).await called before any cross-agent read
- Tests use #[tokio::test(flavor = "multi_thread")] and the holochain dev-dependency with the test_utils feature
- New #[hdk_extern] functions have at least one Sweettest test
Step 4 — Output findings
Group by severity:
## ReviewZome: {scope}
### BLOCK (must fix before merge)
- [ ] {file}:{issue} — {explanation}
### WARN (should fix)
- [ ] {file}:{issue} — {explanation}
### NOTE (informational)
- {file}:{observation}
### PASS
- {category}: no issues found
If no issues: “All checks pass. Ready to implement / merge.”
Step 5 — Offer to fix
If BLOCK items were found: “I can fix these now. Say ‘fix’ to proceed.” If only WARN/NOTE: “No blockers found. Suggestions above are optional improvements.”
Workflow: Package and Deploy a Holochain hApp
Guided 7-step workflow for packaging a Holochain hApp into a standalone desktop application using Kangaroo-Electron.
Reference: ../Deployment.md for full details on any step.
Step 1 — Verify Holochain Version Compatibility
Confirm your hApp targets a supported Kangaroo-Electron branch and that your conductor config is compatible.
Check your Cargo.toml versions:
hdk = "=0.6.1"
hdi = "=0.7.1"
Check for iroh transport (required for 0.6+):
Ensure your conductor config or app configuration includes a relayUrl. If your app was built for 0.5.x, you must add this before deploying on 0.6+.
Checkpoint: You know which Kangaroo branch to use (main-0.6 for 0.6.x apps).
Common mistake: Using main (0.7.0-dev) for a production app. Use main-0.6 unless you explicitly need cutting-edge features.
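The Step 1 check can be scripted. A sketch, assuming the pins live in a workspace-level Cargo.toml (the sample file below is embedded only to make the snippet self-contained):

```shell
# Sketch: derive the Kangaroo branch from the Cargo.toml pins.
# The Cargo.toml written here is a stand-in for your workspace file.
cat > Cargo.toml <<'EOF'
[workspace.dependencies]
hdk = "=0.6.1"
hdi = "=0.7.1"
EOF

if grep -q 'hdk = "=0.6' Cargo.toml && grep -q 'hdi = "=0.7' Cargo.toml; then
  echo "target Kangaroo branch: main-0.6"
else
  echo "pins do not match the 0.6 line; see Deployment.md"
fi
```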
Step 2 — Set Up Kangaroo-Electron
Clone the repository and install dependencies.
git clone https://github.com/holochain/kangaroo-electron
cd kangaroo-electron
git checkout main-0.6
npm install
npm install automatically fetches and validates the conductor + lair-keystore binaries with SHA256 checksums. No manual compilation required.
Checkpoint: node_modules/ is populated and npm run start doesn’t error on missing binaries.
Common mistake: Forgetting to git checkout main-0.6 after cloning (defaults to main / 0.7.0-dev).
Step 3 — Place Your Artifacts
Build your hApp outside Kangaroo and place the .webhapp bundle into pouch/.
# Build your webhapp (from your hApp project root)
hc app pack ./workdir --recursive
# or your build script:
bun run build:webhapp
# Copy the output into Kangaroo's pouch directory
cp path/to/your-app.webhapp /path/to/kangaroo-electron/pouch/
What goes in pouch/: A .webhapp file — a single bundle containing both your .happ (conductor + DNAs + zomes) and your UI assets. This is NOT the same as a .happ file, which is backend only.
UI icon: Ensure your UI assets include icon.png (≥ 256×256 px) at the UI root. This is required for desktop packaging.
Checkpoint: pouch/your-app.webhapp exists and has a non-zero file size.
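The checkpoint reduces to a one-line file test. A sketch — the bundle name is illustrative, and a stand-in file is created so the snippet runs on its own:

```shell
# Sketch: Step 3 checkpoint check. The printf line creates a stand-in
# bundle so the snippet is self-contained; drop it against a real pouch/.
mkdir -p pouch
printf 'demo-bytes' > pouch/your-app.webhapp

# -s: file exists and has non-zero size
if [ -s pouch/your-app.webhapp ]; then
  echo "pouch artifact present and non-empty"
else
  echo "missing or empty .webhapp; re-run hc app pack" >&2
fi
```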
Step 4 — Configure Metadata
Update the configuration files with your app’s identity.
package.json — name and version:
{
"name": "your-app-name",
"version": "0.1.0"
}
electron/config.ts — product identity and paths:
export const APP_ID = "com.yourorg.yourapp"; // reverse-domain, unique
export const PRODUCT_NAME = "Your App Name"; // Windows: alphanumeric + hyphens only
export const HAPP_PATH = "pouch/your-app.webhapp";
Sync version across files: Ensure package.json version and any version displayed in your UI match.
Checkpoint: APP_ID is unique to your app, PRODUCT_NAME contains no special characters, HAPP_PATH matches the file in pouch/.
Common mistake: Leaving APP_ID as the kangaroo template default — two apps with the same APP_ID will share data folders on user machines.
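The Step 4 checkpoint conditions can also be checked mechanically. A sketch with example values — the regexes are our reading of the constraints above, not an official specification:

```shell
# Sketch: validate app identity values (example values, our own regexes).
APP_ID="com.yourorg.yourapp"
PRODUCT_NAME="Your-App-Name"

# Reverse-domain APP_ID: two or more dot-separated lowercase segments.
echo "$APP_ID" | grep -Eq '^[a-z0-9]+(\.[a-z0-9-]+)+$' \
  && echo "APP_ID ok" || echo "APP_ID is not reverse-domain"

# Windows-safe PRODUCT_NAME: alphanumeric and hyphens only.
echo "$PRODUCT_NAME" | grep -Eq '^[A-Za-z0-9-]+$' \
  && echo "PRODUCT_NAME ok" || echo "PRODUCT_NAME has special characters"
```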
Step 5 — Test Locally
Verify the app works before publishing.
# Development mode (hot reload, DevTools available)
npm run start
# Production build (all configured platforms)
npm run kangaroo
Checkpoint:
- npm run start launches the app without errors
- UI loads and basic zome calls succeed
- npm run kangaroo completes without errors
- Installer in dist/ is present and installs cleanly
Common mistake: Testing only in dev mode. Production builds can fail due to code signing or asset path issues not present in dev.
Step 6 — Publish via CI
Push to the appropriate branch to trigger automated cross-platform builds.
# Unsigned builds (development / beta releases)
git push origin HEAD:release
# Code-signed builds (production releases)
git push origin HEAD:release-codesigned
GitHub Actions will build installers for Windows (.exe / .msi), macOS (.dmg), and Linux (.AppImage / .deb).
For code-signed builds, the following secrets must be set in your GitHub repo settings before pushing to release-codesigned:
- macOS: APPLE_CERTIFICATE, APPLE_CERTIFICATE_PASSWORD, APPLE_ID, APPLE_APP_SPECIFIC_PASSWORD, APPLE_TEAM_ID
- Windows: WINDOWS_CERTIFICATE, WINDOWS_CERTIFICATE_PASSWORD
Checkpoint: GitHub Actions run completes. Installers are attached to the GitHub release.
Step 7 — Version Future Releases
Apply the correct version bump type for each future release.
| Change type | Version bump | User data |
|---|---|---|
| Bug fix, minor enhancement | Patch (0.1.0 → 0.1.1) | Preserved |
| New features, schema changes | Minor (0.1.0 → 0.2.0) | Isolated (user starts fresh) |
| Breaking architecture change | Major (0.1.0 → 1.0.0) | Isolated |
| Any pre-release tag | Any | Always isolated |
Rule: Only use patch bumps for updates that are backward-compatible at the data layer.
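The rule can be restated as a small predicate. A stdlib-only Rust sketch of the table's semantics — the function and names are ours, not part of Kangaroo:

```rust
// Sketch of the versioning rule above: user data survives only a plain
// patch bump (same major.minor, no pre-release tag on either side).
fn data_preserved(old: &str, new: &str) -> bool {
    // Parse "MAJOR.MINOR.PATCH[-pre]" into (major, minor, has_pre_tag).
    let parse = |v: &str| -> Option<(u64, u64, bool)> {
        let pre = v.contains('-');
        let core = v.split('-').next()?;
        let mut it = core.split('.');
        Some((it.next()?.parse().ok()?, it.next()?.parse().ok()?, pre))
    };
    match (parse(old), parse(new)) {
        (Some((maj_o, min_o, false)), Some((maj_n, min_n, false))) => {
            maj_o == maj_n && min_o == min_n
        }
        _ => false, // pre-release tag or unparseable: treat as isolated
    }
}

fn main() {
    assert!(data_preserved("0.1.0", "0.1.1")); // patch: preserved
    assert!(!data_preserved("0.1.0", "0.2.0")); // minor: isolated
    assert!(!data_preserved("0.1.0", "0.1.1-beta.1")); // pre-release: isolated
}
```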
Quick Reference
Setup: git checkout main-0.6 → npm install
Dev: npm run start
Build: npm run kangaroo
Publish: git push origin HEAD:release
See also: ../Deployment.md for troubleshooting, CI secrets reference, and alternative deployment options.
Requirements Specification: Holochain Agent Skill
Discovery session: 2026-03-12 Status: v1 scope confirmed
Problem Statement
The Holochain developer ecosystem lacks a comprehensive AI coding assistant skill that covers the full development cycle in one place. Existing documentation is scattered across developer.holochain.org, GitHub repos, and community channels. New developers face steep learning curves; experienced developers lack a fast co-pilot for implementation patterns. This skill addresses both by providing a structured, context-aware assistant that works across the full spiral: architecture, design, scaffolding, implementation, testing, and deployment.
A secondary goal is enabling the wider Holochain community to benefit from AI-assisted development without requiring PAI (Personal AI Infrastructure) — the skill must work as a standalone agent skill with zero external dependencies.
Target Users
| Persona | Context | Primary Need |
|---|---|---|
| Junior Holochain developer | Learning the framework, first hApp | Guided workflows, explanations, scaffold commands |
| Experienced Holochain developer | Active project, knows the patterns | Fast pattern lookup, CRUD generation, debugging help |
| Full-stack developer new to Holochain | Knows Rust/TypeScript, learning DHT concepts | Architecture explanation, data model design, TypeScript client integration |
Functional Requirements
| ID | Requirement | Priority | Acceptance Criteria |
|---|---|---|---|
| FR-01 | Skill must cover Architecture domain | Must | Architecture.md loads on request; covers coordinator/integrity split, DNA structure, Nix, progenitor, multi-DNA, private entries |
| FR-02 | Skill must cover Design domain | Must | Workflows/DesignDataModel.md guides entry/link type design with output artifacts |
| FR-03 | Skill must cover Scaffold domain | Must | Scaffold.md + Workflows/Scaffold.md cover: Holonix setup, Nix flake, hc CLI, hc scaffold commands, new project workflow, add-domain-to-existing workflow |
| FR-04 | Scaffold workflow follows official Holochain documentation | Must | Commands and patterns reference developer.holochain.org; version pins current (hdk=0.6.1, hdi=0.7.1) |
| FR-05 | Skill must cover Implementation domain | Must | Patterns.md covers entry types, link types, CRUD, cross-zome calls, signals, validation, HDK 0.6 API |
| FR-06 | Skill must cover Testing domain | Must | Testing.md covers Tryorama setup, two-agent scenarios, dhtSync, update/delete patterns |
| FR-07 | Skill must cover Deployment domain | Must | Deployment.md + Workflows/PackageAndDeploy.md cover Kangaroo-Electron packaging, CI/CD, versioning |
| FR-08 | Skill must be PAI-independent | Must | No voice notification curl, no SKILLCUSTOMIZATIONS hook, no PROJECTS.md references, no Algorithm routing; works in vanilla Claude Code |
| FR-09 | Skill must include installation documentation | Must | README.md with 3 installation options (global, project-local, symlink), quick start examples |
| FR-10 | Domain correspondence with PAI version | Should | Same sections, same knowledge depth, same workflow structure — different wrappers |
| FR-11 | Skill routing covers all 5 workflows | Must | SKILL.md routing table maps natural language triggers to correct workflows |
| FR-12 | Context files load on demand | Must | SKILL.md specifies which context file to load for each topic; not all pre-loaded |
Non-Functional Requirements
| ID | Category | Requirement | Target |
|---|---|---|---|
| NFR-01 | Portability | Works with zero PAI infrastructure | Verified by install in fresh Claude Code with no ~/.claude/PAI/ |
| NFR-02 | Currency | Version pins match current stable Holochain | hdk=0.6.1, hdi=0.7.1, holonix ref=main-0.6 at release |
| NFR-03 | Completeness | All 6 domains have content | No stub files in v1 release |
| NFR-04 | Accuracy | Code examples compile and run correctly | Examples tested against real hAppenings/Nondominium codebase |
Constraints
- v1 conforms to Agent Skills Open Standard — compatible with Claude Code, GitHub Copilot, Cursor, Augment, and Codex
- Ecosystem expansion deferred to v2 — hREA, unyt, holochain-open-dev, ADAM, Wind Tunnel not in scope
- Two independent codebases for v1 — PAI version and vanilla version developed separately; integration/merge post-v1
- No GUI in v1 — visual tooling, diagram generation, and no-code interfaces are v3+ vision
- Official docs anchor — Scaffold workflow must follow developer.holochain.org, not invent conventions
Open Questions
- Public repo location: Personal GitHub or a community org (Holochain Foundation, holochain-open-dev)?
- Community discovery: How to publicize the skill to the Holochain developer community?
- PAI merge trigger: Time-based (3 months) or milestone-based (X workflows proven stable)?
- Contribution model: Solo-maintained or open contributions from day one?
Deferred (v2+)
| Feature | Target Version | Description |
|---|---|---|
| hREA / ValueFlows sub-skill | v2 | Scaffold and implement ValueFlows-compatible zomes |
| holochain-open-dev patterns | v2 | Profiles, links to other happs, linked devices |
| ADAM (coasys) integration | v2 | AD4M perspectives and expression languages |
| Wind Tunnel testing | v2 | Performance testing for Holochain apps |
| unyt integration | v2 | Unit-aware numeric types for resource tracking |
| Holo hosting / edge nodes | v2 | HTTP gateway, HolOS, Holo Node ISO setup |
| Cross-LLM portability | v2 | Adapt for GLM 5, other AI clients with skill support |
| Skill graph / ecosystem orchestrator | v2 | Parent skill routing to domain sub-skills |
| GUI / visual programming | v3+ | No-code interface with DHT model explorer |
| Diagram generation | v3+ | Visual architecture and data flow diagrams |
| Progressive disclosure UI | v3+ | Junior/senior mode switching |
Roadmap
v1 — Core Spiral (current)
Theme: Everything needed to build, test, and deploy a Holochain hApp from scratch.
Domains: Architecture, Design, Scaffold, Implement, Test, Deploy
Workflows:
- DesignDataModel — DHT entry/link type design with validation rules
- Scaffold — New project and new domain scaffolding workflows
- ImplementZome — Full CRUD zome implementation
- DesignAccessControl — Capability grants and admin patterns
- PackageAndDeploy — Kangaroo-Electron packaging and CI/CD
- ReviewZome — Proactive code review checklist
Context files shipped ahead of schedule:
- WindTunnel.md — Performance/load testing with wind-tunnel (originally v2)
Target: All Agent Skills-compatible tools (Claude Code, GitHub Copilot, Cursor, Augment, Codex)
v2 — Ecosystem Expansion
Theme: Connect to the broader Holochain ecosystem. Cross-hApp and cross-network patterns.
Planned additions:
Sub-skills
- hREA / ValueFlows — Scaffold and implement ValueFlows-compatible economic resource tracking; EconomicEvent, EconomicResource, Process entry types; REA ontology patterns
- holochain-open-dev — Community-standard patterns: Profiles zome, linked devices, file storage, notifications
- ADAM (coasys) — AD4M perspectives, expression languages, cross-hApp linking
- Holo Hosting — HTTP gateway setup, edge node configuration, Holo Node ISO, HolOS
- Unyt — Holochain Foundation’s P2P accounting and payment infrastructure; Alliance setup and configuration, Smart Agreements (RHAI scripting, three-layer template/agreement/RAVE architecture), transaction types (Pay, Request, Trade), inter-network and EVM bridging, agent onboarding via Joining Service REST API, Pricing Oracle integration, and deployment with tauri-plugin-holochain
Architecture improvements
- Skill graph: parent orchestrator routing to sub-skills
- Cross-LLM portability (GLM 5, any client with skill support)
v3 — GUI and Visual Tooling
Theme: Make Holochain accessible without deep framework knowledge. From developers to builders.
Vision:
- Visual DHT data model explorer — design entry/link types through a diagram interface
- No-code workflow UI — guided scaffold and deploy without terminal commands
- Architecture diagram generation — auto-generate from zome code
- Progressive disclosure — beginner mode (guided, verbose) vs. expert mode (fast, terse)
- Monitoring integration — visual DHT health, gossip status, conductor logs
Inspiration: Holo Node ISO’s web-based Node Manager shows the direction — powerful infrastructure made accessible through UI. This skill’s v3 applies the same principle to development tooling.
PAI Integration (post-v1)
Once both the PAI version and vanilla version are field-tested:
- Audit differences — what did each version evolve to independently?
- Extract shared knowledge — create canonical knowledge files usable by both
- Layer PAI on top — PAI SKILL.md wraps shared files and adds PAI-specific features (voice, project routing, Algorithm integration)
- Publish shared core — vanilla skill becomes the community baseline; PAI version is a superset
Version History
| Version | Date | Changes |
|---|---|---|
| 0.1.0 | 2026-03-12 | Initial vanilla skill — 6 domains, 5 workflows, requirements spec |
| 0.1.1 | 2026-03-12 | Agent Skills Open Standard conformance, multi-platform README, testing plan |
| 0.1.2 | 2026-05-15 | Version bump to Holochain 0.6.1 (hdk=0.6.1, hdi=0.7.1); WindTunnel.md shipped ahead of schedule |
| 0.2.0 | 2026-05-15 | Expanded progenitor pattern: full DnaProperties setup, bootstrap mode (Option |
Testing Plan: Holochain Agent Skill v1
Status: Pre-release checklist Target: v1.0.0 release gate
All tests are manual unless marked [auto]. Check each box before cutting a release.
T1 — Agent Skills Open Standard Conformance
Validate that SKILL.md frontmatter meets the Agent Skills Open Standard spec.
head -25 SKILL.md
| # | Test | Pass Condition |
|---|---|---|
| T1.1 | name field present | Key exists in frontmatter |
| T1.2 | name is lowercase | Value is holochain (no uppercase, no spaces) |
| T1.3 | name uses only alphanumeric + hyphens | Regex: ^[a-z0-9-]+$ |
| T1.4 | description field present | Key exists |
| T1.5 | description is between 1 and 1024 characters | wc -c on the value |
| T1.6 | description mentions primary use cases | Contains: “zome”, “HDK”, “Holochain” |
| T1.7 | license field is Apache-2.0 | Value exactly matches |
| T1.8 | compatibility field present and non-empty | Key exists, value not blank |
| T1.9 | metadata.author is soushi888 | Value matches |
| T1.10 | metadata.version is present | Key exists, SemVer format |
| T1.11 | metadata.holochain-versions references current pins | Contains hdk=0.6.1, hdi=0.7.1, holonix ref=main-0.6 |
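Several T1 checks can be scripted rather than eyeballed. A sketch against a sample frontmatter — the file written below is a stand-in, not the real SKILL.md:

```shell
# Sketch: automate a few T1 checks. Sample frontmatter stands in for
# the real SKILL.md so the snippet is self-contained.
cat > SKILL.md <<'EOF'
---
name: holochain
description: Holochain hApp development with zome patterns and the HDK 0.6 API.
license: Apache-2.0
---
EOF

name=$(grep '^name:' SKILL.md | cut -d' ' -f2)
echo "$name" | grep -Eq '^[a-z0-9-]+$' && echo "T1.2/T1.3 pass: $name"
grep -q '^license: Apache-2.0$' SKILL.md && echo "T1.7 pass"
```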
T2 — File Integrity
Verify every file referenced in SKILL.md routing tables actually exists.
ls Workflows/*.md
ls *.md
| # | File | Exists? |
|---|---|---|
| T2.1 | Workflows/DesignDataModel.md | ☐ |
| T2.2 | Workflows/Scaffold.md | ☐ |
| T2.3 | Workflows/ImplementZome.md | ☐ |
| T2.4 | Workflows/DesignAccessControl.md | ☐ |
| T2.5 | Workflows/PackageAndDeploy.md | ☐ |
| T2.6 | Architecture.md | ☐ |
| T2.7 | Scaffold.md | ☐ |
| T2.8 | Patterns.md | ☐ |
| T2.9 | AccessControl.md | ☐ |
| T2.10 | CellCloning.md | ☐ |
| T2.11 | ErrorHandling.md | ☐ |
| T2.12 | Testing.md | ☐ |
| T2.13 | TypeScript.md | ☐ |
| T2.14 | Deployment.md | ☐ |
| T2.15 | LICENSE at repo root | ☐ |
| T2.16 | README.md at repo root | ☐ |
T3 — Routing Accuracy
For each Workflow Routing entry in SKILL.md, verify the trigger resolves to the correct file and the file’s content matches the described purpose.
| # | Trigger phrase | Expected file | Content check |
|---|---|---|---|
| T3.1 | “design data model” | Workflows/DesignDataModel.md | Contains Step 1 (domains/zome pairs) and Step 2 (entry type definition) |
| T3.2 | “new happ” | Workflows/Scaffold.md | Contains Nix install and hc scaffold happ commands |
| T3.3 | “implement zome” | Workflows/ImplementZome.md | Contains hc scaffold entry-type and integrity/coordinator structure |
| T3.4 | “who can call” | Workflows/DesignAccessControl.md | Contains CapAccess::Unrestricted, CapAccess::Assigned |
| T3.5 | “package” | Workflows/PackageAndDeploy.md | Contains Kangaroo-Electron setup steps |
For each Context Files entry in SKILL.md:
| # | Load-when trigger | Expected file | Content check |
|---|---|---|---|
| T3.6 | coordinator/integrity split | Architecture.md | Contains hdi and hdk crate explanation |
| T3.7 | Nix flake setup | Scaffold.md | Contains nix develop and flake.nix |
| T3.8 | entry types, CRUD | Patterns.md | Contains #[hdk_entry_helper] and create_entry() |
| T3.9 | cap grants | AccessControl.md | Contains CapAccess::Unrestricted and init() |
| T3.10 | cell cloning | CellCloning.md | Contains createCloneCell and clone_limit |
| T3.11 | WasmError | ErrorHandling.md | Contains WasmError and ExternResult |
| T3.12 | Tryorama tests | Testing.md | Contains dhtSync and two-agent scenario |
| T3.13 | holochain-client | TypeScript.md | Contains callZome and signal handling |
| T3.14 | packaging, Kangaroo | Deployment.md | Contains .webhapp and versioning guidance |
T4 — Content Coverage
Verify each of the 6 skill domains has substantive (non-stub) content.
| # | Domain | Primary file | Pass condition |
|---|---|---|---|
| T4.1 | Architecture | Architecture.md | > 100 lines, covers integrity/coordinator split |
| T4.2 | Design | Workflows/DesignDataModel.md | Has at least 4 numbered steps with examples |
| T4.3 | Scaffold | Scaffold.md + Workflows/Scaffold.md | Contains nix develop, hc scaffold happ, Nix flake template |
| T4.4 | Implement | Patterns.md | Contains CRUD patterns, link types, validation section |
| T4.5 | Test | Testing.md | Contains Tryorama setup, dhtSync, two-agent example |
| T4.6 | Deploy | Deployment.md + Workflows/PackageAndDeploy.md | Contains kangaroo-electron, .webhapp bundling, versioning |
T5 — Code Example Accuracy
Validate specific API calls against the actual HDK 0.6 API (use the hAppenings or Nondominium codebase as reference).
HDK / HDI API
| # | Example to validate | Expected form | File |
|---|---|---|---|
| T5.1 | Entry type macro | #[hdk_entry_helper] on struct | Patterns.md |
| T5.2 | Entry type enum in integrity | #[hdk_entry_types] on enum with #[unit_enum(UnitEntryTypes)] | Patterns.md |
| T5.3 | Create entry | create_entry(EntryTypes::MyEntry(entry)) | Patterns.md |
| T5.4 | Get entry | get(hash, GetOptions::default()) or must_get_entry(hash) | Patterns.md |
| T5.5 | Delete link | delete_link(link_hash, GetOptions::default()) (second arg required in 0.6) | Patterns.md |
| T5.6 | Link types enum | #[hdk_link_types] on enum | Patterns.md |
| T5.7 | Update chain tracking | create_link(original_hash, new_hash, LinkTypes::EntryUpdates, ()) | Patterns.md |
| T5.8 | Validation signature | pub fn validate(op: Op) -> ExternResult<ValidateCallbackResult> | Patterns.md |
| T5.9 | post_commit infallible | #[hdk_extern(infallible)] + pub fn post_commit(...) | Architecture.md or Patterns.md |
| T5.10 | Remote signal cap grant | CapAccess::Unrestricted grant created in init() | AccessControl.md |
| T5.11 | dhtSync call | dhtSync([&alice, &bob], &conductor) or equivalent Tryorama API | Testing.md |
| T5.12 | Scaffold compile check | hc s sandbox generate workdir/ | Workflows/ImplementZome.md |
Version pin consistency [auto]
grep -rn "hdk\s*=\s*\"=" . --include="*.md" --include="*.toml" | grep -v Plans/
grep -rn "hdi\s*=\s*\"=" . --include="*.md" --include="*.toml" | grep -v Plans/
grep -rn "holonix" . --include="*.md" | grep -v Plans/
| # | Check | Expected value | Pass condition |
|---|---|---|---|
| T5.13 | hdk pin in SKILL.md Quick Reference | "=0.6.1" | All occurrences match |
| T5.14 | hdi pin in SKILL.md Quick Reference | "=0.7.1" | All occurrences match |
| T5.15 | holonix ref in SKILL.md and Scaffold.md | main-0.6 | All occurrences match |
| T5.16 | No file references hdk = "0.5.*" or older | — | Zero matches |
| T5.17 | PackageAndDeploy.md Cargo.toml example pins match current | hdk = "=0.6.1" | Matches T5.13 |
T6 — Installation Tests
Perform each installation method in a clean environment.
Option A — Global copy
cd /tmp
git clone https://github.com/Soushi888/holochain-agent-skill
cp -r holochain-agent-skill ~/.claude/skills/holochain
| # | Test | Pass condition |
|---|---|---|
| T6.1 | Directory created | ~/.claude/skills/holochain/ exists |
| T6.2 | SKILL.md present inside | ~/.claude/skills/holochain/SKILL.md exists |
| T6.3 | Workflows/ subdirectory present | ~/.claude/skills/holochain/Workflows/ exists with 5 files |
| T6.4 | All context files present | Architecture.md, Patterns.md, etc. all copied |
Option B — Project-local
cd /tmp/my-test-project
mkdir -p .claude/skills
cp -r /tmp/holochain-agent-skill .claude/skills/holochain
| # | Test | Pass condition |
|---|---|---|
| T6.5 | Skill installed at project path | .claude/skills/holochain/SKILL.md exists |
| T6.6 | Does not affect global ~/.claude/skills/ | Global directory unchanged |
Option C — Symlink
git clone https://github.com/Soushi888/holochain-agent-skill ~/holochain-agent-skill
ln -s ~/holochain-agent-skill ~/.claude/skills/holochain
| # | Test | Pass condition |
|---|---|---|
| T6.7 | Symlink created | ~/.claude/skills/holochain is a symlink |
| T6.8 | Symlink resolves | ls -la ~/.claude/skills/holochain/SKILL.md returns file |
| T6.9 | git pull propagates | Pull in ~/holochain-agent-skill, symlink sees updates immediately |
T7 — Invocation Tests
Verify the skill loads and responds correctly in Claude Code.
| # | Test | Steps | Pass condition |
|---|---|---|---|
| T7.1 | Explicit command invocation | Type /holochain in Claude Code | Skill loads, greets with Holochain context |
| T7.2 | Natural language trigger — workflow | Type “implement zome for Profile entry type” | Workflows/ImplementZome.md guidance appears |
| T7.3 | Natural language trigger — context file | Type “how do I set up a Tryorama test?” | Testing.md content cited |
| T7.4 | Natural language trigger — scaffold | Type “scaffold a new happ called my-network” | Workflows/Scaffold.md steps appear |
| T7.5 | Version question | Ask “what version of hdk does this skill target?” | Responds with 0.6.1 |
| T7.6 | Out-of-scope question | Ask a non-Holochain question | Skill does not answer as if it’s Holochain-related |
T8 — PAI Independence
Verify the skill works in a clean Claude Code environment with no PAI infrastructure.
| # | Test | Steps | Pass condition |
|---|---|---|---|
| T8.1 | No ~/.claude/PAI/ required | Temporarily rename ~/.claude/PAI/ to ~/.claude/PAI_bak/, invoke skill | Skill loads without error |
| T8.2 | No voice curl in skill files | grep -r "localhost:8888" . | Zero matches |
| T8.3 | No Algorithm routing references | grep -r "ALGORITHM|AlgorithmMode|PAI/Algorithm" . | Zero matches in skill files |
| T8.4 | No PROJECTS.md references | grep -r "PROJECTS.md" . | Zero matches in skill files |
| T8.5 | Restore PAI after test | mv ~/.claude/PAI_bak ~/.claude/PAI | Restore before next session |
T9 — Workflow End-to-End Tests
For each workflow, walk through the steps in Claude Code with a real or simulated project and verify guidance is accurate and complete.
T9.A — DesignDataModel
Trigger: “design data model for a marketplace listing”
| # | Step | Pass condition |
|---|---|---|
| T9.A.1 | Step 1: Identify domains | Skill asks or describes how to map business nouns to zome pairs |
| T9.A.2 | Step 2: Define entry types | Produces a Rust struct definition with field types |
| T9.A.3 | Step 3: Define link types | Produces at least AgentTo*, PathTo*, *Updates link types |
| T9.A.4 | Step 4: Discovery strategy | Explains Path anchor vs. agent-linked discovery tradeoffs |
| T9.A.5 | Step 5: Validation rules | Produces at least one validation rule per entry type |
| T9.A.6 | Output completeness | Produces a summary table or structured output usable as implementation spec |
T9.B — Scaffold
Trigger: “scaffold new happ called community-app”
| # | Step | Pass condition |
|---|---|---|
| T9.B.1 | Nix install step | Provides curl Determinate Nix installer command |
| T9.B.2 | flake.nix creation | Provides template with holonix ref=main-0.6 |
| T9.B.3 | hc scaffold happ command | Correct command with app name parameter |
| T9.B.4 | First DNA scaffold | hc scaffold dna command shown |
| T9.B.5 | First zome pair scaffold | hc scaffold zome for integrity + coordinator |
| T9.B.6 | Compile verification | hc s sandbox generate workdir/ step present |
T9.C — ImplementZome
Trigger: “implement zome for Profile entry type”
| # | Step | Pass condition |
|---|---|---|
| T9.C.1 | Scaffold step | hc scaffold entry-type Profile and link-type commands shown |
| T9.C.2 | Integrity crate | Produces Profile struct with #[hdk_entry_helper], entry type enum |
| T9.C.3 | Validation function | Produces validate() function with Op pattern matching |
| T9.C.4 | Coordinator — create | Produces create_profile() using create_entry() |
| T9.C.5 | Coordinator — read | Produces get_profile() using get() with GetOptions::default() |
| T9.C.6 | Coordinator — update | Uses update_entry() and create_link() for update chain |
| T9.C.7 | Coordinator — delete | Uses delete_entry() and handles link cleanup |
| T9.C.8 | Test scaffold | Produces at minimum a two-agent Tryorama test structure |
T9.D — DesignAccessControl
Trigger: “design access control for my admin zome”
| # | Step | Pass condition |
|---|---|---|
| T9.D.1 | Caller mapping table | Produces table of function → caller type |
| T9.D.2 | Unrestricted grant | Shows init() with CapAccess::Unrestricted for remote signals |
| T9.D.3 | Progenitor check | Shows dna_info().provenance check for admin-only functions |
| T9.D.4 | Assigned grant | Shows CapAccess::Assigned pattern with agent key |
| T9.D.5 | recv_remote_signal | Shows correct extern signature and cap grant pairing |
T9.E — PackageAndDeploy
Trigger: “package my happ for desktop distribution”
| # | Step | Pass condition |
|---|---|---|
| T9.E.1 | Version compatibility check | Asks for or checks hdk/hdi versions before proceeding |
| T9.E.2 | Kangaroo-Electron setup | git clone command for Kangaroo repo shown |
| T9.E.3 | .happ bundle step | hc app pack or equivalent command shown |
| T9.E.4 | .webhapp bundle step | UI + .happ combined packaging step shown |
| T9.E.5 | Versioning guidance | Explains semantic version bump for DNA updates vs UI-only updates |
| T9.E.6 | CI/CD note | At minimum mentions GitHub Actions or manual release process |
T10 — Cross-Tool Compatibility
Claude Code (primary)
Covered by T7 and T9 above.
GitHub Copilot
| # | Test | Pass condition |
|---|---|---|
| T10.1 | Install to .claude/skills/holochain/ in project root | Directory exists with SKILL.md |
| T10.2 | Copilot agent mode recognizes skill | Skill name holochain appears in available skills list |
| T10.3 | Basic invocation | Copilot responds with Holochain context when asked about zomes |
Cursor
| # | Test | Pass condition |
|---|---|---|
| T10.4 | Install to .claude/skills/holochain/ in project root | Directory exists with SKILL.md |
| T10.5 | Cursor agent detects skill | Skill is listed or referenced in agent context |
| T10.6 | Basic invocation | Cursor responds with Holochain guidance when triggered |
Augment Code
| # | Test | Pass condition |
|---|---|---|
| T10.7 | Install to .claude/skills/holochain/ | Directory exists |
| T10.8 | Skill loaded by Augment | Skill context is included in agent workspace |
OpenAI Codex CLI
| # | Test | Pass condition |
|---|---|---|
| T10.9 | Install to .claude/skills/holochain/ | Directory exists |
| T10.10 | Codex reads SKILL.md frontmatter | Invocation triggers Holochain-domain responses |
Note: the recognition and invocation tests (T10.2–T10.3, T10.5–T10.6, T10.8, T10.10) require access to each tool; mark them N/A if the tool is not installed. The install-path checks (T10.1, T10.4, T10.7, T10.9) are always testable.

T11 — Repository Hygiene
| # | Test | Command | Pass condition |
|---|---|---|---|
| T11.1 | LICENSE file is Apache-2.0 | head -3 LICENSE | Contains “Apache License” and “Version 2.0” |
| T11.2 | No Plans/ content ships as skill | SKILL.md routing table has no reference to Plans/ | Zero Plans/ entries in routing table |
| T11.3 | No docs/ loaded by skill | SKILL.md routing table has no reference to docs/ | Zero docs/ entries in routing table |
| T11.4 | No broken markdown links | Scan for [text](file.md) links in all files | All linked files exist |
| T11.5 | No TODO / STUB markers | grep -rnE "TODO|STUB|PLACEHOLDER" . --include="*.md" | Zero matches in non-Plans/ files |
| T11.6 | README reflects current install path | grep "holochain-agent-skill" README.md | All cp/ln commands use holochain-agent-skill as source |
| T11.7 | CLAUDE.md license annotation | grep "Apache" CLAUDE.md | Matches Apache-2.0 |
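T11.4’s link scan can be automated with a small script along these lines. It is a naive sketch: every link target is resolved relative to the scanned root, so links that are relative to a nested file’s own directory would need extra handling.

```shell
# List .md link targets referenced from markdown files that do not exist
# under the given root (default: current directory). Anchored links
# (containing '#') are ignored by the [^)#]+ character class.
check_md_links() {
  root="${1:-.}"
  grep -rhoE '\]\([^)#]+\.md\)' --include='*.md' "$root" 2>/dev/null \
    | sed 's/^](//; s/)$//' \
    | sort -u \
    | while read -r target; do
        [ -e "$root/$target" ] || echo "broken: $target"
      done
}
```

Running `check_md_links .` from the repo root prints one `broken: …` line per missing target; empty output means T11.4 passes under this naive resolution.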
Release Gate
All items below must be ✅ before tagging a release.
Spec & Structure (non-negotiable)
- T1 — All 11 frontmatter checks pass
- T2 — All 16 files exist
- T3 — All 14 routing entries resolve correctly
- T11 — All 7 hygiene checks pass
Content
- T4 — All 6 domains have substantive content
- T5.13–T5.17 — Version pins consistent across all files
Code Accuracy (sample — validate at least 6 of 12)
- T5.1–T5.12 — At least 6 code examples verified against real codebase
Installation
- T6.1–T6.4 — Option A passes
- T6.5–T6.6 — Option B passes
- T6.7–T6.9 — Option C passes
Invocation
- T7.1–T7.5 — Claude Code invocation tests pass (T7.6 optional)
Workflows (all 5 required)
- T9.A — DesignDataModel workflow complete
- T9.B — Scaffold workflow complete
- T9.C — ImplementZome workflow complete
- T9.D — DesignAccessControl workflow complete
- T9.E — PackageAndDeploy workflow complete
Independence
- T8.1–T8.4 — PAI independence verified
Cross-tool (Claude Code required; others optional for v1)
- T10.1 — Install path verified for at least one non-Claude-Code tool
Once all release gate items are checked, tag v1.0.0 and publish.