Tools, infrastructure, and applied research we ship

We maintain a focused portfolio of developer tools and AI infrastructure used across our own builds and client systems. The standard is simple: measurable performance, reproducible behavior, security by default, and documentation that makes systems legible to the next engineer.

WispDB
Vector DB · Retrieval
A WebGPU-accelerated vector database and retrieval layer (browser + server) for semantic search, RAG pipelines, and knowledge systems. Designed for clean APIs, stable performance, and practical ops.
WebGPU · Vector search · RAG · Performance
Focus: retrieval quality, latency predictability, and deployable primitives.
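As a rough illustration of that retrieval surface (types and method names below are assumptions for the sketch, not WispDB's published API), a vector index for RAG can reduce to two calls: upsert documents, then search by query.

```ts
// Hypothetical sketch -- illustrative shapes only, not WispDB's actual API.
interface Doc {
  id: string;
  text: string;
}

interface SearchHit {
  id: string;
  score: number; // similarity score; higher is more relevant
  text: string;
}

interface VectorIndex {
  upsert(docs: Doc[]): Promise<void>;                      // embed + store documents
  search(query: string, k: number): Promise<SearchHit[]>;  // k-nearest-neighbour lookup
}

// A RAG pipeline needs little more than these two calls:
async function retrieveContext(index: VectorIndex, question: string): Promise<string> {
  const hits = await index.search(question, 5);
  return hits.map(h => h.text).join("\n---\n");
}
```
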
Invar
Developer Tool
A workflow and invariants toolkit to keep systems consistent as teams scale. Invar emphasizes explicit contracts, regression control, and automation that reduces integration drift.
Invariants · Automation · Regression · DX
Focus: repeatable builds, predictable interfaces, fewer “mystery failures.”
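To make "explicit contracts" concrete (illustrative code only, not Invar's actual API), the core idea is a named invariant checked at a module boundary, so a violation fails loudly at the seam instead of surfacing later as a mystery failure.

```ts
// Illustrative only: a named, explicit invariant checked at a boundary.
type Invariant<T> = { name: string; holds: (value: T) => boolean };

function assertInvariants<T>(value: T, invariants: Invariant<T>[]): T {
  for (const inv of invariants) {
    if (!inv.holds(value)) {
      throw new Error(`Invariant violated: ${inv.name}`);
    }
  }
  return value;
}

// Example: a config contract two services must agree on.
interface QueueConfig { maxRetries: number; timeoutMs: number }

const queueInvariants: Invariant<QueueConfig>[] = [
  { name: "retries are bounded", holds: c => c.maxRetries >= 0 && c.maxRetries <= 10 },
  { name: "timeout is positive", holds: c => c.timeoutMs > 0 },
];

const config = assertInvariants({ maxRetries: 3, timeoutMs: 5_000 }, queueInvariants);
```
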
KoMaKo Inference
AI Infrastructure
Serverless inference primitives for modern AI stacks: embeddings, rerankers, and speech pipelines with production-friendly controls. Built to integrate cleanly into apps, APIs, and internal tooling.
Embeddings · Reranking · Evaluation · Observability
Focus: reliability, monitoring, and safe iteration in production.
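A sketch of how these primitives compose (the client interface and method names are assumptions, not KoMaKo's actual SDK): score candidate passages against a query with a reranker, then keep the top k.

```ts
// Hypothetical client shape -- method names and signatures are assumptions.
interface InferenceClient {
  embed(texts: string[]): Promise<number[][]>;                    // one vector per input text
  rerank(query: string, candidates: string[]): Promise<number[]>; // one relevance score per candidate
}

// Keep the k most relevant passages for a query, using the reranker's scores.
async function topPassages(
  client: InferenceClient,
  query: string,
  candidates: string[],
  k: number,
): Promise<{ text: string; score: number }[]> {
  const scores = await client.rerank(query, candidates);
  return candidates
    .map((text, i) => ({ text, score: scores[i] }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```
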
ZeroTrace · Ripcord
Security · Compliance
Security tooling built for auditability: verifiable data-erasure workflows (with audit logs), endpoint wipe/retention policy tooling, and compliance-oriented exports designed for real organizations.
Audit logs · Erasure · Policy · Compliance
Focus: provable behavior, traceability, and secure defaults.
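"Verifiable" and "auditable" here point at append-only, tamper-evident records. A minimal sketch of the pattern (conceptual only, not the shipped implementation): each erasure record carries a hash over its own fields plus the previous record's hash, so any altered or deleted entry breaks the chain.

```ts
// Conceptual pattern: an append-only, hash-chained erasure audit log.
import { createHash } from "node:crypto";

interface AuditRecord {
  action: "erase";
  target: string;    // e.g. a device or dataset identifier
  timestamp: string; // ISO 8601
  prevHash: string;  // hash of the previous record ("GENESIS" for the first)
  hash: string;      // SHA-256 over this record's fields + prevHash
}

function appendErasureRecord(log: AuditRecord[], target: string): AuditRecord[] {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "GENESIS";
  const timestamp = new Date().toISOString();
  const hash = createHash("sha256")
    .update(`erase|${target}|${timestamp}|${prevHash}`)
    .digest("hex");
  return [...log, { action: "erase", target, timestamp, prevHash, hash }];
}
```
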
Selected R&D and systems work
We also run projects that sharpen our systems and AI edge, including KomaKit Pro (on-device OCR), Nodus (secure sync), Aevrix (runtime/browser systems), and research prototypes such as Sylphos. Not everything becomes a product, but everything is built with the same engineering discipline.

Reliability engineering, not “support”

We operate software and AI systems with measurable baselines, controlled change, and a prevention-first mindset. The goal is fewer incidents, clearer behavior, and safer iteration.

Operations scope
Platform & product ops
  • Stability work: bug fixes, regression control, reliability improvements
  • Performance: latency tracking, cost profiling, targeted optimization
  • Security hygiene: dependency updates, secrets practices, safe defaults
  • Resilience: backups, recovery checks, auditable change history
  • Observability: logs, metrics, tracing, alerting tuned to what matters
AI ops (where it gets real)
  • Drift monitoring + periodic quality checks against evaluation sets
  • RAG health: retrieval accuracy, indexing freshness, source coverage
  • Guardrails & policy tuning based on real usage + failure modes
  • Model/version rollouts with rollback paths and tracked changes
  • Eval regressions: prevent "it got worse" from quietly shipping to users (sketched below)
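
A minimal sketch of such a gate (illustrative, not tied to any particular eval framework): compare candidate scores to a recorded baseline and fail the rollout if any tracked metric drops beyond tolerance.

```ts
// Illustrative release gate for eval regressions.
interface EvalScores { [metric: string]: number } // e.g. { accuracy: 0.91 }

// Throw if any tracked metric dropped more than `tolerance` below baseline.
function gateRelease(baseline: EvalScores, candidate: EvalScores, tolerance = 0.01): void {
  for (const [metric, base] of Object.entries(baseline)) {
    const next = candidate[metric];
    if (next === undefined || next < base - tolerance) {
      throw new Error(`Eval regression on "${metric}": ${base} -> ${next ?? "missing"}`);
    }
  }
}

try {
  gateRelease(
    { accuracy: 0.91, groundedness: 0.88 }, // recorded baseline
    { accuracy: 0.92, groundedness: 0.86 }, // candidate model/version
  );
} catch (e) {
  console.error((e as Error).message); // Eval regression on "groundedness": 0.88 -> 0.86
}
```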

This is maintenance as a discipline: observe → measure → improve → prevent recurrence.

Operational baselines
  • SLOs: targets for uptime, latency, and error rates (example below)
  • Response: same-day critical triage + tracked incident timeline
  • Change: release notes, approvals, rollback-ready deployments
  • Visibility: weekly report covering shipped work, risk, and next priorities
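
For example, SLO targets can live in version control next to the alerting that enforces them (the numbers below are placeholders, not contractual defaults).

```ts
// Illustrative SLO declaration; targets and windows are placeholder values.
const slos = {
  availability: { target: 0.999, window: "30d" }, // uptime over a rolling month
  latencyP95Ms: { target: 400,   window: "30d" }, // 95th-percentile response time
  errorRate:    { target: 0.001, window: "30d" }, // failed requests / total requests
} as const;
```
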
Engagement models
  • Retainer: stable capacity for ongoing improvements
  • SLA: production-critical systems with response guarantees
  • On-call: incident readiness + postmortems + prevention

Optional: a short stabilization phase to establish baselines before steady-state ops.