Lash emits a structured trace of every session, turn, LLM call, tool call, and streaming event. Traces are written through a TraceSink trait — the bundled JsonlTraceSink writes one record per line to a file; you can fan out to stderr, a database, or OpenTelemetry. The lash-trace-viewer binary renders a JSONL trace into a self-contained HTML page for human inspection.
Attach a sink at LashCoreBuilder time. The sink, level, and an optional TraceContext propagate to every session built from that core.
use std::sync::Arc;

use lash::{
    LashCore,
    tracing::{JsonlTraceSink, TraceLevel, TraceSink},
};

let trace_sink: Arc<dyn TraceSink> = Arc::new(JsonlTraceSink::new("./.lash-data/trace.jsonl"));

let core = LashCore::builder()
    .provider(provider)
    .model(model, None)
    .max_context_tokens(200_000)
    .trace_sink(Some(trace_sink))
    .trace_level(TraceLevel::Extended)
    .build()?;
TraceLevel::Standard (the default) keeps turns, tool calls, LLM start/complete, and token usage. TraceLevel::Extended additionally records provider-side stream chunks, prompt-component hashes, and runtime stream deltas — useful for debugging provider behavior, expensive in volume.
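Because Extended is expensive in volume, a host will often want to flip it on without a rebuild. A minimal sketch of picking the level from an environment variable — the variable name `LASH_TRACE_LEVEL` is hypothetical, and `TraceLevel` here is a local stand-in mirroring the two variants named above:

```rust
// Sketch: choose the trace level at startup from an (assumed) environment
// variable, so the high-volume Extended level can be switched on without a
// rebuild. `TraceLevel` is a local stand-in for lash's enum.
#[derive(Debug, PartialEq)]
enum TraceLevel {
    Standard,
    Extended,
}

fn trace_level_from(raw: Option<&str>) -> TraceLevel {
    match raw {
        Some("extended") => TraceLevel::Extended,
        _ => TraceLevel::Standard, // the documented default
    }
}

fn main() {
    // In a real host: trace_level_from(std::env::var("LASH_TRACE_LEVEL").ok().as_deref())
    assert_eq!(trace_level_from(Some("extended")), TraceLevel::Extended);
    assert_eq!(trace_level_from(None), TraceLevel::Standard);
}
```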
TraceSink is intentionally narrow: one synchronous append. Sinks are called from runtime context but must not block for long; JsonlTraceSink serializes one line and appends with a short-held mutex.
pub trait TraceSink: Send + Sync {
    fn append(&self, record: &TraceRecord) -> Result<(), TraceSinkError>;
}
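Because append must return quickly, a sink that needs slow I/O can hand records to a background thread over a channel and return immediately. A self-contained sketch with stand-in types (`TraceRecord` and `TraceSinkError` here are local placeholders; the real ones live in lash_trace):

```rust
use std::sync::{Mutex, mpsc::{self, Sender}};
use std::thread;

// Stand-ins for illustration; the real types live in lash_trace.
#[derive(Clone)]
struct TraceRecord { line: String }
struct TraceSinkError;

trait TraceSink: Send + Sync {
    fn append(&self, record: &TraceRecord) -> Result<(), TraceSinkError>;
}

// append() only clones the record and pushes it onto a channel under a
// short-held mutex; the worker thread does the (potentially slow) writing.
struct ChannelTraceSink { tx: Mutex<Sender<TraceRecord>> }

impl ChannelTraceSink {
    fn spawn(mut write: impl FnMut(&TraceRecord) + Send + 'static) -> Self {
        let (tx, rx) = mpsc::channel();
        thread::spawn(move || {
            for record in rx {
                write(&record);
            }
        });
        ChannelTraceSink { tx: Mutex::new(tx) }
    }
}

impl TraceSink for ChannelTraceSink {
    fn append(&self, record: &TraceRecord) -> Result<(), TraceSinkError> {
        self.tx.lock().unwrap().send(record.clone()).map_err(|_| TraceSinkError)
    }
}

fn main() {
    let (done_tx, done_rx) = mpsc::channel();
    let sink = ChannelTraceSink::spawn(move |r: &TraceRecord| {
        let _ = done_tx.send(r.line.clone());
    });
    sink.append(&TraceRecord { line: "hello".into() }).ok();
    assert_eq!(done_rx.recv().unwrap(), "hello");
}
```

The trade-off is the usual one for unbounded channels: a slow destination lets the queue grow rather than backpressuring the runtime.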
Real-world sinks usually wrap multiple destinations. A common pattern is a fan-out sink that writes to both stderr (for live inspection) and a JSONL file (for replay):
struct FanoutTraceSink {
    sinks: Vec<Arc<dyn TraceSink>>,
}

impl TraceSink for FanoutTraceSink {
    fn append(&self, record: &TraceRecord) -> Result<(), TraceSinkError> {
        for sink in &self.sinks {
            // Treat errors per-sink; one failing destination shouldn't take the others down.
            let _ = sink.append(record);
        }
        Ok(())
    }
}
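The error-isolation property can be checked in miniature. This self-contained sketch uses local stand-ins for the lash_trace types and pairs a sink that always fails with one that collects records:

```rust
use std::sync::{Arc, Mutex};

// Stand-ins mirroring the shapes above; the real types are lash_trace's.
struct TraceRecord { id: u32 }
struct TraceSinkError;

trait TraceSink: Send + Sync {
    fn append(&self, record: &TraceRecord) -> Result<(), TraceSinkError>;
}

struct FailingSink; // always errors, like a full disk
impl TraceSink for FailingSink {
    fn append(&self, _: &TraceRecord) -> Result<(), TraceSinkError> {
        Err(TraceSinkError)
    }
}

struct CollectingSink(Mutex<Vec<u32>>);
impl TraceSink for CollectingSink {
    fn append(&self, r: &TraceRecord) -> Result<(), TraceSinkError> {
        self.0.lock().unwrap().push(r.id);
        Ok(())
    }
}

struct FanoutTraceSink { sinks: Vec<Arc<dyn TraceSink>> }
impl TraceSink for FanoutTraceSink {
    fn append(&self, record: &TraceRecord) -> Result<(), TraceSinkError> {
        for sink in &self.sinks {
            let _ = sink.append(record); // per-sink errors are swallowed
        }
        Ok(())
    }
}

fn main() {
    let good = Arc::new(CollectingSink(Mutex::new(Vec::new())));
    let sinks: Vec<Arc<dyn TraceSink>> = vec![Arc::new(FailingSink), good.clone()];
    let fanout = FanoutTraceSink { sinks };
    assert!(fanout.append(&TraceRecord { id: 7 }).is_ok());
    assert_eq!(*good.0.lock().unwrap(), vec![7]); // failing sink didn't block delivery
}
```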
Records are tagged enums on TraceEvent. The shape stays consistent across schema versions; new variants are added rather than fields renamed. The main categories:
Session lifecycle — SessionStarted, TurnStarted, TurnCompleted (with handoff successor id where applicable).
PromptBuilt — component hashes, source, length. Lets you detect prompt drift / cache invalidation across turns.
LlmCallStarted, LlmCallCompleted, LlmCallFailed. Carries provider, model, variant, timeouts, and request shape.
ProviderStreamEvent (raw provider chunks) and RuntimeStreamEvent (post-projection SDK deltas).
ToolCallStarted, ToolCallCompleted. Includes args, result success flag, duration.
TokenUsage — per-turn input/output/cached/reasoning token deltas, mirroring what's persisted to the session usage ledger.
ModeStep — agentic-loop iterations (RLM execute / observe / submit).
Custom { name, payload } — escape hatch for host-specific events that don't fit the built-in taxonomy.
Every record is one line of UTF-8 JSON. Schema version is pinned at TRACE_SCHEMA_VERSION = 2; consumers should reject (or warn on) records with a higher version they don't recognize.
{
  "schema_version": 2,
  "id": "01HGQX...",
  "timestamp": "2026-05-11T11:42:01.234Z",
  "context": {
    "session_id": "chat-123",
    "turn_id": "turn-7",
    "llm_call_id": null,
    "tool_call_id": null,
    "graph_node_id": null,
    "run_id": null,
    "experiment_id": null,
    "metadata": {}
  },
  "event": {
    "kind": "tool_call_completed",
    "tool_name": "read_file",
    "duration_ms": 8,
    "success": true,
    "...": "..."
  }
}
The full record type is lash_trace::TraceRecord; the event variants are lash_trace::TraceEvent. Both implement Deserialize — your consumer can parse the JSONL with serde_json::from_str::<TraceRecord>() directly.
lash-trace-viewer is a small standalone binary that takes a JSONL trace and renders a self-contained HTML page — every record laid out as a timeline with nested LLM calls, tool calls, and streaming chunks visible inline. Unparseable lines are preserved as raw text so old or forward-compatible records don't get dropped.
$ cargo run -p lash-trace-viewer -- ./.lash-data/trace.jsonl
# Writes ./.lash-data/trace.html next to the input
$ cargo run -p lash-trace-viewer -- trace.jsonl \
--out report.html \
--title "Session 2026-05-11"
# Custom output path and title.
The viewer is workspace-only — run it via cargo run -p lash-trace-viewer from the workspace root. There is no separate install or release.
An optional cargo feature converts Lash trace events into OpenTelemetry spans for export to Jaeger, Honeycomb, Datadog, or any OTLP-compatible backend. The feature is off by default to keep the runtime lean; the OTel sink lives in lash-trace and is re-exported by lash-core when the otel-trace feature is on.
# In your downstream Cargo.toml — pull in lash-core with the otel-trace feature
# alongside the lash facade.
lash = { git = "https://github.com/SamGalanakis/lash.git" }
lash-core = { git = "https://github.com/SamGalanakis/lash.git", features = ["otel-trace"] }
With the feature enabled, lash_core::OtelTraceSink wraps an opentelemetry tracer and converts every Lash event to a span with the event payload attached as attributes. Use it in place of (or alongside) JsonlTraceSink:
use std::sync::Arc;

use lash::{LashCore, tracing::{TraceLevel, TraceSink}};
use lash_core::OtelTraceSink;

// `tracer` is an opentelemetry tracer you've already configured for your backend.
let sink: Arc<dyn TraceSink> = Arc::new(OtelTraceSink::new(tracer));

let core = LashCore::builder()
    // ...provider / model configuration as in the earlier example...
    .trace_sink(Some(sink))
    .trace_level(TraceLevel::Extended)
    .build()?;