The shortest path to a working Lash turn. Add the crate, pick a provider, open a session, run one turn. Once this works you can layer on tools, modes, plugins, persistence, and tracing as you need them.
The lash crate is the user-facing facade. Pull in the provider crate you use directly. Add lash-providers-builtin only when your host wants registry-driven provider lookup.
Lash is not on crates.io yet. Pull the crates straight from the monorepo by git URL:
[dependencies]
lash = { git = "https://github.com/SamGalanakis/lash.git" }
lash-provider-openai = { git = "https://github.com/SamGalanakis/lash.git" }
anyhow = "1"
tokio = { version = "1", features = ["full"] }
Cargo resolves each name against packages inside that repo. Pin to a commit with rev = "..." or a tag with tag = "..." for reproducible builds.
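For example, the same dependency block pinned to one commit might look like this (the rev value below is a placeholder, not a real commit hash):

```toml
[dependencies]
# Pin every lash crate to the same commit of the monorepo so the
# facade and provider crates stay in lockstep. Replace the placeholder
# with a real commit SHA (or use tag = "..." instead).
lash = { git = "https://github.com/SamGalanakis/lash.git", rev = "<commit-sha>" }
lash-provider-openai = { git = "https://github.com/SamGalanakis/lash.git", rev = "<commit-sha>" }
anyhow = "1"
tokio = { version = "1", features = ["full"] }
```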
For ordinary projects the bundled facade and provider crates cover the common ground. Drop down to lash-core only when you need runtime-host plumbing such as custom PluginHost instances or shared TokioSessionTaskExecutor values.
The minimal example below names LashCore, LashSession, TurnInput, TurnBuilder, and ProviderHandle. SessionSpec and PluginStack are the next two types you'll meet in the Lash API guide. Skim all of them before reading the code — the rest follows from how they compose.
LashCore: the shared configuration root, carrying the SessionSpec and PluginStack. LashCore::standard() and LashCore::rlm() start with the runtime defaults; LashCore::builder() is explicit and starts empty.
LashSession: one logical conversation, opened from a LashCore by a stable session id.
TurnInput: the input for one turn. TurnInput::text(...) for the common case.
TurnBuilder: returned by session.turn(input). Call .run() for a collected result, .stream(&sink) for live events.
ProviderHandle: the model provider. lash_providers_builtin::register_all() wires the OpenAI / Anthropic / Codex / Google factories into the global registry once at process start.
The smallest meaningful program: build a core, open a session, run one turn, print the assistant's prose.
use lash::{provider::ProviderHandle, LashCore, TurnEvent, TurnInput};
use lash_provider_openai::{OPENROUTER_BASE_URL, OpenAiCompatibleProvider};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Build a provider handle. Substitute your own creds + base URL.
    let api_key = std::env::var("OPENROUTER_API_KEY")?;
    let provider = ProviderHandle::new(
        OpenAiCompatibleProvider::new(api_key, OPENROUTER_BASE_URL).into_components(),
    );

    // One LashCore per app, cloned freely.
    let core = LashCore::standard()
        .provider(provider)
        .model("anthropic/claude-sonnet-4.6", None)
        .max_context_tokens(200_000)
        .build()?;

    // One session per chat / task.
    let session = core.session("hello-1").open().await?;

    // Run one turn; collect prose from the activity stream.
    let result = session
        .turn(TurnInput::text("Say hi in one short sentence."))
        .run()
        .await?;

    let prose: String = result
        .activities
        .iter()
        .filter_map(|a| match &a.event {
            TurnEvent::AssistantProseDelta { text } => Some(text.as_str()),
            _ => None,
        })
        .collect();

    println!("{prose}");
    Ok(())
}
That's the whole flow: LashCore for shared configuration, LashSession for one logical conversation, TurnBuilder::run() for a single collected turn. Nothing more is required.
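The prose-collection step is ordinary iterator code and can be exercised in isolation. A self-contained sketch with a stand-in event enum (hypothetical, mirroring the variant names used in the example above; the real TurnEvent lives in the lash crate):

```rust
// Stand-in for lash's TurnEvent: only the variants this sketch needs.
enum TurnEvent {
    AssistantProseDelta { text: String },
    Other,
}

// Collect the assistant's prose exactly as the quickstart does:
// keep only AssistantProseDelta events and concatenate their text.
fn collect_prose(events: &[TurnEvent]) -> String {
    events
        .iter()
        .filter_map(|e| match e {
            TurnEvent::AssistantProseDelta { text } => Some(text.as_str()),
            _ => None,
        })
        .collect()
}

fn main() {
    let events = vec![
        TurnEvent::AssistantProseDelta { text: "Hi ".into() },
        TurnEvent::Other,
        TurnEvent::AssistantProseDelta { text: "there.".into() },
    ];
    assert_eq!(collect_prose(&events), "Hi there.");
    println!("{}", collect_prose(&events));
}
```

Non-prose events (tool calls, mode transitions, and so on) pass through the filter untouched, which is why the same activity list can serve both collection and richer inspection.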
The example above has no persistence (the session lives in memory and goes away when the process exits), no host tools, only the default runtime plugin stack, no tracing, and no live streaming. Each is a small additional step.
Live streaming: Use TurnBuilder::stream(&sink) instead of run() to emit TurnActivity items as the model produces them. The Lash API guide covers the full event taxonomy.
Host tools: Implement ToolProvider and hand it to .tools(Arc::new(MyTools)) on the builder. See the Plugins guide.
Persistence: Pass a SessionStoreFactory to .store_factory(...) so sessions survive process restarts. See the Persistence guide.
Tracing: Attach a TraceSink via .trace_sink(Some(Arc::new(sink))) for JSONL traces of every turn. See the Tracing guide.
RLM mode: Install ModePreset::rlm() for Lashlang-driven agentic turns with structured submit/handoff semantics. See RLM And Submit.
MCP servers: Wire up MCP via McpPluginFactory::new(servers).await? and pass it to .plugin(...). See the MCP Servers section in the Lash API guide.
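The run-versus-stream distinction can be modeled with a plain channel: a collected run returns nothing until the whole turn finishes, while a stream hands each item to the caller as it is produced. A self-contained sketch with stand-in types (hypothetical, not the lash API):

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a TurnActivity item; the real type carries a TurnEvent.
struct Activity(String);

// "run" style: the producer finishes first, then the caller gets everything.
fn run_turn(produce: impl FnOnce(&mut Vec<Activity>)) -> Vec<Activity> {
    let mut out = Vec::new();
    produce(&mut out);
    out
}

// "stream" style: each item is delivered over the channel as it is produced.
fn stream_turn(tx: mpsc::Sender<Activity>) {
    thread::spawn(move || {
        for word in ["Hello", "from", "the", "stream"] {
            tx.send(Activity(word.to_string())).unwrap();
        }
    });
}

fn main() {
    // Collected: nothing is visible until the whole turn is done.
    let collected = run_turn(|out| {
        for w in ["Hello", "collected"] {
            out.push(Activity(w.to_string()));
        }
    });
    assert_eq!(collected.len(), 2);

    // Streamed: the receiver sees items one by one; iteration ends
    // when the producer drops its sender.
    let (tx, rx) = mpsc::channel();
    stream_turn(tx);
    let streamed: Vec<Activity> = rx.into_iter().collect();
    assert_eq!(streamed.len(), 4);

    println!("collected {} items, streamed {} items", collected.len(), streamed.len());
}
```

The same shape applies to a real sink: a streaming consumer can render or log each item immediately instead of waiting for the collected result.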