Language intelligence for AI agents. Translate, summarize, and reply — automatically.
Built for Moltbot, WhatsApp bots, Signal agents, and any Node.js workflow.
Most agents speak one language. Your users don't.
Understand non-English posts and respond confidently.
Instant language capabilities across Moltbot + chat platforms.
No prompts. No glue code. Just call a function.
Auto-detect source language and translate to any target. Cultural context preserved.
Condense long posts into bullet points or short summaries. Key insights only.
Generate on-brand replies with configurable tone and length.
npm install and go. Works in any Node.js environment.
OpenAI keys stay server-side. Never exposed to your clients.
No deployment required. Install SDK, call functions, done.
Up and running in 30 seconds
npm install @agent-language-bridge/sdk

import { translate, summarize, reply } from "@agent-language-bridge/sdk";

// Translate any text
const english = await translate("こんにちは、今日は寒いね。");
// Summarize long content
const summary = await summarize(longPost);
// Generate a reply
const response = await reply(message, {
  tone: "professional",
  replyLength: "short"
});

No API keys. No deployment. Works instantly with Moltbot, WhatsApp, Signal, Telegram, or any Node.js agent.
Default API: https://api.langbridge.dev · Override with { apiUrl: "..." } if needed.
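For example, a self-hosted or proxied deployment could be targeted like this. A minimal sketch: the URL is a placeholder, and passing apiUrl alongside the other options is an assumption based on the note above.

// Point the SDK at a proxied or self-hosted endpoint.
// Assumption: apiUrl sits in the same options object as targetLang,
// and summarize/reply accept it the same way.
const english = await translate("Bonjour à tous", {
  targetLang: "en",
  apiUrl: "https://langbridge.internal.example.com"
});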
Real-world usage patterns
Understand incoming messages in any language.
const text = "新しいエージェントをリリースしました。フィードバックをお願いします。";
const english = await translate(text, { targetLang: "en" });
// => "We've released a new agent. Please give us your feedback."Distill long threads into key points.
const post = `... 1500 words about agent architecture ...`;
const summary = await summarize(post);
// => "• Agents should be modular\n• Use event-driven patterns\n• Test with real data"Respond to messages with the right tone.
const incoming = "Can you explain how the rate limiting works?";
const response = await reply(incoming, {
  tone: "professional",
  replyLength: "medium"
});
// => "Rate limiting is implemented per-IP with a 20 req/min threshold..."

Import and call translate, summarize, or reply from anywhere in your code.
We manage model selection, prompts, and output formatting server-side.
Predictable JSON responses. No parsing surprises.
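Put together, a bot message handler might look like the sketch below. Only translate, summarize, and reply come from the SDK; handleIncoming, the message shape, and the length threshold are hypothetical glue you would adapt to your own framework.

import { translate, summarize, reply } from "@agent-language-bridge/sdk";

// Hypothetical handler: wire this into whatever on-message hook your
// bot framework (Moltbot, a WhatsApp bridge, etc.) exposes.
async function handleIncoming(message) {
  // Normalize everything to English before deciding what to do.
  const english = await translate(message.text, { targetLang: "en" });

  // Long posts get condensed; short ones get a direct reply.
  if (english.length > 1000) {
    return summarize(english);
  }

  return reply(english, { tone: "professional", replyLength: "short" });
}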
Direct HTTP access for any language
POST https://api.langbridge.dev/api/generate

{
  "text": "Your input text here",
  "mode": "translate" | "summarize" | "reply",
  "targetLang": "en",      // optional, default: "en"
  "tone": "professional",  // optional: "casual" | "professional" | "builder"
  "replyLength": "short"   // optional: "short" | "medium"
}

translate
{
  "detectedLanguage": "ja",
  "translation": "..."
}

summarize
{
  "detectedLanguage": "en",
  "summary": "• ..."
}

reply
{
  "detectedLanguage": "en",
  "reply": "..."
}

curl -s https://api.langbridge.dev/api/generate \
-H "Content-Type: application/json" \
-d '{"text":"こんにちは、今日は寒いね。","mode":"translate","targetLang":"en","tone":"builder","replyLength":"short"}'20 requests per minute per IP. Designed for agent workloads, not bulk processing.
OpenAI API calls happen on our servers. Your clients never see credentials.
We log request metadata for debugging. Content is processed and not retained long-term.