Agent Language Bridge

Language intelligence for AI agents. Translate, summarize, and reply — automatically.

Built for Moltbot, WhatsApp bots, Signal agents, and any Node.js workflow.

Why It Matters

Most agents speak one language. Your users don't.

Understand non-English posts and respond confidently.

Instant language capabilities across Moltbot + chat platforms.

No prompts. No glue code. Just call a function.

Features

Translate

Auto-detect source language and translate to any target. Cultural context preserved.

Summarize

Condense long posts into bullet points or short summaries. Key insights only.

Reply

Generate on-brand replies with configurable tone and length.

SDK-First

npm install and go. Works in any Node.js environment.

Secure

OpenAI keys stay server-side. Never exposed to your clients.

Zero Setup

No deployment required. Install SDK, call functions, done.

Quickstart

Up and running in 30 seconds

1. Install
npm install @agent-language-bridge/sdk
2. Import
import { translate, summarize, reply } from "@agent-language-bridge/sdk";
3. Use anywhere
// Translate any text
const english = await translate("こんにちは、今日は寒いね。");

// Summarize long content
const summary = await summarize(longPost);

// Generate a reply
const response = await reply(message, {
  tone: "professional",
  replyLength: "short"
});

No API keys. No deployment. Works instantly with Moltbot, WhatsApp, Signal, Telegram, or any Node.js agent.

Default API: https://api.langbridge.dev · Override with { apiUrl: "..." } if needed.

Examples

Real-world usage patterns

Translate Japanese → English

Understand incoming messages in any language.

const text = "新しいエージェントをリリースしました。フィードバックをお願いします。";
const english = await translate(text, { targetLang: "en" });
// => "We've released a new agent. Please give us your feedback."

Summarize a Moltbook Post

Distill long threads into key points.

const post = `... 1500 words about agent architecture ...`;
const summary = await summarize(post);
// => "• Agents should be modular\n• Use event-driven patterns\n• Test with real data"

Generate a Professional Reply

Respond to messages with the right tone.

const incoming = "Can you explain how the rate limiting works?";
const response = await reply(incoming, {
  tone: "professional",
  replyLength: "medium"
});
// => "Rate limiting is implemented per-IP with a 20 req/min threshold..."

How It Works

1. Your agent calls the SDK

Import and call translate, summarize, or reply from anywhere in your code.

2. API handles the intelligence

We manage model selection, prompts, and output formatting server-side.

3. You get structured output

Predictable JSON responses. No parsing surprises.
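Under the hood, the three steps amount to one HTTP round-trip against the endpoint documented in the API Reference below. A minimal sketch, assuming the SDK wraps that single endpoint; the `callLangBridge` helper name is ours, not part of the SDK, and `fetchImpl` is injectable so you can stub it in tests:

```javascript
// Sketch of the round-trip the SDK performs for you (illustrative, not the
// SDK's actual internals). Uses the global fetch available in Node 18+.
async function callLangBridge(text, mode, options = {}, fetchImpl = fetch) {
  const res = await fetchImpl("https://api.langbridge.dev/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, mode, ...options }),
  });
  if (!res.ok) throw new Error(`LangBridge request failed: ${res.status}`);
  // Structured JSON, e.g. { detectedLanguage: "ja", translation: "..." }
  return res.json();
}
```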

API Reference

Direct HTTP access for any language

POSThttps://api.langbridge.dev/api/generate

Request Body

{
  "text": "Your input text here",
  "mode": "translate" | "summarize" | "reply",
  "targetLang": "en",           // optional, default: "en"
  "tone": "professional",       // optional: "casual" | "professional" | "builder"
  "replyLength": "short"        // optional: "short" | "medium"
}
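If you are calling the endpoint directly, a small helper can assemble a valid body and apply the documented defaults. The `buildGenerateRequest` function below is illustrative, not part of the SDK:

```javascript
// Builds a request body for POST /api/generate, applying the documented
// default targetLang and rejecting unknown modes. Illustrative helper only.
const MODES = ["translate", "summarize", "reply"];

function buildGenerateRequest(text, mode, { targetLang = "en", tone, replyLength } = {}) {
  if (!MODES.includes(mode)) throw new Error(`unknown mode: ${mode}`);
  const body = { text, mode, targetLang };
  if (tone) body.tone = tone;                      // "casual" | "professional" | "builder"
  if (replyLength) body.replyLength = replyLength; // "short" | "medium"
  return body;
}
```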

Response by Mode

translate

{
  "detectedLanguage": "ja",
  "translation": "..."
}

summarize

{
  "detectedLanguage": "en",
  "summary": "• ..."
}

reply

{
  "detectedLanguage": "en",
  "reply": "..."
}
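Each mode returns its result under a different key. If your agent handles all three modes through one code path, a small normalizer (ours, not part of the SDK) keeps the downstream handling uniform:

```javascript
// Maps each mode to the key its result arrives under, per the shapes above.
// Illustrative helper, not part of the SDK.
const RESULT_KEY = { translate: "translation", summarize: "summary", reply: "reply" };

function extractResult(mode, response) {
  const key = RESULT_KEY[mode];
  if (!key || !(key in response)) {
    throw new Error(`no ${key ?? "result"} in response for mode "${mode}"`);
  }
  return { detectedLanguage: response.detectedLanguage, result: response[key] };
}
```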

cURL Example

curl -s https://api.langbridge.dev/api/generate \
  -H "Content-Type: application/json" \
  -d '{"text":"こんにちは、今日は寒いね。","mode":"translate","targetLang":"en"}'

Limits & Privacy

Rate Limiting

20 requests per minute per IP. Designed for agent workloads, not bulk processing.
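To stay under the 20 req/min ceiling without hitting errors, an agent can gate its own calls client-side. A minimal sliding-window sketch (our suggestion, not something the SDK provides); the clock is injectable for testing:

```javascript
// Courtesy client-side limiter: allows at most `limit` calls per `windowMs`.
// Returns a function that reports whether a call may proceed right now.
function makeRateGate(limit = 20, windowMs = 60_000, now = Date.now) {
  const stamps = []; // timestamps of recent granted calls
  return function tryAcquire() {
    const t = now();
    while (stamps.length && t - stamps[0] >= windowMs) stamps.shift(); // drop expired
    if (stamps.length >= limit) return false; // over budget: wait and retry
    stamps.push(t);
    return true;
  };
}
```

An agent would check `tryAcquire()` before each SDK call and back off briefly when it returns `false`.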

Keys Stay Server-Side

OpenAI API calls happen on our servers. Your clients never see credentials.

Minimal Logging

Request metadata is logged for debugging; message content itself is processed in transit and not retained long-term.