Building an AI Call Center: Conversation Engine
The Problem
A voice conversation isn't a single request-response. The caller says something, the agent responds, the caller follows up, and this continues for potentially dozens of turns. The Lambda handling this needs to:
- Maintain conversation history across invocations
- Send the full context to Bedrock on every turn
- Handle tool use (the model might need to look something up or take an action)
- Enforce guardrails (max turns, content filters)
- Publish events for analytics
- Do all of this in under 8 seconds (Connect's invocation timeout)
This is Part 2 of a three-part series. Part 1 covers the architecture, and Part 3 walks through the dashboard.
Conversation Flow
Each time Connect invokes the Lambda, it passes the caller's latest utterance and a
contactId that identifies the call. The Lambda:
- Loads the agent configuration (cached in module scope across invocations)
- Loads existing conversation history from DynamoDB
- Appends the new user message
- Calls Bedrock's Converse API with the full history
- Handles the response — which might be text, a tool use request, or a stop signal
- Persists the updated history with a TTL
- Returns the response text for Connect to speak via Polly
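The whole flow can be sketched with in-memory stand-ins. Everything here (handleTurn, the Map-backed store, the Turn callback) is illustrative, not the real DynamoDB/Bedrock-backed code:

```typescript
type Msg = { role: 'user' | 'assistant'; text: string };
type Turn = (history: Msg[]) => Promise<string>;

// Stand-in for the DynamoDB conversation table.
const store = new Map<string, Msg[]>();

async function handleTurn(
  contactId: string,
  utterance: string,
  converse: Turn, // stand-in for the Bedrock Converse call
): Promise<string> {
  const history = store.get(contactId) ?? [];      // load existing history
  history.push({ role: 'user', text: utterance }); // append the new user message
  const reply = await converse(history);           // full context on every turn
  history.push({ role: 'assistant', text: reply });
  store.set(contactId, history);                   // persist (TTL omitted here)
  return reply;                                    // Connect speaks this via Polly
}
```

The actual handler swaps the Map for a DynamoDB query/put pair and the callback for the Converse call shown below in this section.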
```typescript
// let, not const: reassigned in the tool-use loop below
let response = await bedrock.send(
  new ConverseCommand({
    modelId: agent.modelId,
    system: [{ text: agent.systemPrompt }],
    messages: bedrockMessages,
    toolConfig: buildToolConfig(agent.tools),
    ...buildGuardrailConfig(agent.guardrails),
  }),
);
```

Tool Use Loop
The interesting part is tool use. Bedrock might respond with stopReason: 'tool_use'
instead of 'end_turn'. This means the model wants to call a function before responding
to the caller.
The Lambda enters a loop:
- Extract the tool use block from the response
- Execute the tool (for now, a report_outcome tool that stores the call result)
- Append the tool result to the conversation
- Call Bedrock again with the updated history
- Repeat until Bedrock returns 'end_turn' or we hit the iteration limit
```typescript
let iterations = 0;
while (stopReason === 'tool_use' && iterations < MAX_TOOL_ITERATIONS) {
  const assistantMessage = response.output?.message;
  const toolBlock = assistantMessage?.content?.find(
    (block) => 'toolUse' in block,
  );
  if (!assistantMessage || !toolBlock?.toolUse) break; // defensive: no tool block
  // The assistant turn containing the toolUse block must be appended before
  // the toolResult, or Bedrock rejects the next request.
  messages.push(assistantMessage);
  const result = await executeTool(toolBlock.toolUse);
  messages.push({ role: 'user', content: [{ toolResult: result }] });
  response = await bedrock.send(new ConverseCommand({ ... }));
  stopReason = response.stopReason;
  iterations++;
}
```

The MAX_TOOL_ITERATIONS cap (currently 5) prevents runaway loops if the model keeps
requesting tools. In practice, most calls use 0–1 tool invocations.
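To make the cap's behavior concrete, here is a stripped-down simulation of the loop's control flow (no Bedrock, just a stubbed stopReason sequence; runToolLoop is illustrative):

```typescript
const MAX_TOOL_ITERATIONS = 5;

// Returns how many tool rounds ran before the model finished or the cap hit.
function runToolLoop(nextStopReason: () => 'tool_use' | 'end_turn'): number {
  let iterations = 0;
  let stopReason = nextStopReason(); // first Converse response
  while (stopReason === 'tool_use' && iterations < MAX_TOOL_ITERATIONS) {
    stopReason = nextStopReason();   // tool executed, Bedrock called again
    iterations++;
  }
  return iterations;
}
```

A model that asks for a tool on every turn runs exactly MAX_TOOL_ITERATIONS rounds and then the Lambda responds with whatever it has; a well-behaved call exits on the first 'end_turn'.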
Agent Configuration
Each agent is a JSON document with everything Bedrock needs:
- systemPrompt — the agent's personality and instructions (up to 10,000 chars)
- modelId — which Bedrock model to use (defaults to Claude Sonnet)
- tools — array of tool definitions with name, description, and input schema
- guardrails — max turns, optional Bedrock guardrail ID/version, disclaimers
- pollyVoiceId and pollyEngine — voice settings for TTS
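A hypothetical agent document, to make the shape concrete (the field names come from the list above; all values are illustrative):

```json
{
  "systemPrompt": "You are a friendly appointment-booking assistant for Acme Dental...",
  "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
  "tools": [
    {
      "name": "report_outcome",
      "description": "Store the final result of the call",
      "inputSchema": {
        "type": "object",
        "properties": { "outcome": { "type": "string" } }
      }
    }
  ],
  "guardrails": { "maxTurns": 50, "disclaimers": [] },
  "pollyVoiceId": "Joanna",
  "pollyEngine": "neural"
}
```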
The schema enforces that if you specify a bedrockGuardrailId, you must also provide
bedrockGuardrailVersion (and vice versa). This prevents a class of runtime errors where
Bedrock rejects the request because one field is missing:
```typescript
import { z } from 'zod';

const GuardrailsSchema = z
  .object({
    maxTurns: z.number().int().positive().default(50),
    bedrockGuardrailId: z.string().optional(),
    bedrockGuardrailVersion: z.string().optional(),
    disclaimers: z.array(z.string()).default([]),
  })
  .refine(
    (data) =>
      (data.bedrockGuardrailId === undefined) ===
      (data.bedrockGuardrailVersion === undefined),
    {
      message:
        'bedrockGuardrailId and bedrockGuardrailVersion must both be provided or both omitted',
    },
  );
```

Shared Lambda Utilities
Four Lambda handlers share a lot of boilerplate — DynamoDB client initialization,
environment variable loading, Bedrock configuration building. I extracted these into a
_shared/ module:
- clients.ts — singleton DynamoDB document client
- env.ts — requireEnv() that throws at cold start if a variable is missing
- bedrock.ts — buildToolConfig() and buildGuardrailConfig() pure functions
- events.ts — publishEvent() that wraps EventBridge with error swallowing
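As an example of these pure functions, buildGuardrailConfig can be sketched like this. The guardrailConfig field names follow the Converse API's request shape, but the internals are my reconstruction, not the actual module:

```typescript
interface Guardrails {
  bedrockGuardrailId?: string;
  bedrockGuardrailVersion?: string;
}

// Returns either an empty object or a guardrailConfig fragment, so the caller
// can spread it into ConverseCommand input without conditionals.
function buildGuardrailConfig(g: Guardrails): {
  guardrailConfig?: { guardrailIdentifier: string; guardrailVersion: string };
} {
  if (!g.bedrockGuardrailId || !g.bedrockGuardrailVersion) return {};
  return {
    guardrailConfig: {
      guardrailIdentifier: g.bedrockGuardrailId,
      guardrailVersion: g.bedrockGuardrailVersion,
    },
  };
}
```

Returning a spreadable fragment is what makes the `...buildGuardrailConfig(agent.guardrails)` call site in the conversation engine read cleanly: when no guardrail is configured, nothing is added to the request.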
The requireEnv() pattern is worth highlighting. Instead of process.env.TABLE_NAME ?? ''
(which silently produces an empty string), the Lambda fails immediately at cold start:
```typescript
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Called at module scope, this surfaces misconfiguration in seconds rather than letting a Lambda silently write to a non-existent table.
Conversation History TTL
Every conversation record gets a TTL of 30 days. After that, DynamoDB automatically deletes it. This keeps the table size bounded without requiring a cleanup job.
The TTL is computed when the conversation is first created:
```typescript
const ttl = Math.floor(Date.now() / 1000) + CONVERSATION_TTL_DAYS * 86400;
```

For compliance or debugging, the Analytics Lambda copies relevant data (outcome, sentiment, duration) into a separate record keyed by OUTCOMETIME#&lt;timestamp&gt;, which persists independently.
Event Publishing
After each turn, the Lambda publishes a structured event to EventBridge:
```typescript
await publishEvent(TURN_EVENT, {
  tenantId,
  agentId,
  contactId,
  turnNumber: history.turnCount,
});
```

The publishEvent helper never throws — it catches errors and logs them. A failed event
should never break the caller's experience. EventBridge handles retry and dead-lettering
at the rule level.
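The never-throws behavior can be sketched as a wrapper around an injected sender. The real helper wraps the EventBridge client; publishEventSafe and the Sender type here are illustrative:

```typescript
type Sender = (detailType: string, detail: unknown) => Promise<void>;

// Fire-and-forget publish: failures are logged, never propagated to the caller.
async function publishEventSafe(
  send: Sender,
  detailType: string,
  detail: unknown,
): Promise<boolean> {
  try {
    await send(detailType, detail);
    return true;
  } catch (err) {
    console.error('event publish failed', { detailType, err });
    return false;
  }
}
```

Returning a boolean instead of throwing lets callers opt into checking delivery without ever being forced to handle an analytics failure mid-conversation.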
What's Next
In Part 3, I'll cover the TanStack Start dashboard — managing agents, reviewing calls, and viewing analytics.