
Stop Hardcoding Your System Prompts — A Prompt Assembly Pattern Extracted from 672 Lines of Code

A modular, conditionally-rendered prompt assembly framework distilled from OpenClaw's system prompt builder

How many lines is your system prompt? If it’s over 100, you’re probably maintaining it with string concatenation. Then one day you add a feature, the prompt breaks, and you spend 2 hours debugging — only to find a missing newline character.

The Problem

In the previous two articles, I extracted a resilient LLM layer and a three-tier context management defense from OpenClaw. This time I’m looking at another core file — system-prompt.ts, 672 lines.

This file does exactly one thing: assemble the system prompt.

672 lines, just to generate a single system prompt.

Sounds excessive? But when your AI application has 10+ tools, 3 operating modes, dynamic context files, conditional skill descriptions, runtime environment info… your system prompt stops being a string constant. It becomes a complex output that needs orchestration.

Here’s what most people do:

const systemPrompt = `You are a helpful assistant.

${tools.length > 0 ? `## Tools\n${tools.map(t => `- ${t.name}`).join('\n')}` : ''}

${isAdvanced ? 'You have access to advanced features.' : ''}

${contextFiles.map(f => `## ${f.path}\n${f.content}`).join('\n\n')}

${runtime ? `Runtime: os=${runtime.os} model=${runtime.model}` : ''}

Be concise and helpful.`

Looks manageable? Wait until you hit the 15th conditional branch.

Three fatal problems with this approach:

  1. Unmaintainable — Nested ternaries, template literals, newline management. Change one thing, break three others.
  2. Untestable — The entire prompt is one giant expression. You can’t test “is the tools section correct?” in isolation.
  3. Not reusable — Every project builds this from scratch. Same wheel, every time.

How OpenClaw Does It: The Section Builder Pattern

OpenClaw’s 672-line system-prompt.ts isn’t one massive template string. It’s 20+ independent sections (8 standalone builder functions + a dozen inline conditional blocks), each responsible for one part of the prompt:

buildAgentSystemPrompt(params)

    ├─ identity section         → "You are Claude Code..."
    ├─ tool list section        → "## Tools\n- read\n- exec\n..."
    ├─ tool documentation       → Detailed usage for each tool
    ├─ context files section    → CLAUDE.md, .cursorrules, etc.
    ├─ skills section           → Available skill descriptions
    ├─ memory section           → Persisted memory content
    ├─ git status section       → Current git state
    ├─ runtime info section     → OS, model, shell info
    ├─ ...(10+ more sections)

    └─ merge → filter(Boolean) → join("\n")

Each section builder is a standalone function that takes context parameters and returns string[] (an array of lines) or an empty array.

// A section builder from OpenClaw (simplified)
function buildToolListSection(tools: Tool[]): string[] {
  if (tools.length === 0) return []  // No tools? Skip this section

  const seen = new Set<string>()
  const lines = ['## Tools']

  for (const tool of tools) {
    const key = tool.name.toLowerCase()
    if (seen.has(key)) continue  // Deduplicate
    seen.add(key)
    lines.push(`- ${tool.name}: ${tool.summary}`)
  }

  return lines
}

Three key design decisions to note:

  1. Empty array = skip — No need for outer if/else logic
  2. Return line arrays — The framework handles joining; no manual newline management
  3. Standalone functions — Each one is independently testable

This is the pattern I want to extract.

The Extraction: Three Concepts

After stripping away the 20+ concrete sections and OpenClaw-specific logic from those 672 lines, the universal framework boils down to three concepts:

1. Section — The Building Block

Each section can provide content in three ways:

// Static content — never changes
{ name: 'identity', content: 'You are a helpful assistant.' }

// Dynamic builder — generates from context
{ name: 'tools', builder: (ctx) => ctx.tools.map(t => `- ${t.name}`) }

// Conditional rendering — included only when condition is met
{
  name: 'advanced',
  content: 'You have access to advanced features.',
  when: (ctx) => ctx.isAdvanced,
}

Static content is for fixed parts (identity, base rules). Dynamic builders are for parts that depend on runtime data (tool lists, context files). Conditional rendering is for “sometimes needed, sometimes not” parts (advanced features, debug info).
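All three shapes fit one type. Here's a hedged sketch of what that `Section` type could look like (field names taken from the examples above; the library's actual type definitions may differ):

```typescript
// Sketch of the Section shape implied by the three examples above
type Section<Ctx> = {
  name: string                       // stable identifier, useful for debugging
  content?: string                   // static content
  builder?: (ctx: Ctx) => string[]   // dynamic: returns an array of lines
  when?: (ctx: Ctx) => boolean       // conditional: include only if true
}

// All three example sections typecheck against this one shape
const sections: Section<{ tools: { name: string }[]; isAdvanced: boolean }>[] = [
  { name: 'identity', content: 'You are a helpful assistant.' },
  { name: 'tools', builder: (ctx) => ctx.tools.map((t) => `- ${t.name}`) },
  {
    name: 'advanced',
    content: 'You have access to advanced features.',
    when: (ctx) => ctx.isAdvanced,
  },
]
```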

2. Assembler — The Orchestrator

Processes all sections in order and merges them into the final prompt:

sections.forEach(section => {
  if (section.when && !section.when(ctx)) → skip
  if (section.builder) → execute builder
  else → use static content
  collect results
})
→ join(separator)
→ final prompt string
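The loop above fits in a few lines of real code. This is a minimal sketch under my own names (`assemble`, `Section`); the published `createPromptAssembler` API may differ in details:

```typescript
type Section<Ctx> = {
  name: string
  content?: string
  builder?: (ctx: Ctx) => string[]
  when?: (ctx: Ctx) => boolean
}

// Minimal sketch of the assembler loop
function assemble<Ctx>(
  sections: Section<Ctx>[],
  ctx: Ctx,
  separator = '\n',
): string {
  const parts: string[] = []
  for (const section of sections) {
    if (section.when && !section.when(ctx)) continue   // condition failed → skip
    const lines = section.builder
      ? section.builder(ctx)                           // dynamic builder
      : section.content !== undefined
        ? [section.content]                            // static content
        : []
    if (lines.length > 0) parts.push(lines.join('\n')) // empty array → skip
  }
  return parts.join(separator)
}

// Usage: the conditional section drops out when its condition is false
const result = assemble<{ debug: boolean }>(
  [
    { name: 'identity', content: 'You are a helpful assistant.' },
    { name: 'debug', content: 'Debug mode on.', when: (ctx) => ctx.debug },
  ],
  { debug: false },
)
// result === 'You are a helpful assistant.'
```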

3. Section Helpers — Reusable Formatters

Several section builders in OpenClaw are universally applicable across any LLM application:

  • Tool list formatting — Nearly every agent needs to tell the LLM “here are your available tools”
  • Context file injection — Project config files like CLAUDE.md, .cursorrules
  • Runtime info — OS, model name, Node version, and other environment details

These can be extracted directly as general-purpose helpers.
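To make the idea concrete, here's a sketch of what a context-file helper could look like — an assumed implementation for illustration, not necessarily the library's exact code:

```typescript
type ContextFile = { path: string; content: string }

// Assumed sketch: each file becomes its own markdown section,
// with blank lines so adjacent sections don't run together
function formatContextFiles(files: ContextFile[]): string[] {
  const lines: string[] = []
  for (const file of files) {
    lines.push(`## ${file.path}`, '', file.content, '')
  }
  return lines
}
```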

Comparison: Before and After

Let’s compare with a real scenario. Say you’re building an AI coding assistant whose system prompt needs: identity, tool list, project files, runtime info, and an optional advanced features section.

Before: Template String Hell

function buildSystemPrompt(
  tools: Tool[],
  files: File[],
  runtime: Runtime,
  isMinimal: boolean,
): string {
  let prompt = 'You are a coding assistant.\n'

  if (tools.length > 0) {
    prompt += '\n## Tools\n'
    const seen = new Set<string>()
    for (const tool of tools) {
      const key = tool.name.toLowerCase()
      if (!seen.has(key)) {
        seen.add(key)
        prompt += tool.summary
          ? `- ${tool.name}: ${tool.summary}\n`
          : `- ${tool.name}\n`
      }
    }
  }

  if (files.length > 0) {
    for (const file of files) {
      prompt += `\n## ${file.path}\n\n${file.content}\n`
    }
  }

  if (!isMinimal) {
    prompt += '\nYou have access to advanced features.\n'
  }

  const runtimeParts: string[] = []
  if (runtime.os) runtimeParts.push(`os=${runtime.os}`)
  if (runtime.model) runtimeParts.push(`model=${runtime.model}`)
  if (runtimeParts.length > 0) {
    prompt += `\nRuntime: ${runtimeParts.join(' ')}\n`
  }

  prompt += '\nBe concise. Follow best practices.'

  return prompt
}

40 lines, and that’s only 5 sections. Imagine what 20+ sections looks like.

After: Section Builder Pattern

import {
  createPromptAssembler,
  formatToolList,
  formatContextFiles,
  formatRuntimeInfo,
} from '@yuyuqueen/prompt-assembler'

type MyContext = {
  tools: ToolEntry[]
  files: ContextFile[]
  runtime: RuntimeInfo
  isMinimal: boolean
}

const prompt = createPromptAssembler<MyContext>({
  sections: [
    { name: 'identity', content: 'You are a coding assistant.' },
    {
      name: 'tools',
      builder: (ctx) => formatToolList(ctx.tools),
      when: (ctx) => ctx.tools.length > 0,
    },
    {
      name: 'context',
      builder: (ctx) => formatContextFiles(ctx.files),
      when: (ctx) => ctx.files.length > 0,
    },
    {
      name: 'advanced',
      content: 'You have access to advanced features.',
      when: (ctx) => !ctx.isMinimal,
    },
    {
      name: 'runtime',
      builder: (ctx) => formatRuntimeInfo(ctx.runtime),
    },
    { name: 'rules', content: 'Be concise. Follow best practices.' },
  ],
})

// One-liner invocation
const systemPrompt = prompt.build({
  tools: [...],
  files: [...],
  runtime: { os: 'Darwin', model: 'claude-opus-4' },
  isMinimal: false,
})

Same functionality, but every section boundary is crystal clear. Add a section? Add one line. Remove one? Delete one line. Change a condition? Update the when. No more hunting through 40 lines of string concatenation.

Appendix: Universal Section Checklist

Of OpenClaw’s 20+ sections, 11 are universal patterns any LLM agent can adopt (the rest are product-specific logic like message routing and heartbeat detection). Use this checklist when building your own agent’s system prompt:

| Section | Purpose | When to Use |
| --- | --- | --- |
| Identity | Role definition ("You are a…") | All agents |
| Tooling | Tool list + summaries, auto-deduplicated | Agents with tool calling |
| Tool Call Style | When to explain actions vs. execute silently | Agents with tool calling |
| Safety | Safety guardrails (no privilege escalation, no bypassing review) | All agents |
| Memory Recall | Search memory before answering | Agents with persistent memory |
| Workspace | Working directory declaration | File/coding agents |
| User Identity | User identity and preferences | Personalized agents |
| Date & Time | Timezone and current time | Time-sensitive agents |
| Context Files | Project config injection (CLAUDE.md, etc.) | Coding/project agents |
| Runtime | OS, model, Node version environment snapshot | All agents |
| Reasoning Format | Thinking tag format control | When using reasoning models |

Not every agent needs all 11 — pick what fits your use case. But if you’re building a coding agent or AI assistant, you’ll likely need 7-8 of them.

Start Using It Today

npm install @yuyuqueen/prompt-assembler

GitHub: github.com/yuyuqueen/llm-toolkit

Core API

import { createPromptAssembler } from '@yuyuqueen/prompt-assembler'

const prompt = createPromptAssembler({
  sections: [
    // Static
    { name: 'identity', content: 'You are a helpful assistant.' },
    // Dynamic
    { name: 'tools', builder: (ctx) => [`Tools: ${ctx.toolCount}`] },
    // Conditional
    { name: 'debug', content: 'Debug mode on.', when: (ctx) => ctx.debug },
  ],
  separator: '\n',  // Separator between sections
})

const result = prompt.build({ toolCount: 5, debug: true })

Built-in Section Helpers

Three general-purpose formatting functions extracted from OpenClaw:

import {
  formatToolList,
  formatContextFiles,
  formatRuntimeInfo,
} from '@yuyuqueen/prompt-assembler'

// Tool list (auto-deduplication, case-insensitive)
formatToolList([
  { name: 'read', summary: 'Read file contents' },
  { name: 'Read', summary: 'Duplicate' },  // Deduplicated
  { name: 'exec', summary: 'Run commands' },
])
// → ["## Tools", "- read: Read file contents", "- exec: Run commands", ""]

// Context files
formatContextFiles([
  { path: 'CLAUDE.md', content: '# Project\nRules here.' },
])
// → ["## CLAUDE.md", "", "# Project\nRules here.", ""]

// Runtime info (automatically filters undefined values)
formatRuntimeInfo({
  os: 'Darwin',
  model: 'claude-opus-4',
  node: undefined,  // Filtered out
})
// → ["Runtime: os=Darwin model=claude-opus-4"]

Debugging and Token Estimation

// Inspect output per section (for debugging)
const sections = prompt.buildSections(ctx)
for (const [name, content] of sections) {
  console.log(`[${name}] ${content.length} chars`)
}
// → [identity] 28 chars
// → [tools] 156 chars
// → [context] 2340 chars

// Estimate token count
const tokens = prompt.estimateTokens(ctx)
console.log(`System prompt ≈ ${tokens} tokens`)

buildSections returns a Map<string, string>, giving you precise visibility into how much content each section contributes. When your prompt is too long, you can quickly pinpoint which section is bloated — instead of Ctrl+F-ing through hundreds of lines of concatenated strings.
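The library's estimator isn't shown here, but for this kind of debugging a rough characters-per-token heuristic already goes a long way. This is an assumed implementation, not the package's actual one:

```typescript
// Assumed heuristic (~4 characters per token for English prose);
// the real estimateTokens may be more precise
function estimateTokensFromSections(sections: Map<string, string>): number {
  let chars = 0
  for (const content of sections.values()) chars += content.length
  return Math.ceil(chars / 4)
}

// Usage with the kind of Map buildSections returns
const estimate = estimateTokensFromSections(
  new Map([
    ['identity', 'You are a helpful assistant.'],  // 28 chars
    ['rules', 'Be concise.'],                      // 11 chars
  ]),
)
// 39 chars → ceil(39 / 4) = 10 tokens
```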

Combining with resilient-llm and llm-context-kit

All three libraries together form a complete LLM application infrastructure:

import { createPromptAssembler } from '@yuyuqueen/prompt-assembler'
import { createContextBudget } from '@yuyuqueen/llm-context-kit'
import { createResilientLLM } from '@yuyuqueen/resilient-llm'

// 1. Assemble the system prompt
const prompt = createPromptAssembler({ sections: [...] })
const systemPrompt = prompt.build(ctx)

// 2. Check token budget (system prompt counts too)
const budget = createContextBudget({ contextWindowTokens: 200_000 })
const status = budget.check(messages)  // messages includes system message

// 3. Resilient invocation
const resilient = createResilientLLM({ providers: [...] })
await resilient.call(async (rCtx) => {
  return {
    response: await anthropic.messages.create({
      model: rCtx.model,
      system: systemPrompt,
      messages,
    }),
  }
})

System Prompt Assembly (prompt-assembler)
    │ Static sections → Builders → Conditional rendering → Merge
        ↓
Context Management (llm-context-kit)
    │ Budget check → Tool truncation → Conversation compression
        ↓
Resilient Invocation (resilient-llm)
    │ Key rotation → Provider fallback → Exponential backoff
        ↓
LLM API

Before vs. After

| Scenario | Before | After |
| --- | --- | --- |
| Add a section | Find the right spot in a 40-line template string | Add one section definition |
| Remove a section | Carefully delete code and newlines | Delete one line or add `when: () => false` |
| Test a single section | Not possible in isolation | `buildSections(ctx).get('tools')` |
| View token distribution | Manual calculation | `estimateTokens(ctx)` + `buildSections` |
| Conditional rendering | Nested ternary expressions | `when: (ctx) => ctx.condition` |
| Tool list deduplication | Hand-write Set dedup logic | `formatToolList(tools)` built-in |
| Team collaboration | Merge conflict hell (same file, same function) | Each person edits their own section |

Design Principles

Same as the previous two libraries:

  • Zero dependencies — Pure TypeScript, no runtime dependencies
  • Generic context — createPromptAssembler<YourContext> provides full type safety
  • Provider-agnostic — Outputs plain strings, no LLM SDK lock-in
  • Composable — Section helpers work standalone or within the assembler

Conclusion

A system prompt is not a string — it’s a product that needs engineering-grade management.

When your prompt exceeds 50 lines, you need sections. When it has conditional branches, you need conditional rendering. When it has dynamic data, you need the builder pattern.

OpenClaw uses 672 lines of code to manage its system prompt, because a good AI product’s prompt really is that complex. You don’t need to write 672 lines — but you do need a framework to manage that complexity.

@yuyuqueen/prompt-assembler on npm · @yuyuqueen/llm-context-kit on npm · @yuyuqueen/resilient-llm on npm · GitHub Source


This is the third and final article in the “Extracting Libraries from Open Source Projects” series.

Follow for updates → Twitter @YuYuQueen_ · GitHub
