OpenClaw Skill Variables and Dynamic Content in Slack

Learn how to use OpenClaw skill variables and dynamic content to build intelligent, context-aware Slack automations that pull live data from GitHub, Linear, Jira, and more — without writing a new skill for every variation.

Why Static Skills Fall Short in Real Workflows

When teams first start building custom skills in OpenClaw, the instinct is to create one skill per task: one for fetching open GitHub PRs, another for listing Jira tickets by status, another for pulling a specific Linear project. It works — until you have forty skills that all do roughly the same thing with slightly different hardcoded values.

Skill variables solve this. They let you define a single, flexible skill that accepts inputs at runtime, fills in dynamic content from connected tools, and returns responses tailored to the exact context of each request. When you combine this with SlackClaw's persistent memory and its connections to 800+ integrated tools, you get automations that feel genuinely intelligent rather than scripted.

This article walks through how skill variables work in OpenClaw, how to use them effectively inside Slack, and how to structure dynamic skills that hold up in messy, real-world team workflows.

Understanding Skill Variables in OpenClaw

A skill in OpenClaw is a structured instruction set that tells the agent what to do, how to do it, and what to return. Variables are named placeholders inside that instruction set — they get resolved at runtime using one of three sources:

  • User input — values passed directly in the Slack message that triggered the skill
  • Memory context — values stored from previous interactions, such as a user's preferred project, their timezone, or their role
  • Tool output — live data fetched from a connected integration mid-skill, used to populate later steps

Variables follow a double-brace syntax throughout the skill definition. A simple example looks like this:

skill: summarize_project_status
description: Summarize the current status of a Linear project for a given team member

steps:
  - fetch:
      tool: linear
      action: get_project
      params:
        project_name: "{{project_name}}"
  - fetch:
      tool: linear
      action: list_issues
      params:
        project_id: "{{steps.0.result.id}}"
        assignee: "{{user_email}}"
  - respond:
      template: |
        Here's the latest on {{project_name}} for {{user_name}}:
        - Open issues assigned to you: {{steps.1.result.count}}
        - Overdue: {{steps.1.result.overdue_count}}
        - Last updated: {{steps.0.result.updated_at}}

When someone in Slack types @SlackClaw summarize project status for Alpha Launch, the agent resolves {{project_name}} from the message, pulls {{user_email}} and {{user_name}} from persistent memory, fetches the live data, and returns a clean summary — all in one interaction.
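To make the three sources concrete, here's how the variables in the skill above might resolve for that request (the specific values are illustrative, not real accounts or IDs):

project_name: "Alpha Launch"      # user input: parsed from the Slack message
user_email: "sarah@company.com"   # memory context: stored from earlier conversations
user_name: "Sarah"                # memory context
steps.0.result.id: "abc123"       # tool output: returned by the first fetch step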

Pulling Dynamic Content from Connected Tools

The real power unlocks when you chain tool calls together and use the output of one step as the input of the next. SlackClaw's dedicated server per team means each agent instance maintains its own execution context throughout a skill run, so intermediate values don't get dropped or confused across concurrent requests from different team members.

Cross-Tool Variable Chaining

Imagine a daily standup skill that pulls context from three different sources and stitches them into a single Slack message. Here's a condensed version of what that skill definition looks like:

skill: morning_briefing
description: Deliver a personalized morning briefing for a team member

steps:
  - fetch:
      tool: github
      action: list_pull_requests
      params:
        author: "{{github_username}}"
        state: open
  - fetch:
      tool: jira
      action: list_issues
      params:
        assignee: "{{jira_username}}"
        status: ["In Progress", "Review"]
  - fetch:
      tool: notion
      action: get_page
      params:
        page_id: "{{notion_daily_notes_id}}"
  - respond:
      template: |
        Good morning, {{user_name}} 👋

        **GitHub:** You have {{steps.0.result.count}} open PRs.
        **Jira:** {{steps.1.result.count}} issues need your attention.
        **Today's notes:** {{steps.2.result.excerpt}}

The values for {{github_username}}, {{jira_username}}, and {{notion_daily_notes_id}} are stored in SlackClaw's persistent memory after a one-time setup conversation. Once they're set, the skill runs cleanly every morning without prompting the user for configuration details they've already provided.

Conditional Dynamic Content

Variables also work inside conditional blocks, which lets you build skills that adapt their response based on what they find at runtime:

  - respond:
      template: |
        {{#if steps.1.result.overdue_count > 0}}
        ⚠️ You have {{steps.1.result.overdue_count}} overdue Jira issues.
        {{else}}
        ✅ No overdue issues — you're on track.
        {{/if}}

This kind of conditional rendering means you're not always showing the same boilerplate response. The message adjusts to what's actually true, which is the difference between a useful automation and a noisy one.

Using Persistent Memory as a Variable Source

SlackClaw's persistent memory is what makes skill variables genuinely low-friction for end users. Rather than requiring someone to pass --project=Alpha --assignee=sarah@company.com every single time, the agent learns and remembers preferences over the course of normal Slack conversations.

Setting Memory Values Conversationally

A team member can establish their context in plain language:

"Hey SlackClaw, my main project is Alpha Launch, I use @sarah-k on GitHub, and my Linear workspace is acme-corp."

The agent extracts and stores those values. From that point on, any skill that references {{primary_project}}, {{github_username}}, or {{linear_workspace}} will resolve them automatically without asking again.
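A skill definition can then lean on those remembered values directly, with nothing passed in the triggering message. A sketch reusing the variable names above:

steps:
  - fetch:
      tool: linear
      action: list_issues
      params:
        workspace: "{{linear_workspace}}"
        project: "{{primary_project}}"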

Team-Level vs. User-Level Variables

OpenClaw distinguishes between memory scoped to an individual user and memory scoped to the whole workspace. This matters in practice:

  • User-level: personal GitHub handle, Jira account ID, preferred timezone, default Linear team
  • Team-level: shared Notion workspace ID, main GitHub org, Slack channel IDs for routing alerts, default sprint label in Linear

In your skill definitions, you reference these with a scope prefix: {{user.github_username}} versus {{team.github_org}}. The agent resolves the right value based on who triggered the skill and what's available in memory at that moment.
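Mixing the two scopes inside a single step might look like this (a sketch; the parameter names are illustrative):

  - fetch:
      tool: github
      action: list_pull_requests
      params:
        org: "{{team.github_org}}"           # shared by everyone in the workspace
        author: "{{user.github_username}}"   # resolved per requester
        state: open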

Practical Patterns That Work Well in Slack

The "Fill In the Blanks" Pattern

Design skills with sensible defaults stored in memory, but allow in-message overrides for one-off cases. If {{project_name}} is in memory but someone types a different project name in their message, the message input takes precedence. This gives you the convenience of automation with the flexibility of ad-hoc requests.
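One way to express that precedence in a skill definition, assuming the variable syntax supports a fallback expression (this is a sketch of the pattern, not confirmed syntax):

params:
  # Use the project named in the message if present; otherwise fall back to memory
  project_name: "{{project_name | default: user.primary_project}}"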

The "Pipe and Transform" Pattern

Fetch raw data from one tool, transform it using a format or filter step, and pass the cleaned result into a second tool or a response template. For example: pull open issues from GitHub, filter to only those with a bug label, and post a formatted digest to a specific Slack channel. The filter step uses a variable for the label so the same skill works across different repos and label conventions.
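Sketched as a skill definition, that digest pipeline might look like this (the filter step's syntax and the channel variable are illustrative assumptions):

skill: bug_digest
description: Post a digest of open bug-labeled GitHub issues to a Slack channel

steps:
  - fetch:
      tool: github
      action: list_issues
      params:
        repo: "{{repo_name}}"
        state: open
  - filter:
      input: "{{steps.0.result.items}}"
      where: labels contains "{{bug_label}}"
  - respond:
      channel: "{{team.bug_channel_id}}"
      template: |
        🐛 {{steps.1.result.count}} open issues in {{repo_name}} labeled "{{bug_label}}"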

The "Acknowledge and Expand" Pattern

When a user asks a follow-up question mid-conversation, SlackClaw's persistent context means the agent already knows what was discussed. You can build skills that use {{conversation.last_topic}} or {{conversation.last_fetched_issue_id}} to continue naturally rather than starting from scratch. This is especially useful for incident response workflows where the agent is tracking a live situation across multiple messages.
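A follow-up skill can reference that conversation context directly. For example (a sketch using the conversation variables mentioned above):

  - fetch:
      tool: jira
      action: get_issue
      params:
        issue_id: "{{conversation.last_fetched_issue_id}}"
  - respond:
      template: |
        Latest on {{steps.0.result.key}}: now "{{steps.0.result.status}}",
        assigned to {{steps.0.result.assignee}}.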

Avoiding Common Mistakes

A few things trip up teams when they first start working with dynamic skills:

  • Not handling missing variables gracefully. If a variable isn't in memory and wasn't passed in the message, the skill will stall or return an error. Always define a fallback prompt or a default value for optional variables.
  • Overloading a single skill with too many branches. Conditional blocks are powerful but they make skills hard to debug. If you find yourself with more than three or four conditionals, split the skill into focused sub-skills and let the agent route between them.
  • Ignoring OAuth scope requirements. SlackClaw's one-click OAuth setup covers the connection, but some tool actions require elevated permissions. If a skill fails silently, check that the connected account has the right scopes for the actions you're calling — particularly for write operations in Jira, Gmail, or GitHub.
  • Conflating team-level and user-level memory. Storing a personal GitHub username at team scope means everyone in the workspace uses the same value. Scope matters — get it wrong and skills return data for the wrong person.
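The first point, handling missing variables, can use the same conditional syntax shown earlier: check for the variable before using it, and prompt when it's absent. A sketch:

  - respond:
      template: |
        {{#if project_name}}
        Pulling the latest for {{project_name}}...
        {{else}}
        Which project should I summarize? You can also set a default for next time.
        {{/if}}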

Putting It Together: Credit Efficiency and Skill Design

Because SlackClaw uses credit-based pricing rather than per-seat fees, the cost model rewards well-designed skills over chatty, inefficient ones. A skill that makes four targeted API calls and returns a clean response costs far less than one that fetches broad datasets and discards most of what it retrieves. Using precise variable values — specific project IDs rather than broad queries, filtered issue lists rather than full dumps — keeps your credit usage lean and your responses faster.

The best approach is to treat each skill like a small, purpose-built function: clear inputs, minimal side effects, predictable output. Dynamic content and skill variables let you write that function once and reuse it across every variation your team actually needs.

When your skills are built this way, SlackClaw stops feeling like a chatbot you query and starts feeling like a teammate who already knows the context — because it does.