Claude Commands: Build Predictable AI Coding Workflows (Complete Guide)

Learn how to use Claude Code commands to build predictable AI coding workflows. Complete guide to custom commands, MCP integration, and structured AI development.

Claude Commands, or: How I stopped worrying and learned to love structured agent interactions

After weeks of working with AI coding assistants, I noticed a pattern: the inconsistency wasn't coming from the AI's capabilities - it was how I was using it.

Every time I asked Claude to plan a feature, I'd get something different. Sometimes detailed, sometimes vague, always formatted differently. The CLAUDE.md file I'd carefully crafted? Ignored half the time. Context that should've been obvious? Lost in translation. I was stuck in an endless loop of re-prompting, reformatting, and rewriting the same instructions.

Then I discovered Claude commands, and everything changed.

If you're using Claude Code or other coding agents for development, you've probably experienced this inconsistency. This guide shows you how to transform Claude into a predictable AI coding assistant using custom commands and structured workflows with the Model Context Protocol (MCP).

Here's what nobody tells you about working with AI coding agents: consistency is your biggest challenge. Not capability - consistency.

You can get brilliant code from Claude one day, and confusing output the next, all from the same prompt. Why? Because natural language is inherently ambiguous, and AI agents (no matter how sophisticated) interpret your requests differently each time based on subtle contextual variations.

I'd write detailed prompts like: "Plan this feature following the structure in CLAUDE.md, include database schema changes with migration scripts, API endpoints with authentication middleware, update the frontend components using our Shadcn/UI patterns, follow our CamelCase naming convention for database fields..." Sometimes it worked perfectly. Other times, critical pieces would be missing, or I'd end up in long back-and-forths.

The breakthrough came when I stopped treating AI as a conversational partner and started treating it as a programmable workflow system using Claude commands.

What Are Claude Commands?

According to the official documentation, Claude commands (also called slash commands) are "a way to control Claude's behavior during an interactive session." They come in several types: built-in commands (like /help and /review), custom commands you define, plugin commands, and MCP (Model Context Protocol) commands from connected servers.

But here's what that means in practice: Claude commands are custom functions for your AI agent. Unlike traditional AI coding assistants that require constant re-prompting, you define structured commands that:

  • Always follow the same format
  • Always respect your project context (like CLAUDE.md)
  • Always produce predictable outputs
  • Can call other commands to create workflows
  • Accept arguments and execute bash commands

A custom command is simply a markdown file in .claude/commands/ that defines instructions for Claude. When you invoke it with /command-name [arguments], Claude follows those instructions exactly - no interpretation needed, no context forgotten.
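
To make this concrete, here's a minimal sketch of what such a file could look like. The command name and its contents are hypothetical, not taken from my repository:

# .claude/commands/summarize-diff.md

Summarize the changes on the current branch:

1. Run git diff --stat against the main branch.
2. Group the changes by area of the codebase.
3. Present a plain-English summary of no more than ten bullet points.

Save the file and invoke it with /summarize-diff; Claude follows those three steps the same way every time.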

Here's the key insight: commands turn unpredictable conversations into deterministic workflows.

Note: All the commands I describe in this article are available as open-source templates in the claude-plugin repository.

Command-Driven Development Workflow: A Complete Guide

I now run my entire design and development cycle through a few key commands. Let me walk you through them.

/plan-feature - Structured Feature Planning

This command transforms vague ideas into detailed, implementation-ready plans. Here's a real example from my workflow:

/plan-feature please plan a new feature KT-4054-manage-employer-checkbook-mapping
 
We want to add an option for only users with role recon-admin to be able to do
a few admin things:
 
- import button - to import/update new records should only be available to
  recon-admin users - so please hide button import from Employer Checkbook
  mapping page if user is not recon-admin and protect API endpoint backend
  also for importing to only be available for recon-admin users
 
- delete record - show action button in each row as last column to be able to
  delete a record, ensure deleting is a soft delete, so please create a new
  sql-schema/reporting 02-employer-checkbook-mapping-soft-delete.sql to add
  soft delete column boolean/tinyint(1) in the table called IsDeleted and
  default should be 0, because all records are not deleted yet. I will run
  that SQL myself later, you don't need any code for it. Once soft delete is
  implemented, implement API for soft deleting and protect it so only
  recon-admin user role can use it. For frontend add simple text button in
  last column that opens a popup to confirm deletion of record, provide
  details of the record and Delete and Cancel button. Delete should be red,
  and cancel should be simple text button, on click call soft delete API and
  close popup and update the table by removing deleted record if API was
  successful. If API had an error you can show a small toast message (and
  implement toast message as global component any page can use) that accepts
  a text and shows toast with text and x, and auto hides after 10 seconds.
 
- since we added soft delete now let's update the table list to show only
  IsDeleted 0 - rows that are active

The command analyzes my requirements, researches the codebase to understand existing patterns (role-based access control, database conventions, UI components), asks clarifying questions about implementation details, and creates a comprehensive plan covering database migrations, API endpoints with proper authorization, and frontend components. You can see the complete command logic on GitHub.

The output includes implementation phases broken into logical tasks:

Phase 1: Database & API Foundation
- Create soft delete migration script
- Update GET endpoint to filter deleted records
- Implement DELETE endpoint with role protection
 
Phase 2: Global Components
- Create reusable Toast notification component
- Create ConfirmDialog component
 
Phase 3: Role-Based UI
- Add role check utility to Import button
- Add Delete action column to table
- Wire up confirmation dialog and API calls
 
Phase 4: Testing & Edge Cases
- Test role-based access (both frontend and API)
- Verify soft delete filtering
- Test error handling and toast notifications
- Testing strategy and acceptance criteria

Every plan comes out the same way. Every time. No surprises, no missing pieces, no reformatting needed.

The output is saved to planning/kt-4054-manage-employer-checkbook-mapping.md with a consistent structure that the next command in my workflow can validate and consume.
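
The exact template lives in the command definition, but the skeleton of a generated plan looks roughly like this (section names are illustrative, not the literal headings from my template):

# KT-4054: Manage Employer Checkbook Mapping

## Overview
## Requirements & Clarifications
## Database Changes
## API Endpoints & Authorization
## Frontend Components
## Implementation Phases
## Testing Strategy & Acceptance Criteria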

/implement-feature - Validated Implementation

Once I have a plan, implementation becomes methodical:

/implement-feature planning/kt-4054-manage-employer-checkbook-mapping.md

This command is smarter than just "read the plan and code it". Check it out on GitHub.

It follows this workflow:

  1. Validates the planning document against the structure defined in /plan-feature

    • If the plan is incomplete or malformed, it tells me exactly what's missing
    • No wasted time starting implementation with insufficient specs
  2. Creates an implementation plan with reasonably-sized, logical tasks

    • Adds a progress tracker table at the top of the planning document:

      | Phase   | Task                   | Status      | Notes           |
      | ------- | ---------------------- | ----------- | --------------- |
      | Phase 1 | Create database schema | In Progress | Using CamelCase |
  3. Asks for confirmation before executing anything

    • I review the implementation plan
    • I can adjust or clarify before work begins
  4. Implements task by task with feedback checkpoints

    • After each task completes, execution stops
    • I review the changes, test functionality
    • Progress tracker updates with status and notes
    • Human stays in the loop throughout

This is the critical difference between commands and raw prompts: structured checkpoints instead of hoping the AI gets everything right in one shot.

Building Complex AI Workflows with Epic Commands

I took this further with epic planning - commands that call other commands. This AI workflow automation approach eliminates hours of manual coordination:

/plan-epic "Reconciliation dashboard improvements"

The workflow:

  1. /plan-epic breaks the epic into high-level requirements
  2. Once I confirm the breakdown, it calls /plan-feature for each requirement
  3. Each feature gets its own detailed plan
  4. /implement-epic can then orchestrate feature implementation in the right order

I'm still experimenting with this. Implementation order matters, and getting the dependency graph right is tricky, but the foundation works.
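
To give a feel for how command chaining reads in practice, here's a stripped-down sketch of an epic-planning command body. The wording is illustrative rather than the actual file from the repository:

# Plan Epic Command

Take the epic description ($ARGUMENTS) and:

1. Break the epic into high-level feature requirements.
2. Present the breakdown and wait for confirmation.
3. For each confirmed requirement, run /plan-feature with a short feature
   name and the requirement text as the description.
4. List the generated planning documents and a suggested implementation
   order, noting dependencies between features.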

How to Create Your Own Commands

Here's the practical part - creating commands is simpler than you might think. According to the official documentation, commands are markdown files stored in .claude/commands/ (project-level) or ~/.claude/commands/ (personal-level). The key insight? Use Claude to generate your commands, then refine them.

Command Creation Process

Commands use a simple structure with optional frontmatter for configuration:

---
description: Brief description shown in command list
argument-hint: "[file-path] [optional-flags]"
allowed-tools: Bash(git add:*), Bash(git commit:*)
---
 
Your command instructions here.
 
Arguments: $ARGUMENTS (all args) or $1, $2 (individual args)

Key frontmatter parameters:

  • description: Shows up when you type / to see available commands
  • argument-hint: Tells users what arguments the command expects
  • allowed-tools: Restricts which tools Claude can use (security/safety)
  • model: Specify a particular Claude model if needed

However, you don't actually need to include this frontmatter information. Commands work well without it: Claude will ask for any tool access it needs and handle arguments correctly as long as you describe them in the command instructions.
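
As a quick illustration of how arguments map when a command is invoked (the command name here is made up):

/review-endpoint src/api/users.ts strict

$1         → src/api/users.ts
$2         → strict
$ARGUMENTS → src/api/users.ts strict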

Here's how I create commands:

  1. Start with a prompt to Claude: "Create a command called /implement-feature that takes a planning document path as $1 and..."
  2. Claude generates the initial command structure with frontmatter, instructions, validation steps, and workflow logic
  3. Test it a few times on real workflows
  4. Refine based on results - add validation, improve clarity, fix edge cases, tune the allowed-tools list

Real Example: Building the Implement Feature Command

When I needed /implement-feature, I gave Claude this prompt:

Create a command that requires a planning document path as a parameter. 
If the document doesn't match the structure from `/plan-feature`, explain why it's not ready. 
If it passes validation, create an implementation plan with a progress tracker table at the top of the document. 
Before executing, ask for confirmation. 
Execute each step one at a time with reasonably-sized tasks, stopping after each for review.

Claude generated a command with:

  • Parameter validation
  • Document structure checking against /plan-feature requirements
  • Automatic progress tracker generation
  • Step-by-step execution with human checkpoints
  • Status updates after each completed task

I tested it, found it tried to do too much at once, and refined it to follow my structure better and confirm every step unless I tell it specifically to go ahead and complete everything. Three iterations later, it worked perfectly.

Example: The Write Blog Post Command

For /write-blog-post, I wanted Claude to match my writing style. I provided:

Create a command that takes markdown files and instructions, analyzes the topic (technical vs. business), adjusts vocabulary accordingly, follows my article structure (hook → build-up → deep dive → recommendations → conclusion) and style (reference https://www.msthgn.com/articles/the-micro-saas-revolution-from-giants-to-solopreneurs), verifies facts with credible sources, and outputs to `/blog-posts/`.

Claude generated a command that:

  • Analyzes writing style from reference articles
  • Recognizes topic type and adjusts tone
  • Structures content with my preferred flow
  • Fact-checks before publishing
  • Offers SEO optimization after completion

Tweaking Commands for Better Results

The /write-blog-post command initially didn't offer SEO optimization. I added one line at the end (after I created a /seo-blog-post command):

"After writing, ask if the user wants to run /seo-blog-post for keyword research and optimization."

Now every article automatically prompts for SEO work. Small tweak, huge workflow improvement.
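
Pieced together, the command body now reads something like this (paraphrased; the actual file is in the repository):

# Write Blog Post Command

Take the notes file ($1) and the writing instructions that follow it, then:

1. Analyze the reference articles and match their tone and structure
   (hook → build-up → deep dive → recommendations → conclusion).
2. Classify the topic as technical or business and adjust vocabulary.
3. Verify factual claims against credible sources.
4. Write the article to /blog-posts/{article-slug}.md.
5. After writing, ask if the user wants to run /seo-blog-post for keyword
   research and optimization.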

Command Structure Basics

A typical command file (e.g., .claude/commands/plan-feature.md) looks like this:

---
description: Plan a feature with validation and structured output
argument-hint: "[feature-name] [description]"
allowed-tools: Read, Grep, Edit, Write
---
 
# Plan Feature Command
 
Take the feature name ($1) and description ($ARGUMENTS after $1), then:
 
## Input Requirements
- Feature name for file naming
- Description of what needs to be built
 
## Validation Steps
- Read CLAUDE.md to understand project conventions
- Check if similar features exist using Grep
- Validate feature name format
 
## Workflow
1. Research codebase for existing patterns
2. Ask clarifying questions if requirements are unclear
3. Create structured plan in planning/{feature-name}.md
4. Include: database changes, API endpoints, frontend components, testing strategy
 
## Context
- Always respect patterns defined in CLAUDE.md
- Use project-specific naming conventions
- Break into phases with clear acceptance criteria

Pro tip: You can organize commands into subdirectories for namespacing (e.g., .claude/commands/jira/create-ticket.md becomes /jira/create-ticket).
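
A project's command folder might end up looking something like this (a hypothetical layout using the commands from this article):

.claude/commands/
├── plan-feature.md
├── implement-feature.md
├── write-blog-post.md
├── jira/
│   ├── create-jira-content.md
│   └── update-jira-ticket.md
└── seo/
    └── seo-blog-post.md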

The beauty? You don't need to know all this upfront. Ask Claude to create the command, use it, refine it. Commands improve through iteration, just like code.

Want to see real command examples? Check out my claude-plugin repository with production-ready commands for feature planning, implementation, blog writing, SEO optimization, and more. You can copy them directly or use them as templates for your own workflows.

Claude Commands vs Subagents: When to Use Each for AI Coding

Here's where the conceptual model gets interesting. According to the official documentation, Claude Code supports both commands (structured instructions) and subagents (pre-configured AI personalities with specialized expertise).

What are subagents? They're specialized AI assistants that operate in their own separate context window, each with a custom system prompt and specific tool permissions. Claude can either delegate tasks to them automatically (based on their description) or you can invoke them explicitly by name.

Use commands when you want:

  • Predictable, repeatable workflows
  • Multi-step processes with human checkpoints
  • Context that must always be respected (like CLAUDE.md)
  • Workflows that call other workflows
  • Explicit control over execution flow

Use subagents when you need:

  • Specialized expertise for specific domains (code review, debugging, data analysis)
  • Separate context windows to preserve main conversation
  • Reusable, consistent workflows with specific tool access
  • Research or exploration without strict structure
  • One-off tasks that don't need repeatability

The key difference? Commands are instructions you write that Claude follows. Subagents are pre-configured AI personalities that Claude delegates work to, each operating in their own context with specialized prompts.

My preference? Commands that orchestrate subagents. Commands provide the structure and checkpoints; subagents handle the specialized work.

For example, when building AI development tools with Claude, my /implement-feature command might delegate to subagents for:

  • Database schema analysis (data scientist subagent)
  • Code review of generated endpoints (code reviewer subagent)
  • Debugging integration issues (debugger subagent)

But the command controls the workflow, validates readiness, and keeps the human in the loop.
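
In command form, that delegation is just another instruction in the workflow. A short sketch (it assumes you've already defined code-reviewer and debugger subagents under .claude/agents/):

## Workflow (excerpt)

4. After the API endpoints are generated, use the code-reviewer subagent to
   check them against our authorization and error-handling conventions.
5. If an integration test fails, use the debugger subagent to investigate
   before moving on.
6. Stop and report findings before starting the next phase.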

Integrate Claude with External Tools Using Model Context Protocol (MCP)

Here's where it gets really powerful: Model Context Protocol (MCP) servers let your commands interact with external systems.

I integrated the Atlassian MCP server to sync planning docs and implementation status with Jira. Here's the complete workflow:

Step 1: Create Jira content from planning docs

/create-jira-content planning/kt-4054-employer-checkbook-mapping.md

This command:

  • Reads the planning document
  • Extracts tasks and requirements
  • Generates structured Jira content (Description, Business Value, Acceptance Criteria, Technical Decisions)
  • Saves to jira/KT-4054-employer-checkbook-mapping/KT-4054-employer-checkbook-mapping.md

Step 2: Update the actual Jira ticket

/update-jira-ticket jira/KT-4054-employer-checkbook-mapping/KT-4054-employer-checkbook-mapping.md

This command:

  • Validates the file format and structure
  • Connects to Atlassian via MCP
  • Verifies the ticket exists and you have permissions
  • Updates the Jira ticket with the structured content
  • Returns a direct link to the updated ticket

The workflow ensures my planning docs and Jira tickets stay in perfect sync. No manual copying, no formatting inconsistencies, no outdated ticket descriptions.

Setup wasn't trivial. MCP servers can be slow, authentication can be finicky, but once working, it closes the loop between planning, execution, and project management.

I use two key MCP integrations:

  • Atlassian MCP: For Jira synchronization (setup guide)
  • DataForSEO MCP: For SEO optimization workflows (setup guide)

Pro tip: Use the claude mcp add CLI command for Claude Code, or Cursor's UI, for easier configuration. But you can always edit the config JSON directly if you prefer.
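
For reference, registering a server from the CLI looks roughly like this. The server names, package, and URL below are placeholders; check each provider's setup guide for the real values:

# A remote MCP server reached over SSE:
claude mcp add --transport sse atlassian https://mcp.example.com/v1/sse

# A local server launched from a command:
claude mcp add local-db -- npx -y @example/mcp-database-server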

Model Context Protocol (MCP) Beyond Coding: Claude Desktop & Web

Here's something important: MCP isn't just for Claude Code. The Model Context Protocol works across:

  • Claude Desktop: The standalone desktop app supports MCP servers for any workflow - research, writing, data analysis, customer support automation
  • Claude.ai (Web): Through browser extensions and API integrations, you can connect MCP servers for enhanced capabilities
  • Claude Code: The full developer experience with commands, agents, and MCP integration
  • Cursor: Native MCP support with an even cleaner UI for configuration

What About Commands Beyond Code Agents?

Here's where it gets nuanced. Custom commands (the /command-name system) are currently specific to Claude Code and Cursor. They're not available in Claude Desktop or the web interface.

However, Claude Desktop supports something similar through MCP-based prompt templates. Using tools like the Claude Prompts MCP Server, you can:

  • Define reusable prompt templates in markdown files
  • Create multi-step prompt chains with input/output mapping
  • Build workflow orchestration (though less structured than Code commands)
  • Hot-reload prompts without restarting

It's not as powerful as Code's command system - no automatic context from CLAUDE.md, no built-in validation, no progress tracking. But for non-coding workflows (research automation, content creation, data analysis), it's surprisingly capable.

Use Cases Beyond Coding

I've seen people use Claude Desktop + MCP for:

  • Research workflows: MCP servers for academic databases, web scraping, citation management
  • Content pipelines: Automated SEO research (like my DataForSEO setup), fact-checking, image optimization
  • Customer support: Connect to Zendesk, Intercom, or custom CRMs via MCP
  • Data analysis: PostgreSQL/MySQL MCP servers for business intelligence queries
  • Personal productivity: Calendar integration, email automation, note-taking workflows

The limitation? You need to keep the human in the loop. Without Code's structured checkpoints and validation, Claude Desktop workflows can drift. MCP servers can be slow (especially DataForSEO). Authentication tokens expire. But for focused, repeatable tasks, it works.

Another issue is speed. MCP is not as fast as orchestrating APIs directly with code, but it makes automation more accessible to people who don't want to write custom integration code. Tools like n8n offer a good middle ground with visual workflow builders, though owning your stack with custom code has its benefits: faster execution, full control, and no third-party dependencies. The trade-off? More upfront development time versus MCP's plug-and-play simplicity.

Claude Commands in Cursor vs VS Code: Cross-Platform AI Development

The best part? These commands work identically in Claude Code and Cursor.

I started with Visual Studio Code running Claude Code, but found Cursor more effective for AI workflows. The experience is just smoother - better context handling, faster responses, cleaner interface for agent interactions.

Sharing commands between projects is trivial: copy the .claude/commands/ folder, or publish them as plugins for the community.

Same commands, same workflows, different editors. That's the power of standardized command definitions.

Real-World AI Workflow Example: Automated Blog Post Writing

Let me show you a concrete example. I created a /write-blog-post command that follows my writing style:

/write-blog-post notes.md "Write about Claude commands"

The command:

  1. Analyzes my writing style from reference articles
  2. Recognizes the topic (technical vs. business) and adjusts vocabulary
  3. Structures the article with: hook → build-up → deep dive → recommendations → conclusion
  4. Verifies facts and finds credible sources
  5. Outputs to /blog-posts/article-slug.md

After writing, it offers to run SEO optimization:

/seo-blog-post /blog-posts/article-slug.md

The /seo-blog-post command uses the DataForSEO MCP to:

  • Analyze article content and key topics
  • Research relevant keywords with search volume data
  • Present optimization recommendations
  • Update the article while preserving voice and style

Two commands, chained together, producing publication-ready content. No manual reformatting, no context loss between steps.

The Non-Deterministic Reality

I need to be honest: this isn't perfect.

Even with structured commands, AI agents can be non-deterministic. Sometimes the same command produces slightly different outputs. MCP servers can be slow (especially DataForSEO on complex queries). Authentication tokens expire. Network requests fail.

That's why human-in-the-loop is critical. Commands give you the structure and checkpoints, but you still need to:

  • Review outputs at each step
  • Validate that context was correctly applied
  • Catch edge cases the command didn't anticipate
  • Iterate when results aren't quite right

The difference is: instead of fighting with free-form prompts that might completely miss the mark, you're fine-tuning structured outputs that are 80% correct.

Why This Matters

Commands give you complete control over agent workflows:

  • No random outputs that ignore your conventions
  • No endless re-prompting to get the format right
  • No lost context when you switch between tasks
  • No manual copying of implementation status to Jira

Structured. Predictable. Fast.

You can let commands run for substantial periods (hours even) because each checkpoint gives you a chance to course-correct. The agent becomes a reliable collaborator instead of an unpredictable assistant.

Getting Started with Claude Code Commands

Want to try this? Here's the simplest path:

  1. Pick one repetitive workflow - What do you do over and over? Feature planning? API docs? Code reviews?
  2. Ask Claude to create the command - Describe what you want in plain English. "Create a command that validates my API schema and generates OpenAPI docs." Claude will generate the initial command structure.
  3. Use it and refine - Run it a few times. What's missing? What's confusing? Tell Claude to update the command based on what you learned.
  4. Chain commands together - Once you have a few working commands, create workflows where one command calls another.

Learn more: Check out the official Claude Code commands tutorial for comprehensive documentation on command structure and best practices.

The best part? You don't need to know command syntax or structure upfront. Claude generates commands, you refine them through use. It's like pair programming, but for workflow automation.

The investment pays off exponentially. Every feature I plan now takes minutes instead of hours. Every implementation follows the same validated pattern. Every blog post comes out in my voice, properly structured, SEO-optimized.

Commands turned AI from a frustrating experiment into an indispensable workflow tool.

Frequently Asked Questions About Claude Commands

Q: What are Claude commands?

A: According to the official documentation, Claude commands (also called slash commands) are "a way to control Claude's behavior during an interactive session." They're custom markdown-based instructions stored in .claude/commands/ that create structured, repeatable workflows. Commands enable predictable AI coding workflows instead of inconsistent free-form prompts by accepting arguments, executing bash commands, and maintaining consistent context.

Q: Do Claude commands work with Cursor AI?

A: Yes! Claude commands work identically in both Claude Code (VS Code extension) and Cursor AI IDE. You can copy the .claude/commands/ folder between tools seamlessly. The commands are standardized, so the same workflow runs in both editors without modification.

Q: What's the difference between Claude commands and subagents?

A: Commands are structured instructions you write that Claude follows for predictable workflows with human checkpoints. Subagents are pre-configured AI personalities with specialized expertise that operate in their own separate context windows. Commands provide explicit control over execution flow, while subagents handle specialized tasks (code review, debugging, data analysis) that Claude delegates to them automatically or that you invoke explicitly.

Q: Do I need frontmatter in my command files?

A: No, frontmatter (description, argument-hint, allowed-tools, model) is optional. Commands work well without it - they'll ask for tool access and accept parameters properly if you include instructions in the command body. Frontmatter is useful for documentation and security (restricting tools), but you can start simple and add it later.

Q: Can I use Model Context Protocol (MCP) with Claude Desktop?

A: Yes, the Model Context Protocol works with Claude Desktop, Claude Code, Claude.ai (web), and Cursor for external integrations like Jira, databases, and SEO tools. However, custom commands (the /command-name system) are specific to Claude Code and Cursor - Claude Desktop uses MCP-based prompt templates instead.

Q: How do I pass arguments to commands?

A: Use $ARGUMENTS to capture all arguments, or $1, $2, etc. for individual arguments. For example, /plan-feature feature-name "description here" makes $1 equal to "feature-name" and $ARGUMENTS contains the full string. Define expected arguments in the argument-hint frontmatter parameter or in your command instructions.

Q: Can commands call other commands?

A: Yes! Commands can invoke other commands to create complex workflows. For example, my /plan-epic command calls /plan-feature for each sub-feature, and /write-blog-post suggests running /seo-blog-post after completion. This enables workflow orchestration and automation.

Q: How do I get started with Claude Code commands?

A: Start with one repetitive workflow. Ask Claude: "Create a command called /command-name that does X." Claude generates the initial structure. Test it, refine based on results, and iterate. You don't need to know command syntax upfront - Claude helps you build commands through conversation. See the official tutorial for comprehensive guidance.

Q: What AI development tools integrate with Claude commands?

A: Popular MCP integrations include Atlassian (Jira), DataForSEO (SEO research), PostgreSQL/MySQL databases, GitHub, and custom API servers via the Model Context Protocol. You can find setup guides for Atlassian MCP and DataForSEO MCP.

Q: Where can I find example commands?

A: Check out the claude-plugin repository with production-ready commands for feature planning (/plan-feature), implementation, JIRA integration, blog writing (/write-blog-post), and SEO optimization (/seo-blog-post). You can copy them directly or use them as templates.

Q: Can I organize commands into subdirectories?

A: Yes! Commands support namespacing through subdirectories. For example, .claude/commands/jira/create-ticket.md becomes /jira/create-ticket. This helps organize commands by domain (jira, seo, git, etc.) and prevents naming conflicts.

Q: Are AI coding workflows with commands really predictable?

A: Commands provide structure and consistency, but AI agents can still be non-deterministic. The same command might produce slightly different outputs. That's why human-in-the-loop checkpoints are critical - review outputs at each step, validate context, and iterate when needed. Commands give you 80% correct structured outputs instead of unpredictable free-form responses.

References

Official Documentation

Command Examples & Templates

Model Context Protocol (MCP) Integration