Skills vs Agents

Concepts — understand before you build

💡
The one-sentence version: A Skill is a reusable workflow Claude follows in your conversation. An Agent is an isolated specialist that runs separately and returns a summary.

The Mental Model

Think of it in Red Hat terms:

Skill

📋 Standard Operating Procedure

  • Runs inside your current conversation
  • Invoked via /namespace:name command
  • Can ask you questions, generate files
  • Shares your context window
  • Best for interactive, guided workflows
  • Defined in a SKILL.md file
Agent

🤖 Specialist on a Ticket

  • Gets its own isolated context window
  • Does not see your conversation history
  • Returns only a summary to your session
  • Can run in parallel with other agents
  • Best for review, analysis, parallel work
  • Defined in a .md file under agents/

Full Comparison

|  | Skill | Agent |
|---|---|---|
| What it is | Workflow instructions Claude follows | Isolated worker with its own context |
| Context | Shares your main conversation window | Gets a fresh, isolated context window |
| Invocation | /namespace:skill-name or auto-loaded | Delegated by a skill or by you directly |
| Sees conversation history | Yes | No — only what you pass it |
| Returns | Works directly in your session | A summary back to your main session |
| Can modify files | Yes (in your session) | Yes (in its own workspace) |
| Spawns other agents | No | No (prevents infinite nesting) |
| File definition | SKILL.md in skills/<name>/ | .md file in agents/ |
| Best for | Interactive step-by-step workflows | Review, validation, parallel tasks |

Red Hat Role Examples

Different people at Red Hat use skills and agents differently. Here's the breakdown by role.

Sales Engineer

Goal: Automate pre-demo prep — research the customer, draft a briefing, find the right catalog item.

Use a Skill (/sales:demo-brief):

> /sales:demo-brief
Customer: Acme Corp
Opportunity: OpenShift Virtualization migration, 500 VMs
Rep: Jane Smith

The skill asks follow-up questions and generates the briefing doc in your session. Interactive, guided — exactly what a skill is for.

Not an Agent — you want back-and-forth conversation, not a silently dispatched task.
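A hypothetical SKILL.md frontmatter for this skill, following the conventions shown in the File Structure Reference section (the description wording and file path here are illustrative, not the real marketplace definition):

```yaml
# sales/skills/demo-brief/SKILL.md (hypothetical path)
---
name: sales:demo-brief
description: Draft a pre-demo customer briefing. Invoke when the user
  wants demo prep for a sales opportunity.
context: main  # shares your conversation, so the skill can ask follow-ups
---
```

The `context: main` setting is what keeps the skill interactive: it runs inside your session and can see your answers.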


Frontend Developer

Goal: Automatically review every React component for accessibility, PatternFly compliance, and responsiveness.

Use an Agent (frontend-reviewer):

> Use the frontend-reviewer agent to review src/components/DashboardCard.tsx

The agent gets the file, checks it independently — your main conversation doesn't fill up with hundreds of lint lines. It returns: "3 issues found" with specifics.

With a hook, this fires automatically on every .tsx edit without you asking.

Not a Skill — you don't want to sit through an interactive Q&A for each file. You want a silent specialist that pings you only when there's a real problem.
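A sketch of how such a hook can be wired in .claude/settings.json, using Claude Code's PostToolUse event. The matcher matches tool names (Edit, Write), so the script itself would read the hook's stdin JSON and filter for .tsx paths; the script path is hypothetical:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/review-tsx.sh"
          }
        ]
      }
    ]
  }
}
```

The script would then ask Claude to dispatch the frontend-reviewer agent for the changed file, keeping the noise out of your main session.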


Solutions Architect

Goal: Validate that a demo environment is healthy after deployment.

Use an Agent (health-validator):

> Use the health-validator agent to check the AAP deployment

The agent connects, runs checks, and returns a pass/fail summary. Your main session stays focused on the customer conversation.

Use a Skill for the interactive part — /health:deployment-validator to create the validation role step by step, then the agent to run it.


RHDP Engineer

Goal: Build a new workshop catalog item and have it independently reviewed.

Use both together:

  1. /showroom:create-lab (Skill) — guides you through generating the AsciiDoc modules
  2. workshop-reviewer (Agent) — independently reviews the output for quality issues
  3. style-enforcer (Agent) — checks Red Hat naming conventions and inclusive language

The skill does the creation work in your session. The agents review independently, with fresh eyes, and return specific feedback.


Decision Flowchart

Is it interactive? Do you need to answer questions?
├── YES → Skill
│         Do you want it invoked automatically or manually?
│         ├── Manually (you type /name) → Skill (default)
│         └── Automatically when relevant → Skill with model-invocable: true
│
└── NO → Is the task self-contained (review, analysis, validation)?
          ├── YES → Agent
          │         Does output need to stay out of your main context?
          │         ├── YES → Agent (isolated context window)
          │         └── NO → Skill with context: fork
          │
          └── NO → CLAUDE.md rule or Hook
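Both Skill outcomes in the flowchart are frontmatter switches, not separate file types. A hypothetical frontmatter reusing the model-invocable and context keys named in the flowchart (the description text is illustrative):

```yaml
---
name: showroom:verify-content
description: Verify workshop content quality. Invoke when prompt files
  need a consistency check.
model-invocable: true  # Claude may load this skill automatically when relevant
context: fork          # the forked-context variant from the flowchart
---
```

Omit both keys and you get the default: a manually invoked skill running in your main conversation.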

Using Skills and Agents Together

The real power comes from combining them. In the RHDP marketplace, every major workflow follows this pattern:

User invokes Skill (/showroom:create-lab)
     │
     ├── Skill generates AsciiDoc modules (in your session)
     │
     ├── Skill invokes workshop-reviewer Agent
     │       ├── Agent checks learning objectives, exercise quality
     │       └── Returns: specific BEFORE/AFTER feedback
     │
     └── Skill invokes style-enforcer Agent
             ├── Agent checks RH naming, inclusive language
             └── Returns: specific violations with file locations

The skill orchestrates. The agents specialise. Your main context only sees the final results.
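Inside a skill, this delegation is just an instruction in the SKILL.md workflow body. A hypothetical excerpt following the pattern above (step numbers and paths are illustrative):

```markdown
## Step 10: Independent review

Use the workshop-reviewer agent to review the generated modules in
content/modules/. Pass it only the file paths, not the conversation
history. Apply its BEFORE/AFTER feedback before delivery.

## Step 11: Style check

Use the style-enforcer agent to check Red Hat naming conventions and
inclusive language. Fix every reported violation.
```

Because each agent gets a fresh context, passing only the file paths is enough: the reviewer sees the output, not the reasoning that produced it.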


When to Use Agents Inside a Skill

Once you start building skills, you'll face a choice: should the skill do checks itself (inline), or delegate to an agent?

This is one of the most common design mistakes — using agents when inline is faster and simpler, or avoiding agents when they'd genuinely help.

The Core Question

Is the check the main purpose of the skill, or a secondary gate after heavy work?


Use Inline When

Verification is the primary purpose of the skill — like /showroom:verify-content. The skill exists to check things. There's no large pre-existing context. Running checks directly in the skill's own context is faster and produces consistent output.

/showroom:verify-content
  ↓
Reads prompt files
  ↓
Runs all checks inline (one context)
  ↓
Returns single findings table

Also use inline when:

  • You need tightly structured output, such as a findings table, which is easier to control in a single context
  • The content to check is already in context and the check is a quick, specific quality gate

Use Agents When

The check is secondary — it runs after heavy generation work. For example, /showroom:create-lab spends 9 steps generating content. At step 10, the context is large and the skill is almost done. An agent gets a fresh context and can review without bias from all the generation choices that came before.

/showroom:create-lab
  ↓
Steps 1–9: Heavy generation (large context builds up)
  ↓
Step 10: Ask workshop-reviewer agent to check
         → Fresh context, unbiased view
         → Returns specific feedback
  ↓
Apply fixes, deliver

Also use agents when:

  • The specialist knowledge is maintained separately: update the agent once and every skill that delegates to it benefits
  • You want open-ended, qualitative feedback rather than a fixed checklist

The Speed Trade-off

Agents are never faster than inline. Each agent call is a separate context spin-up. If a skill invokes three agents in sequence, that's three round-trips instead of one.

The reason to use agents is quality and maintainability, not speed:


Decision Table

| Situation | Use | Why |
|---|---|---|
| Skill's main job is checking/validating | Inline | Faster, one context, structured output |
| Quick quality gate at end of generation | Inline | Content already in context, specific checklist |
| Post-generation review needing fresh eyes | Agent | Unbiased context, specialised knowledge |
| Specialist domain maintained separately | Agent | Update the agent once, all skills benefit |
| Open-ended qualitative feedback | Agent | Agents are better at narrative, exploratory review |
| Structured table output needed | Inline | Easier to control output format precisely |

Real RHDP Examples

| Skill | Approach | Reason |
|---|---|---|
| /showroom:verify-content | Inline | Verification IS the skill. Reads prompts, runs all checks in one pass. |
| /showroom:create-lab Step 10 | Inline | Focused 10-item checklist on the just-generated module. Fast, specific. |

File Structure Reference

Defining a Skill (showroom/skills/create-lab/SKILL.md):

---
name: showroom:create-lab
description: Create a Showroom workshop lab module. Invoke when the user
  wants to build hands-on workshop content for Red Hat Showroom.
context: main
model: claude-opus-4-6
---

# Create Lab Skill

[Step-by-step workflow instructions...]

Defining an Agent (showroom/agents/workshop-reviewer.md):

# Workshop Reviewer Agent

## Role
You are a senior instructional designer specialising in Red Hat
technical workshops. You review content for learning effectiveness.

## Instructions
Review the provided modules for:
- Clear learning objectives (Know/Do/Check structure)
- Actionable lab exercises with verification steps
- Appropriate technical depth for the target audience

## Feedback Requirements
For each issue, provide:
- WHY it is a problem
- BEFORE: the current text
- AFTER: the improved version
- WHICH FILE and line to change

Further Reading