# /aiops-skill:feedback-capture

Ask the user for feedback at the end of a skill invocation, categorize it, and store it as a structured record in `~/feedback.txt` with session tracking.
## When to Use
- After completing a root cause analysis session
- When you want to record a note about the quality or accuracy of an investigation
- To log a bug report or suggestion about skill behavior
Example invocations:
"Capture feedback for this session"
"Record my feedback"
"I want to leave feedback about this analysis"
## Prerequisites

No external services or environment variables are required. Feedback is written to the local filesystem at `~/feedback.txt`. The skill requires `scripts/formatting.py` to be present in the skill directory.
## 4-Step Workflow

1. **Ask for Feedback**
   [Claude] Claude asks the user whether they would like to provide feedback. The prompt is kept open-ended; any comment the user provides is treated as feedback.

2. **Select Category**
   [Claude] Claude reads the user's comment and selects the most appropriate feedback category from the standard list. Categories are not presented as options to the user; Claude infers the best fit.

3. **Summarize Context**
   [Claude] Claude creates two summaries: the user's feedback condensed into `users-feedback`, and a brief description of what happened during the session as `context`.

4. **Store Feedback**
   [Python] Runs `scripts/formatting.py` with the categorized, structured feedback. The script appends a timestamped record to `~/feedback.txt`.

   ```bash
   python scripts/formatting.py \
     --category {Category} \
     --skill {Skill} \
     --feedback {users-feedback} \
     --context {summary-of-what-happened}
   ```
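The actual `scripts/formatting.py` ships with the skill; as a rough guide to what it does, a minimal sketch might look like the following. The flag names match the invocation above, but the record layout, the `format_record` helper, and the `CLAUDE_SESSION_ID` environment variable are assumptions for illustration.

```python
# Minimal sketch of scripts/formatting.py (illustrative; the real script may differ).
import argparse
import os
import sys
from datetime import datetime, timezone
from pathlib import Path

FEEDBACK_FILE = Path.home() / "feedback.txt"


def format_record(category, skill, feedback, context, session_id="unknown"):
    """Render one structured feedback record as plain text."""
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return (
        "---\n"
        f"Timestamp: {timestamp}\n"
        f"Session ID: {session_id}\n"
        f"Skill: {skill}\n"
        f"Category: {category}\n"
        f"Feedback: {feedback}\n"
        f"Context: {context}\n"
    )


def main():
    parser = argparse.ArgumentParser(description="Append a feedback record.")
    for flag in ("--category", "--skill", "--feedback", "--context"):
        parser.add_argument(flag, required=True)
    args = parser.parse_args()
    # Session-ID source is an assumption; the real script may obtain it elsewhere.
    session_id = os.environ.get("CLAUDE_SESSION_ID", "unknown")
    record = format_record(args.category, args.skill, args.feedback,
                           args.context, session_id)
    # Append-only: earlier records are never overwritten.
    with FEEDBACK_FILE.open("a", encoding="utf-8") as fh:
        fh.write(record)


if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```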
## Feedback Categories
Claude selects one category per feedback entry based on the nature of the comment:
| Category | Use When |
|---|---|
| Complexity | The skill was too complex or had too many steps |
| Clarity | Output was unclear, confusing, or hard to interpret |
| Accuracy | Root cause or findings were incorrect or incomplete |
| Performance | The skill was slow or timed out |
| Search Quality | Search results were irrelevant or missed key content |
| Interpretation | Claude misunderstood the intent or scope |
| Positive | The investigation was helpful and worked well |
For feedback that fits a more specific label (e.g., "It keeps repeating the same solution"), Claude creates a short 1–2 word custom label (e.g., `Repetition`) instead.
## Storage Format

Feedback is appended to `~/feedback.txt`. Each record includes:
- **Timestamp** – When the feedback was captured
- **Session ID** – Links the feedback to the Claude Code session
- **Skill** – Which skill was being evaluated
- **Category** – The selected or inferred category
- **Feedback** – The user's comment, summarized
- **Context** – Brief description of what the skill did during the session
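Given those fields, a stored record might look like the example below. The values and exact layout are illustrative only; the real output is whatever `scripts/formatting.py` emits.

```
---
Timestamp: 2025-01-15T10:42:07+00:00
Session ID: abc123
Skill: root-cause-analysis
Category: Accuracy
Feedback: The suggested root cause missed the real issue
Context: Investigated elevated error rates on the checkout service
```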
## Usage Guidelines
- Claude never presents the category list as options; it infers the category from context
- Whatever the user says is treated as feedback; no follow-up questions are asked
- The skill should only be invoked once per session β do not ask repeatedly
- Feedback is local only; it is not automatically uploaded unless the parent skill (e.g., root-cause-analysis) runs an upload step