/aiops-skill:feedback-capture

💬 Feedback Capture

Ask the user for feedback at the end of a skill invocation, categorize it, and store it as a structured record in ~/feedback.txt with session tracking.


When to Use

💡
This skill is typically called automatically by other skills (such as context-fetcher) at the end of an investigation. You can also invoke it directly:
  • After completing a root cause analysis session
  • When you want to record a note about the quality or accuracy of an investigation
  • To log a bug report or suggestion about skill behavior

Example invocations:

"Capture feedback for this session"
"Record my feedback"
"I want to leave feedback about this analysis"

Prerequisites

No external services or environment variables are required. Feedback is written to the local filesystem at ~/feedback.txt. The skill requires scripts/formatting.py to be present in the skill directory.


4-Step Workflow

  1. Ask for Feedback [Claude]

    Claude asks the user if they would like to provide feedback. The prompt is kept open-ended; any comment the user provides is treated as feedback.

  2. Select Category [Claude]

    Claude reads the user's comment and selects the most appropriate feedback category from the standard list. Categories are not presented as options to the user; Claude infers the best fit.

  3. Summarize Context [Claude]

    Claude creates two summaries: the user's feedback, condensed into the users-feedback argument, and a brief description of what happened during the session, passed as the context argument.

  4. Store Feedback [Python]

    Runs scripts/formatting.py with the categorized, structured feedback. The script appends a timestamped record to ~/feedback.txt.

    ```bash
    python scripts/formatting.py \
      --category {Category} \
      --skill {Skill} \
      --feedback {users-feedback} \
      --context {summary-of-what-happened}
    ```
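The source of scripts/formatting.py is not shown here, so the following is only a minimal sketch of what it might look like, inferred from the flag names in the invocation above; the real script's record layout and argument handling may differ.

```python
# Hypothetical sketch of scripts/formatting.py (the actual record layout
# and CLI behavior of the real script may differ).
import argparse
import datetime
import os

FEEDBACK_PATH = os.path.expanduser("~/feedback.txt")


def format_record(category, skill, feedback, context, timestamp=None):
    """Render one structured feedback record as a text block."""
    ts = timestamp or datetime.datetime.now().isoformat(timespec="seconds")
    return (
        f"--- {ts} ---\n"
        f"skill: {skill}\n"
        f"category: {category}\n"
        f"feedback: {feedback}\n"
        f"context: {context}\n"
    )


def main():
    # Flag names taken from the documented invocation; all are required.
    parser = argparse.ArgumentParser(description="Append a feedback record.")
    parser.add_argument("--category", required=True)
    parser.add_argument("--skill", required=True)
    parser.add_argument("--feedback", required=True)
    parser.add_argument("--context", required=True)
    args = parser.parse_args()

    record = format_record(args.category, args.skill, args.feedback, args.context)
    # Append-only: earlier records are never modified.
    with open(FEEDBACK_PATH, "a", encoding="utf-8") as fh:
        fh.write(record + "\n")

# CLI entry point (the skill invokes this as shown in the bash snippet above):
# if run as a script, call main().
```

Appending (rather than overwriting) keeps the file as a running log, which matches the "session tracking" behavior described at the top of this document.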

Feedback Categories

Claude selects one category per feedback entry based on the nature of the comment:

Category         Use When
--------         --------
Complexity       The skill was too complex or had too many steps
Clarity          Output was unclear, confusing, or hard to interpret
Accuracy         Root cause or findings were incorrect or incomplete
Performance      The skill was slow or timed out
Search Quality   Search results were irrelevant or missed key content
Interpretation   Claude misunderstood the intent or scope
Positive         The investigation was helpful and worked well

For feedback that fits a more specific label (e.g., "It keeps repeating the same solution"), Claude creates a short one- or two-word custom label (e.g., Repetition) instead.


Storage Format

Feedback is appended to ~/feedback.txt. Each record includes:

  • A timestamp of when the feedback was recorded
  • The selected category (standard or custom label)
  • The skill the feedback is about
  • The user's feedback, condensed
  • A brief summary of what happened during the session

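The exact on-disk layout is not specified in this document. Assuming a hypothetical plain-text format in which each record is a blank-line-separated block of "key: value" lines, reading the log back for review could look like this sketch:

```python
# Hypothetical reader for ~/feedback.txt, assuming records are stored as
# blank-line-separated blocks of "key: value" lines. The real layout
# produced by scripts/formatting.py may differ.
import os


def load_records(path=os.path.expanduser("~/feedback.txt")):
    """Parse the feedback log into a list of dicts, one per record."""
    with open(path, encoding="utf-8") as fh:
        blocks = [b for b in fh.read().split("\n\n") if b.strip()]

    records = []
    for block in blocks:
        rec = {}
        for line in block.splitlines():
            # Only parse well-formed "key: value" lines; skip separators.
            if ": " in line:
                key, value = line.split(": ", 1)
                rec[key] = value
        records.append(rec)
    return records
```

A tolerant line-by-line parser like this keeps the reader working even if individual records gain extra fields later.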

Usage Guidelines

ℹ️
  • Claude never presents the category list as options; it infers the category from context
  • Whatever the user says is treated as feedback; no follow-up questions are asked
  • The skill should only be invoked once per session; do not ask repeatedly
  • Feedback is local only; it is not automatically uploaded unless the parent skill (e.g., root-cause-analysis) runs an upload step