Responses

Responses are the raw outputs captured when Cleotic runs your prompts against AI models. Each response contains the full text of what the AI model said, along with Cleotic's analysis of brand mentions, sentiment, and citations.

What a response contains

Every response includes:

Full response text

The complete text output from the AI model. You can view this in three formats:

  • Parsed -- Formatted and readable, with brand mentions highlighted
  • Raw -- The unformatted text as returned by the AI model
  • Code -- A code-style view for technical responses

You can copy any response to your clipboard for use elsewhere.
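Taken together, the pieces above form one record per response. A minimal sketch in Python of what such a record might look like — the field names and values here are illustrative assumptions, not Cleotic's actual API or export format:

```python
# Hypothetical shape of a captured response (illustrative field names,
# NOT Cleotic's actual API or export format).
response = {
    "text": "Acme is a popular choice for small teams...",  # full model output
    "model": "example-model",                 # which AI model produced it
    "captured_at": "2024-05-01T12:00:00Z",    # when it was collected
    "brand_mentions": [
        {"brand": "Acme", "sentiment": "positive", "position": 0},
    ],
    "citations": [
        {"url": "https://example.com/review", "domain": "example.com"},
    ],
}

# The same text can be rendered in any of the three views.
VIEWS = ("parsed", "raw", "code")
```

The three views are different renderings of the same `text` field; the analysis fields (`brand_mentions`, `citations`) are what the rest of this page describes.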

Brand mentions

Cleotic identifies every brand mentioned in the response and extracts:

  • Brand name -- Which of your configured brands was mentioned
  • Sentiment -- Whether the mention was positive, negative, or neutral
  • Position -- Where in the response the brand was mentioned (earlier mentions tend to indicate stronger association)

Sentiment is displayed as colour-coded badges: green for positive, red for negative, and grey for neutral.
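The badge colours map one-to-one onto the three sentiment values. A small sketch of that mapping (the mapping itself is as described above; the helper function is illustrative):

```python
# Sentiment label -> badge colour, as described in the docs.
BADGE_COLOURS = {
    "positive": "green",
    "negative": "red",
    "neutral": "grey",
}

def badge_colour(sentiment: str) -> str:
    """Return the badge colour for a sentiment label (case-insensitive)."""
    return BADGE_COLOURS[sentiment.lower()]
```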

Citations

If the AI model referenced any URLs or sources, these are captured as citations. Each citation includes:

  • URL -- The full link (clickable)
  • Domain -- The website domain
  • Source type -- The kind of source (news, blog, official website, social media, etc.)
  • Brands mentioned -- Which brands the cited source relates to

Citations are a key signal for understanding which content is influencing AI responses about your brand. See Citations for deeper analysis.
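A citation's domain is simply the host portion of its URL, so if you are working with exported citation data you can derive it yourself. A minimal sketch using the standard library (the citation record shape is an assumption):

```python
from urllib.parse import urlparse

def citation_domain(url: str) -> str:
    """Extract the website domain (host) from a citation URL."""
    return urlparse(url).netloc

# Hypothetical citation record, with the domain derived from the URL.
citation = {
    "url": "https://blog.example.com/ai-tools-review",
    "source_type": "blog",  # news, blog, official website, social media, ...
}
citation["domain"] = citation_domain(citation["url"])
# citation["domain"] is now "blog.example.com"
```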

Viewing responses

From the monitor page, click on any prompt to see its responses. Responses are grouped by:

  • Capture date -- When the response was collected
  • Model -- Which AI model produced it

Each response card shows a summary with the model name, timestamp, and number of brands mentioned. Click to expand and see the full details.
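The grouping described above — by capture date, then by model — can be sketched with a plain dictionary keyed on both values. The record fields below are assumptions, not Cleotic's actual export format:

```python
from collections import defaultdict

def group_responses(responses):
    """Group response records by (capture date, model), mirroring the monitor page."""
    groups = defaultdict(list)
    for r in responses:
        date = r["captured_at"][:10]  # keep the YYYY-MM-DD part of the timestamp
        groups[(date, r["model"])].append(r)
    return dict(groups)

responses = [
    {"captured_at": "2024-05-01T09:00:00Z", "model": "model-a", "brands": 2},
    {"captured_at": "2024-05-01T09:05:00Z", "model": "model-a", "brands": 1},
    {"captured_at": "2024-05-02T10:00:00Z", "model": "model-b", "brands": 3},
]
grouped = group_responses(responses)
# grouped has two groups: ("2024-05-01", "model-a") and ("2024-05-02", "model-b")
```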

How responses feed into analytics

You don't need to read every response individually -- that's what the analytics dashboards are for. Responses are automatically processed into the aggregate metrics those dashboards display, such as visibility scores, sentiment trends, and citation analysis.

Individual response viewing is most useful for:

  • Investigating anomalies -- When a visibility score drops, checking the actual responses helps you understand why
  • Understanding AI perception -- Reading how models describe your brand gives qualitative insight beyond the numbers
  • Verifying data quality -- Confirming that brand mentions and sentiment are being detected correctly

Tips

  • Check responses after adding new brands or aliases. If you notice a brand isn't being detected, you may need to add more aliases. After updating aliases, use the Reanalyse action on the Brands tab to reprocess historical responses.
  • Compare models side by side. Look at the same prompt across different AI models to understand how each one talks about your brand differently.
  • Use the code view for technical content. When AI responses include code examples, lists, or structured data, the code view makes them easier to read.
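The first tip — checking whether a brand is actually being detected — can be approximated offline with a simple alias scan. This is a rough sketch only; Cleotic's real matcher is not documented here and is presumably more sophisticated:

```python
import re

def mentions_brand(text: str, aliases: list[str]) -> bool:
    """Rough check: does any alias appear as a whole word in the response text?"""
    return any(
        re.search(rf"\b{re.escape(alias)}\b", text, flags=re.IGNORECASE)
        for alias in aliases
    )

text = "Many teams reach for AcmeCloud when they need managed hosting."
mentions_brand(text, ["Acme"])       # False: "Acme" only appears inside "AcmeCloud"
mentions_brand(text, ["AcmeCloud"])  # True: adding the alias fixes detection
```

This illustrates why alias coverage matters: a whole-word match for "Acme" misses "AcmeCloud", so responses using the longer name would go undetected until the alias is added and historical responses are reanalysed.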