Correlate multiple signals

Work through this guide when you need to stitch metrics, logs, traces, profiles, and SQL insights into a single story. The workflow keeps you inside the Assistant while you pivot between signal types.

What you’ll achieve

  • Establish a shared timeframe and anchor metric for the incident or hypothesis you are exploring.
  • Pull supporting evidence from logs, traces, profiles, and business data without losing context.
  • Deliver a concise narrative that explains how the signals relate and what to do next.

Before you begin

  • Data sources for each signal: Prometheus for metrics, Loki for logs, Tempo for traces, Pyroscope for profiles, and any SQL data sources that hold business metrics.
  • Defined scope: Know the service, feature, or customer journey you want to trace across signals.

Set the stage in chat

Frame the problem and establish a baseline metric so each subsequent signal aligns to the same scope and window.

  1. Describe the symptom and timeframe. Mention the impacted service or dashboard.
  2. Ask for a metrics snapshot to create a baseline (a sketch of the underlying query follows this list):
Show the request error rate for @prometheus-ds checkout-service over the last two hours.
  3. Follow up immediately with a logs or traces request that references the same timeframe:
Find logs mentioning "timeout" for the same timeframe in @checkout-logs and group by status code.
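
The minimal sketch below shows the kind of error-rate query that could answer the baseline prompt, issued directly against the Prometheus HTTP API so you can cross-check the Assistant's numbers. The server address, the http_requests_total metric, and the job and status labels are assumptions for illustration; substitute the names your environment uses.

# Minimal sketch (not the Assistant's implementation): cross-check the
# error-rate baseline against the Prometheus HTTP API. The address, metric
# name, and label names are assumptions; adjust them to your environment.
import time

import requests

PROM_URL = "http://prometheus:9090"  # assumed Prometheus address

# Ratio of 5xx request rate to total request rate for the checkout service.
QUERY = (
    'sum(rate(http_requests_total{job="checkout-service",status=~"5.."}[5m]))'
    " / "
    'sum(rate(http_requests_total{job="checkout-service"}[5m]))'
)

end = time.time()
start = end - 2 * 60 * 60  # the same two-hour window used in the prompt

resp = requests.get(
    f"{PROM_URL}/api/v1/query_range",
    params={"query": QUERY, "start": start, "end": end, "step": "60s"},
    timeout=30,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["values"][-1])  # latest error-rate sample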

Pivot across signals

Chain prompts that jump between metrics, logs, traces, profiles, and SQL data to uncover how the signals relate.

  • Use natural follow-ups to chain Tempo or Pyroscope queries, for example, Drill into traces for the slowest requests.
  • Ask the Assistant to highlight correlations, for example, Do spikes in CPU coincide with the error peak?
  • If the Assistant misinterprets a prompt, clarify politely with the correct label, port, or data source before moving on.
  • If you store synthetic results or business KPIs in SQL, pull them into the conversation to validate customer impact (a sample query shape follows this list):
Query @orders-database for failed checkouts per minute between 17:30 and 18:00.
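
When you need to reproduce that business-impact check outside the chat, a query along the following lines can be run directly against the database. This is a hypothetical sketch: the orders table, its status and created_at columns, the PostgreSQL connection string, and the example date are all assumptions; only the 17:30 to 18:00 window comes from the prompt above.

# Hypothetical cross-check of failed checkouts per minute, assuming a
# PostgreSQL database with an orders table that has status and created_at
# columns. Connection details, schema, and the example date are placeholders.
import psycopg2

SQL = """
    SELECT date_trunc('minute', created_at) AS minute,
           count(*)                         AS failed_checkouts
    FROM orders
    WHERE status = 'failed'
      AND created_at >= %s
      AND created_at <  %s
    GROUP BY 1
    ORDER BY 1;
"""

conn = psycopg2.connect("dbname=orders host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    # Illustrative date; the 17:30-18:00 window matches the prompt above.
    cur.execute(SQL, ("2025-01-01 17:30:00", "2025-01-01 18:00:00"))
    for minute, failed in cur.fetchall():
        print(minute, failed)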

Capture the narrative

Ask the Assistant to synthesize what you learned and outline follow-up actions your team should take.

  1. Request a combined summary that explains how the signals relate.
Summarize how the error spikes, log timeouts, and trace latency relate to each other.
  2. Ask the Assistant to suggest next steps, such as running a deeper investigation or creating an alert.
  3. Save the conversation or export the results if you need to share them in an incident review. When you document the findings, include representative metrics, log lines, or trace IDs so teammates can reproduce the analysis.

Share the cross-signal story

You now have a cross-signal explanation that ties quantitative evidence to recommended actions. Use it to brief incident responders, update status pages, or feed follow-up investigations and dashboards.

Troubleshooting

  • Missing signal coverage: confirm the data source exists, for example, List Tempo data sources available to me.
  • Disconnected findings: anchor each follow-up prompt to the same timeframe and entity to help the Assistant maintain context.
  • High-cardinality noise: filter by relevant labels, for example, Filter logs to namespace=production and service=checkout (see the sketch after this list).
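
For the label-filtering case, a direct Loki query can confirm that the labels you give the Assistant actually narrow the stream. This sketch counts "timeout" lines per status code through the Loki HTTP API; the Loki address, the namespace and service labels, and the assumption that the logs are logfmt-formatted (so a status_code field can be extracted) are all placeholders to adapt.

# Minimal sketch: count "timeout" log lines per status code via the Loki HTTP
# API, filtered to the labels mentioned above. The address and label names are
# assumptions, and "| logfmt" assumes logfmt-formatted log lines.
import time

import requests

LOKI_URL = "http://loki:3100"  # assumed Loki address
LOGQL = (
    'sum by (status_code) (count_over_time('
    '{namespace="production", service="checkout"} |= "timeout" | logfmt [5m]'
    '))'
)

end_ns = int(time.time() * 1e9)
start_ns = end_ns - 2 * 60 * 60 * 10**9  # same two-hour window as the baseline

resp = requests.get(
    f"{LOKI_URL}/loki/api/v1/query_range",
    params={"query": LOGQL, "start": start_ns, "end": end_ns, "step": "60s"},
    timeout=30,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["values"][-1])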

Next steps

  • Try the Use infrastructure memory guide to add service topology context to your analysis.
  • Build monitoring views with Dashboard management for ongoing correlation.
  • Capture the learnings in your runbooks and link them from the Assistant prompts.