
How to monitor an LLM Inference deployment

Published on 06 March 2024. Reviewed on 06 March 2024.

This documentation page shows you how to monitor your LLM Inference deployment with Cockpit.

Cockpit dashboard updates

Starting in April 2024, a new version of Cockpit will be released.

This version introduces regionalization, offering more flexibility and resilience for seamless monitoring. If you created customized dashboards with data for your Scaleway resources before April 2024, you will need to update your queries in Grafana to use the new regionalized data sources.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • An LLM Inference deployment

How to monitor your LLM Inference deployment

  1. Click LLM Inference in the AI & Data section of the Scaleway console side menu. A list of your deployments displays.
  2. Click a deployment name, or click the «see more» icon > More info, to access the deployment dashboard.
  3. Click the Monitoring tab of your deployment. The Cockpit overview displays.
  4. Click Open Grafana metrics dashboard to open your Cockpit’s Grafana interface.
  5. Authenticate with your Grafana credentials. The Grafana dashboard displays.
  6. Select your LLM Inference dashboard from the list of your managed dashboards to visualize your metrics.
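Beyond browsing dashboards in Grafana, Cockpit exposes Prometheus-compatible data sources that you can query programmatically. The sketch below is a minimal example of running an instant PromQL query; the endpoint URL and token are hypothetical placeholders (not taken from this page), so replace them with the data source URL and token shown in your own Cockpit settings.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical placeholders -- replace with the metrics data source URL
# and the token generated in your Cockpit settings.
COCKPIT_METRICS_URL = "https://<your-datasource>.metrics.cockpit.<region>.scw.cloud"
COCKPIT_TOKEN = "<your-cockpit-token>"


def query_metric(promql: str) -> dict:
    """Run an instant PromQL query against a Prometheus-compatible endpoint."""
    url = (COCKPIT_METRICS_URL + "/prometheus/api/v1/query?"
           + urllib.parse.urlencode({"query": promql}))
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {COCKPIT_TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def extract_values(response: dict) -> list[tuple[dict, float]]:
    """Flatten a Prometheus instant-query response into (labels, value) pairs."""
    return [(r["metric"], float(r["value"][1]))
            for r in response.get("data", {}).get("result", [])]
```

For example, `extract_values(query_metric("up"))` would return one `(labels, value)` pair per scraped target, which you can feed into your own alerting or reporting scripts instead of reading the dashboard manually.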
See also

  • How to create a deployment
  • How to manage allowed IP addresses
© 2023-2024 – Scaleway