#Logging

#Feature Overview

The Logging feature in Alauda AI provides real-time monitoring and analysis of the logs generated by Inference Service pods. With immediate access to this log data, users can troubleshoot issues, track service behavior, and maintain operational visibility. It is particularly useful for debugging model inference errors.

#Core Features

  • Real-Time Log Streaming: Automatically displays logs as they are generated by the selected pod.
  • Replica Pod Selection: Switch between pods to view logs from specific replicas.
  • Keyword Search (Find): Locate specific log entries using exact or partial keyword matching.
  • Log Export (Export): Download logs in .txt format for offline analysis or archival.

#Accessing Logs

Follow these steps to view logs for an Inference Service:

#Step 1: Navigate to the Inference Service

  1. Go to Inference Services in the left navigation pane.
  2. Click the target Inference Service name to open its details page.

#Step 2: Open the Logging Interface

  1. Select the Logging tab in the tab bar below the Inference Service name.

#Step 3: Select a Pod

  1. Use the Replica dropdown to choose a pod.
  2. The log viewer will automatically display and stream logs from the selected pod.
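Conceptually, the real-time streaming in Step 3 works like tailing a file: the viewer repeatedly reads whatever new lines the pod has produced. The following Python sketch is purely illustrative (the actual viewer streams from the Kubernetes API, not a local file; the `follow` helper and its parameters are hypothetical):

```python
import time


def follow(log_file, poll_interval=0.5, max_polls=None):
    """Yield new lines as they are appended to an already-open log file.

    Illustrative of real-time log streaming only; not how the Alauda AI
    log viewer is implemented. `max_polls` bounds how many empty reads
    we tolerate before stopping, so the sketch can terminate.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        line = log_file.readline()
        if line:
            # A complete new line arrived; hand it to the viewer.
            yield line.rstrip("\n")
        else:
            # Nothing new yet; wait briefly before polling again.
            polls += 1
            time.sleep(poll_interval)
```

With `kubectl`, the equivalent behavior is `kubectl logs --follow <pod>` against the replica chosen in the dropdown.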

#Using the Find Feature

Search logs efficiently with keyword matching:

#Step 1: Activate Search

  1. Click the Find button in the log viewer's upper-right corner.

#Step 2: Enter Search Term

  1. Type a keyword or phrase in the search field.

#Step 3: Navigate Results

  1. Matching entries are highlighted in yellow.
  2. Use ↑ and ↓ buttons to jump between matches.
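The Find behavior above amounts to indexing the matching lines and stepping between them. This is a hypothetical sketch of case-insensitive partial matching with wrap-around ↑/↓ navigation, not the viewer's actual code:

```python
class LogSearch:
    """Index log lines matching a keyword and step between matches."""

    def __init__(self, lines, keyword):
        # Case-insensitive partial matching, like typing into the Find field.
        kw = keyword.lower()
        self.matches = [i for i, line in enumerate(lines) if kw in line.lower()]
        self.pos = 0 if self.matches else -1

    def next(self):
        """Jump to the next matching line (wraps around), like the ↓ button."""
        if not self.matches:
            return None
        self.pos = (self.pos + 1) % len(self.matches)
        return self.matches[self.pos]

    def prev(self):
        """Jump to the previous matching line (wraps around), like the ↑ button."""
        if not self.matches:
            return None
        self.pos = (self.pos - 1) % len(self.matches)
        return self.matches[self.pos]
```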

#Exporting Logs

Export logs for offline storage and analysis:

#Step 1: Initiate Export

  1. Click the Export button in the log viewer's upper-right corner.

#Step 2: Download File

  1. The browser will automatically download the log file to your local system.
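In essence, the export writes the current log buffer to a plain-text `.txt` file. A minimal Python sketch of that idea (the `export_logs` helper and its file-naming scheme are hypothetical, not Alauda AI's actual behavior):

```python
from datetime import datetime
from pathlib import Path


def export_logs(lines, pod_name, out_dir="."):
    """Write collected log lines to a .txt file and return its path."""
    # Hypothetical naming scheme: <pod>-<timestamp>.txt
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    path = Path(out_dir) / f"{pod_name}-{stamp}.txt"
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return path
```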