
Introduction


Welcome to Alauda AI! Alauda AI delivers best practices for large-model and MLOps platforms, helping you work efficiently across scenarios such as large model applications, AI applications, and machine learning. With Alauda AI, you can quickly complete the following common tasks:

  • Model storage and version management
  • Release and operation of inference services for models, including large models
  • Agent orchestration, and AI application development and release
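As a rough illustration of the second task, releasing an inference service on Alauda AI boils down to declaring a KServe `InferenceService` resource. The manifest below is a minimal sketch only; the model name, format, and storage URI are hypothetical placeholders, and your actual values will depend on your model repository and runtime configuration (see the Inference Service guides and the `InferenceService [serving.kserve.io/v1beta1]` API reference):

```yaml
# Minimal example: serve a model from object storage with KServe.
# "demo-model" and the storageUri are illustrative, not real defaults.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: demo-model
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn            # runtime is selected from the model format
      storageUri: "s3://models/demo-model"  # location of the stored, versioned model
```

Applying this manifest (for example with `kubectl apply -f`) asks the platform to pull the model artifacts and expose them as an inference endpoint, which is the workflow the guides in this manual walk through in detail.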

In this user manual, we introduce the various functions of the Alauda AI platform so that you can make full use of the tools and services we provide. Whether you are a data scientist, machine learning engineer, or application developer, we believe our platform will bring convenience and efficiency to your work.

Introduction to MLOps and LLMOps

MLOps (Machine Learning Operations) describes a set of principles and tools covering the development, training, release, and operation of machine learning models, large models, and other AI applications, and has become the best practice for releasing AI applications efficiently, accurately, and traceably.

Alauda AI not only provides tool support for the core MLOps workflow, but also offers a streamlined user interface for practicing LLMOps (Large Language Model Operations). From releasing inference services for large models to building intelligent agents and applications, everything can be completed without writing code.