
#Install Alauda AI

Alauda AI now offers flexible deployment options. Starting with Alauda AI 1.4, the Serverless capability is an optional feature, allowing for a more streamlined installation if it's not needed.

To begin, you will need to deploy the Alauda AI Operator. This is the core engine for all Alauda AI products. By default, it uses the KServe Raw Deployment mode for the inference backend, which is particularly recommended for resource-intensive generative workloads. This mode provides a straightforward way to deploy models and offers robust, customizable deployment capabilities by leveraging foundational Kubernetes functionalities.

If your use case requires Serverless functionality, which enables advanced features like scaling to zero on demand for cost optimization, you can optionally install the Alauda AI Model Serving Operator. This operator is not part of the default installation and can be added at any time to enable Serverless functionality.

INFO

Recommended deployment option: For generative inference workloads, the Raw Kubernetes Deployment approach is recommended as it provides the most control over resource allocation and scaling.


#Downloading

Operator Components:

  • Alauda AI Operator

    Alauda AI Operator is the main engine that powers Alauda AI products. It focuses on two core functions: model management and inference services, and provides a flexible framework that can be easily expanded.

    Download package: aml-operator.xxx.tgz

  • Alauda AI Model Serving Operator

    Alauda AI Model Serving Operator provides serverless model inference.

    Download package: kserveless-operator.xxx.tgz

INFO

You can download the packages named 'Alauda AI' and 'Alauda AI Model Serving' from the Marketplace on the Customer Portal website.

#Uploading

Upload both the Alauda AI and Alauda AI Model Serving packages to the cluster where Alauda AI will be used.

#Downloading the violet tool

First, download the violet tool if it is not already present on your machine.

Log into the Web Console and switch to the Administrator view:

  1. Click Marketplace / Upload Packages.
  2. Click Download Packaging and Listing Tool.
  3. Locate the right OS / CPU architecture under Execution Environment.
  4. Click Download to download the violet tool.
  5. Run chmod +x ${PATH_TO_THE_VIOLET_TOOL} to make the tool executable.

#Uploading package

Save the following script as uploading-ai-cluster-packages.sh, then update the environment variables in the script according to the notes below.

uploading-ai-cluster-packages.sh
#!/usr/bin/env bash
export PLATFORM_ADDRESS=https://platform-address  
export PLATFORM_ADMIN_USER=<admin>
export PLATFORM_ADMIN_PASSWORD=<admin-password>
export CLUSTER=<cluster-name>

export AI_CLUSTER_OPERATOR_NAME=<path-to-aml-operator-tarball>
export KSERVELESS_OPERATOR_PKG_NAME=<path-to-kserveless-operator-tarball>

VIOLET_EXTRA_ARGS=()
IS_EXTERNAL_REGISTRY=

# If the image registry type of destination cluster is not platform built-in (external private or public repository).
# Additional configuration is required (uncomment following line):
# IS_EXTERNAL_REGISTRY=true
if [[ "${IS_EXTERNAL_REGISTRY}" == "true" ]]; then
    REGISTRY_ADDRESS=<external-registry-url>
    REGISTRY_USERNAME=<registry-username>
    REGISTRY_PASSWORD=<registry-password>

    VIOLET_EXTRA_ARGS+=(
        --dst-repo "${REGISTRY_ADDRESS}"
        --username "${REGISTRY_USERNAME}"
        --password "${REGISTRY_PASSWORD}"
    )
fi

# Push **Alauda AI Cluster** operator package to destination cluster
violet push \
    "${AI_CLUSTER_OPERATOR_NAME}" \
    --platform-address="${PLATFORM_ADDRESS}" \
    --platform-username="${PLATFORM_ADMIN_USER}" \
    --platform-password="${PLATFORM_ADMIN_PASSWORD}" \
    --clusters="${CLUSTER}" \
    "${VIOLET_EXTRA_ARGS[@]}"

# Push **KServeless** operator package to destination cluster
violet push \
    "${KSERVELESS_OPERATOR_PKG_NAME}" \
    --platform-address="${PLATFORM_ADDRESS}" \
    --platform-username="${PLATFORM_ADMIN_USER}" \
    --platform-password="${PLATFORM_ADMIN_PASSWORD}" \
    --clusters="${CLUSTER}" \
    "${VIOLET_EXTRA_ARGS[@]}"
  1. ${PLATFORM_ADDRESS} is your ACP platform address.
  2. ${PLATFORM_ADMIN_USER} is the username of the ACP platform admin.
  3. ${PLATFORM_ADMIN_PASSWORD} is the password of the ACP platform admin.
  4. ${CLUSTER} is the name of the cluster to install the Alauda AI components into.
  5. ${AI_CLUSTER_OPERATOR_NAME} is the path to the Alauda AI Cluster Operator package tarball.
  6. ${KSERVELESS_OPERATOR_PKG_NAME} is the path to the KServeless Operator package tarball.
  7. ${REGISTRY_ADDRESS} is the address of the external registry.
  8. ${REGISTRY_USERNAME} is the username of the external registry.
  9. ${REGISTRY_PASSWORD} is the password of the external registry.

After configuration, run the script with bash ./uploading-ai-cluster-packages.sh to upload both the Alauda AI and Alauda AI Model Serving operators.

#Installing Alauda AI Operator

#Procedure

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install Alauda AI.

  3. Select Alauda AI, then click Install.

    The Install Alauda AI window will pop up.

  4. In the Install Alauda AI window, configure the following:

  5. Leave Channel unchanged.

  6. Check whether the Version matches the Alauda AI version you want to install.

  7. Leave Installation Location unchanged; it should be aml-operator by default.

  8. Select Manual for Upgrade Strategy.

  9. Click Install.

#Verification

Confirm that the Alauda AI tile shows one of the following states:

  • Installing: installation is in progress; wait for this to change to Installed.
  • Installed: installation is complete.

#Creating Alauda AI Instance

Once Alauda AI Operator is installed, you can create an Alauda AI instance.

#Procedure

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install the Alauda AI Operator.

  3. Select Alauda AI, then click the tile to open its details page.

  4. In the Alauda AI page, click All Instances from the tab.

  5. Click Create.

    Select Instance Type window will pop up.

  6. Locate the AmlCluster tile in Select Instance Type window, then click Create.

    Create AmlCluster form will show up.

  7. Keep the default value (default) for Name.

  8. Select Deploy Flavor from dropdown:

    1. single-node for non-HA deployments.
    2. ha-cluster for HA deployments (recommended for production).
  9. Set KServe Mode to Managed.

  10. Input a valid domain for Domain field.

    INFO

    This domain is used by ingress gateway for exposing model serving services. Most likely, you will want to use a wildcard name, like *.example.com.

    You can specify the following certificate types by updating the Domain Certificate Type field:

    • Provided
    • SelfSigned
    • ACPDefaultIngress

    By default, the configuration uses the SelfSigned certificate type to secure ingress traffic to your cluster; the certificate is stored in the knative-serving-cert secret specified in the Domain Certificate Secret field.

    To use your own certificate, store the certificate secret in the istio-system namespace, set the Domain Certificate Secret field to the name of that secret, and set the Domain Certificate Type field to Provided.
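
    For example, a TLS secret can be created in the istio-system namespace with kubectl. The secret name my-serving-cert and the file paths below are placeholders, not values mandated by the product:

    ```shell
    # Create a TLS secret in istio-system from your certificate and key files.
    # "my-serving-cert" and the file paths are placeholders; substitute your own.
    kubectl create secret tls my-serving-cert \
        --cert=path/to/tls.crt \
        --key=path/to/tls.key \
        -n istio-system
    ```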

  11. In the Serverless Configuration section, set Knative Serving Provider to Operator; leave all other parameters blank.

  12. Under Gitlab section:

    1. Type the URL of self-hosted Gitlab for Base URL.
    2. Type cpaas-system for Admin Token Secret Namespace.
    3. Type aml-gitlab-admin-token for Admin Token Secret Name.
  13. Review above configurations and then click Create.

#Verification

Check the status field of the AmlCluster resource named default:

kubectl get amlcluster default

It should report Ready:

NAME      READY   REASON
default   True    Succeeded
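
If you prefer to block until the instance is ready rather than polling manually, kubectl wait can watch the same condition. The 10-minute timeout below is an arbitrary example:

```shell
# Wait for the AmlCluster custom resource to report the Ready condition.
# The timeout is an example value; adjust it to your environment.
kubectl wait amlcluster/default --for=condition=Ready --timeout=10m
```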

Now, the core capabilities of Alauda AI have been successfully deployed. If you want to quickly experience the product, please refer to the Quick Start.

#Enabling Serverless Functionality

Serverless functionality is an optional capability that requires an additional operator and instance to be deployed.

#1. Installing the Alauda AI Model Serving Operator

#Prerequisites

The Serverless capability relies on the Istio Gateway for its networking. Please install the Service Mesh first by following the documentation.

#Procedure

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install.

  3. Select Alauda AI Model Serving, then click Install.

    The Install Alauda AI Model Serving window will pop up.

  4. In the Install Alauda AI Model Serving window, configure the following:

  5. Leave Channel unchanged.

  6. Check whether the Version matches the Alauda AI Model Serving version you want to install.

  7. Leave Installation Location unchanged; it should be kserveless-operator by default.

  8. Select Manual for Upgrade Strategy.

  9. Click Install.

#Verification

Confirm that the Alauda AI Model Serving tile shows one of the following states:

  • Installing: installation is in progress; wait for this to change to Installed.
  • Installed: installation is complete.

#2. Creating Alauda AI Model Serving Instance

Once Alauda AI Model Serving Operator is installed, you can create an instance. There are two ways to do this:

#Automated Creation (Recommended)

You can have the instance automatically created and managed by the AmlCluster by editing its parameters.

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the cluster where you previously created the AmlCluster instance.

  3. Select Alauda AI, then click the tile to open its details page.

  4. In the Alauda AI page, click All Instances from the tab.

  5. Click name default.

  6. Open the Actions dropdown list and select Update.

    The Update default form will show up.

  7. In the Serverless Configuration section:

    1. Set Knative Serving Provider to Legacy.
    2. Set BuiltIn Knative Serving to Managed.
  8. Leave all other parameters unchanged. Click Update.

#Manual Creation and Integration

You can manually create the KnativeServing (knativeservings.components.aml.dev) instance.

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install.

  3. Select Alauda AI Model Serving, then click the tile to open its details page.

  4. In the Alauda AI Model Serving page, click All Instances from the tab.

  5. Click Create.

    Select Instance Type window will pop up.

  6. Locate the KnativeServing tile in Select Instance Type window, then click Create.

    Create KnativeServing form will show up.

  7. Keep the default value default-knative-serving for Name.

  8. Keep the default value knative-serving for Knative Serving Namespace.

  9. In the Ingress Gateway section, configure the following:

    1. Set the Ingress Gateway Istio Revision to a value that corresponds to your Istio version (e.g., 1-22).
    2. Set a valid domain for the Domain field.
    3. Set the appropriate Domain Certificate Type.
    INFO

    For details on configuring the domain and certificate type, refer to the Creating Alauda AI Instance section above.

  10. In the Values section, configure the following:

    1. Select Deploy Flavor from dropdown:

      1. single-node for non-HA deployments.
      2. ha-cluster for HA deployments (recommended for production).
    2. Set Global Registry Address to match your cluster's registry.

      You can find your cluster's private registry address by following these steps:

      1. In the Web Console, go to Administrator / Clusters.
      2. Select your target cluster.
      3. On the Overview tab, find the Private Registry address value in the Basic Info section.

Finally, configure the AmlCluster instance to integrate with the KnativeServing instance.

In the AmlCluster instance update window, you will need to fill in the required parameters in the Serverless Configuration section.

INFO

After the initial installation, only the Knative Serving Provider field is set (to Operator). You now need to provide values for the following parameters:

  • APIVersion: components.aml.dev/v1alpha1
  • Kind: KnativeServing
  • Name: default-knative-serving
  • Leave Namespace blank.
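
Expressed as a resource fragment, the integration amounts to pointing the AmlCluster at the manually created KnativeServing instance. The field names below are illustrative assumptions inferred from the console labels, not the authoritative AmlCluster schema:

```yaml
# Illustrative sketch only: field names are inferred from the console labels
# and may not match the actual AmlCluster CRD schema.
apiVersion: amlclusters.aml.dev/v1alpha1
kind: AmlCluster
metadata:
  name: default
spec:
  serverless:                        # "Serverless Configuration" section
    knativeServingProvider: Operator
    knativeServingRef:               # reference to the KnativeServing instance
      apiVersion: components.aml.dev/v1alpha1
      kind: KnativeServing
      name: default-knative-serving
      # Namespace is intentionally left blank, per the instructions above.
```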

#Replace GitLab Service After Installation

If you want to replace GitLab Service after installation, follow these steps:

  1. Reconfigure GitLab Service
    Refer to the Pre-installation Configuration and re-execute its steps.

  2. Update Alauda AI Instance

    • In Administrator view, navigate to Marketplace > OperatorHub
    • From the Cluster dropdown, select the target cluster
    • Choose Alauda AI and click the All Instances tab
    • Locate the 'default' instance and click Update
  3. Modify GitLab Configuration
    In the Update default form:

    • Locate the GitLab section
    • Enter:
      • Base URL: The URL of your new GitLab instance
      • Admin Token Secret Namespace: cpaas-system
      • Admin Token Secret Name: aml-gitlab-admin-token
  4. Restart Components
    Restart the aml-controller deployment in the kubeflow namespace.

  5. Refresh Platform Data
    In Alauda AI management view, re-manage all namespaces.

    • In the Alauda AI view, switch from Business view to Admin view
    • On the Namespace Management page, delete all existing managed namespaces
    • Use Managed Namespace to add the namespaces that require Alauda AI integration
    INFO

    Original models won't be migrated automatically. To continue using them, either:

    • Recreate and re-upload the models in the new GitLab instance, or
    • Manually transfer the model files to the new repository
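
The component restart in step 4 can be performed with kubectl:

```shell
# Restart the aml-controller deployment so it picks up the new GitLab settings.
kubectl rollout restart deployment/aml-controller -n kubeflow

# Optionally block until the rollout finishes.
kubectl rollout status deployment/aml-controller -n kubeflow
```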