
Create and Manage Azure AKS Cluster

Palette supports creating and managing Kubernetes clusters deployed to an Azure subscription. This section guides you through creating an AKS cluster in Azure that is managed by Palette.

Azure clusters can be created under the following scopes:

  • Tenant admin

  • Project scope - This is the recommended scope.

Be aware that clusters created under the Tenant admin scope are not visible under the Project scope.

Prerequisites

These prerequisites must be met before deploying an AKS workload cluster:

  1. You need an active Azure cloud account with sufficient resource limits and permissions to provision compute, network, and security resources in the desired regions.
  2. You must have permission to deploy clusters using the AKS service on Azure.
  3. Register your Azure cloud account in Palette as described in the Creating an Azure Cloud Account section below.
  4. You should have a cluster profile created in Palette for AKS.
  5. Associate an SSH key pair with the cluster worker nodes.
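
If you do not already have an SSH key pair, you can generate one locally. A minimal sketch, assuming an RSA key (which AKS supports); the file path is a placeholder:

# Generates a 4096-bit RSA key pair; the path is a placeholder.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/aks-node-key

Paste the contents of the resulting aks-node-key.pub file into Palette as the public SSH key.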

Additional Prerequisites

There are additional prerequisites if you want to set up Azure Active Directory integration for the AKS cluster:

  1. A Tenant Name must be provided as part of the Azure cloud account creation in Palette.
  2. For the Azure client used in the Azure cloud account, these API permissions have to be provided:

    Microsoft Graph: Group.Read.All (Application type)
    Microsoft Graph: Directory.Read.All (Application type)
  3. You can configure these permissions from the Azure cloud console under App registrations > API permissions for the specified application.
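
These permissions can also be granted with the Azure CLI. The GUIDs below are the published Microsoft Graph application-permission IDs for Group.Read.All and Directory.Read.All; verify them against your tenant before use, and substitute your application's client ID for the placeholder:

# 00000003-0000-0000-c000-000000000000 is the Microsoft Graph API.
# Group.Read.All (Application):     5b567255-7703-4780-807c-7be8301ae99b
# Directory.Read.All (Application): 7ab1d382-f21e-4acd-a863-ba3e13f7da61
az ad app permission add --id <app-id> \
  --api 00000003-0000-0000-c000-000000000000 \
  --api-permissions \
    5b567255-7703-4780-807c-7be8301ae99b=Role \
    7ab1d382-f21e-4acd-a863-ba3e13f7da61=Role

# Grant tenant-wide admin consent for the new permissions.
az ad app permission admin-consent --id <app-id>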

    info

    Palette also enables the provisioning of private AKS clusters via a private cloud gateway (Self-Hosted PCG). The Self-Hosted PCG is an AKS cluster that needs to be launched manually and linked to an Azure cloud account in the Palette management console. Refer to the Self-Hosted PCG documentation for more details.

To create an Azure cloud account, you need the following Azure account information:

  • Client ID
  • Tenant ID
  • Client Secret
  • Tenant Name (optional)
  • To link a Self-Hosted PCG, toggle the Connect Private Cloud Gateway option and select the already created gateway from the drop-down menu.

Note:

For an existing cloud account, go to Edit and toggle the Connect Private Cloud Gateway option to select the created gateway from the drop-down menu.

For Azure cloud account creation, you first need to create an Azure Active Directory (AAD) application that can be used with role-based access control. Follow the steps below to create a new AAD application, assign roles, and create the client secret:


  1. Follow the steps described in the Azure documentation to create a new Azure Active Directory application. Note down your Client ID and Tenant ID.
  2. After creating the application, assign it the minimum required role of Contributor. To assign any type of role, the user must have a minimum role of User Access Administrator. Follow the Assign Role To Application link to learn more about roles.
  3. Follow the steps described in the Create an Application Secret section to create the client application secret. Store the Client Secret safely, as it will not be available as plain text later.
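
As an alternative to the portal steps above, the application, service principal, role assignment, and client secret can be created in one step with the Azure CLI; the service principal name below is a placeholder:

# "palette-aks-sp" is a placeholder name; scope the role to your subscription.
az ad sp create-for-rbac \
  --name "palette-aks-sp" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>"

In the command output, appId corresponds to the Client ID, password to the Client Secret, and tenant to the Tenant ID that Palette asks for.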

Deploy an AKS Cluster


The following steps need to be performed to provision a new cluster:


  1. If you already have a profile to use, go to Cluster > Add a New Cluster > Deploy New Cluster and select an Azure cloud. If you do not have a profile to use, review the Creating a Cluster Profile page for guidance on profile types to create.
  2. Fill in the basic cluster information such as Name, Description, Tags, and Cloud Account.
  3. In the Cloud Account drop-down list, select the Azure cloud account or create a new one. See the Creating an Azure Cloud Account section above.
  4. Next, in the Cluster profile tab, pick AKS from the Managed Kubernetes list and select the AKS cluster profile definition.
  5. Review the Parameters for the selected cluster profile definitions. By default, parameters for all packs are set with values defined in the cluster profile.
  6. Complete the Cluster config section with the information for each parameter listed below.

    Subscription: Select the subscription to be used to access Azure services.
    Region: Select the Azure region where the cluster should be deployed.
    Resource Group: Select the resource group in which the cluster should be deployed.
    SSH Key: The public SSH key for connecting to the nodes. Review Microsoft's supported SSH formats.
    Static Placement: By default, Palette uses dynamic placement, wherein a new virtual network with a public and a private subnet is created to place cluster resources for every cluster. These resources are fully managed by Palette and deleted when the corresponding cluster is deleted. Turn on the Static Placement option if you want to place resources into preexisting virtual networks and subnets. If you select static placement, provide the following placement information:
      Virtual Resource Group: The logical container for grouping related Azure resources.
      Virtual Network: Select the virtual network from the drop-down menu.
      Control plane Subnet: Select the control plane subnet from the drop-down menu.
      Worker Network: Select the worker subnet from the drop-down menu.
    Update worker pools in parallel: Check the box to update the worker pools concurrently.
caution

If the Palette cloud account is created with Disable Properties and the cluster option Static Placement is enabled, the network information from your Azure account will not be imported into Palette. You can manually input the information for the Control Plane Subnet and the Worker Network.

  7. Click Next to configure the node pools.

The maximum number of pods per node in an AKS cluster is 250. If you don't specify maxPods when creating new node pools, the default value of 30 is applied. You can change this value at any time by editing the maxPodPerNode value in the Kubernetes configuration file. Refer to the snippet below:


managedMachinePool:
  maxPodPerNode: 30
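
For reference, node pools created directly with the Azure CLI set the same limit through the --max-pods flag; for Palette-managed clusters, make this change through Palette instead. All names below are placeholders:

# Adds a node pool with a custom pod-per-node limit (placeholder names).
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name userpool1 \
  --max-pods 110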

Node Pools

This section guides you through configuring node pools. As you set up the cluster, the Nodes config section will allow you to customize node pools. AKS clusters are comprised of System and User node pools, and all pool types can be configured to use the Autoscaler, which scales out pools horizontally based on per-node workload counts.

A complete AKS cluster contains the following:


  1. A mandatory primary System Node Pool. This pool runs the critical system pods, such as CoreDNS and metrics-server, needed for the cluster to function. Every system pool must have at least one (1) node; a single node is enough for a development cluster, while three (3) or more are recommended for high-availability production clusters.
  2. One or more Worker Node Pools, sized per workload requirements. Worker node pools can be sized down to zero (0) nodes when not in use.
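
The System/User distinction maps directly to node pool modes in the AKS API. For reference only (Palette manages the pools for its clusters), a minimal Azure CLI sketch with placeholder names:

# A dedicated system pool for critical system pods.
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name systempool \
  --mode System \
  --node-count 3

# A user (worker) pool for application workloads.
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name userpool \
  --mode User \
  --node-count 1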

Create and Remove Node Pools

During cluster creation, you start with a single pool by default.


  1. To add additional pools, click Add Node Pool.
  2. Provide any additional Kubernetes labels to assign to each node in the pool. This step is optional. Use a key:value structure, press the space bar to add additional labels, and click the X to remove unwanted labels.
  3. To remove a pool, click Remove next to the title of that pool.

Create a System Node Pool

  1. Each cluster requires at least one (1) system node pool. To define a pool as a system pool, check the box labeled System Node Pool.
info

Identifying a node pool as a System Pool deactivates taints and the operating system options within the Cloud Configuration section, as system pools cannot be tainted and their OS cannot be changed from Linux. See the AKS documentation for more details on pool limitations.


  2. Provide a name in the Node pool name text box. When creating a node, it is good practice to include an identifying name that matches the node in Azure.
  3. Add the Desired size. For a multi-node pool, you can start with three (3) nodes.
  4. Include Additional Labels. This is optional.
  5. In the Azure Cloud Configuration section, select the Instance type. The cost details are displayed for review.
  6. Enter the Managed Disk information and its size.
  7. If you need additional node pools, click the Add Worker Pool button to create the next pool.

Configure Node Pools

In all types of node pools, configure the following.


  1. Provide a name in the Node pool name text box. When creating a node, it is good practice to include an identifying name.

    Note: Windows clusters have a name limitation of six (6) characters.

  2. Provide how many nodes the pool will contain by adding the count to the box labeled Number of nodes in the pool.
  3. As an alternative to a static node pool count, you can enable the autoscaler controller. Click Enable Autoscaler to switch to the Minimum size and Maximum size fields, which allow AKS to increase or decrease the size of the node pool based on workloads. The smallest size of a dynamic pool is zero (0), and the maximum is one thousand (1000); setting both to the same value is identical to using a static pool size. A CLI sketch of this setting appears after this list.
  4. Provide any additional Kubernetes labels to assign to each node in the pool. This section is optional; you can use a key:value structure. Press the space bar to add additional labels and click the X to remove unwanted labels.
  5. In the Azure Cloud Configuration section:
  • Provide instance details for all nodes in the pool with the Instance type drop-down. The cost details are displayed for review.

info

New worker pools may be added if you want to customize specific worker nodes to run specialized workloads. For example, the default worker pool may be configured with the Standard_D2_v2 instance type for general-purpose workloads, and another worker pool with the Standard_NC12s_v3 instance type can be configured to run GPU workloads.


  • Provide the disk type via the Managed Disk dropdown and the size in Gigabytes (GB) in the Disk size field.
info

A minimum allocation of two (2) CPU cores is required across all worker nodes.

A minimum allocation of 4Gi of memory is required across all worker nodes.


  • When you are done setting up all node pools, click Next to go to the Settings page to Validate and finish the cluster deployment wizard.

    Note: Keep an eye on the Cluster Status once you click Finish Configuration, as it will start as Provisioning. Deploying an AKS cluster takes a considerable amount of time to complete, and the Cluster Status in Palette will show Ready when the cluster is complete and ready to use.
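
For reference, the Enable Autoscaler option described in step 3 above corresponds to the AKS cluster autoscaler settings. A minimal Azure CLI sketch with placeholder names (Palette manages these values for its clusters):

# Enables the cluster autoscaler on an existing user pool (placeholder names).
az aks nodepool update \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name userpool \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 10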


Configure an Azure Active Directory

Azure Active Directory (AAD) can be enabled while creating and linking the Azure cloud account for the Palette platform, using a simple check box. Once the cloud account is created, you can create the Azure AKS cluster. The AAD-enabled AKS cluster will have its Admin kubeconfig file created, which can be downloaded from the Palette UI as the Kubernetes config file. To enable AAD completely, you need to manually create the user's kubeconfig file. The following are the steps to create the custom user kubeconfig file:


  1. Go to the Azure console and create groups in Azure AD; these groups are used to control access to cluster resources through Kubernetes RBAC.
  2. After you create the groups, create users in the Azure AD.
  3. Create custom Kubernetes roles and role bindings for the created users, and apply them using the Admin kubeconfig file. A sketch of such a role binding appears after the note below.

info

The above step can also be completed using the Spectro RBAC pack, available under the Authentication section of Add-on Packs.
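
A minimal sketch of such a role and role binding, assuming a hypothetical dev namespace; the Azure AD group created in step 1 is referenced by its object ID:

# Read-only access to pods in the "dev" namespace (hypothetical example).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: dev
subjects:
  - kind: Group
    name: "<aad-group-object-id>"   # object ID of the Azure AD group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Apply the manifest with the Admin kubeconfig, for example: kubectl --kubeconfig <admin-kubeconfig> apply -f role-binding.yaml.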


  4. Once the roles and role bindings are created, these roles can be linked to the groups created in Azure AD.
  5. The users can now access the Azure clusters with the complete benefits of AAD. To get the user-specific kubeconfig file, run the following command:

az aks get-credentials --resource-group <resource-group> --name <cluster-name>
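
The first kubectl command run with this kubeconfig prompts an Azure AD sign-in (a device-code flow by default), after which requests are authorized against the role bindings created above:

# Triggers the AAD device-code login on first use.
kubectl get pods --namespace dev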

