Creating a Cluster
This topic summarizes the workflow to create and deploy a cluster, followed by procedure details. After adding your cloud credentials to Megaport ONE, you can deploy a cluster.
To begin, the basic steps are:
- Create a cluster.
- Configure provider connection details - Megaport ONE provides seamless access to hybrid and multicloud environments: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, Equinix Bare Metal, Oracle Cloud, and DigitalOcean. Megaport ONE automates connections by predicting and applying best practice configurations. You deploy a Kubernetes cluster to any Megaport ONE provider that your account credentials allow.
Megaport ONE simplifies deployments for pre-existing cloud configurations by importing your preconfigured VPC, VNET, and VCN information. After logging in to a CSP with your account credentials and selecting the location of the pre-existing cloud configuration, Megaport ONE collects and imports the details. Once Megaport ONE imports the resource details, you can build a cluster without returning to the provider to complete the configuration.
For account credential details, see Adding Credentials.
- Configure a node pool - A node pool groups individual virtual or physical machines deployed in the same region into a single unit within the cluster. The nodes in a cluster are the machines that run the end-user applications.
- Deploy the cluster.
- Add containerized applications to the cluster. For details, see Applications.
To create a new cluster
- Log in to the Megaport ONE Portal.
- Choose Compute > Clusters.
- Click Deploy New Cluster.
- Enter a unique cluster name of 5 to 38 characters. Use lowercase letters, numbers, or hyphens; don’t use dots, spaces, or underscores.
- Select a cloud, colocation, or edge provider.
The list includes available providers based on your account credentials.
- Select a data center location for the cluster.
The list includes available data center locations based on your account.
To use a preconfigured cloud configuration, select the location of the pre-existing cloud resources.
A GPU ENABLED flag appears next to locations with GPU (graphics processing unit) capabilities. GPUs are specialized processors that process many pieces of data simultaneously, accelerating the workload and increasing performance. Use GPUs for high-performance acceleration of a specific Kubernetes workload.
A Kubernetes version is preselected for you based on available versions for your provider and location.
The default selection is the best choice if you don’t have a specific need for an earlier version.
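The cluster-naming rules above can be expressed as a simple check. This is a minimal sketch of the documented rules, not Megaport ONE's actual validator:

```python
import re

# Illustrative check for the documented cluster-name rules:
# 5 to 38 characters, lowercase letters, digits, or hyphens only.
# (Assumption: this regex mirrors the rules above; it is not
# Megaport ONE's own validation code.)
CLUSTER_NAME_RE = re.compile(r"^[a-z0-9-]{5,38}$")

def is_valid_cluster_name(name: str) -> bool:
    return bool(CLUSTER_NAME_RE.fullmatch(name))

print(is_valid_cluster_name("prod-cluster-01"))  # True
print(is_valid_cluster_name("My_Cluster"))       # False: uppercase and underscore
print(is_valid_cluster_name("abcd"))             # False: too short
```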
The next step is to configure provider connection details that connect the cluster compute resources with the service resource provider. The cluster can connect to any consumable endpoint, whether that is on-premises, public cloud, or bare metal.
Configuration details vary by provider.
To configure provider connection details
The configuration options reflect the required settings for your provider.
- Amazon Web Services - Select a VPC and at least two AWS subnets for the cluster.
- DigitalOcean - No additional configuration details needed.
- Equinix - Select your Equinix Metal project SSH (Secure Shell) key from the drop-down list. This is a public project key previously added to your Equinix Metal account and is specific to your single project.
- Google Cloud Platform - Select a Google VPC network and a subnet for the cluster.
- Microsoft Azure - Select a network configuration using kubenet with a default VNet, or select a network configuration using Azure CNI with the option to customize your VNet.
- Oracle Cloud - Provide these details to connect to your preconfigured Oracle Cloud Infrastructure. The fields are prepopulated based on your configuration. All fields are required:
- Virtual Cloud Network (VCN) Oracle ID - Select the VCN in which your cluster resides.
- Load Balancer Subnets - Select the subnets to provision the load balancers.
- Kubernetes Master Endpoint Subnet Oracle ID - Select the OCID of the Kubernetes control plane endpoint subnet. You access the Kubernetes API on the cluster control plane through this endpoint, which is hosted in a subnet of the VCN in which the cluster resides.
- Node Pool Subnet ID - Select the ID of the subnet that hosts the node pool within the cluster.
- Node Pool Availability Domain - Select the node pool availability domain (one or more data centers located in a region) for the Kubernetes endpoint to reside in.
The next step is to configure the node machine compute type that runs the underlying cluster.
The machine type definition includes the CPU and memory specifications. Use the filter to limit the number of compute options returned by the provider.
Filters provide a way to find a GPU-enabled provider for high-performance acceleration of a specific Kubernetes workload. Select a resource from the GPU capabilities drop-down list to narrow your search.
All nodes in a pool must be the same machine type.
For details on GPU resources, see Discover GPU Resources.
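The machine-type filtering described above can be sketched as follows. The catalog entries and field names here are hypothetical, for illustration only; real options come from your provider:

```python
# Hypothetical machine-type catalog; actual options are returned
# by the selected provider.
MACHINE_TYPES = [
    {"name": "standard-2", "cpus": 2,  "ram_gb": 8,  "gpu": None},
    {"name": "standard-8", "cpus": 8,  "ram_gb": 32, "gpu": None},
    {"name": "gpu-a100",   "cpus": 12, "ram_gb": 85, "gpu": "NVIDIA A100"},
]

def filter_machine_types(min_cpus=0, min_ram_gb=0, gpu=None):
    """Narrow the compute options, like the portal's CPU, RAM, and GPU filters."""
    return [
        m for m in MACHINE_TYPES
        if m["cpus"] >= min_cpus
        and m["ram_gb"] >= min_ram_gb
        and (gpu is None or m["gpu"] == gpu)
    ]

print([m["name"] for m in filter_machine_types(min_cpus=8)])
# ['standard-8', 'gpu-a100']
```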
To configure a node pool
- Select the number of CPU nodes and the minimum RAM.
- Select whether the node pool has GPU capabilities.
The location description indicates its GPU capability.
- Select the machine type.
A verification appears when the machine type is available in the selected location.
A GPU type is automatically selected based on the machine type.
- Select the number of nodes in the pool.
- Enable or disable autoscaling.
Autoscaling adjusts the size of a Kubernetes cluster by increasing or decreasing the number of nodes to meet the current demand for service. As conditions change, autoscaling automatically scales the cluster nodes based on metrics such as CPU and RAM use.
After enabling autoscaling, set the minimum and maximum number of nodes in the pool.
- Select Yes for direct access to the Kubernetes Dashboard, the official web-based user interface (UI) that gives administrators control of Kubernetes clusters. Direct access lets you view details and troubleshoot the cluster from relevant pages in Megaport ONE using the browser. By default, the Kubernetes Dashboard is enabled.
You can also access the Kubernetes Dashboard by clicking the cluster name on the Megaport ONE dashboard and then selecting Actions > Launch Kubernetes Dashboard.
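The autoscaling behavior described above can be illustrated with a toy sizing rule. This is a simplified sketch only; the actual autoscaler algorithms and metrics are provider-specific:

```python
import math

def desired_node_count(current_nodes, cpu_percent, target_percent=60,
                       min_nodes=1, max_nodes=10):
    """Toy scaling rule: size the pool so average CPU use approaches a
    target utilization, clamped to the configured minimum and maximum.
    (Illustration only; not the actual cluster autoscaler algorithm.)"""
    desired = math.ceil(current_nodes * cpu_percent / target_percent)
    return max(min_nodes, min(max_nodes, desired))

print(desired_node_count(4, cpu_percent=90))  # 6: scale out under load
print(desired_node_count(4, cpu_percent=15))  # 1: scale in when idle
```

Note how the minimum and maximum bounds you set in the portal cap whatever node count the scaling rule would otherwise request.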
Deploying a cluster
After creating a cluster, the next step is to deploy it. Deploying a cluster connects it to the provider and provisions the cluster on the account connected with the Megaport ONE instance.
To deploy a cluster
- Click Deploy Cluster.
- Review the configuration.
- Click Confirm to deploy the cluster or Cancel to change settings.
The cluster starts communicating with the provider and proceeds through the deployment process, which can take approximately 10 to 25 minutes, depending on the underlying provider’s current traffic level.
Click the status indicator to view the cluster deployment states.
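If you script deployments, the wait can be automated along these lines. Here `get_cluster_state` is a hypothetical placeholder for whatever status call your tooling exposes, not a documented Megaport ONE API:

```python
import time

def wait_for_cluster(get_cluster_state, cluster_id,
                     timeout_s=25 * 60, poll_s=30):
    """Poll until the cluster reaches a terminal state.

    `get_cluster_state` is a stand-in for a real status call;
    deployment typically takes 10 to 25 minutes.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_cluster_state(cluster_id)
        if state == "RUNNING":
            return True
        if state == "ERROR":
            raise RuntimeError(f"cluster {cluster_id} failed to provision")
        time.sleep(poll_s)
    raise TimeoutError(f"cluster {cluster_id} not ready after {timeout_s}s")

# Demo with a stubbed status source standing in for a real API call.
states = iter(["PROVISIONING", "PROVISIONING", "RUNNING"])
print(wait_for_cluster(lambda _id: next(states), "demo", poll_s=0))  # True
```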
After a successful cluster deployment, you can control it completely from the Megaport ONE platform. If you want more details and functionality, the Clusters page provides a direct link to the Kubernetes Dashboard.
If the deployment state of a cluster indicates an issue, the cluster could not be deployed. The status indicator reports any provisioning errors and where in the provisioning process the error occurred.
To see status error details
- Click the state indicator and click More Information.
The cause of the failure is listed to pinpoint the problem.
Once you have deployed a Kubernetes cluster, it can receive cloud-native applications with Helm charts. For details, see Viewing and Managing Applications.
You can set alerts to monitor the cluster. For details, see Configuring a Cluster.