Saturday, September 21, 2024

Amazon SageMaker HyperPod introduces Amazon EKS support


Today, we’re happy to announce Amazon Elastic Kubernetes Service (EKS) support in Amazon SageMaker HyperPod, purpose-built infrastructure engineered with resilience at its core for foundation model (FM) development. This new capability enables customers to orchestrate HyperPod clusters using EKS, combining the power of Kubernetes with Amazon SageMaker HyperPod’s resilient environment designed for training large models. Amazon SageMaker HyperPod helps efficiently scale across more than a thousand artificial intelligence (AI) accelerators, reducing training time by up to 40%.

Amazon SageMaker HyperPod now enables customers to manage their clusters using a Kubernetes-based interface. This integration allows seamless switching between Slurm and Amazon EKS for optimizing various workloads, including training, fine-tuning, experimentation, and inference. The CloudWatch Observability EKS add-on provides comprehensive monitoring capabilities, offering insights into CPU, network, disk, and other low-level node metrics on a unified dashboard. This enhanced observability extends to resource utilization across the entire cluster, node-level metrics, pod-level performance, and container-specific utilization data, facilitating efficient troubleshooting and optimization.
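The CloudWatch Observability EKS add-on can also be installed programmatically. The helper below is a minimal sketch, not part of this launch: it assumes you pass in a boto3 EKS client (e.g. `boto3.client("eks")`) so the call can be exercised without AWS credentials.

```python
def enable_container_insights(eks_client, cluster_name):
    """Install the Amazon CloudWatch Observability EKS add-on.

    eks_client: a boto3 EKS client (boto3.client("eks")), passed in
    explicitly so the call can be stubbed without AWS credentials.
    """
    return eks_client.create_addon(
        clusterName=cluster_name,
        addonName="amazon-cloudwatch-observability",
    )
```

In practice you would call this once per EKS cluster and then open the Container Insights dashboard in the CloudWatch console.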

Launched at re:Invent 2023, Amazon SageMaker HyperPod has become a go-to solution for AI startups and enterprises looking to efficiently train and deploy large-scale models. It is compatible with SageMaker’s distributed training libraries, which offer Model Parallel and Data Parallel software optimizations that help reduce training time by up to 20%. SageMaker HyperPod automatically detects and repairs or replaces faulty instances, enabling data scientists to train models uninterrupted for weeks or months. This lets data scientists focus on model development rather than managing infrastructure.

The integration of Amazon EKS with Amazon SageMaker HyperPod takes advantage of Kubernetes, which has become popular for machine learning (ML) workloads due to its scalability and rich open-source tooling. Organizations often standardize on Kubernetes for building applications, including those required for generative AI use cases, because it allows reuse of capabilities across environments while meeting compliance and governance standards. Today’s announcement enables customers to scale and optimize resource utilization across more than a thousand AI accelerators. This flexibility enhances the developer experience, containerized app management, and dynamic scaling for FM training and inference workloads.

Amazon EKS support in Amazon SageMaker HyperPod strengthens resilience through deep health checks, automated node recovery, and job auto-resume capabilities, ensuring uninterrupted training for large-scale and long-running jobs. Job management can be streamlined with the optional HyperPod CLI, designed for Kubernetes environments, though customers can also use their own CLI tools. Integration with Amazon CloudWatch Container Insights provides advanced observability, offering deeper insights into cluster performance, health, and utilization. Additionally, data scientists can use tools like Kubeflow for automated ML workflows. The integration also includes Amazon SageMaker managed MLflow, providing a robust solution for experiment tracking and model management.

At a high level, an Amazon SageMaker HyperPod cluster is created by the cloud admin using the HyperPod cluster API and is fully managed by the HyperPod service, removing the undifferentiated heavy lifting involved in building and optimizing ML infrastructure. Amazon EKS is used to orchestrate these HyperPod nodes, similar to how Slurm orchestrates HyperPod nodes, providing customers with a familiar Kubernetes-based administrator experience.

Let’s explore how to get started with Amazon EKS support in Amazon SageMaker HyperPod
I start by preparing the scenario, checking the prerequisites, and creating an Amazon EKS cluster with a single AWS CloudFormation stack following the Amazon SageMaker HyperPod EKS workshop, configured with VPC and storage resources.

To create and manage Amazon SageMaker HyperPod clusters, I can use either the AWS Management Console or the AWS Command Line Interface (AWS CLI). Using the AWS CLI, I specify my cluster configuration in a JSON file. I choose the Amazon EKS cluster created previously as the orchestrator of the SageMaker HyperPod cluster. Then, I create the cluster worker nodes that I call “worker-group-1”, with a private Subnet, NodeRecovery set to Automatic to enable automatic node recovery, and for OnStartDeepHealthChecks I add InstanceStress and InstanceConnectivity to enable deep health checks.

cat > eli-cluster-config.json << EOL
{
    "ClusterName": "example-hp-cluster",
    "Orchestrator": {
        "Eks": {
            "ClusterArn": "${EKS_CLUSTER_ARN}"
        }
    },
    "InstanceGroups": [
        {
            "InstanceGroupName": "worker-group-1",
            "InstanceType": "ml.p5.48xlarge",
            "InstanceCount": 32,
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://${BUCKET_NAME}",
                "OnCreate": "on_create.sh"
            },
            "ExecutionRole": "${EXECUTION_ROLE}",
            "ThreadsPerCore": 1,
            "OnStartDeepHealthChecks": [
                "InstanceStress",
                "InstanceConnectivity"
            ]
        },
  ....
    ],
    "VpcConfig": {
        "SecurityGroupIds": [
            "$SECURITY_GROUP"
        ],
        "Subnets": [
            "$SUBNET_ID"
        ]
    },
    "ResilienceConfig": {
        "NodeRecovery": "Automatic"
    }
}
EOL

You can add InstanceStorageConfigs to provision and mount additional Amazon EBS volumes on HyperPod nodes.
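For example, an instance group entry could include a block like the following sketch; the 500 GB volume size is an arbitrary illustration, so check the CreateCluster API reference for the accepted fields and limits.

```json
"InstanceStorageConfigs": [
    {
        "EbsVolumeConfig": {
            "VolumeSizeInGB": 500
        }
    }
]
```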

To create the cluster using the SageMaker HyperPod APIs, I run the following AWS CLI command:

aws sagemaker create-cluster \
    --cli-input-json file://eli-cluster-config.json

The command returns the ARN of the new HyperPod cluster.

{
    "ClusterArn": "arn:aws:sagemaker:us-east-2:ACCOUNT-ID:cluster/wccy5z4n4m49"
}

I then verify the HyperPod cluster status in the SageMaker Console, waiting until the status changes to InService.

Alternatively, you can check the cluster status using the AWS CLI by running the describe-cluster command:

aws sagemaker describe-cluster --cluster-name my-hyperpod-cluster
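Rather than watching the console, the same status check can be scripted. The sketch below is an illustration, not part of this launch: it polls DescribeCluster and assumes a boto3 SageMaker client is passed in, so it can be exercised without AWS credentials.

```python
import time

def wait_for_in_service(sm_client, cluster_name, poll_seconds=30, timeout=3600):
    """Poll DescribeCluster until the HyperPod cluster is InService.

    sm_client: a boto3 SageMaker client (boto3.client("sagemaker")),
    passed in explicitly so it can be stubbed without AWS credentials.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = sm_client.describe_cluster(ClusterName=cluster_name)["ClusterStatus"]
        if status == "InService":
            return status
        if status == "Failed":
            raise RuntimeError(f"Cluster {cluster_name} failed to provision")
        time.sleep(poll_seconds)
    raise TimeoutError(f"Cluster {cluster_name} not InService within {timeout}s")
```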

Once the cluster is ready, I can access the SageMaker HyperPod cluster nodes. For most operations, I can use kubectl commands to manage resources and jobs from my development environment, using the full power of Kubernetes orchestration while benefiting from SageMaker HyperPod’s managed infrastructure. On this occasion, for advanced troubleshooting or direct node access, I use AWS Systems Manager (SSM) to log into individual nodes, following the instructions on the Access your SageMaker HyperPod cluster nodes page.

To run jobs on the SageMaker HyperPod cluster orchestrated by EKS, I follow the steps outlined on the Run jobs on SageMaker HyperPod cluster through Amazon EKS page. You can use the HyperPod CLI and the native kubectl command to find available HyperPod clusters and submit training jobs (Pods). For managing ML experiments and training runs, you can use the Kubeflow Training Operator, Kueue, and Amazon SageMaker managed MLflow.
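As an illustration of submitting work with plain kubectl, the sketch below defines a simple Kubernetes Job that requests one GPU; the job name, namespace, container image, and command are placeholder assumptions, not values from this post.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-training-job      # hypothetical job name
  namespace: default           # assumed namespace
spec:
  template:
    spec:
      containers:
        - name: trainer
          image: public.ecr.aws/docker/library/python:3.11  # placeholder image
          command: ["python", "-c", "print('training step')"]
          resources:
            limits:
              nvidia.com/gpu: 1   # schedule onto a GPU-backed HyperPod node
      restartPolicy: Never
```

Saved as job.yaml, this would be submitted with kubectl apply -f job.yaml and monitored with kubectl get pods.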

Finally, in the SageMaker Console, I can view the Status and Kubernetes version of recently added EKS clusters, providing a comprehensive overview of my SageMaker HyperPod environment.

And I can monitor cluster performance and health insights using Amazon CloudWatch Container Insights.

Things to know
Here are some key things you should know about Amazon EKS support in Amazon SageMaker HyperPod:

Resilient Environment – This integration provides a more resilient training environment with deep health checks, automated node recovery, and job auto-resume. SageMaker HyperPod automatically detects, diagnoses, and recovers from faults, allowing you to continually train foundation models for weeks or months without disruption. This can reduce training time by up to 40%.

Enhanced GPU Observability – Amazon CloudWatch Container Insights provides detailed metrics and logs for your containerized applications and microservices. This enables comprehensive monitoring of cluster performance and health.

Scientist-Friendly Tooling – This launch includes a custom HyperPod CLI for job management, Kubeflow Training Operators for distributed training, Kueue for scheduling, and integration with SageMaker managed MLflow for experiment tracking. It also works with SageMaker’s distributed training libraries, which provide Model Parallel and Data Parallel optimizations to significantly reduce training time. These libraries, combined with auto-resumption of jobs, enable efficient and uninterrupted training of large models.

Flexible Resource Utilization – This integration enhances developer experience and scalability for FM workloads. Data scientists can efficiently share compute capacity across training and inference tasks. You can use your existing Amazon EKS clusters or create and attach new ones to HyperPod compute, and bring your own tools for job submission, queuing, and monitoring.
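The job auto-resume mentioned above is requested per job through annotations on the training resource. The fragment below is a sketch assuming the sagemaker.amazonaws.com annotation keys described in the HyperPod documentation; treat the exact keys, values, and job name as assumptions to verify before use.

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: demo-auto-resume-job   # hypothetical job name
  annotations:
    # Assumed HyperPod annotations: enable auto-resume after node recovery
    sagemaker.amazonaws.com/enable-job-auto-resume: "true"
    sagemaker.amazonaws.com/job-max-retry-count: "2"
```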

To get started with Amazon SageMaker HyperPod on Amazon EKS, you can explore resources such as the SageMaker HyperPod EKS Workshop, the aws-do-hyperpod project, and the awsome-distributed-training project. This launch is generally available in the AWS Regions where Amazon SageMaker HyperPod is available, except Europe (London). For pricing information, visit the Amazon SageMaker Pricing page.

This blog post was a collaborative effort. I would like to thank Manoj Ravi, Adhesh Garg, Tomonori Shimomura, Alex Iankoulski, Anoop Saha, and the entire team for their significant contributions in compiling and refining the information presented here. Their collective expertise was crucial in creating this comprehensive article.

– Eli.
