⚙️ AWS Compute Deep Dive for Backend Engineers

EC2 | ECS | EKS (Kubernetes) | Lambda — Concepts, Architecture & Interview Q&A


🚀 Introduction

Compute is the core layer of any backend architecture.
In AWS, you have multiple compute options — from bare-metal-like EC2 instances to fully managed serverless runtimes like Lambda.

A good backend engineer must understand when to use each service, how to scale and secure workloads, and how to integrate compute with storage, networking, and CI/CD.


🧱 1. Amazon EC2 — Elastic Compute Cloud

🔹 Overview

Amazon EC2 provides virtual machines (instances) in the cloud.
You can choose OS, CPU, memory, storage, and networking configuration.
It’s best suited when you need full control of the OS and runtime.

  • Instance Types: Families optimized for general-purpose, compute, memory, GPU, or storage workloads

  • AMI: Amazon Machine Image — a template for launching instances

  • EBS Volumes: Persistent block storage for EC2

  • Security Groups: Virtual firewalls controlling inbound/outbound traffic

  • Elastic IP: Static public IP address for instances

  • Auto Scaling: Automatically adjusts instance count based on demand

🧠 Interview Questions & Answers

1️⃣ What is the difference between EC2 and Lambda?
→ EC2 is infrastructure-as-a-service — you manage the OS and scaling.
→ Lambda is serverless — AWS manages servers and scales automatically per request.

2️⃣ What are EC2 purchasing options?

  • On-Demand: Pay by the second/hour — flexible but costly for long-term.

  • Reserved Instances: 1–3 year commitment; up to 72% cheaper.

  • Spot Instances: Unused capacity at up to 90% discount (can be interrupted).

  • Savings Plans: Flexible compute discounts for steady workloads.
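As a rough illustration of how the discounts compare, here is a back-of-the-envelope cost sketch. The hourly rates below are hypothetical placeholders, not real AWS prices; only the discount percentages come from the text above:

```python
# Rough monthly cost comparison of EC2 purchasing options.
# NOTE: the hourly rates are hypothetical, not real AWS prices.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Monthly cost in dollars for a single always-on instance."""
    return round(hourly_rate * hours, 2)

on_demand_rate = 0.10                        # hypothetical $/hour
reserved_rate = on_demand_rate * (1 - 0.72)  # up to 72% discount
spot_rate = on_demand_rate * (1 - 0.90)      # up to 90% discount (interruptible)

print(monthly_cost(on_demand_rate))  # 73.0
print(monthly_cost(reserved_rate))
print(monthly_cost(spot_rate))
```

For a steady 24/7 workload the commitment-based options win easily; Spot only makes sense when the workload can tolerate interruption.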

3️⃣ What are placement groups?
→ Logical grouping of instances to control networking:

  • Cluster: Low latency, high throughput — instances packed close together in a single AZ.

  • Spread: Instances across hardware (for HA).

  • Partition: Grouped for large distributed systems like Hadoop.

4️⃣ How do you secure EC2 instances?

  • Use IAM roles instead of access keys.

  • Enable security groups + NACLs.

  • Patch OS regularly.

  • Store secrets in AWS Secrets Manager.


✅ Best Practices

  • Use Auto Scaling Groups (ASG) for elasticity.

  • Use Elastic Load Balancer (ALB/NLB) for fault tolerance.

  • Use EBS gp3 volumes for cost optimization.

  • Always attach an IAM Role for AWS API access.

  • Use EC2 Image Builder for automated AMI updates.


🐳 2. Amazon ECS — Elastic Container Service

🔹 Overview

Amazon ECS is a container orchestration service that runs and scales Docker containers on AWS.
You can run ECS on:

  • EC2 (self-managed cluster) or

  • AWS Fargate (serverless compute for containers)

  • Task Definition: Blueprint describing container image, CPU/memory, ports

  • Service: Keeps a desired number of tasks running, with scaling rules

  • Cluster: Logical group of EC2/Fargate resources

  • Task Role: IAM role assumed by the containers in a task

  • Load Balancing: Uses ALB/NLB to route traffic to tasks
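A minimal Fargate task definition might look like the following (illustrative sketch — account ID, image name, role ARN, and port are placeholders):

```json
{
  "family": "my-api",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "taskRoleArn": "arn:aws:iam::123456789012:role/my-api-task-role",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```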

🧠 Interview Questions & Answers

1️⃣ ECS vs EKS?
→ ECS is AWS-native container orchestration.
→ EKS is Kubernetes-based, more portable but more complex.

2️⃣ ECS vs Fargate?
→ Fargate is the serverless launch type for ECS — no EC2 management; you pay per vCPU-second and GB-second.
→ ECS on EC2 gives you control of the underlying instance fleet.

3️⃣ How does scaling work in ECS?
→ Use Service Auto Scaling based on CloudWatch metrics (CPU, memory, queue length).
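A target-tracking policy keeping average service CPU around 60% could be configured with a policy like this (illustrative values, as passed to Application Auto Scaling):

```json
{
  "TargetValue": 60.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 120,
  "ScaleOutCooldown": 60
}
```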

4️⃣ How do you secure containers?
→ Assign IAM roles at the task level (task roles), not to the underlying EC2 instances.
→ Store images in ECR (Elastic Container Registry) with image scanning enabled.
→ Run containers with a read-only root filesystem.


✅ Best Practices

  • Use Fargate for short-lived or spiky workloads.

  • Use ECS Capacity Providers for efficient scaling.

  • Keep task definitions version-controlled.

  • Use ALB target groups for each service.

  • Centralize logs in CloudWatch Logs or Fluent Bit + OpenSearch.


☸️ 3. Amazon EKS — Elastic Kubernetes Service

🔹 Overview

Amazon EKS provides a fully managed Kubernetes control plane.
You focus on pods, nodes, and deployments — AWS manages the Kubernetes API servers and etcd.

  • Cluster: Managed Kubernetes control plane in AWS

  • Node Group: EC2 or Fargate nodes running workloads

  • Pod: Smallest deployable unit (one or more containers)

  • Service: Exposes pods internally or externally

  • Ingress: Routes HTTP(S) traffic to services via ALB/NLB

  • ConfigMap & Secret: App configuration and credentials
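The components above come together in a manifest like this (illustrative sketch — names, image, replica count, and ports are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
```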

🧠 Interview Questions & Answers

1️⃣ Difference between ECS and EKS?

  • Orchestrator: ECS = AWS proprietary; EKS = Kubernetes (open source)

  • Portability: ECS = tied to AWS; EKS = multi-cloud capable

  • Complexity: ECS = easier; EKS = more complex

  • Ecosystem: ECS = AWS-native tools; EKS = Kubernetes ecosystem

2️⃣ What are the compute options for EKS?

  • Managed EC2 node groups

  • Fargate (serverless pods)

3️⃣ How does networking work in EKS?
→ Uses the VPC CNI plugin: each pod gets a routable IP address from the VPC, allocated via ENIs (Elastic Network Interfaces) attached to the node.
→ CoreDNS provides service discovery inside the cluster.

4️⃣ How do you expose applications?
→ Through Kubernetes Ingress, implemented on AWS by the AWS Load Balancer Controller (formerly the ALB Ingress Controller), which provisions an ALB.
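An Ingress for the AWS Load Balancer Controller might look like this (illustrative sketch — the service name and annotation values are placeholders for a typical internet-facing setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 80
```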


✅ Best Practices

  • Use managed node groups for easy lifecycle management.

  • Use IRSA (IAM Roles for Service Accounts) to grant pod-level permissions.

  • Enable cluster autoscaler and horizontal pod autoscaler (HPA).

  • Integrate CloudWatch, Prometheus, Grafana for observability.

  • Use private endpoint access for secure clusters.


⚡ 4. AWS Lambda — Serverless Compute

🔹 Overview

AWS Lambda runs your code without provisioning servers.
You just upload your function; AWS handles scaling, execution, and fault tolerance.

  • Runtimes: Node.js, Python, Go, Java, .NET, or custom runtimes

  • Trigger Sources: API Gateway, S3, DynamoDB, SNS, EventBridge

  • Scaling: Automatic — per request

  • Billing: Pay only for execution time (per ms)

  • Concurrency: Scales automatically; default limit ~1,000 per Region
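A minimal handler, sketched for an API Gateway proxy-style trigger (the event payload and `name` field are hypothetical; real event shapes depend on the trigger source):

```python
import json

def handler(event, context):
    # Minimal Lambda handler for an API Gateway proxy-style event.
    # `event["body"]` is assumed to be a JSON string (hypothetical payload).
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Handlers are plain functions, so they can be invoked locally for testing:
print(handler({"body": json.dumps({"name": "backend"})}, None))
```

Returning the `statusCode`/`headers`/`body` shape is what API Gateway's proxy integration expects; other triggers (S3, SQS) pass different event structures.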

🧠 Interview Questions & Answers

1️⃣ When would you choose Lambda over EC2?
→ For event-driven, short-duration workloads (e.g., data processing, API backend, file transformation).
→ EC2 is better for long-running or stateful apps.

2️⃣ Lambda vs Fargate?
→ Lambda = Function-level serverless.
→ Fargate = Container-level serverless.

3️⃣ Cold start vs warm start?
→ Cold start = an invocation that needs a new execution environment; runtime and dependency initialization add latency.
→ Warm start = a reused execution environment handles the request → faster execution.
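A common pattern that exploits warm starts is doing expensive setup (SDK clients, DB connections) at module scope so reused environments skip it. The sketch below simulates that with a counter instead of a real client:

```python
import time

INIT_COUNT = 0

def _expensive_init():
    # Simulates creating an SDK client or DB connection at module load.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"created_at": time.time()}

# Module-level code runs once per execution environment (the "cold" part)...
CLIENT = _expensive_init()

def handler(event, context):
    # ...while warm invocations reuse CLIENT instead of re-initializing.
    return {"init_count": INIT_COUNT}

# Two invocations in the same environment share the one-time init:
print(handler({}, None))  # {'init_count': 1}
print(handler({}, None))  # {'init_count': 1}
```

In a real function, `CLIENT` would be something like a database connection pool; initializing it inside the handler instead would pay the setup cost on every invocation.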

4️⃣ How do you secure Lambda functions?

  • Use IAM execution role (least privilege).

  • Store secrets in AWS Secrets Manager or SSM Parameter Store.

  • Enable VPC access only when necessary.

  • Use Dead Letter Queues (DLQ) for failure handling.


✅ Best Practices

  • Use Lambda layers for shared libraries.

  • Use Provisioned Concurrency to eliminate cold starts.

  • Monitor with CloudWatch Logs and X-Ray.

  • Combine with API Gateway or EventBridge for event-driven patterns.

  • Keep functions small and single-purpose (≤ 15 min runtime).


🧮 5. Choosing the Right AWS Compute Service

  • Full OS control, long-running app → EC2

  • Dockerized app, AWS-native orchestration → ECS

  • Kubernetes workload, multi-cloud portability → EKS

  • Event-driven or short-lived tasks → Lambda

  • Containers + serverless together → ECS/EKS on Fargate

⚖️ 6. Architecture Comparison

  • Compute Model: EC2 = VM; ECS = container; EKS = Kubernetes pod; Lambda = function

  • Management Level: EC2 = self-managed; ECS = semi-managed; EKS = managed control plane; Lambda = fully managed

  • Scaling: EC2 = Auto Scaling Group; ECS = Service Auto Scaling; EKS = Cluster Autoscaler + HPA; Lambda = automatic

  • Cost Model: EC2 = pay for uptime; ECS = pay per task/container; EKS = pay for nodes/pods; Lambda = pay per invocation

  • Startup Time: EC2 = minutes; ECS = seconds; EKS = seconds; Lambda = milliseconds

  • Best For: EC2 = stateful workloads; ECS = microservices; EKS = cloud-native apps; Lambda = event-driven tasks

🧠 Interview Cheat Sheet

  • What is EC2 Auto Scaling? → Dynamically adds/removes instances based on demand.

  • ECS vs Fargate? → Fargate = no EC2 management; pay per task.

  • What’s IRSA in EKS? → IAM Roles for Service Accounts — granular, pod-level permissions.

  • Lambda concurrency limit? → ~1,000 per Region by default (can request an increase).

  • How does Lambda scale? → Each concurrent request gets its own execution environment; environments are reused for subsequent requests.

  • How to reduce Lambda cold starts? → Use Provisioned Concurrency or keep functions warm.

  • Which is more portable — ECS or EKS? → EKS (Kubernetes).

  • Can EKS run on Fargate? → ✅ Yes — serverless pods.

🧩 7. Best Practice Summary

  • Security: IAM roles, VPC isolation, and secrets management.

  • Cost Optimization: Spot/Reserved instances; Fargate for bursty workloads.

  • Scaling: Enable Auto Scaling (ASG/HPA).

  • Monitoring: CloudWatch, X-Ray, Prometheus, Grafana.

  • CI/CD: CodePipeline → CodeBuild → deploy to ECS/EKS/Lambda.

  • Resilience: Multi-AZ deployment + load balancers.

🧩 8. Real-World Scenario Examples

Scenario 1:
Microservice API backend with variable load → Use ECS Fargate + ALB for scaling without managing servers.

Scenario 2:
Batch data processing job triggered by S3 uploads → Use Lambda (trigger via S3 event).

Scenario 3:
AI/ML model serving on GPUs → Use EC2 G5 instances or EKS GPU node group.

Scenario 4:
Enterprise-grade microservices requiring Kubernetes governance → Use EKS with GitOps + ArgoCD.

Scenario 5:
Cron-like periodic jobs → Use EventBridge Scheduler + Lambda or ECS Scheduled Tasks.
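For scenario 5, the schedule itself is just a rule definition. A sketch of an EventBridge rule firing daily at 02:00 UTC (the rule name is hypothetical; AWS cron expressions use six fields, with `?` for the unused day field):

```json
{
  "Name": "nightly-report",
  "ScheduleExpression": "cron(0 2 * * ? *)",
  "State": "ENABLED"
}
```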


🧠 Key Takeaways

  • EC2 → Full control.

  • ECS → AWS-managed container orchestration.

  • EKS → Kubernetes power with AWS integration.

  • Lambda → Pure serverless.

  • Choose based on control vs. automation vs. workload pattern.
