
Google Cloud Platform Tutorial: The Ultimate Proven Guide to Master GCP in 2025

In the fiercely competitive cloud computing landscape, Google Cloud Platform (GCP) stands as one of the most technically innovative and rapidly growing cloud platforms in the world. Built on the same infrastructure that powers Google Search, YouTube, Gmail, Google Maps, and the world’s most ambitious AI research — GCP brings Google’s legendary engineering excellence to the fingertips of developers, data engineers, data scientists, and enterprises around the globe.

If you’re looking for a comprehensive Google Cloud Platform tutorial that takes you from beginner to confident practitioner — covering core services, hands-on deployment examples, data and AI capabilities, security, DevOps, certifications, and career paths — this is the definitive guide you’ve been searching for.

This ultimate Google Cloud Platform tutorial covers everything you need: what GCP is and why it matters, setting up your account, the GCP organizational structure, core compute services, storage options, world-class data and analytics tools, networking, databases, AI and machine learning services, identity and security, serverless computing, Kubernetes, CI/CD, monitoring, and the certification path to launch your GCP career in 2025.

Whether you’re a developer building cloud-native applications, a data engineer designing data pipelines, a machine learning practitioner leveraging Google’s AI infrastructure, or an IT professional migrating enterprise workloads — this Google Cloud Platform tutorial is your complete starting point.

Let’s unlock the power of Google Cloud.

What is Google Cloud Platform? — The Foundation

Google Cloud Platform (GCP) is Google’s suite of cloud computing services that runs on the same infrastructure Google uses internally for its end-user products. GCP offers a comprehensive range of services across computing, storage, data analytics, machine learning, networking, databases, and DevOps — all delivered over the internet on a pay-as-you-go basis.

GCP by the Numbers (2025)

  • 40+ GCP regions and 120+ zones worldwide
  • 200+ products and services across all major cloud categories
  • $40+ billion in annual cloud revenue (fastest-growing major cloud)
  • Google’s global network — one of the largest private networks on Earth, covering over 1 million miles of fiber and 30+ submarine cable systems
  • 10 Gbps+ network speeds between regions
  • Carbon-neutral since 2007; matching 100% of its electricity use with renewable energy purchases since 2017

Why Choose Google Cloud Platform?

1. Data Analytics and Big Data Leadership Google invented MapReduce, Bigtable, and Colossus — the foundational technologies behind modern big data processing. GCP’s BigQuery, Dataflow, Dataproc, and Pub/Sub are industry-leading data tools used by the world’s most data-driven organizations.

2. Artificial Intelligence and Machine Learning GCP is home to Google DeepMind, TensorFlow (open-source), Vertex AI, and TPUs (Tensor Processing Units) — custom accelerators purpose-built for large-scale ML training and inference. If you work in AI/ML, GCP’s infrastructure is among the best available.

3. Kubernetes and Container Innovation Kubernetes was invented at Google. Google Kubernetes Engine (GKE) remains the gold standard for managed Kubernetes in the cloud. Google’s deep container expertise translates into the most mature, feature-rich Kubernetes experience available.

4. Network Performance Google’s private global network — distinct from the public internet — provides extremely low latency between regions. Applications running on GCP benefit from Google’s fiber backbone rather than the unpredictable public internet.

5. Developer-Friendly and Open Source GCP leads in open-source friendliness — supporting Kubernetes, TensorFlow, Apache Beam, Apache Kafka, and hundreds of other open-source projects. GCP avoids vendor lock-in by building on open standards.

6. Sustainability and Environmental Leadership GCP is the cleanest cloud — powered by 100% renewable energy and committed to operating on carbon-free energy 24/7 globally by 2030.

Setting Up Your GCP Account — Getting Started

Step 1: Create a Google Cloud Account

  1. Visit https://cloud.google.com
  2. Click “Get started for free”
  3. Sign in with an existing Google account or create a new one
  4. Complete the billing setup (credit card required for identity verification)
  5. Verify your phone number

What You Get with the Free Account:

  • $300 USD in free credits valid for 90 days
  • Always Free tier — 20+ GCP products with monthly usage limits:
    • 1 e2-micro VM (select US regions only, 720 hours/month)
    • 5 GB Cloud Storage (US regions)
    • 10 GB BigQuery storage + 1 TB queries/month
    • Cloud Functions: 2 million invocations/month
    • Cloud Run: 2 million requests/month
    • Cloud Build: 120 build-minutes/day

Step 2: Install the Google Cloud CLI (gcloud)

The gcloud CLI is your primary tool for managing GCP resources from the terminal.

bash
# ── Install gcloud CLI ────────────────────────────────────

# On macOS (using Homebrew)
brew install --cask google-cloud-sdk

# On Linux (Debian/Ubuntu)
curl https://sdk.cloud.google.com | bash
exec -l $SHELL

# On Windows — download installer from:
# https://cloud.google.com/sdk/docs/install

# ── Initial Configuration ─────────────────────────────────
gcloud init
# This will:
# 1. Open browser for authentication
# 2. Set your default project
# 3. Set default compute region/zone

# Verify installation
gcloud --version
gcloud auth list
gcloud config list

# ── Essential gcloud Commands ─────────────────────────────
# Set a different project
gcloud config set project YOUR_PROJECT_ID

# Set default region and zone
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

# View all configurations
gcloud config configurations list

# Authenticate with Application Default Credentials (ADC)
# Used by client libraries and SDKs
gcloud auth application-default login

echo "✅ gcloud CLI configured successfully"

Step 3: Create and Organize Your GCP Project

bash
# GCP Organizational Hierarchy:
# Organization → Folders → Projects → Resources

# Create a new project
gcloud projects create elearncourses-gcp \
  --name="eLearn Courses GCP" \
  --set-as-default

# Enable essential APIs for the project
gcloud services enable \
  compute.googleapis.com \
  storage.googleapis.com \
  container.googleapis.com \
  cloudfunctions.googleapis.com \
  run.googleapis.com \
  bigquery.googleapis.com \
  sqladmin.googleapis.com \
  aiplatform.googleapis.com \
  monitoring.googleapis.com \
  logging.googleapis.com \
  cloudbuild.googleapis.com \
  iam.googleapis.com \
  --project=elearncourses-gcp

echo "✅ Project created and APIs enabled"

# Set billing account (required for paid services)
# First, get your billing account ID
gcloud billing accounts list

# Link billing account to project
gcloud billing projects link elearncourses-gcp \
  --billing-account=YOUR_BILLING_ACCOUNT_ID

echo "✅ Billing account linked to project"

Step 4: Understanding GCP’s Global Infrastructure

Regions — Geographic locations where GCP data centers are located:

  • Examples: us-central1 (Iowa), europe-west1 (Belgium), asia-south1 (Mumbai), australia-southeast1 (Sydney)

Zones — Deployment areas within a region (typically 3 per region):

  • Examples: us-central1-a, us-central1-b, us-central1-c

Multi-regions — Geographic aggregates for higher availability:

  • US, EU, ASIA — Used for Cloud Storage, BigQuery, and other global services

Points of Presence (PoPs) — 180+ edge locations for Cloud CDN and load balancing
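A handy detail of this hierarchy: zone names encode their region. A zone is simply its region name plus a letter suffix (us-central1-a is zone "a" of region us-central1). A minimal Python sketch of that naming convention:

```python
def zone_to_region(zone: str) -> str:
    """Derive the region from a GCP zone name.

    Zone names are '<region>-<letter>', so dropping the final
    '-<letter>' segment yields the region.
    """
    return zone.rsplit("-", 1)[0]

# Zones in the same region share the region prefix
print(zone_to_region("us-central1-a"))           # us-central1
print(zone_to_region("europe-west1-b"))          # europe-west1
print(zone_to_region("australia-southeast1-c"))  # australia-southeast1
```

To enumerate the real lists for your project, use `gcloud compute regions list` and `gcloud compute zones list`.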

Part 1: GCP Compute Services

Google Compute Engine (GCE) — Virtual Machines

Google Compute Engine provides virtual machines (VMs) running in Google’s data centers. GCE offers exceptional performance, flexible VM configurations, and live migration technology that moves running VMs between physical hosts without downtime.

Machine Types:

| Family | Series | Best For |
|---|---|---|
| General Purpose | E2, N2, N2D, N1 | Web serving, dev/test, databases |
| Compute Optimized | C2, C2D | High-performance computing, gaming servers |
| Memory Optimized | M2, M3 | In-memory databases (SAP HANA), analytics |
| Accelerator Optimized | A2, G2 | Machine learning, GPU workloads |
| Scale-Out Optimized | T2D | Scale-out workloads, microservices |

Unique GCE Features:

  • Live Migration — VMs migrate automatically during host maintenance with zero downtime
  • Preemptible/Spot VMs — Up to 91% discount for fault-tolerant batch workloads
  • Sole-Tenant Nodes — Dedicated physical servers for compliance requirements
  • Custom Machine Types — Fine-tune vCPU and memory independently
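Custom machine types are not unconstrained: for the N1 series, GCP's documentation describes limits such as memory in multiples of 256 MB and roughly 0.9–6.5 GB of memory per vCPU (without extended memory). A hedged sketch of such a validity check — the exact constraints below follow the N1 docs as I understand them, so verify against current documentation before relying on them:

```python
def validate_n1_custom(vcpus: int, memory_mb: int) -> bool:
    """Check a candidate N1 custom machine type against documented
    constraints (illustrative; confirm with current GCP docs):
    - vCPU count is 1 or an even number
    - memory is a multiple of 256 MB
    - memory per vCPU is between 0.9 GB and 6.5 GB
    """
    if vcpus != 1 and vcpus % 2 != 0:
        return False
    if memory_mb % 256 != 0:
        return False
    gb_per_vcpu = (memory_mb / 1024) / vcpus
    return 0.9 <= gb_per_vcpu <= 6.5

print(validate_n1_custom(4, 5120))  # 4 vCPUs, 5 GB  -> True (1.25 GB/vCPU)
print(validate_n1_custom(2, 1024))  # 2 vCPUs, 1 GB  -> False (0.5 GB/vCPU)
```

On the CLI, a custom shape is requested with the `--custom-cpu` and `--custom-memory` flags of `gcloud compute instances create`.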
bash
#!/bin/bash
# ── GCP Compute Engine: Complete VM Deployment ─────────────

PROJECT_ID="elearncourses-gcp"
REGION="us-central1"
ZONE="us-central1-a"

echo "=== Google Cloud Platform Tutorial: VM Deployment ==="

# ── Step 1: Create VPC Network ─────────────────────────────
gcloud compute networks create elearn-vpc \
  --project=$PROJECT_ID \
  --subnet-mode=custom \
  --mtu=1460 \
  --bgp-routing-mode=regional

# Create subnet
gcloud compute networks subnets create elearn-subnet \
  --project=$PROJECT_ID \
  --network=elearn-vpc \
  --region=$REGION \
  --range=10.0.0.0/24 \
  --enable-private-ip-google-access

echo "✅ VPC network and subnet created"

# ── Step 2: Create Firewall Rules ──────────────────────────
# Allow HTTP/HTTPS from internet
gcloud compute firewall-rules create allow-web-traffic \
  --project=$PROJECT_ID \
  --network=elearn-vpc \
  --direction=INGRESS \
  --priority=1000 \
  --action=ALLOW \
  --rules=tcp:80,tcp:443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=web-server

# Allow SSH from IAP (Identity-Aware Proxy — more secure than public SSH)
gcloud compute firewall-rules create allow-ssh-iap \
  --project=$PROJECT_ID \
  --network=elearn-vpc \
  --direction=INGRESS \
  --priority=1000 \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-ranges=35.235.240.0/20 \
  --target-tags=web-server

# Deny all other ingress by default
gcloud compute firewall-rules create deny-all-ingress \
  --project=$PROJECT_ID \
  --network=elearn-vpc \
  --direction=INGRESS \
  --priority=65534 \
  --action=DENY \
  --rules=all

echo "✅ Firewall rules configured"

# ── Step 3: Create VM with Startup Script ──────────────────
STARTUP_SCRIPT='#!/bin/bash
apt-get update
apt-get install -y nginx python3 python3-pip

# Install application dependencies
pip3 install flask gunicorn

# Create a simple Flask web app
mkdir -p /var/www/elearn
cat > /var/www/elearn/app.py << "EOF"
from flask import Flask, jsonify
app = Flask(__name__)

@app.route("/")
def home():
    return jsonify({
        "message": "Welcome to elearncourses.com!",
        "platform": "Google Cloud Platform",
        "status": "running"
    })

@app.route("/health")
def health():
    return jsonify({"status": "healthy"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
EOF

# Configure Nginx as reverse proxy
cat > /etc/nginx/sites-available/elearn << "NGINXEOF"
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
NGINXEOF

ln -s /etc/nginx/sites-available/elearn /etc/nginx/sites-enabled/
rm /etc/nginx/sites-enabled/default

# Start application
cd /var/www/elearn
gunicorn --bind 0.0.0.0:8080 --daemon --workers 2 app:app

systemctl restart nginx
systemctl enable nginx

echo "eLearn application started successfully" >> /var/log/startup.log'

# Note: no-address gives the VM no external IP; outbound internet access
# (apt-get/pip in the startup script) then requires a Cloud NAT gateway.
gcloud compute instances create elearn-web-vm \
  --project=$PROJECT_ID \
  --zone=$ZONE \
  --machine-type=e2-medium \
  --network-interface=network=elearn-vpc,subnet=elearn-subnet,no-address \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --boot-disk-size=20GB \
  --boot-disk-type=pd-balanced \
  --tags=web-server \
  --metadata=startup-script="$STARTUP_SCRIPT" \
  --labels=environment=tutorial,project=elearncourses

echo "✅ VM created: elearn-web-vm"

# ── Step 4: Managed Instance Group with Autoscaling ────────
# Create instance template
gcloud compute instance-templates create elearn-template \
  --project=$PROJECT_ID \
  --machine-type=e2-medium \
  --network-interface=subnet=elearn-subnet \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --tags=web-server \
  --metadata=startup-script="$STARTUP_SCRIPT"

# Create managed instance group
gcloud compute instance-groups managed create elearn-mig \
  --project=$PROJECT_ID \
  --region=$REGION \
  --template=elearn-template \
  --size=2

# Configure autoscaling (2 to 10 instances based on CPU)
gcloud compute instance-groups managed set-autoscaling elearn-mig \
  --project=$PROJECT_ID \
  --region=$REGION \
  --max-num-replicas=10 \
  --min-num-replicas=2 \
  --target-cpu-utilization=0.60 \
  --cool-down-period=90

echo "✅ Managed Instance Group with autoscaling configured (2-10 instances)"

# ── Step 5: HTTP Load Balancer ─────────────────────────────
# Create health check
gcloud compute health-checks create http elearn-health-check \
  --port=80 \
  --request-path=/health \
  --check-interval=10s \
  --timeout=5s \
  --healthy-threshold=2 \
  --unhealthy-threshold=3

# Create backend service
gcloud compute backend-services create elearn-backend \
  --protocol=HTTP \
  --health-checks=elearn-health-check \
  --global

# Add instance group to backend
gcloud compute backend-services add-backend elearn-backend \
  --instance-group=elearn-mig \
  --instance-group-region=$REGION \
  --balancing-mode=UTILIZATION \
  --max-utilization=0.8 \
  --global

# Create URL map and HTTP proxy
gcloud compute url-maps create elearn-url-map \
  --default-service=elearn-backend

gcloud compute target-http-proxies create elearn-http-proxy \
  --url-map=elearn-url-map

# Create global forwarding rule
gcloud compute forwarding-rules create elearn-forwarding-rule \
  --global \
  --target-http-proxy=elearn-http-proxy \
  --ports=80

LOAD_BALANCER_IP=$(gcloud compute forwarding-rules describe \
  elearn-forwarding-rule --global --format="value(IPAddress)")

echo ""
echo "🌐 Load Balancer IP: $LOAD_BALANCER_IP"
echo "✅ Global HTTP Load Balancer with autoscaling complete!"

Google App Engine (GAE)

App Engine is GCP’s fully managed Platform as a Service: deploy applications without managing servers. Just push code, and App Engine handles scaling, patching, and infrastructure.

python
# app.yaml — App Engine configuration
"""
runtime: python311
instance_class: F2

automatic_scaling:
  min_instances: 1
  max_instances: 20
  target_cpu_utilization: 0.65
  target_throughput_utilization: 0.60
  min_pending_latency: 30ms
  max_pending_latency: automatic

env_variables:
  FLASK_ENV: production
  APP_NAME: elearncourses

handlers:
  - url: /static
    static_dir: static
    secure: always
  - url: /.*
    script: auto
    secure: always
"""

# main.py — Flask application for App Engine
from flask import Flask, jsonify, request
import os
import logging

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@app.route("/")
def index():
    return jsonify({
        "platform": "Google App Engine",
        "project": "elearncourses",
        "message": "Welcome to the GCP Tutorial!"
    })

@app.route("/courses", methods=["GET"])
def get_courses():
    """Return list of available courses"""
    courses = [
        {"id": 1, "title": "Python for Beginners",
         "level": "Beginner", "category": "Programming"},
        {"id": 2, "title": "Machine Learning Tutorial",
         "level": "Intermediate", "category": "Data Science"},
        {"id": 3, "title": "Google Cloud Platform Tutorial",
         "level": "Intermediate", "category": "Cloud Computing"},
        {"id": 4, "title": "DevOps Fundamentals",
         "level": "Intermediate", "category": "DevOps"},
    ]

    category_filter = request.args.get("category")
    if category_filter:
        courses = [c for c in courses
                   if c["category"].lower() == category_filter.lower()]

    return jsonify({
        "total": len(courses),
        "courses": courses
    })

@app.route("/_ah/health")
def health_check():
    """App Engine health check endpoint"""
    return jsonify({"status": "healthy"}), 200

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 8080))
    app.run(host="0.0.0.0", port=port, debug=False)
bash
# Deploy to App Engine
gcloud app create --region=us-central
gcloud app deploy app.yaml --quiet

# Get app URL
gcloud app browse
APP_URL=$(gcloud app describe --format="value(defaultHostname)")
echo "✅ App deployed at: https://$APP_URL"

# View logs
gcloud app logs tail --service=default

Cloud Run — Serverless Containers

Cloud Run is GCP’s managed serverless platform for running containerized applications. Unlike App Engine, Cloud Run runs any language or framework packaged as a container — with automatic scaling to zero (no charges when not in use).

dockerfile
# Dockerfile — Containerized Python application for Cloud Run

FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create non-root user (security best practice)
RUN adduser --disabled-password --gecos '' appuser
USER appuser

# Cloud Run expects PORT environment variable
ENV PORT=8080
EXPOSE 8080

CMD exec gunicorn \
  --bind :$PORT \
  --workers 1 \
  --threads 8 \
  --timeout 0 \
  main:app
bash
# Build and deploy to Cloud Run

PROJECT_ID="elearncourses-gcp"
SERVICE_NAME="elearncourses-api"
REGION="us-central1"

# Build container image using Cloud Build (no Docker daemon needed locally)
gcloud builds submit \
  --tag gcr.io/$PROJECT_ID/$SERVICE_NAME \
  --project=$PROJECT_ID

echo "✅ Container image built and pushed to Container Registry"

# Deploy to Cloud Run
gcloud run deploy $SERVICE_NAME \
  --image gcr.io/$PROJECT_ID/$SERVICE_NAME \
  --platform managed \
  --region $REGION \
  --allow-unauthenticated \
  --min-instances=0 \
  --max-instances=100 \
  --memory=512Mi \
  --cpu=1 \
  --concurrency=80 \
  --timeout=300 \
  --set-env-vars="ENVIRONMENT=production,APP_NAME=elearncourses" \
  --project=$PROJECT_ID

# Get service URL
SERVICE_URL=$(gcloud run services describe $SERVICE_NAME \
  --region=$REGION \
  --format="value(status.url)")

echo "✅ Cloud Run service deployed: $SERVICE_URL"

# Enable Cloud Run traffic splitting (canary deployment)
# Replace STABLE_REVISION with your actual prior revision name
# (list revisions with: gcloud run revisions list --service=$SERVICE_NAME)
gcloud run services update-traffic $SERVICE_NAME \
  --region=$REGION \
  --to-revisions=LATEST=10,STABLE_REVISION=90 \
  --project=$PROJECT_ID

echo "✅ Canary deployment: 10% new, 90% stable"

Google Kubernetes Engine (GKE)

Google Kubernetes Engine (GKE) is the most mature, feature-rich managed Kubernetes service available. Google invented Kubernetes, and GKE reflects decades of container orchestration expertise.

GKE Modes:

  • GKE Autopilot — Fully managed Kubernetes where Google manages nodes, scaling, and security
  • GKE Standard — You manage node configuration; more control for advanced use cases
bash
# ── GKE Cluster Creation and Application Deployment ────────

PROJECT_ID="elearncourses-gcp"
CLUSTER_NAME="elearn-gke-cluster"
REGION="us-central1"

# Create GKE Autopilot cluster (Google manages everything)
gcloud container clusters create-auto $CLUSTER_NAME \
  --region=$REGION \
  --project=$PROJECT_ID \
  --release-channel=regular \
  --network=elearn-vpc \
  --subnetwork=elearn-subnet

echo "⏳ GKE cluster creation takes 5-8 minutes..."

# Get credentials
gcloud container clusters get-credentials $CLUSTER_NAME \
  --region=$REGION \
  --project=$PROJECT_ID

kubectl cluster-info
kubectl get nodes

# Deploy eLearning platform microservices
cat <<'EOF' | kubectl apply -f -
# ── Namespace ───────────────────────────────────────────────
apiVersion: v1
kind: Namespace
metadata:
  name: elearncourses
  labels:
    app: elearncourses

---
# ── ConfigMap: Application Configuration ───────────────────
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: elearncourses
data:
  ENVIRONMENT: "production"
  APP_NAME: "elearncourses"
  LOG_LEVEL: "INFO"
  MAX_CONNECTIONS: "100"

---
# ── Deployment: Web Application ─────────────────────────────
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: elearncourses
  labels:
    app: webapp
    version: "v1"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: webapp
        version: "v1"
    spec:
      containers:
      - name: webapp
        image: gcr.io/elearncourses-gcp/elearncourses-api:latest
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: app-config
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 15
          failureThreshold: 3
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          runAsUser: 1000

---
# ── Service: LoadBalancer ───────────────────────────────────
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  namespace: elearncourses
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: webapp
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080

---
# ── Ingress: External Access ────────────────────────────────
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: elearncourses
  annotations:
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/managed-certificates: "elearn-cert"
    kubernetes.io/ingress.global-static-ip-name: "elearn-static-ip"
spec:
  rules:
  - host: app.elearncourses.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80

---
# ── HorizontalPodAutoscaler ─────────────────────────────────
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
  namespace: elearncourses
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 3
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 65
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30
      policies:
      - type: Pods
        value: 4
        periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 25
        periodSeconds: 60
EOF

echo "✅ Microservices deployed to GKE"
kubectl get all -n elearncourses

Part 2: Cloud Storage

Google Cloud Storage (GCS)

Google Cloud Storage is GCP’s unified object storage — highly durable (11 nines: 99.999999999%), globally accessible, and designed for any amount of data.

Storage Classes:

| Class | Minimum Storage Duration | Access Frequency | Use Case |
|---|---|---|---|
| Standard | None | Frequent | Active data, websites |
| Nearline | 30 days | Monthly or less | Monthly backups, data archives |
| Coldline | 90 days | Quarterly or less | Disaster recovery |
| Archive | 365 days | Yearly or less | Long-term preservation |
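The minimum-duration rules in the table above can be sketched as a simple selection helper — an illustrative heuristic built only from the table, not an official API:

```python
# Minimum storage durations (days) from the storage-class table above
STORAGE_CLASSES = [
    ("ARCHIVE", 365),   # accessed yearly or less
    ("COLDLINE", 90),   # accessed quarterly or less
    ("NEARLINE", 30),   # accessed monthly or less
    ("STANDARD", 0),    # frequent access
]

def pick_storage_class(days_between_accesses: int) -> str:
    """Pick the coldest class whose minimum storage duration fits
    the expected access interval (illustrative heuristic only)."""
    for name, min_days in STORAGE_CLASSES:
        if days_between_accesses >= min_days:
            return name
    return "STANDARD"

print(pick_storage_class(400))  # ARCHIVE
print(pick_storage_class(45))   # NEARLINE
print(pick_storage_class(7))    # STANDARD
```

Real pricing decisions should also weigh retrieval and operation costs, which the table does not capture.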
python
# Google Cloud Storage — Python SDK
# pip install google-cloud-storage google-auth

from google.cloud import storage
from google.oauth2 import service_account
from datetime import datetime, timedelta
import os
import json
import hashlib
import mimetypes

class GCSTutorial:
    """
    Comprehensive Google Cloud Storage operations tutorial
    demonstrating all major GCS capabilities
    """

    def __init__(self, project_id: str, bucket_name: str):
        self.project_id = project_id
        self.bucket_name = bucket_name
        # Uses Application Default Credentials (ADC)
        self.client = storage.Client(project=project_id)

    # ── Bucket Management ────────────────────────────────────
    def create_bucket(self, location: str = "US",
                      storage_class: str = "STANDARD") -> storage.Bucket:
        """Create a GCS bucket with best-practice configuration"""

        bucket = self.client.bucket(self.bucket_name)
        bucket.storage_class = storage_class

        # Enable versioning for data protection
        bucket.versioning_enabled = True

        # Create the bucket
        bucket = self.client.create_bucket(
            bucket,
            location=location
        )

        # Set lifecycle rules to manage costs
        # Move to Nearline after 30 days
        bucket.add_lifecycle_set_storage_class_rule(
            "NEARLINE", age=30, matches_storage_class=["STANDARD"]
        )
        # Move to Coldline after 90 days
        bucket.add_lifecycle_set_storage_class_rule(
            "COLDLINE", age=90, matches_storage_class=["NEARLINE"]
        )
        # Delete temp/cache objects after 365 days
        bucket.add_lifecycle_delete_rule(
            age=365, matches_prefix=["temp/", "cache/"]
        )
        bucket.patch()

        print(f"✅ Bucket created: gs://{self.bucket_name}")
        print(f"   Location: {bucket.location}")
        print(f"   Storage Class: {bucket.storage_class}")
        print(f"   Versioning: {bucket.versioning_enabled}")
        return bucket

    # ── Upload Operations ────────────────────────────────────
    def upload_file(self, local_path: str, gcs_path: str,
                    metadata: dict = None) -> str:
        """Upload a file with metadata and integrity verification"""

        bucket = self.client.bucket(self.bucket_name)
        blob = bucket.blob(gcs_path)

        # Calculate MD5 for integrity verification
        with open(local_path, "rb") as f:
            file_content = f.read()
            md5_hash = hashlib.md5(file_content).hexdigest()

        # Set content type automatically
        content_type, _ = mimetypes.guess_type(local_path)

        # Set metadata
        blob.metadata = {
            "uploaded_by": "elearncourses-system",
            "upload_timestamp": datetime.utcnow().isoformat(),
            "original_filename": os.path.basename(local_path),
            "md5_checksum": md5_hash,
            **(metadata or {})
        }

        blob.content_type = content_type or "application/octet-stream"

        # Upload with resumable upload for large files
        blob.upload_from_filename(
            local_path,
            content_type=content_type
        )

        gcs_uri = f"gs://{self.bucket_name}/{gcs_path}"
        public_url = f"https://storage.googleapis.com/{self.bucket_name}/{gcs_path}"

        print(f"✅ Uploaded: {gcs_path}")
        print(f"   Size: {blob.size / 1024:.1f} KB")
        print(f"   GCS URI: {gcs_uri}")
        return gcs_uri

    def upload_from_memory(self, data: bytes, gcs_path: str,
                           content_type: str = "application/json") -> str:
        """Upload data from memory (no local file needed)"""
        bucket = self.client.bucket(self.bucket_name)
        blob = bucket.blob(gcs_path)
        blob.upload_from_string(data, content_type=content_type)
        print(f"✅ Uploaded from memory: {gcs_path}")
        return f"gs://{self.bucket_name}/{gcs_path}"

    # ── Download Operations ──────────────────────────────────
    def download_file(self, gcs_path: str, local_path: str) -> None:
        """Download a blob to local file"""
        bucket = self.client.bucket(self.bucket_name)
        blob = bucket.blob(gcs_path)
        blob.download_to_filename(local_path)
        print(f"✅ Downloaded: {gcs_path} → {local_path}")

    # ── Signed URLs (Temporary Access) ──────────────────────
    def generate_signed_url(self, gcs_path: str,
                             expiration_minutes: int = 60,
                             method: str = "GET") -> str:
        """
        Generate a time-limited signed URL for secure, temporary access.
        Useful for allowing users to download/upload without GCP credentials.
        """
        bucket = self.client.bucket(self.bucket_name)
        blob = bucket.blob(gcs_path)

        url = blob.generate_signed_url(
            version="v4",
            expiration=timedelta(minutes=expiration_minutes),
            method=method,
            content_type="application/octet-stream" if method == "PUT" else None
        )

        print(f"🔗 Signed URL (expires in {expiration_minutes} min):")
        print(f"   {url[:100]}...")
        return url

    # ── List and Search Objects ──────────────────────────────
    def list_objects(self, prefix: str = None,
                     delimiter: str = None) -> list:
        """List objects with optional prefix filter"""
        bucket = self.client.bucket(self.bucket_name)
        blobs = bucket.list_blobs(prefix=prefix, delimiter=delimiter)

        objects = []
        print(f"\n📂 Objects in gs://{self.bucket_name}/{prefix or ''}:")
        for blob in blobs:
            size_kb = blob.size / 1024 if blob.size else 0
            print(f"  📄 {blob.name} | {size_kb:.1f} KB | "
                  f"{blob.updated.strftime('%Y-%m-%d %H:%M')}")
            objects.append({
                "name": blob.name,
                "size_bytes": blob.size,
                "updated": blob.updated.isoformat(),
                "content_type": blob.content_type
            })
        return objects

    # ── Copy and Move ────────────────────────────────────────
    def copy_blob(self, source_blob_name: str,
                  destination_bucket_name: str,
                  destination_blob_name: str) -> None:
        """Copy a blob to another location"""
        source_bucket = self.client.bucket(self.bucket_name)
        source_blob = source_bucket.blob(source_blob_name)
        dest_bucket = self.client.bucket(destination_bucket_name)

        source_bucket.copy_blob(
            source_blob, dest_bucket, destination_blob_name
        )
        print(f"✅ Copied: {source_blob_name} → "
              f"gs://{destination_bucket_name}/{destination_blob_name}")

    # ── Access Control ───────────────────────────────────────
    def make_blob_public(self, gcs_path: str) -> str:
        """Make a single blob publicly readable"""
        bucket = self.client.bucket(self.bucket_name)
        blob = bucket.blob(gcs_path)
        blob.make_public()
        print(f"✅ Public URL: {blob.public_url}")
        return blob.public_url


# ── Usage Example ────────────────────────────────────────────
gcs = GCSTutorial(
    project_id="elearncourses-gcp",
    bucket_name="elearncourses-content-bucket"
)

# Create bucket
gcs.create_bucket(location="US-CENTRAL1")

# Upload course materials
course_data = json.dumps({
    "course_id": 1,
    "title": "Google Cloud Platform Tutorial",
    "modules": 12,
    "level": "Intermediate"
}).encode("utf-8")

gcs.upload_from_memory(
    course_data,
    "courses/gcp-tutorial/metadata.json",
    content_type="application/json"
)

# Generate temporary access URL
url = gcs.generate_signed_url(
    "courses/gcp-tutorial/metadata.json",
    expiration_minutes=120
)

# List all course objects
gcs.list_objects(prefix="courses/")

Part 3: BigQuery — GCP’s Data Warehouse

BigQuery is GCP’s flagship serverless, fully managed enterprise data warehouse. It enables SQL queries over petabytes of data in seconds — without any infrastructure to provision or manage. BigQuery is one of GCP’s most powerful and distinctive capabilities.

python
# BigQuery Tutorial — Python SDK
# pip install google-cloud-bigquery pandas db-dtypes

from google.cloud import bigquery
from google.cloud.bigquery import SchemaField, LoadJobConfig
import pandas as pd
import json
from datetime import datetime

class BigQueryTutorial:
    """
    Comprehensive BigQuery tutorial covering all major operations
    """

    def __init__(self, project_id: str, dataset_id: str):
        self.project_id = project_id
        self.dataset_id = dataset_id
        self.client = bigquery.Client(project=project_id)
        self.full_dataset = f"{project_id}.{dataset_id}"

    # ── Dataset Management ───────────────────────────────────
    def create_dataset(self, location: str = "US") -> None:
        """Create a BigQuery dataset"""
        dataset = bigquery.Dataset(self.full_dataset)
        dataset.location = location
        dataset.description = "eLearn Courses analytics dataset"

        dataset = self.client.create_dataset(dataset, exists_ok=True)
        print(f"✅ Dataset created: {self.full_dataset}")

    # ── Table Creation ───────────────────────────────────────
    def create_course_analytics_table(self) -> None:
        """Create a partitioned and clustered table for optimal performance"""

        table_ref = f"{self.full_dataset}.course_enrollments"

        schema = [
            SchemaField("enrollment_id",    "STRING",    "REQUIRED",
                        description="Unique enrollment identifier"),
            SchemaField("student_id",       "STRING",    "REQUIRED"),
            SchemaField("course_id",        "STRING",    "REQUIRED"),
            SchemaField("course_title",     "STRING",    "NULLABLE"),
            SchemaField("course_category",  "STRING",    "NULLABLE"),
            SchemaField("course_level",     "STRING",    "NULLABLE"),
            SchemaField("enrolled_at",      "TIMESTAMP", "REQUIRED"),
            SchemaField("completed_at",     "TIMESTAMP", "NULLABLE"),
            SchemaField("progress_pct",     "FLOAT64",   "NULLABLE"),
            SchemaField("quiz_scores",      "FLOAT64",   "NULLABLE"),
            SchemaField("country",          "STRING",    "NULLABLE"),
            SchemaField("device_type",      "STRING",    "NULLABLE"),
            SchemaField("revenue",          "FLOAT64",   "NULLABLE"),
        ]

        table = bigquery.Table(table_ref, schema=schema)

        # Partition by enrollment date (improves query performance and reduces cost)
        table.time_partitioning = bigquery.TimePartitioning(
            type_=bigquery.TimePartitioningType.DAY,
            field="enrolled_at",
            expiration_ms=None  # Keep forever
        )

        # Cluster by commonly filtered columns
        table.clustering_fields = ["course_category", "country", "course_level"]

        table = self.client.create_table(table, exists_ok=True)
        print(f"✅ Table created: {table_ref}")
        print(f"   Partitioned by: enrolled_at (daily)")
        print(f"   Clustered by: course_category, country, course_level")

    # ── Data Loading ─────────────────────────────────────────
    def load_from_dataframe(self, df: pd.DataFrame,
                             table_id: str) -> None:
        """Load a pandas DataFrame into BigQuery"""

        table_ref = f"{self.full_dataset}.{table_id}"
        job_config = LoadJobConfig(
            write_disposition="WRITE_APPEND",  # or WRITE_TRUNCATE, WRITE_EMPTY
            schema_update_options=[
                "ALLOW_FIELD_ADDITION",
                "ALLOW_FIELD_RELAXATION"
            ]
        )

        job = self.client.load_table_from_dataframe(
            df, table_ref, job_config=job_config
        )
        job.result()  # Wait for completion

        table = self.client.get_table(table_ref)
        print(f"✅ Loaded {len(df)} rows into {table_ref}")
        print(f"   Total rows in table: {table.num_rows:,}")

    # ── Advanced Analytics Queries ───────────────────────────
    def run_course_analytics(self) -> pd.DataFrame:
        """
        Comprehensive course performance analytics query
        Demonstrates BigQuery SQL capabilities
        """

        query = f"""
        -- Course Performance Analytics Dashboard
        -- Using BigQuery's partition pruning for cost efficiency

        WITH enrollment_metrics AS (
            SELECT
                course_category,
                course_level,
                country,
                TIMESTAMP_TRUNC(enrolled_at, MONTH)      AS enrollment_month,
                COUNT(enrollment_id)                     AS total_enrollments,
                COUNT(completed_at)                      AS completions,
                COUNTIF(progress_pct >= 50)              AS halfway_completions,
                ROUND(AVG(progress_pct), 2)              AS avg_progress,
                ROUND(AVG(quiz_scores), 2)               AS avg_quiz_score,
                SUM(revenue)                             AS total_revenue,
                COUNT(DISTINCT student_id)               AS unique_students
            FROM `{self.full_dataset}.course_enrollments`
            WHERE
                -- Partition pruning: only scan relevant partitions
                enrolled_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 365 DAY)
                AND enrolled_at < CURRENT_TIMESTAMP()
            GROUP BY 1, 2, 3, 4
        ),

        ranked_categories AS (
            SELECT
                *,
                ROUND(
                    SAFE_DIVIDE(completions, total_enrollments) * 100, 1
                )                                        AS completion_rate_pct,
                ROUND(
                    SAFE_DIVIDE(total_revenue, unique_students), 2
                )                                        AS revenue_per_student,
                ROW_NUMBER() OVER (
                    PARTITION BY enrollment_month
                    ORDER BY total_revenue DESC
                )                                        AS revenue_rank
            FROM enrollment_metrics
        )

        SELECT
            course_category,
            course_level,
            country,
            enrollment_month,
            total_enrollments,
            completions,
            completion_rate_pct,
            avg_progress,
            avg_quiz_score,
            ROUND(total_revenue, 2)                      AS total_revenue,
            unique_students,
            revenue_per_student,
            revenue_rank

            -- Window functions for trend analysis
            ,LAG(total_enrollments) OVER (
                PARTITION BY course_category, country
                ORDER BY enrollment_month
            )                                            AS prev_month_enrollments,
            ROUND(
                SAFE_DIVIDE(
                    total_enrollments - LAG(total_enrollments) OVER (
                        PARTITION BY course_category, country
                        ORDER BY enrollment_month
                    ),
                    LAG(total_enrollments) OVER (
                        PARTITION BY course_category, country
                        ORDER BY enrollment_month
                    )
                ) * 100, 1
            )                                            AS mom_growth_pct

        FROM ranked_categories
        WHERE revenue_rank <= 10
        ORDER BY enrollment_month DESC, total_revenue DESC
        """

        job_config = bigquery.QueryJobConfig(
            # Use partition pruning
            use_query_cache=True,
            # Maximum bytes billed — cost control: caps the query at 5 TiB
            # scanned (roughly $30 at on-demand rates of ~$6.25/TiB)
            maximum_bytes_billed=5 * 1024**4
        )

        print("⏳ Running BigQuery analytics query...")
        query_job = self.client.query(query, job_config=job_config)
        results_df = query_job.to_dataframe()

        # Query metadata
        print(f"✅ Query complete!")
        print(f"   Rows returned:    {len(results_df):,}")
        print(f"   Bytes processed:  "
              f"{query_job.total_bytes_processed / 1024**2:.2f} MB")
        print(f"   Bytes billed:     "
              f"{query_job.total_bytes_billed / 1024**2:.2f} MB")
        print(f"   Query duration:   "
              f"{(query_job.ended - query_job.started).total_seconds():.2f}s")

        return results_df

    # ── Streaming Inserts ────────────────────────────────────
    def stream_events(self, events: list) -> None:
        """
        Stream real-time events into BigQuery.
        Available for queries within seconds.
        """
        table_ref = f"{self.full_dataset}.realtime_events"

        # Stream data
        errors = self.client.insert_rows_json(table_ref, events)

        if errors:
            print(f"❌ Streaming errors: {errors}")
        else:
            print(f"✅ Streamed {len(events)} events to BigQuery")

    # ── ML in BigQuery (BQML) ────────────────────────────────
    def create_churn_prediction_model(self) -> None:
        """
        BigQuery ML: Train an ML model using SQL!
        No Python, no external tools — just SQL.
        """

        query = f"""
        CREATE OR REPLACE MODEL `{self.full_dataset}.student_churn_model`
        OPTIONS (
            model_type = 'LOGISTIC_REG',
            input_label_cols = ['churned'],
            max_iterations = 100,
            l1_reg = 0.001,
            l2_reg = 0.01,
            data_split_method = 'RANDOM',
            data_split_eval_fraction = 0.2
        ) AS
        SELECT
            CAST(DATE_DIFF(CURRENT_DATE(),
                 DATE(MAX(enrolled_at)), DAY) > 90 AS INT64)  AS churned,
            COUNT(enrollment_id)                               AS total_enrollments,
            AVG(progress_pct)                                  AS avg_progress,
            AVG(quiz_scores)                                   AS avg_quiz_score,
            SUM(revenue)                                       AS total_spent,
            LOGICAL_OR(course_level = 'Advanced')              AS tried_advanced,
            country
        FROM `{self.full_dataset}.course_enrollments`
        GROUP BY student_id, country
        HAVING COUNT(enrollment_id) >= 2
        """

        print("⏳ Training churn prediction model with BigQuery ML...")
        job = self.client.query(query)
        job.result()
        print("✅ BQML churn model trained using pure SQL!")
        print("   Use ML.PREDICT() to score new students")


# Initialize and use BigQuery tutorial
bq = BigQueryTutorial(
    project_id="elearncourses-gcp",
    dataset_id="elearn_analytics"
)
bq.create_dataset()
bq.create_course_analytics_table()
print("🎉 BigQuery tutorial setup complete!")
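The partitioned table created above pays off because queries that filter on `enrolled_at` scan only the matching daily partitions. The effect of partition pruning can be sketched in plain Python, with made-up partition sizes:

```python
from datetime import date, timedelta

# Hypothetical daily partitions: partition date -> bytes stored that day
partitions = {date(2025, 1, 1) + timedelta(days=i): 2 * 1024**3  # 2 GiB/day
              for i in range(365)}

def bytes_scanned(start: date, end: date) -> int:
    """Bytes a query scans when its filter prunes to [start, end)."""
    return sum(size for day, size in partitions.items() if start <= day < end)

full_scan = sum(partitions.values())
pruned = bytes_scanned(date(2025, 12, 1), date(2026, 1, 1))

print(f"Full table scan: {full_scan / 1024**3:.0f} GiB")
print(f"With partition pruning (last month only): {pruned / 1024**3:.0f} GiB")
```

On a real table the same ratio shows up directly in `total_bytes_processed`, and therefore in the query's cost.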

Part 4: GCP AI and Machine Learning Services

GCP’s AI/ML suite is arguably the most powerful in the cloud industry — built on decades of Google AI research and powered by custom TPU hardware.

Also Read: How to install Google Cloud

Vertex AI — Unified ML Platform

Vertex AI is GCP’s unified platform for building, deploying, and scaling ML models. It brings together all of Google Cloud’s ML offerings into one cohesive platform.

python
# Vertex AI — Training and Deploying ML Models
# pip install google-cloud-aiplatform

from google.cloud import aiplatform
import pandas as pd
import numpy as np

# Initialize Vertex AI
aiplatform.init(
    project="elearncourses-gcp",
    location="us-central1",
    staging_bucket="gs://elearncourses-ml-staging"
)

# ── AutoML: No-Code Model Training ──────────────────────────
def train_automl_model():
    """
    Train a model using Vertex AI AutoML — no ML expertise required.
    Google automatically selects the best algorithm and hyperparameters.
    """
    print("Creating AutoML Tabular dataset...")

    # Upload training data to BigQuery or Cloud Storage first
    dataset = aiplatform.TabularDataset.create(
        display_name="course-completion-dataset",
        bq_source="bq://elearncourses-gcp.elearn_analytics.course_enrollments"
    )
    print(f"✅ Dataset created: {dataset.display_name}")

    # Train AutoML model
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="course-completion-predictor",
        optimization_prediction_type="classification",
        optimization_objective="maximize-au-roc",
        column_specs={
            "progress_pct": "numeric",
            "quiz_scores": "numeric",
            "total_enrollments": "numeric",
            "course_level": "categorical",
            "country": "categorical"
        }
    )

    # Note: assumes the source table exposes a boolean "completed" label
    # column (e.g., completed_at IS NOT NULL materialized into the table)
    model = job.run(
        dataset=dataset,
        target_column="completed",
        training_fraction_split=0.8,
        validation_fraction_split=0.1,
        test_fraction_split=0.1,
        budget_milli_node_hours=1000,  # 1 hour training budget
        model_display_name="course-completion-model-v1",
        disable_early_stopping=False
    )

    print(f"✅ AutoML model trained: {model.display_name}")
    return model


# ── Vertex AI Prediction Endpoint ───────────────────────────
def deploy_and_predict(model):
    """Deploy model to endpoint and make predictions"""

    # Deploy to endpoint
    endpoint = model.deploy(
        machine_type="n1-standard-2",
        min_replica_count=1,
        max_replica_count=5,
        accelerator_type=None,
        traffic_split={"0": 100}
    )

    print(f"✅ Model deployed to endpoint: {endpoint.display_name}")

    # Make predictions
    test_instances = [
        {
            "progress_pct": 75.5,
            "quiz_scores": 82.0,
            "total_enrollments": 3,
            "course_level": "Intermediate",
            "country": "IN"
        },
        {
            "progress_pct": 20.0,
            "quiz_scores": 45.0,
            "total_enrollments": 1,
            "course_level": "Advanced",
            "country": "US"
        }
    ]

    predictions = endpoint.predict(instances=test_instances)

    print("\n📊 Predictions:")
    for i, (pred, instance) in enumerate(
        zip(predictions.predictions, test_instances)
    ):
        print(f"  Student {i+1}: {pred}")

    return endpoint, predictions


# ── Google Cloud Natural Language API ───────────────────────
from google.cloud import language_v1

def analyze_course_feedback(feedback_texts: list) -> list:
    """
    Analyze student feedback using Google's NLP API.
    Sentiment analysis, entity extraction, and content classification.
    """
    nl_client = language_v1.LanguageServiceClient()
    results = []

    for text in feedback_texts:
        document = language_v1.Document(
            content=text,
            type_=language_v1.Document.Type.PLAIN_TEXT,
            language="en"
        )

        # Sentiment analysis
        sentiment = nl_client.analyze_sentiment(
            request={"document": document}
        ).document_sentiment

        # Entity analysis
        entities = nl_client.analyze_entities(
            request={"document": document}
        ).entities

        # Content classification
        categories = nl_client.classify_text(
            request={"document": document}
        ).categories if len(text.split()) > 20 else []

        result = {
            "text": text[:100] + "..." if len(text) > 100 else text,
            "sentiment": {
                "score": round(sentiment.score, 3),  # -1 to +1
                "magnitude": round(sentiment.magnitude, 3),
                "label": ("POSITIVE" if sentiment.score > 0.1
                          else "NEGATIVE" if sentiment.score < -0.1
                          else "NEUTRAL")
            },
            "key_entities": [
                {"name": e.name, "type": e.type_.name,
                 "salience": round(e.salience, 3)}
                for e in sorted(entities, key=lambda x: x.salience,
                                reverse=True)[:3]
            ],
            "categories": [
                {"name": c.name, "confidence": round(c.confidence, 3)}
                for c in categories[:2]
            ]
        }
        results.append(result)

    return results


# Example feedback analysis
sample_feedback = [
    "Excellent course! The GCP tutorial was incredibly detailed and practical. The hands-on labs made learning so much easier. Highly recommend to anyone starting cloud computing.",
    "The content was okay but the pace was too fast. I struggled with the Kubernetes section. More examples would have been helpful.",
    "Amazing value for money. The instructor explains complex concepts very clearly. BigQuery section was particularly impressive."
]

feedback_analysis = analyze_course_feedback(sample_feedback)

print("\n📊 Course Feedback Sentiment Analysis:")
for analysis in feedback_analysis:
    sentiment = analysis['sentiment']
    print(f"\n  Text: {analysis['text']}")
    print(f"  Sentiment: {sentiment['label']} "
          f"(score: {sentiment['score']}, "
          f"magnitude: {sentiment['magnitude']})")
    if analysis['key_entities']:
        print(f"  Key entities: "
              f"{[e['name'] for e in analysis['key_entities']]}")

Part 5: Cloud Functions — Serverless Event-Driven Computing

python
# Google Cloud Functions — HTTP and Event-Driven Functions
# pip install functions-framework google-cloud-pubsub google-cloud-firestore

import functions_framework
from google.cloud import pubsub_v1, firestore
import json
import logging
from datetime import datetime

# ── HTTP Cloud Function ──────────────────────────────────────
@functions_framework.http
def process_course_enrollment(request):
    """
    Cloud Function triggered by HTTP POST request.
    Processes a new course enrollment event.
    """
    logging.info("Course enrollment function triggered")

    if request.method != "POST":
        return json.dumps({"error": "Method not allowed"}), 405

    try:
        data = request.get_json(silent=True)
        if not data:
            return json.dumps({"error": "No JSON body"}), 400

        student_id = data.get("student_id")
        course_id = data.get("course_id")
        email = data.get("email")

        if not all([student_id, course_id, email]):
            return json.dumps({"error": "Missing required fields"}), 400

        # Save to Firestore
        db = firestore.Client()
        enrollment_ref = db.collection("enrollments").document()
        enrollment_data = {
            "enrollment_id": enrollment_ref.id,
            "student_id": student_id,
            "course_id": course_id,
            "email": email,
            "enrolled_at": datetime.utcnow(),
            "progress": 0,
            "status": "active"
        }
        enrollment_ref.set(enrollment_data)

        # Publish event to Pub/Sub for downstream processing
        publisher = pubsub_v1.PublisherClient()
        topic_path = publisher.topic_path(
            "elearncourses-gcp", "enrollment-events"
        )
        message_data = json.dumps({
            "event_type": "NEW_ENROLLMENT",
            "enrollment_id": enrollment_ref.id,
            "student_id": student_id,
            "course_id": course_id,
            "timestamp": datetime.utcnow().isoformat()
        }).encode("utf-8")

        future = publisher.publish(topic_path, data=message_data)
        message_id = future.result()

        logging.info(f"Enrollment {enrollment_ref.id} published to Pub/Sub: {message_id}")

        return json.dumps({
            "success": True,
            "enrollment_id": enrollment_ref.id,
            "message": "Enrollment processed successfully",
            "pubsub_message_id": message_id
        }), 201

    except Exception as e:
        logging.error(f"Error processing enrollment: {str(e)}")
        return json.dumps({"error": "Internal server error"}), 500


# ── Pub/Sub Triggered Cloud Function ────────────────────────
@functions_framework.cloud_event
def send_welcome_email(cloud_event):
    """
    Cloud Function triggered by Pub/Sub message.
    Sends welcome email when new enrollment occurs.
    """
    import base64

    pubsub_message = base64.b64decode(
        cloud_event.data["message"]["data"]
    ).decode("utf-8")
    event_data = json.loads(pubsub_message)

    logging.info(f"Processing event: {event_data['event_type']}")

    if event_data.get("event_type") == "NEW_ENROLLMENT":
        student_id = event_data["student_id"]
        course_id = event_data["course_id"]

        # Simulate sending welcome email
        logging.info(
            f"✅ Welcome email sent to student {student_id} "
            f"for course {course_id}"
        )
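Pub/Sub delivers the message payload base64-encoded inside the CloudEvent, which is why `send_welcome_email` decodes it before parsing JSON. That round trip can be tested locally with a fake event whose shape mirrors what the function expects (no GCP services involved):

```python
import base64
import json

# Build a fake CloudEvent-style payload like Pub/Sub would deliver
original = {"event_type": "NEW_ENROLLMENT", "student_id": "stu-42",
            "course_id": "gcp-101"}
fake_event_data = {
    "message": {
        "data": base64.b64encode(
            json.dumps(original).encode("utf-8")
        ).decode("ascii")
    }
}

# The same decode path the Cloud Function uses
decoded = json.loads(
    base64.b64decode(fake_event_data["message"]["data"]).decode("utf-8")
)
assert decoded == original
print("Decoded event:", decoded["event_type"])
```

Unit-testing this path locally catches encoding bugs before they show up as opaque failures in Cloud Logging.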
bash
# Deploy Cloud Functions

# HTTP Function
gcloud functions deploy process-course-enrollment \
  --gen2 \
  --runtime=python311 \
  --region=us-central1 \
  --source=. \
  --entry-point=process_course_enrollment \
  --trigger-http \
  --allow-unauthenticated \
  --memory=256MB \
  --timeout=60s \
  --min-instances=0 \
  --max-instances=100 \
  --set-env-vars="PROJECT_ID=elearncourses-gcp"

# Pub/Sub Triggered Function
gcloud functions deploy send-welcome-email \
  --gen2 \
  --runtime=python311 \
  --region=us-central1 \
  --source=. \
  --entry-point=send_welcome_email \
  --trigger-topic=enrollment-events \
  --memory=256MB \
  --timeout=120s

echo "✅ Cloud Functions deployed (2nd gen)"

Part 6: GCP IAM and Security

bash
# ── Identity and Access Management (IAM) ───────────────────

PROJECT_ID="elearncourses-gcp"

# ── Service Accounts ────────────────────────────────────────
# Create a service account for the application
gcloud iam service-accounts create elearn-app-sa \
  --display-name="eLearning Application Service Account" \
  --description="Service account for elearncourses application" \
  --project=$PROJECT_ID

SA_EMAIL="elearn-app-sa@$PROJECT_ID.iam.gserviceaccount.com"

# Grant minimum necessary permissions (principle of least privilege)
# Storage: read/write to specific bucket only
gcloud storage buckets add-iam-policy-binding \
  gs://elearncourses-content-bucket \
  --member="serviceAccount:$SA_EMAIL" \
  --role="roles/storage.objectAdmin"

# BigQuery: data editor on specific dataset
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SA_EMAIL" \
  --role="roles/bigquery.dataEditor" \
  --condition="expression=resource.name.startsWith('projects/$PROJECT_ID/datasets/elearn_analytics'),title=dataset-restriction"

# Pub/Sub publisher
gcloud pubsub topics add-iam-policy-binding enrollment-events \
  --member="serviceAccount:$SA_EMAIL" \
  --role="roles/pubsub.publisher" \
  --project=$PROJECT_ID

echo "✅ Service account created with least-privilege permissions"

# ── Custom IAM Roles ────────────────────────────────────────
cat <<EOF > elearn-developer-role.yaml
title: eLearning Developer
description: Custom role for eLearn platform developers
stage: GA
includedPermissions:
  - storage.objects.get
  - storage.objects.list
  - bigquery.datasets.get
  - bigquery.tables.getData
  - bigquery.jobs.create
  - run.services.get
  - run.services.list
  - cloudfunctions.functions.get
  - cloudfunctions.functions.list
  - logging.logEntries.list
  - monitoring.timeSeries.list
EOF

gcloud iam roles create eLearnDeveloper \
  --project=$PROJECT_ID \
  --file=elearn-developer-role.yaml

echo "✅ Custom IAM role created: eLearnDeveloper"

# ── Secret Manager ──────────────────────────────────────────
# Store application secrets securely
gcloud secrets create database-password \
  --data-file=<(echo -n "super_secure_db_password_here") \
  --project=$PROJECT_ID

gcloud secrets create api-key-stripe \
  --data-file=<(echo -n "sk_live_your_stripe_key") \
  --project=$PROJECT_ID

# Grant access to service account
gcloud secrets add-iam-policy-binding database-password \
  --member="serviceAccount:$SA_EMAIL" \
  --role="roles/secretmanager.secretAccessor"

echo "✅ Secrets stored in Secret Manager"
echo "   Access via: gcloud secrets versions access latest --secret=database-password"

# ── VPC Service Controls ────────────────────────────────────
# Restrict GCP services to specific VPC networks
# Prevents data exfiltration — enterprise security requirement
echo "📋 VPC Service Controls configured via Organization Policy"

# ── Cloud Armor (WAF + DDoS Protection) ────────────────────
gcloud compute security-policies create elearn-security-policy \
  --description="WAF policy for eLearn platform" \
  --project=$PROJECT_ID

# Block requests from specific countries (if required)
gcloud compute security-policies rules create 1000 \
  --security-policy=elearn-security-policy \
  --expression="origin.region_code == 'XX'" \
  --action=deny-403 \
  --description="Block specific regions"

# Enable OWASP Top 10 protections
gcloud compute security-policies rules create 900 \
  --security-policy=elearn-security-policy \
  --expression="evaluatePreconfiguredExpr('xss-stable')" \
  --action=deny-403 \
  --description="Block XSS attacks"

gcloud compute security-policies rules create 901 \
  --security-policy=elearn-security-policy \
  --expression="evaluatePreconfiguredExpr('sqli-stable')" \
  --action=deny-403 \
  --description="Block SQL injection"

echo "✅ Cloud Armor WAF configured with OWASP protections"
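Cloud Armor evaluates rules in ascending priority order and applies the first match, which is why the XSS and SQL-injection rules above use priorities 900 and 901 while the region block sits at 1000. A toy evaluator illustrating that ordering (the rules and requests are invented for the sketch):

```python
# Toy model of priority-ordered security rules (lower number = evaluated first)
rules = [
    {"priority": 900, "action": "deny-403",
     "match": lambda req: "<script>" in req},
    {"priority": 1000, "action": "deny-403",
     "match": lambda req: req.startswith("XX:")},
    {"priority": 2147483647, "action": "allow",  # default rule, lowest priority
     "match": lambda req: True},
]

def evaluate(request: str) -> str:
    """Return the action of the first matching rule by ascending priority."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["match"](request):
            return rule["action"]
    return "allow"

print(evaluate("XX:GET /home"))       # caught by the region rule
print(evaluate("GET /?q=<script>x"))  # caught by the XSS rule first
print(evaluate("US:GET /courses"))    # falls through to the default rule
```

Leaving gaps between priority numbers (900, 1000, ...) lets you slot new rules in between later without renumbering.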

Part 7: Cloud Build — CI/CD on GCP

yaml
# cloudbuild.yaml — Complete CI/CD Pipeline for GCP

steps:
  # ── Step 1: Install Dependencies ──────────────────────────
  - name: 'python:3.11-slim'
    id: 'install-deps'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        pip install --upgrade pip
        pip install -r requirements.txt
        pip install pytest pytest-cov flake8 safety bandit

  # ── Step 2: Code Quality Checks ───────────────────────────
  - name: 'python:3.11-slim'
    id: 'lint'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "🔍 Running flake8 linter..."
        flake8 . --max-line-length=100 --exclude=.git,__pycache__
        echo "✅ Linting passed"
    waitFor: ['install-deps']

  # ── Step 3: Security Scanning ─────────────────────────────
  - name: 'python:3.11-slim'
    id: 'security-scan'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "🛡️ Running security scans..."
        bandit -r . -x ./tests -ll -q
        safety check --short-report
        echo "✅ Security scan passed"
    waitFor: ['install-deps']

  # ── Step 4: Unit Tests ─────────────────────────────────────
  - name: 'python:3.11-slim'
    id: 'unit-tests'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "🧪 Running unit tests..."
        pytest tests/unit/ \
          --cov=app \
          --cov-report=xml \
          --cov-fail-under=80 \
          --tb=short \
          -v
        echo "✅ All tests passed with 80%+ coverage"
    waitFor: ['lint', 'security-scan']

  # ── Step 5: Build Container Image ─────────────────────────
  - name: 'gcr.io/cloud-builders/docker'
    id: 'build-image'
    args:
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/elearncourses-api:$COMMIT_SHA'
      - '-t'
      - 'gcr.io/$PROJECT_ID/elearncourses-api:latest'
      - '--build-arg'
      - 'BUILD_DATE=${_BUILD_DATE}'
      - '--build-arg'
      - 'GIT_COMMIT=$COMMIT_SHA'
      - '--cache-from'
      - 'gcr.io/$PROJECT_ID/elearncourses-api:latest'
      - '.'
    waitFor: ['unit-tests']

  # ── Step 6: Container Security Scan ───────────────────────
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: 'scan-container'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        gcloud artifacts docker images scan \
          gcr.io/$PROJECT_ID/elearncourses-api:$COMMIT_SHA \
          --format=json > scan-results.json
        echo "✅ Container security scan complete"
    waitFor: ['build-image']

  # ── Step 7: Push to Container Registry ────────────────────
  - name: 'gcr.io/cloud-builders/docker'
    id: 'push-image'
    args: ['push', '--all-tags', 'gcr.io/$PROJECT_ID/elearncourses-api']
    waitFor: ['scan-container']

  # ── Step 8: Deploy to Cloud Run (Production) ──────────────
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: 'deploy-production'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        gcloud run deploy elearncourses-api \
          --image gcr.io/$PROJECT_ID/elearncourses-api:$COMMIT_SHA \
          --platform managed \
          --region us-central1 \
          --min-instances=1 \
          --max-instances=50 \
          --memory=512Mi \
          --cpu=1 \
          --concurrency=80 \
          --timeout=300 \
          --service-account=elearn-app-sa@$PROJECT_ID.iam.gserviceaccount.com \
          --set-env-vars="ENVIRONMENT=production,VERSION=$COMMIT_SHA" \
          --no-traffic

        # Gradually shift traffic (canary deployment)
        gcloud run services update-traffic elearncourses-api \
          --region=us-central1 \
          --to-revisions=LATEST=10

        echo "✅ Deployed with 10% canary traffic"
    waitFor: ['push-image']

# Timeout and resources
timeout: '1200s'
options:
  machineType: 'E2_HIGHCPU_8'
  logging: CLOUD_LOGGING_ONLY

substitutions:
  # Substitution values are literal strings; Cloud Build does not execute
  # shell here. Pass a real value at submit time, e.g.:
  #   gcloud builds submit --substitutions=_BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  _BUILD_DATE: 'unknown'

images:
  - 'gcr.io/$PROJECT_ID/elearncourses-api:$COMMIT_SHA'
  - 'gcr.io/$PROJECT_ID/elearncourses-api:latest'

GCP Certifications — Complete Roadmap

Foundational Level

| Certification | Focus | Exam Code |
| --- | --- | --- |
| Cloud Digital Leader | Business case for cloud, GCP products overview | CDL |

Associate Level

| Certification | Focus | Exam Code |
| --- | --- | --- |
| Associate Cloud Engineer | Deploy, monitor, and maintain GCP workloads | ACE |

Professional Level

| Certification | Focus | Exam Code |
| --- | --- | --- |
| Professional Cloud Architect | Design and plan cloud solutions | PCA |
| Professional Cloud Developer | Build and deploy scalable apps | PCD |
| Professional Data Engineer | Design data pipelines and analytics | PDE |
| Professional Cloud DevOps Engineer | Reliability engineering and CI/CD | PCDOE |
| Professional Cloud Security Engineer | IAM, network security, compliance | PCSE |
| Professional Cloud Network Engineer | Networking design and implementation | PCNE |
| Professional ML Engineer | ML model design and deployment | PMLE |
| Professional Workspace Administrator | Google Workspace administration | PWSA |

Recommended Learning Path:

Cloud Digital Leader (optional foundation)
          ↓
Associate Cloud Engineer (ACE)
          ↓
Professional Cloud Architect (PCA)
    ↓                    ↓
Professional Data       Professional Cloud
Engineer (PDE)         Developer (PCD)

GCP vs AWS vs Azure — Comprehensive Comparison

| Dimension | GCP | AWS | Azure |
| --- | --- | --- | --- |
| Market Share (2025) | ~12% | ~32% | ~22% |
| Unique Strength | Data/Analytics/AI, Kubernetes | Broadest services, largest ecosystem | Enterprise/Hybrid/Microsoft |
| Best For | Data engineering, ML/AI, startups | Versatile, any workload | Enterprise, Windows, compliance |
| Data Analytics | ⭐⭐⭐⭐⭐ BigQuery | ⭐⭐⭐⭐ Redshift | ⭐⭐⭐⭐ Synapse |
| ML/AI Services | ⭐⭐⭐⭐⭐ Vertex AI, TPUs | ⭐⭐⭐⭐ SageMaker | ⭐⭐⭐⭐⭐ OpenAI partnership |
| Kubernetes | ⭐⭐⭐⭐⭐ Inventor of K8s | ⭐⭐⭐⭐ EKS | ⭐⭐⭐⭐ AKS |
| Serverless | ⭐⭐⭐⭐⭐ Cloud Run, Functions | ⭐⭐⭐⭐⭐ Lambda | ⭐⭐⭐⭐ Azure Functions |
| Pricing | ⭐⭐⭐⭐⭐ Often cheapest | ⭐⭐⭐ Variable | ⭐⭐⭐ Variable |
| Free Tier | ⭐⭐⭐⭐⭐ Most generous | ⭐⭐⭐⭐ Strong | ⭐⭐⭐⭐ Strong |
| Network Speed | ⭐⭐⭐⭐⭐ Private backbone | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Open Source | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| Sustainability | ⭐⭐⭐⭐⭐ Carbon-neutral | ⭐⭐⭐ | ⭐⭐⭐⭐ |

GCP Career Opportunities and Salaries 2025

| Role | India (LPA) | USA (USD/year) | UK (GBP/year) |
| --- | --- | --- | --- |
| GCP Cloud Engineer | ₹10–28 LPA | $110K–$155K | £65K–£110K |
| GCP Data Engineer | ₹12–35 LPA | $120K–$175K | £75K–£130K |
| GCP Solutions Architect | ₹18–50 LPA | $140K–$210K | £90K–£155K |
| GCP DevOps/SRE Engineer | ₹12–35 LPA | $120K–$180K | £75K–£135K |
| GCP ML Engineer | ₹15–50 LPA | $130K–$210K | £85K–£160K |
| BigQuery/Data Analyst | ₹8–25 LPA | $90K–$140K | £60K–£100K |
| GCP Security Engineer | ₹12–35 LPA | $115K–$175K | £75K–£130K |

Frequently Asked Questions — Google Cloud Platform Tutorial

Q1: Is Google Cloud Platform good for beginners? Yes — GCP is very beginner-friendly with excellent documentation, the Google Cloud Skills Boost platform (free labs), and a generous free tier ($300 credits + always-free services). The gcloud CLI and Google Cloud Console make it easy to get started. The Associate Cloud Engineer certification is an excellent structured learning goal for beginners.

Q2: What makes GCP unique compared to AWS and Azure? GCP’s primary differentiators are: world-class data analytics (BigQuery), AI/ML infrastructure (TPUs, Vertex AI, TensorFlow), the cleanest global private network, Kubernetes expertise (Google invented it), and the most generous free tier. GCP is the preferred choice for data-intensive workloads and ML applications.

Q3: What is BigQuery and why is it special? BigQuery is GCP's serverless data warehouse that can query terabytes of data in seconds using standard SQL, with no infrastructure to manage. Its columnar storage format, automatic scaling, separation of storage and compute, and competitive pricing make it one of the most powerful cloud data warehouses available.

Q4: How does Cloud Run differ from Cloud Functions? Cloud Run runs any containerized application: you package your code as a container image (with Docker or a similar tool) and Cloud Run handles scaling, including scale-to-zero. Cloud Functions is for smaller, event-driven functions with a simpler deployment model (no container image required). Cloud Run gives more control; Cloud Functions is simpler for lightweight tasks.
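To make the Cloud Run model concrete, here is a minimal sketch of a container-ready service using only the Python standard library. The key contract is that Cloud Run injects the listening port through the PORT environment variable; the handler and greeting text below are our own illustrative names, not part of any GCP API.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


def greeting() -> str:
    # Response body, kept in a plain function so it is easy to test.
    return "Hello from Cloud Run!"


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = greeting().encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve():
    # Cloud Run supplies the port via the PORT environment variable;
    # 8080 is a sensible default for local runs.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```

Packaged into a container image and deployed with `gcloud run deploy`, a server like this scales automatically, down to zero when idle. The Cloud Functions equivalent would be just the handler logic, with no server loop or image build step.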

Q5: What is the best GCP certification to start with? The Associate Cloud Engineer (ACE) is the recommended starting certification. It covers the breadth of GCP services at a practical level. If you want to start even more foundational, the Cloud Digital Leader exam covers GCP concepts without hands-on requirements.

Q6: How much does Google Cloud Platform cost? GCP is generally cost-competitive with, and often cheaper than, AWS and Azure, especially for data analytics (BigQuery's on-demand model bills per query by data scanned, with low-cost storage billed separately) and compute (per-second billing, sustained use discounts applied automatically). Use the GCP Pricing Calculator at cloud.google.com/products/calculator to estimate costs.
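As a back-of-the-envelope illustration of the pay-per-query model, here is a small cost-estimation sketch. The helper name is ours, and the $6.25/TiB on-demand rate with a 1 TiB monthly free allowance reflects Google's published US pricing at the time of writing; confirm current rates with the pricing calculator before budgeting.

```python
TIB = 1024 ** 4  # bytes in one tebibyte

def estimate_query_cost(bytes_scanned: int,
                        price_per_tib: float = 6.25,
                        free_tib_remaining: float = 1.0) -> float:
    """Estimate the USD cost of a BigQuery on-demand query.

    bytes_scanned      -- data the query will scan (BigQuery reports this
                          in a dry run before you pay anything)
    price_per_tib      -- assumed on-demand rate per TiB scanned
    free_tib_remaining -- unused portion of the monthly free allowance
    """
    tib_scanned = bytes_scanned / TIB
    billable_tib = max(0.0, tib_scanned - free_tib_remaining)
    return billable_tib * price_per_tib

# A 500 GiB scan fits inside the monthly free tier:
#   estimate_query_cost(500 * 1024**3)  -> 0.0
# A 5 TiB scan with the free allowance used up:
#   estimate_query_cost(5 * TIB, free_tib_remaining=0.0)  -> 31.25
```

Because on-demand billing is driven entirely by bytes scanned, techniques covered earlier in this tutorial, such as partitioned tables and selecting only needed columns, translate directly into lower bills.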

Q7: Is GCP suitable for enterprise applications? Absolutely. GCP powers enterprises like Spotify, HSBC, PayPal, and thousands of other large organizations. GCP's security certifications (ISO 27001, SOC 2/3, PCI DSS, HIPAA, FedRAMP), global compliance capabilities, and enterprise support tiers make it fully suitable for demanding enterprise requirements.

Conclusion — Your GCP Journey Starts Now

This comprehensive Google Cloud Platform tutorial has taken you on a complete journey through GCP’s most important services and capabilities — from setting up your account and understanding the platform architecture, to deploying VMs and containerized applications, querying petabytes with BigQuery, building AI/ML pipelines, securing your infrastructure, and automating deployments with Cloud Build.

Here’s everything you’ve covered in this tutorial:

  • GCP Foundations — Account setup, gcloud CLI, project hierarchy, global infrastructure
  • Compute Services — Compute Engine (VMs + autoscaling), App Engine, Cloud Run, GKE
  • Cloud Storage — Python SDK, lifecycle policies, signed URLs, access control
  • BigQuery — Dataset creation, partitioned tables, advanced analytics SQL, BQML
  • AI/ML Services — Vertex AI, AutoML, Natural Language API
  • Cloud Functions — HTTP triggers, Pub/Sub triggers, event-driven architecture
  • IAM & Security — Service accounts, custom roles, Secret Manager, Cloud Armor WAF
  • Cloud Build CI/CD — Complete multi-step pipeline with testing, scanning, and deployment
  • Certifications — Full roadmap from Cloud Digital Leader to Professional levels
  • GCP vs AWS vs Azure — Comprehensive feature comparison
  • Career & Salary Data — 7 GCP roles across global markets

Google Cloud Platform is not just a cloud provider — it is the infrastructure of Google’s intelligence. When you build on GCP, you build on the same platform that powers Google Search, YouTube, Gmail, Google Maps, and the world’s most advanced AI research. The tools, the performance, the data capabilities, and the AI infrastructure available on GCP are unmatched.

At elearncourses.com, we offer comprehensive, hands-on Google Cloud Platform courses covering everything from GCP Fundamentals through Associate Cloud Engineer, Professional Data Engineer, Professional Cloud Architect, and Professional ML Engineer certifications. Our courses combine video instruction, interactive labs, practice exams, and real-world projects to prepare you to pass GCP certifications and excel in cloud roles.

Start your Google Cloud journey today. The cloud that runs the world’s information is waiting for you.
