Google Cloud Platform
Introduction
The Google Cloud Platform, abbreviated as GCP, is Google's suite of cloud computing services. Several features make it stand out from the crowd, the first being the $300 free credit it provides to the user at the time of creating an account on GCP.

The Need for GCP
GCP is built by Google mainly for cloud purposes. To host our applications online and make them available to the public, we can use GCP as a cloud service. It also provides services like CaaS (Compute as a Service) and STaaS (Storage as a Service).
In GCP we have VPC (Virtual Private Cloud) for networking services, VM Instances as a service that provides virtual hardware such as RAM and CPU, and a Block Storage service that provides virtual hard disks.
VM Instances
An instance is a virtual machine (VM) hosted on Google’s infrastructure. You can create an instance by using the Google Cloud Console, the gcloud command-line tool, or the Compute Engine API.
Compute Engine instances can run the public images for Linux and Windows Server that Google provides as well as private custom images that you can create or import from your existing systems. You can also deploy Docker containers, which are automatically launched on instances running the Container-Optimized OS public image.
You can choose the machine properties of your instances, such as the number of virtual CPUs and the amount of memory, by using a set of predefined machine types or by creating your own custom machine types.
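For example, a small instance with a predefined machine type can be created from the gcloud command-line tool. This is a minimal sketch; the instance name, zone, and image family below are placeholders you would replace with your own:

```shell
# Create a small Debian VM with a predefined machine type.
# "demo-vm" and the zone are placeholder values.
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud
```

Passing `--custom-cpu` and `--custom-memory` instead of `--machine-type` lets you define a custom machine type when none of the predefined ones fit.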
Virtual Private Cloud (VPC)
Google Virtual Private Cloud (VPC) enables you to launch GCP resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of GCP.
VPCs and Subnets
A virtual private cloud (VPC) is a virtual network dedicated to your GCP project. It is logically isolated from other virtual networks in Google Cloud. You can launch your resources, such as VM instances, into your VPC. You can specify IP address ranges for the VPC, add subnets, and configure firewall rules and routes.
To protect the GCP resources in each subnet, you can use multiple layers of security, including VPC firewall rules that allow or deny traffic to and from your instances.
A subnet is a range of IP addresses in your VPC. You can launch GCP resources into a specified subnet. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet.
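Since a subnet is just a slice of the VPC's IP range, the arithmetic is easy to see with Python's standard ipaddress module. This sketch carves a hypothetical 10.0.0.0/16 VPC range into /24 subnets and checks which subnet an instance's internal IP falls into:

```python
import ipaddress

# A hypothetical VPC primary range.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve it into /24 subnets (256 addresses each).
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))        # 256 subnets fit in a /16
print(subnets[1])          # 10.0.1.0/24

# Check whether an instance's internal IP lies inside a given subnet.
print(ipaddress.ip_address("10.0.1.25") in subnets[1])  # True
```

The same containment check is what routing ultimately relies on: a packet's destination address is matched against subnet ranges to decide where it goes.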
Kubernetes
Kubernetes (commonly stylized as K8s) is an open-source container-orchestration system for automating computer application deployment, scaling, and management.
It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.
Many cloud providers, including GCP, offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a managed service. Many vendors also provide their own branded Kubernetes distributions.
Google K8s Engine (GKE)
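GKE is GCP's managed Kubernetes service. A minimal sketch of creating a cluster from the gcloud command-line tool, where the cluster name, zone, and node count are placeholders:

```shell
# Create a small managed Kubernetes cluster on GKE.
# "demo-cluster", the zone, and node count are placeholder values.
gcloud container clusters create demo-cluster \
    --zone=us-central1-a \
    --num-nodes=2
```

Once the cluster is up, `gcloud container clusters get-credentials demo-cluster --zone=us-central1-a` configures kubectl to talk to it.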
Kubernetes applications
GKE offers enterprise-ready containerized solutions with prebuilt deployment templates, featuring portability, simplified licensing, and consolidated billing. These are not just container images but open-source, Google-built, and commercial applications that increase developer productivity, available on Google Cloud Marketplace.
Pod and cluster autoscaling
GKE supports horizontal pod autoscaling based on CPU utilization or custom metrics, cluster autoscaling that works on a per-node-pool basis, and vertical pod autoscaling that continuously analyzes the CPU and memory usage of pods and dynamically adjusts their CPU and memory requests in response. It automatically scales node pools and clusters across multiple node pools based on changing workload requirements.
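The core rule behind horizontal pod autoscaling is documented in Kubernetes itself: the desired replica count is the current count scaled by the ratio of the observed metric to its target, rounded up. A small sketch of that rule:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # 6
# 6 pods averaging 30% CPU against a 60% target -> scale in to 3 pods.
print(desired_replicas(6, 30, 60))  # 3
```

The real autoscaler adds tolerances and stabilization windows around this formula so that small metric fluctuations don't cause constant resizing.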
Workload and network security
GKE Sandbox provides a second layer of defense between containerized workloads on GKE for enhanced workload security. GKE clusters natively support Kubernetes Network Policy to restrict traffic with pod-level firewall rules. Private clusters in GKE can be restricted to a private endpoint or a public endpoint that only certain address ranges can access.
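A Kubernetes Network Policy is expressed as a YAML manifest. This sketch, with placeholder names and labels, allows ingress to backend pods only from frontend pods and implicitly denies everything else:

```yaml
# Placeholder example: only pods labeled app=frontend may reach
# pods labeled app=backend; all other ingress traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```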
Load Balancer
With Cloud Load Balancing, a single anycast IP front-ends all your backend instances in regions around the world. It provides cross-region load balancing, including automatic multi-region failover, which gradually moves traffic in fractions if backends become unhealthy. In contrast to DNS-based global load balancing solutions, Cloud Load Balancing reacts almost instantly to changes in users, traffic, network, backend health, and other related conditions.
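The idea of moving traffic "in fractions" can be illustrated with a toy simulation. This is not GCP's actual algorithm, just a sketch of gradual draining: each step moves a fraction of an unhealthy region's traffic share to the healthy regions, in proportion to their current shares:

```python
def drain_step(weights: dict[str, float],
               healthy: dict[str, bool],
               fraction: float = 0.25) -> dict[str, float]:
    """Move a fraction of each unhealthy backend's traffic share
    to the healthy backends, proportionally to their shares."""
    moved = sum(w * fraction for b, w in weights.items() if not healthy[b])
    healthy_total = sum(w for b, w in weights.items() if healthy[b])
    new = {}
    for b, w in weights.items():
        if healthy[b]:
            new[b] = w + moved * (w / healthy_total)
        else:
            new[b] = w * (1 - fraction)
    return new

# Hypothetical regional traffic shares; "eu" has just failed health checks.
w = {"us": 0.5, "eu": 0.3, "asia": 0.2}
h = {"us": True, "eu": False, "asia": True}
for _ in range(3):
    w = drain_step(w, h)
print(w)  # eu's share shrinks by 25% per step; the total stays 1.0
```

Draining gradually, rather than cutting over all at once, avoids overwhelming the remaining healthy backends with a sudden traffic spike.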
Feel free to ping me in case of any suggestions or anything of that sort.
Happy Reading!