GCP Workshop Self-Reflection

  1. GCP provides us many services, and one of those services is Google Kubernetes Engine. Now we have to know how Kubernetes works: #Kubernetes
  • One use case is fault tolerance: if the OS running your webserver goes down, i.e., is completely terminated, there will be a huge loss in business, and any client trying to access your website will fail. For this kind of scenario we need a program that keeps on monitoring that particular OS. Traditionally, if the OS terminated, this program sent a notification to the team, and the team then contacted Docker and launched the same OS again, but that part was manual. Instead of my program, i.e., code, sending a notification to human beings to launch one more OS, I want the program to launch the OS automatically, within milliseconds.
  • If you want a fault-tolerant type of infrastructure, you need that monitoring program, and this is where the Kubernetes role comes into play. Kubernetes is a tool, or program, that has an inbuilt capability to keep on monitoring your containers. Docker has its own product for this, i.e., Swarm, but Kubernetes is more powerful than Swarm. (A small sketch of this self-healing behaviour follows.)
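  For instance, a minimal sketch with kubectl against a running cluster; the Deployment name "web", the nginx image, and the pod name are illustrative assumptions, not from the workshop:

      # Create a Deployment; Kubernetes keeps monitoring the pod it owns
      kubectl create deployment web --image=nginx

      # Simulate a crash: delete the running pod (pod name is hypothetical)
      kubectl delete pod web-7f5d6c4b9-abcde

      # Kubernetes notices the pod is gone and launches a replacement by itself
      kubectl get pods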
  • Another use case is scaling: your OS is running with the webserver, and your webserver has a limit, say it can accommodate only 100 clients in 1 second, but suddenly you are getting 1,000 clients in 1 second, and they are not able to connect; your site shows a "server timeout" error. So rather than launching one more OS by hand, we again run a program: as soon as the number of requests, i.e., clients, suddenly increases, our program automatically launches one more OS for us, and if the clients decrease, the code terminates that OS.
  • Here we are doing scaling: if clients increase, our program adds a new OS, and that is scale-out, while if clients decrease, our program terminates that OS, and that is scale-in. And if your requirement is to increase RAM, CPU, HD, network card, etc., that is scale-up, while if your requirement is to decrease them, that is scale-down.
  • Scale-in and scale-out are part of horizontal scaling, while scale-up and scale-down are part of vertical scaling. The program that manages scaling for us is known as Kubernetes. (See the scaling sketch below.)
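  A minimal sketch of scale-out and automatic scaling, reusing the illustrative "web" Deployment; the replica counts and CPU threshold are assumptions:

      # Scale out by hand: go from 1 replica to 5
      kubectl scale deployment web --replicas=5

      # Or let Kubernetes manage it: keep between 2 and 10 replicas,
      # adding or removing them around 80% average CPU (needs metrics-server)
      kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80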
  • One more use case: if we are running with three OSes having webservers, the biggest challenge here is that we don't know the IPs, i.e., what new IP comes when a new OS is launched by the program, and we don't want to give hundreds of IPs to the client.
  • So again we write a code and provide one IP, i.e., a node IP, to the client, and tell them that, suppose, IP 100 is the webserver. If somebody comes to IP 100, behind the scenes IP 100 goes to IP 1 and serves the webserver to the client; and for balancing the load, when the next client comes to IP 100 it sends them to IP 2 and serves the webserver there, and so on. The program which is doing this load balancing for us is known as Kubernetes (sketched below).
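  A minimal sketch of giving clients one stable entry point on minikube, again using the illustrative "web" Deployment; Kubernetes then balances requests across the pods behind it:

      # Expose the Deployment behind a single service IP/port
      kubectl expose deployment web --type=NodePort --port=80

      # Ask minikube for the one URL the client should use
      minikube service web --url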
  • Kubernetes:
  * It manages the fault-tolerance part.
  * It manages the auto-scaling part.
  * It manages the load-balancing part.
  And tons of other use cases are managed by Kubernetes.
  • Clustering:
  * If you have one or more masters and multiple slaves, and they work together, this kind of setup is known as a multi-node cluster.
  * If you have one node, and both master and slave are using this node, this kind of setup is known as a single-node cluster.
  • Minikube:
  * It is just a program to install Kubernetes.
  * It sets up the cluster.
  * It makes things very easy.
  • Minikube commands (see the sketch below):
  * minikube start: when you run this command the first time, it downloads the ISO file, creates the VM, and installs it for you; the second time, it just starts the minikube services.
  Note: in Kubernetes, when we launch a container it is known as a pod.
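  A minimal sketch of the minikube commands mentioned above:

      # First run: downloads the ISO, creates the VM, installs Kubernetes;
      # later runs just start the existing minikube services
      minikube start

      # Check whether the single-node cluster is up
      minikube status

      # Stop the VM without deleting the cluster
      minikube stop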
  • Pod:
  * It is not at all equal to a container.
  * A pod contains containers.
  * It is the main unit of Kubernetes.
  * Kubernetes always monitors your pod.
  * Inside the pod we have containers.
  * The pod is the only one that manages your containers and contacts the Docker Engine to launch a container.
  • kubectl run: it only launches a pod; in this case Kubernetes doesn't give you fault tolerance, as that power only comes from a Deployment.
  • kubectl: it is a client program. It only bothers about where your master is, because it always connects to the master node. kubectl always goes to the config file first.
  • kube-apiserver program: it listens to the client, i.e., to whatever kubectl asks for, and 8443 is the fixed port for the API on minikube.
  • For the config file you need a CA, a CRT, and a key for authentication.
  Note: some of the information I provide here is specific to minikube. (A few kubectl commands are sketched below.)
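  A minimal sketch of these client-side pieces; the pod name "mypod" and the nginx image are illustrative assumptions:

      # Launch a bare pod: no Deployment behind it, so no fault tolerance
      kubectl run mypod --image=nginx

      # kubectl reads this config file first to find the master, along with
      # the CA, CRT, and key it uses for authentication
      kubectl config view

      # Confirm which master (API server address and port) kubectl talks to
      kubectl cluster-info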
  • But how does our frontend server come to know the IPs of the different pods? Here we use the registration concept: either we manually register the backend servers' IPs with the frontend server, or we register them automatically, and since today is the world of automation, we always go for the automated way. The load balancer is very intelligent: whenever we launch one new pod, that pod's IP is automatically registered with the frontend server, and that concept is known as automated discovery (see the sketch below).
  Note:
  * The frontend server's ip:port is known as an endpoint.
  * We use a round-robin mechanism for the load balancer.
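  A minimal sketch of watching automated discovery for the illustrative "web" service:

      # List the pod IPs currently registered behind the service
      kubectl get endpoints web

      # Scale out, then look again: the new pod's IP registers itself
      kubectl scale deployment web --replicas=4
      kubectl get endpoints web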
  • IAM is a way through which we can give access to multiple users, such that one user has owner power, another has only view power, while some other user has edit power only. (A gcloud sketch follows.)
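  A minimal sketch with the gcloud CLI; PROJECT_ID and the email are placeholders, and roles/owner, roles/editor, and roles/viewer are GCP's basic roles:

      # Give one user view-only power on a project
      gcloud projects add-iam-policy-binding PROJECT_ID \
          --member="user:alice@example.com" \
          --role="roles/viewer"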
  • #GoogleAppEngine
  • It provides us Platform as a Service, which is useful for developers to test their code. (A deployment sketch follows.)
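  A minimal sketch of pushing code to App Engine with gcloud; the runtime named in app.yaml is an assumption, so use whichever your code needs:

      # app.yaml in the project directory declares the platform, e.g.:
      #   runtime: python39
      # Then deploy the code and open the app in a browser:
      gcloud app deploy
      gcloud app browse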
