Google Cloud Full Course for Beginners [2025] | GCP Tutorial with Hands-On Labs | GCP Crash Course
By Tech Tutorials with Piyush
Summary
## Key takeaways
- **GCP Free Tier Limits E2 Micro**: Free tier provides one non-preemptible e2-micro VM instance per month in specific US regions and 30 GB of monthly disk; excess is charged at standard rates. [13:14], [13:25]
- **Deletion Rule Trap**: Data disks default to 'keep disk' on VM deletion, so charges continue unless the rule is changed to 'delete disk'; boot disks default to delete. [23:37], [24:08]
- **Static vs Ephemeral IPs**: Ephemeral IPs are released on VM stop/delete; static IPs stay attached to the project until explicitly released, preventing IP loss. [55:44], [56:18]
- **MIG Auto-Healing and Scaling**: Managed Instance Groups auto-scale on a 60% CPU threshold and auto-heal via health checks, recreating failed VMs from templates. [01:15:31], [01:21:21]
- **Billing Without Account Blocks**: Projects without a linked billing account cannot enable APIs or use services, even free ones; a link is required for all operations. [01:37:12], [01:37:45]
- **Storage Class Cost Tradeoff**: Standard has the highest storage cost but suits frequent access; archival is cheapest to store but has a 365-day minimum, and early deletion incurs the full-period charge. [01:58:57], [01:58:45]
Topics Covered
- Full Video
Full Transcript
as someone who's a beginner to a cloud such as Google Cloud, which has over 200 services, you will always wonder which services to learn and in which order, isn't it? That is the reason I have published this video: to teach you from the very basics, starting from how we provision a free trial account in Google Cloud, what a GCE instance is, how we provision it,
networking uh billing profile payment gcp resource hierarchy gke big query and many other services which are essential for you to know as a beginner and till
the intermediate level so I have covered everything it's going to be a long video for the next 5 hours just sit back relax and grab a tea or coffee whatever you
would prefer. By the end of this video, I promise you your Google Cloud fundamentals will be top-notch and you should be able to tackle any complex problems on your own after this. So let's
start with the video. If you are new here, my name is Piyush and I publish content on cloud and DevOps: all three clouds, Azure, AWS and GCP, and on the DevOps side Kubernetes, CI/CD and everything related to it. So check out my channel and subscribe for more such videos, and yeah, without any further ado, let's start this video with the first topic of how
we provision a free trial account in Google Cloud. To register a free GCP account, you would head over to google.com and search for GCP free tier, then head over to the first link, and over here it'll have an option to Get started for free; click on
that once you do that you'll be presented with a sign up wizard so make sure you verify your account information your name your email address if you want
to use a different email address than your Chrome profile, then click over here on Switch account and use the other address that you want to use. So this free trial
account comes with $300 of free credit that you could spend in next 90 days there are some limitations of using
this credit and many of the services will still be chargeable so make sure you watch the video till the end because I'll tell you
some important details some important insights about how to use it completely free and what are the limitations of it and what all the things that you need to
keep in mind so that you won't get overcharged so once the trial period ends you won't be getting overcharged unless you manually upgrade to a paid
account this is written over here as well okay so let's uh continue with the sign up so you verify your country and
select the one that applies to you; for me it's Canada. Then select what best describes your organization or needs, so let's select personal
project check mark this box I have read and agree to the free trial terms of services and hit continue then it'll ask you to enter
your phone number for identity verification, okay, and hit Send code. It will send a code to your phone number; once you receive it, you enter the code over here and hit Verify. Then your contact information will be verified, and then it will auto-populate
your name and address from your Chrome profile if you have an active one or you could just manually edit this by clicking on the pencil sign next to it
it will have your payment method, and hit Start my free trial. This payment method could be your credit card or your debit card or your PayPal account, so you
can use either of those and then you can hit start my free trial once you do that it will automatically create a billing account
and a project for you as the default one so this is just uh some feedback uh survey type questions so you could just hit close for
now and uh these are some of the tutorials that would get you familiarized with gcp services so I'll just skip these for now
as well okay so this is my gcp console over here and at the top of the page you see my first project so this is the project if
you have multiple projects then it'll appear over here so let's say I have two projects because I have created one before as well so there are two projects
for me the name is my first project for each of those because they were autocreated and that is why they have the same name but a project is uniquely
identified by its project ID which is this one and which is unique for each of the project right so if you want to create a new project you could click on new
project over here and then you could give it a name okay and provide the organization that you want it to be part of I don't have any organization
currently so that is why it's not there we'll have a look uh over these things like project organizations and everything in the later videos but for
this video I'm just going to give you some basic understanding of gcp and how to get yourself started with it so I'll hit on the homepage over here let's uh
discuss some more details so over here this is your dashboard all the services that you would provision would appear over here and you could see the realtime
data like the API request is visible over here and then you could set up some monitoring dashboards and widgets which will appear over here from the left side
you would have all your services so these are the pin services at the top of the page pin services are something that you frequently use so you pin it and they would appear at the top of the page
like these many if you scroll a little down you'll see more products that means all the other services that are pinned and
unpinned as well so let's say you would be using compliance frequently so I just pin it from here okay and once you pin it
it'll appear in this section as well; you see Compliance in the pinned services. In the same way, you could unpin any existing service that has already been pinned, so let's unpin the Marketplace; once you do that, it'll be gone from here, from the pinned
Services all right so this is how you could uh use your services so you select the service from here from compute engine let's say you want to create a VM
instance so you click on that so that's one way other way is to search it from here you search for compute engine and
click on compute engine right so these are the ways you could search a particular service in gcp Project now there are different ways of interacting
with the cloud services: one is through the console that we are already looking into, another is RESTful APIs, which is programmatic access to the GCP
services and the third one is using the cloud shell or the g-cloud commands so you see a button over here which says activate Cloud shell if you click on
that a shell will be provisioned to you a screen over here which you could just expand as per your comfort
level. Okay, so Cloud Shell comes with the Cloud SDK, which is gcloud, Cloud Code, and an online code editor as well, with all the utilities pre-installed for you, so you don't have to worry about any installations, and it is also free for all users. You hit Continue and it will provision a Cloud Shell for you. All right, so by default the prompt will be your email ID at Cloud Shell, and if you do a pwd, you'll see a home directory has been created for you as well. Okay, there are different options in it: you have Terminal settings, and in Terminal settings you could
go to terminal preferences and change the color theme from dark to light or custom change the text size let's go for the large one and the same way you could
just change other settings as well, such as your font, your copy settings, your keyboard, and the scroll bar. So these
are all the settings that we have. Then it has a Web Preview option: if there is a service that is already running on port 8080, let's say you have an nginx server or an Apache web server running on port 8080, you could just click on Preview on port 8080 and it will show you the preview of that particular application, or you could change the port if you want to use a custom port.
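A quick way to try Web Preview from Cloud Shell (a minimal sketch; the directory served and the port are just examples) is:

```bash
# Serve the current directory on port 8080, then use
# Web Preview -> "Preview on port 8080" to open it in the browser
python3 -m http.server 8080
```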
Then this particular section will have some session information, and if you click on these three dots over here, the More options menu, you'll have the option to restart the Cloud Shell and upload and download files. To upload a file it's pretty
simple you click on the upload you choose the file and then you hit upload from here and the default directory is your home directory you could change it from here as
well all right the same way you could download the file this is the cloud console plus this is cloud shell and then we have another option which is
Open Editor. As I have mentioned before, it comes with an online code editor, so if I click on that, it will provision a code editor for you. This is a code editor which looks pretty similar to Visual Studio Code or PyCharm or Atom or the other integrated development environments that you might be familiar with, so it will have an
Explorer a Version Control and a few other things all right so I'll just open a folder from here
file open and this is my home directory in the cloud shell hit over here and this is the folder that is already there
so it has one file currently, README-cloudshell.txt; if I click on that, it'll open the file in the editor pane. Let's create a new file: File, New File,
and then give it a name test.txt let's add some content to it Namaste Google cloud and I hit on
Ctrl+S, or you could just save it from the File menu as well: File, Save, which is basically the same thing. And then you could just switch to a terminal from here: Open Terminal.
If I run ls over here now, you would have two files: one which was already there and the one we have just created. If we cat this particular file, you would see the content that we have just entered. So instead of using the vi editor or the nano editor, which are command-line editors,
you would have a GUI based editor and you could again go to that editor from here okay so this is one way the other way of using it you open it in a new
window from here click on that and once you do that you don't have to switch between your Cloud shell and editor so it'll be presented to you in a
split screen so you see at the bottom of the page this is the cloud shell and over here is your code
editor. So these were the other things; let me just close the Cloud Shell. All right, this is what I was talking about, the free tier usage limits. So
even though you would have $300 of credit that you can spend in next 90 days once you start using the free tier account but it will have some
limitations attached to it first and foremost not all the services of Google Cloud are covered in the free tier limit so make sure you understand this very
well before you start using any service so these are the services that are covered over here and uh let's say if you want to know what are the limitations
attached to compute engine then you scroll down to this section over here which says compute engine and it will tell you like one non preemptable E2
micro VM instance per month in one of the following us regions so if you create any other VM than E2 micro in any other region except the ones that are
mentioned over here then you'll be charged as per the standard rates similarly 30 GB of monthly SSD dis that you can use free of cost anything more
than that would be charged as per the standard billing rates and and so on so there are things that you can see over
here as well right so the same applies to all the other resources that are covered under the free tier right like
GKE as well, and Google Maps, Pub/Sub, and so on. If you scroll down a bit, it says any usage above the free tier limit is automatically billed at the standard rate; you can monitor and control costs by setting up budgets and alerts, so this is really important. Then, production-ready images that are there in the Google Marketplace come with premium OS licenses that are not covered by the free tier either; you still have to pay for the licensing cost of the services that you would use. Also, billing support is included with all Cloud Billing accounts, but you should be a
billing administrator to interact with the billing support and other thing that you need to keep in mind is this technical support for free trial users
will end on 15th June so you won't be getting any support from uh gcp technical support team for the
free trial accounts after 15 June then if you want to have the technical support from the Google then you should upgrade your support plan so make sure
you read all the things over here that's been mentioned and I'll paste this document Link in the description section as well so that you could keep it handy
follow it along and you should be good to go so now that we have logged into our Google Cloud console I'll go over here in this section where it says
Compute Engine and look for VM instances. I'll click over there, and the first thing it will ask me is to enable the Compute Engine API.
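If you prefer enabling it from Cloud Shell instead of clicking through the console, a minimal sketch (compute.googleapis.com is the standard service name for the Compute Engine API) would be:

```bash
# Enable the Compute Engine API for the currently selected project
gcloud services enable compute.googleapis.com

# Confirm it shows up among the enabled services
gcloud services list --enabled | grep compute.googleapis.com
```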
remember when you are using any gcp service for the first time you have to enable the API associated with it like if you want to use gke for the first
time or Cloud IAM, you have to enable its respective API. So for this particular service, I have to click over here to enable the Compute Engine
API without enabling the compute engine API I wouldn't be able to use the compute engine service so once the compute engine API is enabled you will see the screen something like this where
you have an option to create the instance or import the VM or refresh and all other options are visible and you could go ahead and create your first
instance. So to create an instance, click over here on Create Instance and then give it a name; let's call it test VM
Linux you could add labels as well over here and based on the default configuration the one that you see over
here you will see the monthly estimate pricing for this if you are using a gcp trial account or a free tier account all
the regions and all the VM types are not supported with that, so make sure you watch my previous video which I have posted earlier; I'll put the link in the description section below as well as
in the title bar and it'll show you the different limitations that it has for example only few regions and few machine
types are covered uh under the free tier so I'll select the US West one which is what I know is one of those regions I
could choose the zone as per my needs so let's use this one US West 1 a over here we have different machine families it's been categorized in general purpose
compute-optimized, memory-optimized, or GPU, and the series is this one. So if you select compute-optimized, you'll see the options as C2 and C2D; if you use memory-optimized, then you'll see M2, M1 and M3, and so on. So there are different
categories of these machine families so I'll go ahead in the first one which says general purpose and I'll use the E2
family, and inside that I'll use the machine type as micro, because only e2-micro is covered under the GCP free tier account, and it will have one shared vCPU core and 1 GB of memory, which should be sufficient for our demo.
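If you want to double-check those specs from the command line, a quick sketch (assuming the us-west1-a zone used in this demo) is:

```bash
# Show the vCPU and memory configuration of the e2-micro machine type
gcloud compute machine-types describe e2-micro --zone=us-west1-a

# Or list every machine type offered in that zone
gcloud compute machine-types list --zones=us-west1-a
```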
Over here, you'll see the monthly estimate has gone down from $28 per month to $7 per month, but this charge will also be covered under the credit that you get from the GCP free tier account. To verify that you are covered under this, you scroll down to the bottom of the page and you'll see that your free trial credit will
be used for this VM instance okay so we are good till here we have selected a region Zone and then series and machine
type. Now these are some of the options, like whether you want to enable a display device for screen-recording or screen-capturing tools; I don't need it at this moment.
then there is a concept of confidential VM Service uh we don't need that as well you could deploy your container images on top of this VM I'll skip this for now
then in the boot disc section you have the option to select the VM image that you want to use by default it is Debian
gnu Linux 11 so I can change it by clicking over here change button I could choose my operating system from uh Linux
based or Windows Server; currently macOS is not supported. So I'll choose Ubuntu from here, and you could just leave the default version or select any of the available versions. Then there is the boot disk type: whether you want to use a balanced persistent disk, an extreme SSD, or a standard persistent disk, and the size is 10 GB. Okay, so I'll use the default configuration. There is also an option to
use custom images as well but we haven't created any images or snapshot yet that is why it won't be visible for us and here you'll see different
snapshot if we would have created it already so it's not there okay so I'll just select the configuration that we have set for the Public
Image then there will be a service account created by default with the VM so this would be the name of it or you could just select no service account
if you don't want to create the service account I'll keep the default one and then it has access Scopes which
defines how your VM will interact with other gcp services so I'll select the default access but there is an option to select full access to all Cloud apis
that means compute engine API will be able to communicate with all the apis that is available for us to consume or you could select the particular API that
you would want to use okay so these are the different options available for that let's say if you want
to use Cloud SQL, it says None or Enabled; if you want to use Storage, it asks whether you want read-only access, write-only access, read-write, or full access. So you could define the level of access as well as the type of API that you would want to use. So I'll keep it default as
well then in this firewall option we could select if we want our application to be available over HTTP or https Port like if you have installed a web
application on your VM and it will be publicly available for the users to consume then you could allow HTTP and https
traffic. Then we have this option which says Networking, Disks, Security, Management, Sole-tenancy; you could expand it by clicking over here. We'll be doing a deep dive into the networking
section in a separate video, so let's continue from here. There is an option to add additional data disks; currently the VM only has one boot disk attached to it, which we have selected, and you could use this option which says Add new disk. Okay, you would give it a name and a description, then select whether it's going to be a blank disk, which means you have to format it yourself to make it usable, or whether you want to create it from a disk image or a snapshot. Then the disk type; these are the same disk types that we have seen before. The disk size could be
anything between 10 GB to 65,536 GB optionally you could create a snapshot schedule as well to make sure
that your data in the persistent disk is getting backed up regularly. Then you select your encryption type: whether you want to use a Google-managed encryption key, a customer-managed encryption key, or a customer-supplied encryption key. Then you can select the deletion rule; this is really important. When deleting the instance, there are two options: Keep disk or Delete disk. By default, for a data disk, Keep disk is selected; that means even if you delete your GCE instance, the additional data disk associated with it will not be deleted. It will still be there, so you'll still be getting charged for it, and it will be there for you to use in the future unless you explicitly delete it. You can change this behavior by clicking over here on Delete disk. So this particular deletion rule is different for the boot disk and for the additional data disk: for the data disk it is selected as Keep disk, whereas for the boot disk it is selected as Delete disk. All right, and once you fill in all the details you could click over here on Save; I'm not going to do
that for now so I'll just close this popup the next one is security section in this security section you could select how you would want your user to
connect to the VM by default when you connect to a VM using this console or using the g-cloud SDK your SSH keys are automatically generated so this is the
default authentication and authorization mechanism, but you could use other permissions as well, such as the option which says Control VM access through IAM permissions: you could create an IAM role, attach it to a user or a group of users, and that would be used for your authentication and authorization. You can also enable MFA (multi-factor authentication) over here, or you could go ahead and add manually generated SSH keys: you use a third-party
tool to generate the keys and then you upload the keys. Then we have the Management option, where you could add your user data, like a startup script, which will be executed after your VM is started. So you add commands like apt update or apt install nginx; if you do that, these two commands will be executed once the VM is up, and that is what the startup script does.
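A minimal sketch of such a startup script (assuming a Debian or Ubuntu image; nginx is just an example package) could look like this:

```bash
#!/bin/bash
# Runs as root on first boot of the VM
apt-get update -y
apt-get install -y nginx
```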
Then you could add your metadata over here; it is basically a key-value pair where you add an item, entering the key and the value. Then, in this next
section which says Availability policy, you could select your VM provisioning model; the options are Standard or Spot. Standard is the default policy, but if you go with Spot you will see a drastic decrease in the monthly price estimate: it was earlier around seven dollars and something, and now it is just $2.83. This is because spot instances, or preemptible instances, come with a lot of limitations. You could go over here to this link, which I'll provide in
the description section, and you can read about all the instance limitations. The first one is that Compute Engine might stop preemptible instances at any time due to system events, so these are not highly available and will not be running all the time; if your application runs batch workloads that can be restarted at any time, only then is it suitable for you. Then, Compute Engine always stops preemptible instances after they run for 24 hours, so 24 hours is the maximum lifetime of a preemptible VM. Then, they might not always be available, they are not covered under service level agreements and are excluded from the Compute Engine SLAs, and the Google Cloud free tier credits for Compute Engine do not apply to preemptible instances. So make sure you read all those things and see if a preemptible VM is a suitable
choice for you. So I'll just choose Standard again. Then it says On host maintenance: when Compute Engine performs periodic infrastructure maintenance, it can migrate your VM instance to other hardware without downtime, so this setting says what you want to do in that case: whether you want to migrate the VM instance, which is recommended, or you would want to terminate the VM
instance so you can select this option and then there is this uh automatic restart again if you want to have your VM restarted in case there was some
system crash or a hardware failure or a software failure or if you want to keep it off then you can use that as well and if you
scroll up you can verify all the details that you have looked into and if you could go back to this section boot disk and click on
change and click over here show Advanced configuration you will see the option of deletion rule here as well this is the same one that we saw in the additional
data disk section; the only difference is that the default option over here is Delete boot disk, which means the boot persistent disk will be deleted
when you delete the VM okay so make sure you know what this is all about so once I verify everything
and once everything looks good to me I could click over here create instance cancel it or generate the equivalent
command line so if I click over there this is a g-cloud uh create VM command that you could use if you want to provision your VM from the command line
so I'll just copy it from here and close it, and I'll just keep this command handy, as we'll be provisioning the next instance from the gcloud SDK. For now, I'll click on Create Instance, and it'll just take a couple of minutes and the VM will
be created for you I'll just pause the video for a brief moment and we'll come back to see all right so our VM is now up and running you will see the status
is green and it is running; the name was this, it was provisioned in the zone that we selected, and it has an internal IP and an external IP associated with it, and a network interface card. You could log in to this VM by any of the following methods: using a new browser window, or on a
custom port or you could use a user provided SSH keys or you could use a g-cloud command to SSH into the instance
or use another SSH client like PuTTY or MobaXterm. So one way is you click over here and it'll open a popup for you; it'll automatically generate the SSH keys, transfer them to the VM, and then establish the SSH
connection with the VM so once you do that you'll be logged in and let's say if you want to
use, if you want to see the hostname of this server you could do a hostname -i, and the internal IP is 10.138.0.2. If you go back over here, this is the same one, 10.138.0.2. So this is how we logged in to this particular server. If you want to use the gcloud command to do the SSH,
click over here, say View gcloud command, and copy the command from here. It basically has the details: gcloud compute ssh, then the zone in which the VM was provisioned, the name of the VM, and the project in which the VM was created.
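The copied command generally has this shape (a sketch with placeholder values; the zone, VM name, and project ID will match whatever your console shows):

```bash
# SSH into the VM via the gcloud CLI; keys are generated and propagated on first use
gcloud compute ssh --zone "us-west1-a" "test-vm-linux" --project "my-project-id"
```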
I just copied it. You could run it in a Cloud Shell from here, or you can just close the window and click over here on Activate Cloud Shell; it will open a new Cloud Shell window for you at the bottom of the screen, and the command that I copied before, I'll just paste it
over here and hit enter It'll ask you to authorize or reject the credential request I'll click over to this authorize do you want to continue to
generate the SSH keys I'll click yes and then hit enter once more and then it will create a SSH
metadata key for you. Okay, I'm in the server, and you could verify again with hostname -i: 10.138.0.2, which is our internal IP of the VM. All right, let's do an exit on this, and now we'll try to create a new
VM with the g-cloud SDK so I'll just clear the screen and then the command that I have copied
before, I'll just change the name over here, because this VM was already created, so I'll give it the name test-vm2-linux, and the rest of the details I'll keep the same. I'll just delete the service account as well, because this service account was also created already, and then I'll update the device name as well, to test-vm2-linux. When I have verified that everything is as we want it to be provisioned, I'll just copy this command, paste it over here in the Cloud Shell, and hit Enter.
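Stripped down to its essentials, the command we paste looks roughly like this (a sketch: the extra flags the console generates are omitted, and the image family is an assumption based on the Ubuntu image chosen earlier):

```bash
# Create a second e2-micro VM from the CLI; unspecified options fall back to project defaults
gcloud compute instances create test-vm2-linux \
  --zone=us-west1-a \
  --machine-type=e2-micro \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud
```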
Okay, it will give you these details: this VM was just provisioned in this particular zone, the type is e2-micro, it is non-preemptible (that is why that field is blank), the internal IP is 10.138.0.3, and this is the external IP. Let's see if it is present over here: earlier there was just this one VM; if I hit refresh, you would see the other VM as well,
which is test vm2 this is what we have just created and this is the internal IP if you want to do SSH you just copy the gcloud
command click over here and go down I'll clear screen one more time and I'll paste the command
gcloud compute ssh with test-vm2-linux as the VM instance, hit Enter, and again it'll add the keys. Okay, I'm in the server; let's clear the screen and verify the hostname with hostname -i: this is 10.138.0.3, so this
is our vm2 we have verified it now to delete the VM you just select the VM okay and click over more action and
Delete: Are you sure you want to delete these two instances? If you click on Delete, they will be deleted, and this will also delete the two boot disks, because the Delete disk option on those VMs was enabled; that is why those boot disks will also be deleted. Let's say
someone accidentally clicked on this let's just unselect these two let's say someone accidentally did that and deleted the
VM: click over here and it'll be deleted. But you would want to avoid someone accidentally deleting this particular VM, so what you will do is go to this VM, click on Edit, and then enable this option, Enable deletion protection. When you do this, the instance will not be deleted unless you uncheck this option and save the configuration again. So let's try this: let's enable this and click on Save. Okay, let's go back and try to delete the VM again: I'll select the VM, click over here on More actions, and now you won't see the option to delete; it is grayed out because deletion protection is enabled on this particular VM.
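Deletion protection can also be toggled from the CLI; a quick sketch using the VM name and zone from this demo:

```bash
# Turn deletion protection on for the VM
gcloud compute instances update test-vm-linux \
  --zone=us-west1-a \
  --deletion-protection

# Turn it back off when you really do want to delete the VM
gcloud compute instances update test-vm-linux \
  --zone=us-west1-a \
  --no-deletion-protection
```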
Now that we have seen how to provision and manage a Linux-based VM, let's use the same method to provision a Windows-based VM and RDP into it. So I'll click again over here on Create Instance, give it a name, test-vm-windows, I'll again use the same region, and I'll select the machine type as E2
micro. I'll go over here to the boot disk option and change the operating system to Windows Server, and I'll select the Windows Server Datacenter option, which supports a graphical user interface, so this one over here which says Server with Desktop Experience. I'll click over here, and by default it will have a balanced persistent disk of size 50 GB; earlier it was 10 GB. And if you look, Delete boot disk is also enabled, because this is the boot disk; had it been a data disk, then this option would have been Keep disk, but we can always alter the behavior as per our needs. So I'll keep it default, because I would want this disk to be
deleted once the VM is deleted select this I'll verify everything I'll keep everything as default I'll allow HTTP
and https traffic I'll click over here and these options will also be the same click create once this Windows VM is
provisioned, and we'll see how we log in to that VM, because it'll be a little different than the Linux VM: we don't SSH into a Windows VM, we do an
RDP so let's see what are the steps of doing it once the VM is up and running in the meantime I'll just go ahead and delete this uh test VM but I wouldn't be
able to delete it directly because I have enabled deletion protection, so I'll just disable it first. So I'll go into the
VM and edit and I'll uncheck this box over here click save go back to VM instances again and click over
here and then delete now it will be deleted now it's showing me this particular error in the notification
which says Windows VM instances are not included in the free trial to use them first enable billing on the account you'll still be able to apply your free trial credit to eligible product and
services. That means I wouldn't be able to use the Windows VM unless I enable billing. So let's go to the billing dashboard: I'll open this in a new tab and search for billing. So over here I could either upgrade from here, or over here I could
just activate it because uh Windows VM is not included in the free trial account so I have to upgrade my account I would still have that uh balance
remaining with the credit, which is $385 Canadian, and I could still use it with the services that are eligible to be used in the free trial. So I click over
here upgrade hit activate and it says activated successfully continue now I'll go back
to my Compute Engine VM instances. I just go over here to the notification and click Retry on the failed notification, and it will just resume the operation from
there okay so the windows VM is also being provisioned it has an internal IP and an external IP to do a login to this
Windows VM we have the option to RDP if you click over here it will ask you to connect using the RDP client so you would have to
first download the RDP file if you'll be using a third-party client, so you click over here and the RDP file will be downloaded for you. You first have to set the Windows password, so this is the prerequisite: click over here, use a username or select the one that has already been given to you, hit Set, and then there'll be a
password generated for you, so you copy it and hit Close. Okay, so you have set the Windows password; you can set it from the gcloud command as well.
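A sketch of the equivalent gcloud command (the VM name and zone follow this demo; the username is whatever you picked in the dialog):

```bash
# Generate (or reset) the Windows password for the given user on the VM
gcloud compute reset-windows-password test-vm-windows \
  --zone=us-west1-a \
  --user=demo_user
```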
Once you reset the Windows password, you download the RDP file and edit it: I'll click over here, Show in folder, and then I'll edit it. Okay, and make sure you add 3389 after the colon at the end, so that it'll try to connect on that
Port you hit save and then connect then it will ask you to enter the password that was generated for you
so I'll just paste the password over here hit okay I'll say connect me to this all right so here it is you are
successfully logged in to your Windows VM that you created, and it comes with the Google Cloud SDK Shell pre-installed. Let's have a look at the resource hierarchy in
gcp suppose you are working for an XYZ bank which will be considered a gcp organization and is the root node of the hierarchy it can be further subdivided
into multiple suborganizations or line of businesses such as foreign exchange Capital markets Banking and so on this categorization in gcp can be done using
folders to provide an additional layer of isolation between suborganizations then these folders can be further subdivided into different
teams such as Equity derivatives and so on this categorization in gcp can be done using subfolders there could be many folders
and subfolders in the resource hierarchy and these folders can be further subdivided into environments such as testing
staging, production, and so on. This is achieved using what we call projects. A project organizes all your GCP resources together; for example, you keep resources of your test environment inside project A and the production environment in project B to manage those resources separately.
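To make the hierarchy concrete, here is a rough sketch of how a folder and a project inside it could be created from the CLI (the organization ID, folder ID, names, and project ID are hypothetical placeholders):

```bash
# Create a folder for a line of business under the organization
gcloud resource-manager folders create \
  --display-name="capital-markets" \
  --organization=123456789012

# Create a project for the test environment inside that folder
gcloud projects create test-env-project-a --folder=987654321098
```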
You cannot create a GCP resource without a project. You can apply an IAM role at the organization level, which is basically a set of permissions; it can also be applied at the folder level, subfolder level, project level, or in some cases at the resource level. However, it is applied in a top-down approach, which means a role applied at a parent node will be inherited by default by all the child nodes. You cannot delete a role or permission at the child level if it was inherited from the parent node, but you can always override it at the child level.
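For example, granting roles at different levels of the hierarchy might look like this (a sketch; the folder ID, project ID, and member address are hypothetical placeholders):

```bash
# Grant Compute Viewer at the folder level: inherited by every project under the folder
gcloud resource-manager folders add-iam-policy-binding 987654321098 \
  --member="user:dev@example.com" \
  --role="roles/compute.viewer"

# Grant a broader role on one project only, adding to (not removing) what was inherited
gcloud projects add-iam-policy-binding test-env-project-a \
  --member="user:dev@example.com" \
  --role="roles/compute.admin"
```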
Now let's head over to the Cloud console and see how we can create our first GCP project. All right, so I have logged into my Cloud console, and this is the dashboard that we have already seen in the previous videos. To create a new
project you could just click over here and it will show you different projects that are already present so name of these two projects
are the same because the project name is not unique, and it can be changed even after the project is created; it is just a human-readable name of the project and is not referenced by any of the GCP APIs. However, this ID was autogenerated for you and cannot be changed. To switch between different
projects you could just click over here and the project will be changed right to create a new project you click over here new
project you give this project a human readable name call it test project 101 and over here you provide the parent
resource of this project, such as a folder or an organization. Currently I don't have any organization created for me, and I cannot just create an organization for myself; if
I click on browse it will just show you this which says no organization there are some prerequisites for creating an organization if you go to this URL over
here, it will show you that you should be either a Google Workspace user or a Cloud Identity user in order to use the organization resource, and it will be
automatically created for you once you sign up for either of those services. I'm not doing either of those at the moment, which is why there is no organization for me. One more thing: if you see over here, it will show that you have 23 projects remaining in your quota; that means there is a soft limit on the number of projects that you can create. I've already created two projects, and it is showing me that I have 23 projects remaining in my quota, which means the soft limit is 25 by default. But you could always request an increase, or delete projects, by clicking over here on Manage quotas; it will open a case with GCP support and they will increase your limits if approved. All right,
so for now I'll just verify the details and hit create over here and in the notification bar it will show you that it is now creating the project
and once it is created it will show you as success you could just select the project from here right and then it'll be switched
for you or you could just do the traditional way click over here and just select the project that is how you would switch to a project now let us have a
look at how you would shut down the project. You shut down the project if you want to delete all the resources within it and basically stop incurring charges on those resources; then you would simply go ahead and delete or shut down the project. So you go over here on the right side, where it says Settings and
utilities on the three dots and go to Project settings once you do that it will ask you to verify these
details like the project name, the project ID which was generated for you, a unique number, and the project number which was also autogenerated
for you you could click over here shut down and it says to shut down project test project 101 type the project ID this so I'll just copy the project ID
from here I paste it over here there is a space at the beginning so I'll just remove it and uh it's also showing you
that uh owner of the project will be notified and can stop the deletion within 30 days so that means you will still have 30 days if you change your
mind, or if you want to reinstate access. The project will be scheduled to be deleted after 30 days; however, resources may be deleted much earlier. Even if you delete the project now, it will still take 30 days to completely delete it; after 30 days it cannot be recovered, but before that it can be. So I'll just click over here, Shut down, and it is saying the same thing that we just saw: it will be scheduled to be deleted on 29th June 2022; today it's 30th May, so we are good with that.
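The same shutdown, with its 30-day recovery window, can be driven from the CLI; a sketch in which the project ID is a made-up placeholder:

```bash
# Schedule the project for deletion (enters a ~30-day pending-delete state)
gcloud projects delete test-project-101-123456

# Changed your mind within the window? Restore it
gcloud projects undelete test-project-101-123456
```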
Now let's go to Cloud Shell over here and click Activate Cloud Shell. Right now, we can create multiple configurations to seamlessly move from
one project to another or to switch the context between multiple projects to view the current configurations enter this command
gcloud config configurations list, and hit Enter, and it will show you the default configuration that we have; this was created when the project was
created for you so this is the name and this is by default active configuration rest of the values are blank and these values can also be set by setting the
environment variables. To create another set of configurations, you would run this command: gcloud config configurations create, and then the name of the configuration, which is project-a. Hit Enter and it will create a configuration named project-a for you and activate it, which means you are now in
Project a in the cloud shell if you run the list command again now you will see two configuration over here first was the default one that we had another one
was the project a and right now project a is activated there is no project in it there is no default zone or region with it right but you could always set this
variable. So this is how you move seamlessly between multiple projects and multiple configurations: you could set your default zone, region, project and account, but make sure to activate the configuration after it's been created. By default it will activate it for you, but if you want to move back to a previous configuration, you would still need to use the activate command, which is this one over here: gcloud config configurations activate project-a.
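Putting those commands together, a typical switch between configurations could look like this (a sketch; the configuration name, project ID, region, and zone are just examples):

```bash
# See the existing configurations and which one is active
gcloud config configurations list

# Create and activate a new configuration for project A
gcloud config configurations create project-a

# Set its defaults
gcloud config set project test-env-project-a
gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-a

# Jump back to the original configuration later
gcloud config configurations activate default
```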
All these commands and all those details, I'll put the link in the description section below, so don't worry about it. All right, I have logged into my Google Cloud console over here and I already have a Linux VM running, so
I'll go to my compute engine from this side to VM instances and this is the VM that I have running it's called test instance Linux
it has an internal IP and external IP attached to it to make any changes to VM or to do any operations to this particular VM you just select the VM
using the checkbox right here and then click over here to the three dots which denotes more action and then you can do a lot of operations from here you can
just create a start and stop schedule to save the cost if you click over here it'll ask you different options like
name of the schedule description region start time stop time time zone initiate date if you leave it empty the schedule will take effect immediately else it'll
initiate on that particular date and the time that you specify similarly the end date and the frequency if you are more comfortable with the cron syntax then
you can just enable this option from here which says use cron expression and then click on submit once you are satisfied with all the changes
okay for now I'm just clicking on cancel now let's go back over here again and the other options that we see over
here are Delete, Reset, Stop, Suspend, Start and Resume. Resume and Start are not enabled because the instance is currently running and you cannot start an already running instance; it has to be stopped first. And the Suspend operation is not supported by e2-micro, which is why it is also grayed out. If you delete the VM, you will lose all the data that is stored on any non-persistent disk attached to this particular VM, or, if it has the Delete disk option enabled, the persistent data will also be deleted. So make sure you use this option very carefully and with proper planning. Let's go ahead and make some changes to the existing VM; for
that, you'll go over here where it says the name, click on the name, and then click Edit. There are some configurations that cannot be
changed once the instance is provisioned, for example the name of the VM, the zone in which it was provisioned, and so on; but there are some configurations that
can be changed such as the internal and external IP addresses attached to the VM so to change that you click over here where it says network interfaces default
you expand this by clicking over here and if you scroll down a bit you'll see two type of IP addresses attached to it one is a primary internal IP another
one is external IP there'll be two type of addresses internal or external so the reason why we are using two types of IP has its own purpose let's say we have a
cloud storage over here which would need access to this particular server let's call it CS that cloud storage can access the server using its internal IP address
that means that internal IP would be internal to the gcp services however if there is any external application that would try to connect This Server let's call it EA it will try to connect to
this particular instance it has to connect to its external IP because the internal IP is only internal to the gcp
services and it cannot be accessed from outside. So if you click over here, now there'll be two types of primary internal IPs, which say Ephemeral and Static. An
ephemeral IP address is an IP address that doesn't persist beyond the life of the resource for example when you create an instance without specifying the IP address Google Cloud automatically
assigns that resource an ephemeral IP address. In general, the ephemeral IP address is released if you stop or delete the resource, so once this server is stopped or terminated, the ephemeral IP address will change. But if you use a static IP address over here, it is persistent even if you stop or delete the instance, so it'll be assigned to your project until you explicitly release it. That's the main difference between ephemeral and static IP addresses. So let's keep it ephemeral for the primary
internal IP, and we want a static IP attached to the instance as an external IP. So I'll click over here and create a new IP address; this IP address would be static. Let's give it a name, static-vm-ip, and hit Reserve, and this particular IP has been created for you; then you click Save. Now if you scroll down, you will see a static IP attached to it, and the IP address is 34.83.66.53; just keep a note of it.
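The same reservation can be made from the CLI; a sketch (the address name and region follow this demo):

```bash
# Reserve a regional static external IP address
gcloud compute addresses create static-vm-ip --region=us-west1

# See reserved addresses and whether each one is in use
gcloud compute addresses list

# Release it when you no longer need it (idle static IPs can still incur charges)
gcloud compute addresses delete static-vm-ip --region=us-west1
```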
And if we go back, I'll select the instance and then try to stop it; let's stop the instance for now, hit Stop. All right, the instance is stopped now, so if you scroll over here you'll see that
that particular static IP is still attached to the instance in general case when you stop the instance that IP would be released to the pool but this is not
the case even if you terminate the instance that IP will still be there for you to use now let's go ahead and delete this particular
instance. So I have selected the instance already, and I click over here, say Delete, and Delete. Okay, it says the instance is
deleted and it's removed from here as well now if you want to see if you still have that external IP address attached to your project let's open this in a new
tab and you click over here three dots and then look for VPC Network and then IP
addresses and over here you see there is one external IP address which is named as static VM IP and this is the same IP address that we
assigned and it says in use by none that means this IP hasn't been attached to any VM and in the same way you can reserve more external static IP
addresses by clicking over here Reserve external static IP address at the top of the screen to release this particular address you select the resource and
click over here release static address and this address will be released to the pool but before doing that let me try to create a new VM and show you how you can
attach this particular IP to a new VM as well so I'll hit cancel for now I'll go back to my VM instances and I quickly
create one VM click over here create instance let's call it test VM I'll put that in the same Zone Us
West 1, and then the machine type as e2-micro, which is supported by the GCP free trial
usage then I'll go down a bit networking and then select this particular option
over here, scroll down to External IPv4 address; by default it is Ephemeral for both internal and external IPs. I click over here, and now this particular IP is visible in our drop-down, so I'll select it and hit Create. All right, so the VM is provisioned, and if we scroll a little to the right, you'll see the same external IP attached to that particular VM as
well so now I go back to IP addresses I have selected the IP address and hit release static address to delete the static IP hit
Delete. Okay, so the address is deleted. Let's go ahead and see how you can create a machine image to back up all the data stored as part of that particular VM, including metadata, permissions, configuration, and data from the multiple disks of that particular VM. So you go inside the
VM and then click over here create machine image or you can do that from this tab as well which says machine images so you could choose either of
those I'll click over here give this a name call it VM image 1 select the source VM instance for which the backup needs to be
taken; there is only one VM running, so I'll choose that. Set the location: whether you want the backup to be multi-regional for high availability and disaster recovery, or regional. For this particular demo I'll just select Regional and select the location from the drop-down, or just leave the default one. Select the encryption type, and have a look at the advanced configuration, which will have all the details of your VM image.
there are some things that would not be covered as part of the VM backup and those are local SSD data in memory data
the name and IP address of the VM, so please make sure you remember this point. Once you review everything, you hit Create, and the machine image will be created for you, and you can easily go ahead and create a new VM from that machine image.
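From the CLI, the equivalent steps are roughly these (a sketch; names and zone follow this demo and most optional flags are omitted):

```bash
# Create a machine image from the running VM
gcloud compute machine-images create vm-image-1 \
  --source-instance=test-vm \
  --source-instance-zone=us-west1-a

# Later, create a new VM from that machine image
gcloud compute instances create vm-test-2 \
  --zone=us-west1-a \
  --source-machine-image=vm-image-1
```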
So the machine image is now ready. I'll just open the image and click over here to create an instance from that image. Once you do that, it will still ask you to enter a name for the VM because, as we have discussed before, the
name cannot be used as it was there in the source VM so you can choose a different name than the test VM so we'll call it VM Test 2 verify other details
as well so you see the default region was selected as whatever it was there in the source VM and all the other configuration like E2 micro instance
that we have selected and if you verify everything is correct you can just go ahead and hit create the VM so this is one way to take the backup but this
particular method will cost you a lot of money because VM images are huge in size and you will be charged the per GB storage cost of the machine image for
that particular region. When you don't want to overspend on VM backups but still want to back up critical data for various purposes, you can use a disk snapshot, which is just the backup of that particular persistent storage attached to the VM. So let's go back to that VM: it doesn't have any additional disk attached to it, but it has a boot disk
attached to it, so we can create a backup of this particular disk using a snapshot. From the left side, if you scroll down a bit, you'll see Disks and Snapshots. You go to Disks, and you could either create a new disk from this particular disk or click over here on Actions and create a snapshot. Give the snapshot a name, let's call it snapshot-test, select the source disk for which the snapshot needs to be taken, select the location as regional or multi-regional, verify everything, and hit Create. Click over here, hit Refresh, and now you see the snapshot was successfully created. You go inside the snapshot, create a disk from this particular snapshot, and attach that disk to a VM instance, and then your VM will be cloned as per your needs.
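The CLI equivalents look roughly like this (a sketch; the disk name assumes the boot disk is named after the VM, and the schedule values mirror the defaults discussed next):

```bash
# Take a one-off snapshot of the VM's boot disk
gcloud compute disks snapshot test-vm \
  --zone=us-west1-a \
  --snapshot-names=snapshot-test

# Define a daily snapshot schedule with 14-day retention
gcloud compute resource-policies create snapshot-schedule daily-backup \
  --region=us-west1 \
  --daily-schedule \
  --start-time=04:00 \
  --max-retention-days=14

# Attach the schedule to the disk
gcloud compute disks add-resource-policies test-vm \
  --zone=us-west1-a \
  --resource-policies=daily-backup
```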
If you go back, you'll see another option which says Create snapshot schedule. To make sure you are regularly backing up your application and your VM instances for disaster recovery and maintenance purposes, you could also create a snapshot schedule from here. It will have all the other options,
like uh snapshot storage location schedule frequency start time you can also set a life cycle rule to Auto delete the snapshot after a certain
number of days; by default it is 14, but you could put any value over here. So if you put 30, it says: after you delete the disk that used this schedule, delete snapshots older than 30 days, and it'll delete the snapshots after 30 days. Basically that's it, and if you hit Create, the schedule will be created
for you so I've logged into my Google Cloud console and I'll go over here in the search bar and search for Marketplace hit over
here. Once you are inside the Marketplace, you will see a lot of packages that are there for you to use; these are production-grade packages, and once you
choose to deploy one of those you wouldn't have to worry about configuring the virtual machine storages or network settings you can optionally alter those
settings but you can also go with the default one so let's say I would like to install a lamp stack so I'll just search lamp from here and hit
enter and now over here as well you'll see there are 50 results just for the lamp stack so the first one over here is lamp stack click to deploy virtual
machine; it consists of Linux, Apache HTTPD, MySQL and PHP. Once you click over here on the package and scroll down a bit, you
will see the software versions inside that particular package; for example, it has Apache 2.4.38 and PHP 7.4.29, and so on. For the pricing, you scroll down a bit and you will see there is no usage fee for this particular package, but it will have the infrastructure cost for one shared vCPU and 2 GB of memory, which is $15.57, and there is a storage charge as well for 10 GB of SSD disk, which is 60 cents, so the estimated monthly total would be around $16.17. When it comes to providing support based on the SLAs, it says Google does not offer
support for this solution; however, community support is available on Stack Overflow. Please keep in mind that solutions deployed from the Marketplace are third-party software and are not supported by GCP support (Google also provides its own Marketplace solutions), so make sure you check the details of the Marketplace product before you start using it. For example, if you are using a Bitnami image, you should be contacting the Bitnami support team and not the GCP support team. To deploy a particular solution, you select the image; let's say we select this
one and hit over here to the launch after verifying all the details like the package content VM instance size and type or the
estimated monthly cost and the support provider right once you have verified everything hit over here launch and if you are using it for the
first time, it will again ask you to enable the APIs. Because we already have the Compute Engine API enabled, it will ask you to enable the rest of the APIs, such as the Deployment Manager and Runtime Configuration APIs, so I'll hit Enable. Okay, once they are enabled, it will redirect you to the setup screen, where
you could just name your deployment this is not your instance name but your deployment name let's call it test deployment select the Zone in which you want it to be
provisioned; I'll just keep it default for now. Then you select the machine type; by default this is E2 small, so let's
select E2 micro which is even a smaller one and you'll see the change in pricing structure over here on the right side then optionally you could install PHP my
admin, or you could just uncheck this box if you don't want it. It has a boot disk of type standard persistent disk, and this is of 10 GB; you can update the size from here as well. This is the default network interface attached to it, with /20 as the subnet mask, and you could optionally allow HTTP or HTTPS traffic from the internet, that means from source IP range 0.0.0.0/0, or you could just specify the
source IP range over here if you don't want it to be publicly available over the Internet these are the options for stack driver logging and monitoring
stack driver which is also known as Cloud monitoring and Cloud logging you could enable those and you could just accept the gcp marketplace
terms and hit Deploy. Once you deploy, you will see a screen something like this; it is using a Jinja template to provision the infrastructure for you, and over here you'll see there is one VM instance created, there is a password generated, and there are two
firewall rules created one which allows traffic on Port 80 another one on Port 443 so you see how easy it is to provision an infrastructure stack
without having to configure all the manual details. It says it has some warnings; hit over here, View details, and it says some of the features are in beta, so it's fine for now. I'll just go back, and it is created now: you'll see you have a GCE VM instance created, so you can go to the VM instance from here or from the GCE console itself.
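Since click-to-deploy solutions are backed by Deployment Manager, you can also inspect the stack from the CLI; a sketch using the deployment name from this demo:

```bash
# List all deployments in the project
gcloud deployment-manager deployments list

# Show the resources created by our Marketplace deployment
gcloud deployment-manager deployments describe test-deployment
```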
So I'll just click over here and hit Manage resource, or directly SSH into it. Let's click on Manage resource, and you'll see all the details over here.
right this is the VM name uh we gave the deployment name as test deployment that is why this VM name is test hyphen deployment hyphen VM which is
autogenerated and it was created in this particular region status is running okay so I'll just click over here and hit SSH
let's do SSH into the VM hit connect all right I'm inside the VM
let's do php -v and it says PHP 7.4.29 has already been installed here. I'll just clear the screen and you'll see other details as
well this is the site address this is the admin URL for PHP my admin if you click over here it will open a new tab for
you right so this is the phpMyAdmin page and here are the details of the MySQL user and the password, which is a temporary password. So if you go back over here again, even though you could just run an apt update or apt install to get an upgraded version, let's say apt install PHP 8, even though you can run this command, it is not supported as per the support agreement. If you do that your support agreement will be violated and you won't be able to get support from the required third-party seller or the first-party seller, which is GCP. So in order to avoid that, let's say you have to upgrade the PHP version from 7.4 to 8.0, then go to the Marketplace and look for a newer image with version 8 instead of just upgrading it from here by yourself; you have to redeploy the package with the required configuration. And if a supported package is not available in the Marketplace, then you could just create your own instance in GCE and configure it to run your own apps and services. Make sure you know all the limitations of it and use it if that is a good fit for your use case. So mostly it is intended to be used as an isolated, production-ready system, and if you need to upgrade the version of a software package you would have to redeploy the
package using a different Cloud Marketplace image so let's say you don't want to use the services anymore due to any particular reason you just hit over
here delete and then it will ask you two options whether you want to just delete the deployment but keep resources created by it or delete the deployment
as well as all the resources created by it so I'll just select the first version and hit delete all and all of your resources will be
deleted after this. Okay, it's been deleted and you don't have any deployment present right now. So what exactly are managed instance groups? An instance group is a collection of virtual machine instances that you can manage as a single entity; for example, mig1 is a managed instance group having five instances. To create a managed instance group you would first have to create an instance template, which would define the VM configuration like name, machine details, OS image, security, disk, firewall, networking and so on. Instance templates are designed to create instances with identical configuration which scale based on demand; this is also known as autoscaling. Let's say you set the autoscaling on CPU utilization at a 60% threshold: when CPU utilization for any of the VMs goes above 60% a new VM will be added to the managed instance group based on the instance template's configuration, and when the CPU utilization goes below 60% the newly added VM will be deleted from the instance group. Along with autoscaling it has some additional benefits as well, such as high availability: if a VM in the group crashes or is deleted or stopped accidentally, then the managed instance group automatically recreates the VM using the instance template. The next one is fault tolerance: you can choose a regional or multi-zone deployment that protects the application against zone failure, and your application will still be serving traffic from the remaining available zones with the help of load balancers. You can also set health checks on the VMs on a specific port that recreate the VM if a certain number of consecutive health checks fail. Now that we have seen what exactly a managed instance group is and what the benefits of using it are, let's go ahead to our Cloud
console and provision one managed instance group so I'm logged into my Google Cloud console and I'll search over here instance
groups the first result over here I'll click and then as it says on the screen create instance group I'll hit over this
icon, then I'll enter some basic details; let's give it a name, test instance group, and then create an instance template. If you have already created an instance template it will appear over here as a drop-down list, but we don't have anything, that is why it is asking us to create a new instance template. So I'll click over here and again I'll give it a name, test instance template. You could optionally add labels. So consider this as the image which has all the configuration required to provision an instance, so that when one of the instances in your managed instance group goes down, another one will be provisioned as an identical copy of the previous one, and this template will make sure that all the instances within a managed instance group will have identical configuration. I'll just select the
series as E2 and give the machine type as E2 micro I'll keep all the options as
default and allow HTTP and HTTPS traffic, and I'll hit save and continue, so my instance template is created. Now I have the option to provision this in a single zone or multiple zones for high availability and redundancy. For this demo purpose I'm going to create this in a single zone; if you select multiple zones from here it will give you an option to provision the instances in multiple zones. So let's say you have three zones and the target distribution shape is even, then it will provision three instances in three different zones for high availability and redundancy. But for this demo purpose let's select single zone, and our instances will be provisioned in us-central1-
a. Then there is an autoscaling mode to automatically add or remove instances to the group based on the metric threshold that we specify. Let's say we select this option, add or remove instances to the group, minimum number of instances as one, and let's put maximum number of instances as three. So whenever there is a CPU utilization greater than 60% it will scale out and add one more instance if the number of instances is less than three. Then we have the cooldown period; the cooldown period is the time it takes for the instance to initialize from boot until it is ready to serve traffic. Sometimes a virtual machine takes time to boot up and install all the prerequisite software and packages, and during that time, if the autoscaler detects that your instance is not up, it will try to spin up one more instance, which is not what we want. So let's say our VM takes 1 minute of time to provision, we'll put the cooldown period as 60 seconds so that only after this particular period will it try to add one more instance if it is not
available. Then we have scaling controls; scale-in controls will set some limits for scaling in the group. Then we have auto healing, which provides the ability for VMs to auto heal, so we could just set health checks based on a protocol. Let's go ahead and create a health check, let's call it test health check, on the HTTP port, so it will try to reach out to the server on port 80 based on the criteria that we have set. It will check every 5 seconds and it will time out in 5 seconds if it doesn't get a reply. Healthy threshold is two, that means two consecutive successes will mark it healthy, and two consecutive failures will mark it unhealthy. Whenever it finds that there are two consecutive failures it will report the health check status as failed and it will just recreate the VM to auto heal from the failure or the crash that it might have had. So I'll click over here, save. Okay, initial delay is 300 seconds; this is similar to the cooldown period, it's just that the cooldown period is for provisioning new instances as part of the autoscaling group, whereas this initial delay will wait for 300 seconds before performing the first health check, so you could just update this number based on your use case. Okay, and there'll be some advanced configuration over here; this
configuration is only available when we are not using autoscaling, but because we are using it, this is disabled. These were all the details that we have to enter. So there are three types of instance groups; we have selected the first one, which is the managed instance group for stateless applications, and this is ideal when you are serving stateless and batch-processing workloads. Then there are two others: the second one is the stateful managed instance group, for workloads that have persistent data or configuration such as databases or legacy applications, and then we have the unmanaged instance group, where you have to manually manage those VMs and load balance them. All right, so these were the three types and we have already entered all the details, so let's go over here and hit create. I'll pause the video for a few minutes over here and then I'll come back because it'll take some time to provision the VMs. Okay, so it says instances is one, that means it has provisioned one instance so far, because that was the minimum number of instances that it should have. Let's open this in a new tab and see what it has done so far
so the instance was created. If I click on that instance you will see the same configuration that we set as part of the instance template, you see E2 micro and
US Central 1A that's what we have selected and rest of the options were default except HTTP and https traffic which we enabled
explicitly if you hit refresh it says Auto scaling is on with Target CPU utilization as 60% now click over here if you want to see the details that we
have configured for this particular managed instance group you click over here details and over here you will see the auto scaling minimum number of instances
one, maximum is three; currently it has one instance and it will add more instances when the CPU utilization reaches 60%. It'll wait for 60 seconds before checking the CPU utilization. The auto healing health check was set with 300 seconds as the initial delay, so this is the time it'll wait as well, and then you can check monitoring and errors from here. It says current minimum, current maximum, and then we don't have much CPU utilization right now, that is why there isn't much data over here, and basically all the details you could see from here. Then if there are some autoscaling errors it
will report them over here. Once the servers are provisioned as part of the managed instance group you could go ahead and edit the managed instance group configuration, like you could change the minimum number of instances from one to two so that it will always have at least two instances running for high availability, and you could spread that across multiple zones. Instead of CPU utilization as a threshold you could specify scaling based on a schedule as well. Let's set the minimum number of instances to two and lower the initial delay to 50 seconds
okay and hit save now you'll see over here it says minimum as two and maximum as three and it'll try to add one more
instance. So I'll go back and hit refresh over here; now it says instances are two as part of this particular MIG. Now you'll see two instances, one is this one, another one is this one which is getting created. Even though there are two instances, the health check status is still showing as unhealthy; that is because we have not created the firewall rule which allows the health check probes to reach the VMs. We'll be seeing firewall rules later, probably in the networking section of this course, so for now just try to understand the concept behind it. So now our test instance group, which is a managed instance group, has two instances that were provisioned from an instance template. So the process is like this: you create the instance template which has
all the details like CPU machine type machine image and all those things and from that
instance template you create managed instance Group which will have certain number of VMS running so it will provision those
number of VMs for you. Let's say you have selected it as three, so it will create three VMs for you, and autoscaling will keep checking the CPU utilization on these servers. Whenever it finds the utilization is greater than whatever threshold you have set, let's say you set it to 60%, then it'll try to add more VMs to it if the maximum number (say four) has not been reached, and whenever the CPU utilization goes below 60% it will delete the added VMs from the managed instance group. So this is how a managed instance group works and this is how you would attain high availability, autoscaling and auto healing capabilities.
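If you prefer the command line, the same flow can be scripted with gcloud. This is a minimal sketch under assumed names (test-instance-template, test-instance-group, test-health-check) and an assumed Debian boot image; verify the flags against your project before running.

```
# Create the instance template (assumed names and image; adjust to your project)
gcloud compute instance-templates create test-instance-template \
  --machine-type=e2-micro \
  --image-family=debian-12 --image-project=debian-cloud \
  --tags=http-server,https-server

# Create an HTTP health check on port 80 (5s interval/timeout, thresholds of 2)
gcloud compute health-checks create http test-health-check \
  --port=80 --check-interval=5s --timeout=5s \
  --healthy-threshold=2 --unhealthy-threshold=2

# Create the zonal managed instance group from the template
gcloud compute instance-groups managed create test-instance-group \
  --zone=us-central1-a --template=test-instance-template --size=1 \
  --health-check=test-health-check --initial-delay=300

# Turn on autoscaling: 1-3 instances, scale on 60% CPU, 60s cool-down
gcloud compute instance-groups managed set-autoscaling test-instance-group \
  --zone=us-central1-a --min-num-replicas=1 --max-num-replicas=3 \
  --target-cpu-utilization=0.60 --cool-down-period=60
```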
So let's now go to Instance templates over here, and this is the instance template that we have created. Once you have created the instance template there is no option to edit it; you see, we cannot edit the instance template once it is created. We could edit the managed instance group to use a new instance template and just replace this one with a newer one, but we cannot edit it. Now, you could also create a new VM directly from this particular instance template; if you don't want that VM to be part of an instance group you could just click over here, provide basic details like name and description, and it'll auto-populate the rest of the details from the instance template, like E2 micro is the machine type that we selected, and it'll have HTTP and HTTPS enabled by default. The second option: you could create a similar instance template from this one
so it'll be like updating a newer version of this particular instance template you could do that as well right so this is a new instance
template with the name test-it-1; the earlier one was test-it, and you could just make your changes from here. Let's say instead of E2 micro I want to use E2 small, so I'll just click over here and hit create. This will create a new instance template for me, so I'll wait for it to be provisioned. Now I go back to my instance group, which is over here, select this particular group and click edit, and I can just update the instance template from test-it to test-it-1 and then save it. Okay, so you see it has both templates attached to it; that means the existing VMs will not be replaced automatically with the new template, but when new VMs are added to this MIG, only then will the new template be used. So if you don't want that and you want to instantly replace your VMs, let's go ahead over here and delete the instances, enter delete, and it says instances were deleted from the group. Now let's go back and it has provisioned two new instances, you see. Those two instances were deleted and these two instances just got provisioned, and the new instances are using test-it-1 as the template, so the earlier template was
removed. So let's go back to VM instances over here and you will see one instance is already running and the other one is still in progress, it's being provisioned and will soon be running. Now if you see this VM, let's go ahead inside the VM, and over here the instance template is the newer one, that means the machine type is E2 small. This is the change that we did as part of the template update; we didn't actually update the template, we created a new template and replaced the template in the managed instance group.
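As a side note, if you want the MIG to replace the running VMs with the new template automatically instead of deleting them by hand, gcloud has a rolling update action; a small sketch, assuming the same hypothetical names as above:

```
# Point the MIG at the new template and proactively replace existing VMs
gcloud compute instance-groups managed rolling-action start-update test-instance-group \
  --zone=us-central1-a \
  --version=template=test-it-1 \
  --max-unavailable=1
```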
So what exactly is a billing account? A billing account is a GCP cloud-level resource which tracks all of the costs, such as your charges and usage credits, incurred by your Google Cloud usage. A Cloud Billing account can be linked to one or more projects, and you can link a billing account from other organizations as well. Project usage is charged to the linked Cloud Billing account and it results in a single invoice per Cloud Billing account; you cannot use multiple currencies for different projects that are linked to a single billing account. A billing account also defines who pays for a given set of resources. Payment profiles are connected to Cloud Billing accounts and they store your various payment instruments such as credit cards, debit cards, bank accounts and other payment methods that you have used to buy through Google in the past. Payment profiles are managed outside of your Cloud organization, in the Google Payments Center; this is basically a central location where you can manage the ways you pay for all your Google products and services, such as Google Ads, Google Cloud and the Google Fi phone service. Please keep in mind one important point as well: if you have a project that is not linked to an active, valid Cloud Billing account, you will not be able to use the products and services enabled in your project. This is true even if your project only uses Google Cloud services that are free, so your project should be linked to an active billing account in order to use those services. Now let's have a look at some of the important billing roles that we
have in GCP. Two major roles are Billing Account Administrator and Billing Account User. If you're not sure what a role is, we will be discussing that in detail in the IAM section of this course, but for now just keep in mind that a role is nothing but a set of permissions which defines what level of access the user has. A user with the Billing Account Administrator role can create a billing export to BigQuery, can view costs and spend, set budgets and alerts, and can also link and unlink projects with the billing account. However, the Billing Account User role has limited permissions: they can link projects to a Cloud Billing account but they cannot unlink them. So this is the major difference between the Billing Account Administrator and the Billing Account User role. All right, now let's go ahead to the
console and see the billing account in action so I'm in my Google Cloud console over here and if you click over here to the drop down where it says projects so
I have four projects currently and this project and this project my first projects these two projects are attached to the billing account however project a
and Project B are not attached to the billing account so we'll see what is the difference between those and how we can attach these two projects as well to the
billing account so I'll just keep that as selected and hit open now in the search window I'll search for billing I'll go to the
billing account right it says this project has no billing account so I'll click over here link a billing account and it shows me one option my
billing account, and this is the default billing account that gets created when you create your Google Cloud account. So I'll select this one and set
account once you do that this particular billing account gets attached to the project right and all your project costs and credit will be accumulated in that
billing account so the billing account is here this is my billing account this is the unique ID of that billing account and from here you can manage the billing account so if I click over here to the
manage section now it shows all the projects that are linked to this billing account there are three projects my first project this one and
this one and project a that we have just added if you want to delete any project link to the billing account you can do that by clicking on
the three dots over here and choosing disable billing, and then you can close the billing account as well from here. Let's see the difference between the projects that have a billing account linked with them and the projects that don't have a billing account linked with them. This project, we just linked it, so that means we will be able to use its services; if you click on any of the services, let's say a Compute Engine instance, it will show you the dashboard and you can click on create instance and basically use those services. However, if you go back and select the project that is not linked to an active billing account, which is Project B, if I select that and go to a GCE service, I'll search for Compute Engine, and it says enable API. I'll enable the API and now it says Compute Engine requires a project with a billing account, and either enable billing or cancel. If I do cancel it won't let me enable the API to use the service. So the only way to use the
Google products and services is by enabling the API and by enabling billing; without a billing account we cannot use any of the services. So you can enable it from here as well, or from the billing account section like we enabled it for project a. Let's do this; now it will ask you the same thing, the billing account that you want to be linked with this project. I only have one billing account, the default one, so I'll select that and hit set account. Now let's go to billing from here, and if you click on manage billing account from here, you'll see all four projects are linked with this billing account.
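The same linking and API enablement can be done from Cloud Shell; a small sketch with placeholder IDs (PROJECT_ID and the 0X0X0X-... billing account ID are assumptions, substitute your own):

```
# List billing accounts visible to you and note the account ID
gcloud billing accounts list

# Link a project to a billing account (placeholder IDs)
gcloud billing projects link PROJECT_ID \
  --billing-account=0X0X0X-0X0X0X-0X0X0X

# Enable the Compute Engine API for that project
gcloud services enable compute.googleapis.com --project=PROJECT_ID
```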
Now let's go to the billing account again; I'll select the billing account and you have different options over here. The first one is Overview, where you can see your total cost incurred and your
forecasted cost for the next month based on the previous spend, then we have cost trend and other autogenerated dashboard widgets. Then we have the Reports section over here, in which you can basically create a report based on the filters that you select, like you can select the current month, last month or the last 90 days, and you can group it by project, service or hierarchy. You can select the projects that should be part of this report, or exclude any project or any service, and you can also add labels to it, and then the report will be generated for you. For example, in my report there are only two services used in the past 90 days, Cloud Logging and Compute Engine, and the cost was this much, but then it was part of the free trial so it was deducted from the promotional balance that I had and the subtotal was zero for me.
the same way we have cost table which will give you the cost report in another format like this is the project that has
the cost incurred with it so if I expand it it will show the services for which
the cost was incurred, so for the E2 instance in Americas it was this much, and this instance had this much cost
associated with it, and so on. So all the services I have used would be visible over here, including the persistent disk, and the total would
be presented over here as cost and this was the credit subtotal is here okay the same way you can generate report from
here as well from the right side there are filters then we have cost breakdown this is the breakdown of cost
among usage cost, promotions, subtotal and total, and then we have a few other fields. This one is again an important one: budgets and alerts. In order to avoid any surprises on your billing you can create budgets and alerts so that you will be notified when a charge exceeds a certain amount. To create a budget, hit over here, create budget, give it a name, let's call it test budget, for the range let's put monthly, select
all the projects select all the services or you can customize it hit next select the budget type specific
amount or the last month's spend. Let's say you want to be notified if the cost is greater than what it was in the last month, then you will get the budget alert, or you could just specify an amount; for me I'll just set $10 per month, hit next, and then it will generate the alerts for you based on the thresholds that you set over here. The first notification will be triggered when it reaches 50% of the budget utilization, which is $5, and it triggers on the actual cost, not on the forecasted cost; you can choose between those. The second notification will be triggered when the threshold passes 90%, which is $9, and the third notification will be triggered at 100%. You can customize these values as well, or you can add a new threshold value from here, and you can select the type of notification that you would want to receive, like the email notification which is selected for you by default, or you can connect it to a Pub/Sub topic as well; you can subscribe to that topic and push the alert wherever you want. You hit finish once you verify everything, so this budget and the alerts associated with it are set now. You can update them whenever and however you want; this is the limit over here, $10 for July, that we have set. Okay, so this is how you can set budgets and alerts.
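If you want to script this instead of clicking through the console, the gcloud billing budgets command group can create the same budget; a rough sketch, assuming a placeholder billing account ID and the $10/month figure from the demo (the Billing Budgets API must be enabled):

```
# Create a $10/month budget with alerts at 50%, 90% and 100% of actual spend
gcloud billing budgets create \
  --billing-account=0X0X0X-0X0X0X-0X0X0X \
  --display-name="test-budget" \
  --budget-amount=10USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0
```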
Another important thing is billing export: you can export your billing data to a BigQuery dataset. These are the steps for how to do that; I will be covering that part in the BigQuery section of this course, but for now just keep in mind that if you have to do any analysis on the billing data, you export it to BigQuery and perform the calculations over there. Identity and Access Management has three important aspects: authentication, authorization and accounting.
Authentication takes care of who is accessing your GCP resources, authorization takes care of what level of access the authenticated user has, and accounting tracks which resources the user has accessed. Let's talk about authentication: a user is referred to as
a principal and it could be a Google account or Cloud identity user a service account a Google
group, or a Google Workspace domain user. Let's talk about authorization now. Certain permissions can be granted to a principal with the help of IAM roles; a role is nothing but a collection of permissions that determine what operations are allowed on a resource, and when you grant a role to a principal you grant all the permissions that the role contains. So this is how authorization works. There are three types of IAM roles in GCP: basic or primitive roles, predefined roles and custom roles. Let's have a look at each of those. Basic roles are also referred to as primitive roles, and these are Owner, Editor, Viewer and Browser; they provide a really broad level of permissions and they are not recommended. In addition to the basic roles, IAM provides predefined roles that give granular access to specific Google Cloud resources and prevent unwanted access to other resources. For example, if you are using the compute.instanceAdmin role, it would have certain permissions such as compute.instances.list, compute.instances.get and compute.instances.delete, and when you are using this particular role all the permissions that come with it will be inherited; you cannot alter the behavior of this role, you cannot add or delete permissions from it, so you have to use it the way it is. These types of roles are created and maintained by Google, and Google automatically updates their permissions as necessary, such as when Google Cloud adds a new feature or service. If you need something even finer-grained you use custom roles; these roles are created to tailor permissions to the needs of the organization when predefined roles do not meet those needs. For example, if you want to grant only instance list and get permissions to a specific set of users, you create a role something like this, add those two permissions to it, and use the custom role. Custom roles cannot be applied at the folder level. Let's talk about
IAM policies now. A policy is nothing but a collection of role bindings and metadata. Each role binding can include the following fields: a principal, which we have already seen, could be a member or identity such as a user account, a service account, a Google group or a Workspace domain user. A role binding also has a role, which is a named collection of permissions that provide the ability to perform certain actions on Google Cloud resources. Then it could have a condition, which is an optional logic expression that controls when the access should be granted to the principal. Metadata contains fields such as etag and version details. The important part to remember here is that a policy applied at a parent level is inherited from top to bottom; for example, if you want to grant your principal access to all the projects in an organization, then you apply the policy at the organization level and all the projects inside that organization will inherit that policy. Let's see an example of an IAM policy. This policy is with a condition, which is also referred to as a conditional policy. Here we have created a role binding for a principal, which is this one over here, we have applied the role in this particular policy as Security Reviewer, and we have added the condition as this. As part of this condition, if we look at the expression over here, if the request time is less than 1st of July 2022, only then will this policy be granted; if this particular member tries to access the resource as a security reviewer after this date, then the access will be denied. And these are the metadata fields over here, which are etag and version.
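A conditional binding like the one in this example can also be granted from the command line; a minimal sketch with a placeholder project ID and member email (both assumptions):

```
# Grant Security Reviewer to a user, but only for requests before 1 July 2022
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:example-user@example.com" \
  --role="roles/iam.securityReviewer" \
  --condition='expression=request.time < timestamp("2022-07-01T00:00:00Z"),title=expires-july-2022'
```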
Now let's have a look at service accounts. Applications use service accounts to make authorized API calls. Service accounts do not have passwords, and that is why they cannot log in via browser or cookies. A service account can be attached to a Compute Engine VM so that an application running on that VM can authenticate as the service account. In addition, the service account can be granted IAM roles that let it access resources such as Cloud Storage. The service account is used as the identity of the application, and the service account's roles control which resources the application can access. A service account is identified by its email address, which is unique to the account.
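To make that concrete, here is a rough gcloud sketch of the pattern described above: create a service account, grant it a read-only Cloud Storage role, and attach it to a VM. The names (app-sa, PROJECT_ID, my-vm) are placeholders:

```
# Create the service account
gcloud iam service-accounts create app-sa --display-name="App service account"

# Let it read objects in Cloud Storage
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:app-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Attach it to a new VM; the app on the VM then authenticates as app-sa
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --service-account=app-sa@PROJECT_ID.iam.gserviceaccount.com \
  --scopes=cloud-platform
```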
Let's have a look at the IAM best practices. While implementing identity and access management you should always apply the principle of least privilege, which restricts your users or applications from doing more than they are supposed to do. You should avoid applying roles directly to users; instead you should use IAM groups, assign the roles to those groups and add users to those groups, instead of assigning roles to each user separately. To give someone temporary access to the project you should create a temporary account in Cloud Identity. You should use audit logging, which helps you answer certain questions such as who did what, when and where. Basic or primitive roles are not recommended to be used. So I have logged into my Google Cloud
console. From the left navigation menu you search for IAM & Admin, which is over here, and I'll click on that. There are a few options available over here; the first one is IAM, from where you could set your permissions, roles and everything. You can check all the principals; so these are the principals, and this one over here, the first one, is the default Compute Engine service account which got created when I created my first Compute Engine instance. Then we have the Google APIs service agent service account, and this one is my user account. So there are these types of principals over here, and if I click on roles, these are the basic roles that we see. Currently I am the owner of this project, so I'm by default assigned the Owner role, and the two service accounts that we saw are part of the Editor role. So these were the basic roles. Then the second
option is identity and organization currently I don't have any organization setup so this won't be visible but over here you could see the organizational
hierarchy. Okay, then you could use the Policy Troubleshooter to see if a certain user has access to certain resources; you provide their principal email over here, the resource and the permission, and when you check the API call it will provide you the details of whether that particular principal has access or not. Then we have organization policies, so these are the default organization policies. Then in service accounts you would see your Compute Engine default service account, and over here there are the roles; these are all the predefined roles, and custom roles will also appear over here. So if you want to create a new role you click on create role, you give it a name, let's say custom role test, a unique ID will be generated for that, let's call it custom-role-test, you select in which launch stage you would want to make this role available, let's just keep it default, Alpha, for now, then you assign
permissions to it. So these are, you see, 5,971 permissions, all the permissions that a principal could have, and you can filter these permissions by predefined roles. You select it, then select any role; let's search for Compute Engine, so I'll select this one, the Compute Admin role. If you select that and click okay, all the permissions that are there as part of the Compute Admin role would be presented to you and you could select any of those. There are some permissions which say supported or testing; that means these are in the alpha stage right now and they are not ready to be used for production systems, and you won't be getting any GCP support if anything breaks on these, so you should use them cautiously. So let's say I use address create, address get and address list, just for the example. I selected these and hit add, so these three permissions are assigned, and then hit create. When you do that, a role would be created. Now you see the other roles had a different icon because those were predefined roles, but this role is a custom role, and custom roles are denoted by this particular icon; if you want to see, let me zoom in a little, so you see the icon is a little different than what we had before.
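The same custom role can be created with gcloud; a small sketch assuming a placeholder project ID and the three compute address permissions picked in the demo:

```
# Create a project-level custom role with three compute address permissions
gcloud iam roles create customRoleTest \
  --project=PROJECT_ID \
  --title="custom role test" \
  --stage=ALPHA \
  --permissions=compute.addresses.create,compute.addresses.get,compute.addresses.list
```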
So let's quickly have a look at what exactly Cloud Storage is. It is a service for storing your objects in Google Cloud. And what are objects? Well, an object is an immutable piece of data consisting of a file of any format, typically unstructured data or binary large objects such as audio, videos, images, archives, etc. These types of unstructured data are typically stored in buckets
and then the data can be accessed by different users or gcp service using different authentication methods such as
IAM roles or signed URLs, or these objects can also be public, so we'll have a look at all those in
detail. So Cloud Storage is suitable for many use cases, such as storing and accessing unstructured data like audio and video files, binary large objects and images; it can also be used to serve static websites and video streaming data, and to store your backups and archives for disaster recovery or for regulatory and compliance purposes. Logs and other reporting data can also be stored in Cloud Storage. So let's have a look at another important concept, storage classes. So
whenever you create a bucket you specify the type of storage class that bucket belongs to and all the objects inside
that bucket will inherit that storage class unless it is specified otherwise so it defines the object's availability
and pricing model; it also defines how fast the requested data can be retrieved from the bucket. There are four types of storage classes that GCP supports: Standard, Nearline, Coldline and Archive. Let's see the differences between all those four. First is data access: the Standard storage class is used to store frequently accessed data; Nearline storage is used to store data that is infrequently accessed, typically once a month; Coldline storage stores data that is also infrequently accessed but even less frequently than Nearline, such as once a quarter; and, as the name suggests, the Archive storage class stores data which is accessed rarely after it's been created, typically once a year. So there
is a minimum storage duration that you should be aware of before storing your data in any of these storage classes. Standard has no minimum duration because it is frequently accessed data and is typically stored for a brief duration; you store data for a minimum of 30 days in Nearline storage, a minimum of 90 days for Coldline storage, and for Archive storage you store it for at least 365 days. Let's see the cost associated with each of these. Because Standard is used to store frequently accessed data, when it comes to storing the objects Standard has the highest storage cost, Nearline has a lower cost than Standard but higher than Coldline, Coldline has a lower cost than Nearline but higher than Archive, and Archive has the lowest storage cost (the colder the class, the higher the data retrieval cost instead). All four of the storage classes support dual or multi-region; when you create a bucket and specify a storage class you could select whether you want it to be regional, dual-regional or multi-regional. Each of those has a different cost associated with it: obviously multi-region would be the highest, dual-region has a lower cost than multi-region, and the regional cost is the lowest among all those. So you choose that strategy as per your use case and the criticality of your data. Then let's have
a look at the use cases that they are ideally suitable for standard should be used for use cases such as serving web content or
streaming videos that requires the application or the user to frequently access the data then near line for reports such as
auditing or profit and loss, so those reports that are usually accessed once a month. Coldline storage is typically used to store backups, snapshots and logs that are accessed like once a quarter, and in Archive we store data for auditing, legal and compliance purposes, or if it needs to be used as part of the disaster recovery process; this type of data is typically used once a year.
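Before the console walkthrough, here is what the equivalent bucket creation looks like with the gcloud storage CLI; a sketch only, and the bucket name and region are placeholders (bucket names must be globally unique):

```
# Create a regional Standard-class bucket with public access prevention
gcloud storage buckets create gs://tutorialswithpiyush-101 \
  --location=us-west1 \
  --default-storage-class=STANDARD \
  --uniform-bucket-level-access \
  --public-access-prevention

# Turn on object versioning for the bucket
gcloud storage buckets update gs://tutorialswithpiyush-101 --versioning
```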
All right, so I have logged into my Google Cloud console; I'll just go to the navigation menu and search for Cloud Storage, which is over here. I'll click on the service and then hit create bucket. First we have to provide this
a unique name please pick a globally unique name and it will be a permanent name that means you cannot change the name once you have created it so I'll
give this a unique name, let's call it tutorialswithpiyush-101. I'll hit continue, then it'll ask me the location type, whether it should be multi-regional, dual-regional or regional; each of these has a different cost structure. If you see, the multi-region option is $0.026 per GB per month because it'll be hosting your bucket in multiple regions in the US, and if you choose the regional one and select a region, let's say I'll select this one which is nearest to me, it will cost you $0.023 per GB per month. So there is a difference between multi-regional, dual-region and regional costs, and I'll just select the
regional so that I could use the minimum possible cost, and I'll hit continue. Then it'll ask me to choose between different storage classes: if it is frequently accessed data that is used for short-term storage then I'll use Standard, else Nearline or Coldline; Archive storage we use for storing long-term archives that are typically accessed once a year. Okay, so I'll just keep it default for now, which is Standard. Then there is an option which says enforce public access prevention on this bucket; if I check this box that means this bucket is now private and would not be available
publicly then you specify the access control whether you just want it to be accessed by the Bucket Level permissions or you want to specify access to the
individual objects using the object level permissions so I'll just use the default but make sure you understand the difference between those two the first
one would be at the Bucket Level the second one would be at the object level the next is how to protect the object data by default the protection is none
you select object versioning when you want multiple versions of your data to be stored. Let's say you upload a file and then you update it; if object versioning is not enabled the previous version will be replaced by the newer version, but if we select object versioning both versions will be kept and you can use the previous version whenever you want to roll back the changes. It will ask you a few different things, like the maximum number of versions per object, so let's say we want to keep at least two versions of the object, and expire noncurrent versions after 7 days, so it will keep two versions for at least 7 days and then it will delete the previous version. Then there is one option which says retention policy, so we can prevent the bucket's
object to be deleted or modified before a specified number of days so you can enable this option as well and you can choose the data
encryption whether you want to use the Google manage encryption key or if you have your own key you can use that as well so after verifying all the details
after entering all the details and after verifying the price I'll just hit on create the bucket the bucket is created this is the bucket
name if you want to check go back and you see one bucket has been created just now which is a regional bucket created in this particular region so I'll just
go inside the bucket then there are different options first let's go ahead and upload some files to this
bucket I'll upload this image okay so the image is uploaded over here and these are the details of that particular image or the object you can
download it by clicking over here you can edit the metadata which is the content type right or the custom metadata you
can add then you can also delete the object from here this is the preview of that object if the preview is
supported. The object is not public because we have created the bucket as private, and that is why this object
inherited the policy and if we go back now we see different options as well first is configuration so configuration is your buckets
configuration that we have created right so all the options that we have selected while creating the bucket you can change few of the options few of
them are autogenerated and few of them you cannot change such as the URL of the bucket you cannot change because it was autogenerated with the predefined format and you cannot change the region of the
bucket or the name of the bucket once it is created then if you go to the permissions it says it is not public it is a private
bucket and we haven't created any ACL for that particular object then we have protection we have enabled the bucket versioning and that is why it is
showing as enabled. And if we go to the lifecycle tab, this is another important concept: lifecycle management consists of a set of rules that you apply to a bucket's objects when certain conditions are met. For example, you want to delete objects after 30 days to save cost and you know that you won't be needing these objects after 30 days, so you could just set a lifecycle rule instead of manually going and deleting each and every object after the 30-day period has passed. So let's see how we can add a rule. It already has two rules, if we see, let me collapse this: because we have enabled object versioning, it created the rule which says it should keep two newer versions, and it will delete the previous version when 7 days have passed after the version became noncurrent. These two rules were created as part of that. To create a new rule you use this option, add a rule. There are different actions that you can select; actions are of three types: you can change the storage class, from Standard to Nearline, from Standard to Coldline, or from Standard to Archive; or you can delete the object; or you can delete a multipart upload. A multipart upload is something that was uploaded in multiple parts but hasn't finished yet, so it will not delete finished multipart uploads but only the unfinished ones. An important thing to remember over here is that you can only change the storage class going in the colder direction, that means from Standard to Nearline, or from Nearline to Coldline, or from Coldline to Archive, but you cannot go in the backward direction; you cannot change the storage class from Archive to Standard or from Archive to Coldline, and so on. Okay, so this is how it should
be. So once you select an action, let's say I want to change the class to Nearline, then you specify the object condition, which means when the rule will be triggered. Let's say you want to change the storage class after 30 days, so I'll click over here on the age condition and specify the number of days as 30. Now the action that we specified was to change the storage class from Standard to Nearline and the condition that we have set is age greater than 30 days. When I click continue and create, it now says the action is set to Nearline and the object condition is 30+ days since the object was updated. So this is how you can create a rule, and it will be automatically applied to all the objects in this particular bucket.
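The same lifecycle rule can be managed as a small JSON file and applied with gsutil; a sketch only, using the placeholder bucket name from earlier:

```
# lifecycle.json: move objects older than 30 days from Standard to Nearline
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF

# Apply the policy to the bucket
gsutil lifecycle set lifecycle.json gs://tutorialswithpiyush-101
```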
Now let's see how you can access the bucket using the command line. I'll just open Cloud Shell from here; let's say I want to download
an object, so first I'll clear the screen, and now I need the object's gs:// URL. I'll just go to the object and copy the URL that starts with gs://, this is the gsutil URL generated for it, so I'll just copy the URL and type the command gsutil cp, then the object location which we have copied, and the directory in which you want to download the file, so I'll choose the current directory. Once I hit enter it will prompt you to authorize or reject the call, so I'll just authorize it, and once I do that it'll just download the file; it says copying this particular file to this location, which is my current directory. So if I do ls over here I will see the file has been downloaded, this one. The same way we can upload a file from here to the bucket; let's see how we can do that. I'll again use the same command, gsutil cp test.txt, and I'll enter the gs:// URL; I just need the bucket URL until here, copy it. I missed the cp after gsutil, so it is gsutil cp, then the source file name and location, and then the destination location. I'll hit enter and it uploaded the file, it says operation completed. So if I go back to my bucket I should see the test.txt file. To
delete the object in the bucket you select the object hit delete then delete
it. So it has deleted the object; because there was only one version of the file, it deleted the file completely even though versioning was enabled. If we had multiple versions then it would have kept the noncurrent versions. Okay, so if I go back to my bucket now I don't have anything. To delete the bucket I'll just select the bucket and hit delete; it'll prompt me to type delete just to make sure that it is not an accidental deletion of the bucket, because once the bucket is deleted it cannot be recovered, so exercise caution over here, and hit delete. Let's see a few of the Cloud Storage best practices as recommended by Google Cloud. You should always apply the principle of least privilege when it comes to authentication and authorization; that means restricting your users or applications from doing more than they are supposed to. You should not make objects public unless it is required; if the objects are used as part of a video streaming or content serving service then they can have public read-only access. You should use object lifecycle rules to manage cost. Bucket versioning should be enabled
for critical data so that it can be recovered in case of a disaster or any accidental deletion. You should use signed URLs to share bucket objects with external or unauthenticated users; you can also set the expiry of the URL to strengthen the access control even further. And you use parallel composite uploads to upload larger files using gsutil.
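Two of those best practices map to one-liners in gsutil; a sketch under assumed names (the service-account key file and the bucket and object names are placeholders):

```
# Generate a signed URL that expires after 10 minutes, signed with a
# service-account key file (placeholder path)
gsutil signurl -d 10m ./sa-key.json gs://tutorialswithpiyush-101/test.txt

# Upload a large file using parallel composite uploads (split above ~150 MB)
gsutil -o GSUtil:parallel_composite_upload_threshold=150M \
  cp ./large-backup.tar gs://tutorialswithpiyush-101/
```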
Block storage is a type of storage which emulates the behavior of a physical hard drive. These data disks are stored in blocks and are attached to compute in Google Cloud; it means they are attached to Compute Engine or Kubernetes Engine, which is GKE. Then we have the concept of a persistent disk, because block storage can be persistent or non-persistent in nature. A persistent disk is high-performance block storage that uses a solid-state drive (SSD) or a hard disk drive (HDD), and you can attach multiple persistent disks to Compute Engine or GKE simultaneously. The other difference between persistent and non-persistent disks is that data stored in a persistent disk can be retrieved even after the instance is stopped, rebooted or crashed, the data will still be there; however, a non-persistent disk doesn't come with these capabilities, and once the instance is stopped the non-persistent disk attached to the instance loses all its data. Compute Engine offers several types of storage options for your instance based on
your performance and pricing requirements. We have five different options to select from. The first one is the standard persistent disk; these are backed by standard hard disk drives (HDD) and are lower in performance, and the rest are backed by SSD, which is a solid-state drive and higher in performance. Because the standard persistent disk is backed by HDD it is recommended for workloads that require standard throughput. The balanced persistent disk provides a balance of cost and performance and offers the lowest cost per GB among the SSD-backed options; the performance (SSD) persistent disk, or pd-ssd, provides the lowest cost per IOPS. The recommended workloads for the extreme persistent disk are SAP HANA, Oracle, these types of applications in which uncompromising performance is needed. Local SSDs are basically used as a hot cache for databases and real-time analytics purposes. So the standard persistent disk is the most cost-effective solution and the recommended use cases are big data and big compute workloads; the balanced persistent disk is an ideal choice for running your standard enterprise applications; performance persistent disks are recommended for performance-sensitive workloads; the extreme persistent disk guarantees the highest performance among all those persistent disks; and local SSDs guarantee the lowest latency for your application. The first four storage options that we have are backed by persistent disks: they can be used as a bootable disk, you can install an operating system on these types of disks, and they provide the capabilities of zonal and regional replication. However, local SSDs are ephemeral storage, that is, they are not persistent in nature, they cannot be used as a bootable disk, and they do not provide zonal or regional replication capabilities.
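For reference, creating and attaching a persistent disk of a chosen type is a two-command job in gcloud; a sketch with placeholder names (data-disk, my-vm):

```
# Create a 100 GB balanced persistent disk in a zone
gcloud compute disks create data-disk \
  --zone=us-central1-a --size=100GB --type=pd-balanced

# Attach it to an existing VM; the disk is kept if the VM is deleted,
# unless you enable auto-delete for it on the instance
gcloud compute instances attach-disk my-vm \
  --zone=us-central1-a --disk=data-disk
```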
A VPC network is a logically isolated network on Google Cloud Platform. You can think of a VPC the same way you would think of a physical network, except that it is virtualized within Google Cloud. A VPC is contained in a project, which again is a part of Google Cloud Platform. Each VPC network consists of one or more IP ranges called subnets or sub-networks; for instance, us-west1 could have one subnet, which is subnet 1, and us-east1 could have two subnets, subnet 2 and subnet 3. Subnets are regional resources and have IP addresses associated with them; for example, over here if you see, subnet 1 has this IP range and subnets 2 and 3 have their own IP ranges. Please make sure you select the IP ranges carefully. If you would like to know more about how to calculate IPs in a subnet range and what a subnet mask is, then feel free to check out the video that I published on that; I will put the link in the description section as
well as in the title bar. Also you can use this online tool, I'll put the link again in the description section, and there is a utility called subnet calculator along with it, so you can just put in your IP range over here, let's say our subnet 1's IP range which is 192.168.1.0, and we have selected the subnet mask as /24. So here it is, you can select any subnet mask; again, how you can calculate this manually I have explained in the video that I was talking about. A /24 will have 256 IP addresses; you can select any other range as well, and when you enter it just hit calculate, and it will populate the IP range for you, the first address being 192.168.1.0 and the last one being 192.168.1.255. Even though it says it has 256 IP addresses, there will be a few IP addresses in this range, as in every CIDR range, that are reserved by Google Cloud, so please make sure you understand that as well. Okay, so this is the way we can check our IP range. All right, so we have selected the IP ranges of all three subnets. If later on the IPs get exhausted, because you see each of these ranges that we have selected would have 256 IP addresses, let's say at some point in the future you need more IP addresses in a particular subnet or in one or more subnets, you could just edit this subnet or any subnet and extend the range; let's say currently it has a /24 mask, you could just edit it to have more IP addresses, you can use a /23 subnet mask or anything smaller than that, and it will give you more IP addresses. Just keep in mind you can only extend the IP range, you cannot delete it and you cannot shrink it. Now when you provision a compute instance you select the zone in which it must be
provisioned, and the region that you select for a resource determines the subnets it can use. So in this example, when you provision two GCE instances in zone us-west1-a, it determines that the region is us-west1, and this us-west1 region has only one subnet with this IP range, so these two instances will have IPs in this range; let's say it could be 192.168.1.5 for this instance and 192.168.1.12 for this instance. So this is how the IPs will be selected for these two. Similarly, when you provision instances in us-east1 they will be part of subnet 2 and subnet
3, and they have IPs from the CIDR range that they are part of. If you are confused about the difference between zone and region, just keep in mind that a zone is nothing but a collection of data centers in a geographical location, and a region is a collection of multiple zones that are located miles apart. But why do we need regions and zones separately? We deploy a workload in multiple zones and regions to attain high availability and fault tolerance. Let's say we have deployed our workload in this region as well as this region; let's say due to some natural disaster or anything this complete region goes down, then your application will still be available through this other region. So this is how we have attained high availability and fault tolerance by deploying our workload in multiple
regions so after you create a network you can create firewall rules to allow or deny traffic between resources in the network such as communication between VM
instances. You also use firewall rules to control what traffic leaves or enters the VPC network to and from the internet. Firewall rules are defined at the VPC network level, but you can associate them with one or more instances. There are two types of rules you can create, an inbound rule or an outbound rule, which are also referred to as ingress or egress rules. Inbound, as the name suggests, restricts the inbound traffic on a certain port from a certain source or from the internet, and in the same way outbound restricts the outbound traffic on a certain port to a certain destination. By default there are two implied firewall rules created, one which allows all outgoing connections and another that denies all incoming connections. These two rules have the lowest priority of 65535, so you can easily override them by creating a rule with a lower priority number (which means higher precedence), let's say 65400; that rule will be evaluated first and then these will be evaluated. So let's say this implied rule allows outgoing connections with this particular priority and you want to deny that; you could just create a deny rule with the lower priority number, and that rule will be evaluated first and the connection will be denied, the implied rule would not even be evaluated. So you can override these rules by creating rules with a lower priority number, but you cannot edit or delete them, as these are implied rules.
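As an example of overriding the implied rules, and to fix the unhealthy MIG health checks from earlier, here is a sketch of creating ingress allow rules with gcloud; the network and rule names are placeholders, and 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check probe ranges:

```
# Allow HTTP from anywhere into instances tagged http-server
gcloud compute firewall-rules create allow-http \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=tcp:80 --source-ranges=0.0.0.0/0 \
  --target-tags=http-server --priority=1000

# Allow Google Cloud health-check probes to reach port 80
gcloud compute firewall-rules create allow-health-checks \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=tcp:80 --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --priority=1000
```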
By default, instances in the same VPC can talk to each other with their private IPs; all these instances can talk to each other using their private IPs, they don't have to use their public IPs because they are within the same network, like every instance has connectivity with each other. But you can also create a firewall rule to deny the connection, let's say between the instances of subnet 2 and subnet 3. Now there is another important thing, which is VPC routing. VPC routing defines the network path that packets take to travel from source to destination: for any incoming traffic, what is its destination, what would be the next hop, how it will traverse from one location to another, and how the outgoing traffic would go from one location to another; all of this is defined with the help of VPC routing and entries are created in the route table for the same. We will have a look at this routing in the next video, where we'll be discussing VPC peering. Let's head over to the demo and see everything in action. All right, so let's head over to the Google Cloud
console and search for VPC networks over here click on that and you will see there is a default VPC already created for you when you have provisioned your
project, and you will also see there are subnets created in each of the available regions over here. so this is what a default VPC does: it by default creates subnets in all the regions that are available, and they have unique IP ranges. you could use this VPC as well, but there is a caveat to it: if you have an on-prem server, or if you are planning on connecting your on-prem server in the future with this particular project, or if you have other VPCs or will be creating multiple VPCs, then there is a chance that these IP ranges could overlap with your other IP ranges. so the default VPC and default subnet ranges are good for learning and POC purposes, but it is not recommended to run your production
workload so this is the default click over here create VPC Network once you do that you could just
give it a name let's call it my VPC description this is not a mandatory field so you could just leave it blank as well and then if you scroll down it
says subnet creation mode so there are two type of creation mode with VPC custom or automatic if you select the automatic mode then again it will do the
same thing it will create subnets in each of the available regions which is what we don't want and you can choose the custom mode as well so with custom
mode you could add your own subnet there is no subnet created for you so you could just click on new subnet give it a name let's call it
subnet-1, and select the region; I'll select us-west1 here. then there is the IP stack type: a subnet works in both modes, single stack or dual stack. with single stack it will only have an IPv4 address range, and with dual stack it will have an IPv4 and an IPv6 range as well. for now I'll just select single stack and provide the range we have already decided on, 192.168.1.0/24, which gives us 256 IPs; we could additionally create a secondary IP range as well. done, so our first subnet is there. let's create the second subnet, subnet-2; this time I'll again select the region us-west1 and give the IP range as 10.1.0.0/24. done. and I'll create my third subnet as well, region again us-west1. we can create more than one subnet in a particular region; one is already created by default if we are using the automatic mode, but because we are using the custom mode there were no subnets created automatically. so 10.2.0.0/24, all right, I'll click on done. so we have specified three subnets to be created; the same setup could also be scripted with gcloud, as sketched below.
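for reference, here is a hedged gcloud sketch of the same custom-mode VPC and subnets we just clicked through (names are illustrative and follow the demo):

```
# create the custom-mode VPC (no automatically created subnets)
gcloud compute networks create my-vpc --subnet-mode=custom

# create the three subnets in us-west1 with the ranges used above
gcloud compute networks subnets create subnet-1 \
    --network=my-vpc --region=us-west1 --range=192.168.1.0/24
gcloud compute networks subnets create subnet-2 \
    --network=my-vpc --region=us-west1 --range=10.1.0.0/24
gcloud compute networks subnets create subnet-3 \
    --network=my-vpc --region=us-west1 --range=10.2.0.0/24
```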
now, firewall rules: there are some default IPv4 firewall rules offered to you. the first one is an ingress rule which allows incoming connections from all the IP ranges that we have locally; these are all the private IP ranges that we have set, which means all the VMs within our VPC will allow incoming connections between each other on all the ports. again the priority is the lowest, and you can override this priority if you want to deny any access. then the next ones allow ICMP, RDP and SSH from the internet, which means anyone over the internet could ping your instance, anyone could RDP to your instance if they have the keys, and anyone could SSH to your instance if they have the login credentials. and then we have a deny-all rule: all other ports are denied other than the ones that have been specified. you see its rule priority is the lowest, so first the rules with higher priority (lower number) will be evaluated, and then this rule will be evaluated, denying access on the ports that have not been specified over here. and then there is an egress allow rule, which means instances within your VPC can connect to the internet. then I'll just
leave these fields as default and hit on create now this will create a VPC it will take a couple of minutes if you go
down your VPC is getting provisioned so I'll just pause the video till it's getting created all right my VPC has been
created now so you see this is the VPC and three subnets have been created with the IP ranges that I have specified if we go to my
VPC, it has all the details. a VPC comes with a few more components: it will have static internal IPs if we have reserved them, and if not, you can reserve them from here. this is just like an Elastic IP in AWS; we have static external IPs as well in gcp. so you could just reserve a static internal or external IP and associate it with any of your instances; a small gcloud sketch of that is shown below.
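as a hedged sketch (names and addresses are made up for illustration), reserving static addresses from the command line looks roughly like this:

```
# reserve a regional static external IP (similar in spirit to an AWS Elastic IP)
gcloud compute addresses create my-static-ip --region=us-west1

# reserve a specific static internal IP from one of our subnets
gcloud compute addresses create my-internal-ip \
    --region=us-west1 --subnet=subnet-1 --addresses=192.168.1.10
```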
then we have routes over here: there is a default local route to each subnet so that the instances can communicate with each other, so there'll be three routes for that, and there is one route to the internet so that all the instances within our VPC can communicate with the internet. we will look at all these details in the later sections when we cover VPC network peering. then we have firewall rules over here; because we did not select any of the default rules, nothing is shown, and only the implicit rules will be
applied, which are not visible. so we could just add a rule from here: add firewall rule. okay, let's give it a name, allow-ssh, and the network is my VPC network, that is fine. this is an ingress type rule and the action should be allow. then you specify your target tags; let's give it a tag called dev-server, so this rule will be applied to all the servers that have a tag called dev-server. I'll show you how it will match it. then it has IPv4 ranges, so for source IPv4 ranges let's open it to everything, and allow it on a particular protocol, TCP port 22, so that I should be able to SSH to it. and that's it; just verify everything and hit create. all right, so now you would see the firewall rule created over here. if you expand this, this is the allow-ssh rule, which is an ingress rule as I've mentioned before. the equivalent gcloud command is sketched below.
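for reference, a rough gcloud equivalent of the rule we just clicked through (same name, tag and VPC as the demo):

```
# allow SSH (tcp:22) from anywhere to instances tagged dev-server in my-vpc
gcloud compute firewall-rules create allow-ssh \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=dev-server
```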
firewall rules are applied at the VPC network level, but you can associate them with one or more instances. so now let's go to Compute Engine and provision a GCE instance; hit on create instance. if you are not familiar with gcp, I have also created a video for that, so feel free to check that out; I'll again put the link in the description section as well as in the title bar. so let's call it test-vm-1, I'll use the default zone and a small machine type, something like e2-micro, and then I'll go down. the rest of the sections I'll just keep default for now; the main one is this: advanced options, and in that, networking. okay, here you mention the network tag, so I'll just mention dev-server. then you select the network
interface which is the default one attached to it so you select the network my VPC this is the custom VPC that we have created and then you select the
subnetwork. now, you see, this is what I wanted to show you: it says no subnetwork in this region, so you don't have anything, it is grayed out and you cannot select it. this is because we have not created any subnet in the us-central1 region. so, if you remember, I've told you that if we select the zone it will determine the region based on that, and based on that it will determine which subnet the instance should be part of; but because we don't have anything in this particular zone or region, that is why it is giving us the error. so let's just select the region as us-west1 (Oregon) and the zone as us-west1-a. okay, and now if you scroll down you see it auto-populated the subnet-1 details. okay, so that's it, you click on done and hit create. all right, our VM has been provisioned; a rough gcloud equivalent is sketched below.
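for reference, a hedged gcloud sketch of the instance we just created (zone, machine type, tag, VPC and subnet as selected above):

```
# create the test VM in the custom VPC, tagged so the allow-ssh rule applies to it
gcloud compute instances create test-vm-1 \
    --zone=us-west1-a \
    --machine-type=e2-micro \
    --network=my-vpc \
    --subnet=subnet-1 \
    --tags=dev-server
```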
and if you see the internal IP, it is within the range that we have selected for that particular subnet. so let's click on the VM and SSH into it; it is transferring the SSH key to the VM and establishing the connection. right, I am in. let's clear the screen and do a ping on google.com. I'm getting the reply from google.com; this is because there is a default route to the internet gateway, and this is what we wanted to test. Google Cloud VPC network peering allows
internal IP address connectivity between two VPC networks. those VPC networks could be from the same project or organization or from different projects or organizations; there are no restrictions on that. traffic remains inside the Google backbone network and doesn't traverse the public internet. there are some advantages of using network peering over external IP addresses or a VPN to connect networks. the first one is network latency: connectivity that uses internal IP addresses provides lower latency than external IP addresses, as there are no hops or devices in between the peering connection. your service doesn't need to be exposed to the public internet and deal with its associated risks, so it provides greater network security for your workloads. and by using internal IPs you are saving egress communication cost, so yes, it is a cheaper option than an external IP connection. now let's see how peering works. consider an organization which needs VPC network peering to be established between network A in project DR and network B in project prod. for VPC network peering to be successfully established, the administrators of network A and network B must separately configure the peering association, from DR to prod as well as from prod to DR. vpca has one subnet with CIDR range 10.0.0.0/16; similarly, vpcb has one subnet with CIDR range 10.8.0.0/16. make sure that the participating VPCs in a network peering do not have overlapping CIDR ranges, otherwise there will be a conflict and the connection will not be established.
each subnet has a Google Compute Engine instance which you would want to connect via their private IP addresses; let's call them vm-dr-1 for vpca and vm-prod-1 for vpcb. from vpca you create a connection and enter the project and network you would want to peer with, in this case project prod and vpcb; you import and export custom routes and create the peering connection. this connection will be created from DR to prod, and at this point the peering state remains inactive as there is no matching configuration in network B. a network admin or a user with the appropriate IAM permission in project prod must configure the matching configuration to project DR in order for the peering connection to be active on both ends. you follow the same steps from VPC network B to create the connection from prod to DR, and as soon as the peering moves to an active state, subnet routes and custom routes are exchanged and both VMs will be able to talk to each other
via their private IP addresses. let's see another example over here: VPC A and vpcb are already peered, and we created a new network vpcc and want to peer it with vpcb. now VPC peering cannot be established in this case as well; you see there is a CIDR range overlap between VPC A and C, as both have the 10.0.1.0 range. even though we are trying to peer vpcc with B, vpca is also part of this active connection, so that is why you cannot create the connection between these two either. let's have a look at transitive peering now. we have three VPC networks, vpca, vpcb and vpcc. if vpca is peered with vpcb and there is an active connection between those from both ends, and if vpcb is also peered with vpcc and there is an active connection between both of those, that doesn't mean that VPC A and C are automatically peered; there is no connection between those two VPCs, and this is what we call transitive peering. transitive peering is not supported in Google Cloud Platform. let's have a look at some important points with respect to what we
have seen so far: subnet CIDR range overlapping is not allowed if you wish to peer the networks; transitive peering is also not supported; you can have a maximum of 25 peering connections to a single VPC network, which means one VPC can only be peered with 25 other VPCs and not more than that, this is the limitation; to delete a VPC network you should first delete all the peering configurations in the network and then you can delete the VPC network, otherwise it will not allow you to do that; and you would need the compute.networkAdmin role or the editor role to create or delete a VPC peering. VPC network peering only works with Compute Engine, GKE and App Engine flexible environment; other services are not supported because these are the only services that you provision inside a VPC. all right, to follow the steps that we
have seen so far I have logged into my Google Cloud console and I will be creating two separate vpcs in two of the projects that we have so first I'm in my
project DR, I'll search for VPC network and click on create VPC network. I'll give this a name, vpca, then I will create a subnet, call it subnet-a, and then I'll select the region as us-east1. now I'll specify the IP range of this particular subnet, so let's give it 10.0.0.0/16; it'll have 65,000-something
IPS and done then I'll select two default firewall rules one is icmp another one is SSH so that I could SSH into my instance that I'll be
provisioning in this VPC, and I could ping the VM from the other VM. okay, and then keep everything as default,
hit create now I'll open a new tab and create the VPC in the other project that
we have, which is project prod. so click over here in the drop down and select the other project. okay, now search for the VPC network again and create a new VPC network, give it a name, vpcb, and create the subnet in custom mode. the subnet name is subnet-b, and for the region, earlier we chose us-east1, this time we will be using us-west1. over here specify the IP range, 10.8.0.0/16. please make sure that the IP range is not overlapping with the other range that we have in vpca; that's why earlier we chose 10.0.0.0/16 and now we are using 10.8.0.0/16, so that they don't have any overlapping
IPS okay done the same way we will be using icmp and SSH default firewall
rules keep everything as default hit create and if you go back to our vpca and scroll down our VPC has been
provisioned. now I'll create an instance in this particular VPC, so search for compute engine, create instance, give it a name, vm1-dr, and select the region as us-east1, because that is where our subnet has been provisioned, right, us-east1. I'll choose a smaller machine size, e2-micro, and scroll down to the networking
section over here advance options expand this and hit networking scroll down a bit and you
will see your default network interface so you again click the drop- down and change the network from default to
vpca. okay, and the instance will be created in the subnet that we have created. keep the rest of the things as default, hit done and create. now let's follow the same steps in vpcb, which is in our next tab. so let's quickly verify vpcb has been created, it's there, so search for compute engine, create instance, give it a name, vm2-prod, and specify the region as us-west1, because in this region our subnet was provisioned. change the machine type to e2-micro, scroll down to the networking section and change the default network to vpcb, and it auto-populates the subnet for you. done, and create the instance. now both the instances in two separate projects and two separate VPCs have been
created. the first one is active; let's log in to this. the internal IP is this one, and this is what we're going to use for our VPC peering, but first let's see how the instances see each other without the peering. so I'll just SSH into this VM, clear the screen, and let's grab the private IP from the other instance. this VM is also ready, so I'll just grab the private IP from here, which is 10.8.0.2, and copy it. I'll go back to my SSH window, which is over here, and I'll try to ping the prod instance from the DR instance; this is the private IP, hit enter, and I'm not getting any reply even though ICMP has been enabled. that's because there is no connectivity between those two VMs, as both were provisioned in two separate projects and two separate VPCs. so if you see, 100% packet loss, there is no connectivity. now let's
go back to our project DR and create the VPC peering connection. for that you go to VPC network and go to VPC network peering over here on the left side. now, create a connection; it says you will need the following info: the project ID if you're connecting to another project, and the name of the VPC network that you want to peer with. I'll show you how you can do that. hit connect, give it a name, let's call it peering-dr-to-prod, and select the source VPC network, which is
vpca and the peered VPC network is in another project so choose this one in another project and specify the project ID this is not the project name but the
project ID so I'll go to the drop down again and I'll grab the project ID from here this is the project ID of our prod project right so I'll copy this paste it
over here and the VPC network name our VPC network name was vpcb I'll import custom routes as well
as import subnet routes, and export them as well. hit create, and maybe I'll hit refresh. okay, so the peering connection from DR to prod has been created, but this is just one side of the connection; you still need to create the other connection from prod to DR, and that is why it is showing as inactive currently. so I'll go to my project prod and search for VPC
networks, and I'll create a new VPC network peering. so you hit create connection, continue, give it a name, peering-prod-to-dr, select your source VPC network, which is vpcb, and select the destination VPC peering network, which is in another project. I'll grab the project ID from here; I need the project ID of project DR, and the VPC network name of the destination VPC is vpca. I'll select all these options to import and export custom routes and subnet routes with public IP, and hit create. so now the connection is showing as active. the same pair of peering connections could also be created with gcloud, as sketched below.
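as a hedged sketch, the same two-sided peering could be set up from the command line roughly like this (project IDs are placeholders you'd replace with your own):

```
# in the DR project: peer vpca with vpcb in the prod project
gcloud compute networks peerings create peering-dr-to-prod \
    --project=DR_PROJECT_ID \
    --network=vpca \
    --peer-project=PROD_PROJECT_ID \
    --peer-network=vpcb \
    --import-custom-routes --export-custom-routes

# in the prod project: the matching configuration back to DR
gcloud compute networks peerings create peering-prod-to-dr \
    --project=PROD_PROJECT_ID \
    --network=vpcb \
    --peer-project=DR_PROJECT_ID \
    --peer-network=vpca \
    --import-custom-routes --export-custom-routes
```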
and if you go to the other peering connection that we have, from dr-to-prod, and hit refresh, this connection is also active. so once the connection is established between the DR and prod VPCs, the routes will be exchanged and created automatically. if you go to routes and just filter this by network vpca, you see three routes have been created: one is the default local route to the subnetwork, with the destination of your subnet; then there is a default route to the internet so that this instance can talk to the internet; and another one is the route to the peering connection, which is in our prod project, and the destination range is 10.8.0.0. the same way, the routes will be created in this VPC as well: go to routes and filter by the network name, our network name was vpcb, and the same three routes will be created over here. now let's go back to our GCE compute instance and SSH to the terminal. okay, let's ping the instance again, and now we are getting a reply from there; that means we have successfully peered our VPC connection from DR to prod, and this will work the other way around, from prod to DR, as well. let's have a look at another important concept,
which is the shared VPC network. so when you use shared VPC, you designate a project as a host project, let's name it test env, and attach one or more service projects to it, so uat env and performance test env, these two are the service projects. a shared VPC admin for the organization has created a host project and attached the two service projects to it, and there is shared VPC connectivity between these projects; eligible resources from a service project can use subnets in the shared VPC network. the service project uat env can be configured to access all or some of the subnets in the shared VPC, and the admin has created an instance in the us-west1 region, and this instance receives its internal IP address 10.0.1.5 from the subnet 10.0.1.0, subnet A. the same way, the admin has created another instance in the us-east1 region, and that instance receives its internal IP 10.0.4.8 from the CIDR 10.0.4.0/24. shared VPC allows an organization to connect resources from multiple projects to a common VPC network so that they can communicate with each other securely and efficiently using the internal IPs of that network. however, shared VPC only works
within the same organization; it won't allow you to connect projects from multiple organizations, which was possible with VPC peering. so these two instances can now talk to each other via their private IPs, because those two private IPs have been created in the shared VPC network and there is already connectivity between them. this is how a shared VPC network works, and you could have multiple host projects in an organization: this is a test network like what we have seen so far, and you could have another network, which is a prod network, and share it with your other service projects. however, each service project can only be attached to a single host network, so this DR env is attached to this host project over here and cannot also be attached to the test network; this is the limitation. setting up shared VPC is typically done by an org-level admin; a rough gcloud sketch is below.
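as a hedged sketch (project IDs are placeholders and this requires the Shared VPC Admin role at the organization level), enabling shared VPC from the command line looks roughly like this:

```
# designate the host project and attach a service project to it
gcloud compute shared-vpc enable HOST_PROJECT_ID
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project=HOST_PROJECT_ID
```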
not everyone wants to maintain the infrastructure, set up networking, manage application scaling and handle other operational and administrative tasks such as server patching and upgrades; they just want to focus on writing code. but how would you ignore these important factors and just focus on your code development? well, App Engine to the rescue. you might be wondering what exactly is App Engine; well, it's a fully managed serverless platform as a
service for developing and hosting your web applications at scale you can choose from several popular languages libraries and Frameworks to develop your apps and
let app engine takes care of provisioning servers and scaling your applications based on the demand you don't have to worry about managing the
infrastructure and you can just focus on writing your code it is well suited for a microservice based architecture now let's understand what
exactly is a platform as a service. PaaS, or platform as a service, is a cloud service offering in which Google Cloud or any cloud service provider basically takes care of your infrastructure, such as the runtime, middleware, operating system, virtualization, servers, storage and networking, all these aspects, and you can just focus on your application and data. well, there are two types of App
Engine environment: one is standard, another one is flexible. let's have a look at the difference between each of those. well, in a standard deployment your application runs in a sandbox, in its own secure, reliable environment that is independent of the hardware, operating system or the physical location of the server. in flexible, your application instances run within Docker containers on Compute Engine virtual machines. standard comes with preconfigured runtime environments of supported versions of only these languages: Python, Java, PHP, Node.js, Ruby and Go; only these languages and a few particular versions of them are supported, and if you want to use any other programming language then flexible would be a better choice for you, where you can just use your own custom runtime or source code written in any other programming language. well, there is a free daily quota in the App Engine standard environment and it is a pay-per-use service; in this environment you can scale the instances down to zero to save some cost. however, in flexible, the pricing is based on usage of compute resources such as CPU, memory and disk, and instances cannot be scaled down to zero; the minimum you can bring it down to is one. in standard you don't have SSH capabilities; if you want to SSH to the underlying virtual machine then again flexible would be a better choice for
you. standard is mostly suitable for applications that experience sudden and extreme traffic spikes where immediate scaling is the requirement; however, if you have an application that receives consistent traffic or experiences regular traffic fluctuations, then flexible would be a better choice for you. there is limited third-party binary installation for standard: you cannot install third-party binaries other than the versions and languages provided to you that we have already seen, while in flexible there is no such limitation and you can install any third-party binary. there are three scaling types supported by standard App Engine: manual, basic and automatic; however, with flexible we have manual and automatic. all right, so this is our sample app.yaml file that is used as the deployment configuration file for any App Engine deployment. over here you have specified your runtime, so I have used
nodejs16; you could use any other supported version if it is a standard deployment, and if it is a flexible deployment then you can use basically any version or any other programming language as well. here the instance class is F2; the default is F1 if you don't specify the instance class. F1, F2, F4 and basically all the F classes support automatic scaling, however all the instance classes that start with B, like B1, B2, B4, B8, support manual and basic scaling; automatic scaling is not supported in the B classes, it's only supported in the F classes. here you can specify your environment variables, this would be the
default bucket where your application engine code will be uploaded and downloaded during the version deployment and this is the Handler
section where you have specified like if the URL is this like look for this particular directory and download all
the static files from there this Handler is basically redirecting all your HTTP traffic to
https with the response code 301. this is the error handler section, where all default errors will be redirected to this page, and an error code such as over quota will be redirected to that page. so this is a simple structure of an app.yaml configuration file; a minimal sketch along those lines is shown below.
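as a hedged sketch of the fields just described (bucket name, handler paths and error pages are illustrative, not the exact file from the video):

```
# write a minimal app.yaml for a Node.js standard deployment
cat > app.yaml <<'EOF'
runtime: nodejs16
instance_class: F2
env_variables:
  BUCKET_NAME: "my-app-staging-bucket"   # hypothetical default bucket
handlers:
  - url: /static
    static_dir: static                    # serve static files from static/
  - url: /.*
    secure: always                        # redirect http traffic to https
    redirect_http_response_code: 301
    script: auto
error_handlers:
  - file: default_error.html              # default error page
  - error_code: over_quota
    file: over_quota.html                 # page for over-quota errors
EOF
```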
now let's have a look at the different scaling types in App Engine. we have three, as we have just seen: basic, automatic and manual. you specify basic scaling as this block in your app.yaml file, where max instances would be the maximum number of instances that your version will be scaled to;
in automatic scaling, you basically specify the metrics according to which you would want your application to be scaled; you specify min and max instances and a few other optional parameters as well. in manual scaling you only specify the instances; for example, over here we specified instances as five, so this
would be the initial number of instances that will be provisioned, and it will be scaled accordingly. now we have two more parameters to be used in the app config file: max idle instances and min idle instances. max idle instances is the maximum number of idle instances that are kept warmed up to be used when the application needs scaling (this is to enable rapid scaling as the load increases on the server), and min idle instances is the minimum number of instances that are kept warmed up for when the application needs scaling. these two are again optional parameters that can only be configured if you have enabled the warmup inbound service, so you need to put that block in your app.yaml in order to use the above two parameters. a combined sketch of these scaling blocks is shown below.
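here is a hedged sketch of those blocks side by side (values are made up for illustration; a real service would use only one scaling type):

```
# illustrative scaling configuration for app.yaml
cat > app.yaml <<'EOF'
runtime: nodejs16

# warmup requests must be enabled to use the idle-instance parameters
inbound_services:
  - warmup

automatic_scaling:
  min_instances: 1
  max_instances: 10
  min_idle_instances: 1
  max_idle_instances: 3

# alternatively, basic scaling:
# basic_scaling:
#   max_instances: 5
#   idle_timeout: 10m

# or manual scaling:
# manual_scaling:
#   instances: 5
EOF
```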
let's have a look at an example now. suppose you have application A in your project A; you can only have one application per project. then you could have multiple services as part of that application, let's say service 1 and service 2, and each service can have multiple versions. whenever you deploy a service, or redeploy it with a new option, or update anything in it, a new version will be created, and you could choose to split traffic among the previous version and the new version, or just migrate the traffic; you could also roll back from one version to another, and we'll see that in the demo section. each version can have multiple instances serving the traffic: version 1 is running on instance 1, however version 2 of service 2 is running on instances 1 and 2. let's say a scaling event happens and the application needs auto scaling: new instances will be provisioned and the load will be distributed accordingly. this is to make sure that you have high availability and fault tolerance whenever
there is a spike in the traffic once everything is done follow these steps to perform the resource cleanup please keep in mind you cannot delete an app engine once it is created
but there are different steps you could take to stop including charges first go to application engine disable the application then you delete the default
app engine service account then you delete the app engine storage buckets all right so let's log to your Google Cloud console and search for app
engine. if you don't have anything created yet it will look like this, then you hit over here, create application. the first step is to select a region; the region choice is permanent, you cannot change it after this. let me just use the default region, which is us-central, and hit next. now it will ask you to choose the language and the environment type. we have these many options to choose the language from: Python, Node.js, Java, Go, PHP, .NET, Ruby, and if you select any other language than these, then the environment option will be changed to flexible, because only flexible has the capability to use any language other than the provided ones. so I'll just use the Node.js one in the standard environment, and the next step is
to create the App Engine deployment from your Google Cloud SDK or from the Cloud Shell. so you can just download the SDK on your local system, or you can use the Cloud Shell; I'll just activate the Cloud Shell, where the gcloud SDK is already installed, and I basically have to run this command, gcloud app
deploy but before that I need to have my repository ready right so I have downloaded the source code over here
nodejs sample docs if you want there is this GitHub repo which has some code samples that you could use so I'll just
go inside this directory app engine and there are different examples I'll just use the hello
world example, okay, and the standard deployment. in this I have an app.js file, which will be serving as my web page, and there is this app.yaml that we have already seen the syntax of; the app.yaml has the configuration that is used by your App Engine deployment, and currently it only has the runtime, but you could also add the other options that we saw. okay, this app.js has a hello world page that will print Namaste Google Cloud and it will listen on port 8080. so to deploy the app there was this command, gcloud app deploy, and you just need to run that command over here: gcloud app
deploy once you do that it will prompt you to authorize the request authorize and it will show you the
different details like your project name Target URL version service account so app engine default service account will be
used hit continue yes now it will upload these files to a Google Cloud Storage bucket all right it has deployed the
application okay and it'll be listening to this particular default service you could either run this command to get the URL or just open this
in a new tab paste it over here and it says Namaste Google Cloud so our app has been deployed successfully now few things over here
let's go back to our Cloud console now if you refresh this page you will see a default service created over here and one version listening to
it. so if you go to versions, this is the version that is currently enabled and serving 100% of your traffic. let's say you want to deploy a new version of the application. okay, let me make some change over here in app.js and change the welcome message to Namaste Google Cloud 2.0. I'll save the file and let's redeploy again, gcloud app deploy, yes. okay, it'll again upload the files to
a cloud storage bucket for the newer version all right so a new version will be created
now let's go ahead and refresh this page okay Namaste Google Cloud 2.0 our version has been deployed now now if we go back over here and hit Refresh on the
versions page, you will see two versions, and this newest version is now serving 100% of the traffic. but you can always roll back to the previous version: you select the version that you want the traffic to be rolled back to and you hit migrate traffic over here. once you hit migrate, your traffic will be routed to the previous version, so this will become 100 and this will become zero. let's wait for a few seconds while it's doing that, hit refresh, and now you see the traffic is serving the previous version again. let's open this in a new window, and now you see Namaste Google
Cloud which is our version one now you could also split the traffic in percentage between those two versions
to do that, select the versions and hit split traffic over here. select the 'split traffic by' field: by IP address, by cookie, or randomly. let's use random, and let's say this particular version will receive 50% of the traffic and the other version will receive the remaining traffic; so you select 50% to this version and the rest, 50%, to the other version. let's hit save. okay, now it will split the traffic randomly between those two versions,
version one and version two okay it says split successfully save let's go to versions now you see
50/50% let's open a new incognito window paste it over here so this was our version one hit refresh Namaste
Google Cloud 2.0 that is a newer version this is the older version so you hit refresh and it will keep on changing the redirection of
traffic from one version to another see this is version one hit multiple times this is version two
okay, this is what we wanted to test out. so this is one way of splitting the traffic; another way is to split the traffic with the gcloud command itself. I'll put a link in the description section on how you could split the traffic during the deployment itself, which is also known as a canary deployment; a rough gcloud sketch is below.
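as a hedged sketch (version IDs are placeholders for whatever gcloud app versions list shows in your project):

```
# deploy a new version without shifting traffic to it
gcloud app deploy --no-promote

# then split traffic 50/50 between two versions of the default service
gcloud app services set-traffic default \
    --splits=VERSION_1=0.5,VERSION_2=0.5 \
    --split-by=random

# or migrate all traffic to a single version (the rollback we did in the console)
gcloud app versions migrate VERSION_1 --service=default
```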
right, the next thing: we have seen this is the dashboard, where you will see a summary of the
request and responses to this particular service your billing status your current load how many requests were there and stats based on that and here will be
your services currently we have two version of the default service then your versions details over here instances right there is only one
instance right now for this version, and for the other version there are two instances; this is how the traffic is getting split between those two. then we have firewall rules, I guess this is a basic one. okay, once the App Engine application is created you cannot delete
it, you can only disable it, and let's see how to do that. you go to settings over here and disable the application: enter the app ID over here, let me just copy it and remove the space, and hit disable. this will just disable the application so that it will stop incurring serving cost, and if you look at the versions now, these are in the stopped state. but you will still be charged for the storage that you have used, so go to Cloud Storage and just delete the buckets that were created by App Engine: the default bucket, the staging bucket and the one serving the app's files. delete all of these. then go to IAM and delete the App Engine default service account, which is this one; remove
access like most of the gcp resources Google kubernetes engine can also be created using all those different methods like from the cloud shell using
gcloud commands, or using the API calls, or from the cloud console itself. as part of this demo I'm going to create it from the cloud console, so let's search for GKE, Kubernetes Engine, this is what we need. if you are using this service for the first time it will ask you to enable the GKE API as well as the Compute Engine API, so those need to be enabled before you start using it; because I've already used it before, it's not showing me that. now I can hit create to create my first Kubernetes cluster, and it is showing me two options to create the cluster: one is Autopilot, another one is standard. in Autopilot GKE, Google takes care of all your node provisioning and administrative tasks, you don't have to worry about anything and you pay per resource that you use; but in a standard Kubernetes cluster you pay per node, you take care of all your node provisioning, and you set up autoscaling based on your workload. I'll just select the standard cluster so that we can see all the other options that are available within GKE, and hit configure. now the first thing is it's asking me to provide a name, let's call it test-cluster,
then we select how we want this GKE cluster to be hosted, whether zonal or regional. if we select zonal, we need to specify in which zone we need it, but if we want our cluster to be highly available and fault tolerant we use the regional option, where we select the region and it will deploy your Kubernetes cluster nodes across multiple zones in that region; your control plane will also be hosted in multiple zones. but if you select zonal, your control plane will be hosted in one zone, and if the underlying node is down, your cluster will be down, so choose this option wisely. then you select your control plane version: whether you want to provide a specific version or you select a release channel. there are different release channels, like the rapid channel, regular channel and stable channel; you select the release type and select the version based on that. now you
could just uh hit create over here to create your cluster with all the defaults settings or you can customize all these over here so let's go to default pool over here the first option
is the node pool. a node pool is nothing but a collection of different nodes, or in this case GCE virtual machines, grouped together in a pool. for example, if you want to have your nodes created on a Linux Ubuntu based system and all of them should have, let's say, 32 gigs of memory and 12 vCPUs, then you create a node pool with that configuration, and all the nodes that get created in that node pool will have the same type of resource configuration. so our first node pool is the default one, and here is the size: the number of nodes it will have is three. the cluster autoscaler adds nodes whenever your workload requires more nodes, and it will delete nodes if the resources are underutilized; so it will take care of all your auto-provisioning of nodes, and you can specify minimum and maximum numbers of nodes as well. select this option for now, and you can select your node locations as well; for example, I have just used a zonal cluster, that is why there is only one location selected, but you can change that behavior and then it
will become a regional cluster. then it says automatically upgrade nodes to the next available version, to keep your nodes up to date with the cluster control plane version; so whenever there is an upgrade to the control plane, your nodes will also be upgraded to the same version, and when we are using a release channel these options are automatically enabled. this also takes care of auto repair: whenever there is an issue with a node, it crashes or its underlying host changes, it will auto-repair that node for you. then there is surge upgrade: let's say there is a maintenance activity such as a cluster or node upgrade, then it will add additional nodes to the cluster so that your workload will not be impacted, and the number of nodes it will add is one because we have specified max surge as one and max unavailable as zero, which means there is at least one node running all the time to serve the traffic. next is nodes; this is the configuration of your default node pool: image type, machine type, machine family and boot disk type, all those things that are part of your GCE configuration. if you go to networking, you can specify the maximum number of pods you want per node; the default is 110 for this particular version that we have selected, but you can change this behavior as well. then if you go to security, it says use the Compute Engine default service account, because all the underlying nodes are nothing but GCE VMs, so it will use the GCE VM default service account, and you can specify your access type. here is the metadata: if you have to add any taints to the nodes or any labels that you would like to specify, you can do that as well. you can enable the maintenance window; a maintenance window
will be the time frame or the timeline in which your scheduled maintenance activities such as cluster or node upgrade will be performed you can enable
it and uh you know provide a timeline like do it weekly at this time from this time to this time or if you do not select this then upgrades will be
performed anytime it deems suitable. then you can enable vertical pod autoscaling; vertical pod autoscaling is nothing but automatically resizing your pods based on their CPU requests and memory requests. and then we have an option, enable node auto-provisioning, which means it will create or delete node pools as the workload demands. then this is the networking,
how the cluster will be created: in which subnet, and whether it will be a private cluster or a public cluster. if we choose a private cluster then only internal IP addresses will be assigned to the pods and the nodes, and they will not be accessible from outside; this is the better choice if you are using it for your production or your organization's workload. you can enable control plane authorized networks as well, meaning only my organization's CIDR range would have access to this cluster, so you can specify your CIDR range there too; for this demo I'm just using a public cluster. you can enable workload identity or Google Groups for RBAC, all those things. then this is the metadata field, like we saw in the node pool, but this is cluster-level metadata. and under features, you can enable Cloud Logging and Monitoring; by default they are enabled, so let me just disable them. if enabled, this will let you have some insights into the cluster: all the logs and metrics will be captured and you can query them from the Logs Explorer or Metrics Explorer. there are a lot more options, some of them in beta, which means they are not released for the production recommended setup, but you can still use them. once you verify everything and everything looks great, you hit on create. all right, as we can see, our cluster is ready now; an equivalent cluster could also be created from the command line, roughly as sketched below.
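as a hedged sketch (the zone, machine type and autoscaling bounds here are illustrative defaults, not the exact values chosen in the console):

```
# a rough gcloud equivalent of the standard zonal cluster created above
gcloud container clusters create test-cluster \
    --zone=us-central1-c \
    --num-nodes=3 \
    --machine-type=e2-medium \
    --enable-autoscaling --min-nodes=1 --max-nodes=5 \
    --release-channel=regular
```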
it has three nodes; total vCPU is six, total memory is 12 GB. it says low resource requests because I haven't provisioned anything on this cluster yet. so I'll go inside my cluster by clicking over here, nodes: it has one default node pool and these are the three nodes that are part of this default node pool. if you would like to see any of these nodes, you click on the node and it has these pods running; these are just the control plane workloads that were provisioned with the cluster, plus all the details such as resource requests. because we disabled logging, it is not enabled, but you can enable it and it'll be visible over here, and this is the yaml for this particular node, it has all the details. to get into the cluster, what you can do is go back to the cluster, this is the test cluster, and over here you see a button which is connect; click on that, and basically this is the command that you would need to run to get inside the cluster: gcloud container clusters get-credentials, and then the name of the cluster, the zone and the project name.
either you could copy it or run it directly in the cloud shell from here so let's do that okay I'm inside my cloud shell and
this command was pasted for me automatically so I'll just have to hit enter and it will ask me to authorize it hit
authorize, and I'm logged in. so let's clear the screen and run a kubectl get pods; it says no resources found because it is the default namespace, so let me provide the namespace as kube-system. these are all the pods that are running inside this cluster. now let's see how to perform a test
deployment of a sample nginx image. so I'll go over here on the three dots and click upload; I have a yaml file already created on my local machine, so choose file, I'll select the file and hit upload, and this file will be uploaded to my cloud shell. here, if I do an ls, there is the nginx yaml; maybe I need to rename it to nginx.yaml. let's do a vi on that now, and this is my deployment yaml: it has two replicas of the nginx image, it'll be using version 1.14.2 and running on container port 80. I'll just get out, and now we can run kubectl; let me clear the screen first. now we can run kubectl apply -f nginx.yaml to apply the deployment; it says deployment created. let's do a kubectl get
deploy, and it is updated, all the pods are up and running. so if we go back to our cluster and go inside the workloads, you will see our nginx deployment is running. if we click on that you can see other details, like these are the two managed pods, and it is not exposing any service; it is running the nginx container, just one container per pod, and over here are the details and events. logs we have disabled already, and over here you see the yaml file that we have just
used it will also have some metadata because we have deployed it now when we want to update the deployment what we can do is like from
here as well we click and edit the deployment you know just make the changes over here and hit save or we can do it from the cloud shell itself like
kubectl edit deploy nginx, um, it was nginx-deployment. so you can make the changes and save the file, and it'll be done. let's update the replicas from 2 to 3 and save the file. once you do that,
it says deployment edited and now if we go back to workload and just wait for a couple of seconds hit refresh and you see three pods are there the changes will be
effective immediately. so this is how you can update your deployment as well. now, there'll be a time when you have more than one cluster running and you would want to set a default cluster; to do that, let me just exit from this cloud shell and open a new shell, and then you run this command, gcloud config set container/cluster and the name of the cluster. when you do that, your default container cluster will be set to test-cluster. the commands we have been using here are collected below for reference.
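a hedged recap of the commands from this demo (zone and project ID are placeholders for your own values):

```
# fetch credentials so kubectl can talk to the cluster
gcloud container clusters get-credentials test-cluster \
    --zone=us-central1-c --project=MY_PROJECT_ID

kubectl get pods -n kube-system        # list the system pods
kubectl apply -f nginx.yaml            # create the nginx deployment
kubectl get deploy                     # check the deployment status
kubectl edit deploy nginx-deployment   # e.g. bump replicas from 2 to 3

# make this cluster the default for gcloud container commands
gcloud config set container/cluster test-cluster
```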
this distinction is also really important from the exam point of view: we use gcloud commands when we need to create the cluster, delete the cluster or interact with its configuration, however we use kubectl commands when we need to interact with the cluster's control plane, workloads or any other Kubernetes component. well, the first question you should ask yourself before choosing a
database service in gcp is whether your data is structured or not. see, structured data is something that has a predefined structure; files such as JSON, XML, log files or any text file for that matter are considered structured data, however your static files such as your audio files, video files and images are all unstructured data. so if your data is not structured, if it is unstructured, then the next question you should ask is whether you
need mobile SDK capabilities if yes then cloud storage for Firebase would be the best database service for your use case if you don't need these
capabilities then cloud storage would be the best suitable database service for you so I have created a separate video for cloud storage I have discussed everything that you need to know about
cloud storage; the link is in the description as well as in the title bar. also, if you are new to gcp, cloud storage is similar to AWS S3 or Azure Storage, so I guess you get the idea of what cloud storage is; cloud storage for Firebase just adds some mobile SDK capabilities on top of cloud storage. all right, now let's say your data is structured; now you should be asking whether you need data analytics
capabilities. if you need a low latency database then Cloud Bigtable would be the suitable choice for you; Cloud Bigtable is a NoSQL wide-column database optimized for heavy reads and writes, so if you need those capabilities then please go ahead with Cloud Bigtable. if you don't need a low latency database then BigQuery would be the better option for you; BigQuery is a data warehousing solution for large amounts of relational data. this is not a NoSQL service, this is a relational service, and it is ideal for data warehousing, where you can run SQL based queries on your datasets to get the required results. now let's say your workload is not
analytics the next question you should ask is whether your data is a relational data or not well what exactly is a relational data any data in the form of
rows and columns is a relational data right and we manage and maintain relational data with the help of our relational database management system if
your data is relational data, then ask: do you need horizontal scalability? if not, then Cloud SQL would be the better choice for you. Cloud SQL is a general purpose SQL database; all these services are managed, so Cloud SQL is also a managed service, and it is cost effective compared to Cloud Spanner as it doesn't support horizontal scalability and auto scaling. it supports SQL engines such as MySQL, PostgreSQL and SQL Server, all as managed offerings within Cloud SQL. now let's say you need capabilities such as horizontal scalability and auto scaling, in case your workload is unpredictable and it grows as the demand grows; use cases such as gaming are an ideal fit for Cloud Spanner, because you don't know when the traffic will be high and there could be unexpected spikes in the traffic on the server, so Cloud Spanner will be a better choice for you. this is again the more expensive option, but it comes with some extra capabilities, whereas Cloud SQL is used for general purpose SQL database needs. now let's say your data is not
relational that means it is no SQL data that is in the form of key value pair or data in the form of columns
documents all these type of data are no SQL data and if you need the capabilities of a no SQL database then you should be checking whether you need
mobile SDK capabilities; if yes, then Cloud Firestore for Firebase would be an ideal choice for you. if you don't need those capabilities, then the next question you should ask yourself is whether you need an in-memory database for faster caching and faster performance for frequently queried data. if yes, then Memorystore would be a better choice for you; Memorystore comes with Redis and Memcached and provides sub-millisecond latency to your workload. but if this is not a requirement for you, then Cloud Datastore would be an ideal choice for you; it is again a managed NoSQL service that can store and manage your data so that you can get the maximum performance
out of it. almost every organization spends a lot of its resources analyzing the cost it incurs; in other words, they analyze the billing data to get insights about the cost, for reporting purposes and for making future forecasts. but how do you do that in gcp? well, in gcp you can use BigQuery. BigQuery is a managed data warehousing solution that lets you run SQL queries on your data to get meaningful insights from it. it has a scalable, distributed analysis engine that returns query results within seconds for terabytes of data and within minutes for petabytes of data, and as it is a managed service you don't have to worry about your infrastructure. so let's head over to the cloud console and see how you can analyze your billing data and all other
type of data so I'll just go to my cloud console and search for big query so this is my default project which is I'm currently using this is my
query editor on the right side so when you expand the project it has a default external connection right so you can create your new data sets over here
using these three dots click over here create a data set okay the project name is the default one let's give it a name
test p one, and for the data location you can select a multi-region location for high availability, or let's just choose us-east1. you can enable table expiration, so that tables can be cleaned up after a certain number of days to save some cost; let's just put the number 1, and the rest of the options you can leave as default. create the dataset, it is as simple as that, and when you click on the
data set it will show you the details now you can create a table inside this data set again click on these three dots hit create
table, and for the source you can specify whether it should be an empty table, which means you'll be entering your data manually, or you already have this data stored somewhere, like in Google Cloud Storage, Drive, Bigtable, or even AWS S3 or Azure Blob Storage. so let's select Google Cloud Storage; I have
already uploaded a sample CSV file which uh we can use in this demo but again you can export your billing data and use that I'll show you the other option as
well uh so please be with me till the end of the video so I'll select the file from here click browse this is my bucket and then this
is my sample CSV file. I'll select this file and it auto-detected the file format as CSV. now, the destination project is also the same project, this is my dataset name in which I'm creating the table, and you could also create a new dataset if you want, but we have already created one. give this table a name, let's call it sample data. and there are two options in table type, native table or external table: with a native table, the table will be created in BigQuery itself, but when you choose the external table option, it will just create the metadata in BigQuery and the source data will be referred to every time the data needs to be analyzed. so this can also save some cost if the data is huge, in which case the external table option would be sufficient for you. then it can auto-detect the schema as well, and if you click on advanced options you see it auto-populated the field delimiter as comma, because we have uploaded a CSV as the source data, and header rows to skip: we have one header row inside that CSV with the names of the fields, so I'll just skip that row. and that's it, just hit create table. all right, I'll just close this. the
table has been created over here, sample data. if you click on that you will see it auto-detected the schema for you; these were the fields inside that table, with types string, integer, integer and the rest as default values. then if you see the details, it has all the details like the number of rows and even the
size of the table then if you click on preview this is all the data that we have inside that table right so it will just convert that data to show you the
preview and you can run queries on it so you can just click over here you can create the query in a new tab or you can split this tab as well let's use a new
tab. all right, so if you do a select star from, this is the simple SQL query format, and the format for accessing the data is something like this: first is the project name, then we have the dataset name, and then we have the table name. so it will be selecting the data from the sample data table, in this dataset, in this project. because we only have 10 records in there, let's just use a limit of five and run the query. it will show you the results over here, we have five results and the
header row was excluded from the result. it also shows you other details as well, such as the JSON format of this table output and the execution details: how much time it took to execute, how much read and write it did and everything; the same way, the execution graph will also be shown over here. now you can save this result as well, to Google Drive, a local file, JSON, CSV and all the other options we have, or even to a BigQuery table. again, the results that we just got from running this query are created in a temporary table, and this table will be deleted once we close it. so this is the result from the data that we already have in our cloud storage, but we can also query the public datasets that are publicly available; the load-and-query flow we just did from the console can also be driven from the bq command line, as sketched below.
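as a hedged sketch with the bq command-line tool (the project, dataset, bucket and file names here loosely follow the demo and are placeholders):

```
# create the dataset, load the CSV from Cloud Storage and query it
bq --location=us-east1 mk --dataset MY_PROJECT:test_dataset
bq load --source_format=CSV --skip_leading_rows=1 --autodetect \
    test_dataset.sample_data gs://MY_BUCKET/sample.csv
bq query --use_legacy_sql=false \
    'SELECT * FROM `MY_PROJECT.test_dataset.sample_data` LIMIT 5'
```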
for example, let's close this and close this as well. all right, I'll just expand this; over here you can search the public data. I'll provide
you the link in the description from the Google Cloud docs from where we can get the sample public data sets so let's say we have a public data table called gsod
if you search okay it will not show you any results because currently it will search in your project only but you can click over here broaden search in all
projects, and the results will be shown to you, because this particular table is part of a different public project. so this is the project name, bigquery-public-data, this is the dataset and these are the table names. let's select this table and then we can run a
query again like let's see in a new tab let's do a select star from this and put the limit as 10 and run the
query. Okay, it shows you all the results over here, and you can apply any condition you want, like a WHERE clause, ORDER BY, GROUP BY; everything works just fine as in a regular SQL query. You can also join one or more tables from the public
data set to get the results so let's use this query for example it will get uh these two fields from both the tables so
over here we are selecting these two fields from two tables, gsod1929 and gsod1930, from the same dataset, we are joining based on this field, and we are limiting the records to 10. If you run it, it shows the results as intended. Again, this data is created as part of a temporary table inside BigQuery, and it refers to the data from the external dataset that is publicly available, this dataset over here. So this is really important to understand, both from the exam point of view and from the perspective of learning BigQuery.
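As a rough sketch of what such a join looks like from code, assuming the public dataset in the demo is the NOAA GSOD dataset (bigquery-public-data.noaa_gsod); the selected columns and join key are illustrative rather than the exact ones used on screen.

```python
# A sketch of joining two yearly tables in a public dataset. Assumes the NOAA GSOD
# public dataset; the columns and join key below are illustrative.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical billing project

sql = """
    SELECT a.stn, a.temp AS temp_1929, b.temp AS temp_1930
    FROM `bigquery-public-data.noaa_gsod.gsod1929` AS a
    JOIN `bigquery-public-data.noaa_gsod.gsod1930` AS b
      ON a.stn = b.stn
    LIMIT 10
"""

for row in client.query(sql).result():
    print(dict(row))
```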
You can also create a billing export, which will send the billing data asynchronously to BigQuery, and you can run analytics on that. To do that, you go to Billing export; let's just leave this for now. All right, so it has different options, like standard usage cost and detailed usage cost, i.e. what type of dataset you want to generate. This will be generated and updated daily with the cost details per SKU, and it will export the data. I haven't enabled it now, but you can just hit edit settings and create the dataset over here, let's say test123, save it, and create the same dataset in BigQuery. It will take some time, and your data will be exported in real time from Billing to BigQuery.
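Once the export table starts receiving data, you can analyze it like any other BigQuery table. A minimal sketch, assuming the dataset name test123 from the demo; the export table name is auto-generated from your billing account ID, so the one below is a placeholder.

```python
# A sketch of analyzing the billing export in BigQuery. The export table name is
# auto-generated from your billing account ID; the one below is a placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
    SELECT
      service.description AS service,
      SUM(cost) AS total_cost
    FROM `my-project.test123.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
    GROUP BY service
    ORDER BY total_cost DESC
"""

for row in client.query(sql).result():
    print(row["service"], row["total_cost"])
```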
Let's quickly have a look at the pricing. BigQuery pricing comes with two main components: one is analysis pricing and the other is storage pricing. The first one is the cost you incur when you run your SQL queries, your user-defined functions or scripts, and certain DML and DDL statements that scan your tables, and storage pricing is the cost to store the data that you load into BigQuery. Along with that, it has two different pricing models: one is on-demand pricing and the other is flat-rate pricing. With the first one, you are charged for the number of bytes processed by each query, all the reads the query performs, and the first 1 TB of query data processed per month is free. With flat-rate pricing, you buy slots, which are the virtual CPUs that you want to use to execute your queries and run analytics; you can use a flex slot (60-second commitment), a monthly slot, or an annual slot, and the prices are different for each of these commitments.
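Since on-demand pricing is driven by bytes processed, a dry run is a handy way to estimate what a query would scan (and roughly cost) before actually running it. A small sketch, with a placeholder project ID:

```python
# Because on-demand pricing is based on bytes processed, a dry run lets you check
# how much a query would scan before you run it. No bytes are billed for a dry run.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT * FROM `bigquery-public-data.noaa_gsod.gsod1929`",
    job_config=job_config,
)

# total_bytes_processed is the estimate of what the query would scan.
print(f"This query would process about {job.total_bytes_processed / 1e9:.2f} GB")
```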
After you have deployed your application to your live production environment, the job is not done yet; there are a lot of tasks that you would
have to perform regularly to Ensure High availability and scalability of your applications an organization proactively engages its resources in setting up its
operations and maintenance to avoid any last minute chaos but how would you exactly do that in Google Cloud well Cloud operations to
the rescue. Now you might be wondering, what is this Cloud Operations, formerly known as Stackdriver? Well, it's a suite of applications running on Google Cloud that helps you monitor and troubleshoot your infrastructure and applications. The tools include Monitoring, Logging, and advanced observability such as Trace, Debugger, and Profiler. These tools provide deep insights and best practices that help your organization apply Google SRE principles. Let's have a look at each of these tools. Cloud Monitoring provides visibility into the health of your apps and infrastructure, regardless of where they run, using metrics and dashboards. Using Metrics Explorer and Monitoring Query Language (MQL) you can analyze these metrics and create alerts and notifications; uptime checks and SLO monitoring are also great features of Monitoring. You can even define custom metrics unique to your use case and send them to external systems for further analysis. Cloud Logging aggregates log data
at scale from all of your infrastructure and applications into a single location. This enables users to search and analyze large volumes of logs to troubleshoot problems quickly using Logs Explorer. It also gives you the capability to create log-based metrics and set alerts on them. You can even create log sinks to route logs from one location to multiple supported destinations, such as tables created in BigQuery datasets, Pub/Sub topics, Cloud Storage buckets, and another Cloud Logging
bucket. Among the advanced observability features, Trace provides insight into request flow, service topology, and latency issues in your app, which can help improve application latency and increase performance. Profiler performs dynamic code analysis and continually analyzes your code's performance on each service so that you can improve its speed and reduce your cost. Debugger allows you to inspect running applications after deployment, without needing to stop or slow them down, by taking snapshots of variables and the call stack and injecting debug logpoints into your running application. Please note that Debugger is officially being deprecated, with support provided by GCP until May 2023; it has been officially replaced by an open-source tool called Snapshot Debugger, which
pretty much does the same thing now that we have seen Cloud operations what it is and what are the tools that are part of it now let's head over to Google Cloud
console and see logging and monitoring in action please pay attention to this particular part as I will be covering some important topics that could come in your associate Cloud engineer
certification exam. Let's have a look at Logging first, so I'll search logging over here and go to the Logging tool. Once you open it, this is the default view that you'll get: on the left side you'll see your resource types, and on the right side a preview of log entries. Currently it's set to show the last one hour of logs; you can change
it from here and you can select as per your needs let's select last 30 days of logs you can enter your custom range as well okay hit
apply. Now that I have broadened the time range, it shows a lot more logs and a lot more resource types. Let's have a look at each of those. The resource type could be VM instance, Kubernetes cluster, BigQuery, firewall; all the resources for which logs are available during this duration will be visible over here. Then we have the severity level of the logs: debug, default, info, notice, error, and warning. When we are doing some troubleshooting, when we need to check if there is any error, we basically check these two types of severities, error and warning. Now over
here as well there are different filters that you could use first is resource type so these all resources that were visible on the left side you can select
it from here as well, and from the log name as well; log name is again a type of filter. Now there are different types of logs in GCP, such as activity logs and data access logs. An activity log could be something like when a particular user was added to the project, who added the user, and what the principal username of that user is; all those details will be logged under the activity log. When there is any manipulation of data, it will be logged under data access logs, and in the same way there are different types of serial output logs, application logs, and many other types. I just want to check when the last time a user was added to this project was, who added that user, and what the user ID of the added user was, so I can filter the query using the activity log. Select the activity log type and hit apply; you'll notice it generates a filter over here, so the value is of type activity, and this is basically the log
name and the results are over here now based on that I could change the duration as well let's say I just
want to check the last 3 hours or the last 30 days, that's fine. Now there are lots of entries over here. The other filter that we can apply is resource type, so let's add the resource type as Google project. You'll notice that whenever I click over here it auto-generates the filter, so you can add filters from the left
side as well as from this drop down as well it's one and the same thing or if you know the value of that particular filter let's say you just type label it
will show you all the different values available for that you select one of those and provide the value so now that
I have used the log name activity and the resource type project, there is only one result; it happened today. It says the method was SetIamPolicy, which means an IAM role has been added to a user, and the principal email of who added the user was this. Let's have a look at the JSON payload: if you go to serviceData, you would see the type as audit data, and then if you go inside policyDelta, bindingDeltas, 0, you will see the action was ADD on the user tutorials with p@ gmail.com and this role was added. So this was the activity that was performed at this particular time. This is how we can check when a user was added to the project, who added them, and how it was done.
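The same Admin Activity audit entries can be pulled programmatically with the Cloud Logging client library. A minimal sketch, with placeholder project ID and timestamp; a comment shows how the log name changes for data access audit logs:

```python
# A sketch of listing Admin Activity audit log entries with the Cloud Logging
# client. The project ID and timestamp below are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project")

log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.methodName="SetIamPolicy" '
    'AND timestamp>="2024-01-01T00:00:00Z"'
)
# For data access audit logs, use ...%2Fdata_access instead of ...%2Factivity.

for entry in client.list_entries(filter_=log_filter,
                                 order_by=logging.DESCENDING,
                                 max_results=5):
    print(entry.timestamp, entry.payload)
```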
Now we want to see all the data access that happened in this project in the last 30 days, so again we can just remove this, select the log name as data access, and hit apply; the filter is generated. Over here you will see the resource types are mostly BigQuery, because this type of log, the data access audit log, has to be explicitly enabled for each resource; for BigQuery it's enabled by default and you cannot turn it off, which is why we are only seeing the BigQuery logs. All the operations that we did on data were logged over here. For example, let's say there was this job insert that happened; who ran the job and who created the log entry was logged. Once we have whatever data we need, we can download it by clicking over here; it will ask you whether you want to download it in JSON or CSV format and the maximum number of log lines. If there are more than 10,000 log entries, you can export your logs or create a sink instead. So to create a sink, let's go
over here to the Logs Router. Whenever you do any operation in your Cloud project, or any type of log is generated, by default it will be streamed to this default bucket over here, and you can customize what type of logs you want to export to which bucket. These two are the default log router sinks created for you, but you can create your own sink if you want to transfer your data from one source to another supported destination. Click create sink, give it a name, test, and select the sink destination; the destination could be, as we have seen before, a BigQuery dataset, a Cloud Logging bucket, a Cloud Storage bucket, a Pub/Sub topic, another option which is Splunk, or you can export it to another project as well. Let's select one for now, Cloud Storage bucket. You browse for the bucket from here; currently I don't have any bucket, which is why it's not visible, but you can create one if you want. Then you can choose what logs to include in the sink, and you can create an exclusion filter as well. Once you create the sink, these logs will be synced in near real time to that destination, and you can run all types of analysis and store these logs for long-term archival purposes as well. So this is how you create log sinks.
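The same kind of sink can also be created from code. A minimal sketch with the google-cloud-logging client; the bucket name and filter are placeholders:

```python
# A sketch of creating a log sink with the client library. The bucket name and
# inclusion filter are placeholders; route whatever logs you need.
from google.cloud import logging

client = logging.Client(project="my-project")

sink = client.sink(
    "test",                                                       # sink name from the demo
    filter_="severity>=WARNING",                                  # inclusion filter (placeholder)
    destination="storage.googleapis.com/my-log-archive-bucket",   # hypothetical bucket
)
sink.create()

# The sink writes via a Google-managed service account; grant that identity
# write access on the destination (e.g. roles/storage.objectCreator) so the
# exported logs can actually land in the bucket.
```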
Then we have Log storage. Again, this is showing the usage of these two buckets; it says Log Analytics is available and you can upgrade the buckets to use it, and here are their retention periods. Over here you can create log-based metrics: you hit create metric, then select the type, whether it is a counter metric or a distribution metric, give it a name, and apply the filter over here. It uses the logging query language to build the filter for you; you use that and you create the log-based metric.
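Creating a counter-type log-based metric from code looks roughly like this; the metric name and filter are placeholders:

```python
# A sketch of creating a counter-type log-based metric with the client library.
from google.cloud import logging

client = logging.Client(project="my-project")

metric = client.metric(
    "error_count",                                    # hypothetical metric name
    filter_="severity>=ERROR",                        # logging query language filter
    description="Counts log entries with severity ERROR or higher",
)
metric.create()
```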
Now let's go to Monitoring. This is the default page for Monitoring, and if you are using it for the first time, or you haven't used it very much, then a screen something like this will be presented to you, and it'll ask you to install the Ops Agent. The Ops Agent is the primary agent responsible for collecting telemetry data from your Compute Engine instances; it uses Fluent Bit for logs and it collects logs as well as
metrics so you can install Ops agent from here click on setup agent it will show you the instances in which the agent is installed or not detected you
can select that and install or update the agent. If you click that, it will generate a command for you that you can run in Cloud Shell, and it will basically install that agent for you. I'll just hit cancel for now and go back to the overview dashboard page. It's asking you to perform a few things: first is integrate with a Google Cloud service, then install an agent, create a dashboard (there is already a default dashboard), and then you can create an uptime check or an alert. Now let's go to metrics scope first. With Google Cloud Monitoring you can monitor more than one project, and you can add projects to the metrics scope right from here. Currently there are two projects, one is the monitored project and the other is the scoping project; the hosting project is my
current project that is hosting the monitoring service and monitored project is the project that is being monitored along with the first
one. So project DR would have visibility into project prod, but not the other way around. You can add more projects to this metrics scope by clicking over here; let's say I have two more projects, I'll add one of those, click add projects, and when you go to metrics scope now we have three projects, so I can monitor all three of these projects from this particular project. Two of them are monitored projects and this one is the scoping project. Now let's go to Metrics Explorer. Over here we have three options: one is the configuration view, which is a GUI-based metrics explorer, then we have MQL, which is Monitoring Query Language, and then we have PromQL, which is Prometheus Query Language. With the first one, let's go to VM instance metrics, select instance, and select CPU
utilization all right so when I did that I see CPU utilization as an aggregated view of all
the VMs running in the past 1 hour. Let's expand this to 6 weeks, and now we see multiple VMs over here. So, all the CPU utilization: on this particular day there were three VMs running, the CPU utilization of each of them is given here, and this is my current VM which is running. We can add filters to it; let's say you just want to see only a particular zone, so add the label as zone and for the value let's give it the zone ending in 1-a. There is only one VM running in zone 1-a; if we select the other one, which is 1-c, we should have the rest of the three VMs. Now you have added your filter as well. Next, how do you want to view the data? You can add your group-by clause over here, grouping by one or more fields, so let's group them by their instance ID. Now these three instance IDs have these three values, which is basically CPU utilization over a period of 1 minute; you could change it to 15 minutes, which makes more sense. Then there are some advanced options if you expand this: you could add a secondary aggregator and a secondary aligner as well. So this is how you create a chart from Metrics Explorer to see the utilization of a particular resource over a particular time frame, using filters, group-by clauses, inclusions, and exclusions.
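The same chart data can be pulled through the Monitoring API. A minimal sketch with the google-cloud-monitoring client, using a 15-minute mean alignment like the demo; the project ID and zone are placeholders:

```python
# A sketch of pulling the CPU-utilization series through the Monitoring API.
# Project ID and zone are placeholders; cross-series grouping can be added via
# cross_series_reducer and group_by_fields on the Aggregation.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 6 * 3600}}
)
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 900},  # 15-minute buckets, as in the demo
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": (
            'metric.type = "compute.googleapis.com/instance/cpu/utilization" '
            'AND resource.labels.zone = "us-central1-a"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        "aggregation": aggregation,
    }
)
for series in results:
    instance_id = series.resource.labels["instance_id"]
    points = [round(p.value.double_value, 3) for p in series.points]
    print(instance_id, points)
```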
Now if you want to convert all of this into a query language, you just click over here and it will be auto-converted for you. The first line is the table from which all the metrics are fetched, then we have the metric name, then the filter that we applied, and we have our group-by condition every 15 minutes; since I might have added another group-by condition, all those details are over here too, and you can make your changes from here as well. It is pretty handy to use MQL instead of the GUI-based explorer because you can easily update these queries and reuse them.
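For reference, the console-generated MQL for this kind of chart looks roughly like the query below, and you can also run it through the query API; the exact text the console produces may differ, and the project ID is a placeholder:

```python
# Roughly what the console-generated MQL looks like for this chart, run through
# the Monitoring query API. The exact generated query may differ slightly.
from google.cloud import monitoring_v3

mql = """
fetch gce_instance
| metric 'compute.googleapis.com/instance/cpu/utilization'
| group_by 15m, [value_utilization_mean: mean(value.utilization)]
| every 15m
"""

client = monitoring_v3.QueryServiceClient()
results = client.query_time_series(
    request={"name": "projects/my-project", "query": mql}
)
for ts in results:
    print(ts)
```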
While architecting a compute solution with Google Cloud, you need to take care of many important aspects, such as your existing infrastructure, your technical requirements, and your business requirements, and based on that you provide a proposed solution. In this video we're going to take a look at all those factors that should be kept in mind while selecting the most appropriate compute solution in Google Cloud. So let's take an example: suppose
you have a requirement something like this your existing infrastructure running a monolithic application and you need to migrate it to Google cloud and
there shouldn't be any refactoring of code. Now, which Google compute solution would you choose? In that scenario, your workload is not container based, because it is a legacy, monolithic application, so you cannot use a container-based solution and you cannot use a fully managed service, and it is not event driven either. So the option that you have is Compute Engine; this is what you use. You would do something called lift and shift: move your existing infrastructure from on-prem to the cloud without making any code
changes, so this is what you will be using. Now let's take another example: let's say your requirement is to move your on-prem Kubernetes cluster. You have an on-prem k8s cluster and you have to move it to Google Cloud, and again you want to lift and shift without making any code changes, without doing any refactoring. In that case
you would be using something called as gke so gke would be running on Google Cloud it's a partially managed kubernetes service there are two deployment options with gke one is
standard and the other is Autopilot. With Standard you have the flexibility to manage the underlying nodes: you can make changes to the nodes, select the number of nodes, and all those things. With Autopilot you don't get access to the underlying nodes, but you can still see your workloads, your control-plane components, and all the other things. Now, why do I call this a partially managed service? Because your auto-upgrades, your node repairs, and your patching of servers will still be taken care of by Google Cloud; that is why it is partially managed, and partially you have to take care of it.
Now, this is the case when you have to migrate your on-prem k8s cluster to Google Cloud, so this is what you would use. There could also be cases when, let's say, this was your release 1.0, and in the next release you've been tasked to move away from any partially managed service and you want to go completely serverless. In that case Cloud Run would be a better choice for you. This is a fully managed, serverless platform for containers where you don't have to manage your servers, so you won't have to do anything other than just run your code and focus on your business logic, rather than managing server operations and all
other tasks right so Cloud run is suitable in that use case now let's say your workload is uh not running on Google Cloud
it is container based and it is running on k8s, but not on Google Cloud; it is running on, let's say, AWS or Azure, on VMware, on bare metal, or even on premises,
and there are multiple cluster running in each of those or either of those and you would have to manage everything from
Google Cloud itself. What we use over here is Anthos. Anthos is a very powerful service provided by Google Cloud, and it gives you a single pane of glass for all your Kubernetes clusters, whether they are running on any of these cloud providers or even on premises, so you can
manage all your kubernetes cluster directly from gcp now let's say uh your workload is not a monolithic application and it is not container based you need
to check the requirements to see if it is event driven. Event driven means the execution of your application is triggered by events generated in other GCP services, such as a Pub/Sub topic or a Cloud Storage bucket object notification event. In that case, if it is event driven, you would use something called Cloud Functions. A Cloud Function is triggered only when an event is received from another service, and you will be charged based on the execution of your code, because it won't be running all the time; you are charged only for the number of times you invoke the Cloud Function. If it is not event driven and if
it is not container based but you still want to have a serverless service which is fully managed by gcp then you would use app engine in app engine again there
are two offerings, one is App Engine standard and the other is App Engine flexible. With standard you have some predefined runtimes that you can use, but with flexible you have the flexibility to create your own custom runtime, and you can even run containers on top of it. For App Engine, for GKE, and for Compute Engine I have created separate videos
with the detailed explanation and the demo as well but for this video I just wanted to cover in which scenario we use what type of gcp service and why do we
choose it. With App Engine there is one more benefit: you can have multiple versions of an application running, let's say 1.0 and 1.1 at the same time, and you can split the traffic between those two versions. Let's say you want 50% of the users to access the older version and 50% to access the newer version, like a canary deployment; in that case you can use App Engine. These are the basic requirements that we should keep in mind while selecting a service, but there could be some low-level requirements, and you have to do a lot of deep investigation to choose the right service for your use case. All
right that's it for this video I hope you enjoyed the video and found it valuable if you did give it a thumbs up comment on the video with your feedback and share with your friends colleagues
whoever can take advantage uh to learn from this and yeah I will see you soon with the next video keep learning keep upskilling and yeah I will see you soon thank you so much and have a good day
bye-bye