Infrastructure As Code
Building and managing infrastructure by hand can be a tedious task, and recreating similar, complex infrastructure for multiple environments is the cherry on top: it just amplifies the definition of frustration. Fulfilling such a requirement takes an ample amount of time, lots of hard work and, obviously, engineering skill. Even destroying such infrastructure is no handy task.
Hence we needed a solution that is reliable, effective, maintainable, reusable, efficient and, most importantly, saves your time, because time is money 😉.
Luckily, some great minds found a solution to our problems and reintroduced us to a primitive but effective idea: "Just code what you want".
Infrastructure as Code (IaC) basically means describing your cloud infrastructure as code. There are several IaC tools in the market today, such as AWS CloudFormation, Pulumi and many more, but HashiCorp Terraform has a little upper hand because of three solid reasons, listed below, which are quite self-explanatory:
– Cloud agnostic
– Declarative syntax
– Growing community
Terraform is an infrastructure provisioning tool created by HashiCorp. It allows you to describe your infrastructure as code and generates an execution plan, a blueprint that outlines exactly what will happen when you apply your code.
Terraform has a very simple model, consisting mainly of Terraform Core and plugins, which are independent of each other.
Terraform Core is responsible for reading the configuration and building the dependency graph. Core loads the plugins and communicates with them via remote procedure calls (RPC); the plugins are then responsible for interacting with the client libraries of providers like Azure and AWS, calling the upstream third-party APIs.
Plugins are further divided into two types:
- Provider plugins – create, update or delete resources on clouds like AWS and Azure
- Provisioner plugins – run post-creation steps on resources
Terraform uses its own language called HashiCorp Configuration Language (HCL). HCL is JSON-compatible and is used to write the configuration files that describe the infrastructure resources to be deployed.
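As a minimal sketch of what HCL looks like (the region, bucket name and tags are placeholder values, not anything from a real setup), a configuration describing a single S3 bucket could be:

```hcl
# Configure the AWS provider (region is an example value)
provider "aws" {
  region = "us-east-1"
}

# Declare the desired infrastructure: one S3 bucket
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # placeholder name, must be globally unique

  tags = {
    Environment = "dev"
  }
}
```

Note the declarative style: the file states what should exist, and Terraform figures out how to get there.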
To execute your Terraform code you just need to run the commands below:
terraform init – initialize the working directory and download plugins.
terraform plan – preview exactly which resources will be created through your code.
terraform apply – actually create the resources.
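Put together, the typical workflow looks like the shell session below (the -out flag is optional, but it guarantees apply executes exactly the plan you reviewed):

```shell
terraform init               # download provider plugins, set up the backend
terraform plan -out=tfplan   # preview changes and save the plan to a file
terraform apply tfplan       # apply exactly the reviewed plan
```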
After your HCL code is executed, Terraform produces a state file, specifically named "terraform.tfstate", that records everything deployed in your cloud by your code. So never delete it, and better keep it safe and locked away: it may contain secrets you don't want to share, and the tfstate file is required for destroying your infrastructure in one go. But no need to worry, Terraform gives us options to store our tfstates in cloud storage.
How Our Organisation Is Utilising Terraform?
At the very beginning it was just toddler steps: getting along with HashiCorp Configuration Language (HCL), doing R&D on the resources and features provided by HashiCorp, then moving on to actual implementations and creating POCs. Once things were familiar, it was time to combine those POCs and deploy an actual infrastructure that could be used by the development team.
But delivering a fully functional infrastructure with all the dependencies and prerequisites in one click is not as simple as it sounds. Deploying even a simple infrastructure sometimes requires a lot of code, a little complexity and, most importantly, a clear understanding of the interdependencies between the resources used in the code.
“ It is always a good decision to implement modularity in your coding style ”
In Terraform code one can easily declare which modules to consume, so it is always a good idea to package reusable pieces as modules.
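For illustration (the module path and variable names here are hypothetical, not from our actual repositories), consuming a local module is as simple as:

```hcl
# Consume a reusable module from a local path (hypothetical layout)
module "kubernetes_cluster" {
  source = "./modules/aks-cluster" # hypothetical module path

  # Inputs defined by the module's variables (names are illustrative)
  cluster_name = "demo-cluster"
  node_count   = 3
}
```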
Below are some of the modules we have defined:
– Modules for creating a Kubernetes cluster in Azure, Google Cloud and AWS
Resources like azurerm_kubernetes_cluster and google_container_cluster make our life easy when spinning up a Kubernetes cluster in Azure and GCP, but creating a Kubernetes cluster is not that simple in AWS; hence the AWS module code consists of many resources like aws_security_group, aws_iam_role, aws_eks_node_group and many more.
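To show how compact the managed-cluster case can be, here is a hedged sketch of an AKS cluster (all names, the location and the VM size are placeholders, and a real configuration also needs an existing resource group and provider credentials):

```hcl
# Minimal AKS cluster sketch; resource group and credentials assumed to exist
resource "azurerm_kubernetes_cluster" "example" {
  name                = "demo-aks"
  location            = "eastus"
  resource_group_name = "demo-rg" # placeholder resource group
  dns_prefix          = "demoaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

The equivalent on AWS spans several resources (IAM roles, security groups, node groups), which is exactly why wrapping them in a module pays off.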
– Modules for deploying prerequisites
There are predefined, self-explanatory components that must be deployed whenever we spin up a cluster to host an application.
These include Brigade APIs, NGINX and a metrics server for monitoring purposes.
There can also be other requirements like ChartMuseum, SonarQube, Nexus and many more.
Most of these prerequisites can easily be deployed into the cluster using the helm_release resource together with null_resource.
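As a hedged sketch of the helm_release approach, here is how an NGINX ingress controller could be installed (the kubeconfig path is an assumption, and the chart repository shown is the public ingress-nginx repo, which may differ from what a given team uses):

```hcl
# Helm provider pointed at the cluster's kubeconfig (path is an assumption)
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

# Install the ingress-nginx chart into its own namespace
resource "helm_release" "nginx_ingress" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}
```

null_resource with a local-exec provisioner can cover the odd prerequisite that has no provider resource, at the cost of Terraform not tracking what the script actually did.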
– Modules for deploying monitoring stack
A monitoring stack is an essential part of every infrastructure these days. Popular tools are Grafana, Prometheus, Loki, Promtail and so on, all of which can be deployed using helm_release resources.
One can also create a custom Helm chart, push it to ChartMuseum and have the Terraform module consume it from there.
Tfstates for the modules:-
Terraform provides the backend feature for storing tfstates in remote locations, and it is recommended to use it as it reduces the chances of losing the tfstates.
– The tfstate for the cluster module is stored in an S3 bucket.
– All prerequisites modules share a single tfstate using the remote backend feature in an S3 bucket.
– All monitoring modules share a single tfstate using the remote backend feature in an S3 bucket.
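A sketch of such an S3 remote backend configuration (the bucket, key and region are placeholders; a DynamoDB table for state locking is a common optional addition):

```hcl
terraform {
  backend "s3" {
    bucket = "my-tfstate-bucket" # placeholder bucket name
    key    = "prerequisites/terraform.tfstate"
    region = "us-east-1"
    # dynamodb_table = "tf-locks"  # optional: enables state locking
  }
}
```

With this block in place, terraform init migrates the state to the bucket and all later runs read and write it there instead of on local disk.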
Workspaces in Terraform:-
A workspace in Terraform is like a working directory for it. It is a very cool and important feature: with workspaces we can reuse the same (or slightly modified) code, deploy it, and the tfstate will be created under the selected workspace, even inside the S3 bucket when the backend feature is used to store the tfstate remotely.
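For example, the same configuration can be deployed once per environment just by switching workspaces (the environment names here are illustrative):

```shell
terraform workspace new staging      # create and switch to a "staging" workspace
terraform apply                      # state is kept under the staging workspace
terraform workspace select default   # switch back to the default workspace
terraform workspace list             # show all workspaces, * marks the current one
```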