
Terra-forming MediaLive Resources

We’ve all been there. Learning to use Terraform, writing beautiful code to manage our cloud infrastructure: it’s all sunshine and rainbows, until you realize you work with video and need MediaLive resources. Now sure, you can create these resources from the AWS console like a nomad and pass the ARNs in to make your CD pipeline do what it needs to do, but unfortunately we are not nomads. We are engineers and hackers and problem solvers. This article is about how I used Terraform to deploy MediaLive resources, and why it made me appreciate Terraform even more than I already did.

1,2,3 Solve

The problem: Terraform’s AWS provider does not support MediaLive resources

Tools available to us:

  1. Terraform uses a state file, which it can store in an S3 bucket (other backends are also available), to remember what it did the last time you ran terraform apply
  2. Terraform has a local-exec provisioner to run scripts on your behalf
  3. Boto3 supports MediaLive resources
  4. Terraform provisioners can run specific commands on create and on destroy
  5. Terraform can easily read S3 objects through a data source

The plan: Write a Python script that uses boto3 to create, update and delete MediaLive resources, while keeping track of what it did in an S3 bucket in a parsable format (JSON, XML, etc.). Use Terraform’s local-exec to run the Python script and tell it what it needs to do (this will become clearer when we write the Terraform code).

We will skip writing the full Python script, but all it needs to do is know when to update a resource (a file for the given resource identifier, i.e. the user-given name, already exists but the config it receives is different), when to create a new resource (the trace bucket contains no file for the given resource identifier yet), and when to delete (Terraform tells it to). A minimal sketch follows.
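For a feel of what such a script could look like, here is an illustrative sketch; the state-file layout, function names and CLI shape are assumptions, chosen only so that the stored JSON matches the .response.Input.Id path the module outputs read later.

# eml_input.py -- illustrative sketch, not the actual script from this setup.
# Usage: python eml_input.py <create|destroy> <trace-bucket> <input-name>
import json
import sys

import boto3

s3 = boto3.client("s3")
medialive = boto3.client("medialive")


def read_state(bucket, key):
    # Return the state stored by a previous run, or None on first run.
    try:
        obj = s3.get_object(Bucket=bucket, Key=key)
        return json.loads(obj["Body"].read())
    except s3.exceptions.NoSuchKey:
        return None


def create(bucket, name):
    # Terraform's local_file resource wrote the config to a file named <name>.
    with open(name) as f:
        config = json.load(f)
    state = read_state(bucket, name)
    if state is None:
        # No state file yet: this is a brand-new resource.
        response = medialive.create_input(Name=name, **config)
    elif state["request"] != config:
        # Same identifier, different config: update instead of create.
        # (Assumes the config keys are valid for update_input too; a real
        # script would filter out create-only keys such as Type.)
        input_id = state["response"]["Input"]["Id"]
        response = medialive.update_input(InputId=input_id, **config)
    else:
        return  # nothing changed
    response.pop("ResponseMetadata", None)
    # Store request and response together; the module's outputs read
    # .response.Input.Id and .response.Input.Arn from this object.
    s3.put_object(Bucket=bucket, Key=name,
                  Body=json.dumps({"request": config, "response": response}))


def destroy(bucket, name):
    state = read_state(bucket, name)
    if state is not None:
        medialive.delete_input(InputId=state["response"]["Input"]["Id"])
        s3.delete_object(Bucket=bucket, Key=name)


if __name__ == "__main__":
    action, bucket, name = sys.argv[1], sys.argv[2], sys.argv[3]
    if action == "create":
        create(bucket, name)
    else:
        destroy(bucket, name)

Note that the create action doubles as the update path: the script, not Terraform, decides which MediaLive call to make.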

Code:

Module code

resource "local_file" "input_data" {
 content  = jsonencode(var.input_data)
 filename = var.input_name
}
resource "null_resource" "eml_input" {
 provisioner "local-exec" {
   command = "python ${path.module}/eml_input.py create ${var.eml_trace_bucket} ${var.input_name}"
   when    = create
 }
 provisioner "local-exec" {
   command = "python ${path.module}/eml_input.py destroy ${self.triggers.eml_trace_bucket} ${self.triggers.input_name}"
   when    = destroy
 }
 triggers = {
   eml_input_md5    = md5(jsonencode(var.input_data))
   eml_trace_bucket = var.eml_trace_bucket
   input_name       = var.input_name
 }
 depends_on = [
   local_file.input_data
 ]
}
data "aws_s3_bucket_object" "eml_input_state" {
 bucket = var.eml_trace_bucket
 key    = var.input_name
 depends_on = [
   null_resource.eml_input
 ]
}

And outputs

output "id" {
 value = jsondecode(data.aws_s3_bucket_object.eml_input_state.body).response.Input.Id
}
output "arn" {
 value = jsondecode(data.aws_s3_bucket_object.eml_input_state.body).response.Input.Arn
}

And inputs

variable "eml_trace_bucket" {
 type = string
}
variable "input_name" {
 type = string
}
variable "input_data" {
 type = any
}

So what’s happening here?

Let’s walk through it block by block.

The first local_file resource creates a simple JSON file from the given EML input config so that our Python script can parse it. (Yes, we could also pass the JSON string directly on the command line.)

The next null_resource is where all the magic happens. We have a local-exec provisioner that is executed when Terraform decides the required action is a “create”; hence the when = create value. The second local-exec is triggered when Terraform deems that the resource should be “destroyed”. And it “decides” all of this from the triggers block, where we tell it to replace the resource whenever the md5 of the user-given input data, the name of the bucket we use to store these states, or the resource identifier changes.

Do notice that for “destroy” we read from self, because by the time the destroy provisioner runs, the things that need to be deleted are no longer part of your current input; they live in the state that Terraform recorded on its last run. The depends_on blocks are pretty self-explanatory.

We also need a way to receive information about the resource we just created. Luckily, Terraform has a data source to read from S3, which we use in the final blocks to fetch the state file and parse it into outputs.

Voilà, you now have a Terraform module that is pluggable, extendable and maintained just like a regular Terraform resource would be. You can plug any code you want into this technique; the universe is your limit, since you can make the deployments as robust as you want in Python or any other language you like.
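For example, consuming the module could look something like this (the module path, bucket name and input config below are hypothetical):

module "studio_feed" {
  source = "./modules/eml_input" # hypothetical path to the module above

  eml_trace_bucket = "my-eml-trace-bucket" # hypothetical trace bucket
  input_name       = "studio-feed"
  input_data = {
    Type    = "URL_PULL"
    Sources = [{ Url = "https://example.com/stream.m3u8" }]
  }
}

output "studio_feed_arn" {
  # Reads straight through to the state file the script wrote.
  value = module.studio_feed.arn
}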

I hope this was helpful to everyone reading. Keep Terra-forming, guys.

Md Sakibul Alam