Frequently Asked Questions

How do I get started?

You need a:

  • GitHub or Bitbucket account
  • AWS IAM credentials

Onboarding

  • takes about 5 minutes
  • your first project takes 2-3 minutes

Where is Terraform/OpenTofu executed, and by whom?

Config0 uses workers, or consumers, to trigger the execution of Terraform/OpenTofu code within users' AWS accounts. Execution is triggered via a cross-account assume role, and the Terraform/OpenTofu code then runs directly inside the user's AWS account, with the creation and destruction of infrastructure resources governed by AWS IAM roles. This approach aligns with best practices by avoiding long-term IAM keys that could be stolen or distributed without authorization.
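The cross-account handoff above can be sketched as follows. This is an illustration only: the account ID and role name are placeholders, not real Config0 values, and the actual call would be made by a worker through boto3's STS client.

```python
# Sketch of the cross-account assume-role handoff described above.
# The account ID and role name are hypothetical placeholders.

def build_assume_role_request(account_id: str, role_name: str,
                              session_name: str = "config0-worker") -> dict:
    """Build the keyword arguments for an STS AssumeRole call."""
    return {
        "RoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
        "RoleSessionName": session_name,
        # Short-lived credentials only (1 hour, the STS default) --
        # no long-term IAM keys are ever created.
        "DurationSeconds": 3600,
    }

# In a worker, the request would be used roughly like this:
#   import boto3
#   sts = boto3.client("sts")
#   creds = sts.assume_role(**build_assume_role_request(
#       "123456789012", "Config0ExecutionRole"))["Credentials"]
# The temporary credentials are then handed to Terraform/OpenTofu.
```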

Why does the execution take a bit of time to start?

The workers run within an EC2 instance, which takes 2-3 minutes to be created, similar to what you would see with GitLab runners. This also lets us control costs by isolating runs through short-lived spot instances. In the future, we will offer workers and consumers that run within a Kubernetes or Fargate cluster.

Can stacks be protected?

Absolutely. Stacks can be categorized as either public or private. Private stacks are visible only to their owners and individuals explicitly granted access. In contrast, public stacks are openly shared and accessible to others within the Config0 Marketplace. This facilitates collaboration and enables the exchange of automation resources among users.

What is the difference between stacks and config0.yml?

Stacks contain the automation logic. They can be invoked within other stacks, allowing for modular and reusable automation. Stacks have versioning and locking mechanisms to ensure stability and control over changes.

The config0.yml file not only locks the stacks but also fixes the variables and arguments that determine the behavior of the stack(s). It locks the complete “launch” of automation, encompassing the variables and stacks involved. This ensures automation consistency and reproducibility.
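As a minimal illustration of that locking, a config0.yml fragment based on the schema of the full EKS example in this FAQ might pin a stack and fix its arguments like this (the `aws_rds` stack name and its arguments are hypothetical, chosen only to show the shape):

```yaml
# Hypothetical fragment: the stack reference and its arguments are
# fixed in one place, so every launch of this file is reproducible.
infrastructure:
  db:
    stack_name: config0-publish:::aws_rds   # hypothetical stack name
    arguments:
      db_name: eval-config0-db
      instance_class: db.t3.micro
```

Because both the stack reference and its arguments live in the same file, re-running the launch always reproduces the same automation.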

Can you do function as a service (serverless)?

Absolutely. While Config0 and its associated helpers are particularly designed to support OpenTofu and Terraform, the concept of creating stacks and workflows within Config0 is technology-agnostic. This means you can use Config0 to orchestrate and manage automation workflows regardless of the underlying technologies or tools being used. Config0 provides a flexible and adaptable framework for automating processes across different environments and technologies.

Are stacks easy to create?

Indeed, stacks in Config0 are designed to be easily readable and writable, much like Dockerfiles. They are written in Python and launched through YAML configuration files. These YAML configuration files provide a clear and concise way to define the desired behavior and parameters for running the stacks. Here is an example of a YAML configuration file:

Full Example: EKS Cluster with Existing VPC
global:
  arguments:
    aws_default_region: eu-west-1
  metadata:
    labels:
      general:
        environment: dev
        purpose: eval-config0
      infrastructure:
        cloud: aws
        product: eks
    matchSelectors:
      network_vars:
        labels:
          environment: dev
          purpose: eval-config0
          area: network
          region: eu-west-1
          cloud: aws
      eks_info:
        keys:
          provider: aws
          region: eu-west-1
          aws_default_region: eu-west-1
        params:
          resource_type: eks
        labels:
          environment: dev
          purpose: eval-config0
          cloud: aws
infrastructure:
  eks:
    stack_name: config0-publish:::aws_eks
    arguments:
      vpc_name: selector:::network_vars::vpc_name
      vpc_id: selector:::network_vars::vpc_id
      # vpc with NAT, private_subnet_ids is more secure
      subnet_ids: selector:::network_vars::public_subnet_ids:csv
      sg_id: selector:::network_vars::bastion_sg_id
      eks_cluster: eval-config0-eks
      eks_cluster_version: 1.25
      publish_to_saas: true
      # vpc with NAT, private_subnet_ids is more secure
      eks_subnet_ids: selector:::network_vars::public_subnet_ids:csv
      eks_node_role_arn: selector:::eks_info::node_role_arn
      eks_node_capacity_type: ON_DEMAND
      eks_node_ami_type: AL2_x86_64
      eks_node_max_capacity: 1
      eks_node_min_capacity: 1
      eks_node_desired_capacity: 1
      eks_node_disksize: 25
      eks_node_instance_types: 
        - t3.medium
        - t3.large
      cloud_tags_hash:
        environment: dev
        purpose: eval-config0
    spec:
      serialization:
        to_base64:
          arguments:
            - cloud_tags_hash
    metadata:
      labels:
        - general
        - infrastructure
      matchSelectors:
        - network_vars
        - eks_info
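The `selector:::` references in the arguments above (e.g. `selector:::network_vars::vpc_id`, or `...:csv` for list flattening) resolve against the resources matched by `matchSelectors`. The sketch below is purely illustrative; Config0's actual resolver is internal, and this only models the `selector:::<match>::<key>[:csv]` syntax visible in the example:

```python
# Hypothetical sketch of resolving a "selector:::" argument reference
# against previously matched resource outputs. Plain values pass through.

def resolve_selector(value, selections):
    """Resolve selector:::<match>::<key>[:csv]; return other values as-is."""
    if not isinstance(value, str) or not value.startswith("selector:::"):
        return value
    ref = value[len("selector:::"):]
    to_csv = ref.endswith(":csv")
    if to_csv:
        ref = ref[:-len(":csv")]
    match_name, key = ref.split("::", 1)
    resolved = selections[match_name][key]
    # ":csv" flattens a list into a comma-separated string
    if to_csv and isinstance(resolved, list):
        resolved = ",".join(resolved)
    return resolved
```

For example, with `selections = {"network_vars": {"vpc_id": "vpc-123", "public_subnet_ids": ["subnet-a", "subnet-b"]}}`, the reference `selector:::network_vars::public_subnet_ids:csv` would resolve to `"subnet-a,subnet-b"`, while a literal argument like `eval-config0-eks` passes through unchanged.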

How are platforms built and distributed?

Self-service platforms are created by integrating various technologies into connected pipelines, also known as workflows. These pipelines can be either one-time automations or triggered, recurring automations. Here are examples of the different types:

  • One-time A to B automation: This type of automation involves tasks like managing AWS resources, where resources such as a database or an EKS cluster are created once. The automation is executed only once for these tasks.

  • Recurring or triggered automation: This type of automation includes data pipelines such as traditional Extract, Transform, Load (ETL) processes, which extract data from a data stream, transform it, and load it into a data analytics platform like Redshift. Another example of triggered automation is Continuous Integration and Continuous Deployment (CI/CD), where the automation is executed frequently in response to code changes.

The automation process operates at two layers:

  1. Orchestration Layer: This layer primarily focuses on non-server configuration tasks, such as making API calls to cloud providers like AWS, Azure, and Google Cloud. Jenkins and StackStorm are examples of tools commonly used in this layer.

  2. Delegation Layer (Configuration Management): This layer involves modifying and configuring servers. Tools like Chef and Ansible are commonly used in this layer to manage server configurations.

Config0 enables the sequential execution of activities in both the orchestration and delegation layers. If you already have investments in tools like Ansible and Terraform, there is no need to abandon them. These tools can be easily integrated into stacks to provide a single entry point.
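The sequential two-layer execution described above can be sketched as an ordered plan: provision first (orchestration), then configure the resulting servers (delegation). This is an illustration only, not Config0's internal API; the `tofu` and `ansible-playbook` commands are the standard CLIs for OpenTofu and Ansible, and a real stack would execute them inside the worker (e.g. via subprocess):

```python
# Illustrative-only sketch of an orchestrate-then-configure workflow.
# A real Config0 stack would run these commands inside the worker.

def plan_workflow(tf_dir: str, playbook: str) -> list:
    """Return the ordered commands for a two-layer run."""
    return [
        # Orchestration layer: provision cloud resources via API calls
        ["tofu", f"-chdir={tf_dir}", "init"],
        ["tofu", f"-chdir={tf_dir}", "apply", "-auto-approve"],
        # Delegation layer: configure the servers that were created
        ["ansible-playbook", playbook],
    ]
```

The ordering is the point: configuration management only runs after the orchestration layer has created the servers it targets.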