
Getting Started with Kratix — The Open-source Platform Engineering Framework

Prince Onyeanuna


Let's say you're a platform engineer, and you're trying to create an S3 bucket on AWS using Terraform. Usually, you'd have to write a Terraform configuration file, apply it, and then verify that the bucket was created correctly.

If the developers on your team needed to create another bucket, say for testing, you'd need to go through the same process of writing or duplicating the configuration, applying it, and ensuring it works as expected.

At the end of the day, you can quickly end up with a tangled web of configuration files, versioning issues, and a need for constant updates to keep things in sync.

That's not exactly convenient. There is, however, a way to avoid repeating this setup every time: Kratix.

With Kratix, all you have to do is define your platform once using its Promise model, and you get an API that developers can call whenever they need to create an S3 bucket. This not only saves time but also ensures consistency and reduces the chance of errors.

In this article, we'll go through what Kratix is and why it's essential for your team. We'll also go through how you can install Kratix and break down its fundamental model - Promises.

What is Kratix?

Kratix is an open-source framework designed to help organizations create and manage "platforms" in a consistent and reusable way. A platform, in this context, is a system or service that developers can use to do their work, such as setting up a Jenkins CI/CD pipeline, provisioning an S3 bucket, or anything else required to support their software development or operations.

At its core, Kratix introduces a concept called Promises, which act as reusable, declarative templates for infrastructure and platform requirements.

Instead of writing and maintaining endless configuration files, you can define a Promise for a particular service or resource, and Kratix ensures it's consistently applied across all your environments. It works seamlessly with Kubernetes, leveraging its extensibility to deliver a highly scalable and automated solution for managing infrastructure.

Kratix bridges the gap between developers and platform teams. Platform engineers define Promises that meet their organization's standards, while developers request and consume resources with minimal friction.

This way, Kratix not only simplifies infrastructure management but also empowers teams to move faster without getting bogged down in operational complexity.

Benefits of Kratix

Kratix brings several key advantages to teams, particularly those managing complex infrastructure and platforms. The following are three reasons why Kratix is important for your team:

Consistency and reusability
With Kratix, you define your platforms once and use them repeatedly without worrying about inconsistencies or errors. By creating a Promise, you standardize how a platform or resource is provisioned, ensuring every request follows the same predefined rules. This eliminates manual setup variations, making your infrastructure reliable and easier to manage.

Developer autonomy without sacrificing control
Kratix empowers developers by giving them the ability to request resources like S3 buckets, Jenkins instances, or any other service directly via APIs. At the same time, platform engineers maintain control over how those resources are configured and deployed. This balance reduces bottlenecks while keeping organizational standards intact.

Scalability and efficiency
Because Kratix runs on Kubernetes and uses worker clusters to handle provisioning, it scales seamlessly as your organization grows. Teams no longer waste time on repetitive tasks or managing sprawling configuration files. Instead, they can focus on more strategic work, confident that Kratix will handle the operational details efficiently.

How does Kratix work?

For a platform engineer, the journey begins with creating a Promise. A Promise is a declarative YAML file that specifies what resources or services (platforms) are being provisioned, such as S3 buckets or Jenkins instances.

Additionally, it can specify any parameters developers can configure, such as bucket names or instance sizes, as well as the backend logic used to provision the resource, typically with tools like Helm, Terraform, or custom scripts.

Promises are built using Kubernetes Custom Resource Definitions (CRDs), extending Kubernetes to understand and manage these platform-specific configurations.

Once the Promise is created, the platform engineer deploys it to the Kratix control plane, which runs on a Kubernetes cluster. This makes the Promise available as a Custom Resource (CR) in the cluster.

From this point, developers can interact with the API to submit requests for the platform described by the Promise.

When a developer needs a service (e.g., an S3 bucket), they create a Custom Resource (CR) based on the Promise. For example, they’ll submit a YAML file specifying the bucket's name and region, adhering to the parameters defined in the Promise.
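For illustration, such a request could look something like the manifest below. The API group, kind, and field names here are placeholders; in practice they are whatever the Promise's API defines.

apiVersion: example.promise.syntasso.io/v1alpha1
kind: S3Bucket
metadata:
  name: test-bucket
  namespace: default
spec:
  bucketName: my-team-test-bucket
  region: eu-west-2

Applying a manifest like this with kubectl apply is all the developer has to do; the Promise takes care of everything behind it.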

This request is handled by Kratix's control plane, which validates the input and prepares the resource for deployment.

Kratix separates the control plane from the execution environment. The control plane delegates the actual provisioning work to worker clusters. These worker clusters run the scripts, Helm charts, or Terraform modules defined in the Promise.

As the worker clusters execute the provisioning tasks, Kratix updates the developer's request with the status and outputs, such as access keys, URLs, or any other details they need to use the service.

This feedback is delivered directly through Kubernetes' API, making it easy for developers to track progress without leaving their workflow.
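Assuming the Promise exposes an S3Bucket kind as in the earlier sketch, a developer could follow their request with ordinary kubectl commands:

kubectl get s3buckets --namespace default
kubectl describe s3bucket test-bucket --namespace default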

[Architecture diagram: the Kratix platform cluster dispatching work to worker clusters. Image source: Kratix.io]

Setting up Kratix

Before starting this guide, ensure you have the following tools installed and ready:

  • KinD (or another Kubernetes tool): KinD (Kubernetes in Docker) is a lightweight solution for running Kubernetes clusters locally. If you prefer, you can use any other Kubernetes cluster management tool that suits your environment, such as Minikube or K3s.
  • Kubectl: The Kubernetes command-line tool is essential for interacting with your clusters. You'll use it to deploy resources, manage workloads, and verify installations.
  • Docker: Required for containerized workloads. Docker powers tools like KinD, allowing you to run Kubernetes clusters and manage containers.

NOTE: If you want to use a cloud-managed Kubernetes cluster such as EKS, GKE, or AKS, see the Kratix installation guide for provider-specific instructions.

Now that your prerequisites are in place, it's time to set up Kratix on your Kubernetes cluster.
In this section, we'll walk you through the steps of installing Kratix and configuring it for your environment.

Installing Kratix
Kratix enables platform-as-a-service functionality in Kubernetes. This guide provides a clear overview of how to set up Kratix on two Kubernetes clusters: a platform cluster and a worker cluster.
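If you are following along with KinD, one simple way to create the two clusters locally is shown below; the cluster names are just a suggestion.

kind create cluster --name platform
kind create cluster --name worker

KinD prefixes kubeconfig contexts with kind-, so run the Step 1 commands against the kind-platform context and the Step 2 commands against kind-worker, switching between them with kubectl config use-context.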

Step 1: Set up the platform cluster
Kratix is installed in the platform cluster. It acts as the central hub for managing workloads and dispatching them to worker clusters.

Install cert-manager
Kratix uses dynamic admission controllers to validate resources like Promises and Workloads. cert-manager ensures these controllers work securely by automatically managing the necessary TLS certificates.

Run the following command to install cert-manager:

kubectl apply --filename https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.yaml

Verify cert-manager is running:

kubectl get pods --namespace cert-manager

You should see output similar to the following:

NAME                                       READY   STATUS    RESTARTS      AGE
cert-manager-74d8669f65-n7pqg              1/1     Running   1 (74s ago)   36h
cert-manager-cainjector-5d44b78cf5-4pcmn   1/1     Running   3 (60s ago)   36h
cert-manager-webhook-564bccc49-n2jbr       1/1     Running   1 (34h ago)   36h

Deploy Kratix
Deploy Kratix into the platform cluster. This creates all core Kratix components, such as Custom Resource Definitions (CRDs), controllers, and other resources required for managing workloads.

The following command will install Kratix on your cluster:

kubectl apply --filename https://github.com/syntasso/kratix/releases/latest/download/kratix.yaml

Check that Kratix is running:

kubectl get pods --namespace kratix-platform-system

If it is, you should see the following output:

NAME                                                 READY   STATUS      RESTARTS      AGE
kratix-platform-controller-manager-94556588d-zrkjd   2/2     Running     2 (34h ago)   37h

Configure the state store
The state store is where Kratix stores workload definitions and their states. It acts as a bridge between the platform and worker clusters, keeping workload state synchronized while maintaining a clear separation between platform logic and workload delivery. Kratix supports two types of state store:

  • Bucket-based State Store: A simple setup for storing definitions in an object store like MinIO.
  • Git-based State Store: A more robust approach for production environments, leveraging GitOps principles.

For a bucket-based store, deploy MinIO:

kubectl apply --filename https://raw.githubusercontent.com/syntasso/kratix/main/config/samples/minio-install.yaml
kubectl apply --filename https://raw.githubusercontent.com/syntasso/kratix/main/config/samples/platform_v1alpha1_bucketstatestore.yaml

To confirm MinIO was deployed correctly, check its pod with the same verification command you used after installing Kratix above, and make sure it is in a Running state, as in the output below:

NAME                                                 READY   STATUS      RESTARTS      AGE
minio-54f847c6c5-7hg9c                               1/1     Running     1 (35h ago)   37h
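The second manifest you applied registers a BucketStateStore resource that points Kratix at the MinIO bucket. As a rough, illustrative sketch only (the sample manifest from the Kratix repository is the source of truth for the exact field names and values), it looks along these lines:

apiVersion: platform.kratix.io/v1alpha1
kind: BucketStateStore
metadata:
  name: default
spec:
  endpoint: minio.kratix-platform-system.svc.cluster.local
  bucketName: kratix
  insecure: true
  secretRef:
    name: minio-credentials
    namespace: default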

Step 2: Set up the worker cluster
The worker cluster is responsible for running workloads dispatched by the platform cluster.

Install Flux
Flux enables GitOps-based synchronization: it keeps the worker cluster in sync with the state store, automatically retrieving and applying workload definitions as they change.

The following command will install Flux on your worker cluster:

kubectl apply --filename https://raw.githubusercontent.com/syntasso/kratix/main/hack/destination/gitops-tk-install.yaml

You can verify that Flux is running by executing the following command:

kubectl get pods --namespace flux-system

You should see an output similar to this if everything goes well:

NAME                                       READY   STATUS    RESTARTS        AGE
helm-controller-76dff45854-j2xx2           1/1     Running   1 (5h27m ago)   42h
kustomize-controller-6bc5d5b96-kvstv       1/1     Running   1 (5h27m ago)   42h
notification-controller-7f5cd7fdb8-tsv5j   1/1     Running   1 (5h27m ago)   42h
source-controller-54c89dcbf6-wwllq         1/1     Running   1 (5h27m ago)   42h

Connect Flux to the state store
Configure Flux to pull workload definitions from the state store. This step is important because it sets up the synchronization pipeline, ensuring workload requests from the platform cluster are delivered to the worker cluster.

If using MinIO, apply the following configuration:

kubectl apply --filename https://raw.githubusercontent.com/syntasso/kratix/main/hack/destination/gitops-tk-resources.yaml

Register the worker cluster
Finally, inform Kratix about the worker cluster by applying the worker cluster registration manifest in the platform cluster.

Worker registration enables the platform cluster to track and dispatch workloads to the worker cluster seamlessly.

Apply the cluster registration with the command below:

kubectl apply --filename https://raw.githubusercontent.com/syntasso/kratix/main/config/samples/platform_v1alpha1_worker.yaml
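In recent Kratix versions, this manifest creates a Destination resource, which tells Kratix where workloads can be scheduled and which state store to write them to. A simplified, illustrative sketch looks like this:

apiVersion: platform.kratix.io/v1alpha1
kind: Destination
metadata:
  name: worker-1
spec:
  stateStoreRef:
    name: default
    kind: BucketStateStore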

Verify the worker cluster is registered by running this command:

kubectl get namespace kratix-worker-system

Your output should be similar to this:

NAME                   STATUS   AGE
kratix-worker-system   Active   42h

With this, you should have Kratix up and running in your cluster.

Kratix Promises

Now that the setup is out of the way, let's look at what a Promise is and what it contains.

What is a Promise?
A Promise in Kratix is a special way to define and deliver workloads. You can think of it as a contract between the platform cluster and the worker clusters. It promises to deliver a specific capability (like setting up cloud infrastructure, deploying an application, or creating Kubernetes resources) whenever it's requested.

A Promise wraps everything needed to fulfill a workload. This includes the logic for provisioning resources (for example, Terraform configurations) and a pipeline that handles input and output.
Promises make it easy to reuse common tasks across multiple clusters. So, instead of doing everything manually, you define the task once, and Kratix takes care of the rest.

What's in a Kratix Promise
A Kratix Promise is a YAML file whose spec contains three main sections. This is a template of how a Promise is structured:

apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: s3-bucket-promise   # The unique name of the Promise
spec:
  api:            # The CRD developers use to request this resource
  dependencies:   # Resources that must exist on the worker cluster first
  workflows:      # Pipelines that run when a request is created, updated, or deleted

Below is an explanation of each of these three sections:

API
The API, defined as a Custom Resource Definition (CRD) in Kubernetes, is the entry point for developers to interact with the Promise. It describes the resources developers can request and specifies configurable options, such as parameters for creating an S3 bucket (e.g., name, region). This ensures that every resource request aligns with the predefined standards set by the platform team.
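To make this concrete, the api section embeds a standard Kubernetes CRD. A trimmed-down sketch for the S3 bucket example could look like the following; the group, kind, and property names are made up for illustration:

api:
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: s3buckets.example.promise.syntasso.io
  spec:
    group: example.promise.syntasso.io
    names:
      kind: S3Bucket
      plural: s3buckets
      singular: s3bucket
    scope: Namespaced
    versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  bucketName:
                    type: string
                  region:
                    type: string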

Dependencies
Dependencies outline the prerequisites needed to provision the resource. These could include CRDs for third-party tools (e.g., Jenkins, PostgreSQL) or Operators required to manage those tools.

Platform engineers ensure these dependencies are installed on the worker cluster before the Promise can function properly. This guarantees a smooth workflow during resource creation.
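In the Promise YAML, dependencies is simply a list of Kubernetes manifests that Kratix applies to the worker cluster before any requests are fulfilled. A minimal, illustrative example:

dependencies:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: s3-bucket-resources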

Workflows
Workflows are pipelines that define the lifecycle actions for the requested resource. These include:

  • Creation workflow: Executes when a resource is requested. Converts developer input into the format needed by the backend tooling (e.g., Terraform or Helm).
  • Maintenance workflow: Handles updates, ensuring the resource remains in sync with any changes to its configuration.
  • Deletion workflow: Cleans up resources when no longer needed.

Workflows also extend beyond Kubernetes, enabling tasks such as sending alerts (e.g., notifying Slack about a deployment failure) or running external scripts or processes.
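In the Promise YAML, each workflow is a pipeline of container images that Kratix runs in order. The sketch below shows roughly what a creation (configure) workflow could look like; the exact schema varies between Kratix versions, and the image name is hypothetical:

workflows:
  resource:
    configure:
      - apiVersion: platform.kratix.io/v1alpha1
        kind: Pipeline
        metadata:
          name: configure-s3-bucket
        spec:
          containers:
            - name: generate-terraform
              image: myorg/s3-bucket-configure-pipeline:v0.1.0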

Conclusion

Kratix transforms how platform engineering teams operate by introducing reusable, declarative workflows called Promises. It simplifies complex infrastructure management, enhances developer autonomy, and ensures consistency across environments.

In this article, we defined Kratix as a tool that allows platform engineers to deliver a platform as a service. We also broke down how Kratix works and dissected its underlying building block: Promises.

If you're part of a team struggling with scattered configurations, repetitive setups, or versioning headaches, adopting Kratix can revolutionize your workflows.

Ready to give Kratix a try? You can start by writing your first Promise or, better still, visit the Kratix Marketplace and use any of the Promises created by the Kratix community.


Prince Onyeanuna

Prince is a technical writer and DevOps engineer who believes in the power of showing up. He is passionate about helping others learn and grow through writing and coding.