How we use the Kubernetes Operator pattern

Organizations using NiFi for business-critical workloads have deep automation, orchestration, and security needs that Kubernetes by itself cannot support. In this second installment of our Kubernetes series, we explore how the Kubernetes Operator pattern addresses these needs and enables businesses to scale Apache NiFi.

If you missed it, check out David Handermann’s previous blog explaining our approach to Constructing Apache NiFi Clusters on Kubernetes.

Why Operators?

The Kubernetes documentation describes the operator pattern:

The operator pattern aims to capture the key aim of a human operator who is managing a service or set of services. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems.

People who run workloads on Kubernetes often like to use automation to take care of repeatable tasks. The operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.
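In practice, that automation takes the form of a control loop: observe the current state of the system, compare it to the desired state, and act to close the gap. A minimal sketch of the idea in Python (names and state shapes are illustrative only; real operators are built on frameworks such as controller-runtime or Kopf and watch the Kubernetes API):

```python
# Minimal sketch of an operator-style reconcile loop (hypothetical names).
# A real operator would watch the Kubernetes API; here desired and observed
# state are plain dicts to illustrate the comparison.

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Compare desired vs. observed state and return the actions to take."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")
        elif observed[name] != spec:
            actions.append(f"update {name}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"nifi-runtime-a": {"nodes": 3}, "nifi-runtime-b": {"nodes": 1}}
observed = {"nifi-runtime-a": {"nodes": 2}, "nifi-runtime-c": {"nodes": 1}}
print(reconcile(desired, observed))
# → ['update nifi-runtime-a', 'create nifi-runtime-b', 'delete nifi-runtime-c']
```

An operator runs this loop continuously, so the cluster converges back to the desired state even after failures or manual drift.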

Organizations using NiFi find that, at some point, common tasks emerge that would benefit from some degree of automation.

  • Creating and deleting NiFi runtimes with varying configurations (memory, CPU, minimum and maximum nodes)
  • Updating runtimes and extensions to newer versions
  • Supporting development and staging environments in addition to production NiFi clusters
  • Configuring horizontal autoscaling characteristics for a NiFi runtime
  • Configuring persistent storage in a way that allows end users to self-service
  • Creating and rotating TLS certificates
  • Establishing secure networking between end users and NiFi clusters
  • Deploying and orchestrating additional backend services to support specific processing needs
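In Kubernetes terms, tasks like these are typically captured in a declarative custom resource that an operator reconciles. A hypothetical runtime spec might look like the following sketch (all field names are illustrative assumptions, not Datavolo Operator's actual custom resource schema):

```python
from dataclasses import dataclass

# Hypothetical declarative spec for a NiFi runtime. Field names are
# illustrative only, not Datavolo Operator's actual API.
@dataclass
class RuntimeSpec:
    version: str        # runtime/extension image version to roll out
    cpu: str            # per-node CPU request, e.g. "2"
    memory: str         # per-node memory request, e.g. "8Gi"
    min_nodes: int      # lower bound for horizontal autoscaling
    max_nodes: int      # upper bound for horizontal autoscaling
    storage_class: str  # persistent storage backing NiFi repositories

    def validate(self) -> None:
        if not (1 <= self.min_nodes <= self.max_nodes):
            raise ValueError("require 1 <= min_nodes <= max_nodes")

spec = RuntimeSpec(version="2.0.0", cpu="2", memory="8Gi",
                   min_nodes=1, max_nodes=5, storage_class="standard")
spec.validate()  # passes; an operator would then reconcile toward this spec
```

The point of the declarative shape is that users state *what* they want, and the operator encodes *how* to get there.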

Clearly, NiFi could benefit from the Kubernetes operator pattern. Our deep expertise with the internals and design of NiFi led us to create the best operational model for running Datavolo at scale on Kubernetes.

Datavolo Operator

Datavolo uses Kubernetes to run our customers’ Datavolo Runtimes in addition to other backend services that extend NiFi with Datavolo-specific functionality. Our implementation of the operator pattern gives us the flexibility to serve customers with wide-ranging infrastructure, security, and performance requirements.

Managed Datavolo Cloud

Our managed Datavolo Cloud, currently in private beta, is a hosted service that allows users across multiple organizations to create NiFi runtimes, author flows, and monitor system performance. A couple of clicks will provision a new runtime managed by Datavolo that scales horizontally with load and receives upgrades without downtime. In addition to the wide array of standard NiFi processors, Datavolo Cloud ships with specialized processors like Parse PDF Document, Detect Document PII, and prompters for many AI backends. All of this is possible because of Datavolo Operator. Datavolo Operator allows us to manage the dynamism needed to maintain hundreds of business-critical flows and runtimes over time and ensure our customers’ systems are resilient in the face of unexpected issues.

Datavolo Operator also makes it easy to roll out new services—like the backend for our PDF Parser—deployed on dedicated CPUs and GPUs outside of runtime environments. These services, too, scale with load and ensure traffic is balanced fairly across tenants.

BYOC Deployment

Datavolo can also be installed in an organization’s Virtual Private Cloud (VPC), often referred to as a Bring-Your-Own-Cloud (BYOC) model. Here, Datavolo Operator provides all the power it does in our managed Datavolo Cloud, with the additional benefit that it ships with time-tested best practices for managing NiFi, developed over the past decade by the founding NiFi team. And while we offer discrete runtime configuration options in our managed Datavolo Cloud, Datavolo Operator running in your cloud gives you far more flexibility to define your own machine types, scaling characteristics, and more.

Datavolo Operator in your VPC

Ensure NiFi scales with your business

Long-time users of NiFi at scale know that a single runtime is an incredibly flexible environment for one or many data processing pipelines. However, organizations using NiFi in production contexts often find that many smaller runtimes offer better performance, resilience, and maintainability than a small number of large runtimes handling diverse data orchestration needs.

Datavolo Operator simplifies provisioning new runtimes and tearing down old ones, and it also plays a significant role in updating runtimes to the latest software versions. Combined with the Datavolo Control Plane, it lets users self-service runtime creation and modification and provides centralized observability into performance and status across your Datavolo Runtimes.

As workloads reach production data volumes, Datavolo Operator lets customers take advantage of Kubernetes horizontal autoscaling to ensure runtimes can meet demand while constraining costs. Because the operator also manages the runtimes in a Datavolo Cloud installation, autoscaling configuration helps customers optimize resource utilization across different infrastructure components.
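Kubernetes' HorizontalPodAutoscaler derives the desired replica count from the ratio of the observed metric to its target. A sketch of that documented formula, bounded by the minimum and maximum nodes a runtime might be configured with (the bounds and metric values here are made-up inputs, not Datavolo defaults):

```python
import math

def desired_replicas(current: int, metric: float, target: float,
                     min_nodes: int, max_nodes: int) -> int:
    # Core formula from the Kubernetes HPA documentation:
    #   desired = ceil(current * metricValue / targetValue)
    # clamped to the configured scaling bounds.
    desired = math.ceil(current * metric / target)
    return max(min_nodes, min(desired, max_nodes))

# CPU running at 180% of target with 3 nodes → scale toward 6, capped at 5.
print(desired_replicas(current=3, metric=1.8, target=1.0,
                       min_nodes=1, max_nodes=5))
# → 5
```

The clamp is what keeps cost in check: the operator can scale aggressively under load without ever exceeding the ceiling the user declared.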

Apply Security Best Practices

NiFi was built from the ground up to handle highly sensitive data in secure environments. Datavolo Operator automates provisioning TLS certificates for each Datavolo Runtime node and configures runtimes with the appropriate keystore and truststore. It also configures a secure HTTPS ingress gateway that centralizes access to clusters through a stable URL, allowing users to connect securely to the backend nodes.

Take advantage of updates

NiFi’s extensibility is a significant advantage for companies with diverse needs, but supporting and maintaining new features can be a challenge without the operator pattern. Datavolo Cloud customers are always able to fetch the latest Datavolo runtime-server and runtime-extensions images. However, getting the most out of many features requires additional configuration and, in some cases, the deployment of new services.

A recent update to Datavolo Cloud allows users to upload assets to their runtimes, which can supply ML models, database drivers, and more to the processors running there. Datavolo Operator configures where these files are persisted in your Kubernetes cluster. In more complex cases, Datavolo processors rely on separate services like the GPU-accelerated Document Parser. While the calling processors run within the Datavolo Runtime, these additional services must be deployed separately within the VPC. Datavolo Operator makes this deployment seamless, just as it does within our managed Datavolo Cloud offering.

Conclusion

The rapid adoption of GenAI spotlights the need for highly available, secure, auditable data processing pipelines. As we previously wrote, NiFi offers a uniquely powerful stack for the AI era. The speed at which companies need to adapt is only achievable if the obstacles to productivity and scale are minimized: NiFi needs an operator to get the most out of its usability and flexibility.
