The Evolution of AI Engineering and Datavolo’s Role

Humility is the first lesson

In the machine learning era of software engineering, one persistent truth has emerged: engineers are increasingly submitting to the will of the machine. A significant milestone in the transition from classical machine learning to deep learning was the replacement of hand-designed features with learned ones—representations discovered by the model itself. It became evident that, for many tasks, applying vast amounts of data and compute could yield greater breakthroughs than human algorithmic insight alone.

We have swiftly transitioned from crafting algorithms to training models, from manually engineered features to features learned by neural networks, and most recently, from hand-crafted loss functions to learned ones, thanks to techniques like Reinforcement Learning from Human Feedback (RLHF). RL optimization applied on top of Large Language Models (LLMs) has been crucial to breakthroughs on increasingly complex tasks. Surrendering control over the definition of the loss function is arguably one of the key breakthroughs associated with RLHF.

Among engineers developing AI applications, we are witnessing a recapitulation of this pattern. As the industry converges on standardized approaches for building AI applications, which entail integrating and customizing LLM behavior, a continuum of techniques with associated trade-offs and payoffs has emerged. These range from basic prompt engineering, to naive Retrieval-Augmented Generation (RAG), to sophisticated RAG pipelines (which require navigating challenging Information Retrieval problems), to fine-tuning and parameter-efficient fine-tuning methods, and even the creation of LLMs from the ground up, such as BloombergGPT.
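
To make the "naive RAG" end of that continuum concrete, here is a minimal sketch under simplifying assumptions: the embed() function is a toy stand-in for a real embedding model, the documents live in an in-memory list rather than a vector store, and none of the names are Datavolo APIs.

```python
# Naive RAG sketch: retrieve the chunks most similar to the question,
# then prepend them to the prompt as grounding context.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy placeholder embedding (character hashing); a real system would
    # call a production embedding model here.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

documents = [
    "NiFi flows move multimodal data into vector stores.",
    "Fine-tuning adapts a base model to a narrow task.",
    "RAG grounds answers in retrieved enterprise documents.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity on unit vectors, highest scores first.
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I ground LLM answers in my own data?"))
```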

When the model decides

One of the most exciting developments in recent waves of LLM innovation is the ability for models to invoke functions and expose an agent API capable of utilizing tools created and registered by AI engineers. This is an instantiation of the idea that LLMs can improve their responses by using tools external to their language-generation capability. These tools can handle math, coding, search and information retrieval, and ultimately calls to any external API.
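
As a rough illustration of the pattern, the sketch below registers a small math tool using the JSON-schema style of tool description popularized by LLM function calling, then dispatches a tool call to a local Python function. The tool name and the hard-coded tool call are hypothetical; a real application would send the tool spec along with a chat request and read the tool call back from the model's reply.

```python
import json

# A tool description in the JSON-schema style used for LLM function calling.
# The name, description, and parameters are illustrative only.
ADD_TOOL = {
    "type": "function",
    "function": {
        "name": "add_numbers",
        "description": "Add two numbers and return the sum.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"},
            },
            "required": ["a", "b"],
        },
    },
}

def add_numbers(a: float, b: float) -> float:
    return a + b

# Local registry mapping tool names to their Python implementations.
TOOL_REGISTRY = {"add_numbers": add_numbers}

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to the registered implementation."""
    fn = TOOL_REGISTRY[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return json.dumps({"result": fn(**args)})

# Simulated tool call, standing in for what the model would emit
# after being offered ADD_TOOL in its list of available tools.
fake_tool_call = {"name": "add_numbers", "arguments": '{"a": 2, "b": 3}'}
print(dispatch(fake_tool_call))  # {"result": 5}
```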

In this framework, the LLM becomes an intelligent router, possibly even the kernel of a new kind of operating system as Andrej Karpathy has suggested, capable of directing the user's intent to the most suitable tool to provide the best response. This evolution began with Meta's Toolformer paper, which introduced the ability for LLMs to make function calls, and has continued with the Assistants API announced at OpenAI's Dev Day, which facilitates the integration of custom tools with GPT-4 and its descendants.

The agent as the user persona

These agentic design patterns are once again empowering AI engineers to relinquish some control, with an eye toward new breakthroughs. Unlike RAG patterns, where the engineer determines when and how to incorporate more context into the prompt, the model now decides when to leverage retrieval as a tool to deliver the highest-quality response. As engineers explore this pattern, product designers must embrace the idea that they may be writing tool specifications for an agent API, and that those specifications effectively become the user stories they are delivering!
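
One way to read that last point: the tool specification itself becomes the requirements document. Below is a hypothetical spec for a retrieval tool whose description field reads much like a user story; it is what the model consults when deciding whether retrieval is worth invoking. The names and wording are illustrative assumptions, not a Datavolo or OpenAI artifact.

```python
import json

# Hypothetical retrieval tool spec. The description doubles as the "user story"
# the model reads when deciding whether to call the tool.
RETRIEVAL_TOOL = {
    "type": "function",
    "function": {
        "name": "search_enterprise_docs",
        "description": (
            "As an assistant answering questions about internal company data, "
            "call this tool whenever the answer may depend on documents outside "
            "the model's training data, so the response can be grounded in "
            "retrieved context."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Natural-language search query."},
                "top_k": {"type": "integer", "description": "Number of chunks to return."},
            },
            "required": ["query"],
        },
    },
}

print(json.dumps(RETRIEVAL_TOOL, indent=2))
```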

At Datavolo, we view the path forward as a continuum of approaches, each with its own trade-offs and payoffs, spanning these core design patterns. Enterprises will need to carefully consider critical trade-offs across dimensions such as system coupling and vendor lock-in, cost, complexity, and security and privacy. Furthermore, in each of these core design patterns, we believe Datavolo will play a pivotal role.

For in-context learning, Datavolo will prove invaluable for the data engineering steps essential in constructing effective RAG applications—acquiring, extracting, chunking & structuring, transforming, and loading multimodal data. In fine-tuning, Datavolo will assist data engineers in building staging environments with multimodal training data and assessing the performance of models. As for agent APIs, Datavolo will one day evolve into a tool that agents use themselves!
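
As a small illustration of the chunking & structuring step, here is a minimal fixed-size chunker with overlap. The function and its parameters are illustrative assumptions rather than Datavolo APIs; real pipelines would also handle extraction from PDFs, images, and audio, attach metadata, and load the resulting chunks into a vector store.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping, fixed-size character chunks.

    Overlap preserves context that would otherwise be cut at chunk boundaries.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

# Example: a long document becomes overlapping chunks ready for embedding and loading.
doc = "Datavolo builds multimodal data pipelines for AI. " * 40
print(len(chunk_text(doc)), "chunks")
```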

Datavolo offers the flexibility users need, ensuring that, regardless of the chosen design pattern, they will be well-equipped to develop multimodal AI applications that are impactful within the enterprise. Please stay tuned for future blogs where we will continue to explore this theme!
