Tutorial – How to Convert to ONNX®

Converting from PyTorch/Safetensors to ONNX®

Given the advantages described in Onward with ONNX®, we’ve taken the position that if a model can run on ONNX, that’s how we want to run it. ONNX has a large model zoo, but we’ve still had to convert a few models by hand. Many projects, such as YOLOX, provide tools that make this a single command:

python tools/export_onnx.py -f yolox_layouts.py --output-name yolox_m_layout.onnx -n yolox-m -c YOLOX_outputs/yolox_layout/best_ckpt.pth

If the model is hosted on Hugging Face, we’ve seen good results using the Optimum conversion tool.

Optimum can also be used easily with a few short Python commands. The following walkthrough uses the microsoft/table-transformer-structure-recognition-v1.1-all model.

Steps:

1. Create a directory called model

mkdir model
cd model

2. Download the model from Hugging Face into the model directory

  • Navigate to the model’s Files tab
  • Download the model.safetensors file
  • Also download the config.json file
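If you’d rather script step 2 than click through the Files tab, the same files can be fetched with the huggingface_hub library. A sketch, assuming huggingface_hub is installed (pip install huggingface_hub):

```python
# Sketch: downloading the model weights and config from the Hugging Face Hub
# instead of clicking through the Files tab. Requires `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

def fetch_model_files(repo_id, filenames, local_dir="model"):
    """Download each named file from the repo into local_dir; return the local paths."""
    return [
        hf_hub_download(repo_id=repo_id, filename=name, local_dir=local_dir)
        for name in filenames
    ]

if __name__ == "__main__":
    fetch_model_files(
        "microsoft/table-transformer-structure-recognition-v1.1-all",
        ["model.safetensors", "config.json"],
    )
```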

3. Create and activate a Python virtual environment back in your root directory

cd ..
python3 -m venv .env
source .env/bin/activate

4. Install the Optimum CLI tool

pip install optimum

5. Since we’ll be exporting to ONNX, also ensure onnx and onnxruntime are installed

pip install onnx onnxruntime

6. Run the conversion CLI to write the exported ONNX model to the model_onnx directory. If the task cannot be inferred automatically, specify it with --task.

optimum-cli export onnx --task object-detection --model model model_onnx/

7. If successful, you should see output similar to the following; the exact values will differ depending on the model

Validating ONNX model model_onnx/model.onnx...
-[✓] ONNX model output names match reference model (logits, pred_boxes)
- Validating ONNX Model output "logits":
-[✓] (2, 125, 7) matches (2, 125, 7)
-[✓] all values close (atol: 1e-05)
- Validating ONNX Model output "pred_boxes":
-[✓] (2, 125, 4) matches (2, 125, 4)
-[✓] all values close (atol: 1e-05)

The ONNX export succeeded and the exported model was saved at: model_onnx

(.env) (base) bpaulin@bobs-mbp model_onnx % ls -l
total 226368
-rw-r--r-- 1 bpaulin staff 76783 Jun 4 11:48 config.json
-rw-r--r-- 1 bpaulin staff 115819080 Jun 4 11:48 model.onnx

Congratulations, you’ve just exported your first model to ONNX! You can now use this model with the ONNX Runtime. Stay tuned: the true power of ONNX is unlocked when you can also convert all of the pre- and post-processing steps, shrinking your dependency tree.

At Datavolo we’re on a journey to empower the 10x Data Engineer and to share knowledge with you along the way.
