Simulating the Impossible: Building an Open-Source Horn Design Pipeline with AI

For acoustic engineers and DIY audio enthusiasts, designing high-performance horns has often felt like a dark art. The process has been dominated by legacy software, most notably Hornresp—a powerful but frustratingly limited tool. It's a Windows-only application that you can't automate, making large-scale experimentation and optimization nearly impossible.

I always dreamed of a better way: a modern, open-source pipeline that could plot frequency response curves for any 3D horn design, fully automated and accessible. The problem? I didn't have the deep mathematical background required for the complex physics simulations. But, with the help of AI development tools like Cursor Composer and Google Gemini, I was able to build something I never could have before: the Horn Simulation project.

This project is a complete, end-to-end pipeline that takes a set of geometric parameters and produces detailed performance graphs for a horn loudspeaker. Here's a look at how it works, how it was built, and where it's going next.

The Pipeline: From Geometry to Graph

The entire system is designed as a modular, containerized pipeline orchestrated by Nextflow. This means each step is a separate, independent tool that can be updated and tested in isolation. The workflow is composed of three core stages, each illustrated with a short code sketch after the list:

  1. Geometry Generation (horn-geometry): The pipeline starts by programmatically generating a 3D model of the horn. Instead of relying on a heavy, external CAD program, the system uses Gmsh's built-in OpenCASCADE kernel. This allows us to create precise, parametric horn geometries (like conical or exponential flares) with a lightweight, scriptable Python API, exporting them to STEP files for the next stage.

  2. Solving (horn-solver): This is the heart of the project. The 3D model is ingested by the solver, which performs a high-fidelity simulation using the Finite Element Method (FEM). This stage, built on the powerful FEniCSx library, solves the complex Helmholtz equation to model how sound waves propagate through the horn. It runs a frequency sweep, calculating the acoustic pressure at each step.

  3. Analysis & Plotting (horn-analysis): Raw simulation data is just a pile of numbers. This final stage uses the familiar data science tools pandas and matplotlib to process the solver's output. It calculates the final Sound Pressure Level (SPL) and generates the key visualization: a frequency response plot that shows the horn's performance across the audio spectrum.
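
To make the geometry stage concrete, here is a minimal sketch of how a conical horn can be built and exported with Gmsh's OpenCASCADE kernel. The dimensions and file names are illustrative, not the project's actual parameters:

```python
import gmsh

# Illustrative horn dimensions in metres (not the project's real defaults)
THROAT_RADIUS = 0.025  # 25 mm throat
MOUTH_RADIUS = 0.150   # 150 mm mouth
LENGTH = 0.300         # 300 mm axial length

gmsh.initialize()
gmsh.model.add("conical_horn")

# addCone builds a frustum: base at (x, y, z), axis along (dx, dy, dz),
# radius r1 at the base (throat) and r2 at the far end (mouth)
gmsh.model.occ.addCone(0, 0, 0, 0, 0, LENGTH, THROAT_RADIUS, MOUTH_RADIUS)
gmsh.model.occ.synchronize()

# Writing a .step file exports the OpenCASCADE solid for the solver stage
gmsh.write("horn.step")
gmsh.finalize()
```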
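For the solver stage, the variational (weak) form of the Helmholtz equation is what gets assembled at each frequency in the sweep. Below is a stripped-down sketch in FEniCSx/UFL, using a unit cube in place of the real horn mesh and illustrative names throughout; exact function names shift a little between dolfinx releases, and this only runs in a complex-mode build (which matters later in this story):

```python
import numpy as np
import ufl
from mpi4py import MPI
from dolfinx import fem, mesh

# Placeholder domain; the real pipeline meshes the STEP geometry instead
domain = mesh.create_unit_cube(MPI.COMM_WORLD, 8, 8, 8)
V = fem.functionspace(domain, ("Lagrange", 1))

p = ufl.TrialFunction(V)  # complex acoustic pressure
q = ufl.TestFunction(V)

# Wavenumber k = 2*pi*f / c at f = 1 kHz, with c = 343 m/s in air
k = fem.Constant(domain, np.complex128(2 * np.pi * 1000.0 / 343.0))

# Weak form of the Helmholtz equation (boundary terms for the driving
# piston and the radiation condition are omitted for brevity).
# ufl.inner conjugates its second argument, as the complex form requires.
a = ufl.inner(ufl.grad(p), ufl.grad(q)) * ufl.dx - k**2 * ufl.inner(p, q) * ufl.dx
```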
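And for the analysis stage, turning solver output into an SPL plot is straightforward pandas and matplotlib work. A sketch, assuming a hypothetical CSV layout with frequency_hz and pressure_pa columns (the real pipeline's schema may differ):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical solver output: one row per frequency in the sweep
df = pd.read_csv("solver_output.csv")

# SPL in dB relative to the standard reference pressure of 20 micropascals
P_REF = 20e-6
df["spl_db"] = 20 * np.log10(df["pressure_pa"].abs() / P_REF)

fig, ax = plt.subplots()
ax.semilogx(df["frequency_hz"], df["spl_db"])
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("SPL (dB re 20 µPa)")
ax.grid(True, which="both", alpha=0.3)
fig.savefig("frequency_response.png", dpi=150)
```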

The Development Story: Wrestling with the Machine

Building this pipeline was an incredible journey of discovery, made possible by AI assistance. I could focus on the high-level logic and architecture while my AI partners helped write the complex code for the physics simulation. However, we hit a wall—a massive, project-halting roadblock.

The FEniCSx solver kept failing with a cryptic error: ValueError: Unexpected complex value in real expression.

We tried everything:

  • We rewrote the variational problem using ufl.inner instead of ufl.dot.
  • We tried explicitly compiling the forms before passing them to the solver.
  • We even tried to decompose the problem into separate real and imaginary parts.

Nothing worked. The error suggested our math was wrong, but the code seemed correct. After days of debugging, the answer was found not in the Python code, but in the Docker environment.

The official dolfinx Docker image ships with a complex-number-capable build of the PETSc linear algebra library, but its environment defaults to real-only mode. The JIT compiler was therefore being told to work only with real numbers, so when it encountered the complex values the acoustic simulation requires, it rightly threw an error.

The fix was a single line in the entrypoint.sh script to activate the complex-mode environment before running the solver: source /usr/local/bin/dolfinx-complex-mode. With that one change, the entire pipeline lit up, and the tests finally passed. It was an amazing moment and a perfect example of how AI collaboration can help solve even the most obscure technical challenges.
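
To keep that regression from sneaking back in, a solver can assert at start-up that PETSc is really in complex mode. This is a sketch, not the project's actual test:

```python
import numpy as np
from petsc4py import PETSc

# PETSc.ScalarType is complex128 only when the complex build is active;
# in the real-mode default it is float64 and the Helmholtz forms will fail
assert np.issubdtype(PETSc.ScalarType, np.complexfloating), (
    "Real-mode PETSc detected; "
    "run 'source /usr/local/bin/dolfinx-complex-mode' first"
)
```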

What's Next for Horn Simulation?

The MVP is complete, but the vision for this project is much larger. Now that the core pipeline is stable, the next steps are focused on expanding its capabilities:

  • Advanced Directivity Analysis: Extending the solver to compute the exterior sound field (for example with a boundary element method) and plot the horn's off-axis response, creating full 2D/3D directivity patterns (polar plots).
  • Machine Learning Surrogate Models: The high-fidelity simulations are slow, but the data they generate is the perfect training set for a neural network. A trained surrogate model could predict horn performance with accuracy approaching the full 3D simulation in a fraction of the time, enabling massive-scale optimization.
  • Crossover and System Integration: Adding a module to design the electronic crossover filters needed to combine the horn with other drivers, allowing for the simulation of complete multi-way loudspeaker systems.

This project started as an ambitious idea I never thought I could realize on my own. Today, it's a powerful, working tool that makes advanced acoustic simulation accessible, automated, and entirely open-source.

You can explore the complete project, dive into the code, and run your own simulations on GitHub at timini/horn-simulation.