Toward Predictive Combustion: A Blog Series
Posted by Kelly Senecal on August 26, 2013
When I first started running CFD back in the ’90s, coarse grids and simplified combustion models were the norm, and for good reason – processor speeds were slow and simulations were mainly run in serial. Fast forward to 2013 and now we have a different story. Most commercial codes run in parallel and CPU speeds have increased significantly. It’s now easier than ever to incorporate more resolution and more chemistry into your simulations. But there are other pieces needed to solve the predictive combustion puzzle. When properly linked, these pieces work together to provide an accurate solution to one of the most complex problems in CFD today.
Automatic for the people
Automatic, adaptive mesh generation – for predictive combustion simulations, it’s not just convenient, it’s necessary. If you’re a CFD user, I’m guessing that the idea of never needing to make a mesh again is a dream come true. With this roadblock removed, you can spend more time performing and analyzing your simulations. But perhaps even more importantly, the guesswork is removed from the mesh generation process. How can users be asked to generate a mesh ahead of time when the optimal mesh differs from case to case?
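The idea behind gradient-based adaptive refinement can be sketched in a few lines. This is a hypothetical 1-D illustration, not any particular code’s AMR algorithm: cells whose solution jump to a neighbor exceeds a threshold are flagged for subdivision, so resolution follows the flow features rather than a user’s up-front guess.

```python
# Minimal sketch of gradient-based adaptive mesh refinement (AMR) flagging.
# Hypothetical 1-D example: a cell is flagged for subdivision whenever the
# solution jump to a neighbor exceeds a user-chosen threshold.

def flag_cells_for_refinement(field, threshold):
    """Return indices of cells whose jump to either neighbor exceeds threshold."""
    flagged = set()
    for i in range(len(field) - 1):
        if abs(field[i + 1] - field[i]) > threshold:
            flagged.update((i, i + 1))   # refine both sides of a sharp gradient
    return sorted(flagged)

# A smooth field with one sharp front, e.g. a temperature jump across a flame.
field = [300.0, 301.0, 302.0, 1900.0, 2000.0, 2001.0]
print(flag_cells_for_refinement(field, threshold=100.0))  # -> [2, 3]
```

In practice the flagged cells would be subdivided and the check repeated every time step, which is exactly what lets the mesh adapt as a flame front or spray moves through the domain.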
Now available in HD
High resolution is the key to predictive combustion simulations. As much as we would like CFD to be grid independent, it’s not. In fact, if coarse and fine grids give you the same answer, chances are your solution is not predictive. Why? Because complicated flows need mesh resolution in order to be accurate. What is important is that you understand the sensitivity of the solution to the resolution and that the simulations are grid-convergent.
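One common way to quantify that sensitivity is a grid-convergence study: run the same case on three systematically refined meshes and estimate the observed order of accuracy from a Richardson-style formula. A minimal sketch – the three “solutions” here are manufactured numbers, not real CFD results:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence from three solutions on grids
    refined by a constant ratio r (Richardson-style estimate)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Manufactured solutions that converge at second order toward 1.0
# as the grid spacing h is halved: f(h) = 1.0 + 0.5 * h**2.
f1, f2, f3 = 1.0 + 0.5 * 0.4**2, 1.0 + 0.5 * 0.2**2, 1.0 + 0.5 * 0.1**2
p = observed_order(f1, f2, f3, r=2.0)
print(round(p, 3))  # -> 2.0
```

If the observed order comes out well below the scheme’s formal order, or the estimate bounces around between refinements, the solution is not yet in the asymptotic range and finer grids are needed before any claim of grid convergence.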
A call to order
Low order numerical schemes typically suffer from over-mixing. Why is this bad? Because over-mixing reduces accuracy by smearing the flow field. With this loss of accuracy comes a false sense of repeatability for systems that are inherently non-linear. Running with higher order schemes helps alleviate these problems.
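The smearing itself is easy to demonstrate. The sketch below advects a sharp step once around a periodic 1-D domain with a first-order upwind scheme and with second-order Lax–Wendroff (standing in here for “higher order”; this is an illustration, not any particular code’s discretization), then compares how steep a front each scheme preserves:

```python
import numpy as np

# Advect a step profile once around a periodic 1-D domain and compare
# how much of the front's steepness each scheme preserves.
n, cfl = 200, 0.5
x = (np.arange(n) + 0.5) / n
q0 = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)  # sharp step
steps = int(n / cfl)  # enough steps to traverse the domain once

up, lw = q0.copy(), q0.copy()
for _ in range(steps):
    # First-order upwind: stable but strongly diffusive.
    up = up - cfl * (up - np.roll(up, 1))
    # Second-order Lax-Wendroff: far less diffusive (at the cost of
    # some dispersion near the discontinuity).
    lw = (lw - 0.5 * cfl * (np.roll(lw, -1) - np.roll(lw, 1))
          + 0.5 * cfl**2 * (np.roll(lw, -1) - 2.0 * lw + np.roll(lw, 1)))

sharp_up = np.max(np.abs(np.diff(up)))  # largest cell-to-cell jump left
sharp_lw = np.max(np.abs(np.diff(lw)))
print(sharp_lw > sharp_up)  # the higher-order scheme keeps a sharper front
```

The first-order result spreads the once-sharp front over many cells – that artificial mixing is exactly what smears fuel–air gradients and biases a combustion prediction.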
Divide and conquer
In this day and age, multi-core CFD should be a given. When was the last time you ran a simulation in serial? Nevertheless, this piece should not be overlooked. Running with high resolution would not be possible without the ability to divide a simulation on a number of processors. Massively parallel computations introduce their own set of challenges and are the focus of current research and development in the combustion community.
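The “divide” step itself is conceptually simple. A hypothetical sketch of a 1-D block decomposition is below: cells are split as evenly as possible across ranks, and each rank would additionally need copies of its neighbors’ boundary cells (halo cells) for the exchange step. Real codes partition unstructured meshes with graph partitioners, but the bookkeeping looks like this:

```python
def partition(n_cells, n_ranks):
    """Split n_cells into n_ranks contiguous blocks, sizes differing by at most 1."""
    base, extra = divmod(n_cells, n_ranks)
    blocks, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

blocks = partition(n_cells=1000003, n_ranks=8)
sizes = [len(b) for b in blocks]
print(max(sizes) - min(sizes))  # -> 1 at most (load balance)
print(sum(sizes))               # -> 1000003 (every cell assigned exactly once)
```

Keeping those block sizes balanced matters because the slowest rank sets the pace of the whole simulation – poor load balance wastes exactly the parallel speedup this piece is supposed to deliver.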
Next top model
While the items above allow us to say adios to large cells, we’re still nowhere close to running DNS (direct numerical simulation) for practical combustion systems. Accurate, grid-convergent models are still needed for many of the physical processes included in CFD simulations.
Pushing back the boundaries
Engineers typically need their results yesterday. Even with faster processors and parallel computing, it’s tempting to run with the smallest domain and shortest time possible. However, along with increased accuracy comes the need to expand computational boundaries in both space and time.
You may be wondering why this one is listed last, as this is, after all, a post about predictive combustion modeling. Having the ability to solve reaction chemistry in a fast and accurate way is critical for calculating performance and emissions. However, listing it last emphasizes a very important point – you can have the most accurate reaction mechanism with the fastest solver, but if the flow and mixing are not adequately predicted, it doesn’t matter.
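Why chemistry needs its own fast, robust solver comes down to stiffness: reaction time scales are often orders of magnitude shorter than the flow time step. A minimal illustration with a model stiff equation (not a real reaction mechanism): explicit time stepping blows up at a step size that an implicit method handles easily, which is why chemistry solvers are built around implicit integration.

```python
import math

# Model stiff problem: y' = lam * (cos(t) - y) with lam >> 1, so the
# solution relaxes toward cos(t) on a "chemical" time scale 1/lam = 1 ms.
lam, dt, t_end = 1000.0, 0.01, 1.0   # dt is 10x the fast time scale

y_exp = y_imp = 1.0
t = 0.0
for _ in range(int(t_end / dt)):
    # Explicit Euler: unstable here because dt * lam >> 2.
    y_exp = y_exp + dt * lam * (math.cos(t) - y_exp)
    # Implicit (backward) Euler: stable for this problem at any dt.
    y_imp = (y_imp + dt * lam * math.cos(t + dt)) / (1.0 + dt * lam)
    t += dt

print(abs(y_imp - math.cos(t_end)) < 0.01)   # implicit tracks the solution
print(abs(y_exp) > 1.0e6)                    # explicit has blown up
```

A real mechanism couples tens to thousands of species this way, so the implicit solve becomes a linear-algebra problem in every cell, every time step – that is where the “fast” in fast and accurate chemistry is won or lost.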
This concludes the first installment in this blog series on predictive combustion modeling. Stay tuned in the coming weeks for posts dedicated to each of the above topics.
Read the next blog in this series: Automatic (Meshing) for the People – From the “Toward Predictive Combustion” Blog Series