Interfacing MFiX with PETSc and HYPRE Linear Solver Libraries
Performer: University of North Dakota
[Figure] Variation in run time with the number of processors for different fixed problem sizes (indicated within brackets)
Website: University of North Dakota
Award Number: FE0026191
Project Duration: 09/01/2015 – 08/31/2018
Total Award Value: $400,000
DOE Share: $400,000
Performer Share: $0
Technology Area: University Training and Research
Key Technology: Simulation-Based Engineering
Location: Grand Forks, ND

Project Description

The high computational cost associated with the solution of large, sparse, poorly conditioned matrices is currently a serious impediment to increasing the utility of CFD models for resolving multiphase flows. This project will interface NETL's Multiphase Flow with Interphase eXchanges (MFiX) code with the Portable, Extensible Toolkit for Scientific Computation (PETSc) and High Performance Preconditioners (HYPRE) linear solver libraries with the goal of reducing the time to solution for the matrix equations arising during the solution process. The lack of robust convergence associated with the current iterative methods in MFiX can be alleviated through appropriate preconditioning techniques applied to the Krylov subspace solvers and multigrid methods accessible from these third-party solver libraries.

The overall objective of this project is first to establish a robust, well-abstracted solver interface presenting an extensible back end that allows MFiX to connect to a variety of solver libraries. This extensibility will then be demonstrated by interfacing MFiX with the PETSc and HYPRE linear solver libraries, with the goal of reducing the time to solution for the large, sparse, linearized matrix equations resulting from the discretization of the multiphase transport equations.
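As a hedged illustration of the solver class involved (not the project's actual MFiX–PETSc interface, whose code is not shown here), the sketch below solves a sparse, ill-conditioned system with a preconditioned Krylov method using SciPy as a stand-in for PETSc/HYPRE. The matrix, right-hand side, and choice of ILU preconditioning are illustrative assumptions; in the project itself, preconditioners such as HYPRE's multigrid methods would be applied through the library interface.

```python
# Sketch: preconditioned Krylov solve of a sparse, poorly conditioned system,
# the same class of techniques PETSc and HYPRE expose. SciPy is used here
# purely as an accessible stand-in; MFiX itself would call the libraries.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# 1-D Poisson-like tridiagonal matrix: sparse, symmetric, and increasingly
# ill-conditioned as n grows (a toy analogue of discretized transport equations).
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU preconditioner wrapped as a LinearOperator, analogous to the
# preconditioners accessible through the PETSc/HYPRE interfaces.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), ilu.solve)

# GMRES, a Krylov subspace method; preconditioning sharply reduces the
# iteration count on poorly conditioned systems.
x, info = spla.gmres(A, b, M=M)
print(info)  # 0 indicates successful convergence
```

Without the preconditioner `M`, GMRES on this kind of matrix needs many more iterations to reach the same residual, which is the convergence-robustness gap the project aims to close.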

Project Benefits

It is anticipated that this project could cut the time to solution by at least 50 percent compared to the current linear solver options in MFiX. It could also demonstrate near-linear parallel scaling on at least 1,000 processors. In addition, this could translate into good scalability on current high-performance computing systems, such as the DOE leadership computing facilities, and enable the portability of MFiX to new hardware technologies.

Contact Information

Federal Project Manager: Jason Hissam
Technology Manager: Briggs White
Principal Investigator: Gautham Krishnamoorthy