Thursday, September 18, 2008

Solution of Linear Equations

The formulation algorithm should produce a system of linear equations that has a solution.
For the system to have a unique solution, the number of equations must equal the number of unknowns and the coefficients matrix must be nonsingular (its determinant must be nonzero).

Five of the methods that can be used to solve sets of linear equations are
  1. the inverse matrix
  2. Cramer's rule
  3. Gauss-Seidel
  4. Gaussian elimination
  5. LU factorization
The inverse matrix and Cramer's rule methods are computationally expensive and are seldom used in computer applications.
Gauss-Seidel is an iterative method that may not converge even if the system of linear equations has a solution.
In the field of computerized circuit analysis, the solution of linear equations is usually performed using either Gaussian elimination or LU factorization.
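To illustrate how Gauss-Seidel can fail even on a solvable system, here is a minimal Python sketch (the 2x2 matrix is my own illustrative example, not taken from any particular circuit; it is nonsingular, yet the iterates diverge):

```python
def gauss_seidel(A, b, x0, sweeps):
    """One sweep of Gauss-Seidel updates each unknown in turn,
    always using the newest available values."""
    x = list(x0)
    n = len(A)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# A nonsingular system (det = -3) whose exact solution is x = [1, 1]:
A = [[1.0, 2.0],
     [2.0, 1.0]]
b = [3.0, 3.0]
x = gauss_seidel(A, b, [0.0, 0.0], 10)
# The iterates grow without bound instead of approaching [1, 1].
```

The matrix here is not diagonally dominant, which is why the iteration diverges; a direct method such as LU factorization solves it without difficulty.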

The Gaussian elimination and LU factorization algorithms have roughly the same computational cost (Gaussian elimination is slightly cheaper).
LU factorization is better in situations where we want to solve multiple sets of linear equations that share the same coefficients matrix but have different RHS vectors. Such situations arise when performing sensitivity analysis.

In my programs, I will use the LU factorization technique to solve linear equations.


Solution of Linear Equations by LU Factorization

The algorithm is explained very well in "Electronic Circuit & System Simulation Methods" and in "Computer Methods for Circuit Analysis and Design". Here, I will discuss the high level steps without going into the details of the algorithm.

Step 1: LU factorization
Factorize the coefficients matrix [A] into a lower triangular matrix [L] and an upper triangular matrix [U]:

[A] = [L][U]

The system [A][x] = [b] then becomes [L][U][x] = [b].
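Step 1 can be sketched in Python as follows. This is a minimal Doolittle-style factorization of my own, not the author's code; it omits the pivoting that practical circuit simulators add for numerical stability, and the 2x2 example matrix is illustrative:

```python
def lu_factor(A):
    """Factor the square matrix A into a unit lower triangular L and an
    upper triangular U such that A = L * U (Doolittle variant, no pivoting)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0],
     [6.0, 3.0]]
L, U = lu_factor(A)
# L = [[1.0, 0.0], [1.5, 1.0]], U = [[4.0, 3.0], [0.0, -1.5]]
```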
Step 2: Forward substitution
Consider the product [U][x] to be an unknown vector [z] and solve for this vector:

[L][z] = [b]
This step is called forward substitution because you start by evaluating the first unknown in the vector [z] using the first equation and end by evaluating the last unknown using the last equation.
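Forward substitution can be sketched as follows (the example [L] and [b] are illustrative; [L] has a unit diagonal, as produced by a Doolittle-style factorization):

```python
def forward_substitution(L, b):
    """Solve [L][z] = [b] for z, where L is lower triangular:
    the first equation yields z[0] directly, and each later equation
    only involves z values that are already known."""
    n = len(L)
    z = [0.0] * n
    for i in range(n):
        z[i] = (b[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    return z

# Illustrative unit-lower-triangular L and RHS vector b:
L = [[1.0, 0.0],
     [1.5, 1.0]]
b = [10.0, 12.0]
z = forward_substitution(L, b)   # z = [10.0, -3.0]
```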

Step 3: Back substitution
Evaluate the unknown vector [x] from

[U][x] = [z]

This step is called back substitution because you start by evaluating the last unknown in the vector [x] using the last equation and end by evaluating the first unknown using the first equation.
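Back substitution works the same way in reverse; a minimal sketch (the example [U] and [z] are illustrative):

```python
def back_substitution(U, z):
    """Solve [U][x] = [z] for x, where U is upper triangular:
    the last equation yields x[n-1] directly, and each earlier equation
    only involves x values that are already known."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Illustrative upper triangular U and vector z:
U = [[4.0, 3.0],
     [0.0, -1.5]]
z = [10.0, -3.0]
x = back_substitution(U, z)   # x = [1.0, 2.0]
```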

To solve other sets of equations having the same coefficients matrix but different RHS vectors, we can reuse the factored matrix; only steps 2 and 3 need to be repeated.
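A sketch of this reuse on an illustrative 2x2 system (my own example, using an unpivoted Doolittle factorization): the matrix is factored once, then two different RHS vectors are solved with only the triangular substitutions repeated.

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting (illustrative sketch)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Steps 2 and 3: forward substitution for z, then back substitution for x."""
    n = len(L)
    z = [0.0] * n
    for i in range(n):                     # forward; L has a unit diagonal
        z[i] = b[i] - sum(L[i][k] * z[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # backward
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0],
     [6.0, 3.0]]
L, U = lu_factor(A)                        # factor once (step 1)
x1 = lu_solve(L, U, [10.0, 12.0])          # x1 = [1.0, 2.0]
x2 = lu_solve(L, U, [7.0, 9.0])            # x2 = [1.0, 1.0]
```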
