# Systems of Linear Equations
## Techniques to Solve Systems of Equations
In this section, our goal is to learn some algorithms that solve systems of equations in Python. Before automating anything, let us think about how we solve a system of two linear equations by hand.
$$
\begin{cases} 2x + y = 5 \\ 4x - y = 3 \end{cases}
$$
You’ve probably learned to solve a system of equations using methods such as substitution, graphical solutions, or elimination. These approaches work well for small systems, but they can become tedious as the number of equations grows. The question is: *can we find a way to carry out elimination in an algorithmic way?*
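Before building our own solver, we can confirm the answer to the small system above numerically. This quick sketch uses NumPy's built-in solver, `np.linalg.solve`, purely as a reference point; we will construct elimination step by step below.

```python
import numpy as np

# Coefficient matrix and right-hand side of the 2x2 system above
M = np.array([[2.0,  1.0],
              [4.0, -1.0]])
b = np.array([5.0, 3.0])

x = np.linalg.solve(M, b)
print(x)  # [1.3333... 2.3333...], i.e. x = 4/3, y = 7/3
```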
When solving systems of equations by hand, one strategy is to eliminate variables step by step until you are left with a simpler equation that can be solved directly. Forward elimination is the algorithmic version of this same process.
The idea is to start with the first equation and use it to eliminate the first variable from all equations below it. Then we move to the second equation, using it to eliminate the second variable from all equations below it, and so on. This process continues until the coefficient matrix has been transformed into upper-triangular form (all zeros below the diagonal).
For example, suppose we have a 3×3 system of equations. After forward elimination, its augmented matrix will look like:
$$
\left[\begin{array}{ccc|c}
a_{11} & a_{12} & a_{13} & b_1 \\
0 & a_{22} & a_{23} & b_2 \\
0 & 0 & a_{33} & b_3
\end{array}\right]
$$
Notice how everything below the diagonal has been reduced to zero. At this point, the system is much easier to solve because each row now involves fewer unknowns as you move downward. The last equation contains only one variable, which can be solved directly:
$$
a_{nn} x_n = b_n \quad \Rightarrow \quad x_n = \frac{b_n}{a_{nn}}
$$
Once $x_n$ is known, it can be substituted into the equation above it to solve for $x_{n-1}$:
$$
a_{(n-1)(n-1)}x_{n-1} + a_{(n-1)n}x_n = b_{n-1} \quad \Rightarrow \quad x_{n-1} = \frac{b_{n-1} - a_{(n-1)n}x_n}{a_{(n-1)(n-1)}}
$$
This process continues row by row, moving upward through the system. In general, the formula for the $m$-th variable is:
$$
x_m = \frac{b_m - \sum_{j=m+1}^n a_{mj}x_j}{a_{mm}}
$$
This step-by-step procedure is known as **backward substitution**, and together with forward elimination it forms the backbone of systematic solution methods like Gaussian elimination.
<img
style="display: block;
margin-left: auto;
margin-right: auto;
width: 60%;"
src="fw_elim_bw_sub.png"
alt="Forward Elimination and Backwards Substitution">
Now that we have an algorithm to follow, let's code it.
```python
import numpy as np

# Create the augmented matrix [A | b]
A = np.array([[ 2.2,  1.5, -1.3,   8.2],
              [-3.4, -1.7,  2.8, -11.1],
              [-2.0,  1.0,  2.4,  -3.5]])
```
### Forward Elimination
```python
n = len(A)

# Forward elimination loop
for i in range(n-1):
    for j in range(i+1, n):
        factor = A[j, i] / A[i, i]  # multiplier to eliminate A[j, i]
        A[j, i:] = A[j, i:] - factor * A[i, i:]

print("Upper triangular matrix after forward elimination:")
print(A)
```
### Back Substitution
```python
x = np.zeros(n)
for i in range(n-1, -1, -1):
    x[i] = (A[i, -1] - np.dot(A[i, i+1:n], x[i+1:n])) / A[i, i]

print("Solution:", x)
```
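As a sanity check, we can compare the elimination result against NumPy's built-in solver on the same system; the two should agree to floating-point precision:

```python
import numpy as np

# The same system, split into coefficient matrix and right-hand side
M = np.array([[ 2.2,  1.5, -1.3],
              [-3.4, -1.7,  2.8],
              [-2.0,  1.0,  2.4]])
b = np.array([8.2, -11.1, -3.5])

x_ref = np.linalg.solve(M, b)
print(x_ref)
print(np.allclose(M @ x_ref, b))  # True: the residual is essentially zero
```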
### Naive Gauss Elimination
We can write a function combining these two steps:
```python
def gaussian_elimination_naive(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination
    for k in range(n-1):
        if np.isclose(A[k, k], 0.0):
            raise ZeroDivisionError("Zero pivot encountered (naïve). Use pivoting.")
        for i in range(k+1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n-1, -1, -1):
        s = A[i, i+1:] @ x[i+1:]
        x[i] = (b[i] - s) / A[i, i]
    return x
```
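The naive version fails whenever a pivot is (near) zero, even for perfectly solvable systems. A minimal illustration (the 2×2 example values here are made up): the first pivot below is zero, but swapping in the row with the largest absolute entry in that column, as partial pivoting does, removes the problem:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# Naive elimination would compute A[1,0] / A[0,0] -> division by zero.
# Partial pivoting instead picks the row with the largest |entry| in column 0:
piv = int(np.argmax(np.abs(A[:, 0])))
print(piv)        # 1: row 1 becomes the pivot row
A[[0, piv]] = A[[piv, 0]]  # swap rows; the pivot is now 1.0
print(A[0, 0])    # 1.0
```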
### Gauss Elimination
```python
def gaussian_elimination_pp(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination with partial pivoting
    for k in range(n-1):
        piv = k + np.argmax(np.abs(A[k:, k]))
        if np.isclose(A[piv, k], 0.0):
            raise np.linalg.LinAlgError("Matrix is singular or nearly singular.")
        if piv != k:  # row swap
            A[[k, piv]] = A[[piv, k]]
            b[[k, piv]] = b[[piv, k]]
        for i in range(k+1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n-1, -1, -1):
        s = A[i, i+1:] @ x[i+1:]
        x[i] = (b[i] - s) / A[i, i]
    return x
```
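Pivoting matters even when no pivot is exactly zero. A short illustration (the example values are made up): with a tiny pivot, naive elimination multiplies by an enormous factor and rounding destroys the answer, while a pivoted solve (here `np.linalg.solve`, which pivots internally) recovers it:

```python
import numpy as np

eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])  # exact solution is approximately x1 = x2 = 1

# Naive elimination: multiplier m = 1/eps = 1e20 swamps everything it touches
m = A[1, 0] / A[0, 0]
x2 = (b[1] - m * b[0]) / (A[1, 1] - m * A[0, 1])
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
print(x1, x2)                 # 0.0 1.0 -- x1 is completely lost to rounding

print(np.linalg.solve(A, b))  # approximately [1. 1.]
```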