# Linear Equations
Let's consider a linear equation
$$
y = m x + b
$$
where $m$ and $b$ are two known constants. Given a value of $y$, we can solve for $x$ easily.
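As a quick illustration in Python (the numbers here are made up):
```python
# Rearranging y = m*x + b gives x = (y - b) / m.
m, b = 2.0, 1.0   # known constants (example values)
y = 7.0           # known y
x = (y - b) / m
print(x)          # 3.0
```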
# Linear Algebra
Although this isn't a course in linear algebra, we are going to use some fundamental concepts from it to solve systems of equations. If you haven't taken linear algebra before, it is the study of linear equations, which can be represented compactly using matrices. A matrix is a rectangular array of numbers arranged in rows and columns; matrices are widely used in both engineering and computer science. Let's say we have the following system of equations.
$$
\begin{cases}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
\end{cases}
\tag{1}
$$
where both the coefficients $a_{ij}$ and the constants $b_i$ are known. We can rewrite this system in matrix form by collecting all the $a_{ij}$ values into a matrix $A$ and writing the unknowns and constants as column vectors:
$$
\left[ {\begin{array}{ccc}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}\\
\end{array} } \right]
\left[ {\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}\\
\end{array} } \right]
=
\left[ {\begin{array}{c}
b_{1}\\
b_{2}\\
b_{3}\\
\end{array} } \right]
\tag{2}
$$
where
$$
\left[ {\begin{array}{ccc}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}\\
\end{array} } \right]
= A,
\qquad
\left[ {\begin{array}{c}
b_{1}\\
b_{2}\\
b_{3}\\
\end{array} } \right]
= b
$$
We can also write the equation in its simplified form
$$
Ax=b
\tag{3}
$$
Take a close look at equations 1 and 2 and compare their similarities and differences. The $A$ matrix and $b$ vector follow the same layout as the original system, while the elements of the $x$ vector ($x_1$, $x_2$, $x_3$) are arranged vertically. This is a consequence of how matrix multiplication works: each row of $A$ is multiplied element-by-element with the column $x$ and summed, which reproduces exactly one equation of the system.
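We can see this in NumPy with some made-up numbers; each entry of `A @ x` is one row of `A` dotted with `x`:
```python
import numpy as np
# Example values, chosen arbitrarily for illustration.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 10]])
x = np.array([1, 1, 1])
print(A @ x)  # [ 6 15 25] -- row i of A dotted with x gives b[i]
```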
## Matrix Operations in Python
- Matrices of the same size can be added or subtracted element-wise.
- Multiplying a matrix by a scalar yields a new matrix in which every element is multiplied by that scalar.
- The determinant of a square matrix is a single number that summarizes certain properties of the matrix (for example, whether it is invertible).
- **Identity Matrix $I$**: Square matrix with 1’s on the diagonal and 0’s elsewhere.
- **Inverse Matrix $A^{-1}$**: Satisfies $AA^{-1} = I$. (Only square, non-singular matrices have inverses.)
## Matrix Math in Python
Let's create two square matrices $A$ and $B$, plus a vector $b$, to perform some matrix operations on:
```python
import numpy as np
np.set_printoptions(suppress=True)
A = np.array([[1, 3],
[2, 4]]) # 2x2
B = np.array([[5, 6],
[7, 8]]) # 2x2
A.shape, B.shape
b = np.array([3, 5])
b.shape
```
Try the following operations and see how they work.
```python
# Addition and Subtraction of same shapes
A + B
A - B
# Scalar Multiplication
3*A
5*B
# Matrix Multiplication
mat_mult = A @ B
# Transpose
B.T
# Identity and Inverse matrix
I = np.eye(2)
A_inv = np.linalg.inv(A)
A @ A_inv, A_inv @ A
# Determinants
detA = np.linalg.det(A)
detB = np.linalg.det(B)
detA, detB
```
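One common pitfall: `A * B` multiplies element-wise, while `A @ B` is the true matrix product.
```python
# Element-wise product vs. matrix product -- different operations!
print(A * B)   # [[ 5 18]
               #  [14 32]]
print(A @ B)   # [[26 30]
               #  [38 44]]
```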
### Problem 1
Rewrite the following system of equations from matrix form into normal algebraic equations.
$$
\left[ {\begin{array}{ccc}
3 & 5 & 7\\
2 & 1 & 3\\
2 & 3 & 6\\
\end{array} } \right]
\left[ {\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}\\
\end{array} } \right]
=
\left[ {\begin{array}{c}
24\\
52\\
12\\
\end{array} } \right]
$$
### Problem 2
The code below uses NumPy's built-in solver, `np.linalg.solve`, on a small 2×2 system. Run it and verify the answer by hand.
```python
import numpy as np
# Coefficient matrix A
A = np.array([[2, 3], [4, 5]])
# Right - hand side vector b
b = np.array([4, 6])
# Solve the system of equations
x = np.linalg.solve(A, b)
print(x)
```
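As always, it is worth checking the answer by substituting it back into the system:
```python
# A @ x should reproduce b (up to floating-point round-off).
print(A @ x)                  # expect [4. 6.]
print(np.allclose(A @ x, b))  # True
```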
### Problem 3
Compute the determinant of a matrix, and use Cramer's Rule to solve a small system.
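As a starting point, here is a minimal sketch of Cramer's Rule in NumPy (the system is the same one as in Problem 2, so the answer can be checked against `np.linalg.solve`):
```python
import numpy as np
# Cramer's Rule: x[i] = det(A_i) / det(A), where A_i is A with
# column i replaced by b.
A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
b = np.array([4.0, 6.0])
detA = np.linalg.det(A)
x = np.zeros(len(b))
for i in range(len(b)):
    Ai = A.copy()
    Ai[:, i] = b              # replace column i with b
    x[i] = np.linalg.det(Ai) / detA
print(x)  # [-1.  2.], matching np.linalg.solve(A, b)
```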
# Systems of Linear Equations
## Techniques to Solve Systems of Equations
In this section our goal is to learn some algorithms that help solve systems of equations in Python. Before writing any code, think about how we solve a system of two linear equations by hand.
$$
\begin{cases} 2x + y = 5 \\ 4x - y = 3 \end{cases}
$$
You’ve probably learned to solve a system of equations using methods such as substitution, graphical solutions, or elimination. These approaches work well for small systems, but they can become tedious as the number of equations grows. The question is: *can we find a way to carry out elimination in an algorithmic way?*
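For instance, adding the two equations above eliminates $y$ immediately:
$$
\begin{aligned}
(2x + y) + (4x - y) &= 5 + 3 \\
6x &= 8 \;\Rightarrow\; x = \tfrac{4}{3}, \qquad y = 5 - 2x = \tfrac{7}{3}
\end{aligned}
$$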
When solving systems of equations by hand, one strategy is to eliminate variables step by step until you are left with a simpler equation that can be solved directly. Forward elimination is the algorithmic version of this same process.
The idea is to start with the first equation and use it to eliminate the first variable from all equations below it. Then we move to the second equation, using it to eliminate the second variable from all equations below it, and so on. This process continues until the coefficient matrix has been transformed into upper-triangular form (all zeros below the diagonal).
For example, suppose we have a 3×3 system of equations. After forward elimination, it will look like:
$$
\left[\begin{array}{ccc|c}
a_{11} & a_{12} & a_{13} & b_1 \\
0 & a_{22} & a_{23} & b_2 \\
0 & 0 & a_{33} & b_3 \\
\end{array}\right]
$$
Notice how everything below the diagonal has been reduced to zero. At this point, the system is much easier to solve because each row now involves fewer unknowns as you move downward. The last equation contains only one variable, which can be solved directly:
$$
a_{nn} x_n = b_n \quad \Rightarrow \quad x_n = \frac{b_n}{a_{nn}}
$$
Once $x_n$ is known, it can be substituted into the equation above it to solve for $x_{n-1}$:
$$
a_{(n-1)(n-1)}x_{n-1} + a_{(n-1)n}x_n = b_{n-1} \quad \Rightarrow \quad x_{n-1} = \frac{b_{n-1} - a_{(n-1)n}x_n}{a_{(n-1)(n-1)}}
$$
This process continues row by row, moving upward through the system. In general, the formula for the $m$-th variable is:
$$
x_m = \frac{b_m - \sum_{j=m+1}^n a_{mj}x_j}{a_{mm}}
$$
This step-by-step procedure is known as **backward substitution**, and together with forward elimination it forms the backbone of systematic solution methods like Gaussian elimination.
<img
style="display: block;
margin-left: auto;
margin-right: auto;
width: 60%;"
src="fw_elim_bw_sub.png"
alt="Forward Elimination and Backwards Substitution">
Now that we have an algorithm to follow, let's code it.
```python
import numpy as np
# Create the Augmented matrix
A = np.array([[ 2.2, 1.5, -1.3, 8.2],
[-3.4, -1.7, 2.8, -11.1],
[-2.0, 1.0, 2.4, -3.5]])
```
### Forward Elimination
```python
n = len(A)
# Forward elimination loop
for i in range(n-1):
for j in range(i+1, n):
factor = A[j, i] / A[i, i] # multiplier to eliminate A[j,i]
A[j, i:] = A[j, i:] - factor * A[i, i:]
print("Upper triangular matrix after forward elimination:")
print(A)
```
### Back Substitution
```python
x = np.zeros(n)
for i in range(n-1, -1, -1):
x[i] = (A[i, -1] - np.dot(A[i, i+1:n], x[i+1:n])) / A[i, i]
print("Solution:", x)
```
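Since the elimination overwrote `A` in place, a quick check requires rebuilding the original coefficients:
```python
# Verify: plug x back into the original (pre-elimination) system.
A0 = np.array([[ 2.2,  1.5, -1.3],
               [-3.4, -1.7,  2.8],
               [-2.0,  1.0,  2.4]])
b0 = np.array([8.2, -11.1, -3.5])
print(np.allclose(A0 @ x, b0))  # True if the solution is correct
```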
### Naive Gauss Elimination
We can write a function combining these two steps:
```python
def gaussian_elimination_naive(A, b):
A = A.astype(float).copy()
b = b.astype(float).copy()
n = len(b)
# Forward elimination
for k in range(n-1):
if np.isclose(A[k,k], 0.0):
raise ZeroDivisionError("Zero pivot encountered (naïve). Use pivoting.")
for i in range(k+1, n):
m = A[i,k]/A[k,k]
A[i, k:] -= m * A[k, k:]
b[i] -= m * b[k]
# Back substitution
x = np.zeros(n)
for i in range(n-1, -1, -1):
s = A[i, i+1:] @ x[i+1:]
x[i] = (b[i]-s)/A[i,i]
return x
```
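This naive routine has a weakness: it fails outright whenever a zero lands in a pivot position, even if the system itself is perfectly solvable.
```python
# This system has the solution x = [-1, 2], but its first pivot
# A[0, 0] is zero, so the naive routine refuses to proceed.
A = np.array([[0.0, 2.0],
              [1.0, 3.0]])
b = np.array([4.0, 5.0])
try:
    gaussian_elimination_naive(A, b)
except ZeroDivisionError as e:
    print(e)
```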
### Gauss Elimination with Partial Pivoting
The fix is **partial pivoting**: before eliminating column $k$, swap the pivot row with the row that has the largest absolute value in that column. This avoids zero pivots entirely and also reduces round-off error when pivots are small.
```python
def gaussian_elimination_pp(A, b):
A = A.astype(float).copy()
b = b.astype(float).copy()
n = len(b)
# Forward elimination with partial pivoting
for k in range(n-1):
piv = k + np.argmax(np.abs(A[k:, k]))
if np.isclose(A[piv, k], 0.0):
raise np.linalg.LinAlgError("Matrix is singular or nearly singular.")
if piv != k: # row swap
A[[k, piv]] = A[[piv, k]]
b[[k, piv]] = b[[piv, k]]
for i in range(k+1, n):
m = A[i,k]/A[k,k]
A[i, k:] -= m * A[k, k:]
b[i] -= m * b[k]
# Back substitution
x = np.zeros(n)
for i in range(n-1, -1, -1):
s = A[i, i+1:] @ x[i+1:]
x[i] = (b[i]-s)/A[i,i]
return x
```
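The pivoting version handles that same system without trouble, because the row swap moves the zero off the diagonal before any division happens:
```python
# Partial pivoting swaps rows 0 and 1 first, so no zero pivot occurs.
A = np.array([[0.0, 2.0],
              [1.0, 3.0]])
b = np.array([4.0, 5.0])
print(gaussian_elimination_pp(A, b))  # [-1.  2.]
```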
# LU Decomposition
Imagine you’re designing the heat shield of a spacecraft. Engineers provide you with a structural model of the heat shield, represented by a coefficient matrix $A$. To evaluate how the design performs under different operating conditions, you need to test many different load cases, each represented by a right-hand side vector $\mathbf{b}$.
Now suppose your model has a 50×50 matrix $A$, and you want to simulate 100 different load cases. If you rely on Gaussian elimination alone, you would need to **repeat the entire elimination process 100 times**, once for each $\mathbf{b}$. This quickly becomes computationally expensive, especially for larger systems.
This is where **LU decomposition** comes in. Instead of redoing elimination for every load case, we can factorize the matrix $A$ **just once** into two simpler pieces:
$$
A = LU
$$
Expanding it for a 3×3 matrix:
$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \\ \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix}
$$
Once we have this factorization, solving $A\mathbf{x} = \mathbf{b}$ becomes a two-step process:
1. Solve $L\mathbf{d} = \mathbf{b}$ by **forward substitution** (since $L$ is unit lower-triangular, $l_{mm} = 1$ and the division is trivial):
$$
d_m \;=\; \frac{\,b_m \;-\; \sum_{n=1}^{m-1} l_{mn}\, d_n\,}{l_{mm}}
$$
2. Solve $U\mathbf{x} = \mathbf{d}$ by **backward substitution**.
$$
x_m=\frac{\,d_m-\sum_{n=m+1}^{N} u_{mn}\,x_n\,}{u_{mm}}
$$
The big advantage is that the factorization is reused. For 100 different load vectors, we do the expensive $O(n^3)$ decomposition only once, and then solve two fast $O(n^2)$ triangular systems for each new $\mathbf{b}$.
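In practice, SciPy exposes exactly this factor-once, solve-many workflow. A minimal sketch (the matrix values here are arbitrary):
```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
lu, piv = lu_factor(A)                     # expensive O(n^3) step, done once
for b in ([1.0, -2.0, 0.0], [2.0, 0.0, 1.0]):
    x = lu_solve((lu, piv), np.array(b))   # cheap O(n^2) triangular solves
    print(x)
```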
<img
style="display: block;
margin-left: auto;
margin-right: auto;
width: 40%;"
src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/bd/Tadeusz_Banachiewicz_%28NAC%29.jpg/1024px-Tadeusz_Banachiewicz_%28NAC%29.jpg"
alt="Tadeusz Banachiewicz">
LU decomposition was first formalized in 1938 by Tadeusz Banachiewicz, building on Gauss's elimination ideas.
## Example
<img
style="display: block;
margin-left: auto;
margin-right: auto;
width: 75%;"
src="lu_decomp.png"
alt="LU Decomposition">
## Problem 1
Implement an LU decomposition solver with partial pivoting and use it to solve a 3×3 system. A complete worked solution:
```python
"""
LU Decomposition Solver (with partial pivoting)
Solves A x = b by computing P, L, U such that P A = L U,
then performing:
1) y from L y = P b (forward substitution)
2) x from U x = y (back substitution)
- Works for a single RHS (b shape: (n,)) or multiple RHS (b shape: (n, m))
- Raises a ValueError if a near-zero pivot is encountered.
"""
import numpy as np
def lu_decompose(A, pivot_tol=1e-12):
"""
Compute LU factorization with partial pivoting.
Returns P, L, U such that P @ A = L @ U.
Parameters
----------
A : (n, n) ndarray
pivot_tol : float
If the absolute value of the pivot falls below this, the matrix is
treated as singular.
Returns
-------
P, L, U : ndarrays of shape (n, n)
"""
A = np.array(A, dtype=float, copy=True)
n = A.shape[0]
if A.shape[0] != A.shape[1]:
raise ValueError("A must be square")
U = A.copy()
L = np.eye(n)
P = np.eye(n)
for k in range(n - 1):
# Partial pivoting: find index of largest |U[i,k]| for i >= k
pivot_idx = np.argmax(np.abs(U[k:, k])) + k
pivot_val = U[pivot_idx, k]
if abs(pivot_val) < pivot_tol:
raise ValueError(f"Numerically singular matrix: pivot ~ 0 at column {k}")
# Swap rows in U and P (and the part of L constructed so far)
if pivot_idx != k:
U[[k, pivot_idx], :] = U[[pivot_idx, k], :]
P[[k, pivot_idx], :] = P[[pivot_idx, k], :]
if k > 0:
L[[k, pivot_idx], :k] = L[[pivot_idx, k], :k]
# Gaussian elimination to build L and zero out below the pivot in U
for i in range(k + 1, n):
L[i, k] = U[i, k] / U[k, k]
U[i, k:] -= L[i, k] * U[k, k:]
U[i, k] = 0.0
# Final pivot check on U[-1, -1]
if abs(U[-1, -1]) < pivot_tol:
raise ValueError(f"Numerically singular matrix: pivot ~ 0 at last diagonal")
return P, L, U
def forward_substitution(L, b):
"""
Solve L y = b for y, where L is unit lower-triangular.
b can be (n,) or (n, m).
"""
L = np.asarray(L)
b = np.asarray(b, dtype=float)
n = L.shape[0]
if b.ndim == 1:
b = b.reshape(n, 1)
y = np.zeros_like(b, dtype=float)
for i in range(n):
# L has ones on the diagonal (Doolittle), so we don't divide by L[i,i]
y[i] = b[i] - L[i, :i] @ y[:i]
return y if y.shape[1] > 1 else y.ravel()
def back_substitution(U, y):
"""
Solve U x = y for x, where U is upper-triangular.
y can be (n,) or (n, m).
"""
U = np.asarray(U)
y = np.asarray(y, dtype=float)
n = U.shape[0]
if y.ndim == 1:
y = y.reshape(n, 1)
x = np.zeros_like(y, dtype=float)
for i in range(n - 1, -1, -1):
if abs(U[i, i]) < 1e-15:
raise ValueError(f"Zero (or tiny) pivot encountered at U[{i},{i}]")
x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
return x if x.shape[1] > 1 else x.ravel()
def lu_solve(A, b, pivot_tol=1e-12):
"""
Solve A x = b using LU with partial pivoting.
Returns x and the factors (P, L, U).
"""
P, L, U = lu_decompose(A, pivot_tol=pivot_tol)
Pb = P @ np.asarray(b, dtype=float)
y = forward_substitution(L, Pb)
x = back_substitution(U, y)
return x, (P, L, U)
def main():
# Example system:
# 3x + 2y - z = 1
# 2x - 2y + 4z = -2
# -x + 0.5y - z = 0
A = np.array([
[ 3.0, 2.0, -1.0],
[ 2.0, -2.0, 4.0],
[-1.0, 0.5, -1.0]
])
b = np.array([1.0, -2.0, 0.0])
x, (P, L, U) = lu_solve(A, b)
print("Solution x:\n", x)
print("\nP:\n", P)
print("\nL:\n", L)
print("\nU:\n", U)
# Residual check
r = A @ x - b
print("\nResidual A@x - b:\n", r)
print("Residual norm:", np.linalg.norm(r, ord=np.inf))
if __name__ == "__main__":
main()
```
## Problem 2
Ohm's and Kirchhoff's laws have been used to derive a set of equations that model a circuit. Using NumPy's built-in solver, plot the motor currents as a function of supply voltage.
```python
import numpy as np
import matplotlib.pyplot as plt
# Unknown ordering
VR1, VR2, VR3, VR4, VR5, I1, I2, IM1, IM2, IM3 = range(10)
def build_system_matrices(R1, R2, R3, R4, R5):
A = np.zeros((10,10), float)
b_const = np.zeros(10, float)
# KVL/KCL + Ohm's law equations
A[0, VR1] = -1; A[0, VR2] = -1 # -VR1 - VR2 = -Vs + 5
A[1, VR2] = 1; A[1, VR3] = -1; A[1, VR4] = -1; b_const[1] = -2 # VR2 - VR3 - VR4 = -2
A[2, VR4] = 1; A[2, VR5] = -1; b_const[2] = 2 # VR4 - VR5 = 2
A[3, VR1] = 1; A[3, I1 ] = -R1 # VR1 = I1 R1
A[4, VR2] = 1; A[4, IM1] = -R2 # VR2 = IM1 R2
A[5, VR3] = 1; A[5, I2 ] = -R3 # VR3 = I2 R3
A[6, VR4] = 1; A[6, IM2] = -R4 # VR4 = IM2 R4
A[7, VR5] = 1; A[7, IM3] = -R5 # VR5 = IM3 R5
A[8, I1 ] = 1; A[8, IM1] = -1; A[8, I2 ] = -1 # I1 = IM1 + I2
A[9, I2 ] = 1; A[9, IM2] = -1; A[9, IM3] = -1 # I2 = IM2 + IM3
return A, b_const
def b_vector(Vs, b_const):
b = b_const.copy()
b[0] = -Vs + 5.0
return b
# Example component values
R1, R2, R3, R4, R5 = 10.0, 5.0, 8.0, 6.0, 4.0
A, b_const = build_system_matrices(R1, R2, R3, R4, R5)
# Sanity: ensure A is non-singular
condA = np.linalg.cond(A)
condA
Vs_vals = np.arange(0.0, 48.0 + 0.1, 0.1)
# Option 1 (straightforward): loop with numpy.linalg.solve (LAPACK under the hood)
IM1_vals = np.empty_like(Vs_vals)
IM2_vals = np.empty_like(Vs_vals)
IM3_vals = np.empty_like(Vs_vals)
for k, Vs in enumerate(Vs_vals):
b = b_vector(Vs, b_const)
x = np.linalg.solve(A, b) # Uses LU factorization internally
IM1_vals[k], IM2_vals[k], IM3_vals[k] = x[IM1], x[IM2], x[IM3]
# Plot
plt.figure()
plt.plot(Vs_vals, IM1_vals, label="IM1 (solve)")
plt.plot(Vs_vals, IM2_vals, label="IM2 (solve)")
plt.plot(Vs_vals, IM3_vals, label="IM3 (solve)")
plt.xlabel("Supply Voltage Vs (V)")
plt.ylabel("Motor Current (A)")
plt.title(f"Motor Currents vs Supply Voltage (cond(A) ≈ {condA:.1f}, diff≈{max_diff:.2e})")
plt.legend()
plt.show()
```