Diffstat (limited to 'tutorials/module_3/1_2_Systems_of_Linear_Equations.md')
-rw-r--r--  tutorials/module_3/1_2_Systems_of_Linear_Equations.md  122
1 file changed, 122 insertions, 0 deletions
diff --git a/tutorials/module_3/1_2_Systems_of_Linear_Equations.md b/tutorials/module_3/1_2_Systems_of_Linear_Equations.md
new file mode 100644
index 0000000..f29d285
--- /dev/null
+++ b/tutorials/module_3/1_2_Systems_of_Linear_Equations.md
@@ -0,0 +1,122 @@
+# Systems of Linear Equations
+
+## Techniques to Solve Systems of Equations
+In this section our goal is to learn some algorithms that solve systems of equations in Python. Before writing any code, let us think about how we solve a system of two linear equations by hand.
+$$
+\begin{cases} 2x + y = 5 \\ 4x - y = 3 \end{cases}
+$$
+
+You’ve probably learned to solve a system of equations using methods such as substitution, graphical solutions, or elimination. These approaches work well for small systems, but they can become tedious as the number of equations grows. The question is: *can we find a way to carry out elimination in an algorithmic way?*
+
+When solving systems of equations by hand, one strategy is to eliminate variables step by step until you are left with a simpler equation that can be solved directly. Forward elimination is the algorithmic version of this same process.
+
+The idea is to start with the first equation and use it to eliminate the first variable from all equations below it. Then we move to the second equation, using it to eliminate the second variable from all equations below it, and so on. This process continues until the coefficient matrix has been transformed into an upper-triangular form (all zeros below the diagonal).
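+
+To make this concrete, here is forward elimination applied to the 2×2 system above: subtracting $2$ times the first equation from the second ($R_2 \leftarrow R_2 - 2R_1$) eliminates $x$:
+$$
+\begin{cases} 2x + y = 5 \\ 4x - y = 3 \end{cases}
+\quad \xrightarrow{R_2 \,\leftarrow\, R_2 - 2R_1} \quad
+\begin{cases} 2x + y = 5 \\ -3y = -7 \end{cases}
+$$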
+
+For example, suppose we have a 3×3 system of equations. After forward elimination, the augmented matrix will look like:
+$$
+\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22} & a_{23} & b_2 \\ 0 & 0 & a_{33} & b_3 \end{array}\right]
+$$
+
+Notice how everything below the diagonal has been reduced to zero. At this point, the system is much easier to solve because each row now involves fewer unknowns as you move downward. The last equation contains only one variable, which can be solved directly:
+$$
+a_{nn} x_n = b_n \quad \Rightarrow \quad x_n = \frac{b_n}{a_{nn}}
+$$
+Once $x_n$ is known, it can be substituted into the equation above it to solve for $x_{n-1}$:
+$$
+a_{(n-1)(n-1)}x_{n-1} + a_{(n-1)n}x_n = b_{n-1} \quad \Rightarrow \quad x_{n-1} = \frac{b_{n-1} - a_{(n-1)n}x_n}{a_{(n-1)(n-1)}}
+$$
+This process continues row by row, moving upward through the system. In general, the formula for the $m$-th variable is:
+$$
+x_m = \frac{b_m - \sum_{j=m+1}^n a_{mj}x_j}{a_{mm}}
+$$
+This step-by-step procedure is known as **backward substitution**, and together with forward elimination it forms the backbone of systematic solution methods like Gaussian elimination.
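+
+Continuing the 2×2 example from earlier: after elimination, the last equation $-3y = -7$ gives $y = \tfrac{7}{3}$, and substituting upward into $2x + y = 5$ yields $x = \frac{5 - 7/3}{2} = \frac{4}{3}$.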
+<img
+  style="display: block;
+         margin-left: auto;
+         margin-right: auto;
+         width: 60%;"
+  src="fw_elim_bw_sub.png"
+  alt="Forward Elimination and Backwards Substitution">
+Now that we have an algorithm to follow, let's code it.
+```python
+import numpy as np
+
+# Augmented matrix [A | b]: the last column holds the right-hand side b
+A = np.array([[ 2.2,  1.5, -1.3,   8.2],
+              [-3.4, -1.7,  2.8, -11.1],
+              [-2.0,  1.0,  2.4,  -3.5]])
+```
+### Forward Elimination
+```python
+n = len(A)  # number of rows, i.e. number of equations
+
+# Forward elimination loop
+for i in range(n-1):
+    for j in range(i+1, n):
+        factor = A[j, i] / A[i, i]               # multiplier to eliminate A[j, i]
+        A[j, i:] = A[j, i:] - factor * A[i, i:]  # update row j, including the b column
+
+print("Upper triangular matrix after forward elimination:")
+print(A)
+```
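+As a quick, optional sanity check, we can confirm that everything strictly below the diagonal of the coefficient columns is now (numerically) zero:
+```python
+# Strictly lower triangle of the coefficient part should be all zeros
+print("Upper triangular?", np.allclose(np.tril(A[:, :-1], -1), 0.0))
+```
+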
+### Back Substitution
+```python
+x = np.zeros(n)
+
+for i in range(n-1, -1, -1):
+    x[i] = (A[i, -1] - np.dot(A[i, i+1:n], x[i+1:n])) / A[i, i]
+
+print("Solution:", x)
+
+```
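+
+Because the elimination loop modified the augmented matrix `A` in place, a good habit is to verify the answer against a fresh copy of the original system. A minimal check (the names `A_orig` and `b_orig` are just ours):
+```python
+# Rebuild the original coefficient matrix and right-hand side
+A_orig = np.array([[ 2.2,  1.5, -1.3],
+                   [-3.4, -1.7,  2.8],
+                   [-2.0,  1.0,  2.4]])
+b_orig = np.array([8.2, -11.1, -3.5])
+
+# The residual A_orig @ x - b_orig should be numerically zero
+print("A @ x close to b?", np.allclose(A_orig @ x, b_orig))
+```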
+
+### Naive Gauss Elimination
+We can write a function combining these two steps:
+```python
+def gaussian_elimination_naive(A, b):
+    A = A.astype(float).copy()
+    b = b.astype(float).copy()
+    n = len(b)
+    # Forward elimination
+    for k in range(n-1):
+        if np.isclose(A[k, k], 0.0):
+            raise ZeroDivisionError("Zero pivot encountered (naïve). Use pivoting.")
+        for i in range(k+1, n):
+            m = A[i, k] / A[k, k]
+            A[i, k:] -= m * A[k, k:]
+            b[i] -= m * b[k]
+    # Back substitution
+    x = np.zeros(n)
+    for i in range(n-1, -1, -1):
+        s = A[i, i+1:] @ x[i+1:]
+        x[i] = (b[i] - s) / A[i, i]
+    return x
+```
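+
+As a usage sketch (the variable names here are our own), we can solve the same 3×3 system with this function and compare against NumPy's built-in solver:
+```python
+A_mat = np.array([[ 2.2,  1.5, -1.3],
+                  [-3.4, -1.7,  2.8],
+                  [-2.0,  1.0,  2.4]])
+b_vec = np.array([8.2, -11.1, -3.5])
+
+x_naive = gaussian_elimination_naive(A_mat, b_vec)
+print("Naive Gauss:    ", x_naive)
+print("np.linalg.solve:", np.linalg.solve(A_mat, b_vec))  # the two should agree
+```
+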
+### Gauss Elimination with Partial Pivoting
+The naive routine breaks down whenever a pivot $a_{kk}$ is zero, and it loses accuracy when a pivot is very small. *Partial pivoting* avoids this by swapping the current row with the row holding the largest pivot candidate (in absolute value) before eliminating:
+```python
+def gaussian_elimination_pp(A, b):
+    A = A.astype(float).copy()
+    b = b.astype(float).copy()
+    n = len(b)
+    # Forward elimination with partial pivoting
+    for k in range(n-1):
+        piv = k + np.argmax(np.abs(A[k:, k]))  # row with the largest pivot candidate
+        if np.isclose(A[piv, k], 0.0):
+            raise np.linalg.LinAlgError("Matrix is singular or nearly singular.")
+        if piv != k:  # row swap
+            A[[k, piv]] = A[[piv, k]]
+            b[[k, piv]] = b[[piv, k]]
+        for i in range(k+1, n):
+            m = A[i, k] / A[k, k]
+            A[i, k:] -= m * A[k, k:]
+            b[i] -= m * b[k]
+    # Back substitution
+    x = np.zeros(n)
+    for i in range(n-1, -1, -1):
+        s = A[i, i+1:] @ x[i+1:]
+        x[i] = (b[i] - s) / A[i, i]
+    return x
+```
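+
+To see why pivoting matters, consider a system whose first pivot is exactly zero. The naive routine has to give up, while the pivoted version simply swaps rows first (a small illustrative example of our own):
+```python
+A_zero = np.array([[0.0, 1.0],
+                   [1.0, 1.0]])
+b_zero = np.array([1.0, 2.0])
+
+try:
+    gaussian_elimination_naive(A_zero, b_zero)
+except ZeroDivisionError as err:
+    print("Naive elimination failed:", err)
+
+print("With partial pivoting:", gaussian_elimination_pp(A_zero, b_zero))  # expect [1. 1.]
+```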
+