author    Christian Kolset <christian.kolset@gmail.com>  2025-10-01 14:04:14 -0600
committer Christian Kolset <christian.kolset@gmail.com>  2025-10-01 14:04:14 -0600
commit    b7652c078a74ec0fd8419c4e0d8f9dc1d7b28020 (patch)
tree      436a9952db1fb6f47fb9b6a15e40b9801f925f20 /tutorials/module_3
parent    0678159d2801c99437e728823dfccccb9d608174 (diff)
Re-structure of Module 3 to make each file a separate lecture
Diffstat (limited to 'tutorials/module_3')
-rw-r--r--  tutorials/module_3/1_0_linear_equations.md  |  10
-rw-r--r--  tutorials/module_3/1_1_Linear_Algebra.md  | 155
-rw-r--r--  tutorials/module_3/1_2_Systems_of_Linear_Equations.md  | 122
-rw-r--r--  tutorials/module_3/1_3_LU_Decomposition.md  | 258
-rw-r--r--  tutorials/module_3/1_linear_equations.md  | 540
-rw-r--r--  tutorials/module_3/2_0_non-linear_equations.md  |  43
-rw-r--r--  tutorials/module_3/2_1_Bracketing_Method.md  |  97
-rw-r--r--  tutorials/module_3/2_2_Open_Methods.md  | 185
-rw-r--r--  tutorials/module_3/2_3_Systems_of_Non-linear_equations.md  | 175
-rw-r--r--  tutorials/module_3/2_roots_optimization.md  | 500
-rw-r--r--  tutorials/module_3/3_0_numerical_calculus.md  |   3
-rw-r--r--  tutorials/module_3/3_1_numerical_differentiation.md (renamed from tutorials/module_3/1_numerical_differentiation.md)  |   9
-rw-r--r--  tutorials/module_3/3_2_Advanced_Derivatives.md  |   7
-rw-r--r--  tutorials/module_3/3_3_numerical_integration.md (renamed from tutorials/module_3/4_numerical_integration.md)  |  53
-rw-r--r--  tutorials/module_3/3_4_More_integral.md  |  52
-rw-r--r--  tutorials/module_3/3_5_ode.md  |  20
-rw-r--r--  tutorials/module_3/3_6_Explicit_Methods.md (renamed from tutorials/module_3/5_ode.md)  |  39
-rw-r--r--  tutorials/module_3/3_7_Implicit_Method.md  |  16
-rw-r--r--  tutorials/module_3/3_8_Systems_of_ODEs.md  |   5
-rw-r--r--  tutorials/module_3/3_pde.md (renamed from tutorials/module_3/6_pde.md)  |   0
-rw-r--r--  tutorials/module_3/compiled note.md  |   6
-rw-r--r--  tutorials/module_3/example problem outline.md  | 108
-rw-r--r--  tutorials/module_3/example32-4.png  | bin 0 -> 39018 bytes
-rw-r--r--  tutorials/module_3/example32-4element1.png  | bin 0 -> 24195 bytes
-rw-r--r--  tutorials/module_3/image_1759346314337.png  | bin 0 -> 19234 bytes
-rw-r--r--  tutorials/module_3/image_1759346552105.png  | bin 0 -> 17321 bytes
-rw-r--r--  tutorials/module_3/newton-raphson_nonlinear_systems.py  |  35
-rw-r--r--  tutorials/module_3/newton-raphson_nonlinear_systemspy  |  35
28 files changed, 1333 insertions, 1140 deletions
diff --git a/tutorials/module_3/1_0_linear_equations.md b/tutorials/module_3/1_0_linear_equations.md
new file mode 100644
index 0000000..c17c5ac
--- /dev/null
+++ b/tutorials/module_3/1_0_linear_equations.md
@@ -0,0 +1,10 @@
+# Linear Equations
+Let's consider a linear equation
+$$
+y = m x + b
+$$
+where $m$ and $b$ are two known constants. Given a value of $y$, we can solve for $x$ easily.
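+For example, rearranging for $x$ gives
+$$
+x = \frac{y - b}{m}, \qquad m \neq 0
+$$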
+
+[[1_1_Linear_Algebra]]
+[[1_2_Systems_of_Linear_Equations]]
+[[1_3_LU_Decomposition]]
\ No newline at end of file
diff --git a/tutorials/module_3/1_1_Linear_Algebra.md b/tutorials/module_3/1_1_Linear_Algebra.md
new file mode 100644
index 0000000..a107c1f
--- /dev/null
+++ b/tutorials/module_3/1_1_Linear_Algebra.md
@@ -0,0 +1,155 @@
+# Linear Algebra
+Although this isn't a course in linear algebra, we are going to use some fundamental concepts from it to solve systems of equations. If you haven't taken linear algebra before, it is the study of linear equations, which can be represented in the form of matrices. Matrices are rectangular arrays of numbers arranged in rows and columns, and they are widely used in both engineering and computer science. Let's say we have the following system of equations.
+$$
+\begin{cases}
+a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
+a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
+a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
+\end{cases}
+\tag{1}
+
+$$
+Here both the $a_{mn}$ coefficients and the $b_m$ values are all known. We can re-write this system in matrix form by collecting all the $a_{mn}$ values into an $A$ matrix and the unknowns and right-hand sides into vectors:
+$$
+ \left[ {\begin{array}{cc}
+ a_{11} & a_{12} & a_{13}\\
+ a_{21} & a_{22} & a_{23}\\
+ a_{31} & a_{32} & a_{33}\\
+ \end{array} } \right]
+ \left[ {\begin{array}{cc}
+ x_{1}\\
+ x_{2}\\
+ x_{3}\\
+ \end{array} } \right]
+=
+ \left[ {\begin{array}{cc}
+ b_{1}\\
+ b_{2}\\
+ b_{3}\\
+ \end{array} } \right]
+ \tag{2}
+$$
+where
+$$
+ \left[ {\begin{array}{cc}
+ a_{11} & a_{12} & a_{13}\\
+ a_{21} & a_{22} & a_{23}\\
+ a_{31} & a_{32} & a_{33}\\
+ \end{array} } \right]
+ = A,
+ \left[ {\begin{array}{cc}
+ b_{1}\\
+ b_{2}\\
+ b_{3}\\
+ \end{array} } \right]
+ = b
+$$
+we can also write the equation in its simplified form
+$$
+Ax=b
+\tag{3}
+$$
+
+Take a close look at equations 1 and 2 and compare their similarities and differences. Notice how the $A$ matrix and the $b$ vector follow the same layout as the original system, while the unknowns of the $x$ vector ($x_1$, $x_2$, $x_3$) are stacked vertically. This is because of how we multiply matrices.
+
+![](https://www.mathsisfun.com/algebra/images/matrix-multiply-a.svg)
+
+![](https://www.mathsisfun.com/algebra/images/matrix-multiply-b.svg)
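+
+For example, multiplying a $2\times2$ matrix by a column vector applies the row-times-column rule to each row:
+$$
+\begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}
+\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}
+=
+\begin{bmatrix} 1x_{1} + 3x_{2} \\ 2x_{1} + 4x_{2} \end{bmatrix}
+$$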
+
+## Matrix Operations
+
+- Matrices of the same size can be added or subtracted element-wise.
+- Multiplying a matrix by a scalar yields a new matrix in which every element is multiplied by that scalar.
+- The **determinant** is a single number associated with a square matrix that summarizes certain properties of the matrix.
+- **Identity Matrix $I$**: Square matrix with 1’s on the diagonal and 0’s elsewhere.
+- **Inverse Matrix $A^{-1}$**: Satisfies $AA^{-1} = I$. (Only square, non-singular matrices have inverses.)
+
+## Matrix Math in Python
+Let's create two square matrices $A$ and $B$ to perform some matrix operations on:
+```python
+import numpy as np
+np.set_printoptions(suppress=True)
+
+A = np.array([[1, 3],
+ [2, 4]]) # 2x2
+B = np.array([[5, 6],
+ [7, 8]]) # 2x2
+A.shape, B.shape
+
+b = np.array([3, 5])
+b.shape
+```
+
+Test and see how it works.
+
+```python
+# Addition and Subtraction of same shapes
+A + B
+A - B
+
+# Scalar Multiplication
+3*A
+
+5*B
+
+# Matrix Multiplication
+mat_mult = A @ B
+
+# Transpose
+B.T
+
+# Identity and Inverse matrix
+
+I = np.eye(2)
+A_inv = np.linalg.inv(A)
+
+A @ A_inv, A_inv @ A
+
+# Determinants
+
+detA = np.linalg.det(A)
+detB = np.linalg.det(B)
+detA, detB
+```
+
+### Problem 1
+Re-write the following system of equations from matrix form into normal algebraic equations.
+$$
+ \left[ {\begin{array}{cc}
+ 3 & 5 & 7\\
+ 2 & 1 & 3\\
+ 2 & 3 & 6\\
+ \end{array} } \right]
+ \left[ {\begin{array}{cc}
+ x_{1}\\
+ x_{2}\\
+ x_{3}\\
+ \end{array} } \right]
+=
+ \left[ {\begin{array}{cc}
+ 24\\
+ 52\\
+ 12\\
+ \end{array} } \right]
+$$
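+For reference, expanding row by row gives the expected answer:
+$$
+\begin{cases}
+3x_1 + 5x_2 + 7x_3 = 24 \\
+2x_1 + x_2 + 3x_3 = 52 \\
+2x_1 + 3x_2 + 6x_3 = 12
+\end{cases}
+$$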
+## Problem 2
+Use NumPy's built-in solver to solve the system $2x_1 + 3x_2 = 4$, $4x_1 + 5x_2 = 6$:
+```python
+import numpy as np
+
+# Coefficient matrix A
+A = np.array([[2, 3], [4, 5]])
+# Right - hand side vector b
+b = np.array([4, 6])
+
+# Solve the system of equations
+x = np.linalg.solve(A, b)
+print(x)
+```
+## Problem 3
+
+
+
+Matrix Determinant
+
+Cramer's Rule -
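+
+As a sketch of how these two ideas combine, here is Cramer's rule implemented with NumPy determinants (the 2x2 system from Problem 2 is reused for illustration; this is a teaching sketch, not an efficient solver):
+```python
+import numpy as np
+
+def cramer_solve(A, b):
+    # Cramer's rule: x_i = det(A_i) / det(A), where A_i is A
+    # with its i-th column replaced by b
+    detA = np.linalg.det(A)
+    if np.isclose(detA, 0.0):
+        raise ValueError("det(A) = 0: no unique solution")
+    x = np.zeros(len(b))
+    for i in range(len(b)):
+        Ai = A.astype(float).copy()
+        Ai[:, i] = b                  # replace column i with b
+        x[i] = np.linalg.det(Ai) / detA
+    return x
+
+A = np.array([[2.0, 3.0], [4.0, 5.0]])
+b = np.array([4.0, 6.0])
+print(cramer_solve(A, b))             # should match np.linalg.solve(A, b)
+```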
+
diff --git a/tutorials/module_3/1_2_Systems_of_Linear_Equations.md b/tutorials/module_3/1_2_Systems_of_Linear_Equations.md
new file mode 100644
index 0000000..f29d285
--- /dev/null
+++ b/tutorials/module_3/1_2_Systems_of_Linear_Equations.md
@@ -0,0 +1,122 @@
+# Systems of Linear Equations
+
+## Techniques to solve Systems of Equations
+In this section our goal is to learn some algorithms that help solve systems of equations in Python. Before automating anything, let's think about how we solve a system of two linear equations by hand.
+$$
+\begin{cases} 2x + y = 5 \\ 4x - y = 3 \end{cases}
+$$
+
+You’ve probably learned to solve a system of equations using methods such as substitution, graphical solutions, or elimination. These approaches work well for small systems, but they can become tedious as the number of equations grows. The question is: *can we find a way to carry out elimination in an algorithmic way?*
+
+When solving systems of equations by hand, one strategy is to eliminate variables step by step until you are left with a simpler equation that can be solved directly. Forward elimination is the algorithmic version of this same process.
+
+The idea is to start with the first equation and use it to eliminate the first variable from all equations below it. Then we move to the second equation, using it to eliminate the second variable from all equations below it, and so on. This process continues until the coefficient matrix has been transformed into an upper-triangular form (all zeros below the diagonal).
+
+For example, suppose we have a 3×3 system of equations. After forward elimination, it will look like:
+$$
+\begin{bmatrix} a_{11} & a_{12} & a_{13} & | & b_1 \\ 0 & a_{22} & a_{23} & | & b_2 \\ 0 & 0 & a_{33} & | & b_3 \\ \end{bmatrix} $$
+
+Notice how everything below the diagonal has been reduced to zero. At this point, the system is much easier to solve because each row now involves fewer unknowns as you move downward. The last equation contains only one variable, which can be solved directly:
+$$
+a_{nn} x_n = b_n \quad \Rightarrow \quad x_n = \frac{b_n}{a_{nn}}
+$$
+Once $x_n$ is known, it can be substituted into the equation above it to solve for $x_{n-1}$:
+$$
+a_{(n-1)(n-1)}x_{n-1} + a_{(n-1)n}x_n = b_{n-1} \quad \Rightarrow \quad x_{n-1} = \frac{b_{n-1} - a_{(n-1)n}x_n}{a_{(n-1)(n-1)}}
+$$
+This process continues row by row, moving upward through the system. In general, the formula for the $m$-th variable is:
+$$
+x_m = \frac{b_m - \sum_{j=m+1}^n a_{mj}x_j}{a_{mm}}
+$$
+This step-by-step procedure is known as **backward substitution**, and together with forward elimination it forms the backbone of systematic solution methods like Gaussian elimination.
+<img
+ style="display: block;
+ margin-left: auto;
+ margin-right: auto;
+ width: 60%;"
+ src="fw_elim_bw_sub.png"
+ alt="Forward Elimination and Backwards Substitution">
+Now that we have an algorithm to follow, let's code it.
+```python
+import numpy as np
+
+# Create the Augmented matrix
+A = np.array([[ 2.2, 1.5, -1.3, 8.2],
+ [-3.4, -1.7, 2.8, -11.1],
+ [-2.0, 1.0, 2.4, -3.5]])
+
+
+```
+### Forward Elimination
+```python
+n = len(A)
+
+# Forward elimination loop
+for i in range(n-1):
+ for j in range(i+1, n):
+ factor = A[j, i] / A[i, i] # multiplier to eliminate A[j,i]
+ A[j, i:] = A[j, i:] - factor * A[i, i:]
+
+print("Upper triangular matrix after forward elimination:")
+print(A)
+```
+### Back Substitution
+```python
+x = np.zeros(n)
+
+for i in range(n-1, -1, -1):
+ x[i] = (A[i, -1] - np.dot(A[i, i+1:n], x[i+1:n])) / A[i, i]
+
+print("Solution:", x)
+
+```
+
+### Naive Gauss Elimination
+We can write a function combining these two steps:
+```python
+def gaussian_elimination_naive(A, b):
+ A = A.astype(float).copy()
+ b = b.astype(float).copy()
+ n = len(b)
+ # Forward elimination
+ for k in range(n-1):
+ if np.isclose(A[k,k], 0.0):
+ raise ZeroDivisionError("Zero pivot encountered (naïve). Use pivoting.")
+ for i in range(k+1, n):
+ m = A[i,k]/A[k,k]
+ A[i, k:] -= m * A[k, k:]
+ b[i] -= m * b[k]
+ # Back substitution
+ x = np.zeros(n)
+ for i in range(n-1, -1, -1):
+ s = A[i, i+1:] @ x[i+1:]
+ x[i] = (b[i]-s)/A[i,i]
+ return x
+```
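+A quick sanity check, reusing the numbers from the augmented matrix above and comparing against NumPy's built-in solver:
+```python
+A = np.array([[ 2.2,  1.5, -1.3],
+              [-3.4, -1.7,  2.8],
+              [-2.0,  1.0,  2.4]])
+b = np.array([8.2, -11.1, -3.5])
+
+x = gaussian_elimination_naive(A, b)
+print("Gauss elimination:", x)
+print("np.linalg.solve:  ", np.linalg.solve(A, b))  # the two should agree
+```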
+### Gauss Elimination with Partial Pivoting
+```python
+def gaussian_elimination_pp(A, b):
+ A = A.astype(float).copy()
+ b = b.astype(float).copy()
+ n = len(b)
+ # Forward elimination with partial pivoting
+ for k in range(n-1):
+ piv = k + np.argmax(np.abs(A[k:, k]))
+ if np.isclose(A[piv, k], 0.0):
+ raise np.linalg.LinAlgError("Matrix is singular or nearly singular.")
+ if piv != k: # row swap
+ A[[k, piv]] = A[[piv, k]]
+ b[[k, piv]] = b[[piv, k]]
+ for i in range(k+1, n):
+ m = A[i,k]/A[k,k]
+ A[i, k:] -= m * A[k, k:]
+ b[i] -= m * b[k]
+ # Back substitution
+ x = np.zeros(n)
+ for i in range(n-1, -1, -1):
+ s = A[i, i+1:] @ x[i+1:]
+ x[i] = (b[i]-s)/A[i,i]
+ return x
+```
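+A small example of why pivoting matters (the system is made up for illustration): the naive version hits a zero pivot immediately, while partial pivoting simply swaps rows and proceeds.
+```python
+A = np.array([[0.0, 2.0, 1.0],
+              [1.0, 1.0, 1.0],
+              [2.0, 1.0, 3.0]])
+b = np.array([5.0, 6.0, 13.0])
+
+# gaussian_elimination_naive(A, b) raises ZeroDivisionError here (A[0,0] == 0)
+print(gaussian_elimination_pp(A, b))
+```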
+
diff --git a/tutorials/module_3/1_3_LU_Decomposition.md b/tutorials/module_3/1_3_LU_Decomposition.md
new file mode 100644
index 0000000..5ed0f66
--- /dev/null
+++ b/tutorials/module_3/1_3_LU_Decomposition.md
@@ -0,0 +1,258 @@
+# LU Decomposition
+Imagine you’re designing the heat shield of a spacecraft. Engineers provide you with a structural model of the heat shield, represented by a coefficient matrix $A$. To evaluate how the design performs under different operating conditions, you need to test many different load cases, each represented by a right-hand side vector $\mathbf{b}$.
+
+Now suppose your model has a 50x50 matrix $A$, and you want to simulate 100 different load cases. If you rely on Gaussian elimination alone, you would need to **repeat the elimination process 100 times** - once for each $\mathbf{b}$. This quickly becomes computationally expensive, especially for larger systems.
+
+This is where **LU decomposition** comes in. Instead of redoing elimination for every load case, we can factorize the matrix $A$ **just once** into two simpler pieces:
+$$
+A = LU
+$$
+expanding it:
+$$
+\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{bmatrix}
+=
+\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \\ \end{bmatrix}
+\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix}
+$$
+Once we have this factorization, solving $A\mathbf{x} = \mathbf{b}$ becomes a two-step process:
+
+1. Solve $L\mathbf{d} = \mathbf{b}$ by **forward substitution** (because $L$ has ones on its diagonal, no division is needed):
+$$
+d_m \;=\; b_m \;-\; \sum_{n=1}^{m-1} l_{mn}\, d_n
+$$
+2. Solve $U\mathbf{x} = \mathbf{d}$ by **backward substitution**.
+   $$
+   x_m=\frac{\,d_m-\sum_{n=m+1}^{N} u_{mn}\,x_n\,}{u_{mm}}
+   $$
+
+The big advantage is that the factorization is reused. For 100 different load vectors, we only do the expensive decomposition once, and then solve two triangular systems (fast operations) for each new $\mathbf{b}$.
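+As a minimal sketch of this reuse (assuming SciPy is available; the small system is made up for illustration):
+```python
+import numpy as np
+from scipy.linalg import lu_factor, lu_solve
+
+A = np.array([[4.0, 3.0],
+              [6.0, 3.0]])
+
+# Expensive step, done once: factor A
+lu, piv = lu_factor(A)
+
+# Cheap step, repeated: reuse the factors for each new right-hand side
+for b in (np.array([10.0, 12.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])):
+    x = lu_solve((lu, piv), b)
+    print(b, "->", x)
+```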
+<img
+ style="display: block;
+ margin-left: auto;
+ margin-right: auto;
+ width: 40%;"
+ src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/bd/Tadeusz_Banachiewicz_%28NAC%29.jpg/1024px-Tadeusz_Banachiewicz_%28NAC%29.jpg"
+  alt="Tadeusz Banachiewicz">
+The LU factorization was first formalized in 1938 by Tadeusz Banachiewicz, building on Gauss's elimination ideas.
+
+The figure below walks through an example decomposition:
+
+
+<img
+ style="display: block;
+ margin-left: auto;
+ margin-right: auto;
+ width: 75%;"
+ src="lu_decomp.png"
+ alt="LU Decomposition">
+## Problem 1
+```python
+"""
+LU Decomposition Solver (with partial pivoting)
+
+Solves A x = b by computing P, L, U such that P A = L U,
+then performing:
+ 1) y from L y = P b (forward substitution)
+ 2) x from U x = y (back substitution)
+
+- Works for a single RHS (b shape: (n,)) or multiple RHS (b shape: (n, m))
+- Raises a ValueError if a near-zero pivot is encountered.
+"""
+
+import numpy as np
+
+def lu_decompose(A, pivot_tol=1e-12):
+ """
+ Compute LU factorization with partial pivoting.
+ Returns P, L, U such that P @ A = L @ U.
+
+ Parameters
+ ----------
+ A : (n, n) ndarray
+ pivot_tol : float
+ If the absolute value of the pivot falls below this, the matrix is
+ treated as singular.
+
+ Returns
+ -------
+ P, L, U : ndarrays of shape (n, n)
+ """
+ A = np.array(A, dtype=float, copy=True)
+ n = A.shape[0]
+ if A.shape[0] != A.shape[1]:
+ raise ValueError("A must be square")
+
+ U = A.copy()
+ L = np.eye(n)
+ P = np.eye(n)
+
+ for k in range(n - 1):
+ # Partial pivoting: find index of largest |U[i,k]| for i >= k
+ pivot_idx = np.argmax(np.abs(U[k:, k])) + k
+ pivot_val = U[pivot_idx, k]
+
+ if abs(pivot_val) < pivot_tol:
+ raise ValueError(f"Numerically singular matrix: pivot ~ 0 at column {k}")
+
+ # Swap rows in U and P (and the part of L constructed so far)
+ if pivot_idx != k:
+ U[[k, pivot_idx], :] = U[[pivot_idx, k], :]
+ P[[k, pivot_idx], :] = P[[pivot_idx, k], :]
+ if k > 0:
+ L[[k, pivot_idx], :k] = L[[pivot_idx, k], :k]
+
+ # Gaussian elimination to build L and zero out below the pivot in U
+ for i in range(k + 1, n):
+ L[i, k] = U[i, k] / U[k, k]
+ U[i, k:] -= L[i, k] * U[k, k:]
+ U[i, k] = 0.0
+
+ # Final pivot check on U[-1, -1]
+ if abs(U[-1, -1]) < pivot_tol:
+ raise ValueError(f"Numerically singular matrix: pivot ~ 0 at last diagonal")
+
+ return P, L, U
+
+def forward_substitution(L, b):
+ """
+ Solve L y = b for y, where L is unit lower-triangular.
+ b can be (n,) or (n, m).
+ """
+ L = np.asarray(L)
+ b = np.asarray(b, dtype=float)
+ n = L.shape[0]
+
+ if b.ndim == 1:
+ b = b.reshape(n, 1)
+
+ y = np.zeros_like(b, dtype=float)
+ for i in range(n):
+ # L has ones on the diagonal (Doolittle), so we don't divide by L[i,i]
+ y[i] = b[i] - L[i, :i] @ y[:i]
+ return y if y.shape[1] > 1 else y.ravel()
+
+def back_substitution(U, y):
+ """
+ Solve U x = y for x, where U is upper-triangular.
+ y can be (n,) or (n, m).
+ """
+ U = np.asarray(U)
+ y = np.asarray(y, dtype=float)
+ n = U.shape[0]
+
+ if y.ndim == 1:
+ y = y.reshape(n, 1)
+
+ x = np.zeros_like(y, dtype=float)
+ for i in range(n - 1, -1, -1):
+ if abs(U[i, i]) < 1e-15:
+ raise ValueError(f"Zero (or tiny) pivot encountered at U[{i},{i}]")
+ x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
+ return x if x.shape[1] > 1 else x.ravel()
+
+def lu_solve(A, b, pivot_tol=1e-12):
+ """
+ Solve A x = b using LU with partial pivoting.
+ Returns x and the factors (P, L, U).
+ """
+ P, L, U = lu_decompose(A, pivot_tol=pivot_tol)
+ Pb = P @ np.asarray(b, dtype=float)
+ y = forward_substitution(L, Pb)
+ x = back_substitution(U, y)
+ return x, (P, L, U)
+
+def main():
+ # Example system:
+ # 3x + 2y - z = 1
+ # 2x - 2y + 4z = -2
+ # -x + 0.5y - z = 0
+ A = np.array([
+ [ 3.0, 2.0, -1.0],
+ [ 2.0, -2.0, 4.0],
+ [-1.0, 0.5, -1.0]
+ ])
+ b = np.array([1.0, -2.0, 0.0])
+
+ x, (P, L, U) = lu_solve(A, b)
+ print("Solution x:\n", x)
+ print("\nP:\n", P)
+ print("\nL:\n", L)
+ print("\nU:\n", U)
+
+ # Residual check
+ r = A @ x - b
+ print("\nResidual A@x - b:\n", r)
+ print("Residual norm:", np.linalg.norm(r, ord=np.inf))
+
+if __name__ == "__main__":
+ main()
+```
+
+## Problem 2
+Ohm's and Kirchhoff's laws have been used to derive a set of equations that model a circuit. Using the built-in NumPy solvers, plot the motor currents at different supply voltages.
+
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Unknown ordering
+VR1, VR2, VR3, VR4, VR5, I1, I2, IM1, IM2, IM3 = range(10)
+
+def build_system_matrices(R1, R2, R3, R4, R5):
+ A = np.zeros((10,10), float)
+ b_const = np.zeros(10, float)
+
+ # KVL/KCL + Ohm's law equations
+ A[0, VR1] = -1; A[0, VR2] = -1 # -VR1 - VR2 = -Vs + 5
+ A[1, VR2] = 1; A[1, VR3] = -1; A[1, VR4] = -1; b_const[1] = -2 # VR2 - VR3 - VR4 = -2
+ A[2, VR4] = 1; A[2, VR5] = -1; b_const[2] = 2 # VR4 - VR5 = 2
+
+ A[3, VR1] = 1; A[3, I1 ] = -R1 # VR1 = I1 R1
+ A[4, VR2] = 1; A[4, IM1] = -R2 # VR2 = IM1 R2
+ A[5, VR3] = 1; A[5, I2 ] = -R3 # VR3 = I2 R3
+ A[6, VR4] = 1; A[6, IM2] = -R4 # VR4 = IM2 R4
+ A[7, VR5] = 1; A[7, IM3] = -R5 # VR5 = IM3 R5
+
+ A[8, I1 ] = 1; A[8, IM1] = -1; A[8, I2 ] = -1 # I1 = IM1 + I2
+ A[9, I2 ] = 1; A[9, IM2] = -1; A[9, IM3] = -1 # I2 = IM2 + IM3
+ return A, b_const
+
+def b_vector(Vs, b_const):
+ b = b_const.copy()
+ b[0] = -Vs + 5.0
+ return b
+
+# Example component values
+R1, R2, R3, R4, R5 = 10.0, 5.0, 8.0, 6.0, 4.0
+
+A, b_const = build_system_matrices(R1, R2, R3, R4, R5)
+
+# Sanity check: ensure A is non-singular (a huge condition number would warn us)
+condA = np.linalg.cond(A)
+print("cond(A) =", condA)
+
+Vs_vals = np.arange(0.0, 48.0 + 0.1, 0.1)
+
+# Option 1 (straightforward): loop with numpy.linalg.solve (LAPACK under the hood)
+IM1_vals = np.empty_like(Vs_vals)
+IM2_vals = np.empty_like(Vs_vals)
+IM3_vals = np.empty_like(Vs_vals)
+
+for k, Vs in enumerate(Vs_vals):
+ b = b_vector(Vs, b_const)
+ x = np.linalg.solve(A, b) # Uses LU factorization internally
+ IM1_vals[k], IM2_vals[k], IM3_vals[k] = x[IM1], x[IM2], x[IM3]
+
+# Plot
+plt.figure()
+plt.plot(Vs_vals, IM1_vals, label="IM1 (solve)")
+plt.plot(Vs_vals, IM2_vals, label="IM2 (solve)")
+plt.plot(Vs_vals, IM3_vals, label="IM3 (solve)")
+plt.xlabel("Supply Voltage Vs (V)")
+plt.ylabel("Motor Current (A)")
+plt.title(f"Motor Currents vs Supply Voltage (cond(A) ≈ {condA:.1f})")
+plt.legend()
+plt.show()
+
+```
\ No newline at end of file
diff --git a/tutorials/module_3/1_linear_equations.md b/tutorials/module_3/1_linear_equations.md
deleted file mode 100644
index 9f50b97..0000000
--- a/tutorials/module_3/1_linear_equations.md
+++ /dev/null
@@ -1,540 +0,0 @@
-# Linear Equations
-Let's consider an linear equation
-$$
-y = m x + b
-$$
-where $a$ and $b$ are two known constants we can solve for $x$ easily.
-
-# Linear Algebra
-Although this isn't a course in linear algebra we are going to use some fundamental concepts from linear algebra to solve systems of equations. If you haven't taken linear algebra before, it is the study of linear equations. These equations can be represented in the form of matrices. Matrices are rectangular arrays of numbers arranged in rows and columns. They are widely used in both engineering and computer science. Let's say we have the following system of equation.
-$$
-\begin{cases}
-a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
-a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
-a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
-\end{cases}
-\tag{1}
-
-$$
-Where both the $a_{nn}$ coefficients and the $b_n$ variables are all known. We can re-write this system into a matrix form by creating an $A$ matrix of all $a_{nn}$ values and a vector in the form of:
-$$
- \left[ {\begin{array}{cc}
- a_{11} & a_{12} & a_{13}\\
- a_{21} & a_{22} & a_{23}\\
- a_{31} & a_{32} & a_{33}\\
- \end{array} } \right]
- \left[ {\begin{array}{cc}
- x_{1}\\
- x_{2}\\
- x_{3}\\
- \end{array} } \right]
-=
- \left[ {\begin{array}{cc}
- b_{1}\\
- b_{2}\\
- b_{3}\\
- \end{array} } \right]
- \tag{2}
-$$
-where
-$$
- \left[ {\begin{array}{cc}
- a_{11} & a_{12} & a_{13}\\
- a_{21} & a_{22} & a_{23}\\
- a_{31} & a_{32} & a_{33}\\
- \end{array} } \right]
- = A,
- \left[ {\begin{array}{cc}
- b_{1}\\
- b_{2}\\
- b_{3}\\
- \end{array} } \right]
- = b
-$$
-we can also write the equation in it's simplified form
-$$
-Ax=b
-\tag{3}
-$$
-
-Take a close look at equation 1 and 2 and look at their similarity and differences. Notice how the pattern of the $A$ matrix and $b$ vector and follow the same patterns as however the elements of the $x$ vector ($x_1$,$x_2$,$x_3$) are flipped vertically. This is because of how we multiple matrices
-
-## Matrix operations in python
-
-- Matrices of the same size can be added or subtracted element-wise:
-- Multiply a matrix by a scalar yields a new matrix where every element is multiplied by the scalar.
-- The determinant of a matrix is number of a square matrix that summarizes certain properties of the matrix.
-- **Identity Matrix $I$**: Square matrix with 1’s on the diagonal and 0’s elsewhere.
-- **Inverse Matrix $A^{-1}$**: Satisfies $AA^{-1} = I$. (Only square, non-singular matrices have inverses.)
-
-## Matrix Math in Python
-Let's create two square matrices $A$ and $B$ to perform some matrix operations on:
-```python
-import numpy as np
-np.set_printoptions(suppress=True)
-
-A = np.array([[1, 3],
- [2, 4]]) # 2x2
-B = np.array([[5, 6],
- [7, 8]]) # 2x2
-A.shape, B.shape
-
-b = np.array([3, 5])
-b.shape
-```
-
-Test and see how it works.
-
-```python
-# Addition and Subtraction of same shapes
-A + B
-A - B
-
-# Scalar Multiplication
-3*A
-
-5*B
-
-# Matrix Multiplication
-mat_mult = A @ B
-
-# Transpose
-B.T
-
-# Identity and Inverse matrix
-
-I = np.eye(2)
-A_inv = np.linalg.inv(A)
-
-A @ A_inv, A_inv @ A
-
-# Determinants
-
-detA = np.linalg.det(A)
-detB = np.linalg.det(B)
-detA, detB
-```
-
-### Problem 1
-Re-write the following systems of equation in matrix form to normal algebraic equations.
-$$
- \left[ {\begin{array}{cc}
- 3 & 5 & 7\\
- 2 & 1 & 3\\
- 2 & 3 & 6\\
- \end{array} } \right]
- \left[ {\begin{array}{cc}
- x_{1}\\
- x_{2}\\
- x_{3}\\
- \end{array} } \right]
-=
- \left[ {\begin{array}{cc}
- 24\\
- 52\\
- 12\\
- \end{array} } \right]
-$$
-## Problem 2
-```python
-import numpy as np
-
-# Coefficient matrix A
-A = np.array([[2, 3], [4, 5]])
-# Right - hand side vector b
-b = np.array([4, 6])
-
-# Solve the system of equations
-x = np.linalg.solve(A, b)
-print(x)
-```
-## Problem 3
-
-
-
-Matrix Determinate
-
-Cramer's Rule -
-
-
-# Systems of Linear Equations
-
-## Techniques to solve Systems of Equations
-In this section our goals is to learn some algorithms that help solve system of equations in python. Let us try to solve system of equation think about how we solve a system of two linear equations by hand.
-$$
-\begin{cases} 2x + y = 5 \\ 4x - y = 3 \end{cases}
-$$
-
-You’ve probably learned to solve a system of equations using methods such as substitution, graphical solutions, or elimination. These approaches work well for small systems, but they can become tedious as the number of equations grows. The question is: *can we find a way to carry out elimination in an algorithmic way?*
-
-When solving systems of equations by hand, one strategy is to eliminate variables step by step until you are left with a simpler equation that can be solved directly. Forward elimination is the algorithmic version of this same process.
-
-The idea is to start with the first equation, we use it to eliminate the first variable from all equations below it. Then we move to the second equation, using it to eliminate the second variable from all equations below it, and so on. This process continues until the coefficient matrix has been transformed into an upper-triangular form (all zeros below the diagonal).
-
-For example, suppose we have a 3×3 system of equations. After forward elimination, it will look like:
-$$
-\begin{bmatrix} a_{11} & a_{12} & a_{13} & | & b_1 \\ 0 & a_{22} & a_{23} & | & b_2 \\ 0 & 0 & a_{33} & | & b_3 \\ \end{bmatrix} $$
-Notice how everything below the diagonal has been reduced to zero. At this point, the system is much easier to solve.
-
-Notice how everything below the diagonal has been reduced to zero. At this point, the system is much easier to solve because each row now involves fewer unknowns as you move downward. The last equation contains only one variable, which can be solved directly:
-$$
-a_{nn} x_n = b_n \quad \Rightarrow \quad x_n = \frac{b_n}{a_{nn}}
-$$
-Once $x_n$ is known, it can be substituted into the equation above it to solve for $x_{n-1}$:
-$$
-a_{(n-1)(n-1)}x_{n-1} + a_{(n-1)n}x_n = b_{n-1} \quad \Rightarrow \quad x_{n-1} = \frac{b_{n-1} - a_{(n-1)n}x_n}{a_{(n-1)(n-1)}}
-$$
-This process continues row by row, moving upward through the system. In general, the formula for the $m$-th variable is:
-$$
-x_m = \frac{b_m - \sum_{j=m+1}^n a_{mj}x_j}{a_{mm}}
-$$
-This step-by-step procedure is known as **backward substitution**, and together with forward elimination it forms the backbone of systematic solution methods like Gaussian elimination.
-<img
- style="display: block;
- margin-left: auto;
- margin-right: auto;
- width: 60%;"
- src="fw_elim_bw_sub.png"
- alt="Forward Elimination and Backwards Substitution">
-Now that we have an algorithm to follow, let's code it.
-```python
-import numpy as np
-
-# Create the Augmented matrix
-A = np.array([[ 2.2, 1.5, -1.3, 8.2],
- [-3.4, -1.7, 2.8, -11.1],
- [-2.0, 1.0, 2.4, -3.5]])
-
-
-```
-### Forward Elimination
-```python
-n = len(A)
-
-# Forward elimination loop
-for i in range(n-1):
- for j in range(i+1, n):
- factor = A[j, i] / A[i, i] # multiplier to eliminate A[j,i]
- A[j, i:] = A[j, i:] - factor * A[i, i:]
-
-print("Upper triangular matrix after forward elimination:")
-print(A)
-```
-### Back Substitution
-```python
-x = np.zeros(n)
-
-for i in range(n-1, -1, -1):
- x[i] = (A[i, -1] - np.dot(A[i, i+1:n], x[i+1:n])) / A[i, i]
-
-print("Solution:", x)
-
-```
-
-### Naive Gauss Elimination
-We can write a function combining these two steps:
-```python
-def gaussian_elimination_naive(A, b):
- A = A.astype(float).copy()
- b = b.astype(float).copy()
- n = len(b)
- # Forward elimination
- for k in range(n-1):
- if np.isclose(A[k,k], 0.0):
- raise ZeroDivisionError("Zero pivot encountered (naïve). Use pivoting.")
- for i in range(k+1, n):
- m = A[i,k]/A[k,k]
- A[i, k:] -= m * A[k, k:]
- b[i] -= m * b[k]
- # Back substitution
- x = np.zeros(n)
- for i in range(n-1, -1, -1):
- s = A[i, i+1:] @ x[i+1:]
- x[i] = (b[i]-s)/A[i,i]
- return x
-```
-### Gauss Elimination
-```python
-def gaussian_elimination_pp(A, b):
- A = A.astype(float).copy()
- b = b.astype(float).copy()
- n = len(b)
- # Forward elimination with partial pivoting
- for k in range(n-1):
- piv = k + np.argmax(np.abs(A[k:, k]))
- if np.isclose(A[piv, k], 0.0):
- raise np.linalg.LinAlgError("Matrix is singular or nearly singular.")
- if piv != k: # row swap
- A[[k, piv]] = A[[piv, k]]
- b[[k, piv]] = b[[piv, k]]
- for i in range(k+1, n):
- m = A[i,k]/A[k,k]
- A[i, k:] -= m * A[k, k:]
- b[i] -= m * b[k]
- # Back substitution
- x = np.zeros(n)
- for i in range(n-1, -1, -1):
- s = A[i, i+1:] @ x[i+1:]
- x[i] = (b[i]-s)/A[i,i]
- return x
-```
-
-
-# LU Decomposition
-Imagine you’re designing the heat shield of a spacecraft. Engineers provide you with a structural model of the heat shield, represented by a coefficient matrix $A$. To evaluate how the design performs under different operating conditions, you need to test many different load cases, each represented by a right-hand side vector $\mathbf{b}$.
-
-Now suppose your model has a 50x50 matrix $A$, and you want to simulate 100 different load cases. If you rely on Gaussian elimination alone, you would need to **repeat the elimination process 100 times** - once for each $\mathbf{b}$. This quickly becomes computationally expensive, especially for larger systems.
-
-This is where **LU decomposition** comes in. Instead of redoing elimination for every load case, we can factorize the matrix $A$ **just once** into two simpler pieces:
-$$
-A = LU
-$$
-expanding it:
-$$
-\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{bmatrix}
-=
-\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \\ \end{bmatrix}
-\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix}
-$$
-Once we have this factorization, solving $A\mathbf{x} = \mathbf{b}$ becomes a two-step process:
-
-1. Solve $L\mathbf{d} = \mathbf{b}$ by **forward substitution**.
-$$
-d_m \;=\; \frac{\,b_m \;-\; \sum_{n=1}^{m-1} a_{mn}\, d_n\,}{a_{mm}}
-$$
-2. Solve $U\mathbf{x} = \mathbf{d}$ by **backward substitution**.
- $$
- x_m=\frac{\,d_m-\sum_{n=m+1}^{N} u_{mn}\,x_n\,}{u_{mm}}
- $$
-
-The big advantage is that the factorization is reused. For 100 different load vectors, we only do the expensive decomposition once, and then solve two triangular systems (fast operations) for each new $\mathbf{b}$.
-<img
- style="display: block;
- margin-left: auto;
- margin-right: auto;
- width: 40%;"
- src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/bd/Tadeusz_Banachiewicz_%28NAC%29.jpg/1024px-Tadeusz_Banachiewicz_%28NAC%29.jpg"
- alt="Tadeusz Banchiewicz">
-First formalized in 1938 by Tadeusz Banachiewicz, based on Gauss’s elimination ideas.
-
-Example
-
-
-<img
- style="display: block;
- margin-left: auto;
- margin-right: auto;
- width: 75%;"
- src="lu_decomp.png"
- alt="LU Decomposition">
-## Problem 1
-```python
-"""
-LU Decomposition Solver (with partial pivoting)
-
-Solves A x = b by computing P, L, U such that P A = L U,
-then performing:
- 1) y from L y = P b (forward substitution)
- 2) x from U x = y (back substitution)
-
-- Works for a single RHS (b shape: (n,)) or multiple RHS (b shape: (n, m))
-- Raises a ValueError if a near-zero pivot is encountered.
-"""
-
-import numpy as np
-
-def lu_decompose(A, pivot_tol=1e-12):
- """
- Compute LU factorization with partial pivoting.
- Returns P, L, U such that P @ A = L @ U.
-
- Parameters
- ----------
- A : (n, n) ndarray
- pivot_tol : float
- If the absolute value of the pivot falls below this, the matrix is
- treated as singular.
-
- Returns
- -------
- P, L, U : ndarrays of shape (n, n)
- """
- A = np.array(A, dtype=float, copy=True)
- n = A.shape[0]
- if A.shape[0] != A.shape[1]:
- raise ValueError("A must be square")
-
- U = A.copy()
- L = np.eye(n)
- P = np.eye(n)
-
- for k in range(n - 1):
- # Partial pivoting: find index of largest |U[i,k]| for i >= k
- pivot_idx = np.argmax(np.abs(U[k:, k])) + k
- pivot_val = U[pivot_idx, k]
-
- if abs(pivot_val) < pivot_tol:
- raise ValueError(f"Numerically singular matrix: pivot ~ 0 at column {k}")
-
- # Swap rows in U and P (and the part of L constructed so far)
- if pivot_idx != k:
- U[[k, pivot_idx], :] = U[[pivot_idx, k], :]
- P[[k, pivot_idx], :] = P[[pivot_idx, k], :]
- if k > 0:
- L[[k, pivot_idx], :k] = L[[pivot_idx, k], :k]
-
- # Gaussian elimination to build L and zero out below the pivot in U
- for i in range(k + 1, n):
- L[i, k] = U[i, k] / U[k, k]
- U[i, k:] -= L[i, k] * U[k, k:]
- U[i, k] = 0.0
-
- # Final pivot check on U[-1, -1]
- if abs(U[-1, -1]) < pivot_tol:
- raise ValueError(f"Numerically singular matrix: pivot ~ 0 at last diagonal")
-
- return P, L, U
-
-def forward_substitution(L, b):
- """
- Solve L y = b for y, where L is unit lower-triangular.
- b can be (n,) or (n, m).
- """
- L = np.asarray(L)
- b = np.asarray(b, dtype=float)
- n = L.shape[0]
-
- if b.ndim == 1:
- b = b.reshape(n, 1)
-
- y = np.zeros_like(b, dtype=float)
- for i in range(n):
- # L has ones on the diagonal (Doolittle), so we don't divide by L[i,i]
- y[i] = b[i] - L[i, :i] @ y[:i]
- return y if y.shape[1] > 1 else y.ravel()
-
-def back_substitution(U, y):
- """
- Solve U x = y for x, where U is upper-triangular.
- y can be (n,) or (n, m).
- """
- U = np.asarray(U)
- y = np.asarray(y, dtype=float)
- n = U.shape[0]
-
- if y.ndim == 1:
- y = y.reshape(n, 1)
-
- x = np.zeros_like(y, dtype=float)
- for i in range(n - 1, -1, -1):
- if abs(U[i, i]) < 1e-15:
- raise ValueError(f"Zero (or tiny) pivot encountered at U[{i},{i}]")
- x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
- return x if x.shape[1] > 1 else x.ravel()
-
-def lu_solve(A, b, pivot_tol=1e-12):
- """
- Solve A x = b using LU with partial pivoting.
- Returns x and the factors (P, L, U).
- """
- P, L, U = lu_decompose(A, pivot_tol=pivot_tol)
- Pb = P @ np.asarray(b, dtype=float)
- y = forward_substitution(L, Pb)
- x = back_substitution(U, y)
- return x, (P, L, U)
-
-def main():
- # Example system:
- # 3x + 2y - z = 1
- # 2x - 2y + 4z = -2
- # -x + 0.5y - z = 0
- A = np.array([
- [ 3.0, 2.0, -1.0],
- [ 2.0, -2.0, 4.0],
- [-1.0, 0.5, -1.0]
- ])
- b = np.array([1.0, -2.0, 0.0])
-
- x, (P, L, U) = lu_solve(A, b)
- print("Solution x:\n", x)
- print("\nP:\n", P)
- print("\nL:\n", L)
- print("\nU:\n", U)
-
- # Residual check
- r = A @ x - b
- print("\nResidual A@x - b:\n", r)
- print("Residual norm:", np.linalg.norm(r, ord=np.inf))
-
-if __name__ == "__main__":
- main()
-```
-
-## Problem 2
-Ohm's and Kirchhoff's laws have been used to find a set of equations to model a circuit. Using the built-in Numpy solvers. Plot the function of current at different voltages.
-
-
-```python
-import numpy as np
-import matplotlib.pyplot as plt
-
-# Unknown ordering
-VR1, VR2, VR3, VR4, VR5, I1, I2, IM1, IM2, IM3 = range(10)
-
-def build_system_matrices(R1, R2, R3, R4, R5):
- A = np.zeros((10,10), float)
- b_const = np.zeros(10, float)
-
- # KVL/KCL + Ohm's law equations
- A[0, VR1] = -1; A[0, VR2] = -1 # -VR1 - VR2 = -Vs + 5
- A[1, VR2] = 1; A[1, VR3] = -1; A[1, VR4] = -1; b_const[1] = -2 # VR2 - VR3 - VR4 = -2
- A[2, VR4] = 1; A[2, VR5] = -1; b_const[2] = 2 # VR4 - VR5 = 2
-
- A[3, VR1] = 1; A[3, I1 ] = -R1 # VR1 = I1 R1
- A[4, VR2] = 1; A[4, IM1] = -R2 # VR2 = IM1 R2
- A[5, VR3] = 1; A[5, I2 ] = -R3 # VR3 = I2 R3
- A[6, VR4] = 1; A[6, IM2] = -R4 # VR4 = IM2 R4
- A[7, VR5] = 1; A[7, IM3] = -R5 # VR5 = IM3 R5
-
- A[8, I1 ] = 1; A[8, IM1] = -1; A[8, I2 ] = -1 # I1 = IM1 + I2
- A[9, I2 ] = 1; A[9, IM2] = -1; A[9, IM3] = -1 # I2 = IM2 + IM3
- return A, b_const
-
-def b_vector(Vs, b_const):
- b = b_const.copy()
- b[0] = -Vs + 5.0
- return b
-
-# Example component values
-R1, R2, R3, R4, R5 = 10.0, 5.0, 8.0, 6.0, 4.0
-
-A, b_const = build_system_matrices(R1, R2, R3, R4, R5)
-
-# Sanity: ensure A is non-singular
-condA = np.linalg.cond(A)
-condA
-
-Vs_vals = np.arange(0.0, 48.0 + 0.1, 0.1)
-
-# Option 1 (straightforward): loop with numpy.linalg.solve (LAPACK under the hood)
-IM1_vals = np.empty_like(Vs_vals)
-IM2_vals = np.empty_like(Vs_vals)
-IM3_vals = np.empty_like(Vs_vals)
-
-for k, Vs in enumerate(Vs_vals):
- b = b_vector(Vs, b_const)
- x = np.linalg.solve(A, b) # Uses LU factorization internally
- IM1_vals[k], IM2_vals[k], IM3_vals[k] = x[IM1], x[IM2], x[IM3]
-
-# Plot
-plt.figure()
-plt.plot(Vs_vals, IM1_vals, label="IM1 (solve)")
-plt.plot(Vs_vals, IM2_vals, label="IM2 (solve)")
-plt.plot(Vs_vals, IM3_vals, label="IM3 (solve)")
-plt.xlabel("Supply Voltage Vs (V)")
-plt.ylabel("Motor Current (A)")
-plt.title(f"Motor Currents vs Supply Voltage (cond(A) ≈ {condA:.1f}, diff≈{max_diff:.2e})")
-plt.legend()
-plt.show()
-
-```
\ No newline at end of file
diff --git a/tutorials/module_3/2_0_non-linear_equations.md b/tutorials/module_3/2_0_non-linear_equations.md
new file mode 100644
index 0000000..c635513
--- /dev/null
+++ b/tutorials/module_3/2_0_non-linear_equations.md
@@ -0,0 +1,43 @@
+# Root Finding Methods
+For linear equations, finding the root is straightforward—you can usually solve for the unknown just by rearranging the equation algebraically. With nonlinear equations, however, the process quickly becomes more challenging.
+
+Up to this point, you've already learned tools like the quadratic formula,
+$$
+x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}
+$$
+This works nicely when $f(x)$ is a second-degree polynomial. However, as engineers we need to be able to solve equations that have no such closed-form formula, or perhaps we are working with experimental data rather than an analytical function. In this section we will cover bracketing and open methods to find roots of a function.
+
+Let us take a step back and think *algorithmically*. To find a root numerically, we can implement a search algorithm. Let's consider a small phone book - a database of names and numbers sorted alphabetically by last name. Our goal is to find the phone number of James Watt (our root). We could make an algorithm that starts at the first page, checks to see if the current name equals our target (Watt), and if it doesn't, checks the next entry. This method is called *incremental search*. As you may see, this can take very long if the database is large.
+
+> [!NOTE] Example Phonebook Database Entries
+> **A** – Archimedes
+> **B** – Bernoulli
+> **C** – Curie
+> **D** – Darwin
+> **E** – Einstein
+> **F** – Feynman
+> **G** – Galileo
+> **H** – Hubble
+> **I** – Ibn Sina
+> **J** – Joule
+> **K** – Kelvin
+> **L** – Leonardo
+> **M** – Maxwell
+> **N** – Newton
+> **O** – Oppenheimer
+> **P** – Pascal
+> **Q** – Quetelet
+> **R** – Röntgen
+> **S** – Schrödinger
+> **U** – Urey
+> **V** – Volta
+> **W** – Watt
+> **X** – Xenophon
+> **Y** – Yukawa
+> **Z** – Zuse
+
+Another approach may be to index the book and jump straight to the 'W' names; however, this requires two steps: we first need to build the index, and only then can we search it. How would you approach this problem?
+
+[[2_1_Bracketing_Method]]
+[[2_2_Open_Methods]]
+[[2_3_Systems_of_Non-linear_equations]]
\ No newline at end of file
diff --git a/tutorials/module_3/2_1_Bracketing_Method.md b/tutorials/module_3/2_1_Bracketing_Method.md
new file mode 100644
index 0000000..3a9a97b
--- /dev/null
+++ b/tutorials/module_3/2_1_Bracketing_Method.md
@@ -0,0 +1,97 @@
+# Bracketing Method
+## Incremental Search
+The incremental search method is the simplest of the bracketing methods. The basic idea is to start from an initial point and move forward in small increments along the function. At each step, the sign of the function value is checked. If a change of sign occurs between two consecutive points, then a root must lie between them (by the **Intermediate Value Theorem**). This gives us a bracket — an interval that contains the root — which can then be refined using more powerful methods such as bisection or false position.
+
+Although incremental search is not efficient for accurately finding roots, it is useful for _detecting the presence of roots_ and identifying approximate intervals where they exist. Once such an interval is located, other methods can be applied to converge quickly to the actual root. In this sense, incremental search often serves as a **pre-processing step** for bracketing or open methods, especially when the root’s location is not known in advance.
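+
+A minimal sketch of incremental search (the function and interval are assumed for illustration):
+```python
+import numpy as np
+
+def incremental_search(f, x_start, x_end, dx):
+    # Scan [x_start, x_end] in steps of dx and record every
+    # sub-interval where f changes sign (each one brackets a root)
+    brackets = []
+    x = x_start
+    while x + dx <= x_end:
+        if np.sign(f(x)) != np.sign(f(x + dx)):
+            brackets.append((x, x + dx))
+        x += dx
+    return brackets
+
+f = lambda x: np.sin(10*x) + np.cos(3*x)
+print(incremental_search(f, 3.0, 6.0, 0.1))
+```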
+## Bisection
+This method starts by sectioning the database into two halves. We can do this with a phone book by simply opening it in the middle. In our data set, we find the middle entry to be 'M - Maxwell'. Based on the premise that all names are sorted alphabetically, we can conclude that our target is in the second half, essentially eliminating the first half. Doing this a second time gives us our new midpoint 'S – Schrödinger', and a third time gets us to our target 'W - Watt'. In only 3 iterations we have reached our target: a huge improvement over incremental search, which needed 22 iterations.
+
+This exact same concept can be applied to finding roots of functions. Instead of searching for strings we are searching for floats, which is a lot easier for computers to do. Take a moment to think about how you would write pseudo-code to program this.
+
+> The **Intermediate Value Theorem** says that if f(x) is a continuous function between a and b, and sign(f(a))≠sign(f(b)), then there must be a c, such that a<c<b and f(c)=0.
+
+Let's consider a continuous function $f(x)$ with an unknown root $x_r$ . Using the intermediate value theorem we can bracket a root with the points $x_{lower}$ and $x_{upper}$ such that $f(x_{lower})$ and $f(x_{upper})$ have opposite signs. This idea is visualized in the graph below.
+<img
+ style="display: block;
+ margin-left: auto;
+ margin-right: auto;
+ width: 50%;"
+ src="https://pythonnumericalmethods.studentorg.berkeley.edu/_images/19.03.01-Intermediate-value-theorem.png"
+ alt="Intermediate Value Theorem">
+Once we bisect the interval, we set the new predicted root to be the midpoint. We then compare the two halves and check which one has a sign change between its bounds. Once the half with the sign change has been identified, we repeat the process until we near the root.
+<img
+ style="display: block;
+ margin-left: auto;
+ margin-right: auto;
+ width: 50%;"
+ src="bisection.png"
+ alt="Bisection">
+As the figure shows, the predicted root $x_r$ gets closer to the actual root with each iteration. In theory this is an infinite process that can keep on going; in practice, finite computer precision limits the result. The work-around is to set a tolerance on the accuracy. As engineers it is our duty to determine what the allowable deviation is.
+
+So let's take a look at how we can write this in python.
+```python
+import numpy as np
+
+def my_bisection(f, x_l, x_u, tol):
+    # approximates a root, x_r, of f bounded
+    # by x_l and x_u to within tolerance
+    # | f(x_r) | < tol, with x_r the midpoint
+    # between x_l and x_u (recursive implementation)
+
+ # check if a and b bound a root
+ if np.sign(f(x_l)) == np.sign(f(x_u)):
+ raise Exception(
+ "The scalars x_l and x_u do not bound a root")
+
+ # get midpoint
+ x_r = (x_l + x_u)/2
+
+    if np.abs(f(x_r)) < tol:
+        # stopping condition: f(x_r) is within tolerance of zero
+        return x_r
+    elif np.sign(f(x_l)) == np.sign(f(x_r)):
+        # the sign change (and the root) lies in [x_r, x_u]:
+        # make recursive call with x_l = x_r
+        return my_bisection(f, x_r, x_u, tol)
+    elif np.sign(f(x_u)) == np.sign(f(x_r)):
+        # the sign change (and the root) lies in [x_l, x_r]:
+        # make recursive call with x_u = x_r
+        return my_bisection(f, x_l, x_r, tol)
+```
+
+### Problem 1
+Call the *my_bisection* function to solve the equation $f(x) = x^2 -2$
+```python
+f = lambda x: x**2 - 2
+
+r1 = my_bisection(f, 0, 2, 0.1)
+print("r1 =", r1)
+r01 = my_bisection(f, 0, 2, 0.01)
+print("r01 =", r01)
+
+print("f(r1) =", f(r1))
+print("f(r01) =", f(r01))
+```
+
+
+### Problem 2
+Write an algorithm that uses a combination of the bisection and incremental search to find multiple roots in the first 8 minutes of the data set.
+```python
+import pandas as pd
+import matplotlib.pyplot as plt
+
+# Read the CSV file into a DataFrame called data
+data = pd.read_csv('your_file.csv')
+
+# Plot all numeric columns
+data.plot()
+
+# Add labels and show plot
+plt.xlabel("Time (min)")
+plt.ylabel("Strain (kPa)")
+plt.title("Data with multiple roots")
+plt.legend()
+plt.show()
+```
+
+## False Position
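+The false position (*regula falsi*) method keeps a bracket $[x_l, x_u]$ like bisection, but instead of the midpoint it takes the x-intercept of the straight line joining $(x_l, f(x_l))$ and $(x_u, f(x_u))$ as the new estimate:
+$$
+x_r = x_u - \frac{f(x_u)(x_l - x_u)}{f(x_l) - f(x_u)}
+$$
+As with bisection, the half of the bracket that still contains the sign change is kept for the next iteration.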
diff --git a/tutorials/module_3/2_2_Open_Methods.md b/tutorials/module_3/2_2_Open_Methods.md
new file mode 100644
index 0000000..7502a86
--- /dev/null
+++ b/tutorials/module_3/2_2_Open_Methods.md
@@ -0,0 +1,185 @@
+# Open Methods
+So far we have looked at methods that require us to bracket a root before finding it. However, let's think outside the box and ask ourselves: can we find a root with only one guess? Can we improve our guess if we have some knowledge of the function itself? The answer: derivatives.
+
+## Newton-Raphson
+Possibly the most used root finding method implemented in algorithms is the Newton-Raphson method. By using the initial guess we can draw a tangent line at this point on the curve. Where the tangent intercepts the x-axis is usually an improved estimate of the root. It can be more intuitive to see this represented graphically.
+<img
+ style="display: block;
+ margin-left: auto;
+ margin-right: auto;
+ width: 50%;"
+ src="https://pythonnumericalmethods.studentorg.berkeley.edu/_images/19.04.01-Newton-step.png"
+ alt="Newton-Raphson illustration">
+Given the function $f(x)$ we make our first guess at $x_0$. Using our knowledge of the function at that point we can calculate the derivative to obtain the tangent. Using the equation of the tangent we can re-arrange it to solve for $x$ when $f(x)=0$ giving us our first improved guess $x_1$.
+
+As seen in the figure above, the derivative of the function at $x_0$ is equal to the slope of the tangent line.
+$$
+f'(x_0) = \frac{f(x_0)-0}{x_0-x_1}
+$$
+Re-arranging for our new guess we get:
+$$
+x_{1} = x_0 - \frac{f(x_0)}{f'(x_0)}
+$$
+Since $x_0$ is our current guess and $x_1$ is our next guess, we can write these symbolically as $x_i$ and $x_{i+1}$ respectively. This gives us the *Newton-Raphson formula*.
+$$
+\boxed{x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}} \tag{3.1}
+$$
+### Problem 1
+From experimental data we extrapolated a function $f$. Write a Python function called *newtonraphson* which has the following input parameters:
+ - `f` - function
+ - `df` - first derivative of the function
+ - `x0` - initial guess
+ - `tol` - the tolerance of error
+ The function should find the root of $f(x)=1.257\,e^{(x-6.471)}-0.2$ using an initial guess of 8.
+
+Using Recursion
+```python
+import numpy as np
+
+def newtonraphson(f, df, x0, tol):
+    # Output of this function is the estimated root of f
+    # using the Newton-Raphson method (recursive implementation)
+    if abs(f(x0)) < tol:
+        return x0
+    else:
+        # take one Newton step, then recurse on the improved guess
+        return newtonraphson(f, df, x0 - f(x0)/df(x0), tol)
+
+f = lambda x: 1.257*np.exp(x - 6.471) - 0.2
+f_prime = lambda x: 1.257*np.exp(x - 6.471)
+
+print(newtonraphson(f, f_prime, 8, 1e-6))
+```
+
+Using Iteration
+```python
+import numpy as np
+
+def newton_raphson(f, f_prime, x0, tol=1e-8):
+ """
+ Newton-Raphson root finding method.
+ Parameters:
+ f : function, the equation f(x) = 0
+ f_prime : function, derivative of f(x)
+ x0 : float, initial guess
+ tol : float, stopping tolerance
+    Returns:
+    sol : float, approximate root
+    total_iter : int, number of iterations performed
+    roots : list, values of x at each iteration
+    """
+
+ x = x0
+ roots = [x]
+
+ while True:
+ x_new = x - f(x) / f_prime(x)
+ roots.append(x_new)
+ if abs(x_new - x) < tol:
+ sol = x_new
+ total_iter = len(roots)
+ return sol, total_iter, roots
+
+ x = x_new
+
+# Example: solve x^2 - 2 = 0
+f = lambda x: x**2 - 2
+f_prime = lambda x: 2*x
+
+sol, total_iter, roots = newton_raphson(f, f_prime, x0=1.4)
+
+actual = np.sqrt(2)
+
+print("Solution =", sol)
+print("No. iterations:", total_iter)
+print("Iteration history:", roots)
+print("Actual sqrt(2) =", actual)
+```
+## Modified Secant
+A possible issue with the Newton-Raphson method is that we are required to know the derivative of the function we are finding the root for. Sometimes this may be extremely difficult or impossible to find analytically. Thus we can use the modified secant method. A numerical variant of the Newton-Raphson.
+
+This method uses the finite derivative to guess where the root lies. We can then iterate this guessing process until we're within tolerance.
+
+$$
+f'(x_i) \simeq \frac{f(x_{i-1})-f(x_i)}{x_{i-1}-x_i}
+$$
+Then substituting this in equation (3.1) we get the equation for the secant method:
+
+$$
+x_{i+1} = x_i - \frac{f(x_i) (x_{i-1}-x_i)}{f(x_{i-1})-f(x_i)} \tag {3.2}
+$$
+
+We can then *modify* the equation: instead of choosing two arbitrary points, we perturb the current guess by a small fraction $\delta$ of the independent variable, replacing $x_{i-1}$ with $x_i+\delta x_i$:
+$$
+x_{i+1} = x_i - \frac{f(x_i)\,(x_i+\delta x_i - x_i)}{f(x_i+\delta x_i)-f(x_i)}
+$$
+$$
+\boxed{x_{i+1} = x_i - \frac{f(x_i)\,\delta x_i}{f(x_i+\delta x_i)-f(x_i)}} \tag{3.3}
+$$
+Note that if $\delta$ is too small, the method can fail through round-off error caused by the subtractive cancellation in the denominator of eqn. 3.3. If $\delta$ is too large, the method becomes inefficient or even divergent.
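+
+A minimal sketch of the modified secant iteration (eqn. 3.3), with the perturbation fraction $\delta$ as a tunable parameter:
+```python
+def modified_secant(f, x0, delta=1e-6, tol=1e-8, max_iter=100):
+    # Iterate eqn. 3.3 until successive guesses agree to within tol
+    x = x0
+    for _ in range(max_iter):
+        dx = delta*x if x != 0 else delta    # perturbation delta*x_i
+        x_new = x - f(x)*dx / (f(x + dx) - f(x))
+        if abs(x_new - x) < tol:
+            return x_new
+        x = x_new
+    raise ValueError("Modified secant did not converge")
+
+f = lambda x: x**2 - 2
+print(modified_secant(f, 1.5))   # should approach sqrt(2)
+```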
+
+## Issues with open methods
+
+Open methods may also have their problems. Let's consider the following function: $f(x)=3.2\tan^{-1}(x-4.3)$. If we apply the Newton-Raphson method to find the root of this function, we can see that the results diverge.
+
+{Divergence Demo}
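+A minimal sketch of such a demo (the starting point and iteration count are chosen for illustration):
+```python
+import numpy as np
+
+f = lambda x: 3.2*np.arctan(x - 4.3)
+f_prime = lambda x: 3.2/(1 + (x - 4.3)**2)
+
+x = 6.5   # initial guess far from the root at x = 4.3
+for i in range(6):
+    x = x - f(x)/f_prime(x)   # Newton-Raphson step
+    print(f"iteration {i+1}: x = {x:.4g}")
+
+# The tangent lines overshoot further each step, so |x - 4.3| grows:
+# the iteration diverges instead of converging.
+```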
+
+
+## Problem 2: Pipe friction
+The flow of fluids through pipes is a common problem in engineering. In mechanical and aerospace fields, it often arises in applications such as liquid and gas transport for cooling systems. The resistance to this flow is described by a dimensionless parameter known as the friction factor. For turbulent flow, the Colebrook equation provides a method to determine this factor.
+$$
+0=\frac{1}{\sqrt{f}}+2\log_{10}\!\left(\frac{\epsilon}{3.7D}+\frac{2.51}{Re\sqrt{f}}\right) \tag{2.2}
+$$
+where
+$$
+Re=\frac{\rho VD}{\mu}
+$$
+Suppose you're working on a carburetor, which mixes air and fuel before it enters the engine. You are tasked with finding the friction factor of the pipes feeding the carburetor. A lab technician measures and provides the following tube data:
+$D=0.005\ m$
+$\epsilon=0.0015\ mm$
+$V=40\ m/s$
+We know we are working with air and therefore:
+$\rho= 1.23 \frac{kg}{m^3}$
+$\mu=1.79\times10^{-5}\ N\,s/m^2$
+
+If we define a function $g(f)$ equal to the equation above, the root of this function will give us the friction factor of this tube.
+
+a) Plot the function of $g(f)$ to select a good initial guess.
+b) Determine the root by using the Newton-Raphson method.
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Given parameters
+rho = 1.23 # kg/m^3
+mu = 1.79e-5 # Pa·s (N·s/m^2)
+D = 0.005 # m
+V = 40 # m/s
+e = 0.0015 / 1000 # convert mm to m
+
+# Reynolds number
+Re = rho * V * D / mu
+
+# Define g(f) function (Colebrook residual)
+g = lambda f: 1/np.sqrt(f) + 2.0 * np.log10((e/(3.7*D)) + (2.51/(Re*np.sqrt(f))))
+
+# Range of f values to check
+f_values = np.linspace(0.008, 0.08, 500)
+g_values = g(f_values)
+
+# Plot
+plt.figure(figsize=(8,6))
+plt.plot(f_values, g_values)
+plt.axhline(0, color='red', linestyle='--') # root line
+plt.grid(True)
+plt.xlabel('f')
+plt.ylabel('g(f)')
+plt.title('Colebrook Residual Function g(f)')
+plt.show()
+```
+
+```python
+# Analytical derivative of g(f), from the chain rule on the Colebrook residual
+g_prime = lambda f: -1/(2*f**1.5) \
+    - (2.51/(Re*np.log(10)*f**1.5)) / (e/(3.7*D) + 2.51/(Re*np.sqrt(f)))
+
+x0 = 0.03   # initial guess read off the plot above
+
+sol, total_iter, roots = newton_raphson(g, g_prime, x0)
+print("Friction factor:", sol)   # roughly 0.029 for these conditions
+```
+
diff --git a/tutorials/module_3/2_3_Systems_of_Non-linear_equations.md b/tutorials/module_3/2_3_Systems_of_Non-linear_equations.md
new file mode 100644
index 0000000..05781be
--- /dev/null
+++ b/tutorials/module_3/2_3_Systems_of_Non-linear_equations.md
@@ -0,0 +1,175 @@
+# Systems of Non-linear equations
+So far we've solved system of linear equations where the equations come in the form:
+$$
+f(x) = a_1x_1 + a_2x_2 + \ldots + a_nx_n-b = 0
+$$
+where the $a$'s and $b$ are constants. Equations that do not fit this format are called *nonlinear* equations. For example:
+
+$$
+x^2 + xy = 10
+$$
+$$
+y+3xy^2=57
+$$
+These are two nonlinear equations with two unknowns. To solve this system we can no longer rely on the matrix methods of the previous lectures.
+
+So far we have found the roots of single equations and solved systems of *linear* equations. But what if we need the values that simultaneously satisfy a whole system of nonlinear equations?
+
+$$
+ \begin{align*}
+f_{1}(x_{1},x_{2},\ldots ,x_{n}) &= 0 \\
+f_{2}(x_{1},x_{2},\ldots ,x_{n}) &= 0 \\
+&\ \vdots \\
+f_{n}(x_{1},x_{2},\ldots ,x_{n}) &= 0
+\end{align*}
+$$
+
+
+
+We've applied the Newton-Raphson method to a single equation; the same idea extends to systems. The scalar derivative is replaced by the matrix of partial derivatives of each function (the **Jacobian**), and each iteration solves a linear system for the update. For two equations $u(x,y)=0$ and $v(x,y)=0$, the updates are:
+$$
+x_{i+1} = x_i - \frac{u_i\,\frac{\partial v_i}{\partial y} - v_i\,\frac{\partial u_i}{\partial y}}{\frac{\partial u_i}{\partial x}\frac{\partial v_i}{\partial y} - \frac{\partial u_i}{\partial y}\frac{\partial v_i}{\partial x}}, \qquad
+y_{i+1} = y_i - \frac{v_i\,\frac{\partial u_i}{\partial x} - u_i\,\frac{\partial v_i}{\partial x}}{\frac{\partial u_i}{\partial x}\frac{\partial v_i}{\partial y} - \frac{\partial u_i}{\partial y}\frac{\partial v_i}{\partial x}}
+$$
+where the common denominator is the determinant of the Jacobian. These are exactly the formulas used in the solution code below.
+
+
+
+## Problem 1: Newton-Raphson for a Nonlinear system
+
+Use the multiple-equation Newton-Raphson method to determine the roots of the following system of non-linear equations
+$$
+\begin{cases}
+u(x,y) = x^2 + xy - 10 = 0 \\
+v(x,y) = y + 3xy^2 - 57 = 0
+\end{cases}
+$$
+after
+ a) 1 iteration
+ b) Write a python function that iterates until the tolerance is within 0.002
+
+Note the solution is $x = 2$ and $y = 3$. Initiate the computation with guesses of $x$ = 1.5 and $y$ = 3.5.
+
+Solution:
+a)
+```python
+u = lambda x, y: x**2 + x*y - 10
+v = lambda x, y: y + 3*x*y**2 - 57
+
+# Initial guesses
+x_0 = 1.5
+y_0 = 3.5
+
+# Evaluate partial derivatives at initial guesses
+dudx = 2*x_0+y_0
+dudy = x_0
+
+dvdx = 3*y_0**2
+dvdy = 1 + 6*x_0*y_0
+
+# Find determinant of the Jacobian
+det = dudx * dvdy - dudy * dvdx
+
+# Values of the functions evaluated at the initial guesses
+u_0 = u(x_0,y_0)
+v_0 = v(x_0,y_0)
+
+# Substitute into newton-raphson equation
+x_1 = x_0 - (u_0*dvdy-v_0*dudy) / det
+y_1 = y_0 - (v_0*dudx-u_0*dvdx) / det
+
+print(x_1)
+print(y_1)
+```
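+Running this one-step update should print approximately $x_1 = 2.03603$ and $y_1 = 2.84387$, already much closer to the true root $(2, 3)$ than the initial guesses.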
+
+b)
+```python
+import numpy as np
+
+def newton_raphson_system(f, df, g0, tol=1e-8, max_iter=50, verbose=False):
+ """
+ Newton-Raphson solver for a 2x2 system of nonlinear equations.
+
+ Parameters
+ ----------
+ f : array of callables
+ f[i](x, y) should evaluate the i-th equation.
+ df : 2D array of callables
+ df[i][j](x, y) is the partial derivative of f[i] wrt variable j.
+ g0 : array-like
+ Initial guess [x0, y0].
+ tol : float
+ Convergence tolerance.
+ max_iter : int
+ Maximum iterations.
+ verbose : bool
+ If True, prints iteration details.
+
+ Returns
+ -------
+ (x, y) : solution vector
+ iters : number of iterations
+ """
+ x, y = g0
+
+ for k in range(1, max_iter + 1):
+ # Evaluate system of equations
+ F1 = f[0](x, y)
+ F2 = f[1](x, y)
+
+ # Evaluate Jacobian entries
+ J11 = df[0][0](x, y)
+ J12 = df[0][1](x, y)
+ J21 = df[1][0](x, y)
+ J22 = df[1][1](x, y)
+
+ # Determinant
+ det = J11 * J22 - J12 * J21
+ if det == 0:
+ raise ZeroDivisionError("Jacobian is singular")
+
+ # Solve for updates using 2x2 inverse
+ dx = ( F1*J22 - F2*J12) / det
+        dy = ( F2*J11 - F1*J21) / det
+
+ # Update variables
+ x_new = x - dx
+ y_new = y - dy
+
+ if verbose:
+ print(f"iter {k}: x={x_new:.6f}, y={y_new:.6f}, "
+ f"|dx|={abs(dx):.2e}, |dy|={abs(dy):.2e}")
+
+ # Convergence check
+ if max(abs(dx), abs(dy)) < tol:
+ return np.array([x_new, y_new]), k
+
+ x, y = x_new, y_new
+
+ raise ValueError("Newton-Raphson did not converge")
+
+
+# -------------------
+# Example usage
+# -------------------
+
+# Define system: u(x,y), v(x,y)
+u = lambda x, y: x**2 + x*y - 10
+v = lambda x, y: y + 3*x*y**2 - 57
+
+# Partial derivatives
+dudx = lambda x, y: 2*x + y
+dudy = lambda x, y: x
+dvdx = lambda x, y: 3*y**2
+dvdy = lambda x, y: 1 + 6*x*y
+
+f = np.array([u, v])
+df = np.array([[dudx, dudy],
+ [dvdx, dvdy]])
+
+g0 = np.array([1.5, 3.5])
+
+sol, iters = newton_raphson_system(f, df, g0, tol=1e-10, verbose=True)
+print("Solution:", sol, "in", iters, "iterations")
+
+```
+
+
+
+
diff --git a/tutorials/module_3/2_roots_optimization.md b/tutorials/module_3/2_roots_optimization.md
deleted file mode 100644
index 49c6273..0000000
--- a/tutorials/module_3/2_roots_optimization.md
+++ /dev/null
@@ -1,500 +0,0 @@
-# Root Finding Methods
-
-By now you've learned the quadratic formula to find the roots of a second degree polynomial.
-$$
-x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}
-$$
-This works nicely if we're given the function $f(x)$. However, how would you solve for $x$ if you were given experimental data? In this section we will cover bracketing and open methods to find roots of a function.
-
-Let us take a step back and think *algorithmically*. To find a root numerically, we can implement a search algorithm. Let's consider a small phone book - a database of names and numbers sorted alphabetically by last name. Our goal is to find the phone number of James Watt (our root). We could make an algorithm that starts at the the first page, checks to see if the current name equals our target (Watt), if it doesn't, check the next entry. This is method is called *incremental search*. As you may see, this can take very long if the database is large.
-
-> [!NOTE] Example Phonebook Database Entries
-> **A** – Archimedes
-> **B** – Bernoulli
-> **C** – Curie
-> **D** – Darwin
-> **E** – Einstein
-> **F** – Feynman
-> **G** – Galileo
-> **H** – Hubble
-> **I** – Ibn Sina
-> **J** – Joule
-> **K** – Kelvin
-> **L** – Leonardo
-> **M** – Maxwell
-> **N** – Newton
-> **O** – Oppenheimer
-> **P** – Pascal
-> **Q** – Quetelet
-> **R** – Röntgen
-> **S** – Schrödinger
-> **U** – Urey
-> **V** – Volta
-> **W** – Watt
-> **X** – Xenophon
-> **Y** – Yukawa
-> **Z** – Zuse
-
-Another approach may be to index the book and jump straight to the 'W' names; however, this requires two steps: looking up the index, then jumping to the right page.
-
-
-# Bracketing Method
-## Incremental Search
-The incremental search method is the simplest of the bracketing methods. The basic idea is to start from an initial point and move forward in small increments along the function. At each step, the sign of the function value is checked. If a change of sign occurs between two consecutive points, then a root must lie between them (by the **Intermediate Value Theorem**). This gives us a bracket — an interval that contains the root — which can then be refined using more powerful methods such as bisection or false position.
-
-Although incremental search is not efficient for accurately finding roots, it is useful for _detecting the presence of roots_ and identifying approximate intervals where they exist. Once such an interval is located, other methods can be applied to converge quickly to the actual root. In this sense, incremental search often serves as a **pre-processing step** for bracketing or open methods, especially when the root’s location is not known in advance.
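-
-A minimal sketch of the idea, assuming a callable `f` and a user-chosen step size `dx` (both hypothetical choices); it returns the first bracket found, which a bracketing method can then refine:
-```python
-import numpy as np
-
-def incremental_search(f, x_start, x_end, dx):
-    # march along f in steps of dx and return the first
-    # sub-interval whose endpoints have opposite signs
-    x = x_start
-    while x + dx <= x_end:
-        if np.sign(f(x)) != np.sign(f(x + dx)):
-            return x, x + dx   # bracket containing a root
-        x += dx
-    return None                # no sign change detected
-
-f = lambda x: x**2 - 2
-print(incremental_search(f, 0.0, 2.0, 0.1))
-```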
-## Bisection
-This method starts by sectioning the database into two halves. We can do this with a phone book by simply opening it in the middle. In our data set, we find the middle entry to be 'M - Maxwell'. Since all names are sorted alphabetically, we can conclude that our target lies in the second half, essentially eliminating the first half. Doing this a second time gives us our new midpoint 'S – Schrödinger' and a third time gets us to our target 'W - Watt'. In only 3 iterations we have reached our target. This is a huge improvement: 22 iterations for incremental search versus our new 3.
-
-This exact same concept can be applied to finding roots of functions. Instead of searching for strings we are searching for floats, which is a lot easier for computers to do. Take a moment to think about how you would write pseudo-code to program this.
-
-> The **Intermediate Value Theorem** says that if $f(x)$ is a continuous function between $a$ and $b$, and $\text{sign}(f(a)) \neq \text{sign}(f(b))$, then there must be a $c$ such that $a < c < b$ and $f(c) = 0$.
-
-Let's consider a continuous function $f(x)$ with an unknown root $x_r$. Using the intermediate value theorem, we can bracket the root with points $x_{lower}$ and $x_{upper}$ such that $f(x_{lower})$ and $f(x_{upper})$ have opposite signs. This idea is visualized in the graph below.
-<img
- style="display: block;
- margin-left: auto;
- margin-right: auto;
- width: 50%;"
- src="https://pythonnumericalmethods.studentorg.berkeley.edu/_images/19.03.01-Intermediate-value-theorem.png"
- alt="Intermediate Value Theorem">
-Once we bisect the interval, we set the new predicted root to the midpoint. We then compare the two halves and check which one has a sign change between its bounds. Once the half containing the sign change has been identified, we repeat the process until we close in on the root.
-<img
- style="display: block;
- margin-left: auto;
- margin-right: auto;
- width: 50%;"
- src="bisection.png"
- alt="Bisection">
-As the figure shows, the predicted root $x_r$ gets closer to the actual root with each iteration. In theory this process could continue indefinitely; in practice, finite computer precision introduces error into the result. The work-around is to set a tolerance on the accuracy. As engineers it is our duty to determine what the allowable deviation is.
-
-So let's take a look at how we can write this in python.
-```python
-import numpy as np
-
-def my_bisection(f, x_l, x_u, tol):
-    # approximates a root x_r of f bounded by x_l and x_u
-    # to within tolerance | f(x_r) | < tol, with x_r the
-    # midpoint between x_l and x_u. Recursive implementation.
-
-    # check if x_l and x_u bound a root
-    if np.sign(f(x_l)) == np.sign(f(x_u)):
-        raise Exception(
-            "The scalars x_l and x_u do not bound a root")
-
-    # get midpoint
-    x_r = (x_l + x_u)/2
-
-    if np.abs(f(x_r)) < tol:
-        # stopping condition: report x_r as the root
-        return x_r
-    elif np.sign(f(x_l)) == np.sign(f(x_r)):
-        # the sign change is in the upper half.
-        # Make recursive call with x_l = x_r
-        return my_bisection(f, x_r, x_u, tol)
-    else:
-        # the sign change is in the lower half.
-        # Make recursive call with x_u = x_r
-        return my_bisection(f, x_l, x_r, tol)
-```
-
-### Problem 1
-Call the *my_bisection* function to find the root of $f(x) = x^2 - 2$ at two different tolerances.
-```python
-f = lambda x: x**2 - 2
-
-r1 = my_bisection(f, 0, 2, 0.1)
-print("r1 =", r1)
-r01 = my_bisection(f, 0, 2, 0.01)
-print("r01 =", r01)
-
-print("f(r1) =", f(r1))
-print("f(r01) =", f(r01))
-```
-
-
-### Problem 2
-Write an algorithm that uses a combination of bisection and incremental search to find the multiple roots in the first 8 minutes of the data set. Start by plotting the data; a sketch of one approach follows the plotting code.
-```python
-import pandas as pd
-import matplotlib.pyplot as plt
-
-# Read the CSV file into a DataFrame called data
-data = pd.read_csv('your_file.csv')
-
-# Plot all numeric columns
-data.plot()
-
-# Add labels and show plot
-plt.xlabel("Time (min)")
-plt.ylabel("Strain (kPa)")
-plt.title("Data with multiple roots")
-plt.legend()
-plt.show()
-```
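-
-One possible solution sketch, assuming the file provides numeric `time` and `strain` columns (hypothetical names) and reusing `my_bisection` from above on an interpolant of the data:
-```python
-import numpy as np
-from scipy.interpolate import interp1d
-
-t = data['time'].to_numpy()
-y = data['strain'].to_numpy()
-
-# interpolate so bisection can evaluate between samples
-f = interp1d(t, y)
-
-roots = []
-for i in range(len(t) - 1):
-    if t[i] > 8:                                # first 8 minutes only
-        break
-    if np.sign(y[i]) != np.sign(y[i + 1]):      # incremental search
-        roots.append(my_bisection(f, t[i], t[i + 1], 1e-6))
-
-print("Approximate root locations (min):", roots)
-```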
-
-## False Position
-
-# Open Methods
-So far we have looked at methods that require us to bracket a root before finding it. However, let's think outside the box and ask whether we can find a root with only one guess. Can we improve that guess if we have some knowledge of the function itself? The answer - derivatives.
-
-## Newton-Raphson
-Possibly the most widely used root-finding method in practice is the Newton-Raphson method. Using the initial guess, we draw a tangent line at that point on the curve. Where the tangent intercepts the x-axis is usually an improved estimate of the root. This is more intuitive to see graphically.
-<img
- style="display: block;
- margin-left: auto;
- margin-right: auto;
- width: 50%;"
- src="https://pythonnumericalmethods.studentorg.berkeley.edu/_images/19.04.01-Newton-step.png"
- alt="Newton-Raphson illustration">
-Given the function $f(x)$ we make our first guess at $x_0$. Using our knowledge of the function at that point, we calculate the derivative to obtain the tangent. Re-arranging the tangent's equation to solve for $x$ when $f(x)=0$ gives our first improved guess, $x_1$.
-
-As seen in the figure above, the derivative of the function equals the slope of this tangent:
-$$
-f'(x_0) = \frac{f(x_0)-0}{x_0-x_1}
-$$
-Re-arranging for our new guess we get:
-$$
-x_{1} = x_0 - \frac{f(x_0)}{f'(x_0)}
-$$
-Since $x_0$ is our current guess and $x_1$ is our next guess, we can write these symbolically as $x_i$ and $x_{i+1}$ respectively. This gives us the *Newton-Raphson formula*.
-$$
-\boxed{x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}} \tag{3.1}
-$$
-### Problem 1
-From experimental data we extrapolated a function $f$. Write a python function called *newtonraphson* which has the following input parameters:
- - `f` - function
- - `df` - first derivative of the function
- - `x0` - initial guess
- - `tol` - the tolerance of error
-and use it to find the root of the function $f(x)=e^{\left(x-6.471\right)}\cdot1.257-0.2$ with an initial guess of 8.
-
-Using Recursion
-```python
-import numpy as np
-
-def newtonraphson(f, df, x0, tol):
-    # Output of this function is the estimated root of f
-    # using the Newton-Raphson method (recursive implementation)
-    if abs(f(x0)) < tol:
-        return x0
-    else:
-        # take one Newton-Raphson step and recurse on the improved guess
-        return newtonraphson(f, df, x0 - f(x0)/df(x0), tol)
-
-f = lambda x: np.exp(x - 6.471)*1.257 - 0.2
-f_prime = lambda x: np.exp(x - 6.471)*1.257
-
-print(newtonraphson(f, f_prime, 8, 1e-6))
-```
-
-Using Iteration
-```python
-import numpy as np
-
-def newton_raphson(f, f_prime, x0, tol=1e-8):
- """
- Newton-Raphson root finding method.
- Parameters:
- f : function, the equation f(x) = 0
- f_prime : function, derivative of f(x)
- x0 : float, initial guess
- tol : float, stopping tolerance
-    Returns:
-        sol : float, approximate root
-        total_iter : int, number of stored iterates
-        roots : list, values of x at each iteration
- """
-
- x = x0
- roots = [x]
-
- while True:
- x_new = x - f(x) / f_prime(x)
- roots.append(x_new)
- if abs(x_new - x) < tol:
- sol = x_new
- total_iter = len(roots)
- return sol, total_iter, roots
-
- x = x_new
-
-# Example: solve x^2 - 2 = 0
-f = lambda x: x**2 - 2
-f_prime = lambda x: 2*x
-
-sol, total_iter, root = newton_raphson(f, f_prime, x0=1.4)
-
-actual=np.sqrt(2)
-
-print("Solution=", sol)
-print("No. iterations:", total_iter)
-print("Iteration history:", root)
-print("Actual sqrt(2) =", np.sqrt(2))
-```
-## Modified Secant
-A possible issue with the Newton-Raphson method is that we are required to know the derivative of the function whose root we seek. Sometimes this may be extremely difficult or impossible to find analytically. In that case we can use the modified secant method, a numerical variant of Newton-Raphson.
-
-This method uses a finite-difference approximation of the derivative to guess where the root lies. We then iterate this guessing process until we're within tolerance.
-
-$$
-f'(x_i) \simeq \frac{f(x_{i-1})-f(x_i)}{x_{i-1}-x_i}
-$$
-Substituting this into equation (3.1) gives the secant method:
-
-$$
-x_{i+1} = x_i - \frac{f(x_i)\,(x_{i-1}-x_i)}{f(x_{i-1})-f(x_i)} \tag{3.2}
-$$
-
-We can then *modify* the method: instead of choosing a second arbitrary point $x_{i-1}$, we perturb the independent variable by a small value $\delta x_i$, so that $x_{i-1}-x_i$ becomes $\delta x_i$. This gives the modified secant method:
-$$
-\boxed{x_{i+1} = x_i - \frac{f(x_i)\,\delta x_i}{f(x_i+\delta x_i)-f(x_i)}} \tag{3.3}
-$$
-Note that if $\delta$ is too small, the method can fail due to round-off error from the subtractive cancellation in the denominator of eqn. 3.3. If $\delta$ is too large, the method becomes inefficient or even divergent.
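-
-A minimal sketch of equation 3.3 in code; the perturbation fraction `delta`, the stopping rule, and the iteration cap are all tunable assumptions:
-```python
-def modified_secant(f, x0, delta=1e-6, tol=1e-8, max_iter=50):
-    # approximate f'(x) with a small perturbation delta*x
-    # instead of requiring the derivative analytically
-    x = x0
-    for _ in range(max_iter):
-        dx = delta*x if x != 0 else delta
-        x_new = x - f(x)*dx / (f(x + dx) - f(x))
-        if abs(x_new - x) < tol:
-            return x_new
-        x = x_new
-    raise ValueError("Modified secant did not converge")
-
-f = lambda x: x**2 - 2
-print(modified_secant(f, 1.5))   # should approach sqrt(2)
-```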
-
-## Issues with open methods
-
-Open methods may also have their problems. Consider the function $f(x)=3.2\tan^{-1}(x-4.3)$: if we apply the Newton-Raphson method to find the root of this function with a poor starting guess, the results diverge.
-
-{Divergence Demo}
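-
-A minimal sketch of such a demo; the starting guess is an assumption chosen to trigger divergence (guesses close enough to the root at $x = 4.3$ converge instead):
-```python
-import numpy as np
-
-f = lambda x: 3.2*np.arctan(x - 4.3)
-f_prime = lambda x: 3.2/(1 + (x - 4.3)**2)
-
-x = 6.5   # start far from the root at x = 4.3
-for i in range(6):
-    x = x - f(x)/f_prime(x)
-    print(i, x)   # each iterate lands further from the root
-```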
-
-
-## Problem 2: Pipe friction
-The flow of fluids through pipes is a common problem in engineering. In mechanical and aerospace fields, it often arises in applications such as liquid and gas transport for cooling systems. The resistance to this flow is described by a dimensionless parameter known as the friction factor. For turbulent flow, the Colebrook equation provides a method to determine this factor.
-$$
-0=\frac{1}{\sqrt{f}}+2\log_{10}\!\left(\frac{\epsilon}{3.7D}+\frac{2.51}{Re\sqrt{f}}\right) \tag{2.2}
-$$
-where
-$$
-Re=\frac{\rho VD}{\mu}
-$$
-Suppose you're working on a carburetor, which mixes air and fuel before combustion. You are tasked with finding the friction factor of the pipes feeding the carburetor. A lab technician measures and provides the following data for the tube:
-$D=0.005 m$
-$\epsilon=0.0015mm$
-$V=40 m/s$
-We know we are working with air and therefore:
-$\rho= 1.23 \frac{kg}{m^3}$
-$\mu=1.79\times10^{-5}\ \mathrm{N\,s/m^2}$
-
-If we define a function $g(f)$ equal to the equation above, the root of this function gives us the friction factor of the tube.
-
-a) Plot the function of $g(f)$ to select a good initial guess.
-b) Determine the root by using the Newton-Raphson method.
-
-```python
-import numpy as np
-import matplotlib.pyplot as plt
-
-# Given parameters
-rho = 1.23 # kg/m^3
-mu = 1.79e-5 # Pa·s (N·s/m^2)
-D = 0.005 # m
-V = 40 # m/s
-e = 0.0015 / 1000 # convert mm to m
-
-# Reynolds number
-Re = rho * V * D / mu
-
-# Define g(f) function (Colebrook residual)
-g = lambda f: 1/np.sqrt(f) + 2.0 * np.log10((e/(3.7*D)) + (2.51/(Re*np.sqrt(f))))
-
-# Range of f values to check
-f_values = np.linspace(0.008, 0.08, 500)
-g_values = g(f_values)
-
-# Plot
-plt.figure(figsize=(8,6))
-plt.plot(f_values, g_values)
-plt.axhline(0, color='red', linestyle='--') # root line
-plt.grid(True)
-plt.xlabel('f')
-plt.ylabel('g(f)')
-plt.title('Colebrook Residual Function g(f)')
-plt.show()
-```
-
-```python
-# Analytical derivative of g(f), with A = e/(3.7*D) and B = 2.51/Re
-A = e / (3.7 * D)
-B = 2.51 / Re
-g_prime = lambda f: -0.5 * f**-1.5 * (1 + 2*B/(np.log(10)*(A + B*f**-0.5)))
-
-x0 = 0.03  # initial guess read off the plot above
-
-sol, total_iter, root = newton_raphson(g, g_prime, x0)
-print("Friction factor:", sol)
-```
-
-
-# Systems of Non-linear equations
-So far we've solved systems of linear equations, where the equations come in the form:
-$$
-f(x) = a_1x_1 + a_2x_2 + \ldots + a_nx_n-b = 0
-$$
-where the $a$'s and $b$ are constants. Equations that do not fit this format are called *non linear* equations. For example:
-
-$$
-x^2 + xy = 10
-$$
-$$
-y+3xy^2=57
-$$
-these are two nonlinear equations in two unknowns. To solve this system, we need to extend our root-finding methods to several variables.
-
-We have found roots of single equations and solved systems of linear equations; but what if a system of equations depends on its unknowns nonlinearly? In general, such a system has the form
-
-$$
- \begin{align*}
-f_{1}(x_{1},x_{2},\ldots ,x_{n}) &= 0 \\
-f_{2}(x_{1},x_{2},\ldots ,x_{n}) &= 0 \\
-&\ \vdots \\
-f_{n}(x_{1},x_{2},\ldots ,x_{n}) &= 0
-\end{align*}
-$$
-
-
-
-We've applied the Newton-Raphson method to single equations; the same idea carries over to systems by replacing the single derivative with the partial derivatives that make up the Jacobian.
-
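-For two equations $u(x,y)=0$ and $v(x,y)=0$, this yields the following update formulas (a statement of the standard two-equation Newton-Raphson result, with every term evaluated at the current iterate $(x_i, y_i)$), which are used in the problem below:
-$$
-x_{i+1} = x_i - \frac{u\,\frac{\partial v}{\partial y} - v\,\frac{\partial u}{\partial y}}{\frac{\partial u}{\partial x}\frac{\partial v}{\partial y} - \frac{\partial u}{\partial y}\frac{\partial v}{\partial x}}, \qquad
-y_{i+1} = y_i - \frac{v\,\frac{\partial u}{\partial x} - u\,\frac{\partial v}{\partial x}}{\frac{\partial u}{\partial x}\frac{\partial v}{\partial y} - \frac{\partial u}{\partial y}\frac{\partial v}{\partial x}}
-$$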
-
-
-## Problem 1: Newton-Raphson for a Nonlinear system
-
-Use the multiple-equation Newton-Raphson method to determine the roots of the following system of non-linear equations
-$$
-\begin{cases}
-u(x,y) = x^2 + xy - 10 = 0 \\
-v(x,y) = y + 3xy^2 - 57 = 0
-\end{cases}
-$$
-after
- a) one iteration, and
- b) by writing a Python function that iterates until the error tolerance is within 0.002.
-
-Note the solution is $x = 2$ and $y = 3$. Initiate the computation with guesses of $x = 1.5$ and $y = 3.5$.
-
-Solution:
-a)
-```python
-u = lambda x, y: x**2 + x*y - 10
-v = lambda x, y: y + 3*x*y**2 - 57
-
-# Initial guesses
-x_0 = 1.5
-y_0 = 3.5
-
-# Evaluate partial derivatives at initial guesses
-dudx = 2*x_0+y_0
-dudy = x_0
-
-dvdx = 3*y_0**2
-dvdy = 1 + 6*x_0*y_0
-
-# Find determinant of the Jacobian
-det = dudx * dvdy - dudy * dvdx
-
-# Values of the functions evaluated at the initial guess
-u_0 = u(x_0,y_0)
-v_0 = v(x_0,y_0)
-
-# Substitute into the Newton-Raphson update equations
-x_1 = x_0 - (u_0*dvdy-v_0*dudy) / det
-y_1 = y_0 - (v_0*dudx-u_0*dvdx) / det
-
-print(x_1)
-print(y_1)
-```
-
-b)
-```python
-import numpy as np
-
-def newton_raphson_system(f, df, g0, tol=1e-8, max_iter=50, verbose=False):
- """
- Newton-Raphson solver for a 2x2 system of nonlinear equations.
-
- Parameters
- ----------
- f : array of callables
- f[i](x, y) should evaluate the i-th equation.
- df : 2D array of callables
- df[i][j](x, y) is the partial derivative of f[i] wrt variable j.
- g0 : array-like
- Initial guess [x0, y0].
- tol : float
- Convergence tolerance.
- max_iter : int
- Maximum iterations.
- verbose : bool
- If True, prints iteration details.
-
- Returns
- -------
- (x, y) : solution vector
- iters : number of iterations
- """
- x, y = g0
-
- for k in range(1, max_iter + 1):
- # Evaluate system of equations
- F1 = f[0](x, y)
- F2 = f[1](x, y)
-
- # Evaluate Jacobian entries
- J11 = df[0][0](x, y)
- J12 = df[0][1](x, y)
- J21 = df[1][0](x, y)
- J22 = df[1][1](x, y)
-
- # Determinant
- det = J11 * J22 - J12 * J21
- if det == 0:
- raise ZeroDivisionError("Jacobian is singular")
-
- # Solve for updates using 2x2 inverse
- dx = ( F1*J22 - F2*J12) / det
- dy = ( F2*J11 - F1*J21) / det
-
- # Update variables
- x_new = x - dx
- y_new = y - dy
-
- if verbose:
- print(f"iter {k}: x={x_new:.6f}, y={y_new:.6f}, "
- f"|dx|={abs(dx):.2e}, |dy|={abs(dy):.2e}")
-
- # Convergence check
- if max(abs(dx), abs(dy)) < tol:
- return np.array([x_new, y_new]), k
-
- x, y = x_new, y_new
-
- raise ValueError("Newton-Raphson did not converge")
-
-
-# -------------------
-# Example usage
-# -------------------
-
-# Define system: u(x,y), v(x,y)
-u = lambda x, y: x**2 + x*y - 10
-v = lambda x, y: y + 3*x*y**2 - 57
-
-# Partial derivatives
-dudx = lambda x, y: 2*x + y
-dudy = lambda x, y: x
-dvdx = lambda x, y: 3*y**2
-dvdy = lambda x, y: 1 + 6*x*y
-
-f = np.array([u, v])
-df = np.array([[dudx, dudy],
- [dvdx, dvdy]])
-
-g0 = np.array([1.5, 3.5])
-
-sol, iters = newton_raphson_system(f, df, g0, tol=1e-10, verbose=True)
-print("Solution:", sol, "in", iters, "iterations")
-
-```
-
-
-
-
diff --git a/tutorials/module_3/3_0_numerical_calculus.md b/tutorials/module_3/3_0_numerical_calculus.md
new file mode 100644
index 0000000..37bfd63
--- /dev/null
+++ b/tutorials/module_3/3_0_numerical_calculus.md
@@ -0,0 +1,3 @@
+[[3_1_numerical_differentiation]]
+[[3_2_Advanced_Derivatives]]
+
diff --git a/tutorials/module_3/1_numerical_differentiation.md b/tutorials/module_3/3_1_numerical_differentiation.md
index 441f838..2158c3b 100644
--- a/tutorials/module_3/1_numerical_differentiation.md
+++ b/tutorials/module_3/3_1_numerical_differentiation.md
@@ -90,12 +90,3 @@ Compare your results with the analytical solution at $x=1.2$. Comment on how the
```
-
----
-# Advanced Derivatives
-
-## High-Accuracy Differentiation Formulas
-## Richardson Extrapolation
-## Derivative of Unequally Spaced Data
-## Partial Derivatives
-
diff --git a/tutorials/module_3/3_2_Advanced_Derivatives.md b/tutorials/module_3/3_2_Advanced_Derivatives.md
new file mode 100644
index 0000000..d8fb4db
--- /dev/null
+++ b/tutorials/module_3/3_2_Advanced_Derivatives.md
@@ -0,0 +1,7 @@
+# Advanced Derivatives
+
+## High-Accuracy Differentiation Formulas
+## Richardson Extrapolation
+## Derivative of Unequally Spaced Data
+## Partial Derivatives
+
diff --git a/tutorials/module_3/4_numerical_integration.md b/tutorials/module_3/3_3_numerical_integration.md
index 4448985..5e5bf91 100644
--- a/tutorials/module_3/4_numerical_integration.md
+++ b/tutorials/module_3/3_3_numerical_integration.md
@@ -205,55 +205,4 @@ print(f"Error (3-point): {err3:.2e}")
-## More Integrals
-
-### Newton-Cotes Algorithms for Equations
-### Adaptive Quadrature
-
-
-# Problems
-
-## Numerical Integration to Compute Work
-In physics we've learned that work is computed as
-$$
-Work = force * distance
-$$
-This can be written in its integral form:
-$$
-W = \int{F(x)dx}
-$$
-If $F(x)$ is easy to integrate, we could solve this problem analytically. In a realistic problem, however, the force may not be available as a function, but rather as tabulated data. Suppose measurements were taken while a weighted box was pulled with a wire, giving the force on the wire and the angle of the wire from the horizontal plane at each position.
-
-| x (ft) | F(x) (lb) | θ (rad) | F(x) cos θ |
-| ------ | --------- | ------- | ---------- |
-| 0 | 0.0 | 0.50 | 0.0000 |
-| 5 | 9.0 | 1.40 | 1.5297 |
-| 10 | 13.0 | 0.75 | 9.5120 |
-| 15 | 14.0 | 0.90 | 8.7025 |
-| 20 | 10.5 | 1.30 | 2.8087 |
-| 25 | 12.0 | 1.48 | 1.0881 |
-| 30 | 5.0 | 1.50 | 0.3537 |
-Use the trapezoidal rule to compute the work done on the box.
-
-## Implementing the Composite Trapezoidal Rule
-
-Implement a Python function to approximate integrals using the trapezoidal rule.
-
-```python
-import numpy as np
-
-def trapz(f, a, b, n):
- x = np.linspace(a, b, n+1)
- y = f(x)
- h = (b - a) / n
- return h * (0.5*y[0] + y[1:-1].sum() + 0.5*y[-1])
-
-# Example tests
-f1 = np.sin
-I_true1 = 2.0 # ∫_0^π sin(x) dx
-for n in [4, 8, 16, 32]:
- print(n, trapz(f1, 0, np.pi, n))
-```
-
-Compare the results for increasing $n$ and observe how the error decreases as $O(h^2)$.
+[[3_4_More_integral]] \ No newline at end of file
diff --git a/tutorials/module_3/3_4_More_integral.md b/tutorials/module_3/3_4_More_integral.md
new file mode 100644
index 0000000..1e9c3c2
--- /dev/null
+++ b/tutorials/module_3/3_4_More_integral.md
@@ -0,0 +1,52 @@
+# More Integrals
+
+## Newton-Cotes Algorithms for Equations
+## Adaptive Quadrature
+
+
+## Problems
+
+### Numerical Integration to Compute Work
+In physics we've learned that work is computed as
+$$
+Work = force * distance
+$$
+This can be written in its integral form:
+$$
+W = \int{F(x)dx}
+$$
+If $F(x)$ is easy to integrate, we could solve this problem analytically. In a realistic problem, however, the force may not be available as a function, but rather as tabulated data. Suppose measurements were taken while a weighted box was pulled with a wire, giving the force on the wire and the angle of the wire from the horizontal plane at each position.
+
+| x (ft) | F(x) (lb) | θ (rad) | F(x) cos θ |
+| ------ | --------- | ------- | ---------- |
+| 0 | 0.0 | 0.50 | 0.0000 |
+| 5 | 9.0 | 1.40 | 1.5297 |
+| 10 | 13.0 | 0.75 | 9.5120 |
+| 15 | 14.0 | 0.90 | 8.7025 |
+| 20 | 10.5 | 1.30 | 2.8087 |
+| 25 | 12.0 | 1.48 | 1.0881 |
+| 30 | 5.0 | 1.50 | 0.3537 |
+Use the trapezoidal rule to compute the work done on the box.
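+
+A minimal sketch of this computation, applying the composite trapezoidal rule directly to the tabulated $F(x)\cos\theta$ column (the component of the force along the direction of motion):
+```python
+import numpy as np
+
+x = np.array([0, 5, 10, 15, 20, 25, 30])            # ft
+Fcos = np.array([0.0, 1.5297, 9.5120, 8.7025,
+                 2.8087, 1.0881, 0.3537])           # lb
+
+# composite trapezoidal rule on the equally spaced samples
+W = np.trapz(Fcos, x)
+print("Work:", W, "ft·lb")
+```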
+
+### Implementing the Composite Trapezoidal Rule
+
+Implement a Python function to approximate integrals using the trapezoidal rule.
+
+```python
+import numpy as np
+
+def trapz(f, a, b, n):
+ x = np.linspace(a, b, n+1)
+ y = f(x)
+ h = (b - a) / n
+ return h * (0.5*y[0] + y[1:-1].sum() + 0.5*y[-1])
+
+# Example tests
+f1 = np.sin
+I_true1 = 2.0 # ∫_0^π sin(x) dx
+for n in [4, 8, 16, 32]:
+ print(n, trapz(f1, 0, np.pi, n))
+```
+
+Compare the results for increasing $n$ and observe how the error decreases as $O(h^2)$.
diff --git a/tutorials/module_3/3_5_ode.md b/tutorials/module_3/3_5_ode.md
new file mode 100644
index 0000000..6249ad6
--- /dev/null
+++ b/tutorials/module_3/3_5_ode.md
@@ -0,0 +1,20 @@
+# Numerical Solutions of Ordinary Differential Equations
+
+Fundamental laws derived in fields such as physics, mechanics, electricity, and thermodynamics define mechanisms of change. Combining these laws with the conservation laws of energy, mass, and momentum yields differential equations.
+
+| Law                           | Equation                         | Description                                           |
+| ----------------------------- | -------------------------------- | ---------------------------------------------------- |
+| Newton's Second Law of Motion | $$\frac{dv}{dt}=\frac{F}{m}$$ | Motion |
+| Fourier's heat law | $$q=-kA\frac{dT}{dx}$$ | How heat is conducted through a material |
+| Fick's law of diffusion | $$J=-D\frac{dc}{dx}$$ | Movement of particles from high to low concentration |
+| Faraday's law | $$\Delta V_L = L \frac{di}{dt}$$ | Voltage drop across an inductor |
+
+In engineering, ordinary differential equations (ODEs) are very common in the thermo-fluid sciences, mechanics, and control systems. By now you've solved many ODEs, though probably not with numerical methods. Suppose we have an initial value problem of the form
+$$
+\frac{dy}{dt}=f(t,y), \quad y(t_0)=y_0
+$$
+where $f(t,y)$ describes the rate of change of $y$ with respect to time $t$.
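+
+In code, an initial value problem is just a rate function plus a starting state. A minimal sketch, using Newton's law of cooling as a hypothetical example (the constant `k` and the ambient temperature are made-up values):
+```python
+# dT/dt = -k*(T - T_env),  T(0) = 90.0
+k, T_env = 0.1, 20.0            # assumed constants
+f = lambda t, T: -k*(T - T_env)
+
+t0, y0 = 0.0, 90.0              # initial condition y(t0) = y0
+```
+The methods in the lectures below advance this state forward in time, one step at a time.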
+
+[[3_6_Explicit_Methods]]
+[[3_7_Implicit_Method]]
+[[3_8_Systems_of_ODEs]] \ No newline at end of file
diff --git a/tutorials/module_3/5_ode.md b/tutorials/module_3/3_6_Explicit_Methods.md
index b5d5e99..15ed87a 100644
--- a/tutorials/module_3/5_ode.md
+++ b/tutorials/module_3/3_6_Explicit_Methods.md
@@ -1,19 +1,3 @@
-# Numerical Solutions of Ordinary Differential Equations
-
-Fundamental laws derived in fields such as physics, mechanics, electricity, and thermodynamics define mechanisms of change. Combining these laws with the conservation laws of energy, mass, and momentum yields differential equations.
-
-| Law                           | Equation                         | Description                                           |
-| ----------------------------- | -------------------------------- | ---------------------------------------------------- |
-| Newton's Second Law of Motion | $$\frac{dv}{dt}=\frac{F}{m}$$ | Motion |
-| Fourier's heat law | $$q=-kA\frac{dT}{dx}$$ | How heat is conducted through a material |
-| Fick's law of diffusion | $$J=-D\frac{dc}{dx}$$ | Movement of particles from high to low concentration |
-| Faraday's law | $$\Delta V_L = L \frac{di}{dt}$$ | Voltage drop across an inductor |
-
-In engineering, ordinary differential equations (ODEs) are very common in the thermo-fluid sciences, mechanics, and control systems. By now you've solved many ODEs, though probably not with numerical methods. Suppose we have an initial value problem of the form
-$$
-\frac{dy}{dt}=f(t,y), \quad y(t_0)=y_0
-$$
-where $f(t,y)$ describes the rate of change of $y$ with respect to time $t$.
# Explicit Methods
## Euler's Method
@@ -101,26 +85,3 @@ Also known as the **classical fourth-order Runge–Kutta (RK4)** method. It take
## Problem 3: Swinging Pendulum
-
-# Implicit Method
-Implicit methods for solving ODEs are numerical schemes in which the new solution value at the next time step appears on both sides of the update equation. This makes them fundamentally different from explicit methods, where the next value is computed directly from already-known quantities. While explicit methods are simple and computationally cheap per step, implicit methods generally require solving algebraic equations at every step. In exchange, implicit methods provide much greater numerical stability, especially for stiff problems. The general idea of an implicit method for a first-order ODE is to write the time-stepping update in a form such that
-$$
-y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})\tag{2}
-$$
-where $h$ is the step size. Unlike explicit schemes (where $f$ is evaluated at known values $t_n$, $y_n$​), here the function is evaluated at the unknown point $t_{n+1}, y_{n+1}$​. This results in an equation that must be solved for $y_{n+1}$​ before advancing to the next step.
-
-Implicit methods are especially useful when dealing with stiff ODEs, where some components of the system change much faster than others. In such cases, explicit methods require extremely small time steps to remain stable, making them inefficient. Implicit schemes allow much larger time steps without losing stability, which makes them the preferred choice in fields like chemical kinetics, heat conduction, and structural dynamics with damping. Although they require more computational effort per step, they are often far more efficient overall for stiff problems.
-## Backward Euler
-The implicit counterpart of Forward Euler is the Backward Euler method. Like Forward Euler, it advances the solution using a single slope estimate, but instead of using the slope at the current point, it uses the slope at the unknown future point. This creates an implicit update equation that must be solved at each step, making Backward Euler both the easiest implicit method to derive and an excellent starting point for studying implicit schemes.
-
-## Problem 1: Chemical Kinetics
-
-## Problem 2:
-
-
-
-# Systems of ODEs
-All of the methods for single ODEs can be extended to systems of ODEs by treating the solution and the right-hand side as vectors.
-
-
-## Problem 1: \ No newline at end of file
diff --git a/tutorials/module_3/3_7_Implicit_Method.md b/tutorials/module_3/3_7_Implicit_Method.md
new file mode 100644
index 0000000..04c71cf
--- /dev/null
+++ b/tutorials/module_3/3_7_Implicit_Method.md
@@ -0,0 +1,16 @@
+# Implicit Method
+Implicit methods for solving ODEs are numerical schemes in which the new solution value at the next time step appears on both sides of the update equation. This makes them fundamentally different from explicit methods, where the next value is computed directly from already-known quantities. While explicit methods are simple and computationally cheap per step, implicit methods generally require solving algebraic equations at every step. In exchange, implicit methods provide much greater numerical stability, especially for stiff problems. The general idea of an implicit method for a first-order ODE is to write the time-stepping update in a form such that
+$$
+y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})\tag{2}
+$$
+where $h$ is the step size. Unlike explicit schemes (where $f$ is evaluated at known values $t_n$, $y_n$​), here the function is evaluated at the unknown point $t_{n+1}, y_{n+1}$​. This results in an equation that must be solved for $y_{n+1}$​ before advancing to the next step.
+
+Implicit methods are especially useful when dealing with stiff ODEs, where some components of the system change much faster than others. In such cases, explicit methods require extremely small time steps to remain stable, making them inefficient. Implicit schemes allow much larger time steps without losing stability, which makes them the preferred choice in fields like chemical kinetics, heat conduction, and structural dynamics with damping. Although they require more computational effort per step, they are often far more efficient overall for stiff problems.
+## Backward Euler
+The implicit counterpart of Forward Euler is the Backward Euler method. Like Forward Euler, it advances the solution using a single slope estimate, but instead of using the slope at the current point, it uses the slope at the unknown future point. This creates an implicit update equation that must be solved at each step, making Backward Euler both the easiest implicit method to derive and an excellent starting point for studying implicit schemes.
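+
+A minimal sketch of Backward Euler, assuming the implicit equation at each step is solved with a few fixed-point iterations (for genuinely stiff problems a Newton solve per step is the more robust choice):
+```python
+import numpy as np
+
+def backward_euler(f, t0, y0, h, n_steps):
+    # y_{n+1} = y_n + h*f(t_{n+1}, y_{n+1}), solved at every step
+    t, y = t0, y0
+    for _ in range(n_steps):
+        t_next = t + h
+        y_next = y                 # initial guess: previous value
+        for _ in range(50):        # fixed-point iteration
+            y_next = y + h*f(t_next, y_next)
+        t, y = t_next, y_next
+    return y
+
+# Example: decay dy/dt = -5*y, y(0) = 1, integrated to t = 1
+f = lambda t, y: -5.0*y
+print(backward_euler(f, 0.0, 1.0, 0.1, 10), np.exp(-5.0))  # Backward Euler vs exact
+```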
+
+## Problem 1: Chemical Kinetics
+
+## Problem 2:
+
+
diff --git a/tutorials/module_3/3_8_Systems_of_ODEs.md b/tutorials/module_3/3_8_Systems_of_ODEs.md
new file mode 100644
index 0000000..52b2267
--- /dev/null
+++ b/tutorials/module_3/3_8_Systems_of_ODEs.md
@@ -0,0 +1,5 @@
+# Systems of ODEs
+All of the methods for single ODEs can be extended to systems of ODEs by treating the solution and the right-hand side as vectors, as the sketch below shows.
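+
+Here, assuming Euler's method from the explicit-methods lecture and a hypothetical two-state example (an undamped oscillator rewritten as two first-order ODEs):
+```python
+import numpy as np
+
+def euler_system(f, t0, y0, h, n_steps):
+    # identical to scalar Euler's method; y is now a vector
+    t, y = t0, np.asarray(y0, dtype=float)
+    for _ in range(n_steps):
+        y = y + h*f(t, y)
+        t += h
+    return y
+
+# x'' = -x  rewritten as  y = [x, x'],  y' = [x', -x]
+f = lambda t, y: np.array([y[1], -y[0]])
+print(euler_system(f, 0.0, [1.0, 0.0], 1e-3, 6283))   # ~one period, expect ~[1, 0]
+```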
+
+
+## Problem 1: \ No newline at end of file
diff --git a/tutorials/module_3/6_pde.md b/tutorials/module_3/3_pde.md
index 2980fa7..2980fa7 100644
--- a/tutorials/module_3/6_pde.md
+++ b/tutorials/module_3/3_pde.md
diff --git a/tutorials/module_3/compiled note.md b/tutorials/module_3/compiled note.md
new file mode 100644
index 0000000..3dc5aba
--- /dev/null
+++ b/tutorials/module_3/compiled note.md
@@ -0,0 +1,6 @@
+[[1_0_linear_equations]]
+[[2_0_non-linear_equations]]
+[[3_1_numerical_differentiation]]
+[[3_3_numerical_integration]]
+[[3_5_ode]]
+[[3_pde]]
diff --git a/tutorials/module_3/example problem outline.md b/tutorials/module_3/example problem outline.md
new file mode 100644
index 0000000..42a0554
--- /dev/null
+++ b/tutorials/module_3/example problem outline.md
@@ -0,0 +1,108 @@
+
+### **Module 2: Algorithm Developments for Mechanical Engineering**
+
+* **Numerical Methods**
+ 1. Beam deflection solved with `numpy.linalg.solve` (Statics/Structures)
+ 2. Cooling of a hot plate with `scipy.integrate.odeint` (Thermal sciences)
+
+* **Version Control**
+ 1. Git workflow on a bike frame design project (Solid mechanics / FEA data)
+ 2. Collaborative control system simulation (PID tuning in Python)
+
+* **Problem Solving Strategies**
+ 1. Break down multi-step dynamics problem: projectile with drag (Dynamics/Fluids)
+ 2. Decompose Rankine cycle analysis into modular Python functions (Thermo/Power cycles)
+
+* **Code Documentation**
+ 1. Document a function that computes stress concentration factor (Solid mechanics)
+ 2. Document a PID control simulation for DC motor speed (Mechatronics/Controls)
+
+* **Code Libraries & Resources**
+ 1. Use `matplotlib` + `numpy` for plotting vibration response of a 2-DOF spring-mass system (Dynamics)
+ 2. Use `CoolProp` for real gas property lookup (Thermo/Fluids)
+
+* **AI-Assisted Programming**
+ 1. Have AI draft code for a 1D heat conduction simulation, then debug/validate (Heat Transfer)
+ 2. Generate starter code for stress-strain curve fitting, then refine it (Solid Mechanics)
+
+* **Verification & Validation**
+ 1. Compare hand solution vs. Python solution of a static truss (Statics/Structures)
+ 2. Validate simulated temperature profile in a fin against analytical solution (Heat Transfer)
+
+* **Error**
+ 1. Show truncation error in numerical derivative of displacement data (Dynamics experiment)
+ 2. Compare Simpson’s rule vs trapezoidal integration for work done in P–V diagram (Thermo)
+
+### **Module 3: Applications of Computational Mathematics in Mechanical Engineering**
+
+#### **Linear Equations**
+
+* **Lecture: Linear Equations**
+ 1. Solve a 3-bar truss reaction forces system (Statics/Structures)
+ 2. Solve nodal voltages in a resistive electrical network (Mechatronics)
+
+* **Lecture: Linear Algebra**
+ 1. Stress transformation with rotation matrices (Solid Mechanics)
+ 2. Velocity transformation in 2D robotic arm kinematics (Mechatronics/Robotics)
+
+* **Lecture: LU Decomposition**
+ 1. Discretized fin heat conduction (Heat Transfer)
+ 2. 1D beam bending using finite difference discretization (Simple FEA)
+
+#### **Non-Linear Equations**
+
+* **Lecture: Bracketing Methods**
+ 1. Equilibrium temperature with radiation + convection balance (Heat Transfer)
+ 2. Solve spring-mass system with nonlinear stiffness (Solid Mechanics)
+
+* **Lecture: Open Methods**
+ 1. Mach number from nozzle area ratio (Aerospace/Compressible Flow)
+ 2. Natural frequency from transcendental vibration equation (Dynamics)
+
+* **Lecture: Systems of Nonlinear Equations I**
+ 1. Chemical equilibrium composition (Thermo)
+ 2. Solve coupled force/displacement in nonlinear truss (Structures)
+
+* **Lecture: Systems of Nonlinear Equations II**
+ 1. Lift & drag coefficients from nonlinear aerodynamic model (Aerospace)
+ 2. Nonlinear motor torque & current equations (Mechatronics/Controls)
+
+#### **Numerical Differentiation & Integration**
+* **Lecture: Differentiation**
+ 1. Estimate velocity/acceleration from piston displacement data (Dynamics)
+ 2. Approximate strain rate from stress–strain curve (Solid Mechanics)
+
+* **Lecture: Integration**
+ 1. Work from P–V data (Thermo)
+ 2. Area under vibration response curve for damping energy loss (Dynamics/Controls)
+
+#### **ODEs**
+
+* **Lecture: Explicit Methods**
+ 1. Cooling of hot sphere (Newton’s law of cooling) (Heat Transfer)
+ 2. Pendulum motion (small vs. large angle) (Dynamics)
+
+* **Lecture: Implicit Methods**
+ 1. Transient 1D heat conduction (Heat Transfer)
+ 2. Mass-spring-damper under step input (Dynamics/Controls)
+
+* **Lecture: Systems of ODEs**
+ 1. Coupled tanks draining problem (Fluids)
+ 2. 2-DOF vibration system (Solid Mechanics/Dynamics)
+
+#### **PDEs**
+* **Lecture: Elliptic PDEs (Finite Difference)**
+ 1. 2D steady-state conduction in a plate (Heat Transfer)
+ 2. Deflection of a membrane under load (Structures/FEA)
+
+* **Lecture: Parabolic/Hyperbolic PDEs (Finite Difference)**
+ 1. 1D transient conduction (Heat Transfer)
+ 2. Vibrating string or beam (Dynamics/Aerospace structures)
+
+
+* Statics/Structures → Truss, beams, stresses
+* Dynamics → Vibrations, pendulum, projectile, damping
+* Solid Mechanics → Stress/strain, nonlinear springs
+* Thermo/Fluids/Heat → Cooling, P–V, conduction, radiation
+* FEA → simple discretized beams/fins/plates
+* Mechatronics/Controls → motor dynamics, robotic arms, PID \ No newline at end of file
diff --git a/tutorials/module_3/example32-4.png b/tutorials/module_3/example32-4.png
new file mode 100644
index 0000000..932daa1
--- /dev/null
+++ b/tutorials/module_3/example32-4.png
Binary files differ
diff --git a/tutorials/module_3/example32-4element1.png b/tutorials/module_3/example32-4element1.png
new file mode 100644
index 0000000..f3fc4b8
--- /dev/null
+++ b/tutorials/module_3/example32-4element1.png
Binary files differ
diff --git a/tutorials/module_3/image_1759346314337.png b/tutorials/module_3/image_1759346314337.png
new file mode 100644
index 0000000..3fc6040
--- /dev/null
+++ b/tutorials/module_3/image_1759346314337.png
Binary files differ
diff --git a/tutorials/module_3/image_1759346552105.png b/tutorials/module_3/image_1759346552105.png
new file mode 100644
index 0000000..eeb16ba
--- /dev/null
+++ b/tutorials/module_3/image_1759346552105.png
Binary files differ
diff --git a/tutorials/module_3/newton-raphson_nonlinear_systems.py b/tutorials/module_3/newton-raphson_nonlinear_systems.py
new file mode 100644
index 0000000..856d329
--- /dev/null
+++ b/tutorials/module_3/newton-raphson_nonlinear_systems.py
@@ -0,0 +1,35 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Sat Sep 27 15:31:11 2025
+
+@author: christian
+"""
+
+u = lambda x, y: x**2 + x*y - 10
+v = lambda x, y: y + 3*x*y**2 - 57
+
+# Initial guesses
+x_0 = 1.5
+y_0 = 3.5
+
+# Evaluate partial derivatives at initial guesses
+dudx = 2*x_0+y_0
+dudy = x_0
+
+dvdx = 3*y_0**2
+dvdy = 1 + 6*x_0*y_0
+
+# Find determinant of the Jacobian
+det = dudx * dvdy - dudy * dvdx
+
+# Values of the functions evaluated at the initial guess
+u_0 = u(x_0,y_0)
+v_0 = v(x_0,y_0)
+
+# Substitute into newton-raphson equation
+x_1 = x_0 - (u_0*dvdy-v_0*dudy) / det
+y_1 = y_0 - (v_0*dudx-u_0*dvdx) / det
+
+print(x_1)
+print(y_1) \ No newline at end of file
diff --git a/tutorials/module_3/newton-raphson_nonlinear_systemspy b/tutorials/module_3/newton-raphson_nonlinear_systemspy
new file mode 100644
index 0000000..55a6cf1
--- /dev/null
+++ b/tutorials/module_3/newton-raphson_nonlinear_systemspy
@@ -0,0 +1,35 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Sat Sep 27 15:31:11 2025
+
+@author: christian
+"""
+
+u = lambda x, y: x**2 + x*y - 10
+v = lambda x, y: y + 3*x*y**2 - 57
+
+# Initial guesses
+x_0 = 1.5
+y_0 = 3.5
+
+# Evaluate partial derivatives at initial guesses
+dudx = 2*x_0+y_0
+dudy = x_0
+
+dvdx = 3*y_0**2
+dvdy = 1 + 6*x_0*y_0
+
+# Find determinant
+det = dudx * dvdy - dudy * dvdx
+
+# Values of the functions evaluated at the initial guess
+u_0 = u(x_0,y_0)
+v_0 = v(x_0,y_0)
+
+# Substitute into newton-raphson equation
+x_1 = x_0 - (u_0*dvdy-v_0*dudy) / det
+y_1 = y_0 - (v_0*dudx-u_0*dvdx) / det
+
+print(x_1)
+print(y_1) \ No newline at end of file