From 17f90913cb9a16ff78ad672fe15a2bd4e7e1b5db Mon Sep 17 00:00:00 2001
From: Christian Kolset
Date: Sun, 14 Sep 2025 20:55:09 -0600
Subject: Worked on ODE

---
 tutorials/module_3/1_numerical_differentiation.md |   2 +-
 tutorials/module_3/2_roots_optimization.md        |  10 +-
 tutorials/module_3/3_system_of_equations.md       | 354 ++++++++++++++++++++-
 tutorials/module_3/4_numerical_integration.md     |   2 +-
 tutorials/module_3/5_ode.md                       |  87 +++++-
 tutorials/module_3/6_pde.md                       |   1 +
 tutorials/module_3/fw_elim_bw_sub.png             | Bin 0 -> 83212 bytes
 tutorials/module_3/fw_eulers.png                  | Bin 0 -> 34322 bytes
 tutorials/module_3/heuns_method.png               | Bin 0 -> 45882 bytes
 9 files changed, 428 insertions(+), 28 deletions(-)
 create mode 100644 tutorials/module_3/6_pde.md
 create mode 100644 tutorials/module_3/fw_elim_bw_sub.png
 create mode 100644 tutorials/module_3/fw_eulers.png
 create mode 100644 tutorials/module_3/heuns_method.png

diff --git a/tutorials/module_3/1_numerical_differentiation.md b/tutorials/module_3/1_numerical_differentiation.md
index 68e0953..0926851 100644
--- a/tutorials/module_3/1_numerical_differentiation.md
+++ b/tutorials/module_3/1_numerical_differentiation.md
@@ -78,6 +78,6 @@ $$
 ## High-Accuracy Differentiation Formulas

 ## Richardson Extrapolation

-## Derivative of Unequally Spaced Data
+## Derivative of Unequally Spaced Data

 ## Partial Derivatives

diff --git a/tutorials/module_3/2_roots_optimization.md b/tutorials/module_3/2_roots_optimization.md
index e4224f1..780a284 100644
--- a/tutorials/module_3/2_roots_optimization.md
+++ b/tutorials/module_3/2_roots_optimization.md
@@ -229,4 +229,12 @@ Open methods may also have their own problems. Let's consider the following function:

 # Pipe Friction Example
-Numerical methods for Engineers 7th Edition Case study 8.4.
\ No newline at end of file
+Numerical Methods for Engineers, 7th Edition, Case Study 8.4.
+
+# Systems of Non-linear Equations

diff --git a/tutorials/module_3/3_system_of_equations.md b/tutorials/module_3/3_system_of_equations.md
index 3e6fad9..b30ce78 100644
--- a/tutorials/module_3/3_system_of_equations.md
+++ b/tutorials/module_3/3_system_of_equations.md
@@ -1,7 +1,7 @@
 # Linear Equations
 Let's consider a linear equation
 $$
-ax=b
+y = m x + b
 $$
 where $m$ and $b$ are two known constants, so given $y$ we can solve for $x$ easily.

 []

 # Linear Algebra
-Although this isn't a course in linear algebra we are going to use some fundamental concepts from linear algebra to solve systems of equations.
-
-If you haven't taken linear algebra before, it is the study of linear equations. These equations can be represented in the form of matrices. Let's say we have a system of equation
+Although this isn't a course in linear algebra, we are going to use some fundamental concepts from linear algebra to solve systems of equations. If you haven't taken linear algebra before, it is the study of linear equations. These equations can be represented in the form of matrices. Matrices are rectangular arrays of numbers arranged in rows and columns. They are widely used in both engineering and computer science. Let's say we have the following system of equations.
$$
\begin{cases}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
\end{cases}
\tag{1}
$$
-We can re-write this into matrix form by creating an $A$ matrix of all $a_{nn}$ values and a $b$ vector as follows
-
+Where both the $a_{ij}$ coefficients and the $b_i$ constants are known. We can re-write this system in matrix form by collecting the $a_{ij}$ values into an $A$ matrix and the unknowns and constants into vectors:
$$
 \left[ {\begin{array}{ccc}
 a_{11} & a_{12} & a_{13}\\
 a_{21} & a_{22} & a_{23}\\
 a_{31} & a_{32} & a_{33}\\
 \end{array} } \right]
 \left[ {\begin{array}{c}
 x_{1}\\
 x_{2}\\
 x_{3}\\
 \end{array} } \right]
=
 \left[ {\begin{array}{c}
 b_{1}\\
 b_{2}\\
 b_{3}\\
 \end{array} } \right]
\tag{2}
$$
where
$$
 \left[ {\begin{array}{ccc}
 a_{11} & a_{12} & a_{13}\\
 a_{21} & a_{22} & a_{23}\\
 a_{31} & a_{32} & a_{33}\\
 \end{array} } \right]
 = A,
\qquad
 \left[ {\begin{array}{c}
 b_{1}\\
 b_{2}\\
 b_{3}\\
 \end{array} } \right]
 = b
$$
We can also write the equation in its simplified form
$$
Ax=b
\tag{3}
$$
-How does this work? Matrix Math. The
+Take a close look at equations 1 and 2 and compare their similarities and differences. Notice how the entries of the $A$ matrix and the $b$ vector keep the same layout they had in the original system, while the elements of the $x$ vector ($x_1$, $x_2$, $x_3$) are written as a vertical column. This is a consequence of how we multiply matrices.

## Matrix operations in Python

- Matrices of the same size can be added or subtracted element-wise.
- Multiplying a matrix by a scalar yields a new matrix where every element is multiplied by the scalar.
- The determinant of a matrix is a single number that summarizes the matrix; a determinant of zero means the matrix is singular (not invertible).
- **Identity Matrix $I$**: Square matrix with 1's on the diagonal and 0's elsewhere.
- **Inverse Matrix $A^{-1}$**: Satisfies $AA^{-1} = I$. (Only square, non-singular matrices have inverses.)

### Matrix Multiplication

If $A$ is $m \times n$ and $B$ is $n \times p$, their product $C = AB$ is an $m \times p$ matrix with entries

$$
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}
$$

**Example:**
$$
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}
=
\begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix}
=
\begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}
$$

## 1) Matrix Math in Python
```python
import numpy as np
np.set_printoptions(suppress=True)  # nicer printing
```

---

Let's define two small matrices and a vector and run through the basic operations.
```python
A = np.array([[1, 3],
              [2, 4]])   # 2x2
B = np.array([[5, 6],
              [7, 8]])   # 2x2
print(A.shape, B.shape)

b = np.array([3, 5])
print(b.shape)

# Addition and subtraction of same shapes (element-wise)
print(A + B)
print(A - B)

# Scalar multiplication
print(3*A)
print(5*B)

# Matrix multiplication
mat_mult = A @ B
print(mat_mult)

# Transpose
print(B.T)

# Identity and inverse matrix
I = np.eye(2)
A_inv = np.linalg.inv(A)
print(A @ A_inv, A_inv @ A)

# Determinants
detA = np.linalg.det(A)
detB = np.linalg.det(B)
print(detA, detB)
```

---

## 7) Determinant

For $2\times2$,

$\det\begin{bmatrix}a&b\\ c&d\end{bmatrix}=ad-bc$.
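For example, for the matrix $A$ defined in the code cell above:

$$
\det\begin{bmatrix}1 & 3\\ 2 & 4\end{bmatrix} = (1)(4) - (3)(2) = -2
$$

The NumPy call below should reproduce this value (up to floating-point rounding).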
```python
detA = np.linalg.det(A)
detB = np.linalg.det(B)
detA, detB
```

## 8) Solve Linear Systems $Ax=b$

```python
A = np.array([[3., 2., -1.],
              [2., -2., 4.],
              [-1., 0.5, -1.]])
b = np.array([1., -2., 0.])

x = np.linalg.solve(A, b)  # preferred over inv(A)@b
x, np.allclose(A @ x, b)
```

---

## 9) Eigenvalues & Eigenvectors

Solve $A\mathbf{v}=\lambda \mathbf{v}$ (square $A$).

```python
A = np.array([[4., 2.],
              [1., 3.]])
eigvals, eigvecs = np.linalg.eig(A)
eigvals, eigvecs  # columns of eigvecs are eigenvectors
```

**Verify one pair $(\lambda, v)$**

```python
lam, v = eigvals[0], eigvecs[:,0]
np.allclose(A @ v, lam * v)
```

---

## 10) Norms (size/length measures)

```python
x = np.array([3., -4., 12.])
vec_L2 = np.linalg.norm(x, ord=2)  # Euclidean
vec_L1 = np.linalg.norm(x, ord=1)
vec_Linf = np.linalg.norm(x, ord=np.inf)

M = np.array([[1., 2.],
              [3., 4.]])
fro = np.linalg.norm(M, 'fro')  # Frobenius (matrix)
vec_L2, vec_L1, vec_Linf, fro
```

---

## 11) Orthogonality & Projections (quick demo)

```python
u = np.array([1., 2., -2.])
v = np.array([2., -1., 0.])

dot = u @ v
is_orthogonal = np.isclose(dot, 0.0)

# projection of u onto v
proj_u_on_v = (u @ v) / (v @ v) * v
dot, is_orthogonal, proj_u_on_v
```

---

## 12) Example Mini-Lab

1. Compute $2A - B$ for

$$
A=\begin{bmatrix}1&3\\2&4\end{bmatrix},\quad B=\begin{bmatrix}5&6\\7&8\end{bmatrix}.
$$

```python
A = np.array([[1,3],[2,4]])
B = np.array([[5,6],[7,8]])
result = 2*A - B
result
```

2. Verify $A(A^{-1})\approx I$ and report the determinant of $A$.

```python
A_inv = np.linalg.inv(A)
check = np.allclose(A @ A_inv, np.eye(2))
detA = np.linalg.det(A)
check, detA
```

3. Solve the system $Ax=b$ with

$$
A=\begin{bmatrix}4&1\\2&3\end{bmatrix},\quad b=\begin{bmatrix}1\\0\end{bmatrix}.
$$

```python
A = np.array([[4.,1.],
              [2.,3.]])
b = np.array([1.,0.])
x = np.linalg.solve(A,b)
x, A @ x
```

---

### Tips for Students

- Prefer `np.linalg.solve(A,b)` over `np.linalg.inv(A) @ b` for **speed and numerical stability**.
- Watch out for **singular** or **ill-conditioned** matrices (determinant near 0).
- Use `np.allclose` when comparing float results.

---

### Problem 1
Re-write the following system of equations from matrix form into normal algebraic equations.

$$
 \left[ {\begin{array}{ccc}
 3 & 5 & 7\\
 2 & 1 & 3\\
 2 & 3 & 6\\
 \end{array} } \right]
 \left[ {\begin{array}{c}
 x_{1}\\
 x_{2}\\
 x_{3}\\
 \end{array} } \right]
=
 \left[ {\begin{array}{c}
 24\\
 52\\
 12\\
 \end{array} } \right]
$$

## Problem 1
```python
@@ -68,27 +68,307 @@ print(x)

-# Systems of Equations
+# Systems of Linear Equations

## Techniques to solve Systems of Equations
In this section our goal is to learn some algorithms that help solve systems of equations in Python.

Matrix Determinant

-## Working with Systems of Equations
-Matrix Determinates
 Cramer's Rule
-
-Elimination

Now let's try to solve a problem. Think about how we solve a system of two linear equations by hand.
[example problem]

You've probably learned to use one of the following methods: substitution, graphical, or elimination. Is there a way to think about this algorithmically?
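One algorithmic answer is Cramer's rule, mentioned above: each unknown is a ratio of two determinants. Here is a minimal sketch (the helper name `cramers_rule` is our own; the test system is the one from Problem 1 above):

```python
import numpy as np

def cramers_rule(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A_i)/det(A), where A_i
    is A with its i-th column replaced by b. Illustrative only."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("det(A) is (near) zero; Cramer's rule does not apply.")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # replace column i with b
        x[i] = np.linalg.det(Ai) / detA
    return x

# The system from Problem 1 above
A = np.array([[3., 5., 7.],
              [2., 1., 3.],
              [2., 3., 6.]])
b = np.array([24., 52., 12.])
cramers_rule(A, b), np.linalg.solve(A, b)  # the two should agree
```

Cramer's rule is mainly of conceptual interest: it becomes expensive and numerically fragile as the system grows, which motivates the elimination approach developed next.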
Elimination, in particular, can be turned into an algorithm.

Step 1: eliminate $x_1$. Start from the two equations
$$
\begin{cases}
a_{11} x_1 + a_{12} x_2 = b_1 \\
a_{21} x_1 + a_{22} x_2 = b_2
\end{cases}
$$
Multiply the first equation by $a_{21}/a_{11}$ and subtract it from the second. The $x_1$ term cancels, leaving a single equation in $x_2$:
$$
\left(a_{22} - \frac{a_{21}}{a_{11}} a_{12}\right) x_2 = b_2 - \frac{a_{21}}{a_{11}} b_1
$$
Step 2: solve this equation for $x_2$, then substitute the result back into the first equation to recover $x_1$.

This two-step pattern, **forward elimination** followed by **back substitution**, is exactly what Gauss elimination does for systems of any size.
![fw_elim_bw_sub|350](fw_elim_bw_sub.png)

### Forward Elimination

### Back Substitution

### Naive Gauss Elimination
```python
def gaussian_elimination_naive(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination
    for k in range(n-1):
        if np.isclose(A[k,k], 0.0):
            raise ZeroDivisionError("Zero pivot encountered (naïve). Use pivoting.")
        for i in range(k+1, n):
            m = A[i,k]/A[k,k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n-1, -1, -1):
        s = A[i, i+1:] @ x[i+1:]
        x[i] = (b[i]-s)/A[i,i]
    return x
```

### Gauss Elimination with Partial Pivoting
```python
def gaussian_elimination_pp(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination with partial pivoting
    for k in range(n-1):
        piv = k + np.argmax(np.abs(A[k:, k]))  # row with the largest pivot candidate
        if np.isclose(A[piv, k], 0.0):
            raise np.linalg.LinAlgError("Matrix is singular or nearly singular.")
        if piv != k:  # row swap
            A[[k, piv]] = A[[piv, k]]
            b[[k, piv]] = b[[piv, k]]
        for i in range(k+1, n):
            m = A[i,k]/A[k,k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n-1, -1, -1):
        s = A[i, i+1:] @ x[i+1:]
        x[i] = (b[i]-s)/A[i,i]
    return x
```

## Problem 1

-## Problem 2
+### Problem 2

# LU Decomposition
-##
 ## Problem 1

diff --git a/tutorials/module_3/4_numerical_integration.md b/tutorials/module_3/4_numerical_integration.md
index 4c7876c..6e534b5 100644
--- a/tutorials/module_3/4_numerical_integration.md
+++ b/tutorials/module_3/4_numerical_integration.md
@@ -21,7 +21,7 @@ Integration is one of the fundamental tools in engineering analysis. Mechanical
 ---

-## Numerical Methods
+## Integration
 We wish to approximate a definite integral of the form:
 $$
 I = \int_a^b f(x) \, dx

diff --git a/tutorials/module_3/5_ode.md b/tutorials/module_3/5_ode.md
index 98e68d8..81f6c6b 100644
--- a/tutorials/module_3/5_ode.md
+++ b/tutorials/module_3/5_ode.md
@@ -1,17 +1,94 @@
 # Numerical Solutions of Ordinary Differential Equations

-## Euler's Method
+Fundamental laws of the universe derived in fields such as physics, mechanics, electricity, and thermodynamics define mechanisms of change. When we combine these laws with the continuity laws for energy, mass, and momentum, we get differential equations.

-### Forwards Eulers
| Law                           | Equation                         | Description                                           |
| ----------------------------- | -------------------------------- | ----------------------------------------------------- |
| Newton's Second Law of Motion | $$\frac{dv}{dt}=\frac{F}{m}$$    | Motion                                                |
| Fourier's heat law            | $$q=-kA\frac{dT}{dx}$$           | How heat is conducted through a material              |
| Fick's law of diffusion       | $$J=-D\frac{dc}{dx}$$            | Movement of particles from high to low concentration  |
| Faraday's law                 | $$\Delta V_L = L \frac{di}{dt}$$ | Voltage drop across an inductor                       |

-### Backwards Eulers
+In engineering, ordinary differential equations (ODEs) are very common in the thermo-fluid sciences, mechanics, and control systems. By now you've solved many ODEs, though probably not using numerical methods.
Suppose we have an initial value problem of the form
$$
\frac{dy}{dt}=f(t,y), \quad y(t_0)=y_0
$$
where $f(t,y)$ describes the rate of change of $y$ with respect to time $t$.

## Explicit Methods

### Euler's Method
![forward_eulers_method|350](fw_eulers.png)
Euler's method, or more specifically the Forward Euler method, is one of the simplest methods for solving ODEs. The idea of the Forward Euler method is to approximate the solution curve by taking small time steps, each step moving forward along the tangent given by the slope $f(t,y)$. At each step, the method updates the solution using the formula
$$
y_{n+1} = y_n + h f(t_n, y_n)
$$
where $h$ is the chosen step size. This makes the method an **explicit method**, since the new value $y_{n+1}$ is computed directly from known quantities at step $n$. Let's try to code this in Python.

```python
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('seaborn-v0_8-poster')  # named 'seaborn-poster' on older matplotlib versions
%matplotlib inline

# Define parameters
f = lambda t, s: np.exp(-t)  # ODE
h = 0.1                      # Step size
t = np.arange(0, 1 + h, h)   # Numerical grid
s0 = -1                      # Initial Condition

# Explicit Euler Method
s = np.zeros(len(t))
s[0] = s0

for i in range(0, len(t) - 1):
    s[i + 1] = s[i] + h*f(t[i], s[i])

plt.figure(figsize = (12, 8))
plt.plot(t, s, 'bo--', label='Approximate')
plt.plot(t, -np.exp(-t), 'g', label='Exact')
plt.title('Approximate and Exact Solution \
for Simple ODE')
plt.xlabel('t')
plt.ylabel('f(t)')
plt.grid()
plt.legend(loc='lower right')
plt.show()
```
Although the Forward Euler method is easy to implement and provides insight into how numerical integration works, it is not very accurate for larger step sizes and can become unstable for certain types of problems, especially stiff ODEs. Its accuracy is **first-order**, meaning the local error per step scales with $h^2$, and the global error across an interval scales with $h$. Because of this, Forward Euler is often used as a starting point for understanding numerical ODE solvers and as a baseline for comparing more advanced methods like Heun's method or Runge–Kutta.

### Heun's Method
![Karl Heun](https://upload.wikimedia.org/wikipedia/commons/c/ce/Karl_Heun.jpg)
Heun's method, introduced by the German mathematician Karl Heun, is a refinement of the Forward Euler method. Like Euler's method, it starts from the initial value problem
$$
\frac{dy}{dt} = f(t, y), \quad y(t_0) = y_0
$$
The idea is to improve accuracy by not relying solely on the slope at the beginning of the interval. Instead, Heun's method first uses Forward Euler to make a **prediction** of the next value, then computes the slope again at this predicted point, and finally averages the two slopes to make a **correction**. Mathematically, the update step is
$$
y_{n+1} = y_n + \frac{h}{2}\big[f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n))\big]
$$
![heuns_method|350](heuns_method.png)
This averaging of slopes makes the method **second-order accurate**, which means its global error decreases proportionally to $h^2$ rather than just $h$, as in Forward Euler.

In practice, Heun's method provides a significant improvement in accuracy without adding much computational cost. It still only requires evaluating the function $f(t,y)$ twice per step, making it cheaper per step than higher-order methods like fourth-order Runge–Kutta, while being more accurate and reliable than Forward Euler.
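As a minimal sketch, here is Heun's method applied to the same test problem as the Forward Euler example above (same right-hand side, step size, and initial condition):

```python
import numpy as np
import matplotlib.pyplot as plt

# Same test problem as the Forward Euler example
f = lambda t, s: np.exp(-t)  # ODE
h = 0.1                      # Step size
t = np.arange(0, 1 + h, h)   # Numerical grid
s = np.zeros(len(t))
s[0] = -1                    # Initial Condition

# Heun's method: predict with Euler, then correct with the averaged slope
for i in range(len(t) - 1):
    k1 = f(t[i], s[i])                 # slope at the start of the step
    k2 = f(t[i] + h, s[i] + h*k1)      # slope at the Euler prediction
    s[i + 1] = s[i] + (h/2)*(k1 + k2)  # corrected step

plt.plot(t, s, 'bo--', label='Heun (approximate)')
plt.plot(t, -np.exp(-t), 'g', label='Exact')
plt.xlabel('t')
plt.ylabel('f(t)')
plt.grid()
plt.legend(loc='lower right')
plt.show()
```

Comparing this against the Forward Euler output on the same grid shows the averaged-slope correction tracking the exact curve $-e^{-t}$ noticeably more closely for the same step size.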
Because of this balance of accuracy and cost, Heun's method is often used as an introduction to the idea of correcting predictions in numerical ODE solvers and as a stepping stone toward understanding more advanced predictor–corrector schemes.

### Runge–Kutta
The Runge–Kutta family of methods can be thought of as systematic improvements on Euler's idea of "stepping forward" using slope information. In fact, Euler's method itself is the simplest **Runge–Kutta method of order 1 (RK1)**: it uses just the slope at the beginning of the step to predict the next value. Heun's method can then be seen as a **Runge–Kutta method of order 2 (RK2)**: instead of relying only on the initial slope, it takes two slope evaluations (one at the start and one at the predicted end), then averages them for better accuracy. This use of multiple slope samples within each step is the hallmark of the Runge–Kutta framework: higher-order accuracy comes from cleverly combining several slope evaluations.

The most widely used member of this family is the **classical fourth-order Runge–Kutta (RK4)** method. It takes four slope evaluations per step: one at the beginning, two at intermediate points, and one at the end of the step. These slopes are then combined in a weighted average to update the solution. RK4 is accurate to **fourth order**, meaning the global error scales as $h^4$, which is far more accurate than Euler or Heun for the same step size. In practice, this makes RK4 the "workhorse" of many ODE solvers where stability and accuracy are needed but the system is not excessively stiff.

Modern libraries, such as **SciPy's `solve_ivp`**, build on this idea with adaptive Runge–Kutta methods. For example, `RK45` is an adaptive solver based on the Dormand–Prince pair: it combines a 4th-order method with a 5th-order method, comparing the two at each step to estimate the error. Based on this estimate, the solver adjusts the step size automatically: smaller steps in regions where the solution changes quickly, and larger steps where the solution is smooth. This makes `RK45` both efficient and robust, giving you the accuracy benefits of Runge–Kutta methods while also taking away the burden of manually choosing an appropriate step size.

## Problem 1: Exponential decay

## Problem 2: The Predator–Prey Model

-# Explicit ODE

+# Implicit ODE
+### Backward Euler
-# Implicit ODE
\ No newline at end of file
+# Systems of ODEs
\ No newline at end of file

diff --git a/tutorials/module_3/6_pde.md b/tutorials/module_3/6_pde.md
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/tutorials/module_3/6_pde.md
@@ -0,0 +1 @@
+

diff --git a/tutorials/module_3/fw_elim_bw_sub.png b/tutorials/module_3/fw_elim_bw_sub.png
new file mode 100644
index 0000000..43b2c01
Binary files /dev/null and b/tutorials/module_3/fw_elim_bw_sub.png differ

diff --git a/tutorials/module_3/fw_eulers.png b/tutorials/module_3/fw_eulers.png
new file mode 100644
index 0000000..9b0e407
Binary files /dev/null and b/tutorials/module_3/fw_eulers.png differ

diff --git a/tutorials/module_3/heuns_method.png b/tutorials/module_3/heuns_method.png
new file mode 100644
index 0000000..d3dfa5e
Binary files /dev/null and b/tutorials/module_3/heuns_method.png differ

-- 
cgit v1.2.3