authorChristian Kolset <christian.kolset@gmail.com>2025-09-15 08:06:46 -0600
committerChristian Kolset <christian.kolset@gmail.com>2025-09-15 08:07:41 -0600
commitb7e71424dfde2ad56c4d8baaa2350e64a0f085a7 (patch)
treeb77b6e7ccad50f913d7339c7f3e7494a09d8e02e /tutorials
parent17f90913cb9a16ff78ad672fe15a2bd4e7e1b5db (diff)
Added Heun's method
Diffstat (limited to 'tutorials')
-rw-r--r--  tutorials/module_3/1_numerical_differentiation.md |  22
-rw-r--r--  tutorials/module_3/2_roots_optimization.md        |  17
-rw-r--r--  tutorials/module_3/4_numerical_integration.md     | 177
-rw-r--r--  tutorials/module_3/5_ode.md                       |  42
-rw-r--r--  tutorials/module_3/heuns_method.png               | bin 45882 -> 50969 bytes
-rw-r--r--  tutorials/module_3/midpoint_method.png            | bin 0 -> 45882 bytes
6 files changed, 217 insertions, 41 deletions
diff --git a/tutorials/module_3/1_numerical_differentiation.md b/tutorials/module_3/1_numerical_differentiation.md
index 0926851..441f838 100644
--- a/tutorials/module_3/1_numerical_differentiation.md
+++ b/tutorials/module_3/1_numerical_differentiation.md
@@ -45,7 +45,7 @@ plt.show()
### Backwards Difference
-Uses the point at which we want to find and the previous point in the array.
+Uses the point at which we want to find the derivative and the previous point in the array.
$$
f'(x_i) = \frac{f(x_{i})-f(x_{i-1})}{x_i - x_{i-1}}
$$
@@ -68,10 +68,28 @@ plt.show()
Try plotting both the forward and backward differences
### Central Difference
-
$$
f'(x_i) = \frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1}-x_{i-1}}
$$
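+A minimal sketch of the central difference on a sampled grid might look like the following; the test function $\sin(x)$ and the grid are just example choices.
+```python
+import numpy as np
+
+x = np.linspace(0, 2*np.pi, 100)
+f = np.sin(x)
+
+# Central difference on the interior points: (f[i+1] - f[i-1]) / (x[i+1] - x[i-1])
+df_central = (f[2:] - f[:-2]) / (x[2:] - x[:-2])
+
+# Compare against the exact derivative cos(x) at the interior points
+print(np.max(np.abs(df_central - np.cos(x[1:-1]))))
+```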
+### Problem 1
+Use the forward difference formula to approximate the derivative of $f(x)$ at $x = 1$ using step sizes: $h=0.5$ and $h=0.1$ for the following function.
+$$
+f(x) = \ln(x^2 + 1)
+$$
+Compare your results with the analytical solution at $x=1$. Comment on how the choice of $h$ affects the accuracy.
+```python
+
+```
+### Problem 2
+Use the central difference formula to approximate the derivative of $f(x)$ at $x = 1.2$ using step sizes: $h=0.5$ and $h=0.1$ for the following function.
+$$
+f(x) = e^{-x^2}
+$$
+Compare your results with the analytical solution at $x=1.2$. Comment on how the choice of $h$ affects the accuracy.
+```python
+
+```
+
---
# Advanced Derivatives
diff --git a/tutorials/module_3/2_roots_optimization.md b/tutorials/module_3/2_roots_optimization.md
index 780a284..5023636 100644
--- a/tutorials/module_3/2_roots_optimization.md
+++ b/tutorials/module_3/2_roots_optimization.md
@@ -1,4 +1,4 @@
-# Root Finding Methods
+ # Root Finding Methods
By now you've learned the quadratic formula to find the roots of a second degree polynomial.
$$
@@ -41,10 +41,9 @@ When
# Bracketing Method
## Incremental Search
-Incremental search
-
-
+The incremental search method is the simplest of the bracketing methods. The basic idea is to start from an initial point and move forward in small increments along the function. At each step, the sign of the function value is checked. If a change of sign occurs between two consecutive points, then a root must lie between them (by the **Intermediate Value Theorem**). This gives us a bracket — an interval that contains the root — which can then be refined using more powerful methods such as bisection or false position.
+Although incremental search is not efficient for accurately finding roots, it is useful for _detecting the presence of roots_ and identifying approximate intervals where they exist. Once such an interval is located, other methods can be applied to converge quickly to the actual root. In this sense, incremental search often serves as a **pre-processing step** for bracketing or open methods, especially when the root’s location is not known in advance.
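+As a rough sketch (the example function, interval, and step size below are arbitrary choices), an incremental search that records sign-change brackets could look like this:
+```python
+import numpy as np
+
+def incremental_search(f, x_start, x_end, dx):
+    """Scan [x_start, x_end] in steps of dx and collect intervals where f changes sign."""
+    brackets = []
+    x_l = x_start
+    while x_l + dx <= x_end:
+        x_u = x_l + dx
+        if np.sign(f(x_l)) != np.sign(f(x_u)):
+            brackets.append((x_l, x_u))
+        x_l = x_u
+    return brackets
+
+# Example: brackets around the roots of sin(x) on [0.5, 10]
+print(incremental_search(np.sin, 0.5, 10, 0.1))
+```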
## Bisection
This method starts by sectioning the data set into two halves. We can do this with a phone book by simply opening it in the middle of the book. In our data set, we find the middle entry to be 'M - Maxwell'. Based on the premise that all names are sorted alphabetically, we can conclude that our target lies in the second half, essentially eliminating the first half. Doing this a second time gives us our new midpoint 'S – Schrödinger', and a third time gets us to our target 'W - Watt'. In only 3 iterations we have reached our target, a huge improvement over the 22 iterations of the incremental search.
@@ -101,8 +100,8 @@ def my_bisection(f, x_l, x_u, tol):
return my_bisection(f, x_l, x_r, tol)
```
-### Example
-
+### Problem 1
+Call the *my_bisection* function to find a root of $f(x) = x^2 - 2$.
```python
f = lambda x: x**2 - 2
@@ -116,7 +115,7 @@ print("f(r01) =", f(r01))
```
-### Assignment 2
+### Problem 2
Write an algorithm that uses a combination of the bisection and incremental search to find multiple roots in the first 8 minutes of the data set.
```python
import pandas as pd
@@ -136,8 +135,8 @@ plt.legend()
plt.show()
```
-
## False Position
+
# Open Methods
So far we have looked at methods that require us to bracket a root before finding it. However, let's think outside the box and ask ourselves if we can find a root with only 1 guess. Can we improve our guess if we have some knowledge of the function itself? The answer - derivatives.
@@ -164,7 +163,7 @@ Since $x_0$ is our current guess and $x_1$ is our next guess, we can write these
$$
\boxed{x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}} \tag{3.1}
$$
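+As a hedged sketch of equation (3.1), with an illustrative tolerance, iteration cap, and test function (this is not the *newtonraphson* interface asked for in the problem below):
+```python
+def newton_sketch(f, df, x0, tol=1e-8, max_iter=50):
+    """Iterate x_{i+1} = x_i - f(x_i)/df(x_i) until |f(x)| < tol."""
+    x = x0
+    for _ in range(max_iter):
+        if abs(f(x)) < tol:
+            break
+        x = x - f(x) / df(x)
+    return x
+
+# Example: root of x^2 - 2 starting from x0 = 1
+print(newton_sketch(lambda x: x**2 - 2, lambda x: 2*x, 1.0))
+```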
-### Assignment 2
+### Problem 1
+From experimental data we extrapolated a function $f$. Write a Python function called *newtonraphson* which has the following input parameters:
- `f` - function
- `df` - first derivative of the function
diff --git a/tutorials/module_3/4_numerical_integration.md b/tutorials/module_3/4_numerical_integration.md
index 6e534b5..78b0328 100644
--- a/tutorials/module_3/4_numerical_integration.md
+++ b/tutorials/module_3/4_numerical_integration.md
@@ -47,40 +47,187 @@ $$
It is simple to implement and works especially well for tabulated data. Like the midpoint rule,
its accuracy is of order $O(h^2)$.
+
+
+```python
+import numpy as np
+
+a = 0
+b = np.pi
+n = 11
+h = (b - a) / (n - 1)
+x = np.linspace(a, b, n)
+f = np.sin(x)
+
+I_trap = (h/2)*(f[0] + 2 * sum(f[1:n-1]) + f[n-1])
+err_trap = 2 - I_trap
+
+print(I_trap)
+print(err_trap)
+```
+
+
### Simpson’s Rule
Simpson’s rules use polynomial interpolation to achieve higher accuracy.
-- **Simpson’s 1/3 Rule (order $O(h^4)$)**
+**Simpson’s 1/3 Rule (order $O(h^4)$)**
Requires an even number of sub-intervals $n$:
$$
I \approx \frac{h}{3}\Big[f(x_0) + 4\sum_{\text{odd } i} f(x_i) +
2\sum_{\text{even } i<n} f(x_i) + f(x_n)\Big].
$$
+```python
+import numpy as np
+
+a = 0
+b = np.pi
+n = 11
+h = (b - a) / (n - 1)
+x = np.linspace(a, b, n)
+f = np.sin(x)
+
+# Weight the interior points: 4 on odd indices, 2 on even interior indices
+I_simp = (h/3) * (f[0] + 2*sum(f[2:n-2:2]) \
+         + 4*sum(f[1:n-1:2]) + f[n-1])
+err_simp = 2 - I_simp
-- **Simpson’s 3/8 Rule**
- Based on cubic interpolation across three subintervals:
+print(I_simp)
+print(err_simp)
+```
+
+
+**Simpson’s 3/8 Rule**
+Based on cubic interpolation across three subintervals:
$$
I \approx \frac{3h}{8}\Big[f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3)\Big].
$$
- Often used to handle the final three panels when \(n\) is not divisible by two.
+Often used to handle the final three panels when $n$ is not divisible by two.
+
+```python
+import numpy as np
+
+a = 0
+b = np.pi
+n = 10                      # 9 subintervals, a multiple of 3
+h = (b - a) / (n - 1)
+x = np.linspace(a, b, n)
+f = np.sin(x)
+
+# Composite 3/8 rule: endpoints get weight 1, interior points whose index is a
+# multiple of 3 get weight 2, and all other interior points get weight 3
+w = np.full(n, 3.0)
+w[0] = w[-1] = 1.0
+w[3:n-1:3] = 2.0
+
+I_simp38 = (3*h/8) * np.sum(w * f)
+err_simp38 = 2 - I_simp38
+
+print(I_simp38)
+print(err_simp38)
+```
### Romberg Integration
Romberg integration refines trapezoidal approximations using Richardson extrapolation to cancel error terms. This results in very rapid convergence for smooth functions, making it a powerful method once the trapezoidal rule is implemented.
+```python
+import numpy as np
+
+a = 0.0
+b = 1.0
+
+f = lambda x: np.exp(-x**2)
+
+iter_ = 0
+iter_max = 50
+err = 1.0
+tol = 1e-14
+
+I = np.zeros((iter_max + 2, iter_max + 2), dtype=float)
+
+def trap_func(func, a, b, n):
+ """Composite trapezoidal rule with n subintervals on [a, b]."""
+ h = (b - a) / n
+ x = np.linspace(a, b, n + 1)
+ y = func(x)
+ return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
+
+
+n = 1
+I[0, 0] = trap_func(f, a, b, n)
+
+# ---- Romberg loop ----
+while (iter_ < iter_max) and (err > tol):
+ iter_ += 1
+ n = 2 ** iter_
+ I[iter_, 0] = trap_func(f, a, b, n)
+ for k in range(1, iter_ + 1):
+ j = iter_ - k
+ I[j, k] = (4**k * I[j + 1, k - 1] - I[j, k - 1]) / (4**k - 1)
+    if iter_ >= 1:
+        err = abs((I[0, iter_] - I[0, iter_ - 1]) / I[0, iter_])
+
+print(I[0, iter_])
+print(err)
+```
+
### Gaussian Quadrature
-Gaussian quadrature chooses evaluation points and weights optimally (based on Legendre
-polynomials) to maximize accuracy. With $n$ evaluation points, Gaussian quadrature is exact for polynomials of degree up to $2n-1$. This makes it extremely efficient for smooth integrands.
+Gaussian quadrature chooses evaluation points and weights optimally (based on Legendre polynomials) to maximize accuracy. With $n$ evaluation points, Gaussian quadrature is exact for polynomials of degree up to $2n-1$. This makes it extremely efficient for smooth integrands.
+
+```python
+import numpy as np
+
+a, b = 0.0, 1.0 # integration limits [a, b]
+n_list = [2, 3, 4, 5, 8, 16] # quadrature orders to try
+
+for n in n_list:
+ xi, wi = np.polynomial.legendre.leggauss(n)
+ x = 0.5*(b - a)*xi + 0.5*(b + a)
+ f_vals = np.exp(-x**2) # f(x) = e^(-x^2)
+
+ I_approx = 0.5*(b - a) * np.sum(wi * f_vals)
+ print(f"{n:<3}| {I_approx:.12f}")
+
+```
+
+
+
+```python
+#!/usr/bin/env python3
+# Gauss Quadrature using Gauss-Legendre method (manual 2-point & 3-point)
+
+import numpy as np
+from scipy.integrate import quad # for reference "exact" integral
+
+# ---- Establish a, b, and the function ----
+a = 0.0
+b = 1.0
+f = lambda x: np.exp(-x**2) # example: f(x) = e^(-x^2)
+
+# ---- Common factor ----
+dx = (b - a) / 2.0
+
+# ---- Two-point Gauss-Legendre ----
+x0 = ((b + a) + (b - a) * (-1/np.sqrt(3))) / 2
+x1 = ((b + a) + (b - a) * ( 1/np.sqrt(3))) / 2
+I_2Point = (f(x0) + f(x1)) * dx
+print(f"2-point Gauss-Legendre: {I_2Point:.10f}")
+
+# ---- Three-point Gauss-Legendre ----
+c0, c1, c2 = 5/9, 8/9, 5/9
+x0 = ((b + a) + (b - a) * (-np.sqrt(3/5))) / 2
+x1 = ((b + a) + (b - a) * (0.0)) / 2
+x2 = ((b + a) + (b - a) * ( np.sqrt(3/5))) / 2
+I_3Point = (c0*f(x0) + c1*f(x1) + c2*f(x2)) * dx
+print(f"3-point Gauss-Legendre: {I_3Point:.10f}")
+
+# ---- Actual value using scipy.integrate.quad ----
+I_actual, _ = quad(f, a, b)
+print(f"Actual (scipy.quad): {I_actual:.10f}")
+
+# ---- Errors ----
+err2 = abs(I_actual - I_2Point)
+err3 = abs(I_actual - I_3Point)
+print(f"Error (2-point): {err2:.2e}")
+print(f"Error (3-point): {err3:.2e}")
+
+```
+
+
----
-## Taking this further
-- **Romberg Integration:** Demonstrates how extrapolation accelerates convergence of trapezoidal approximations.
-- **Gaussian Quadrature:** Introduces optimal integration points and highlights efficiency for
-polynomial and smooth functions.
-- **Simpson’s Rules:** Show how higher-order polynomial interpolation improves accuracy.
-These methods will be implemented and compared in subsequent assignments to build a deeper understanding of numerical integration accuracy and efficiency.
---
@@ -121,12 +268,12 @@ Students should compare results for increasing $n$ and observe how the error dec
## Gaussian Quadrature
-Implement a Python function for two-point and three-point Gauss–Legendre quadrature over an arbitrary interval $[a,b]$. Verify exactness for polynomials up to the appropriate degree and compare performance against the trapezoidal rule on oscillatory test functions.
+Write a Python function for two-point and three-point Gauss–Legendre quadrature over an arbitrary interval $[a,b]$. Verify exactness for polynomials up to the appropriate degree and compare performance against the trapezoidal rule on oscillatory test functions.
---
## Simpson’s 1/3 Rule
-Implement the composite Simpson’s 1/3 rule. Test its accuracy on smooth functions and compare its performance to the trapezoidal rule and Gaussian quadrature. Document error trends and discuss cases where Simpson’s method is preferable.
+Code the composite Simpson’s 1/3 rule. Test its accuracy on smooth functions and compare its performance to the trapezoidal rule and Gaussian quadrature. Document error trends and discuss cases where Simpson’s method is preferable.
---
diff --git a/tutorials/module_3/5_ode.md b/tutorials/module_3/5_ode.md
index 81f6c6b..acb0cff 100644
--- a/tutorials/module_3/5_ode.md
+++ b/tutorials/module_3/5_ode.md
@@ -14,9 +14,9 @@ $$
\frac{dy}{dt}=f(t,y), \quad y(t_0)=y_0
$$
where $f(t,y)$ describes the rate of change of $y$ with respect to time $t$.
-## Explicit Methods
+# Explicit Methods
-### Euler's Method
+## Euler's Method
![forward_eulers_method|350](fw_eulers.png)
Euler's method, or more specifically the Forward Euler method, is one of the simplest methods for solving ODEs. The idea of the Forward Euler method is to approximate the solution curve by taking small time steps, each step moving forward along the tangent given by the slope $f(t,y)$. At each step, the method updates the solution using the formula
$$
@@ -68,27 +68,39 @@ The idea is to improve accuracy by not relying solely on the slope at the beginn
$$
y_{n+1} = y_n + \frac{h}{2}\big[f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n))\big]
$$
-This averaging of slopes makes the method **second-order accurate**, which means its global error decreases proportionally to $h^2$ rather than just hhh, as in Forward Euler.
+This averaging of slopes makes the method **second-order accurate**, which means its global error decreases proportionally to $h^2$ rather than just $h$, as in Forward Euler.
-In practice, Heun’s method provides a significant improvement in accuracy without adding much computational cost. It still only requires evaluating the function $f(t,y)$ twice per step, making it more efficient than higher-order methods like Runge–Kutta while being more stable and reliable than Forward Euler. Because of this balance, Heun’s method is often used as an introduction to the idea of correcting predictions in numerical ODE solvers and as a stepping stone toward understanding more advanced predictor–corrector schemes.
-## Runge-Kutta
-The Runge-Kutta method,
-The Runge–Kutta family of methods can be thought of as systematic improvements on Euler’s idea of “stepping forward” using slope information. In fact, Euler’s method itself is the simplest **Runge–Kutta method of order 1 (RK1)**: it uses just the slope at the beginning of the step to predict the next value. Heun’s method can then be seen as a **Runge–Kutta method of order 2 (RK2)**: instead of relying only on the initial slope, it takes two slope evaluations (one at the start and one at the predicted end), then averages them for better accuracy. This use of multiple slope samples within each step is the hallmark of the Runge–Kutta framework—higher-order accuracy comes from cleverly combining several slope evaluations.
+![Heuns Method|350](heuns_method.png)
+In practice, Heun’s method provides a significant improvement in accuracy without adding much computational cost. It still only requires evaluating the function $f(t,y)$ twice per step, making it more efficient than higher-order methods like Runge–Kutta while being more stable and reliable than Forward Euler.
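+A minimal sketch of Heun's predictor-corrector update; the decay equation $dy/dt = -2y$ and the step size $h = 0.1$ are just example choices.
+```python
+import numpy as np
+
+f = lambda t, y: -2*y            # example ODE with exact solution y = e^(-2t)
+h = 0.1
+t = np.arange(0, 2 + h, h)
+y = np.zeros(len(t))
+y[0] = 1.0
+
+for n in range(len(t) - 1):
+    # Predictor: Forward Euler estimate of y at t_{n+1}
+    y_pred = y[n] + h * f(t[n], y[n])
+    # Corrector: average the slopes at the start and at the predicted end point
+    y[n+1] = y[n] + (h/2) * (f(t[n], y[n]) + f(t[n+1], y_pred))
+
+print(y[-1], np.exp(-2*t[-1]))   # compare with the exact solution
+```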
+## Classical Runge-Kutta
+The **classical fourth-order Runge–Kutta (RK4)** method takes four slope evaluations per step: one at the beginning, two at intermediate points, and one at the end of the step. These slopes are then combined in a weighted average to update the solution. RK4 is accurate to **fourth order**, meaning the global error scales as $h^4$, which is far more accurate than Euler or Heun for the same step size. In practice, this makes RK4 the “workhorse” of many ODE solvers where stability and accuracy are needed but the system is not excessively stiff.
-The most widely used member of this family is the **classical fourth-order Runge–Kutta (RK4)** method. It takes four slope evaluations per step: one at the beginning, two at intermediate points, and one at the end of the step. These slopes are then combined in a weighted average to update the solution. RK4 is accurate to **fourth order**, meaning the global error scales as h4h^4h4, which is far more accurate than Euler or Heun for the same step size. In practice, this makes RK4 the “workhorse” of many ODE solvers where stability and accuracy are needed but the system is not excessively stiff.
-
-Modern libraries, such as **SciPy’s `solve_ivp`**, build on this idea with adaptive Runge–Kutta methods. For example, `RK45` is an adaptive solver that pairs a 4th-order method with a 5th-order method, comparing the two at each step to estimate the error. Based on this estimate, the solver adjusts the step size automatically: smaller steps in regions where the solution changes quickly, and larger steps where the solution is smooth. This makes `RK45` both efficient and robust, giving you the accuracy benefits of Runge–Kutta methods while also taking away the burden of manually choosing an appropriate step size.
-
-Would you like me to also show the Butcher tableaux for RK1, RK2, and RK4 so you see the structure that unifies them?
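+A minimal sketch of one RK4 time-stepping loop, reusing the same example decay equation and step size as in the Heun sketch above.
+```python
+import numpy as np
+
+f = lambda t, y: -2*y            # example ODE with exact solution y = e^(-2t)
+h = 0.1
+t = np.arange(0, 2 + h, h)
+y = np.zeros(len(t))
+y[0] = 1.0
+
+for n in range(len(t) - 1):
+    k1 = f(t[n], y[n])
+    k2 = f(t[n] + h/2, y[n] + h/2*k1)
+    k3 = f(t[n] + h/2, y[n] + h/2*k2)
+    k4 = f(t[n] + h, y[n] + h*k3)
+    # Weighted average of the four slope estimates
+    y[n+1] = y[n] + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
+
+print(y[-1], np.exp(-2*t[-1]))   # compare with the exact solution
+```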
+Modern libraries, such as **SciPy’s `solve_ivp`**, build on this idea with adaptive Runge–Kutta methods. For example, `RK45` is an adaptive solver that pairs a 4th-order method with a 5th-order method, comparing the two at each step to estimate the error. Based on this estimate, the solver adjusts the step size automatically: smaller steps in regions where the solution changes quickly, and larger steps where the solution is smooth. This makes `RK45` both efficient and robust, giving you the accuracy benefits of Runge–Kutta methods while also taking away the burden of manually choosing an appropriate step size.
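+A short usage sketch of `solve_ivp` with the adaptive `RK45` method; the ODE and tolerances are example choices.
+```python
+import numpy as np
+from scipy.integrate import solve_ivp
+
+f = lambda t, y: -2*y                      # example ODE: dy/dt = -2y
+sol = solve_ivp(f, (0.0, 2.0), [1.0], method="RK45", rtol=1e-8, atol=1e-10)
+
+print(sol.y[0, -1])                        # numerical value at t = 2
+print(np.exp(-2*2.0))                      # exact solution for comparison
+```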
## Problem 1: Exponential decay
## Problem 2: The Predator-Prey Model
+## Problem 3: Swinging Pendulum
+
+
+# Implicit Method
+Implicit methods for solving ODEs are numerical schemes in which the new solution value at the next time step appears on both sides of the update equation. This makes them fundamentally different from explicit methods, where the next value is computed directly from already-known quantities. While explicit methods are simple and computationally cheap per step, implicit methods generally require solving algebraic equations at every step. In exchange, implicit methods provide much greater numerical stability, especially for stiff problems. The general idea of an implicit method for a first-order ODE is to write the time-stepping update in a form such that
+$$
+y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})\tag{2}
+$$
+where $h$ is the step size. Unlike explicit schemes (where $f$ is evaluated at known values $t_n$, $y_n$​), here the function is evaluated at the unknown point $t_{n+1}, y_{n+1}$​. This results in an equation that must be solved for $y_{n+1}$​ before advancing to the next step.
+
+Implicit methods are especially useful when dealing with stiff ODEs, where some components of the system change much faster than others. In such cases, explicit methods require extremely small time steps to remain stable, making them inefficient. Implicit schemes allow much larger time steps without losing stability, which makes them the preferred choice in fields like chemical kinetics, heat conduction, and structural dynamics with damping. Although they require more computational effort per step, they are often far more efficient overall for stiff problems.
+## Backward Euler
+The implicit counterpart of Forward Euler is the Backward Euler method. Like Forward Euler, it advances the solution using a single slope estimate, but instead of using the slope at the current point, it uses the slope at the unknown future point. This creates an implicit update equation that must be solved at each step, making Backward Euler both the easiest implicit method to derive and an excellent starting point for studying implicit schemes.
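+As a hedged sketch, assuming the linear test equation $y' = -\lambda y$, for which the implicit update $y_{n+1} = y_n - h\lambda y_{n+1}$ can be solved in closed form; a nonlinear $f$ would instead require a root-finding step (for example Newton's method) at every time step.
+```python
+import numpy as np
+
+lam = 50.0                       # stiff decay rate (example choice)
+h = 0.1                          # note h*lam = 5, where Forward Euler would be unstable
+t = np.arange(0, 2 + h, h)
+y = np.zeros(len(t))
+y[0] = 1.0
+
+for n in range(len(t) - 1):
+    # Backward Euler: y_{n+1} = y_n + h*(-lam*y_{n+1})  =>  solve for y_{n+1}
+    y[n+1] = y[n] / (1 + h*lam)
+
+print(y[-1])                     # decays toward zero without blowing up
+```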
+
+## Problem 1: Chemical Kinetics
+## Problem 2:
-# Implicit ODE
-### Backwards Eulers
+# Systems of ODE's
+All of the methods above for a single ODE can be extended to systems of ODEs.
-# System of ODE's \ No newline at end of file
+## Problem 1: \ No newline at end of file
diff --git a/tutorials/module_3/heuns_method.png b/tutorials/module_3/heuns_method.png
index d3dfa5e..6e9c582 100644
--- a/tutorials/module_3/heuns_method.png
+++ b/tutorials/module_3/heuns_method.png
Binary files differ
diff --git a/tutorials/module_3/midpoint_method.png b/tutorials/module_3/midpoint_method.png
new file mode 100644
index 0000000..d3dfa5e
--- /dev/null
+++ b/tutorials/module_3/midpoint_method.png
Binary files differ