Diffstat (limited to 'tutorials/module_3/4_numerical_integration.md')
-rw-r--r--  tutorials/module_3/4_numerical_integration.md  | 259
1 file changed, 0 insertions(+), 259 deletions(-)
diff --git a/tutorials/module_3/4_numerical_integration.md b/tutorials/module_3/4_numerical_integration.md
deleted file mode 100644
index 4448985..0000000
--- a/tutorials/module_3/4_numerical_integration.md
+++ /dev/null
@@ -1,259 +0,0 @@
-# Numerical Integration
-## Why Numerical?
-Integration is one of the fundamental tools in engineering analysis. Mechanical engineers frequently encounter integrals when computing work from force–displacement data, determining heat transfer from a time-dependent signal, or calculating lift and drag forces from pressure distributions over an airfoil. While some integrals can be evaluated analytically, most practical problems involve functions that are either too complex to integrate in closed form or available only as experimental data. Numerical integration provides a systematic way to approximate the integral of a function over a finite interval in such cases.
-
----
-
-## Integration
-We wish to approximate a definite integral of the form:
-$$
-I = \int_a^b f(x) \, dx
-$$
-by a weighted sum of function values:
-$$
-I \approx \sum_{i=0}^m w_i f(x_i).
-$$
-Here, $x_i$ are the chosen evaluation points and $w_i$ are their associated weights.
-
-### Midpoint Rule
-The midpoint rule divides the interval into $n$ sub-intervals of equal width $h = (b-a)/n$ and
-evaluates the function at the midpoint of each sub-interval:
-$$
-I \approx \sum_{i=0}^{n-1} h \, f\!\left(x_i + \tfrac{h}{2}\right).
-$$
-This method achieves second-order accuracy (error decreases with $h^2$).
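-
-A minimal sketch of the rule, using the test integral $\int_0^\pi \sin x \, dx = 2$ that also appears in the later examples (the choice $n = 10$ is illustrative):
-
-```python
-import numpy as np
-
-a = 0
-b = np.pi
-n = 10                              # number of sub-intervals
-h = (b - a) / n
-x_mid = a + h/2 + h*np.arange(n)    # midpoint of each sub-interval
-f_mid = np.sin(x_mid)
-
-I_mid = h * np.sum(f_mid)
-err_mid = 2 - I_mid                 # exact value of the integral is 2
-
-print(I_mid)
-print(err_mid)
-```
-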
-### Trapezoidal Rule
-The trapezoidal rule approximates the area under the curve as a series of trapezoids:
-$$
-I \approx \frac{h}{2}\Big[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\Big].
-$$
-It is simple to implement and works especially well for tabulated data. Like the midpoint rule,
-its accuracy is of order $O(h^2)$.
-
-
-
-```python
-import numpy as np
-
-a = 0
-b = np.pi
-n = 11                        # 11 points -> 10 sub-intervals
-h = (b - a) / (n - 1)
-x = np.linspace(a, b, n)
-f = np.sin(x)
-
-# endpoints get weight 1, interior points weight 2
-I_trap = (h/2)*(f[0] + 2 * sum(f[1:n-1]) + f[n-1])
-err_trap = 2 - I_trap         # exact value of the integral is 2
-
-print(I_trap)
-print(err_trap)
-```
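-
-NumPy also ships this rule as `np.trapz`, which is convenient for tabulated data; a quick cross-check of the result above:
-
-```python
-import numpy as np
-
-x = np.linspace(0, np.pi, 11)
-print(np.trapz(np.sin(x), x))   # composite trapezoidal rule on the samples
-```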
-
-
-### Simpson’s Rule
-Simpson’s rules use polynomial interpolation to achieve higher accuracy.
-
-**Simpson’s 1/3 Rule (order $O(h^4)$)**
- Requires an even number of sub-intervals $n$:
- $$
- I \approx \frac{h}{3}\Big[f(x_0) + 4\sum_{\text{odd } i} f(x_i) +
- 2\sum_{\text{even } i,\; 0<i<n} f(x_i) + f(x_n)\Big].
- $$
-```python
-import numpy as np
-
-a = 0
-b = np.pi
-n = 11                       # 11 points -> 10 sub-intervals (even, as the 1/3 rule requires)
-h = (b - a) / (n - 1)
-x = np.linspace(a, b, n)
-f = np.sin(x)
-
-# odd-index interior points get weight 4, even-index interior points weight 2
-I_simp = (h/3) * (f[0] + 4*sum(f[1:n-1:2]) \
-         + 2*sum(f[2:n-1:2]) + f[n-1])
-err_simp = 2 - I_simp
-
-print(I_simp)
-print(err_simp)
-```
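-
-SciPy provides the same composite rule as `scipy.integrate.simpson`; a minimal cross-check of the hand-rolled sum (SciPy is assumed available, as in the comparisons later in this tutorial):
-
-```python
-import numpy as np
-from scipy.integrate import simpson
-
-x = np.linspace(0, np.pi, 11)
-print(simpson(np.sin(x), x=x))   # composite Simpson's rule on the samples
-```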
-
-
-**Simpson’s 3/8 Rule**
-Based on cubic interpolation across three subintervals:
- $$
- I \approx \frac{3h}{8}\Big[f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3)\Big].
- $$
-It is often used to handle the final three sub-intervals when the number of sub-intervals is odd, with the 1/3 rule applied to the rest. The composite form, applied to the same test integral, looks like this:
-
-```python
-import numpy as np
-
-a = 0
-b = np.pi
-n = 10                      # 10 points -> 9 sub-intervals (a multiple of 3, as the 3/8 rule requires)
-h = (b - a) / (n - 1)
-x = np.linspace(a, b, n)
-f = np.sin(x)
-
-# composite 3/8 rule: endpoints weight 1, every third interior
-# point weight 2, all other interior points weight 3
-w = 3 * np.ones(n)
-w[0] = w[-1] = 1
-w[3:-1:3] = 2
-I_simp38 = (3*h/8) * np.sum(w * f)
-err_simp38 = 2 - I_simp38
-
-print(I_simp38)
-print(err_simp38)
-```
-
-### Romberg Integration
-Romberg integration refines trapezoidal approximations using Richardson extrapolation to cancel error terms. This results in very rapid convergence for smooth functions, making it a powerful method once the trapezoidal rule is implemented.
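-
-Writing $I_{j,0}$ for the composite trapezoidal estimate on $2^j$ sub-intervals (the indexing used in the loop below), each extrapolation step combines a finer and a coarser estimate to cancel the leading error term:
-$$
-I_{j,k} = \frac{4^k \, I_{j+1,\,k-1} - I_{j,\,k-1}}{4^k - 1},
-$$
-so the entry $I_{0,k}$ is accurate to order $O(h^{2k+2})$ for smooth integrands.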
-
-```python
-import numpy as np
-
-a = 0.0
-b = 1.0
-
-f = lambda x: np.exp(-x**2)
-
-iter_ = 0
-iter_max = 50
-err = 1.0
-tol = 1e-14
-
-I = np.zeros((iter_max + 2, iter_max + 2), dtype=float)
-
-def trap_func(func, a, b, n):
- """Composite trapezoidal rule with n subintervals on [a, b]."""
- h = (b - a) / n
- x = np.linspace(a, b, n + 1)
- y = func(x)
- return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
-
-
-n = 1
-I[0, 0] = trap_func(f, a, b, n)
-
-# ---- Romberg loop ----
-while (iter_ < iter_max) and (err > tol):
- iter_ += 1
- n = 2 ** iter_
- I[iter_, 0] = trap_func(f, a, b, n)
- for k in range(1, iter_ + 1):
- j = iter_ - k
- I[j, k] = (4**k * I[j + 1, k - 1] - I[j, k - 1]) / (4**k - 1)
-    if iter_ >= 1:
-        err = abs(I[0, iter_] - I[0, iter_ - 1])   # change between successive best estimates
-
-print(I[0, iter_])   # best Romberg estimate
-print(err)
-```
-
-### Gaussian Quadrature
-Gaussian quadrature chooses evaluation points and weights optimally to maximize accuracy. With $n$ evaluation points, Gaussian quadrature is exact for polynomials of degree up to $2n-1$. This makes it extremely efficient for smooth integrands.
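-
-The Gauss-Legendre points $\xi_i$ and weights $w_i$ are tabulated on the reference interval $[-1, 1]$, so the integral is first mapped onto that interval:
-$$
-\int_a^b f(x)\,dx = \frac{b-a}{2}\int_{-1}^{1} f\!\left(\frac{b-a}{2}\,\xi + \frac{a+b}{2}\right) d\xi
-\approx \frac{b-a}{2}\sum_{i=1}^{n} w_i\, f\!\left(\frac{b-a}{2}\,\xi_i + \frac{a+b}{2}\right).
-$$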
-
-```python
-import numpy as np
-
-a, b = 0.0, 1.0 # integration limits [a, b]
-n_list = [2, 3, 4, 5, 8, 16] # quadrature orders to try
-
-for n in n_list:
- xi, wi = np.polynomial.legendre.leggauss(n)
- x = 0.5*(b - a)*xi + 0.5*(b + a)
- f_vals = np.exp(-x**2) # f(x) = e^(-x^2)
-
- I_approx = 0.5*(b - a) * np.sum(wi * f_vals)
- print(f"{n:<3}| {I_approx:.12f}")
-
-```
-
-
-
-```python
-import numpy as np
-from scipy.integrate import quad # for reference "exact" integral
-
-# ---- Establish a, b, and the function ----
-a = 0.0
-b = 1.0
-f = lambda x: np.exp(-x**2) # example: f(x) = e^(-x^2)
-
-# ---- Common factor ----
-dx = (b - a) / 2.0
-
-# ---- Two-point Gauss-Legendre ----
-x0 = ((b + a) + (b - a) * (-1/np.sqrt(3))) / 2
-x1 = ((b + a) + (b - a) * ( 1/np.sqrt(3))) / 2
-I_2Point = (f(x0) + f(x1)) * dx
-print(f"2-point Gauss-Legendre: {I_2Point:.10f}")
-
-# ---- Three-point Gauss-Legendre ----
-c0, c1, c2 = 5/9, 8/9, 5/9
-x0 = ((b + a) + (b - a) * (-np.sqrt(3/5))) / 2
-x1 = ((b + a) + (b - a) * (0.0)) / 2
-x2 = ((b + a) + (b - a) * ( np.sqrt(3/5))) / 2
-I_3Point = (c0*f(x0) + c1*f(x1) + c2*f(x2)) * dx
-print(f"3-point Gauss-Legendre: {I_3Point:.10f}")
-
-# ---- Actual value using scipy.integrate.quad ----
-I_actual, _ = quad(f, a, b)
-print(f"Actual (scipy.quad): {I_actual:.10f}")
-
-# ---- Errors ----
-err2 = abs(I_actual - I_2Point)
-err3 = abs(I_actual - I_3Point)
-print(f"Error (2-point): {err2:.2e}")
-print(f"Error (3-point): {err3:.2e}")
-
-```
-
-
-
-## More Integration Methods
-
-### Newton-Cotes Formulas
-The midpoint, trapezoidal, and Simpson rules are all members of the Newton-Cotes family, which integrates a polynomial interpolant through equally spaced points.
-
-### Adaptive Quadrature
-Adaptive quadrature estimates the error on each sub-interval and subdivides only where the integrand changes rapidly, concentrating function evaluations where they are needed most.
-
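-SciPy's `quad` routine, already used above for reference values, is an adaptive method: it keeps subdividing the interval until its internal error estimate meets the requested tolerance. A minimal sketch with the same test integrand $e^{-x^2}$:
-
-```python
-import numpy as np
-from scipy.integrate import quad
-
-# quad returns the integral estimate and an estimate of the absolute error
-I, err_est = quad(lambda x: np.exp(-x**2), 0.0, 1.0)
-print(I, err_est)
-```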
-
-# Problems
-
-## Numerical Integration to Compute Work
-In physics we learned that work equals force times distance:
-$$
-W = F \, d.
-$$
-For a force that varies with position, this becomes an integral:
-$$
-W = \int F(x) \, dx.
-$$
-If $F(x)$ is easy to integrate, the problem can be solved analytically. In a realistic problem, however, the force may not be available as a function but only as tabulated data. Suppose measurements were taken while a weighted box was pulled by a wire, recording the force in the wire and the angle of the wire above the horizontal at several positions. Only the horizontal component of the force does work on the box, so
-$$
-W = \int_{0}^{30} F(x)\cos\theta \, dx.
-$$
-
-| x (ft) | F(x) (lb) | θ (rad) | F(x) cos θ (lb) |
-| ------ | --------- | ------- | ---------- |
-| 0 | 0.0 | 0.50 | 0.0000 |
-| 5 | 9.0 | 1.40 | 1.5297 |
-| 10 | 13.0 | 0.75 | 9.5120 |
-| 15 | 14.0 | 0.90 | 8.7025 |
-| 20 | 10.5 | 1.30 | 2.8087 |
-| 25 | 12.0 | 1.48 | 1.0881 |
-| 30 | 5.0 | 1.50 | 0.3537 |
-Use the trapezoidal rule to compute the work done on the box.
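-
-As a starting point, a minimal sketch of the setup (the arrays are simply the table values; `np.trapz` applies the composite trapezoidal rule to sampled data):
-
-```python
-import numpy as np
-
-x = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])                  # position, ft
-Fcos = np.array([0.0, 1.5297, 9.5120, 8.7025, 2.8087, 1.0881, 0.3537])  # F(x) cos(theta), lb
-
-W = np.trapz(Fcos, x)   # work done on the box, ft*lb
-print(W)
-```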
-
-## Implementing the Composite Trapezoidal Rule
-
-Implement a Python function to approximate integrals using the trapezoidal rule.
-
-```python
-import numpy as np
-
-def trapz(f, a, b, n):
- x = np.linspace(a, b, n+1)
- y = f(x)
- h = (b - a) / n
- return h * (0.5*y[0] + y[1:-1].sum() + 0.5*y[-1])
-
-# Example tests
-f1 = np.sin
-I_true1 = 2.0 # ∫_0^π sin(x) dx
-for n in [4, 8, 16, 32]:
-    I_n = trapz(f1, 0, np.pi, n)
-    print(n, I_n, abs(I_true1 - I_n))   # n, estimate, absolute error
-```
-
-Compare the results for increasing $n$ and observe how the error decreases as $O(h^2)$.