From 183c55c09e62c069d65b2f5d9f5d7c065ad16228 Mon Sep 17 00:00:00 2001
From: Christian Kolset
Date: Wed, 3 Sep 2025 13:19:49 -0600
Subject: Updated entries

---
 tutorials/module_3/0_numerical_methods.md         |  46 +++++++
 tutorials/module_3/1_numerical_differentiation.md |  67 +++++++++++
 tutorials/module_3/2_roots_optimization.md        |  12 ++
 tutorials/module_3/3_system_of_equations.md       |  11 ++
 tutorials/module_3/4_numerical_integration.md     | 129 ++++++++++++++++++++
 tutorials/module_3/5_ode.md                       |  10 ++
 tutorials/module_3/numerical_differentiation.md   |  67 -----------
 tutorials/module_3/numerical_integration.md       | 140 ----------------------
 tutorials/module_3/numerical_methods.md           |  46 -------
 tutorials/module_3/ode.md                         |  10 --
 tutorials/module_3/roots_optimization.md          |  12 --
 tutorials/module_3/system_of_equations.md         |  11 --
 12 files changed, 275 insertions(+), 286 deletions(-)
 create mode 100644 tutorials/module_3/0_numerical_methods.md
 create mode 100644 tutorials/module_3/1_numerical_differentiation.md
 create mode 100644 tutorials/module_3/2_roots_optimization.md
 create mode 100644 tutorials/module_3/3_system_of_equations.md
 create mode 100644 tutorials/module_3/4_numerical_integration.md
 create mode 100644 tutorials/module_3/5_ode.md
 delete mode 100644 tutorials/module_3/numerical_differentiation.md
 delete mode 100644 tutorials/module_3/numerical_integration.md
 delete mode 100644 tutorials/module_3/numerical_methods.md
 delete mode 100644 tutorials/module_3/ode.md
 delete mode 100644 tutorials/module_3/roots_optimization.md
 delete mode 100644 tutorials/module_3/system_of_equations.md

diff --git a/tutorials/module_3/0_numerical_methods.md b/tutorials/module_3/0_numerical_methods.md
new file mode 100644
index 0000000..449ece0
--- /dev/null
+++ b/tutorials/module_3/0_numerical_methods.md
@@ -0,0 +1,46 @@
# Numerical Methods
Engineering

## What is a numerical method?
Numerical methods are techniques that transform mathematical problems into forms that can be solved using arithmetic and logical operations. Because digital computers excel at these computations, numerical methods are often referred to as computer mathematics.

## Numerical Differentiation
- Forward difference
- Backward difference
- Central difference

[Read More](https://pythonnumericalmethods.studentorg.berkeley.edu/notebooks/chapter20.00-Numerical-Differentiation.html)

## Roots and Optimization
- Incremental search
- Bisection
- Modified secant
- Newton-Raphson

## System of Equations
- Gaussian elimination
- LU decomposition

## Numerical Integration
- Midpoint
- Trapezoidal
- Romberg
- Gaussian
- Simpson's rule

[Read More](https://pythonnumericalmethods.studentorg.berkeley.edu/notebooks/chapter21.00-Numerical-Integration.html)

## Numerical Solutions of Ordinary Differential Equations
Euler's Method
- Forward
- Backward

Runge-Kutta

[Read More](https://pythonnumericalmethods.studentorg.berkeley.edu/notebooks/chapter22.00-ODE-Initial-Value-Problems.html)

diff --git a/tutorials/module_3/1_numerical_differentiation.md b/tutorials/module_3/1_numerical_differentiation.md
new file mode 100644
index 0000000..b34b315
--- /dev/null
+++ b/tutorials/module_3/1_numerical_differentiation.md
@@ -0,0 +1,67 @@
# Numerical Differentiation
Finding the derivative of tabular data can be done using a finite difference: we pick two points on a function or a set of data points and calculate the slope between them. Let's imagine a domain $x$ as a vector such that $\vec{x} = (x_0, x_1, x_2, \dots)$. Then we can use the following methods to approximate derivatives.

## Forward Difference
Uses the point at which we want to find the derivative and the point ahead of it on the line.
$$
f'(x_i) = \frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}
$$
*Hint: Consider what happens at the last point.*

```python
import numpy as np
import matplotlib.pyplot as plt

# Initialize vectors
x = np.linspace(0, 2, 100)
y = 34 * np.exp(3 * x)

# Forward difference: slope between each point and the next one
dydx = (y[1:] - y[:-1]) / (x[1:] - x[:-1])

# Plot the function; the forward difference exists at all but the last point
plt.plot(x, y, label=r'$y(x)$')
plt.plot(x[:-1], dydx, label=r'$\frac{dy}{dx}$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plot of $34e^{3x}$')
plt.grid(True)
plt.legend()
plt.show()
```

## Backward Difference
Uses the point at which we want to find the derivative and the point behind it on the line.
$$
f'(x_i) = \frac{f(x_{i})-f(x_{i-1})}{x_i - x_{i-1}}
$$

```python
import numpy as np
import matplotlib.pyplot as plt

# Initialize vectors
x = np.linspace(0, 2, 100)
y = 34 * np.exp(3 * x)

# Backward difference: slope between each point and the previous one
dydx = (y[1:] - y[:-1]) / (x[1:] - x[:-1])

# Plot the function; the backward difference exists at all but the first point
plt.plot(x, y, label=r'$y(x)$')
plt.plot(x[1:], dydx, label=r'$\frac{dy}{dx}$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plot of $34e^{3x}$')
plt.grid(True)
plt.legend()
plt.show()
```

## Central Difference

$$
f'(x_i) = \frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1}-x_{i-1}}
$$

diff --git a/tutorials/module_3/2_roots_optimization.md b/tutorials/module_3/2_roots_optimization.md
new file mode 100644
index 0000000..3a288cc
--- /dev/null
+++ b/tutorials/module_3/2_roots_optimization.md
@@ -0,0 +1,12 @@
# Root Finding Methods

Root-finding methods are also known as non-linear equation solvers.
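As a preview of the bracketing idea shared by several of the methods below, here is a minimal bisection sketch. The example function $x^2 - 2$ and the tolerance are illustrative assumptions, not part of the tutorial's exercises:

```python
# Bisection: repeatedly halve a bracket [a, b] on which f changes sign.
# The target function and tolerance here are illustrative choices.

def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2          # midpoint of the current bracket
        if f(a) * f(m) <= 0:
            b = m                # root lies in the left half
        else:
            a = m                # root lies in the right half
        if b - a < tol:
            break
    return (a + b) / 2

root = bisection(lambda x: x**2 - 2, 0.0, 2.0)
print(root)  # ≈ 1.4142 (sqrt(2))
```

Each iteration halves the bracket, so the error shrinks by a factor of two per step regardless of how badly behaved the function is, which is why bisection is slow but dependable.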
## Incremental Search

## Bisection

## Modified Secant

## Newton-Raphson

diff --git a/tutorials/module_3/3_system_of_equations.md b/tutorials/module_3/3_system_of_equations.md
new file mode 100644
index 0000000..9830060
--- /dev/null
+++ b/tutorials/module_3/3_system_of_equations.md
@@ -0,0 +1,11 @@
# Systems of Equations

## Naive Gauss Elimination

## Forward Elimination

## Back Substitution

## LU Decomposition

diff --git a/tutorials/module_3/4_numerical_integration.md b/tutorials/module_3/4_numerical_integration.md
new file mode 100644
index 0000000..c486825
--- /dev/null
+++ b/tutorials/module_3/4_numerical_integration.md
@@ -0,0 +1,129 @@
## Midpoint Method

## Trapezoidal Method

## Romberg Integration

## Gaussian Integration

## Simpson's Rule

### Simpson's 1/3

### Simpson's 3/8

# Numerical Integration
## Why Numerical?
Integration is one of the fundamental tools in engineering analysis. Mechanical engineers frequently encounter integrals when computing work from force–displacement data, determining heat transfer from a time-dependent signal, or calculating lift and drag forces from pressure distributions over an airfoil. While some integrals can be evaluated analytically, most practical problems involve functions that are either too complex or are available only as experimental data. In these cases, numerical integration—also known as quadrature—provides a systematic approach to approximating the integral of a function over a finite interval.
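For instance, the work done by a force recorded at discrete displacements, $W = \int F \, dx$, can be approximated directly from the samples. This is a minimal sketch with made-up data (a linear force $F = 100x$ sampled at five points), summing the trapezoid areas between consecutive samples:

```python
import numpy as np

# Made-up force-displacement samples: F(x) = 100x over [0, 2] m
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])         # displacement, m
F = np.array([0.0, 50.0, 100.0, 150.0, 200.0])  # force, N

# Work as the sum of trapezoid areas between consecutive samples
W = np.sum((F[1:] + F[:-1]) / 2 * (x[1:] - x[:-1]))
print(W)  # 200.0 J (exact here, since F is linear in x)
```

Because the samples are the only information available, the quality of the estimate is tied to how finely the experiment sampled the signal.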
+ +--- + +## Numerical Methods +We wish to approximate a definite integral of the form: +$$ +I = \int_a^b f(x) \, dx +$$ +by a weighted sum of function values: +$$ +I \approx \sum_{i=0}^m w_i f(x_i). +$$ +Here, $x_i$ are the chosen evaluation points and $w_i$ are their associated weights. + +### Midpoint Rule +The midpoint rule divides the interval into $n$ subintervals of equal width $h = (b-a)/n$ and +evaluates the function at the midpoint of each subinterval: +$$ +I \approx \sum_{i=0}^{n-1} h \, f\!\left(x_i + \tfrac{h}{2}\right). +$$ +This method achieves second-order accuracy (error decreases with $h^2$). +### Trapezoidal Rule +The trapezoidal rule approximates the area under the curve as a series of trapezoids: +$$ +I \approx \frac{h}{2}\Big[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\Big]. +$$ +It is simple to implement and works especially well for tabulated data. Like the midpoint rule, +its accuracy is of order $O(h^2)$. + +### Simpson’s Rule +Simpson’s rules use polynomial interpolation to achieve higher accuracy. + +- **Simpson’s 1/3 Rule (order $O(h^4)$)** + Requires an even number of subintervals $n$: + $$ + I \approx \frac{h}{3}\Big[f(x_0) + 4\sum_{\text{odd } i} f(x_i) + + 2\sum_{\text{even } i