author    Christian Kolset <christian.kolset@gmail.com>  2025-09-12 17:24:40 -0600
committer Christian Kolset <christian.kolset@gmail.com>  2025-09-12 17:24:40 -0600
commit    18eb6dbfa545cdbb69103e312ccb0992d1b20b00 (patch)
tree      a601690e4ff90cb11032ddf43c6c794c829a18a1 /tutorials
parent    0c79ba277ff4afbb36613dfd366481b5e28cdf4e (diff)
Worked on module 3
Diffstat (limited to 'tutorials')
 tutorials/module_3/1_numerical_differentiation.md | 41
 tutorials/module_3/2_roots_optimization.md        |  4
 tutorials/module_3/3_system_of_equations.md       | 96
 tutorials/module_3/4_numerical_integration.md     | 61
 tutorials/module_3/5_ode.md                       |  7
 5 files changed, 160 insertions, 49 deletions
diff --git a/tutorials/module_3/1_numerical_differentiation.md b/tutorials/module_3/1_numerical_differentiation.md
index aa45c3a..68e0953 100644
--- a/tutorials/module_3/1_numerical_differentiation.md
+++ b/tutorials/module_3/1_numerical_differentiation.md
@@ -11,7 +11,8 @@ s = 34 * np.exp(3 * t)
Then we can use the following methods to approximate the derivative.
-## Forward Difference
+## Finite Difference
+### Forward Difference
Uses the point at which we want to find the derivative and the next point forward in the array.
$$
f'(x_i) = \frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}
@@ -19,14 +20,21 @@ $$
*Note: If we apply this to an array, consider what happens at the last point.*
```python
+# Forward difference with an explicit loop (all points except the last)
+ds_dt = np.zeros(len(t) - 1)
+for i in range(len(t) - 1):
+    ds_dt[i] = (s[i + 1] - s[i]) / (t[i + 1] - t[i])
+```
+
+```python
# Forward difference using python arrays
-dsdt = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
+dsdt = (s[1:] - s[:-1]) / (t[1:] - t[:-1])
import matplotlib.pyplot as plt
# Plot the function
-plt.plot(x, s, label=r'$y(x)$')
-plt.plot(x, dsdt, label=b'$/frac{ds}{dt}$')
+plt.plot(t, s, label=r'$s(t)$')
+plt.plot(t[:-1], dsdt, label=r'$\frac{ds}{dt}$')
plt.xlabel('Time (t)')
plt.ylabel('Displacement (s)')
plt.title('Plot of $34e^{3t}$')
@@ -36,7 +44,7 @@ plt.show()
```
-## Backwards Difference
+### Backwards Difference
Uses the point at which we want to find the derivative and the previous point in the array.
$$
f'(x_i) = \frac{f(x_{i})-f(x_{i-1})}{x_i - x_{i-1}}
@@ -44,23 +52,32 @@ $$
```python
-dsdt = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
+dsdt = (s[1:] - s[:-1]) / (t[1:] - t[:-1])
# Plot the function
-plt.plot(x, y, label=r'$y(x)$')
-plt.plot(x, dydx, label=b'$/frac{ds}{dt}$')
-plt.xlabel('x')
-plt.ylabel('y')
-plt.title('Plot of $34e^{3x}$')
+plt.plot(t, s, label=r'$s(t)$')
+plt.plot(t[1:], dsdt, label=r'$\frac{ds}{dt}$')
+
+plt.xlabel('Time (t)')
+plt.ylabel('Displacement (s)')
+plt.title('Plot of $34e^{3t}$')
plt.grid(True)
plt.legend()
plt.show()
```
-## Central Difference
+Try plotting both the forward and backward difference approximations on the same axes and compare them.
+
+### Central Difference
$$
f'(x_i) = \frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1}-x_{i-1}}
$$
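Following the pattern of the earlier snippets, a central-difference sketch (the `linspace` grid here is illustrative; in the tutorial `t` and `s` are defined at the top of the file):

```python
import numpy as np

# Illustrative grid; the tutorial defines t and s earlier
t = np.linspace(0, 1, 100)
s = 34 * np.exp(3 * t)

# Central difference: uses a neighbor on each side, so the
# first and last points have no approximation
dsdt = (s[2:] - s[:-2]) / (t[2:] - t[:-2])

# The exact derivative of 34e^{3t} is 102e^{3t}; compare at interior points
exact = 102 * np.exp(3 * t[1:-1])
print(np.max(np.abs(dsdt - exact)))
```

Note the result has two fewer entries than `t`, so plot it against `t[1:-1]`.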
+---
+# Advanced Derivatives
+## High-Accuracy Differentiation Formulas
+## Richardson Extrapolation
+## Derivative of Unequally Spaced Data
+## Partial Derivatives
diff --git a/tutorials/module_3/2_roots_optimization.md b/tutorials/module_3/2_roots_optimization.md
index 8083260..e4224f1 100644
--- a/tutorials/module_3/2_roots_optimization.md
+++ b/tutorials/module_3/2_roots_optimization.md
@@ -39,7 +39,6 @@ Another approach may be to index the book and jump straight to the 'W' names how
When
-
# Bracketing Method
## Incremental Search
Incremental search
@@ -137,9 +136,8 @@ plt.legend()
plt.show()
```
-## False Position
-
+## False Position
# Open Methods
So far we have looked at methods that require us to bracket a root before finding it. However, let's think outside the box and ask whether we can find a root with only one guess. Can we improve our guess if we have some knowledge of the function itself? The answer: derivatives.
diff --git a/tutorials/module_3/3_system_of_equations.md b/tutorials/module_3/3_system_of_equations.md
index d52dc69..e02fe40 100644
--- a/tutorials/module_3/3_system_of_equations.md
+++ b/tutorials/module_3/3_system_of_equations.md
@@ -1,13 +1,99 @@
+# Linear Equations
+Let's consider a linear equation
+$$
+ax=b
+$$
+where $a$ and $b$ are known constants, so we can solve for $x$ easily.
+
+## Problem 1
+[]
+
+# Linear Algebra
+Although this isn't a course in linear algebra, we are going to use some of its fundamental concepts to solve systems of equations.
+
+If you haven't taken linear algebra before, it is the study of linear equations, which can be represented in the form of matrices. Let's say we have a system of equations
+$$
+\begin{cases}
+a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
+a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
+a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
+\end{cases}
+$$
+We can rewrite this in matrix form by collecting the coefficients $a_{ij}$ into a matrix $A$ and the constants into a vector $b$ as follows
+
+$$
+ A =
+ \left[ {\begin{array}{ccc}
+ a_{11} & a_{12} & a_{13}\\
+ a_{21} & a_{22} & a_{23}\\
+ a_{31} & a_{32} & a_{33}\\
+ \end{array} } \right]
+$$
+and
+$$
+ b =
+ \left[ {\begin{array}{c}
+ b_{1}\\
+ b_{2}\\
+ b_{3}\\
+ \end{array} } \right]
+$$
+to get,
+$$
+Ax=b
+$$
+
+Matrix Math
+
+Matrix definition
+
+
+## Problem 1
+```python
+import numpy as np
+
+# Coefficient matrix A
+A = np.array([[2, 3], [4, 5]])
+# Right-hand side vector b
+b = np.array([4, 6])
+
+# Solve the system of equations
+x = np.linalg.solve(A, b)
+print(x)
+```
+## Problem 2
+
+
+
+
+
# Systems of Equations
+## Working with Systems of Equations
+Matrix Determinants
+Cramer's Rule -
+Elimination
+
+### Forward Elimination
+### Back Substitution
+### Naive Gauss Elimination
+### Gauss Elimination
+
+### LU Decomposition
+
+## Problem 1
+
+## Problem 2
+
+# LU Decomposition
-## Naive Gauss Elimination
+##
-## Gauss Elimination
+## Problem 1
-## Forward Elimination
+## Problem 2
+Modeling of dynamic systems
-## Back Substitution
-## LU Decomposition
diff --git a/tutorials/module_3/4_numerical_integration.md b/tutorials/module_3/4_numerical_integration.md
index c486825..4c7876c 100644
--- a/tutorials/module_3/4_numerical_integration.md
+++ b/tutorials/module_3/4_numerical_integration.md
@@ -1,15 +1,11 @@
## Midpoint Method
-
## Trapezoidal Method
-
## Romberg Integration
-
## Gaussian Integration
-
## Simpson's Rule
### Simpsons 1/3
@@ -21,9 +17,7 @@
# Numerical Integration
## Why Numerical?
-Integration is one of the fundamental tools in engineering analysis. Mechanical engineers frequently encounter integrals when computing work from force–displacement data, determining heat transfer from a time-dependent signal, or calculating lift and drag forces from pressure distributions over an airfoil. While some integrals can be evaluated analytically, most practical problems involve functions that are either too complex or are available only as experimental data. As engineering we choose numerical integration—also known as quadrature—provides a systematic approach to approximate the integral of a function over a finite interval.
-
-In this tutorial, we will study several standard methods of numerical integration, compare their accuracy, and implement them in Python. By the end, you will understand not only how to apply each method, but also when one method may be more suitable than another.
+Integration is one of the fundamental tools in engineering analysis. Mechanical engineers frequently encounter integrals when computing work from force–displacement data, determining heat transfer from a time-dependent signal, or calculating lift and drag forces from pressure distributions over an airfoil. While some integrals can be evaluated analytically, most practical problems involve functions that are either too complex or are available only as experimental data. In engineering, numerical integration provides a systematic approach to approximate the integral of a function over a finite interval.
---
@@ -39,8 +33,8 @@ $$
Here, $x_i$ are the chosen evaluation points and $w_i$ are their associated weights.
### Midpoint Rule
-The midpoint rule divides the interval into $n$ subintervals of equal width $h = (b-a)/n$ and
-evaluates the function at the midpoint of each subinterval:
+The midpoint rule divides the interval into $n$ sub-intervals of equal width $h = (b-a)/n$ and
+evaluates the function at the midpoint of each sub-interval:
$$
I \approx \sum_{i=0}^{n-1} h \, f\!\left(x_i + \tfrac{h}{2}\right).
$$
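As a sketch, the formula above translates almost directly into NumPy (function name and test integral are illustrative):

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Composite midpoint rule with n equal sub-intervals of width h."""
    h = (b - a) / n
    # Midpoints x_i + h/2 for i = 0, ..., n-1
    midpoints = a + h / 2 + h * np.arange(n)
    return h * np.sum(f(midpoints))

# Sanity check against a known integral: the integral of x^2 over [0, 1] is 1/3
approx = midpoint_rule(lambda x: x**2, 0.0, 1.0, 100)
print(approx)
```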
@@ -57,7 +51,7 @@ its accuracy is of order $O(h^2)$.
Simpson’s rules use polynomial interpolation to achieve higher accuracy.
- **Simpson’s 1/3 Rule (order $O(h^4)$)**
- Requires an even number of subintervals $n$:
+ Requires an even number of sub-intervals $n$:
$$
I \approx \frac{h}{3}\Big[f(x_0) + 4\sum_{\text{odd } i} f(x_i) +
2\sum_{\text{even } i<n} f(x_i) + f(x_n)\Big].
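A minimal composite Simpson's 1/3 sketch of the formula above (function name is illustrative; `n` must be even, as stated):

```python
import numpy as np

def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Endpoints, plus 4x the odd-index points and 2x the interior even-index points
    return h / 3 * (y[0] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]) + y[-1])

# Simpson's 1/3 is exact for cubics: the integral of x^3 over [0, 2] is 4
print(simpson_13(lambda x: x**3, 0.0, 2.0, 4))
```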
@@ -79,7 +73,31 @@ polynomials) to maximize accuracy. With $n$ evaluation points, Gaussian quadratu
---
-## Exercise — Implementing the Composite Trapezoidal Rule
+## Taking this further
+
+- **Romberg Integration:** Demonstrates how extrapolation accelerates convergence of trapezoidal approximations.
+- **Gaussian Quadrature:** Introduces optimal integration points and highlights efficiency for
+polynomial and smooth functions.
+- **Simpson’s Rules:** Show how higher-order polynomial interpolation improves accuracy.
+
+These methods will be implemented and compared in subsequent assignments to build a deeper understanding of numerical integration accuracy and efficiency.
+
+---
+
+
+## More Integration Methods
+
+### Newton-Cotes Algorithms for Equations
+### Adaptive Quadrature
+
+
+
+
+# Problems
+
+## Numerical Integration to Compute Work
+
+## Implementing the Composite Trapezoidal Rule
**Objective:** Implement a Python function to approximate integrals using the trapezoidal rule.
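One possible sketch of the requested function (the name `trapezoid` and the test integrand are assumptions, not part of the exercise statement):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal sub-intervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Endpoints get weight h/2, interior points weight h
    return h * (y[0] / 2 + np.sum(y[1:-1]) + y[-1] / 2)

# Error should shrink roughly 4x each time n doubles, consistent with O(h^2)
for n in [4, 8, 16, 32]:
    print(n, trapezoid(np.sin, 0.0, np.pi, n))
```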
@@ -101,28 +119,13 @@ for n in [4, 8, 16, 32]:
Students should compare results for increasing $n$ and observe how the error decreases as $O(h^2)$.
----
-
-## Taking this further
-
-- **Romberg Integration:** Demonstrates how extrapolation accelerates convergence of trapezoidal approximations.
-- **Gaussian Quadrature:** Introduces optimal integration points and highlights efficiency for
-polynomial and smooth functions.
-- **Simpson’s Rules:** Show how higher-order polynomial interpolation improves accuracy.
-
-These methods will be implemented and compared in subsequent assignments to build a deeper understanding of numerical integration accuracy and efficiency.
-
----
-
-## Assignment 1 — Gaussian Quadrature
+## Gaussian Quadrature
-Implement a Python function for two-point and three-point Gauss–Legendre quadrature over an
-arbitrary interval $[a,b]$. Verify exactness for polynomials up to the appropriate degree and
-compare performance against the trapezoidal rule on oscillatory test functions.
+Implement a Python function for two-point and three-point Gauss–Legendre quadrature over an arbitrary interval $[a,b]$. Verify exactness for polynomials up to the appropriate degree and compare performance against the trapezoidal rule on oscillatory test functions.
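A minimal two-point sketch to start from (the nodes $\pm 1/\sqrt{3}$ with unit weights are the standard two-point Gauss–Legendre values on $[-1,1]$; the function name is illustrative):

```python
import numpy as np

def gauss_legendre_2pt(f, a, b):
    """Two-point Gauss-Legendre quadrature over [a, b]."""
    # Standard nodes and weights on the reference interval [-1, 1]
    nodes = np.array([-1 / np.sqrt(3), 1 / np.sqrt(3)])
    weights = np.array([1.0, 1.0])
    # Affine map from [-1, 1] to [a, b]; the Jacobian is (b - a)/2
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Two points integrate polynomials up to degree 3 exactly:
# the integral of x^3 over [0, 1] is 1/4
print(gauss_legendre_2pt(lambda x: x**3, 0.0, 1.0))
```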
---
-## Assignment 2 — Simpson’s 1/3 Rule
+## Simpson’s 1/3 Rule
Implement the composite Simpson’s 1/3 rule. Test its accuracy on smooth functions and compare its performance to the trapezoidal rule and Gaussian quadrature. Document error trends and discuss cases where Simpson’s method is preferable.
diff --git a/tutorials/module_3/5_ode.md b/tutorials/module_3/5_ode.md
index 23b647c..98e68d8 100644
--- a/tutorials/module_3/5_ode.md
+++ b/tutorials/module_3/5_ode.md
@@ -8,3 +8,10 @@
## Runge-Kutta
+
+
+# Explicit ODE
+
+
+
+# Implicit ODE
\ No newline at end of file