| author | Christian Kolset <christian.kolset@gmail.com> | 2025-09-11 15:00:31 -0600 |
|---|---|---|
| committer | Christian Kolset <christian.kolset@gmail.com> | 2025-09-11 15:00:31 -0600 |
| commit | 0c79ba277ff4afbb36613dfd366481b5e28cdf4e (patch) | |
| tree | ce8236c81308fcd3b7807ed2b163a9b42813c1e8 /tutorials/module_3 | |
| parent | 985812d89a71504de3af779a5332ad817394fdea (diff) | |
Finished root optimization text
Diffstat (limited to 'tutorials/module_3')
| -rw-r--r-- | tutorials/module_3/1_numerical_differentiation.md | 49 |
| -rw-r--r-- | tutorials/module_3/2_roots_optimization.md | 20 |
2 files changed, 38 insertions, 31 deletions
diff --git a/tutorials/module_3/1_numerical_differentiation.md b/tutorials/module_3/1_numerical_differentiation.md
index b34b315..aa45c3a 100644
--- a/tutorials/module_3/1_numerical_differentiation.md
+++ b/tutorials/module_3/1_numerical_differentiation.md
@@ -1,29 +1,35 @@
 # Numerical Differentiation
-Finding a derivative of tabular data can be done using a finite difference. Here we essentially pick two points on a function or a set of data points and calculate the slope from there. Let's imagine a domain $x$ as a vector such that $\vec{x}$ = $\pmatrix{x_0, x_1, x_2, ...}$. Then we can use the following methods to approximate derivatives
+Finding a derivative of tabular data can be done using a finite difference: we pick two points on a function or a set of data points and calculate the slope between them. You may have done this before in spreadsheets; here we will do it in Python. Let's imagine a time domain $t$ as a vector such that $\vec{t} = \pmatrix{t_0, t_1, t_2, ...}$ and a displacement $s$ that is a function of time. We can represent the domain and range as two Python arrays `t` and `s` respectively.
+
+```python
+import numpy as np
+
+# Initiate time domain and displacement values
+t = np.linspace(0, 2, 100)
+s = 34 * np.exp(3 * t)
+```
+
+We can then use the following methods to approximate the derivative.
 ## Forward Difference
-Uses the point at which we want to find the derivative and a point forwards on the line.
+Uses the point at which we want to find the derivative and the next point forwards in the array.
 $$
 f'(x_i) = \frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}
 $$
-*Hint: Consider what happens at the last point.*
+*Note: If we apply this to an array, consider what happens at the last point.*
 ```python
-import numpy as np
-import matplotlib.pyplot as plt
-
-# Initiate vectors
-x = np.linspace(0, 2, 100)
-y = 34 * np.exp(3 * x)
-
-dydx = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
+# Forward difference using array slicing
+dsdt = (s[1:] - s[:-1]) / (t[1:] - t[:-1])
+
+import matplotlib.pyplot as plt
 
-# Plot the function
-plt.plot(x, y, label=r'$y(x)$')
-plt.plot(x, dydx, label=b'$/frac{dy}{dx}$')
-plt.xlabel('x')
-plt.ylabel('y')
-plt.title('Plot of $34e^{3x}$')
+# Plot the function and its forward-difference derivative
+plt.plot(t, s, label=r'$s(t)$')
+plt.plot(t[:-1], dsdt, label=r'$\frac{ds}{dt}$')
+plt.xlabel('Time (t)')
+plt.ylabel('Displacement (s)')
+plt.title('Plot of $34e^{3t}$')
 plt.grid(True)
 plt.legend()
 plt.show()
@@ -31,25 +37,18 @@ plt.show()
 ## Backwards Difference
-Uses the point at which we want to find
+Uses the point at which we want to find the derivative and the previous point in the array.
 $$
 f'(x_i) = \frac{f(x_{i})-f(x_{i-1})}{x_i - x_{i-1}}
 $$
 ```python
-import numpy as np
-import matplotlib.pyplot as plt
-
-# Initiate vectors
-x = np.linspace(0, 2, 100)
-y = 34 * np.exp(3 * x)
-
-dydx = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
+# Backward difference: same slicing, but each estimate belongs to t[1:]
+dsdt = (s[1:] - s[:-1]) / (t[1:] - t[:-1])
 
 # Plot the function
-plt.plot(x, y, label=r'$y(x)$')
-plt.plot(x, dydx, label=b'$/frac{dy}{dx}$')
-plt.xlabel('x')
-plt.ylabel('y')
-plt.title('Plot of $34e^{3x}$')
+plt.plot(t, s, label=r'$s(t)$')
+plt.plot(t[1:], dsdt, label=r'$\frac{ds}{dt}$')
+plt.xlabel('Time (t)')
+plt.ylabel('Displacement (s)')
+plt.title('Plot of $34e^{3t}$')
diff --git a/tutorials/module_3/2_roots_optimization.md b/tutorials/module_3/2_roots_optimization.md
index a97e7d2..8083260 100644
--- a/tutorials/module_3/2_roots_optimization.md
+++ b/tutorials/module_3/2_roots_optimization.md
@@ -164,7 +164,7 @@ x_{1} = x_0 - \frac{f(x_0)}{f'(x_0)}
 $$
 Since $x_0$ is our current guess and $x_1$ is our next guess, we can write these symbolically as $x_i$ and $x_{i+1}$ respectively. This gives us the *Newton-Raphson formula*.
 $$
-\boxed{x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}}
+\boxed{x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}} \tag{3.1}
 $$
 ### Assignment 2
 From experimental data we extrapolated a function $f$. Write a Python function called *newtonraphson* which has the following input parameters
@@ -199,27 +199,35 @@ def newtonraphson(f, df, x0, tol):
 ## Modified Secant
-A possible issue with the Newton-Raphson method is that we are required to know the derivative of the function we are finding the root for. Sometimes this may be extremely difficult or impossible to find analytically. Thus we can use the modified secant method.
+A possible issue with the Newton-Raphson method is that we are required to know the derivative of the function whose root we are finding. Sometimes this may be extremely difficult or impossible to find analytically. Thus we can use the modified secant method, a numerical variant of Newton-Raphson. This method uses a finite-difference approximation of the derivative to guess where the root lies, and we iterate this guessing process until we are within tolerance.
 
 $$
 f'(x_i) \simeq \frac{f(x_{i-1})-f(x_i)}{x_{i-1}-x_i}
 $$
 
+Substituting this into equation (3.1) gives the equation for the secant method:
+
 $$
-x_{x+1} = x_i - \frac{f(x_i)}{f'(x_i)}
+x_{i+1} = x_i - \frac{f(x_i)(x_{i-1}-x_i)}{f(x_{i-1})-f(x_i)} \tag{3.2}
 $$
+
+We can then *modify* the equation: instead of choosing two arbitrary points, we perturb the independent variable by a small fraction $\delta$ of itself, so that $x_{i-1} = x_i + \delta x_i$.
 $$
 x_{i+1} = x_i - \frac{f(x_i)(x_{i-1}-x_i)}{f(x_i+\delta x_i)-f(x_i)}
 $$
 $$
-x_{x+1} = x_i - \frac{f(x_i)\delta x_i}{f(x_i+\delta x)-f(x_i)}
-$$
+\boxed{x_{i+1} = x_i - \frac{f(x_i)\,\delta x_i}{f(x_i+\delta x_i)-f(x_i)}} \tag{3.3}
+$$
+Note that if $\delta$ is too small the method can fail by round-off error due to the subtractive cancellation in the denominator of eqn. 3.3. If $\delta$ is too large, the method becomes inefficient or even divergent.
 ## Issues with open methods
-Open methods may also have it's problems. Let's consider the following function: $f(x)=3.2*tan^{-1}(x-4.3)$ if we were to apply the Newton-Raphson to find the root of this function we can see that the results diverge. This happens when
+Open methods may also have their own problems. Let's consider the function $f(x)=3.2\tan^{-1}(x-4.3)$. If we apply the Newton-Raphson method to find the root of this function, we can see that the results diverge.
+
+{Divergence Demo}
+
 # Pipe Friction Example
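As a companion to the patched Modified Secant section, the boxed update in eqn. 3.3 can be sketched as a short Python function. This is a minimal sketch, not part of the commit: the name `modified_secant`, the default `delta`, and the example function $x^2 - 612$ are illustrative assumptions.

```python
def modified_secant(f, x0, delta=1e-6, tol=1e-10, max_iter=50):
    """Iterate eqn. 3.3: x_{i+1} = x_i - f(x_i)*delta*x_i / (f(x_i + delta*x_i) - f(x_i))."""
    x = x0
    for _ in range(max_iter):
        # The perturbation is a small fraction delta of x itself; guard against x == 0
        dx = delta * x if x != 0 else delta
        denom = f(x + dx) - f(x)
        if denom == 0:
            break  # flat region: subtractive cancellation has wiped out the slope
        x_new = x - f(x) * dx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of x^2 - 612, i.e. sqrt(612) ~ 24.74
root = modified_secant(lambda x: x ** 2 - 612, x0=10.0)
```

The `denom == 0` guard reflects the note after eqn. 3.3: for too-small `delta` the two function values in the denominator become indistinguishable in floating point.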
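The divergence described in the "Issues with open methods" section can be reproduced numerically. A minimal sketch, assuming a starting guess of $x_0 = 8.0$ (the function's root is at $x = 4.3$):

```python
import numpy as np

def f(x):
    return 3.2 * np.arctan(x - 4.3)

def df(x):
    # Analytical derivative of 3.2 * arctan(x - 4.3)
    return 3.2 / (1 + (x - 4.3) ** 2)

# Newton-Raphson iterations starting far from the root at x = 4.3
x = 8.0
history = [x]
for _ in range(5):
    x = x - f(x) / df(x)
    history.append(x)

# Each step overshoots further: |x_i - 4.3| grows instead of shrinking
```

Because $\tan^{-1}$ flattens far from its root, the tangent line there is nearly horizontal and crosses the axis far beyond the root, so each iterate overshoots more than the last unless the initial guess is already close.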
