# Open Methods

So far we have looked at methods that require us to bracket a root before finding it. However, let's think outside the box and ask whether we can find a root with only one guess. Can we improve our guess if we have some knowledge of the function itself? The answer: derivatives.

## Newton-Raphson

Possibly the most widely used root-finding method implemented in algorithms is the Newton-Raphson method. Using the initial guess, we draw a tangent line at that point on the curve. Where the tangent intercepts the x-axis is usually an improved estimate of the root. It can be more intuitive to see this represented graphically.

{Newton-Raphson illustration}

Given the function $f(x)$ we make our first guess at $x_0$. Using our knowledge of the function at that point we can calculate the derivative to obtain the tangent. Rearranging the equation of the tangent to solve for $x$ when $f(x)=0$ gives us our first improved guess $x_1$, as seen in the figure above. The derivative of the function is equal to the slope:

$$
f'(x_0) = \frac{f(x_0)-0}{x_0-x_1}
$$

Rearranging for our new guess we get:

$$
x_{1} = x_0 - \frac{f(x_0)}{f'(x_0)}
$$

Since $x_0$ is our current guess and $x_1$ is our next guess, we can write these symbolically as $x_i$ and $x_{i+1}$ respectively. This gives us the *Newton-Raphson formula*:

$$
\boxed{x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}} \tag{3.1}
$$

### Interactive Plot

Test the function $f(x)=\tan^{-1}(x)$, or $f(x)=\operatorname{atand}(x)$ in degrees:

[Interactive graphical render of the Newton-Raphson method](https://www.geogebra.org/m/DGFGBJyU)

### Problem 1

From experimental data we extrapolated a function $f$.
Write a Python function called *newtonraphson* which has the following input parameters:

- `f` - the function
- `df` - the first derivative of the function
- `x0` - the initial guess
- `tol` - the error tolerance

and that finds the root of the function $f(x)=e^{\left(x-6.471\right)}\cdot1.257-0.2$ using an initial guess of 8.

Using recursion:

```python
import numpy as np

def newtonraphson(f, df, x0, tol):
    # Output of this function is the estimated root of f
    # using the Newton-Raphson method (recursive form)
    if abs(f(x0)) < tol:
        return x0
    else:
        # apply eqn. 3.1 and recurse on the improved guess
        return newtonraphson(f, df, x0 - f(x0)/df(x0), tol)

f = lambda x: np.exp(x - 6.471)*1.257 - 0.2
df = lambda x: np.exp(x - 6.471)*1.257
root = newtonraphson(f, df, 8, 1e-8)
```

Using iteration:

```python
import numpy as np

def newton_raphson(f, f_prime, x0, tol=1e-8):
    """
    Newton-Raphson root finding method.

    Parameters:
        f       : function, the equation f(x) = 0
        f_prime : function, derivative of f(x)
        x0      : float, initial guess
        tol     : float, stopping tolerance

    Returns:
        sol        : float, approximate root
        total_iter : int, number of stored iterates
        history    : list, values of x at each iteration
    """
    x = x0
    history = [x]
    while True:
        x_new = x - f(x) / f_prime(x)
        history.append(x_new)
        if abs(x_new - x) < tol:
            return x_new, len(history), history
        x = x_new

# Example: solve x^2 - 2 = 0
f = lambda x: x**2 - 2
f_prime = lambda x: 2*x

sol, total_iter, history = newton_raphson(f, f_prime, x0=1.4)

print("Solution =", sol)
print("No. iterations:", total_iter)
print("Iteration history:", history)
print("Actual sqrt(2) =", np.sqrt(2))
```

## Modified Secant

A possible issue with the Newton-Raphson method is that we are required to know the derivative of the function whose root we are finding. Sometimes this may be extremely difficult or impossible to find analytically. Thus we can use the modified secant method, a numerical variant of Newton-Raphson. This method uses a finite-difference approximation of the derivative to guess where the root lies. We can then iterate this guessing process until we are within tolerance.
$$
f'(x_i) \simeq \frac{f(x_{i-1})-f(x_i)}{x_{i-1}-x_i}
$$

Substituting this into equation (3.1) we get the equation for the secant method:

$$
x_{i+1} = x_i - \frac{f(x_i)\,(x_{i-1}-x_i)}{f(x_{i-1})-f(x_i)} \tag{3.2}
$$

We can then *modify* the equation so that, instead of choosing two arbitrary points, we perturb the independent variable by a small amount $\delta$ to get a better reading:

$$
x_{i+1} = x_i - \frac{f(x_i)\big((x_i+\delta x_i)-x_i\big)}{f(x_i+\delta x_i)-f(x_i)}
$$

$$
\boxed{x_{i+1} = x_i - \frac{f(x_i)\,\delta x_i}{f(x_i+\delta x_i)-f(x_i)}} \tag{3.3}
$$

Note that if $\delta$ is too small the method can fail by round-off error due to the subtractive cancellation in the denominator of eqn. 3.3. If $\delta$ is too large, the method becomes inefficient or even divergent.

## Issues with open methods

Open methods may also have their problems. Let's consider the following function: $f(x)=3.2\tan^{-1}(x-4.3)$. If we apply Newton-Raphson to find the root of this function, we can see that the results diverge.

{Divergence Demo}

## Problem 2: Pipe friction

The flow of fluids through pipes is a common problem in engineering. In mechanical and aerospace fields, it often arises in applications such as liquid and gas transport for cooling systems. The resistance to this flow is described by a dimensionless parameter known as the friction factor. For turbulent flow, the Colebrook equation provides a method to determine this factor:

$$
0=\frac{1}{\sqrt{f}}+2\log_{10}\!\left(\frac{\epsilon}{3.7D}+\frac{2.51}{Re\sqrt{f}}\right) \tag{3.4}
$$

where

$$
Re=\frac{\rho VD}{\mu}
$$

Suppose you're working on a carburetor, which mixes air and fuel before combustion. You are tasked with finding the friction factor of the pipes entering the carburetor. A lab technician measures and provides the following data for the tube:

$D=0.005\ m$

$\epsilon=0.0015\ mm$

$V=40\ m/s$

We know we are working with air and therefore:

$\rho=1.23\ \frac{kg}{m^3}$

$\mu=1.79\times10^{-5}\ Ns/m^2$

We now define a function $g(f)$ equal to the right-hand side of the equation above.
The root of this function will give us the friction factor of this tube.

a) Plot the function $g(f)$ to select a good initial guess.

b) Determine the root using the Newton-Raphson method.

```python
import numpy as np
import matplotlib.pyplot as plt

# Given parameters
rho = 1.23          # kg/m^3
mu = 1.79e-5        # N·s/m^2
D = 0.005           # m
V = 40              # m/s
e = 0.0015 / 1000   # convert mm to m

# Reynolds number
Re = rho * V * D / mu

# Define g(f) function (Colebrook residual)
g = lambda f: 1/np.sqrt(f) + 2.0 * np.log10((e/(3.7*D)) + (2.51/(Re*np.sqrt(f))))

# Range of f values to check
f_values = np.linspace(0.008, 0.08, 500)
g_values = g(f_values)

# Plot
plt.figure(figsize=(8,6))
plt.plot(f_values, g_values)
plt.axhline(0, color='red', linestyle='--')  # root line
plt.grid(True)
plt.xlabel('f')
plt.ylabel('g(f)')
plt.title('Colebrook Residual Function g(f)')
plt.show()
```

```python
# Analytic derivative of g(f) with respect to f
g_prime = lambda f: (-0.5*f**-1.5
                     - (2.51/(Re*np.log(10))) * f**-1.5
                     / (e/(3.7*D) + 2.51/(Re*np.sqrt(f))))

x0 = 0.03  # chosen from the plot, near where g(f) crosses zero
sol, total_iter, history = newton_raphson(g, g_prime, x0)
print("Friction factor f =", sol)
```

Extra:

[Newton-Raphson in Optimization](https://www.youtube.com/watch?v=W7S94pq5Xuo)
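The modified secant update (eqn. 3.3) is also easy to implement, since it needs no analytic derivative. Below is a minimal sketch; the relative perturbation `delta`, the iteration cap `max_iter`, and the test equation $x^2-2=0$ are illustrative choices, not part of the problems above.

```python
def modified_secant(f, x0, delta=1e-6, tol=1e-8, max_iter=100):
    # Modified secant method (eqn. 3.3): approximate the derivative
    # with a small relative perturbation delta*x of the current guess
    x = x0
    for _ in range(max_iter):
        dx = delta * x if x != 0 else delta
        x_new = x - f(x) * dx / (f(x + dx) - f(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x^2 - 2 = 0 without supplying a derivative
f = lambda x: x**2 - 2
print(modified_secant(f, x0=1.4))
```

Because the perturbation is relative to the current guess, the step size scales with $x_i$, which keeps the finite difference well conditioned across widely different magnitudes of the root.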