# Bracketing Method
## Incremental Search
The incremental search method is the simplest of the bracketing methods. The basic idea is to start from an initial point and move forward in small increments along the function. At each step, the sign of the function value is checked. If a change of sign occurs between two consecutive points, then a root must lie between them (by the **Intermediate Value Theorem**). This gives us a bracket, an interval that contains the root, which can then be refined using more powerful methods such as bisection or false position.

Although incremental search is not efficient for accurately finding roots, it is useful for _detecting the presence of roots_ and identifying approximate intervals where they exist. Once such an interval is located, other methods can be applied to converge quickly to the actual root. In this sense, incremental search often serves as a **pre-processing step** for bracketing or open methods, especially when the root's location is not known in advance.
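To make this concrete, below is a minimal sketch of an incremental search in Python. The function `incremental_search` and its arguments are illustrative (not part of any library); it simply steps through a user-supplied interval and records every sub-interval where the sign of $f$ changes.
```python
import numpy as np

def incremental_search(f, x_start, x_end, dx):
    # Step through [x_start, x_end] in increments of dx and collect
    # every sub-interval whose endpoints give opposite signs of f.
    brackets = []
    x_l = x_start
    while x_l < x_end:
        x_u = min(x_l + dx, x_end)
        if np.sign(f(x_l)) != np.sign(f(x_u)):
            # Sign change: a root lies somewhere in [x_l, x_u]
            brackets.append((x_l, x_u))
        x_l = x_u
    return brackets

# Example: f(x) = x^2 - 2 changes sign once between 0 and 2
print(incremental_search(lambda x: x**2 - 2, 0, 2, 0.1))
```
Note that the increment size matters: if `dx` is too large, two closely spaced roots can fall inside a single increment, produce no sign change, and go undetected.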
## Bisection
This method starts by sectioning the data set into two halves. We can do this with a phone book by simply opening it in the middle of the book. In our data set, we find the middle entry to be 'M - Maxwell'. Based on the premise that all names are sorted alphabetically, we can conclude that our target lies in the second half, essentially eliminating the first half. Doing this a second time gives us the new midpoint 'S - Schrödinger', and a third time gets us to our target 'W - Watt'. In only 3 iterations we have reached our target: a huge improvement over the 22 iterations of the incremental search.

This exact same concept can be applied to finding roots of functions. Instead of searching for strings, we are searching for floats. For computers this is a lot easier to do. Take a moment to think about how you would write pseudo-code to program this.

> The **Intermediate Value Theorem** says that if $f(x)$ is continuous between $a$ and $b$, and $\mathrm{sign}(f(a)) \neq \mathrm{sign}(f(b))$, then there must be a $c$, with $a < c < b$, such that $f(c) = 0$.

Let's consider a continuous function $f(x)$ with an unknown root $x_r$. Using the intermediate value theorem, we can bracket the root with the points $x_{lower}$ and $x_{upper}$ such that $f(x_{lower})$ and $f(x_{upper})$ have opposite signs. This idea is visualized in the graph below.
<img
  style="display: block;
         margin-left: auto;
         margin-right: auto;
         width: 50%;"
  src="https://pythonnumericalmethods.studentorg.berkeley.edu/_images/19.03.01-Intermediate-value-theorem.png"
  alt="Intermediate Value Theorem">
Once we bisect the interval, we set the new predicted root $x_r$ to be its midpoint. We then check each half for a sign change between its bounds. Once the half containing the sign change has been identified, we repeat the process on that half until we are close enough to the root.
<img
  style="display: block;
         margin-left: auto;
         margin-right: auto;
         width: 50%;"
  src="bisection.png"
  alt="Bisection">
As the figure shows, the predicted root $x_r$ gets closer to the actual root with each iteration. In theory this process could go on forever; in practice, finite computer precision introduces error into the result. A work-around for this is to set a tolerance on the accuracy: as engineers, it is our duty to determine what the allowable deviation is.

So let's take a look at how we can write this in Python.
```python
import numpy as np

def my_bisection(f, x_l, x_u, tol):
    # Approximates a root x_r of f bounded by x_l and x_u
    # to within tolerance | f(x_r) | < tol, with x_r the
    # midpoint between x_l and x_u. Recursive implementation.

    # check if x_l and x_u bound a root
    if np.sign(f(x_l)) == np.sign(f(x_u)):
        raise Exception(
            "The scalars x_l and x_u do not bound a root")

    # get midpoint
    x_r = (x_l + x_u)/2

    if np.abs(f(x_r)) < tol:
        # stopping condition, report x_r as root
        return x_r
    elif np.sign(f(x_l)) == np.sign(f(x_r)):
        # the root lies in [x_r, x_u].
        # Make recursive call with x_l = x_r
        return my_bisection(f, x_r, x_u, tol)
    elif np.sign(f(x_u)) == np.sign(f(x_r)):
        # the root lies in [x_l, x_r].
        # Make recursive call with x_u = x_r
        return my_bisection(f, x_l, x_r, tol)
```

### Problem 1
Call the *my_bisection* function to find the root of $f(x) = x^2 - 2$ bracketed by $[0, 2]$, first with a tolerance of 0.1 and then with a tolerance of 0.01.
```python
f = lambda x: x**2 - 2

r1 = my_bisection(f, 0, 2, 0.1)
print("r1 =", r1)
r01 = my_bisection(f, 0, 2, 0.01)
print("r01 =", r01)

print("f(r1) =", f(r1))
print("f(r01) =", f(r01))
```


### Problem 2
Write an algorithm that uses a combination of bisection and incremental search to find the multiple roots in the first 8 minutes of the data set.
```python
import pandas as pd
import matplotlib.pyplot as plt

# Read the CSV file into a DataFrame called data
data = pd.read_csv('your_file.csv')

# Plot all numeric columns
data.plot()

# Add labels and show plot
plt.xlabel("Time (min)")
plt.ylabel("Strain (kPa)")
plt.title("Data with multiple roots")
plt.legend()
plt.show()
```
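As a starting point (not the only valid approach), here is one possible sketch that combines the two methods: an incremental search over consecutive samples to bracket every sign change, followed by `my_bisection` on a linearly interpolated version of the data. The column names `'time'` and `'strain'` are assumptions about the CSV and should be adjusted to match the actual file.
```python
import numpy as np

# Assumed column names; adjust to the actual CSV headers
t = data['time'].to_numpy()
y = data['strain'].to_numpy()

# Keep only the first 8 minutes of the record
mask = t <= 8
t, y = t[mask], y[mask]

# Treat the sampled data as a continuous function via linear interpolation
g = lambda x: np.interp(x, t, y)

# Incremental search over consecutive samples, then bisection on each bracket
roots = []
for i in range(len(t) - 1):
    if np.sign(y[i]) != np.sign(y[i + 1]):
        roots.append(my_bisection(g, t[i], t[i + 1], 1e-6))

print("Estimated roots (min):", roots)
```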
## False Position