Diffstat (limited to 'tutorials')
 tutorials/module_3/3_system_of_equations.md   | 499
 tutorials/module_3/4_numerical_integration.md |   5
 tutorials/module_3/6_pde.md                   |  34
3 files changed, 345 insertions, 193 deletions
diff --git a/tutorials/module_3/3_system_of_equations.md b/tutorials/module_3/3_system_of_equations.md
index b30ce78..9f50b97 100644
--- a/tutorials/module_3/3_system_of_equations.md
+++ b/tutorials/module_3/3_system_of_equations.md
@@ -5,9 +5,6 @@ y = m x + b
 $$
 where $a$ and $b$ are two known constants we can solve for $x$ easily.
-## Problem 1
-[]
-
 # Linear Algebra
 Although this isn't a course in linear algebra we are going to use some fundamental concepts from linear algebra to solve systems of equations. If you haven't taken linear algebra before, it is the study of linear equations. These equations can be represented in the form of matrices. Matrices are rectangular arrays of numbers arranged in rows and columns. They are widely used in both engineering and computer science. Let's say we have the following system of equation.
 $$
@@ -65,34 +62,17 @@ Take a close look at equation 1 and 2 and look at their similarity and differenc
 ## Matrix operations in python
 - Matrices of the same size can be added or subtracted element-wise:
-- Multiply a matrix by a scalaryields a new matrix where every element is multiplied by the scalar.
-- The determinant of a matrix is
+- Multiplying a matrix by a scalar yields a new matrix where every element is multiplied by the scalar.
+- The determinant of a matrix is a number associated with a square matrix that summarizes certain properties of the matrix.
 - **Identity Matrix $I$**: Square matrix with 1’s on the diagonal and 0’s elsewhere.
 - **Inverse Matrix $A^{-1}$**: Satisfies $AA^{-1} = I$. (Only square, non-singular matrices have inverses.)
-### Matrix Multiplication
-
-If $A$ is $m \times n$ and $B$ is $n \times p$, their product $C = AB$ is an $m \times p$ matrix:
-
-$$c_{ij} = \sum_{k=1}^n a_{ik} b_{kj}$$
-
-**Example:**
-$$
-\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}
-$$
-
-## 1) Matrix Math in Python
+## Matrix Math in Python
+Let's create two square matrices $A$ and $B$ to perform some matrix operations on:
 ```python
 import numpy as np
-np.set_printoptions(suppress=True) # nicer printing
-```
-
----
+np.set_printoptions(suppress=True)
-
-
-Let's
-```python
 A = np.array([[1, 3],
               [2, 4]]) # 2x2
 B = np.array([[5, 6],
@@ -101,7 +81,11 @@ A.shape, B.shape
 
 b = np.array([3, 5])
 b.shape
+```
+
+Test and see how it works.
+```python
 # Addition and Subtraction of same shapes
 A + B
 A - B
@@ -114,7 +98,6 @@ A - B
 # Matrix Multiplication
 mat_mult = A @ B
-
 # Transpose
 B.T
@@ -130,145 +113,10 @@ A @ A_inv, A_inv @ A
 detA = np.linalg.det(A)
 detB = np.linalg.det(B)
 detA, detB
-
-
-
-```
-
-
-
----
-
-## 7) Determinant
-
-For $2\times2$,
-
-$\det\begin{bmatrix}a&b\\ c&d\end{bmatrix}=ad-bc$.
-
-```python
-detA = np.linalg.det(A)
-detB = np.linalg.det(B)
-detA, detB
 ```
-
-## 8) Solve Linear Systems $Ax=b$
-
-```python
-A = np.array([[3., 2., -1.],
-              [2., -2., 4.],
-              [-1., 0.5, -1.]])
-b = np.array([1., -2., 0.])
-
-x = np.linalg.solve(A, b) # preferred over inv(A)@b
-x, np.allclose(A @ x, b)
-```
-
----
-
-## 9) Eigenvalues & Eigenvectors
-
-Solve $A\mathbf{v}=\lambda \mathbf{v}$ (square $A$).
-
-```python
-A = np.array([[4., 2.],
-              [1., 3.]])
-eigvals, eigvecs = np.linalg.eig(A)
-eigvals, eigvecs # columns of eigvecs are eigenvectors
-```
-
-**Verify one pair $(\lambda, v)$**
-
-```python
-lam, v = eigvals[0], eigvecs[:,0]
-np.allclose(A @ v, lam * v)
-```
-
----
-
-## 10) Norms (size/length measures)
-
-```python
-x = np.array([3., -4., 12.])
-vec_L2 = np.linalg.norm(x, ord=2) # Euclidean
-vec_L1 = np.linalg.norm(x, ord=1)
-vec_Linf= np.linalg.norm(x, ord=np.inf)
-
-M = np.array([[1., 2.],
-              [3., 4.]])
-fro = np.linalg.norm(M, 'fro') # Frobenius (matrix)
-vec_L2, vec_L1, vec_Linf, fro
-```
-
----
-
-## 11) Orthogonality & Projections (quick demo)
-
-```python
-u = np.array([1., 2., -2.])
-v = np.array([2., -1., 0.])
-
-dot = u @ v
-is_orthogonal = np.isclose(dot, 0.0)
-
-# projection of u onto v
-proj_u_on_v = (u @ v) / (v @ v) * v
-dot, is_orthogonal, proj_u_on_v
-```
-
----
-
-## 12) Example Mini-Lab
-
-1. Compute $2A - B$ for
-
-$$A=\begin{bmatrix}1&3\\2&4\end{bmatrix},\quad B=\begin{bmatrix}5&6\\7&8\end{bmatrix}.$$
-
-```python
-A = np.array([[1,3],[2,4]])
-B = np.array([[5,6],[7,8]])
-result = 2*A - B
-result
-```
-
-2. Verify $A(A^{-1})\approx I$ and report the determinant of $A$.
-
-```python
-A_inv = np.linalg.inv(A)
-check = np.allclose(A @ A_inv, np.eye(2))
-detA = np.linalg.det(A)
-check, detA
-```
-
-3. Solve the system $Ax=b$ with
-
-$$A=\begin{bmatrix}4&1\\2&3\end{bmatrix},\quad b=\begin{bmatrix}1\\0\end{bmatrix}.$$
-
-```python
-A = np.array([[4.,1.],
-              [2.,3.]])
-b = np.array([1.,0.])
-x = np.linalg.solve(A,b)
-x, A @ x
-```
-
----
-
-### Tips for Students
-
-- Prefer `np.linalg.solve(A,b)` over `np.linalg.inv(A) @ b` for **speed and numerical stability**.
-- Watch out for **singular** or **ill-conditioned** matrices (determinant near 0).
-- Use `np.allclose` when comparing float results.
----
-
-
 ### Problem 1
 Rewrite the following system of equations from matrix form into normal algebraic equations.
-
 $$
 \left[ {\begin{array}{ccc}
 3 & 5 & 7\\
 2 & 4 & 2\\
@@ -287,8 +135,7 @@ $$
 12\\
 \end{array} } \right]
 $$
-
-## Problem 1
+## Problem 2
 
 ```python
 import numpy as np
@@ -301,54 +148,91 @@ b = np.array([4, 6])
 x = np.linalg.solve(A, b)
 print(x)
 ```
-## Problem 2
+## Problem 3
+Matrix Determinant
+
+Cramer's Rule
-
 # Systems of Linear Equations
 ## Techniques to solve Systems of Equations
-In this section our goals is to learn some algorithms that help solve system of equations in python.
-
+In this section our goal is to learn some algorithms that help solve systems of equations in Python. Let us think about how we solve a system of two linear equations by hand.
+$$
+\begin{cases} 2x + y = 5 \\ 4x - y = 3 \end{cases}
+$$
-Matrix Determinate
+You’ve probably learned to solve a system of equations using methods such as substitution, graphical solutions, or elimination. These approaches work well for small systems, but they can become tedious as the number of equations grows. The question is: *can we find a way to carry out elimination in an algorithmic way?*
-Cramer's Rule
-
+When solving systems of equations by hand, one strategy is to eliminate variables step by step until you are left with a simpler equation that can be solved directly. Forward elimination is the algorithmic version of this same process.
-Now let's try to solve a problem think about how we solve a system of two linear equations by hand.
-[example problem]
+The idea is to start with the first equation and use it to eliminate the first variable from all equations below it. Then we move to the second equation, using it to eliminate the second variable from all equations below it, and so on. This process continues until the coefficient matrix has been transformed into an upper-triangular form (all zeros below the diagonal).
-You've probably learned to use one of the following methods - substitution, graphical or elimination. Is there a way to think about this algorithmically.
+For example, suppose we have a 3×3 system of equations. After forward elimination, it will look like:
+$$
+\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22} & a_{23} & b_2 \\ 0 & 0 & a_{33} & b_3 \end{array}\right]
+$$
-Step 1 - Subtract the two
+Notice how everything below the diagonal has been reduced to zero. At this point, the system is much easier to solve because each row now involves fewer unknowns as you move downward. The last equation contains only one variable, which can be solved directly:
 $$
-\begin{cases}
-a_{11} x_1 + a_{12} x_2 = b_1 \\
-a_{21} x_1 + a_{22} x_2 = b_2
-\end{cases}
+a_{nn} x_n = b_n \quad \Rightarrow \quad x_n = \frac{b_n}{a_{nn}}
 $$
+Once $x_n$ is known, it can be substituted into the equation above it to solve for $x_{n-1}$:
 $$
-a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
-a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2
+a_{(n-1)(n-1)}x_{n-1} + a_{(n-1)n}x_n = b_{n-1} \quad \Rightarrow \quad x_{n-1} = \frac{b_{n-1} - a_{(n-1)n}x_n}{a_{(n-1)(n-1)}}
 $$
-
-
+This process continues row by row, moving upward through the system. In general, the formula for the $m$-th variable is:
+$$
+x_m = \frac{b_m - \sum_{j=m+1}^n a_{mj}x_j}{a_{mm}}
+$$
+This step-by-step procedure is known as **backward substitution**, and together with forward elimination it forms the backbone of systematic solution methods like Gaussian elimination.
 <img
   style="display: block;
          margin-left: auto;
          margin-right: auto;
          width: 60%;"
   src="fw_elim_bw_sub.png"
-  alt="Forward Elimination then Backwards Substitution">
+  alt="Forward Elimination and Backwards Substitution">
+Now that we have an algorithm to follow, let's code it.
+```python
+import numpy as np
+# Create the augmented matrix [A|b]
+A = np.array([[ 2.2,  1.5, -1.3,   8.2],
+              [-3.4, -1.7,  2.8, -11.1],
+              [-2.0,  1.0,  2.4,  -3.5]])
+```
 ### Forward Elimination
+```python
+n = len(A)
+
+# Forward elimination loop
+for i in range(n-1):
+    for j in range(i+1, n):
+        factor = A[j, i] / A[i, i]  # multiplier to eliminate A[j,i]
+        A[j, i:] = A[j, i:] - factor * A[i, i:]
+print("Upper triangular matrix after forward elimination:")
+print(A)
+```
 ### Back Substitution
+```python
+x = np.zeros(n)
+
+for i in range(n-1, -1, -1):
+    x[i] = (A[i, -1] - np.dot(A[i, i+1:n], x[i+1:n])) / A[i, i]
+
+print("Solution:", x)
+```
 ### Naive Gauss Elimination
+We can write a function combining these two steps:
 ```python
 def gaussian_elimination_naive(A, b):
     A = A.astype(float).copy()
@@ -395,19 +279,262 @@ def gaussian_elimination_pp(A, b):
     return x
 ```
-## Problem 1
-### Problem 2
+# LU Decomposition
+Imagine you’re designing the heat shield of a spacecraft. Engineers provide you with a structural model of the heat shield, represented by a coefficient matrix $A$. To evaluate how the design performs under different operating conditions, you need to test many different load cases, each represented by a right-hand side vector $\mathbf{b}$.
+
+Now suppose your model has a 50×50 matrix $A$, and you want to simulate 100 different load cases. If you rely on Gaussian elimination alone, you would need to **repeat the elimination process 100 times** - once for each $\mathbf{b}$. This quickly becomes computationally expensive, especially for larger systems.
+This is where **LU decomposition** comes in. Instead of redoing elimination for every load case, we can factorize the matrix $A$ **just once** into two simpler pieces:
+$$
+A = LU
+$$
+expanding it:
+$$
+\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{bmatrix}
+=
+\begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \\ \end{bmatrix}
+\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix}
+$$
+Once we have this factorization, solving $A\mathbf{x} = \mathbf{b}$ becomes a two-step process:
+1. Solve $L\mathbf{d} = \mathbf{b}$ by **forward substitution**.
+$$
+d_m \;=\; \frac{\,b_m \;-\; \sum_{n=1}^{m-1} l_{mn}\, d_n\,}{l_{mm}}
+$$
+2. Solve $U\mathbf{x} = \mathbf{d}$ by **backward substitution**.
+   $$
+   x_m=\frac{\,d_m-\sum_{n=m+1}^{N} u_{mn}\,x_n\,}{u_{mm}}
+   $$
-# LU Decomposition
+The big advantage is that the factorization is reused. For 100 different load vectors, we only do the expensive decomposition once, and then solve two triangular systems (fast operations) for each new $\mathbf{b}$.
+<img
+  style="display: block;
+         margin-left: auto;
+         margin-right: auto;
+         width: 40%;"
+  src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/bd/Tadeusz_Banachiewicz_%28NAC%29.jpg/1024px-Tadeusz_Banachiewicz_%28NAC%29.jpg"
+  alt="Tadeusz Banachiewicz">
+The factorization was first formalized in 1938 by Tadeusz Banachiewicz, building on Gauss’s elimination ideas.
+**Example:**
+
+<img
+  style="display: block;
+         margin-left: auto;
+         margin-right: auto;
+         width: 75%;"
+  src="lu_decomp.png"
+  alt="LU Decomposition">
 ## Problem 1
-
-## Problem 2
-Modeling of dynamic systems
+```python
+"""
+LU Decomposition Solver (with partial pivoting)
+
+Solves A x = b by computing P, L, U such that P A = L U,
+then performing:
+  1) y from L y = P b  (forward substitution)
+  2) x from U x = y    (back substitution)
+- Works for a single RHS (b shape: (n,)) or multiple RHS (b shape: (n, m))
+- Raises a ValueError if a near-zero pivot is encountered.
+"""
+import numpy as np
+
+def lu_decompose(A, pivot_tol=1e-12):
+    """
+    Compute LU factorization with partial pivoting.
+    Returns P, L, U such that P @ A = L @ U.
+
+    Parameters
+    ----------
+    A : (n, n) ndarray
+    pivot_tol : float
+        If the absolute value of the pivot falls below this, the matrix is
+        treated as singular.
+
+    Returns
+    -------
+    P, L, U : ndarrays of shape (n, n)
+    """
+    A = np.array(A, dtype=float, copy=True)
+    n = A.shape[0]
+    if A.shape[0] != A.shape[1]:
+        raise ValueError("A must be square")
+
+    U = A.copy()
+    L = np.eye(n)
+    P = np.eye(n)
+
+    for k in range(n - 1):
+        # Partial pivoting: find index of largest |U[i,k]| for i >= k
+        pivot_idx = np.argmax(np.abs(U[k:, k])) + k
+        pivot_val = U[pivot_idx, k]
+
+        if abs(pivot_val) < pivot_tol:
+            raise ValueError(f"Numerically singular matrix: pivot ~ 0 at column {k}")
+
+        # Swap rows in U and P (and the part of L constructed so far)
+        if pivot_idx != k:
+            U[[k, pivot_idx], :] = U[[pivot_idx, k], :]
+            P[[k, pivot_idx], :] = P[[pivot_idx, k], :]
+            if k > 0:
+                L[[k, pivot_idx], :k] = L[[pivot_idx, k], :k]
+
+        # Gaussian elimination to build L and zero out below the pivot in U
+        for i in range(k + 1, n):
+            L[i, k] = U[i, k] / U[k, k]
+            U[i, k:] -= L[i, k] * U[k, k:]
+            U[i, k] = 0.0
+
+    # Final pivot check on U[-1, -1]
+    if abs(U[-1, -1]) < pivot_tol:
+        raise ValueError("Numerically singular matrix: pivot ~ 0 at last diagonal")
+
+    return P, L, U
+
+def forward_substitution(L, b):
+    """
+    Solve L y = b for y, where L is unit lower-triangular.
+    b can be (n,) or (n, m).
+    """
+    L = np.asarray(L)
+    b = np.asarray(b, dtype=float)
+    n = L.shape[0]
+
+    if b.ndim == 1:
+        b = b.reshape(n, 1)
+
+    y = np.zeros_like(b, dtype=float)
+    for i in range(n):
+        # L has ones on the diagonal (Doolittle), so we don't divide by L[i,i]
+        y[i] = b[i] - L[i, :i] @ y[:i]
+    return y if y.shape[1] > 1 else y.ravel()
+
+def back_substitution(U, y):
+    """
+    Solve U x = y for x, where U is upper-triangular.
+    y can be (n,) or (n, m).
+    """
+    U = np.asarray(U)
+    y = np.asarray(y, dtype=float)
+    n = U.shape[0]
+
+    if y.ndim == 1:
+        y = y.reshape(n, 1)
+
+    x = np.zeros_like(y, dtype=float)
+    for i in range(n - 1, -1, -1):
+        if abs(U[i, i]) < 1e-15:
+            raise ValueError(f"Zero (or tiny) pivot encountered at U[{i},{i}]")
+        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
+    return x if x.shape[1] > 1 else x.ravel()
+
+def lu_solve(A, b, pivot_tol=1e-12):
+    """
+    Solve A x = b using LU with partial pivoting.
+    Returns x and the factors (P, L, U).
+    """
+    P, L, U = lu_decompose(A, pivot_tol=pivot_tol)
+    Pb = P @ np.asarray(b, dtype=float)
+    y = forward_substitution(L, Pb)
+    x = back_substitution(U, y)
+    return x, (P, L, U)
+
+def main():
+    # Example system:
+    #   3x + 2y -   z =  1
+    #   2x - 2y +  4z = -2
+    #   -x + 0.5y -  z =  0
+    A = np.array([
+        [ 3.0,  2.0, -1.0],
+        [ 2.0, -2.0,  4.0],
+        [-1.0,  0.5, -1.0]
+    ])
+    b = np.array([1.0, -2.0, 0.0])
+
+    x, (P, L, U) = lu_solve(A, b)
+    print("Solution x:\n", x)
+    print("\nP:\n", P)
+    print("\nL:\n", L)
+    print("\nU:\n", U)
+
+    # Residual check
+    r = A @ x - b
+    print("\nResidual A@x - b:\n", r)
+    print("Residual norm:", np.linalg.norm(r, ord=np.inf))
+
+if __name__ == "__main__":
+    main()
+```
+
+## Problem 2
+Ohm's and Kirchhoff's laws have been used to find a set of equations that model a circuit. Using the built-in NumPy solvers, plot the motor currents as a function of supply voltage.
+
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Unknown ordering
+VR1, VR2, VR3, VR4, VR5, I1, I2, IM1, IM2, IM3 = range(10)
+
+def build_system_matrices(R1, R2, R3, R4, R5):
+    A = np.zeros((10,10), float)
+    b_const = np.zeros(10, float)
+
+    # KVL/KCL + Ohm's law equations
+    A[0, VR1] = -1; A[0, VR2] = -1                                   # -VR1 - VR2 = -Vs + 5
+    A[1, VR2] = 1; A[1, VR3] = -1; A[1, VR4] = -1; b_const[1] = -2   # VR2 - VR3 - VR4 = -2
+    A[2, VR4] = 1; A[2, VR5] = -1; b_const[2] = 2                    # VR4 - VR5 = 2
+
+    A[3, VR1] = 1; A[3, I1 ] = -R1    # VR1 = I1 R1
+    A[4, VR2] = 1; A[4, IM1] = -R2    # VR2 = IM1 R2
+    A[5, VR3] = 1; A[5, I2 ] = -R3    # VR3 = I2 R3
+    A[6, VR4] = 1; A[6, IM2] = -R4    # VR4 = IM2 R4
+    A[7, VR5] = 1; A[7, IM3] = -R5    # VR5 = IM3 R5
+
+    A[8, I1 ] = 1; A[8, IM1] = -1; A[8, I2 ] = -1    # I1 = IM1 + I2
+    A[9, I2 ] = 1; A[9, IM2] = -1; A[9, IM3] = -1    # I2 = IM2 + IM3
+    return A, b_const
+
+def b_vector(Vs, b_const):
+    b = b_const.copy()
+    b[0] = -Vs + 5.0
+    return b
+
+# Example component values
+R1, R2, R3, R4, R5 = 10.0, 5.0, 8.0, 6.0, 4.0
+
+A, b_const = build_system_matrices(R1, R2, R3, R4, R5)
+
+# Sanity check: ensure A is non-singular
+condA = np.linalg.cond(A)
+print("cond(A) =", condA)
+
+Vs_vals = np.arange(0.0, 48.0 + 0.1, 0.1)
+
+# Loop over supply voltages with numpy.linalg.solve (LAPACK under the hood)
+IM1_vals = np.empty_like(Vs_vals)
+IM2_vals = np.empty_like(Vs_vals)
+IM3_vals = np.empty_like(Vs_vals)
+
+for k, Vs in enumerate(Vs_vals):
+    b = b_vector(Vs, b_const)
+    x = np.linalg.solve(A, b)   # Uses LU factorization internally
+    IM1_vals[k], IM2_vals[k], IM3_vals[k] = x[IM1], x[IM2], x[IM3]
+
+# Plot
+plt.figure()
+plt.plot(Vs_vals, IM1_vals, label="IM1 (solve)")
+plt.plot(Vs_vals, IM2_vals, label="IM2 (solve)")
+plt.plot(Vs_vals, IM3_vals, label="IM3 (solve)")
+plt.xlabel("Supply Voltage Vs (V)")
+plt.ylabel("Motor Current (A)")
+plt.title(f"Motor Currents vs Supply Voltage (cond(A) ≈ {condA:.1f})")
+plt.legend()
+plt.show()
+```
\ No newline at end of file
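A note on the new Problem 3 above: it is still just two prompts, "Matrix Determinant" and "Cramer's Rule", with no worked code. A minimal sketch of what a solution could look like; it reuses Problem 2's right-hand side `b = [4, 6]`, but the matrix `A` here is an assumption, since the diff elides Problem 2's actual matrix:

```python
import numpy as np

# Assumed 2x2 system; only b = [4, 6] is visible in the diff above
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

detA = np.linalg.det(A)  # must be nonzero for Cramer's rule to apply

# Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
# column i replaced by the right-hand-side vector b
x = np.empty(len(b))
for i in range(len(b)):
    Ai = A.copy()
    Ai[:, i] = b
    x[i] = np.linalg.det(Ai) / detA

print(x)                      # [1.2 1.6] for this assumed A
print(np.linalg.solve(A, b))  # should agree with the result above
```

Cramer's rule is fine for illustrating determinants on 2×2 and 3×3 systems, but it scales poorly with system size, which is exactly the motivation the tutorial gives for moving on to elimination-based methods.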
diff --git a/tutorials/module_3/4_numerical_integration.md b/tutorials/module_3/4_numerical_integration.md
index 78b0328..06cb8b9 100644
--- a/tutorials/module_3/4_numerical_integration.md
+++ b/tutorials/module_3/4_numerical_integration.md
@@ -183,9 +183,6 @@ for n in n_list:
 ```python
-#!/usr/bin/env python3
-# Gauss Quadrature using Gauss-Legendre method (manual 2-point & 3-point)
-
 import numpy as np
 from scipy.integrate import quad  # for reference "exact" integral
@@ -244,6 +241,7 @@ print(f"Error (3-point): {err3:.2e}")
 ## Numerical Integration to Compute Work
+
 ## Implementing the Composite Trapezoidal Rule
 **Objective:** Implement a Python function to approximate integrals using the trapezoidal rule.
@@ -266,6 +264,7 @@ for n in [4, 8, 16, 32]:
 Students should compare results for increasing $n$ and observe how the error decreases with $O(h^2)$.
+---
 ## Gaussian Quadrature
 Write a Python function for two-point and three-point Gauss–Legendre quadrature over an arbitrary interval $[a,b]$. Verify exactness for polynomials up to the appropriate degree and compare performance against the trapezoidal rule on oscillatory test functions.
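The trapezoidal-rule exercise in `4_numerical_integration.md` shows only the driver loop `for n in [4, 8, 16, 32]:`. A minimal sketch of the kind of function students are asked to write; the name `trapezoid` and the test integrand are assumptions, not the tutorial's:

```python
import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals of width h
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

exact = 2.0  # integral of sin(x) on [0, pi]
for n in [4, 8, 16, 32]:
    approx = trapezoid(np.sin, 0.0, np.pi, n)
    print(f"n={n:3d}  approx={approx:.6f}  error={abs(approx - exact):.2e}")
```

Doubling $n$ should cut the error by roughly a factor of four, which is the $O(h^2)$ behaviour the exercise asks students to observe.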
diff --git a/tutorials/module_3/6_pde.md b/tutorials/module_3/6_pde.md
index 332852e..1f11b13 100644
--- a/tutorials/module_3/6_pde.md
+++ b/tutorials/module_3/6_pde.md
@@ -1,8 +1,28 @@
 # Partial Differential Equation
+Partial differential equations arise when two or more partial derivatives are present in an equation. Due to their widespread application in engineering, we will be looking at **second-order equations**, which can be expressed in the following general form:
+$$
+A\frac{\partial^2u}{\partial x^2} +
+B\frac{\partial^2u}{\partial x \partial y} +
+C\frac{\partial^2u}{\partial y^2} +
+D
+= 0
+$$
+where $A$, $B$ and $C$ are functions of both $x$ and $y$, and $D$ is a function of $x$, $y$, $u$, $\partial u / \partial x$, and $\partial u / \partial y$. Similar to our beloved quadratic formula, we can take the discriminant of the equation
+$$
+\Delta = B^2 - 4 A C
+$$
+Based on the discriminant we can categorize the equations into the following three categories:
+
+| $\Delta$ | Category   | Example                  |
+| -------- | ---------- | ------------------------ |
+| $-$      | Elliptic   | Laplace equation         |
+| $0$      | Parabolic  | Heat Conduction equation |
+| $+$      | Hyperbolic | Wave equation            |
-## Finite Difference
+## Finite Difference Methods
 ### Elliptic Equations
 - Used for steady-state, boundary value problems
+- Example fields where these equations are used: steady-state heat conduction, electrostatics, and potential flow
 
 Description of how the Laplace equations works
@@ -11,11 +31,10 @@ $$
 $$
 Finite-difference solutions
-- Laplacian Difference equation
+- Laplacian Difference equations in dimensions $x$ and $y$:
 $$
 \frac{\partial^2T}{\partial x^2}= \frac{T_{i+1,j}-2T_{i,j}+T_{i-1,j}}{\Delta x^2}
 $$
-and
 $$
 \frac{\partial^2T}{\partial y^2}= \frac{T_{i,j+1}-2T_{i,j}+T_{i,j-1}}{\Delta y^2}
 $$
@@ -56,6 +75,13 @@ MacCormack Method
 ## Finite-Element Method
 General Approach
+
+1. Discretization
+2. Element Equations
+3. Assembly
+4. Boundary Conditions
+5. Solution
+6. Postprocessing
 ### One-dimensional analysis
 ### Two-dimensional Analysis
@@ -80,7 +106,7 @@ General Approach
 Problem 32.4 from Numerical Methods for Engineers 7th Edition Steven C. Chapra and Raymond P. Canale
-A series of interconnected strings are connected to a fixed wall where the other is subject to a constant force F. Using the step-by-step procedure from above, dertermine the displacement of the springs.
+A series of interconnected springs is connected to a fixed wall at one end, while the other end is subject to a constant force F. Using the step-by-step procedure from above, determine the displacement of the springs.
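The new elliptic-equation material in `6_pde.md` writes down the Laplacian difference equations but stops before any code. A minimal sketch of how those stencils become a solver, using Jacobi-style iteration on a square plate with Dirichlet boundaries; the grid size, boundary temperatures, and tolerance are all assumptions:

```python
import numpy as np

n = 21                 # grid points per side (assumed)
T = np.zeros((n, n))   # plate initially at 0
T[0, :] = 100.0        # top edge held at 100; other edges stay at 0 (assumed BCs)

# With dx = dy, the two difference equations combine into the stencil
#   T[i,j] = (T[i+1,j] + T[i-1,j] + T[i,j+1] + T[i,j-1]) / 4
for _ in range(5000):
    T_new = T.copy()
    T_new[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] +
                                T[1:-1, 2:] + T[1:-1, :-2])
    if np.max(np.abs(T_new - T)) < 1e-4:  # stop once updates stall
        T = T_new
        break
    T = T_new

print(T[n // 2, n // 2])  # steady-state temperature at the plate centre
```

Each interior point is repeatedly replaced by the average of its four neighbours until the field stops changing, which is the steady-state solution of the Laplace equation on that grid.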
