The square-wire problem is described by the 2-D Laplace and Poisson equations [**1**].

\begin{equation} \dfrac{\partial^2 U \left( x, y \right)}{\partial x^2} + \dfrac{\partial^2 U \left( x, y \right)}{\partial y^2} = \left\{ \begin{array}{cr} 0 & \text{Laplace's Equation}\\ -4 \pi \rho \left( \mathbf{x} \right) & \text{Poisson's Equation} \end{array} \right. \end{equation}

Next, to obtain the second-order derivatives, Taylor series expansions of \( U \) are performed with positive and negative steps in both \( x \) and \( y \).

\begin{equation} U \left( x + \Delta x, y \right) = U \left( x, y \right) + \dfrac{\partial U}{\partial x} \Delta x + \dfrac{1}{2} \dfrac{\partial^2 U}{\partial x^2} \left( \Delta x \right)^2 + \cdots, \end{equation}

\begin{equation} U \left( x - \Delta x, y \right) = U \left( x, y \right) - \dfrac{\partial U}{\partial x} \Delta x + \dfrac{1}{2} \dfrac{\partial^2 U}{\partial x^2} \left( \Delta x \right)^2 - \cdots, \end{equation}

\begin{equation} U \left( x, y + \Delta y \right) = U \left( x, y \right) + \dfrac{\partial U}{\partial y} \Delta y + \dfrac{1}{2} \dfrac{\partial^2 U}{\partial y^2} \left( \Delta y \right)^2 + \cdots, \end{equation}

\begin{equation} U \left( x, y - \Delta y \right) = U \left( x, y \right) - \dfrac{\partial U}{\partial y} \Delta y + \dfrac{1}{2} \dfrac{\partial^2 U}{\partial y^2} \left( \Delta y \right)^2 - \cdots, \end{equation}

Now, adding the two equations for \( x \), and likewise for \( y \), the odd-order terms cancel; truncating at \( O \left( \Delta^4 \right) \) gives

\begin{equation} U \left( x + \Delta x, y \right) + U \left( x - \Delta x, y \right) = 2 U \left( x, y \right) + \dfrac{\partial^2 U}{\partial x^2} \left( \Delta x \right)^2 + O \left( \Delta x^4 \right) \end{equation}

\begin{equation} U \left( x, y + \Delta y \right) + U \left( x, y - \Delta y \right) = 2 U \left( x, y \right) + \dfrac{\partial^2 U}{\partial y^2} \left( \Delta y \right)^2 + O \left( \Delta y^4 \right) \end{equation}

Rearranging to isolate the second-derivative terms yields

\begin{equation} \dfrac{\partial^2 U}{\partial x^2} \left( \Delta x \right)^2 = U \left( x + \Delta x, y \right) + U \left( x - \Delta x, y \right) - 2 U \left( x, y \right) \end{equation}

\begin{equation} \dfrac{\partial^2 U}{\partial y^2} \left( \Delta y \right)^2 = U \left( x, y + \Delta y \right) + U \left( x, y - \Delta y \right) - 2 U \left( x, y \right) \end{equation}

Finally, dividing by \( \left( \Delta x \right)^2 \) and \( \left( \Delta y \right)^2 \), respectively, produces

\begin{equation} \dfrac{\partial^2 U}{\partial x^2} = \dfrac{U \left( x + \Delta x, y \right) + U \left( x - \Delta x, y \right) - 2 U \left( x, y \right)}{\Delta x^2} \end{equation}

\begin{equation} \dfrac{\partial^2 U}{\partial y^2} = \dfrac{U \left( x, y + \Delta y \right) + U \left( x, y - \Delta y \right) - 2 U \left( x, y \right)}{\Delta y^2} \end{equation}
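This result can be checked numerically. The following Python sketch (not part of the original Octave assignment; the test function \( U = \sin x \, e^y \) and the step sizes are chosen purely for illustration) confirms that the central-difference error falls off as \( \left( \Delta x \right)^2 \): halving the step should quarter the error.

```python
import math

# Central-difference estimate of d2U/dx2, per the formula above.
def d2u_dx2(U, x, y, dx):
    return (U(x + dx, y) + U(x - dx, y) - 2.0 * U(x, y)) / dx**2

# Test function with a known second derivative:
# U = sin(x) * e^y, so d2U/dx2 = -sin(x) * e^y.
U = lambda x, y: math.sin(x) * math.exp(y)
exact = -math.sin(1.0) * math.exp(0.5)

err_h  = abs(d2u_dx2(U, 1.0, 0.5, 1.0e-2) - exact)
err_h2 = abs(d2u_dx2(U, 1.0, 0.5, 0.5e-2) - exact)
print(err_h / err_h2)  # roughly 4: halving the step quarters the error
```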

Next, substituting the second-order derivatives into the Poisson equation produces

\begin{equation} \begin{array}{l} \dfrac{U \left( x + \Delta x, y \right) + U \left( x - \Delta x, y \right) - 2 U \left( x, y \right)}{\Delta x^2} \\ \hspace{1.15 in} + \dfrac{U \left( x, y + \Delta y \right) + U \left( x, y - \Delta y \right) - 2 U \left( x, y \right)}{\Delta y^2} = -4 \pi \rho \left( x, y \right) \end{array} \end{equation}

If the grid spacing is the same in both the \( x \) and \( y \) directions, that is \( \Delta x = \Delta y = \Delta \), then multiplying through by \( \Delta^2 \) reduces the equation to

\begin{equation} \begin{array}{l} U \left( x + \Delta, y \right) + U \left( x - \Delta, y \right) + U \left( x, y + \Delta \right) \\ \hspace{2 in} + \hspace{1 mm} U \left( x, y - \Delta \right) - 4 U \left( x, y \right) = -4 \pi \rho \left( x, y \right) \Delta^2 \end{array} \end{equation}

Algebraically, the solution \( U \left( x, y \right) \) can be written as
\begin{equation} \begin{array}{l} U \left( x, y \right) \simeq \dfrac{1}{4} \left[ U \left( x + \Delta, y \right) + U \left( x - \Delta, y \right) \right. \\ \hspace{2 in} \left. + \hspace{1 mm} U \left( x, y + \Delta \right) + U \left( x, y - \Delta \right) \right] + \pi \rho \left( x, y \right) \Delta^2 \end{array} \end{equation}

Lattice points can be identified as \( x = x_0 \pm \Delta \) and \( y = y_0 \pm \Delta \), which correspond to the indices \( i \pm 1 \) and \( j \pm 1 \), where \( i, j = 0, 1, \ldots, N_{\max} - 1 \). Thus, the finite-difference algorithm appears as

\begin{equation} U_{i, j} = \dfrac{1}{4} \left( U_{i + 1, j} + U_{i - 1, j} + U_{i, j + 1} + U_{i, j - 1} \right) + \pi \rho_{i, j}\Delta^2 \end{equation}
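As an aside, the update formula can be spot-checked: for a quadratic field \( U = x^2 + y^2 \) the Laplacian is exactly \( 4 \), so in the convention \( \nabla^2 U = -4 \pi \rho \) the source is the constant \( \rho = -1/\pi \), and the four-point average plus source term reproduces \( U \) with no truncation error. A minimal Python sketch, with the sample point and spacing chosen arbitrarily:

```python
import math

# U = x^2 + y^2 gives laplacian(U) = 4, i.e. rho = -1/pi when
# laplacian(U) = -4*pi*rho; the discrete formula is exact for quadratics.
delta = 0.1
U = lambda x, y: x * x + y * y
rho = -1.0 / math.pi

x, y = 0.3, 0.7
rhs = 0.25 * (U(x + delta, y) + U(x - delta, y)
              + U(x, y + delta) + U(x, y - delta)) + math.pi * rho * delta**2
print(abs(rhs - U(x, y)))  # essentially zero (rounding only)
```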

However, for simplicity, Landau et al. ask only that the homogeneous (Laplace) equation be solved:

\begin{equation} U_{i, j} = \dfrac{1}{4} \left( U_{i + 1, j} + U_{i - 1, j} + U_{i, j + 1} + U_{i, j - 1} \right) \end{equation}

A few notes here from Landau et al. For one, a direct solution cannot be obtained from this algorithm; the process must be repeated many times for a solution to *converge*. Thus, this is where efficiency becomes very important in numerical methods. First, a guess is imposed and then swept over the domain, which finds values for all variables at each point, or node. The solution has "*converged*" when the current generation of values does not change compared to the previous generation within a "*tolerance*", an error/precision (the absolute value of the difference between old and new values) specified by the user or technique. Another possible outcome is that the values emerge other than expected, at which point the user must go back and figure out where the error is coming from. Landau et al. define *relaxed* as meaning that the first guess has converged to a solution.

Landau et al. pose: "Does it always converge, and if so, does it converge fast enough to be useful?"
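The sweep-until-tolerance idea can be sketched in a few lines of Python with NumPy (the grid size, tolerance, and boundary value are illustrative choices; this is the basic update, with the whole previous generation kept for the comparison):

```python
import numpy as np

N, tol = 20, 1.0e-6
U = np.zeros((N, N))
U[:, -1] = 99.0  # one edge held at 99 V, as in the square-wire setup

for sweep in range(100000):
    U_old = U.copy()
    # four-point average over interior points, from the previous generation
    U[1:-1, 1:-1] = 0.25 * (U_old[2:, 1:-1] + U_old[:-2, 1:-1]
                            + U_old[1:-1, 2:] + U_old[1:-1, :-2])
    if np.max(np.abs(U - U_old)) < tol:
        break  # "converged": no change beyond the tolerance

print(sweep)  # number of sweeps needed to relax within tol
```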

Another important note concerns the coordinate system used. This problem was done in rectangular Cartesian coordinates; other problems may work better in other coordinate systems, such as cylindrical or spherical, depending on the geometry of the problem, in which case the expansion of the Laplacian \( \nabla^2 \) will be different.

One way to solve the algorithm is to utilize the *Jacobi method*. This most basic method simply evaluates each point while sweeping across the domain, using only values from the previous sweep, so the symmetry of the initialization and boundary conditions (BCs) is preserved. To speed up the method, a simple modification can be implemented: values computed during the current sweep are used immediately as they become available. This roughly halves the needed storage, since the previous sweep no longer has to be kept once its values are superseded. This technique is called the *Gauss-Seidel method*. The GS method leads to quicker convergence, less round-off error, and less needed storage; however, symmetry is not conserved, which the user hopes does not affect the solution outcome.

If the sweep begins at the top-left corner, then the GS algorithm looks like

\begin{equation} U_{i, j} = \dfrac{1}{4} \left[ U^{(old)}_{i + 1, j} + U^{(new)}_{i - 1, j} + U^{(old)}_{i, j + 1} + U^{(new)}_{i, j - 1} \right] \end{equation}
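The speed-up can be seen directly by counting sweeps. A Python/NumPy sketch (grid size and tolerance are arbitrary illustrative choices) that runs the same boundary-value problem with both update rules:

```python
import numpy as np

def sweeps_to_converge(N=20, tol=1.0e-6, gauss_seidel=False, max_sweeps=100000):
    U = np.zeros((N, N))
    U[:, -1] = 99.0  # fixed 99 V boundary on one edge
    for sweep in range(1, max_sweeps + 1):
        U_old = U.copy()
        if gauss_seidel:
            # in-place: new values are used as soon as they are known
            for i in range(1, N - 1):
                for j in range(1, N - 1):
                    U[i, j] = 0.25 * (U[i + 1, j] + U[i - 1, j]
                                      + U[i, j + 1] + U[i, j - 1])
        else:
            # Jacobi: every update comes from the previous generation
            U[1:-1, 1:-1] = 0.25 * (U_old[2:, 1:-1] + U_old[:-2, 1:-1]
                                    + U_old[1:-1, 2:] + U_old[1:-1, :-2])
        if np.max(np.abs(U - U_old)) < tol:
            return sweep
    return max_sweeps

jacobi = sweeps_to_converge(gauss_seidel=False)
gs = sweeps_to_converge(gauss_seidel=True)
print(jacobi, gs)  # Gauss-Seidel converges in fewer sweeps
```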


Octave code:

```octave
% This code was written by Tim Barber for the numerical PDE class
% taught by Dr. Parigger in the Summer of 2011.
% This file corresponds specifically to HW set #1.
%
% Problem 1.3 Square Wire Problem: Chapter 17 of the Landau-course book
% discusses the square-wire problem, viz. elliptic PDE. Section 17.4.2
% elaborates on the solution using the finite difference method. Your task
% here is to (i) derive the finite difference algorithm Eq. 17.27,
% (ii) implement the code using source programming (see the listed
% source code), and (iii) display the results using gnuplot (along the
% lines of Section 17.4.2).

clf
clc
clear

% input data
N_max = 100;
iter  = 1000;

% initialize matrix: 99 V on the j = N_max edge, 0 elsewhere
for i = 1:N_max
  for j = 1:N_max
    if j == N_max
      U(i, j) = 99;
    else
      U(i, j) = 0;
    end
  end
end

% iterate over the interior using the Gauss-Seidel technique:
% overwriting U in place reuses new values as soon as they are known
for n = 1:iter
  for i = 2:N_max - 1
    for j = 2:N_max - 1
      U(i, j) = (1/4)*(U(i + 1, j) + U(i - 1, j) + U(i, j + 1) + U(i, j - 1));
    end
  end
end

% plot of U(x, y)
mesh(U)

% label title and axes
title('HW 1.3', "fontsize", 20)
ylabel('y', "fontsize", 16)
xlabel('x', "fontsize", 16)
zlabel('U(x, y)', "fontsize", 16)

% save plot as a .png file
print("HW_1_3_mesh.png")
```

**References**:

[**1**] R. H. Landau, M. J. Páez, and C. C. Bordeianu. *A Survey of Computational Physics: Introductory Computational Science*, Princeton University Press, Princeton, New Jersey, 2008.
