Friday, October 14, 2011

Numerical Math - Solving systems of equations utilizing matrices (data fitting)

A nice pointer from Landau et al [1] opens their chapter (Ch. 8) on Solving Systems of Equations with Matrices; Data Fitting.  Many physical systems are modeled as systems of simultaneous equations, which are naturally expressed in matrix form.  These systems can become quite large and complicated, which is where computers excel: the matrix-based algorithms for solving them typically require repeating a small set of steps, and those steps can be coded once in an efficient form.

One additional technique for speed is to tune the algorithm to the actual architecture of the computer, which Landau et al [1] discuss further in their Chapter 14, High-Performance Computing Hardware, Tuning, & Parallel Computing.

Many libraries of "industrial-strength" subroutines exist for solving these matrix systems.  Most of them are well established, such as the IMSL Numerical Libraries by Rogue Wave Software, Inc., the GNU Scientific Library (GSL), and the Netlib Repository at UTK and ORNL, which contains LAPACK — the Linear Algebra PACKage.  Landau et al [1] note that these libraries are usually an order of magnitude faster than the general methods found in linear algebra texts.  The libraries are streamlined for minimal round-off error and are designed to solve a large spectrum of problems with high success.  For these reasons, Landau et al warn that it is best not to write your own matrix subroutines but to retrieve them from one of these libraries.  The libraries offer the further advantage that the same code can run on one machine/processor or many, adapting to the computer architecture.
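As a concrete illustration (my own sketch, not an example from the text): NumPy's dense solver is a thin wrapper around LAPACK's dgesv, so the library-grade routine is available in a few lines.

```python
import numpy as np

# Solve A x = b with a LAPACK-backed routine (np.linalg.solve wraps
# dgesv) instead of hand-writing Gaussian elimination.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)
print(x)
print(np.allclose(A @ x, b))  # True: the residual check passes
```

The same few lines scale from this toy 3x3 system to the large systems discussed above, which is exactly the point of deferring to a tuned library.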

Next, Landau et al [1] ask what counts as a "large" matrix.  Formerly, "large" was defined as some fraction of the RAM available to the computer system.  Landau et al now describe a "large" matrix in terms of the time it takes to compute with it: if any noticeable waiting is required, a library should be used.  They also comment that the libraries are beneficial for speed even when the matrices are small (as might apply in graphics processing).

One drawback lies in the languages the libraries are written in: one library may be in Fortran while the user codes in C.  Today, however, libraries exist that are written in, or provide interfaces for, many different programming languages.



References:


[1] R. H. Landau, M. J. Páez, and C. C. Bordeianu. A Survey of Computational Physics - Introductory Computational Science, Princeton University Press, Princeton, New Jersey. 2008

Thursday, October 13, 2011

Math - Fluid Dynamics - Vectors - Functions related to vectors and scalars

In order to locate a position in space as time progresses, one may use the position vector as a function of time.  Since time is a scalar, this relation is an example of a vector as a function of a scalar.

\[ \mathbf{r} = \mathbf{r} \left( t \right) \]

or, in general, if for each value of a scalar variable there exists a vector value, then

\[ \mathbf{A} = \mathbf{A} \left( t \right) \]

This relation can be reversed so that a scalar is a function of a vector.  One example is temperature, which can be described at every point in space, so that

\[ T = T \left( \mathbf{r} \right) \]

or, again, in general, if for each vector value there exists a scalar value,

\[ \phi = \phi \left( \mathbf{r} \right) \]

When \( \mathbf{r} \) is the position vector, the scalar is a function of position.

A third example denotes a vector as a function of a vector.  If one looks at a rigid body rotating at a constant angular velocity, \( \boldsymbol{\omega} \), then the velocity at a point on the body can be described as

\[ \mathbf{V} = \boldsymbol{\omega} \times \mathbf{r} \]

where the position vector, \( \mathbf{r} \), is taken from the axis of rotation.

Thus, the velocity vector, \( \mathbf{V} \), becomes a function of the position vector, \( \mathbf{r} \).

\[ \mathbf{V} =  \mathbf{V} \left( \mathbf{r} \right) \]

Likewise, the general form becomes

\[ \mathbf{A} =  \mathbf{A} \left( \mathbf{r} \right) \]

and when \( \mathbf{r} \) is the position vector, \( \mathbf{A} \) is a vector function of position.
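As a small numerical aside (my own sketch, not from the references), the rigid-body example can be evaluated directly, which makes the "vector function of a vector" idea concrete:

```python
import numpy as np

# V(r) = omega x r for a rigid body spinning about the z-axis.
# The angular velocity below is an arbitrary illustrative value.
omega = np.array([0.0, 0.0, 2.0])   # rad/s

def velocity(r):
    """Velocity at position r, measured from the rotation axis."""
    return np.cross(omega, r)

print(velocity(np.array([1.0, 0.0, 0.0])))  # [0. 2. 0.]
print(velocity(np.array([0.0, 1.0, 0.0])))  # [-2. 0. 0.]
```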

Ubuntu 11.10 'Oneiric Ocelot' Released (and some things to do after install/upgrade)

Ubuntu 11.10 is here! Going to list some links:

Ubuntu 11.10 'Oneiric Ocelot' Released, Full Review, Video and Screenshots Tour ~ Ubuntu Vibes

Ubuntu 11.10 (Oneiric Ocelot) is Here! - http://techhamlet.com/2011/10/oneiric-ocelot-is-here/

Tuesday, October 11, 2011

General Math - Vectors - Bases


According to Betten [1], an orthonormal basis example includes the three-dimensional rectangular Cartesian coordinate system, \( x_i, \; i = 1, 2, 3 \), where a vector is (as also noted in a previous post)

\[ \mathbf{V} = \left( V_1, V_2, V_3 \right) = V_1 \mathbf{e}_1 + V_2 \mathbf{e}_2 + V_3 \mathbf{e}_3 \]

and the unit base vectors are \( \mathbf{e}_1 \), \( \mathbf{e}_2 \), \( \mathbf{e}_3 \).  These unit base vectors make up the orthonormal basis.  One property of these unit base vectors is expressed through the Kronecker delta:

\[ \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij} \]


According to Pahl and Damrath [2], a basis of a real vector space, \( \mathbb{R}^n \), is written as \( \mathbf{b}_1, \ldots, \mathbf{b}_n \).  The basis is orthogonal if the basis vectors are pairwise orthogonal, and orthonormal if, in addition, each basis vector has a magnitude of one.

\[ \text{orthogonal basis:} \qquad i \ne m \quad \Rightarrow \quad \mathbf{b}_i \cdot \mathbf{b}_m = 0 \]

\[ \text{orthonormal basis:} \qquad i = m \quad \Rightarrow \quad \mathbf{b}_i \cdot \mathbf{b}_m = 1 \qquad i \ne m \quad \Rightarrow \quad \mathbf{b}_i \cdot \mathbf{b}_m = 0 \]
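A quick numerical check of these two conditions (my own sketch): for an orthonormal basis, the Gram matrix of pairwise dot products \( \mathbf{b}_i \cdot \mathbf{b}_m \) must be the identity.

```python
import numpy as np

# The rows of a rotation matrix form an orthonormal basis of R^3.
t = np.pi / 4
B = np.array([[ np.cos(t), np.sin(t), 0.0],   # b_1
              [-np.sin(t), np.cos(t), 0.0],   # b_2
              [ 0.0,       0.0,       1.0]])  # b_3

gram = B @ B.T                       # entry (i, m) is b_i . b_m
print(np.allclose(gram, np.eye(3)))  # True -> orthonormal
```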

A canonical basis is the standard basis \( \mathbf{e}_1, \ldots, \mathbf{e}_n \) of \( \mathbb{R}^n \), in which the \( i \)-th basis vector has a one in the \( i \)-th component and zeros elsewhere; it is automatically orthonormal.

A covariant basis is written as [2]

\[ \mathbf{b}_1, \ldots, \mathbf{b}_n \]

where the indices are written as subscripts, while a contravariant basis carries its indices as superscripts [2]:

\[ \mathbf{b}^1, \ldots, \mathbf{b}^n \]

A more general choice of basis leads to a more general coordinate system (such as cylindrical), known as a curvilinear coordinate system.  It is sometimes more convenient to work in such coordinate systems.

In such a convention, the rectangular Cartesian right-handed orthogonal coordinates, \( x_i \), define a three-dimensional Euclidean space [1].  Curvilinear coordinates can be expressed as \( \xi^i \), and the transformation between rectangular and curvilinear coordinates is

\[ x_i = x_i \left( \xi^p \right) \Leftrightarrow \xi^i = \xi^i \left( x_p \right) \]

Slattery [3] gives a curvilinear coordinate system where a spatial vector field can be written as a linear combination of the natural basis


\[ \mathbf{u} = u^i \mathbf{g}_i \]

or a linear combination of the dual basis


\[ \mathbf{u} = u_i \mathbf{g}^i \]

In the rectangular Cartesian coordinate system, the distinction between covariant and contravariant components is unnecessary since the natural and dual basis vectors coincide [3].  Thus,

\[ \mathbf{u} =  u^i \mathbf{g}_i = u^i g_{ki} \mathbf{g}^k = u_k \mathbf{g}^k \]

From this one can separate into

\[ \left( u^i g_{ki} - u_k \right) \mathbf{g}^k = 0 \]

and

\[ u^i g_{ki} - u_k = 0 \]

which comes out to be

\[ u_k = g_{ki} u^i \]

Similarly,

\[ \mathbf{u} =  u_i \mathbf{g}^i = u_i g^{ji} \mathbf{g}_j = u^j \mathbf{g}_j \]

and

\[ u^j = g^{ji}u_i  \]

These relations allow indices to be raised and lowered.
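To make the raising and lowering concrete, here is a sketch of mine (using cylindrical coordinates as an assumed example) that builds the natural basis \( \mathbf{g}_i = \partial \mathbf{x} / \partial \xi^i \), the covariant metric \( g_{ij} = \mathbf{g}_i \cdot \mathbf{g}_j \), and then lowers and raises a set of components:

```python
import sympy as sp

# Cylindrical coordinates xi = (r, theta, z) mapped to Cartesian x_i.
r, th, z = sp.symbols('r theta z', positive=True)
xi = (r, th, z)
x = sp.Matrix([r*sp.cos(th), r*sp.sin(th), z])

# Natural basis g_i = dx/dxi^i and covariant metric g_ij = g_i . g_j
g = [x.diff(q) for q in xi]
g_cov = sp.Matrix(3, 3, lambda i, j: g[i].dot(g[j]))
print(g_cov)                 # diag(1, r**2, 1)

g_contra = g_cov.inv()       # contravariant metric g^{ij}

# Lower the indices of contravariant components: u_k = g_{ki} u^i
u_contra = sp.Matrix([1, 1, 1])
u_cov = g_cov * u_contra
print(u_cov.T)               # [1, r**2, 1]

# Raising them again recovers the original: u^j = g^{ji} u_i
print(sp.simplify(g_contra * u_cov).T)  # [1, 1, 1]
```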




References:


[1] J. Betten. Creep Mechanics, 3rd ed. Springer, Berlin, Germany. 2008.

[2] P. J. Pahl and R. Damrath. Mathematical Foundations of Computational Engineering: A Handbook, Springer, Berlin, Germany. 2001.

[3] J. C. Slattery. Advanced Transport Phenomena, Cambridge University Press, Cambridge, UK. 1999.

Monday, October 10, 2011

References for electrodynamics

So I am taking a class entitled "Mathematical Methods for Physicists."  The class is pretty difficult because the teacher is a physicist and teaches from his own experience and background; the title of the class points to this as well.  The class is cross-listed as both an official math class and a physics class.  I am not complaining by any means, :P.  I am really enjoying the class, even though it requires some extra time to catch up on physics terms, equations, etc. that I, as an engineer, do not know very well.  It is interesting, as I have recently found a respect for, and an interest in, theoretical physics from documentaries and shows on TV: things like relativity, black holes, quantum physics, string theory, astrodynamics, etc.  One topic I have come across in the class is electrodynamics.  I will be posting about electrodynamics and other topics, including references.



[1] D. Fleisch. A Student's Guide to Maxwell's Equations, Cambridge University Press, Cambridge, UK. 2008


[2] D. J. Griffiths. Introduction to Electrodynamics, Prentice-Hall Inc., Upper Saddle River, NJ. 1999




deal.II Homepage

deal.II Homepage: A Finite Element Differential Equations Analysis Library - October 9th, 2011: deal.II 7.1 released

What is deal.II? 

deal.II is a C++ program library targeted at the computational solution of partial differential equations using adaptive finite elements. It uses state-of-the-art programming techniques to offer you a modern interface to the complex data structures and algorithms required. 
The main aim of deal.II is to enable rapid development of modern finite element codes, using among other aspects adaptive meshes and a wide array of tool classes often used in finite element programs. Writing such programs is a non-trivial task, and successful programs tend to become very large and complex. We believe that this is best done using a program library that takes care of the details of grid handling and refinement, handling of degrees of freedom, input of meshes and output of results in graphics formats, and the like. Likewise, support for several space dimensions at once is included in a way such that programs can be written independent of the space dimension without unreasonable penalties on run-time and memory consumption. 
deal.II is widely used in many academic and commercial projects. For its creation, its principal authors have received the 2007 J. H. Wilkinson Prize for Numerical Software. It is also part of the industry standard SPEC CPU 2006 benchmark suite used to determine the speed of computers and compilers, and comes pre-installed on the machines offered by the commercial Sun Grid program. 

deal.II emerged from work at the Numerical Methods Group at Universität Heidelberg, Germany, which is at the forefront of adaptive finite element methods and error estimators. Today, it is maintained by two of its original authors at Texas A&M University, and dozens of contributors and several hundred users are scattered around the world (see our credits page for a detailed list of people contributing to deal.II).

What can deal.II offer you? 

If you are active in the field of adaptive finite element methods, deal.II might be the right library for your projects. Among other features, it offers: 

Support for one, two, and three space dimensions, using a unified interface that allows programs to be written almost independently of the space dimension. 

Handling of locally refined grids, including different adaptive refinement strategies based on local error indicators and error estimators. h, p, and hp refinement are fully supported for continuous and discontinuous elements. 

Support for a variety of finite elements: Lagrange elements of any order, continuous and discontinuous; Nedelec and Raviart-Thomas elements of any order; elements composed of other elements. 

Parallelization on a single machine through the Threading Building Blocks and across nodes via MPI. deal.II has been shown to scale to at least 16k processors. 

Extensive documentation: all documentation is available online in a logical tree structure to allow fast access to the information you need. If printed it comprises more than 500 pages of tutorials, several reports, and presently some 5,000 pages of programming interface documentation with explanations of all classes, functions, and variables. All documentation comes with the library and is available online locally on your computer after installation. 

Modern software techniques that make access to the complex data structures and algorithms as transparent as possible. The use of object oriented programming allows for program structures similar to the structures in mathematical analysis. 

A complete stand-alone linear algebra library including sparse matrices, vectors, Krylov subspace solvers, support for blocked systems, and interfaces to other packages such as Trilinos, PETSc and METIS.

Support for several output formats, including many common formats for visualization of scientific data.

Portable support for a variety of computer platforms and compilers.
Free source code under an Open Source license, and the invitation to contribute to further development of the library.

Saturday, October 8, 2011

Proof of the vector relation curl(curl(A)) = grad(div(A)) - div(grad(A))

The vector relation

\( \text{curl}(\text{curl}\mathbf{A}) = \text{grad}(\text{div}\mathbf{A}) - \text{div}(\text{grad}\mathbf{A}) \)

which is the same as

\[ \nabla \times \left( \nabla \times \mathbf{A} \right) = \nabla \left( \nabla \cdot  \mathbf{A} \right) - \nabla^2 \mathbf{A} \]

where \( \nabla^2 \) is the Laplacian or Laplace operator, and \( \nabla^2 \mathbf{A} = \nabla \cdot \nabla \mathbf{A} \).

The relation can also be written as

\[ \nabla^2 \mathbf{A} = \nabla \left( \nabla \cdot  \mathbf{A} \right) - \nabla \times \left( \nabla \times \mathbf{A} \right) \]

This relation is very useful in areas such as fluid and electrodynamics.
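Before expanding everything by hand, the identity can be checked symbolically.  Here is a sketch of such a check with SymPy's vector module (an aside of mine, for a generic smooth field):

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# A generic smooth vector field with unspecified component functions
Ax, Ay, Az = [sp.Function(s)(x, y, z) for s in ('A_x', 'A_y', 'A_z')]
A = Ax*N.i + Ay*N.j + Az*N.k

lhs = curl(curl(A))
# Componentwise vector Laplacian (valid in Cartesian coordinates)
lap_A = A.diff(x).diff(x) + A.diff(y).diff(y) + A.diff(z).diff(z)
rhs = gradient(divergence(A)) - lap_A

for e in (N.i, N.j, N.k):
    print(sp.simplify((lhs - rhs).dot(e)))  # 0, 0, 0
```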

To show that this relation is true, we simply expand the operations in the three-dimensional Cartesian coordinate system.

First we have,

\( \nabla \times \left( \nabla \times \mathbf{A} \right) \)

so we take

\( \nabla \times \mathbf{A} \)

which equals the determinant

\[ \nabla \times \mathbf{A} = \begin{vmatrix} \mathbf{i} &  \mathbf{j} &  \mathbf{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ A_x & A_y & A_z \end{vmatrix} \]

Expanding the determinant produces

\[ \left( \dfrac{\partial A_z}{\partial y} - \dfrac{\partial A_y}{\partial z} \right) \mathbf{i} - \left( \dfrac{\partial A_z}{\partial x} - \dfrac{\partial A_x}{\partial z} \right) \mathbf{j}+ \left( \dfrac{\partial A_y}{\partial x} - \dfrac{\partial A_x}{\partial y} \right) \mathbf{k} \]

which can be condensed and written as

\[ \boldsymbol{\omega} = \boldsymbol{\omega} \left( \omega_x, \omega_y, \omega_z \right) \]

where

\[ \boldsymbol{\omega} = \underbrace{\left( \dfrac{\partial A_z}{\partial y} - \dfrac{\partial A_y}{\partial z} \right)}_{\omega_x} \mathbf{i} + \underbrace{\left( \dfrac{\partial A_x}{\partial z} - \dfrac{\partial A_z}{\partial x} \right)}_{\omega_y} \mathbf{j} + \underbrace{\left( \dfrac{\partial A_y}{\partial x} - \dfrac{\partial A_x}{\partial y} \right)}_{\omega_z} \mathbf{k} \]

Then continuing the expansion gives

\[ \nabla \times \boldsymbol{\omega} = \nabla \times \nabla \times \mathbf{A} \]

and the determinant to find is

\[ = \begin{vmatrix} \mathbf{i} &  \mathbf{j} &  \mathbf{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ \omega_x & \omega_y & \omega_z \end{vmatrix} \]

which, expanded, gives

\[ \left( \dfrac{\partial \omega_z}{\partial y} - \dfrac{\partial \omega_y}{\partial z} \right) \mathbf{i} - \left( \dfrac{\partial \omega_z}{\partial x} - \dfrac{\partial \omega_x}{\partial z} \right) \mathbf{j}+ \left( \dfrac{\partial \omega_y}{\partial x} - \dfrac{\partial \omega_x}{\partial y} \right) \mathbf{k} \]

Now substituting in the definitions of \( \omega_x \), \( \omega_y \), and \( \omega_z \) gives, for each needed derivative,

\[ \dfrac{\partial \omega_z}{\partial y} = \dfrac{\partial}{\partial y} \left( \dfrac{\partial A_y}{\partial x} - \dfrac{\partial A_x}{\partial y} \right) = \dfrac{\partial^2 A_y}{\partial x \partial y} - \dfrac{\partial^2 A_x}{\partial y^2} \]

\[ \dfrac{\partial \omega_y}{\partial z} = \dfrac{\partial}{\partial z} \left( \dfrac{\partial A_x}{\partial z} - \dfrac{\partial A_z}{\partial x} \right) = \dfrac{\partial^2 A_x}{\partial z^2} - \dfrac{\partial^2 A_z}{\partial x \partial z} \]

\[ \dfrac{\partial \omega_z}{\partial x} = \dfrac{\partial}{\partial x} \left( \dfrac{\partial A_y}{\partial x} - \dfrac{\partial A_x}{\partial y} \right) = \dfrac{\partial^2 A_y}{\partial x^2} - \dfrac{\partial^2 A_x}{\partial x \partial y} \]

\[ \dfrac{\partial \omega_x}{\partial z} = \dfrac{\partial}{\partial z} \left( \dfrac{\partial A_z}{\partial y} - \dfrac{\partial A_y}{\partial z} \right) = \dfrac{\partial^2 A_z}{\partial y \partial z} - \dfrac{\partial^2 A_y}{\partial z^2} \]

\[ \dfrac{\partial \omega_y}{\partial x} = \dfrac{\partial}{\partial x} \left( \dfrac{\partial A_x}{\partial z} - \dfrac{\partial A_z}{\partial x} \right) = \dfrac{\partial^2 A_x}{\partial x \partial z} - \dfrac{\partial^2 A_z}{\partial x^2} \]

\[  \dfrac{\partial \omega_x}{\partial y} = \dfrac{\partial}{\partial y} \left( \dfrac{\partial A_z}{\partial y} - \dfrac{\partial A_y}{\partial z} \right) = \dfrac{\partial^2 A_z}{\partial y^2} - \dfrac{\partial^2 A_y}{\partial y \partial z} \]

Now combining

\[ \left( \dfrac{\partial^2 A_y}{\partial x \partial y} - \dfrac{\partial^2 A_x}{\partial y^2} + \dfrac{\partial^2 A_z}{\partial x \partial z} - \dfrac{\partial^2 A_x}{\partial z^2} \right) \mathbf{i}\]

\[ - \left( \dfrac{\partial^2 A_y}{\partial x^2} - \dfrac{\partial^2 A_x}{\partial x \partial y} - \dfrac{\partial^2 A_z}{\partial y \partial z} + \dfrac{\partial^2 A_y}{\partial z^2} \right) \mathbf{j} \]

\[ \left( \dfrac{\partial^2 A_x}{\partial x \partial z} - \dfrac{\partial^2 A_z}{\partial x^2} + \dfrac{\partial^2 A_y}{\partial y \partial z} - \dfrac{\partial^2 A_z}{\partial y^2} \right) \mathbf{k} \]
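Sketching the remaining regrouping step (for the \( \mathbf{i} \) component; the others follow the same pattern): add and subtract \( \partial^2 A_x / \partial x^2 \) to complete the divergence,

\[ \dfrac{\partial^2 A_y}{\partial x \partial y} + \dfrac{\partial^2 A_z}{\partial x \partial z} - \dfrac{\partial^2 A_x}{\partial y^2} - \dfrac{\partial^2 A_x}{\partial z^2} + \dfrac{\partial^2 A_x}{\partial x^2} - \dfrac{\partial^2 A_x}{\partial x^2} = \dfrac{\partial}{\partial x} \left( \nabla \cdot \mathbf{A} \right) - \nabla^2 A_x \]

so the \( \mathbf{i} \) component matches the \( x \) component of \( \nabla \left( \nabla \cdot \mathbf{A} \right) - \nabla^2 \mathbf{A} \), and likewise for \( \mathbf{j} \) and \( \mathbf{k} \).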

In progress...to be continued.

Wednesday, October 5, 2011

Some references for fluid mechanics/dynamics


A list of fluid mechanics/dynamics book references.

[1] P. K. Kundu and I. M. Cohen. Fluid Mechanics, 3rd ed. Elsevier Academic Press, San Diego, CA. 2004



[2] T. C. Papanastasiou, G. C. Georgiou, & A. N. Alexandrou. Viscous Fluid Flow. CRC Press. Boca Raton, FL. 2000


[3] K. Karamcheti. Principles of Ideal-Fluid Aerodynamics. John Wiley & Sons, Inc., New York, NY. 1966



[4] R. L. Panton. Incompressible Flow. (3rd ed.). John Wiley & Sons, Inc. Hoboken, NJ. 2005



[5] M. E. O’Neill & F. Chorlton. Viscous and Compressible Fluid Dynamics. Ellis Horwood Limited. Chichester, UK. 1989

[6] R. Aris. Vectors, Tensors and the Basic Equations of Fluid Mechanics. Dover Publications, New York, NY. 1990



[7] M. T. Schobeiri. Fluid Mechanics for Engineers - A Graduate Textbook. Springer, Berlin, Germany. 2010




Numerical Math - The Jacobi method


A simple explanation given by Landau et al [1] defines the Jacobi method as a sweep across all the values using only the previous iteration's values, with no newly updated values mixed in even when they are available. The Jacobi method is very basic. For example, for the square-wire problem in Landau et al, the numerical algorithm to be solved is

\[ U_{i, j} = \dfrac{1}{4} \left( U_{i + 1, j} + U_{i - 1, j} + U_{i, j + 1} + U_{i, j - 1} \right) \]

Because old and new values are never mixed within a sweep, any symmetry in the initialization and boundary conditions is preserved.
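A minimal sketch of the sweep (the grid size and boundary values are my own illustrative assumptions, not the text's exact setup):

```python
import numpy as np

# Jacobi iteration for the Laplace equation on a square grid:
# every update uses only values from the previous iteration.
N = 50
U = np.zeros((N, N))
U[0, :] = 100.0               # one edge held at a fixed potential

for _ in range(10000):
    U_new = U.copy()
    U_new[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1]
                                + U[1:-1, 2:] + U[1:-1, :-2])
    change = np.max(np.abs(U_new - U))
    U = U_new
    if change < 1e-6:         # stop once a sweep barely changes U
        break
```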



References:

[1] R. H. Landau, M. J. Páez, and C. C. Bordeianu. A Survey of Computational Physics - Introductory Computational Science, Princeton University Press, Princeton, New Jersey. 2008

Numerical Math - The Gauss-Seidel (GS) method


The Gauss-Seidel method is an improvement on the basic Jacobi method [1]. The GS method uses newly computed values in the algorithm as soon as they are available, whereas the Jacobi method sweeps across the domain using only the previous iteration's values. The accelerated convergence also produces less round-off error, and the method needs less memory since only one generation of guesses is stored.

If the sweep begins at the top left corner then the GS algorithm looks like

\begin{equation} U_{i, j} = \dfrac{1}{4} \left[ U^{(old)}_{i + 1, j} + U^{(new)}_{i - 1, j} + U^{(old)}_{i, j + 1} + U^{(new)}_{i, j - 1} \right] \end{equation}

Compare this with the Jacobi method below.  Both algorithms shown are for the square-wire finite-difference problem in Landau et al [1].

\begin{equation} U_{i, j} = \dfrac{1}{4} \left( U_{i + 1, j} + U_{i - 1, j} + U_{i, j + 1} + U_{i, j - 1} \right) \end{equation}
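A sketch of the corresponding sweep (same illustrative grid assumptions as in the Jacobi post): the only coding difference is that Gauss-Seidel overwrites the grid in place, so the \( (i-1, j) \) and \( (i, j-1) \) neighbors are already new values when site \( (i, j) \) is visited, and only one copy of the grid is stored.

```python
import numpy as np

def gauss_seidel_sweep(U):
    """One in-place GS sweep starting from the top-left corner."""
    for i in range(1, U.shape[0] - 1):
        for j in range(1, U.shape[1] - 1):
            U[i, j] = 0.25 * (U[i + 1, j] + U[i - 1, j]
                              + U[i, j + 1] + U[i, j - 1])

N = 50
U = np.zeros((N, N))
U[0, :] = 100.0              # same assumed boundary condition
for _ in range(500):         # convergence test omitted for brevity
    gauss_seidel_sweep(U)
```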



References:

[1] R. H. Landau, M. J. Páez, and C. C. Bordeianu. A Survey of Computational Physics - Introductory Computational Science, Princeton University Press, Princeton, New Jersey. 2008

Tuesday, October 4, 2011

Numerical Math - The Finite Element Method (FEM)


From Landau [1], the finite element method (FEM) is explained as solving a PDE by subdividing the whole region or domain into smaller areas known as elements. An approximate form of the solution is then formulated for the PDE on each element.  The final step is to adjust the parameters of that initial formulation to obtain a best fit.

Pepper and Heinrich [2] open their FEM book by explaining that the method is a numerical formulation that solves physics and engineering models described by differential equations (DEs). Similar to finite difference methods (FDM), FEM models a geometric region that is broken up into a number of smaller subregions, resulting in a network known as a mesh. One difference between FDM and FEM is the mesh geometry: FDM requires the network to be made of orthogonal rows and columns (squares and rectangles), while FEM has no such limitation; the elements can be, in fact, almost any shape, such as triangles and/or quadrilaterals in two dimensions and tetrahedra and/or hexahedra in three dimensions.  Next, on each finite element the unknown variables are approximated by assumed functions. These approximating expansions appear as linear or higher-order polynomial functions and depend upon the geometric shape of the element and on the locations where unknowns are carried, known as nodes. A second difference between FDM and FEM noted by Pepper and Heinrich [2] is the solution method: FEM integrates over each subregion and then adds or connects the pieces to make up the whole. This integration and summation produces a set of linear algebraic equations for each section, which can then be solved using linear algebra methods. Jiang [6] also discusses that FEM does not...

The network of elements and nodes (where the elements connect) makes up a discrete system such as a truss, an electric circuit, or a network of fluid transport pipes [5]. To solve for the system variables, such as displacements, electric potentials, and pressures, one can begin with a known and simple relation for each element, as in the three laws below (see the assembly sketch that follows them).

Hooke’s Law:

\begin{equation} F = \dfrac{\Delta L}{R} = E A \dfrac{\Delta L}{L} \end{equation}

Ohm’s Law:

\begin{equation} i = \dfrac{\Delta V}{R} = \dfrac{A}{\rho} \dfrac{\Delta V}{L} \end{equation}

Poiseuille’s Law:

\begin{equation} \dot{m} = \dfrac{\Delta p}{R} = \dfrac{\rho \pi D^4}{128 \mu} \dfrac{\Delta p}{L} \end{equation}
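All three laws share the form flow = (potential difference)/R, which is why such two-node elements assemble into a single global linear system. Here is a hedged sketch of mine of that assembly (the element resistances and the applied load are made-up illustration values):

```python
import numpy as np

def assemble(n_nodes, elements):
    """Assemble the global matrix from two-node elements (a, b, R),
    each contributing flow = (potential difference) / R."""
    K = np.zeros((n_nodes, n_nodes))
    for a, b, R in elements:
        k = 1.0 / R
        K[a, a] += k
        K[b, b] += k
        K[a, b] -= k
        K[b, a] -= k
    return K

# A chain of three elements connecting nodes 0-1-2-3
K = assemble(4, [(0, 1, 2.0), (1, 2, 5.0), (2, 3, 1.0)])

# Fix the potential at node 0 and drive a unit flow into node 3;
# solve the reduced system for the free nodes 1..3.
f = np.array([0.0, 0.0, 1.0])
u = np.zeros(4)
u[1:] = np.linalg.solve(K[1:, 1:], f)
print(u)   # node potentials / displacements / pressures
```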

Jiang [6] notes that FEM has been used as one of the most general numerical techniques for solving DEs, and with great success. Jiang even quotes Oden:

"Perhaps no other family of approximation methods has had a greater impact on the theory and application of numerical methods during the twentieth century."



In progress...to be continued.



References:

[1] R. H. Landau, M. J. Páez, and C. C. Bordeianu. A Survey of Computational Physics - Introductory Computational Science, Princeton University Press, Princeton, New Jersey. 2008

[2] D. W. Pepper and J. C. Heinrich. The Finite Element Method: Basic Concepts and Applications, Taylor & Francis Hemisphere Publishing Corporation, Washington, DC. 1992

[3] O. C. Zienkiewicz, R. L. Taylor, and J. Z. Zhu. The Finite Element Method - Its Basis and Fundamentals, 6th ed. Elsevier Butterworth-Heinemann, Burlington, MA. 2005

[4] J. C. Heinrich and D. W. Pepper. The Intermediate Finite Element Method: Fluid Flow and Heat Transfer Applications, Taylor & Francis Hemisphere Publishing Corporation, Washington, DC. 1999

[5] G. Comini, S. D. Giudice and C. Nonino. Finite Element Analysis in Heat Transfer: Basic Formulation & Linear Problems, Taylor & Francis Hemisphere Publishing Corporation, Washington, DC. 1994

[6] B.-N. Jiang. The Least-Squares Finite Element Method: Theory and Applications in Computational Fluid Dynamics and Electromagnetics, Springer-Verlag, Berlin, Germany. 1998

[7] O. C. Zienkiewicz, R. L. Taylor, and P. Nithiarasu. The Finite Element Method for Fluid Dynamics, (Volume 3) 6th ed. Elsevier Butterworth-Heinemann, Burlington, MA. 2005









Some references for numerical methods and CFD

A list of numerical analysis and CFD book references.

[1] O. C. Zienkiewicz, R. L. Taylor, and J. Z. Zhu. The Finite Element Method - Its Basis and Fundamentals, (Volume 1) 6th ed. Elsevier Butterworth-Heinemann, Burlington, MA. 2005

[2] O. C. Zienkiewicz and R. L. Taylor. The Finite Element Method for Solid and Structural Mechanics, (Volume 2) 6th ed. Elsevier Butterworth-Heinemann, Burlington, MA. 2005

[3] O. C. Zienkiewicz, R. L. Taylor, and P. Nithiarasu. The Finite Element Method for Fluid Dynamics, (Volume 3) 6th ed. Elsevier Butterworth-Heinemann, Burlington, MA. 2005

[4] D. W. Pepper and J. C. Heinrich. The Finite Element Method: Basic Concepts and Applications, Taylor & Francis Hemisphere Publishing Corporation, Washington, DC. 1992

[5] G. Comini, S. D. Giudice and C. Nonino. Finite Element Analysis in Heat Transfer: Basic Formulation & Linear Problems, Taylor & Francis Hemisphere Publishing Corporation, Washington, DC. 1994

[6] J. C. Heinrich and D. W. Pepper. The Intermediate Finite Element Method: Fluid Flow and Heat Transfer Applications, Taylor & Francis Hemisphere Publishing Corporation, Washington, DC. 1999

[7] J. Donéa and A. Huerta. Finite Element Methods for Flow Problems, John Wiley & Sons, Ltd., Chichester, UK. 2003

[8] B.-N. Jiang. The Least-Squares Finite Element Method: Theory and Applications in Computational Fluid Dynamics and Electromagnetics, Springer-Verlag, Berlin, Germany. 1998

[9] P. M. Gresho and R. L. Sani. Incompressible Flow and the Finite Element Method: Advection-Diffusion and Isothermal Laminar Flow, John Wiley & Sons, Ltd., Chichester, UK. 1998

[10] P. M. Gresho and R. L. Sani. Incompressible Flow and the Finite Element Method, Volume 1: Advection-Diffusion, (reprint), John Wiley & Sons, Ltd., Chichester, UK. 2000

[11] P. M. Gresho and R. L. Sani. Incompressible Flow and the Finite Element Method, Volume 2: Isothermal Laminar Flow, (reprint), John Wiley & Sons, Ltd., Chichester, UK. 2000

[12] R. H. Landau and M. J. Páez. Computational Physics: Problem Solving with Computers, John Wiley & Sons, New York, NY. 1997

[13] R. H. Landau and M. J. Páez. Computational Physics: Problem Solving with Computers, 2nd ed. John Wiley & Sons, New York, NY. 2007

[14] R. H. Landau, M. J. Páez, and C. C. Bordeianu. A Survey of Computational Physics - Introductory Computational Science, Princeton University Press, Princeton, New Jersey. 2008

[15] T. J. Chung. Computational Fluid Dynamics, Cambridge University Press, Cambridge, UK. 2002

[16] T. J. Chung. Computational Fluid Dynamics, 2nd ed. Cambridge University Press, Cambridge, UK. 2010

[17] D. A. Anderson, J. C. Tannehill, and R. H. Pletcher. Computational Fluid Mechanics and Heat Transfer, Taylor & Francis Hemisphere. 1984

[18] D. A. Anderson, J. C. Tannehill, and R. H. Pletcher. Computational Fluid Mechanics and Heat Transfer, 2nd ed. Taylor & Francis Hemisphere, New York, NY. 1997

[19] S. V. Patankar. Numerical Heat Transfer and Fluid Flow, Taylor & Francis Hemisphere. 1980

[20] T.-M. Shih. Numerical Heat Transfer, Taylor & Francis Hemisphere. 1984

[21] J. N. Reddy and D. K. Gartling. The Finite Element Method in Heat Transfer and Fluid Dynamics, 2nd ed. CRC Press. 2001

[22] J. N. Reddy and D. K. Gartling. The Finite Element Method in Heat Transfer and Fluid Dynamics, 3rd ed. CRC Press. 2010

[23] A. W. Date. Introduction to Computational Fluid Dynamics, Cambridge University Press, Cambridge, UK. 2005

[24] J. Tu, G. H. Yeoh, and C. Liu. Computational Fluid Dynamics: A Practical Approach, Elsevier Butterworth-Heinemann, Burlington, MA. 2008

[25] J. Blazek. Computational Fluid Dynamics: Principles and Applications, (reprinted in 2006) Elsevier, Oxford, UK. 2001

[26] J. Blazek. Computational Fluid Dynamics: Principles and Applications, 2nd ed. (reprinted in 2007) Elsevier, Oxford, UK. 2005

[27] H. K. Versteeg and W. Malalasekera. An Introduction to Computational Fluid Dynamics: The Finite Volume Method Approach, Longman Scientific & Technical, Harlow, UK. 1995

[28] H. K. Versteeg and W. Malalasekera. An Introduction to Computational Fluid Dynamics: The Finite Volume Method Approach, 2nd ed. Prentice Hall Pearson Education Limited, Harlow, UK. 2007

[29] J. D. Hoffman. Numerical Methods for Engineers and Scientists, 2nd ed. Marcel Dekker, Inc., New York, NY. 2001



In progress...to be continued.





Monday, October 3, 2011

Math - Vectors - Vector spaces and some vector properties

According to Kaplan [1], a vector space \( V^{\; n} \) of n-tuples \( \mathbf{v} = \left( v_1, \ldots, v_n \right) \) satisfies the following properties:

\begin{align} \mathbf{I.} \quad & \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \\
\mathbf{II.} \quad & \left( \mathbf{u} + \mathbf{v} \right) + \mathbf{w} = \mathbf{u} + \left( \mathbf{v} + \mathbf{w} \right) \\
\mathbf{III.} \quad & h \left( \mathbf{u} + \mathbf{v} \right) = h \mathbf{u} + h \mathbf{v} \\
\mathbf{IV.} \quad & \left( a + b \right) \mathbf{u} = a \mathbf{u} + b \mathbf{u} \\
\mathbf{V.} \quad & \left( a b \right) \mathbf{u} = a \left( b \mathbf{u} \right) \\
\mathbf{VI.} \quad & 1 \mathbf{u} = \mathbf{u} \\
\mathbf{VII.} \quad & 0 \mathbf{u} = \mathbf{0} \\
\mathbf{VIII.} \quad & \mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u} \\
\mathbf{IX.} \quad & \left( \mathbf{u} + \mathbf{v} \right) \cdot \mathbf{w} = \mathbf{u} \cdot \mathbf{w} + \mathbf{v} \cdot \mathbf{w} \\
\mathbf{X.} \quad & \left( a \mathbf{u} \right) \cdot \mathbf{v} = a \left( \mathbf{u} \cdot \mathbf{v} \right) \\
\mathbf{XI.} \quad & \mathbf{u} \cdot \mathbf{u} \ge 0 \\
\mathbf{XII.} \quad & \mathbf{u} \cdot \mathbf{u} = 0 \; \; \text{if and only if} \; \; \mathbf{u} = \mathbf{0} \end{align}

The first property in the above list is known as the commutative property or law, which holds for vector addition [2]:

\[ \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \]

or with scalars [3]

\[ m \mathbf{u} = \mathbf{u}m \]

Note:  I think there might be an error in Tai [4].  He states that the associative law is


\[  \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \]

or

\[ \mathbf{A} - \mathbf{B} = -\mathbf{B} + \mathbf{A} \]

when I think he meant commutative.


The second property in the above list is known as the associative property or law, which also holds for vector addition [2, 3]:

\[ \left( \mathbf{u} + \mathbf{v} \right) + \mathbf{w} = \mathbf{u} + \left( \mathbf{v} + \mathbf{w} \right) \]

or with scalars [3]

\[ m \left( n \mathbf{u} \right) = \left( mn \right) \mathbf{u} \]

The third and fourth properties in the above list are known as the distributive properties or laws, which also hold for vectors and scalars [3]:

\[  \left( m + n \right) \mathbf{u} = m \mathbf{u} + n \mathbf{u} \]

or


\[  m \left(  \mathbf{u} +  \mathbf{v} \right) = m \mathbf{u} + m \mathbf{v} \]
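A small numerical spot-check of these laws (my own sketch, with arbitrary vectors and scalars):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.random((3, 4))   # three arbitrary vectors in R^4
m, n = 2.0, -3.0

print(np.allclose(u + v, v + u))                # I:   commutative
print(np.allclose((u + v) + w, u + (v + w)))    # II:  associative
print(np.allclose(m * (u + v), m*u + m*v))      # III: distributive
print(np.allclose((m + n) * u, m*u + n*u))      # IV:  distributive
print(np.allclose((m * n) * u, m * (n * u)))    # V:   scalar associativity
```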





References:

[1] W. Kaplan. Advanced Calculus, 5th ed. Addison-Wesley. 2002

[2] A. I. Borisenko and I. E. Tarapov. Vector and Tensor Analysis with Applications, (translated by R. A. Silverman). Dover Publications Inc., Mineola, NY. 1979 (originally published in 1968 by Prentice-Hall, Inc.)

[3] T. C. Papanastasiou, G. C. Georgiou, & A. N. Alexandrou. Viscous Fluid Flow. CRC Press. Boca Raton, FL. 2000

[4] C.-T. Tai. General Vector and Dyadic Analysis: Applied Mathematics in Field Theory, 2nd ed. Wiley-IEEE Press, New York, NY. 1997.




Math - Vectors - Scalar multiplication onto a vector

Scalar multiplication of a vector is simply [1]

\begin{equation}
h \mathbf{u} = \left( h u_1, \ldots, h u_n \right)
\end{equation}



References:

[1] W. Kaplan. Advanced Calculus (5th ed.). Addison-Wesley. 2002

Math - Vectors - Addition

The sum of two vectors is simply [1]

\begin{equation}
\mathbf{u} + \mathbf{v} = \left( u_1 + v_1, \ldots, u_n + v_n \right)
\end{equation}


One property of vector addition is known as the commutative property [2]:

\[ \mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \]


Another property of vector addition is known as the associative property [2]:

\[ \left( \mathbf{A} + \mathbf{B} \right) + \mathbf{C} = \mathbf{A} + \left( \mathbf{B} + \mathbf{C} \right) \]



References:


[1] W. Kaplan. Advanced Calculus, 5th ed. Addison-Wesley. 2002

[2] A. I. Borisenko and I. E. Tarapov. Vector and Tensor Analysis with Applications, (translated by R. A. Silverman). Dover Publications Inc., Mineola, NY. 1979 (originally published in 1968 by Prentice-Hall, Inc.)