
The previous equation is still too general, and a connection between stress and strain is still needed. Here we consider the case in which there is a linear relationship between both, which involves the coefficient of viscosity.

To begin with, let us consider a simple case in which a fluid is confined between two planes. One of them moves sideways with a certain speed $V$, while the other is kept fixed. After a certain transient, some force is needed in order to maintain this shearing. The simplest expression is

$F = \frac{\mu \, A \, V}{L} .$

The force is proportional to the area $A$ and to the velocity difference $V$ between the planes. It is also inversely proportional to their separation, *L* (this being the least obvious of the three facts). Finally, the constant of proportionality is $\mu$, the viscosity coefficient, or

simply “the viscosity”. This constant may vary with temperature, density, or pressure, but the point about Newtonian fluids is that it does not vary with the velocity field (or its derivatives).
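As a quick numerical illustration of Newton's law of viscosity, F = μAV/L, here is a minimal sketch; the numbers are hypothetical, water-like values chosen for the example, not taken from the text.

```python
# Hypothetical values: water-like viscosity, a 1 m^2 plate, 1 mm gap.
mu = 1.0e-3  # viscosity [Pa s], roughly water at room temperature
A = 1.0      # plate area [m^2]
V = 1.0      # speed of the moving plane [m/s]
L = 1.0e-3   # separation between the planes [m]

# Newton's law of viscosity: force needed to sustain the shearing
F = mu * A * V / L
print(F)  # 1.0 (newtons)
```

Note that halving the gap *L* doubles the required force, which is the "least obvious" dependence mentioned above.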

Later, in section …, this flow will be solved as a solution of the Navier-Stokes equations: the Couette flow. There, it will be shown that the velocity is everywhere in the direction of the force exerted on the upper plane, let us call it $x$, and varies linearly between the planes, in the *y* direction. Therefore, the only non-zero components of the strain rate tensor are $\epsilon_{xy} = \epsilon_{yx} = V / (2L)$. We therefore have

$\sigma_{xy} = 2 \mu \, \epsilon_{xy} = \frac{\mu V}{L} .$
With this in mind, let us look for a general relationship between $\sigma$ and $\epsilon$. This is much easier if we move to the principal strain axes: the coordinates in which the strain rate tensor is diagonal. Such a coordinate system always exists, since the strain rate tensor is symmetric. Notice that in this system strains are not due to shear, only to dilations.

I found a very clear review of existing alternatives in the Font usage post by Ryosuke Iritani (入谷 亮介). I have taken his suggestions and created a gallery, with a simple sample of text and equations.

\usepackage[sc]{mathpazo}
\linespread{1.05} % Palladio needs more leading (space between lines)
\usepackage[T1]{fontenc}

\usepackage{kpfonts}

Used, e.g., in Wikipedia for sectioning.

\usepackage{libertine}
\usepackage{libertinust1math}
\usepackage[T1]{fontenc}

Scientific and Technical Information Exchange; Times-based, but much more elegant than the txfonts package.

\usepackage[T1]{fontenc}
\usepackage{stix}

It’s a bit thin and less friendly

\usepackage[urw-garamond]{mathdesign}
\usepackage[T1]{fontenc}

\usepackage[adobe-utopia]{mathdesign}
\usepackage[T1]{fontenc}

\usepackage[charter]{mathdesign}

\usepackage[T1]{fontenc}
\usepackage{cochineal}
\usepackage[cochineal,varg]{newtxmath}

Baskerville-based, thicker font

\usepackage[lf]{Baskervaldx} % lining figures
\usepackage[bigdelims,vvarbb]{newtxmath} % math italic letters from Nimbus Roman
\usepackage[cal=boondoxo]{mathalfa} % mathcal from STIX, unslanted a bit
\renewcommand*\oldstylenums[1]{\textosf{#1}}

So far, the only font not included in Iritani’s Font usage post!

\usepackage{helvet}
\usepackage{sansmath}
\usepackage{titlesec} % this enforces helvetica in section and chapter titles
\titleformat{\chapter}[display]
  {\normalfont\sffamily\huge\bfseries}
  {\chaptertitlename\ \thechapter}{20pt}{\Huge}
\titleformat{\section}
  {\normalfont\sffamily\Large\bfseries}
  {\thesection}{1em}{}
% In the main text, at the beginning:
\fontfamily{phv}\selectfont
% before the first equation:
\sansmath

All the above was produced with variations of this file. I just run latex on it, then dvips to get a ps file, which I then crop and export as PNG using the GIMP. Of course, depending on the system, some LaTeX packages may be needed, as well as fonts (I had to install urw-garamond on my arch linux system, for example.)

\documentclass{article}
\newcommand{\bfr}{\mathbf{r}}
\newcommand{\bfu}{\mathbf{u}}
\newcommand{\bfq}{\mathbf{q}}
\usepackage{amsmath}
\usepackage{libertine}
\usepackage{libertinust1math}
\usepackage[T1]{fontenc}
\usepackage{lipsum}% for filler text
\begin{document}
\section{A section}
\lipsum[10]
Equations:
\begin{equation}
\frac{d \mathbf{u}}{d t} = - \nabla p + \nu \nabla^2 \mathbf{u},
\end{equation}
\begin{equation}
\begin{split}
E &= m c^{2},\\
T &= 2\pi \sqrt{\frac{m}{k}}
\end{split}
\end{equation}
\begin{equation}
\iint \phi = - \oint p
\end{equation}
\end{document}

With python, start with

`ipython --pylab`

Then, read the data

In [4]: dt = loadtxt( '1/mesh.dat' )

In [5]: shape( dt )

Out[5]: (1024, 27)

Notice the last command tells us we have 1024 data points and 27 fields (well, 25 + positions). For convenience, assign columns to arrays:

In [6]: x=dt[:,0]; y=dt[:,1]; al=dt[:,4]

Now x and y are positions, and “al” is the scalar field for the fifth column (number 4, since counters start at 0 in python).

To visualize the positions,

In [7]: scatter( x , y )

A scalar field may be visualized with a color map:

In [9]: scatter( x , y , c = al )

The “c=” means the color is taken from field al. One may fiddle with colormaps and symbol sizes:

In [9]: scatter( x , y , c = al , cmap= plt.cm.Blues, s=8 )

To know the range we are plotting, produce a color bar:

In [19]: colorbar()

Remember each plot is overlaid on the previous one, so it is necessary to blank the figure from time to time:

In [11]: clf()

For vector fields, assign coordinates to two separate arrays:

In [20]: vx=dt[:,8]; vy=dt[:,9]

Then, use “quiver” to get a vector plot:

In [22]: quiver( x, y, vx , vy )
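The whole session can be collected into a standalone script. This is a sketch only: since the file 1/mesh.dat is not available here, a random array with the same shape (1024 rows, 27 columns) stands in for the real data, and the non-interactive Agg backend replaces `--pylab`.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Stand-in for loadtxt('1/mesh.dat'): 1024 data points, 27 fields
rng = np.random.default_rng(0)
dt = rng.random((1024, 27))

x, y, al = dt[:, 0], dt[:, 1], dt[:, 4]   # positions and a scalar field
vx, vy = dt[:, 8], dt[:, 9]               # a vector field

plt.scatter(x, y, c=al, cmap=plt.cm.Blues, s=8)  # scalar field as colors
plt.colorbar()                                   # show the plotted range
plt.clf()                                        # blank before the next plot
plt.quiver(x, y, vx, vy)                         # vector plot
plt.savefig("mesh.png")
```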


- Edit the film in the usual way (selecting scenes, transitions, cutting, etc.). (We still use Windows Movie Maker, although there are interesting options such as OpenShot.) It is better not to add music at this stage, although nothing is lost if you do.
- Once the cut is finished, extract the entire soundtrack and save it (preferably as ogg, or as good-quality mp3). This step is simple; VLC player itself can do it.
- Import the soundtrack into a DAW-style audio editor (I use Cubase 5). This is **track 1**. (In principle, a DAW can import the audio of a video directly, but Cubase 5 is somewhat behind on codecs.) About DAWs, by the way: several free ones (at least for a few projects) are appearing online (BandLab, Soundtrap); they may be a good alternative to traditional DAWs.
- Into other tracks, import the files from the phone. (Again, VLC was needed to convert the modern audio format, mp4, into ogg that Cubase 5 could read; this has to be done one file at a time, which is quite tedious; it would be worth investigating whether, e.g., Audacity handles this better.)
- Below track 1, create **track 2**, for the improved takes. Pan the audio of track 1 to the left stereo channel, and that of track 2 to the right.
- Listen to track 1 alone to find the audio fragments; then locate them in the phone tracks. Once the fragment with good audio is found, move it up to track 2.
- **Synchronization**: thanks to the stereo channel separation, it is fairly easy to carefully shift the fragments of track 2 until they are in sync with track 1. When this is achieved, cut the replaced fragment of track 1 and send it to minus infinity so it does not sound (it can also be deleted, but this way no information is lost).
- Repeat for all the fragments. When finished, apply standard inserts to tracks 1 and 2 (compression, noise gate, equalization, etc.), put both back in the neutral stereo position, and export the mix of these two channels.
- Finally, use the video editor to replace the audio entirely with the new soundtrack. Music may be added now, if it was not added before.


- LaTeX must start as “dollar sign latex” … “dollar”
- Links to local files (such as pictures) don’t work
- Lists (such as this one) do not seem to work well

Very often, one starts from the known PDEs, for example:

These are *discretized*: the derivatives are replaced by differences.

However, this is a round trip, because the PDEs themselves are derived at the discrete level.

Assume the field changes only along one direction.

The change in the total quantity will then be:

Earlier, the fluxes through the faces were given by:

In a nutshell:

- The standard approach involves Fourier techniques, involving (of course) complex numbers
- The real part of these numbers is analysed, with some trigonometric expression resulting, identifying the troublesome modes
- I claim this mode can be identified in advance, which makes the whole Fourier procedure unnecessary

BTW: it’s pronounced “fon no ee man”

Starting from the diffusion equation (aka heat equation):

$\frac{\partial \phi}{\partial t} = D \frac{\partial^2 \phi}{\partial x^2} ,$

a centered-space, forward-time discretization yields an explicit scheme:

$\mathrm{(1)} \qquad \phi_j^{n+1} = \phi_j^{n} + \alpha \left( \phi_{j+1}^{n} - 2 \phi_j^{n} + \phi_{j-1}^{n} \right) ,$

where

$\alpha = \frac{D \, \Delta t}{\Delta x^2}$

is a diffusion Courant number. The question is what value of this number is acceptable for the simulation to remain stable. In more practical terms: for a given spatial resolution, what time step is acceptable? The shorter, the more accurate (in principle, barring issues such as roundoff errors), but of course we would rather have our results in minutes than in hours or days.
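Before any Fourier machinery, one can simply march the explicit scheme numerically and watch the staggered (worst) mode evolve. This is a sketch with periodic boundaries; the values α = 0.2 and α = 0.6 are arbitrary choices on either side of the stability limit derived below.

```python
import numpy as np

def step(phi, alpha):
    # forward-time, centered-space update with periodic boundaries
    return phi + alpha * (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1))

# staggered initial condition: +1, -1, +1, ... (wavelength 2*dx)
phi0 = np.array([(-1.0) ** j for j in range(64)])

amps = {}
for alpha in (0.2, 0.6):
    phi = phi0.copy()
    for _ in range(50):
        phi = step(phi, alpha)
    amps[alpha] = np.abs(phi).max()

print(amps[0.2])  # tiny: the mode has decayed (|1 - 4*0.2| = 0.2 per step)
print(amps[0.6])  # huge: the mode has blown up (|1 - 4*0.6| = 1.4 per step)
```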

Now, the usual discussion centers on the growth of the error in the solution. Let us just suppose that the solution has a sine-wave shape:

$\phi_j^n = A^n e^{i k x_j} ,$

i.e.

$\phi_j^n = A^n e^{i k j \Delta x} .$

Then the discretized equation (1) becomes

$A^{n+1} = A^{n} \left[ 1 + \alpha \left( e^{i k \Delta x} - 2 + e^{- i k \Delta x} \right) \right] .$

Now, the usual discussion relates the part with the complex exponentials to trigonometric functions:

$e^{i k \Delta x} - 2 + e^{- i k \Delta x} = 2 \cos( k \Delta x ) - 2 = - 4 \sin^2 ( k \Delta x / 2 ) .$

Now this means

$A^{n+1} = A^{n} \left[ 1 - 4 \alpha \sin^2 ( k \Delta x / 2 ) \right] .$

Any law of the form

$A^{n+1} = C A^{n}$

means that:

- the mode will decay monotonically if 0 < real(*C*) < 1 (*C* is in general a complex number, though not in this case)
- the mode will decay in an oscillatory manner if −1 < real(*C*) < 0. In other words, the scheme is stable if |*C*| < 1
- if |*C*| > 1 the scheme will fail, because that mode will grow and grow, which in principle it must not (see below for Cahn-Hilliard)
- a possible imaginary part is not so important, it being a mere change of phase of the mode

This applies to every mode (i.e. every value of *k*), but our main offender is seen to be the one whose real(*C*) is lowest: the one for which the argument of the squared sine is π/2 (for the time being, we’ll pretend this does not mean anything special). For this mode, *C* = 1 − 4α, and:

Hence, we finally arrive at our result:

- α must be below 1/4 for a monotonic decrease of the worst mode; it suffices that it be below 1/2 for stability

Now, seriously, the worst mode is the one for which

$\frac{k \Delta x}{2} = \frac{\pi}{2} , \qquad \text{i.e.} \qquad k = \frac{\pi}{\Delta x} .$

The wavelength of this mode is λ = 2Δx. This is precisely the shortest-wavelength mode allowed by our spatial resolution (that is about all we need to know of Fourier analysis).

A cosine function centered at *j* and with this wavelength takes values +1 at *j*, −1 at *j*+1, +1 at *j*+2, *et cetera*. It turns out, then, that we did not need such a detour. Back to Eq. (1),

It seems clear all along that we would get the worst possible case if the values of the field Φ are staggered, producing a −1 −2 −1 = −4 in the parenthesis. Then:

$\phi_j^{n+1} = \phi_j^{n} - 4 \alpha \, \phi_j^{n} ,$

and in this case, writing $\phi_j^n = A^n (-1)^j$,

$A^{n + 1} = A^{n} \left[ 1 - 4 \alpha \right] .$

Which recovers the result above in two lines!

This may be extended very simply to other schemes. For example, an implicit scheme ends up in something like this (for the worst mode):

$A^{n+1} = A^{n} - 4 \alpha A^{n+1} .$

So *C* = 1/(1 + 4α), which is always between 0 and 1. The method is then unconditionally stable.

A Crank-Nicolson method yields

$A^{n+1} = A^{n} - 2 \alpha \left( A^{n} + A^{n+1} \right) ,$

so that *C* = (1 − 2α)/(1 + 2α), which lies between −1 and 1 for every α: this scheme is also unconditionally stable (and the worst mode decays monotonically if α stays below 1/2).

Let us apply this to the more complicated Cahn-Hilliard equation:

$\frac{\partial \phi}{\partial t} = \nabla^2 \left( \phi^3 - \phi - \gamma \nabla^2 \phi \right) .$

We will suppose that γ = 1/2 (it can be shown that this happens if the spatial length is set equal to the interfacial length at equilibrium).

Now, a tricky thing about this equation is that it describes segregation and domain formation. So the φ = 0 field is actually **unstable** against fluctuations. The system should begin to form structures with values of φ departing from 0, with a well-defined wavelength (see Note below). [That’s the featured image, by the way. It started at values of 0.01, and see what it has done in just about 500 reduced time units.]

It makes little sense to use our stability analysis in this case, then.

We have to go close to the equilibrium regime, when φ ~ 1 (there is also φ ~ -1, what follows applies in the same way).

Let us write

$\phi = 1 + \psi .$

The small field is now $\psi$, and our equation can be approximated by

$\frac{\partial \psi}{\partial t} = 2 \nabla^2 \psi - \gamma \nabla^4 \psi .$

Now, a standard explicit scheme would yield:

$\psi_j^{n+1} = \psi_j^{n} + \frac{2 \Delta t}{\Delta x^2} \left( \psi_{j+1}^{n} - 2 \psi_j^{n} + \psi_{j-1}^{n} \right) - \frac{\gamma \Delta t}{\Delta x^4} \left( \psi_{j+2}^{n} - 4 \psi_{j+1}^{n} + 6 \psi_j^{n} - 4 \psi_{j-1}^{n} + \psi_{j-2}^{n} \right) .$

(There’s a handy finite-differences calculator for higher-order derivatives.) In this expression, α is as above, and

$\beta = \frac{\Delta t}{\Delta x^4} .$

Now, the worst case is exactly as in diffusion, staggered values, and in this case (using γ = 1/2),

$A^{n+1} = A^{n} \left[ 1 - 8 \alpha - 8 \beta \right] .$

Hence, the scheme is stable if 1 − 8α − 8β > −1, i.e. α + β < 1/4.
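The criterion can be restated numerically; the amplification factor below is taken directly from the inequality above, and the two (α, β) pairs are arbitrary examples on either side of α + β = 1/4.

```python
def worst_mode_factor(alpha, beta):
    # amplification of the staggered mode for the explicit linearized scheme
    return 1.0 - 8.0 * alpha - 8.0 * beta

stable = worst_mode_factor(0.1, 0.1)    # alpha + beta = 0.2 < 1/4
unstable = worst_mode_factor(0.2, 0.2)  # alpha + beta = 0.4 > 1/4
print(stable, abs(stable) < 1)      # -0.6 True  -> decays (with oscillation)
print(unstable, abs(unstable) < 1)  # -2.2 False -> grows without bound
```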

An implicit scheme leads to

$A^{n+1} = \frac{A^{n}}{1 + 8 \alpha + 8 \beta} ,$

which is unconditionally stable.

A mixed scheme, such as the one that I am forced to work with (long story…), produces

hence the criterion for stability is

This means a typical, easy choice of Δ*t* = Δ*x* , in which α = β = 1, would be stable.

Actually (… but this gets progressively less interesting for the general public), my approach involves an explicit cubic term, for which we may write, close to saturation,

which leads to

hence the criterion for stability is

This means the choice Δ*t* = Δ*x* , in which α = β = 1, would **not** be stable!

If φ ≪ 1, the cubed term is negligible. Then

$\frac{\partial \phi}{\partial t} = - \nabla^2 \phi - \gamma \nabla^4 \phi ,$

which is linear. Going to Fourier space,

$\omega(k) = k^2 - \gamma k^4 = k^2 - \frac{k^4}{2} .$

Hence, all modes between *k* = 0 and √2 will grow, with the one at *k* = 1 growing the most. This is a wavelength of 2π, in units of the interfacial length.

Let us define the DFT as having *k* coefficient

$F_k = \sum_{j=0}^{N-1} f_j \, e^{2 \pi i \, s \, j k / N} ;$

here, *s* is a sign, either +1 or −1. In fact, *s* = −1 for the usual definition of the DFT (but this is OK, as we will see later).

In accordance with its computing origin, the DFT carries no physical information about the period of the signal, and just focuses on the length of the data vector, *N* (aka DOF, degrees of freedom).

The definition of a Fourier series usually starts by introducing the **inverse** Fourier transform:

$f(x) = \sum_m c_m \, e^{i k_m x} ,$

where the values of the wave vectors are given by

$k_m = \frac{2 \pi m}{L} ,$

where *L* is the period of the signal (we are thinking of *x* as a spatial coordinate and of *L* as the length of the system, but of course everything holds for time signals too).

Now, if we sample the function at equal intervals,

$f_j = f(x_j), \qquad x_j = j \, \Delta x ,$

where the spacing is Δx = L/N, we may write

$f_j = \sum_m c_m \, e^{i k_m j \Delta x} ;$

since k_m j Δx = 2π j m / N, we find the direct correspondence

$f_j = \mathrm{DFT}_{+1}[c]_j .$

The coefficients themselves (**direct** transform) are calculated from the integral

$c_m = \frac{1}{L} \int_0^L f(x) \, e^{- i k_m x} \, dx .$

(The derivation is very simple: just integrate the Fourier series times $e^{-i k_m x}$ and use the orthogonality of the complex exponential functions.)

In the discrete world, integrals have to be approximated. If we use the simple rectangle method,

$c_m \approx \frac{1}{L} \sum_{j=0}^{N-1} f_j \, e^{- i k_m x_j} \, \Delta x .$

Recalling Δx = L/N,

$c_m = \frac{1}{N} \, \mathrm{DFT}_{-1}[f]_m .$

Everything fits together, since $\frac{1}{N} \mathrm{DFT}_{-1}$ is actually the inverse of $\mathrm{DFT}_{+1}$. (That *N* factor can lead to trouble if forgotten!)

Notice the signs match for the DFT and the FS, and that “the” DFT is defined with a minus sign, which corresponds to our direct FS.

Also, notice that the periodic length does not actually enter any of these equations, not even in the continuum, because the integral may change its integration variable to x/L, and all the *L* factors drop!

Let us consider a constant function, f(x) = A. Its Fourier coefficients are then

$c_m = \frac{A}{L} \int_0^L e^{- i k_m x} \, dx = A \, \delta_{m,0} ,$

i.e. all null except for $c_0 = A$. Using the Fourier series, we readily obtain back f(x) = A.

Now, on the discrete level, the coefficients are given by

$c_m = \frac{1}{N} \sum_{j=0}^{N-1} A \, e^{- 2 \pi i j m / N} = A \, \delta_{m,0}$

(zero for *m* not equal to 0), and the DFT would be

$\mathrm{DFT}_{-1}[f]_m = N A \, \delta_{m,0} .$
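This bookkeeping can be checked against numpy's FFT, which implements the usual s = −1 convention; N and A below are arbitrary test values.

```python
import numpy as np

N, A = 8, 3.0
f = np.full(N, A)   # constant signal, f_j = A

F = np.fft.fft(f)   # numpy's DFT (minus-sign convention)
print(F[0])                 # N * A = 24 (+0j): the m = 0 coefficient
print(np.abs(F[1:]).max())  # ~ 0: all other coefficients vanish

c0 = F[0].real / N  # dividing by N recovers the Fourier-series coefficient
print(c0)           # 3.0 = A
```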

A nice feature of Fourier series is the ease of treating derivatives (that, and convolution, of course). Indeed,

$f'(x) = \sum_m i k_m c_m \, e^{i k_m x} ,$

which means that the Fourier coefficients of the derivative are

$c'_m = i k_m c_m .$

Now, k_m = 2πm/L, and the *L* does **not** drop here. This is important, since physically the “same” sine modulation has a different derivative depending on *L*!
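A standard application of this property is spectral differentiation. The sketch below differentiates a single sine mode with numpy's FFT and compares with the exact result; note how the period L enters only through the wavenumbers k_m = 2πm/L.

```python
import numpy as np

L, N = 2.0, 64                    # period and number of samples
x = np.arange(N) * (L / N)        # sampling points
f = np.sin(2 * np.pi * x / L)     # one full wave over the period

# wavenumbers k_m = 2*pi*m/L, in FFT ordering
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
df = np.fft.ifft(1j * k * np.fft.fft(f)).real  # coefficients times i*k_m

exact = (2 * np.pi / L) * np.cos(2 * np.pi * x / L)
err = np.abs(df - exact).max()
print(err)  # ~ 1e-15: exact to roundoff for a resolved mode
```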

Our inverse Fourier series may be interpreted as a discretized integral, simply by multiplying and dividing by the spacing in Fourier space:

$f(x) = \sum_m \frac{c_m}{\Delta k} \, e^{i k_m x} \, \Delta k \approx \int \frac{c(k)}{\Delta k} \, e^{i k x} \, dk .$

Since the spacing is

$\Delta k = \frac{2 \pi}{L} ,$

a quick comparison with the usual definition of the Fourier transform,

$f(x) = \frac{1}{2 \pi} \int \hat f(k) \, e^{i k x} \, dk ,$

reveals that the Fourier transform is related to the (analytic continuation of the) Fourier components thus:

$\hat f(k_m) = L \, c_m .$

(We have called $\hat f$ the Fourier transform, to distinguish it from the coefficients.) This fact is also acknowledged in the Wikipedia article.

Notice the Fourier coefficients have the same physical dimensions as the original function in real space, but the Fourier transform has an extra multiplicative length dimension (an area or volume in higher dimensions).


When *a* becomes negative, the minimum of *g* moves from 0 to other values, $\pm \phi_0$. The function then has the celebrated double-minimum shape, which features prominently in many symmetry-breaking theories, including of course the appearance of particle mass and, you know, the Big Bang and the Universe.

But here we are just considering phase separation in materials. The interface between two coexisting phases must have some associated cost, and the simplest (lowest-order) way to include it is by introducing a total free energy functional

$F[\phi] = \int \left[ g(\phi) + \frac{c}{2} \left( \nabla \phi \right)^2 \right] dV , \qquad g(\phi) = \frac{a}{2} \phi^2 + \frac{b}{4} \phi^4 .$

This is also called a Landau-Ginzburg free energy, which also appears in the Ginzburg-Landau theory of superconductors.

Now, parameters *a*, *b*, and *c* are not easy to measure (or even estimate) experimentally, but they are related to the surface tension, the width of the interface, and the magnitude of the bulk equilibrium order parameter (i.e. $\phi_0$). Here I show how to obtain these relations in a slightly more general setting, since I was not able to find them on the internet (they can be found, e.g., in the book by Rowlinson & Widom).

Let us consider a general square-gradient expression,

$F[\phi] = \int \left[ g(\phi) + \frac{c}{2} \left( \nabla \phi \right)^2 \right] dV ,$

with a $g(\phi)$ which need not be the previous one.

The usual Euler equations to find an extremum of the free energy functional are

$\frac{\delta F}{\delta \phi} = g'(\phi) - c \nabla^2 \phi = 0 .$

This translates into a modified diffusion equation:

$c \nabla^2 \phi = g'(\phi) .$

An alternative form of the Euler equations, valid since the integrand does not depend on space explicitly, is given by the Beltrami identity:

$\mathcal{L} - \phi' \, \frac{\partial \mathcal{L}}{\partial \phi'} = \mathrm{const} .$

This leads us to

$g(\phi) - \frac{c}{2} \left( \nabla \phi \right)^2 = G ,$

where *G* is a constant that must be determined.

Now, let us consider variations in the *x* direction only. At the far left the order parameter has the value $-\phi_0$, and at the right, $+\phi_0$. It follows that the space derivative of the order parameter must be zero at these two extremes. This identifies the constant *G* as $g(\pm\phi_0) \equiv g_0$. In other words,

$\mathrm{(1)} \qquad \frac{c}{2} \left( \nabla \phi \right)^2 = \Delta g ,$

where $\Delta g = g - g_0$, the **excess** free energy (we keep using the nabla symbol, but of course it just means a derivative w.r.t. *x*).

If we define an excess free energy functional,

$\Delta F[\phi] = \int \left[ \Delta g(\phi) + \frac{c}{2} \left( \nabla \phi \right)^2 \right] dV ,$

for the equilibrium profile,

$\Delta F = A \int \left[ \Delta g + \frac{c}{2} \left( \nabla \phi \right)^2 \right] dx .$

By definition, the excess free energy of an interface is its area *A* times its surface tension. Therefore:

$\sigma = \int \left[ \Delta g + \frac{c}{2} \left( \nabla \phi \right)^2 \right] dx ,$

when the equilibrium profile is plugged into it.

Again, instead of solving these head on, our previous result (1) yields

$\frac{c}{2} \left( \nabla \phi \right)^2 = \Delta g ,$

so that the two terms in σ are exactly equal! This permits writing

$\sigma = \int 2 \, \Delta g \; dx ,$

or also

$\sigma = \int c \left( \frac{d\phi}{dx} \right)^2 dx = \int c \, \frac{d\phi}{dx} \; d\phi .$

Now, the latter integral really means a change of variables! Since $d\phi/dx = \sqrt{2 \Delta g / c}$, we may therefore write

$\mathrm{(2)} \qquad \sigma = \sqrt{2 c} \int_{-\phi_0}^{\phi_0} \sqrt{\Delta g} \; d\phi .$

This is a very remarkable expression: it states that the surface tension is the area below the square root of the excess free energy function, between the two minima. See the Figure for a plot for the GL double well, and an interesting numerical value which will serve us later on.

Notice this form of the surface tension completely circumvents the expression of the profile. A way to obtain the latter, alternative to solving the diffusion equation, is to use (1) again, to write

$\mathrm{(3)} \qquad x - x_0 = \sqrt{\frac{c}{2}} \int_{\phi(x_0)}^{\phi(x)} \frac{d\phi}{\sqrt{\Delta g}} .$

In the latter, the value of the order parameter must be known at some position $x_0$, i.e. $\phi(x_0)$. This permits the calculation of the profile by inversion of the resulting $x(\phi)$.

Equations (2) and (3) may be applied to any square-gradient expression, not just the simple GL double well (for example, they can be applied to van der Waals’ most famous expression for liquid-vapour equilibrium).

Here, we compute these expressions for the simple double-well potential. Let us write it again:

$g(\phi) = \frac{a}{2} \phi^2 + \frac{b}{4} \phi^4 .$

We will only consider the case in which there is phase separation, and *a* is negative. In what follows, we will just write *a* for its absolute value.

Now, we define a normalized order parameter $x$ such that:

$\Delta g(\phi) = A \, \Delta \tilde g(x) , \qquad \phi = B x .$

The idea is that $\Delta \tilde g$ contains **no** physical parameters, which are all absorbed in *A* and *B*. Equating the two expressions,

$\frac{b}{4} \left( \phi^2 - \phi_0^2 \right)^2 = A \left( 1 - x^2 \right)^2 ,$

we find

$A = \frac{a^2}{4 b} , \qquad B = \phi_0 = \sqrt{\frac{a}{b}} .$

The usefulness of this transformation is more apparent when we use it in the expression for the surface tension. Indeed,

$\sigma = \sqrt{2 c A} \; B \int_{-1}^{1} \sqrt{\Delta \tilde g} \; dx .$

The $\sqrt{A}$ appears from the overall prefactor in the energy, while *B* comes from the change of integration variable.

The last integral contains no parameters whatsoever! We may predict now

$\sigma = n \, \frac{\sqrt{c} \; a^{3/2}}{b} ,$

an expression perhaps more complicated than might have been expected. In it, *n* is some dimensionless number, very likely not too large or small. By the way, since *a* is supposed to be proportional to the distance to the critical temperature close to the critical point, this predicts a classical critical exponent of 3/2 for the surface tension. In fact, the minimum stands at $\phi_0 = \sqrt{a/b}$. Recalling *B*, the extremely famous critical exponent of 1/2 is predicted for the order parameter.

The excess is given by $\Delta g = g - g_0$. The latter may be written as

$\Delta g = A \left( 1 - x^2 \right)^2 ,$

an expression in which the double-well feature is quite prominent.

The integral of $(1 - x^2)$ is computed in the figure: 4/3 (not a hard one, since the square root cancels the power of two!). Finally,

$\sigma = \frac{4}{3} \sqrt{2 c A} \; B = \frac{2 \sqrt{2}}{3} \, \frac{\sqrt{c} \; a^{3/2}}{b} ,$

i.e. n = 2√2/3 ≈ 0.94.
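That 4/3 is immediate from the antiderivative, as a one-line check:

```python
# antiderivative of (1 - x^2) is x - x^3/3; evaluate between -1 and 1
F = lambda x: x - x**3 / 3.0
integral = F(1.0) - F(-1.0)
print(integral)  # 1.3333... = 4/3
```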

Now, for the profiles. If we include the square-gradient term, we may define

$L = 2 \sqrt{\frac{c}{g''(\phi_0)}} .$

The idea here is to capture the typical length scale *L* of the interface, since the spatial derivative may then be cast as $\sim 1/L$ (a factor of 2 is introduced in the definition purely for convenience). Therefore:

$L = \sqrt{\frac{2 c}{a}} ,$

which does not feature *b*, and predicts a diverging interfacial width at the critical point, with a critical exponent of −1/2.

Going back to Eq. (3), we have

$x - x_0 = \sqrt{\frac{c}{2 A}} \; B \int \frac{du}{1 - u^2} ,$

again with *A* appearing because of the global prefactor, and *B* from the change of integration variable. In terms of *L*:

$x - x_0 = L \int \frac{du}{1 - u^2} ,$

which makes clear how the length scale is given by *L*. Now, let us define the van der Waals dividing surface as the point at which the order parameter takes the value of zero, and let us place that surface at the origin. Then,

$x = L \int_0^{u} \frac{du'}{1 - u'^2} .$

Now,

$\int_0^{u} \frac{du'}{1 - u'^2} = \operatorname{artanh}(u) .$

This function is precisely the inverse of the hyperbolic tangent! Therefore, we may invert to get

$\phi(x) = \phi_0 \tanh\!\left( \frac{x}{L} \right) .$

OK, imagine we are given the value of the surface tension, the bulk concentration, and the interfacial length. We may write the surface tension as

$\sigma = \frac{2}{3} \, e \, L ,$

with the energy density $e = a \phi_0^2$. Therefore,

$a = \frac{3 \sigma}{2 L \phi_0^2} .$

From the value of the bulk concentration, we find

$b = \frac{a}{\phi_0^2} .$

Finally, from the interfacial length we find for *c*

$c = \frac{a L^2}{2} .$
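The three inversions can be packaged into a few lines. This is a sketch assuming the relations σ = (2/3) a φ₀² L, φ₀ = √(a/b), and L = √(2c/a) used in this post; the input numbers are placeholder values (order one, in reduced units), not the figures quoted from Camley and Brown below.

```python
def gl_parameters(sigma, phi0, L):
    """Landau-Ginzburg a, b, c from surface tension, bulk order
    parameter, and interfacial length."""
    a = 3.0 * sigma / (2.0 * L * phi0**2)  # from sigma = (2/3) a phi0^2 L
    b = a / phi0**2                        # from phi0 = sqrt(a/b)
    c = a * L**2 / 2.0                     # from L = sqrt(2c/a)
    return a, b, c

# placeholder inputs in reduced units (NOT the literature values)
a, b, c = gl_parameters(sigma=1.0, phi0=1.0, L=1.0)
print(a, b, c)  # 1.5 1.5 0.75
```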

For example, Camley and Brown, J. Chem. Phys. 135, 225106 (2011), in a study of 2D hydrodynamics, use pN (units of force, because this is actually a line tension in 2D), and an interfacial width of nm.

With these numbers, and an order parameter with a value of 1, we would have

A bit crazy in these units, but in more microscopic ones they seem more sensible:
