I found a very clear review of existing alternatives at the Font usage post, by Ryosuke Iritani (入谷 亮介). I have taken his suggestions and created a gallery, with a simple sample of text and equations.

\usepackage[sc]{mathpazo}
\linespread{1.05} % Palladio needs more leading (space between lines)
\usepackage[T1]{fontenc}

\usepackage{kpfonts}

Used e.g. by Wikipedia for sectioning

\usepackage{libertine} \usepackage{libertinust1math} \usepackage[T1]{fontenc}

Scientific and Technical Information Exchange (STIX); Times-based, but much more elegant than the txfonts package.

\usepackage[T1]{fontenc} \usepackage{stix}

It’s a bit thin and less friendly

\usepackage[urw-garamond]{mathdesign} \usepackage[T1]{fontenc}

\usepackage[adobe-utopia]{mathdesign} \usepackage[T1]{fontenc}

\usepackage[charter]{mathdesign}

\usepackage[T1]{fontenc} \usepackage{cochineal} \usepackage[cochineal,varg]{newtxmath}

Baskerville-based, thicker font

\usepackage[lf]{Baskervaldx} % lining figures
\usepackage[bigdelims,vvarbb]{newtxmath} % math italic letters from Nimbus Roman
\usepackage[cal=boondoxo]{mathalfa} % mathcal from STIX, unslanted a bit
\renewcommand*\oldstylenums[1]{\textosf{#1}}

So far, the only font not included in Iritani’s Font usage post!

\usepackage{helvet}
\usepackage{sansmath}
\usepackage{titlesec} % this enforces Helvetica in section and chapter titles
\titleformat{\chapter}[display]
  {\normalfont\sffamily\huge\bfseries}
  {\chaptertitlename\ \thechapter}{20pt}{\Huge}
\titleformat{\section}
  {\normalfont\sffamily\Large\bfseries}
  {\thesection}{1em}{}
% In main text, at the beginning: \fontfamily{phv}\selectfont
% Before the first equation: \sansmath

All the above was produced with variations of this file. I just run latex on it, then dvips to get a PS file, which I then crop and export as PNG using GIMP. Of course, depending on the system, some LaTeX packages may be needed, as well as fonts (I had to install urw-garamond on my Arch Linux system, for example).

\documentclass{article}
\newcommand{\bfr}{\mathbf{r}}
\newcommand{\bfu}{\mathbf{u}}
\newcommand{\bfq}{\mathbf{q}}
\usepackage{amsmath}
\usepackage{libertine}
\usepackage{libertinust1math}
\usepackage[T1]{fontenc}
\usepackage{lipsum}% for filler text
\begin{document}
\section{A section}
\lipsum[10]
Equations:
\begin{equation}
\frac{d \mathbf{u}}{d t} = - \nabla p + \nu \nabla^2 \mathbf{u},
\end{equation}
\begin{equation}
\begin{split}
E &= m c^{2},\\
T &= 2\pi \sqrt{\frac{m}{k}}
\end{split}
\end{equation}
\begin{equation}
\iint \phi = - \oint p
\end{equation}
\end{document}

With python, start with

`ipython --pylab`

Then, read the data

In [4]: dt = loadtxt( '1/mesh.dat' )

In [5]: shape( dt )

Out[5]: (1024, 27)

Notice the last command tells us we have 1024 data points, and 27 fields (well, 25 + positions). For convenience, assign columns to arrays:

In [6]: x=dt[:,0]; y=dt[:,1]; al=dt[:,4]

Now x and y are positions, and “al” is the scalar field for the fifth column (number 4, since counters start at 0 in python).

To visualize the positions,

In [7]: scatter( x , y )

A scalar field may be visualized with a color map:

In [9]: scatter( x , y , c = al )

The “c=” means the color is taken from field al. One may fiddle with colormaps and symbol sizes:

In [9]: scatter( x , y , c = al , cmap= plt.cm.Blues, s=8 )

To know the range we are plotting, produce a color bar:

In [19]: colorbar()

Remember each plot is overlaid on the previous one, so it is necessary to blank the figure from time to time:

In [11]: clf()

For vector fields, assign coordinates to two separate arrays:

In [20]: vx=dt[:,8]; vy=dt[:,9]

Then, use “quiver” to get a vector plot:

In [22]: quiver( x, y, vx , vy )
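The whole session can be collected into a standalone script; a minimal sketch, with the data file name and column layout assumed from the session above (here replaced by synthetic data so it runs anywhere):

```python
# Standalone version of the interactive session; mesh.dat and its column
# layout (x, y in columns 0-1, a scalar in 4, a vector in 8-9) are
# assumptions taken from the session above.
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen, no display needed
import matplotlib.pyplot as plt

# Synthetic stand-in for: dt = np.loadtxt('1/mesh.dat')
rng = np.random.default_rng(0)
dt = rng.random((1024, 27))

x, y, al = dt[:, 0], dt[:, 1], dt[:, 4]   # positions and a scalar field
vx, vy = dt[:, 8], dt[:, 9]               # a vector field

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
sc = ax1.scatter(x, y, c=al, cmap=plt.cm.Blues, s=8)
fig.colorbar(sc, ax=ax1)                  # shows the range of al
ax2.quiver(x, y, vx, vy)                  # arrow plot of the vector field
fig.savefig("fields.png")
```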


- The movie is edited in the usual way (selecting scenes, transitions, cutting, etc.). (We still use Windows Movie Maker, although there are interesting options such as OpenShot.) It is better not to add music at this point, although nothing bad happens if you do.
- Once the editing is finished, the entire soundtrack is extracted and saved (preferably as ogg, or as good-quality mp3). This step is simple; VLC player itself can do it.
- The soundtrack is imported into a DAW-type audio editing program (I use Cubase 5). This is **track 1**. (In principle, a DAW can import the audio of a video directly, but Cubase 5 is somewhat behind on codecs.) About DAWs, by the way: several free ones are appearing (at least, for few projects) that are online (BandLab, Soundtrap); they may be a good alternative to traditional DAWs.
- In other tracks, the files from the phone are imported. (Again, VLC was needed to convert the modern audio format, mp4, to ogg that Cubase 5 could read; this must be done one file at a time, which is quite tedious. It would be worth investigating whether e.g. Audacity handles this better.)
- Below track 1 we create **track 2**, for the improved audio. We pan the audio of track 1 to the left stereo channel, and that of track 2 to the right.
- We listen to track 1 alone to find the audio fragments; then we locate them in the phone tracks. Once we have found the fragment with good audio, we move it up to track 2.
- **Synchronization**: thanks to the stereo channel separation, it is quite easy to carefully shift the fragments of track 2 until they are in sync with track 1. When we manage it, we cut the fragment of track 1 that we have replaced and send it to minus infinity so it does not sound (it can also be deleted, but this way no information is lost).
- Repeat with all the fragments. When finished, standard inserts are applied to tracks 1 and 2 (compression, noise gate, equalization, etc.), both are placed in neutral stereo position, and the mix of these two channels is exported.
- Finally, with the video editing program the audio is completely replaced with the new soundtrack. Music may be added now if it was not added before.


- LaTeX must start as “dollar sign latex” … “dollar”
- Links to local files (such as pictures) don’t work
- Lists (such as this one) do not seem to work well

Very often, one starts from the PDEs, which are known; for example:

These are *discretized*: the derivatives are replaced by differences.

However, this is a round trip, because the PDEs are themselves derived at the discrete level.

Changes of a field are assumed along only one direction:

The change in the total quantity will be:

Earlier, the fluxes through the faces were given by:

In a nutshell:

- The standard approach involves Fourier techniques, involving (of course) complex numbers
- The real part of these numbers is analysed, with some trigonometric expression resulting, identifying the troublesome modes
- I claim this mode can be identified in advance, which makes the whole Fourier procedure unnecessary

BTW: it’s pronounced “fon no ee man”

Starting from the diffusion equation (aka heat equation):

A centered-space, forward-time discretization yields an explicit scheme:

where

is a diffusion Courant number. The question is what value of this number is acceptable for the simulation to remain stable. In more practical terms, for a given spatial resolution, we are asking what time step is acceptable. The shorter the time step, the more accurate the simulation (in principle, barring things such as roundoff errors), but of course we’d rather have our results in minutes rather than hours or days.
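Written out in standard FTCS form (a sketch, consistent with the worst-mode algebra below), the scheme and the Courant number read:

$latex \mathrm{(1)} \qquad \phi_j^{n+1} = \phi_j^{n} + \alpha \left( \phi_{j+1}^{n} - 2 \phi_j^{n} + \phi_{j-1}^{n} \right), \qquad \alpha = \frac{D \, \Delta t}{\Delta x^{2}}.$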

Now, the usual discussion centers around the growth of the error in the solution. Let’s just suppose that the solution has a sine wave shape:

i.e.

Then the discretized equation (1) becomes

Now, the usual discussion relates the part with the complex exponentials to trigonometrical functions:

Now this means

Any law of the form

means that:

- the mode will decay monotonically if 0 < real(*C*) < 1 (*C* is a complex number! Well, not in this case, but it might be, in general)
- the mode will decay in an oscillatory manner if −1 < real(*C*) < 0. In other words, the scheme is stable if |*C*| < 1
- if |*C*| > 1 the scheme will fail, because that mode will grow and grow, which it must not in principle (see below for Cahn-Hilliard)
- a possible imaginary part is not so important, it being a change of phase of the mode

This applies to every mode (i.e. value of *k*), but our main offender is seen to be the one whose real(*C*) is the lowest: the one for which the argument of the squared sine is π/2 (for the time being, we’ll pretend this does not mean anything special). For this mode, *C* = 1 − 4α, and:

Hence, we finally arrive at our result:

- α must be below 1/4 for a monotonic decrease of the worst modes; it suffices that it be below 1/2 for stability

Now, seriously, the worst mode is the one for which

The wavelength of this mode is λ = 2 Δx. This is precisely the shortest wavelength mode allowed by our spatial resolution (that’s about everything we need to know of Fourier analysis).

This looks like a cosine function centered at *j*, which with this wavelength takes values +1 at *j*, −1 at *j*+1, +1 at *j*+2, *et caetera*. It turns out, then, that we did not need such a detour. Back to Eq. (1),

It seems clear all along that we would get the worst possible case if the values of the field Φ are staggered, producing a (−1 −2 −1) = −4 in the parenthesis. Then:

and in this case

$latex A^{n + 1} = A^{n} [ 1 - 4 \alpha ] .$

Which recovers the result above in two lines!
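This shortcut is easy to check numerically; a minimal sketch (grid size and step counts are arbitrary choices):

```python
# Numerical check of the FTCS stability bound: evolve the staggered
# (lambda = 2*dx) worst mode and watch its amplitude.
import numpy as np

def ftcs_amplitude(alpha, steps=200, n=64):
    """Run FTCS diffusion on the staggered mode; return final max |phi|."""
    phi = (-1.0) ** np.arange(n)    # +1, -1, +1, ... : the worst mode
    for _ in range(steps):
        lap = np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)  # periodic Laplacian
        phi = phi + alpha * lap     # phi -> phi * (1 - 4*alpha) each step
    return np.abs(phi).max()

print(ftcs_amplitude(0.45))   # alpha below 1/2: the mode decays
print(ftcs_amplitude(0.55))   # alpha above 1/2: the mode blows up
```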

This may be extended very simply to other schemes. For example, an implicit scheme ends up in something like this (for the worst mode):

So, *C*=1/(1 + 4α ), which is always between 1 and 0. The method is then unconditionally stable.
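In the same worst-mode shorthand, the implicit update gives this amplification factor directly:

$latex A^{n+1} = A^{n} - 4 \alpha A^{n+1} \quad \Longrightarrow \quad A^{n+1} = \frac{A^{n}}{1 + 4 \alpha}.$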

A Crank-Nicolson method yields

So, for this scheme, α must be above 1.

Let’s apply this to the more complicated Cahn-Hilliard equation

We’ll suppose that γ = 1 / 2 (it can be shown that this happens if the spatial length is set equal to the interfacial length at equilibrium).

Now, a tricky thing about this equation is that it describes segregation, and domain formation. So, the φ = 0 field is actually **unstable** against fluctuations. The system should begin to form structures with values of φ departing from 0, with a well defined wave-length (see Note below). [That’s the featured image, by the way. It started at values of 0.01, and see what it has done in just about 500 reduced time units.]

It makes little sense to use our stability analysis in this case, then.

We have to go close to the equilibrium regime, when φ ~ 1 (there is also φ ~ -1, what follows applies in the same way).

Let us write

The small field is now

and our equation can be approximated by

Now, a standard explicit scheme would yield:

(There’s a handy finite differences calculator for higher order derivatives). In this expression, α is as above, and

Now, the worst case is exactly as in diffusion, and in this case *C* = 1 − 8α − 8β.

Hence, the scheme is stable if 1 − 8α − 8β > −1, i.e. α + β < 1/4.

An implicit scheme leads to

which is unconditionally stable.

A mixed scheme such as the one that I am forced to work with (long story…) produces

hence the criterion for stability is

This means a typical, easy choice of Δ*t* = Δ*x* , in which α = β = 1, would be stable.

Actually (… but this gets progressively less interesting for the general public), my approach involves an explicit cubed term, for which we may write, close to saturation

which leads to

hence the criterion for stability is

This means the choice Δ*t* = Δ*x* , in which α = β = 1, would **not** be stable!

If φ << 1, the cubed term is negligible. Then

Which is linear. Going to Fourier space,

Hence, all modes between *k* = 0 and √2 will grow, with the one with *k*=1 growing the most. This is a wavelength of 2 π , in units of the interfacial length.
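In symbols, assuming the linearized equation in reduced units (with γ = 1/2 as above), the growth rate of mode *k* is:

$latex \omega(k) = k^{2} - \tfrac{1}{2} k^{4},$

which is positive for 0 < *k* < √2 and maximal at *k* = 1.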

Let us define the DFT as having *k* coefficient

here, *s* is a sign, either +1 or -1. In fact, *s*=-1 for the usual definition of DFT (but this is ok, as we will see later.)

In accordance with its computing origin, the DFT carries no physical information about the period of the signal, and just focuses on the length of the data vector, *N* (aka DOF, degrees of freedom).

The definition of a Fourier Series usually starts by introducing the **inverse** Fourier transform:

where the values of the wave vectors are given by

where *L* is the period of the signal (we are thinking of *x* as a spacial coordinate and *L* as the length of the system, but of course everything holds for time signals).

Now, if we sample the function at equal intervals

where the spacing , we may write

since , we find the direct correspondence

The coefficients themselves (**direct** transform) are calculated from the integral

(The derivation is very simple, just by integrating upon the FS times and using orthogonality of the complex exponential functions.)

In the discrete world integrals have to be approximated. If we use the simple rectangle method,

Recalling ,

Everything together, since is actually the inverse of $\text{DFT}_{+1}$. (That *N* factor can lead to trouble if forgotten!)

Notice the signs match for the DFT and the FS, and that “the” DFT is defined with a minus sign, which corresponds to our direct FS.

Also, notice that the periodic length does not actually enter any of the equations, not even in the continuum, because the integral may change its integration variable, and all the *L* factors drop!

Let us consider a constant function . Its Fourier coefficients are then

i.e. all null except for . Using the Fourier series, we readily obtain back .

Now, on the discrete level, the coefficients are given by

(zero for *m* not equal to 0), and the DFT would be

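This bookkeeping is easy to verify with numpy, whose fft module uses the *s* = −1 convention for the direct transform and divides by *N* in the inverse (a minimal sketch):

```python
# The DFT of a constant function: only the k = 0 coefficient survives,
# and it carries the N factor discussed above.
import numpy as np

N = 8
f = np.full(N, 3.0)        # a constant function, f = 3

F = np.fft.fft(f)          # direct DFT (sign -1 convention)
print(F[0])                # 3 * N = 24, the only nonzero coefficient

# The inverse transform divides by N and restores the constant:
back = np.fft.ifft(F).real
```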

A nice feature of Fourier series is the ease in treating derivatives (that, and convolution, of course). Indeed,

which means that the Fourier coefficients of the derivative are

Now, , and the *L* does **not** drop here. This is important, since physically the “same” sine modulation has different derivative depending on *L*!
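The *L* dependence of the derivative is easy to demonstrate numerically; a sketch with numpy (the period and mode are arbitrary choices):

```python
# Differentiating via Fourier coefficients: each coefficient is multiplied
# by i*k_m with k_m = 2*pi*m/L, so the physical period L matters.
import numpy as np

L_period = 2.0                                 # physical period of the signal
N = 64
x = np.arange(N) * (L_period / N)
f = np.sin(2 * np.pi * x / L_period)           # one full period of a sine

k = 2 * np.pi * np.fft.fftfreq(N, d=L_period / N)   # wave vectors 2*pi*m/L
df = np.fft.ifft(1j * k * np.fft.fft(f)).real       # spectral derivative

exact = (2 * np.pi / L_period) * np.cos(2 * np.pi * x / L_period)
```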

Our inverse Fourier series may be interpreted as a discretized integral simply by multiplying by the spacing in Fourier space:

Since the spacing is

A quick comparison with the usual definition of the Fourier transform reveals that the Fourier transform is related to the (analytic continuation of the) Fourier components thus:

(We have called the Fourier transform to distinguish it from the coefficients.) This fact is also acknowledged in the Wikipedia article.

Notice the Fourier coefficients have the same physical dimensions as the original function in real space, but that the Fourier transform has an extra multiplicative length dimension (area or volume in higher dimensions).

]]>

When *a* becomes negative, the minimum of *g* changes from 0 to other values, . The function then has the celebrated double minimum feature, which features prominently in many symmetry-breaking theories, including of course the appearance of particle mass and, you know, the Big Bang and the Universe.

But here we are just considering phase separation in materials. The interface between two coexisting phases must have some associated cost, and the simplest (lowest order) way to include it is by introducing a total free energy functional

This is also called a London-Ginzburg-Landau free energy, also appearing in the theory of superconductors.

Now, parameters *a*, *b*, and *c* are not easy to measure (or, at least, estimate) experimentally, but they are related to: the surface tension, the width of the interface, and the magnitude of the bulk equilibrium order parameter (i.e. ). Here I show how to obtain it in a slightly more general setting, since I was not able to find it on the internet (it can be found e.g. in the book by Rowlinson & Widom).

Let us consider a general square-gradient expression

with a *g* which need not be the previous one.

The usual Euler equations to find an extremum of the free energy functional are

This translates into a modified diffusion equation:

An alternative form of the Euler equations, since the integrand does not depend on space explicitly, is given by the Beltrami identity:

This leads us to

where *G* is a constant that must be determined. (In fact, this is an alternative Euler expression that applies since the integration variable (the space) does not appear in the integrand of the functional).

Now, let’s consider variations in the *x* direction only. At the far left the order parameter has value , and at the right, . It follows that the space derivative of the order parameter must be zero at these two extremes. This identifies the constant *G* as . In other words,

where , the **excess** free energy (we keep using the nabla symbol, but of course it just means a derivative w.r.t. *x* ).

If we define an excess free energy functional:

for the equilibrium profile,

By definition the excess free energy of an interface is its area *A* times its surface tension. Therefore:

when the equilibrium profile is plugged in it.

Again, instead of solving these head on, our previous result yields

so that the two terms in are exactly equal! This permits writing

or also

Now, the latter integral really means a change of variables! We may therefore write

$latex \mathrm{(2)} \qquad \sigma = \sqrt{ 2 c} \int_{-\phi_0}^{\phi_0} d\phi \sqrt{\Delta g}.$

This is a very remarkable expression, stating that the surface tension is the area below the square root of the excess free energy function between the two minima. See the Figure for a plot for the LGL case, and an interesting numerical value which will serve us later on.

Notice this form of the surface tension completely circumvents the expression of the profile. A way to obtain it, alternative to solving the diffusion equation, is to use (1) again, to write

In the latter, the value of the order parameter must be known at position . I.e. . This permits the calculation of the profile by inversion of the resulting .

Equations (2) and (3) may be applied to any square-gradient expression, not just the simple LGL double well (for example, they can be applied to van der Waals’ most famous expression for liquid-vapour equilibrium).

Here, we compute these expressions for the simple double-well potential. Let us write again:


We will only consider the case in which there is phase separation, and *a* is negative. In what follows, we will just write *a* for its absolute value.

Now, we define a normalized order parameter such that:

The idea is that the normalized order parameter contains **no** physical parameters, which are all absorbed in *A* and *B*. Equating the two equations,

we find

The usefulness of this transformation is more apparent when we use them in the expression for the surface tension. Indeed

The $latex \sqrt{A}$ appears from the overall prefactor in the energy, while comes from the change of integration variable.

The last integral contains no parameters whatsoever! We may predict now

an expression perhaps more complicated than might have been expected. In it, *n* is some dimensionless number, very likely not too large or small. By the way, since *a* is supposed to be proportional to close to the critical point, this predicts a classical critical exponent of 3/2 for the surface tension. In fact, the minimum stands at . Recalling *B*, the extremely famous critical exponent of 1/2 is predicted for the order parameter.

The excess is given by . The latter may be written as

an expression in which the double-well feature is quite prominent.

The integral of $latex (1- x^2)$ is computed in the figure: 4/3 (not a hard one since the square root cancels the power of two!). Finally,
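That dimensionless integral can be double-checked numerically (a trivial sketch):

```python
# Integral of sqrt((1 - x^2)^2) = 1 - x^2 over [-1, 1]: should be 4/3.
import numpy as np

x = np.linspace(-1.0, 1.0, 100001)
y = 1.0 - x**2
val = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))   # trapezoid rule
print(val)   # 1.3333...
```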

Now, for the profiles. If we include the square gradient term we may define

The idea here is to capture the typical length scale *L* of the interface, since the spatial derivative may then be cast as (a factor of 2 is introduced in the definition purely for convenience). Therefore:

which does not feature *b*, and predicts a diverging interfacial spacing at the critical point, with a critical exponent of -1/2.

Going back to Eq. (3) we have

again with *A* appearing because of the global prefactor, and *B* from the change of integration variable. In terms of *L*:

which makes clear how the length scale is given by *L*. Now, let us define the van der Waals dividing surface as the point at which the order parameter takes the value of zero, and let us place that surface at the origin. Then,

Now,

This function is precisely the inverse of the hyperbolic tangent! Therefore we may invert to get
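The resulting profile, with the dividing surface at the origin and *L* the interfacial length defined above, is then (a sketch of the inversion):

$latex \phi(x) = \phi_0 \tanh\!\left( \frac{x}{L} \right).$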

OK, imagine we are given the value of the surface tension, the bulk concentration and the interfacial length. We may write the surface tension as


with the energy density . Therefore,

From the value of the bulk concentration, we find

Finally, from the interfacial length we find for *c*

For example, Camley and Brown, J. Chem. Phys. 135, 225106 (2011), in a study of 2D hydrodynamics, use pN (units of force because this is actually a line tension in 2D), and an interfacial width of nm.

With these numbers, and an order parameter with a value of 1, we would have

A bit crazy in these units, but in more microscopic ones they seem more sensible:


where **u** is the velocity field, *p* is the pressure, ν is the kinematic viscosity, and ρ is the fixed density of the fluid. The time derivative is a total derivative:

It is common to choose parameters that simplify the equations, but that can obscure the role of the different parameters. In the following, I provide expressions with all relevant parameters included, with their physical dimensions. I later pass to dimensionless, or reduced, units, in terms of the Reynolds and Courant numbers.

The solution is a periodic array of vortices that repeats itself in the *x* and *y* directions with a periodic length *L*:

here, , and the function is

so that the decay time of the vortices due to viscosity is given by . The maximum modulus of the velocity field at time zero is .

The pressure field is given by

Hence the vortices go around zones of low pressure, either clockwise or counter-clockwise (pictures will come, eventually.)

Plugging these two fields into the Navier-Stokes equation shows that this is indeed a solution. Interestingly, the pressure gradient term exactly cancels the convective one, while the viscosity term balances the partial time derivative. That means that in the inviscid limit the vortices will never decay.
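For concreteness, here is the standard form of the solution (a sketch; *k* = 2π/*L*, and *f* is the decay function discussed above):

$latex u = u_0 \, f(t) \cos(kx) \sin(ky), \qquad v = -u_0 \, f(t) \sin(kx) \cos(ky),$

$latex p = -\frac{\rho u_0^{2}}{4} f(t)^{2} \left[ \cos(2kx) + \cos(2ky) \right], \qquad f(t) = e^{-2 \nu k^{2} t}.$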

The vorticity field is given by

and the stream function is just . Notice the vorticity satisfies the convection-diffusion equation

Let us introduce the dimensionless time, built from time, maximum initial velocity, and typical length *L* (another choice would be *L*/2, which is the actual length of a vortex)

Notice that this is the time a fluid particle would need to travel a distance *L*.

Function *f* can be written as

where the Reynolds number appears naturally:

The decay time is then seen to be in reduced units.

In a simulation, there is an important dimensionless parameter, the Courant number:

where Δ*t* is the simulation time-step, and Δ*x* the size of the simulation cells (or, the interparticle distance in a particle simulation, aka *h*). If each Cartesian direction is discretized in *n* cells (for a total of *n* × *n* cells), then

where the dimensionless time-step arises naturally. For a series of simulations at fixed Co, the product of *n* and the dimensionless time-step must remain constant. E.g. if we had 16 × 16 bins at some time-step, we should use 32 × 32 at half that time-step. Such a simulation must then take twice the number of time steps in order to reach the same final time, with a system that has four times as many cells as the original one! We can expect this to scale badly, by a factor of eight per resolution doubling, or so. It turns out that this is more like the worst-case scenario, though (I think…).
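The cost argument can be made concrete in a few lines (assuming a 2D grid with explicit stepping; the function and numbers are just for illustration):

```python
# Work needed to reach a fixed final time T at fixed Courant number:
# n^2 cells times ~n steps, i.e. cubic in the linear resolution.
def work(n, T=1.0, co=0.5, u=1.0, L=1.0):
    """Total cell-updates for an n x n grid, keeping Co = u*dt/dx fixed."""
    dx = L / n
    dt = co * dx / u            # fixed Courant number ties dt to dx
    steps = round(T / dt)
    return steps * n * n

print(work(16), work(32))       # doubling n costs 8x the work
```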

The typical instance corresponds to , in which case , and the function *f* takes the simple form . Also, . If the viscosity is also taken to be one, the expressions are even simpler, but then the Reynolds number is only !

*SPH simulations of the Taylor-Green vortex sheet by INSEAN. Recall the time quoted is dimensional. Hence, the onset of instability at about 70 would actually be about 10 in reduced units. The decay time is , which is about 25. Hence the velocity field has decayed by exp(−10/25) at that time, which is about two thirds. Color by vorticity.*

In my simulations, I took (I should have chosen , but…), so then , and , a factor of two that I missed in an early version of a manuscript. I also took , which would give a Reynolds number of 200 (not, as I thought, 100). I have also taken for a number of 2000, as in the simulations by the INSEAN group in Rome.

*pFEM simulations by myself, at the same Re=2000 as those above. Instability sets in at a reduced time of about 15. Later times seem to indicate a vortex “hopping”. However, I have more recent simulations that show no instability at all. Color is by pressure on these simulations (left), and by vorticity, more or less (right)*

If we take one of these vortices as a good model for a storm, we may set a pressure difference of 100 HPa. This would give maximum wind speeds of m/s, or 330 km/h… quite the storm indeed. Anyway, with a storm size of 10 km, the Reynolds number would be a huge 1.6 10^10 (using a dynamic viscosity of 1.5 10^-5 m^2/s for air). That’s about 24 years, so, no, it seems clear storms do not decay due to air viscosity.



The usual computation is as follows:

- The simulation cell is a rectangle, dimensions . I.e. an aspect ratio of 4, and a 2D case.
- The density ratio is (I think) set to 3. I.e. the lighter phase is three times as light as the heavy one. I think the actual values are “1” and “3” in whichever reduced units are used. The Atwood number is therefore 1/2.
- Fluids are initially at rest, so there is no input typical velocity. A natural one would be , similar to the velocity that a fluid would have if it fell a distance *d* under gravity.
- The Reynolds number is therefore fixed as , where (I think) the lighter density and viscosity are used.
- Boundary conditions for the velocity are: no slip at the top and at the bottom, and slip (this is quite important) at the left and right walls. Pressure: zero normal gradient at all walls. There may be a symmetry plane at the *x* = 0 line, in order to avoid half of the domain.
- The initial interface is perturbed by setting it as a cosine shape:
- Lots of details can be found in Guermond & Quartapelle (2000), “A projection FEM for variable density incompressible flows”, Journal of Computational Physics, 165(1), 167-188. It’s easy to get its pdf.

OK, this is all quite doable in OpenFOAM. As explained by Eric Paterson at a workshop at Chalmers, the only trick seems to be to use funkySetFields. This utility is now part of swak4Foam, the Swiss Army Knife for FOAM. Installation for new releases of OpenFOAM may be a bit tricky, but the procedure is well documented.

Here’s the thing: with my particle simulations I still don’t know how to perform multiphase simulations: different densities, viscosities, etc. I do know how to carry a color field around with particles: you just set it at time zero and never change it. On top of that, I can only do periodic boundary conditions, and on a square. So, this is what I did:

- The two fluids have the same physical parameters
- But, some funny gravity acts upwards for one fluid and downwards for the other. If the color function is , either 0 or 1 for the two phases, this force per unit mass would be
- Since I can only do a square, I perturb my interface as . That should give us four plumes, instead of one

Now, that I can do! Also, it is not hard to hack interFoam to do the same. Just take the geometry from, say, “cavity”, and set cyclic patches at the boundary. Then, in the code, make sure “p” and not “p_rgh” is the field that enters the velocity equation, and add an extra term on the right-hand side: **+ (2 * alpha1 - 1) * rho * g** (just like that!). Get the control files from, e.g., the dam break case, and you are all set to run.
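For reference, the periodic-square initial condition can be sketched like this (grid size, amplitude, and wavenumber are my assumptions, not the exact values used):

```python
# Color field on a periodic unit square with a cosine-perturbed interface;
# two full cosine periods should produce the four plumes mentioned above.
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x)

amp = 0.05
interface = 0.5 + amp * np.cos(4 * np.pi * X)   # two periods across the box
c = (Y < interface).astype(float)               # color: 1 below, 0 above

# The "funny gravity": up for one fluid, down for the other,
# the particle analogue of + (2 * alpha1 - 1) * rho * g in interFoam.
g = 9.81
force_z = (2 * c - 1) * g                       # force per unit mass
```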

Btw, my method is not yet published, but it’s similar to SPH, MPS, or pFEM.
