In a nutshell:

- The standard approach uses Fourier techniques, which (of course) involve complex numbers
- The real part of these numbers is analysed, with some trigonometric expression resulting, identifying the troublesome modes
- I claim this mode can be identified in advance, which makes the whole Fourier procedure unnecessary

BTW: it’s pronounced “fon no ee man”

Starting from the diffusion equation (aka heat equation):

$latex \frac{\partial \phi}{\partial t} = D \frac{\partial^2 \phi}{\partial x^2} ,$

a centered-space, forward-time discretization yields an explicit scheme:

$latex \mathrm{(1)} \qquad \phi_j^{n+1} = \phi_j^n + \alpha \left[ \phi_{j+1}^n - 2 \phi_j^n + \phi_{j-1}^n \right] ,$

where

$latex \alpha = \frac{D \, \Delta t}{\Delta x^2}$

is a diffusion Courant number. The question is what value of this number is acceptable for the simulation to remain stable. In more practical terms: for a given spatial resolution, what time step is acceptable? The shorter, the more accurate (in principle, barring roundoff errors), but of course we’d rather have our results in minutes than in hours or days.

Now, the usual discussion centers on the growth of the error in the solution. Let’s just suppose that the solution has a sine wave shape:

$latex \phi_j^n = A^n e^{i k j \Delta x} ,$

i.e. a mode of wave number *k* whose amplitude changes with each time step. Then the discretized equation (1) becomes

$latex A^{n+1} = A^n \left[ 1 + \alpha \left( e^{i k \Delta x} - 2 + e^{-i k \Delta x} \right) \right] .$

Now, the usual discussion relates the part with the complex exponentials to trigonometric functions:

$latex e^{i k \Delta x} + e^{-i k \Delta x} = 2 \cos( k \Delta x ) = 2 - 4 \sin^2( k \Delta x / 2 ) .$

Now this means

$latex A^{n+1} = A^n \left[ 1 - 4 \alpha \sin^2( k \Delta x / 2 ) \right] .$
Any law of the form

$latex A^{n+1} = C A^n$

means that:

- the mode will decay monotonically if 0 < real(*C*) < 1 (*C* is a complex number! Well, not in this case, but it might be, in general)
- the mode will decay in an oscillatory manner if −1 < real(*C*) < 0. In other words, the scheme is stable if |*C*| < 1
- if |*C*| > 1 the scheme will fail, because that mode will grow and grow, which it must not in principle (see below for Cahn-Hilliard)
- a possible imaginary part is not so important, it being a change of phase of the mode

This applies to every mode (i.e. value of *k*), but our main offender is seen to be the one whose real(*C*) is the lowest: the one for which the argument of the squared sine is π/2 (for the time being, we’ll pretend this does not mean anything special). For this mode, *C* = 1 − 4α, and:

$latex C > 0 \iff \alpha < 1/4 , \qquad |C| < 1 \iff \alpha < 1/2 .$

Hence, we finally arrive at our result:

- α must be below 1/4 for a monotonic decrease of the worst mode; for stability, it suffices that it be below 1/2

Now, seriously, the worst mode is the one for which

$latex k \Delta x / 2 = \pi / 2 , \quad \text{i.e.} \quad k = \pi / \Delta x .$

The wavelength of this mode is λ = 2 Δx. This is precisely the shortest-wavelength mode that is allowed by our spatial resolution (that’s about all we need to know of Fourier analysis).

A cosine function centered at *j* and with this wavelength takes values +1 at *j*, −1 at *j*+1, +1 at *j*+2, *et cetera*. It turns out, then, that we did not need such a detour. Back to Eq. (1),

It seems clear that we get the worst possible case when the values of the field Φ are staggered (alternating in sign), producing −1 − 2 − 1 = −4 in the parenthesis. Then:

and in this case

$latex A^{n + 1} = A^{n} \left[ 1 - 4 \alpha \right] .$

Which recovers the result above in two lines!

This may be extended very simply to other schemes. For example, an implicit scheme ends up in something like this (for the worst mode):

$latex A^{n+1} = A^n - 4 \alpha A^{n+1} .$

So, *C* = 1/(1 + 4α), which is always between 0 and 1. The method is then unconditionally stable.

A Crank-Nicolson method yields

$latex A^{n+1} = A^n - 2 \alpha \left( A^n + A^{n+1} \right) ,$

so *C* = (1 − 2α)/(1 + 2α), whose absolute value is always below 1: the scheme is unconditionally stable, although the worst mode decays in an oscillatory manner for α above 1/2.

Let’s apply this to the more complicated Cahn-Hilliard equation

$latex \frac{\partial \phi}{\partial t} = \nabla^2 \left( \phi^3 - \phi - \gamma \nabla^2 \phi \right) .$

We’ll suppose that γ = 1/2 (it can be shown that this happens if the spatial length is set equal to the interfacial length at equilibrium).

Now, a tricky thing about this equation is that it describes segregation, and domain formation. So, the φ = 0 field is actually **unstable** against fluctuations. The systems should begin to form structures with values of φ departing from 0, with a well defined wave-length (see Note below). [That’s the featured image, by the way. It started at values of 0.01, and see what it has done in just about 500 reduced time units.]

It makes little sense to use our stability analysis in this case, then.

We have to go close to the equilibrium regime, when φ ~ 1 (there is also φ ~ −1; what follows applies in the same way).

Let us write

$latex \phi = 1 + \psi .$

The small field is now ψ, and since $latex \phi^3 - \phi \approx 2 \psi$ to first order, our equation can be approximated by

$latex \frac{\partial \psi}{\partial t} = 2 \nabla^2 \psi - \gamma \nabla^4 \psi .$

Now, a standard explicit scheme would yield:

$latex \psi_j^{n+1} = \psi_j^n + 2 \alpha \left[ \psi_{j+1}^n - 2 \psi_j^n + \psi_{j-1}^n \right] - \frac{\beta}{2} \left[ \psi_{j+2}^n - 4 \psi_{j+1}^n + 6 \psi_j^n - 4 \psi_{j-1}^n + \psi_{j-2}^n \right] .$

(There’s a handy finite differences calculator for higher-order derivatives.) In this expression, α = Δt/Δx² is as above (with unit diffusion coefficient), and

$latex \beta = \frac{\Delta t}{\Delta x^4} .$

Now, the worst case is exactly as in diffusion (staggered values), and in this case

$latex \psi^{n+1} = \psi^n \left[ 1 - 8 \alpha - 8 \beta \right] .$

Hence, the scheme is stable if 1 − 8α − 8β > −1, i.e. α + β < 1/4.

An implicit scheme leads to (for the worst mode)

$latex \psi^{n+1} = \psi^n - \left( 8 \alpha + 8 \beta \right) \psi^{n+1} , \qquad C = \frac{1}{1 + 8 \alpha + 8 \beta} ,$

which is unconditionally stable.

A mixed scheme such as the one that I am forced to work with (long story…) produces

hence the criterion for stability is

This means a typical, easy choice of Δ*t* = Δ*x* , in which α = β = 1, would be stable.

Actually (… but this gets progressively less interesting for the general public), my approach involves an explicit cube term, for which we may write, close to saturation

which leads to

hence the criterion for stability is

This means the choice Δ*t* = Δ*x* , in which α = β = 1, would **not** be stable!

If φ ≪ 1, the cubed term is negligible. Then

$latex \frac{\partial \phi}{\partial t} = - \nabla^2 \phi - \frac{1}{2} \nabla^4 \phi ,$

which is linear. Going to Fourier space, each mode grows as $latex \exp( \omega t )$ with

$latex \omega(k) = k^2 - \frac{1}{2} k^4 .$

Hence, all modes between *k* = 0 and √2 will grow, with the one with *k* = 1 growing the most. This is a wavelength of 2π, in units of the interfacial length.

Let us define the DFT as having *k* coefficient

$latex \mathrm{DFT}_s[f]_k = \sum_{j=0}^{N-1} f_j \, e^{ s \, 2 \pi i \, j k / N } ;$

here, *s* is a sign, either +1 or −1. In fact, *s* = −1 for the usual definition of the DFT (but this is OK, as we will see later).

In accordance with its computing origin, the DFT carries no physical information about the period of the signal, and just focuses on the length of the data vector, *N* (aka DOF, degrees of freedom).

The definition of a Fourier series usually starts by introducing the **inverse** Fourier transform:

$latex f(x) = \sum_{m=-\infty}^{\infty} c_m \, e^{ i k_m x } ,$

where the values of the wave vectors are given by

$latex k_m = \frac{2 \pi m}{L} ,$

where *L* is the period of the signal (we are thinking of *x* as a spatial coordinate and *L* as the length of the system, but of course everything holds for time signals).

Now, if we sample the function at equal intervals,

$latex f_j = f( j \, \Delta x ) ,$

where the spacing is Δx = *L*/*N*, we may write

$latex f_j = \sum_m c_m \, e^{ i k_m j \Delta x } .$

Since $latex k_m \, j \, \Delta x = 2 \pi m j / N$, we find the direct correspondence with the DFT of sign *s* = +1.

The coefficients themselves (**direct** transform) are calculated from the integral

$latex c_m = \frac{1}{L} \int_0^L f(x) \, e^{ - i k_m x } \, dx .$

(The derivation is very simple: just integrate the Fourier series times $latex e^{ - i k_n x }$ and use the orthogonality of the complex exponential functions.)

In the discrete world integrals have to be approximated. If we use the simple rectangle method,

$latex c_m \approx \frac{1}{L} \sum_{j=0}^{N-1} f_j \, e^{ - i k_m j \Delta x } \, \Delta x .$

Recalling Δx = *L*/*N*,

$latex c_m \approx \frac{1}{N} \sum_{j=0}^{N-1} f_j \, e^{ - 2 \pi i \, m j / N } = \frac{1}{N} \, \mathrm{DFT}_{-1}[f]_m .$

Everything fits together, since $latex \frac{1}{N} \mathrm{DFT}_{-1}$ is actually the inverse of $latex \mathrm{DFT}_{+1}$. (That *N* factor can lead to trouble if forgotten!)

Notice the signs match for the DFT and the FS, and that “the” DFT is defined with a minus sign, which corresponds to our direct FS.

Also, notice that the periodic length does not actually enter any of these equations, not even in the continuum case, because the integral may be recast with the change of integration variable *x* → *x*/*L*, and all the *L* factors drop!

Let us consider a constant function, *f(x)* = *a*. Its Fourier coefficients are then

$latex c_m = \frac{a}{L} \int_0^L e^{ - i k_m x } \, dx = a \, \delta_{m,0} ,$

i.e. all null except for *m* = 0. Using the Fourier series, we readily obtain back *f(x)* = *a*.

Now, on the discrete level, the coefficients are given by $latex c_0 = a$ (zero for *m* not equal to 0), and the DFT would be

$latex \mathrm{DFT}_{-1}[f]_m = N a \, \delta_{m,0} .$

A nice feature of Fourier series is the ease of treating derivatives (that, and convolution, of course). Indeed,

$latex f'(x) = \sum_m i k_m \, c_m \, e^{ i k_m x } ,$

which means that the Fourier coefficients of the derivative are

$latex c'_m = i k_m \, c_m .$

Now, $latex k_m = 2 \pi m / L$, and the *L* does **not** drop here. This is important, since physically the “same” sine modulation has a different derivative depending on *L*!

Our inverse Fourier series may be interpreted as a discretized integral simply by multiplying and dividing by the spacing in Fourier space:

$latex f(x) = \frac{1}{\Delta k} \sum_m c_m \, e^{ i k_m x } \, \Delta k .$

Since the spacing is $latex \Delta k = 2 \pi / L$, a quick comparison with the usual definition of the Fourier transform reveals that the Fourier transform is related to the (analytic continuation of the) Fourier components thus:

$latex \hat{f}( k_m ) = L \, c_m .$

(We have called $latex \hat{f}$ the Fourier transform, to distinguish it from the coefficients.) This fact is also acknowledged in the Wikipedia article.

Notice the Fourier coefficients have the same physical dimensions as the original function in real space, but that the Fourier transform has an extra multiplicative length dimension (area or volume in higher dimensions).


When *a* becomes negative, the minimum of *g* moves from 0 to other values, ±φ₀. The function then has the celebrated double-minimum feature, which features prominently in many symmetry-breaking theories, including of course the appearance of particle mass and, you know, the Big Bang and the Universe.

But here we are just considering phase separation in materials. The interface between two coexisting phases must have some associated cost, and the simplest (lowest order) way to include it is by introducing a total free energy functional

This is also called a Landau-Ginzburg free energy, also appearing in the Ginzburg-Landau theory of superconductors.

Now, parameters *a*, *b*, and *c* are not easy to measure (or, at least, estimate) experimentally, but they are related to: the surface tension, the width of the interface, and the magnitude of the bulk equilibrium order parameter (i.e. ±φ₀). Here I show how to obtain them in a slightly more general setting, since I was not able to find it on the internet (it can be found e.g. in the book by Rowlinson & Widom).

Let us consider a general square-gradient expression

$latex F[\phi] = \int dV \left[ g(\phi) + \frac{c}{2} \left( \nabla \phi \right)^2 \right] ,$

with a *g* which does not need to be the previous one.

The usual Euler equations to find an extremum of the free energy functional are

$latex \frac{\partial g}{\partial \phi} - c \, \nabla^2 \phi = 0 .$

This translates into a modified diffusion equation:

$latex c \, \nabla^2 \phi = g'(\phi) .$

An alternative form of the Euler equations, since the integrand does not depend on space explicitly, is given by the Beltrami identity. This leads us to

$latex g(\phi) - \frac{c}{2} \left( \nabla \phi \right)^2 = G ,$

where *G* is a constant that must be determined.

Now, let’s consider variations in the *x* direction only. At the far left the order parameter has value −φ₀, and at the right, +φ₀. It follows that the space derivative of the order parameter must be zero at these two extremes. This identifies the constant *G* as g(φ₀). In other words,

$latex \mathrm{(1)} \qquad \frac{c}{2} \left( \nabla \phi \right)^2 = \Delta g ,$

where Δg := g(φ) − g(φ₀), the **excess** free energy (we keep using the nabla symbol, but of course it just means a derivative w.r.t. *x*).

If we define an excess free energy functional:

$latex F_{\mathrm{ex}}[\phi] = \int dV \left[ \Delta g + \frac{c}{2} \left( \nabla \phi \right)^2 \right] ,$

its integrand vanishes in both bulk phases for the equilibrium profile, so the integral is finite.

By definition, the excess free energy of an interface is its area *A* times its surface tension. Therefore:

$latex \sigma = \frac{1}{A} F_{\mathrm{ex}}[\phi] = \int dx \left[ \Delta g + \frac{c}{2} \left( \nabla \phi \right)^2 \right]$

when the equilibrium profile is plugged in it.

Again, instead of solving these head on, our previous result (1) yields

$latex \frac{c}{2} \left( \nabla \phi \right)^2 = \Delta g ,$

so that the two terms in the excess free energy are exactly equal! This permits writing

$latex \sigma = 2 \int dx \, \Delta g ,$

or also

$latex \sigma = \int dx \, c \left( \nabla \phi \right)^2 = \int dx \, c \, \frac{d\phi}{dx} \, \frac{d\phi}{dx} .$

Now, the latter integral really means a change of variables! We may therefore write

$latex \mathrm{(2)} \qquad \sigma = \sqrt{ 2 c} \int_{-\phi_0}^{\phi_0} d\phi \sqrt{\Delta g}.$

This is a very remarkable expression that states that the surface tension is the area below the square root of the excess free energy function between the two minima. See the Figure for a plot for the LGL, and an interesting numerical value which will serve us later on.

Notice this form of the surface tension completely circumvents the expression of the profile. A way to obtain it, alternative to solving the diffusion equation, is to use (1) again, to write

$latex \mathrm{(3)} \qquad x - x_0 = \sqrt{\frac{c}{2}} \int_{\phi(x_0)}^{\phi(x)} \frac{d\phi}{\sqrt{\Delta g}} .$

In the latter, the value of the order parameter must be known at some position x₀, i.e. φ(x₀). This permits the calculation of the profile by inversion of the resulting x(φ).

Equations (2) and (3) may be applied to any square-gradient expression, not just the simple LGL double well (for example, they can be applied to van der Waals’ most famous expression for liquid-vapour equilibrium).

Here, we compute these expressions for the simple double-well potential. Let us write again:

$latex g(\phi) = \frac{a}{2} \phi^2 + \frac{b}{4} \phi^4 .$

We will only consider the case in which there is phase separation, and *a* is negative. In what follows, we will just write *a* for its absolute value.

Now, we define a normalized order parameter ψ such that:

$latex g(\phi) = A \, \tilde{g}(\psi) , \qquad \phi = B \, \psi , \qquad \tilde{g}(\psi) = - \frac{1}{2} \psi^2 + \frac{1}{4} \psi^4 .$

The idea is that $latex \tilde{g}$ contains **no** physical parameters, which are all absorbed in *A* and *B*. Equating the two expressions,

$latex - \frac{a}{2} B^2 \psi^2 + \frac{b}{4} B^4 \psi^4 = A \left( - \frac{1}{2} \psi^2 + \frac{1}{4} \psi^4 \right) ,$

we find

$latex B = \sqrt{a/b} , \qquad A = a^2 / b .$

The usefulness of this transformation is more apparent when we use it in the expression for the surface tension. Indeed

$latex \sigma = \sqrt{2 c} \int_{-\phi_0}^{\phi_0} d\phi \sqrt{\Delta g} = \sqrt{2 c} \, \sqrt{A} \, B \int_{-1}^{1} d\psi \sqrt{\Delta \tilde{g}} .$

The $latex \sqrt{A}$ appears from the overall prefactor in the energy, while *B* comes from the change of integration variable.

The last integral contains no parameters whatsoever! We may predict now

$latex \sigma = n \sqrt{\frac{c \, a^3}{b^2}} ,$

an expression perhaps more complicated than may have been expected. In it, *n* is some dimensionless number, very likely not too large or small. By the way, since *a* is supposed to be proportional to $latex T_c - T$ close to the critical point, this predicts a classical critical exponent of 3/2 for the surface tension. In fact, the minimum stands at $latex \phi_0 = B = \sqrt{a/b}$. Recalling *B*, the extremely famous critical exponent of 1/2 is predicted for the order parameter.

The excess is given by $latex \Delta \tilde{g} = \tilde{g}(\psi) + 1/4$. The latter may be written as

$latex \Delta \tilde{g} = \frac{1}{4} \left( 1 - \psi^2 \right)^2 ,$

an expression in which the double-well feature is quite prominent.

The integral of $latex (1 - x^2)$ is computed in the figure: 4/3 (not a hard one, since the square root cancels the power of two!). Finally,

$latex \sigma = \frac{2\sqrt{2}}{3} \sqrt{\frac{c \, a^3}{b^2}} \approx 0.94 \sqrt{\frac{c \, a^3}{b^2}} .$

Now, for the profiles. If we include the square gradient term we may define

$latex L = \sqrt{\frac{2 c}{a}} .$

The idea here is to capture the typical length scale *L* of the interface, since the spatial derivative may then be cast as a derivative with respect to *x*/*L* (a factor of 2 is introduced in the definition purely for convenience). Therefore *L* does not feature *b*, and predicts a diverging interfacial width at the critical point, with a critical exponent of −1/2.

Going back to Eq. (3) we have

$latex x = \sqrt{\frac{c}{2}} \, \frac{B}{\sqrt{A}} \int_0^\psi \frac{d\psi'}{\sqrt{\Delta \tilde{g}(\psi')}} ,$

again with *A* appearing because of the global prefactor, and *B* from the change of integration variable. In terms of *L*:

$latex x = L \int_0^\psi \frac{d\psi'}{1 - \psi'^2} ,$

which makes clear how the length scale is given by *L*. Now, let us define the van der Waals dividing surface as the point at which the order parameter takes the value of zero, and let us place that surface at the origin. Then,

$latex \frac{x}{L} = \int_0^\psi \frac{d\psi'}{1 - \psi'^2} = \mathrm{artanh}( \psi ) .$

This function is precisely the inverse of the hyperbolic tangent! Therefore we may invert to get

$latex \phi(x) = \phi_0 \tanh\left( \frac{x}{L} \right) .$

OK, imagine we are given the value of the surface tension, the bulk concentration and the interfacial length. We may write the surface tension as

$latex \sigma = \frac{2}{3} \, \epsilon \, L ,$

with the energy density $latex \epsilon := a^2 / b = A$. Therefore,

$latex \epsilon = \frac{3 \sigma}{2 L} .$

From the value of the bulk concentration, we find

$latex a = \epsilon / \phi_0^2 , \qquad b = \epsilon / \phi_0^4 .$

Finally, from the interfacial length we find for *c*

$latex c = \frac{a L^2}{2} = \frac{\epsilon L^2}{2 \phi_0^2} .$

For example, Camley and Brown, J. Chem. Phys. 135, 225106 (2011), in a study of 2D hydrodynamics, use a value in pN (units of force, because this is actually a line tension in 2D), and an interfacial width in nm.

With these numbers, and an order parameter with a value of 1, we would have

A bit crazy in these units, but in more microscopic ones they seem more sensible:


where $latex \mathbf{u}$ is the velocity field, *p* is the pressure, ν is the kinematic viscosity, and ρ is the fixed density of the fluid. The time derivative is a total derivative:

$latex \frac{D \mathbf{u}}{D t} = \frac{\partial \mathbf{u}}{\partial t} + \left( \mathbf{u} \cdot \nabla \right) \mathbf{u} .$

It is common to choose parameters that simplify the equations, but that can obscure the role of the different parameters. In the following, I provide expressions with all relevant parameters included, with their physical dimensions. I later pass to dimensionless, or reduced, units, in terms of the Reynolds and Courant numbers.

The solution is a periodic array of vortices that repeats itself in the *x* and *y* directions with a periodic length *L*:

$latex u_x = U \sin( k x ) \cos( k y ) \, f(t) , \qquad u_y = - U \cos( k x ) \sin( k y ) \, f(t) ;$

here, $latex k = 2 \pi / L$, and the function is

$latex f(t) = e^{ - 2 \nu k^2 t } ,$

so that the decay time of the vortices due to viscosity is given by $latex \tau = 1 / ( 2 \nu k^2 )$. The maximum modulus of the velocity field at time zero is *U*.

The pressure field is given by

$latex p = p_0 + \frac{\rho U^2}{4} \left[ \cos( 2 k x ) + \cos( 2 k y ) \right] f(t)^2 .$

Hence the vortices go around zones of low pressure, either clockwise or counter-clockwise (pictures will come, eventually.)

Plugging these two fields into the Navier-Stokes equation shows that indeed this is a solution. It is interesting that the pressure gradient term exactly cancels the convective one, while the viscosity term balances the partial time derivative. That means that in the inviscid limit the vortices will never decay.

The vorticity field is given by

$latex \omega = 2 U k \sin( k x ) \sin( k y ) \, f(t) ,$

and the stream function is just $latex \psi = \omega / ( 2 k^2 )$. Notice the vorticity satisfies the convection-diffusion equation

$latex \frac{\partial \omega}{\partial t} + \mathbf{u} \cdot \nabla \omega = \nu \nabla^2 \omega .$

Let us introduce the dimensionless time, built from time, maximum initial velocity, and typical length *L* (another choice would be *L*/2, which is the actual length of a vortex):

$latex \tilde{t} = \frac{t \, U}{L} .$

Notice that *L*/*U* is the time a fluid particle would need to travel a distance *L*.

Function *f* can be written as

$latex f = \exp\left( - \frac{8 \pi^2}{\mathrm{Re}} \tilde{t} \right) ,$

where the Reynolds number appears naturally:

$latex \mathrm{Re} = \frac{U L}{\nu} .$

The decay time is then seen to be $latex \mathrm{Re} / ( 8 \pi^2 )$ in reduced units.

In a simulation, there is an important dimensionless parameter, the Courant number:

$latex \mathrm{Co} = \frac{U \, \Delta t}{\Delta x} ,$

where Δ*t* is the simulation time-step, and Δ*x* the size of the simulation cells (or the interparticle distance in a particle simulation, aka *h*). If each Cartesian direction is discretized in *n* cells (for a total of *n* × *n* cells), then

$latex \mathrm{Co} = n \, \Delta \tilde{t} ,$

where the dimensionless time-step $latex \Delta \tilde{t} = \Delta t \, U / L$ arises naturally. For a series of simulations at fixed Co, the product of *n* and Δt̃ must remain constant. E.g. if we had 16 × 16 bins at some time-step, we should use 32 × 32 at half that time-step. A simulation must then consider twice the number of time steps in order to reach the same final time, with a system that has four times as many cells as the original one! We can suppose this will scale badly: the cost grows by a factor of eight with each doubling of the resolution, i.e. as *n*³. It turns out that this is more like the worst-case scenario, though (I think…).

The typical instance corresponds to *L* = 2π, in which case *k* = 1, and the function *f* takes the simple form $latex e^{ - 2 \nu t }$. Also, $latex \mathrm{Re} = 2 \pi U / \nu$. If the viscosity is also taken to be one (along with *U*), the expressions are even simpler, but then the Reynolds number is only 2π!

*SPH simulations of the Taylor-Green vortex sheet by INSEAN. Recall the time quoted is dimensioned. Hence, the onset of instability at about 70 would actually be at about 10 in reduced units. The decay time is $latex \mathrm{Re} / ( 8 \pi^2 )$, which is about 25. Hence the velocity field has decayed by exp(−10/25) at that time, which is about two thirds. Color by vorticity.*

In my simulations, I took (I should have chosen , but…), so then , and , a factor of two that I missed in an early version of a manuscript. I also took , which would give a Reynolds number of 200 (not, as I thought, 100). I have also taken for a number of 2000, as in the simulations by the INSEAN group in Rome.

*pFEM simulations by myself, at the same Re=2000 as those above. Instability sets in at a reduced time of about 15. Later times seem to indicate a vortex “hopping”. However, I have more recent simulations that show no instability at all. Color is by pressure on these simulations (left), and by vorticity, more or less (right)*

If we take one of these vortices as a good model for a storm, we may set a pressure difference of 100 hPa. This would give maximum wind speeds of about 91 m/s, or 330 km/h… quite the storm indeed. Anyway, with a storm size of 10 km, the Reynolds number would be a huge 1.6 × 10¹⁰ (using a kinematic viscosity of 1.5 × 10⁻⁵ m²/s for air). The decay time comes out at about 24 years, so, no, it seems clear storms do not decay due to air viscosity.


The usual computation is as follows:

- The simulation cell is a rectangle of dimensions *d* × 4*d*: an aspect ratio of 4, and a 2D case.
- The density ratio is (I think) set to 3. I.e. the lighter phase is three times as light as the heavy one. I think the actual values are “1” and “3” in whichever reduced units are used. The Atwood number is therefore 1/2.
- Fluids are initially at rest, so there is no input typical velocity. A natural one would be $latex \sqrt{g d}$, similar to the velocity that a fluid would have if it fell a distance *d* under gravity.
- The Reynolds number is therefore fixed as $latex \mathrm{Re} = \sqrt{g d^3} / \nu$, where (I think) the lighter density and viscosity are used.
- Boundary conditions for the velocity are: no slip at the top and at the bottom, and slip (this is quite important) at the left and right walls. Pressure: zero normal gradient at all walls. There may be a symmetry plane at the *x* = 0 line, in order to save half of the domain.
- The initial interface is perturbed by setting it as a cosine shape of small amplitude.
- Lots of details can be found in Guermond & Quartapelle (2000), “A projection FEM for variable density incompressible flows”, Journal of Computational Physics, 165(1), 167-188. It’s easy to get its pdf.

OK, this is all quite doable in OpenFOAM. As explained by Eric Paterson at a workshop at Chalmers, the only trick seems to be to use funkySetFields. This utility is now part of swak4Foam, the Swiss Army Knife for FOAM. Now, installation for new releases of OpenFOAM may be a bit tricky, but the procedure is well documented.

Here’s the thing: with my particle simulations I still don’t know how to perform multiphase simulations: different densities, viscosities, etc. I do know how to carry a color field around with particles: you just set it at time zero and never change it. On top of that, I can only do periodic boundary conditions, and on a square. So, this is what I did:

- The two fluids have the same physical parameters.
- But some funny gravity acts upwards for one fluid and downwards for the other. If the color function is *c*, either 0 or 1 for the two phases, this force per unit mass would be $latex ( 2 c - 1 ) \mathbf{g}$.
- Since I can only do a square, I perturb my interface as a cosine. That should give us four plumes, instead of one.

Now, that I can do! Also, it is not hard to hack interFoam to do the same. Just take the geometry from, say, “cavity”, and set cyclic patches at the boundary. Then, in the code, make sure “p” and not “p_rgh” is the field that enters the velocity equation, and add an extra term on the right-hand side: **+ (2 * alpha1 - 1) * rho * g** (just like that!). Get the control files from, e.g., the dam break tutorial, and you are all set to run.

Btw, my method is not yet published, but it’s similar to SPH, MPS, or pFEM.


Starting with the Navier-Stokes momentum equation

$latex \rho \frac{D \mathbf{u}}{D t} = - \nabla p + \mu \nabla^2 \mathbf{u} + \left( \lambda + \mu \right) \nabla \left( \nabla \cdot \mathbf{u} \right) ,$

where λ is a Lamé viscosity coefficient. The bulk viscosity coefficient is defined as $latex \mu_b = \lambda + \frac{2}{3} \mu$. The last term is often neglected, even in compressible flow, but sound attenuation is one of the few cases where it may have some influence. All viscosities are assumed to be constant, but in this case this is a safe assumption, since we are going to assume small departures about equilibrium values.

Let us not forget mass continuity:

$latex \frac{\partial \rho}{\partial t} + \nabla \cdot ( \rho \mathbf{u} ) = 0 .$

Now, let us suppose small pressure and density fluctuations about a constant background. I.e.:

$latex \mathbf{u} = \mathbf{u}_s , \qquad p = p_0 + p_s , \qquad \rho = \rho_0 + \rho_s .$

The first equation means the medium onto which sound travels is not moving (it should be modified for e.g. wind). In this case, the fluctuations in pressure and density, being small, should be related by a constant which, with some foresight, we will call $latex c^2$:

$latex p_s = c^2 \rho_s .$

The viscosities were already assumed to have small variations about their equilibrium values. Then, neglecting second and higher order perturbation terms we obtain linearized equations, involving only the velocity and the pressure (not the density):

$latex \frac{\partial \mathbf{u}}{\partial t} = - \frac{1}{\rho_0} \nabla p + \nu \nabla^2 \mathbf{u} + \left( \nu_\lambda + \nu \right) \nabla \left( \nabla \cdot \mathbf{u} \right) , \qquad \frac{\partial p}{\partial t} = - \rho_0 c^2 \nabla \cdot \mathbf{u} .$

Here, we have dropped the “s” subscripts (it’s confusing, but the expressions are so much cleaner). The kinematic viscosity is defined as $latex \nu = \mu / \rho_0$ (and similarly $latex \nu_\lambda = \lambda / \rho_0$).

If we differentiate with respect to time in the first, and with respect to space (applying $latex \nabla \cdot$) in the second, we can eliminate the pressure term (I know, exchanging both derivatives, which makes mathematicians nervous). The following equation for the velocity results:

$latex \frac{\partial^2 \mathbf{u}}{\partial t^2} = c^2 \nabla \left( \nabla \cdot \mathbf{u} \right) + \nu \nabla^2 \frac{\partial \mathbf{u}}{\partial t} + \left( \nu_\lambda + \nu \right) \nabla \left( \nabla \cdot \frac{\partial \mathbf{u}}{\partial t} \right) .$

Clearly, just a wave equation if there were no viscosity. Let’s try a wave solution of the form

$latex \mathbf{u} = \mathbf{u}_0 \, e^{ i ( K x - \omega t ) } .$

Here, ω is the angular frequency of the sound wave, but *K* may be a complex number. If we find (as we will)

$latex K = k + i \kappa ,$

that would mean

$latex \mathbf{u} = \mathbf{u}_0 \, e^{ - \kappa x } \, e^{ i ( k x - \omega t ) } ,$

which clearly identifies *k* as the (real) wave-number for a wave-length λ = 2π/*k*, and κ as a sound attenuation coefficient, with 1/κ the penetration length.

Now, second time derivatives just yield

$latex \frac{\partial^2 \mathbf{u}}{\partial t^2} = - \omega^2 \mathbf{u} .$

But, it is quite interesting that these two second space derivatives are not quite the same:

$latex \nabla^2 \mathbf{u} = - K^2 \mathbf{u} , \qquad \nabla \left( \nabla \cdot \mathbf{u} \right) = - K^2 u_x \, \hat{\mathbf{x}} ,$

where $latex \hat{\mathbf{x}}$ is the unit vector in the *x* direction. Clearly, the second version of the derivative produces a longitudinal wave, with a vector component only along the direction of propagation!

I know, sound waves are longitudinal. But what happens if we plug these derivatives in for the *y* Cartesian component? Well:

$latex - \omega^2 u_y = i \omega \nu K^2 u_y ,$

or

$latex K^2 = \frac{ i \omega }{ \nu } .$

Now, this is a very classic complex analysis problem. Recall $latex \sqrt{i} = \pm ( 1 + i ) / \sqrt{2}$. You do need the ± to get the two solutions. One of the solutions has the signs reversed, and corresponds to a wave propagating and attenuating in the −*x* direction. The one we are looking for is:

$latex K = ( 1 + i ) \sqrt{ \frac{ \omega }{ 2 \nu } } .$

Therefore

$latex k = \kappa = \sqrt{ \frac{ \omega }{ 2 \nu } } .$

This is an interesting wave, since the attenuation coefficient is equal to the wave number. It looks like the simple function $latex e^{-x} \cos( x )$, which would be called simplistic by a physicist. If you plot it you’ll see it decays very fast, with just one maximum or minimum of importance. (You can just type “e^-x*cos(x) from 0 to 10” in Google, it will plot it!). So, yes, sound waves are basically longitudinal, since their transverse components get attenuated very fast. How fast? As we said before, the attenuation length is the inverse of κ, so $latex \ell = \sqrt{ 2 \nu / \omega }$. This means that for everyday “sound”, i.e. audible frequencies, that length is quite small. With the numerical data in the Table, that length will only be about 0.7 mm in air for a very low frequency of 10 Hz (just below the hearing range), and will decrease as the inverse of the square root of the frequency.

| | *c* (m/s) | ν (m²/s) | ν_b (m²/s) | crossover (GHz) |
| water | 1480 | | | |
| air | 340 | | ? | |

**Table**: Numerical values for two important substances. Question marks are speculative, since in Cramer air is said to have a negligible bulk viscosity… but in a graph this quantity’s value is seen to be about half the shear viscosity value, and air is mostly nitrogen.

Plugging the derivatives for the *x* Cartesian component:

$latex - \omega^2 u_x = - c^2 K^2 u_x + i \omega \tilde{\nu} K^2 u_x , \qquad \tilde{\nu} := \nu_\lambda + 2 \nu = \frac{4}{3} \nu + \nu_b .$

We see that the viscosity now features the bulk viscosity, as seems fitting for a longitudinal disturbance, which involves compression. Also, a new important term appears. If viscosity were negligible, the solution would just be

$latex K = \frac{ \omega }{ c } ,$

the usual dispersion relation for sound. In general, though:

$latex K^2 = \frac{ ( \omega / c )^2 }{ 1 - i \, \omega / \omega_c } ,$

where we define the important crossover angular frequency

$latex \omega_c = \frac{ c^2 }{ \tilde{\nu} } .$

Numerical values can be found in the table (we provide the linear frequency). The value is really high for the hearing range, which goes up to about 20 kHz for humans, 160 kHz for porpoises, which hold the record. Medical ultrasound goes as high as 16 MHz only, and only acoustic microscopy reaches a few GHz, the range at which our value for air sits.

At frequencies much below the crossover frequency, we may expand the term in the denominator in a Taylor series, then again for the square root. The end result is

$latex K \approx \frac{ \omega }{ c } \left( 1 + \frac{ i \, \omega }{ 2 \omega_c } \right) .$

Therefore the wave number is

$latex k = \frac{ \omega }{ c } ,$

as if there were no viscosity. The attenuation coefficient is

$latex \kappa = \frac{ \omega^2 }{ 2 \omega_c c } = \frac{ \omega^2 \tilde{\nu} }{ 2 c^3 } .$

It therefore grows as the square of the frequency. This agrees with the expression in Wikipedia (but for a factor of two in the general formula, which I have just corrected – let’s see if it stays). This would mean that for that very high porpoise’s pitch at 160 kHz the attenuation length would be about 1.5 kilometers (in water, of course), which may be important for long-range communication of these animals. For medical ultrasound at 16 MHz this length is about 14 cm, which can clearly have an impact for human tissues (I am not sure if this attenuation may be used to our advantage). For the human bodies I have used water values, which is a fair approximation.

At frequencies much higher than the crossover, we may neglect the “1” in the denominator, to obtain

$latex K^2 = i \, \frac{ \omega \, \omega_c }{ c^2 } .$

Now, this looks familiar, especially if we substitute the crossover frequency:

$latex K^2 = \frac{ i \omega }{ \tilde{\nu} } .$

Basically, the same expression as for transverse waves, with a different combination of viscosities. The conclusion is similar:

$latex k = \kappa = \sqrt{ \frac{ \omega }{ 2 \tilde{\nu} } } ,$

and the resulting waves are heavily attenuated.

A fast check on the above approximations is to see what happens when the frequency is exactly the crossover frequency. At this point, the growth of the attenuation coefficient as the square of frequency crosses over to a growth as the square root. This would mean a bend in a log-log plot, between two straight lines with different slopes.

The extrapolation of the low frequency expression yields

$latex \kappa = \frac{ \omega_c }{ 2 c } = 0.5 \, \frac{ \omega_c }{ c } ,$

whereas the high frequency expression yields

$latex \kappa = \frac{ \omega_c }{ \sqrt{2} \, c } \approx 0.71 \, \frac{ \omega_c }{ c } ,$

not so different at all.

The exact expression can be shown to be, after some complex algebra,

$latex \kappa = 2^{-1/4} \sin( \pi / 8 ) \, \frac{ \omega_c }{ c } \approx 0.32 \, \frac{ \omega_c }{ c } ,$

a value just a bit below the other two. This means the approximations remain quite fair up to the limit of their respective ranges.

By the way, for water this value corresponds to an attenuation length of about 58 nm, which is really short. For air, it is about 1.5 micrometers, the size of a really small cell.


# Created by the script cgal_create_cmake_script
# This is the CMake script for compiling a CGAL application.

project( embedded_ )

cmake_minimum_required(VERSION 2.6.2)
if("${CMAKE_MAJOR_VERSION}.${CMAKE_MINOR_VERSION}" VERSION_GREATER 2.6)
  if("${CMAKE_MAJOR_VERSION}.${CMAKE_MINOR_VERSION}.${CMAKE_PATCH_VERSION}" VERSION_GREATER 2.8.3)
    cmake_policy(VERSION 2.8.4)
  else()
    cmake_policy(VERSION 2.6)
  endif()
endif()

find_package(CGAL QUIET COMPONENTS Core )
include( ${CGAL_USE_FILE} )

set(EIGEN3_INCLUDE_DIR "/usr/local/include/eigen")
find_package(Eigen3)
if(EIGEN3_FOUND)
  message(STATUS "NOTICE: Eigen library found.")
  include( ${EIGEN3_USE_FILE} )
else()
  message(STATUS "NOTICE: Eigen library is not found.")
endif()

set(CMAKE_MODULE_PATH "/usr/local/include/eigen/cmake/;${CMAKE_MODULE_PATH}")
#set(CHOLMOD_LIBRARIES "/usr/include/suitesparse/")
find_package( Cholmod )
find_library(LAPACK_LIB NAMES lapack)
find_library(BLAS_LIB NAMES blas)
find_library(SS_LIB NAMES suitesparseconfig)
include_directories( ${CHOLMOD_INCLUDES} )

include( CGAL_CreateSingleSourceCGALProgram )
create_single_source_cgal_program("main.cpp" "linear.cpp" "gradient.cpp" "nabla.cpp"
  "quad_coeffs.cpp" "periodic.cpp" "fields.cpp" "Delta.cpp" "move.cpp" "draw.cpp"
  "onto_from_mesh.cpp" )
target_link_libraries(main ${CHOLMOD_LIBRARIES} ${LAPACK_LIB} ${BLAS_LIB} ${SS_LIB})
#ADD_LIBRARY(${LAPACK_LIB} ${BLAS_LIB} ${CHOLMOD_LIBRARIES} )

Ok, in orange are the tweaked parts. I am a bit proud of the SS_LIB part, which I found out by myself


int n = 100;
VectorXd x(n), b(n);
SpMat A(n,n);
fill_A(A);
fill_b(b);

// solve Ax = b
Eigen::BiCGSTAB<SpMat> solver;
//ConjugateGradient<SpMat> solver;
solver.compute(A);
x = solver.solveWithGuess(b, x0);

Notice that A is a **sparse** matrix! I am next describing how to use this in order to solve the 1D Poisson equation.

Care must be taken when filling a sparse matrix, and when addressing its contents. Here is an efficient way of filling it up, using a std::vector of triplets. These are (int i, int j, T v), with T = float or double typically (or complex), giving the value v of the element at row i, column j. Notice I fill it with the celebrated finite-difference expression for the Laplacian operator.

void fill_A(SpMat& A) {
  typedef Eigen::Triplet<double> T;
  int n = A.rows();
  std::vector<T> aa;  // list of non-zero coefficients
  double dx2 = double(n)*double(n);
  for(int i = 0; i < n; i++) {
    if(i > 0)   aa.push_back( T(i, i-1,      dx2) );
                aa.push_back( T(i, i,   -2 * dx2) );
    if(i < n-1) aa.push_back( T(i, i+1,      dx2) );
  }
  A.setFromTriplets(aa.begin(), aa.end());
  return;
}

This is how one fills the source term (right hand side of the Poisson equation). It is a regular vector (in the algebra sense), not a sparse one (i.e. it is a “dense” vector):

void fill_b(VectorXd& b) {
  int n = b.size();
  for(int i = 0; i < n; i++) {
    double x = double(i+1)/double(n+1);
    b(i) = std::sin(2*M_PI*x);
    std::cout << x << " " << b(i) << std::endl;
  }
  return;
}

We can also provide a vector as an initial guess, since in this case we know the analytic solution:

void solution(VectorXd& b) {
  int n = b.size();
  for(int i = 0; i < n; i++) {
    double x = double(i+1)/double(n+1);
    b(i) = - std::sin(2*M_PI*x) / (4*M_PI*M_PI);
  }
  return;
}

FYI, this is the complete code, with ugly comments and everything. Some comments would apply for another canonical solution, f(x)= x (1-x), with a constant source term equal to 2.

#include <Eigen/IterativeLinearSolvers>
#include <iostream>
#include <vector>

using Eigen::VectorXi;
using Eigen::VectorXd;
using Eigen::SparseMatrix;
using Eigen::ConjugateGradient;
using std::cout;
using std::endl;

const Eigen::IOFormat OctaveFmt(Eigen::StreamPrecision, 0, ", ", ";\n", "", "", "[", "];");

typedef SparseMatrix<double> SpMat;

void fill_A(SpMat& A) {
  typedef Eigen::Triplet<double> T;
  // A.reserve(VectorXi::Constant(n,3));
  std::cout << " Filling A " << std::endl;
  int n = A.rows();
  std::vector<T> aa;  // list of non-zero coefficients
  double dx2 = double(n)*double(n);
  for(int i = 0; i < n; i++) {
    if(i > 0)   aa.push_back( T(i, i-1, dx2) );
    // int im1 = (i-1 + n) % n;            // (periodic variant)
    // aa.push_back( T(i, im1, dx2) );
                aa.push_back( T(i, i, -2 * dx2) );
    // int ip1 = (i+1 + n) % n;            // (periodic variant)
    // aa.push_back( T(i, ip1, dx2) );
    if(i < n-1) aa.push_back( T(i, i+1, dx2) );
  }
  A.setFromTriplets(aa.begin(), aa.end());
  std::cout << " Filled A " << std::endl;
  return;
}

void fill_b(VectorXd& b) {
  int n = b.size();
  for(int i = 0; i < n; i++) {
    // b(i) = 2;                           // (constant-source variant)
    double x = double(i+1)/double(n+1);
    b(i) = std::sin(2*M_PI*x);
    std::cout << x << " " << b(i) << std::endl;
  }
  return;
}

void solution(VectorXd& b) {
  int n = b.size();
  for(int i = 0; i < n; i++) {
    double x = double(i+1)/double(n+1);
    b(i) = - std::sin(2*M_PI*x) / (4*M_PI*M_PI);
    // b(i) = x*(1-x);                     // (constant-source variant)
  }
  return;
}

int main(void) {
  int n = 100;
  VectorXd x(n), b(n);
  SpMat A(n,n);

  fill_A(A);
  fill_b(b);
  std::cout << "b= " << b.format(OctaveFmt) << std::endl;

  VectorXd x0(n);
  solution(x0);
  std::cout << "x0= " << x0.format(OctaveFmt) << std::endl;
  VectorXd bb = A*x0;
  std::cout << "bb= " << bb.format(OctaveFmt) << std::endl;

  // solve Ax = b
  Eigen::BiCGSTAB<SpMat> solver;
  //ConjugateGradient<SpMat> solver;
  solver.compute(A);
  if(solver.info() != Eigen::Success) {  // decomposition failed
    std::cout << "Failure decomposing\n";
    return 1;
  }
  x = solver.solveWithGuess(b, x0);
  if(solver.info() != Eigen::Success) {  // solving failed
    std::cout << "Failure solving\n";
    return 1;
  }
  // solve for another right hand side:
  // x1 = solver.solve(b1);
  std::cout << "#iterations: " << solver.iterations() << std::endl;
  std::cout << "estimated error: " << solver.error() << std::endl;
  std::cout << "x = " << x.format(OctaveFmt) << std::endl;
  return 0;
}

- Open the relevant collab with Google Chrome.
- Slide one of the volume controls. After a while, all the audio tracks are transferred to your hard drive (opening the recording app also works, but I think it’s slower).
- All the tracks can be found at C:\Users\<user_name>\AppData\Local\Google\Chrome\User Data\Default\Cache in Windows 7. They have pretty names like “f_001b5a” and no extension. In other Windows versions and in iOS they are somewhere else; just google it.
- Order them by date. The audio tracks you want will be the newest ones, with sizes from about 0.5 MB to 2 MB. Click on them twice slowly (or once, plus F2) and add the “.ogg” extension. This is an audio format similar to mp3, but with fewer copyright restrictions.
- Open Cubase and create an empty project (it’s in the “More” tab of the project assistant).
- Create the tracks you will need, **using presets**. Imagine the Bandhub collab features some female singing. You’d add a new audio track (Project -> Add Track -> Audio, or right-click on the middle column), then click on “Browse presets” and start selecting: Vocal, Lead Vocal, Pop, for example. We could then pick *Female Vox Basic Lead*.
- Select one track, then import a Bandhub track onto it (File -> Import -> Audio File). Chances are you’ll import the wrong one; just hit the play button and listen to what you got. Press the “S” button on the track to solo it if things get noisy. Once you know where this audio belongs, select it by clicking on it, cut it, click on the right track, then paste it (go to the beginning of the track first, or it will be pasted at the current time).
- All the presets seem sensible to me, but some are **redundant**. For example, if the vocals already had some noticeable reverb, you probably won’t need more. Disable the reverb by clicking on the Inserts tab, then the little on/off button. Same with echo. Electric guitar presets usually add an amp simulator, which makes the sound really noisy if the guitar was already amped, so disable that too. Needless to say, this is the part where you’ll spend some time if you are serious about this stuff: tweaking the presets, adding new inserts, all that.
- When all tracks sound OK, open the mixer window (F3, or Devices -> Mixer). Play the song and you’ll see all those meters jumping up and down. Adjust them until you find the right volume for each track. Try to avoid clipping, which shows as the meters reaching the top. Don’t forget to **pan** the mix by moving the blue line left and right; this provides the stereo effect. Frankly, I don’t know much about panning: if I have two guitars I move one to one side and the other to the other. I think panning the bass or drums makes no sense, and perhaps the same goes for the lead vocals, but I don’t really know.
- Then select the range you want to record by dragging the little triangle on the time bar to the end of the song, and export your mix (File -> Export -> Audio Mixdown). Congratulations! Now you may upload your song to YouTube (but then you have to make a “video”, even with one frame!), or to SoundCloud, Dropbox…
- Other stuff you can do:
  - track **trimming**: just move the white squares at the lower corners of the sound track
  - **fade ins** and **outs**: click on the blue triangles at the upper corners
  - **clipping**: right-click where you want to clip and select the scissors tool
  - **dubbing**: this is kind of cheating, but you may add stuff to the song. Just record a new track the way you usually do. Of course, that track is not visible in the Bandhub collab (at least until they implement this feature, which some say they will). But sometimes a song lacks some simple thing I can’t record easily, like a tambourine: my delicate condenser mic clips like crazy recording one. In that case, I start an instrument track, select Drums – Groove Agent One, and record some percussion with the PC keyboard. Remember to set up a drum set (the sign with three little horizontal bars in the left column), or else you’ll get no sound. The keyboard appears with Alt-K or Devices -> Virtual Keyboard.
- Another interesting possibility is to use Cubase 5 to build a rough guiding track to start a new collab… but that deserves another post.

**Audacity**

The process of importing the tracks and mixing them is basically the same as above, and I have the feeling it would be much the same in just about any modern DAW. Some advantages of using this piece of software:

- It is open source and free
- It’s also quite lightweight
- Controls are much simpler
- ogg files are recognized automatically, no need to add the “.ogg” extension. This is quite convenient

The bad:

- All the basic effects are there: reverb, echo, compression, EQ. But they are low-level! I mean, there are no presets for you to use. That means you have to know some relatively advanced stuff, which I certainly do not. It does depend on the effect: EQ, for example, can manage presets, and comes with some basic ones already (it’s a rather cool EQ, by the way). Compression, on the other hand, does not, so I have to start learning about what “attack time” means…
- Also, effects cannot be applied on the fly: you have to stop listening to the mix, apply them, then listen to the result.
- No “virtual” instruments like in Cubase: you can no longer play drums or a synth on your PC’s keyboard. Also, no MIDI, it seems (I’m not sure).

My feeling is that for a very simple mix, Audacity is probably the better choice, being simpler and easier to start with. If you want to do slightly more advanced stuff and you don’t mind learning a bit about proper EQ, compression and so on, Audacity is probably still better, precisely because it is more low-level. But if you are either lazy or want to go really deep, it seems you’ll have to go pro.
