Posts

C∞-structures to integrate involutive distributions

My collaborators and I have recently published two papers (this one and this other) in which we develop a method to obtain the integral manifolds of involutive distributions. The study of these distributions and their integral manifolds contributes to a broader understanding of differential geometry, an area with ties to many other mathematical branches, and it is also relevant in physics, especially in classical mechanics and field theory. Given a distribution, for example \(\mathcal{Z}=\{Z_1,Z_2\}\) in \(\mathbb R^n\), the idea of our work is to complete it with a sequence of \(n-2\) vector fields \(Y_1, Y_2, Y_3, \ldots\) in such a way that

- \(Y_1\) is a \(\mathcal{C}^{\infty}\)-symmetry of \(\mathcal{Z}\).
- \(Y_2\) is a \(\mathcal{C}^{\infty}\)-symmetry of \(\mathcal{Z}\oplus \{Y_1\}\).
- \(Y_3\) is a \(\mathcal{C}^{\infty}\)-symmetry of \(\mathcal{Z}\oplus \{Y_1, Y_2\}\)…
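As a toy illustration of the kind of computation involved: roughly speaking, a \(\mathcal{C}^{\infty}\)-symmetry condition asks that the Lie brackets of the new vector field with the generators of the distribution remain inside the enlarged distribution (see the papers for the precise definition). A minimal sympy sketch of the basic ingredient, the Lie bracket of two vector fields given by their coefficient lists:

```python
import sympy as sp

x, y = sp.symbols('x y')

def lie_bracket(Y, Z, vars):
    """[Y, Z]^i = sum_j ( Y^j dZ^i/dx^j - Z^j dY^i/dx^j )."""
    return [sum(Y[j] * sp.diff(Z[i], vars[j]) - Z[j] * sp.diff(Y[i], vars[j])
                for j in range(len(vars)))
            for i in range(len(vars))]

Z1 = [1, 0]    # Z1 = d/dx
Y1 = [x, y]    # Y1 = x d/dx + y d/dy (the Euler field)

# [Y1, Z1] = -Z1, so the bracket stays in span{Z1, Y1}
print(lie_bracket(Y1, Z1, [x, y]))   # [-1, 0]
```

The bracket here lands back in the span of \(Z_1\) and \(Y_1\), which is the flavor of condition the sequence \(Y_1, Y_2, \ldots\) must satisfy step by step.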

Lax pair introduction

Finite degree of freedom

Consider a nonlinear system of ODEs \[ \dot{x}_i=F_i, \] for example one arising in Hamiltonian mechanics, \[ \dot{p} = -\frac{\partial H}{\partial q}, \quad \dot{q} = \frac{\partial H}{\partial p}. \tag{1} \] A Lax pair for this system is a pair of matrices \(L\) and \(M\) that satisfy the Lax equation \[ \frac{dL}{dt} = [L, M], \tag{2} \] where \([L, M] = LM - ML\) is the commutator of \(L\) and \(M\), and \(\frac{dL}{dt}\) is the time derivative of \(L\). The entries of \(L\) and \(M\) are typically expressed in terms of the variables \(x_i\), and we require that equation (1) is satisfied if and only if (2) is satisfied. The exact form of \(L\) and \(M\) depends on the specific system under consideration, and it is not unique. For instance, in the case of the simple harmonic oscillator with Hamiltonian \(H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 q^2\), we can choose \[ L = \begin{pmatrix} p/m & \omega q \\ \omega q & -p/m \end{pmatrix}, \quad \ldots \]
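The excerpt is cut off before the matrix \(M\); one standard companion choice for this \(L\) (an assumption here, not necessarily the one made in the post) is \(M = \frac{\omega}{2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\). A short symbolic check that this pair satisfies the Lax equation (2) exactly when Hamilton's equations (1) hold:

```python
import sympy as sp

t = sp.symbols('t')
m, w = sp.symbols('m omega', positive=True)
q = sp.Function('q')(t)
p = sp.Function('p')(t)

# Lax matrix from the post, and a companion M (an assumed, standard choice)
L = sp.Matrix([[p / m, w * q], [w * q, -p / m]])
M = sp.Matrix([[0, w / 2], [-w / 2, 0]])

# Hamilton's equations (1) for H = p^2/(2m) + m omega^2 q^2 / 2
hamilton = {sp.Derivative(q, t): p / m, sp.Derivative(p, t): -m * w**2 * q}

lhs = L.diff(t).subs(hamilton)   # dL/dt with (1) imposed
rhs = L * M - M * L              # the commutator [L, M]

print(sp.simplify(lhs - rhs))    # the zero matrix: (2) holds when (1) does
```

A conserved quantity then comes for free: \(\operatorname{tr}(L^2) = 2\left(\frac{p^2}{m^2} + \omega^2 q^2\right) = \frac{4}{m}H\) is constant along solutions, which is the usual payoff of a Lax formulation.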

General covariance and contravariance (new version)

Suppose that a group \(G\) acts transitively on a space \(X\). A transformation \(T\in G\) can be thought of interchangeably as either "moving" the objects in \(X\) or as a change of "point of view", in the following way. Suppose we have a mathematically comfortable space with a distinguished point, \((S,s_0)\); for concreteness, think of \((\mathbb Z,0)\). Suppose also a bijection \(b_1:S\to X\) (in the line of reasoning of the notes homogeneous space#Intuitive approach and basis and change of basis). It is better to think in terms of \(b_1^{-1}\): for every \(x\in X\) we have a kind of "coordinates" \(s\) for \(x\), given by \(s=b_1^{-1}(x)\in S\). Think, for example, of a vector space \(V\) and the isomorphism \(b_1:\mathbb R^2 \to V\) (fixing a basis in \(V\)). We can think of this as our initial point of view, and we can imagine as if we were located at \(x_0=b_1(0)\in X\) with some devices to take measurements that let us…
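The vector-space example can be made concrete. Below is a small numerical sketch (my own illustration, with made-up matrices) of \(b_1:\mathbb R^2\to V\) encoded as a basis matrix: changing the point of view means post-composing \(b_1\) with an invertible \(A\), and the coordinates \(b_1^{-1}(v)\) of a fixed vector then transform by \(A^{-1}\), which is the contravariance of components:

```python
import numpy as np

# b1 : R^2 -> V, encoded as the matrix B whose columns are the chosen basis vectors
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = B @ np.array([2.0, 3.0])       # the vector of V with coordinates (2, 3)
c_old = np.linalg.solve(B, v)      # b1^{-1}(v): coordinates in the old point of view

# Change of point of view: new basis b2 = b1 . A for an invertible matrix A
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
c_new = np.linalg.solve(B @ A, v)  # coordinates of the SAME vector v, new basis

# Components transform contravariantly: c_new = A^{-1} c_old
print(c_old, c_new)   # [2. 3.] [3. 2.]
```

The vector \(v\) itself never moved; only our description of it changed, which is exactly the "change of point of view" reading of a transformation.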

4-dimensional spheres, or haunting an ant

In this post, we will try to explain a little bit what it feels like to live in a 4-dimensional world. First, we will see that we could enter the inside of a normal sphere without breaking its wall. And secondly, we will try to describe what we would feel if our world were not the classic 3D universe, known as \(\mathbb{R}^3\), but another 3-dimensional space called \(S^3\). But before we start, let's name things. The circles we all know have a more technical name for mathematicians: \(S^1\). On the other hand, a sphere (the surface of a ball) is called \(S^2\). This similarity in names is not accidental: since we were children, we have intuitively known that circles and spheres have something in common. In fact, they are "the same" except for the detail that circles are 1-dimensional (you can only move forward or backward), and spheres are 2-dimensional (if you lived on one, you could move forward-backward, but also left-right. Oops, actually, we live on one). Now let's…

The Pleasure of Understanding

I know it's hard to understand why I spend so much time locked up, sitting, reading, and researching. It's a pleasure that's very difficult to describe to someone who hasn't experienced it firsthand, but I'd like to give it a try. Understanding something after spending a long time trying and failing is like trying to peel a slightly unripe orange with no tools except your nails. Suppose, in addition, that your nails have just been cut, so you can't puncture the skin to start peeling. You turn the orange over and over, but you can't seem to get started. You try to puncture the skin, but your nail slips; the orange is still unripe, and the skin is very smooth. But one day, you discover a tiny hole in the skin of the orange, a fissure you hadn't noticed before. Maybe the orange has ripened with time, or maybe your nails have grown, but the fact is that this time you're able to peel off a tiny piece of skin. That feeling is indescribable…

Why did physics go quantum?

We start by assuming a classical setup: we have pure states and mixed states (which are probability distributions over pure states). The observables are real-valued functions, which can be seen as part of a C*-algebra \(\mathcal{A}\). More on this here. We can reverse our point of view and think of all this as if states were functionals on \(\mathcal{A}\); specifically, they are normalized, positive, linear functionals. That is, a mixed state \(\omega\) is a probability distribution and an observable \(f\) is a random variable, so we consider the pairing \(\omega(f)=E_{\omega}(f)\) (the expected value). The pure states are those functionals which are also multiplicative (degenerate distributions, with all the probability concentrated on a single point). We decide to stick to this point of view, since it is equivalent to the original one (this is proven by the Gelfand-Naimark theorem and the Riesz-Markov representation theorem), and because at the end of the day observables are what…
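To make the "states as functionals" picture concrete, here is a small numerical sketch (my own toy example, on a three-point sample space): mixed states pair with observables via expectation, and the pure (delta) states are exactly the multiplicative functionals:

```python
import numpy as np

# Classical toy model on the three-point sample space {0, 1, 2}
f = np.array([1.0, 2.0, 3.0])    # an observable: a real-valued function
g = np.array([0.5, -1.0, 2.0])   # another observable

def pair(state, obs):
    """The pairing omega(f) = E_omega(f): expectation of obs under state."""
    return float(state @ obs)

omega = np.array([0.2, 0.5, 0.3])   # a mixed state: a probability distribution
delta = np.array([0.0, 1.0, 0.0])   # a pure state: all mass on one point

# Pure states are multiplicative functionals: delta(f g) = delta(f) delta(g)
print(pair(delta, f * g), pair(delta, f) * pair(delta, g))   # -2.0 -2.0

# A genuinely mixed state is linear and positive but NOT multiplicative
print(pair(omega, f * g), pair(omega, f) * pair(omega, g))   # 0.9 vs 0.42
```

Both states are normalized, positive, linear functionals on the algebra of observables; multiplicativity is exactly what singles out the pure ones.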

Justification of Bra-Ket notation in Quantum Mechanics

Given a complex Hilbert space \(H\) we can define the complex conjugate of \(H\), \(\overline{H}\), which is the same as \(H\) except that scalar multiplication by \(z\in \mathbb{C}\) is replaced by multiplication by the conjugate \(\overline{z}\). Obviously, there is a conjugate-linear isomorphism between \(H\) and \(\overline{H}\). On the other hand, we can define the continuous dual \(H^*\) of \(H\) as the vector space of all continuous linear maps from \(H\) to \(\mathbb{C}\). The inner product gives rise to a morphism from \(H\) to its dual \(H^*\) that is conjugate-linear: \[ \begin{array}{rcc} \Phi: H & \longrightarrow & H^*\\ v & \longmapsto & \langle v,\cdot \rangle \end{array} \] The Riesz representation theorem tells us that \(\Phi\) is indeed an isomorphism (this is only evident in the finite-dimensional case! See dual vector space). Moreover, it is an isometry with respect to the norm. In conclusion, \(H^* \cong\overline{H}\), since the composition of two conjugate-linear maps is linear…
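In finite dimensions all of this can be checked numerically. A minimal sketch (my own example, using the convention that \(\langle v,w\rangle\) is conjugate-linear in the first slot, which is what `np.vdot` computes) showing that \(\Phi(v)=\langle v,\cdot\rangle\) is conjugate-linear:

```python
import numpy as np

# Inner product on H = C^2, conjugate-linear in the first argument
inner = lambda v, w: np.vdot(v, w)      # np.vdot conjugates its first factor

# Phi : H -> H*, sending v to the functional <v, .>
Phi = lambda v: (lambda w: inner(v, w))

v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])
z = 2 + 3j

# Conjugate-linearity: Phi(z v) = conj(z) Phi(v), not z Phi(v)
print(Phi(z * v)(w), np.conj(z) * Phi(v)(w))

# Isometry ingredient: Phi(v)(v) = <v, v> = ||v||^2
print(Phi(v)(v).real, np.linalg.norm(v) ** 2)
```

The factor \(\overline{z}\) that appears is precisely the conjugation absorbed by passing to \(\overline{H}\), which is why \(\Phi\) becomes a genuine linear isomorphism \(\overline{H}\cong H^*\).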