My point is that a discrete-time system really cannot be interpreted within a vector-field framework. A "phase portrait" captures both position and momentum of a continuous-time system described by an ordinary differential equation. These momentum variables set up the "field" that gives structure to the phase portrait. In a discrete-time system, we don't have the same kind of momentum.
For a continuous-time system, I can plot a point at an individual position, and I can also then draw a vector pointing away from that point representing the velocity at that instant of time. It is these velocity vectors that are put together to make a phase portrait of the system.
For a discrete-time system, there is no point-based velocity. To calculate an approximate "velocity" at a point, I need to know the position at the next point. Then I can draw a line between those two points and take the "velocity" of the system to be the step from the first point to the second. However, if I have to know the next point anyway, it's more useful to just draw the second point.
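To make that concrete, here's a minimal Python sketch (the map f is a hypothetical choice just for illustration). Note that the only way to get a "velocity" at x[k] is to compute x[k+1] first, at which point you might as well just plot x[k+1]:

```python
# Minimal sketch: for a discrete map, the only "velocity" you can
# attach to a point is the secant to the NEXT point, which you must
# compute first anyway.

def f(x):
    # example scalar map (hypothetical choice for illustration)
    return 0.5 * x + 1.0

x = 0.0
for k in range(5):
    x_next = f(x)             # need the next point...
    velocity = x_next - x     # ...before any "velocity" exists
    print(f"x[{k}]={x:.4f}  approx velocity={velocity:.4f}")
    x = x_next
```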
Now, some discrete-time systems have a more predictable structure. For example, if you have a linear time-invariant discrete-time system like:
x[k+1] = M*x[k]
then the algebraic structure of the "M" matrix gives us insight into how trajectories will evolve. So for these special cases, it is possible to draw a kind of "phase portrait" for the discrete-time system. However, this is primarily because such a discrete-time system can be viewed as a sampled version of a continuous-time linear time-invariant system which does have a phase portrait.

So, for an arbitrary discrete-time system, the best thing you can do is explore trajectories from different initial conditions. Gradually, as you explore the space more and more, you may find boundaries of attractors (possibly strange attractors). A complication with discrete-time systems is that the "next" point may be very far from the "previous" point. Take, for example:
x[k+1] = -1.1*x[k]
If you start at x[0]=1, the trajectory will bounce between points clustered above 1 and points clustered below -1. In a continuous-time system, you would expect initial conditions above 0 to stay above 0 and initial conditions below 0 to stay below 0. That is, in a continuous-time system, you wouldn't imagine trajectories could cross x=0 (which is an equilibrium/fixed point of this system). However, the discrete-time system can jump wildly from point to point.

Take, for example, the Hénon map you mention. Wolfram's MathWorld has some nice plots:
http://mathworld.wolfram.com/HenonMap.html
The first pair of side-by-side plots are colored "according to the number of iterations required to escape". That is, the plots were generated by starting at several initial conditions and recording the resulting trajectories. Each "iteration" gives the next point from the previous point. For a while, a trajectory will stay around its initial condition. Eventually, it will escape and move away from the region. The regions are colored based on how many iterations (i.e., how many calculations after the initial condition) it took for the trajectory to leave the region.

The second pair of side-by-side plots show a SINGLE trajectory started at x[0]=0 and y[0]=0. Each point was recorded and gradually a pattern emerged. Notice how in the left plot the two regions appear to be disconnected. If you saw a phase portrait that looked like this in a continuous-time system, you would conclude that initial conditions within one region would not be able to join the other region for this set of parameters. However, this plot was generated from a SINGLE initial condition. So the plot jumps from points in the top left to points in the bottom right and back.
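For concreteness, here's a minimal Python sketch of generating that kind of single-trajectory plot, assuming the classic Hénon parameters a=1.4 and b=0.3 (the MathWorld plots may use different values):

```python
# Hénon map: x[k+1] = 1 - a*x[k]**2 + y[k],  y[k+1] = b*x[k]
# Classic parameter choice a=1.4, b=0.3 (the MathWorld plots may use
# other values); a single trajectory from (0, 0) traces out the attractor.

a, b = 1.4, 0.3
x, y = 0.0, 0.0
points = []
for _ in range(10000):
    x, y = 1.0 - a * x * x + y, b * x   # simultaneous update
    points.append((x, y))

# The trajectory stays bounded on the attractor even though consecutive
# points jump between distant regions of the plane.
print(max(abs(px) for px, _ in points))
```

Scatter-plotting those 10,000 points (rather than connecting them with lines) is what produces the attractor picture.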
So that's how you can explore something like the "phase space" of a discrete-time system: you can probe it with different initial conditions. For chaotic systems, you have to be very careful that you don't accidentally jump over an interesting region of initial conditions whose trajectories are qualitatively different.
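Here's a sketch of that kind of probing, again using the Hénon map. The escape radius, grid spacing, and iteration budget are all illustrative choices of mine, not the ones MathWorld used:

```python
# Probe the Hénon map with a grid of initial conditions, recording how
# many iterations each takes to "escape" (here: leave |x|,|y| < 10).
# Parameters and escape radius are illustrative choices, not MathWorld's.

def escape_time(x, y, a=1.4, b=0.3, radius=10.0, max_iter=100):
    for k in range(max_iter):
        if abs(x) > radius or abs(y) > radius:
            return k
        x, y = 1.0 - a * x * x + y, b * x
    return max_iter  # never escaped within the budget

# 21x21 grid of initial conditions covering [-2, 2] x [-2, 2]
grid = [(i * 0.2, j * 0.2) for i in range(-10, 11) for j in range(-10, 11)]
times = [escape_time(x0, y0) for x0, y0 in grid]

# Nearby initial conditions can have very different escape times.
print(min(times), max(times))
```

Coloring each grid point by its escape time is exactly how the first pair of MathWorld plots was built.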
As an aside, I guess it's also worth mentioning that many popular discrete-time chaotic maps are actually Poincaré maps of continuous-time dynamical systems. Poincaré maps have other names, including "return maps." Consider, for example, the planets as they orbit the sun. The actual orbits of the planets in three dimensions look like a tangled mess when you consider their histories over several cycles around the sun because each orbit is slightly different from the previous orbit (i.e., they aren't entirely planar). However, if you insert a plane perpendicular to their orbits at a single location, each planet pierces the plane at one point every cycle. The resulting shapes that are poked out of that cross section reveal structure in the orbits.
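For a concrete flavor of a return map, here's a minimal Python sketch of a stroboscopic Poincaré section: a damped, periodically driven pendulum integrated with a crude Euler scheme and sampled once per drive period. All parameter values here are arbitrary illustrative choices, not from any particular source:

```python
import math

# Stroboscopic Poincaré section of a damped, periodically driven
# pendulum: integrate theta'' = -c*theta' - sin(theta) + F*cos(w*t)
# and record (theta, theta') once per drive period 2*pi/w.
# All parameter values are illustrative.

c, F, w = 0.25, 1.0, 2.0 / 3.0
dt = 0.001
period = 2.0 * math.pi / w
steps_per_period = int(round(period / dt))

theta, omega = 0.2, 0.0
t = 0.0
section = []
for cycle in range(50):
    for _ in range(steps_per_period):
        # simple explicit Euler step (fine for a sketch, not for accuracy)
        dtheta = omega
        domega = -c * omega - math.sin(theta) + F * math.cos(w * t)
        theta += dt * dtheta
        omega += dt * domega
        t += dt
    section.append((theta, omega))  # one piercing of the section per cycle

print(len(section))
```

Plotting the (theta, theta') pairs in `section` instead of the full continuous trajectory is what turns the tangled continuous-time orbit into a discrete-time map you can study point by point.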
I hope that helps! --
Ted
Personal weblog of Ted Pavlic. Includes lots of MATLAB and LaTeX (computer typesetting) tips along with commentary on all things engineering and some things not. An endless effort to keep it on the simplex.
Showing posts with label physics. Show all posts
Thursday, January 24, 2013
Discrete-time Phase Portraits?
I was contacted recently by e-mail asking how to produce a phase portrait of a discrete-time system. In my initial response, I explained that a true "phase portrait" wasn't defined for discrete-time systems because the technical notion of a phase portrait depends on a special structure that comes along with ordinary differential equations. The original poster needed some additional clarification, and so I sent a second e-mail that I have posted below. It touches a little bit on the original poster's question, it comments on differences between discrete-time and continuous-time systems, it talks a bit about chaos, and it gives a brief description of Poincaré/return maps that are often used in the study of approximately periodic systems.
Thursday, September 01, 2011
"Dark Matter is an Illusion" summary in National Geographic News gets something a little wrong
There was an interesting article from National Geographic News yesterday:
"Dark Matter Is an Illusion, New Antigravity Theory Says"I thought I'd post a link to the primary source here. I also wanted to point out that the explanation Ker Than gave got something really important wrong and consequently diminished the elegance of the proposed theory.
by Ker Than
Here's the primary source:
"Is dark matter an illusion created by the gravitational polarization of the quantum vacuum?"Ker Than, the National Geographic News reporter, got it a little mixed up in this part of the NatGeo article:
by Dragan Slavkov Hajdukovic
Astrophysics and Space Science 334(2):215--218
DOI: 10.1007/s10509-011-0744-4
All of these electric dipoles are randomly oriented—like countless compass needles pointing every which way. But if the dipoles form in the presence of an existing electric field, they immediately align along the same direction as the field. According to quantum field theory, this sudden snapping to order of electric dipoles, called polarization, generates a secondary electric field that combines with and strengthens the first field.

This electric analogy states that electric dipoles align and strengthen electric fields, but that's incorrect. Electric dipoles weaken surrounding electric fields. In particular, the positive end of the dipole goes toward the "negative end" of the field and the negative end of the dipole goes toward the "positive end" of the field. So the two fields subtract from each other, not reinforce. This is summarized in the primary source (quoted below).
[ note that magnetic dipoles align and reinforce surrounding magnetic fields because there are no magnetic monopoles. That is, magnetic field lines are continuous; they don't terminate. Consequently, magnetic dipoles are torqued to align their fields. Electric dipoles are driven by the motion of their monopolar ends ]
What Ker Than missed was that in this model of "gravitational charge", it is the case that opposites repel and likes attract. That's why you (matter) are attracted to earth (also matter). However, anti-matter and matter would repel each other. Moreover, if you had a matter–antimatter virtual pair (as quantum field theory says you do in a vacuum of space), that dipole would align because its "positive" end would be pulled toward the positive end of the gravitational field (and vice versa for its negative end). This alignment would strengthen the resulting field.
Here's the relevant snippet from the bottom of the first column of page 2 of the article:
In order to grasp the key difference between the polarization by an electric field and the eventual polarization by a gravitational field, let's remember that, as a consequence of polarization, the strength of an electric field is reduced in a dielectric. For instance, when a slab of dielectric is inserted into a parallel plate capacitor, the electric field between plates is reduced. The reduction is due to the fact that the electric charges of opposite sign attract each other. If, instead of attraction, there was repulsion between charges of opposite sign, the electric field inside a dielectric would be augmented. But, according to our hypothesis, there is such repulsion between gravitational charges of different sign.
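To put rough numbers on the capacitor example above: the field between the plates drops by the dielectric constant κ, i.e. E = E0/κ, which is the standard textbook result (the numeric values here are arbitrary illustrative choices):

```python
# Textbook illustration of the capacitor example: inserting a dielectric
# of relative permittivity kappa reduces the field between the plates
# from E0 to E0/kappa, because the induced dipoles oppose the applied
# field. Numbers here are arbitrary illustrative values.

epsilon_0 = 8.854e-12   # F/m, vacuum permittivity
sigma = 1.0e-6          # C/m^2, free surface charge density (illustrative)
kappa = 4.0             # relative permittivity of the dielectric slab

E0 = sigma / epsilon_0  # field with vacuum between the plates
E = E0 / kappa          # reduced field inside the dielectric

print(E < E0)  # the polarization field subtracts from the applied field
```

Hajdukovic's point is that if opposite "charges" repelled instead of attracted, that subtraction would flip to an addition, which is his hypothesis for gravitational charges.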
Sunday, February 04, 2007
Do (Reuters) reporters know absolutely nothing about everything?
UPDATE: It was pointed out to me by a post on Nanotechnology Today that the professor who heads up the research group that built the "demon" said (speaking about J. C. Maxwell):

"As he predicted, the machine does need energy and in our experiment it is powered by light. While light has previously been used to energise tiny particles directly, this is the first time that a system has been devised to trap molecules as they move in a certain direction under their natural motion. Once the molecules are trapped they cannot escape."

Again, what's going on at Reuters?! This quote is EXACTLY the opposite of their summary (quoted below).
It's just silly that the article "1867 nanomachine now reality" has gone the extra mile to be completely worthless. It's also silly that CNN has decided to put this in their "Offbeat news."
The experiment described in the article involves Maxwell's Demon, which is a thought experiment involving a "paradox" of statistical thermodynamics. However, nowhere in the article is this paradox ever mentioned. In fact, they go so far as to say this:
His mechanism traps molecular-sized particles as they move. As Maxwell had predicted long ago, it does not need energy because it is powered by light.
Now, I'm guessing that the scientists involved said that Maxwell's paradox was a paradox because his little demon did not require additional energy. However, this device doesn't cause any paradox because their demon DOES require additional energy IN THE FORM OF LIGHT. As I explain to my fourth graders, light is energy. Nearly all of the things that end in "cycle" in the study of earth and life science are driven by the energy brought from the sun in the form of light.
Anyway, the article completely misses the point and is filled with lots of misunderstandings and statements which could generously be called wrong.
Do they have editors at Reuters? To cut costs did they just fire them all and hire the cast of Who's the Boss? instead?
Labels: cnn, engineering, entropy, maxwell's demon, news, physics, reuters, science, statistics, thermodynamics