Tuesday, November 08, 2011

Building ElectricSheep on Fedora 16 (and enabling in KDE4)

UPDATE: Apparently these steps also work on Fedora 17. Moreover, if you use Gnome3, there are some additional instructions that can help you get ElectricSheep set up.
Quick link: Updated makeSheepFedora16.sh install script
A while back, I wrote a post summarizing how to get ElectricSheep running on Fedora 15 with KDE 4.6.5. Today, I "upgraded" to Fedora 16 and was surprised to find that library updates were incompatible with ElectricSheep. Symbol mismatches made it impossible to simply symlink old shared libraries to new shared libraries. Moreover, the version of electricsheep that built fine on Fedora 15 would not build on Fedora 16.

So I made some modifications to Tait Clarridge's script to fix the build. My hacks seem to work for me so far; hopefully they do not introduce regressions into the already old version of electricsheep.

So to get ElectricSheep running on your Fedora 16 system, try downloading and running my modified makeSheepFedora16.sh script. As a bonus, it will (hopefully) install ElectricSheep as a screensaver if you are running KDE. However, the same DPMS caveats about mplayer from the old post still apply until Fedora updates its mplayer to a recent version (why does it take these people so long?).

Thursday, October 27, 2011

"Your mamma's so dumb she can program in Arduino!"

Interesting and slightly amusing letter from IEEE Spectrum Editor-in-Chief Susan Hassler:
Dear Members and Readers,

Please accept our sincere apologies for the headline in today's Tech Alert: "With the Arduino, Now Even Your Mom Can Program." The actual title of the article is "The Making of Arduino."

I'm an IEEE member, and a mom, and the headline was inexcusable, a lazy, sexist cliché that should have never seen the light of day. Today we are instituting an additional headline review process that will apply to all future Tech Alerts so that such insipid and offensive headlines never find their way into your in-box.

Spectrum's insistence on editorial excellence applies to all its products, including e-mail alerts. Thank you for bringing this error to our attention. If you have any additional comments or recommendations, do not hesitate to contact me or other members of the editorial staff.

Sincerely yours,

Susan Hassler
Editor in Chief
IEEE Spectrum
[ the article that caused the fuss: http://spectrum.ieee.org/geek-life/hands-on/the-making-of-arduino ]

Thursday, September 29, 2011

Why symbiosis is not a good example of group selection

[ In a Google+ post, someone asked whether symbiosis was a good example of group selection. I responded in a comment, and another comment asked me to expand my response a little bit in my own post. So here is a copy of that post (with a few more hyperlinks). ]

Part 1: What is group selection?

Typically "group selection" doesn't cross species boundaries. That is, group selection refers to the proliferation of a particular form of a gene, otherwise known as an "allele", due to its benefits to groups of individuals which share that allele despite the individual costs of having that allele. It may help to consider the basic group-selection argument for the evolution of altruism (i.e., the evolution of behaviors that are costly to an individual and yet beneficial to a different unrelated individual). Before that, consider why we wouldn't expect altruistic alleles to have strong representation in a population.

For every gene or group of genes, there can be many different variations (alleles). Some of those variations will be deleterious to an individual, and so you would expect the relative representation of those deleterious variations to decrease over generations. So imagine that one of those alleles encoded an altruistic trait that caused an individual to do something costly for the benefit of another (e.g., helping a stranger understand group selection with no expectation of future payoff). Individuals with that allele are suckers. Those without that allele instead focus on tasks that return direct benefit to themselves, and that direct benefit would pay off with greater production of offspring that share the non-altruistic allele. When an altruist met a non-altruist, the benefit from the altruist would increase the representation of the non-altruist's alleles in the next population while decreasing the representation of its own. So we would expect altruistic alleles to fade away into obscurity. Moreover, the benefit from all of the altruists would diffuse across the variety of alleles rather than being concentrated on just the altruistic ones.

However, what if that altruistic allele also encoded a behavior that would seek out others with that same allele? This non-random association means that each individual who helps another does actually help to increase the productivity of that allele. That is, even though there is a cost to the individual doing the altruistic task, the benefit going to the other individual is felt by the other copy of the same allele in that different (and unrelated) individual. So when these altruists group together, altruistic benefits do not diffuse; they are captured within the group. Moreover, the group's synergy can cause it to be more productive than the remaining groups of non-altruists. Consequently, the altruistic allele not only persists in the population, but its representation can grow because there is a differential benefit between altruistic and non-altruistic groups. It is this differential benefit between groups that is group selection.
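
One common way to compress this argument (my addition, a standard population-genetics shorthand rather than anything from the original post) is a Hamilton-style condition: if r is the degree of assortment (the probability that the recipient of help also carries the altruistic allele), b the benefit conferred, and c the cost paid, the allele can increase in frequency roughly when

```latex
% Hamilton-style condition for an altruistic allele to spread:
%   r = assortment (chance the recipient shares the allele),
%   b = benefit to the recipient, c = cost to the actor.
\[
  r\,b > c
\]
```

With random mixing (r near 0) the condition fails, which matches the "suckers" argument above; the allele-seeking behavior is precisely what raises r.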

Part 2: Symbiosis and Mutualism

A symbiotic relationship between members of different species is not group selection (in general) because it does not posit a shared allele that may be deleterious in an individual but beneficial in a group. That is, there is no group synergy mitigating individual costs by generating benefits elsewhere that help to support alleles that would otherwise naturally decay. When species are mixed within a population of interest, the analysis is a bit different because alleles cannot flow across the species barrier (except in special cases).

For example, even if an allele existed across species (e.g., an allele for a gene shared between humans and bonobos), speciation would in general prevent group-selection gains because there would be no way for increased numbers of the allele in one species to transfer to the other species. Imagine that altruists in one species seek out altruists in the other species. The result could be a larger increase in altruist representation in one species than in the other, leaving an altruist surplus. Those surplus altruists would have no choice but to associate with non-altruists in the other species, so some altruistic benefit would diffuse to non-altruists. However, if the group were all of one species, there would be no surplus altruists, and the altruistic benefit would stay concentrated on the altruistic allele.

However, most examples of symbiosis are not altruistic. Instead, they are mutualistic. That is, the behavior does benefit another, but that is a possibly unavoidable side effect of an action that benefits the individual doing the behavior. For example, if I'm driving through a parking lot looking for an empty space to park, I am revealing information to my competitors (other drivers) about where empty spots are not. I don't want to help the competing drivers, but it is unavoidable because they can see me go down an aisle of the parking lot and not find a spot. Consequently, they do not go down that same aisle. Of course, I use their searching behavior to inform my choice of the next aisle. So we are doing "cooperative search" only because the behaviors have mutual benefits. The same goes for many symbiotic relationships among individuals of different species.

Consider a remora ("sharksucker"). It's a small fish that attaches to a host (fish, whale, turtle, etc.). It can receive nutrients from on or around the host, and it can be protected from predators that avoid the host. In some cases, the host could eat the remora, but the remora is so small that it may not be worth the effort. Some hosts actually receive a small benefit (cleaning, for example) from the remora. Regardless, the remora experiences very little cost and plenty of benefit, and the host experiences very little cost and possibly some benefit. So it's no surprise that this behavior evolved. You don't need any fancy mathematical model to show how this is possible: when the benefits align like this, it's natural to expect natural selection to favor the behavior.

Part 2.5: Symbiosis and Co-evolution

Having said all of that, symbiosis can lead to elegant examples (or at least suggestions) of co-evolution, which describes how a change in one species can lead to a change in other species. In particular, natural selection on different species creates a feedback across species. One species is the ecological background for another species, and so as each species changes it creates new niches (and destroys old ones) for other species. So the evolution of one species can guide the evolution in another. But I think this post is long enough. :)

More information

Wikipedia does a pretty good job on these particular subjects. Check 'em out there.

( I have also mirrored this content on a post on my website. )

Tuesday, September 20, 2011

Duplex Printing from the Command Line to an HP LaserJet Printer

My department has several HP LaserJet printers available to access within the department via LPR and SMB. However, if you are working from a personal laptop connected to the university wireless, those servers will not be available to you. Instead, you must print by piping your documents through SSH to a department server that does have access.

Unfortunately, doing duplex printing (i.e., two-sided printing) to an HP LaserJet printer from the command line is not trivial. So, using the GSview documentation as a guide, I put together a small bash script (available for download as hplj_duplex_print) that does the trick.
#!/bin/bash

declare -a input_file
if test $# -eq 0; then
    input_file=("-")
else
    input_file=("$@")
fi

( echo -e "\e%-12345X@PJL JOB"
  echo "@PJL ENTER LANGUAGE = POSTSCRIPT"
  echo "<< /Duplex true /Tumble false >> setpagedevice"
  gs -sDEVICE=psmono -sPAPERSIZE=letter -q -sOutputFile=- \
    -dNOPAUSE -dBATCH "${input_file[@]}"
  echo -e "\e%-12345X@PJL EOJ"
  echo -e "\e%-12345X"
  ) | lp -d your_HP_printer_spool_name
You should replace the your_HP_printer_spool_name with your printer's spool name. You might want to tweak some of the options (details below), but the general structure will remain the same. The opening and closing escape sequences communicate to the HP LaserJet printer that a PostScript file is coming. Then the setpagedevice PostScript directive instructs the printer to use its duplex module.

Regarding tweaking:
  • Again, make sure to change your_HP_printer_spool_name to your spool name. You may also want to change lp to lpr, but you will likely have to change -d to -P then.
  • You may want to change the gs (GhostScript) options to suit your purposes. For example, you can change the psmono device to one of the other GhostScript devices like psgray or psrgb.
  • The Tumble switch determines whether to do short-edge (true) or long-edge (false) duplex printing, and so this script defaults to the latter case. If you prefer vertical flipping, change the /Tumble false to /Tumble true. You might also make this a configurable command-line switch on the script.
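
To see what the wrapper actually sends without wasting paper, you can factor the PJL framing into a function and inspect its output. This dry-run helper and its name (duplex_wrap) are my own illustration, not part of the original script:

```shell
#!/bin/bash
# duplex_wrap (hypothetical name): wrap PostScript from stdin in the same
# PJL escape sequences the script uses, writing to stdout instead of lp.
duplex_wrap() {
  printf '\033%%-12345X@PJL JOB\n'          # Universal Exit Language + job start
  echo '@PJL ENTER LANGUAGE = POSTSCRIPT'   # tell the printer PostScript follows
  echo '<< /Duplex true /Tumble false >> setpagedevice'
  cat                                       # the PostScript body (gs output in the real script)
  printf '\033%%-12345X@PJL EOJ\n'          # end of job
  printf '\033%%-12345X\n'                  # final UEL reset
}

# Count the PJL lines in the framing around a trivial document:
echo '%!PS-Adobe-3.0' | duplex_wrap | grep -c '@PJL'   # prints 3
```

When you're happy with the output, pipe it to `lp` as in the script above, or through SSH from a laptop on the wireless, e.g. `duplex_wrap < doc.ps | ssh deptserver lp -d your_HP_printer_spool_name` (server name is a placeholder).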

Thursday, September 01, 2011

"Dark Matter is an Illusion" summary in National Geographic News gets something a little wrong

There was an interesting article from National Geographic News yesterday:
"Dark Matter Is an Illusion, New Antigravity Theory Says"
by Ker Than
I thought I'd post a link to the primary source here. I also wanted to point out that the explanation Ker Than gave got something really important wrong and consequently diminished the elegance of the proposed theory.

Here's the primary source:
"Is dark matter an illusion created by the gravitational polarization of the quantum vacuum?"
by Dragan Slavkov Hajdukovic
Astrophysics and Space Science 334(2):215--218
DOI: 10.1007/s10509-011-0744-4
Ker Than, the National Geographic News reporter, got it a little mixed up in this part of the NatGeo article:
All of these electric dipoles are randomly oriented—like countless compass needles pointing every which way. But if the dipoles form in the presence of an existing electric field, they immediately align along the same direction as the field.

According to quantum field theory, this sudden snapping to order of electric dipoles, called polarization, generates a secondary electric field that combines with and strengthens the first field.
This electric analogy states that electric dipoles align and strengthen electric fields, but that's incorrect. Electric dipoles weaken surrounding electric fields. In particular, the positive end of the dipole goes toward the "negative end" of the field and the negative end of the dipole goes toward the "positive end" of the field. So the two fields subtract from each other, not reinforce. This is summarized in the primary source (that I'll quote below).

[ note that magnetic dipoles align and reinforce surrounding magnetic fields because there are no magnetic monopoles. That is, magnetic field lines are continuous; they don't terminate. Consequently, magnetic dipoles are torqued to align their fields. Electric dipoles are driven by the motion of their monopolar ends ]

What Ker Than missed was that in this model of "gravitational charge", it is the case that opposites repel and likes attract. That's why you (matter) are attracted to earth (also matter). However, anti-matter and matter would repel each other. Moreover, if you had a matter–antimatter virtual pair (as quantum field theory says you do in a vacuum of space), that dipole would align because its "positive" end would be pulled toward the positive end of the gravitational field (and vice versa for its negative end). This alignment would strengthen the resulting field.

Here's the relevant snippet from the bottom of the first column of page 2 of the article:
In order to grasp the key difference between the polarization by an electric field and the eventual polarization by a gravitational field, let's remember that, as a consequence of polarization, the strength of an electric field is reduced in a dielectric. For instance, when a slab of dielectric is inserted into a parallel plate capacitor, the electric field between plates is reduced. The reduction is due to the fact that the electric charges of opposite sign attract each other. If, instead of attraction, there was repulsion between charges of opposite sign, the electric field inside a dielectric would be augmented. But, according to our hypothesis, there is such repulsion between gravitational charges of different sign.
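
The standard electrostatics result behind this (my summary of textbook material, not a quote from the paper) is that polarization screens the applied field inside a dielectric:

```latex
% Field inside a linear dielectric with relative permittivity \epsilon_r > 1:
\[
  E = \frac{E_0}{\epsilon_r} < E_0
\]
% Under Hajdukovic's gravitational-charge hypothesis, where opposite
% "gravitational charges" repel, the analogous polarization flips sign,
% so the field is augmented rather than reduced.
```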

Why GMail's show-if-unread is NOW useless with nested labels

Once upon a time, when they were both in "GMail Labs", the show-if-unread feature and nested labels features worked together seamlessly.
  • nested labels: You could create a nested label (like a subfolder) by adding slashes in folder names. You would create two folders called "Parent/ChildA" and "Parent/ChildB", and they would be displayed as "ChildA" and "ChildB" underneath a single "Parent" that you could collapse and expand.
  • show-if-unread: Only labels that had unread messages in them would show up in the left bar. To see all of your labels, you could use the "more" link which would show you a full list.
In the case where a nested label had unread messages in it, it would show up in its flat form (if I recall correctly) in the list. So you'd see "Parent/ChildA" but "ChildB" would still be nested under "Parent" in the "more". That was fantastic.

However, eventually both labs features were integrated into production GMail, and they messed it all up. Now, ostensibly to avoid revealing the slash form and to avoid having parent labels repeated in both the unread and read lists, they've made it impossible to apply show-if-unread to nested labels. Consequently, if any nested label has unread messages in it, the parent and all of its nested labels show up in the unread list. So you get things like this:
Obviously, that defeats the whole purpose of show-if-unread. I'm forced to look at all of those read nested labels just because some of their "siblings" are unread.

So Google has taken two nice features and combined them into one terrible, useless, awful thing.

Wednesday, August 31, 2011

Electric Sheep on KDE 4.6.5 with Fedora 15 (using Intel graphics card)


This post begins with a few important updates; scroll down to see the bulk of the original post.

Fedora 16 update: If you are using Fedora 16, then see updates from a newer post about how to get ElectricSheep built and running.

DPMS Update (getting monitors to sleep/standby/suspend/turn off on schedule again): It turns out that electricsheep is preventing my monitors from getting DPMS sleep/standby/off signals because of a bug in mplayer that was fixed today in SVN r34074. If you pull down the updated mplayer and build it yourself, your DPMS problems with electricsheep will be fixed. If you aren't willing to pull down the SVN source and build the fixed binary, you could use something like this sample electricsheep-wrapper script that starts both electricsheep and a secondary process that reads waiting times from xset q, sleeps for those times, and then issues the appropriate xset dpms force commands. To use the hack, all references to electricsheep in KDE or xscreensaver configuration files must be changed to electricsheep-wrapper and the electricsheep-wrapper script has to be installed in a directory in the PATH. Or, again, you can just wait for mplayer to get patched in your Linux distribution.
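
For the curious, the wrapper's approach can be sketched roughly like this (my reconstruction of the idea, not the actual electricsheep-wrapper script; the function name is made up):

```shell
#!/bin/bash
# Sketch of an electricsheep-wrapper: since the buggy mplayer keeps DPMS
# from firing, read the configured timeouts from `xset q` and force the
# corresponding power states manually while the screensaver runs.

parse_dpms_timeouts() {
  # Pull "Standby: N  Suspend: N  Off: N" (seconds) from `xset q` output.
  awk '/Standby:/ { print $2, $4, $6 }'
}

electricsheep "$@" &
sheep_pid=$!

read -r standby suspend off < <(xset q | parse_dpms_timeouts)
( sleep "$standby"               && xset dpms force standby
  sleep $(( suspend - standby )) && xset dpms force suspend
  sleep $(( off - suspend ))     && xset dpms force off ) &
timer_pid=$!

wait "$sheep_pid"        # when electricsheep exits...
kill "$timer_pid" 2>/dev/null   # ...clean up the timer
```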

KDE update: electricsheep's SourceForge SVN includes a Desktop file for KDE, and so I've updated the stuff below to use it instead. It's probably a good idea to check out their updated SVN repository at GoogleCode to see if the support files have improved.

Multiple Monitors update: KDE will stretch one electricsheep across all of your monitors. If you instead want electricsheep to put a different instance on each monitor, use xscreensaver instead of kscreensaver. To do so, you'll have to follow the instructions for using xscreensaver on KDE, which are also on the xscreensaver man page. I modified step 4 to use my own custom system-level kscreenlocker that doesn't force everyone on the system to use xscreensaver. Additionally, to get electricsheep to show up in the xscreensaver-demo menu, you need to not only install the relevant electricsheep.xml file (find it in the source repos or build it yourself), but you also have to add a line to your own ~/.xscreensaver configuration file. I don't know why the former doesn't generate the latter. On one of my machines, it does. On the other, it doesn't.

Of course, YMMV.


I installed electric sheep today because I was bored of my ASCIIQuarium KDE screensaver and not thrilled about the other options (some of which bail out on my dual screen Intel setup). [ If you're not familiar with Electric Sheep, you should check out the Electric Sheep Wikipedia page which discusses how the screensaver evolves over time. It's a distributed computing project, and the genetic algorithm that guides the evolution actually takes input from Electric Sheep users (well, not me, because I don't have the keyboard support to "up" and "down" the sheep I see). So the screen saver is constantly downloading and processing new AVI's, generating new content, and contributing it back to the network. I like it because it's pretty screen saver diversity at the cost of a few computing cycles and some disk space. ] It wasn't so bad, but it also wasn't trouble free. Here's what I did (which almost worked entirely without me having to do anything special):
  1. Use the Fedora-specific script from Tait Clarridge's page on downloading and installing electric sheep in Fedora (if you are running Fedora 16, see my updated script instead).
  2. Learn from Giulio Guzzinati about the need to add an electric sheep KDE Desktop file to get the screensaver into the KDE Screen Saver configuration tool.
Unfortunately, Giulio Guzzinati's desktop file didn't work for me, and so I had to build my own using the desktop file inside the electricsheep distribution. Here is the file that ended up "working" for me (downloadable as electricsheep.desktop; I copied it from electricsheep.desktop.kde in the SVN repo).
[Desktop Entry]
Exec=electricsheep
Icon=kscreensaver
Type=Service
X-KDE-ServiceTypes=ScreenSaver
TryExec=xscreensaver
Actions=InWindow;Root;Setup;
X-KDE-Category=Fractals Screen Savers
X-KDE-Type=xv
Name=ElectricSheep

[Desktop Action Setup]
Exec=electricsheep-preferences
Name=Setup...

[Desktop Action InWindow]
Exec=electricsheep -window-id %w
Name=Display in Specified Window
NoDisplay=true

[Desktop Action Root]
Exec=electricsheep -window-id %w
Name=Display in Root Window
NoDisplay=true

X-Ubuntu-Gettext-Domain=desktop_kdeartwork
As explained in Giulio Guzzinati's post, you can place that file in
/usr/share/kde4/services/ScreenSavers/
You can probably put it in
~/.kde/share/services/ScreenSavers/
as well (you might have to create that directory first) if you'd rather do something local. That put the Electric Sheep across both of my monitors. If you'd rather put a separate electricsheep in each monitor, use xscreensaver instead of KDE's screen saver. If you're having trouble getting your monitors to go to sleep while electricsheep is running, then you need to get an updated mplayer that fixes the bug that causes that problem (see the updates at the top of this post for more information). Alternatively, you can use a hack like this electricsheep-wrapper script to re-enable DPMS-like timeouts during the screensaver. To use the hack, all references to electricsheep in KDE or xscreensaver configuration files must be changed to electricsheep-wrapper and the electricsheep-wrapper script has to be installed in a directory in the PATH. However, it probably won't be too long until the mplayer DPMS fix reaches your Linux distribution.
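
For reference, the local (per-user) install described above amounts to something like this (paths from the post; electricsheep.desktop is whatever you named the file):

```shell
#!/bin/bash
# Install the desktop file for a single user (create the directory first).
mkdir -p ~/.kde/share/services/ScreenSavers
cp electricsheep.desktop ~/.kde/share/services/ScreenSavers/

# Or system-wide (requires root):
# sudo cp electricsheep.desktop /usr/share/kde4/services/ScreenSavers/
```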

Monday, August 29, 2011

3mindme is shutting down! Old owner recommends NudgeMail

Today I received the quoted e-mail below from David Barrett (@quinthar). His work as CEO of Expensify and some complications with maintaining the longevity of 3mindme have made him decide to shut down the 3mindme service effective immediately. He recommends the very similar but commercially operated service NudgeMail as a substitute. You can find other substitution options at an old post of mine discussing these services.
Hi! I'm David, the guy who made 3mindme. I'm very sad to inform you that I'm shutting down the service permanently, starting immediately. I strongly encourage you to check out a similar service at http://nudgemail.com -- it's essentially the same thing as 3mindme, but better.

Q: What will happen to the emails I've scheduled for the future?
A: After I send this email to all users, I'm going to send all future-dated emails immediately. My goal has always been to return every email at precisely the right time. Unfortunately, I'll need to make due with simply returning them at all.

Q: Can I do anything to convince you to keep 3mindme alive?
A: Probably not. It's been a fun service to operate these many years, but as CEO of Expensify (https://expensify.com - Expense reports that don't suck!) I just don't have the time to devote to 3mindme.

Q: Why now, after years of continuous operation?
A: Spam. I recently learned that many users (myself included) were having their emails silently dropped, meaning they got no error response, but the message was never scheduled for future delivery. Solving this problem is very difficult and time consuming, and I'd rather shut down 3mindme than leave it in a non-functioning state.

I think that's all. If you have any questions, feel free to respond to this email and I'll do what I can to help. Otherwise, give NudgeMail a shot, and keep Expensify in mind for your next expense report!

-david
Follow me at http://twitter.com/quinthar
So that is very sad. 3mindme was a nice server-side mail-me-back reminder service that didn't have the ugly commercial taste of pretty much every other alternative.

So bye-bye 3mindme; we'll miss you.

Thursday, August 25, 2011

Using \gobblepars to prevent LaTeX from adding empty lines at gaps

While searching for something else, I came upon a StackOverflow question from a while ago that asked how to prevent LaTeX from adding a \par at particular blank lines in the source code. The asker didn't want to remove blank lines everywhere; he just wanted to get rid of the paragraph breaks at certain spots.

Of course, you can use comments to do this:
\somemacro{}
%
Some text
However, a lot of people don't like the look of that, and some of the responders on StackOverflow gave alternatives that seemed ugly and half-baked. So I came up with \gobblepars, a macro you can add to the end of your own macro definitions to cause them to eat up all trailing pars, or that you can use explicitly. For example:
\somemacro{}\gobblepars

Some text
would do the same as the commented stuff above. Moreover, if you had control over \somemacro, you could build \gobblepars into it (in fact, even if you didn't have control, you could use \let and \def to augment an existing macro with a trailing \gobblepars, but that's a different topic).

Here's the simple definition of \gobblepars (put this in the preamble of your LaTeX document):
\makeatletter
\newcommand\gobblepars{%
    \@ifnextchar\par%
        {\expandafter\gobblepars\@gobble}%
        {}}
\makeatother
So that's pretty simple. It checks for a \par (which includes a blank line in the source) trailing it. If it finds one, it gobbles it up (i.e., gets rid of it) and then calls itself again. This process will continue until it finds something other than a \par. Hence, it "gobbles" strings of "pars".
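
To see the effect, a minimal document exercising the macro might look like this (my own toy example, with a made-up \somemacro, not from the original post):

```latex
\documentclass{article}
\makeatletter
\newcommand\gobblepars{%
    \@ifnextchar\par%
        {\expandafter\gobblepars\@gobble}%
        {}}
\makeatother
% A hypothetical macro with \gobblepars built into its tail:
\newcommand\somemacro{[macro output] \gobblepars}
\begin{document}
\somemacro

This text continues the same paragraph as the macro output,
despite the blank line above, because the \par was gobbled.
\end{document}
```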

Wednesday, August 24, 2011

Update to my LaTeX CV templates: Space allowed after sections now!

In preparation for setting up MultiMarkDown (MMD) to write my CV for me, I've been thinking about ways to refactor my old résumé/CV LaTeX templates to make them look a little cleaner. A fix I came up with tonight should help with that, and I think it will also make the templates easier for others to work with even if they're not doing anything with MMD.

In particular, the \section macro used to be renewed as a \marginpar with some other ugly stuff. Putting the sections in the margins caused problems because people like to put spaces after the sections, which generates a \par that means the section content will not be aligned with the section heading in the margin note. So the old way I got around that problem was to force people not to use spaces between \section and the section content. If they needed visual space in their source code, they could use comments to do that.

Well, I've swapped out that ugly definition for a slightly less ugly one that uses \llap (with a \parbox wrapped in \smash) and some creative gobbling. In particular,
% The section headings
%
% Usage: \section{section name}
\renewcommand{\section}[1]{\pagebreak[3]%
    \hyphenpenalty=10000%
    \vspace{1.3\baselineskip}%
    \phantomsection\addcontentsline{toc}{section}{#1}%
    \noindent\llap{\scshape\smash{%
        \parbox[t]{\marginparwidth}{\raggedright #1}}}%
    \vspace{-\baselineskip}\par}
The \vspace and \par combination should ensure that an additional \par isn't added by LaTeX. So before, you were restricted to things like...
\section{Stuff} \begin{bibsection} %...
and
\section{Stuff}
%
Junk
But now you don't have to be so careful about the whitespace. You are allowed:
\section{Stuff}

\begin{bibsection} %...
and
\section{Stuff}

Junk
So that's cool. Much more readable.

You can get my most recent LaTeX CV templates at their page on my website. You can find a detailed history of the source code changes within my Mercurial repositories of documents.

(updated: new \gobblepars allows for arbitrary amount of space after each \section)
(updated: replaced \gobblepars with \par hack that still allows for arbitrary amount of space after each \section but also prevents lists from adding a \par when placed directly after a \section; consequently, adjusted all of the lone-lists to get rid of their leading negative vertical space (probably can get rid of them now, actually). I'm trying to shift toward using conventional lists (or perhaps conventional modifications of them from paralist or enumitem))

Tuesday, August 23, 2011

The maximum number of matrix dimensions in MATLAB

[ Background: I was asked what the maximum number of matrix dimensions was in MATLAB today. I responded as follows. ]

You are only limited by the amount of memory available and the maximum number of ELEMENTS (as opposed to dimensions) in a matrix. The actual number of dimensions is just a detail about how the memory is indexed. You can reshape any existing matrix to any number of dimensions (I'll give details below). You can only create a new matrix if it abides by the memory and element limits that vary by computer.

To find out the maximum number of elements for a matrix on your computer, use the MATLAB command "computer" (do "help computer" for details). For example:
[~,maxsize,~]=computer
tells me that I can have 2.8147e+14 elements in matrices on my computer. So I better be sure that:
(number of rows)
   × (number of columns)
   × (number of cubes)
   × (number of 4-th dimensional thinggies)
   × (...)
is less than that number.

To find out about memory limits on your system, see the command "memory" ("help memory" or "doc memory"). Unfortunately, "memory" may not be available on your system. Alternatively, you can see:

http://www.mathworks.com/support/tech-notes/1100/1110.html

for information about memory limits in MATLAB. For information about the maximum number of elements (and the command "computer" that I discussed above), see (UPDATE: MATLAB has moved this page, and this link doesn't land in the right spot anymore):

http://www.mathworks.com/support/tech-notes/1200/1207.html#15

Regarding dimensions, you can use the command "reshape" to re-index any existing matrix. For example, if I start with the column vector:
A=ones(100,1)
I can turn it into a row vector:
newA = reshape(A, 1, 100)
or a matrix of any number of dimensions so long as the number of elements is still 100.
newA = reshape( A, 2, 2, 25 )
newA = reshape( A, 1, 1, 1, 1, 1, 1, 1, 1, 1, 100, 1 )
newA = reshape( A, 1, 1, 1, 2, 1, 50, 1, 1, 1, 1, 1, 1, 1, 1 )
% etc.
Now, I'm assuming you're using regular MATLAB matrices. Alternatively, you can use sparse matrices so long as you limit yourself to functions that work with sparse matrices:
help sparfun
A sparse matrix stores an index with every element. That lets it "skip over" the 0 elements of the matrix. Consequently, you can store VERY large matrices with an abstract number of elements far larger than anything you can work with in MATLAB... however, most of those abstract elements will be 0.

Spotify, Google Music Beta, and Amazon Cloud Player? My choice is probably Google Music Beta

Between Spotify, Google Music Beta, and Amazon Cloud Drive/Player, I have had the most fun with Google Music Beta.

So Spotify is weird and uncomfortable. It's cool that I can get easy access to lots of music that I don't actually own, and it's easy to make playlists. However, there is no clean way to both shuffle your whole library and put songs in multiple playlists without risking over-representing them in your shuffle. Over-representation is generally a major problem if you create artist playlists because one artist might have many more songs in the Spotify database than others. It would be nice to "shuffle artists" in a way that guarantees a balanced selection of artists (e.g., in every set of 30 songs). What's worse on Spotify is that playlists are static. You might be able to create an artist playlist, but you have to watch out for new songs to add to that playlist. Be careful, though: songs get duplicated in a playlist if you drag them over. Having said all of that, I certainly have had fun discovering new music with Spotify. The interface is ugly, though, and it sucks to have to pay $10/month just to have Linux access (yes, I know I can use Spotify through wine for free now, and for $5/month later when the free accounts become limited, but I hate dealing with the headache of local MP3s and the wine codec). Moreover, if I want Android access, I'm stuck with $10/month too. Boo.

Google Music Beta had an easy upload process. It took a while, but not that long. It was strange that it bogged down my entire Internet connection (while Amazon’s uploader didn’t affect my downstream at all), which makes me wonder what else Google is doing. However, I could select all of my songs on my Linux machine (no fancy Windows uploader needed) and they all got uploaded. Unfortunately, I cannot download them (unless I make them available offline on my phone and then figure out where and how Google stores them, which may not be tractable). Also, I cannot figure out how to buy new music (certainly a feature for the future, right?). However, Google randomly adds free music to my library, and that’s cool. What’s coolest is the Instant Playlist feature (which is similar to features in iTunes and other players/services) that builds a good-sized playlist from a single song. I’ve enjoyed its picks – even when the song I seeded the list with came from a local artist that it couldn’t have known much of anything about. Best of all, Google Music Beta gives me all of this for free (up to 20,000 songs) on all of my systems (including Android). I never need to worry about keeping a Windows machine.

Amazon’s Cloud Drive/Player is cool in that it gives you 5GB for free and then $1/year/GB up to 1TB after that (starting at $20/year for 20GB). For the moment, if you pay for any storage, you get music storage for free. Any Amazon MP3 purchases can be placed directly in your library. Any song in your library can be downloaded. So Amazon’s Cloud Drive is a nice archival and music management solution. Almost all of the cool features of the player work on all systems. The only downside is that the MP3 Uploader (which re-organizes your music into Artist/Album/Song and will allow you to select a batch of thousands of songs to upload at once) is only available in Windows (and Mac?). On a Linux machine, you can use the web uploader from Amazon’s Cloud Drive, but you can only upload the contents of one folder at a time (with no subfolders) and you have to organize everything manually. No one has figured out how to automate this through a script as far as I can tell. The Windows uploader does a pretty good job sitting in the background, and it’s safe to interrupt it in the middle of an upload (however, it may take a while re-building your upload list when you re-start it). The Amazon Cloud Player is fine. You can build playlists of your music, which is fine. You can shuffle. You can’t discover new music, but you can easily grow your library at 50 to 99 cents a song.


[ Oh, and all three will scrobble to Last.FM. It’s supported natively in Spotify (with no support for “Love”), and it’s supported with 3rd-party Greasemonkey scripts (for Firefox and Chrome (and Safari?)) for Google Music and Amazon MP3 Player. ]

Monday, August 22, 2011

Converting an EMF (MetaFile) on Linux using unoconv

When I needed to convert a graphic from an EMF (Windows Enhanced MetaFile) on my Linux machine today, all of my Google searches were turning up conversion utilities for Windows (or wine at best).

Fortunately, it appears as though unoconv converts from EMF and is available in the standard Fedora repositories. I issued:
unoconv MYMETAFILE.emf
and it spit out a MYMETAFILE.pdf, and I was happy.

Wednesday, August 17, 2011

An hour with MIUI Android on my OG DROID

Last night, I installed the MIUI Android build on my OG DROID. This was my first experience with MIUI, and it was mixed. I think the build was great (major h/t to Trey Motes), and I think the MIUI devs have done a terrific job showing me that my phone can look drastically different than I'm used to. I've posted links to MIUI Android (where you can download a ROM for your phone) as well as MIUI.us (where you can also download a ROM for your phone) as well as MIUI (where you can read about the official project and their own MIUI phone that recently hit the news).

Things I liked:
  • Clean theme made GMail look so much nicer
  • Notifications pull-down included toggles for everything I'd want to toggle (WiFi/Bluetooth/etc.)
  • Lots of customizability (starting from lock-screen backgrounds and going all the way down to lots of other stuff that you usually only see in 3rd-party launchers and such)
  • Trey Motes has bundled lots of useful apps with his distro (ROM Manager, WiFi Tether, etc.) out of the "box"
Things I didn't like:
  • The iOS-style launcher – All of your apps are on the pages of the launcher. You can then create folders to group them together. There's no "app drawer" that shows you all of your apps so that you can only put a select few on your desktop screens. Some people might like this (iOS users sure do), but I've gotten used to Android-y things like using *FolderOrganizer* to tag apps (possibly with multiple tags), and so it's a major departure to go to an iOS-like organization style.
  • The iOS-style dialer and contacts list – I'm not sure what's smart about the "smart dialer" (but I didn't play too much, and it was too late to call anyone). The contacts list (and other lists on the system) displays the iOS-like letters down the right where you can click on the tiny letter you want. Android's typical way of doing this is displaying a pull-tab on the right that you can drag (opposite semantics as flicking; so more like a Desktop scroll). I like the way Android does it better than iOS.
  • Android Wizard doesn't run and Market didn't sync apps – The wizard that usually runs the first time you boot most ROMs didn't run, and so I had to add my accounts via the settings menu. What was probably worse was that the Market didn't automatically start downloading my apps, which is something I've come to appreciate (I know there are 3rd-party ways of backing up and restoring apps, but I don't use them if I don't have to).
  • The Music app didn't integrate with Google Music – The music app follows a bit of the style of the iTunes app in iOS, but it looks markedly different. In fact, it looks different than the stock Android music app too. So it has lots of features, but it's not like anything you're going to expect. It also didn't sync with Google Music, which really sucks after spending so much time uploading songs to the service.
  • General lack of integration with Google services – MIUI tries not to be so Google-dependent... Some may find that a strength, and some may find that a weakness.
So I really don't have any complaints about the build or the devs, but I'm just not sure I'm the intended demographic for the new look and feature set. I really think it's cool, but I just don't think it's something I want to use day to day. So I'm looking forward to whatever else Peter Alfonso tells me I need (he's like my new Steve Jobs).

I hear that next week Trey Motes will release a MIUIAndroid build for the OG DROID using a version of Peter Alfonso's kernel that is newer than anything you can get pre-built from his distro sites (I wonder if it's a 0.4 kernel? If so, I wonder if it has the same bugs that people have been concerned about). So I imagine that the performance of MIUIAndroid will be much nicer. It was fine when I tried it, but I didn't install many apps.

So give it a shot. When I tried it, I downloaded the ROM from MIUIAndroid.com (ROM Manager's version was a 1.7.x version; I used the 1.8.12 version from MIUIAndroid.com; note that it has a 1.8.12.1 HOTFIX (you'll see it in the forums that it links you to)) and used ROM Manager to backup my existing setup and install MIUIAndroid (you could try MIUI.us, but I got the feeling that Trey Motes does a fantastic job customizing MIUIAndroid for the OG DROID, just like Peter Alfonso does with Android/AOSP). When I decided I didn't like it, I used ROM Manager (which comes bundled with MIUIAndroid) to restore my old setup. That was my hour-ish with MIUI.


Monday, July 18, 2011

How we fixed our Ikea wardrobe after the bar fell

Our apartment has no bedroom closet space. There are two coat closets and a linen closet all clustered in the same wall between the living room and office and near the bathroom, but there is no storage in the bedroom. So a while back, we bought three large Ikea wardrobes that fit nicely next to each other down one of the walls of our bedroom. This particular wardrobe model was one of Ikea's budget options (i.e., it was not one of their crazy customizable types; it basically came as a complete unit). The closet bar (shown here without the shelf that is usually above it) should attach to the closet using a plastic insert like this (click on the image for a larger and clearer version):
As you may be able to see, there is a large vertical scrape a few inches beneath the plastic insert. That scrape came into our lives when the plastic insert on the left side of the bar failed ("wardrobe malfunction"), which sent the awkward-shaped metal closet bar (and the clothes hanging on it) into the floor of the wardrobe. As you can see, there is a cantilever-type support jutting out from the plastic insert that sheared off (pretty clean cut, actually):
Ikea often keeps spare parts like this on hand that you can grab for free in bins from the store, but we didn't want to drive all the way to West Chester to look for them, and we were pretty sure these wardrobes were discontinued and these (likely specialized) parts were not available. So we went to Meijer instead (it was too late to go to a hardware store to find real closet accessories) to look for a way to hack together a good pre-fabricated furniture fix.

Just before we were about to give up, we found these corner braces that looked like the perfect size and shape for our problem. They were about $2.50 for a pack of 2 (in case any of the other supports ever break later).
So here's how we used a corner brace to support the closet bar:
As a bonus, the screws that came with the corner brace were short enough to not protrude out the side of the wardrobe. They were self-tapping screws, but I didn't trust them in the Ikea-style formica-covered particle board, and so I pre-drilled some small holes first, and that worked pretty well. We used a zip tie to fix the bar vertically; however, we also experimented with binder rings that we had stowed away in our office supplies. The binder rings actually provided a much tighter fit so that the bar didn't wiggle at all; however, as strange as it sounds, the zip ties were a little more discreet as they hugged the corner brace snugly.

[ I should note that I took off the shelf to get easier access to the closet. That meant pulling out the three small brads/nails attaching the masonite-ish backing. Because the area moment of inertia of that backing is very high, I think it provides significant support to the closet structure as a whole. So afterward, I pulled the closet out and put the nails back in a different spot. I probably could have left the shelf in through the whole fix. ]

I almost like the look of our fix better than the Ikea insert (which looks like it has a tenuous hold on the bar anyway).

[ You can also find this post at Jessie and Ted's blog. ]

Friday, July 08, 2011

Well, I have a Google+ account now...

UPDATE: Yes, it does appear like I have invitations to give out. Yes, if you e-mail me, I'll do my best to send you one.
You can find my Google+ profile at:

http://profiles.google.com/ted.pavlic

Overall, initial reactions are good. There are some bugs to fix and some things to clean up, but I think I'd "+1" it.

Thursday, July 07, 2011

Someone asked me about Hilbert transforming minimum-phase magnitude responses today...

Someone sent me this e-mail today:
Thank you for contributing to the Wikipedia article about minimum phase. I gather from the article that I should be able to use the Hilbert transform to compute a phase response from the amplitude response of a minimum phase system. Yet when I compute (in Matlab) the Hilbert transform of the log of the amplitude response of a Butterworth filter (sampled at uniform frequency intervals), the result is not real and does not resemble the phase response of a Butterworth at all. I expected that it would equal the phase response of a Butterworth since a Butterworth is minimum phase. What have I missed? Thank you.
So I responded in an e-mail, and I've pasted that e-mail here.
Assuming that you are using a high-order filter, are you unwrapping your phase? See the MATLAB function "unwrap" for details. Another easy fix is to ensure you're using the NATURAL log to extract the exponent of the magnitude as an exponential. In MATLAB, "log" is natural log and "log10" is common log.
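
To see what unwrapping does, here's a quick NumPy illustration (my example, not from the original exchange; MATLAB's unwrap behaves analogously):

```python
import numpy as np

# np.angle wraps phase into (-pi, pi]; np.unwrap removes the resulting
# 2*pi jumps so the phase climbs smoothly again.
true_phase = np.linspace(0, 4 * np.pi, 9)
wrapped = np.angle(np.exp(1j * true_phase))   # jumps wherever phase crosses +/- pi

print(np.allclose(np.unwrap(wrapped), true_phase))  # True
```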

If you still have the problem, make sure your filter is truly minimum
phase. In particular, the transfer function and its inverse must be
stable and CAUSAL. The causality condition is redundant so long as your
notion of stability includes poles induced from unmatched zeros. For
example, the discrete-time filter:
z + 0.5
is not causal (it has a pole at infinity). So it does not meet the criteria for being minimum phase. On the other hand, the filter:
(z+0.5)/z
is minimum phase. So let's take its impulse response. In MATLAB, you could try:
h = impulse(tf( [1,0.5], [1,0], 0.1));
or...
z = tf('z',0.1);
h=impulse( (z+0.5)/z );
or just read it from the numerator and add as many zeros as you'd like...
h=[1,0.5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0];
Then use the FFT:
H=fft(h);
Then use the discrete-time Hilbert transform of the NATURAL log:
X=hilbert(log(abs(H)));
Then, to compare, use "plot":
plot( 1:length(h), -imag(X)*180/pi, 'o', ...
      1:length(h), angle(H)*180/pi, 'x' )
I think you'll find that each x is circled.

To summarize:
h=[1,0.5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0];
H=fft(h);
X=hilbert(log(abs(H)));
plot( 1:length(h), -imag(X)*180/pi, 'o', ...
      1:length(h), angle(H)*180/pi, 'x' )
Here's another interesting case that won't match as well because of the discrete-time approximation.
z = tf('z',1);
H = (z + 0.5)/(z);
[mag,phase,w]=bode(H);
mag=mag(:); phase=phase(:); w=w(:);
X=hilbert(log(mag));
plot(w/pi,-imag(X)*180/pi,w/pi,phase)
As you can see, these two match pretty well in the interior region. You can make some interesting observations about the edges where they don't match well.

Monday, June 20, 2011

Spec# is a terrible name. C-clef would have been better.

Microsoft Research's RiSE has another pre-print out on Spec#. The Spec# specification language is old news by now, and so it's unfortunate that the name "Spec#" has not been changed because it means it probably is going to stick. Unlike the name "C#", "Spec#" is terribly unimaginative. It's like naming your first child "Hermione" and then naming your second child "Two".

Wouldn't it have made more sense to continue the musical analogy? For example, a C-clef is a conventional symbol from music theory that is used to specify the desired meaning of the lines that follow. Thus, it makes a lot of sense to use it as a name for a specification language for C#, right?

Instead, we get Spec# (i.e., "specsharp"), which actually seems quite dull...

Tuesday, June 14, 2011

Delayed reminder e-mails: iTickleMe, LetterMeLater, FutureMe, 3mindme, and Outlook

UPDATE: On August 27, 2011, 3mindme was shut down. See a recent post for more information. The old owner of 3mindme recommends NudgeMail as a substitute.

UPDATE: It looks like there has been an update to the original Gadgetwise post. Unfortunately, because it seems like all technology writers are born to disappoint, the author picked a bone-headed client-side solution like the Boomerang plugin for Firefox/Chrome despite so many people pointing out existing server-side solutions. In fact, a bunch of people posted lots of johnny-come-lately server-side applications like Good Todo, FollowUp, FollowUpThen, and NudgeMail. Why you would favor any of these over something like 3mindme baffles me, but I guess it's nice to have options. You certainly shouldn't ever need to use Boomerang though!
This recent NYTimes: Gadgetwise post got me thinking about an old functionality I built into my mail server (using procmail and cronjobs) back before I switched over to Gmail. Basically, I implemented exactly this "delayed reminder" feature in a sort of GTD "tickler file" (43folders) way. I think you could already do something similar in GMail, but you'd still come a little short. Let me explain how my old IMAP-based version worked.

I would send myself messages with a subject like:
tickle10: Ask Joe to return book he borrowed
That is, "on day 10 of this month, remind me to 'Ask Joe to return...'". When my mail server received messages matching that format, it would file them into "tickler files" that were just IMAP folders for each day; each folder had a name like "TICKLE.1" or "TICKLE.25". I then had a script that would run nightly and would move contents of "TICKLE.today_number" into my inbox and mark them unread.

Consequently, this acted exactly like a tickler file with folders. I "drop" a message into the folder for a day later in this month or next month, and each day I empty the folder in the front and move the empty folder to the back. I just had a script do it for me.
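
As a sketch, the subject-matching half of that setup might look like the following (a hypothetical Python reimplementation; my original used procmail recipes, and the folder names just mirror the TICKLE.N convention described above):

```python
import re

def tickle_folder(subject):
    """Map a subject like 'tickle10: Ask Joe...' to its tickler folder name,
    or return None for ordinary mail."""
    m = re.match(r"tickle(\d{1,2}):", subject, re.IGNORECASE)
    if not m or not 1 <= int(m.group(1)) <= 31:
        return None
    return f"TICKLE.{int(m.group(1))}"

print(tickle_folder("tickle10: Ask Joe to return book he borrowed"))  # TICKLE.10
print(tickle_folder("Re: lunch?"))                                    # None
```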

You could have GMail do ALMOST all of this for you. That is, you could have it automatically file messages based on subject into tickler folders. The problem would be to automate moving the daily folders back into your inbox. Perhaps you could just manually check today's folder each day. That would be a step in the right direction.

But then I realized that if the NYTimes guy thought of it now, and I thought of it many many many years ago, then maybe other people have thought of it too. So I did a Google search, and it turns out other solutions do now exist. Here's one I just found:
  • iTickleMe: http://www.itickleme.com/

    iTickleMe lets you schedule e-mail reminders by sending the service e-mails at addresses like INTERVAL@itickleme.com.
Of course, there is more than one way to skin a cat. Alternatives to iTickleMe include LetterMeLater and FutureMe. But then I found a really simple and elegant non-commercial alternative: 3mindme.
Finally, apparently Outlook has the ability to schedule e-mails for later delivery. This won't be an option for, say, GMail users who access GMail through the web... or Thunderbird users in general. Does Apple Mail have this feature? Well, in the meanwhile, the on-line reflector services should work pretty well... and, so long as you can count on the server being up, you don't have to worry about your mail client at home crashing and not sending those delayed e-mails while you are away (plus, do you really need to keep your computer on just as a reminder server?).

So go check out 3mindme. I haven't tried it yet. I hope it still works. Sounds great!

Monday, June 13, 2011

Someone asked me for some references on LaTeX today...

I got an e-mail today asking for some recommended references on LaTeX. Here is my response, which is a marked-up paste of an e-mail.

[ This post can also be found on my web page. ]
The reference that I keep handy is:

The LaTeX Companion, Second Edition by Mittelbach and Goossens

That reference, often called TLC2, is a standard one. You really can't
go wrong with it. It's dense, includes lots of examples, and is pretty
easy to use. One other book that came in handy when I started drawing
graphics in LaTeX is:

The LaTeX Graphics Companion, Second Edition by Goossens, Rahtz, and Mittelbach

That introduced me to things like picture environments and PSTricks. I
use PSTricks a lot now, and the book really is only meant to be an
introduction (albeit a nice one) to PSTricks as well as other competing
(and complementary) tools. Now I typically use the PSTricks
documentation on the PSTricks home page at TUG (you can google for
"PSTricks" to find the web page).

A nice small reference to LaTeX is:

LaTeX: A Document Preparation System (2nd Edition) by Lamport

A very complete but also intimidating reference for TeX is:

The TeXbook by Knuth (Computers & Typesetting, Volume A)

Three other notable and popular books on TeX (that are far less
intimidating) are:
You can still get that last book in print from some sources that print
out-of-print books (lulu.com), but I believe it has also been released
for free as a PDF (under the GNU FDL).

[ If you really don't want to get into the nitty gritty details, I would
recommend sticking to the LaTeX references. ]

Otherwise, I've just done a lot of learning by doing. It helped to
learn about typesetting in general. A good reference for both things is
the documentation that comes with the memoir package:
That documentation link (memman.pdf) is an excellent introduction to all
of the basic typographical elements of a book... and memoir is a nice
LaTeX package in general.

After that, see comp.text.tex (available as a Google group) which is
known simply as "CTT" to insiders...


LaTeX and TeX experts watch that group and will answer your questions
about how to do things. You can also search the group for some previous
answers to similar questions. You can also see announcements of new
versions of packages that do cool things. It's a great resource.

Finally, seeing the LaTeX 2e source (implemented in TeX) can be helpful
to understand exactly what goes on when you do things like a \section.
"source2e.pdf" is included with the LaTeX distribution. You can also
view it on-line here:


That includes all of the TeX implementations for the LaTeX macros and
gives you some idea of what goes on when you build a LaTeX document.

Off the top of my head, that's all I can think of. Just go into things
thinking that LaTeX probably *CAN* do whatever you want it to (including
solving and plotting differential equations, which pure LaTeX (as
opposed to PDFLaTeX) can do). Like a sculptor, you just have to figure
out what to chip away to get it to do it. Keep trying things until
something is qualitatively similar to what you want, and then tune
(perhaps with the help of CTT) after that. Eventually you'll come up
with better and better implementations. If you come up with something
especially novel, post it on-line. In fact, contributing to CTAN
directly is usually recommended.

Another thing that helps me is to remember that TeX really is just a
giant machine that tokenizes, parses, and expands. It's not a
"programming language" so much as it is a text "filter" in that a single
run of LaTeX doesn't necessarily result in what you want. Keeping this
in the back of my head helps me anticipate the problems I might have
with certain approaches, and it further helps me figure out how to
approach LaTeX in order to succeed.

Friday, May 06, 2011

Someone asked me to explain the Price equation today...

I got an e-mail today asking for help understanding the Price equation, prompted partly by the recent RadioLab about George Price. The person who e-mailed me made it sound like he was OK with a long explanation, just so long as it explained the ugliness of the mathematics. Here is my response... (pardon the e-mail-esque formatting... I'm just pasting it rather than re-formatting it)

[ This post can also be found on my web page. ]
You shouldn't believe everything the media tells you about the complexity of the Price equation. I'm always frustrated when I hear someone on the radio read the Price equation out loud as a mathematical statement. It is not meant to be a mathematical statement. It is just a logical justification for something we all think should be true -- traits with higher differential fitness advantage should spread throughout a population (which is a critical aspect of natural selection). Price formalized that statement and then proved that the formalism is a tautology. That's all that's important.

It is a very simple idea, and it has almost nothing to do with statistics (because there are no random variables nor data in the Price equation). The Price equation is a theoretical statement about the relationship between two sequential generations of a model population. You can use it to predict how the representation of a particular trait will change over time and eventually settle at some fixed distribution. However, again, numerical applications aside, it really is just a mathematical verification of something which makes intuitive sense.

Just to get comfortable with the notation, consider a trait like "height" across a population of n=100 individuals. Each individual might have a different height. Let's say that in our population, people basically have two different heights (perhaps due to sexual dimorphism). So we have two groups:

z_1 = 5 feet
z_2 = 6 feet

We represent the number of people with each height using the variables:

n_1 = 50
n_2 = 50

That is, there are an equal number of 5' tall people and 6' tall people from our 100 person population (note that n_1 + n_2 = n). Further, we find that both 5' tall and 6' tall people tend to have 1 offspring each. That is, they both have an equivalent "fitness" of 1:

w_1 = 1
w_2 = 1

where w_i is the number of offspring an individual of group i will contribute to the next generation. Let's say we also know that offspring from 5' tall people end up also being 5' tall, and offspring of 6' tall people also end up being 6' tall. Then we have:

z'_1 = 5 feet
z'_2 = 6 feet

So the value of the trait (height) does not change from generation to generation.

Everything above is a parameter of the model. It represents what we know about "height" of individuals in this generation as well as the relationship between the height of an INDIVIDUAL and its offspring. What the Price equation does is tell us how the distribution of height in the POPULATION will change from this generation to the next. It might be helpful to think of the Price equation as relating the AVERAGE value of a trait (e.g., height) in one generation to the AVERAGE value of the trait (e.g., height) in the next generation.

So now let's add on the Price equation stuff. To account for the changes in the average value of the trait (height here), we have to worry about two effects -- "background bias [due to individuals]" (my term) and "differential fitness" (a quantity that drives natural selection):

1.) Imagine that 5' tall parents produced 5' tall offspring (so z'_1=z_1=5 feet, as above), but 6' tall parents produced 10' tall offspring (so z'_2=10 feet in this hypothetical scenario). Then even without worrying about "differential fitness", we might expect an upward shift in AVERAGE height from the parent generation to the offspring generation. This "background bias [due to individuals]" is related to the "E(w_i \delta z_i)" term in the Price equation. It represents the change in a trait at the individual level. I'll give more info about the math later.

2.) Now, instead, assume that z'_1=z_1 and z'_2=z_2 (so offspring height is the same as parent height) as above. It may still be the case that the average height in the offspring generation changes from the parent generation. This would occur if one height had a higher fitness than the other height. Here, we see that w_1=w_2=1. They both have the same fitness, and so we don't expect any differences IN REPRESENTATION from one generation to the other. Note that if w_1=w_2=5, then each individual would produce 5 offspring. Consequently, the TOTAL population would grow, but the DISTRIBUTION of height would stay the same. To make things more interesting, imagine that w_1=1 and w_2=2. Now each 5' tall person produces one 5' tall offspring, but a 6' tall person produces TWO 6' tall offspring. Consequently, the distribution of height would change from parent to offspring generation. The AVERAGE height would shift toward 6' tall people. The "cov(w_i, z_i)" term aggregates this change. It relates the "differential fitness" of one height to its success in growing the representation of that height in the next generation. I'll give more info about the math in a bit. [NOTE that the average fitness represents the average "background" rate of growth from population to population.]

To get ready for an explanation of the actual Price equation, let's get some terminology out of the way.

First, we define the "expectation" or "average" height in the current population with:

E(z_i) = ( n_1 * z_1 + n_2 * z_2 + ... )/n

That is, "E(z_i)" is the average value of the trait (height above). There are n_1 individuals with z_1 value of the trait, and so we have to multiply n_1 * z_1 to get the total contribution of that value of the trait. We do that for each group. We can do the same for other variables too. For example, here's average fitness:

E(w_i) = ( n_1 * w_1 + n_2 * w_2 + ... )/n

The average fitness "E(w_i)" somehow represents the average rate of population growth. If every w_i is 1, then there will be 1-to-1 replacement of parent by offspring and there will be no population growth; likewise, the average "E(w_i)" will be 1 reflecting no growth. However, if every w_i is 5, then "E(w_i)" will also be 5 and the population will grow 5 fold every generation. With some simple arithmetic, it is easy to verify that the total population in the NEXT (i.e., offspring) generation is given by the product of the number of individuals in this generation (n) and the average fitness (E(w_i)).

We can also find the average value of the trait in the NEXT (i.e., offspring) generation. To do so, we have to scale each value of the trait in the next generation (z'_i) by the number of individuals with that trait in the next generation (n_i w_i), and then we have to divide by the total number of individuals in the next generation (n*E(w_i)). So the average value of the trait in the NEXT (i.e., offspring) generation is:

E(z'_i) = ( n_1 * w_1 * z'_1 + n_2 * w_2 * z'_2 + ... )/(n * E(w_i))

For simplicity, let's use symbols "z", "w", and "z'" as a shorthand for those three quantities above. That is:

z = E(z_i)
w = E(w_i)
z' = E(z'_i)

Penultimately, let's define "delta", which gives the difference in a variable from this generation to the next. In the shorthand above, the difference in the average value of the trait is:

delta(z) = z' - z

That difference may be due either to differential fitness (i.e., when w_i is not the same as w) or to intrinsic height changes at the individual level. Those intrinsic height changes at the individual level are:

delta(z_1) = z'_1 - z_1
delta(z_2) = z'_2 - z_2
...

Finally, let's define this "covariance" formula. For each group i, let's say we have variables A_i and B_i (e.g., z_i and w_i). Let A be the average value of A_i across the population:

A = ( n_1 A_1 + n_2 A_2 + ... )/n

and B be the similarly defined average value of B_i across the population. Then we can define the covariance across the POPULATION in a similar way as we defined average. That is:

cov( A_i, B_i )
=
E( (A_i-A)*(B_i-B) )
=
( n_1*(A_1 - A)*(B_1 - B) + n_2*(A_2 - A)*(B_2 - B) + ... )/n

That is, cov(A_i,B_i) is the AVERAGE value of the product of the difference between each A_i and its average A and the difference between each B_i and its average B. We call this the "covariance" because:

* If A_i doesn't vary across values of i, then A_i=A (no "variance" in A) so there is no "covariance"

* If B_i doesn't vary, then there is similarly no covariance

* If whenever A_i is far from its average B_i is close to its average, then there is LOW (i.e., near zero) covariance. That is, both A_i and B_i vary across the population, but they don't vary in the same way.

* If whenever A_i is far from its average B_i is also far from its average, then there is HIGH (i.e., far from zero) covariance. Both A_i and B_i vary across the population, and they vary in the same way.

Note that HIGH covariance could be very positive or very negative. In the positive case, A_i and B_i have a similar pattern across values of i. In the negative case, A_i and B_i have mirrored patterns across values of i (i.e., A_i is very positive when B_i is very negative and vice versa). LOW covariance is specifically when the cov() formula is near zero. That indicates that the pattern of A_i has little relationship to the pattern of B_i.
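
Those bullets are easy to check numerically (a Python sketch of the weighted covariance formula above, using the heights from the running example; the fitness vectors are hypothetical):

```python
import numpy as np

def cov(n, A, B):
    """Population covariance of A_i and B_i, weighting group i by n_i/n."""
    p = n / n.sum()
    return p @ ((A - p @ A) * (B - p @ B))

n = np.array([50, 50])
z = np.array([5.0, 6.0])                 # heights

print(cov(n, np.array([1.0, 1.0]), z))   # 0.0: fitness doesn't vary, no covariance
print(cov(n, np.array([1.0, 2.0]), z))   # 0.25: taller individuals are fitter
print(cov(n, np.array([2.0, 1.0]), z))   # -0.25: mirrored pattern, negative covariance
```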

Now, let's look at the Price equation more closely. The left-hand side:

w*delta(z)

is roughly the amount of new trait ADDED to each "average" individual. So if the average trait shifts (e.g., from 5.5' tall to 6.5' tall, corresponding to a delta(z) of 1'), but the population has GROWN as well (i.e., "w>1"), then the amount of height "added" to the parent population to get the offspring population is more than just 1' per person. We scale the 1' per person by the "w" growth rate. Thus, "w delta(z)" captures effects of population growth (which naturally adds trait to a population) and mean change in representation.

Note that if the AVERAGE trait did not change ("delta(z)=0") but the population did grow ("w>1"), then we interpret "w delta(z)=0" to mean that even though the "total amount" of trait increased due to population increase, there was no marginal change in each individual's trait (i.e., individuals aren't getting taller; the population is just getting larger).
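As a quick numeric sanity check of the left-hand side, using the made-up heights from the paragraph above:

```python
# Toy numbers: the average trait shifts from 5.5' to 6.5' (so delta(z) = 1.0)
# while the population doubles (w = 2).
w = 2.0
z_bar, z_bar_prime = 5.5, 6.5
delta_z = z_bar_prime - z_bar   # 1.0 foot per "average" individual
lhs = w * delta_z               # 2.0: growth scales the trait being added

# If the average trait is unchanged, w*delta(z) is zero no matter how much the
# population grows: individuals aren't taller, there are just more of them.
assert w * (5.5 - 5.5) == 0.0
```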

Now let's look at the right-hand side:

cov(w_i, z_i) + E(w_i*delta(z_i))

This implies that the amount of new trait added to each average individual is the combination of two components.

To parallel the discussion above, let's consider the E() part first:

E(w_i * delta(z_i))

we can expand this average to be:

( n_1*w_1*(z'_1 - z_1) + n_2*w_2*(z'_2 - z_2) + ... )/n

That is, delta(z_i) gives us the average change from z_i in AN INDIVIDUAL to z'_i in A SINGLE OFFSPRING. The w_i part ACCUMULATES those changes over EACH offspring. For example, if w_1=2, then group 1 parents have 2 offspring each. So the total increase in the trait from group 1 is not delta(z_1) but 2*delta(z_1). So you can see how this is the "BACKGROUND BIAS": the component of "w*delta(z)" that we get even without worrying about differential fitness. This represents the change in "w*delta(z)" due just to INDIVIDUALS and POPULATION GROWTH.
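Here is a small Python sketch of this transmission term on a hypothetical two-group population (all the numbers are invented for illustration):

```python
def avg(values, counts):
    # Population average: ( n_1*x_1 + n_2*x_2 + ... )/n
    n = sum(counts)
    return sum(c * v for v, c in zip(values, counts)) / n

# Hypothetical population: group 1 has 10 parents, each with w_1 = 2 offspring
# who are 0.5' taller; group 2 has 10 parents, each with w_2 = 1 offspring of
# the same height.
counts = [10, 10]         # n_i: parents in group i
w_i = [2.0, 1.0]          # fitness: offspring per parent
delta_z_i = [0.5, 0.0]    # z'_i - z_i: individual-level change

# E(w_i*delta(z_i)): each group's change, accumulated over all its offspring.
transmission = avg([w * d for w, d in zip(w_i, delta_z_i)], counts)
print(transmission)   # 0.5
```

Note that group 1's 0.5' change counts double because each of its parents passes it to two offspring.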

Next, look at the covariance:

cov(w_i, z_i)

The covariance of w_i and z_i is a measure of how much the DIFFERENTIAL FITNESS contributes to added trait. Recall the formula for cov(w_i,z_i):

E( (w_i-w)*(z_i-z) )

which is equivalent to:

( n_1*(w_1-w)*(z_1-z) + n_2*(w_2-w)*(z_2-z) + ... )/n

Here, the quantity (w_i-w) is the "differential fitness" of group i, and the quantity (z_i-z) represents the location of the trait with respect to the average trait. So:

* if the fitness varies in a similar way as the level of trait across values of i, then the average value of the trait will tend to increase from population to population

* if the fitness varies in exactly the opposite way as the level of the trait across values of i, then the average value of the trait will tend to decrease from population to population

* if the fitness varies differently than the level of the trait, then there will be little change in the average trait from population to population

* if there is no variance in either fitness or level of the trait, there will be little change in the average trait

Put in other words:

* if high differential fitness always comes with high values of the trait and low differential fitness always comes with low values of the trait, then there will be selection toward MORE trait

* if high differential fitness always comes with low values of the trait and low differential fitness always comes with high values of the trait, then there will be selection toward LESS trait

* if differential fitness variation has no relationship to trait level variation, then selection will not change the average value of the trait

* if there is no variation in the trait or no variation in the fitness, then selection will not change the average value of the trait

Put in MORE words at a more individual group level:

If a group i has both a high "differential fitness" (w_i-w) AND a high (z_i-z), then its FITNESS w_i is far above the average fitness w and its level of the trait z_i is far above the average value of the trait z. Either one of those alone would be enough to cause the "total amount" of trait to shift upward.

On the other hand, if BOTH (w_i-w) and (z_i-z) are NEGATIVE, then the average population is already far away from this trait value AND has a much higher fitness. Consequently, the motion of the average trait will still be upward, but here upward is AWAY from the trait z_i (because z_i is under the average z).

Finally, if (w_i-w) and (z_i-z) have opposite signs, the motion of the average trait z will be negative, which will either be heading toward z_i if w_i>w or away from z_i if w_i<w.

The covariance formula takes the average value of (w_i-w)(z_i-z). That average represents the contribution to the amount of trait "added" to each individual due to DIFFERENTIAL FITNESS.
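Putting the two terms together, here is a Python sketch that checks both sides of the Price equation on a hypothetical two-group population (all numbers are invented; the helper names avg and cov are mine):

```python
def avg(values, counts):
    # Population average: ( n_1*x_1 + n_2*x_2 + ... )/n
    n = sum(counts)
    return sum(c * v for v, c in zip(values, counts)) / n

def cov(A, B, counts):
    # Population covariance E( (A_i - A)*(B_i - B) ), weighting group i by n_i
    A_bar, B_bar = avg(A, counts), avg(B, counts)
    return avg([(a - A_bar) * (b - B_bar) for a, b in zip(A, B)], counts)

counts = [10, 10]      # n_i: parents in group i
w_i    = [2.0, 1.0]    # fitness: offspring per parent
z_i    = [5.0, 6.0]    # parent trait (height in feet)
zp_i   = [5.5, 6.0]    # z'_i: offspring trait

w_bar = avg(w_i, counts)   # 1.5
z_bar = avg(z_i, counts)   # 5.5
# Offspring population: group i contributes n_i*w_i individuals at z'_i.
z_bar_prime = avg(zp_i, [n * w for n, w in zip(counts, w_i)])

lhs = w_bar * (z_bar_prime - z_bar)
rhs = cov(w_i, z_i, counts) + avg(
    [w * (zp - z) for w, zp, z in zip(w_i, zp_i, z_i)], counts)
print(lhs, rhs)   # both sides agree (0.25 here)
```

In this toy example the covariance term is negative (the shorter group is fitter) but the transmission term is larger, so the average height still rises.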

So there you have it. Assuming that "w" (average fitness -- which is a growth rate) is not zero (which just assumes that the population does not die out in one generation), then we can divide everything by "w" to get a less complicated (but equivalent) Price equation:

delta(z) = ( cov(w_i,z_i) + E(w_i*delta(z_i)) )/w

So now we have an equation representing the average change from parent to offspring population. If you expand all the formulas, you can verify that this statement is equivalent to:

delta(z) = cov(w_i/w, z_i) + E( (w_i/w)*delta(z_i) )

The quotient "w_i/w" is a "fractional fitness." It is a measure comparing the fitness of group i with the average fitness, where high differential fitness corresponds to w_i/w > 1 and low differential fitness corresponds to w_i/w < 1. So let's create a new variable

v_i = w_i/w

to be the fractional fitness. Then we can rewrite Price's equation to be:

delta(z) = cov( v_i, z_i ) + E( v_i*delta(z_i) )
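As a quick check of this normalized form, here is a Python sketch on a hypothetical two-group population (numbers invented for illustration):

```python
def avg(values, counts):
    # Population average: ( n_1*x_1 + n_2*x_2 + ... )/n
    n = sum(counts)
    return sum(c * v for v, c in zip(values, counts)) / n

def cov(A, B, counts):
    # Population covariance E( (A_i - A)*(B_i - B) ), weighting group i by n_i
    A_bar, B_bar = avg(A, counts), avg(B, counts)
    return avg([(a - A_bar) * (b - B_bar) for a, b in zip(A, B)], counts)

counts = [10, 10]      # n_i: parents in group i
w_i    = [2.0, 1.0]    # fitness: offspring per parent
z_i    = [5.0, 6.0]    # parent trait
zp_i   = [5.5, 6.0]    # z'_i: offspring trait

w_bar = avg(w_i, counts)
v_i = [w / w_bar for w in w_i]   # fractional fitness; averages to 1

z_bar = avg(z_i, counts)
z_bar_prime = avg(zp_i, [n * w for n, w in zip(counts, w_i)])
delta_z = z_bar_prime - z_bar    # change in the average trait

rhs = cov(v_i, z_i, counts) + avg(
    [v * (zp - z) for v, zp, z in zip(v_i, zp_i, z_i)], counts)
print(delta_z, rhs)   # equal: the normalized Price equation holds
```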

This version gets rid of the need to worry about scaling for population growth. If you think about it, v_i is just a normalized version of w_i where you have "factored out" the background growth rate of the population. So now we basically have:

AVERAGE_CHANGE
=
POPULATION_CHANGE_DUE_TO_DIFFERENTIAL_FITNESS
+
POPULATION_CHANGE_DUE_TO_INDIVIDUAL_CHANGES

In other words:

"the change in the average value of the trait is due to two parts:

1. The differential fitness of each value represented in the population

2. The individual change from parent trait level to offspring trait level"

So if you wish to go back to the "height" example...

"The average height increases when:
1. Natural selection favors increases in height
OR
2. Tall people have taller offspring"

You could create other variations that work as well:

"The average height DEcreases when:
1. Natural selection favors DEcreases in height
OR
2. Short people have shorter offspring"

====

"The average height stays the same when:
1. Natural selection has no preference for height
AND
2. Short people have short offspring and tall people have tall offspring"

====

"The average height DEcreases when:
1. Natural selection has no preference for height
AND
2. Short people have short offspring and tall people have short offspring"

====

"The average height INcreases when:
1. Natural selection has no preference for height
AND
2. Short people have tall offspring and tall people have tall offspring"