Sunday, September 30, 2012

Tennis lessons

My family has recently started taking tennis lessons.  In one of the lessons there were 6 students and we got to play on 3 courts.  The instructor wanted each of us to play everyone else, so we had 3 singles matches going simultaneously on 3 courts and we switched players after every 2 games.  He had the players on one side of each court move over to the other courts.  In other words, the first pairing was:

1 -- 2    Court 1
3 -- 4    Court 2
5 -- 6    Court 3

and the second pairing was:

1 -- 6    Court 1
3 -- 2    Court 2
5 -- 4    Court 3

and the third pairing was:

1 -- 4   Court 1
3 -- 6   Court 2
5 -- 2   Court 3

Since there are a total of $\binom{n}{2} = n(n-1)/2$ pairs and we can play $n/2$ games at a time on $n/2$ courts (let us consider the case of even $n$ for the moment), we can have at most $n-1$ different rounds of pairings.  In the above case of 6 students ($n=6$), we should be able to have 5 such rotations.  But with the first 3 rotations as above, it is easy to see that a 4th set of pairings is impossible without some of the pairs being repeated: players 1, 3 and 5 have each already played all of 2, 4 and 6, so they would now have to play among themselves, and three players cannot all be paired off simultaneously.  So we asked ourselves the question: is it possible to have $n-1$ rotations where each person plays every other person exactly once?

My son looked at the configurations and came up with the following algorithm to pair players.  Fix one player and rotate the other players around a loop.  In other words, keeping player 1 fixed, and rotating the remaining players clockwise, the pairings are:

Court 1    1 -- 2           1 -- 3        1 -- 5      1 -- 6      1 -- 4
Court 2    3 -- 4           5 -- 2        6 -- 3      4 -- 5      2 -- 6
Court 3    5 -- 6           6 -- 4        4 -- 2      2 -- 3      3 -- 5
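This rotation can be sketched in code (a sketch of my own; the initial loop order is chosen so that it reproduces the court assignments above):

```python
def round_robin(n):
    """Schedule n-1 rounds for n players (n even) by fixing player 1
    and rotating the remaining players one step around a loop."""
    assert n % 2 == 0
    fixed = 1
    # remaining players in loop order; for n = 6 this is [2, 4, 6, 5, 3],
    # which reproduces the court layout shown above
    ring = list(range(2, n + 1, 2)) + list(range(n - 1, 2, -2))
    rounds = []
    for _ in range(n - 1):
        pairs = [(fixed, ring[0])]
        pairs += [(ring[-k], ring[k]) for k in range(1, n // 2)]
        rounds.append(pairs)
        ring = [ring[-1]] + ring[:-1]  # rotate the loop one step
    return rounds
```

For n = 6 this yields exactly the five columns of the table, starting with (1,2), (3,4), (5,6), then (1,3), (5,2), (6,4), and so on, with every pair occurring exactly once.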

It is easy to see that this is equivalent to the well-known polygon method of assigning round robin tournaments.  Since my son is a fan of Minecraft, he also created a machine in Minecraft that illustrates this process, using pistons to move colored blocks around.  One piston pulls out one of the blocks, the remaining blocks are rotated around a loop, and finally the block that was pulled out is reinserted into the group.  Pictorially, it looks like this:

Starting configuration:

   1 2
   3 4
   5 6

One block is pulled out (where x represents an empty slot):

1    x 2
     3 4
     5 6

The remaining 5 blocks (with the empty slot) are rotated clockwise:

1    3 x
     5 2
     6 4

Block 1 is inserted back (pushing block 3 across to the right), resulting in the first rotation:

     1 3
     5 2
     6 4

Sunday, August 5, 2012

Zermelo's winning strategy

Zermelo's theorem is a fundamental result in game theory that states that in a two-player game where

  • each player has perfect information about the state of the game
  • players take turns placing their move
  • no chance is involved
  • games end after a finite number of moves
  • games do not end in a draw
one of the two players will have a winning strategy.  I first encountered this theorem in Hugo Steinhaus' Mathematical Snapshots (a book that I cherish, as I received it as a prize for participating in the first LVAIC Mathematics Competition).   This result basically states that one of the two players can always win if he or she plays appropriately.  On the surface this is easy to understand.  If player A does not have a winning strategy, then this means that whatever moves player A makes, player B has a move that prevents A from winning.  Since the games do not end in a draw, this means that player B will eventually win, i.e. B has a winning strategy.  This is a very interesting result since it implies that these games are inherently not fair; one player always has the advantage.  The problem is that in complicated games we don't know which player has this advantage.  When applied to chess (which allows draws), this implies that either black or white can always force a win or a draw; we just don't know which color that is.  Since the game is finite, given enough computational power and time, it is possible to find out which player has the winning strategy by exploring all possible scenarios.  Or we may find out that with appropriate play, all chess games end in a draw.  Once such an exhaustive search is done, computer chess would lose its excitement and allure, since we would know a priori which color wins (or draws), and the winning strategy could be implemented via table lookup.  Fortunately, the number of moves in a chess game and the range of possible moves result in a search space so large that its complete exploration is out of reach for many years to come.
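The exhaustive search can be illustrated on a toy game (my own example, not from the theorem): a subtraction game where players alternately take 1 or 2 stones from a pile and whoever takes the last stone wins. Backward induction over the game tree reveals which player has the winning strategy:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones):
    """True if the player to move can force a win with this many stones left."""
    if stones == 0:
        return False  # no move possible: the previous player took the last stone
    # the mover wins if some legal move leaves the opponent in a losing position
    return any(not first_player_wins(stones - take)
               for take in (1, 2) if take <= stones)
```

A little experimentation shows the first player loses exactly when the pile size is a multiple of 3 -- the kind of complete answer an exhaustive search of chess would in principle provide.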

Sunday, July 15, 2012

Feedback and cycles

Feedback is an important concept in engineering. Negative feedback ensures stability, as any deviation from the desired state is met with a force in the opposite direction of the deviation.  To ensure convergence to an equilibrium, this force must diminish in strength as the deviation shrinks.  On the other hand, positive feedback implies that any deviation is amplified with additional changes in the same direction.  This can cause instability and can lead to chaos. Almost all engineering systems with some dynamical behavior use negative feedback to ensure proper operation. This is very different from, and almost contradictory to, how the terms "negative feedback" and "positive feedback" are used in normal conversation.  In everyday language, "positive feedback" is used in the sense of "constructive feedback", i.e. encouragement, whereas "negative feedback" is "destructive feedback".  Another pair of terms used in everyday language is "virtuous cycle" and "vicious cycle".  Both are examples of positive feedback (the engineering kind), with the difference between a virtuous cycle and a vicious cycle being the direction of the deviation.  In a vicious cycle the ever-increasing deviation is detrimental, whereas in a virtuous cycle the deviation is advantageous.
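As a minimal numerical sketch (the gain values below are of my choosing), a memoryless feedback loop simply adds a multiple of the deviation at each step:

```python
def simulate(deviation, gain, steps):
    """Repeatedly apply x -> x + gain * x.
    gain < 0 is negative feedback (the force opposes the deviation);
    gain > 0 is positive feedback (the deviation is amplified)."""
    history = [deviation]
    for _ in range(steps):
        deviation += gain * deviation
        history.append(deviation)
    return history
```

With gain = -0.5 the deviation shrinks geometrically toward equilibrium; with gain = +0.5 it grows without bound, which is the instability described above.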

This is illustrated in the following diagram.  Constructive feedback occupies the top half of the diagram, whereas destructive feedback occupies the bottom half.  A curve from the lower left quadrant to the upper right quadrant can be thought of as a (memoryless) positive feedback algorithm. A curve from the upper left quadrant to the lower right quadrant is a negative feedback algorithm.  A virtuous cycle lives in the upper right quadrant and a vicious cycle lives in the lower left quadrant.



Saturday, July 7, 2012

The Monty Hall problem

The Monty Hall problem is one of my favorite problems whose solution is counter-intuitive.  Suppose you are shown 3 closed boxes by Monty Hall, only one of which contains a prize; the other 2 are empty.  You get to choose a box, and Monty, who knows which box contains the prize, opens an empty box that you did not choose.  He then asks you if you want to change your mind.  Intuitively, one would think that it doesn't make a difference whether you switch or not.  But it turns out your odds are doubled if you change your mind.  The reasoning is as follows.  If you initially picked the box with the prize in it, then not changing your mind will result in you getting the prize, and you have a 1 in 3 chance of initially picking the winning box.  If you initially picked an empty box, switching will result in you getting the prize, and you have a 2 in 3 chance of initially picking an empty box.  So the odds of winning by switching are double the odds of winning by not switching.
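A quick simulation (a sketch of my own) bears this out:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; returns the fraction of games won."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens an empty box you did not choose (when you picked the
        # prize he has two empty boxes to pick from; which one he opens
        # does not affect the odds)
        opened = next(b for b in range(3) if b != pick and b != prize)
        if switch:
            pick = next(b for b in range(3) if b != pick and b != opened)
        wins += (pick == prize)
    return wins / trials
```

Switching wins about 2/3 of the time, and staying about 1/3.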

Saturday, June 2, 2012

Asymmetric conventions

In the blog post "Left or right?" I talked about different names for left and right.  Sometimes we need to make a choice between these 2 options in designing a system or defining a standard.  For instance, most countries drive on the right side of the road with the steering wheel on the left side of the car.  Typically screws tighten clockwise.  The hands of a clock turn "clockwise".  Faucets open by turning counterclockwise (at least in places where I live).  These choices are arbitrary, as one choice is as good as the other. Sometimes the choice is dictated by the application.  There are occasions where screws are tightened counterclockwise to prevent them from being loosened by the natural motion of the apparatus.  For instance, the left pedal on a bicycle uses this kind of reverse thread.

In electrical engineering, there are several arbitrary choices that were made.  For instance, electricity flows from the positive node to the negative node, which as it turns out, is opposite to the flow of electrons.  Similarly, magnetic field lines flow from the North pole to the South pole.  These choices result in "right-hand" rules of electromagnetic induction being part of the electrical engineer's armamentarium:  a counterclockwise current generates a magnetic field coming out of the page and a current coming out of the page generates counterclockwise magnetic field lines.  There is also a "right-hand" rule for generators and a "left-hand" rule for motors due to Fleming that tell you the direction of current and the direction of motion respectively.

Sometimes nature has inherent asymmetry.  For instance, our bodies are not symmetric.  Almost all of us have the heart slightly to the left side of the body, with the right half pumping blood to the lungs to replenish it with oxygen and the left half pumping it to the rest of the body.

As I learned in high-school chemistry, chemical compounds can also be asymmetric (a concept called chirality): a molecule is chiral if it differs from its mirror image.  In particular, amino acids are chiral.  Certain molecules can only be absorbed by the body if they are of a certain chirality.  I remember reading the science fiction story "Technical Error" by Arthur C. Clarke many years ago, in which a man accidentally travels through the fourth dimension and back, leaving his molecules with the opposite chirality to ours. His body cannot absorb the nutrients in normal food, and he is kept alive with engineered food of the opposite chirality that is very expensive to manufacture.

Monday, May 14, 2012

Hemispherical misconceptions

Two concepts that I believed in throughout my entire childhood turned out to be false.

1. I was told in school that the seasons are caused by the elliptical orbit of the earth around the sun.  Some months along this orbit, the earth is closer to the sun, and other times it is farther away.  When the earth is closer to the sun, it is summer, and when the earth is farther away, it is winter.  When I came to the USA, I met people from Brazil and Australia who told me that it is winter there when it is summer here.  This clearly contradicts the theory above.  It turns out the seasons are caused by the tilt of the earth's axis relative to its orbit.  Half of the year the northern hemisphere is tilted towards the sun (corresponding to the warmer seasons), while the other half of the year the southern hemisphere is tilted towards the sun.  In fact, the earth is closer to the sun during the northern hemisphere's winter than during its summer.

2. I was also told that in the southern hemisphere water flushes away in a sink in the opposite direction from that occurring in the northern hemisphere.  I have been to the southern hemisphere 3 times, but each time I forgot to do some experiments to validate this claim.  It seems that this phenomenon attributed to the Coriolis effect is only relevant under very careful conditions or at a very large scale, neither of which is present in a toilet or a kitchen sink.

Thursday, May 10, 2012

A mathematical rebus

The following rebus describes an important mathematical equation using 5 fundamental constants:
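(The rebus image does not survive here; an important equation combining 5 fundamental constants is presumably Euler's identity, relating $e$, $i$, $\pi$, $1$ and $0$:)

```latex
e^{i\pi} + 1 = 0
```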

Monday, March 12, 2012

Pi Day

Wednesday this week, March 14, is Pi Day, as the first digits of $\pi$ are 3.14.  A better approximation of $\pi$ will occur on Pi Day 2016, since 3.14.16 is close to $\pi=3.1415926535$...  Or better yet, at 1:59:27 (am or pm) on March 14, we'll get 3.14 1:59:27.   An even better approximation is on Pi Day 2015 at 9:26:54 (3.14.15 9:26:54).  Unfortunately April only has 30 days; otherwise people in countries that write dates in the format dd/mm could celebrate April 31 (31.4) as Pi Day.

There are other variants of Pi Day.  For instance, today March 12 is Pi Day in base 9 ($\pi_{base 9} = 3.1241881...$).
Pi Day in base 3 is October 1 ($\pi_{base 3} = 10.010211...$).
Pi Day in base 4 is March 2 ($\pi_{base 4} = 3.021003...$).
Pi Day in hexadecimal is March 24 ($\pi_{base 16} = 3.243f6...$).

In this manner, many days in March are Pi Day in some number base.

 n    $\pi$ in base $n$     Pi Day in base $n$
 4    $3.021003331...$      March 2
 5    $3.032322143...$      March 3
 6    $3.050330051...$      March 5
 7    $3.066365143...$      March 6 (or March 7 after rounding)
 8    $3.110375524...$      March 11
 9    $3.124188124...$      March 12
10    $3.141592653...$      March 14
11    $3.161507028...$      March 16
12    $3.184809493...$      March 18
15    $3.21cd1dc46...$      March 21 (or March 22 after rounding)
16    $3.243f6a888...$      March 24
17    $3.26fag579e...$      March 26 (or March 27 after rounding)
18    $3.29fdeh0g7...$      March 29 (or March 30 after rounding)
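The table can be reproduced with a short script (a sketch; $\pi$ is hard-coded to 29 decimal places, far more precision than the few digits we need, and the digit set covers bases up to 18):

```python
from fractions import Fraction

# pi truncated to 29 decimal places, stored exactly as a fraction
PI = Fraction(314159265358979323846264338327, 10**29)
DIGITS = "0123456789abcdefgh"  # digits for bases up to 18

def pi_in_base(n, places=9):
    """Write pi in base n (4 <= n <= 18) with the given fractional places."""
    frac = PI - 3
    out = []
    for _ in range(places):
        frac *= n
        digit = int(frac)      # next base-n digit
        out.append(DIGITS[digit])
        frac -= digit
    return "3." + "".join(out)

def pi_day(n):
    """Month 3, with the day read off as the first two fractional digits."""
    return "March " + pi_in_base(n, 2)[2:]
```

For example, pi_day(16) gives "March 24", matching the hexadecimal row of the table.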

Wednesday, January 25, 2012

iPictureShow

It occurred to me that the last few products Apple has introduced all start with the letters i and p (iPod, iPhone, iPad).  Does this mean that they will continue this tradition when they introduce their new TV product and call it iPictureShow?

Thursday, January 19, 2012

Flux capacitor

Back to the Future is a wonderful sci-fi film that is funny and light-hearted. I have seen the movie many times and I always chuckle when they mention the term "Flux Capacitor".  It sounds like someone took two random technical words and mashed them together.  In the movie, an elderly scientist tells the protagonist that the Flux Capacitor is what makes time travel possible.  From high school physics, we know that a capacitor is an electronic device used to store energy, and it is used in almost every electronic device, whether analog or digital.  Examples where capacitors are useful include audio filters, power line regulators, and dynamic memory (DRAM).  The basic design of a capacitor consists of two plates with a dielectric in between.  Applying a potential difference across the plates creates an electric field between them where electrical energy is stored.  The strength of the electric field E is the potential difference V across the plates divided by the distance d between the plates. The electric flux Φ in the capacitor is the field strength E multiplied by the area A of a plate.  Since the capacitance is C = εA/d, for a fixed voltage drop V across the plates the flux is Φ = EA = (V/d)(Cd/ε) = VC/ε, which is proportional to the capacitance C.  Thus one can argue that the ordinary capacitor that we all know is in fact a flux capacitor!

In recent years, there have been advances in creating ultracapacitors or supercapacitors, i.e. capacitors that can store a large amount of energy. Some of these capacitors that are available today have an energy density of up to 30 Wh/kg.  Imagine that we have 10 kg of such capacitors, storing 300 Wh = 1.08×10⁶ J. If we can fully charge them and then discharge them within 0.89 milliseconds, we would have an average power surge of 1.08×10⁶ J / 0.89×10⁻³ s ≈ 1.21 gigawatts, which should be sufficient to activate the time machine!
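The arithmetic behind that number, using the figures above:

```python
# back-of-the-envelope check of the 1.21 gigawatts figure
energy_density = 30        # Wh per kg (supercapacitor, from the text)
mass = 10                  # kg
discharge_time = 0.89e-3   # seconds

energy_joules = energy_density * mass * 3600   # 300 Wh -> 1.08e6 J
power_watts = energy_joules / discharge_time   # about 1.21e9 W
```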