Friday, November 18, 2016

Transistor radios

I was listening to Van Morrison's classic hit "Brown Eyed Girl", and the line about transistor radios always appeals to me, as I am fascinated by how song lyrics capture the science and technology of their era.  Another example is the song "Kodachrome" by Paul Simon, and I have written about fractals in the song "Frozen" in an earlier post. When I was younger, I built a crystal radio receiver and was amazed that I could listen to AM radio stations with a device that has no external power source and is powered solely by the energy in the radio waves! I also had a vintage transistor radio that was shaped like a pack of Marlboro cigarettes and powered by a 9V battery. The potentiometers for volume and tuning have such a satisfying feel. The transistor radio was born when transistors supplanted vacuum tubes in the demodulation and amplification circuitry of radio receivers, resulting in portable receivers that are much smaller and use much less power. Today, portable radios are smaller still and are packed with integrated circuits, as more of the signal processing is done in the digital domain. The goal of software-defined radio (SDR) is to replace all analog signal processing with digital signal processing done by computer algorithms, by moving the analog-to-digital (A/D) conversion as early in the signal chain as possible. However, current computer processors and A/D converters do not have the speed and bandwidth to process digital samples at RF frequencies, so some analog processing is still done in SDR to bring the signals down to intermediate or baseband frequencies. So the transistor is still necessary in radios today. But then again, even if the ideal SDR, where all processing is done in software, were possible, computer processors today are still filled with transistors, so technically the term "transistor radio" will be here to stay for a long time!

Saturday, November 5, 2016

"Bots" creating news digests about "bots"

I use a news digest app on my phone to read a selection of important news in areas I have selected. This is nice since it lists only news items that I am most likely to be interested in, saving me time in reading the day's news.  I think the news summaries are created automatically via computer algorithms, since many times the highlighted quote does not make sense or is attributed to the wrong person.  In addition, the "to explore further" section sometimes points to unrelated and/or inappropriate Wikipedia articles. Sometimes this is because the person in the news has the same name as someone much more famous, so the Wikipedia article is about the wrong (but better known) person.

Today, in the "science" section, there is a summary digest article about news organizations utilizing "bots", or automatic algorithms, to aggregate data and generate news articles. It is quite amusing, since once again the highlighted quote is attributed to the wrong person and the "to explore further" section points to the Wikipedia article of a specific newspaper that is only tangentially related to the news item.  

Wednesday, September 28, 2016

Add oil

In Chinese slang, the phrase "加油", literally translated as "add oil", means "put more effort" and is typically used as encouragement to try harder in order to succeed at something. I believe the origin comes from the fact that we need to add oil (gasoline) to cars in order for them to move. As we are all expected to drive electric or hydrogen cars in the future, this might become archaic slang in the not too distant future. 

Addendum: October 4, 2016
After reading this post, my wife asked me what the corresponding Chinese slang should be for electric vehicles. She said that "充電" which is the translation of "charging electricity" is not a good choice since "充電" is slang for "refresh" or "renew" and is typically used when one is tired or drained of energy.

Tuesday, September 27, 2016

Continued fraction expansion of the square root of n: part II

In an earlier blog post, $\delta(n)$ was defined as the smallest term in the periodic part of the continued fraction of $\sqrt{n}$, and I showed that if $r$ is even then $\delta((\frac{rm}{2})^2+m) = r$ for all $m\geq 1$, and if $r$ is odd, then $\delta((rm)^2+2m) = r$ for all $m\geq 1$. Note that $\delta(n)$ is only defined if $n$ is not a perfect square.

If you look at the first few numbers $n$ that satisfy $\delta(n) = r$, it would appear that they all follow the quadratic forms above. However, not all integers $n$ such that $\delta(n) = r$ are of the forms above. In particular, if $r  > 0$ is even, then $\sqrt{\frac{r^4}{4} + r^3 + 2r^2 + 3r + 2} = \sqrt{\frac{(r^2-2)^2}{4}+(r+1)^3}$ has continued fraction expansion $\left[\frac{(r+1)^2+1}{2};\overline{r+1,r,r+1,(r+1)^2+1}\right]$ and thus $\delta\left(\frac{r^4}{4} + r^3 + 2r^2 + 3r + 2\right) = r$, and it is not of the forms above.

Similarly, if $r$ is odd, then $\sqrt{r^4 + r^3 + \frac{5(r+1)^2}{4}}$ has continued fraction expansion $\left[\frac{(r+1)(2r-1)+2}{2};\overline{r,2r-1,r,(r+1)(2r-1)+2}\right]$ and thus $\delta\left(r^4 + r^3 + \frac{5(r+1)^2}{4}\right) = r$ and it is not of the forms above either.
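These claims are easy to check numerically. Here is a small Python sketch (my own illustration, not code from the earlier post) that computes the periodic part of the continued fraction of $\sqrt{n}$ via the standard quadratic-surd recurrence and uses it to evaluate $\delta(n)$:

```python
from math import isqrt

def cf_sqrt_period(n):
    """Periodic part of the continued fraction of sqrt(n), for n not a perfect square."""
    a0 = isqrt(n)
    terms = []
    m, d, a = 0, 1, a0
    # standard recurrence for the continued fraction of a quadratic surd;
    # the period ends when the term 2*a0 appears
    while a != 2 * a0:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        terms.append(a)
    return terms

def delta(n):
    """Smallest term in the periodic part of the continued fraction of sqrt(n)."""
    return min(cf_sqrt_period(n))
```

For example, with $r = 2$ the exceptional value is $\frac{2^4}{4} + 2^3 + 2\cdot 2^2 + 3\cdot 2 + 2 = 28$, and indeed `cf_sqrt_period(28)` gives the period $[3, 2, 3, 10]$, so $\delta(28) = 2$.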


Wednesday, September 21, 2016

Paul Erdős and Kevin Bacon

In his famous 1967 paper, Stanley Milgram describes a study he conducted showing that people are connected to each other via a very small number of acquaintances. This led to the phrase "six degrees of separation" being coined by John Guare in his play of the same name. Mathematicians study a similar concept called the Erdős number. Two persons are linked if they have co-authored a mathematical paper together, and a person's Erdős number is the minimum number of links between him/her and Paul Erdős. Thus Paul Erdős has Erdős number 0; a person other than Erdős who has written a paper with Erdős has Erdős number 1; a person who does not have Erdős number $\leq 1$ and has written a paper with a person with Erdős number 1 has Erdős number 2, etc.  If you have not written a paper with anyone with a (finite) Erdős number, then your Erdős number is $\infty$.
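Computing Erdős numbers is just a breadth-first search for shortest paths in the co-authorship graph. Here is a minimal Python sketch (the graph below is a made-up toy example, not real collaboration data):

```python
from collections import deque

def collaboration_distance(coauthors, root):
    """Breadth-first search over the co-authorship graph: each author's
    number is the length of a shortest chain of co-authors back to root.
    `coauthors` maps an author to an iterable of co-authors."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        a = queue.popleft()
        for b in coauthors.get(a, ()):
            if b not in dist:
                dist[b] = dist[a] + 1
                queue.append(b)
    return dist  # authors absent from the result have number infinity

# toy co-authorship graph with hypothetical names
graph = {
    "Erdős": ["Alice"],
    "Alice": ["Erdős", "Bob"],
    "Bob": ["Alice"],
    "Carol": [],
}
numbers = collaboration_distance(graph, "Erdős")
```

Here "Bob" gets Erdős number 2, while "Carol", having no chain of co-authors back to Erdős, does not appear in the result and so has Erdős number $\infty$.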

As a consequence of a drinking game, there is a similar notion among actors, called the Bacon number: Kevin Bacon has Bacon number 0, actors who have co-starred with him in a movie have Bacon number 1, etc.

The fascinating aspect is that the Erdős numbers and Bacon numbers of most people whose numbers are finite are relatively small, a phenomenon typically referred to as the "small world effect". There are various websites that let you type in a name and attempt to find the Erdős or Bacon number of that person.

There is an additional notion of an Erdős-Bacon number which is the sum of a person's Erdős number and Bacon number. As of now the lowest Erdős-Bacon number appears to be 4.

There is something unsatisfactory about the definition of the Erdős-Bacon number.  In particular, both Paul Erdős and Kevin Bacon have very large (and possibly infinite) Erdős-Bacon numbers.  As of today, according to this link and this link, Paul Erdős has Bacon number $\infty$ and Kevin Bacon has Erdős number $\infty$.

There is one way to remedy this injustice.  Kevin Bacon should publish a math paper with someone with Erdős number 1 (unfortunately, Paul Erdős died in 1996) and star in a movie with people who appeared in the documentary "N Is a Number: A Portrait of Paul Erdős".  Since several mathematicians in that documentary have Erdős number 1, both these activities can be combined if Kevin Bacon makes a documentary about how he collaborated with one (or more) of these mathematicians on a math paper. This would ensure that both Paul Erdős and Kevin Bacon have Erdős-Bacon number 2 (the lowest possible), and the universe will be in order again.

So Mr. Bacon, if you are reading this, please make that your next project!

Thursday, August 25, 2016

The 2016 Rio Olympics

It has been a thrilling Summer Olympics in the last 2 weeks and we enjoyed watching many of the events on TV. While watching the swimming competition, we noticed that Nathan Adrian looks surprisingly similar to Chow Yun Fat (周潤發), especially when he smiles. Chow Yun Fat is one of my favorite Hong Kong actors and I grew up watching him in several TVB TV series, with 北斗雙雄 being my favorite series (having watched it several times over the years). Perhaps they can cast Mr. Adrian when they need to make a biopic of Mr. Chow. 

Another thing I like during the Olympics is that I get to listen to a special CD. Many years ago (around the time of the 1996 Atlanta Olympics, I believe), I took some pictures on film (yes, there used to be such a thing as photographic film) and went to the local drugstore to get them developed. There was a special promotion from Kodak that included a CD titled "The Sound and the Spirit", with music from various Olympic games. Since then I would play this CD every 4 (sometimes 2) years. 

The Hartman-Grobman Linearization Theorem

Theorem: In the neighborhood of a hyperbolic fixed point, a smooth vector field or a diffeomorphism is topologically conjugate to its linear part.

This result was proved independently by Grobman and Hartman around 1959-1960 and basically states that the dynamics near a hyperbolic fixed point are essentially the same as the dynamics of its linearization, which we can characterize completely from the eigenvalue pattern.  This is true both for continuous-time dynamics (vector fields) and discrete-time dynamics (diffeomorphisms).

Here is a sketch of the standard proof for the case of a diffeomorphism.  First, we need the following simple fact for maps on Banach spaces: if $F$ is an invertible contraction, then $I+F^{-1}$ is also invertible. This can be seen as follows. $I+F^{-1} = F^{-1}(I+F)$, so it suffices to show that $I+F$ is invertible.  If $x+F(x) = y+F(y)$ for some $x\neq y$, then $x-y = F(y)-F(x)$, i.e. $\|x-y\| = \|F(y)-F(x)\|$, contradicting the fact that $F$ is a contraction; thus $I+F$ is injective. It is also surjective: for any $z$, the map $x \rightarrow z - F(x)$ is a contraction, and its fixed point satisfies $x + F(x) = z$. Therefore $I+F$ is invertible, and thus $I+F^{-1}$ is invertible since it is the product of two invertible maps.

Consider a diffeomorphism $f$ with a hyperbolic fixed point at $0$, and let $A$ be the linear part of $f$ at $0$.  We want to find a homeomorphism $h = I+\delta$ such that $fh = hA$.  As we are only interested in $f$ in a neighborhood of $0$, we can assume that $f$ can be written as $f = A+\phi_1$, where $\phi_1$ is bounded and has a small Lipschitz constant.  Furthermore, $\phi_1$ can be chosen small enough that $A+\phi_1$ is a homeomorphism. Consider the more general equation $(A+\phi_1)h = h(A+\phi_2)$.  Using $h=I+\delta$ and some manipulation, we get the following Eq. (1):
\[\delta - A^{-1}\delta(A+\phi_2) = A^{-1}(\phi_2-\phi_1(I+\delta))
\]
Next we argue that the linear operator $H: \delta \rightarrow \delta - A^{-1} \delta(A+\phi_2)$ is invertible on the space of bounded continuous functions.
By hyperbolicity of $A$, we can decompose the phase space into the stable subspace $W^s$ and the unstable subspace $W^u$, both invariant under $A$ and $A^{-1}$. Split $\delta = \delta^s + \delta^u$, where $\delta^s$ and $\delta^u$ are functions mapping into $W^s$ and $W^u$ respectively; since $W^s$ and $W^u$ are invariant under $A^{-1}$, the operator $H$ respects this splitting.
On the stable part, the map $G: \delta^s \rightarrow A^{-1}\delta^s(A+\phi_2)$ is invertible with inverse $G^{-1}: \delta^s \rightarrow A \delta^s (A+\phi_2)^{-1}$. Since $\delta^s$ maps into $W^s$, we have $A \delta^s (A+\phi_2)^{-1} = A^s\delta^s(A+\phi_2)^{-1}$, where $A^s$, the restriction of $A$ to $W^s$, satisfies $\|A^s\| < 1$ by hyperbolicity; composing with $(A+\phi_2)^{-1}$ inside the argument does not change sup-norms, so $G^{-1}$ is a contraction. Applying the fact above to $F = -G^{-1}$ (also an invertible contraction) shows that $I+F^{-1} = I - G$, i.e. the restriction of $H$ to the stable part, is invertible. The same argument works on $W^u$, where $A^{-1}$ itself is a contraction, and this implies that $H$ is invertible.

Coming back to Eq. (1) above, we get
\[ \delta = H^{-1}A^{-1}(\phi_2-\phi_1(I+\delta)) = \psi(\delta)\]

For $\phi_1$ and $\phi_2$ small enough, $\psi$ is a contraction, and thus for given $\phi_1$ and $\phi_2$ there exists a unique $\delta$, and hence a unique $h$.  It can be shown that $h$ is a homeomorphism, and by choosing $\phi_2 = 0$ we get the desired result.
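For a concrete feel for such a conjugacy, here is a one-dimensional toy illustration (my own sketch, separate from the proof above). For an expanding map $f(x) = \lambda x + \phi(x)$ with $f(0)=0$, $|\lambda|>1$ and $\phi$ bounded, the limit $g(x) = \lim_{n\rightarrow\infty} \lambda^{-n} f^n(x)$ satisfies $g\circ f = \lambda g$, so $g$ plays the role of $h^{-1}$, conjugating $f$ to its linear part:

```python
import math

def conjugating_map(f, lam, x, n=30):
    """Approximate g(x) = lim lam^{-n} f^n(x), which satisfies
    g(f(x)) = lam * g(x) near an expanding fixed point at 0
    (f(0) = 0, |lam| > 1, bounded nonlinear part)."""
    for _ in range(n):
        x = f(x)
    return x / lam ** n

# toy hyperbolic map: linear part 2x plus a small bounded perturbation
f = lambda x: 2 * x + 0.1 * math.sin(x)
g1 = conjugating_map(f, 2, 1.0)
# the conjugacy relation g(f(x)) = 2 g(x) should hold up to truncation error
residual = abs(conjugating_map(f, 2, f(1.0)) - 2 * g1)
```

Each extra iteration changes the approximation by at most $\lambda^{-n}\|\phi\|_\infty$, so the limit converges geometrically and the residual above is tiny.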

References
D. M. Grobman, "Homeomorphisms of systems of differential equations," Doklady Akademii Nauk SSSR, vol. 128, pp. 880–881, 1959.
P.  Hartman, "A lemma in the theory of structural stability of differential equations," Proc. AMS, vol. 11, no. 4, pp. 610–620, 1960.

Wednesday, August 17, 2016

Rounding the k-th root of n

Consider the problem of finding the $k$-th root of a number $n\geq 0$ and rounding it to the nearest integer, i.e. finding $[\sqrt[k]{n}]$, where $[x]$ denotes $x$ rounded to the nearest integer. This can be easily computed in many computer languages using floating point arithmetic, but care must be taken for large $n$ to ensure that enough significant digits are available. On the other hand, languages such as Python have built-in support for integers of arbitrary size and will automatically allocate more space to fit the number under consideration. This can be used to compute $[\sqrt[k]{n}]$ using only integer arithmetic, without worrying about whether there is enough precision in the floating point representation.

Let $i$ be the largest integer such that $i \leq \sqrt[k]{n}$. The number $i$ can be computed using integer arithmetic with an iterative Newton's method.
Since $n \geq 0$, $[\sqrt[k]{n}] = i+1$ if $\sqrt[k]{n}-i \geq \frac{1}{2}$ and $[\sqrt[k]{n}] = i$ otherwise. The condition $\sqrt[k]{n}-i \geq \frac{1}{2}$ is equivalent to $2^k n \geq (2i+1)^k$, which can be checked using integer arithmetic.
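For completeness, here is one way to compute $i$ in pure Python with an integer Newton iteration (a sketch of the standard method; the name iroot_newton is my own, and the gmpy2 function used below is a faster library implementation of the same computation):

```python
def iroot_newton(n, k):
    """Largest integer i with i**k <= n, for n >= 0 and k >= 1,
    using an integer Newton iteration started above the true root."""
    if n < 2:
        return n
    x = 1 << (n.bit_length() // k + 1)  # power of two >= the k-th root of n
    while True:
        y = ((k - 1) * x + n // x ** (k - 1)) // k  # integer Newton step
        if y >= x:  # the iterates decrease until the floor root is reached
            return x
        x = y
```

Starting from a value at least as large as the root, the iterates decrease monotonically to $\lfloor\sqrt[k]{n}\rfloor$, so the loop stops at the first non-decreasing step.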

A simple Python function using the gmpy2 module to implement this is the following:

from gmpy2 import iroot
def round_root(n,k): # round(k-th root of n), n >= 0
    i = iroot(n,k)[0]
    return int(i) + int(2**k*n >= (2*i+1)**k)

The gmpy2 module also includes the functions isqrt_rem and iroot_rem. The function isqrt_rem(n) returns a pair of numbers $i,j$ such that $i$ is the largest integer $\leq \sqrt{n}$ and $j = n-i^2$.
Similarly, iroot_rem(n,k) returns a pair of numbers $i,j$ such that $i$ is the largest integer $\leq \sqrt[k]{n}$ and $j = n-i^k$.
Since
\begin{eqnarray*}(2i+1)^k &=& (2i)^k + (2i+1)^{k-1} + \\
&&(2i+1)^{k-2}2i + \cdots + (2i+1)(2i)^{k-2} + (2i)^{k-1}\end{eqnarray*}
and $2^k n = (2i)^k + 2^k j$, the condition $2^k n \geq (2i+1)^k$ can be rewritten as:
\begin{eqnarray*}2^k j  &\geq &(2i+1)^{k-1} + (2i+1)^{k-2}2i + \cdots + (2i+1)(2i)^{k-2} + (2i)^{k-1}\\ & = & \sum_{m=0}^{k-1} (2i+1)^{k-1-m}(2i)^m \end{eqnarray*}
For $k=2$, this reduces to $4j \geq 4i + 1$. A Python function implementing $[\sqrt{n}]$ is:

from gmpy2 import isqrt_rem
def round_sqrt(n): # round(square root of n), n >= 0
    i, j = isqrt_rem(n)
    return int(i) + int(4*(j-i) >= 1)

Similarly, for $k=3$, the condition reduces to $8j \geq 6i(2i+1)+1$.
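Putting the $k=3$ case together (for self-containedness, this sketch replaces gmpy2's iroot_rem with a simple stdlib-only integer cube root by bisection; with gmpy2 available, iroot_rem(n,3) gives the same pair $i,j$):

```python
def icbrt(n):
    """Largest integer i with i**3 <= n (n >= 0), by bisection."""
    lo, hi = 0, 1
    while hi ** 3 <= n:
        hi *= 2          # grow until hi**3 > n
    while hi - lo > 1:   # invariant: lo**3 <= n < hi**3
        mid = (lo + hi) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid
    return lo

def round_cbrt(n):  # round(cube root of n), n >= 0
    i = icbrt(n)
    j = n - i ** 3
    return i + int(8 * j >= 6 * i * (2 * i + 1) + 1)
```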

Wednesday, July 27, 2016

A way for Spiderman to catch the Green Goblin

Some years ago, we worked on a problem in digital halftoning for which we proved a result [1] stating that there is a variant of the vector error diffusion algorithm for which the sum of the errors is bounded. This implies that the average error decreases to 0 as the number of errors being averaged increases. This can be paraphrased as saying that the average over a large area of the halftoned image is close to the average of the original image over the same area, which is how we expect a halftoning algorithm to behave. In the course of this research, I came up with the following scenario to which this result also provides a solution, and I have used it to describe the problem to a lay audience. In celebration of Spiderman's well-received introduction to the Marvel Cinematic Universe, I thought I'd describe it here:

Consider a city whose shape is a convex polygon, with a building at each corner of the polygon. The main villain, the Green Goblin (GG), is loose in the city, and it is up to Spiderman (S), our friendly neighborhood superhero, to catch him. At the start, GG and S are located at different places in the city. Because of fatigue or lack of fuel, at each time epoch both GG and S move less and less. However, whereas GG can move arbitrarily within the city, S (being a webslinger) can only move toward a building at a corner of the city, along the line connecting S and this building.



More precisely, time is divided into epochs numbered 1, 2, 3, ...
At the k-th epoch:

  1. GG picks a destination within the city to move to and moves 1/k of the distance toward that destination.
  2. S picks a building located at a corner of the city and moves 1/k of the distance to that building, in the direction of that building.
The question is: can S ever catch up to GG?  The result in [1] shows that the answer is yes and gives an explicit algorithm for which building S should swing toward at each epoch in order for S to catch up to GG.
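To see the connection to error diffusion, note that if $g_k$ and $s_k$ are the positions after epoch $k$, then $k\,g_k = (k-1)g_{k-1} + d_k$ and $k\,s_k = (k-1)s_{k-1} + c_k$, where $d_k$ is GG's destination and $c_k$ the chosen corner, so $k(g_k - s_k) = \sum_{j\leq k}(d_j - c_j)$: keeping this sum of "errors" bounded makes the distance shrink like $1/k$. Here is a toy Python simulation for a square city with a greedy corner choice (my own sketch of the idea, not the exact algorithm of [1]):

```python
import random

def pursue(corners, g, s, epochs=2000, seed=0):
    """Simulate the chase: at epoch k, GG moves 1/k of the way toward a
    random destination; S swings 1/k of the way toward the corner chosen
    greedily to keep the accumulated error k*(GG - S) small."""
    rng = random.Random(seed)
    for k in range(1, epochs + 1):
        d = (rng.random(), rng.random())  # GG's destination in the unit square
        g = tuple(gi + (di - gi) / k for gi, di in zip(g, d))
        # if S swings toward corner c, then k*s' = (k-1)*s + c, so the
        # accumulated error k*(g - s') equals k*g - (k-1)*s - c
        def error(c):
            return sum((k * gi - (k - 1) * si - ci) ** 2
                       for gi, si, ci in zip(g, s, c))
        c = min(corners, key=error)
        s = tuple(si + (ci - si) / k for si, ci in zip(s, c))
    return g, s

corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
g, s = pursue(corners, (0.3, 0.7), (0.9, 0.2))
distance = sum((a - b) ** 2 for a, b in zip(g, s)) ** 0.5
```

For the square, the greedy choice keeps each coordinate of the accumulated error within $[-\frac{1}{2},\frac{1}{2}]$, so after 2000 epochs the distance between S and GG is on the order of $10^{-4}$.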

References
[1] R. Adler, B. Kitchens, M. Martens, A. Nogueira, C. Tresser, C. W. Wu, "Error bounds for error diffusion and related digital halftoning algorithms," Proceedings of IEEE International Symposium on Circuits and Systems, 2001, pp. II-513-516.

Saturday, July 23, 2016

Lava, donuts and printing

We have just returned from a vacation touring the British Isles. In particular, we visited the Giant's Causeway in Northern Ireland. The area is covered by more than 40,000 basalt columns, many of them having a shape close to a hexagonal column (similar to the shape of a pencil). They were formed when lava was cooling and shrinking. Why the hexagonal shapes? This might be due to the fact that a hexagonal lattice arrangement of circles in the plane is the densest arrangement of circles of the same size [1]. The hexagonal lattice arrangement also gives the covering of the plane by equal circular disks with the least overlap. Many arrangements of objects in nature, such as honeycombs, have this hexagonal structure. The Voronoi regions of this arrangement are hexagons. Since this arrangement is periodic along two (non-perpendicular) axes, one can view it as a periodic packing on the torus, provided the density matches the dimensions of the torus. When the density does not match, we don't have a hexagonal packing on the torus, and it is not clear what the densest packing is. This has applications in digital halftoning [2]. The only difference in the halftoning application is that the circle centers are points on a discrete grid on the torus.  We studied 2 algorithms to generate such packings on the torus. The first algorithm is the Direct Binary Search (DBS) algorithm [3] and generates patterns like this:



The second algorithm is based on the Riesz energy minimization theory of Hardin and Saff [4] and we were able to obtain patterns that are more uniform than DBS:
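As a rough illustration of the energy-minimization idea, here is a toy continuous-space Python sketch (my own illustration, not the discrete-grid algorithm of [2] or the method of [4]): repeatedly push each point away from the others along the gradient of the Riesz $s$-energy $\sum 1/d(p,q)^s$, using wrap-around (torus) distances and a capped step length for stability:

```python
import math
import random

def disperse_on_torus(n, s=2.0, steps=300, step_size=0.02, seed=1):
    """Disperse n points on the unit flat torus by gradient descent on the
    pairwise Riesz s-energy, with wrap-around distances. Toy sketch only."""
    rng = random.Random(seed)
    pts = [[rng.random(), rng.random()] for _ in range(n)]
    for _ in range(steps):
        forces = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                # shortest displacement from pts[j] to pts[i] on the torus
                dx = [(pts[i][t] - pts[j][t] + 0.5) % 1.0 - 0.5 for t in (0, 1)]
                d2 = dx[0] ** 2 + dx[1] ** 2
                w = s / d2 ** (s / 2 + 1)  # repulsion from the 1/dist^s term
                forces[i][0] += w * dx[0]
                forces[i][1] += w * dx[1]
        for i in range(n):
            norm = math.hypot(forces[i][0], forces[i][1])
            scale = step_size / max(1.0, norm)  # cap each move at step_size
            for t in (0, 1):
                pts[i][t] = (pts[i][t] + scale * forces[i][t]) % 1.0
    return pts
```

With only two points, for instance, the repulsion drives them toward maximal separation on the torus; with more points, the mutual repulsion spreads them into a dispersed pattern reminiscent of the figures above.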


In both cases, they look like the patterns found on Giant's causeway:



In 3 dimensions, the densest packing is the face-centered cubic (FCC) lattice packing (also known as the "cannonball" packing).  It was conjectured to be the densest packing by Kepler in 1611, and it was only proved relatively recently by Hales, with a proof first announced in 1998 and the correctness of the lengthy proof verified by computer in 2014.

References:
[1] J. Conway and N. J. A. Sloane, "Sphere Packings, Lattices and Groups", Springer, 3rd Edition, 1998.
[2] C. W. Wu, B. Trager, K. Chandu and M. Stanich, "A Riesz energy based approach to generating dispersed dot patterns for halftoning applications," Proceedings of SPIE-IS&T Electronic Imaging, SPIE, vol. 9015, pp. 90150Q, 2014,
[3]  J. P. Allebach, “DBS: retrospective and future directions,” in Proceedings of SPIE, 4300, pp. 358–376, 2001.
[4] D. P. Hardin and E. B. Saff, “Discretizing manifolds via minimum energy points,” Notices of the AMS 51(10), pp. 1186–1194, 2004.