Modern society has been using the decimal number system for a long time; that is, a number $n$ is expressed in base $10$ as $n = \sum_i b_i 10^i$, where the integers $0 \leq b_i < 10$ are the decimal digits of $n$.
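For instance, $1964 = 1\cdot 10^3 + 9\cdot 10^2 + 6\cdot 10^1 + 4\cdot 10^0$, so its decimal digits are $b_0 = 4$, $b_1 = 6$, $b_2 = 9$ and $b_3 = 1$.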
Other cultures and civilizations have used other number systems, most notably Roman numerals and the Mayan base-20 system, but the decimal system is believed to have become so widespread simply because we have 10 fingers. From a mathematical point of view, the base of the number system is arbitrary. For digital computers, base 2 is more appropriate, as it is easier to build components that represent and process 2-valued logic, or equivalently the binary digit, the bit. It is interesting to note that one of the first digital computers (I say one of the first, as there is a dispute over whether the ABC computer was the first digital computer), ENIAC, used a decimal system and required 10 vacuum tubes to represent a single decimal digit, one tube for each of the numerals 0, 1, ..., 9.
Thus it came as a surprise to me that there is something inherently special about the decimal system. In 1964 Gustav Lochs proved the following theorem.
Lochs' theorem (1964): Let $m$ be the number of terms of the continued fraction expansion of a real number $x$ needed to determine the first $n$ decimal digits of $x$. Then for almost all $x$, $\lim_{n\rightarrow \infty} \frac{m}{n} = \frac{\ln(10)\ln(64)}{\pi^2} \approx 0.970$.
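To get a feel for the theorem, here is a small numerical sketch (not a proof). It assumes the third-party mpmath library for high-precision arithmetic and uses $x = \pi - 3$ as a stand-in for a "typical" real number: it computes the partial quotients of $x$, and for each $n$ counts how many of them are needed before the interval of all reals sharing those quotients pins down the first $n$ decimal digits.

```python
# Numerical sketch of Lochs' theorem: count how many continued fraction terms
# are needed to determine the first n decimal digits of x.
# Assumes the mpmath library for high-precision arithmetic.
from mpmath import mp, floor, pi

mp.dps = 1000                 # working precision: 1000 decimal digits

x = pi - 3                    # fractional part of pi, a "typical" real in (0, 1)

# Partial quotients of x = [0; a1, a2, ...] via the standard algorithm.
a = []
y = x
for _ in range(600):
    y = 1 / y
    ai = int(floor(y))
    a.append(ai)
    y -= ai

# Convergents p_k/q_k kept as exact integers; P[k+1] = p_k, Q[k+1] = q_k.
P, Q = [1, 0], [0, 1]
for ai in a:
    P.append(ai * P[-1] + P[-2])
    Q.append(ai * Q[-1] + Q[-2])

def determines(m, n):
    """True if the first m partial quotients force the first n decimal digits.
    The reals whose expansion starts [0; a1, ..., am] form an interval whose
    endpoints are p_m/q_m and (p_m + p_{m-1})/(q_m + q_{m-1})."""
    d1 = (10**n * (P[m + 1] + P[m])) // (Q[m + 1] + Q[m])
    d2 = (10**n * P[m + 1]) // Q[m + 1]
    return d1 == d2

m = 1
for n in (100, 200, 300, 500):
    while not determines(m, n):
        m += 1
    print(f"n = {n:3d} decimal digits  ->  m = {m:3d} terms  (m/n = {m/n:.4f})")
```

For a single $x$ and finite $n$ the ratio $m/n$ fluctuates around $0.97$; the theorem only guarantees convergence in the limit, and only for almost every $x$.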
What this tells us is that each coefficient of the continued fraction expansion contains slightly more information than each decimal digit. Had we used a base-11 number system, it would have been the opposite: each base-11 digit would contain more information than each additional continued fraction coefficient. Indeed, for base $b$ the limit becomes $\frac{\ln(b)\ln(64)}{\pi^2}$, which crosses $1$ between $b = 10$ and $b = 11$ (at $b = e^{\pi^2/\ln(64)} \approx 10.7$).
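A quick way to see where the crossover happens is to tabulate this base-$b$ constant for a few bases; a minimal sketch, using the formula quoted above with $\ln(10)$ replaced by $\ln(b)$:

```python
# Base-b analogue of Lochs' constant, ln(b)*ln(64)/pi^2:
# below 1 for bases up to 10, above 1 from base 11 onward.
import math

for b in (2, 8, 10, 11, 16):
    limit = math.log(b) * math.log(64) / math.pi**2
    print(f"base {b:2d}: limiting m/n = {limit:.4f}")
```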