Posts

  • The Blogging Gauntlet: May 2 - Too Tired To Play

    This is part of The Blogging Gauntlet of May 2016, where I try to write 500 words every day. See the May 1st post for full details.

    Let’s talk about work and play.

    This is a theme that people absolutely love to bring up when talking about productivity. Here’s how the argument goes. People who do lots of work do so because they like their work. They find ways to make work fun. The average employee goes to work to earn a paycheck, using that money to fund their play elsewhere. They work to live, instead of living to work. The mythical employee who treats their work as play lives for their work, because their work isn’t work; it’s play! That self-satisfaction drives them to work on side project after side project.

    I’ll fully admit I’m strawmanning the argument a bit. It’s not as though it’s unfounded. People who like their job will do more at their job. It’s obvious. People with big hobbies will spend time outside of work on those hobbies. They’ll take salsa lessons, or practice playing the piano. Maybe they’ll try organizing a roleplaying campaign.

    What this portrayal misses is that even if you like your work, it’s still work. Someone can both deeply enjoy their job and deeply desire a break from it. Similarly, some hobbies require lots of effort. Learning to play the piano requires focused practice. Running a roleplaying campaign requires crafting a plausible setting, and learning how to improvise against the insane actions of roleplayers themselves.

    If there is work and there is play, then for me blogging is semi-play. It’s nice, but it requires effort, and sometimes I’m too tired for that. If I’ve just worked for 12 hours, the last thing my brain wants to do is do more. I’m too tired to work, and I’m too tired to play.

    It doesn’t matter that blogging is more personally fulfilling than being a passive consumer of other people’s work. Entertainment is entertaining! It’s made that way! Zero effort for a small blip of amusement. Welcome, dear children, to the fire hose of information.

    An xkcd comic about consuming

    There is always a relevant xkcd

    I don’t want to bash the entertainment industry that hard, because it would be hypocritical. This blog is a contributor to that fire hose, and I want people to drink from it.

    I want to write more, but there’s this pressure to make the words I’m writing perfect and well-polished. To make every sentence a clean pearl, saying exactly what I mean in exactly as many words as I need. The issue is that it takes a long time to write in a polished way. The more polished I want something, the more effort that has to go into it. My standards for myself make blogging too daunting to do unless I have little else going on, and that’s not sustainable.

    A few weeks ago, I had a conversation with somebody. She said it was nice to have a low effort writing outlet. That way, she could write down her passing thoughts, without worrying about making it perfect.

    This challenge is a step towards making my blog take less effort. By requiring 500 words every day, I’m forced to lower my standards on what’s publishable and what isn’t. I won’t lie, that’s going to sting a bit. It doesn’t matter how many layers of self-deprecation I weave about myself, I care a lot about the quality of the things I produce. It’s natural to care; in fact, I’d say it’s incredibly strange not to. The end result of a well-edited post can be glorious to behold, but it’s a long journey to get there, and I find it less fun than the first step of turning my thoughts into words.

    I’m treating this gauntlet as one small piece of my long term plan to get better at working towards long term plans. In the short term, my writing is going to be straight garbage. In the long term, I’ll be more articulate, and I’ll be better at understanding the power of taking small concrete steps towards lofty uncertain goals.

  • The Blogging Gauntlet: May 1 - Introduction

    Recently, I haven’t been updating this blog. The biggest factor was that right after spring break finished, all of my classes decided they could really get moving, and I’ve been swamped with final projects until today.

    However, that shouldn’t be an excuse not to blog at all. In fact, for about the first two weeks after break, I found myself wanting to write blog posts about various silly observations I’d had. I knew I was busy, and that blogging had to take a backseat, but it was nagging me for a while. To quote Art & Fear yet again,

    Artists don’t get down to work until the pain of working is exceeded by the pain of not working.

    Yet as the weeks passed by, that motivation dwindled. I always had an excuse. This presentation is due the 14th. I need to fix my research code in time to run experiments. I have a paper to write. These were all indisputable facts, and the consequences were similarly unavoidable: my blogging habit slowly dissipated. Gaming and TVTropes rushed into the void, and soon I was back where I started.

    Classes have finished at Berkeley, meaning I’m free for the first time in a long while. But in just over a month, I’ll be starting a full-time job, and I’m worried I’ll once again have reasons not to blog. I’ll be too tired, I’ll be too drained. Those are the words I’ll tell myself to justify not writing, and sometimes they will be true and sometimes they won’t, but the outcome will be the same either way.

    With a new month comes a new start and a new opportunity. I’ve been planning this for a while, and I am proud to present: The Blogging Gauntlet of May 2016!

    (Please clap.)

    Here’s how it works.

    • For every day of May, I will write and post at least 500 words.
    • If I write more than 500 words for a day, that’s great! I still have to write 500 words the next day. There is no rollover, there is no surplus, and there is no buffer. Five hundred words per day, every day, all written between 12:00:00 AM and 11:59:59 PM local time.
    • I am allowed to write on whatever I want. I am allowed to write as awfully as I want. Posts do not have to be well-edited, but I will try to edit them if I have the time.
    • To make my brain take notice of this, every day I fail to write 500 words, I will donate $20 to charity. I plan to distribute any money donated based on GiveWell’s recommendations.

    This gauntlet is a somewhat drastic measure, but I am very sure I can do this. Looking back, I was about as busy last semester as I was this semester, but I still managed to get posts out on a semi-regular basis.

    The idea behind this challenge is that 500 words is not that much. Yet, if I do 500 words every day, that’s 15,000 words this month, and that’s quite a bit. Think NaNoWriMo, but smaller scale and more personal. This should also help flush my queue of ideas I want to write down but haven’t yet. Based on reception, I can then narrow down which ideas I want to write about in more detail.

    If I can’t do this when I don’t have work, there’s no way I’ll do it when I do have work. If I can do this for an entire month, the habit may finally stick.

    I’m pretty excited about this. I have more to say about why I’m doing this, but I’ll leave it for another time.

    (Finally, a parting remark: yes, my brain models giving to charity as a punishment, not a reward. If you’re wondering about the ethical gymnastics going on behind that, stay tuned, because I will almost certainly write about it.)

  • Primes, Riemann Zeta, and Pi, Oh My!

    I’ve been working on other posts, but they’ve been harder to write than expected. A quick recreational math post should help fill the gap.

    I’m targeting this towards the high school math contest audience. That means I won’t assume anything past calculus, but I will assume many things up to calculus, and I also assume people reading this will look up unfamiliar terms on their own.

    Relatively Prime Random Numbers

    Let \(a\) and \(b\) be two uniformly random natural numbers. What is the probability \(a\) and \(b\) are relatively prime?

    First off, this problem isn’t well formed. You can’t define a uniform distribution over the naturals, because such a distribution would break probability theory axioms. (See here if curious.) To make this formal, we should really be picking naturals \(a,b\) uniformly from \(1\) to \(N\), and ask what the probability converges to as \(N\) approaches \(\infty\).

    Let \(P(n)\) be the probability that two uniformly random natural numbers from \(1\) to \(n\) (inclusive) are relatively prime. What is \(P(\infty) = \lim_{n\to\infty} P(n)\)?

    This was first solved by Dirichlet in 1849. For this proof, I’m handwaving the limit and assuming that random natural numbers act according to intuition.

    Two numbers \(a,b\) are relatively prime if they share no common factors. This is true if and only if for every prime \(p\), \(a\) and \(b\) are not both divisible by \(p\).

    Since we pick uniformly at random, the probability \(p\) divides a random natural number is \(1/p\). The probability both numbers are divisible by \(p\) is \(1/p^2\), giving

    \[P(\infty) = \left(1 - \frac{1}{2^2}\right) \left(1 - \frac{1}{3^2}\right) \left(1 - \frac{1}{5^2}\right)\cdots = \prod_{p\text{ prime}} \left(1 - \frac{1}{p^2}\right)\]

    Now, here’s where you apply a neat trick. Take the reciprocal of both sides.

    \[\frac{1}{P(\infty)} = \frac{1}{1 - \frac{1}{2^2}}\cdot \frac{1}{1 - \frac{1}{3^2}}\cdot \frac{1}{1 - \frac{1}{5^2}}\cdots\]

    For each fraction, substitute the infinite series \(\frac{1}{1-r} = 1 + r + r^2 + \cdots\).

    \[\frac{1}{P(\infty)} = \prod_{p\text{ prime}} \left( 1 + \frac{1}{p^2} + \frac{1}{p^4} + \frac{1}{p^6} + \cdots \right)\]

    I claim the right hand side is the same as

    \[\sum_{n=1}^\infty \frac{1}{n^2}\]

    This follows from a clever factoring argument. Suppose we expand out the product of the infinite series. We would get infinitely many terms, where each term is generated by choosing one term from every infinite series and multiplying them together. For example, take \(1/2^2\) from the \(p = 2\) series, \(1/3^2\) from the \(p = 3\) series, and \(1\) from the rest. The term obtained is

    \[\frac{1}{2^2\cdot 3^2} = \frac{1}{6^2}\]

    Now, take \(1/2^4\) from the \(2\) series, \(1\) from the \(3\) series, \(1/5^2\) from the \(5\) series, and \(1\) from the rest. That gives

    \[\frac{1}{2^4 \cdot 5^2} = \frac{1}{(2^2 \cdot 5)^2} = \frac{1}{20^2}\]

    We can pick terms such that they multiply to \(1/n^2\) for any \(n\). For every prime \(p_i\), find the largest power \(p_i^{k_i}\) that divides \(n\). (\(k_i\) could be \(0\).) Then, take \(1/p_i^{2k_i}\) from the \(p_i\) series. By prime factorization, the product is

    \[\prod_{i} \frac{1}{p_i^{2k_i}} = \frac{1}{\left(\prod_{i} p_i^{k_i}\right)^2} = \frac{1}{n^2}\]

    Every natural number has a unique prime factorization, so every \(n\) is generated once and exactly once. Thus, the product expands to the sum of the reciprocal of the squares

    \[\prod_{p\text{ prime}} \left( 1 + \frac{1}{p^2} + \frac{1}{p^4} + \frac{1}{p^6} + \cdots \right) = \sum_{n=1}^\infty \frac{1}{n^2}\]

    (If you’re like me, you may want to spend a moment admiring this argument. It’s a contender for my favorite proof ever.)
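    If you want to see the identity with your own eyes, truncated versions of both sides agree closely. Here’s a quick sketch in plain Python (the sieve helper and the truncation depth are my own choices, purely for illustration):

    ```python
    import math

    def primes_up_to(n):
        # Simple sieve of Eratosthenes.
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = [False] * len(sieve[p * p::p])
        return [p for p, is_prime in enumerate(sieve) if is_prime]

    # Left side: product over primes of the geometric series 1 + 1/p^2 + 1/p^4 + ...,
    # summed in closed form as 1 / (1 - 1/p^2).
    product = 1.0
    for p in primes_up_to(10_000):
        product *= 1 / (1 - 1 / p ** 2)

    # Right side: truncated sum of 1/n^2.
    partial_sum = sum(1 / n ** 2 for n in range(1, 10_001))

    print(product, partial_sum)  # the two truncations nearly coincide
    ```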

    The final expression is

    \[\frac{1}{P(\infty)} = \sum_{n=1}^\infty \frac{1}{n^2}\]

    Substituting

    \[\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}\]

    gives that the probability is \(6/\pi^2 \approx 0.608\).
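    The limit is easy to sanity-check by brute force: count coprime ordered pairs up to \(N\) and compare against \(6/\pi^2\). A minimal sketch in plain Python (the function name is my own):

    ```python
    import math

    def coprime_probability(N):
        # Fraction of ordered pairs (a, b) in [1, N]^2 with gcd(a, b) = 1.
        hits = sum(1 for a in range(1, N + 1)
                     for b in range(1, N + 1)
                     if math.gcd(a, b) == 1)
        return hits / N ** 2

    print(coprime_probability(500))  # already close to the limit
    print(6 / math.pi ** 2)
    ```

    Even modest \(N\) lands within a percent or so of the limit, which is some reassurance that the handwaving above didn’t lead us astray.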

    Riemann Zeta And More Handwaving

    Now, unless you’ve seen it before, it should not be clear why

    \[\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}\]

    This was first proved by Euler in 1734. His proof was a bit sketchy, and it took another 7 years to make it rigorous. Nevertheless, I’m presenting the sketchy proof.

    To prove this result, we’re going to start from something completely different: the Taylor series for \(\sin(x)\). For all \(x\),

    \[\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\]

    Divide through by \(x\) to get

    \[\frac{\sin(x)}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots\]

    (If you don’t know what Taylor series are, take this on faith. I don’t want to do calculus.)

    The idea Euler used was to treat \(\frac{\sin(x)}{x}\) as an infinite degree polynomial. Any polynomial \(p(x)\) with roots \(r_1,\ldots, r_n\) can be written as

    \[p(x) = a(x-r_1)(x-r_2)\cdots(x-r_n)\]

    where \(a\) is some constant. For this proof, we’re using a different form. Divide each term by \(-r_i\) to get

    \[p\left(x\right) = a'\left(1 - \frac{x}{r_1}\right)\left(1 - \frac{x}{r_2}\right)\cdots\left(1-\frac{x}{r_n}\right)\]

    where \(a'\) is some other constant. Assuming this works for functions with infinitely many roots, \(\frac{\sin(x)}{x} = 0\) at \(x = \pm \pi, \pm 2\pi, \pm 3\pi, \ldots\).

    \[\frac{\sin(x)}{x} = a'\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)\cdots\]

    Equate this with the Taylor series. To make the constant term match up, we must have \(a' = 1\). (Try converting to the \(a(x-r_1)(x-r_2)\cdots\) form and you’ll see why it’s sketchy to assume \(\sin\) acts like an infinite degree polynomial.)

    Group up the terms for roots \(k\pi\) and \(-k\pi\) to get

    \[\frac{\sin(x)}{x} = \left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{4\pi^2}\right)\left(1-\frac{x^2}{9\pi^2}\right)\cdots\]
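    Euler’s leap here is unjustified, but it’s at least numerically plausible: the truncated product really does hug \(\sin(x)/x\). A sketch in Python (the truncation depth is an arbitrary choice of mine; convergence is slow):

    ```python
    import math

    def sin_over_x_product(x, terms=50_000):
        # Truncated Euler product: prod_{n=1}^{terms} (1 - x^2 / (n^2 pi^2)).
        result = 1.0
        for n in range(1, terms + 1):
            result *= 1 - x ** 2 / (n ** 2 * math.pi ** 2)
        return result

    x = 1.3
    print(sin_over_x_product(x), math.sin(x) / x)  # the two agree closely
    ```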

    Now, compare the coefficient of \(x^2\). To get an \(x^2\) term, we have to choose \(-x^2/(n^2\pi^2)\) exactly once, and choose \(1\) from the rest. This gives

    \[-\frac{1}{3!}x^2 = \sum_{n=1}^\infty -\frac{x^2}{n^2\pi^2}\]

    Cancelling \(-x^2\) from both sides and multiplying by \(\pi^2\) gives the final answer of

    \[\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}\]

    This sum of reciprocals is a special case of the Riemann zeta function, defined as

    \[\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}\]

    It turns out that analyzing this function for complex \(s\) has deep connections to the primes. I can’t explain why because I don’t know analytic number theory, but if you consider that we started from relatively prime numbers and got to here, I don’t think a connection is too strange.

    \(\pi\) By Prime Products

    I’ll close with a fun identity that relates the prime numbers to \(\pi\). This identity was also discovered by Euler.

    Every odd prime \(p\) is either \(1 \text{ mod } 4\) or \(3 \text{ mod } 4\). Take the negative reciprocal if it’s \(1 \text{ mod } 4\) and the positive reciprocal if it’s \(3 \text{ mod } 4\). Add \(1\), take the product over all odd primes, and you get the identity

    \[\frac{4}{\pi} = \left(1+\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\left(1+\frac{1}{7}\right)\left(1+\frac{1}{11}\right)\left(1 - \frac{1}{13}\right)\cdots\]

    The proof borrows from both previous sections. First, take the reciprocal.

    \[\frac{1}{1+\frac{1}{3}}\cdot \frac{1}{1-\frac{1}{5}}\cdot \frac{1}{1+\frac{1}{7}}\cdot \frac{1}{1+\frac{1}{11}}\cdot \frac{1}{1-\frac{1}{13}}\cdots\]

    Replace each term with an infinite series

    \[\prod_{p \equiv 1 \text{ mod }4} \left(1 +\frac{1}{p} + \frac{1}{p^2}+\cdots\right) \prod_{p \equiv 3 \text{ mod }4} \left(1 - \frac{1}{p} + \frac{1}{p^2}-\cdots\right)\]

    And again, expand out the infinite product. Ignoring the signs, this is the same trick as the product for relatively prime random numbers, except with \(1/p\) in place of \(1/p^2\) and without an infinite series for \(2\). When expanded, we’ll get exactly one term for every odd number.

    (I love this factoring trick so much. It’s great, even if it’s difficult to apply on other problems.)

    Now, consider the signs of those terms. Term \(1/n\) is positive if and only if \(n\) is the product of an even number of (not necessarily distinct) \(3\text{ mod } 4\) primes. If you multiply an even number of \(3\text{ mod }4\) primes, you get a number that’s \(1 \text{ mod } 4\). If you multiply an odd number of those primes, you’ll get a \(3 \text{ mod } 4\) number.

    So, the sign is negative at every other odd number, giving an expansion of

    \[1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots\]

    which - surprise - is Leibniz’s formula for \(\pi\). (It follows from the Taylor series for \(\arctan\) evaluated at \(1\).)

    \[\frac{\pi}{4} = \frac{1}{1+\frac{1}{3}}\cdot \frac{1}{1-\frac{1}{5}}\cdot \frac{1}{1+\frac{1}{7}}\cdot \frac{1}{1+\frac{1}{11}}\cdot \frac{1}{1-\frac{1}{13}}\cdots\]

    Taking the reciprocal of both sides recovers the identity we started with:

    \[\frac{4}{\pi} = \left(1+\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\left(1+\frac{1}{7}\right)\left(1+\frac{1}{11}\right)\left(1 - \frac{1}{13}\right)\cdots\]
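    This product converges slowly, but a brute-force check still gets convincingly close to \(4/\pi\). A Python sketch, with a sieve helper of my own and an arbitrary cutoff:

    ```python
    import math

    def primes_up_to(n):
        # Simple sieve of Eratosthenes.
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = [False] * len(sieve[p * p::p])
        return [p for p, is_prime in enumerate(sieve) if is_prime]

    product = 1.0
    for p in primes_up_to(1_000_000):
        if p == 2:
            continue  # the product runs over odd primes only
        # +1/p for primes that are 3 mod 4, -1/p for primes that are 1 mod 4
        product *= (1 + 1 / p) if p % 4 == 3 else (1 - 1 / p)

    print(product, 4 / math.pi)
    ```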

    Oh My! That’s All, Folks!

    That’s all I’ve got. See you next time, for something less mathy.
