# Posts

• ## Research Communication is Hard

I’ve been working on my current research project for around four to six months. The results haven’t been that good. I have a meeting about it tomorrow, and I’d say there’s about a 50-50 chance we decide I should stop working on the project and switch to another one. If it ends that way, well, that sucks. But after reflecting on it for a while, I think I’m mentally prepared for that outcome.

In preparation for that meeting, my manager recommended I write a document describing what the project is, how it got to where it is now, and all the things I tried along the way. I agreed this was a good idea, although I was starting to get sick of explaining my project. In the past, I wrote a doc when I proposed the project, then made a presentation after it got refined, then wrote another doc when I pivoted. Ironically, all that documentation made it hard to figure out where the project was, because each doc built on the previous ones.

From a naive viewpoint, writing a summary shouldn’t have taken that long. I’ve been living and breathing this project for months, and I had already written several documents on the project. How long could writing one more take?

Well, turns out it took me about a day. That’s actually not too long, all things considered, but why did it have to take that long at all? It was only summarizing things I’d already written.

The tricky part of research is that the author is trying to take grand ideas that have never been arranged the way they've arranged them, and convert those ideas into text that gets the point across.

Yasha Berchenko-Kogan (who I actually kinda know in real life) has a great quote about communicating research.

> Communicating these ideas is a bit like trying to explain a vacuum cleaner to someone who has never seen one, except you’re only allowed to use words that are four letters long or shorter.

A lot of the metaphorical four-letter words I used weren’t useful anymore, and filtering them down to just the key points was most of the work.

When I got comments back, I had a shocking revelation: the only one who fully understood my project was me.

On the surface, that’s not so surprising, so let me clarify. I’ve been talking to most of these people about my project for months. They’ve received and read every doc I wrote about my project, from start to finish. I’ve had weekly meetings with several of these people, to explain issues I ran into and get tips on how to get past them.

And the whole time, I was the only one who understood everything that was going on?

What the hell?

I used to think good writing was important because nobody wants to extend research that’s hard to understand.

But now I’ve realized there’s another big benefit. If you want people to give useful feedback, it’s very important they actually understand what the problem is.

I’m about to describe one of those ideas that’s obviously true. Are you ready? Okay, good.

When people say “That sounds good to me”, they’re implicitly saying “Based on my understanding of the problem, that sounds good to me.” Worryingly, if they missed a few details you tried to convey, their brains will fill in the details by themselves. And the details they fill in will be ones that make sense to them, which likely differs from the details you were trying to convey in the first place.

The relevant point: if you ask for help from your advisor, and your problem is in one of those details you failed to convey, you’re not going to get good advice.

If you yourself are a researcher, and this was a new idea for you, read over it a few times. I think it’s worth knowing.

In hindsight, the biggest shifts in my research didn’t happen when I had a new idea. They happened when experienced researchers finally grasped a detail I thought they’d already understood. They would then immediately make an obvious argument for why doing it that way was dumb, and then I’d feel like an idiot.

If everyone’s busy climbing your chain of reasoning, they won’t notice the weak links, because they’ll assume all the links make sense.

That suggests two things:

• Prefer simple solutions over complicated ones. Complicated solutions are hard to explain, meaning it’s harder to get good feedback, and complicated solutions are the ones that need feedback the most.
• If you do need to use a complicated solution, be very, very careful that you understand why you’re doing everything you’re doing, and that the reasons you made up four months ago still make sense in the present.

If I had explained my research project better, I could have saved a few weeks of work, because I could have avoided a lot of silly ideas that weren’t worth doing in retrospect.

Research miscommunication is inevitable. It takes a long time to explain things properly, and usually people don’t have the time for that. I suspect that’s the main reason research takes so long - it’s the blind leading the blind. Sometimes the blind one is you. Sometimes it’s someone else. It’s hard to tell the difference.

If the pattern holds, people aren’t going to understand everything I wanted them to understand from this post. Oh well. I tried.

• ## MIT Mystery Hunt 2017

When ✈✈✈ Galactic Trendsetters ✈✈✈ did introductions this year, I realized with some surprise that this was my 5th time solving Mystery Hunt. Only my 2nd time flying in for Hunt though. Now that I’m a Real Adult with a Real Job, it’s harder to justify taking a few days off, but after five years I’ve hit a critical mass of MIT people I know and puzzle people I know.

If you want to avoid spoilers, you should stop reading now.

* * *

Let’s get some elephants out of the room first.

Was Hunt fun? Yes, absolutely. I loved the theme. The opening skit was great, the flavor was baked in all over the place, and overall it’s one of my favorite Hunts.

Was Hunt shorter than Setec Astronomy expected? Yes, absolutely.

Was having a short Hunt a bad thing? I’m not so sure. ✈✈✈ Galactic Trendsetters ✈✈✈ got 4th this year, which is a lot better than before. From what I heard, the team got bigger, people got better at looking at metas early, and the metas were easier to solve than in previous years. Combined, that meant we finished by midday Saturday. Turns out you can do things on MLK weekend besides solve puzzles. I played some board games and party games, had a nice dinner, and got to work on Facebook Hacker Cup Round 1 at a reasonable time, instead of 4 AM.

The aforementioned nice dinner, at Thelonious Monkfish.

A game from Jackbox Party Pack 3

Yes, I can see how a short Hunt could be disappointing. Some people on our team arrived Saturday morning, and by that point we only had two metapuzzles left. On the flip side, running out of puzzles is a competitive team problem. There were several teams who got to finish Hunt for the first time, and that’s really awesome.

Look, time estimation is hard. I should know, I contributed to the Sages hunt. I place no blame at all on Setec for a short Hunt. Setec wrote good puzzles. In fact, in some ways they were almost too good. To me, getting good at puzzles is less about noticing the trick, and more about learning how to solve from 60% of the data, 10% of which might be wrong. I can’t do it, but some people are eerily good at solving past errors.

Neo: What are you trying to tell me? That I can identify extractions?

Morpheus: No, Neo. I’m trying to tell you that when you’re ready, you won’t have to. [Because you’ll solve metas by pulling random letters out of the puzzle answers.]

Hunts get longer when you have to figure out incredibly obtuse extraction mechanisms, and we didn’t hit as many of those this year.

I’ll sum it up like this. The metas were clued through tons of flavortext, which made them easier to solve. That led to more backsolving constraints, which made it easier to solve puzzles. That sped up the unlocking process, and it all cascades from there. At some point we had three Quest rounds open with zero solves in each, just because we’d been solving metas and Character puzzles so quickly. The Chemist meta and Cleric meta were especially open to backsolving, and because character levels were the main unlock mechanism, we backsolved those very aggressively. (A bit too aggressively in fact. HQ got mad at us and stopped confirming our Cleric backsolves for an hour. We’re sorry!)

I did like how clear the character level unlock mechanism was. It made it easy to decide which puzzles to prioritize. We never would have done the scavenger hunt if people hadn’t insisted we unlock Wizard level 6 already. I didn’t work on it myself, but it led to a scroll of the Bee Movie script, which was a glorious sight.

I think it comes down to how much you were here for puzzles, and how much you were here for hanging out with people for puzzles. I came for a bit of both, and I got a bit of both, so I’m satisfied.

On to puzzle stories!

The first puzzle I was around all the way for was CHINA KILNS. The trick was really cute. We derped on assigning clues to answers properly, then we derped on solving the clue phrase, then we solved it.

By the time I looked at A Message From Our Sponsors, everything was IDed. Someone else got the aha. My main contribution was verifying the Poinsettia Bowl was indeed a thing.

When Pentoku came out, I loudly announced I wanted to work on a logic puzzle and people were free to join me. Then I left because about seven people started crowding around a blackboard, and I’m secretly not good at logic puzzles. Between the entire team, I’m pretty sure we solved each Pentoku twice, just because of communication issues between different rooms and the remote solvers.

I helped arrange the clues for A Lengthy Journey. This was shortly after we solved CHINA KILNS, so I was thinking we needed to make the answers form a cycle (each answer disambiguates a different clue, forming a cycle among clues.) We never realized that wasn’t what we were supposed to do. We ended Hunt with 9 puzzles unsolved, and this was one of them.

Fashionable Friends looked like fashion and so I skipped it. Someone pinged me about it later, saying it was a My Little Pony puzzle. I showed up and immediately identified the extraction mechanism, because of course cutie marks are going to be important in an MLP puzzle. I mean, duh! (It helps that last year’s MLP puzzle also used cutie mark extraction.)

I worked on Hamiltonian Path with a few other people. I let other people figure out the path constraints, and kept myself busy with identifying songs, because god damnit when else am I going to get to use my knowledge of Hamilton. I was very, very surprised that I could ID about 80% of the songs by memory, and only needed to look up lyrics for a few of them. Who knew 2-3 second snippets could be so distinctive? They were so distinctive that I didn’t even realize each clip had a number in it, which was really important for extraction. Oops. Anyways, after a very long time we got to GIVE US A VERSE, then got back a CD…and I realized I actually needed to make a team request for a laptop that had a CD player. Oh technology, you get obsoleted so quickly.

Mentally Stimulating Diversion was in fact a mentally stimulating diversion. It was a short one though; another person and I only spent about 40 minutes on it.

I did a dramatic reading of The Puzzle At The End of This Book around midnight on Friday. That was fun. We noticed all the encoded information during the dramatic reading, except for the encoding that gave SPECTER. I’m really disappointed we missed that mechanism, because it’s clearly the best one. (We did catch QALUPALIK, but it took us a while to believe that was a word.)

That was the last normal puzzle I looked at. After that I stared at metas for the rest of Hunt.

I don’t have much to say on endgame that wasn’t said at wrap-up. After solving all the metas, we scheduled a solo version of the Hungry Hungry Hippogriffs event (which didn’t look as fun as the full version), and a solo version of the Quizardry event.

The one cute thing that happened was that we asked to hear only the category on the final round of Quizardry, then tried to figure out the correct answer purely from knowing it needed to be transformable into another valid answer in that category. I suppose we unintentionally spoiled the endgame gimmick for ourselves, because endgame worked exactly the same way - identify something in a category whose phrase satisfies a constraint we learned from before.

So long, and thanks for all the puzzles!

• ## Introduction to the Hybrid Argument

I was reading through some proofs from imitation learning, and realized they reminded me of hybrid arguments from cryptography. It’s always nice to notice connections between fields, so I figured it was worth making a quick guide to how hybrid arguments work.

***

Hybrid arguments are a proof method, like proof by induction. Like induction, they aren’t always enough to solve the problem. Also like induction, the details differ on each problem, and filling in those details is the hardest part of each method.

The hybrid argument requires the following.

• We want to compare two objects $A$ and $A'$.
• There is a sequence of objects $A_0, A_1, \cdots, A_n$ such that $A_0 = A$, $A_n = A'$, and the $A_i$ can be seen as an interpolation from $A_0$ to $A_n$. Intuitively, as $i$ increases, $A_i$ slowly drifts from $A$ to $A'$.
• The difference between two adjacent $A_i$ in the interpolation is small.

For concreteness, let’s assume there’s a function $f$ and we’re trying to bound $f(A) - f(A')$. Rewrite this difference as a telescoping series.

$$f(A) - f(A') = \sum_{i=0}^{n-1} \left( f(A_i) - f(A_{i+1}) \right)$$

Every term in the sum cancels, except for the starting $f(A_0)$ and the ending $-f(A_n)$.

(Man, I love telescoping series. There’s something elegant about how it all cancels out. Although in this case, we’re adding more terms instead of removing them.)

This reduces bounding $f(A) - f(A')$ to bounding the sum of terms $f(A_i) - f(A_{i+1})$. Since the difference between adjacent $A_i$ is small, $f(A) - f(A')$ is at most $n$ times that small value. And that’s it! Really, there are only two tricks to the argument.

• Creating a sequence $\{A_i\}$ with small enough differences.
• Applying the telescoping trick to use those differences.
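As a quick sanity check, the telescoping identity is easy to verify numerically. Here's a toy Python sketch with made-up values for $f$ and the interpolation (nothing here is specific to any real problem):

```python
# Toy numeric check of the telescoping trick (illustrative values only).
# Interpolate from A = 0.0 to A' = 1.0 in n steps and verify that
# f(A) - f(A') equals the sum of adjacent differences, and is bounded
# by n times the largest single-step difference.

def f(x):
    return x * x  # any function works; the identity is purely algebraic

n = 10
A_seq = [i / n for i in range(n + 1)]  # A_0 = A, ..., A_n = A'

telescoped = sum(f(A_seq[i]) - f(A_seq[i + 1]) for i in range(n))
direct = f(A_seq[0]) - f(A_seq[-1])

assert abs(telescoped - direct) < 1e-12  # all middle terms cancel

max_step = max(abs(f(A_seq[i]) - f(A_seq[i + 1])) for i in range(n))
assert abs(direct) <= n * max_step  # the bound the argument delivers
```

The asserts mirror the two tricks: the middle terms cancel, and the total difference is at most $n$ times the worst single-step difference.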

It’s very important that there’s both a reasonable interpolation and the distance between interpolated objects is small. Without both these points, the argument has no power.

This is all very fuzzy, so let’s make things more concrete. This problem comes from the DAGGER paper. (Side note: if you’re doing imitation learning, DAGGER is a bit old, and AGGREVATE or Generative Adversarial Imitation Learning may be better.)

We have an environment in which agents can act for $T$ timesteps. Let $\pi_E$ be the expert policy, and $\pi$ be our current policy. Let $J(\pi)$ be the expected cost of policy $\pi$. We want to prove that given the right assumptions, $J(\pi)$ will be close to $J(\pi_E)$ by the end of training.

This is done with hybrids. Define $\pi_i$ as the policy which follows $\pi$ for the first $i$ timesteps, then follows $\pi_E$ for the remaining $T-i$ timesteps. Note $\pi_0 = \pi_E$ and $\pi_T = \pi$. The telescoping trick gives

$$J(\pi) - J(\pi_E) = \sum_{i=0}^{T-1} \left( J(\pi_{i+1}) - J(\pi_i) \right)$$
The only difference between $\pi_i$ and $\pi_{i+1}$ is that in the first, the expert takes over after $i$ steps, and in the second it takes over after $i+1$ steps. The paper then argues that as long as the environment has no key decision where a single wrong move can lead to death, the ability of the expert to correct after $i+1$ steps must be similar to its ability to correct after $i$ steps.

This shows why hybrids are useful. They let us break down reasoning over $n$ steps worth of differences to reasoning about $n$ differences of 1 step each.
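To make the hybrid-policy construction concrete, here's a minimal sketch in Python. The environment, cost function, and policies are all made up for illustration (a trivial deterministic setting where cost equals the chosen action); only the structure of the $\pi_i$ matches the argument above:

```python
# Sketch of the hybrid policies pi_i from the DAGGER-style argument.
# Everything here (environment, cost, policies) is hypothetical.

T = 5

def expert(state, t):
    return 0  # expert always picks the zero-cost action

def learner(state, t):
    return 1  # our policy always picks a slightly worse action

def J(policy):
    """Total cost of rolling out `policy` for T steps (cost = chosen action)."""
    return sum(policy(None, t) for t in range(T))

def hybrid(i):
    """pi_i: follow the learner for the first i steps, the expert afterwards."""
    return lambda state, t: learner(state, t) if t < i else expert(state, t)

# Endpoints of the interpolation: pi_0 is the expert, pi_T is the learner.
assert J(hybrid(0)) == J(expert)
assert J(hybrid(T)) == J(learner)

# Telescoping: the total gap is the sum of T one-step switches.
gap = sum(J(hybrid(i + 1)) - J(hybrid(i)) for i in range(T))
assert gap == J(learner) - J(expert)
```

Each term in `gap` is the cost of letting the learner act for one extra step before the expert takes over, which is exactly the quantity the paper bounds.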

A similar flavor of argument shows up a ton in crypto. Very often, we’re trying to replace a true source of randomness with something that’s pseudorandom, and we need to argue that security is still preserved. For example, we have $n$ PRNGs $G_1,\cdots,G_n$, and $n$ independently sampled seeds $s_i$. Suppose we concatenate the $n$ inputs and $n$ outputs together to get the function

$$G'(s_1, \cdots, s_n) = G_1(s_1) \,\|\, G_2(s_2) \,\|\, \cdots \,\|\, G_n(s_n)$$

We want to show $G'$ is still a PRNG.

Here, the hybrids are functions $H_i$, where $H_i$ uses the first $i$ PRNGs and uses true randomness for the remaining $n-i$ blocks of bits. This makes $H_0$ truly random and $H_n = G'$. If the difference between $H_0$ and $H_n$ is small, $G'$’s output is close to truly random, which would show $G'$ is a PRNG. This leaves arguing that switching from $H_i$ to $H_{i+1}$ (switching the $(i+1)$th block of bits from true random to $G_{i+1}$) doesn’t change things enough to break security.
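The structure of the $H_i$ can be sketched in a few lines of Python. The "PRNGs" here are stand-ins (hashing the seed), and the block size is arbitrary; this shows only the shape of the hybrids, not a real security proof:

```python
# Sketch of the hybrid functions H_i for the PRNG concatenation argument.
# The "PRNGs" are hypothetical stand-ins, purely for illustration.
import hashlib
import secrets

n = 4
BLOCK = 16  # output bytes per generator

def G(i, seed: bytes) -> bytes:
    """Stand-in for PRNG G_i: deterministic BLOCK-byte output from the seed."""
    return hashlib.sha256(bytes([i]) + seed).digest()[:BLOCK]

def G_prime(seeds):
    """G': concatenate the outputs of G_1..G_n on independent seeds."""
    return b"".join(G(i, s) for i, s in enumerate(seeds))

def H(i, seeds):
    """Hybrid H_i: first i blocks pseudorandom, remaining n-i truly random."""
    pseudo = b"".join(G(j, seeds[j]) for j in range(i))
    true_random = b"".join(secrets.token_bytes(BLOCK) for _ in range(i, n))
    return pseudo + true_random

seeds = [bytes([k]) * 8 for k in range(n)]  # fixed toy seeds

assert H(n, seeds) == G_prime(seeds)   # H_n is exactly G'
assert len(H(0, seeds)) == n * BLOCK   # H_0 is n blocks of true randomness
```

Each step from $H_i$ to $H_{i+1}$ swaps exactly one block from true randomness to a single PRNG output, which is the one-step difference the security argument has to bound.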

***

Like with many things, hybrid arguments are something that you have to actually do to really understand. And I don’t have a library of hybrid problems off the top of my head. That being said, I think it’s useful to know what they are and roughly how they work. Proof methods are only as useful as your ability to recognize when they might apply, and it’s hard to recognize something if you don’t know it exists.

Whenever you have two objects and a reasonable interpolation between them, it’s worth thinking about whether you can bound the difference between adjacent terms. And whenever you know how to bound the difference between two similar objects, it’s worth thinking about whether you can build an appropriate sequence that lets you chain those differences into a conclusion about objects further apart.