Excessively large pages fail to return properly
Summary: This is purely a cluster-serialization problem, and I should have thought of it before -- if the wikitext for a page is more than 128k, it doesn't get properly serialized and sent.
A good example is the Issues by Modification Time page here, which runs around 150k. Unless the connection happens to be made on the same node the Space is living on, the request will fail.
This is going to be a PITA to fix. I could hack around it by raising the maximum message size, but that's just kicking the can down the road. The correct solution is to chunk the page content when necessary, rather than trying to return it all in one block.
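To make the chunking idea concrete, here is a minimal sketch of what split-and-reassemble could look like. This is plain Scala with made-up names: PageChunk, MaxChunkBytes, and the 96k budget are all illustrative assumptions, not anything in the actual codebase, and a real version would also have to cope with missing or out-of-order chunks.

```scala
// Hypothetical message type -- name and fields are illustrative, not the project's actual API.
final case class PageChunk(pageId: String, index: Int, total: Int, data: Array[Byte])

object PageChunking {
  // Stay comfortably under the 128k message limit described above,
  // leaving headroom for message-envelope overhead. The 96k figure is a guess.
  val MaxChunkBytes: Int = 96 * 1024

  /** Split serialized page content into ordered chunks small enough to send. */
  def split(pageId: String, content: Array[Byte]): Seq[PageChunk] = {
    val pieces = content.grouped(MaxChunkBytes).toVector
    pieces.zipWithIndex.map { case (bytes, i) =>
      PageChunk(pageId, i, pieces.size, bytes)
    }
  }

  /** Reassemble chunks on the receiving side; assumes every chunk arrived. */
  def reassemble(chunks: Seq[PageChunk]): Array[Byte] = {
    require(chunks.nonEmpty && chunks.size == chunks.head.total,
      "missing chunks for page " + chunks.headOption.map(_.pageId).getOrElse("?"))
    chunks.sortBy(_.index).flatMap(_.data).toArray
  }
}

object Demo extends App {
  val wikitext = ("lorem ipsum " * 20000).getBytes("UTF-8") // ~240k, well over the limit
  val chunks   = PageChunking.split("IssuesByModificationTime", wikitext)
  val restored = PageChunking.reassemble(chunks)
  assert(restored.sameElements(wikitext))
  println(s"sent as ${chunks.size} chunks, reassembled ${restored.length} bytes")
}
```

The point of keeping the chunk size well below the limit is that the serialized envelope around each chunk also counts toward the maximum, so the content budget has to leave slack for it.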