Blog de Frédéric

Wednesday, March 20 2013

Exercises in Set Theory: Classical Independence Results

Here are new solutions to exercises from Thomas Jech’s book ‘‘Set Theory’’:

Doing the exercises from these chapters gave me the opportunity to come back to the ‘‘classical’’ results about the independence of the Axiom of Choice and (Generalized) Continuum Hypothesis by Kurt Gödel and Paul Cohen. It’s funny to note that it’s easier to prove that AC holds in $L$ (essentially, the definition by ordinal induction provides the well-ordering of the class of constructible sets) than to prove that GCH holds in $L$ (you rely on AC in $L$ and on the technical condensation lemma). Actually, I believe Gödel found his proof for AC one or two years after the one for GCH. On the other hand, it is easy to make GCH fail (just add $\aleph_{2}$ Cohen reals by Forcing) but more difficult to make AC fail (e.g. AC is preserved by Forcing). This can be interpreted as AC being more ‘‘natural’’ than GCH.

After reading the chapters again and analyzing the claims in detail, I’m now convinced of the correctness of the proofs. There are only two points I didn’t verify precisely about the Forcing method (namely that all axioms of predicate calculus and rules of inference are compatible with the Forcing method; and that the Forcing/Generic Model theorems can be transported from the Boolean Algebra case to the general case) but these do not seem too difficult. Here are some notes about claims that were not obvious to me at first reading. As usual, I hope they might be useful to the readers of this blog:

1. In the first page of chapter 13, it is claimed that for any set $M$, $M\in\mathrm{def}(M)$ and $M\subseteq\mathrm{def}(M)\subseteq\operatorname{\mathcal{P}}(M)$. The first statement is always true because $M=\{x\in M:(M,\in)\models x=x\}\in\mathrm{def}(M)$ and ${(x=x)}^{{(M,\in)}}$ is $x=x$ by definition. However, the second statement can only be true if $M$ is transitive (it implies $M\subseteq\operatorname{\mathcal{P}}(M)$, which exactly says that every element of $M$ is a subset of $M$). Indeed, if $M$ is transitive then for all $a\in M$ we have $a\subseteq M$ and since $x\in a$ is $\Delta_{0}$ we get $a=\{x\in M:(M,\in)\models x\in a\}\in\mathrm{def}(M)$. If moreover we consider $x\in X\in\mathrm{def}(M)$ then $x\in X\subseteq M$ so $x\in M\subseteq\mathrm{def}(M)$ and $\mathrm{def}(M)$ is also transitive. Hence the transitivity of the $L_{\alpha}$ can still be shown by ordinal induction.

2. The proof of lemma 13.7 cannot be done exactly by induction on the complexity of $G$, as suggested. For example, to prove (ii) for $G=G_{2}=\cdot\times\cdot$, we would consider $\exists u\in F(...)\times H(...)\varphi(u)\Leftrightarrow\exists a\in F(...),\exists b\in H(...),\varphi((a,b))$ and would like to say that $\varphi((a,b))$ is $\Delta_{0}$. Nevertheless, we cannot deduce that from the induction hypothesis. Hence the right thing to do is to prove the lemma for $G_{1}=\{\cdot,\cdot\}$ first and deduce the lemma for $G=(\cdot,\cdot)$ (and $G^{\prime}=(\cdot,\cdot,\cdot)$). Then we can proceed by induction.

3. In the proof of theorem 13.18, it is mentioned that the assumptions

1. $x<_{\alpha}y$ implies $x<_{\beta}y$

2. $x\in L_{\alpha}$ and $y\in L_{\beta}\setminus L_{\alpha}$ implies $x<_{\beta}y$

imply that if $x\in y\in L_{\alpha}$ then $x<_{\alpha}y$. To show that, we consider the least ordinal $\beta\leq\alpha$ such that $y\in L_{\beta}$. In particular, $\beta$ is not a limit ordinal ($L_{0}=\emptyset$ and if $y\in L_{\beta}$ for some limit $\beta>0$ then there is $\gamma<\beta$ such that $y\in L_{\gamma}$) and so we can write $\beta=\gamma+1$. We have $y\in L_{\beta}=L_{\gamma+1}$ so there is a formula $\varphi$ and elements $a_{1},...,a_{n}\in L_{\gamma}$ such that $x\in y=\{z\in L_{\gamma}:(L_{\gamma},\in)\models\varphi(z,a_{1},...,a_{n})\}$. Hence $x\in L_{\gamma}$. Moreover, by minimality of $\beta$, $y\in L_{\beta}\setminus L_{\gamma}$ so by (ii) we have $x<_{\beta}y$ and by (i) $x<_{\alpha}y$.

4. In lemma 14.18, we have expressions that seem ill-defined, for example $a_{u}(t)$ where $t\notin\operatorname{dom}(a_{u})$. This happens in other places, like lemma 14.17 or definition 14.27. The trick is to understand that the functions are implicitly extended by 0. Indeed, for any $x,y\in V^{B}$, if $x\subseteq y$ and $\forall t\in\operatorname{dom}(y)\setminus\operatorname{dom}(x),y(t)=0$ then

 $\|y\subseteq x\|=\prod_{t\in\operatorname{dom}(y)}\left(-y(t)+{\|t\in x\|}\right)=\prod_{t\in\operatorname{dom}(x)}\left(-x(t)+{\|t\in x\|}\right)=\|x\subseteq x\|=1$

and similarly we get $\|x=y\|=1$. Then we can use the inequality from page 207 ($\|\varphi(x)\|=\|x=y\|\cdot\|\varphi(x)\|\leq\|\varphi(y)\|=\|x=y\|\cdot\|\varphi(y)\|\leq\|\varphi(x)\|$) to replace $x$ by its extension $y$.

5. In lemma 14.23, the inequality

 $\|x\text{ is an ordinal}\|\leq\|x\in\check{\alpha}\|+\|x=\check{\alpha}\|+\|\check{\alpha}\in x\|$

seems obvious but I don’t believe that it can be proved so easily at that point. For example, the proof from chapter 2 requires at least the Separation axiom and the $\Delta_{0}$ formulation from chapter 10 is based on the Axiom of Regularity. To solve that issue, it seems to me that the lemma should be moved after the proof that the axioms of ZFC are valid in $V^{B}$. This reordering is harmless since lemma 14.23 is only used much later, in lemma 14.31.

6. Many details could be added to the proof of theorem 14.24, but let’s just mention Powerset. For any $u\in V^{B}$, some $u^{{\prime}}\in\operatorname{dom}(Y)$ is defined and satisfies $\|u\subseteq X\|\leq\|u=u^{{\prime}}\|$ (this follows from the definitions, using the Boolean inequality $-a+b\leq-a+b\cdot a$ to conclude). Since moreover $\forall t\in\operatorname{dom}(Y),Y(t)=1$ we get

 $\left(\|u\subseteq X\|\Rightarrow\|u\in Y\|\right)\geq-\|u=u^{\prime}\|+\sum_{t\in\operatorname{dom}(Y)}\left(\|u=t\|\cdot Y(t)\right)=\sum_{t\in\operatorname{dom}(Y)}-\left(\|u\neq t\|\cdot\|u=u^{\prime}\|\right)\geq\sum_{t\in\operatorname{dom}(Y)}\|u^{\prime}=t\|=1$
7. In theorem 14.34, we prove that any $\kappa$ regular in $V$ remains regular in $V[G]$ (the hard case is really $\kappa$ uncountable and this assumption is implicitly used later to say that $\bigcup_{\alpha<\lambda}A_{\alpha}$ is bounded). It may not be obvious why this is enough. First recall that for any ordinal $\alpha$, $\operatorname{cf}^{V[G]}(\alpha)\leq\operatorname{cf}^{V}(\alpha)$, ${|\alpha|}^{V[G]}\leq{|\alpha|}^{V}$, and any (regular) cardinal in $V[G]$ is a (regular) cardinal in $V$. Next we have,

 $\exists\alpha\in\mathrm{Ord},\operatorname{cf}^{V[G]}(\alpha)<\operatorname{cf}^{V}(\alpha)\implies\exists\alpha\in\mathrm{Ord},\operatorname{cf}^{V[G]}(\operatorname{cf}^{V}(\alpha))\leq\operatorname{cf}^{V[G]}(\alpha)<\operatorname{cf}^{V}(\alpha)\implies\exists\beta\textrm{ regular cardinal in }V\textrm{, not regular cardinal in }V[G]\implies\exists\beta\in\mathrm{Ord},\operatorname{cf}^{V[G]}(\beta)<\beta=\operatorname{cf}^{V}(\beta)$

that is, $\forall\alpha\in\mathrm{Ord},\operatorname{cf}^{V[G]}(\alpha)=\operatorname{cf}^{V}(\alpha)$ is equivalent to ‘‘$V$ and $V[G]$ have the same regular cardinals’’. Similarly, we can prove that $\forall\alpha\in\mathrm{Ord},{|\alpha|}^{V[G]}={|\alpha|}^{V}$ is equivalent to ‘‘$V$ and $V[G]$ have the same cardinals’’.
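For completeness, here is my own spelling-out of the analogous chain of implications for cardinals (the ‘‘similarly’’ above), in the same style:

 $\exists\alpha\in\mathrm{Ord},{|\alpha|}^{V[G]}<{|\alpha|}^{V}\implies\exists\beta={|\alpha|}^{V}\textrm{ cardinal in }V,{|\beta|}^{V[G]}\leq{|\alpha|}^{V[G]}<\beta\implies\exists\beta\textrm{ cardinal in }V\textrm{, not cardinal in }V[G]\implies\exists\beta\in\mathrm{Ord},{|\beta|}^{V[G]}<\beta={|\beta|}^{V}$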

The proof of theorem 14.34 shows that ‘‘$V$ and $V[G]$ have the same regular cardinals’’ and so to complete the proof, it is enough to show that $\forall\alpha,\operatorname{cf}^{V[G]}(\alpha)=\operatorname{cf}^{V}(\alpha)$ implies $\forall\alpha,{|\alpha|}^{V[G]}={|\alpha|}^{V}$. So suppose $\forall\alpha,\operatorname{cf}^{V[G]}(\alpha)=\operatorname{cf}^{V}(\alpha)$ and assume that there is $\alpha$ such that ${|\alpha|}^{V[G]}<{|\alpha|}^{V}$. Consider the least such $\alpha$. If $\beta={|\alpha|}^{V}$ then $\beta\leq\alpha$ so ${|\beta|}^{V[G]}\leq{|\alpha|}^{V[G]}<{|\alpha|}^{V}=\beta$. By minimality of $\alpha$, $\beta=\alpha$ and so $\alpha$ is a cardinal in $V$. $\alpha$ is actually regular in $V$. Otherwise, suppose $\operatorname{cf}^{V}(\alpha)<\alpha$ and let $\alpha=\bigcup_{\beta<\operatorname{cf}^{V}(\alpha)}X_{\beta}$ such that ${|X_{\beta}|}^{V}<{|\alpha|}^{V}$. By minimality of $\alpha$, we have ${|\operatorname{cf}^{V}(\alpha)|}^{V[G]}={|\operatorname{cf}^{V}(\alpha)|}^{V}$ and ${|X_{\beta}|}^{V[G]}={|X_{\beta}|}^{V}$. Then ${|\alpha|}^{V[G]}={|\operatorname{cf}^{V}(\alpha)|}^{V[G]}\cdot\sup_{\beta<\operatorname{cf}^{V}(\alpha)}{|X_{\beta}|}^{V[G]}={|\operatorname{cf}^{V}(\alpha)|}^{V}\cdot\sup_{\beta<\operatorname{cf}^{V}(\alpha)}{|X_{\beta}|}^{V}={|\alpha|}^{V}$, a contradiction. Finally, we get $\operatorname{cf}^{V}(\alpha)=\alpha={|\alpha|}^{V}>{|\alpha|}^{V[G]}\geq\operatorname{cf}^{V[G]}(\alpha)$. This is again a contradiction and so $\forall\alpha,{|\alpha|}^{V[G]}={|\alpha|}^{V}$.

Saturday, March 2 2013

MathML Acid Tests

There has recently been discussion in the Mozilla community about Opera's switch from Presto to Webkit and the need to preserve browser competition and diversity of rendering engines, especially on mobile devices. Some people outside the community seem a bit skeptical about that argument. Perhaps a striking example to convince them is to consider the case of MathML, where basically only Gecko has a decent native implementation, and the situation at the recent eBooks workshop illustrates that very well: MathML support is very important for some publishers (e.g. for science or education) but the main eBook readers rely exclusively on the Webkit engine and its rudimentary MathML implementation. Unfortunately, because there are currently essentially no alternatives on mobile platforms, developers of eBook readers have no other choice than proposing partial EPUB support or relying on a polyfill...

After Google's announcement that MathML would be removed from Chrome 25, someone quipped on Twitter that an Acid test for MathML should be written, since that seems to motivate them more than community feedback. I do not think that MathML support is considered important from the point of view of browser competition, but I took this idea and started writing MathML versions of the famous Acid2 and Acid3 tests. The current source of these MathML Acid tests is available on GitHub. Of course, I believe that native MathML implementation is very important and I expect at least that these tests could help the MathML community: users and implementers.

Here is the result of the MathML Acid2 test with the stable Gecko release. To pass the test, we only need to implement negative spacing, or at least integrate the patch I submitted when I was still active in Gecko development (bug 717546).

And here is the score of the MathML Acid3 test with the stable Gecko release. The failure of test 18 was not supposed to happen, but I discovered it when I wrote the test. That will be fixed by James Kitchener's refactoring in bug 827713. Obviously, a score of 100/100 will be much more difficult for our volunteer developers to achieve, but the current score is not too bad compared to other rendering engines...

Friday, January 25 2013

Exercises in Set Theory

I'm finally done with the first part "Basic Set Theory" :-) The last two chapters:

Tuesday, January 22 2013

Analysis of Lithium's algorithm

I’ve recently been working on automated testcase reduction tools for the MathJax project and thus I had the chance to study Jesse Ruderman’s Lithium tool, itself inspired by the ddmin algorithm. This paper contains good ideas, for example the fact that the reduction could be improved if we rely on the testcase structure, like XML nodes or grammar tokens, instead of just characters/lines (that’s why I’ve started to write a version of Lithium that works with abstract data structures). However, the authors of the ddmin paper don’t really analyze the complexity of the algorithm precisely, except in the best and worst cases, and there is a large gap between the two. Jesse's analysis is much better and in particular introduces the concepts of monotonic testcase and clustered reduction, where the algorithm performs best and which intuitively seem to be the usual testcases that we meet in practice. However, the monotonic+clustered case complexity is only “guessed” and the bound $O(M\log _{2}(N))$ for a monotonic testcase (of size $N$ with final reduction of size $M$) is not optimal. For example, if the final reduction is relatively small compared to $N$, say $M=\frac{N}{\log _{2}(N)}=o(N)$, then $M\log _{2}(N)=N=\Omega(N)$ and we can’t say that the number of verifications is small compared to $N$. In particular, Jesse cannot deduce from his bound that Lithium’s algorithm is better than an approach based on $M$ binary search executions! In this blog post, I shall give the optimal bound for the monotonic case and formalize that in some sense the clustered reduction is near the best case. I’ll also compare Lithium’s algorithm with the binary search approach and with the ddmin algorithm. I shall explain that Lithium is the best in the monotonic case (and that it actually matches the ddmin algorithm in that case).

Thus suppose that we are given a large testcase exhibiting an unwanted behavior. We want to find a smaller testcase exhibiting the same behavior, and one way is to isolate subtestcases that cannot be reduced any further. A testcase can be quite general, so here are basic definitions to formalize the problem a bit:

• A testcase $S$ is a nonempty finite set of elements (lines, characters, tree nodes, user actions) exhibiting an “interesting” behavior (crash, hang and other bugs…)

• A reduction $T$ of $S$ is a testcase $T\subseteq S$ with the same “interesting” behavior as $S$.

• A testcase $S$ is minimal if $\forall T\subsetneq S,T\text{ is not a reduction of }S$.

Note that by definition, $S$ is a reduction of itself and $\emptyset$ is not a reduction of $S$. Also, the relation “is a reduction of” is transitive, that is, a reduction of a reduction of $S$ is a reduction of $S$.

We assume that verifying whether one subset has the “interesting” behavior is what takes the most time (think e.g. testing a hang or user actions), so we want to minimize the number of subsets verified. Moreover, the original testcase $S$ is large, so a fast reduction algorithm should have a complexity in $o(|S|)$. Of course, we also expect to find a small reduction $T\subseteq S$, that is $|T|=o(|S|)$.

Without information on the structure of a given testcase $S$ or on the properties of the reduction $T$, we must consider the $2^{|S|}-2$ subsets $\emptyset\neq T\subsetneq S$ to find a minimal reduction. And we only know how to do that in $O\left(2^{|S|}\right)$ operations (or $O\left(2^{|S|/2}\right)$ with Grover’s algorithm ;-)). Similarly, even determining whether $T\subseteq S$ is minimal would require testing $2^{|T|}-2$ subsets, which is not necessarily $o(|S|)$ (e.g. $|T|=\log _{2}{(|S|)}=o(|S|)$). Hence we consider the following definitions:

• For any integer $n\geq 1$, $S$ is $n$-minimal if $\forall T\subsetneq S,|S-T|\leq n\implies T\text{ is not a reduction of }S$.

• In particular, $S$ is $1$-minimal if $\forall x\in S,S\setminus\{ x\}\text{ is not a reduction of }S$.

• $S$ is monotonic if $\forall T_{1}\subseteq T_{2}\subseteq S,\, T_{1}\text{ is a reduction of }S\implies T_{2}\text{ is a reduction of }S$.

Finding an $n$-minimal reduction will give a testcase that is no longer interesting if we remove any portion of size at most $n$. Clearly, $S$ is minimal if it is $n$-minimal for all $n$. Moreover, $S$ is always $n$-minimal for any $n\geq|S|$. We still need to test exponentially many subsets to find an $n$-minimal reduction. To decide whether $T\subseteq S$ is $n$-minimal, we need to consider the subsets obtained by removing portions of size $1,2,...,\min(n,|T|-1)$, that is $\sum _{{k=1}}^{{\min(n,|T|-1)}}\binom{|T|}{k}$ subsets. In particular, deciding whether $T$ is $1$-minimal requires $O(|T|)$ tests, and so $o(|S|)$ tests if $|T|=o(|S|)$. If $S$ is monotonic then so is any reduction $T$ of $S$. Moreover, if $T\subsetneq S$ is a reduction of $S$ and $x\in S\setminus T$, then $T\subseteq S\setminus\{ x\}\subsetneq S$ and so $S\setminus\{ x\}$ is a reduction of $S$. Hence when $S$ is monotonic, $S$ is $1$-minimal if and only if it is minimal. We will target $1$-minimal reductions in what follows.
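To illustrate these definitions, here is a small Python sketch. The interesting oracle below (a candidate is interesting iff it contains a fixed target $T$) is a hypothetical example of a monotonic testcase whose unique minimal reduction is $T$; the function names are mine, not taken from any existing tool.

    def is_1_minimal(testcase, interesting):
        # A testcase is 1-minimal if removing any single element destroys the
        # "interesting" behavior; checking this costs O(|testcase|) oracle calls.
        return all(not interesting(testcase[:i] + testcase[i + 1:])
                   for i in range(len(testcase)))

    # Hypothetical monotonic oracle: a candidate is interesting iff it contains
    # every element of a fixed target T, so any superset of a reduction is a
    # reduction and T is the unique minimal reduction.
    T = {3, 7, 8}
    def interesting(candidate):
        return T.issubset(candidate)

    S = list(range(16))
    print(is_1_minimal(S, interesting))          # False: most elements can be removed
    print(is_1_minimal(sorted(T), interesting))  # True: T itself is 1-minimal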

Let’s consider Lithium’s algorithm. We assume that $S$ is ordered and so can be identified with the interval $[1,|S|]$ (think for example line numbers). For simplicity, let’s first assume that the size of the original testcase is a power of two, that is $|S|=N=2^{n}$. Lithium starts with $n-1$ steps $k=1,2,...,n-1$. At step $k$, we consider the chunks among the intervals $[1+j2^{{n-k}},(j+1)2^{{n-k}}]\ (0\leq j<2^{k})$ of size $2^{{n-k}}$. Lithium verifies whether removing each chunk provides a reduction. If so, it permanently removes that chunk and tries another chunk. Because $\emptyset$ is not a reduction of $S$, we immediately increment $k$ if only one chunk remains. The $n$-th step is the same, with chunks of size 1, but we stop only when we are sure that the current testcase $T$ is $1$-minimal, that is when after $|T|$ attempts we have not reduced $T$ any further. If $N$ is not a power of 2 then $2^{{n-1}}<N<2^{n}$ where $n=\lceil\log _{2}(N)\rceil$. In that case, we apply the same algorithm as for a testcase of size $2^{n}$ (i.e. as if there were $2^{n}-N$ dummy elements at the end) except that we don’t need to remove the chunks that lie entirely in that additional part. This saves at most $n$ tests (those that would be obtained by removing the dummy chunks at the end, of sizes $2^{{n-1}},2^{{n-2}},...,1$). Hence in general, if $C_{N}$ is the number of subsets of $S$ tried by Lithium, we have $C_{{2^{n}}}-n\leq C_{N}\leq C_{{2^{n}}}$. Let $M$ be the size of the $1$-minimal testcase found by Lithium and $m=\lceil\log _{2}(M)\rceil$.
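To make the description concrete, here is a minimal Python sketch of this chunk-removal strategy. It follows the description above rather than Lithium’s actual source code: in particular, the re-chunking of the current testcase after a removal and the stopping criterion at chunk size 1 are my own simplifications.

    def lithium_reduce(testcase, interesting):
        # `testcase` is a list of atoms (lines, characters, ...), assumed interesting.
        # `interesting` is the oracle telling whether a candidate still exhibits
        # the unwanted behavior.
        assert interesting(testcase)
        n = max(1, (len(testcase) - 1).bit_length())  # ceil(log2 N) for N >= 2
        chunk_size = 2 ** (n - 1)                     # start with chunks of size about N/2
        while chunk_size >= 1:
            while True:
                removed_any = False
                i = 0
                while i < len(testcase):
                    candidate = testcase[:i] + testcase[i + chunk_size:]
                    # The empty set is never a reduction, so skip it.
                    if candidate and interesting(candidate):
                        testcase = candidate          # chunk permanently removed
                        removed_any = True
                    else:
                        i += chunk_size               # keep this chunk, move on
                # Only the final step (size 1) repeats until a full pass removes
                # nothing, which guarantees 1-minimality.
                if chunk_size > 1 or not removed_any:
                    break
            chunk_size //= 2
        return testcase

For instance, with the monotonic oracle from the previous sketch, lithium_reduce(list(range(16)), interesting) returns [3, 7, 8].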

Lithium will always perform the $n-1$ initial steps above and check at least one subset at each step. At the end, it needs to do $M$ operations to be sure that the testcase is $1$-minimal. So $C_{N}\geq\lceil\log _{2}(N)\rceil+M-1=\Omega(\log _{2}(N)+M)$. Now, consider the case where $S$ is monotonic and has one minimal reduction $T=[1,M]$. Then $T$ is included in the chunk $[1,2^{m}]$ from step $k=n-m$. Because $S$ is monotonic, this means that at step $k=1$ we do two verifications and the second chunk is removed because it does not contain $T$ (and the third one too if $N$ is not a power of two), at step $k=2$ two chunks remain, we do two verifications and the second chunk is removed, etc, until $k=n-m$. For $k>n-m$, the number of chunks can grow again: 2, 4, 8… that is, we handle at most $2^{{1+k-(n-m)}}$ chunks from step $n-m+1$ to $n-1$. At step $k=n$, a first round of at most $2^{m}$ verifications ensures that the testcase is of size $M$ and a second round of $M$ verifications ensures that it is $1$-minimal. So $C_{N}\leq 1+\left(\sum _{{k=1}}^{{n-m}}2\right)+\left(\sum _{{k=n-m+1}}^{{n}}2^{{1+k-(n-m)}}\right)+2^{m}+M=1+2(n-m)+2^{m}-1+2^{m}+M$ and after simplification $C_{N}=O(\log _{2}(N)+M)$. Hence the lower bound $\Omega(\log _{2}(N)+M)$ is optimal. The previous example suggests the following generalization: a testcase $T$ is $C$-clustered if it can be written as the union of $C$ nonempty closed intervals $T=I_{1}\cup I_{2}\cup...\cup I_{C}$. If the minimal testcase found by Lithium is $C$-clustered, each $I_{j}$ is of length at most $M\leq 2^{m}$ and so $I_{j}$ intersects at most 2 chunks of length $2^{m}$ from the step $k=n-m$. So $T$ intersects at most $2C$ chunks from the step $k=n-m$ and a fortiori from all the steps $k\leq n-m$. Suppose that $S$ is monotonic. Then if $c$ is a chunk that does not contain any element of $T$, the current testcase minus $c$ still contains $T$ and so is a reduction of $S$; hence Lithium will remove the chunk $c$. Hence at each step $k\leq n-m$, at most $2C$ chunks survive and so there are at most $4C$ chunks at the next step. A computation similar to what we have done for $T=[1,M]$ shows that $C_{N}=O(C(\log _{2}(N)+M))$ if the final testcase found by Lithium is $C$-clustered. Note that we always have $M=o(N)$ and $\log _{2}(N)=o(N)$. So if $C=O(1)$ then $C_{N}=O(\log _{2}(N)+M)=o(N)$, which is small as wanted. Also, the final testcase is always $M$-clustered (a union of intervals that are singletons!) so we get that the monotonic case is $O(M(\log _{2}(N)+M))$. We shall give a better bound below.

Now, for each step $k=1,2,...,n-1$, Lithium splits the testcase into at most $2^{k}$ chunks and tries to remove each chunk. Then it does at most $N$ steps before stopping or removing one chunk (so the testcase becomes of size at most $N-1$), then it does at most $N-1$ steps before stopping or removing one more chunk (so the testcase becomes of size at most $N-2$), …, then it does at most $M+1$ steps before stopping or removing one more chunk (so the testcase becomes of size at most $M$). Then the testcase is exactly of size $M$ and Lithium does at most $M$ additional verifications. This gives $C_{N}\leq\sum _{{k=1}}^{{n-1}}2^{k}+\sum _{{k=M}}^{{N}}k=2^{n}-2+\frac{N(N+1)-M(M-1)}{2}=O(N^{2})$ verifications. This bound is optimal if $1\leq M\leq 2^{{n-1}}$ (this is asymptotically true since we assume $M=o(N)$): consider the case where the proper reductions of $S$ are exactly the segments $[1,k]\ (2^{{n-1}}+1\leq k\leq 2^{n}-1)$ and $[k,2^{{n-1}}+1]\ (2\leq k\leq 2^{{n-1}}-M+2)$. The testcase will be preserved during the first phase. Then we will keep browsing at least the first half to remove elements at positions $2^{{n-1}}+2\leq k\leq 2^{n}-1$. So $C_{N}\geq\sum _{{k=2^{{n-1}}+2}}^{{2^{n}-1}}2^{{n-1}}=2^{{n-1}}\left(2^{{n-1}}-2\right)=\Omega(N^{2})$.

We now come back to the case where $S$ is monotonic. We will prove that the worst case is $C_{N}=\Theta\left(M\log _{2}(\frac{N}{M})\right)$ and so our assumption $M=o(N)$ gives $M\log _{2}(\frac{N}{M})=\left(-\frac{M}{N}\log _{2}(\frac{M}{N})\right)N=o(N)$ as we expected. During the steps $1\leq k\leq m$, we test at most $2^{k}$ chunks. When $k=m$, there are $2^{{m}}\geq M$ chunks but at most $M$ distinct chunks contain an element from the final reduction. By monotonicity, at most $M$ chunks will survive and there are at most $2M$ chunks at step $m+1$. Again, only $M$ chunks will survive at step $m+2$ and so on until $k=n-1$. At the final step, at most $2M$ elements remain. Again by monotonicity, a first round of $2M$ tests will make $M$ elements survive and we finally need $M$ additional tests to ensure that the testcase is minimal. Hence $C_{N}\leq{\sum _{{k=1}}^{{m}}2^{k}}+{\sum _{{k=m+1}}^{{n}}2M}+M=2^{{m+1}}-2+M(2(n-m)+1)=O(M\log _{2}(\frac{N}{M}))$. This bound is optimal: if $M=2^{m}$, consider the case where $T=\{ j2^{{n-m}}+1:0\leq j<2^{{m}}\}$ is the only minimal testcase (and $S$ monotonic); if $M$ is not a power of two, consider the same $T$ with $2^{{m}}-M$ points removed at odd positions. Then for each step $1\leq k\leq m-1$, no chunks inside $[1,2^{n}]$ are removed. Then some chunks in $[1,2^{n}]$ are removed (none if $M$ is a power of two) at step $m$ and $M$ chunks remain. Then for steps $m+1\leq k\leq n-1$ there are always exactly $2M$ chunks to handle. So $C_{N}\geq\sum _{{m+1\leq k\leq n-1}}2M=2M(n-m-2)=2M(\log _{2}(\frac{N}{M})-2)=\Omega(M\log _{2}(\frac{N}{M}))$.

We note that we have used two different methods to bound the number of verifications: one for the general monotonic case and one for the case where the testcase is $C$-clustered. One naturally wonders what happens when we combine the two techniques. So let $c=\lceil\log _{2}C\rceil\leq m$. From step $1$ to $c$, the best bound we found was $O(2^{k})$; from step $c$ to $m$, it was $O(C)$; from step $m$ to $n-m$ it was $O(C)$ again; from step $n-m$ to $n-c$, it was $O(2^{{1+k-(n-m)}}C)$ and finally from step $n-c$ to $n$, including the final verifications, it was $O(M)$. Taking the sum, we get $C_{N}=O(2^{c}+((n-m)-c)C+2^{{(n-c-(n-m))}}C+(n-(n-c))M)=O(C\left(1+\log _{2}{\left(\frac{N}{MC}\right)}\right)+M(1+\log _{2}{C}))$. Because $C=O(M)$, this becomes $C_{N}=O(C\log _{2}{\left(\frac{N}{MC}\right)}+M(1+\log _{2}{C}))$. If $C=O(1)$, then we get $C_{N}=O(\log _{2}(N)+M)$. At the other extreme, if $C=\Theta(M)$, we recover the bound $O(M\log _{2}{\left(\frac{N}{M}\right)})$ of the general monotonic case. If $C$ is not $O(1)$ but $C=o(M)$ then $1=o(\log _{2}{C})$ and $C\log _{2}(C)=o(M\log _{2}{C})$, so the expression can be simplified to $C_{N}=O(C\log _{2}{\left(\frac{N}{M}\right)}+M\log _{2}{C})$. Hence we have obtained an intermediate result between the worst and best monotonic cases and shown the role played by the number of clusters: the fewer clusters the final testcase has, the faster Lithium finds it. The results are summarized below:

• Best case: $\Theta(\log _{2}(N)+M)$ tests
• $S$ monotonic, $T$ $O(1)$-clustered: $O(\log _{2}(N)+M)$ tests
• $S$ monotonic, $T$ $C$-clustered ($C=o(M)$ and unbounded): $O(C\log _{2}{\left(\frac{N}{M}\right)}+M\log _{2}{C})$ tests
• $S$ monotonic: $O\left(M\log _{2}\left(\frac{N}{M}\right)\right)$ tests, which is $o(N)$
• Worst case: $\Theta(N^{2})$ tests

Figure 0.1: Performance of Lithium’s algorithm for some initial testcase $S$ of size $N$ and final reduction $T$ of size $M=o(N)$. $T$ is $C$-clustered if it is the union of $C$ intervals.

In the ddmin algorithm, at each step we add a preliminary round where we try to immediately reduce to a single chunk (or equivalently to remove the complement of a chunk). Actually, the ddmin algorithm only does this preliminary round at steps where there are more than 2 chunks, for otherwise it would do the same work twice. For each step $k>1$, if one chunk $c_{1}$ is a reduction of $S$ then $c_{1}\subseteq c_{2}$ for some chunk $c_{2}$ at the previous step $k-1$. Now if $S$ is monotonic then, at level $k-1$, removing all but the chunk $c_{2}$ gives a subset that contains $c_{1}$ and so is a reduction of $S$ by monotonicity. Hence only one chunk survives at level $k-1$, there are exactly 2 chunks at level $k$, and so the ddmin algorithm is exactly Lithium’s algorithm when $S$ is monotonic. The ddmin algorithm also keeps in memory the subsets that were not found interesting, in order to avoid testing them again. However, if we only reduce to the complement of a chunk, then we can never repeat the same subset and so this additional work is useless. That’s the case if $S$ is monotonic.
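For comparison, here is a rough Python sketch of one granularity level of a ddmin-style pass, with the preliminary “reduce to subset” round that Lithium omits. This is my simplified reading of the ddmin paper, not its reference implementation.

    def ddmin_level(testcase, interesting, num_chunks):
        # Split into about `num_chunks` chunks; first try each chunk alone
        # ("reduce to subset"), then each complement ("reduce to complement").
        # Returns the possibly reduced testcase and whether anything changed.
        size = -(-len(testcase) // num_chunks)  # ceiling division
        chunks = [testcase[i:i + size] for i in range(0, len(testcase), size)]
        if len(chunks) > 2:  # with only 2 chunks this round would duplicate the next one
            for chunk in chunks:
                if interesting(chunk):
                    return chunk, True           # reduced to a single chunk
        for i in range(len(chunks)):
            complement = [x for j, chunk in enumerate(chunks) if j != i for x in chunk]
            if complement and interesting(complement):
                return complement, True          # one chunk removed
        return testcase, False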

Finally, if $S$ is monotonic Jesse proposes a simpler approach based on a binary search. Suppose first that there is only one minimal testcase $T$. If $k\geq\min T$ then $[1,k]$ intersects $T$ and so $U_{k}=S\setminus[1,k]\neq T$. Then $U_{k}$ is not a reduction of $S$, for otherwise a minimal reduction of $U_{k}$ would be a minimal reduction of $S$ distinct from $T$, which we exclude by hypothesis. If instead $k<\min T$ then $[1,k]$ does not intersect $T$ and $U_{k}=S\setminus[1,k]\supseteq T$ is a reduction of $S$ because $S$ is monotonic. So we can use a binary search to find $\min T$ by testing at most $\log _{2}(N)$ testcases (modulo some constant). Then we try with intervals $[1+\min T,k]$ to find the second least element of $T$ in at most $\log _{2}(N)$ tests. We continue until we find the $M$-th element of $T$. Clearly, this gives $M\log _{2}(N)$ verifications, which matches Jesse’s bound with even a better constant factor. Note that the algorithm still works if we remove the assumption that there is only one minimal testcase $T$. We start with $S_{1}=S$ and find $x_{1}=\max\{\min T:T\text{ is a minimal reduction of }S_{1}\}$: if $k<x_{1}$ then $S\setminus[1,k]$ contains at least one minimal reduction with least element $x_{1}$ and so is a reduction because $S$ is monotonic. If $k\geq x_{1}$ then $S\setminus[1,k]$ is not a reduction of $S$, or a minimal reduction of $S\setminus[1,k]$ would be a minimal reduction of $S$ whose least element is greater than $x_{1}$. So $S_{2}=S_{1}\setminus[1,x_{1}-1]$ is a reduction of $S_{1}$. The algorithm continues to find $x_{2}=\max\{\min(T\setminus\{ x_{1}\}):T\text{ is a minimal reduction of }S_{2},\min(T)=x_{1}\}$ etc. and finally returns a minimal reduction of $S$. However, it is not clear that this approach can work if $S$ is not monotonic, while we can hope that Lithium is still efficient if $S$ is “almost” monotonic. We remark that when there is only one minimal testcase $T=[1,M]$, the binary search approach would require something like $\sum _{{k=0}}^{{M-1}}\log _{2}(N-k)=M\log _{2}(N)+\sum _{{k=0}}^{{M-1}}\log _{2}\left(1-\frac{k}{N}\right)\geq M\log _{2}(N)+M\log _{2}\left(1-\frac{M}{N}\right)=M\log _{2}(N)+o(M)$. So that would be the worst case of the binary search approach, whereas Lithium handles this case very nicely in $\Theta(\log _{2}(N)+M)$! In general, if there is only one minimal testcase $T$ of size $M\leq\frac{N}{2}$ then $\max(T)$ can be anywhere in $[M,N]$ and if $T$ is placed at random, $\max(T)\leq\frac{3}{4}N$ with probability at least $\frac{1}{2}$. So the average complexity of the binary search approach in that case will be at least $\frac{1}{2}\sum _{{k=0}}^{{M-1}}\log _{2}(\frac{1}{4}N)=\frac{1}{2}M\log _{2}(N)+O(M)$, which is still not as good as Lithium’s optimal worst case of $O(M\log _{2}(\frac{N}{M}))$.
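Here is a small Python sketch of this binary-search reduction for a monotonic testcase, following the description above (the function name and the list-based representation are mine). Each element of the final reduction is located with a dichotomy over the remaining suffix, for a total of roughly $M\log _{2}(N)$ oracle calls.

    def binary_search_reduce(testcase, interesting):
        # Assumes the testcase is interesting and *monotonic*: any subset of the
        # testcase containing a reduction is itself a reduction.
        kept = []              # elements already known to belong to the reduction
        rest = list(testcase)  # undecided elements, kept in order
        while rest and not interesting(kept):
            # Invariant: kept + rest is interesting. Find by dichotomy the largest
            # j such that kept + rest[j:] is still interesting (about log2 |rest| tests).
            lo, hi = 0, len(rest)  # interesting at lo, not interesting at hi
            while lo + 1 < hi:
                mid = (lo + hi) // 2
                if interesting(kept + rest[mid:]):
                    lo = mid
                else:
                    hi = mid
            # rest[lo] cannot be dropped as well, so it belongs to the reduction.
            kept.append(rest[lo])
            rest = rest[lo + 1:]
        return kept

For instance, with the monotonic oracle used earlier, binary_search_reduce(list(range(16)), interesting) also returns [3, 7, 8].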

Friday, January 11 2013

MathML in Chrome, a couple of demos and some perspectives...

For those who missed the news, Google Chrome 24 has recently been released with native MathML support. I'd like to thank Dave Barton again for his efforts during the past year, which have made this happen. Obviously, some people may joke about how long it took Google to make this happen (the Mozilla MathML project started in 1999) or criticize the bad rendering quality. However the MathML folks, aware of the history of the language in browsers, will tend to be more tolerant and appreciate this important step towards MathML adoption. After all, this now means that among the most popular browsers, Firefox, Safari and Chrome have MathML support and Opera has a basic CSS-based implementation. This also means that about three people out of four will be able to read pages with MathML without needing any third-party rendering engine.

After some testing, I think the Webkit MathML support is now good enough to be used on my Website. There are a few annoyances with stretchy characters or positioning, but in general the formulas are readable. Hence in order to encourage the use of MathML and let people report bugs upstream and hopefully help to fix them, I decided to rely on the native MathML support for Webkit-based browsers. I'll still keep MathJax for Internet Explorer (when MathPlayer is not installed) and Opera.

I had the chance to meet Dave Barton when I was in Silicon Valley last October for the GSoC mentor summit. We could exchange our views on the MathML implementations in browsers and discuss the perspectives for the future of MathML. The history of MathML in Webkit is actually quite similar to Gecko's: one volunteer, Alex Milowski, decided to write the initial implementation. This idea attracted more volunteers who joined the effort and helped to add new features and to drive the project. Dave told me that the initial Webkit implementation did not pass Google's security review and that's why MathML was not enabled in Chrome. It was actually quite surprising that Apple decided to enable it in Safari and in particular on all Apple's mobile products. Dave's main achievement has been to fix all these security bugs so that MathML could finally appear in Chrome.

One of the ideas I share with Dave is how important it is to have native MathML support in browsers, rather than delegating the rendering to Javascript libraries like MathJax or browser plug-ins like MathPlayer. It's always a bit sad to see that third-party tools are necessary to improve the native browser support of a language that is sometimes considered a core XML language for the Web together with XHTML and SVG. Not only is native support faster, but it also integrates better in the browser environment: zooming text, using links, applying CSS style, mixing with SVG diagrams, doing dynamic updates with e.g. Javascript, etc. All the features Web users are familiar with are immediately available. In order to illustrate this concretely, here are a couple of demos. Some of them are inspired by Mozilla's MathML demo pages, recently moved to MDN. By the way, the famous MathML torture page is now here. Also, try this test page to quickly determine whether you need to install additional fonts.

MathML with CSS text-shadow & transform properties, href & dir attributes as well as Javascript events

 $\det\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}=45+84+96-(105+48+72)=0$ (the same determinant is also shown right-to-left with Eastern Arabic numerals, using the dir attribute)

HTML and animated SVG inside MathML tokens

MathML inside animated SVG (via the <foreignObject> element):

Note that although Dave was focused on improving MathML, the language naturally integrates with the rest of Webkit's technologies and almost all the demos above work as expected, without any additional effort. Actually, Gecko's MathML support relies less on the CSS layout engine than Webkit's does and this has been a recurrent source of bugs. For example in the first demo, the text-shadow property is not applied to some operators (bug 827039) while it is in Webkit.

In my opinion, one of the problems with MathML is that the browser vendors have never really shown much interest in this language, and the standardization and implementation efforts were mainly led and funded by organizations from the publishing industry or by volunteer contributors. As the MathML WG members keep repeating, they would love to get more feedback from the browser developers. This is quite a problem for a language whose main goals include the publication of mathematics on the Web. This leads for example to MathML features (some of them now deprecated) duplicating CSS properties, or to the <mstyle> element, which has most of its attributes unused and does things similar to CSS inheritance in an incompatible way. As a consequence, it was difficult to implement all MathML features properly in Gecko and this is the source of many bugs like the one I mentioned in the previous paragraph.

Hopefully, the new MathML support in Chrome will bring more interest to MathML from contributors or Web companies. Dave told me that Google could hire a full-time engineer to work on MathML. Apparently, this is mostly because of demands from companies working on Webkit-based mobile devices or involved in EPUB. Although I don't have the same impression from Mozilla Corporation at the moment, I'm confident that with the upcoming FirefoxOS release, things might change a bit.

Finally, I also expect that we, at MathJax, will continue to accompany the MathML implementations in browsers. One of the ideas I proposed to the team was to let MathJax select the output mode according to the MathML features supported by the browser. Hence the native MathML support could be used if the page contains only basic mathematics, while MathJax's rendering engine would be used when more advanced mathematical constructions are involved. Another goal will be to make MathJax the default rendering in Wikipedia, which will be much better than the current raster image approach and will allow users to switch to their browser's MathML support if they wish...
