Blog de Frédéric


Tag - mathjax


Tuesday, January 22 2013

Analysis of Lithium's algorithm

I’ve recently been working on automated testcase reduction tools for the MathJax project, which gave me the chance to study Jesse Ruderman’s Lithium tool, itself inspired by the ddmin algorithm. That paper contains good ideas, for example that the reduction could be improved by relying on the testcase structure, such as XML nodes or grammar tokens, instead of just characters/lines (that’s why I’ve started to write a version of Lithium that works with abstract data structures). However, the authors of the ddmin paper do not analyse the complexity of the algorithm precisely, apart from the best and worst cases, and there is a large gap between the two. Jesse’s analysis is much better and in particular introduces the concepts of monotonic testcase and clustered reduction, for which the algorithm performs best and which intuitively correspond to the testcases we usually meet in practice. However, the complexity of the monotonic+clustered case is only “guessed”, and the bound $O(M\log_2(N))$ for a monotonic testcase (of size $N$ with a final reduction of size $M$) is not optimal. For example, if the final reduction is relatively small compared to $N$, say $M=\frac{N}{\log_2(N)}=o(N)$, then $M\log_2(N)=N=\Omega(N)$ and we cannot say that the number of verifications is small compared to $N$. In particular, Jesse cannot deduce from his bound that Lithium’s algorithm is better than an approach based on $M$ binary search executions! In this blog post, I shall give the optimal bound for the monotonic case and formalize the sense in which a clustered reduction is close to the best case. I’ll also compare Lithium’s algorithm with the binary search approach and with the ddmin algorithm. I shall explain that Lithium is the best in the monotonic case (and actually matches ddmin in that case).

Thus suppose that we are given a large testcase exhibiting an unwanted behavior. We want to find a smaller testcase exhibiting the same behavior, and one way is to isolate subtestcases that cannot be reduced any further. A testcase can be quite general, so here are basic definitions to formalize the problem a bit:

  • A testcase $S$ is a nonempty finite set of elements (lines, characters, tree nodes, user actions) exhibiting an “interesting” behavior (crash, hang and other bugs…)

  • A reduction $T$ of $S$ is a testcase $T\subseteq S$ with the same “interesting” behavior as $S$.

  • A testcase $S$ is minimal if $\forall T\subsetneq S$, $T$ is not a reduction of $S$.

Note that by definition, $S$ is a reduction of itself and $\emptyset$ is not a reduction of $S$. Also, the relation “is a reduction of” is transitive, that is, a reduction of a reduction of $S$ is a reduction of $S$.

We assume that verifying whether one subset has the “interesting” behavior is what takes the most time (think e.g. testing a hang or a sequence of user actions), so we want to minimize the number of testcases verified. Moreover, the original testcase $S$ is large, so a fast reduction algorithm should have complexity $o(|S|)$. Of course, we also expect to find a small reduction $T\subseteq S$, that is $|T|=o(|S|)$.

Without information on the structure of a given testcase $S$ or on the properties of the reduction $T$, we must consider the $2^{|S|}-2$ subsets $\emptyset\neq T\neq S$ to find a minimal reduction. And we only know how to do that in $O\left(2^{|S|}\right)$ operations (or $O\left(2^{|S|/2}\right)$ with Grover’s algorithm ;-)). Similarly, even determining whether $T\subseteq S$ is minimal would require testing $2^{|T|}-2$ subsets, which is not necessarily $o(|S|)$ (e.g. if $|T|=\log_2(|S|)=o(|S|)$ then $2^{|T|}=|S|$). Hence we consider the following definitions:

  • For any integer $n\geq 1$, $S$ is $n$-minimal if $\forall T\subsetneq S$, $|S-T|\leq n\implies T$ is not a reduction of $S$.

  • In particular, $S$ is $1$-minimal if $\forall x\in S$, $S\setminus\{x\}$ is not a reduction of $S$.

  • $S$ is monotonic if $\forall T_1\subseteq T_2\subseteq S$, $T_1$ is a reduction of $S\implies T_2$ is a reduction of $S$.

Finding an $n$-minimal reduction gives a testcase that is no longer interesting if we remove any portion of size at most $n$. Clearly, $S$ is minimal if and only if it is $n$-minimal for all $n$; in fact, being $n$-minimal for any single $n\geq|S|$ is already equivalent to being minimal, since every proper subset $T\subsetneq S$ satisfies $|S-T|\leq|S|$. We still need to test exponentially many subsets to find an $n$-minimal reduction: to decide whether $T\subseteq S$ is $n$-minimal, we need to consider the subsets obtained by removing portions of size $1,2,\ldots,\min(n,|T|-1)$, that is $\sum_{k=1}^{\min(n,|T|-1)}\binom{|T|}{k}$ subsets. In particular, deciding whether $T$ is $1$-minimal takes $O(|T|)$ verifications, and so $o(|S|)$ if $|T|=o(|S|)$. If $S$ is monotonic then so is any reduction $T$ of $S$. Moreover, if $T\subsetneq S$ is a reduction of $S$ and $x\in S\setminus T$, then $T\subseteq S\setminus\{x\}\subsetneq S$ and so $S\setminus\{x\}$ is a reduction of $S$ by monotonicity. Hence when $S$ is monotonic, $S$ is $1$-minimal if and only if it is minimal. We will target $1$-minimal reductions in what follows.
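
To make these notions concrete, here is a minimal Python sketch (not Lithium's actual code); the predicate interesting, which would run the browser, replay the user actions, etc., is a hypothetical placeholder and is by far the dominant cost:

def is_1_minimal(testcase, interesting):
    """Check 1-minimality with exactly len(testcase) calls to `interesting`."""
    for i in range(len(testcase)):
        candidate = list(testcase[:i]) + list(testcase[i + 1:])
        # If removing a single element still yields an "interesting" testcase,
        # then the testcase is not 1-minimal.
        if candidate and interesting(candidate):
            return False
    return True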

Let’s consider Lithium’s algorithm. We assume that $S$ is ordered and so can be identified with the interval $[1,|S|]$ (think for example of line numbers). For simplicity, let’s first assume that the size of the original testcase is a power of two, that is $|S|=N=2^n$. Lithium starts with $n-1$ steps $k=1,2,\ldots,n-1$. At step $k$, we consider the chunks given by the intervals $[1+j2^{n-k},(j+1)2^{n-k}]$ $(0\leq j<2^k)$ of size $2^{n-k}$. Lithium verifies whether removing each chunk provides a reduction; if so, it permanently removes that chunk and tries another chunk. Because $\emptyset$ is not a reduction of $S$, we immediately increment $k$ if only one chunk remains. The $n$-th step is the same, with chunks of size 1, but we stop only when we are sure that the current testcase $T$ is $1$-minimal, that is when, after $|T|$ attempts, we have not reduced $T$ any further. If $N$ is not a power of 2 then $2^{n-1}<N<2^n$ where $n=\lceil\log_2(N)\rceil$. In that case, we apply the same algorithm as for $2^n$ (i.e. as if there were $2^n-N$ dummy elements at the end), except that we do not need to try removing the chunks that lie entirely in that additional part. This saves testing at most $n$ subsets (those that would be obtained by removing the dummy chunks of sizes $2^{n-1},2^{n-2},\ldots,1$ at the end). Hence, in general, if $C_N$ is the number of subsets of $S$ tried by Lithium, we have $C_{2^n}-n\leq C_N\leq C_{2^n}$. Let $M$ be the size of the $1$-minimal testcase found by Lithium and $m=\lceil\log_2(M)\rceil$.
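
As a rough illustration, here is a simplified Python sketch of this chunk-removal strategy. It is not Jesse Ruderman's actual implementation, just my reading of the description above, and interesting is again a hypothetical placeholder for the expensive verification:

def lithium_reduce(testcase, interesting):
    """Halve the chunk size at each step; at chunk size 1, sweep until 1-minimal."""
    current = list(testcase)
    n = max(1, (len(current) - 1).bit_length())   # smallest n such that 2**n >= N
    chunk_size = 2 ** (n - 1)                     # the first step uses chunks of size 2**(n-1)
    while chunk_size >= 1:
        while True:
            removed_any = False
            start = 0
            while start < len(current):
                candidate = current[:start] + current[start + chunk_size:]
                # The empty testcase is never a reduction, so the last chunk is never removed.
                if candidate and interesting(candidate):
                    current = candidate            # permanently remove this chunk
                    removed_any = True
                else:
                    start += chunk_size            # keep this chunk and try the next one
            # Steps with chunk_size > 1 do a single sweep; the final step (size 1)
            # repeats until a full sweep removes nothing, i.e. until 1-minimality.
            if chunk_size > 1 or not removed_any:
                break
        chunk_size //= 2
    return current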

Lithium will always perform the $n-1$ initial steps above and check at least one subset at each step. At the end, it needs to do $M$ operations to be sure that the testcase is $1$-minimal. So $C_N\geq\lceil\log_2(N)\rceil+M-1=\Omega(\log_2(N)+M)$. Now, consider the case where $S$ is monotonic and has one minimal reduction $T=[1,M]$. Then $T$ is included in the chunk $[1,2^m]$ from step $k=n-m$. Because $S$ is monotonic, this means that at step $k=1$ we do two verifications and the second chunk is removed because it does not contain $T$ (and the third one too if $N$ is not a power of two); at step $k=2$ two chunks remain, we do two verifications and the second chunk is removed, and so on until $k=n-m$. For $k>n-m$, the number of chunks can grow again: 2, 4, 8…, that is, we handle at most $2^{1+k-(n-m)}$ chunks from step $n-m+1$ to $n-1$. At step $k=n$, a first round of at most $2^m$ verifications ensures that the testcase is of size $M$, and a second round of $M$ verifications ensures that it is $1$-minimal. So $C_N\leq 1+\left(\sum_{k=1}^{n-m}2\right)+\left(\sum_{k=n-m+1}^{n}2^{1+k-(n-m)}\right)+2^m+M=1+2(n-m)+2^m-1+2^m+M$ and, after simplification, $C_N=O(\log_2(N)+M)$. Hence the lower bound $\Omega(\log_2(N)+M)$ is optimal.

The previous example suggests the following generalization: a testcase $T$ is $C$-clustered if it can be written as the union of $C$ nonempty closed intervals $T=I_1\cup I_2\cup\ldots\cup I_C$. If the minimal testcase found by Lithium is $C$-clustered, each $I_j$ is of length at most $M\leq 2^m$ and so $I_j$ intersects at most 2 chunks of length $2^m$ from step $k=n-m$. So $T$ intersects at most $2C$ chunks from step $k=n-m$, and a fortiori from all the steps $k\leq n-m$. Suppose that $S$ is monotonic. Then, if $c$ is a chunk that does not contain any element of $T$, the current testcase with $c$ removed still contains $T$ and so is a reduction of $S$ by monotonicity; hence Lithium will remove the chunk $c$. So at each step $k\leq n-m$, at most $2C$ chunks survive and there are at most $4C$ chunks at the next step. A computation similar to the one done for $T=[1,M]$ shows that $C_N=O(C(\log_2(N)+M))$ if the final testcase found by Lithium is $C$-clustered. Note that we always have $M=o(N)$ and $\log_2(N)=o(N)$. So if $C=O(1)$ then $C_N=O(\log_2(N)+M)=o(N)$ is small, as wanted. Also, the final testcase is always $M$-clustered (a union of intervals that are singletons!), so we get that the monotonic case is $O(M(\log_2(N)+M))$. We shall give a better bound below.

Now, for each step $k=1,2,\ldots,n-1$, Lithium splits the testcase into at most $2^k$ chunks and tries to remove each chunk. In the worst case, it does at most $N$ verifications before stopping or removing one chunk (so the testcase becomes of size at most $N-1$), then at most $N-1$ verifications before stopping or removing one more chunk (so the testcase becomes of size at most $N-2$), …, then at most $M+1$ verifications before stopping or removing one more chunk (so the testcase becomes of size at most $M$). Then the testcase is exactly of size $M$ and Lithium does at most $M$ additional verifications. This gives $C_N\leq\sum_{k=1}^{n-1}2^k+\sum_{k=M}^{N}k=2^n-2+\frac{N(N+1)-M(M-1)}{2}=O(N^2)$ verifications. This bound is optimal if $1\leq M\leq 2^{n-1}$ (which is asymptotically true since we assume $M=o(N)$): consider the case where the proper reductions of $S$ are exactly the segments $[1,k]$ $(2^{n-1}+1\leq k\leq 2^n-1)$ and $[k,2^{n-1}+1]$ $(2\leq k\leq 2^{n-1}-M+2)$. The testcase will be preserved during the first phase. Then we will keep browsing at least the first half to remove the elements at positions $2^{n-1}+2\leq k\leq 2^n-1$. So $C_N\geq\sum_{k=2^{n-1}+2}^{2^n-1}2^{n-1}=2^{n-1}\left(2^{n-1}-2\right)=\Omega(N^2)$.

We now come back to the case where $S$ is monotonic. We will prove that the worst case is $C_N=\Theta\left(M\log_2(\frac{N}{M})\right)$, and so our assumption $M=o(N)$ gives $M\log_2(\frac{N}{M})=\left(-\frac{M}{N}\log_2(\frac{M}{N})\right)N=o(N)$, as expected. During the steps $1\leq k\leq m$, we test at most $2^k$ chunks. When $k=m$, there are $2^m\geq M$ chunks but at most $M$ distinct chunks contain an element of the final reduction. By monotonicity, at most $M$ chunks survive and there are at most $2M$ chunks at step $m+1$. Again, at most $M$ chunks survive that step, so there are at most $2M$ chunks at step $m+2$, and so on until $k=n-1$. At the final step, at most $2M$ elements remain. Again by monotonicity, a first round of $2M$ tests leaves $M$ surviving elements, and we finally need $M$ additional tests to ensure that the testcase is minimal. Hence $C_N\leq\sum_{k=1}^{m}2^k+\sum_{k=m+1}^{n}2M+M=2^{m+1}-3+M(2(n-m)+1)=O(M\log_2(\frac{N}{M}))$. This bound is optimal: if $M=2^m$, consider the case where $T=\{j2^{n-m}+1:0\leq j<2^m\}$ is the only minimal testcase (and $S$ is monotonic); if $M$ is not a power of two, consider the same $T$ with $2^m-M$ points removed at odd positions. Then for each step $1\leq k\leq m-1$, no chunks inside $[1,2^n]$ are removed. At step $m$, some chunks in $[1,2^n]$ are removed (none if $M$ is a power of two) and $M$ chunks remain. Then for steps $m+1\leq k\leq n-1$ there are always exactly $2M$ chunks to handle. So $C_N\geq\sum_{m+1\leq k\leq n-1}2M=2M(n-m-2)=2M(\log_2(\frac{N}{M})-2)=\Omega(M\log_2(\frac{N}{M}))$.

We note that we have used two different methods to bound the number of verifications: one for the general monotonic case and one for the case where the final testcase is $C$-clustered. One naturally wonders what happens when we combine the two techniques. So let $c=\lceil\log_2 C\rceil\leq m$. From step $1$ to $c$, the best bound we found was $O(2^k)$; from step $c$ to $m$, it was $O(C)$; from step $m$ to $n-m$ it was $O(C)$ again; from step $n-m$ to $n-c$, it was $O(2^{1+k-(n-m)}C)$; and finally from step $n-c$ to $n$, including the final verifications, it was $O(M)$. Taking the sum, we get $C_N=O(2^c+((n-m)-c)C+2^{(n-c-(n-m))}C+(n-(n-c))M)=O(C\left(1+\log_2\left(\frac{N}{MC}\right)\right)+M(1+\log_2 C))$. Because $C=O(M)$, this becomes $C_N=O(C\log_2\left(\frac{N}{MC}\right)+M(1+\log_2 C))$. If $C=O(1)$, we get $C_N=O(\log_2(N)+M)$. At the opposite extreme, if $C=\Omega(M)$, the bound is $\Omega(M\log_2\left(\frac{N}{M}\right))$, that is, no better than the general monotonic bound. If $C$ is not $O(1)$ but $C=o(M)$, then $1=o(\log_2 C)$ and $C\log_2(C)=o(M\log_2 C)$, so the expression simplifies to $C_N=O(C\log_2\left(\frac{N}{M}\right)+M\log_2 C)$. Hence we have obtained an intermediate result between the worst and best monotonic cases, and shown the role played by the number of clusters: the fewer clusters the final testcase has, the faster Lithium finds it. The results are summarized in the following table:
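
For readability, the same sum can be displayed with each term labelled by the range of steps it accounts for (this just restates the bound above, recalling that $2^c$ is about $C$ and $2^m$ about $M$):

$$C_N = O\Bigl(\underbrace{2^{c}}_{\text{steps }1\ldots c} + \underbrace{((n-m)-c)\,C}_{\text{steps }c\ldots n-m} + \underbrace{2^{m-c}\,C}_{\text{steps }n-m\ldots n-c} + \underbrace{c\,M}_{\text{steps }n-c\ldots n}\Bigr) = O\Bigl(C\bigl(1+\log_2\tfrac{N}{MC}\bigr) + M\bigl(1+\log_2 C\bigr)\Bigr)$$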

Number of tests:

  • Best case: $\Theta(\log_2(N)+M)$
  • $S$ is monotonic; $T$ is $O(1)$-clustered: $O(\log_2(N)+M)$
  • $S$ is monotonic; $T$ is $C$-clustered ($C$ unbounded but $C=o(M)$): $O(C\log_2\left(\frac{N}{M}\right)+M\log_2 C)$
  • $S$ is monotonic: $O\left(M\log_2\left(\frac{N}{M}\right)\right)$, which is $o(N)$
  • Worst case: $\Theta(N^2)$

Figure 0.1: Performance of Lithium’s algorithm for an initial testcase $S$ of size $N$ and a final reduction $T$ of size $M=o(N)$. $T$ is $C$-clustered if it is the union of $C$ intervals.

In the ddmin algorithm, at each step we add a preliminary round where we try to immediately reduce to a single chunk (or, equivalently, to remove the complement of a chunk). Actually, the ddmin algorithm only does this preliminary round at steps where there are more than 2 chunks, for otherwise it would do the same work twice. For each step $k>1$, if one chunk $c_1$ is a reduction of $S$ then $c_1\subseteq c_2$ for some chunk $c_2$ at the previous step $k-1$. Now if $S$ is monotonic then, at level $k-1$, removing all but the chunk $c_2$ gives a subset that contains $c_1$ and so is a reduction of $S$ by monotonicity. Hence only one chunk survives at level $k-1$, there are exactly 2 chunks at level $k$, and so the ddmin algorithm is exactly Lithium’s algorithm when $S$ is monotonic. The ddmin algorithm also keeps in memory the subsets that were not found interesting, in order to avoid repeating them. However, if we only ever reduce to the complement of a chunk, we can never repeat the same subset, so this additional bookkeeping is useless; that is the case when $S$ is monotonic.
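
For comparison, here is a hedged sketch of what that preliminary round could look like, in the same style as the sketch above (interesting is still a hypothetical placeholder; this is my reading of the description, not the ddmin reference implementation):

def ddmin_preliminary_round(current, chunk_size, interesting):
    """ddmin's extra round: try to reduce the testcase to one single chunk."""
    n_chunks = -(-len(current) // chunk_size)      # ceiling division
    if n_chunks <= 2:
        # With only 2 chunks, keeping one chunk is the same as removing the
        # other one, which the chunk-removal round already tries.
        return current
    for start in range(0, len(current), chunk_size):
        candidate = current[start:start + chunk_size]
        if interesting(candidate):
            return candidate                       # keep only this chunk
    return current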

Finally, if $S$ is monotonic, Jesse proposes a simpler approach based on a binary search. Suppose first that there is only one minimal testcase $T$. If $k\geq\min T$ then $[1,k]$ intersects $T$ and so $U_k=S\setminus[1,k]$ does not contain $T$. Then $U_k$ is not a reduction of $S$, for otherwise a minimal reduction of $U_k$ would be a minimal reduction of $S$ distinct from $T$, which we excluded by hypothesis. If instead $k<\min T$ then $[1,k]$ does not intersect $T$, and $U_k=S\setminus[1,k]\supseteq T$ is a reduction of $S$ because $S$ is monotonic. So we can use a binary search to find $\min T$ by testing at most $\log_2(N)$ testcases (up to some constant). Then we search with intervals $[1+\min T,k]$ to find the second least element of $T$, again in at most $\log_2(N)$ tests, and we continue until we find the $M$-th element of $T$. Clearly, this gives $M\log_2(N)$ verifications, which matches Jesse’s bound with an even better constant factor.

Note that the algorithm still works if we remove the assumption that there is only one minimal testcase $T$. We start with $S=S_1$ and find $x_1=\max\{\min T:T\text{ is a minimal reduction of }S_1\}$: if $k<x_1$ then $S\setminus[1,k]$ contains at least one minimal reduction with least element $x_1$ and so is a reduction because $S$ is monotonic; if $k\geq x_1$ then $S\setminus[1,k]$ is not a reduction of $S$, for otherwise a minimal reduction of $S\setminus[1,k]$ would be a minimal reduction of $S$ whose least element is greater than $x_1$. So $S_2=S_1\setminus[1,x_1-1]$ is a reduction of $S_1$. The algorithm continues by finding $x_2=\max\{\min(T\setminus\{x_1\}):T\text{ is a minimal reduction of }S_2,\min(T)=x_1\}$, etc., and finally returns a minimal reduction of $S$. However, it is not clear that this approach can work if $S$ is not monotonic, while we can hope that Lithium remains efficient if $S$ is “almost” monotonic.

We remark that when there is only one minimal testcase $T=[1,M]$, the binary search approach would require something like $\sum_{k=0}^{M-1}\log_2(N-k)=M\log_2(N)+\sum_{k=0}^{M-1}\log_2\left(1-\frac{k}{N}\right)\geq M\log_2(N)+M\log_2\left(1-\frac{M}{N}\right)=M\log_2(N)+o(M)$ verifications. So that would be the worst case of the binary search approach, whereas Lithium handles this case very nicely in $\Theta(\log_2(N)+M)$! In general, if there is only one minimal testcase $T$ of size $M\leq\frac{N}{2}$, then $\max(T)$ can be anywhere in $[M,N]$ and, if $T$ is placed at random, $\max(T)\leq\frac{3}{4}N$ with probability at least $\frac{1}{2}$. So the average complexity of the binary search approach in that case is at least $\frac{1}{2}\sum_{k=0}^{M-1}\log_2(\frac{N}{4})=\frac{1}{2}M\log_2(N)-M$, which is still not as good as Lithium’s optimal worst case of $O(M\log_2(\frac{N}{M}))$.
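
To make the comparison concrete, here is a minimal Python sketch of this binary search approach under the monotonicity assumption; as before, interesting is a hypothetical placeholder and the code illustrates the idea rather than Jesse’s exact proposal:

def binary_search_reduce(testcase, interesting):
    """Build a reduction element by element; assumes the testcase is monotonic.

    Each kept element is located with a binary search over the remaining
    candidates, i.e. O(log2 N) verifications per element, O(M log2 N) in total.
    """
    kept = []                 # elements known to belong to the reduction
    rest = list(testcase)     # candidates that may still be discarded
    while rest and not (kept and interesting(kept)):
        # Invariant: kept + rest[lo:] is interesting, kept + rest[hi:] is not.
        lo, hi = 0, len(rest)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if interesting(kept + rest[mid:]):
                lo = mid
            else:
                hi = mid
        kept.append(rest[lo])  # rest[lo] cannot be discarded: keep it
        rest = rest[lo + 1:]
    return kept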

Friday, January 11 2013

MathML in Chrome, a couple of demos and some perspectives...

For those who missed the news, Google Chrome 24 has recently been released with native MathML support. I'd like to thank Dave Barton again for his efforts during the past year, which made this happen. Obviously, some people may be ironic about how long it took Google to get there (the Mozilla MathML project started in 1999) or criticize the poor rendering quality. However, the MathML folks, aware of the history of the language in browsers, will tend to be more tolerant and appreciate this important step towards MathML adoption. After all, this now means that among the most popular browsers, Firefox, Safari and Chrome have MathML support and Opera has a basic CSS-based implementation. This also means that about three people out of four will be able to read pages with MathML without the need for any third-party rendering engine.

After some testing, I think the Webkit MathML support is now good enough to be used on my Website. There are a few annoyances with stretchy characters or positioning, but in general the formulas are readable. Hence in order to encourage the use of MathML and let people report bugs upstream and hopefully help to fix them, I decided to rely on the native MathML support for Webkit-based browsers. I'll still keep MathJax for Internet Explorer (when MathPlayer is not installed) and Opera.

I had the chance to meet Dave Barton when I was in Silicon Valley last October for the GSoC mentor summit. We could exchange our views on the MathML implementations in browsers and discuss the perspectives for the future of MathML. The history of MathML in Webkit is actually quite similar to Gecko's: one volunteer, Alex Milowski, decided to write the initial implementation. This attracted more volunteers who joined the effort, helped to add new features and carried the project forward. Dave told me that the initial Webkit implementation did not pass Google's security review and that's why MathML was not enabled in Chrome. It was actually quite surprising that Apple decided to enable it in Safari, and in particular in all of Apple's mobile products. Dave's main achievement has been to fix all these security bugs so that MathML could finally appear in Chrome.

One of the ideas I share with Dave is how important it is to have native MathML support in browsers, rather than delegating the rendering to Javascript libraries like MathJax or browser plug-ins like MathPlayer. It's always a bit sad to see that third-party tools are necessary to improve the native browser support of a language that is sometimes considered a core XML language for the Web together with XHTML and SVG. Not only is native support faster, but it also integrates better in the browser environment: zooming text, using links, applying CSS style, mixing with SVG diagrams, doing dynamic updates with e.g. Javascript, etc.: all the features Web users are familiar with are immediately available. In order to illustrate this concretely, here are a couple of demos. Some of them are inspired by Mozilla's MathML demo pages, recently moved to MDN. By the way, the famous MathML torture page is now here. Also, try this test page to quickly determine whether you need to install additional fonts.

MathML with CSS text-shadow & transform properties, href & dir attributes as well as Javascript events

$\det\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}=45+84+96-(105+48+72)=0$, followed by the same determinant written right-to-left with Eastern Arabic numerals (via the dir attribute).

HTML and animated SVG inside MathML tokens

[Formula demo omitted: MathML token elements containing HTML content and an animated SVG, which cannot be reproduced in plain text.]

MathML inside animated SVG (via the <foreignObject> element):

$\sum_{n=0}^{+\infty}\frac{\alpha^n}{n!}=\exp(\alpha)$

Note that although Dave was focused on improving MathML, the language naturally integrates with the rest of Webkit's technologies, and almost all the demos above work as expected without any additional effort. Actually, Gecko's MathML support relies less on the CSS layout engine than Webkit's does, and this has been a recurrent source of bugs. For example, in the first demo the text-shadow property is not applied to some operators (bug 827039), while it is in Webkit.

In my opinion, one of the problems with MathML is that browser vendors have never really shown much interest in this language, and the standardization and implementation efforts were mainly led and funded by organizations from the publishing industry or by volunteer contributors. As the MathML WG members keep repeating, they would love to get more feedback from browser developers. This is quite a problem for a language whose main goals include the publication of mathematics on the Web. It leads, for example, to MathML features (some of them now deprecated) duplicating CSS properties, or to the <mstyle> element, which has most of its attributes unused and does things similar to CSS inheritance in an incompatible way. As a consequence, it was difficult to implement all MathML features properly in Gecko, and this is the source of many bugs like the one I mentioned in the previous paragraph.

Hopefully, the new MathML support in Chrome will bring more interest to MathML from contributors or Web companies. Dave told me that Google could hire a full-time engineer to work on MathML. Apparently, this is mostly because of demands from companies working on Webkit-based mobile devices or involved in EPUB. Although I don't have the same impression from Mozilla Corporation at the moment, I'm confident that with the upcoming FirefoxOS release, things might change a bit.

Finally, I also expect that we, at MathJax, will continue to support the MathML implementations in browsers. One of the ideas I proposed to the team was to let MathJax select the output mode according to the MathML features supported by the browser. Hence the native MathML support could be used if the page contains only basic mathematics, while MathJax's rendering engine would be used when more advanced mathematical constructions are involved. Another goal will be to make MathJax the default rendering in Wikipedia, which would be much better than the current raster image approach and would allow users to switch to their browser's MathML support if they wish...

Wednesday, November 14 2012

Writing mathematics in emails

People writing mathematics in emails, like researchers in mathematics or physics, have probably encountered the difficulty of properly formatting complex mathematical formulas. The most common technique is just to write text with a LaTeX-like or ASCIIMathML-like syntax and hope that the recipient will understand the expressions. Obviously, this is not really convenient to write or read, and errors may happen and result in misunderstandings between the sender and the recipient. There are other classical issues like how to write the math (special syntax? math panel? handwriting recognition?), accessibility, rendering quality, etc. Of course, these issues are well known and expected to be addressed by MathML. Since HTML is a common format for email and MathML is now part of HTML5, MathML is clearly a good candidate to solve the problem of mathematics in emails.

The idea of using MathML in emails is not new and was already suggested in a screenshot from the Mozilla MathML Project more than 10 years ago. Thunderbird has been able to render MathML in newsfeeds for a long time, provided that the author served the content as XHTML. I may also mention Amaya, which added support for sending a document by email in 2007, although I have never figured out how to configure it to send emails. Two years ago, I tried without success to fix a bug to display XHTML attachments inline, which could have been a partial solution to the problem. Finally, one year ago Bob Mathews (from Design Science) asked me about the status of MathML in Thunderbird, and I unfortunately could not give him a better answer than what is in the present paragraph. But I hoped that MathML in HTML5 would change the situation.

Indeed, while I was working on some MathML-in-clipboard patches, I realized that it is now possible to paste MathML inside an email. After further discussions with Bob Mathews, Paul Topping & David Carlisle, I've been able to do more testing. The situation is the following:

  • Thunderbird can send emails containing MathML and render them correctly.
  • Apple Mail (used in Mac OS X and iOS) can receive emails containing MathML and should render them correctly since MathML is enabled in Apple's products.
  • Microsoft Outlook does not render MathML in emails. However, its rendering is based on Microsoft Word, which has MathML support. Basically, Thunderbird sends MathML in HTML5 and Word displays MathML after an XSLT conversion into Microsoft's own OMML format. Hence Microsoft might be able to do something not too complicated to make the whole thing work.
  • Web mail clients like Gmail or Zimbra seem to filter the MathML out of emails and so do not render it correctly. If this filter were removed, they could certainly let the browser do the rendering job or use MathJax to do so.

Now let's consider a basic example of how to send MathML in emails using Thunderbird. One of the issues is that Gecko's editor has really been designed with only HTML-editing features in mind, and if you start editing MathML formulas you are going to get invalid markup messages or other trouble. And of course Thunderbird does not have any math panel or other WYSIWYG tools to write mathematics. However, it might not be too difficult to write an add-on to bring MathML editing features to Thunderbird, like BlueGriffon's add-on or Firemath (these add-ons might even install in Thunderbird without too much trouble). Or one can of course use one of the existing tools to generate MathML and just paste the code into Thunderbird. Here I'm going to use the itex2MML filter. So first write your mail in a separate text file:

mail.txt

Hi Matthew, I just read your email about the behavior of the factorial function and the harmonic series for large values of $n$. If you denote by $\gamma \approx 0.5772156649$ Euler's constant and by $e \approx 2.7182818284$ Euler's number, then you have the well-known Stirling approximation:

$$n! = \sqrt{2 \pi n} {\left( \frac{n}{e} \right)}^n \left( 1 + O \left( \frac{1}{n} \right) \right)$$

where of course I use the classical constant $\pi \approx 3.1415926535$. We also have the following asymptotic expansion:

$$\sum_{k=1}^n \frac{1}{k} = \ln(n) + \gamma + O \left( \frac{1}{n} \right)$$

I hope that this answers your question.

then call itex2MML to replace the LaTeX code by <math> elements:

cat mail.txt | itex2MML > mail.html

Write a new mail in Thunderbird and use the "Insert > HTML" menu entry. David Carlisle told me that you have to make sure that "send as HTML" is enabled if that entry does not show up. Then just copy the mail.html source into the window:

[Screenshot: insert MathML]

Once you click the insert button, the MathML should be automatically rendered in Thunderbird:

[Screenshot: MathML rendered in Thunderbird]

When your email is ready, just send it as usual! Here is how it appears on an iPod:

[Screenshot: MathML in Apple Mail]

Let's just hope that other mail clients will support MathML in emails!

Thursday, September 13 2012

Master Thesis, LaTeXML and Quantum Groups (part 1)

I wanted to wait for my oral defense before blogging about my master thesis and how I managed to publish Web and paper versions of it. However, I finally met my supervisor today and the oral defense is now only likely to take place next Wednesday. I do not want to delay this blog post too much and thus decided to publish it today...

This year, I have taken the following approach to writing my master thesis: from LaTeX input files, I used XeLaTeX to generate a PDF document and LaTeXML to generate HTML+MathML Web pages. I had to handle some small differences via separate configuration files, but in general these two tools are compatible and accept more or less the same LaTeX input. I did not really have to make graphics: I only used the amscd package to draw simple commutative diagrams and did not try to draw schemas for representations of quantum groups. Hence I did not get the opportunity to test how LaTeXML can generate MathML inside SVG, although I saw in July something interesting for Firefox on the LaTeXML mailing list.

The PDF version provides a good print layout, which allows me to work around some issues that I had two years ago when I printed my Master Thesis in Computer Science directly from Firefox. XeLaTeX also seems much faster, and so more convenient to use when you only want to check that your LaTeX code is syntactically correct and get a quick preview. It seems that XeLaTeX uses a kind of cache: there are intermediate files that I guess are reused when you regenerate the document. In contrast, LaTeXML seems to always regenerate one big XML file in a first step and the Web pages in a second step. Perhaps LaTeXML has an option to avoid that behavior, or perhaps the idea of a cache system does not work well in the case of Web pages.

The output of LaTeXML has the classical advantages of HTML+MathML for publication on the Web and is much more comfortable to read on a screen. Generally speaking, I think Firefox renders the LaTeXML output pretty well. LaTeXML generates HTML rows to implement labelling and does not rely on mathvariant, which avoids issues with <mlabeledtr> and token elements. However, I still note some MathML rendering imperfections which, not surprisingly, have already been mentioned in the MathML project roadmap:

  • Linebreaking: bad line breaks inside some equations, apparently those generated by environments like multline or gathered. Sometimes I also see bad line breaks around equations, for example when they are inside parentheses.
  • Spacing: the lack of support for mtable@rowspacing/columnspacing seems to give wrong spacing inside binom-like notations. For some reason, LaTeXML generates <mpadded> elements of zero width in some places, and they cause weird overlaps in some summations.
  • Operator stretching: commutative diagrams in the definition of Hopf algebras would look better with support for stretchy operators in table cells.

This also gives me the opportunity to report various bugs and make some suggestions to the LaTeXML team, including the use of MathJax (for browsers without MathML support), the replacement of <mfenced> by the equivalent <mrow>, <mo> constructions (better rendering in Firefox), improvements to the generation of headers in HTML5, and more.

The title of my master thesis is "Specialization of Quantum Groups at a Root of Unity and Finite Dimensional Representations". The concept of Quantum Groups is based on ideas from theoretical physics, but I studied these structures from a purely algebraic point of view. That is not likely to be interesting if you have never heard about Lie algebras or are not familiar with representation theory, so I will present my contribution in a separate blog post.

Thursday, March 1 2012

MathJax 2.0 no longer uses Firefox MathML support by default :-(

MathJax 2.0 was released last Sunday and contains a lot of bug fixes and new features. The changes are extensively described on the MathJax web site, but one of them is particularly important for Mozillians to know about, so I thought I should write a blog post about it. As indicated in the title, MathJax 2.0 no longer uses Firefox's MathML support by default, because of some (more or less serious) limitations of the Mozilla MathML engine. Perhaps the main reason is the lack of support for <mlabeledtr>, which is essential to implement the LaTeX commands introduced in MathJax 2.0 for equation labeling/numbering.

First of all, for those who do not know, MathJax is an open source Javascript display engine for mathematics. Basically, it parses mathematical expressions (e.g. LaTeX or MathML) inside the source code of a Web page, converts them into an internal format (more or less the MathML format) and outputs some code (HTML+CSS, SVG or MathML) that browsers can use to render the mathematical expressions. MathJax provides excellent rendering quality across all browsers (as opposed to the traditional raster images of mathematical formulas), requires no specific installation for visitors (no need for native MathML support, downloadable fonts can be used, etc.) and is very easy to set up and maintain for the webmaster (authors just have to insert a <script> tag linking to the MathJax CDN and write LaTeX inside the source code; upgrades are automatic...). MathJax provides other advantages and also has some drawbacks, but I will not go into the details here.

As has already been said elsewhere, three main reasons have prevented the mainstream adoption of MathML: the need for a converter or other tools to generate MathML, the need to serve the page as XML (until recently) and the lack of good support in the main browsers. Because of the advantages mentioned above, MathJax has become very popular, the MathJax user list is active and MathJax is even being considered for use by arXiv and Wikipedia. Since MathJax relies on MathML, it has contributed to increasing the use of this W3C recommendation. This has also obviously enabled the Mozilla MathML Project to receive a lot of feedback to improve its implementation.

Unfortunately, with the release of MathJax 2.0, Firefox's MathML support is going to be much less used and tested. More precisely, it will not be used by default on Web sites that use MathJax unless the webmaster modifies this configuration or visitors select it in the MathJax menu. It is definitely a shame for Mozilla and the MathML community in general to see the best native MathML support disabled. I want to highlight that in most cases, Firefox will display the MathML code generated by MathJax without problems. Thus, if you are a webmaster or a Firefox user willing to help the Mozilla MathML project, I suggest you override the default configuration. But more importantly, this change should encourage Mozilla developers to continue improving Firefox's MathML support so that it becomes the default renderer in MathJax again...

--update: I forgot to mention that now that bug 356870 is fixed (Firefox 9), it is possible to show the label on the left of <mlabeledtr> rows with the simple CSS rule below. That should provide a satisfying result in most cases.

mlabeledtr > mtd:first-child {
  display: table-cell;
}

--update 2: Because I got feedback about it, I just want to clarify things: the change described here is not arbitrary or political. It has been made from a purely technical point of view: there have always been various issues in the MathML rendering that can be inconvenient for Firefox users. Now, MathJax 2.0 relies on two key MathML features that are missing from Firefox's rendering (namely linebreaking and equation labeling), which were deemed important enough to justify switching to HTML-CSS by default. Again, please read the Bugzilla entry for technical details.