There has recently been discussion in the Mozilla community about Opera's
switch from Presto to Webkit and the need to preserve browser competition and
diversity of rendering engines, especially on mobile devices. Some people
outside the community seem a bit skeptical about that argument. Perhaps a
striking example to convince them is the case of MathML, where
basically only Gecko has a decent native implementation, and the situation at
the recent eBooks workshop
illustrates that very well: MathML support is very important for some publishers
(e.g. in science or education), but the main eBook readers rely
exclusively on the Webkit engine and its rudimentary MathML implementation.
Unfortunately, because there are currently essentially no alternatives on mobile
platforms, developers of eBook readers have no choice but to propose
partial EPUB support or to rely on polyfills.
After Google's announcement that MathML would be removed from Chrome 25, someone
joked on Twitter that an Acid test for MathML should be
written, since that seems to motivate browser vendors more than community
feedback. I do not think that MathML support is considered important from the
point of view of browser competition, but I took this idea and started writing
MathML versions of the famous Acid2 and Acid3 tests. The current source of these
MathML Acid tests is available on GitHub. Of course, I believe that native
MathML implementation is very important and I expect at least that these tests
could help the MathML community: users and implementers.
Here is the result of the MathML Acid2 test with the stable Gecko release.
To pass the test we only need to implement negative spacing, or at least to
integrate the patch I submitted when I was still active in Gecko development (bug 717546).
And here is the score of the MathML Acid3 test with the stable Gecko
release. The failure of test 18 was not supposed to happen; I discovered it
while writing the test. It will be fixed by James Kitchener's refactoring.
Obviously, reaching the score of 100/100 will be much more difficult for our
volunteer developers to achieve, but the current score is not too bad compared
to other rendering engines...
I’ve recently been working on automated testcase reduction tools for the
MathJax project and thus
I had the chance to study Jesse Ruderman’s
Lithium tool, itself inspired by the
ddmin algorithm. The ddmin
paper contains good ideas, for example the fact that the reduction can
be improved if we rely on the testcase structure, like XML nodes or grammar
rules, instead of just characters/lines (that’s why I’ve started to write
a version of
Lithium that works with abstract data structures). However, the authors of the
ddmin paper don’t really analyse the complexity of the algorithm precisely,
apart from the best and worst cases, and there is a large gap between the two.
Jesse's analysis is
much better and in particular introduces the concepts of monotonic testcase
and clustered reduction, for which the algorithm performs best and which
intuitively correspond to the usual testcases that we meet in practice. However,
the monotonic+clustered case complexity is only “guessed”, and the bound
$O(M \log_2 N)$ for a monotonic testcase (of size $N$ with final reduction of
size $M$) is not optimal. For example, if the final reduction is relatively
small compared to $N$, say $M = N / \log_2 N$,
then $M \log_2 N = N$ and we
can’t say that the number of verifications is small compared to $N$.
In particular, Jesse can
not deduce from his bound that Lithium’s algorithm is better than an approach
based on $M$ binary search executions!
In this blog post, I shall give the optimal bound $O(M \log_2(N/M) + M)$ for
the monotonic case and formalize the sense in which the clustered reduction is
near the best case. I’ll also compare Lithium’s algorithm with the binary search
approach and with the ddmin algorithm. I shall explain that Lithium is the
best in the monotonic case (and actually matches the ddmin algorithm in that case).
Thus suppose that we are given a large testcase exhibiting an unwanted behavior.
We want to find a smaller testcase exhibiting the same behavior,
and one way is to isolate subtestcases that cannot be reduced any further.
A testcase can be quite general, so here are basic definitions to formalize this a bit:
A testcase $T$ is a nonempty finite set of elements
(lines, characters, tree nodes, user actions) exhibiting an “interesting”
behavior (crash, hang and other bugs…).
A reduction of $T$ is a testcase $T' \subseteq T$
with the same “interesting” behavior as $T$.
A testcase $T$ is minimal if no proper subset of $T$ is a reduction of $T$.
Note that by definition, $T$ is a reduction of itself and $\emptyset$ is not
a reduction of $T$. Also, the relation “is a reduction of” is transitive, that
is, a reduction of a reduction of $T$ is a reduction of $T$.
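As an illustration, these definitions can be modeled directly in code. The predicate `interesting` below is a hypothetical stand-in for a real bug check; this is only a sketch, not part of the original analysis:

```python
from itertools import combinations

def interesting(testcase):
    # Hypothetical "interesting" behavior: the bug triggers whenever the
    # testcase still contains both marker elements 3 and 7.
    return {3, 7} <= set(testcase)

def is_reduction(sub, testcase):
    # A reduction of T is a subset of T with the same interesting behavior.
    return set(sub) <= set(testcase) and interesting(sub)

def is_minimal(testcase):
    # T is minimal if no proper nonempty subset of T is a reduction of T.
    # Brute force: this tests exponentially many subsets, which is
    # infeasible for large testcases.
    elems = list(testcase)
    return not any(
        interesting(sub)
        for size in range(1, len(elems))
        for sub in combinations(elems, size)
    )

assert is_reduction({3, 7}, {1, 3, 5, 7, 9})
assert is_minimal({3, 7}) and not is_minimal({1, 3, 5, 7, 9})
```

The brute-force `is_minimal` check makes concrete why smarter strategies are needed: it enumerates every proper subset.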
We assume that verifying one subset $T' \subseteq T$, to check if it has the
“interesting” behavior, is what takes the most time
(think e.g. testing a hang or user actions), so we want to optimize the number
of testcases verified. Moreover, the original testcase $T$ is large, and so a
fast reduction algorithm would be one with complexity $o(N)$, where $N = |T|$.
Of course, we also expect to find a small reduction,
that is, of size $M \ll N$.
Without information on the structure of a given testcase or on the
properties of the reduction, we must consider
all the $2^N - 1$ nonempty subsets $T' \subseteq T$ to find a minimal
reduction. And we only know how to do that in $O(2^N)$
operations (or $O(2^{N/2})$ with
Grover’s algorithm ;-)).
Similarly, even to determine whether $T$ is minimal would
require testing $2^N - 2$ proper subsets, which is again exponential in $N$.
Hence we consider the following definitions:
For any integer $k \geq 1$, $T$ is $k$-minimal if no subset obtained from $T$
by removing a nonempty portion of size at most $k$ is a reduction of $T$.
In particular, $T$ is $1$-minimal if no $T \setminus \{x\}$ is a reduction of $T$.
$T$ is monotonic if, whenever $T_1 \subseteq T_2 \subseteq T$ and $T_1$ is a
reduction of $T$, then $T_2$ is also a reduction of $T$.
Finding a $k$-minimal reduction will give a minimal testcase that is no longer
interesting if we remove any portion of size at most $k$. Clearly, $T$ is
minimal if it is $k$-minimal for all $k$. We still need to test exponentially
many subsets to find a minimal reduction that way. To decide whether $T$
is $k$-minimal, we need to consider the subsets obtained by
removing portions of size $1, 2, \ldots, k$,
that is $\sum_{j=1}^{k} \binom{N}{j}$ subsets. In
particular, whether $T$ is $1$-minimal can be decided with only $N$
verifications, and so is tractable.
If $T$ is monotonic, then so is any reduction of $T$. Moreover, if $T_1$
is a reduction of $T$ and $T_1 \subseteq T_2 \subseteq T$, then $T_2$
is a reduction of $T$. Hence, when $T$ is monotonic, $T$ is $1$-minimal if and
only if it is minimal. We will target $1$-minimal reductions in what follows.
Let’s consider Lithium’s algorithm.
We assume that $T$ is ordered and so can be identified with the interval
$[1, N]$ (think for example line numbers). For simplicity, let’s first assume
that the size of the original testcase is a power of two, that is $N = 2^n$.
Lithium performs successive steps $k = 1, 2, \ldots, n$. At step $k$, we
consider the chunks among the intervals
$[1 + j 2^{n-k}, (j+1) 2^{n-k}]$, $0 \leq j < 2^k$,
of size $2^{n-k}$.
Lithium verifies if removing each
chunk provides a reduction. If so, it permanently removes that chunk and tries
another chunk. Because $\emptyset$ is not a reduction of $T$, we immediately
increment $k$ if only one chunk remains.
The $n$-th step is the same, with chunks of size 1, but we stop
only when we are sure that the current testcase is $1$-minimal, that is,
when after $|T'|$ attempts ($T'$ being the current testcase) we have not
reduced any further. If $N$ is not a
power of 2, then $2^{n-1} < N < 2^n$ where $n = \lceil \log_2 N \rceil$.
In that case, we apply the same algorithm as for $2^n$ (i.e. as if there were
$2^n - N$ dummy elements at the end),
except that we don’t need to try removing the chunks that are entirely in that
dummy part.
This saves testing at most $n$ subtests (those that would be obtained by
removing the dummy chunks at the end, of sizes $2^{n-1}, 2^{n-2}, \ldots, 1$).
Hence in general, if $V(N)$ is the number of
subsets of $T$ tried by Lithium, we have $V(N) \leq V(2^n)$.
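A minimal Python sketch of the chunk-removal loop just described (this is my own reading of the algorithm with a caller-supplied `interesting` oracle, not Lithium's actual code):

```python
def lithium_reduce(testcase, interesting):
    """Reduce `testcase` (a list) to a 1-minimal sublist by trying to
    remove chunks of decreasing size, in the spirit of Lithium."""
    assert interesting(testcase)
    current = list(testcase)
    chunk_size = max(1, len(current) // 2)
    while True:
        i = 0
        any_removed = False
        while i < len(current):
            # Try removing the chunk current[i : i + chunk_size].
            candidate = current[:i] + current[i + chunk_size:]
            if candidate and interesting(candidate):
                current = candidate  # chunk removed permanently
                any_removed = True
            else:
                i += chunk_size      # keep this chunk, try the next one
        if chunk_size > 1:
            chunk_size //= 2         # halve the chunk size
        elif not any_removed:
            # A full pass of size-1 removals failed: current is 1-minimal.
            return current
```

For example, with a hypothetical oracle that needs elements 3 and 7, `lithium_reduce(list(range(10)), lambda tc: {3, 7} <= set(tc))` returns the 1-minimal sublist `[3, 7]`.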
Let $M$ be the size of the $1$-minimal testcase found
by Lithium and $m = \lceil \log_2 M \rceil$.
Lithium will always perform the $n$ initial steps above and check at least
one subset at each step. At the end, it needs $M$ more verifications to
be sure that the testcase is $1$-minimal. Hence the number $V$ of
verifications always satisfies $V = \Omega(M + \log_2 N)$.
Now, consider the case where $T$ is monotonic and has exactly one minimal
reduction $T_{\min}$, which is an interval of length $M$, say $T_{\min} = [1, M]$.
Then $T_{\min}$ is included in the first chunk at each step
$k \leq n - m$. Because $T$ is monotonic, this means that at step 1
we do two verifications and the second chunk is removed because it does
not contain the elements of $T_{\min}$ (and the third one too
if $N$ is not a power of two); at step 2 it
remains two chunks, we do two verifications and the second chunk is removed, etc.,
until the chunk size becomes smaller than $M$. For $k > n - m$,
the number of chunks can grow again: 2, 4, 8… that is, we handle at most
$2^{k - (n - m) + 1}$ chunks from step $n - m$ to $n$.
At step $n$, a first round
of at most $2M$ verifications ensures that the testcase is of size $M$ and a
second round of $M$ verifications ensures that it is $1$-minimal. So
$V \leq 2(n - m) + (2 + 4 + \cdots + 2^m) + 3M$,
and after simplification $V = O(M + \log_2 N)$.
Hence the lower bound $\Omega(M + \log_2 N)$ is optimal. The previous example
suggests the following generalization: a testcase $T'$ is $C$-clustered if it
can be written as the union of $C$ nonempty closed
intervals $T' = I_1 \cup I_2 \cup \cdots \cup I_C$. If the
minimal testcase found by Lithium is $C$-clustered, each $I_j$ is of length at
most $M$ and so intersects at most 2 chunks of length $2^{n-k} \geq M$
from the step $n - m$. So $T'$ intersects at most $2C$ chunks from
the step $n - m$, and a fortiori from all the steps $k \leq n - m$.
Suppose that $T$ is monotonic. Then if $c$ is a chunk that does not contain any
element of $T'$, then $T \setminus c \supseteq T'$ is a reduction of $T$ and so
Lithium will remove the chunk $c$. Hence at each step $k \leq n - m$,
at most $2C$ chunks survive and so there are at most $4C$ chunks at the next
step. A computation similar to what we have done for the interval case shows that
$V = O(C \log_2 N + M)$ if the final
testcase found by Lithium is $C$-clustered. Note that we always have
$C \leq M \leq N$. So if $C = O(1)$, then $V = O(M + \log_2 N)$ is small
as wanted. Also, the final testcase is always $M$-clustered (union of $M$ intervals
that are singletons!),
so we find that the monotonic case is $O(M \log_2 N + M) = O(M \log_2 N)$.
We shall give a better bound below.
Now, for each step
$k = 1, \ldots, n$, Lithium splits the testcase in at most $2^k$
chunks and tries to remove each chunk, so this splitting work costs $O(N)$
verifications in total. During the final step, it does at most $N$ verifications
before stopping or removing one chunk (so the testcase becomes of size at most
$N - 1$), then it does at most $N - 1$ verifications before stopping or removing one more
chunk (so the testcase becomes of size at most $N - 2$), …,
then it does at most $M + 1$ verifications before stopping or removing one more
chunk (so the testcase becomes of size at most $M$). Then the testcase is
exactly of size $M$ and Lithium does at most $M$ additional verifications.
Hence in general $V = O(N + (N - 1) + \cdots + M) = O(N^2)$.
This bound is asymptotically optimal (recall that we assume $M \ll N$): consider
the case where the proper reductions of $T$ are exactly the segments obtained
by removing the last elements one by one. The testcase will be preserved
during the first phase. Then we will keep browsing at least the first half to
remove, one at a time, the elements at positions greater than $N/2$. So
$V = \Omega(N^2)$.
We now come back to the case where $T$ is
monotonic. We will prove that the worst case is $V = O(M \log_2(N/M))$,
and so our assumption $M \ll N$ gives $V = o(N)$,
as we expected. During the steps
$k = 1, \ldots, m$, we test at most $2^k$ chunks. When $k > m$,
there may be up to $2^k$ chunks, but at most $M$ distinct chunks contain
an element from the final reduction.
By monotonicity, at most $M$ chunks will survive and there are at most $2M$
chunks at step $k + 1$. Again, only $M$ chunks will survive at that step,
and so on until $k = n$. At the final step, it remains at most $2M$
elements. Again by monotonicity, a first round of at most $2M$ tests will make
$M$ elements survive and we finally need $M$ additional tests to ensure that
the testcase is $1$-minimal. Hence
$V \leq \sum_{k=1}^{m} 2^k + 2M(n - m) + 3M = O(M \log_2(N/M) + M)$. This bound is
optimal: if $N$ and $M$ are powers of two, consider the case where
$T_{\min} = \{1 + jN/M : 0 \leq j < M\}$, the set of $M$ points evenly spaced
by $N/M$, is the only minimal testcase
(and $T$ is monotonic); if $N$ is not a power of two, consider the same
configuration with points removed at odd positions. Then for each step
$k \leq m$, every chunk contains a point of $T_{\min}$ and no chunks are
removed. Then the chunks not containing points of $T_{\min}$ are removed at
step $m + 1$ and it remains $M$ surviving chunks. Then for steps
$k = m + 1, \ldots, n$ there are always about $2M$ chunks to handle. So
$V = \Omega(M(n - m)) = \Omega(M \log_2(N/M))$.
We note that we have used two different methods to bound the number of
verifications in the general monotonic case, or when the testcase is
$C$-clustered. One naturally wonders what happens when we combine the two
techniques. So let $c = \lceil \log_2 C \rceil$. From step $1$ to $c$,
the best bound we found was $2^k$ chunks per step; from step $c$ to $m$, it was
$4C$; from step $m$ to $n - m$ it was again $4C$; from step
$n - m$ to $n$, it was $2M$; and finally for the last step,
including the final verifications, it was $3M$. Taking the sum, we
get $V = O(C + C(n - m - c) + Mm)$, that is
$V = O(C \log_2(N/(MC)) + M \log_2 M)$. If $C = \Theta(M)$, this becomes
$V = O(M \log_2(N/M))$, the general monotonic bound; at the opposite,
if $C = O(1)$, we get $V = O(\log_2 N + M \log_2 M)$, close to the best case.
If $C$ is not $\Theta(M)$ but $o(M)$, then
$C \log_2(N/(MC)) = o(M \log_2(N/M))$ and
so the expression is asymptotically better than the general monotonic
bound. Hence we have
obtained an intermediate result between the worst and best monotonic cases and
shown the role played by the number of clusters: the less the final testcase
is clustered, the faster Lithium finds it. The results are summarized in the
following table.
Assumptions                                                      | Number of tests
(none)                                                           | $O(N^2)$
$T$ is monotonic                                                 | $O(M \log_2(N/M) + M)$
$T$ is monotonic ; $T'$ is $C$-clustered ($C = O(1)$)            | $O(M + \log_2 N)$
$T$ is monotonic ; $T'$ is $C$-clustered ($C$ and $M$ unbounded) | $O(C \log_2(N/(MC)) + M \log_2 M)$

Figure 0.1: Performance of Lithium’s algorithm for some initial testcase of
size $N$ and final reduction of size $M$. $T'$ is $C$-clustered if it
is the union of $C$ intervals.
In the ddmin algorithm, at each step we add a preliminary round where we try
to immediately reduce $T$ to a single chunk (or, equivalently, to remove the
complement of a chunk). Actually, the ddmin algorithm only does this preliminary
round at steps where there are more than 2 chunks, for otherwise it would do
twice the same work. For each step $k \geq 2$, if one chunk $c$
is a reduction of $T$, then $c \subseteq c'$ for some chunk $c'$ at the
previous step $k - 1$. Now, if $T$ is monotonic then, at level $k - 1$, removing all
but the chunk $c'$ gives a subset that contains $c$ and so is a reduction of $T$
by monotonicity. Hence only the chunk $c'$ survives at level $k - 1$ and
there are exactly 2 chunks at level $k$, and so the ddmin
algorithm is exactly Lithium’s algorithm when $T$ is monotonic. The ddmin
algorithm also keeps in memory the subsets that it has already tried and found
uninteresting, in order to avoid
repeating them. However, if we only reduce to the complement of a chunk, then
we can never repeat the same subset, and so this additional work is useless.
That’s the case if $T$ is monotonic.
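The preliminary round can be sketched as follows (a hypothetical helper illustrating the idea, not ddmin's actual code; `interesting` is the oracle):

```python
def ddmin_preliminary_round(current, chunks, interesting):
    """Extra round ddmin adds at each step: before trying to remove
    individual chunks, try to reduce the whole testcase to a single chunk
    (equivalently, to remove the complement of a chunk). It is skipped
    when there are only 2 chunks, where "keep one chunk" and "remove the
    other chunk" are the same test."""
    if len(chunks) > 2:
        for chunk in chunks:
            if interesting(chunk):
                return chunk  # testcase reduced to this single chunk
    return current  # no single chunk is interesting: fall through

# Example with a hypothetical oracle: only the chunk containing 3 works.
chunks = [[0, 1], [2, 3], [4, 5], [6, 7]]
assert ddmin_preliminary_round(list(range(8)), chunks, lambda tc: 3 in tc) == [2, 3]
```

When no single chunk is interesting (for instance when the reduction spans several chunks), the round changes nothing and the ordinary chunk-removal round proceeds.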
If $T$ is monotonic, Jesse proposes a simpler approach based on a binary search.
Suppose first that there is only one minimal testcase
$T_{\min} = \{x_1 < x_2 < \cdots < x_M\}$.
If $x \leq x_1$, then $[x, N]$ contains $T_{\min}$ and so
$[x, N]$ is a reduction of $T$ because $T$ is monotonic. If instead
$x > x_1$, then $[x, N]$ is not a reduction of $T$, for
otherwise a minimal reduction of $[x, N]$ would be a minimal reduction of $T$
distinct from $T_{\min}$, which we exclude by hypothesis.
So we can use a binary search
to find $x_1$ by testing at most $\log_2 N$ testcases
(modulo some constant). Then we try with the intervals $\{x_1\} \cup [x, N]$
to find the second least element $x_2$ of $T_{\min}$ in at most
$\log_2 N$ verifications. We continue until we find the $M$-th element of
$T_{\min}$. Clearly,
this gives $O(M \log_2 N)$ verifications, which sounds equivalent to
Jesse’s bound with even a better constant factor. Note that the algorithm
still works if we remove the assumption that there is only one minimal testcase.
We start with the whole interval and find the greatest $x_1$ such that
$[x_1, N]$ is a reduction of $T$.
Then $[x_1, N]$ contains at least one minimal reduction
with least element $x_1$, and so $\{x_1\} \cup [x, N]$ is a reduction for
$x$ at most the second least element of such a minimal reduction, because $T$
is monotonic.
For larger $x$, $\{x_1\} \cup [x, N]$ is not a reduction of $T$, or a minimal
reduction of it would be a minimal reduction with least element $x_1$ whose
second least element is greater than all those second least elements. So the
binary search remains valid: the algorithm continues to find $x_2$, $x_3$,
etc., and finally
returns a minimal reduction of $T$.
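Under the monotonicity assumption, this binary search strategy can be sketched as follows (my own illustrative implementation; `interesting` takes a list of elements of $[1, N]$):

```python
def binary_search_reduce(n, interesting):
    """Find a minimal reduction of [1, n] for a *monotonic* oracle
    `interesting`, following the binary search idea described above."""
    assert interesting(list(range(1, n + 1)))
    found = []   # elements of the reduction discovered so far, in order
    low = 1
    while not interesting(found):
        # Binary search for the greatest x >= low such that
        # found + [x..n] is still interesting; that x belongs to
        # some minimal reduction.
        lo, hi = low, n
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if interesting(found + list(range(mid, n + 1))):
                lo = mid
            else:
                hi = mid - 1
        found.append(lo)
        low = lo + 1
    return found
```

For instance, with the hypothetical monotonic oracle `lambda s: {3, 7} <= set(s)` and `n = 10`, the function performs two binary searches and returns `[3, 7]`.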
However, it is not clear that this
approach can work if $T$ is not monotonic, while we can hope that Lithium is
still efficient if $T$ is “almost” monotonic.
We remark that when the only minimal testcase $T_{\min}$ is one interval, the
binary search approach would still require something like $M \log_2 N$
verifications. So that would be the worst case of the binary search
approach, whereas Lithium handles this case very nicely in
$O(M + \log_2 N)$! In general, if
there is only one minimal testcase $T_{\min}$ of size $M$, then the number of
clusters can be anywhere between $1$ and $M$, and if $T_{\min}$ is
placed at random, its elements are spread over many clusters with high
probability. So the average complexity of the binary search approach in that
case would be close to $M \log_2 N$, which is still not as good as Lithium’s
optimal worst case of $O(M \log_2(N/M))$…
For those who missed the news, Google Chrome 24 has recently been
released with native MathML support. I'd like to thank
Dave Barton again for his
efforts during the past year, which have made this happen.
Obviously, some people may joke about how long it took Google to
make this happen
(the Mozilla MathML project started in 1999)
or criticize the poor
rendering quality. However, the MathML folks, aware of the history of
the language in browsers, will tend to be more tolerant and
appreciate this important step towards MathML adoption.
This now means that, among the most popular browsers,
Firefox, Safari and Chrome have MathML support while
Opera has a basic CSS-based implementation. This also means that about three
people out of four will be able to read pages with MathML without the
need for any third-party rendering engine.
After some testing,
I think the Webkit MathML support is now good enough to be used on
my Website. There
are a few annoyances with stretchy characters or positioning, but in
general the formulas are readable. Hence, in order to encourage the use
of MathML and let people report bugs upstream (and hopefully help to fix
them), I decided to
rely on the native MathML support for Webkit-based browsers. I'll still
keep MathJax for Internet Explorer (when MathPlayer is not installed) and
I had the chance to meet Dave Barton when I was in Silicon Valley
last October for the GSoC mentor summit. We
exchanged our views on the MathML implementations in browsers and discussed
the perspectives for the future of MathML.
The history of MathML in Webkit is actually
quite similar to Gecko's: one volunteer, Alex Milowski,
decided to write the initial implementation. This idea attracted more
volunteers who joined the effort, helped to add new features and
conducted the project. Dave told me that the initial Webkit
implementation did not pass Google's security review and that's why
MathML was not
enabled in Chrome. It was actually quite surprising that Apple decided
to enable it in Safari, and in particular in all Apple's mobile products.
Dave's main achievement has been to fix all these security bugs so that
MathML could finally appear in Chrome.
One of the ideas I share with Dave is how important it is to have native
MathML support in browsers, rather than delegating the rendering to
third-party tools.
It's always a bit sad to see that such tools are necessary to
improve the native browser support of a language that is sometimes
considered a core XML language for the Web, together with XHTML and SVG.
Not only is native support faster, but it also integrates better in the
browser environment: zooming text, using links, applying CSS style,
mixing with SVG diagrams, doing dynamic updates and so on.
In order to illustrate this concretely, here are
a couple of demos. Some of them are inspired by
Mozilla's MathML demo pages, recently
moved to MDN. By the way, the famous
MathML torture page is now on MDN too, and you can
try this test page to quickly determine whether you need to install
MathML fonts. The demos show:
- MathML with CSS text-shadow & transform properties, href & dir attributes,
as well as HTML and animated SVG inside MathML tokens;
- MathML inside animated SVG (via the <foreignObject> element).
Note that although Dave was focused on improving MathML, the language
integrates with the rest of Webkit's technologies and almost all the demos
above work as expected, without any additional effort. Actually,
Gecko's MathML support relies less on the CSS layout engine than Webkit's
does, and this has been a recurrent source of bugs. For example, in the
first demo the
text-shadow property is not applied to some operators (bug
827039) while it is in Webkit.
In my opinion, one of the problems with MathML is
that the browser vendors have never really shown a lot of interest in
this language, and the standardization and implementation efforts were mainly
led and funded by organizations from the publishing industry or by volunteers.
As the MathML WG members keep repeating, they would love to get more
feedback from browser developers.
This is quite a problem for a language that has among its main goals
the publication of mathematics on the Web.
This leads, for example, to MathML features
(some of them now deprecated) duplicating CSS properties, or
to the <mstyle> element, which has most of its
attributes unused and does similar things as CSS inheritance in an
incompatible way. As a consequence, it was difficult to implement
all MathML features properly in Gecko, and this is the source of many bugs
like the one I mentioned in the previous paragraph.
Hopefully, the new MathML support in Chrome will bring more
interest in MathML from contributors and Web companies.
Dave told me that Google could
hire a full-time engineer to work on MathML. Apparently, this is because
of demands from companies working on Webkit-based mobile devices or
involved in EPUB. Although I don't have the same impression
from Mozilla Corporation at the moment, I'm
confident that with the upcoming FirefoxOS release, things might change a bit.
Finally, I also expect that we, at MathJax, will continue to accompany the
MathML implementations in browsers. One of the ideas I proposed to the
team was to let MathJax select the output mode according to the
MathML features supported by the browser. Hence the native MathML support
could be used if the page contains only basic mathematics, while MathJax's
rendering engine would be used when
more advanced mathematical constructions are involved. Another goal
to achieve will be to make MathJax the default rendering in Wikipedia,
which would be much better than the current raster image approach
and would allow users to
switch to their browser's MathML support if they wish...
During last December, I made more progress on the exercises from Thomas
Jech's book "Set Theory". After the first
six chapters and the seventh
chapter, I worked on chapters 8, 9 and 10. I've published the solutions
for most of the exercises here: