**Note: some parts of this blog post (especially the JavaScript program) may be lost when exported to Planet or other feed aggregators. Please view it on the original page.**

I recently took a look at the description of the CSS 2D / SVG transform `matrix(a, b, c, d, e, f)` on MDN and I added a concrete example showing the effect of such a transform on an SVG line, in order to make this clearer for people who are not familiar with affine transformations or matrices.

This also reminded me of a small algorithm to decompose an arbitrary SVG transform into a composition of basic transforms (Scale, Rotate, Translate and Skew) that I wrote five years ago for the Amaya SVG editor. I have translated it into JavaScript and make it available here. Feel free to copy it on MDN or anywhere else. The convention used to represent transforms as 3-by-3 matrices is that of the SVG specification.

## Live demo

Enter the CSS 2D transform you want to reduce and decompose, or pick one example from the list. You can also choose between the LU-like and QR-like decompositions.

Here is the reduced CSS/SVG matrix as computed by your rendering engine **?** and its matrix representation:

After simplification (and modulo rounding errors), an SVG
decomposition into simple transformations is **?** and it renders
like this:

After simplification (and modulo rounding errors), a CSS decomposition
into simple transformations is **?** and it renders like this:

A matrix decomposition of the original transform is:

## Mathematical Description

The decomposition algorithm is based on the classical LU and QR decompositions. First, recall from the SVG specification that the transform `matrix(a,b,c,d,e,f)` is represented by the matrix

$$\left(\begin{array}{ccc}a& c& e\\ b& d& f\\ 0& 0& 1\end{array}\right)$$
and corresponds to the affine transformation

$$\left(\begin{array}{c}x\\ y\end{array}\right)\mapsto \left(\begin{array}{cc}a& c\\ b& d\end{array}\right)\left(\begin{array}{c}x\\ y\end{array}\right)+\left(\begin{array}{c}e\\ f\end{array}\right)$$which shows the classical factorization into the composition of a linear transformation $\left(\begin{array}{cc}a& c\\ b& d\end{array}\right)$ and a translation $\left(\begin{array}{c}e\\ f\end{array}\right)$. Now let's focus on the matrix $\left(\begin{array}{cc}a& c\\ b& d\end{array}\right)$ and denote by $\Delta =ad-bc$ its determinant. We first consider the LDU decomposition. If $a\ne 0$, we can use it as a pivot and apply one step of Gaussian elimination:

$$\left(\begin{array}{cc}1& 0\\ -b/a& 1\end{array}\right)\left(\begin{array}{cc}a& c\\ b& d\end{array}\right)=\left(\begin{array}{cc}a& c\\ 0& \Delta /a\end{array}\right)$$and thus the LDU decomposition is

$$\left(\begin{array}{cc}a& c\\ b& d\end{array}\right)=\left(\begin{array}{cc}1& 0\\ b/a& 1\end{array}\right)\left(\begin{array}{cc}a& 0\\ 0& \Delta /a\end{array}\right)\left(\begin{array}{cc}1& c/a\\ 0& 1\end{array}\right)$$Hence if $a\ne 0$, the transform `matrix(a,b,c,d,e,f)` can be written `translate(e,f) skewY(atan(b/a)) scale(a, Δ/a) skewX(atan(c/a))`.
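As a quick numerical check of this factorization (a sketch with my own helper names, not the post's code), one can multiply the three factors back together and recover the original matrix:

```javascript
// Multiply two 2x2 matrices [[m00, m01], [m10, m11]].
function mul(A, B) {
  return [
    [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
    [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
  ];
}

// LDU factors for matrix(a, b, c, d, …) with a ≠ 0, e.g. a=2, b=1, c=3, d=4:
const [a, b, c, d] = [2, 1, 3, 4];
const det = a * d - b * c;          // Δ = 5
const L = [[1, 0], [b / a, 1]];     // skewY(atan(b/a))
const D = [[a, 0], [0, det / a]];   // scale(a, Δ/a)
const U = [[1, c / a], [0, 1]];     // skewX(atan(c/a))

console.log(mul(mul(L, D), U));     // [[2, 3], [1, 4]]
```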
If $a=0$ and $b\ne 0$ then we have $\Delta =-cb$ and we can write (this is approximately "LU with full pivoting"):

$$\left(\begin{array}{cc}0& c\\ b& d\end{array}\right)=\left(\begin{array}{cc}0& -1\\ 1& 0\end{array}\right)\left(\begin{array}{cc}b& 0\\ 0& \Delta /b\end{array}\right)\left(\begin{array}{cc}1& d/b\\ 0& 1\end{array}\right)$$and so the transform becomes `translate(e,f) rotate(90°) scale(b, Δ/b) skewX(atan(d/b))`. Finally,
if $a=b=0$, then we already have an LU decomposition and we can just write

$$\left(\begin{array}{cc}0& c\\ 0& d\end{array}\right)=\left(\begin{array}{cc}c& 0\\ 0& d\end{array}\right)\left(\begin{array}{cc}1& 1\\ 0& 1\end{array}\right)\left(\begin{array}{cc}0& 0\\ 0& 1\end{array}\right)$$and so the transform is `translate(e,f) scale(c, d) skewX(45°) scale(0, 1)`.
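Putting the three cases together, the LU-like branch can be sketched in JavaScript as follows. This is my own compact rendition of the algorithm above, not the post's exact program; angles are emitted in radians, whereas the text writes degrees (e.g. `rotate(90°)`).

```javascript
// LU-like decomposition of matrix(a, b, c, d, e, f) into basic transforms,
// following the three cases of the derivation. Angles are in radians.
function decomposeLU(a, b, c, d, e, f) {
  const transforms = [];
  if (e !== 0 || f !== 0) transforms.push(`translate(${e}, ${f})`);
  const det = a * d - b * c;
  if (a !== 0) {
    // a can serve as pivot: skewY · scale · skewX.
    transforms.push(`skewY(${Math.atan(b / a)})`,
                    `scale(${a}, ${det / a})`,
                    `skewX(${Math.atan(c / a)})`);
  } else if (b !== 0) {
    // "Full pivoting": a quarter turn brings b into pivot position.
    transforms.push(`rotate(${Math.PI / 2})`,
                    `scale(${b}, ${det / b})`,
                    `skewX(${Math.atan(d / b)})`);
  } else {
    // First column is zero: scale · skewX(45°) · scale(0, 1).
    transforms.push(`scale(${c}, ${d})`,
                    `skewX(${Math.PI / 4})`,
                    `scale(0, 1)`);
  }
  return transforms.join(" ");
}

console.log(decomposeLU(2, 0, 0, 2, 0, 0)); // "skewY(0) scale(2, 2) skewX(0)"
```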

As a consequence, we have proved that any transform `matrix(a,b,c,d,e,f)` can be decomposed into a product of simple transforms. However, the decomposition is not always what we want: for example, `scale(2) rotate(30°)` will be decomposed into a product that involves `skewX` and `skewY` instead of preserving the nice factors.

We thus consider instead the QR decomposition. If $\Delta \ne 0$, then by applying the Gram–Schmidt process to the columns $\left(\begin{array}{c}a\\ b\end{array}\right),\left(\begin{array}{c}c\\ d\end{array}\right)$ we obtain

$$\left(\begin{array}{cc}a& c\\ b& d\end{array}\right)=\left(\begin{array}{cc}a/r& -b/r\\ b/r& a/r\end{array}\right)\left(\begin{array}{cc}r& (ac+bd)/r\\ 0& \Delta /r\end{array}\right)=\left(\begin{array}{cc}a/r& -b/r\\ b/r& a/r\end{array}\right)\left(\begin{array}{cc}r& 0\\ 0& \Delta /r\end{array}\right)\left(\begin{array}{cc}1& (ac+bd)/{r}^{2}\\ 0& 1\end{array}\right)$$where $r=\sqrt{{a}^{2}+{b}^{2}}\ne 0$. In that case, the transform becomes `translate(e,f) rotate(sign(b) * acos(a/r)) scale(r, Δ/r) skewX(atan((a c + b d)/r^2))`. In particular, a similarity transform preserves orthogonality and length ratios, and so $ac+bd=\left(\begin{array}{c}a\\ b\end{array}\right)\cdot \left(\begin{array}{c}c\\ d\end{array}\right)=0$ and $\Delta =\parallel \left(\begin{array}{c}a\\ b\end{array}\right)\parallel \parallel \left(\begin{array}{c}c\\ d\end{array}\right)\parallel \mathrm{sin}(\pi /2)={r}^{2}$. Hence for a similarity transform we get `translate(e,f) rotate(sign(b) * acos(a/r)) scale(r)` as wanted.
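For instance, here is a small numerical sketch (variable names are mine) checking this on the similarity `scale(2) rotate(30°)` from the earlier example, with angles in radians:

```javascript
// scale(2) rotate(30°) has linear part [[a, c], [b, d]] with:
const angle = Math.PI / 6;                // 30° in radians
const a = 2 * Math.cos(angle), b = 2 * Math.sin(angle);
const c = -2 * Math.sin(angle), d = 2 * Math.cos(angle);

const r = Math.hypot(a, b);               // √(a² + b²), the uniform scale factor
const rotation = Math.sign(b) * Math.acos(a / r);

console.log(a * c + b * d);               // ≈ 0: the columns are orthogonal
console.log(r);                           // ≈ 2
console.log(rotation * 180 / Math.PI);    // ≈ 30: the rotation angle is recovered
```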
We also note that it is enough to assume the weaker hypothesis $r\ne 0$ (that is, $a\ne 0$ or $b\ne 0$) in the expression above, and so the decomposition applies in that case too.
Similarly, if we let $s=\sqrt{{c}^{2}+{d}^{2}}$ and instead assume $c\ne 0$ or $d\ne 0$, we get

$$\left(\begin{array}{cc}a& c\\ b& d\end{array}\right)=\left(\begin{array}{cc}d/s& c/s\\ -c/s& d/s\end{array}\right)\left(\begin{array}{cc}\Delta /s& 0\\ (ac+bd)/s& s\end{array}\right)=\left(\begin{array}{cc}d/s& c/s\\ -c/s& d/s\end{array}\right)\left(\begin{array}{cc}\Delta /s& 0\\ 0& s\end{array}\right)\left(\begin{array}{cc}1& 0\\ (ac+bd)/{s}^{2}& 1\end{array}\right)$$Hence in that case the transform is `translate(e,f) rotate(90° - sign(d) * acos(-c/s)) scale(Δ/s, s) skewY(atan((a c + b d)/s^2))`. Finally, if $a=b=c=d=0$, then the transform is just `scale(0,0)`.
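The QR-like branch can be sketched similarly (again my own rendition rather than the post's exact program, with angles in radians; tiny floating-point residues in the skew angle are not filtered out):

```javascript
// QR-like decomposition of matrix(a, b, c, d, e, f) into basic transforms,
// following the three cases of the derivation. Angles are in radians.
function decomposeQR(a, b, c, d, e, f) {
  const transforms = [];
  if (e !== 0 || f !== 0) transforms.push(`translate(${e}, ${f})`);
  const det = a * d - b * c;
  if (a !== 0 || b !== 0) {
    // Gram–Schmidt on the first column: rotate · scale · skewX.
    const r = Math.hypot(a, b);
    transforms.push(`rotate(${Math.sign(b) * Math.acos(a / r)})`,
                    `scale(${r}, ${det / r})`);
    const skew = Math.atan((a * c + b * d) / (r * r));
    if (skew !== 0) transforms.push(`skewX(${skew})`);
  } else if (c !== 0 || d !== 0) {
    // Same idea using the second column: rotate · scale · skewY.
    const s = Math.hypot(c, d);
    transforms.push(
      `rotate(${Math.PI / 2 - Math.sign(d) * Math.acos(-c / s)})`,
      `scale(${det / s}, ${s})`);
    const skew = Math.atan((a * c + b * d) / (s * s));
    if (skew !== 0) transforms.push(`skewY(${skew})`);
  } else {
    // Zero matrix: everything collapses to the origin.
    transforms.push("scale(0, 0)");
  }
  return transforms.join(" ");
}

console.log(decomposeQR(2, 0, 0, 2, 0, 0)); // "rotate(0) scale(2, 2)"
```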

The decomposition algorithms are now easy to write. We note that neither of them gives the best result in all cases (compare, for example, how they factor Rotate2 and Skew1). Also, for completeness we have included the noninvertible transforms (that is, $\Delta =0$) in our study, but in practice they are not really useful (try NonInvertible).