# Blog de Frédéric

## Tag - quantum computing

Sunday, April 22 2012

## Cyclic HSP based on Subgroup Reduction

In my master's thesis on quantum computing, I designed a version of the Abelian HSP algorithm based on subgroup reduction. In a previous blog post, I wondered whether this algorithm could be extended to the Dedekindian HSP and whether it would be more efficient than the standard one. I am going to show that the second point fails in the cyclic case, where the two algorithms have exactly the same associated Markov chain.

Recall that the HSP is - given a finite group $G$, an unknown subgroup $H$ and an "oracle" $f$ hiding $H$ (i.e. a function on $G$ which factors through the coset space $G/H$ into a function with pairwise distinct values) - to determine the subgroup $H$ using a number of oracle calls polynomial in $\log|G|$.

In the standard Dedekindian HSP algorithm, $G$ is a Dedekind group and we use Fourier Sampling to measure a polynomial number of representations $\rho _{1},\dots,\rho _{m}$ of $G$ such that $H\subseteq\operatorname{Ker}{\rho _{k}}$. One proves that with high probability $H=\cap _{k}\operatorname{Ker}{\rho _{k}}$. In my alternative algorithm, we measure $\rho _{1}$ and set $G_{1}=\operatorname{Ker}{\rho _{1}}$. Then, instead of continuing to work on the whole group $G$, the idea is to move to a HSP problem on $G_{1}$ with hidden subgroup $H$, then do a Fourier Sampling on $G_{1}$ to move to a new subgroup $G_{2}$, and so forth. If $H\subsetneq G_{k}$ then with probability at least one half $G_{k+1}$ is at least twice smaller, so we reach $H$ exponentially fast.

Let’s now consider the cyclic HSP. We are given a cyclic group $G=\mathbb{Z}_{N}=\mathbb{Z}/{N\mathbb{Z}}$, a hidden subgroup $H=d\mathbb{Z}_{N}$ for some divisor $d$ of $N$ with $1\leq d\leq N$, and an oracle $f$. With this setting, one call to $f$ allows us to obtain a Fourier sample $\lambda\frac{N}{d}$ for a uniformly random $\lambda\in\{ 0,1,\dots,d-1\}$. In the standard cyclic HSP, we repeat Fourier sampling to get $\lambda _{1}\frac{N}{d},\lambda _{2}\frac{N}{d},\dots,\lambda _{m}\frac{N}{d}$ and deduce $d$. For example, one can show that

$P\left(\frac{N}{\gcd _{1\leq k\leq m}\left(\lambda _{k}\frac{N}{d}\right)}=d\right)\geq 1-2^{-m/2}$
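This sampling-and-gcd step is easy to simulate classically. Below is a minimal sketch in Python: the quantum Fourier sampling is replaced by drawing $\lambda$ uniformly (which is exactly what the measurement statistics give), and $N$ itself is thrown into the gcd so that the all-zero-sample case is handled; parameters are hypothetical.

```python
import math
import random
from functools import reduce

def standard_cyclic_hsp(N, d, m, rng):
    """Recover d from m simulated Fourier samples lambda*(N/d),
    each lambda drawn uniformly from {0, ..., d-1}."""
    samples = [rng.randrange(d) * (N // d) for _ in range(m)]
    # gcd of all samples, together with N itself: this equals N/d
    # with probability at least 1 - 2^(-m/2).
    g = math.gcd(N, reduce(math.gcd, samples, 0))
    return N // g

print(standard_cyclic_hsp(2 ** 10, 16, 40, random.Random(0)))
```

With $m=40$ samples the failure probability is below $2^{-20}$, so the printed value is $d=16$ except with negligible probability.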

In the algorithm I proposed, we build a sequence of polynomial length $H\subseteq G_{m}\subseteq G_{m-1}\subseteq\dots\subseteq G_{1}=G=\mathbb{Z}_{N}$ such that, with high probability, $H=G_{m}$. Clearly, all the $G_{k}$’s are cyclic groups and can be written $G_{k}=a_{k}\mathbb{Z}_{N}$. Since they contain $H=d\mathbb{Z}_{N}$, we have $a_{k}|d$. From $a_{k}$, we set $d_{k}=\frac{d}{a_{k}}$ and $N_{k}=\frac{N}{a_{k}}$. We have the isomorphism $\phi _{k}:G_{k}\overset{\sim}{\to}\mathbb{Z}_{N_{k}}$ defined by $\phi _{k}(x)=\frac{x}{a_{k}}$. Hence we can now work on $\mathbb{Z}_{N_{k}}$, with a hidden subgroup $H_{k}=\phi _{k}(H)=d_{k}\mathbb{Z}_{N_{k}}$ and oracle $f_{k}=f\circ\phi _{k}^{-1}$.

We start with $G_{1}=G=\mathbb{Z}_{N},H_{1}=H=d\mathbb{Z}_{N},a_{1}=1,d_{1}=d,N_{1}=N$. At step $k$, we perform a Fourier sampling that provides a random $\lambda _{k}\frac{N_{k}}{d_{k}}$ for some $\lambda _{k}\in\{ 0,1,...,d_{k}-1\}$. Then, applying the Continued Fraction Algorithm on $\lambda _{k}\frac{N_{k}}{d_{k}}\frac{a_{k}}{N}=\frac{\lambda _{k}}{d_{k}}$ we get an irreducible fraction $\frac{p_{k}}{q_{k}}$ and a divisor $q_{k}$ of $d_{k}$. Set $a_{{k+1}}=q_{k}a_{k}$. Then $G_{{k+1}}=a_{{k+1}}\mathbb{Z}_{N}$ is a subgroup of $G_{k}$. Because $q_{k}|d_{k}$, we have ${a_{k}q_{k}}|{a_{k}d_{k}}=d$ and so $H$ is a subgroup of $G_{{k+1}}$.
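The descent described above can be sketched classically (a toy simulation with hypothetical parameters; the Fourier sampling on $G_k$ is replaced by drawing $\lambda_k$ uniformly from $\{0,\dots,d_k-1\}$):

```python
import math
import random

def subgroup_reduction_cyclic_hsp(N, d, max_steps, rng):
    """Simulate the subgroup-reduction algorithm on Z_N with H = d*Z_N.

    At step k the current group is G_k = a_k*Z_N; reducing the sampled
    lambda_k/d_k to lowest terms p_k/q_k gives a divisor q_k of d_k,
    and we descend to G_{k+1} = (q_k*a_k)*Z_N.
    """
    a = 1
    for _ in range(max_steps):
        d_k = d // a
        if d_k == 1:                      # we have reached H = d*Z_N
            break
        lam = rng.randrange(d_k)          # simulated Fourier sample on G_k
        q = d_k // math.gcd(lam, d_k)     # denominator of lambda_k/d_k
        a *= q                            # a_{k+1} = q_k * a_k
    return a

print(subgroup_reduction_cyclic_hsp(720, 60, 50, random.Random(1)))
```

Since $q_k\geq 2$ with probability at least one half whenever $a_k<d$, the returned value is $d=60$ after a few dozen steps except with negligible probability.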

The general remark about the algorithm being exponentially fast is easy to see in the cyclic case. Indeed if $q_{k}=1$ above, then $\lambda _{k}=p_{k}d_{k}$ which happens only if $\lambda _{k}=0$. If at step $k$, we have $H\subsetneq G_{k}$ then $\lambda _{k}$ can take a non-zero value with probability at least one half. In that case $q_{k}\geq 2$ and $a_{{k+1}}\geq 2a_{k}$. Hence $a_{k}$ grows exponentially fast until it becomes stationary and thus $a_{m}=d$ with very high probability.

We can prove that we get the same procedure using only Fourier sampling on the initial group $G$. Suppose we have measured Fourier samples $\lambda _{1}\frac{N}{d},\lambda _{2}\frac{N}{d},\dots,\lambda _{m}\frac{N}{d}$. We want to construct the same sequence $a_{k}$. Set $a_{1}=1$. For each $1\leq k\leq m$, we let $d_{k}=\frac{d}{a_{k}}$ and compute the irreducible fraction $\lambda _{k}\frac{N}{d}\frac{a_{k}}{N}=\frac{\lambda _{k}}{d_{k}}=\frac{p_{k}}{q_{k}}$. As above, we have ${a_{k}q_{k}}|{a_{k}d_{k}}=d$ and set $a_{k+1}={q_{k}a_{k}}$. Now, write the Euclidean division $\lambda _{k}=d_{k}\mu _{k}+\nu _{k}$ where $\mu _{k}\in\{ 0,\dots,a_{k}-1\}$ and $\nu _{k}\in\{ 0,\dots,d_{k}-1\}$ (with uniform probability for $\mu _{k}$ and $\nu _{k}$ too). We have $\frac{p_{k}}{q_{k}}=\frac{\lambda _{k}}{d_{k}}=\mu _{k}+\frac{\nu _{k}}{d_{k}}$, so $q_{k}$ is actually the denominator obtained when we compute the irreducible fraction $\frac{\nu _{k}}{d_{k}}$. With this description, it becomes obvious that the two algorithms are equivalent (where $\nu _{k}$ corresponds to $\lambda _{k}$ in the previous algorithm).
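This equivalence can be checked numerically. The sketch below (hypothetical parameters) draws the samples $\lambda_k$ on the whole group and verifies, at every step, that the denominator of $\lambda_k/d_k$ equals the denominator of $\nu_k/d_k$ where $\nu_k=\lambda_k\bmod d_k$:

```python
import random
from fractions import Fraction

def denominators_agree(N, d, m, seed):
    """Run m steps, checking that sampling on the initial group G = Z_N and
    the subgroup-reduction algorithm build the same sequence a_k."""
    rng = random.Random(seed)
    a = 1
    for _ in range(m):
        d_k = d // a
        lam = rng.randrange(d)                     # Fourier sample on G
        q_global = Fraction(lam, d_k).denominator  # from lambda_k / d_k
        nu = lam % d_k                             # what the subgroup variant sees
        q_local = Fraction(nu, d_k).denominator    # from nu_k / d_k
        assert q_global == q_local
        a *= q_global
    return a

print(denominators_agree(360, 24, 20, seed=3))
```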

I conjecture that the same phenomenon happens for a general Dedekindian group, but this may be more difficult to write down…

Friday, December 10 2010

## Maximal Subgroup Reduction and HSP over Projective Linear Groups

-- updated on the 12th of December.

Among the recent algorithms for the Hidden Subgroup Problem, two are particularly interesting: one to find hidden subgroups of nil-2 groups, by Ivanyos, Sanselme and Santha; and another one to find conjugate stabilizer subgroups of $\mathrm{PSL}\left(2,q\right)$ (and also those of the related groups $\mathrm{SL}\left(2,q\right)$ and $\mathrm{PGL}\left(2,q\right)$), by Denney, Moore and Russell. The idea of both algorithms is to reduce the initial problem to a HSP over a group for which we already know an efficient algorithm. However, the former only relies on Fourier Sampling over abelian groups, while the latter requires Strong Fourier Sampling over some semidirect products of abelian groups. Also, considering the general approach to HSP that I have described in my Master's Project, the technique of the former is only likely to work for solvable groups, while the composition series of the groups considered in the latter contain non-abelian simple groups... but can their ideas be combined to get new HSP algorithms?

First, let's recall the concept of quantum hiding functions, which is a key ingredient of the first paper. In general, HSP algorithms use a classical oracle $f$ over $G$ which hides a subgroup $H$, i.e. is constant over each coset of $H$ and takes different values on different cosets. The quantum counterpart of $f$ is implemented by a quantum operator $U_{f}$ which takes $|g\rangle|0\rangle$ to $|g\rangle|f(g)\rangle$. Some HSP algorithms use the so-called Strong Fourier Sampling: we create a uniform superposition of the states $|g\rangle|f(g)\rangle$ over all $g\in G$, apply a Fourier Transform on the first register and measure it. The idea of quantum hiding functions is to replace $U_{f}$ by any other quantum operator sending $|g\rangle|0\rangle$ to $|g\rangle|\Psi _{g}\rangle$. In order to make the procedure work, we only need the $|\Psi _{g}\rangle$'s to be unit states taking the same value over one coset of $H$ and orthogonal values on different cosets. These states may even change at each sampling.

To solve the general HSP, and in particular the difficult case of simple groups, I have proposed maximal subgroup reduction. Basically, the idea is to find a maximal subgroup $K$ containing $H$ (there is always one except in the trivial case $H=G$, which can be checked directly) using an HSP algorithm over $G$, and thus reduce the problem to HSP over $K$. Nevertheless, it is not clear at all how to build an efficient oracle hiding $K$ from an oracle $f$ hiding $H$: the natural choice for $f\left(g\right)$ would be something gathering the values of $f$ over all the cosets of $H$ inside the coset $gK$... but unfortunately there can be exponentially many of them! Hence, it would maybe help to consider quantum hiding functions instead. For example, the states

$|\Psi _{g}\rangle=\frac{1}{\sqrt{|K|}}\sum _{k\in K}|f\left(gk\right)\rangle$

satisfy the expected properties. However, it is not clear how to produce $|\Psi _{g}\rangle$ efficiently, and even less so without knowing the subgroup $K$. But it remains open whether we can find another method, maybe involving other states $|\Psi _{g}\rangle$.
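The two defining properties of these states (equal on each coset of $K$, orthogonal across cosets) can at least be checked numerically on a toy example. In the sketch below, $G=\mathbb{Z}_{12}$, $H=6\mathbb{Z}_{12}$, $K=3\mathbb{Z}_{12}$ and the oracle $f(g)=g\bmod 6$ are all hypothetical choices made for illustration; the vectors are renormalized, since repeated values of $f$ on a coset of $K$ change the norm.

```python
import numpy as np

G = 12                     # toy example: G = Z_12
K = [0, 3, 6, 9]           # K = 3*Z_12, a maximal subgroup containing H

def f(g):                  # oracle hiding H = 6*Z_12: constant on cosets of H
    return g % 6

def psi(g):
    """|Psi_g> proportional to sum over k in K of |f(g+k)>, renormalized."""
    v = np.zeros(6)
    for k in K:
        v[f((g + k) % G)] += 1.0
    return v / np.linalg.norm(v)

for g in range(G):
    for h in range(G):
        ip = float(psi(g) @ psi(h))
        if (g - h) % 3 == 0:             # g and h in the same coset of K
            assert abs(ip - 1.0) < 1e-12
        else:                            # different cosets of K
            assert abs(ip) < 1e-12
print("same state on each coset of K, orthogonal across cosets")
```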

Let's turn our attention to HSP over $G=\mathrm{PSL}\left(2,q\right)$ (or the other groups described in the paper of Denney, Moore and Russell) and see whether we can apply a maximal subgroup reduction. Note that the group $G$ acts on the projective line $\mathbb{F}_{q}\cup \left\{\infty \right\}$ and thus we can define the stabilizer ${G}_{a}$ of a point $a$, which is a maximal subgroup. Now, we assume that the hidden subgroup $H$ is included in some ${G}_{a}$ and that we have a way to build a quantum function hiding ${G}_{a}$ from an oracle hiding $H$. Note that we have the inclusions

${G}_{a}\cap {G}_{\infty }\subseteq {G}_{\infty }\subseteq G$

The idea of the paper is to restrict an oracle hiding ${G}_{a}$ to the subgroup ${G}_{\infty }$ and thus get a HSP over ${G}_{\infty }$ with hidden subgroup ${G}_{a}\cap {G}_{\infty }$. Now, ${G}_{\infty }$ is isomorphic to some "friendly" semidirect product of abelian groups, and thus the authors are able to determine ${G}_{a}\cap {G}_{\infty }$ by Strong Fourier Sampling and so find the stabilizer ${G}_{a}$. Note that their algorithm still works if we use quantum hiding functions instead! Hence, with the hypothesis above, we are able to determine a maximal subgroup ${G}_{a}$ containing $H$. To complete the algorithm, we would need to find the hidden subgroup $H$ of ${G}_{a}\cong {G}_{\infty }$. However, the paper only describes the case of ${G}_{a}\cap {G}_{\infty }\subseteq {G}_{\infty }$, and I am not sure that the authors mean they have an algorithm for finding arbitrary subgroups of ${G}_{\infty }$...

Another open question: can we generalize this maximal subgroup technique to solve HSP over $\mathrm{PSL}\left(2,q\right)$ for arbitrary hidden subgroups? We would need a way to distinguish among exponentially many (maximal) subgroups. Note that the lists of (maximal) subgroups of $\mathrm{PSL}\left(2,q\right)$ and $\mathrm{PGL}\left(2,q\right)$ are known (see Dickson's book). Among the exponentially large subgroups (i.e. those for which we would need an efficient HSP algorithm), there are subgroups isomorphic to $\mathrm{PSL}\left(2,{q}_{0}\right)$, $\mathrm{PGL}\left(2,{q}_{0}\right)$ or to a semidirect product of a $p$-group and a cyclic group (including the famous dihedral group!). Note also that $\mathrm{PSL}\left(2,q\right)$ is a normal subgroup of $\mathrm{PGL}\left(2,q\right)$, and so we could also imagine the use of a quotient reduction...

Thursday, August 5 2010

## Two Open Problems for the Subgroup-Reduction based Dedekindian HSP Algorithm

One of my main ideas after having studied the Hidden Subgroup Problem is that subgroup simplification is likely to play an important role. Basically, rather than directly finding the hidden subgroup $H$ by working on the whole initial group $G$, we first only try to get partial information on $H$. This information allows us to move to a simpler HSP problem, and we can iterate this process until the complete determination of $H$. Several reductions of this kind exist, and I think the sophisticated solution to HSP over 2-nil groups illustrates well how efficient this technique can be.

Using only subgroup reduction, I've been able to design an alternative algorithm for the Dedekindian HSP, i.e. over groups that have only normal subgroups. Recall that the standard Dedekindian HSP algorithm is to use Weak Fourier Sampling, measure a polynomial number of representations ${\rho }_{1},\dots ,{\rho }_{m}$ and then get, with high probability, the hidden subgroup as the intersection of kernels $H=\bigcap _{i}\mathrm{Ker}{\rho }_{i}$. When the group is Dedekindian, we always have $H\subseteq \mathrm{Ker}{\rho }_{i}$. Hence my alternative algorithm is rather to start by measuring one representation ${\rho }_{1}$, move the problem to HSP over $\mathrm{Ker}{\rho }_{1}$ and iterate this procedure. I've been able to show that we reach the group $H$ after a polynomial number of steps, the idea being that when we measure a non-trivial representation, the size of the underlying group becomes at least twice smaller. One difficulty of this approach is to determine the structure of the new group $\mathrm{Ker}{\rho }_{i}$ so that we can work on it. However, in the cyclic case this determination is trivial, and in the abelian case I've used the group decomposition algorithm, based on the cyclic HSP. Finally, I have two open questions:

1. Can my algorithm work for the Hamiltonian HSP i.e. over non-abelian dedekindian groups?
2. Is my algorithm more efficient than the standard Dedekindian HSP?

For the first question, I'm pretty sure that the answer is positive, but I admit that I've not really thought about it. For the second one, it depends on what we mean by more efficient. The decomposition of the kernel seems to increase the time complexity, but since we are working on smaller and smaller groups, we decrease the space complexity. However, if we are only considering the number of samples, my conjecture is that both algorithms have the same complexity and, more precisely, yield the same Markov process. In order to illustrate this, let's consider the cyclic case $G=\mathbb{Z}_{18}$ and $H=6\mathbb{Z}_{18}$. The Markov chain of my alternative algorithm is given by the graph below, where the edge labels are of course probabilities and the node labels are the underlying groups. We start at $G=\mathbb{Z}_{18}$ and want to reach $H\cong \mathbb{Z}_{3}$.

One might think that moving to smaller and smaller subgroups would be faster than the standard algorithm, which always works on $\mathbb{Z}_{18}$. However, it turns out that the Markov chain of the standard algorithm is exactly the same. The point is that, while it is true that working over $\mathbb{Z}_{9}$ or $\mathbb{Z}_{6}$ provides fewer possible samples (3 and 2 respectively, instead of 6 for $\mathbb{Z}_{18}$), the distribution of "good" and "bad" samples is the same, and thus we get the same transition probabilities. I guess it would be possible to formalize this for a general cyclic group. The general abelian case seems more difficult, but I'm sure that the same phenomenon can be observed.
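For this $\mathbb{Z}_{18}$ example, the claim can be verified by direct enumeration. The sketch below computes, for each intermediate group, the distribution of the denominator $q$ produced by one sample: the standard algorithm draws $\lambda$ from the full range $\{0,\dots,5\}$ and the subgroup algorithm from $\{0,\dots,d_k-1\}$, yet the induced distributions coincide.

```python
from collections import Counter
from fractions import Fraction

def q_distribution(lam_range, d_k):
    """Distribution of q, the denominator of lam/d_k in lowest terms,
    when lam is uniform over lam_range."""
    counts = Counter(Fraction(lam, d_k).denominator for lam in lam_range)
    return {q: Fraction(c, len(lam_range)) for q, c in counts.items()}

d = 6  # hidden subgroup H = 6*Z_18 in G = Z_18
for d_k in (6, 3, 2):  # current group Z_18, Z_9 or Z_6 respectively
    subgroup = q_distribution(range(d_k), d_k)  # subgroup-reduction sampling
    standard = q_distribution(range(d), d_k)    # sampling on the whole Z_18
    assert subgroup == standard
    print(d_k, subgroup)
```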

Friday, July 30 2010

## Master's Project - The Hidden Subgroup Problem

I've submitted my report on The Hidden Subgroup Problem to arXiv.org and I expect it to be published soon.

Comments are welcome on this blog entry.

--update 03/08/2010: a PDF version is available on arXiv.org. You can still find a Web version, as well as my slides, on my website.

Monday, July 5 2010

## Thoughts on the Dihedral Hidden Subgroup Problem (part 3)

Back to the DHSP again (see part 1 and part 2 in previous blog posts)... This time, I've tried to approximate the value of $d$ rather than finding its parity. For convenience, we assume $N$ to be even and define $N'=2M=\frac{N}{2}$. We consider for $a\in \mathbb{Z}_{N}$ and $b\in \mathbb{Z}_{2}$ the sets of $N'$ successive values

${F}_{a}^{b}=\left\{f\left(a+i,b\right),i\in \mathbb{Z}_{N'}\right\}$

We suppose that we can efficiently create uniform superpositions over these sets. As in part 2, we have ${F}_{a}^{1}={F}_{a-d}^{0}={F}_{s}^{0}$ with $-N<s<N$, and we measure the tensor product of the uniform superpositions over ${F}_{a}^{1}={F}_{s}^{0}$ and ${F}_{0}^{0}$ in the basis $\left\{|x\rangle|x\rangle\right\}_{x\in \mathbb{Z}_{N}}$, $\left\{\frac{1}{\sqrt{2}}\left(|x\rangle|y\rangle+|y\rangle|x\rangle\right)\right\}_{x<y}$, $\left\{\frac{1}{\sqrt{2}}\left(|x\rangle|y\rangle-|y\rangle|x\rangle\right)\right\}_{x<y}$. We define $l=|{F}_{0}^{0}\cap {F}_{a}^{1}|$, the number of common elements.

The figure above shows that in the case $0\le s\le N'$, we have $l=N'-s$. In general, the expression is

$l=\left|N'-|s|\right|=\left|N'-|d-a|\right|$
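The overlap formula can be checked by a purely classical enumeration. In the sketch below, the oracle is simulated through the relation $F_a^1=F_{a-d}^0$, with $f(\cdot,0)$ taken injective and identified with the index itself (a hypothetical but harmless choice):

```python
def overlap(N, d, a):
    """|F_0^0 ∩ F_a^1| for F_a^b = {f(a+i, b), i in Z_{N'}}, N' = N/2,
    using F_a^1 = F_{a-d}^0 and identifying f(x, 0) with x mod N."""
    Np = N // 2
    F00 = {i % N for i in range(Np)}
    Fa1 = {(a - d + i) % N for i in range(Np)}
    return len(F00 & Fa1)

N, Np = 16, 8
for d in range(N):
    for a in range(N):
        s = a - d                      # s lies in (-N, N)
        assert overlap(N, d, a) == abs(Np - abs(s))
print("l = |N' - |s|| verified for N =", N)
```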

Using an analysis similar to the one in part 2, we get the probability to measure an element of the antisymmetric family $\left\{\frac{1}{\sqrt{2}}\left(|x\rangle|y\rangle-|y\rangle|x\rangle\right)\right\}_{x<y}$:

$P=\frac{1}{2}\left[1-{\left(\frac{l}{N'}\right)}^{2}\right]$

Repeating the measurements, we can approximate $P$, and consequently $l$, $s$ and finally $d$. We can also play with the value of $a$ to improve the approximation. Unfortunately, using a number of repetitions polynomial in $\mathrm{log}N$, it does not seem possible to bound the error better than $O\left(N\right)$. It seems however that this is still a better result than the one I obtained with the standard Coset Sampling method. Hence it is another hint that using a superposition over several oracle values would yield an improvement.
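The estimation procedure can be sketched classically by modelling each measurement as a Bernoulli trial with success probability $P$ and inverting the formula above (all parameters below are hypothetical):

```python
import random

def estimate_l(N, d, a, shots, rng):
    """Estimate the overlap l from repeated simulated measurements,
    each reporting 'antisymmetric' with probability P = (1 - (l/N')^2)/2."""
    Np = N // 2
    l = abs(Np - abs(a - d))                     # true value of the overlap
    P = 0.5 * (1 - (l / Np) ** 2)
    hits = sum(rng.random() < P for _ in range(shots))
    P_hat = hits / shots                         # empirical estimate of P
    l_hat = Np * max(0.0, 1 - 2 * P_hat) ** 0.5  # invert P = (1-(l/N')^2)/2
    return l, l_hat

l, l_hat = estimate_l(N=200, d=37, a=0, shots=100_000, rng=random.Random(7))
print(l, round(l_hat, 1))
```

With $10^5$ shots the standard error on $\hat P$ is about $1.5\times 10^{-3}$, which here translates into a sub-unit error on $\hat l$; but a number of repetitions polynomial in $\mathrm{log}N$ only pins $l$ (hence $d$) down to the precision discussed above.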

I'm still wondering how to efficiently create a uniform superposition over several oracle values. One thing I've read in many papers is, given a superposition over elements $|x\rangle|f\left(x\right)\rangle$, to erase or uncompute the first register so as to get a superposition over elements $|f\left(x\right)\rangle$. However, I'm not sure what the requirements are to apply such an operation. If it could work for the algorithm of part 2, then this would solve the case $N={2}^{n}$.
