Blog de Frédéric


Sunday, April 22 2012

Cyclic HSP based on Subgroup Reduction

In my master's thesis on quantum computing, I designed a version of the Abelian HSP algorithm based on subgroup reduction. In a previous blog post, I wondered whether this algorithm could be extended to the Dedekindian HSP and whether it would be more efficient than the standard one. I'm going to prove that the latter is false in the cyclic case, where the two algorithms have exactly the same associated Markov chain.

Recall that the HSP is - given a finite group $G$, an unknown subgroup $H$ and an "oracle" $f$ hiding $H$ (i.e. a function on $G$ which factorizes through the coset space $G/H$ into a function with pairwise distinct values) - to determine the subgroup $H$ using a number of oracle calls polynomial in the size of $G$.

In the standard Dedekindian HSP algorithm, $G$ is a Dedekind group and we use Fourier sampling to measure a polynomial number of representations $\rho_1, \dots, \rho_m$ of $G$ such that $H \subseteq \operatorname{Ker}\rho_k$. One proves that with high probability $H = \cap_k \operatorname{Ker}\rho_k$. In my alternative algorithm, we measure $\rho_1$ and set $G_1 = \operatorname{Ker}\rho_1$. Then, instead of continuing to work on the whole group $G$, the idea is to move to an HSP problem on $G_1$ with hidden subgroup $H$. We then do a Fourier sampling on $G_1$ to move to a new subgroup $G_2$, and so forth. If $H \subsetneq G_k$, then with probability at least one half $G_{k+1}$ is at least twice smaller, and so we see that we reach $H$ exponentially fast.

Let's now consider the cyclic HSP. We are given a cyclic group $G = \mathbb{Z}_N = \mathbb{Z}/N\mathbb{Z}$, a hidden subgroup $H = d\mathbb{Z}_N$ for some $1 \leq d \leq N$, and an oracle $f$. In this setting, one call to $f$ allows us to get a Fourier sample $\lambda \frac{N}{d}$ for some $\lambda \in \{0, 1, \dots, d-1\}$, chosen with uniform probability. In the standard cyclic HSP, we repeat Fourier sampling to get $\lambda_1 \frac{N}{d}, \lambda_2 \frac{N}{d}, \dots, \lambda_m \frac{N}{d}$ and deduce $d$. For example, one can show that

$$P\left(\frac{N}{\gcd\left(\lambda_k \frac{N}{d}\right)_{1 \leq k \leq m}} = d\right) \geq 1 - 2^{-m/2}$$
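As a sanity check, the standard algorithm is easy to simulate classically: draw the samples $\lambda_k \frac{N}{d}$ uniformly and accumulate gcds. A minimal sketch (the function names are mine, and `fourier_sample` of course only fakes the quantum step by using the hidden $d$ directly):

```python
import math
import random

def fourier_sample(N, d):
    """Classical stand-in for one quantum Fourier sample on Z_N:
    returns lambda * N/d with lambda uniform in {0, ..., d-1}."""
    lam = random.randrange(d)
    return lam * (N // d)

def standard_cyclic_hsp(N, d, m):
    """Standard algorithm: the gcd of N and m Fourier samples is N/d
    with probability at least 1 - 2^(-m/2); return the candidate for d."""
    g = N  # gcd of N and the samples collected so far
    for _ in range(m):
        g = math.gcd(g, fourier_sample(N, d))
    return N // g

random.seed(0)
print(standard_cyclic_hsp(1000, 20, m=20))
```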

In the algorithm I proposed, we build a sequence of polynomial length $H \subseteq G_m \subseteq G_{m-1} \subseteq \dots \subseteq G_1 = G = \mathbb{Z}_N$ such that, with high probability, $H = G_m$. Clearly, all the $G_k$'s are cyclic groups and can be written $G_k = a_k\mathbb{Z}_N$. Since they contain $H = d\mathbb{Z}_N$, we have $a_k \mid d$. From $a_k$, we set $d_k = \frac{d}{a_k}$ and $N_k = \frac{N}{a_k}$. We have the isomorphism $\phi_k : G_k \overset{\sim}{\to} \mathbb{Z}_{N_k}$ defined by $\phi_k(x) = \frac{x}{a_k}$. Hence we can now work on $\mathbb{Z}_{N_k}$, with hidden subgroup $H_k = \phi_k(H) = d_k\mathbb{Z}_{N_k}$ and oracle $f_k = f \circ \phi_k^{-1}$.

We start with $G_1 = G = \mathbb{Z}_N$, $H_1 = H = d\mathbb{Z}_N$, $a_1 = 1$, $d_1 = d$, $N_1 = N$. At step $k$, we perform a Fourier sampling that provides a random $\lambda_k \frac{N_k}{d_k}$ for some $\lambda_k \in \{0, 1, \dots, d_k - 1\}$. Then, applying the continued fraction algorithm to $\lambda_k \frac{N_k}{d_k} \frac{a_k}{N} = \frac{\lambda_k}{d_k}$, we get an irreducible fraction $\frac{p_k}{q_k}$ and a divisor $q_k$ of $d_k$. Set $a_{k+1} = q_k a_k$. Then $G_{k+1} = a_{k+1}\mathbb{Z}_N$ is a subgroup of $G_k$. Because $q_k \mid d_k$, we have $a_k q_k \mid a_k d_k = d$ and so $H$ is a subgroup of $G_{k+1}$.
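The whole reduction loop fits in a few lines, since for a fraction with known denominator $d_k$ the continued fraction step amounts to reducing $\frac{\lambda_k}{d_k}$ to lowest terms. A sketch with simulated sampling (names are mine; the parameter `d` is only used to fake the quantum measurements, and the invariant $a_k \mid d$ keeps the divisions exact):

```python
import random
from fractions import Fraction

def reduction_cyclic_hsp(N, d, m):
    """Subgroup-reduction algorithm on G = Z_N with hidden subgroup d*Z_N.
    Maintains a_k with G_k = a_k * Z_N and a_k | d at every step."""
    a = 1  # a_1 = 1, i.e. G_1 = G
    for _ in range(m):
        d_k = d // a                         # hidden subgroup of Z_{N_k} is d_k * Z_{N_k}
        lam = random.randrange(d_k)          # simulated Fourier sample lam * N_k / d_k
        q = Fraction(lam, d_k).denominator   # q_k: reduced denominator of lam / d_k
        a *= q                               # a_{k+1} = q_k * a_k, still a divisor of d
    return a  # with high probability a_m = d, i.e. G_m = H

random.seed(1)
print(reduction_cyclic_hsp(1000, 20, m=20))
```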

The general remark about the algorithm being exponentially fast is easy to see in the cyclic case. Indeed, if $q_k = 1$ above, then $\lambda_k = p_k d_k$, which happens only if $\lambda_k = 0$. If at step $k$ we have $H \subsetneq G_k$, then $\lambda_k$ takes a non-zero value with probability at least one half. In that case $q_k \geq 2$ and $a_{k+1} \geq 2a_k$. Hence $a_k$ grows exponentially fast until it becomes stationary, and thus $a_m = d$ with very high probability.

We can prove that we get the same procedure using only Fourier sampling on the initial group $G$. Suppose we have measured Fourier samples $\lambda_1 \frac{N}{d}, \lambda_2 \frac{N}{d}, \dots, \lambda_m \frac{N}{d}$. We want to construct the same sequence $a_k$. Set $a_1 = 1$. For each $1 \leq k < m$, we let $d_k = \frac{d}{a_k}$ and compute the irreducible fraction $\lambda_k \frac{N}{d} \frac{a_k}{N} = \frac{\lambda_k}{d_k} = \frac{p_k}{q_k}$. As above, we have $a_k q_k \mid a_k d_k = d$ and set $a_{k+1} = q_k a_k$. Now write the Euclidean division $\lambda_k = d_k \mu_k + \nu_k$, where $\mu_k \in \{0, \dots, a_k - 1\}$ and $\nu_k \in \{0, \dots, d_k - 1\}$ (both again uniformly distributed). We have $\frac{p_k}{q_k} = \frac{\lambda_k}{d_k} = \mu_k + \frac{\nu_k}{d_k}$, so $q_k$ is actually the denominator obtained when we compute the irreducible fraction $\frac{\nu_k}{d_k}$. With this description, it becomes obvious that the two algorithms are equivalent ($\nu_k$ here corresponding to $\lambda_k$ in the previous algorithm).
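This equivalence can be checked mechanically: for every divisor $a_k$ of $d$ and every global sample $\lambda_k \in \{0, \dots, d-1\}$, the step driven by $\lambda_k$ and the step driven by the remainder $\nu_k = \lambda_k \bmod d_k$ produce the same $a_{k+1}$, because $\gcd(\lambda_k, d_k) = \gcd(\nu_k, d_k)$. A small sketch (helper names are mine):

```python
from fractions import Fraction

def next_a_global(a_k, lam_k, d):
    """One step driven by a Fourier sample lam_k * N/d measured on the initial group G."""
    d_k = d // a_k
    return a_k * Fraction(lam_k, d_k).denominator  # a_{k+1} = q_k * a_k

def next_a_reduced(a_k, lam_k, d):
    """Same step, but first replacing lam_k by nu_k = lam_k mod d_k,
    i.e. the sample the subgroup-reduction algorithm would see on G_k."""
    d_k = d // a_k
    nu_k = lam_k % d_k  # Euclidean division lam_k = d_k * mu_k + nu_k
    return a_k * Fraction(nu_k, d_k).denominator

# The two descriptions agree for every possible sample:
d = 12
for a_k in (1, 2, 3, 4, 6, 12):  # divisors of d, the possible values of a_k
    for lam in range(d):
        assert next_a_global(a_k, lam, d) == next_a_reduced(a_k, lam, d)
print("identical on all samples")
```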

I conjecture that the same phenomenon happens for a general Dedekindian group, but this may be more difficult to write down…

Thursday, August 5 2010

Two Open Problems for the Subgroup-Reduction based Dedekindian HSP Algorithm

One of my main ideas after having studied the Hidden Subgroup Problem is that subgroup simplification is likely to play an important role. Basically, rather than directly finding the hidden subgroup $H$ by working on the whole initial group $G$, we first only try to get partial information on $H$. This information allows us to move to a simpler HSP problem, and we can iterate this process until the complete determination of $H$. Several reductions of this kind exist, and I think the sophisticated solution to the HSP over 2-nil groups illustrates well how efficient this technique can be.

Using only subgroup reduction, I've been able to design an alternative algorithm for the Dedekindian HSP, i.e. over groups that have only normal subgroups. Recall that the standard Dedekindian HSP algorithm is to use Weak Fourier Sampling, measure a polynomial number of representations $\rho_1, \dots, \rho_m$ and then get, with high probability, the hidden subgroup as the intersection of kernels $H = \cap_i \operatorname{Ker}\rho_i$. When the group is Dedekindian, we always have $H \subseteq \operatorname{Ker}\rho_i$. Hence my alternative algorithm is rather to start by measuring one representation $\rho_1$, move the problem to the HSP over $\operatorname{Ker}\rho_1$, and iterate this procedure. I've been able to show that we reach the group $H$ after a polynomial number of steps, the idea being that when we measure a non-trivial representation, the size of the underlying group becomes at least twice smaller. One difficulty of this approach is to determine the structure of the new group $\operatorname{Ker}\rho_i$ so that we can work on it. However, in the cyclic case this determination is trivial, and in the abelian case I've used the group decomposition algorithm, based on the cyclic HSP. Finally, I have two open questions:

  1. Can my algorithm work for the Hamiltonian HSP, i.e. over non-abelian Dedekind groups?
  2. Is my algorithm more efficient than the standard Dedekindian HSP algorithm?

For the first question, I'm pretty sure that the answer is positive, but I admit that I've not really thought about it. For the second one, it depends on what we mean by more efficient. The decomposition of the kernel seems to increase the time complexity, but since we are working on smaller and smaller groups, we decrease the space complexity. However, if we are only counting the number of samples, my conjecture is that both algorithms have the same complexity and, more precisely, yield the same Markov process. To illustrate this, let's consider the cyclic case $G = \mathbb{Z}_{18}$ and $H = 6\mathbb{Z}_{18}$. The Markov chain of my alternative algorithm is given below, where the edge labels are of course probabilities and the node labels are the underlying groups. We start at $G = \mathbb{Z}_{18}$ and want to reach $H \cong \mathbb{Z}_3$.

From $\mathbb{Z}_{18}$: stay at $\mathbb{Z}_{18}$ with probability 1/6, move to $\mathbb{Z}_9$ with probability 1/6, to $\mathbb{Z}_6$ with probability 1/3, to $\mathbb{Z}_3$ with probability 1/3.
From $\mathbb{Z}_9$: stay with probability 1/3, move to $\mathbb{Z}_3$ with probability 2/3.
From $\mathbb{Z}_6$: stay with probability 1/2, move to $\mathbb{Z}_3$ with probability 1/2.
From $\mathbb{Z}_3$: stay with probability 1.
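These probabilities can be recomputed exactly from the reduction step: each state is $G_k = a_k\mathbb{Z}_{18}$ for a divisor $a_k$ of $6$, and we just enumerate the $d_k$ equiprobable samples and count where each one leads. A small sketch (helper name is mine; the states $a = 1, 2, 3, 6$ correspond to $\mathbb{Z}_{18}, \mathbb{Z}_9, \mathbb{Z}_6, \mathbb{Z}_3$):

```python
from collections import Counter
from fractions import Fraction

d = 6  # hidden subgroup H = 6 * Z_18

def transitions(a_k):
    """Exact one-step transition probabilities out of the state G_k = a_k * Z_18,
    as a dict {a_{k+1}: probability}."""
    d_k = d // a_k
    counts = Counter(a_k * Fraction(lam, d_k).denominator for lam in range(d_k))
    return {a_next: Fraction(c, d_k) for a_next, c in counts.items()}

for a in (1, 2, 3, 6):  # Z_18, Z_9, Z_6, Z_3
    print(a, transitions(a))
```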

One might think that moving to smaller and smaller subgroups will be faster than the standard algorithm, which always works on $\mathbb{Z}_{18}$. However, it turns out that the Markov chain of the standard algorithm is exactly the same. The point is that while working over $\mathbb{Z}_9$ or $\mathbb{Z}_6$ does offer fewer possible samples (3 and 2 respectively, instead of 6 for $\mathbb{Z}_{18}$), the distribution of "good" and "bad" samples is the same, and thus we get the same transition probabilities. I guess it would be possible to formalize this for a general cyclic group. The general abelian case seems more difficult, but I'm sure that the same phenomenon can be observed.