In my master’s thesis on quantum computing, I designed a version of the Abelian HSP algorithm based on subgroup reduction. In a previous blog post, I wondered whether this algorithm could be extended to the Dedekindian HSP and whether it would be more efficient than the standard one. I’m going to prove that the latter is false in the cyclic case, where the two algorithms have exactly the same associated Markov chain.
Recall that the HSP is - given a finite group $G$, an unknown subgroup $H \leq G$ and an "oracle" $f$ hiding $H$ (i.e. a function on $G$ which factorizes through the coset space $G/H$ into a function with pairwise distinct values) - to determine the subgroup $H$ using a number of oracle calls polynomial in $\log |G|$.
In the standard Dedekindian HSP algorithm, $G$ is a Dedekind group and we use Fourier sampling to measure a polynomial number of representations $\rho_1, \rho_2, \dots, \rho_t$ of $G$ such that $H \subseteq \operatorname{Ker} \rho_i$. One proves that with high probability $H = \bigcap_i \operatorname{Ker} \rho_i$. In my alternative algorithm, we measure $\rho_1$ and set $G_1 = \operatorname{Ker} \rho_1$. Then, instead of continuing to work on the whole group $G$, the idea is to move to a HSP problem on $G_1$ with hidden subgroup $H$, then do a Fourier sampling on $G_1$ to move to a new subgroup $G_2$, and so forth. If $G_i \neq H$, then with probability at least one half $|G_{i+1}|$ is at most $|G_i|/2$, and so we see that we reach $H$ exponentially fast.
Let’s now consider the cyclic HSP. We are given a cyclic group $G = \mathbb{Z}_N$, a hidden subgroup $H = \langle d \rangle$ for some divisor $d$ of $N$, and an oracle $f$. With this setting, one call to $f$ allows us to get a Fourier sample $\ell = j N / d$ for some random $0 \leq j < d$, with uniform probability. In the standard cyclic HSP, we repeat Fourier sampling to get $\ell_1, \ell_2, \dots, \ell_t$ and deduce $N/d = \gcd(\ell_1, \dots, \ell_t, N)$. For example, one can show that $\Pr[\gcd(\ell_1, \dots, \ell_t, N) = N/d] \geq 1 - \log_2(N) \, 2^{-t}$, since for each prime $p$ dividing $d$, the probability that $p$ divides all the $j_i$’s is at most $2^{-t}$.
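As a sanity check, the classical part of this standard algorithm is easy to simulate - the quantum Fourier sampling is replaced by drawing $\ell = jN/d$ directly from its known distribution, and the helper names below are mine:

```python
import math
import random

def fourier_sample(N, d):
    """Simulate one Fourier sample for the hidden subgroup <d> of Z_N:
    the measured value is l = j*N/d with j uniform in {0, ..., d-1}."""
    return random.randrange(d) * (N // d)

def recover_d(N, samples):
    """Classical post-processing: gcd(l_1, ..., l_t, N) equals N/d with
    high probability, from which we deduce d."""
    g = N
    for l in samples:
        g = math.gcd(g, l)
    return N // g

N, d = 360, 24
samples = [fourier_sample(N, d) for _ in range(20)]
print(recover_d(N, samples))  # 24, except with probability <= log2(N)/2^20
```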
In the algorithm I proposed, we build a decreasing sequence $G = G_0 \supseteq G_1 \supseteq G_2 \supseteq \dots \supseteq G_t$ of polynomial length such that, with high probability, $G_t = H$. Clearly, all the $G_i$’s are cyclic groups and can be written $G_i = \langle d_i \rangle$ for some divisor $d_i$ of $N$. Since they contain $H$, we have $d_i \mid d$. From $G_i = \langle d_i \rangle$, we set $N_i = N / d_i$ and $f_i(x) = f(d_i x)$. We have the isomorphism $\mathbb{Z}_{N_i} \to G_i$ defined by $x \mapsto d_i x$. Hence we can now work on $\mathbb{Z}_{N_i}$, with hidden subgroup $\langle d / d_i \rangle$ and oracle $f_i$.
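For instance, with the hypothetical oracle $f(x) = x \bmod d$ hiding $\langle d \rangle$ in $\mathbb{Z}_N$ (any function with the hiding property would do), one can check on a toy example that the induced oracle $f_i$ is constant exactly on the cosets of $\langle d/d_i \rangle$ in $\mathbb{Z}_{N_i}$:

```python
N, d, d_i = 360, 24, 6        # toy sizes with d_i | d | N
N_i = N // d_i                # G_i = <6> is isomorphic to Z_60 via x -> 6x

def f(x):
    # a valid oracle hiding <d> in Z_N: constant on each coset x + <d>,
    # with pairwise distinct values on distinct cosets
    return x % d

def f_i(x):
    # induced oracle on Z_{N_i}, pulled back through x -> d_i * x
    return f(d_i * x % N)

# f_i takes equal values exactly on the cosets of <d/d_i> = <4> in Z_60
h = d // d_i
assert all((f_i(x) == f_i(y)) == ((x - y) % h == 0)
           for x in range(N_i) for y in range(N_i))
```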
We start with $G_0 = G$, i.e. $d_0 = 1$. At step $i$, we perform a Fourier sampling on $\mathbb{Z}_{N_i}$ that provides a random $\ell_i = j N_i / (d/d_i)$ for some uniform $0 \leq j < d/d_i$. Then, applying the Continued Fraction Algorithm to $\ell_i / N_i$, we get an irreducible fraction $p_i / q_i$ with $q_i$ a divisor of $N_i$. Set $G_{i+1} = \langle d_i q_i \rangle$. Then $G_{i+1}$ is a subgroup of $G_i$. Because $\ell_i / N_i = j / (d/d_i)$, we have $q_i \mid d/d_i$, hence $d_i q_i \mid d$ and so $H$ is a subgroup of $G_{i+1}$.
The general remark about the algorithm being exponentially fast is easy to see in the cyclic case. Indeed, if $q_i = 1$ above, then $G_{i+1} = G_i$, which happens only if $\ell_i = 0$. If at step $i$ we have $G_i \neq H$, then $d/d_i \geq 2$ and so $\ell_i$ takes a non-zero value with probability at least one half. In that case $q_i \geq 2$ and $|G_{i+1}| \leq |G_i| / 2$. Hence $d_i$ grows exponentially fast until it becomes stationary at $d$, and thus $G_t = H$ with very high probability.
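To see this convergence concretely, here is a classical simulation of the loop described above - the quantum sampling is again replaced by drawing $\ell_i$ from its known distribution, and the function name is mine:

```python
from fractions import Fraction
import random

def subgroup_reduction(N, d, steps):
    """Simulate the subgroup-reduction algorithm on Z_N with hidden
    subgroup <d>: returns the divisor d_i such that G_i = <d_i>."""
    d_i = 1
    for _ in range(steps):
        N_i, h_i = N // d_i, d // d_i
        # simulated Fourier sample on Z_{N_i}: l = j * N_i / h_i, j uniform
        l = random.randrange(h_i) * (N_i // h_i)
        q = Fraction(l, N_i).denominator  # lowest-terms denominator, divides h_i
        d_i *= q                          # G_{i+1} = <d_i * q_i>
    return d_i

random.seed(1)
N, d = 2**20 * 3**5, 2**10 * 3**2
print(subgroup_reduction(N, d, 60))  # equals d = 9216 with very high probability
```

Note that once $d_i = d$, the sample is always $0$ and the chain stays put, matching the "stationary" behaviour above.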
We can prove that we get the same procedure using only Fourier sampling on the initial group $G = \mathbb{Z}_N$. Suppose we have measured Fourier samples $\ell_1, \ell_2, \dots, \ell_t$, where $\ell_i = j_i N / d$ and $j_i$ is uniform in $\{0, \dots, d-1\}$. We want to construct the same sequence $G_0 \supseteq G_1 \supseteq \dots \supseteq G_t$. Set $G_0 = G$ and $d_0 = 1$. For each $i \geq 1$, we compute the irreducible fraction $\ell_i / N = p_i / q_i$. As above, we have $q_i \mid d$ and set $G_i = \langle d_i \rangle$ with $d_i = \operatorname{lcm}(d_{i-1}, q_i)$. Now, write the euclidean division $j_i = b_i (d / d_{i-1}) + r_i$ where $0 \leq r_i < d / d_{i-1}$ and $0 \leq b_i < d_{i-1}$ (with uniform probability for $b_i$ and $r_i$ too). We have $\operatorname{lcm}(d_{i-1}, q_i) = d_{i-1} \cdot \frac{d/d_{i-1}}{\gcd(r_i, \, d/d_{i-1})}$, so $d_i / d_{i-1}$ is actually the denominator $\tilde{q}_i$ obtained when we compute the irreducible fraction $r_i / (d/d_{i-1})$. With this description, it becomes obvious that the two algorithms are equivalent (where $r_i$ corresponds to the value $j$ sampled on $\mathbb{Z}_{N_{i-1}}$ in the previous algorithm).
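The arithmetical identity underlying this equivalence - that $\operatorname{lcm}(d_{i-1}, q_i)$ equals $d_{i-1}$ times the lowest-terms denominator of $r_i/(d/d_{i-1})$ - can be checked exhaustively on a small example; this brute-force sketch is mine, not part of the original argument:

```python
from fractions import Fraction
from math import lcm

d = 360
divisors = [e for e in range(1, d + 1) if d % e == 0]
for d_prev in divisors:      # d_prev plays the role of d_{i-1}
    m = d // d_prev          # sample space size in the reduced algorithm
    for j in range(d):       # j plays the role of j_i
        q = Fraction(j, d).denominator        # denominator of l_i/N = j/d
        r = j % m                             # remainder of the euclidean division
        q_tilde = Fraction(r, m).denominator  # denominator of r/(d/d_{i-1})
        assert lcm(d_prev, q) == d_prev * q_tilde
print("identity verified for all divisors of", d)
```

(A prime-by-prime valuation argument proves the identity in general.)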
I conjecture that the same phenomenon happens for a general Dedekindian group, but this may be more difficult to write down…