*This example will only work on the emulator backend*

## What does it do?

Classification is the act of assigning labels to data, often with respect to other data. For instance, we can classify animals by whether they have two legs, four legs, or a different number. Often we classify new data based on already classified (learned) data: we represent all data points by features, or characteristics, and compute the distance between the features of the new data point and those of the learned data points. The new data point is assigned the label to which it has the shortest distance.

This is a typical example of $k$-nearest-neighbor classification, which classically has a complexity of $\mathcal{O}(NM)$, with $N$ the number of data points and $M$ the number of features. A quantum version of this classifier was proposed by Schuld et al. [1] and has a constant complexity $\mathcal{O}(1)$.
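To make the classical baseline concrete, here is a minimal Python sketch of the $\mathcal{O}(NM)$ approach; the data points and labels are made up for illustration, and the per-point weight follows the decision rule given later in this article.

```python
import math

def classify(x_star, data, labels):
    """Distance-based classifier: each training point votes with weight
    1 - |x* - x_i|^2 / (4N); the sign of the weighted label sum wins."""
    n = len(data)
    total = 0.0
    for x_i, y_i in zip(data, labels):
        # O(M) work per training point, O(NM) in total.
        dist_sq = sum((a - b) ** 2 for a, b in zip(x_star, x_i))
        total += y_i * (1.0 - dist_sq / (4.0 * n))
    return 1 if total >= 0 else -1

# Two normalized training points, one per class (angles are arbitrary).
data = [(math.cos(0.2), math.sin(0.2)), (math.cos(2.0), math.sin(2.0))]
labels = [-1, 1]
x_star = (math.cos(1.8), math.sin(1.8))  # closer to the second point
print(classify(x_star, data, labels))  # -> 1
```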

## What is it used for?

Classification has widespread use in everyday life and we often classify objects without fully realizing we do it. Classically, a naive approach would be to compute the distance between the new data point and every other data point separately, after which we combine the results. This approach can be improved upon, but we are still left with a complexity of $\mathcal{O}(NM)$.

With a quantum-based approach, we do the same computations, but now requiring only one single-qubit operation and two measurements. Hence, the quantum-based approach is in theory better than the classical one. A caveat, however, is that the result is probabilistic, so multiple measurement rounds are needed. Furthermore, a specific quantum state is assumed to begin with. Without something like a quantum RAM, or a quantum process that produces the input state efficiently, we are left with the same complexity as the classical approach.

## How does it work?

Let our data points be given by $\{(x_i, y_i)\}_{i=0}^{N-1}$, with $x_i$ the data point and $y_i$ the corresponding label, and let $x^*$ be the unlabeled new data point. Furthermore, there are only two labels, $y_i = \pm 1$. The new label is then given by evaluating

$y^* = {\rm sgn}\left(\sum_{i=0}^{N-1} y_i \left[1-\frac{1}{4N}{\vert{x^*}-x_i\vert}^2\right]\right)$

If both classes have the same number of data points and the data points are normalized, we can simplify the above function to

$y^* = {\rm sgn}\left(\sum_{i=0}^{N-1} y_i {\vert{x^*} + x_i\vert}^2\right)$
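As a sanity check, the following snippet compares the signs of the two sums for made-up normalized data with balanced classes; under those conditions the two decision rules agree.

```python
import math

def sign_original(x_star, data, labels):
    # sgn of sum_i y_i * (1 - |x* - x_i|^2 / (4N))
    n = len(data)
    s = sum(y * (1 - sum((a - b) ** 2 for a, b in zip(x_star, x)) / (4 * n))
            for x, y in zip(data, labels))
    return 1 if s >= 0 else -1

def sign_simplified(x_star, data, labels):
    # sgn of sum_i y_i * |x* + x_i|^2
    s = sum(y * sum((a + b) ** 2 for a, b in zip(x_star, x))
            for x, y in zip(data, labels))
    return 1 if s >= 0 else -1

# Balanced classes, all vectors normalized (points on the unit circle).
angles = [0.3, 1.1, 2.3, 2.9]
data = [(math.cos(a), math.sin(a)) for a in angles]
labels = [-1, -1, 1, 1]
x_star = (math.cos(2.5), math.sin(2.5))
print(sign_original(x_star, data, labels), sign_simplified(x_star, data, labels))
```

For unit vectors, $\vert x^* + x_i \vert^2 = 4 - \vert x^* - x_i \vert^2$, and the constant terms cancel in the sum when the classes are balanced, which is why the signs coincide.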

We can create a quantum state that will produce precisely this probability distribution, if we start with

$\frac{1}{\sqrt{2N}} \sum_{i=0}^{N-1} \vert i \rangle ( \vert 0 \rangle \vert x^* \rangle + \vert 1 \rangle \vert x_i \rangle) \vert y_i \rangle$

Here $\vert x \rangle = \sum_{m=0}^{M-1} x_m \vert m \rangle$ encodes a data point in its amplitudes and $\vert y_i \rangle = \vert (1 + y_i)/2 \rangle$, i.e. the label $y_i = \pm 1$ is stored as the qubit state $\vert 0 \rangle$ or $\vert 1 \rangle$. Furthermore, $\vert i \rangle$ is the $i$-th computational basis state, used to keep track of the $i$-th training point, and the second register is an ancilla qubit.

Given this initial state, the quantum algorithm consists of only three operations. First, we apply a Hadamard on the second register. This gives

$\frac{1}{2\sqrt{N}} \sum_{i=0}^{N-1} \vert i \rangle (\vert 0 \rangle (\vert x^* \rangle + \vert x_i \rangle) + \vert 1 \rangle (\vert x^* \rangle - \vert x_i \rangle))\vert y_i \rangle$

We then measure the second register and only conditionally continue if we measured the $0$-state. This gives

$\frac{1}{2\sqrt{Np_{0}}} \sum_{i=0}^{N-1} \vert i \rangle \vert 0 \rangle(\vert x^* \rangle + \vert x_i \rangle) \vert y_i \rangle = \frac{1}{2\sqrt{Np_{0}}} \sum_{i=0}^{N-1} \sum_{m=0}^{M-1} (x^*_m + x_{i,m}) \vert i \rangle \vert 0 \rangle \vert m \rangle \vert y_i \rangle$

with $p_{0}$ the probability of obtaining $0$ after measuring the second register and $x_{i,m}$ the $m$-th feature of the $i$-th data point.
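This interference step can be mimicked numerically. The sketch below (with made-up angles for two training points) stores the amplitudes as `a[i][b][m]` for index $i$, ancilla $b$ and feature $m$, applies the Hadamard to the ancilla, and checks that the $b=0$ branch is proportional to $x^*_m + x_{i,m}$.

```python
import math

# Two normalized training points and a test point (made-up angles).
x = [(math.cos(0.3), math.sin(0.3)), (math.cos(2.1), math.sin(2.1))]
x_star = (math.cos(1.7), math.sin(1.7))
n, m_feat = len(x), 2

# Initial amplitudes a[i][b][m]: ancilla b=0 carries x*, b=1 carries x_i.
norm = 1 / math.sqrt(2 * n)
a = [[[norm * x_star[m] for m in range(m_feat)],
      [norm * x[i][m] for m in range(m_feat)]] for i in range(n)]

# Hadamard on the ancilla: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2.
s = 1 / math.sqrt(2)
after = [[[s * (a[i][0][m] + a[i][1][m]) for m in range(m_feat)],
          [s * (a[i][0][m] - a[i][1][m]) for m in range(m_feat)]] for i in range(n)]

# The b=0 branch now holds (x*_m + x_{i,m}) / (2 sqrt(N)); its total weight is p0.
p0 = sum(after[i][0][m] ** 2 for i in range(n) for m in range(m_feat))
print(p0)
```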

Now measuring the last register, which encodes the label, gives the probability of each class:

$P(y^* = 1) = \frac{1}{4Np_{0}}\sum_{i \mid y_i = 1}{\vert x^* + x_i \vert}^2$

Note that the two measurement rounds can be combined by classically post-processing the results.
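A small sketch of this classical post-processing, using hypothetical joint counts for the (ancilla, label) outcomes: discard the shots where the ancilla gave $1$ and estimate the class probabilities from the rest.

```python
# Hypothetical joint measurement counts for (ancilla, label) over many shots.
counts = {(0, 0): 310, (0, 1): 460, (1, 0): 120, (1, 1): 110}

# Post-processing: keep only shots where the ancilla gave 0; the label
# qubit's statistics on those shots estimate the class probabilities.
kept = {label: c for (anc, label), c in counts.items() if anc == 0}
shots_kept = sum(kept.values())
p_plus = kept.get(1, 0) / shots_kept   # estimate of P(y* = +1)
p_minus = kept.get(0, 0) / shots_kept  # estimate of P(y* = -1)
print(p_plus, p_minus)
```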

## Examination of the code

Let us consider a simple implementation of the above quantum distance-based classifier. As we have no quantum RAM, we will explicitly prepare the quantum state. Furthermore, we will consider only two data points, each with two features.

For normalized and standardized data, we can assume without loss of generality that

$\vert x_0 \rangle = \cos(\theta/2)\vert 0 \rangle - \sin(\theta/2)\vert 1 \rangle$, $y_0 = -1$

$\vert x_1 \rangle = \cos(\phi/2)\vert 0 \rangle - \sin(\phi/2)\vert 1 \rangle$, $y_1 = 1$

$\vert x^* \rangle = \cos(\omega/2)\vert 0 \rangle - \sin(\omega/2)\vert 1 \rangle$

for specific angles $\theta$, $\phi$ and $\omega$.
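For illustration, the angle for a normalized two-feature data point can be recovered as follows; this is a sketch assuming the encoding above, with the minus sign on the $\vert 1 \rangle$ amplitude.

```python
import math

def angle_for(x):
    """Return alpha such that (cos(alpha/2), -sin(alpha/2)) equals x,
    assuming x is a normalized two-feature data point."""
    return 2 * math.atan2(-x[1], x[0])

# Round trip: encode a point at theta = 0.8 and recover the angle.
x0 = (math.cos(0.4), -math.sin(0.4))
theta = angle_for(x0)
print(theta)  # -> 0.8
```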

The code is then given by

```
version 1.0
qubits 4
prep_z q[0:3]
H q[0:1]
# encode the new data point, this implements a cRy(omega)
CNOT q[0], q[2]
Ry q[2], -omega/2
CNOT q[0], q[2]
Ry q[2], omega/2
X q[0]
# encode the first data point, this implements a ccRy(theta)
toffoli q[0],q[1],q[2]
CNOT q[1],q[2]
Ry q[2], theta/4
CNOT q[1],q[2]
Ry q[2], -theta/4
toffoli q[0],q[1],q[2]
CNOT q[1],q[2]
Ry q[2], -theta/4
CNOT q[1],q[2]
Ry q[2], theta/4
X q[1]
# encode the second data point, this implements a ccRy(phi)
toffoli q[0],q[1],q[2]
CNOT q[1],q[2]
Ry q[2], phi/4
CNOT q[1],q[2]
Ry q[2], -phi/4
toffoli q[0],q[1],q[2]
CNOT q[1],q[2]
Ry q[2], -phi/4
CNOT q[1],q[2]
Ry q[2], phi/4
# encode the labels
CNOT q[1], q[3]
# The actual algorithm
H q[0]
measure_z q[0,3]
```

Here we see the results of the execution for specific angles $\theta$, $\phi$ and $\omega$. Note that we must discard the measurement results where the measurement of the ancilla qubit gives $1$. From the remaining results we then obtain the class probabilities $P(y^* = \pm 1)$, as we would expect.
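As a sanity check, the CNOT–Ry–CNOT–Ry pattern used above for the controlled rotations can be verified with plain matrix arithmetic, assuming the usual convention $R_y(\theta)$ with entries $\cos(\theta/2)$ and $\pm\sin(\theta/2)$.

```python
import math

def ry(t):
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def on_target(u):
    """I (x) u on two qubits, basis ordered |control target>."""
    out = [[0.0] * 4 for _ in range(4)]
    for c in range(2):
        for i in range(2):
            for j in range(2):
                out[2 * c + i][2 * c + j] = u[i][j]
    return out

CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

omega = 1.234
# Circuit order: CNOT, Ry(-w/2), CNOT, Ry(w/2); matrices compose right-to-left.
seq = matmul(on_target(ry(omega / 2)),
             matmul(CNOT, matmul(on_target(ry(-omega / 2)), CNOT)))

# Expected controlled-Ry(omega): identity on the c=0 block, Ry(omega) on c=1.
cry = [[1, 0, 0, 0], [0, 1, 0, 0],
       [0, 0, math.cos(omega / 2), -math.sin(omega / 2)],
       [0, 0, math.sin(omega / 2), math.cos(omega / 2)]]
ok = all(abs(seq[i][j] - cry[i][j]) < 1e-12 for i in range(4) for j in range(4))
print(ok)  # -> True
```

The decomposition works because conjugating $R_y(-\omega/2)$ with $X$ flips the rotation sign, so the control-$1$ branch accumulates $R_y(\omega)$ while the control-$0$ branch cancels to the identity.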

## Want to know more?

For more information, see the article by Schuld et al. [1], the notebooks on this algorithm on our GitHub, or the article by Wezeman et al. [2], where the practicalities of the state preparation are discussed.

[1] M. Schuld, M. Fingerhuth, and F. Petruccione. "Implementing a distance-based classifier with a quantum interference circuit". In: EPL (Europhysics Letters) 119.6 (2017).

[2] R. Wezeman, N. Neumann, and F. Phillipson. "Distance-based classifier on the Quantum Inspire". In: Digitale Welt 4.1 (Jan. 2020), pp. 85–91.