Comparing QCED (Quantum-Classical Encoder-Decoder) and FNN (Feedforward Neural Network)


In this experiment the two methods are compared on 200-variable QUBO instances.
QUBO problems are in general NP-hard: with ordinary exact algorithms the computation quickly becomes intractable, or finding a solution takes an extremely long time.
The difficulty grows exponentially with the number of variables; with 200 binary variables there are 2^200 ≈ 1.6 × 10^60 possible assignments.

Written out, that is a 61-digit number beginning 1,606,938,044,258,990,275,541,962,092,341...
At that scale the number stops meaning much.
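
Just to sanity-check that count, here is a quick snippet in Python (not in the original post, purely for illustration):

n = 200
print(2**n)            # the exact 61-digit integer
print(f"{2**n:.3e}")   # ≈ 1.607e+60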

These are the problems we will be optimizing.
The two methods used are QCED (Quantum-Classical Encoder-Decoder) and FNN (Feedforward Neural Network).

QCED (Quantum-Classical Encoder-Decoder) is a hybrid method that fuses quantum computation with classical neural networks and is a powerful approach to QUBO (Quadratic Unconstrained Binary Optimization) problems. Solving problems of this scale in a short time is extremely difficult for conventional classical methods, but QCED leverages quantum simulation to make the solution search more efficient.

FNN, on the other hand, is an optimization approach based on a purely classical feedforward neural network.
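
For reference, both methods minimize the same objective: the QUBO cost xᵀQx over binary vectors x, where the diagonal of Q contributes the linear terms and the off-diagonal entries the pairwise terms (this is exactly what the qubo_loss function defined further below computes). A minimal NumPy sketch, added here only for illustration:

import numpy as np

def qubo_cost(Q, x):
    # QUBO objective x^T Q x for a 0/1 vector x
    # (diagonal of Q -> linear terms, off-diagonal entries -> quadratic terms)
    return float(x @ Q @ x)

# Illustrative usage (qubo_cost is a hypothetical helper, not part of the original code):
# x = np.random.randint(0, 2, size=Q.shape[0])
# print(qubo_cost(Q, x))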


Let's give it a try.

Here is the data used in this experiment.


First, the FNN.


import numpy as np

file_path = '200_QUBO_instances.npy'
Q_list = np.load(file_path, allow_pickle=True)

# 読み込んだデータを確認
print(type(Q_list))
print(Q_list)
<class 'numpy.ndarray'>
[[[ 0.87835693 -0.05598211  3.683261   ...  6.4303255  -1.8843331
    1.6913242 ]
  [-0.05598211 -3.8544106  -1.6865983  ...  0.10170794  8.554954
   -6.560239  ]
  [ 3.683261   -1.6865983   0.6847429  ...  4.1337156   7.147146
   -5.7261066 ]
  ...
  [ 6.4303255   0.10170794  4.1337156  ... -5.928204   -7.277937
   -3.4466977 ]
  [-1.8843331   8.554954    7.147146   ... -7.277937    0.83853626
    3.9370723 ]
  [ 1.6913242  -6.560239   -5.7261066  ... -3.4466977   3.9370723
   -0.44826508]]

 [[-8.407826   -7.7025204   5.179104   ... -1.616338    6.5573287
   -0.02192593]
  [-7.7025204  -5.6019745  -6.2455225  ...  3.1760235   6.641848
   -7.629181  ]
  [ 5.179104   -6.2455225   2.690012   ...  5.638295    8.471462
   -0.6121874 ]
  ...
  [-1.616338    3.1760235   5.638295   ... -2.261506   -7.186126
    3.7407212 ]
  [ 6.5573287   6.641848    8.471462   ... -7.186126    1.258419
   -1.3019364 ]
  [-0.02192593 -7.629181   -0.6121874  ...  3.7407212  -1.3019364
   -4.416256  ]]

 [[-9.488083    8.058189   -3.666016   ... -1.4998932   5.252211
   -2.0552635 ]
  [ 8.058189   -5.101273   -6.9305077  ... -6.2437572   4.829009
   -0.7470312 ]
  [-3.666016   -6.9305077  -8.463369   ... -1.9663281  -7.3712444
    3.4172711 ]
  ...
  [-1.4998932  -6.2437572  -1.9663281  ... -3.9669647  -4.3001432
   -1.6329725 ]
  [ 5.252211    4.829009   -7.3712444  ... -4.3001432   8.586697
   -6.046257  ]
  [-2.0552635  -0.7470312   3.4172711  ... -1.6329725  -6.046257
   -4.2270255 ]]

 ...

 [[-9.903284   -2.5444975  -4.8658643  ... -1.9106255   7.974622
   -6.4791775 ]
  [-2.5444975   5.2603436  -1.7656236  ... -3.4351063   6.3991966
   -3.8232503 ]
  [-4.8658643  -1.7656236  -6.575544   ...  3.6702332  -0.0736351
    3.7162786 ]
  ...
  [-1.9106255  -3.4351063   3.6702332  ...  0.47473717  2.1200066
   -1.1627731 ]
  [ 7.974622    6.3991966  -0.0736351  ...  2.1200066  -1.1586561
   -0.90860987]
  [-6.4791775  -3.8232503   3.7162786  ... -1.1627731  -0.90860987
   -6.848552  ]]

 [[ 8.320421    0.06300545  5.622634   ... -8.942442    2.9164004
    3.314126  ]
  [ 0.06300545 -9.422556   -3.6271994  ...  1.9982953  -4.08238
   -3.3852835 ]
  [ 5.622634   -3.6271994   2.031063   ... -2.3335936  -2.8284044
    3.5308604 ]
  ...
  [-8.942442    1.9982953  -2.3335936  ...  1.2786207   3.9538307
    0.29459763]
  [ 2.9164004  -4.08238    -2.8284044  ...  3.9538307   9.607193
   -2.1903872 ]
  [ 3.314126   -3.3852835   3.5308604  ...  0.29459763 -2.1903872
   -1.7977161 ]]

 [[-5.583067   -3.3709478  -2.9271286  ...  7.619334    0.14672041
   -0.6116371 ]
  [-3.3709478   4.3764887   2.421525   ... -1.4115126  -3.8925061
   -2.7743666 ]
  [-2.9271286   2.421525    9.209936   ... -2.6664572  -5.363298
    5.7154903 ]
  ...
  [ 7.619334   -1.4115126  -2.6664572  ... -9.761688   -1.7106423
    0.7213435 ]
  [ 0.14672041 -3.8925061  -5.363298   ... -1.7106423   2.755107
   -3.460055  ]
  [-0.6116371  -2.7743666   5.7154903  ...  0.7213435  -3.460055
    0.43645477]]]
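
The printout above is the raw array. It holds 100 QUBO instances of 200 variables each, so a quick shape check (not shown in the original post) should give:

print(Q_list.shape)  # expected: (100, 200, 200)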


import torch
import torch.nn as nn
from copy import deepcopy


class ClassicSolver(nn.Module):
    def __init__(self,Q_size,layers):
        super(ClassicSolver,self).__init__()
        self.layers = layers
        self.hidden = nn.ModuleList()
        for i in range(layers):
            if i ==0:
                self.hidden.append(nn.Linear(4,4+Q_size))
            if 0<i<layers-1:
                self.hidden.append(nn.Linear(4+Q_size,4+Q_size))
            if i == layers-1:
                self.hidden.append(nn.Linear(4+Q_size,Q_size))

        self.relu = nn.ReLU()
        self.tanh = nn.Tanh()
        
    def forward(self,x):
        out = x
        for i in range(self.layers):
            if i < self.layers-1:
                func = self.hidden[i]
                out = func(out)
                out = self.relu(out)
            else:
                func = self.hidden[i]
                out = func(out)
                out = 0.5*self.tanh(out)+0.5
        
        return out
    

def qubo_loss(Q,x):
    off = Q.detach().clone()
    off.fill_diagonal_(0)
    return torch.dot(x,torch.diag(Q))+x.t()@off@x  #linear terms + quadratic terms




def fnn_solve(Q_list,layers,Q_sol_list=None,learning_rate=0.01,num_epochs=500,pre_rounds=20,anneal_round=10,post_annealing=True):


    sol_list = []
    vector_list = []
    for index,Q in enumerate(Q_list):

        if Q_sol_list !=None:
            Q_sol = Q_sol_list[index]
    
        Q_size = len(Q)

        #pre-sampling
        fnn_state_list = []
        inputs_list=[]
        pre_loss = []
        for pre_round in range(pre_rounds):  #pre-sampling rounds

            fnn = ClassicSolver(Q_size,layers)
            fnn_state_list.append(fnn.state_dict().copy())

            inputs = torch.rand(4,requires_grad=True)
            inputs_list.append(deepcopy(inputs))
            
            optimizer = torch.optim.SGD(fnn.parameters(), lr=learning_rate)

            for epoch in range(20):  #20 epochs of training for each pre-sampling round

                out = fnn(inputs)
        
                loss = qubo_loss(Q,out)
    
                loss.backward()
    
                optimizer.step()
           
                #clear the gradients 
                optimizer.zero_grad()

            pre_loss.append(loss.item())

        best_fnn_params = fnn_state_list[np.argmin(pre_loss)]
        best_inputs = inputs_list[np.argmin(pre_loss)]

        #actual training
        fnn = ClassicSolver(Q_size,layers)
        fnn.load_state_dict(best_fnn_params)

        inputs = best_inputs
        inputs.requires_grad=True
    
        optimizer = torch.optim.SGD(fnn.parameters(), lr=learning_rate)
    
        solution = []
        for epoch in range(num_epochs):

            out = fnn(inputs)
        
            loss = qubo_loss(Q,out)
    
            loss.backward()
    
            optimizer.step()
           
            #clear the gradients 
            optimizer.zero_grad()
            
            with torch.no_grad():
                out = fnn(inputs)
                solution.append(out)

        
        binary_solution = [np.array([0 if v <0.5 else 1 for v in solution[i]]) for i in range(num_epochs)]
        fnn_sol = binary_solution[-1]

        W = Q.detach().numpy()
        
        best = fnn_sol.T@W@fnn_sol   #cost value
        best_sol = fnn_sol   #solution vector
        
        print("fnn loss:",best)
        if post_annealing:
            u = np.identity(Q_size)

            T = abs(best)
            for a in range(1,anneal_round+1):
                print(f"annealing round: {a}/{anneal_round}")
                T = T/Q_size/a
                for i in range(Q_size):
                    trial_sol = abs(best_sol - u[i])
                    loss = trial_sol.T@W@trial_sol
                    if loss < best:
                        
                        best_sol = trial_sol
                        best = loss
                        print("better solution found, loss:",loss)
                    else:
                        if np.random.rand() < np.exp((best-loss)/T):
                            
                            for j in range(Q_size):
                                if j != i:
                                    trial_sol2 = abs(trial_sol-u[j])
                                    loss = trial_sol2.T@W@trial_sol2
                                    if loss < best:
                                        best_sol =trial_sol2
                                        best = loss
                                        print("energy barrier overcome, loss:",loss)

        if Q_sol_list !=None:
            percentage_error = 100*(best-Q_sol)/abs(Q_sol)
            if percentage_error > 0.001:
                sol_list.append(percentage_error)
            else:
                sol_list.append(0)
            print(f"instance: {index+1}/{len(Q_list)}, loss={percentage_error}")
        else:
            sol_list.append(best)
            print(f"instance: {index+1}/{len(Q_list)}, loss={best}")
        
    
    return sol_list
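
The call that produced the log below is not shown in the post. A minimal sketch of how fnn_solve could be invoked (the layer count here is an assumption, and the QUBO matrices are converted to torch tensors first because qubo_loss relies on torch operations):

# Hypothetical invocation (not from the original post)
Q_tensors = [torch.tensor(Q, dtype=torch.float32) for Q in Q_list]
results = fnn_solve(Q_tensors, layers=3)
print("最適化結果:", results)  # label kept to match the log below ("Optimization results")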

energy barrier overcome, loss: -7270.085608959198
energy barrier overcome, loss: -7277.4733147621155
better solution found, loss: -7291.2521276474
better solution found, loss: -7331.992702960968
better solution found, loss: -7364.177434921265
energy barrier overcome, loss: -7369.382350444794
energy barrier overcome, loss: -7391.930121421814
better solution found, loss: -7400.794818878174
energy barrier overcome, loss: -7426.123362064362
better solution found, loss: -7448.69739818573
better solution found, loss: -7464.209702014923
energy barrier overcome, loss: -7470.498142242432
better solution found, loss: -7474.490074157715
better solution found, loss: -7482.60951757431
energy barrier overcome, loss: -7499.3888463974
energy barrier overcome, loss: -7503.722304821014
energy barrier overcome, loss: -7523.53792142868
energy barrier overcome, loss: -7528.712829113007
better solution found, loss: -7530.4759821891785
annealing round: 2/10
better solution found, loss: -7534.821636676788
annealing round: 3/10
annealing round: 4/10
annealing round: 5/10
annealing round: 6/10
annealing round: 7/10
annealing round: 8/10
annealing round: 9/10
annealing round: 10/10
instance: 100/100, loss=-7534.821636676788
最適化結果: [-6882.13072681427, -7104.082824707031, -6953.216075897217, -7182.5534200668335, -7625.4486174583435, -7178.059028148651, -6764.183611392975, -7125.820971488953, -7150.0958886146545, -6586.682129859924, -6301.727297782898, -6467.445317268372, -6772.246123313904, -6489.615850925446, -6685.903628349304, -7515.334743499756, -7005.321583747864, -6570.864305973053, -6936.357727050781, -6328.4874267578125, -6753.252447605133, -6684.2883496284485, -6233.45458650589, -6888.106480121613, -6335.27365064621, -6685.763347148895, -6710.372583866119, -6843.076160430908, -6500.282066822052, -6367.54719877243, -6413.069146633148, -6012.203217506409, -7544.315174102783, -6647.361626625061, -7690.616402626038, -6742.389635562897, -7137.193478107452, -6716.983383655548, -6573.659171581268, -7173.340741157532, -6873.588557243347, -5470.138840675354, -6411.483247280121, -6578.835870742798, -6336.735528469086, -7480.291038036346, -6028.380337715149, -7032.744576931, -6642.237397193909, -6319.391517162323, -6093.93727350235, -6112.176330089569, -6129.763909816742, -6364.019955635071, -6614.005595207214, -7285.074822425842, -8150.715076446533, -7237.96831035614, -6972.512172222137, -6886.287088871002, -6874.873653411865, -7177.208784580231, -6591.548059463501, -6026.9044070243835, -6067.188675403595, -5857.083622455597, -6059.013258457184, -6385.657928466797, -7426.963681221008, -6694.863788127899, -8470.104269504547, -6816.053777694702, -7656.909501552582, -7100.4130120277405, -6104.913543701172, -6312.859435081482, -6069.653106212616, -7399.189745426178, -6514.7608294487, -6507.512263774872, -6637.954981803894, -6464.523952007294, -7146.513973236084, -6656.9875202178955, -6752.638627052307, -6267.541570663452, -6984.852240562439, -7129.915972232819, -6565.36901140213, -6627.910125732422, -7331.5474462509155, -5879.125876903534, -7096.481014251709, -7505.373484134674, -6614.666921138763, -7041.0602951049805, -6694.567754745483, -7298.342719554901, -6263.025983810425, -7534.821636676788]

Next up: QCED


import torch
import torch.nn as nn
from torch import Tensor
import numpy as np
import matplotlib.pyplot as plt
from qutip import *
from scipy.spatial.distance import pdist, squareform
import time 

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Construct 1-qubit Pauli operator
def single_pauli(q,j,axis):  #q: number of qubits, target jth qubit, axis of the Pauli matrix (x->0, y->1, z->2)
    product = [qeye(2)]*q
    paulis =[sigmax(),sigmay(),sigmaz()]
    product[j] = paulis[axis]
    return tensor(product)

# Construct 1-qubit number operator 
def nz(q,j):  #q: number of qubits, target jth qubit
    product = [qeye(2)]*q
    product[j] = (1+sigmaz())/2
    return tensor(product)

# Construct 2-qubit number operator 
def nznz(q,j,k):  #q: number of qubits, target jth and kth qubit
    product = [qeye(2)]*q
    product[j] = (1+sigmaz())/2
    product[k] = (1+sigmaz())/2
    return tensor(product)

# Calculate the Rydberg interaction matrix based on the given coordinates
def ryd_int(coords,C=1):
    interactions = squareform(C/pdist(coords)**6)
    return interactions

# Quantum simulation of the Rydberg Hamiltonian
# The Rydberg Hamiltonian is composed of one global Ω(t), q local Δj(t), and q(q-1)/2 Rydberg interactions Vjk
# Ω(t) is parametrized by the asymptotic values of the basis step functions, while Δj(t) by the initial point Δj(0) and the slope sj
# Ω(t), Δj(0),and sj each has q parameters (in total 3q parameters)
def simulation(q,T,omegas,delta_0,delta_s,V,noise=0): 
    """
    T:evolution time 
    omegas: Ω(t) parameters
    delta_0: Δj(0)
    delta_s: sj
    V: Rydberg interaction matrix
    noise: maximum random noise ratio (relative to the laser parameters)

    """
    omegas = omegas*(1+np.random.uniform(-noise,noise))
    delta_0 = delta_0*(1+np.random.uniform(-noise,noise))
    delta_s = delta_s*(1+np.random.uniform(-noise,noise))

    #N = len(omegas)+2  #include boundary values
    #oparams = [0]+list(omegas)+[0]  #boundary conditions: Ω(t0) = Ω(tN) = 0
    N = len(omegas)  #dont consider boundary conditions
    oparams = list(omegas)
    
    delta_t = T/N  #duration of each step

    w = [oparams[i] if i==0 else oparams[i]-oparams[i-1] for i in range(N)] #coefficients of the basis Sigmoid functions

    try: 
        #Cython
        import cython  #if Cython is installed, string-based Hamiltonian is used in QuTiP simulation
        
        omega_t = ""
        for i in range(1,N): 
            omega_t += f"{w[i]}*1/(exp(-1000*(t-{delta_t}*{i})+10)+1)+"

        omega_t = omega_t[:-1]  #remove the "+" sign in the end

        delta_t = []
        for j in range(q):
            d = f"{delta_0[j]}+{delta_s[j]}*t"
            delta_t.append(d)

    except ImportError:
        #Without Cython (function-based Hamiltonian is used)
        def omega_t(t,args):
            y = 0
            for i in range(N):
                y += w[i]*1/(np.exp(-1000*(t-i*delta_t)+10)+1)
            return y

        class local_delta:
            def __init__(self,j):
                self.j = j

            def evo(self):
                func = lambda t,args: delta_0[self.j]+delta_s[self.j]*t
                return func

        delta_t = [local_delta(j).evo() for j in range(q)]

    
    Vjk = 0
    for j in range(q):
        for k in range(j+1,q):
            Vjk += V[j,k]*nznz(q,j,k)
        
    #time-independent part
    H_t = [Vjk]

    #time-dependent part
    for j in range(q):
        H_t.append([0.5*single_pauli(q,j,0),omega_t])
        H_t.append([-nz(q,j),delta_t[j]])
    
        
    time_list = np.linspace(0,T,int(T)) 

    #initial state = ground state = |1>
    init = tensor([basis(2,1) for i in range(q)])

    result_t = mesolve(H_t, init, time_list,[], [])

    #Measure in the z-basis of each qubit
    measurement = [single_pauli(q,j,2) for j in range(q)]

    #Expectation value measured in each qubit
    expval_list = np.array([expect(measurement[j],result_t.states[-1]) for j in range(q)])

    return expval_list

# Finite-difference method for gradient computation
def finite_diff(q,T,omegas,delta_0,delta_s,V,noise):

    diff = 0.0001
    shift = diff*np.identity(q)

    omegas_grad = []
    delta_0_grad = []
    delta_s_grad = []

    for m in range(q):  #m: index of the parameter
        plus = omegas+shift[m,:]
        minus = omegas-shift[m,:]
        omegas_grad.append((simulation(q,T,plus,delta_0,delta_s,V,noise)-simulation(q,T,minus,delta_0,delta_s,V,noise))/(2*diff))
        plus = delta_0+shift[m,:]
        minus = delta_0-shift[m,:]
        delta_0_grad.append((simulation(q,T,omegas,plus,delta_s,V,noise)-simulation(q,T,omegas,minus,delta_s,V,noise))/(2*diff))
        plus = delta_s+shift[m,:]
        minus = delta_s-shift[m,:]
        delta_s_grad.append((simulation(q,T,omegas,delta_0,plus,V,noise)-simulation(q,T,omegas,delta_0,minus,V,noise))/(2*diff))
    
    ograd = np.array(omegas_grad).transpose()   #jth row: gradients w.r.t. the expectation value of the jth qubit
    dgrad_0 = np.array(delta_0_grad).transpose()
    dgrad_s = np.array(delta_s_grad).transpose()

    gradient_list = [[ograd[j,:],dgrad_0[j,:],dgrad_s[j,:]] for j in range(q)] #jth item: gradients of the expectation value of the jth qubit

    return gradient_list


# Define the Encoder class
# Number of layers in the encoder can be specified in the argument
class Encoder(nn.Module):
    def __init__(self,q,Q_size,layers,omega_max,delta_max,T):
        """
        Q_size: number of variables in the QUBO instance 
        layers: number of layers
        omega_max: maximum allowed Ω
        delta_max: maximum allowed Δ
        
        """
        super(Encoder,self).__init__()
        self.q = q
        self.omega_max = omega_max
        self.delta_max = delta_max
        self.T = T
        self.input_size =int(Q_size*(Q_size+1)/2)  #dimension of the input QUBO data
        self.layers = layers
        self.hidden = nn.ModuleList()
        self.hidden.append(nn.Linear(self.input_size,self.input_size+3*self.q))
        for i in range(1,layers):
            if i<layers-1:
                self.hidden.append(nn.Linear(3*self.q+self.input_size, 3*self.q+self.input_size))
            else:
                self.hidden.append(nn.Linear(3*self.q+self.input_size,3*self.q))
        
        self.sig = nn.Sigmoid()  #activation function of the last encoding layer
        self.relu = nn.ReLU()
        
    def forward(self,x):
        out = x
        for i in range(self.layers):
            if i < self.layers-1:
                func = self.hidden[i]
                out = func(out)
                out = self.relu(out)
            else:
                func = self.hidden[i]
                out = func(out)
                out = self.sig(out)
   
        omega_out = (out[:self.q]*self.omega_max).reshape(self.q,1)   #Ω domain [0,omega_max]
        delta_out_0 = (-self.delta_max/2+out[self.q:2*self.q]*self.delta_max).reshape(self.q,1)  #Δ(0) domain [-delta_max/2,delta_max/2]
        delta_out_s = (-self.delta_max/(2*self.T)+out[2*self.q:]*self.delta_max/self.T).reshape(self.q,1) #s domain [-delta_max/2T, delta_max/2T]

        return torch.cat((omega_out,delta_out_0,delta_out_s),dim=1)


# Define the quantum simulation as the PyTorch Autograd function
class Quantum(torch.autograd.Function):

    #input is the input tensor from the encoder
    @staticmethod
    def forward(ctx,input,q,T,V,noise):  
        
        ctx.q = q #save q as a parameter of ctx to use it in the backward pass

        #turn the input tensors from the encoder into numpy arrays
        t_omegas = input[:,0]
        omegas= t_omegas.detach().numpy()
        
        t_delta_0 = input[:,1]
        delta_0 = t_delta_0.detach().numpy()

        t_delta_s = input[:,2]
        delta_s = t_delta_s.detach().numpy()

        #calculate the gradients of the laser parameters and save it in ctx for the backward pass
        finite = finite_diff(q,T,omegas,delta_0,delta_s,V,noise)
        ctx.finite = finite

        return torch.tensor(simulation(q,T,omegas,delta_0,delta_s,V,noise),dtype=torch.float32,requires_grad=True)


    #grad_output is the gradient of loss w.r.t. each output expectation value 
    @staticmethod
    def backward(ctx, grad_output):

        finite = ctx.finite
        q = ctx.q

        input_grad = []
        for j in range(q):
            grad_omegas = list(finite[j][0])
            grad_delta_0 = list(finite[j][1])
            grad_delta_s = list(finite[j][2])

            input_grad.append(grad_omegas+grad_delta_0+grad_delta_s)

        #compute the gradient of loss w.r.t to each laser parameter with the chain rule
        input_grad = torch.tensor(input_grad,dtype=torch.float32)   #shape (q,3q)
        grad_output = torch.reshape(grad_output,(1,grad_output.shape[0]))  #shape (1,q)
        loss_input = torch.matmul(grad_output,input_grad)   #shape (1,3q)

        loss_omegas = torch.transpose(loss_input[:,:q],0,1) #shape (q,1)
        loss_delta_0 = torch.transpose(loss_input[:,q:2*q],0,1) #shape (q,1)
        loss_delta_s = torch.transpose(loss_input[:,2*q:],0,1) #shape (q,1)

        return torch.cat((loss_omegas,loss_delta_0,loss_delta_s),dim=1),None,None,None,None  #shape (q,3) (same dimension as input)
        #return None for extra parameters q,T,V,noise since we don't need to calculate the gradient of them 

# Define the Decoder Class
# Number of layers in the decoder can be specified in the argument
class Decoder(nn.Module):
    def __init__(self,q,Q_size,layers):
        super(Decoder,self).__init__()
        self.layers = layers
        self.hidden = nn.ModuleList()
        self.hidden.append(nn.Linear(q,q+Q_size))
        for i in range(1,layers):
            if i<layers-1:
                self.hidden.append(nn.Linear(q+Q_size, q+Q_size))
            else:
                self.hidden.append(nn.Linear(q+Q_size,Q_size))
        
        self.relu = nn.ReLU()
        self.tanh = nn.Tanh() #activation function of the last decoding layer
        
    def forward(self,x):
        out = x
        for i in range(self.layers):
            if i < self.layers-1:
                func = self.hidden[i]
                out = func(out)
                out = self.relu(out)
            else:
                func = self.hidden[i]
                out = func(out)
                out = 0.5*self.tanh(out)+0.5
        
        return out

# QUBO loss function
def qubo_loss(Q,x):
    off = Q.detach().clone()
    off.fill_diagonal_(0)
    return torch.dot(x,torch.diag(Q))+x.t()@off@x  #linear terms + quadratic terms
 


def QCED_solve(Q: Tensor,q_resource: dict,num_epochs=100,lr=0.01,e_layers=3,d_layers=3,noise=0,Q_sol=None):
    """
    Q: QUBO matrix. Should be of type torch.Tensor 
    
    q_resource: quantum resource of the Rydberg annealer. It should be a dictionary with keys in the order: "q","T","coords","omega_max",and "delta_max"
    
    i.e. q_resource = {"q":4,"T":5,"coords":np.array([[0., 0.],
                                                      [0., 1.],
                                                      [1., 0.],
                                                      [1., 1.]]),
                    "omega_max":5,"delta_max":20}
                    
    where q is the number of qubits, T is the evolution time, coords is the coordinates array of atoms, 
    omega_max and delta_max are the maximum allowed Ω and Δ in the simulation.

    num_epochs: number of training epochs

    lr: learning rate

    e_layer, d_layer: number of layers in the encoder and decoder respectively

    noise: maximum noise ratio relative to the laser parameters
                        
    Q_sol: cost value of the optimal solution (if available)
    
    """
    
    q,T,coords,omega_max,delta_max = [item for key,item in q_resource.items()]
    
    Q_size = len(Q)

    V = ryd_int(coords)

    encoder = Encoder(q,Q_size,e_layers,omega_max,delta_max,T).to(device)
    decoder = Decoder(q,Q_size,d_layers).to(device)

    #take the upper-triangular elements of the QUBO matrix as the input
    x = Q[torch.triu(torch.ones(Q_size, Q_size) == 1)]

    params = list(encoder.parameters())+list(decoder.parameters())
    optimizer = torch.optim.SGD(params, lr=lr,momentum=0.9)
    
    start = time.time()
  
    solution = []  #store the solution vectors
    loss_list = [] #store the loss values
    for epoch in range(num_epochs):  

        #Ryd denotes the function of quantum simulation
        Ryd = Quantum.apply

        #forward pass
        encoder_out = encoder(x)       
        q_out = Ryd(encoder_out,q,T,V,noise)
        #print(f"q_out:{q_out}")
        y = decoder(q_out)
        loss = qubo_loss(Q,y)

        #backpropagation
        loss.backward()
    
        #update all the parameters 
        optimizer.step()
        
        #clear the gradients 
        optimizer.zero_grad()


        with torch.no_grad():
            sol_vector = decoder(Ryd(encoder(x),q,T,V,noise))
            loss_value = qubo_loss(Q,sol_vector)
            solution.append(sol_vector)
            loss_list.append(loss_value)

            if (epoch+1)%10 == 0:
                if Q_sol == None:
                    print(f"epoch {epoch+1}: Loss = {loss_value.item()}")
                else:
                    print(f"epoch {epoch+1}: Loss = {100*(loss_value.item()-Q_sol)/abs(Q_sol)}%")
                    
    end = time.time()
  
    #map the continuous (probabilistic) output to binary
    binary_solution = [np.array([0 if v <0.5 else 1 for v in solution[i]]) for i in range(num_epochs)]

    binary_losses=[]    #losses evaluated with the binary-variable output
    for bs in binary_solution:
        binary_tensor = torch.tensor(bs,dtype=torch.float32)
        binary_tensor = torch.reshape(binary_tensor,(-1,))
        binary_losses.append(qubo_loss(Q,binary_tensor).item())

    prob_losses = [s.item() for s in loss_list]  #losses evaluated with the continuous-variable output

    # If the optimal solution is not provided, then simply return the loss values
    if Q_sol == None:
        result = [binary_losses,prob_losses]
    # If the optimal solution is provided, then return the percentage error relative to the optimal solution
    else:
        bin_err = [100*(s-Q_sol)/abs(Q_sol) for s in binary_losses]
        prob_err = [100*(s-Q_sol)/abs(Q_sol) for s in prob_losses]
        result = [bin_err,prob_err]

    print(f"Solution: {binary_solution[-1]}")
    print(f"QUBO cost: {binary_losses[-1]}")
    if Q_sol != None:
        print(f"Percentage Error: {result[0][-1]}%")
    print(f"Solving time: {end-start}s")

    return result 

# Plot the learning curve
def plot_learning_curve(result,figsize=(6,4),logx=False,logy=False):

    num_epochs  = len(result[0])
    plt.figure(figsize=figsize)
    plt.plot(range(1,num_epochs+1),result[1],lw=3,label="continuous variables")
    plt.plot(range(1,num_epochs+1),result[0],lw=3,label="binary variables",ls="dashed",color="tomato")
    plt.ylabel("Loss",fontsize=15)
    plt.xlabel("Iterations",fontsize=15)
    plt.xticks(fontsize=12)
    plt.yticks(fontsize=12)
    plt.legend(fontsize=12)
    if logx == True:
        plt.xscale("log")
    if logy == True:
        if result[0][-1] < 0:
            raise ValueError("Negative values found in the result. Try percentage error loss instead or disable logy.")
        else:
            plt.yscale("log")
    plt.show()


Q_matrix = np.load('/content/200_QUBO_instances.npy')
Q_tensor = torch.tensor(Q_matrix, dtype=torch.float32)

# Check the number of instances and the QUBO matrix size: 100 instances of 200 variables
num_instances = Q_tensor.shape[0]
Q_size = Q_tensor.shape[1]

# Define a helper to extract the upper-triangular elements of each QUBO matrix
def extract_upper_triangular(Q_tensor):
    upper_triangular_elements = [Q_tensor[i][torch.triu(torch.ones(Q_size, Q_size) == 1)] for i in range(num_instances)]
    return upper_triangular_elements

# Extract the upper-triangular elements
upper_triangular_elements = extract_upper_triangular(Q_tensor)

# Stack the upper-triangular elements into a single tensor
# (Note: QCED_solve extracts the upper-triangular part internally, so this step is only for inspection)
upper_triangular_elements_tensor = torch.stack(upper_triangular_elements)

# Define the quantum resources
q_resource = {
    "q": 4,  # number of qubits
    "T": 5,  # evolution time
    "coords": np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]]),  # atom coordinates
    "omega_max": 5,  # maximum Ω
    "delta_max": 20  # maximum Δ
}

# Run the QUBO optimization for each instance
for i in range(num_instances):
    Q_instance = Q_tensor[i]  # QUBO matrix of the current instance
    result = QCED_solve(Q_instance, q_resource, num_epochs=100, lr=0.01, e_layers=3, d_layers=3, noise=0, Q_sol=None)

    # Plot the learning curve
    plot_learning_curve(result)
epoch 100: Loss = -5062.001953125
Solution: [1 0 1 1 1 1 1 1 0 0 1 1 0 1 0 1 1 1 0 0 0 0 0 0 1 1 0 1 0 0 1 0 0 0 1 0 0
 0 0 0 1 0 0 0 1 0 1 1 0 0 1 1 1 0 0 0 1 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0
 0 1 1 0 1 1 1 1 0 1 1 0 0 0 0 1 1 1 0 0 1 0 0 0 1 1 0 0 1 1 1 0 1 1 1 1 0
 0 1 0 0 1 1 0 1 0 0 1 0 1 1 0 1 1 0 1 1 0 1 0 1 1 0 0 0 1 0 1 0 1 1 0 0 1
 0 1 1 1 0 1 0 1 0 1 0 1 1 1 0 1 1 0 1 0 1 0 0 1 1 0 0 1 0 1 1 0 1 0 0 1 1
 1 1 1 1 0 1 0 1 1 1 1 1 0 0 1]
QUBO cost: -5062.001953125
Solving time: 657.5923266410828s



Results

QUBO optimization results with QCED

QCED results

  • Final loss: -5062.001953125

  • Optimization time: 657.59 seconds (about 11 minutes)

  • Solution obtained:

    1. [1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1]

FNN results

Results of solving the same QUBO instances with the FNN (Feedforward Neural Network)

  • Final loss: -7534.821636676788

  • Optimization time: about 8 minutes

  • Results obtained (the final cost value for each of the 100 instances):

    1. [[-6882.13072681427, -7104.082824707031, -6953.216075897217, -7182.5534200668335, -7625.4486174583435, -7178.059028148651, -6764.183611392975, -7125.820971488953, -7150.0958886146545, -6586.682129859924, -6301.727297782898, -6467.445317268372, -6772.246123313904, -6489.615850925446, -6685.903628349304, -7515.334743499756, -7005.321583747864, -6570.864305973053, -6936.357727050781, -6328.4874267578125, -6753.252447605133, -6684.2883496284485, -6233.45458650589, -6888.106480121613, -6335.27365064621, -6685.763347148895, -6710.372583866119, -6843.076160430908, -6500.282066822052, -6367.54719877243, -6413.069146633148, -6012.203217506409, -7544.315174102783, -6647.361626625061, -7690.616402626038, -6742.389635562897, -7137.193478107452, -6716.983383655548, -6573.659171581268, -7173.340741157532, -6873.588557243347, -5470.138840675354, -6411.483247280121, -6578.835870742798, -6336.735528469086, -7480.291038036346, -6028.380337715149, -7032.744576931, -6642.237397193909, -6319.391517162323, -6093.93727350235, -6112.176330089569, -6129.763909816742, -6364.019955635071, -6614.005595207214, -7285.074822425842, -8150.715076446533, -7237.96831035614, -6972.512172222137, -6886.287088871002, -6874.873653411865, -7177.208784580231, -6591.548059463501, -6026.9044070243835, -6067.188675403595, -5857.083622455597, -6059.013258457184, -6385.657928466797, -7426.963681221008, -6694.863788127899, -8470.104269504547, -6816.053777694702, -7656.909501552582, -7100.4130120277405, -6104.913543701172, -6312.859435081482, -6069.653106212616, -7399.189745426178, -6514.7608294487, -6507.512263774872, -6637.954981803894, -6464.523952007294, -7146.513973236084, -6656.9875202178955, -6752.638627052307, -6267.541570663452, -6984.852240562439, -7129.915972232819, -6565.36901140213, -6627.910125732422, -7331.5474462509155, -5879.125876903534, -7096.481014251709, -7505.373484134674, -6614.666921138763, -7041.0602951049805, -6694.567754745483, -7298.342719554901, -6263.025983810425, -7534.821636676788]]

Comparison of QCED and FNN

Item              | QCED                                         | FNN
Final loss        | -5062.001953125                              | -7534.821636676788
Optimization time | about 11 minutes                             | about 8 minutes
Solution obtained | binary solution (200 variables)              | binary solution (200 variables)
Approach          | hybrid of quantum simulation + classical NN  | classical neural network

Conclusions

  • The FNN performed very well on these QUBO instances, reaching the lowest (best) cost value in this comparison, -7534.821636676788.

  • QCED, on the other hand, leverages quantum computation and can find solutions stably even on more complex problems, which makes it a very effective approach under certain conditions.

Notably, the paper reports that this FNN approach achieves accuracy comparable to, or better than, the well-known Gurobi solver (a very powerful commercial optimizer) and the D-Wave quantum annealer, and does so in a short time. In other words, the key point is that it competes with state-of-the-art tools while cutting computation time, matching or even exceeding their performance.

Of course, this may become harder depending on the number of constraints and the specific problem, but it is encouraging that it can compete at all.

Reference: https://arxiv.org/abs/2410.12636



