Can't use GPU with Aesara

I want to use the GPU with Aesara, but whether I run on Windows or on Ubuntu, I get the same error. I am using Python 3.8.5 with Anaconda, and CUDA is installed as well. The .aesararc file was also created following the documentation. The test script below is also taken from the documentation:

from aesara import function, config, shared, tensor as aet
import aesara
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], aet.exp(x))
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, aesara.tensor.elemwise.Elemwise) and
              ('Gpu' not in type(x.op).__name__)
              for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
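For context, the .aesararc mentioned above follows the layout from the documentation. A minimal sketch of such a file (the device index cuda0 is an assumption, not necessarily what I have) would be:

```ini
# minimal .aesararc sketch following the documentation;
# the device index (cuda0) is an assumed example
[global]
device = cuda0
floatX = float32
```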

The error I get is as follows:

ERROR (aesara.gpuarray): Could not initialize pygpu, support disabled
Traceback (most recent call last):
  File "/home/chinylan/anaconda3/lib/python3.8/site-packages/aesara/gpuarray/", line 262, in
  File "/home/chinylan/anaconda3/lib/python3.8/site-packages/aesara/gpuarray/", line 249, in use
    init_dev(device, preallocate=preallocate)
  File "/home/chinylan/anaconda3/lib/python3.8/site-packages/aesara/gpuarray/", line 120, in init_dev
    context = pygpu.init(
  File "pygpu/gpuarray.pyx", line 658, in pygpu.gpuarray.init
  File "pygpu/gpuarray.pyx", line 569, in pygpu.gpuarray.pygpu_init
ValueError: invalid literal for int() with base 10: '{1}'
[Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
Looping 1000 times took 0.860167 seconds
Result is [1.2317803 1.6187934 1.5227807 … 2.2077181 2.2996776 1.6232328]
Used the cpu

The errors on Windows 10 and Ubuntu 20.04 are identical. What am I doing wrong?