.. docs/source/advance.rst
the only new line is to set the bond dimension for the new simulator.

.. code-block:: python

    c = tq.MPSCircuit(n)
    c.set_split_rules({"max_singular_values": 50})

The larger the bond dimension we set, the better the approximation ratio (and, of course, the more computational cost we pay).
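The tradeoff behind the bond dimension can be illustrated with plain NumPy, independent of TyxonQ: truncating an SVD to its ``chi`` largest singular values is exactly the approximation an MPS simulator makes at each bond, and the reconstruction error shrinks as ``chi`` grows.

```python
import numpy as np

# A random 16x16 matrix playing the role of a bipartite wavefunction.
rng = np.random.default_rng(0)
psi = rng.normal(size=(16, 16))
psi /= np.linalg.norm(psi)

u, s, vh = np.linalg.svd(psi, full_matrices=False)

def truncate(chi):
    """Keep only the chi largest singular values (the 'bond dimension')."""
    return (u[:, :chi] * s[:chi]) @ vh[:chi]

# Larger bond dimension -> smaller truncation error.
errors = [np.linalg.norm(psi - truncate(chi)) for chi in (2, 4, 8, 16)]
print(errors)
```

With ``chi`` equal to the full rank the reconstruction is exact, which mirrors why an MPS simulation with a large enough bond dimension becomes exact.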
The two-qubit gates applied on the circuit can be decomposed via SVD, which may

.. code-block:: python

    split_conf = {
        # ...
        "fixed_choice": 1,  # 1 for normal one, 2 for swapped one
    }

    c = tq.Circuit(nwires, split=split_conf)

    # or

    c.exp1(
        i,
        (i + 1) % nwires,
        theta=paramc[2 * j, i],
        unitary=tq.gates._zz_matrix,
        split=split_conf,
    )
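For reference, ``tq.gates._zz_matrix`` is assumed here to be the matrix of :math:`Z \otimes Z`, as in similar simulators. Since that matrix is diagonal, the exponentiated two-qubit gate that ``exp1`` applies is also diagonal; a NumPy sketch (the sign convention in the exponent varies between libraries):

```python
import numpy as np

Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)  # diagonal with entries [1, -1, -1, 1]

theta = 0.3
# Because ZZ is diagonal, its matrix exponential is elementwise exp on the diagonal.
gate = np.diag(np.exp(1j * theta * np.diag(ZZ)))
print(np.round(np.diag(gate), 6))
```

The resulting gate is a product of phases :math:`e^{\pm i\theta}` on the computational basis states, which is why such exponential gates split so cheaply under SVD.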
Note ``max_singular_values`` must be specified to make the whole procedure static

Jitted Function Save/Load
-----------------------------

To reuse the jitted function, we can save it on the disk via support from PyTorch's `TorchScript <https://pytorch.org/docs/stable/jit.html>`_.

We provide easy-to-use functions :py:meth:`tyxonq.torchnn.save_func` and :py:meth:`tyxonq.torchnn.load_func`.
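A minimal sketch of the save/load round trip using plain TorchScript; the ``save_func``/``load_func`` helpers above are assumed to wrap machinery like this, and ``f`` here is only a toy stand-in for a jitted quantum function.

```python
import os
import tempfile
import torch

# Toy stand-in for a jitted quantum function (hypothetical example function).
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.cos(x).sum()

# Script it, save it to disk, and load it back.
scripted = torch.jit.script(f)
path = os.path.join(tempfile.gettempdir(), "jitted_func.pt")
torch.jit.save(scripted, path)
loaded = torch.jit.load(path)

x = torch.linspace(0.0, 1.0, 8)
print(torch.allclose(f(x), loaded(x)))  # True
```

The loaded object is self-contained, so it can be evaluated later without re-tracing or re-compiling the original Python function.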
Parameterized Measurements
-----------------------------

For the plain measurements API on a ``tq.Circuit``, e.g. ``c = tq.Circuit(n=3)``, if we want to evaluate the expectation :math:`\langle Z_1 Z_2 \rangle`, we need to call the API as ``c.expectation((tq.gates.z(), [1]), (tq.gates.z(), [2]))``.

In some cases, we may want to tell the software what to measure, but in a tensor fashion. For example, to get the above expectation, we can use the following API: :py:meth:`tyxonq.templates.measurements.parameterized_measurements`.
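To make concrete what such an expectation evaluates, here is a plain NumPy check (independent of TyxonQ) of :math:`\langle Z_1 Z_2 \rangle` on a 3-qubit GHZ state:

```python
import numpy as np

# 3-qubit statevector (|000> + |111>)/sqrt(2); qubits indexed 0, 1, 2.
psi = np.zeros(8)
psi[0] = psi[7] = 1 / np.sqrt(2)

Z = np.diag([1.0, -1.0])
I = np.eye(2)

# <Z_1 Z_2>: identity on qubit 0, Z on qubits 1 and 2.
op = np.kron(I, np.kron(Z, Z))
expval = psi.conj() @ op @ psi
print(expval)  # 1.0 for the GHZ state
```

Both basis states in the superposition have :math:`Z_1 Z_2` eigenvalue :math:`+1` ((+1)(+1) and (-1)(-1)), so the expectation is exactly 1.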
The sparse matrix is especially useful to evaluate a Hamiltonian expectation on the circuit, where the sparse matrix representation has a good tradeoff between space and time.
Please refer to :py:meth:`tyxonq.templates.measurements.sparse_expectation` for more detail.

For different representations to evaluate Hamiltonian expectation in tyxonq, please refer to :doc:`tutorials/tfim_vqe_diffreph`.
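As a backend-neutral illustration of why sparsity pays off (plain SciPy here, not the TyxonQ API), a sparse Hamiltonian expectation reduces to a single sparse matrix-vector product followed by an inner product:

```python
import numpy as np
from scipy import sparse

# Sparse diagonal Hamiltonian H = Z0 Z1 on 2 qubits.
H = sparse.diags([1.0, -1.0, -1.0, 1.0]).tocsr()

# Bell state (|00> + |11>)/sqrt(2).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# <psi|H|psi> without ever densifying H.
expval = psi.conj() @ (H @ psi)
print(expval)  # 1.0
```

For an n-qubit Hamiltonian built from a few Pauli strings, the sparse matrix has :math:`O(2^n)` nonzeros per term instead of :math:`O(4^n)` dense entries, which is exactly the space/time tradeoff mentioned above.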
Randoms, Jit, Backend Agnostic, and Their Interplay
---------------------------------------------------

The interplay between randomness and JIT compilation requires careful handling, especially when aiming for reproducibility. PyTorch uses a stateful pseudo-random number generator (PRNG). To ensure reproducibility in a JIT-compiled function, the random state must be managed explicitly.

.. code-block:: python

    import tyxonq as tq
    import torch

    K = tq.set_backend("pytorch")

    @K.jit
    def r(generator):
        return torch.randn(1, generator=generator)

    g1 = torch.Generator().manual_seed(42)
    g2 = torch.Generator().manual_seed(42)
    print(r(g1), r(g1))  # same, correct
    print(r(g2))  # same as the first call, correct

To get different random numbers, you must use different generator states.

.. code-block:: python

    g = torch.Generator().manual_seed(42)
    # Two calls with the same generator will produce the same result if the function is jitted.
    print(r(g), r(g))
A neater approach is as follows. TyxonQ's backend provides helper functions to manage this: ``K.get_random_state`` returns a ``torch.Generator`` instance, and ``K.random_split`` creates new independent generator objects.

.. code-block:: python

    key = K.get_random_state(42)

    @K.jit
    def r(key):
        # no K.set_random_state needed inside, since the generator is passed explicitly
        return K.implicit_randn(generator=key)

    key1, key2 = K.random_split(key)
    print(r(key1), r(key2))

This paradigm is crucial when using stochastic elements in your circuits, such as ``Circuit.unitary_kraus`` and ``Circuit.general_kraus``, inside a JIT-compiled function.
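The same explicit-state discipline applies when sampling which Kraus operator a channel applies. A backend-neutral NumPy sketch (a standard bit-flip channel, not the TyxonQ internals) shows that identical explicit generator states give identical, reproducible trajectories:

```python
import numpy as np

# Kraus operators of a bit-flip channel with flip probability p.
p = 0.2
K0 = np.sqrt(1 - p) * np.eye(2)
K1 = np.sqrt(p) * np.array([[0.0, 1.0], [1.0, 0.0]])

def apply_channel(psi, rng):
    """Monte-Carlo apply one Kraus branch, choosing it with the explicit rng state."""
    probs = [np.linalg.norm(K @ psi) ** 2 for K in (K0, K1)]
    idx = rng.choice(2, p=probs)
    out = (K0 if idx == 0 else K1) @ psi
    return out / np.linalg.norm(out)

psi = np.array([1.0, 0.0])
rng1 = np.random.default_rng(7)
rng2 = np.random.default_rng(7)
# Identical explicit states -> identical sampled trajectories.
print(np.allclose(apply_channel(psi, rng1), apply_channel(psi, rng2)))  # True
```

Passing the generator in as an argument, rather than relying on hidden global state, is what makes such stochastic functions safe to JIT-compile and reproduce.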