================
Advanced Usage
================

MPS Simulator
----------------

(Support is still experimental.)

``MPSCircuit`` provides the same set of APIs as ``Circuit``;
the only new line required is the one setting the bond dimension for the new simulator.

.. code-block:: python

    c = tc.MPSCircuit(n)
    c.set_split_rules({"max_singular_values": 50})

The larger the bond dimension we set, the better the approximation ratio (and, of course, the more computational cost we pay).
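
The role of the bond dimension can be illustrated with a plain NumPy truncated SVD. This is a hypothetical sketch of the underlying idea, not the ``MPSCircuit`` internals: approximating a bipartitioned state by its largest singular values, the error shrinks as more values are kept.

```python
import numpy as np

# Hypothetical illustration, not the MPSCircuit internals: approximate a
# bipartitioned state by keeping only the largest singular values.
rng = np.random.default_rng(0)
psi = rng.normal(size=(8, 8))
psi /= np.linalg.norm(psi)

u, s, vh = np.linalg.svd(psi, full_matrices=False)

def truncation_error(chi):
    # chi plays the role of the bond dimension (max_singular_values)
    approx = (u[:, :chi] * s[:chi]) @ vh[:chi, :]
    return np.linalg.norm(psi - approx)

# error decreases monotonically; chi = 8 keeps everything and is exact
errors = [truncation_error(chi) for chi in (1, 2, 4, 8)]
```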

Split Two-qubit Gates
-------------------------

The two-qubit gates applied to the circuit can be decomposed via SVD, which may further improve the optimality of the contraction path finding.

The ``split`` configuration can be set at the circuit level or at the gate level.

.. code-block:: python

    split_conf = {
        "max_singular_values": 2,  # how many singular values are kept
        "fixed_choice": 1,  # 1 for the normal one, 2 for the swapped one
    }

    c = tc.Circuit(nwires, split=split_conf)

    # or

    c.exp1(
        i,
        (i + 1) % nwires,
        theta=paramc[2 * j, i],
        unitary=tc.gates._zz_matrix,
        split=split_conf,
    )

Note that ``max_singular_values`` must be specified to make the whole procedure static and thus jittable.

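The decomposition behind this option can be sketched in plain NumPy. This is an illustration of the general SVD-split technique, not the library internals: exponentiating the ZZ coupling yields a two-qubit gate of operator Schmidt rank 2, so keeping two singular values loses nothing.

```python
import numpy as np

# Illustration of the SVD split in plain NumPy (not the library internals).
# exp(-i theta ZZ) = cos(theta) I - i sin(theta) ZZ, since (ZZ)^2 = I.
theta = 0.7
zz = np.diag([1.0, -1.0, -1.0, 1.0])  # kron(Z, Z)
gate = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * zz

# Regroup indices so rows carry qubit 0's (out, in) legs and columns
# carry qubit 1's (out, in) legs, then inspect the singular values.
m = gate.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
s = np.linalg.svd(m, compute_uv=False)
# Only two singular values are nonzero, so a split keeping two
# singular values reproduces this gate exactly.
```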
Jitted Function Save/Load
-----------------------------

To reuse a jitted function, we can save it on disk with the TensorFlow `SavedModel <https://www.tensorflow.org/guide/saved_model>`_ machinery. That is to say, only jitted quantum functions on the TensorFlow backend can be saved to disk.

For quantum functions on the JAX backend, one can first transform them into tf-backend functions via the JAX experimental support `jax2tf <https://github.com/google/jax/tree/main/jax/experimental/jax2tf>`_.

We wrap the tf-backend ``SavedModel`` machinery as the easy-to-use functions :py:meth:`tensorcircuit.keras.save_func` and :py:meth:`tensorcircuit.keras.load_func`.

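The round trip that these helpers wrap can be sketched with plain TensorFlow. This is a minimal illustration of the SavedModel mechanism with an ordinary ``tf.function`` standing in for a jitted circuit function; the module attribute and path names are arbitrary.

```python
import os
import tempfile

import tensorflow as tf

# Minimal sketch of the SavedModel round trip; an ordinary tf.function
# stands in here for a jitted quantum function.
module = tf.Module()

@tf.function(input_signature=[tf.TensorSpec([2], tf.float32)])
def f(x):
    return tf.reduce_sum(x ** 2)

module.f = f
path = os.path.join(tempfile.mkdtemp(), "saved_f")
tf.saved_model.save(module, path)

restored = tf.saved_model.load(path)
y = restored.f(tf.constant([3.0, 4.0]))  # same result as calling f directly
```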
Parameterized Measurements
-----------------------------

For the plain measurement API on a ``tc.Circuit``, e.g. ``c = tc.Circuit(n=3)``, if we want to evaluate the expectation :math:`\langle Z_1Z_2 \rangle`, we need to call the API as ``c.expectation((tc.gates.z(), [1]), (tc.gates.z(), [2]))``.

In some cases, we may want to tell the software what to measure in a tensor fashion. For example, to obtain the above expectation, we can use the API :py:meth:`tensorcircuit.templates.measurements.parameterized_measurements`.

.. code-block:: python

    c = tc.Circuit(3)
    z1z2 = tc.templates.measurements.parameterized_measurements(
        c, tc.array_to_tensor([0, 3, 3, 0]), onehot=True
    )  # 1

This API corresponds to measuring :math:`I_0Z_1Z_2I_3`, where 0, 1, 2, 3 stand for the local I, X, Y, and Z operators, respectively.

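The integer encoding can be cross-checked in plain NumPy. This is an independent illustration, not the tensorcircuit API: build the Pauli string from the codes with Kronecker products and take the expectation in the all-zeros state, which gives the value 1 noted in the comment above.

```python
from functools import reduce

import numpy as np

# Cross-check of the integer encoding in plain NumPy (not the
# tensorcircuit API): 0, 1, 2, 3 index the local I, X, Y, Z operators.
paulis = [
    np.eye(2),
    np.array([[0.0, 1.0], [1.0, 0.0]]),  # X
    np.array([[0.0, -1j], [1j, 0.0]]),   # Y
    np.diag([1.0, -1.0]),                # Z
]

def pauli_string(codes):
    # Tensor (Kronecker) product of one local operator per qubit
    return reduce(np.kron, (paulis[c] for c in codes))

psi = np.zeros(16)
psi[0] = 1.0  # |0000>, the state prepared by an empty circuit
op = pauli_string([0, 3, 3, 0])   # I Z Z I
expval = np.real(psi @ op @ psi)  # <0000| I Z Z I |0000> = 1
```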
Sparse Matrix
----------------

We support sparse matrices in COO format, since most backends only support this format. Some common backend methods for sparse matrices are listed below:

.. code-block:: python

    def sparse_test():
        m = tc.backend.coo_sparse_matrix(
            indices=np.array([[0, 1], [1, 0]]),
            values=np.array([1.0, 1.0]),
            shape=[2, 2],
        )
        n = tc.backend.convert_to_tensor(np.array([[1.0], [0.0]]))
        print("is sparse: ", tc.backend.is_sparse(m), tc.backend.is_sparse(n))
        print("sparse matmul: ", tc.backend.sparse_dense_matmul(m, n))

    for K in ["tensorflow", "jax", "numpy"]:
        with tc.runtime_backend(K):
            print("using backend: ", K)
            sparse_test()

The sparse matrix is especially useful for evaluating a Hamiltonian expectation on the circuit, where the sparse representation strikes a good tradeoff between space and time.
Please refer to :py:meth:`tensorcircuit.templates.measurements.sparse_expectation` for more detail.

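As a backend-independent illustration of why COO suits Hamiltonians well (plain SciPy here, not the tensorcircuit backend API), a diagonal ZZ-type term stores only its nonzero entries, and the expectation reduces to one sparse matrix-vector product:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Illustration in plain SciPy (not the tensorcircuit backend API): a
# kron(Z, Z) term is diagonal, so COO stores 4 nonzeros, not 16 entries.
diag = np.array([1.0, -1.0, -1.0, 1.0])  # eigenvalues of kron(Z, Z)
h = coo_matrix((diag, (np.arange(4), np.arange(4))), shape=(4, 4))

psi = np.full(4, 0.5)     # the |++> state as a dense vector
expval = psi @ (h @ psi)  # <++| ZZ |++> = 0
```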
For different representations for evaluating Hamiltonian expectations in tensorcircuit, please refer to :doc:`tutorials/tfim_vqe_diffreph`.

Randoms, Jit, Backend Agnostic, and Their Interplay
--------------------------------------------------------

.. code-block:: python

    import tensorcircuit as tc

    K = tc.set_backend("tensorflow")
    K.set_random_state(42)

    @K.jit
    def r():
        return K.implicit_randn()

    print(r(), r())  # different, correct

.. code-block:: python

    import tensorcircuit as tc

    K = tc.set_backend("jax")
    K.set_random_state(42)

    @K.jit
    def r():
        return K.implicit_randn()

    print(r(), r())  # the same, wrong

.. code-block:: python

    import tensorcircuit as tc
    import jax

    K = tc.set_backend("jax")
    key = K.set_random_state(42)

    @K.jit
    def r(key):
        K.set_random_state(key)
        return K.implicit_randn()

    key1, key2 = K.random_split(key)

    print(r(key1), r(key2))  # different, correct

Therefore, a unified, jittable, and backend-agnostic random infrastructure can be formulated as follows:

.. code-block:: python

    import tensorcircuit as tc
    import jax

    K = tc.set_backend("tensorflow")

    def ba_key(key):
        if tc.backend.name == "tensorflow":
            return None
        if tc.backend.name == "jax":
            return jax.random.PRNGKey(key)
        raise ValueError("unsupported backend %s" % tc.backend.name)

    @K.jit
    def r(key=None):
        if key is not None:
            K.set_random_state(key)
        return K.implicit_randn()

    key = ba_key(42)
    key1, key2 = K.random_split(key)

    print(r(key1), r(key2))

A neater way to achieve the same is as follows:

.. code-block:: python

    key = K.get_random_state(42)

    @K.jit
    def r(key):
        K.set_random_state(key)
        return K.implicit_randn()

    key1, key2 = K.random_split(key)

    print(r(key1), r(key2))

It is worth noting that since ``Circuit.unitary_kraus`` and ``Circuit.general_kraus`` call the ``implicit_rand*`` APIs, the correct usage of those methods follows the same pattern as above.

One may wonder why random numbers are dealt with in such a complicated way; please refer to the `Jax design note <https://github.com/google/jax/blob/main/docs/design_notes/prng.md>`_ for some hints.

If vmap is also involved apart from jit, I currently find no way to maintain backend agnosticism, as TensorFlow seems to have no support for vmap over random keys (ping me on GitHub if you think you have a way to do this). I strongly recommend that users use the JAX backend in the vmap+random setup.
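
The key-passing discipline itself is independent of any quantum library. A plain NumPy sketch, with hypothetical helper names ``random_split`` and ``r`` (they are not tensorcircuit or JAX APIs), shows the idea of threading explicit keys through pure functions instead of relying on hidden global state:

```python
import numpy as np

# Backend-neutral sketch of the key-passing pattern; random_split and r
# are hypothetical helpers, not tensorcircuit or JAX APIs.
def random_split(key):
    # Deterministically derive two independent child keys from a parent key
    rng = np.random.default_rng(key)
    return int(rng.integers(0, 2**31)), int(rng.integers(0, 2**31))

def r(key):
    # A "pure" random function: the same key always yields the same draw
    return np.random.default_rng(key).standard_normal()

key1, key2 = random_split(42)
a, b = r(key1), r(key2)
```

Because the key fully determines the output, such functions stay reproducible under jit-style caching: re-calling with the same key gives the same number, and fresh randomness is obtained only by splitting new keys.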