I used `model.compile(inputset, device=device, verbose=True, show_mlir=True)` in `FullyConnectedNeuralNetworkOnMNIST.ipynb` to print the circuit details, but I can't fully understand the output.
```
Computation Graph for _clear_forward_proxy
--------------------------------------------------------------------------------
%0 = _x_181 # EncryptedTensor<int4, shape=(1, 784)> ∈ [-1, 6]
%1 = [[1 0 1 .. ... .. 0 0 0]] # ClearTensor<int4, shape=(784, 392)> ∈ [-4, 4]
%2 = matmul(%0, %1) # EncryptedTensor<int12, shape=(1, 392)> ∈ [-1387, 458]
%3 = [-1 -1 -2 ... -1 -1 -1] # ClearTensor<int4, shape=(392,)> ∈ [-5, 1]
%4 = add(%2, %3) # EncryptedTensor<int12, shape=(1, 392)> ∈ [-1392, 457]
%5 = round_bit_pattern(%4, lsbs_to_remove=5, overflow_protection=False, exactness=Exactness.EXACT) # EncryptedTensor<int12, shape=(1, 392)> ∈ [-1376, 448] @ /features/fc0/Gemm.matmul_rounding
%6 = subgraph(%5) # EncryptedTensor<uint3, shape=(1, 392)> ∈ [0, 7]
%7 = [[ 0 1 1 ... -1 -1 1]] # ClearTensor<int4, shape=(392, 10)> ∈ [-5, 4]
%8 = matmul(%6, %7) # EncryptedTensor<int8, shape=(1, 10)> ∈ [-82, 101]
return %8
```
```
MLIR
--------------------------------------------------------------------------------
module {
func.func @_clear_forward_proxy(%arg0: tensor<1x784x!FHE.esint<12>>) -> tensor<1x10x!FHE.esint<9>> {
%cst = arith.constant dense<"0x0001......00101"> : tensor<784x392xi5>
%0 = "FHELinalg.matmul_eint_int"(%arg0, %cst) : (tensor<1x784x!FHE.esint<12>>, tensor<784x392xi5>) -> tensor<1x392x!FHE.esint<12>>
%cst_0 = arith.constant dense<"0x000E......F0F0F"> : tensor<392xi4>
%1 = "FHELinalg.add_eint_int"(%0, %cst_0) : (tensor<1x392x!FHE.esint<12>>, tensor<392xi4>) -> tensor<1x392x!FHE.esint<12>>
%2 = "FHELinalg.round"(%1) : (tensor<1x392x!FHE.esint<12>>) -> tensor<1x392x!FHE.esint<7>>
%cst_1 = arith.constant dense<"0x0000......0000"> : tensor<128xi64>
%3 = "FHELinalg.apply_lookup_table"(%2, %cst_1) : (tensor<1x392x!FHE.esint<7>>, tensor<128xi64>) -> tensor<1x392x!FHE.eint<9>>
%cst_2 = arith.constant dense<"0x1F010......0011F"> : tensor<392x10xi5>
%4 = "FHELinalg.to_signed"(%3) : (tensor<1x392x!FHE.eint<9>>) -> tensor<1x392x!FHE.esint<9>>
%5 = "FHELinalg.matmul_eint_int"(%4, %cst_2) : (tensor<1x392x!FHE.esint<9>>, tensor<392x10xi5>) -> tensor<1x10x!FHE.esint<9>>
return %5 : tensor<1x10x!FHE.esint<9>>
}
}
programmable_bootstrap_count: 2352
programmable_bootstrap_count_per_parameter: {
BootstrapKeyParam(polynomial_size=256, glwe_dimension=6, input_lwe_dimension=587, level=3, base_log=9, variance=0.000000): 1960
BootstrapKeyParam(polynomial_size=16384, glwe_dimension=1, input_lwe_dimension=862, level=2, base_log=15, variance=0.000000): 392
}
```
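To check my reading of the graph, here is a NumPy mock of the clear computation. The weight/bias constants are random stand-ins with the ranges shown in the graph, and the `subgraph` body is only my guess at the fused quantization + ReLU, not the actual compiled table:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-ins for the compiled constants (shapes/ranges from the graph)
w0 = rng.integers(-4, 5, size=(784, 392))   # %1
b0 = rng.integers(-5, 2, size=(392,))       # %3
w1 = rng.integers(-5, 5, size=(392, 10))    # %7

x = rng.integers(-1, 7, size=(1, 784))      # %0: int4 input

acc = x @ w0 + b0                           # %2, %4: 12-bit signed accumulator
rounded = ((acc + 16) >> 5) << 5            # %5: round_bit_pattern, lsbs_to_remove=5
q = np.clip(rounded >> 5, 0, 7)             # %6: my guess at the fused subgraph (TLU)
logits = q @ w1                             # %8: shape (1, 10)

print(logits.shape)  # (1, 10)
```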
- Correspondence between PBS and the computation graph
  - There are two types of PBS: the first uses smaller parameters (`polynomial_size=256`) and the second larger ones (`polynomial_size=16384`). Am I right that the first type corresponds to `round_bit_pattern` in the computation graph, and the second type to `%6 = subgraph(%5)` (the LUT operator fusing quantization, ReLU, and truncation)?
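If my guess is right, the counts in the dump add up, assuming one small-parameter PBS per removed bit and one large-parameter PBS per table lookup. This per-bit accounting is my assumption, not something the dump states:

```python
n_elements = 392        # tensor shape (1, 392)
lsbs_to_remove = 5      # from round_bit_pattern in the graph

small_pbs = n_elements * lsbs_to_remove  # hypothetical: one PBS per removed bit
large_pbs = n_elements                   # one PBS per lookup-table evaluation

assert small_pbs == 1960  # matches the polynomial_size=256 count
assert large_pbs == 392   # matches the polynomial_size=16384 count
```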
- `round_bit_pattern`
  - As I understand it, `round_bit_pattern` carries (rounds to the nearest multiple of `2**lsbs_to_remove`) and then truncates the low bits. How does this operation map onto TFHE? Why are the PBS parameters used for it so small? Does it change the ciphertext modulus or the plaintext modulus?
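Numerically, my understanding of `round_bit_pattern` with `lsbs_to_remove=5` is "round to the nearest multiple of 32"; a clear-domain sketch of that interpretation (not the library's actual code):

```python
def round_bit_pattern_clear(x, lsbs_to_remove):
    """Round x to the nearest multiple of 2**lsbs_to_remove (carry, then drop LSBs)."""
    unit = 1 << lsbs_to_remove
    half = 1 << (lsbs_to_remove - 1)
    return ((x + half) // unit) * unit  # floor division also handles negatives

# the ranges in the graph line up: [-1392, 457] -> [-1376, 448]
print(round_bit_pattern_clear(-1392, 5), round_bit_pattern_clear(457, 5))  # -1376 448
```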
- The meaning of the IR
  - Does the signature `(tensor<1x392x!FHE.esint<12>>) -> tensor<1x392x!FHE.esint<7>>` indicate that the plaintext modulus (message bit-width) decreased from 12 to 7 bits?
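My reading of the bit-width drop: after removing 5 LSBs, the value only needs 12 - 5 = 7 bits, and the rounded range shown in the graph indeed fits a 7-bit signed integer:

```python
# rounded range from the graph: [-1376, 448]; the following TLU consumes value >> 5
lo, hi = -1376 >> 5, 448 >> 5            # arithmetic shift: (-43, 14)
assert -(1 << 6) <= lo and hi < (1 << 6)  # fits esint<7>: [-64, 63]
print(lo, hi)  # -43 14
```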
- `FHELinalg.to_signed`
  - The documentation states that `FHELinalg.to_signed` casts an unsigned integer tensor to a signed one. Does this mean the backend TFHE operator for `FHELinalg.to_signed` only changes the encoding and decoding methods, without touching the ciphertext itself?
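For reference, my understanding of the cast is a pure reinterpretation of the two's-complement bit pattern, with no homomorphic work; a clear-domain sketch of that assumption:

```python
def to_signed_clear(value, bits):
    """Reinterpret an unsigned `bits`-bit pattern as a two's-complement signed value."""
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

# in this circuit the TLU output range is [0, 7], so the cast changes nothing here
print(to_signed_clear(7, 9), to_signed_clear(300, 9))  # 7 -212
```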