I want to do a very simple thing:
Given two vectors, I want to encrypt them and do some calculation, then decrypt the result and get the inner product between both vectors.
Can you recommend a library that can do this? Any other material would help me as well.
I found HElib, but I still don't know if that is the best I can do. Would you recommend this library? Do you know better ones for my purpose? I want it to be as fast as possible for the largest vector dimension possible.
I have only basic knowledge of crypto, so I would like to use the library as a black box as much as possible, without putting too much effort into the mathematics behind it.
Thanks for any help!
You can do exactly that with the homomorphic encryption library Pyfhel in Python 3. Just install it with pip install Pyfhel and create a simple demo:
from Pyfhel import Pyfhel, PyCtxt
he = Pyfhel() # Object in charge of all homomorphic operations
he.contextGen(10000) # Choose the maximum size of your values
he.keyGen(10000) # Generate your private/public key pair
# Now to the encryption/operation/decryption
v1 = [5.34, 3.44, -2.14, 9.13]
v2 = [1, 2.5, -2.1, 3]
p1 = [he.encrypt(val) for val in v1]
p2 = [he.encrypt(val) for val in v2]
pMul = [a*b for a,b in zip(p1, p2)]   # element-wise products, still encrypted
pScProd = pMul[0]                     # accumulate the encrypted sum
for p in pMul[1:]:
    pScProd += p
round(he.decrypt(pScProd), 3)
#> 45.824
Working on a project that involves evolving a feedforward neural network (FNN) topology at runtime, I want to outsource the NN output calculation to my GPU. Currently I'm using Unity (soon switching to plain C# libraries or something similar) with HLSL-like compute shaders.
Since I am very restricted by HLSL's syntax (no arrays/matrices with dynamic index ranges; dot-product and matrix functions only work on pre-existing types like float2, and those "vectors" have a maximum length of 4 (float4)), I am looking for a formula with which I can calculate the output of the FNN within a single computation (obviously still including a loop). My shader has the weights, input layer, biases, and structure of the NN, each as a separate structured buffer. Now I need a formula that calculates the output without dot products or matrix-vector multiplications, since those are very hard and tortuous to implement here. I've been trying to find such a formula for days, hopelessly...
Does anyone have a clue? Sadly there are not many resources on the internet concerning HLSL and this problem.
Thanks a lot!
EDIT:
Okay, I got some hints that my actual math-related question is not quite clear. Since I am kind of an artist myself, I made a little illustration:
Now, there are of course multiple ways to calculate the output O. I want to note the following:
v3 = v1*w1 + v2*w2
v4 = v1*w3 + v2*w4
v5 = v3*w5 + v4*w6 = w5*(v1*w1 + v2*w2) + w6*(v1*w3 + v2*w4)
This is an easy way to calculate the final output O in one step:
O1 = w5*(v1*w1 + v2*w2) + w6*(v1*w3 + v2*w4).
There are other quite cool ways to calculate the output in one step, e.g. by looking at the weights as matrices:
m1 = [[w1, w2], [w3, w4]] and m2 = [[w5, w6]].
This way we can calculate O as
L1 = m1 * I and O = m2 * L1
or in one step
O = m2*(m1*I)
which is the more elegant way. I would prefer it this way, but I can't use matrix-vector multiplications or any other linear algebra since I am restricted by my programming tools, so I have to stay with this shape:
O1 = w5*(v1*w1 + v2*w2) + w6*(v1*w3 + v2*w4).
Now, this would be really easy if I had a neural network with a fixed topology. However, since it evolves during runtime, I have to find a function/formula with which I can calculate O independently of the topology. All the information I have is ONE list of ALL weights (w=[w1, w2, w3, w4, w5, w6]), a list of inputs (i=[v1, v2]), and the structure of the NN (s=[2, 2, 1]: 2 input nodes, one hidden layer with 2 nodes, and one output node).
However, I can't think of an algorithm that calculates the output efficiently from the given information.
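To make the question concrete, here is the computation spelled out with nothing but scalar loops, sketched in Python (assuming the weights are stored layer by layer, destination node by destination node, exactly as in the w1..w6 example above); this is what would have to be expressed within HLSL's restrictions:
def forward(w, i, s):
    acts = list(i)                    # activations of the current layer
    offset = 0                        # read position in the flat weight list
    for layer in range(1, len(s)):
        nxt = []
        for node in range(s[layer]):          # each node of the next layer
            total = 0.0
            for prev in range(s[layer - 1]):  # each node of the previous layer
                total += acts[prev] * w[offset]
                offset += 1
            nxt.append(total)         # a bias/activation function would go here
        acts = nxt
    return acts

# With w=[w1, w2, w3, w4, w5, w6], i=[v1, v2] and s=[2, 2, 1] this
# reproduces O1 = w5*(v1*w1 + v2*w2) + w6*(v1*w3 + v2*w4).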
I am a newbie in OpenMDAO. Recently I have been trying to implement a dummy wing optimization problem to learn OpenMDAO, and I have come across a weird problem that I wanted to ask about. I am using a B-spline to define the twist and t/c distributions. The optimization setup works when I use COBYLA, DifferentialEvolution, or a DOEDriver as the driver, but when I set SciPy's SLSQP, the control points for these splines do not change during iterations. What could be the problem?
Below is the main section where I define the problem...
import numpy as np
import openmdao.api as om

# VSP and AVL are user-defined components (defined elsewhere, not shown here).

if __name__ == '__main__':
    driver = om.ScipyOptimizeDriver()
    driver.options['optimizer'] = 'SLSQP'
    # Alternative drivers that were also tried (the last assignment wins):
    driver = om.DOEDriver(om.LatinHypercubeGenerator(samples=10))
    recorder_name = 'cases'
    recorder = om.SqliteRecorder(recorder_name + '.sql')
    driver = om.DifferentialEvolutionDriver()
    driver.options['max_gen'] = 10

    min_step = 0.01
    n_cp = 4
    n_vsp_segment = 4

    ivc = om.IndepVarComp()
    ivc.add_output('Mach', 0.2)
    ivc.add_output('b', 7.)
    ivc.add_output('cr', 3.)
    ivc.add_output('taper', 0.5)
    ivc.add_output('twist_cp', np.ones(n_cp))
    ivc.add_output('tc_cp', np.ones(n_cp) * 0.1)

    Scomp = om.SplineComp(method='bsplines',
                          x_interp_val=np.linspace(0., 1., int(n_vsp_segment)),
                          num_cp=n_cp, interp_options={"order": min(n_cp, 4)})
    Scomp.add_spline(y_cp_name='twist', y_interp_name='twist_vsp')
    Scomp.add_spline(y_cp_name='tc', y_interp_name='tc_vsp')

    model = om.Group()
    model.add_subsystem('IVC', ivc)
    model.add_subsystem('spline', Scomp)
    model.add_subsystem('VSP', VSP(n_vsp_segment=n_vsp_segment))
    model.add_subsystem('AVL', AVL())
    model.add_subsystem('obj', om.ExecComp('obj = (CD0+CDi)*100+0.1/tr'))
    model.add_subsystem('cons', om.ExecComp('c1 = Sref-40.'))

    model.connect('IVC.twist_cp', 'spline.twist')
    model.connect('spline.twist_vsp', 'VSP.twist')
    model.connect('IVC.tc_cp', 'spline.tc')
    model.connect('spline.tc_vsp', 'VSP.tc')
    model.connect('IVC.Mach', ['VSP.Mach', 'AVL.Mach'])
    model.connect('IVC.b', ['VSP.b', 'AVL.b'])
    model.connect('IVC.cr', 'VSP.cr')
    model.connect('IVC.taper', 'VSP.taper')
    model.connect('VSP.CD0', 'obj.CD0')
    model.connect('VSP.Sref', ['AVL.Sref', 'cons.Sref'])
    model.connect('VSP.Cref', 'AVL.Cref')
    model.connect('VSP.MOMref', 'AVL.MOMref')
    model.connect('VSP.tr', 'obj.tr')
    model.connect('AVL.CDi', 'obj.CDi')

    prob = om.Problem(model, driver)
    prob.model.add_design_var('IVC.tc_cp', lower=0.05, upper=0.1, indices=[1, 2, 3])
    prob.model.add_design_var('IVC.twist_cp', lower=-10., upper=2., indices=[1, 2, 3])
    prob.model.add_design_var('IVC.cr', lower=2, upper=6)
    prob.model.add_design_var('IVC.b', lower=10, upper=20)
    prob.model.add_design_var('IVC.taper', lower=0.2, upper=0.9)
    prob.model.add_constraint('cons.c1', upper=0)
    prob.model.add_objective('obj.obj', scaler=100)

    prob.setup(check=True)
    prob.set_val('IVC.cr', 4.)
    prob.set_val('IVC.b', 10.)
    prob.set_val('IVC.taper', 0.8)
    prob.driver.options['debug_print'] = ['desvars', 'ln_cons', 'nl_cons', 'objs']
    prob.run_driver()
Your problem seems to work with gradient-free methods but not with a gradient-based one, so it's a safe bet that there is a problem with the derivatives.
I'm going to assume that, since you're using VSP and AVL, you're computing derivatives with finite differences. You likely need different FD settings to get decent derivative approximations, and you probably want to use the approx_totals method at the top level of your problem.
You will likely need to experiment with larger FD step sizes and with absolute vs. relative steps. You can get a visualization of what your initial Jacobian looks like using the OpenMDAO scaling report. Your problem doesn't look badly scaled at first glance, but the Jacobian visualization in that report might be helpful as you test FD step sizes.
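For example, here is a minimal sketch of coarser FD settings applied at the top of the model (the step size, form, and relative stepping are just starting points to experiment with, not recommendations):
# Approximate all total derivatives across the model with finite differences.
# Tune step/form/step_calc until the derivatives stabilize; call before setup().
prob.model.approx_totals(method='fd', step=1e-4, form='central', step_calc='rel')
prob.setup(check=True)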
Just playing around with Julia (1.0): one thing that I use a lot in Python/numpy/Matlab is the squeeze function to drop singleton dimensions.
I found out that one way to do this in Julia is:
a = rand(3, 3, 1);
a = dropdims(a, dims = tuple(findall(size(a) .== 1)...))
The second line seems a bit cumbersome and not easy to read and parse instantly (this could also be my bias that I bring from other languages). However, I wonder if this is the canonical way to do this in Julia?
The actual answer to this question surprised me. What you are asking could be rephrased as:
why doesn't dropdims(a) remove all singleton dimensions?
I'm going to quote Tim Holy from the relevant issue here:
it's not possible to have squeeze(A) return a type that the compiler can infer---the sizes of the input matrix are a runtime variable, so there's no way for the compiler to know how many dimensions the output will have. So it can't possibly give you the type stability you seek.
Type stability aside, there are also some other surprising implications of what you have written. For example, note that:
julia> f(a) = dropdims(a, dims = tuple(findall(size(a) .== 1)...))
f (generic function with 1 method)
julia> f(rand(1,1,1))
0-dimensional Array{Float64,0}:
0.9939103383167442
In summary, including such a method in Base Julia would encourage users to use it, resulting in potentially type-unstable code that, under some circumstances, will not be fast (something the core developers are strenuously trying to avoid). In languages like Python, rigorous type-stability is not enforced, and so you will find such functions.
Of course, nothing stops you from defining your own method as you have. And I don't think you'll find a significantly simpler way of writing it. For example, the proposition for Base that was not implemented was the method:
function squeeze(A::AbstractArray)
    singleton_dims = tuple((d for d in 1:ndims(A) if size(A, d) == 1)...)
    return squeeze(A, singleton_dims)
end
Just be aware of the potential implications of using it.
Let me simply add that "uncontrolled" dropdims (drop any singleton dimension) is a frequent source of bugs. For example, suppose you have some loop that asks for a data array A from some external source, and you run R = sum(A, dims=2) on it and then get rid of all singleton dimensions. But then suppose that one time out of 10000, your external source returns an A for which size(A, 1) happens to be 1: boom, suddenly you're dropping more dimensions than you intended and are at risk of grossly misinterpreting your data.
If you specify those dimensions manually instead (e.g., dropdims(R, dims=2)) then you are immune from bugs like these.
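To see this failure mode concretely in a language that does ship such a function, here is the same trap sketched with numpy, whose squeeze drops every singleton axis:
import numpy as np

A = np.random.rand(5, 3, 4)
R = np.squeeze(A.sum(axis=1, keepdims=True))   # shape (5, 4), as intended

A = np.random.rand(1, 3, 4)                    # the 1-in-10000 case
R = np.squeeze(A.sum(axis=1, keepdims=True))   # shape (4,): axis 0 silently dropped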
You can get rid of tuple in favor of a trailing comma:
dropdims(a, dims = (findall(size(a) .== 1)...,))
I'm a bit surprised at Colin's revelation; surely something relying on reshape is type stable? (Plus, as a bonus, it returns a view rather than a copy.)
julia> function squeeze(A::AbstractArray)
           keepdims = Tuple(i for i in size(A) if i != 1)
           return reshape(A, keepdims)
       end;
julia> a = randn(2,1,3,1,4,1,5,1,6,1,7);
julia> size( squeeze(a) )
(2, 3, 4, 5, 6, 7)
No?
I'm trying to calculate the volume of an unstructured grid using mayavi and tvtk. My idea was to tetrahedralize the point cloud by means of the Delaunay3D filter, then somehow extract the tetrahedra from the resulting dataset while ignoring other cell types such as lines and triangles.
But how can I accomplish this? My Python code so far looks as follows:
import numpy as np
from mayavi import mlab
x, y, z = np.random.random((3, 100))
data = x**2 + y**2 + z**2
src = mlab.pipeline.scalar_scatter(x, y, z, data)
field = mlab.pipeline.delaunay3d(src)
Can I use the field object to retrieve the polyhedra's vertices?
Thanks in advance.
frank.
Is this the best way to go about it? scipy.spatial has Delaunay functionality as well. Having recently worked with both myself, I would note that scipy is a much lighter dependency, easier to use, and better documented. Note that either method will work on the convex hull of the point cloud, which may not be what you want. The scipy version also easily allows you to compute the boundary primitives, amongst other things, which may be useful for further processing.
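For instance, here is a rough sketch of the volume computation via scipy, keeping the convex-hull caveat above in mind (the volume of each tetrahedron is the absolute determinant of its edge vectors divided by 6):
import numpy as np
from scipy.spatial import Delaunay

points = np.random.random((100, 3))
tri = Delaunay(points)                  # tetrahedralize the point cloud
tets = points[tri.simplices]            # (n_tets, 4, 3) corner coordinates
edges = tets[:, 1:] - tets[:, :1]       # three edge vectors per tetrahedron
volume = np.abs(np.linalg.det(edges)).sum() / 6.0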
As an exercise I am working on a parallel implementation of the Sieve of Eratosthenes. As part of that I am implementing a sequence of bitmaps, using one bit per number to save memory. Reading bits one at a time appears to work fine, but setting them is slow, especially with large binaries.
getBit(Bin, N, Size)->
    R=Size-N-1,
    <<_:N,Bit:1,_:R>> = Bin,
    Bit.

setBit(Bin, N, Size)->
    R=Size-N-1,
    <<A:N,_:1,B:R>> = Bin,
    <<A:N,1:1,B:R>>.
Is there a way to do this efficiently in functional Erlang, perhaps similar to how arrays work?
I have read about hipe_bifs:bytearray_update but would prefer to keep my coding style functional.
You can store your bitmaps as integers (if they are not longer than ~1000 bits):
-module(z).
-export([get_bit/2, set_bit/2]).

%% N is the bitmap (an arbitrary-size integer), B is the bit index.
get_bit(N, B) -> N band bit_to_int(B) > 0.
set_bit(N, B) -> N bor bit_to_int(B).
bit_to_int(B) -> 1 bsl B.
You might try your version without passing the Size, padding to whole bytes instead:
getBit(Bin, N)->
    <<_:N/bitstring,Bit:1,_/bitstring>> = Bin,
    Bit.

setBit(Bin, N)->
    <<A:N,_:1,B/bitstring>> = Bin,
    <<A:N,1:1,B/bitstring>>.   % B needs the bitstring type on construction too

%OR:
setBit(Bin, N)->
    PSize = (8 - ((N + 1) rem 8)) rem 8,   % 0 when the tail is byte-aligned
    <<A:N,_:1,Pad:PSize,B/bytes>> = Bin,
    <<A:N,1:1,Pad:PSize,B/bytes>>.
Can be better or worse :-)
Generally one cannot do better than O(log n) complexity for both the get and set operations, since Erlang is a functional language. You can achieve O(log n) using some type of tree; the array, dict, gb_trees and gb_sets modules do this.
So if you need to make it as fast as possible you have to go imperative: you could try the hipe_bifs module, the process dictionary, or ETS tables.