Pyfhel subtraction - encryption

I am trying to use the Pyfhel library to perform some operations on an encrypted integer list. But when performing subtraction where negative values are expected, I get a different value.
"""Import all the packages useful for the Demo.
#-Pyfhel is useful to generate keys, encrypt and decrypt.
#-PyPtxt is useful to tranform the input vectors into plain text objects that could be encrypted.
#-PyCtxt is useful to tranform the plain text object in PyCtxt object that are encrypted (Cypher texts). PyCtxt can be add, multiply etc with homeomorphic operations."""
from Pyfhel import Pyfhel
from PyPtxt import PyPtxt
from PyCtxt import PyCtxt
"""Other imports useful for the demo."""
from itertools import izip
import itertools
from operator import sub
import numpy as np
import matplotlib.pyplot as plt
import sys
import argparse
import copy
import datetime
import os
#Instantiate a Pyfhel object called HE.
HE = Pyfhel()
print("******Generation of the keys for encryption******")
#Create the Key Generator parameters.
KEYGEN_PARAMS={ "p":257, "r":1,
"d":1, "c":2,
"sec":80, "w":64,
"L":10, "m":-1,
"R":3, "s":0,
"gens":[], "ords":[]}
"""Print the Key Generator parameters to let the user knows how his vectors will be encrypted."""
print(" Running KeyGen with params:")
print(KEYGEN_PARAMS)
"""Generate the keys that will be use to encrypted the vectors. The generation of the keys uses the Key Generator parameters. Then print a message to inform the user that the key generation has been completed."""
HE.keyGen(KEYGEN_PARAMS)
print(" KeyGen completed")
var1 = HE.encrypt(PyPtxt([130], HE))
var2 = HE.encrypt(PyPtxt([10], HE))
xyz = var2 - var1
abc = HE.decrypt(xyz)
abc[0][0] # 137
print(abc[0][0] - 257) # output: -120
As the code above shows, I noticed that if I subtract the value of 'p' used while generating the keys, I get the expected output, but that's not much help, especially when the difference is more than 257.
Can anyone please let me know if this is the expected behavior, or what could be done to obtain the expected output?
Thanks!
(I couldn't add the related tags, but this is about Homomorphic Encryption using the Pyfhel library, a Python implementation of HElib.)

I can confirm that this is the expected behaviour. All the operations you perform are modulo p, therefore encrypted values will always stay in the interval [0, p-1]. Note that, even though Pyfhel supports subtraction, it does not support negative numbers in its current implementation.
Nevertheless, a major upgrade is on the way, which will solve this issue. Stay tuned!

print(abc[0][0] - p) yields the expected output because we are working modulo p.
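In the meantime, since every result is reduced modulo p, you can recover negative values with a centred lift: treat any decrypted value above p//2 as negative. A minimal sketch in plain Python (not part of Pyfhel's API), which only works while the true result stays within roughly (-p/2, p/2):
def centered_lift(value, p=257):
    """Interpret a decrypted value in [0, p-1] as a signed integer."""
    return value - p if value > p // 2 else value

print(centered_lift(137))  # -120, the expected result of 10 - 130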

Related

How to avoid a RuntimeError when calling __dict__ on a module?

This error appears with some big modules, such as matplotlib. For example, the expression:
import importlib
obj = importlib.import_module('matplotlib')
obj_entries = obj.__dict__
Between runs, the length of obj_entries can vary from 108 to 157 (expected) entries. In particular, pyplot can be missing, as can some other submodules.
It works reliably during manual debugging, with a len() computation after the dict extraction, but in automated runs it does not work well.
The following error occurs:
RuntimeError: dictionary changed size during iteration
python-BaseException
I am using plain Python 3.10 on Windows; swapping versions changes nothing at all.
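A minimal sketch of a workaround, assuming the error is raised while iterating over the live module __dict__: take a snapshot first, so that lazy imports mutating the namespace mid-iteration cannot invalidate the iterator.
import importlib

obj = importlib.import_module('matplotlib')
# dict() copies the namespace, so later mutations of the live module
# cannot raise "dictionary changed size during iteration".
obj_entries = dict(obj.__dict__)
for name, value in obj_entries.items():
    pass  # inspect the entries here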
During some attempts I found some interesting behavior: calling repr on the module before accessing __dict__ helps. But if the module is passed between classes as a variable, is lazy importing more likely to happen? For now there is evidence that not all names show up, while the command-line interpreter does the opposite and returns what is expected. So this chunk of code helps bypass the behavior...
Note: I am using pkgutil.iter_modules(some_path) to observe modules in pkgutil's internal ModuleInfo form.
import pkgutil
import importlib.util

# some_path is a list of directories to scan, as in the note above
for module_info in pkgutil.iter_modules(some_path):  # pkgutil.ModuleInfo
    name = module_info.name
    finder = module_info.module_finder
    spec = finder.find_spec(name)
    module_obj = importlib.util.module_from_spec(spec)
    loader = module_obj.__loader__
    loader.exec_module(module_obj)
I am still unfamiliar with the internals of the import machinery, so it would be helpful to receive links to a more detailed (spot-on) explanation.

Is there a way to expand groups with the XDSM diagram creation in OpenMDAO?

Most of my test files involve the creation of an IndepVarComp that gets connected to a group. When I go to create an XDSM from the test file, it only shows the IndepVarComp Box and the Group Box. Is there a way to get it to expand the group and show what's inside?
This would also be useful when dealing with a top level model that contains many levels of groups where I want to expand one or two levels and leave the rest closed.
There is a recurse option, which controls whether groups are expanded or not. Here is a small example with the Sellar problem to explore this option. The disciplines d1 and d2 are part of a Group called cycle.
import numpy as np
import openmdao.api as om
from openmdao.test_suite.components.sellar import SellarNoDerivatives
from omxdsm import write_xdsm

prob = om.Problem()
prob.model = model = SellarNoDerivatives()
model.add_design_var('z', lower=np.array([-10.0, 0.0]),
                     upper=np.array([10.0, 10.0]), indices=np.arange(2, dtype=int))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', equals=np.zeros(1))
model.add_constraint('con2', upper=0.0)

prob.setup()
prob.final_setup()

# Write output. The PDF will only be created if pdflatex is installed.
write_xdsm(prob, filename='sellar_pyxdsm', out_format='pdf', show_browser=True,
           quiet=False, output_side='left', recurse=True)
The same code with recurse=False (d1 and d2 are not shown; their Group cycle appears instead):
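For reference, only the flag changes in the call (the output filename below is just an illustrative choice):
write_xdsm(prob, filename='sellar_pyxdsm_flat', out_format='pdf', show_browser=True,
           quiet=False, output_side='left', recurse=False)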
To enable the recursion from the command line, use the --recurse flag:
openmdao xdsm sellar_pyxdsm.py -f pdf --recurse
With the function, recursion is turned on by default; on the command line you have to include the flag explicitly. If this does not work as expected for you, please provide an example.
You can find a lot of examples with different options in the tests of the XDSM plugin. Some of the options, like recurse, include_indepvarcomps, include_solver and model_path control what is included in the XDSM.

Gremlin - TypeError: Object of type GraphTraversal is not JSON serializable

The piece of Gremlin code below runs perfectly well in the Gremlin console (it finds the unique start and end points of a k-step ego network, along with the minimum distance to each endpoint):
g.V(42062000).as("from")
.repeat(both().as("to")).emit().times(3).path()
.count(local).as("pathlen")
.select("from", "to", "pathlen")
.dedup("from", "to").toList()
And gives an output similar to the following, which is as expected:
==>{from=v[42062000], to=v[83607800], pathlen=2}
==>{from=v[42062000], to=v[23683248], pathlen=3}
==>{from=v[42062000], to=v[41762840], pathlen=3}
==>{from=v[42062000], to=v[42062000], pathlen=3}
==>{from=v[42062000], to=v[83599456], pathlen=3}
However, when converting the code to conform to the gremlinpython wrapper
(i.e. after substituting as_ for as), I'm given the error TypeError: Object of type GraphTraversal is not JSON serializable, even though it's the same query.
Has anyone faced similar issues?
I am using gremlinpython 3.4.2, but was originally using 3.3.3. My version of Python is 3.7.3.
Import the statics using:
from gremlin_python import statics
statics.load_statics(globals())
or import the steps you need explicitly:
from gremlin_python.process.graph_traversal import elementMap, range_, local, count
Note that steps whose names collide with common Python reserved words must end with _. Example:
as_, range_
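Putting this together, a sketch of the converted query, assuming g is a configured gremlinpython traversal source, with the anonymous steps taken from __ and local from Scope:
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.traversal import Scope

result = (g.V(42062000).as_('from')
          .repeat(__.both().as_('to')).emit().times(3).path()
          .count(Scope.local).as_('pathlen')
          .select('from', 'to', 'pathlen')
          .dedup('from', 'to')
          .toList())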

Spark's int96 time type

When you create a timestamp column in Spark and save it to Parquet, you get a 12-byte integer column type (int96); I gather the data is split into 8 bytes for the nanoseconds within the day and 4 bytes for the Julian day.
This does not conform to any Parquet logical type. The schema in the Parquet file does not, then, give any indication of the column being anything but an integer.
My question is: how does Spark know to load such a column as a timestamp, as opposed to a big integer?
The semantics are determined based on the metadata. We'll need some imports:
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.conf.Configuration
example data:
val path = "/tmp/ts"
Seq((1, "2017-03-06 10:00:00")).toDF("id", "ts")
.withColumn("ts", $"ts".cast("timestamp"))
.write.mode("overwrite").parquet(path)
and Hadoop configuration:
val conf = spark.sparkContext.hadoopConfiguration
val fs = FileSystem.get(conf)
Now we can access Spark metadata:
ParquetFileReader
  .readAllFootersInParallel(conf, fs.getFileStatus(new Path(path)))
  .get(0)
  .getParquetMetadata
  .getFileMetaData
  .getKeyValueMetaData
  .get("org.apache.spark.sql.parquet.row.metadata")
and the result is:
String = {"type":"struct","fields: [
{"name":"id","type":"integer","nullable":false,"metadata":{}},
{"name":"ts","type":"timestamp","nullable":true,"metadata":{}}]}
Equivalent information can be stored in the Metastore as well.
According to the official documentation this is used to achieve compatibility with Hive and Impala:
Some Parquet-producing systems, in particular Impala and Hive, store Timestamp into INT96. This flag tells Spark SQL to interpret INT96 data as a timestamp to provide compatibility with these systems.
and can be controlled using spark.sql.parquet.int96AsTimestamp property.
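For completeness, the property can be set on an active session; a minimal PySpark sketch (assuming a SparkSession named spark):
# "true" (the default) makes Spark read Parquet INT96 values as timestamps.
spark.conf.set("spark.sql.parquet.int96AsTimestamp", "true")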

Return a list of internationalized names of languages in Plone 4

By querying the portal_languages tool I can get a list of language names:
>>> from Products.CMFPlone.utils import getToolByName
>>> ltool = getToolByName(context, 'portal_languages')
>>> language_names = [name for code, name in ltool.listAvailableLanguages()]
[u'Abkhazian', u'Afar', u'Afrikaans', u'Albanian', u'Amharic', (...)
But how can I return a list of localized language names?
[EDIT] What I want is the list of language names in the language of the current user, as shown in @@language-controlpanel. See: http://i.imgur.com/rGfjG.png
If you want translated language names in many different languages, install Babel (http://pypi.python.org/pypi/Babel). There's good documentation for it, for example http://packages.python.org/Babel/display.html:
>>> from babel import Locale
>>> locale = Locale('de', 'DE')
>>> locale.languages['ja']
u'Japanisch'
Plone only includes native and English language names. The zope.i18n package has some of this data, but it's really incomplete and outdated, so Babel is your best bet.
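Combining Babel with the portal_languages tool from the question, a sketch (assuming the target display language is German and ltool is bound as above):
from babel import Locale

locale = Locale('de')
# Fall back to the untranslated name when Babel has no entry for a code.
language_names = [locale.languages.get(code, name)
                  for code, name in ltool.listAvailableLanguages()]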
Use the listAvailableLanguageInformation() method instead:
>>> from Products.CMFPlone.utils import getToolByName
>>> ltool = getToolByName(context, 'portal_languages')
>>> native_language_names = [entry[u'native']
...                          for entry in ltool.listAvailableLanguageInformation()]
[u'Afrikaans', u'Aymara', u'Az\u0259ri T\xfcrk\xe7\u0259si', u'Bahasa Indonesia', ...]
Note that the @@language-controlpanel view uses the zope.i18n.locales module to provide translated languages; but that list is so incomplete that the languages list is not translated for most UI languages. Apparently Italian is one language for which it is translated.
You can reach the locales structure via the request, or via the @@plone_state view. The locales.displayNames.languages dictionary maps two-letter language codes to local language names:
>>> from Products.CMFPlone.utils import getToolByName
>>> ltool = getToolByName(context, 'portal_languages')
>>> languages = request.locales.displayNames.languages
>>> language_names = [languages.get(code, name) for code, name in ltool.listAvailableLanguages()]
[u'abkhazian', u'afar', u'afrikaans', u'albanese', u'amarico', ...]
As you can see, the language names are lowercased, not properly capitalized. Also, the data is expensive to parse (the package contains XML files that are parsed on first access), so it can take several moments before this data becomes available to you.
Your best bet would be to use Babel, as Hanno states, as it actually has far more current information available, and not just for a handful of languages.
Thanks to Martijn's help I was able to solve the issue. This is the final working code that generates the vocabulary of translated language names. Very useful if you want to make a localized selection field such as the one found in the language control panel.
from Products.CMFCore.interfaces import ISiteRoot
from zope.component import getMultiAdapter
from zope.globalrequest import getRequest
from zope.schema.vocabulary import SimpleTerm, SimpleVocabulary
from zope.site.hooks import getSite

# @grok.provider(IContextSourceBinder)
def languages(context):
    """
    Return a vocabulary of language codes and
    translated language names.
    """
    # z3c.form KSS inline validation hack
    if not ISiteRoot.providedBy(context):
        for item in getSite().aq_chain:
            if ISiteRoot.providedBy(item):
                context = item
    # Retrieve the localized language names.
    request = getRequest()
    portal_state = getMultiAdapter((context, request),
                                   name=u'plone_portal_state')
    lang_items = portal_state.locale().displayNames.languages.items()
    # Build the vocabulary.
    return SimpleVocabulary(
        [SimpleTerm(value=lcode, token=lcode, title=lname)
         for lcode, lname in lang_items])
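To use this as the source of a Choice field, the function must provide IContextSourceBinder (which is what the commented grok.provider decorator would do); a short sketch using plain zope.interface instead:
from zope import schema
from zope.interface import directlyProvides
from zope.schema.interfaces import IContextSourceBinder

directlyProvides(languages, IContextSourceBinder)
language = schema.Choice(
    title=u"Language",
    source=languages,
)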
