I am trying to use the Pyfhel library to perform some operations on an encrypted integer list. I perform one multiplication operation and later an addition operation. But while performing the addition operation, I randomly get the following error:
IndexError('Unable to find key in unordered_map.',)
If I try to decrypt the encrypted value on which I am trying to perform the addition operation, I get the same error.
Can anyone please let me know what could be the problem?
Thanks!
(Couldn't add the related tags, but this is about Homomorphic Encryption using the Pyfhel library, a Python implementation of HElib.)
All I can tell you so far is that this error originates from a lookup in the unordered map that holds all the ciphertexts. This unordered map lives inside the Pyfhel object and is accessed via a C++ call. As usual, it shouldn't be happening. Right now, efforts are being put into upgrading Pyfhel so that you can hold each ciphertext in its own Python object, which would render your current error obsolete.
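For reference, here is a minimal sketch of the workflow described in the question, written against the newer, 2.x-style Pyfhel API in which each ciphertext is held in its own PyCtxt object rather than inside the Pyfhel object's map; the parameter values and integers are illustrative only:

from Pyfhel import Pyfhel

HE = Pyfhel()
HE.contextGen(p=65537)   # illustrative plaintext modulus
HE.keyGen()

c1 = HE.encryptInt(5)
c2 = HE.encryptInt(7)

c_mul = c1 * c2                     # homomorphic multiplication
c_sum = c_mul + HE.encryptInt(3)    # homomorphic addition on the product

print(HE.decryptInt(c_sum))         # expected: 38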
I am trying to find how token.head and token.children are implemented. I want to replicate this implementation as I add a custom component to my spaCy pipeline for SRL.
That is, each token can point to the predicates for which it is an argument. Intuitively, I think this should work something like token.children, which (I think) returns a generator of the dependent child Token objects.
I assume that I should not simply store an attribute of that token as this does not seem very memory efficient and rather redundant. Does anyone know the correct way to implement this? Or is this handled implicitly by the spaCy Underscore.set method?
Thanks!
The Token object is only a view -- it's sort of like holding a reference to the Doc object, and an index to the token. The Span object is like this too. This ensures there's a single source of truth, and only one copy of the data.
You can find the definition of the key structs in the spacy/structs.pxd file. This defines the attributes of the TokenC struct. The Doc object then holds an array of these, and a length. The Token objects are created on the fly when you index into the Doc. The data definition for the Doc object can be found in spacy/tokens/doc.pxd, and the implementation of the token access is in spacy/tokens/doc.pyx.
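To make the view relationship concrete, here is a small illustration (the model name is just an example; any English model works):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The cat sat on the mat")
token = doc[1]
assert token.doc is doc   # the Token is a view onto the Doc
assert token.i == 1       # plus an index into the Doc's underlying array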
The way the parse tree is encoded in spaCy is a bit unsatisfying. I've made an issue about this on the tracker --- it feels like there should be a better solution.
What we do is encode the offset of the head relative to the token. So if you do &doc.c[i] + doc.c[i].head you'll get a pointer to the head. That part is okay. The part that's a bit weirder is that we track the left and right edges of the token's subtree, and the number of direct left and right children. To get the rightmost or leftmost child, we navigate around within this region. In practice this actually works pretty well because we're dealing with a contiguous block of memory, and loops in Cython are fast. But it still feels a bit janky.
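As a rough Python-level illustration of that relative-offset idea (the C struct itself isn't reachable from pure Python), continuing the example above:

token = doc[2]
offset = token.head.i - token.i   # this relative offset is what TokenC.head stores
print(offset, token.left_edge.text, token.right_edge.text)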
As far as what you'll be able to do as a user... If you run your own fork of spaCy you can happily define your own data on the structs. But then you're running your own fork.
There's no way to attach "real" attributes to the Doc or Token objects, as these are defined as C-level types --- so their structure is defined statically; it's not dynamic. You could subclass the Doc, but this is quite ugly: you end up having to subclass the objects that create it as well.
This is why we have the underscore attributes, and the doc.user_data dictionary. It's really the only way to extend the objects. Fortunately you shouldn't really face a data redundancy problem. Nothing is stored on the Token objects. The definitions of your extensions are stored globally, within the Underscore class. Data is stored on the Doc object, even if it applies to a token --- again, the Token is a view. It can't own anything. So the Doc has to note that we have some value assigned to token i.
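For example, a minimal sketch of an extension attribute for SRL, reusing the nlp object from the earlier sketch (the attribute name srl_predicates is made up for illustration):

from spacy.tokens import Token

# Registered once, globally; the per-token values end up stored on the Doc.
Token.set_extension("srl_predicates", default=None)

doc = nlp("John gave Mary a book")
doc[0]._.srl_predicates = [doc[1]]   # e.g. "John" is an argument of "gave"
print(doc[0]._.srl_predicates)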
If you're defining a tree-navigation system, I'd recommend considering defining it as your own Cython class, so you can use structs. If you use native Python types it'll be pretty slow and pretty large. If you pack the data into numpy arrays the representation will be more compact, but writing the code will be a pretty miserable experience, and the performance is likely to be not great.
In short:
Define your own types in Cython. Put the data into a struct owned by a cdef class, and give the class accessor methods.
Use the underscore attributes to access the data from spaCy's Doc, Span and Token objects.
If you come up with a compelling API for SRL and the data can be coded compactly into the TokenC struct, we'd consider adding it as native support.
I'm writing an API that converts actions performed by a non-technical user into Salesforce.com SOQL 'SELECT', 'UPSERT', and 'DELETE' statements. Is there any resource, library, etc. out there that could validate the syntax of the generated SOQL? I'm the only one at my company with any experience with SOQL, so I'd love to place it into a set of automated tests so that other developers enhancing (or fixing) the SOQL generation algorithm know if it's still functioning properly.
I know one solution here is to just make these integration tests. However, I'd rather avoid that for three reasons:
I'd need to maintain another Salesforce.com account just for tests so we don't go over our API request cap.
We'll end up chasing false positives whenever there are connectivity issues with Salesforce.com.
Those other developers without experience will potentially need to figure out how to clean up the test Salesforce.com instance after DML operation test failures (which really means I'll need to clean up the instance whenever this occurs).
You might solve your problem by using the SoqlBuilder library. It generates SOQL for you and is capable of producing SOQL statements that would be quite error prone to create manually. The syntax is straightforward and I've used it extensively with very few issues.
I found another way to do this.
Salesforce.com posted their SOQL notation in Backus-Naur Form (BNF) here:
http://www.salesforce.com/us/developer/docs/api90/Content/sforce_api_calls_soql_bnf_notation.htm
This means you can use a BNF-aware language recognition tool to parse the SOQL. One of the most common tools, ANTLR, does this and is free. Following the ANTLR example, pass the SOQL grammar into its grammar compiler to get a Lexer and a Parser in your desired language (C#, Java, Python, etc.). Then you can pass the actual SOQL statements you want to validate into the Lexer, and then your Lexer tokens into your Parser, to break apart the SOQL statements. If your Lexer or Parser fails, you have invalid SOQL.
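As a rough sketch of that pipeline in Python: the module names SoqlLexer/SoqlParser and the start rule depend entirely on the grammar file you feed to ANTLR, so treat them as placeholders.

from antlr4 import InputStream, CommonTokenStream
from antlr4.error.ErrorListener import ErrorListener
from SoqlLexer import SoqlLexer      # generated by ANTLR; name depends on your grammar
from SoqlParser import SoqlParser    # generated by ANTLR; name depends on your grammar

class RaisingErrorListener(ErrorListener):
    def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
        raise ValueError("Invalid SOQL at %s:%s - %s" % (line, column, msg))

def validate_soql(statement):
    # Lex and parse the statement; any syntax error raises ValueError.
    lexer = SoqlLexer(InputStream(statement))
    lexer.removeErrorListeners()
    lexer.addErrorListener(RaisingErrorListener())
    parser = SoqlParser(CommonTokenStream(lexer))
    parser.removeErrorListeners()
    parser.addErrorListener(RaisingErrorListener())
    parser.soqlQuery()               # placeholder start-rule name from the grammar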
I can't think of a way to do this from outside of Salesforce (and even in Apex I've only got one idea right now that may not work), but I can think of two suggestions that may be of help:
Validate queries by running them, but do them in batches using a custom web service. i.e. write a web service in Apex that can accept up to 100 query strings at once, have it run them and return the results. This would drastically reduce the number of API calls but of course it won't work if you're expecting a trial-and-error type setup in the UI.
Use the metadata API to pull down information on all objects and their fields, and use that to validate that at least the fields in the query are correct (see the sketch below). Validating other query syntax should be relatively straightforward, though conditionals may get a little tricky.
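A rough sketch of the field-checking half of that idea, assuming the simple_salesforce Python client (my assumption, not part of the original suggestion) to pull the describe metadata; object, field and credential values are illustrative only:

from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")
valid_fields = {f["name"].lower() for f in sf.Account.describe()["fields"]}

def unknown_fields(field_names):
    # Returns the queried field names that Account does not actually have.
    return [f for f in field_names if f.lower() not in valid_fields]

print(unknown_fields(["Id", "Name", "NotARealField__c"]))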
You can make use of the Salesforce developer NuGet packages that leverage the SOAP API.
I figured I'd be able to use an Index functoid, but it doesn't seem to like my first parameter (the scripting functoid that calls the external assembly): it shows a red X in place of the usual green check mark.
The thing that makes me think it's possible is that the Index functoid doesn't give me an error at all; it compiles and deploys with no complaints. The problem is that the mapping never takes place, and I get a catastrophic failure (IMO) because it doesn't even return an error.
So, is there any way to use an external assembly that returns a DataTable/DataRow/DataSet in a BizTalk map?
I know this does not address your question entirely but I always think that any calls to external dependencies should be done before the mapping stage, and the results stored in a message.
The map would have multiple input schemas, one of which could be a DataRow (modelled on the ADO DataRow).
Then when you call the transform you pass all the messages in which are needed to do the transform. This makes it much easier to isolate your genuine mapping failures from other failures.
This might help:
Code Behind BizTalk Functoids
You may be able to get some insight into how the mapper does its thing.
I have an odd situation that has only come up in this one orchestration I'm working on.
I have a Receive message come in. I use an Expression shape and write it to a variable "xmlDoc" so I can verify what is in it. I then have a Message Assignment shape where I load a string of XML into a variable "xmlDoc2" and assign that variable to a second message, then write it out so I can verify it. I then have another Expression shape and attempt to write out the first message again, and it's apparently been replaced with the second message's information.
It's not in a Parallel shape, and the Message Assignment is only building the second message. Between the Receive and where I'm seeing this issue, I'm doing a few Decide shapes and building other messages from the Receive message. They all work fine and don't overwrite anything (they do the same process as what I'm trying to do later).
Anyone seen this before or see something I'm missing?
ETA: The process works a bit like this:
Send Message comes in
xmlDoc = Send Message
xmlDoc.OuterXml is written to a table
xmlDoc2 = "<root><xml></xml></root>"
Second Message = xmlDoc2
xmlDoc2.OuterXml is written to a table
xmlDoc = Send Message <-- What should happen
xmlDoc = Second Message <-- What is happening
I could not reproduce your exact problem but I got close. I think there are some implied statements in your process outline that would be critical for us to understand what's really happening. In any case, I think your BizTalk messages do not get overwritten, but that the XmlDocument variables are.
I think you may have been hit by one of the fundamental confusions a developer coming from a Java or VB6 background encounters when working with C#.
C# is a Managed Language
Please, remember that C# is a managed language, in that it uses a garbage collector to reclaim unused references to objects. The key word here is Reference.
When you write the following lines:
xmlDoc2 = "<root><xml/></root>";
SecondMessage = xmlDoc2;
Basically, you end up with two references, xmlDoc2 and SecondMessage, both referring to the same assigned content.
So, depending upon the code you use to "write out" the XML content of your BizTalk messages, you may be overwriting some references.
Furthermore, if this happens in the context of a Construct shape, you may be inadvertently overwriting the content of the BizTalk message itself.
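This aliasing is easiest to see outside BizTalk. Here is a tiny Python illustration (Python only for brevity; XmlDocument variables and message references in an orchestration behave the same way):

from xml.dom.minidom import parseString

xmlDoc = parseString("<root><a/></root>")
xmlDoc2 = xmlDoc                                   # a second name, not a second document
xmlDoc2.documentElement.setAttribute("changed", "true")
print(xmlDoc.toxml())                              # the change is visible through xmlDoc as well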
A Solution?
This problem does not usually manifest itself when working with BizTalk. I personally never encountered this issue.
If you update your original question with the exact code for both Expression shapes and the Assignment shape, I'll update this response with more appropriate guidance.
I would like to know whether anyone knows about a library or code that will accept a PL/SQL string and throw an error if there is any PL/SQL injection. Most of the open source projects on the internet are written in PHP.
You need to use parameters, for example
UPDATE mytable SET field=:param WHERE id=:id
And then assign :param and :id the values that you get from the untrusted source (form value, URL params, cookie, ...).
This also improves performance, and you don't need to parse anything to determine whether it's injection or not. (Such approaches might have subtle bugs that you don't see but the attacker will exploit; I mean you cannot verify that every possible attack, including those you haven't thought of yet, will be stopped by injection-detection logic.)
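A minimal sketch of that from Python, assuming the cx_Oracle driver; the table, column and variable names are illustrative only:

import cx_Oracle

untrusted_value = "anything the user typed"   # value from the untrusted source
record_id = 42

conn = cx_Oracle.connect("user/password@host/service")
cur = conn.cursor()
cur.execute(
    "UPDATE mytable SET field = :param WHERE id = :id",
    {"param": untrusted_value, "id": record_id},   # bound, never concatenated into the SQL
)
conn.commit()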
Assuming you have a very good reason to use both dynamic SQL and to embed strings in your statements rather than use bind variables, Oracle has a built-in library for this purpose. It's called dbms_assert.
See http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_assert.htm for full details on this package.
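If you do end up splicing identifiers (rather than values) into dynamic SQL, dbms_assert can be used to vet them first. A hedged sketch, reusing the cx_Oracle cursor from the earlier example; an invalid identifier typically raises an ORA-44003 error:

suspect_name = "some_column"   # identifier from an untrusted source
cur.execute("SELECT sys.dbms_assert.simple_sql_name(:name) FROM dual",
            {"name": suspect_name})
safe_name = cur.fetchone()[0]  # only reached if the name passed validation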