I am trying to create a de-identification template using DLP from the console in GCP, and using a Dataflow template to pick the CSV data up from GCS and load it into BigQuery.
When I create the crypto key in the global location and the de-identification template also in global, it successfully encrypts the CSV data and loads it into BigQuery. For any other combination of crypto key and de-identification template locations I always get an error in Dataflow: "PERMISSION_DENIED: Not authorized to access requested deidentify template."
Also, when creating the template in global but using a key from a location other than global, I get the error below at the time of creating the template itself.
Error: In KmsWrappedCryptoKey, 'crypto_key_name': must be from location 'global'.
My question is: can't we use a crypto key and template created in any region other than global?
Thanks
Your template and key need to be co-located in the same region.
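For reference, here is a rough sketch of what that looks like with the DLP REST API when everything lives in the same region; the project, region, key ring and key names are placeholders, and the deterministic crypto transformation is just one example of a transformation that uses a KMS-wrapped key:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/locations/us-central1/deidentifyTemplates" \
  -d '{
    "deidentifyTemplate": {
      "deidentifyConfig": {
        "infoTypeTransformations": {
          "transformations": [{
            "primitiveTransformation": {
              "cryptoDeterministicConfig": {
                "cryptoKey": {
                  "kmsWrapped": {
                    "wrappedKey": "BASE64_WRAPPED_DATA_KEY",
                    "cryptoKeyName": "projects/PROJECT_ID/locations/us-central1/keyRings/KEY_RING/cryptoKeys/KEY"
                  }
                },
                "surrogateInfoType": { "name": "TOKEN" }
              }
            }
          }]
        }
      }
    }
  }'

A template created this way is regional, so the full resource name you reference from the Dataflow job will include locations/us-central1 as well.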
I wish to use the Firebase Admin SDK, but for some reason I am getting a "project id is required to access Firestore" error.
I have downloaded an Admin SDK JSON file from the Firebase console and placed it in the same directory as the file that calls it.
opt := option.WithCredentialsFile("../<FILENAME>.json")
The credentials file has the project ID, but for some reason the opt variable is unable to extract data from it, and as a result this error occurs when I try to get the Firestore client.
Thanks!
Problem solved: I was using the wrong absolute path.
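For anyone hitting the same thing, here is a minimal sketch of initializing the Admin SDK with an explicit credentials path; the file name is a placeholder, and resolving the path up front makes it obvious when the working directory isn't what you expect:

package main

import (
    "context"
    "log"
    "path/filepath"

    firebase "firebase.google.com/go"
    "google.golang.org/api/option"
)

func main() {
    ctx := context.Background()

    // Placeholder file name; logging the absolute path shows exactly which file is loaded.
    credPath, err := filepath.Abs("serviceAccountKey.json")
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("loading credentials from %s", credPath)

    opt := option.WithCredentialsFile(credPath)

    // With a nil config the project ID is normally read from the credentials file;
    // you can also pass &firebase.Config{ProjectID: "your-project-id"} explicitly
    // if it still cannot be determined.
    app, err := firebase.NewApp(ctx, nil, opt)
    if err != nil {
        log.Fatalf("error initializing app: %v", err)
    }

    client, err := app.Firestore(ctx)
    if err != nil {
        log.Fatalf("error getting Firestore client: %v", err)
    }
    defer client.Close()
}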
I am going to parse and format flat file input based on business logic stored in SQL Server database tables. I don't have a document schema for the input. I wrote a C# custom component class for the disassembler. When I use the custom component in the Disassemble stage of the receive pipeline, I get a "document schema not found" error.
Has anyone come across the same situation and handled it differently?
BizTalk routes messages using the 'MessageType' property (the namespace + the root node name of the XML in the message) in the context portion of the message. You don't have that with your design, so BizTalk doesn't know what to do with the message.
You can:
handle each type of flat file separately by parsing and assigning a unique message type
distill the content into one type of message
wrap the content of the file in an 'envelope'
You'll need to create a schema for any of those choices.
Namespaces and routing are a spiffy way to handle changes to file structure. If you include the version of the file in the namespace BizTalk can route the message to the code that handles that kind of message for you. You can continue to handle old style messages as well as new formats. We handle pilot programs that way.
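For the first option (one message type per flat-file format), the key piece in a custom disassembler is promoting MessageType into the context. A hypothetical fragment, not a complete pipeline component; the namespace and root name are made up:

using Microsoft.BizTalk.Message.Interop;

internal static class MessageTypePromoter
{
    private const string SystemPropertiesNs =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    // Promote MessageType (target namespace + '#' + root node name) on the outgoing
    // message, e.g. "http://mycompany.com/schemas/orders/v1#Orders".
    public static void PromoteMessageType(IBaseMessage msg, string targetNamespace, string rootNode)
    {
        msg.Context.Promote("MessageType", SystemPropertiesNs, targetNamespace + "#" + rootNode);
    }
}

As noted above, a deployed schema whose target namespace and root node match that value is still required.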
Hello, I am trying to read a module with this code:
(: Entry point - must be a read-only query. :)
xdmp:invoke(
'/path/mydocument.xqy',
(xs:QName('var1'), 'test',
xs:QName('var2'), "response"))
I am new to MarkLogic. I am using Groovy and the API to connect to it, but I also saw that I can invoke the module this way, and indeed I did, but it returns:
your query returned an empty sequence
I want to know if I can query xs:QName('var1'), 'test', replacing 'test' with a wildcard, or how I can get all the information from the file called /path/mydocument.xqy.
I tried to use this:
xdmp:document-get("/path/mydocument.xqy")
but it says the file is not found. If I use invoke I can query it, but I don't know what values I have to pass. I was wondering if there is something like SQL's % wildcard to give me all the data.
To answer the first question, "I am trying to read a module":
If the module is in the database, then you must query the Modules database in which the module resides.
If the module is in the filesystem, then you cannot directly access its source as a document, but you can read it by executing xdmp:filesystem-file().
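For example, a sketch assuming the module URI is /path/mydocument.xqy and it lives in a modules database named "Modules"; run it from Query Console or any evaluating endpoint:

(: Read the module's source text from the Modules database. :)
xdmp:eval(
  'fn:doc("/path/mydocument.xqy")',
  (),
  <options xmlns="xdmp:eval">
    <database>{xdmp:database("Modules")}</database>
  </options>)

(: If the module is deployed to the filesystem instead, read it with
   xdmp:filesystem-file("/full/path/on/disk/mydocument.xqy"). :)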
Simplification:
With the default configuration of the server and REST client, user-placed modules are in the "Modules" database and user-placed documents are in the "Documents" database. This means that if you do a GET (read a "document") with no additional parameters, it will return documents from the Documents database. Assuming you are using the default configuration for client and server, this explains the behavior you are seeing: your module code is in the Modules database, so doing a GET for it by name searches the Documents database and correctly does not find it.
You don't mention, and I don't know, the Groovy library being used, but the REST API itself and all general-purpose MarkLogic REST client libraries I am familiar with have options for overriding the default database with another. If the Groovy library supports that, then specify the "Modules" database for your query and it should return the module document. Note: the content type will be application/text, not text/xml.
You can simplify things for testing by bypassing the libraries and simply using a browser to try a URL like this: http://yourserver.com:8000/v1/documents?uri=/your/module.xqy&database=Modules
Ref: https://docs.marklogic.com/REST/GET/v1/documents
Make the appropriate changes to the path and server for your use.
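From the command line the same request looks roughly like this, assuming the default digest authentication on port 8000 (adjust the user, password, host and URI for your setup):

curl --digest -u myuser:mypassword \
  "http://yourserver.com:8000/v1/documents?uri=/path/mydocument.xqy&database=Modules"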
If you are still confused, then you should start with the basic MarkLogic tutorials and work through them one by one. You will most likely succeed faster by doing this than by jumping straight into coding you don't understand yet.
DETAIL:
Note: the default behaviour is to EXECUTE documents when doing a GET call, using the Modules database. Thus doing a GET of http://yourserver:8000/your/module.xqy will EXECUTE it, not return its source.
You will notice the REST API has a uri query parameter. This is EXECUTING the REST API code on /v1/documents which in turn will read the document specified by the uri and database parameters and return it.
I guess I can use:
xdmp:invoke('/pview/get-pview-browse-profiles.xqy',
cts:and-query((
cts:element-value-query(
xs:QName("letter"),"*", "wildcarded"),
cts:element-value-query(
xs:QName("collection"),"*", "wildcarded"))))
although it doesn't return anything
I have users uploading files, and a Cloud Function responding by adding the uploaded file to the database, and I planned on using the following path:
/files/{user-id}/{filename}
The reasoning being that if a file gets deleted, I can immediately derive the database reference in the Cloud Function.
However, I am not allowed to use certain characters in database paths that are allowed in filenames (most specifically, a dot). How should this be set up, so that for a removed Storage file I can immediately get the correct database path?
You could push() the path under /files/{uid} to create the entry, then orderByValue().equalTo(x) to find the entry later for deletion. This way, you won't have to worry about the contents of the file name.
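A rough sketch of what that could look like with the Admin SDK in Cloud Functions; the Storage path layout ("uploads/{uid}/{filename}") and the function names are assumptions, not taken from your setup:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// On upload: store the full Storage path under a push() key, so the file name
// (dots and all) only ever appears as a value, never as a database key.
export const indexUploadedFile = functions.storage.object().onFinalize(async (object) => {
  const filePath = object.name;              // e.g. "uploads/some-uid/report.v2.pdf"
  if (!filePath) return;
  const uid = filePath.split('/')[1];        // assumes "uploads/{uid}/{filename}"
  await admin.database().ref(`/files/${uid}`).push(filePath);
});

// On delete: find the entry whose value is the deleted path and remove it.
export const unindexDeletedFile = functions.storage.object().onDelete(async (object) => {
  const filePath = object.name;
  if (!filePath) return;
  const uid = filePath.split('/')[1];
  const ref = admin.database().ref(`/files/${uid}`);
  const snap = await ref.orderByValue().equalTo(filePath).once('value');
  const updates: { [key: string]: null } = {};
  snap.forEach((child) => { if (child.key) updates[child.key] = null; });
  if (Object.keys(updates).length > 0) {
    await ref.update(updates);
  }
});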
We need to configure the IdentityProvider from metadata stored in a database. It would seem, though, that the only way to specify the metadata to the IdentityProvider is through the metadataLocation property, which supports a URL or file path.
Is there any way, which I've missed, to pass a stream object that holds the metadata to the IdentityProvider?
Thanks
I'm not aware of any way using the standard code. The Load method that takes a stream is marked as internal; see here:
https://github.com/KentorIT/authservices/blob/master/Kentor.AuthServices/Metadata/MetadataLoader.cs
You could:
Write your database value to a temporary location and give this file path to load
Write an api route that serves up the metadata for a given Idp as a url
Make an open source contribution to add support for this
Don't use MetadataLocation, but instead construct the IdentityProvider object and separately set the signing key, entity id, binding etc. (see the sketch after this list)
etc.
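For the last option, here is a rough sketch of building the IdentityProvider by hand from values you have parsed out of the metadata stored in your database. The class and property names are from memory of the Kentor.AuthServices API, so verify them against the version you are using:

using System;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using Kentor.AuthServices;
using Kentor.AuthServices.Configuration;
using Kentor.AuthServices.WebSso;

public static class IdpFactory
{
    // entityId, ssoServiceUrl and signingCertificate are the values you extracted
    // from the metadata XML in the database.
    public static IdentityProvider FromDatabaseValues(
        SPOptions spOptions,
        string entityId,
        Uri ssoServiceUrl,
        byte[] signingCertificate)
    {
        var idp = new IdentityProvider(new EntityId(entityId), spOptions)
        {
            Binding = Saml2BindingType.HttpRedirect,
            SingleSignOnServiceUrl = ssoServiceUrl,
            AllowUnsolicitedAuthnResponse = true
        };

        idp.SigningKeys.AddConfiguredKey(new X509Certificate2(signingCertificate));
        return idp;
    }
}

You would then add the result to options.IdentityProviders, the same as for providers configured from a metadata location.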