I have one stored procedure that I need to run against different partition keys. My collection is partitioned on a single key, entityname, and I want to execute the stored procedure for each entity in the collection.
sproc = await client.CreateStoredProcedureAsync(collectionLink, sproc,
    new RequestOptions { PartitionKey = new PartitionKey(partitionkey) });

StoredProcedureResponse<int> scriptResult = await client.ExecuteStoredProcedureAsync<int>(
    sproc.SelfLink,
    new RequestOptions { PartitionKey = new PartitionKey(partitionkey) },
    args);
I get the following exception:
Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted
Is it necessary to create the stored procedure separately for each partition key?
Is it possible to have one stored procedure that can execute for all keys?
When a stored procedure is executed from the client, RequestOptions specifies the partition key, and the stored procedure runs in the context of that partition; it cannot operate on (e.g. create) documents that have a different partition key value.
What you can do is execute the sproc from the client once per partition key. For instance, if the sproc bulk-creates documents, you can group the docs by partition key and send each group (this can be done in parallel) to the sproc, providing the partition key value in RequestOptions. Would that be helpful?
You don't have to create the sproc for each partition key; just create it once, without providing a partition key.
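For the C# side, a minimal sketch of that per-partition fan-out (docsByPartitionKey and bulkCreateSprocLink are illustrative names, not from the question):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Executes the sproc once per partition key; the groups can run in parallel.
public static Task ExecutePerPartitionAsync(
    DocumentClient client,
    string bulkCreateSprocLink,
    Dictionary<string, List<object>> docsByPartitionKey)
{
    var tasks = docsByPartitionKey.Select(group =>
        client.ExecuteStoredProcedureAsync<int>(
            bulkCreateSprocLink,
            new RequestOptions { PartitionKey = new PartitionKey(group.Key) },
            new object[] { group.Value })); // first sproc parameter: the doc batch

    return Task.WhenAll(tasks);
}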
I have implemented the above in Java. There are some differences in how this is implemented when using the Java SDK and the stored procedure. Note the use of String for the documents and the need to separate the records by partition key.
Stored procedure used for bulk import: https://learn.microsoft.com/en-us/azure/documentdb/documentdb-programming
Below is the client calling the stored procedure:
public void CallStoredProcedure(Map<String, List<String>> finalMap)
{
    for (String key : finalMap.keySet()) {
        try {
            ExecuteStoredProcedure(finalMap.get(key), key);
        } catch (Exception ex) {
            LOG.info(ex.getMessage());
        }
    }
}
public void ExecuteStoredProcedure(List<String> documents, String partitionKey)
{
    try {
        if (documentClient == null) {
            documentClient = new DocumentClient(documentDBHostUrl, documentDBAccessKey, null, ConsistencyLevel.Session);
        }

        // Scope the execution to this group's partition key.
        options = new RequestOptions();
        options.setPartitionKey(new PartitionKey(partitionKey));

        // First sproc parameter is the document batch; the second is unused here.
        Object[] sprocsParams = new Object[2];
        sprocsParams[0] = documents;
        sprocsParams[1] = null;

        StoredProcedureResponse rs = documentClient.executeStoredProcedure(selflink, options, sprocsParams);
    } catch (Exception ex) {
        LOG.info(ex.getMessage()); // don't swallow failures silently
    }
}
I have generated a CSR, and had it signed. I still have the private key that I used to create the CSR, and I want to store that cert in the Windows CertStores, along with that private key.
My metric for success is:
When I view the cert in the CertStore, it is marked as having a private key. Specifically, it has the little 'key' sub-icon in the top left of the cert icon, and if you open the cert up, it says "You have a private key that corresponds to this certificate" under the ValidDates info.
We initially assumed that .CopyWithPrivateKey(RSA key) would do that for us, but it doesn't seem to work on its own. We also need to set some key storage flags, but we can only do that by .Export()ing the cert to a byte[] and then "importing" it with another constructor call.
I've tried a bunch of variations, and that's the only sequence of events that works:
public void InstallCertOnNonUiThread(byte[] certificateDataFromCsrResponse, RSA privateKeyUsedToGenerateCsr)
{
    var keyStorageFlags = X509KeyStorageFlags.Exportable | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.MachineKeySet;

    var originalCert = new X509Certificate2(certificateDataFromCsrResponse);
    var exportOfOriginalCert = originalCert.Export(X509ContentType.Pkcs12);

    var withFlagsCert = new X509Certificate2(certificateDataFromCsrResponse, (SecureString)null, keyStorageFlags);
    var exportOfWithFlagsCert = withFlagsCert.Export(X509ContentType.Pkcs12);

    var copiedWithPKCert = originalCert.CopyWithPrivateKey(privateKeyUsedToGenerateCsr);
    var exportOfCopiedWithPkCert = copiedWithPKCert.Export(X509ContentType.Pkcs12);

    var withFlagsReimportOfOriginal = new X509Certificate2(exportOfOriginalCert, (SecureString)null, keyStorageFlags);
    var withFlagsReimportOfWithFlags = new X509Certificate2(exportOfWithFlagsCert, (SecureString)null, keyStorageFlags);
    var withFlagsReimportOfCopiedWithPK = new X509Certificate2(exportOfCopiedWithPkCert, (SecureString)null, keyStorageFlags);

    InstallCertInStore(StoreLocation.LocalMachine, originalCert);                 // Doesn't work; no key in Store UI.
    InstallCertInStore(StoreLocation.LocalMachine, withFlagsCert);                // Doesn't work; no key in Store UI.
    InstallCertInStore(StoreLocation.LocalMachine, copiedWithPKCert);             // Doesn't work; no key in Store UI.
    InstallCertInStore(StoreLocation.LocalMachine, withFlagsReimportOfOriginal);  // Doesn't work; no key in Store UI.
    InstallCertInStore(StoreLocation.LocalMachine, withFlagsReimportOfWithFlags); // Doesn't work; no key in Store UI.
    InstallCertInStore(StoreLocation.LocalMachine, withFlagsReimportOfCopiedWithPK); // This one works. Cert has key icon and "You have a private key that corresponds to this certificate".
}
private static void InstallCertInStore(StoreLocation location, X509Certificate2 newCert)
{
    using (var store = new X509Store(StoreName.My, location))
    {
        store.Open(OpenFlags.ReadWrite);
        store.Add(newCert);
    }
}
So my final code to do this will look like:
public Task<bool> InstallCertOnNonUiThread(byte[] certificateDataFromCsrResponse, RSA privateKeyUsedToGenerateCsr, string orgId)
{
    var keyStorageFlags = X509KeyStorageFlags.Exportable | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.MachineKeySet;

    var originalCert = new X509Certificate2(certificateDataFromCsrResponse);
    var copiedWithPKCert = originalCert.CopyWithPrivateKey(privateKeyUsedToGenerateCsr);
    var exportOfCopiedWithPkCert = copiedWithPKCert.Export(X509ContentType.Pkcs12);
    var withFlagsReimportOfCopiedWithPK = new X509Certificate2(exportOfCopiedWithPkCert, (SecureString)null, keyStorageFlags);

    InstallCertInStore(StoreLocation.LocalMachine, withFlagsReimportOfCopiedWithPK); // This one works. Cert has key icon and "You have a private key that corresponds to this certificate".
    return Task.FromResult(true);
}
That final option does work, but it seems like way more steps than ought to be necessary, and it suggests that I'm about to define my own extension method, .ActuallyCopyWithPrivateKey, to replace the .NET Framework version of that method. That seems wrong.
Is there a better way to achieve this, or does it really need all 4 steps?
CopyWithPrivateKey maintains the state of the private key. If it's an ephemeral key, it stays ephemeral. If it's a persisted key, it stays persisted. When you created the RSA privateKeyUsedToGenerateCsr object you created an ephemeral key object, and so you got the behavior you see.
Your 4-liner is correct (except you really want each of the cert objects in a using statement).
1. Hydrate the newly signed certificate into an X509Certificate2 object.
2. CopyWithPrivateKey makes a new X509Certificate2 object with the private key bound (could be persisted, could be ephemeral).
3. Exporting as a PFX can fail if the key is persisted and non-exportable; it's up to you whether to guard against that. This step is what makes it possible to reimport the key as a persisted key.
4. Reimporting the PFX with PersistKeySet makes a new copy of the private key (a fresh copy if it was already persisted, or "the first copy" if it was ephemeral), and ensures the key isn't erased when the cert object gets garbage collected or disposed.
On the other hand, you could create the key as a persisted key and just need to use CopyWithPrivateKey once. Of course, this notion only exists on Windows.
Creating a persisted CNG key
Assuming you created the key as new RSACng(2048) or RSA.Create(2048), you can make a persisted key by
CngKeyCreationParameters keyParameters = new CngKeyCreationParameters
{
    // Or whatever.
    ExportPolicy = CngExportPolicies.None,
    // If applicable.
    KeyCreationOptions = CngKeyCreationOptions.MachineKey,
    Parameters =
    {
        // The NCRYPT key-size property is named "Length".
        new CngProperty("Length", BitConverter.GetBytes(2048), CngPropertyOptions.None),
    },
};

using (CngKey key = CngKey.Create(CngAlgorithm.Rsa, Guid.NewGuid().ToString(), keyParameters))
{
    return new RSACng(key);
}
Creating a persisted CAPI key
Ideally, don't. Create a persisted CNG instead. But, if you must:
Assuming you created the key as new RSACryptoServiceProvider(2048), you can make a persisted CAPI key by
CspParameters cspParams = new CspParameters
{
    KeyContainerName = Guid.NewGuid().ToString(),
    Flags =
        // If appropriate
        CspProviderFlags.UseMachineKeyStore |
        // If desired
        CspProviderFlags.UseNonExportableKey
};

return new RSACryptoServiceProvider(2048, cspParams);
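With a persisted key in hand, the earlier export/reimport dance collapses to a single CopyWithPrivateKey followed by the store add. A minimal sketch (names are illustrative; assumes certificateDataFromCsrResponse holds the signed cert bytes and persistedKey was created as above):

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public static void InstallWithPersistedKey(byte[] certificateDataFromCsrResponse, RSA persistedKey)
{
    using (var cert = new X509Certificate2(certificateDataFromCsrResponse))
    using (var certWithKey = cert.CopyWithPrivateKey(persistedKey))
    using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
    {
        store.Open(OpenFlags.ReadWrite);
        store.Add(certWithKey);
    }
}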
I have replicated the following architecture: I insert GPS positions as documents into Cosmos DB, and in the JavaScript client (Google Maps) the pin moves. All the steps work: inserting the document, triggering the Azure Function, and SignalR linking the client to the document database in Azure Cosmos DB.
The code to upload a document to Cosmos:
Microsoft.Azure.Documents.Document doc = client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(databaseName, collectionName), estimatedPathDocument).Result.Resource;
ret[0] = doc.Id;
Azure function:
public static async Task Run(IReadOnlyList<Document> input, IAsyncCollector<SignalRMessage> signalRMessages, ILogger log)
{
    if (input != null && input.Count > 0)
    {
        var val = input.Select((d) => new
        {
            genKey = d.GetPropertyValue<string>("genKey"),
            dataType = d.GetPropertyValue<string>("dataType")
        });

        await signalRMessages.AddAsync(new SignalRMessage
        {
            UserId = val.First().genKey,
            Target = "tripUpdated",
            Arguments = new[] { input }
        });
    }
}
When I insert only one position, the Azure Function records the event and fires, moving the pin.
The problem is when I insert a series of positions sequentially, almost instantaneously: this does not trigger the function for the documents following the first one.
Only if I insert a delay do some documents fire the trigger:
Microsoft.Azure.Documents.Document doc = client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(databaseName, collectionName), estimatedPathDocument).Result.Resource;
Thread.Sleep(3000);
ret[0] = doc.Id;
I don't know if I am loading the documents correctly, but even managing them asynchronously (see below), it almost seems like the trigger fires only when the document in Cosmos DB is "really/physically" created.
Task.Run(async () => await AzureCosmosDB_class.MyDocumentAzureCosmosDB.CreateRealCoordDocumentIfNotExists_v1("axylog-cdb-01", "axylog-collection-01", realCoord, uri, key));
Could the solution be to put the documents in a queue and load them into Azure Cosmos sequentially, with a delay of about ten seconds between each?
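A note on the function above (an observation, not a verified fix): the Cosmos DB change feed trigger delivers documents in batches, and the function sends a single SignalR message per invocation, keyed to the first document's genKey. Rapid inserts that arrive as one batch would therefore produce only one pin move. A sketch that emits one message per document:

public static async Task Run(IReadOnlyList<Document> input, IAsyncCollector<SignalRMessage> signalRMessages, ILogger log)
{
    if (input == null) return;

    // One message per changed document, instead of one per batch.
    foreach (var d in input)
    {
        await signalRMessages.AddAsync(new SignalRMessage
        {
            UserId = d.GetPropertyValue<string>("genKey"),
            Target = "tripUpdated",
            Arguments = new object[] { d }
        });
    }
}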
I have the following C# function that selects all the documents out of a Couchbase bucket.
public IEnumerable<CatalogueItem> GetAll()
{
    using (var bucket = _cluster.OpenBucket(_bucketName))
    {
        var request = new QueryRequest("SELECT `boxcatalogue`.* FROM `boxcatalogue` LIMIT 100;").UseStreaming(true);
        var query = bucket.Query<CatalogueItem>(request);
        return query.Rows;
    }
}
I can run the same query from the Couchbase console and it returns my documents correctly. But when called from the C# code, query.Rows is null. Inspecting the query object at a breakpoint, I can see it has a private member called Results that is populated with the results of the query, so why is Rows null?
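One possible explanation (an assumption about the 2.x .NET SDK's streaming behavior, not something confirmed in this thread): with UseStreaming(true) the result is forward-only and meant to be enumerated directly rather than read from the buffered Rows list, and it must be consumed before the bucket is disposed. A sketch along those lines:

public IEnumerable<CatalogueItem> GetAll()
{
    using (var bucket = _cluster.OpenBucket(_bucketName))
    {
        var request = new QueryRequest("SELECT `boxcatalogue`.* FROM `boxcatalogue` LIMIT 100;").UseStreaming(true);

        // Enumerate the result object itself (IQueryResult<T> implements IEnumerable<T>)
        // and materialize it before the bucket is disposed.
        return bucket.Query<CatalogueItem>(request).ToList();
    }
}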
I want to add a property with a default value to a set of documents, retrieved via a SELECT query, wherever the property has no value.
I was thinking of this in two parts:
SELECT * FROM c article WHERE article.details.locale = 'en-us'
I'd like to find all articles where article.details.x does not exist.
Add the property, article.details.x = true
I was hoping this EXEC command could be supported via the Azure Portal, so I don't have to create a migration tool to run this command once, but I couldn't find the option in the portal. Is this possible?
You can use Azure DocumentDB Studio as a front end for creating and executing stored procedures. It can be found here. It's pretty easy to set up and use.
I've mocked up a stored procedure based on your example:
function updateArticlesDetailsX() {
    var collection = getContext().getCollection();
    var collectionLink = collection.getSelfLink();
    var response = getContext().getResponse();
    var docCount = 0;
    var counter = 0;

    tryQueryAndUpdate();

    function tryQueryAndUpdate(continuation) {
        var query = {
            query: "select * from root r where IS_DEFINED(r.details.x) != true"
        };
        var requestOptions = {
            continuation: continuation
        };

        var isAccepted = collection.queryDocuments(
            collectionLink,
            query,
            requestOptions,
            function queryCallback(err, documents, responseOptions) {
                if (err) throw err;
                if (documents.length > 0) {
                    // If at least one document is found, update it.
                    docCount = documents.length;
                    for (var i = 0; i < docCount; i++) {
                        tryUpdate(documents[i]);
                    }
                    response.setBody("Updated " + docCount + " documents");
                } else if (responseOptions.continuation) {
                    // Else if the query came back empty, but with a continuation token,
                    // repeat the query with the token.
                    tryQueryAndUpdate(responseOptions.continuation);
                } else {
                    throw new Error("Document not found.");
                }
            });

        if (!isAccepted) {
            throw new Error("The stored procedure timed out");
        }
    }

    function tryUpdate(document) {
        // Optimistic concurrency control via HTTP ETag.
        var requestOptions = { etag: document._etag };

        // Update statement goes here:
        document.details.x = "some new value";

        var isAccepted = collection.replaceDocument(
            document._self,
            document,
            requestOptions,
            function replaceCallback(err, updatedDocument, responseOptions) {
                if (err) throw err;
                counter++;
            });

        // If we hit execution bounds - throw an exception.
        if (!isAccepted) {
            throw new Error("The stored procedure timed out");
        }
    }
}
I got the rough outline for this code from Andrew Liu on GitHub.
This outline should be close to what you need to do.
DocumentDB has no way to update a bunch of documents in a single query. However, the portal does have a Script Explorer that allows you to write and execute a stored procedure against a single collection. Here is an example sproc that combines a query with a replaceDocument command to update some documents, which you could use as a starting point for writing your own. The one gotcha to keep in mind is that DocumentDB will not allow sprocs to run longer than 5 seconds (with some buffer). So you may have to run your sproc multiple times and keep track of what you've already done if it can't complete in one 5-second run. The use of IS_DEFINED(collection.field.subfield) != true (thanks @cnaegle) in your query, followed up by a document replacement that defines that field (or removes the document), should allow you to run the sproc as many times as necessary.
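If you end up driving the sproc from code instead of the portal, the "run it multiple times" advice might look like this minimal C# sketch (sprocLink is an illustrative name; the termination condition matches the example sproc above, which throws "Document not found." once nothing matches):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static async Task RunUntilDoneAsync(DocumentClient client, string sprocLink)
{
    while (true)
    {
        try
        {
            // Each pass updates one page of matching documents.
            var response = await client.ExecuteStoredProcedureAsync<string>(sprocLink);
            Console.WriteLine(response.Response); // e.g. "Updated 25 documents"
        }
        catch (DocumentClientException ex) when (ex.Message.Contains("Document not found"))
        {
            break; // the sproc throws this once no documents match the query
        }
        // Other failures (e.g. hitting the ~5-second bound) propagate; a retry
        // is safe because each pass only touches documents still missing the field.
    }
}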
If you didn't want to write a sproc, the easiest thing to do would be to export the database using the DocumentDB Data Migration tool. Import that into Excel to manipulate or write a script to do the manipulation. Then upload it again using the Data Migration tool.
I have a problem with fetching results from a database. I'm using Firebird, c3p0, JdbcTemplate, and Spring MVC.
public class InvoiceDaoImpl implements InvoiceDao {
    ...
    public Invoice getInvoice(int id) {
        List<Invoice> invoice = new ArrayList<Invoice>();
        String sql = "SELECT ID, FILENAME, FILEBODY FROM T_FILES WHERE id = " + id;
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        invoice = jdbcTemplate.query(sql, new InvoiceRowMapper());
        return invoice.get(0);
    }
    ...
}
The model used:
public class Invoice {
    private int ID;
    private Blob FILEBODY;
    private String FILENAME;
    // getters and setters ...
}
RowMapper and Extractor are standard.
In the controller I'm getting the file stream and returning it as a file for download:
#RequestMapping("admin/file/GetFile/{id}")
public void invoiceGetFile(#PathVariable("id") Integer id, HttpServletResponse response) {
Invoice invoice = invoiceService.getInvoice(id);
try {
response.setHeader("Content-Disposition", "inline;filename=\"" + invoice.getFILENAME() + "\"");
OutputStream out = response.getOutputStream();
response.setContentType("application/x-ms-excel");
IOUtils.copy(invoice.getFILEBODY().getBinaryStream(), out);
out.flush();
out.close();
} catch (IOException e) {
e.printStackTrace();
} catch (SQLException e) {
e.printStackTrace();
}
}
catalina.out:
datasource.DataSourceTransactionManager - Releasing JDBC Connection [com.mchange.v2.c3p0.impl.NewProxyConnection#566b1836] after transaction
http-nio-8443-exec-29 DEBUG datasource.DataSourceUtils - Returning JDBC Connection to DataSource
http-nio-8443-exec-29 DEBUG resourcepool.BasicResourcePool - trace com.mchange.v2.resourcepool.BasicResourcePool#4d2dbc65 [managed: 2, unused: 1, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection#4ca5c225)
org.firebirdsql.jdbc.FBSQLException: GDS Exception. 335544332.
invalid transaction handle (expecting explicit transaction start)
at org.firebirdsql.jdbc.FBBlobInputStream.<init>(FBBlobInputStream.java:38)
at org.firebirdsql.jdbc.FBBlob.getBinaryStream(FBBlob.java:404)
I don't understand: why do I need transaction handling when I'm doing a SELECT, not an UPDATE or INSERT?
Firebird (and JDBC for that matter) does everything in a transaction, because the transaction determines the visibility of data.
In this specific case the select query was executed within a transaction (presumably an auto-commit), but the blob access is done after the transaction has been committed.
This triggers this specific exception because Jaybird knows it requires a transaction to retrieve the blob; but even if Jaybird started a new transaction, accessing the blob wouldn't work, as a blob handle is only valid inside the transaction that queried for it.
You will either need to disable auto commit and only commit after retrieving the blob (which would require extensive changes to your DAO by the looks of it), or your row mapper needs to explicitly load the blob (for example into a byte array).
Another option is to ensure this query is executed using a holdable result set (in which case Jaybird will materialize the blob into a byte array within the Blob instance for you), but I am unsure if JdbcTemplate allows you to specify use of holdable result sets.