ASP.NET Azure Blob Geographically Redundant Storage - How to use?

I have been searching for an answer on MS, SE and Google and cannot find it. I want to use the GRS option for Azure Storage (Cloud Block Blobs) but I cannot figure out how to properly do that.
I created my storage object in Azure and chose the GRS option.
I get that I have a primary and secondary connection string and know how to get that from the Azure portal.
What I do not know, in ASP.NET 4.0, is how to set both connection strings on the CloudBlobClient and gracefully handle the primary storage being unavailable.
- What exception is thrown, and where, when the primary is unavailable? Is it thrown when I create the client, or when I try to get a blob reference?
- How do I then use the secondary?
Do I have to just catch any old exception and then try the secondary connection string in a new CloudBlobClient if the primary does not work? Or is there something in the API for this? I would think there would be, but I cannot find it.
None of the "How to use Azure Storage" tutorials I have seen go into this. Most of the documentation seems to date from before mid-2014, when this feature became generally available.

This blog post should help you. In short, if you want to read from both primary and secondary you want to enable RA-GRS - essentially read access from the secondary. If you are using our storage client libraries, you can also enable a retry policy that will first try to read from the primary and then from the secondary if the first read fails.
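To make that concrete, here is a minimal sketch with the classic Microsoft.WindowsAzure.Storage client library; the account name, key, container, and blob are placeholders. Note that with RA-GRS you do not need the secondary connection string at all: the client derives the secondary endpoint from the account name, and the LocationMode option controls failover.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.RetryPolicies;

// Placeholder connection string - only the primary is needed; the
// secondary endpoint is derived automatically for RA-GRS accounts.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");

CloudBlobClient client = account.CreateCloudBlobClient();

// Try the primary first and transparently fall back to the read-only
// secondary if the primary is unreachable - no manual exception handling
// or second client required.
client.DefaultRequestOptions.LocationMode = LocationMode.PrimaryThenSecondary;

CloudBlobContainer container = client.GetContainerReference("mycontainer");
CloudBlockBlob blob = container.GetBlockBlobReference("myblob.txt");
string contents = blob.DownloadText(); // retried against the secondary on failure

Without this setting, a primary outage surfaces as a StorageException when a request is actually sent (e.g. on DownloadText), not when you create the client or get the blob reference, since those calls do not touch the network.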

Related

Setting offer throughput or autopilot on container is not supported for serverless accounts - Azure Cosmos DB migration tool

After selecting all the details correctly in the Migration tool, it returns the error above about the throughput value, and setting it to 0 or -1 does not help.
A workaround to migrate the data is to first create the collection in Cosmos DB via the Azure portal and then run the Migration tool with the same details; it will then add all the rows to the collection you created. The main issue here was the creation of a new collection, but I do not know why the tool returns something related to throughput, which I think has nothing to do with it.
In general, setting an explicit value for offer_throughput is not allowed for serverless accounts. So either you omit that value and the default will be applied, or you change your account type.
Related issues (still open as of 2022-02-23):
https://learn.microsoft.com/en-us/answers/questions/94814/cosmos-quick-start-gt-create-items-contaner-gt-htt.html
https://github.com/Azure/azure-cosmos-dotnet-v2/issues/861
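As a sketch of the "omit that value" suggestion, this is what container creation looks like with the Microsoft.Azure.Cosmos .NET SDK when no throughput is passed; the endpoint, key, and names are placeholders:

using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://myaccount.documents.azure.com:443/", "<account-key>");
Database database = await client.CreateDatabaseIfNotExistsAsync("mydb");

// For a serverless account, pass no ThroughputProperties at all -
// supplying any throughput value is exactly what triggers the error above.
Container container = await database.CreateContainerIfNotExistsAsync(
    id: "mycollection",
    partitionKeyPath: "/pk");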

GCP encryption thru Beam / Dataflow APIs for Bigquery and Cloud SQL

Context: We are trying to load some CSV-format data into GCP BigQuery using GCP Dataflow (Apache Beam). As part of this, we are creating the BQ tables for the first time (for each table) through the BigQueryIO API. One of the customer requirements is that the data on GCP needs to be encrypted using customer-supplied/managed encryption keys.
Problem Statement: We are not able to find any way to specify "custom encryption keys" through the APIs while creating tables. The GCP documentation details how to specify customer-managed encryption keys through the BQ console, but we could not find anything about specifying them through the APIs from within the Dataflow code.
Code Snippet:
// Build the "project:dataset.table" spec for BigQueryIO
String tableSpec = new StringBuilder().append(PipelineConstants.PROJECT_ID).append(":")
        .append(dataValue.getKey().target_dataset).append(".")
        .append(dataValue.getKey().target_table_name).toString();
ValueProvider<String> valueProvider = StaticValueProvider.of("gs://bucket/folder/");

// Audit the source row count
dataValue.getValue().apply(Count.globally())
        .apply(ParDo.of(new RowCount(dataValue.getKey())))
        .apply(ParDo.of(new SourceAudit(runId)));

// Transform the rows and load them, creating the table if needed
dataValue.getValue().apply(ParDo.of(new PreProcessing(dataValue.getKey())))
        .apply(ParDo.of(new FixedToDelimited(dataValue.getKey())))
        .apply(ParDo.of(new CreateTableRow(dataValue.getKey(), runId, timeStamp)))
        .apply(BigQueryIO.writeTableRows().to(tableSpec)
                .withSchema(CreateTableRow.getSchema(dataValue.getKey()))
                .withCustomGcsTempLocation(valueProvider)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
Query: Could anybody let us know:
- Is it possible to provide an encryption key through the Beam API?
- If it is not possible with the current version, what could be a possible workaround?
Kindly let us know if additional information is required.
Customer-managed encryption keys are a new feature; not all libraries have been updated to support them yet.
If you know the table name in advance, you can use the UI/CLI or API to create the table, then run your normal flow to load data into that table. That might be a workaround for you.
https://cloud.google.com/bigquery/docs/customer-managed-encryption#create_table
API to create table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert
You need to set this section on the table object:
"encryptionConfiguration": {
  "kmsKeyName": string
}
More details on table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource
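For reference, a minimal sketch of that workaround with the google-cloud-bigquery Java client library; the dataset, table, schema, and KMS key resource name are placeholders. (Newer Beam releases have since added a withKmsKey option on BigQueryIO.write(), so it is also worth checking whether your Beam version supports it directly.)

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.EncryptionConfiguration;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

public class CreateCmekTable {
    public static void main(String[] args) {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // CMEK configuration: the Cloud KMS key the table should be encrypted with
        EncryptionConfiguration encryption = EncryptionConfiguration.newBuilder()
                .setKmsKeyName("projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key")
                .build();

        Schema schema = Schema.of(Field.of("col1", LegacySQLTypeName.STRING));

        TableInfo tableInfo = TableInfo
                .newBuilder(TableId.of("target_dataset", "target_table_name"),
                        StandardTableDefinition.of(schema))
                .setEncryptionConfiguration(encryption)
                .build();

        // Create the table with CMEK up front; the Beam pipeline can then
        // load into it (e.g. with CreateDisposition.CREATE_NEVER).
        bigquery.create(tableInfo);
    }
}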

SQLite-PCL : Accessing my own created table?

I'm referring to the SQLite-PCL tutorial here: https://code.tutsplus.com/tutorials/an-introduction-to-xamarinforms-and-sqlite--cms-23020
I'm very new to SQLite, so I'm lacking in knowledge in lots of basic things - I have tried Googling, can't understand most of it.
Does the call to new SQLiteConnection actually open the database, or does it just mean "the road to the database has been established; whether you access it or not is up to you"?
How do I check if there's already an existing database on the device? And if there is, how do I access it? I have Googled this, but it all seems a bit extreme - can't I just simply open the database?
Is it okay to have multiple SQLiteConnection instances to the same database, if I can be sure that I'm not going to do multiple transactions at the same time?
After I have INSERTed into the database, closed the app, and opened it back up - how do I make sure the database created in the previous session is there? Is there any way to debug this? Because I have no idea if the database is there or not, and I don't know how to access it either.
1. SQLiteConnection returns a connection object that is used to make subsequent queries.
2. Use File.Exists to see if the db file is already present.
3. Yes.
4. Again, use File.Exists to see if the db file is physically present.
Xamarin's ToDo sample provides a good overview of using SQLite with Forms.
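A minimal sketch of points 1, 2, and 4 with sqlite-net (the library the linked tutorial uses); the file name and the TodoItem class are placeholders:

using System;
using System.IO;
using SQLite;

public class TodoItem
{
    [PrimaryKey, AutoIncrement]
    public int ID { get; set; }
    public string Name { get; set; }
}

public static class Demo
{
    public static void Run()
    {
        string dbPath = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.Personal), "todo.db3");

        // The database is just a file, so File.Exists tells you whether a
        // previous session already created it.
        bool existedAlready = File.Exists(dbPath);

        // Opens (or creates) the database file and returns the connection
        // object used for all subsequent queries.
        var conn = new SQLiteConnection(dbPath);
        conn.CreateTable<TodoItem>(); // no-op if the table already exists
        conn.Insert(new TodoItem { Name = "Buy milk" });

        // Rows inserted in a previous session survive app restarts.
        int rows = conn.Table<TodoItem>().Count();
        conn.Close();
    }
}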

Qt - How to list all existing databases on PostgreSQL server using Qt interface

Could someone please explain how to obtain a list of all existing databases on a PostgreSQL server, to which the user already has access, using Qt? PostgreSQL documentation suggests the following query:
SELECT datname FROM pg_database WHERE datistemplate = false;
What are the correct parameters to the following functions:
QSqlDatabase::setDatabaseName(const QString & name) //"postgres" or "pg_database"?
QSqlDatabase::setUserName(const QString & name) //actual user name?
QSqlDatabase::setPassword(const QString & password) //no password? or user password?
Much appreciated. Thank you in advance.
You appear to have already answered the first part of your question. Connect to the postgres or template1 database and issue the query you've quoted above to get a list of databases. I'm guessing - reading between the lines - that you don't know how to connect to PostgreSQL to send that query, and that's what the second part of your question is about. Right?
If so, the QSqlDatabase accessor functions you've mentioned are used to set connection parameters, so the "correct" values depend on your environment.
If you want to issue the query above - to list databases - then you would probably want to connect to the postgres database, as it always exists and isn't generally used for anything specific; it's there just to be connected to. That means you'd call setDatabaseName("postgres");. Passing pg_database to setDatabaseName would be nonsensical, since pg_database is the pg_catalog.pg_database table; it isn't a database you can connect to. pg_database is one of those odd tables that exists in every database, which might be what confused you.
With the other two accessors specify the appropriate username and password for your environment, same as you'd use for psql; there's no possible way I could tell you which ones to use.
Note that if you set a password but one isn't required - because authentication is done over unix socket ident, trust, or another non-password scheme - the password will be ignored.
If this doesn't cover your question, consider editing it and explaining your problem in more detail. What have you tried? What didn't work the way you expected? Error messages? Qt version?
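Putting the pieces together, a minimal sketch using the QtSql module (QPSQL driver); host, user name, and password are placeholders for your environment:

#include <QCoreApplication>
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QSqlError>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QSqlDatabase db = QSqlDatabase::addDatabase("QPSQL");
    db.setHostName("localhost");
    db.setDatabaseName("postgres");   // maintenance DB that always exists
    db.setUserName("myuser");         // same credentials you'd use for psql
    db.setPassword("mypassword");     // ignored under non-password auth schemes

    if (!db.open()) {
        qDebug() << "Connection failed:" << db.lastError().text();
        return 1;
    }

    // List all non-template databases visible to this user.
    QSqlQuery query("SELECT datname FROM pg_database WHERE datistemplate = false;");
    while (query.next())
        qDebug() << query.value(0).toString();

    return 0;
}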

ORA-24778: cannot open connections

I am getting ORA-24778: cannot open connections. What are the possible causes?
We have a number of applications deployed in WAS7 profile and they connect to a number of schemas in Oracle 11g.
One of the schemas connects to another schema via a public DB link.
I cannot identify a solution for this cause.
After restarting the WAS7 profile, it is OK for a while and then starts hitting the error again.
Please help!
I assume you left out a few details:
- You are using XA
- You are using XA in combination with database links
- You are using shared database links
- The ORA-24778 error is not happening all the time
Either you haven't configured the shared server option or you are not connected through a shared server. However, Oracle requires you to use shared server connections if you want to use XA and database links.
Or the parameter OPEN_LINKS_PER_INSTANCE is not set sufficiently. Keep in mind that there is also an open_links init.ora parameter; the open_links parameter does not apply to XA.
This error can also occur when you invoke a dblink in an existing transaction.
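A quick sketch of how to check the parameters mentioned above from SQL*Plus (run as a DBA; the value 20 is just an example):

-- Is the shared server option configured? (XA + database links requires it)
SHOW PARAMETER shared_servers;

-- Per-session limit on open database links; does not apply to XA
SHOW PARAMETER open_links;

-- Instance-wide limit that governs XA transactions
SHOW PARAMETER open_links_per_instance;

-- Raise the XA-relevant limit; the parameter is static, so this takes
-- effect only after an instance restart
ALTER SYSTEM SET open_links_per_instance = 20 SCOPE = SPFILE;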
