How can I see the SQL code Speedment sends to the database?

When I create and use a stream with Speedment, how can I see what is sent to the database? For example, if I try the first example on GitHub:
Optional<Film> longFilm = films.stream()
    .filter(Film.LENGTH.greaterThan(120))
    .findAny();

You can see the generated queries by attaching a logger to the builder when you first set up the Speedment application.
final DemoApplication app = new DemoApplicationBuilder()
    .withLogging(ApplicationBuilder.LogType.STREAM)
    .build();
Now when you invoke .findAny(), the generated SQL query will be shown in the output.
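With stream logging enabled, running the example above against the Sakila sample database (which the GitHub example uses) logs the query roughly as follows; the column list is shortened here, and the exact log format may differ between Speedment versions:
SELECT `film_id`,`title`,...,`length` FROM `film` WHERE (`film`.`length` > ?), values:[120]
The stream predicate is rendered as a parameterized WHERE clause, with the bound values listed after the statement.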

Related

Avoid manually inserting the device code key in AzureKusto

I'm trying to connect to a db in Azure Data Explorer from R using the AzureKusto library. Following the documentation at https://github.com/Azure/AzureKusto, after calling the kusto_database_endpoint(...) function I need to open a browser page and enter the printed code manually. Is there a way to skip this manual step and do it automatically? Or are there alternatives for connecting to an ADX db?
Thanks for the help!
Co-creator of the package here. Thank you for the question. Yes, you can use the get_kusto_token function to obtain a token and then pass it to kusto_database_endpoint as the .query_token argument. get_kusto_token supports the following authentication flows:
"authorization_code"
"device_code"
"client_credentials"
"resource_owner"
For example, if you have an AAD application service principal that has access to the Azure Data Explorer cluster, you can use its ID and secret to authenticate:
# authenticate using client_credentials method: see ?AzureAuth::get_azure_token
token <- get_kusto_token("https://mycluster.kusto.windows.net",
                         tenant="mytenant",
                         authtype="client_credentials",
                         app="myappid",
                         password="myclientsecret")

kusto_database_endpoint(server = "mycluster.kusto.windows.net",
                        database = "mydb",
                        .query_token=token)
The help page ?AzureKusto::get_kusto_token provides more detailed information on this. Also, please note that the get_kusto_token function is a wrapper around AzureAuth::get_azure_token. The readme for the AzureAuth R package has more detailed examples of other methods of obtaining an Azure access token: https://github.com/Azure/AzureAuth

Going offline with app and database syncing

I have a NativeScript (Angular) app that makes API calls to a server to get data. I want to implement bi-directional synchronization once a device comes online, but using the current API, no BaaS.
I could do a sort of caching: once in a while the app invalidates the info in its database and fetches it again. I don't like this approach because there are big lists that may change, and they are fetched in batches, i.e. by page. One of them is a list of files downloaded to and stored on the device, so I have to keep the files that are still in the list and delete those that are not. It sounds like a nightmare.
How would you solve such a problem?
I use the nativescript-couchbase plugin to store the data. We have the following services:
Connectivity
Data
API Service
Based on whether connectivity is online or offline, we either fetch data from the remote API or from the Couchbase db. Please note that the API service always returns data from Couchbase only.
So in online mode:
API Call -> Write to DB -> Return latest data from Couchbase
And in offline mode:
Read DB -> Return latest data from Couchbase
Along with this, we maintain all API calls in a queue, so whenever connectivity returns, the API calls are processed in sequence; a sketch of this queue follows below. Another challenge that you may face when coming back online from offline mode is token expiry. This problem can be solved by showing a small popup to the user after you come online.
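Here is a minimal sketch of such a queue built on the tns-core-modules connectivity API. The enqueueOrSend helper and the function-shaped queue entries are illustrative assumptions, not the actual service code:
const connectivityModule = require("tns-core-modules/connectivity");

// API calls made while offline are parked here as zero-argument functions
const pendingCalls = [];

function enqueueOrSend(call) {
    if (connectivityModule.getConnectionType() === connectivityModule.connectionType.none) {
        pendingCalls.push(call); // offline: park the call for later
    } else {
        call(); // online: fire immediately
    }
}

// when connectivity returns, replay the parked calls in order
connectivityModule.startMonitoring((newConnectionType) => {
    if (newConnectionType !== connectivityModule.connectionType.none) {
        while (pendingCalls.length > 0) {
            pendingCalls.shift()();
        }
    }
});
Each queued entry should also carry enough information to refresh the auth token first if it has expired in the meantime.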
I do this by saving my data as a JSON string to the device's file system.
When the app loads/reloads I read it from the file, i.e.:
const fileSystemModule = require("tns-core-modules/file-system");
const appSettings = require("tns-core-modules/application-settings");

// viewName, json_string and retFun come from the surrounding code
var siteid = appSettings.getNumber("siteid");
var fileName = viewName + ".json";

// resolve documents/site/<siteid>/<viewName>.json
const documents = fileSystemModule.knownFolders.documents();
const site_folder = documents.getFolder("site");
const siteid_folder = site_folder.getFolder(siteid.toString());
const directoryPath = fileSystemModule.path.join(siteid_folder.path, fileName);
const directoryFile = fileSystemModule.File.fromPath(directoryPath);

// write the JSON string, then read it back and hand it to the callback
directoryFile.writeText(json_string)
    .then((result) => {
        directoryFile.readText().then((res) => {
            retFun(res);
        });
    })
    .catch((err) => {
        console.log(err.stack);
    });

How to delete all data in a partition?

I have a CosmosDB collection with a number of different partitions. I want to delete all of the data in one of the partitions so I tried to run the command:
db.myCollection.deleteAll({PartitionKey: 'pop-9q'})
Here PartitionKey is the field that I partition/shard on. But when I execute this, it returns the not-very-helpful message:
ERROR: An Error has occurred
Why would I be getting this message and how can I either get more details on the cause or find a resolution?
Currently, you are unable to perform a bulk delete. Please upvote and comment on this feature request: Add the ability to delete ALL data in a partition
Additionally, which API are you consuming? For the Gremlin API you could execute something like the following: g.V().drop()
The Microsoft.Azure.Cosmos SDK has added this ability - currently only available as a preview feature (which requires you to opt in via the portal)
See here for more details:
https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-delete-by-partition-key?tabs=dotnet-example
Sample code included there:
// Get reference to the container
var container = cosmosClient.GetContainer("DatabaseName", "ContainerName");

// Delete by logical partition key
ResponseMessage deleteResponse = await container.DeleteAllItemsByPartitionKeyStreamAsync(new PartitionKey("Contoso"));

if (deleteResponse.IsSuccessStatusCode) {
    Console.WriteLine($"Delete all documents with partition key operation has successfully started");
}
As @Mike said, a "delete all data" feature is not yet supported in the Cosmos DB SQL API or the Mongo API. I notice that you have already added comments to the above link. I'll just offer a workaround here: using a bulk delete stored procedure with the Cosmos DB SQL API.
(sample code: https://gist.github.com/deepumi/2a23c5380202bddf0b85e83baf5833be)
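The core of such a stored procedure looks roughly like the following sketch. It runs server-side, is scoped to the single partition key value it is invoked with, and for brevity ignores the continuation handling and "accepted" flags that the full gist deals with:
function bulkDeleteSketch(query) {
    var collection = getContext().getCollection();
    var response = getContext().getResponse();
    var deleted = 0;

    // find the documents in this partition that match the query...
    collection.queryDocuments(collection.getSelfLink(), query, {},
        function (err, docs) {
            if (err) throw err;
            // ...and delete them one by one
            docs.forEach(function (doc) {
                collection.deleteDocument(doc._self, {}, function (delErr) {
                    if (delErr) throw delErr;
                    deleted++;
                });
            });
            response.setBody(deleted);
        });
}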
For the Mongo API, unfortunately, even stored procedures are not supported. You could create an Azure HTTP trigger function to execute the bulk delete code whenever you want, or merge it into your program code.
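A minimal sketch of such an HTTP trigger function in Node.js, assuming the mongodb driver; the connection-string setting, db and collection names, and the PartitionKey filter field are all illustrative:
const { MongoClient } = require("mongodb");

module.exports = async function (context, req) {
    // connection string stored in the function app settings (assumed name)
    const client = await MongoClient.connect(process.env.COSMOS_MONGO_CONN);
    try {
        // delete every document whose shard key matches the requested value
        const result = await client.db("mydb").collection("myCollection")
            .deleteMany({ PartitionKey: req.query.pk }); // e.g. pk=pop-9q
        context.res = { body: { deleted: result.deletedCount } };
    } finally {
        await client.close();
    }
};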

How to view DynamoDB logs | DynamoDBMapper issue

I have a Lambda function which tries to save an item to DynamoDB. The following is the code snippet which does that:
AmazonDynamoDB dynamoDBClient = AmazonDynamoDBClientBuilder.standard().withRegion(Regions.US_WEST_2).build();
logger.log("dynamoDBClient instantiated"+dynamoDBClient);
DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient);
logger.log("Invoking save"+mapper);
mapper.save(user);
I have populated the user object with the values that I want to write to the table. When I execute my Lambda function, the log displayed in CloudWatch is:
dynamoDBClient instantiatedcom.amazonaws.services.dynamodbv2.AmazonDynamoDBClient#6221a451
I do not see the "Invoking save" log, which suggests that something went wrong when the DynamoDBMapper was instantiated. However, I don't see any error logs in CloudWatch either.
What am I doing wrong? Any help is greatly appreciated.
Thanks
That code looks fine.
What is your Lambda timeout set to, and how long does the process execute before it stops? I'm wondering if your process is timing out.
I'm also wondering if you are packaging the DynamoDB SDK library with your code correctly. Maybe you could share your build file?

Symfony - Log runnable native queries when the database is down

I'm working on a Symfony app that provides a REST web service (simple HTTP requests with JSON).
That service checks some rules and inserts a few lines into two MySQL tables (write only).
For optimization reasons, even though the Doctrine bundle is available, I use native MySQL queries (with bound params) to insert these lines.
My need is: if for any reason the database is not available, write the "runnable" queries into a log file.
The final purpose is that when the database is back, I want to be able to execute the file's content directly against the database.
Note that there is no unique constraint (the pk is a generated uuid) and no lock or transaction to handle (simple insert statements).
I wrote a custom SQLLogger, but when $connection->insert(...) is called, the connection fails before the logger is called.
So, my question is: is there a way to get the final query (with bound parameters) without a database connection?
Or should I rewrite the mechanism that binds params into the query and log it myself when the database is not available?
Best regards,
Julien
As the final query with parameters is built by the database, there is just no way to build the query in PHP and be guaranteed that it will be the same as the database's.
The only way is to build the query without bound parameters, but this is clearly not good practice.
So, I finally decided to store the whole JSON (API request body) in a file when the database is not available.
When the database is back, instead of replaying SQL queries, I can replay the original HTTP requests.
Hope this late self-answer will help someone.
Best regards.
