Does CppSQLite query the disk or the cache? - sqlite

I use CppSQLite 3.5.7.
Is there a way to log, or otherwise find out, whether a SQLite query reads from disk or is served from the page cache to get its results?

The SQLite C API has the sqlite3_db_status() function, which allows querying the SQLITE_DBSTATUS_CACHE_HIT and SQLITE_DBSTATUS_CACHE_MISS counters per connection.
The CppSQLite library does not expose this API (or the underlying sqlite3* handle); you'd have to modify it.
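For illustration, here is a minimal sketch of reading those counters through the C API. It assumes you have patched CppSQLite so the raw sqlite3* handle is reachable; the getHandle() accessor below is hypothetical, not part of CppSQLite 3.5.7.

#include <cstdio>
#include <sqlite3.h>
#include "CppSQLite3.h"

void logCacheStats(CppSQLite3DB& db)
{
    sqlite3* handle = db.getHandle(); // hypothetical accessor you would add

    int hits = 0, misses = 0, highwater = 0;

    // resetFlag = 1 clears each counter after reading, so every call
    // reports activity since the previous call.
    sqlite3_db_status(handle, SQLITE_DBSTATUS_CACHE_HIT,  &hits,   &highwater, 1);
    sqlite3_db_status(handle, SQLITE_DBSTATUS_CACHE_MISS, &misses, &highwater, 1);

    // A cache miss means SQLite had to read a page from disk; a hit
    // was served from the in-memory page cache.
    std::printf("cache hits: %d, cache misses: %d\n", hits, misses);
}

Calling this right after running a query tells you whether that query was answered from the page cache or had to touch the disk.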

Related

Does AWS AppSync have scan operations to scan DynamoDB?

I am building a serverless web app with AWS Amplify - GraphQL - DynamoDB. I want to know what exactly a scan operation is in this context. For example, I have a User table, and the queries listUsers and getUser were generated from the Amplify schema. Are they scan operations or queries?
Thank you in advance for your answers; I could only find the definition of a scan operation, but there are no examples to help me identify one when it comes to GraphQL.
Amplify uses Filter Expressions, which are a type of Query.
You can see this yourself by looking at the .vtl files that amplify generates and uploads to appsync.
They are located here: amplify/#current-cloud-backend/api/[API NAME]/build/resolvers
In that folder you can open up one of the Query.list[Model].req.vtl files. Even if you are not familiar with the Velocity Template Language, you can still get the idea: you can see that it uses the expression $util.transform.toDynamoDBFilterExpression.
More info about that util is in the AppSync resolver mapping template utility reference; from there, look at the docs for toDynamoDBFilterExpression.
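For illustration, the filter handling in one of those request templates looks roughly like this (a simplified sketch, not the exact template Amplify generates):

## Simplified sketch of the filter handling in a Query.list[Model].req.vtl;
## the exact generated template differs between Amplify versions.
#if( $context.args.filter )
  #set( $filterExpression = $util.transform.toDynamoDBFilterExpression($ctx.args.filter) )
#end

The GraphQL filter argument from the client is translated into a DynamoDB filter expression and applied on the DynamoDB side.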

Illegal invocation using SQLite on the web

import { SQLite } from 'expo-sqlite';
export const db = SQLite.openDatabase("db.db");
I tried to use SQLite in Expo and ran it from a browser; however, I get the error TypeError: Illegal invocation. Can anyone help me, please?
The WebSQL API is bad enough that it was ultimately abandoned as a standard for the web.
The expo-sqlite module provides an SQL database with a WebSQL based interface. This is pretty powerful, and supports pretty much all the features of SQLite. SQLite is also perfect for exactly the kind of use case that apps with offline requirements have. It lets you store large amounts of structured data on disk and read only the parts you need for displaying the current screen into memory.
Maybe you should try @databases/expo:
https://itnext.io/using-sqlite-in-expo-for-offline-react-native-apps-a408d30458c3
import connect, {sql} from '@databases/expo';
const db = connect('my-database');
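From there, queries go through the sql tagged template. A small sketch based on the linked article; the tasks table is a made-up example, not part of the library:

// Sketch, assuming the `db` handle from above.
async function demo() {
  await db.query(sql`
    CREATE TABLE IF NOT EXISTS tasks (
      id INTEGER NOT NULL PRIMARY KEY,
      name TEXT NOT NULL
    );
  `);

  // Values interpolated through the sql tag are sent as bound
  // parameters, not concatenated into the SQL string.
  const name = 'first task';
  await db.query(sql`INSERT INTO tasks (name) VALUES (${name});`);

  const rows = await db.query(sql`SELECT id, name FROM tasks;`);
  console.log(rows);
}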

cosmosdb - archive data older than n years into cold storage

I researched in several places and could not find any direction on what options there are to archive old data from Cosmos DB into cold storage. I see that for DynamoDB in AWS it is mentioned that you can move DynamoDB data into S3, but I am not sure what the options are for Cosmos DB. I understand there is a time-to-live option where the data will be deleted after a certain date, but I am interested in archiving rather than deleting. Any direction would be greatly appreciated. Thanks
I don't think there is a single-click built-in feature in Cosmos DB to achieve that.
Still, as you mentioned appreciating any direction, I suggest you consider the DocumentDB Data Migration Tool.
Notes about the Data Migration Tool:
- You can specify a query to extract only the cold data (for example, by a creation date stored within documents); see the example command after this list.
- It supports exporting to various targets (JSON file, blob storage, DB, another Cosmos DB collection, etc.).
- It compacts the data in the process - it can merge documents into a single array document and zip it.
- Once you have the configuration set up, you can script it to be triggered automatically using your favorite scheduling tool.
- You can easily reverse the source and target to restore the cold data to the active store (or to dev, test, backup, etc.).
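For example, an export run with the tool's command-line executable might look like this (a sketch; the account, collection, query, and file names are placeholders):

dt.exe /s:DocumentDB ^
       /s.ConnectionString:"AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=<key>;Database=mydb" ^
       /s.Collection:orders ^
       /s.Query:"SELECT * FROM c WHERE c.createdAt < '2016-01-01'" ^
       /t:JsonFile ^
       /t.File:archive-2015.json ^
       /t.Overwrite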
To remove the exported data you could use the mentioned TTL feature, but that could cause data loss should your export step fail. I would suggest writing and executing a stored procedure that queries and deletes all exported documents with a single call (sketched below). That SP would not execute automatically, but could be included in the automation script and executed only if the data was exported successfully first.
See: Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs.
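A minimal sketch of such a stored procedure (Cosmos DB server-side code is JavaScript; this follows the common bulk-delete pattern, so verify the behavior on large result sets before relying on it):

// Sketch of a bulk-delete stored procedure; pass in the same query
// used for the export, e.g. "SELECT * FROM c WHERE c.createdAt < '2016-01-01'".
function bulkDelete(query) {
    var collection = getContext().getCollection();
    var response = getContext().getResponse();
    var deletedCount = 0;

    queryAndDelete();

    function queryAndDelete() {
        var accepted = collection.queryDocuments(
            collection.getSelfLink(), query, {},
            function (err, docs) {
                if (err) throw err;
                if (docs.length > 0) deleteDocs(docs);
                else response.setBody({ deleted: deletedCount });
            });
        // Not accepted means the SP is out of time; report progress so
        // the caller can invoke it again.
        if (!accepted) response.setBody({ deleted: deletedCount });
    }

    function deleteDocs(docs) {
        if (docs.length === 0) return queryAndDelete();
        var accepted = collection.deleteDocument(docs[0]._self, {},
            function (err) {
                if (err) throw err;
                deletedCount++;
                docs.shift();
                deleteDocs(docs);
            });
        if (!accepted) response.setBody({ deleted: deletedCount });
    }
}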
UPDATE:
These days Cosmos DB has added the Change feed, which really simplifies writing a carbon copy of the data somewhere else.

Is there any way to ingest the result of curl via fluentd?

We are seeking the simplest way to send Alfresco's audit log to Elasticsearch.
I think using the audit query that Alfresco supplies to fetch the audit log would be the simplest way (since the audit log data is hard to read directly from the database).
That query returns its results as JSON, so I'd like to fetch the query output directly with fluentd and send it to Elasticsearch.
I roughly understand how to output to Elasticsearch, but I wonder whether I can run the curl command for that query directly from fluentd.
Otherwise, if you have another simple idea for getting Alfresco's audit log, kindly let me know.
I am not sure whether I understood it fully, but based on your last statement I am giving this answer.
To retrieve audit entries from an Alfresco repository, you can directly use Alfresco's REST APIs, which allow you to access them.
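If you do want fluentd itself to run the curl command, its exec input can poll a command on an interval and parse the output as JSON. A sketch, assuming the fluent-plugin-elasticsearch output plugin is installed; the endpoint, credentials, and hosts are placeholders to adapt:

<source>
  @type exec
  tag alfresco.audit
  # Placeholder endpoint and credentials; point this at your audit application.
  command curl -s -u admin:admin "http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/audit-applications/alfresco-access/audit-entries"
  run_interval 60s
  <parse>
    @type json
  </parse>
</source>

<match alfresco.audit>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>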

Symfony - Log runnable native queries when the database is down

I'm working on a Symfony app that provides a REST web service (simple HTTP requests with JSON).
That service checks some rules and inserts a few rows into two MySQL tables (write only).
For performance reasons, even though the Doctrine bundle is available, I use native MySQL queries (with bound parameters) to insert these rows.
My need is: if for any reason the database is not available, write the "runnable" queries into a log file.
The final purpose is that when the database is back, I want to be able to execute the file's content directly against the database.
Note that there is no unique constraint (the PK is a generated UUID) and no locks or transactions to handle (simple INSERT statements).
I wrote a custom SQLLogger, but when $connection->insert(...) is called, the connection fails before the logger is called.
So my question is: is there a way to get the final query (with bound parameters) without a database connection?
Or should I rewrite the mechanism that binds params into the query and log it myself when the database is not available?
Best regards,
Julien
As the final query with parameters is built by the database, there is just no way to build the query in PHP and be guaranteed that it will be identical to what the database would execute.
The only way is to build the query without bound parameters, but that is clearly not good practice.
So I finally decided to store the whole JSON (API request body) in a file when the database is not available.
That way, when the database is back, instead of replaying SQL queries, I can replay the original HTTP request.
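A sketch of that fallback; the class, table, and file names are made up for illustration:

<?php
use Doctrine\DBAL\Connection;
use Doctrine\DBAL\Exception\ConnectionException;
use Symfony\Component\HttpFoundation\Request;

class InsertWithSpool
{
    private $connection;
    private $spoolFile;

    public function __construct(Connection $connection, $spoolFile)
    {
        $this->connection = $connection;
        $this->spoolFile = $spoolFile; // e.g. var/log/pending_requests.jsonl
    }

    public function insertOrSpool(Request $request, array $row)
    {
        try {
            $this->connection->insert('audit_entries', $row);
        } catch (ConnectionException $e) {
            // One JSON body per line; a replay script can POST each line
            // back to the API endpoint once the database is up again.
            file_put_contents(
                $this->spoolFile,
                $request->getContent() . PHP_EOL,
                FILE_APPEND | LOCK_EX
            );
        }
    }
}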
Hope this late self-answer will help someone.
Best regards.
