If we use only PartiQL for DynamoDB calls, do the calls get automatically routed through DAX, given that we also use a DAX cache in front of some of our tables?
Basically, how do we use PartiQL along with DAX?
Thanks
PartiQL does not use DAX behind the scenes.
If you are using PartiQL and need a caching layer, I'd suggest using ElastiCache. It's not seamless, because your application has to be aware of the cache and call it directly, but it would provide the necessary functionality.
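For illustration, here's a minimal cache-aside sketch in Python using boto3 and redis-py; the cache endpoint, table name, and key schema are hypothetical:

```python
import json

import boto3
import redis

dynamodb = boto3.client("dynamodb")
cache = redis.Redis(host="cache.example.com", port=6379)  # hypothetical ElastiCache endpoint

def get_product(product_id: str) -> dict:
    """Cache-aside lookup: try ElastiCache first, fall back to PartiQL."""
    cache_key = f"product:{product_id}"
    cached = cache.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    # PartiQL goes straight to DynamoDB; DAX is not involved.
    result = dynamodb.execute_statement(
        Statement='SELECT * FROM "Products" WHERE product_id = ?',
        Parameters=[{"S": product_id}],
    )
    item = result["Items"][0] if result["Items"] else {}
    cache.setex(cache_key, 300, json.dumps(item))  # cache for 5 minutes
    return item
```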
I need to get data from DynamoDB into EventBridge. Can this happen without needing a Lambda function? Can EventBridge listen to DynamoDB changes as a rule?
No. There is currently no direct integration between DynamoDB Streams and EventBridge.
You need to consider the different use cases for these two services. EventBridge is for low-volume, high-importance events used for simpler systems integration, while DynamoDB Streams carries high-volume change events that are already recorded in the database; it is designed chiefly for use cases where you aggregate or filter the events for analysis.
There are a few other event-processing services in AWS, such as Kinesis and Amazon Managed Streaming for Apache Kafka, which are designed for scale in ways that EventBridge is not.
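If you do end up using a Lambda function as glue, the sketch below shows the general shape: an event source mapping on the table's stream triggers the function, which forwards each record to a custom event bus. The bus name and Source value here are hypothetical:

```python
import json

import boto3

events = boto3.client("events")

def handler(event, context):
    """Triggered by a DynamoDB Streams event source mapping;
    forwards each record to a custom EventBridge bus."""
    entries = [
        {
            "Source": "myapp.dynamodb",           # hypothetical source name
            "DetailType": record["eventName"],    # INSERT / MODIFY / REMOVE
            "Detail": json.dumps(record["dynamodb"], default=str),
            "EventBusName": "my-custom-bus",      # hypothetical bus
        }
        for record in event["Records"]
    ]
    # put_events accepts at most 10 entries per call.
    for i in range(0, len(entries), 10):
        events.put_events(Entries=entries[i:i + 10])
```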
Currently we are connecting to the AR System through the Oracle database for this purpose. I need to know: is there any alternative way to access or query the Remedy database effectively? Is there any built-in API we can utilise that would increase the efficiency of the work?
What could be used is the REST API, with which you can query the forms directly.
Please check the following URL:
REST API Doc
It will return a JSON object containing all the data.
In order to obtain access to all forms, you need to create a "service" user with a fixed license and permissions to the forms you would like to read via the API.
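As a rough sketch of the call flow in Python (the host, form name, and credentials are placeholders; double-check the endpoint paths against the REST API documentation for your Remedy version):

```python
import requests

BASE = "https://remedy.example.com"  # placeholder server

# 1. Log in with the "service" user to obtain a JWT token.
resp = requests.post(
    f"{BASE}/api/jwt/login",
    data={"username": "svc-reporting", "password": "secret"},  # placeholder credentials
)
resp.raise_for_status()
token = resp.text

# 2. Query entries on a form; the result is a JSON object.
headers = {"Authorization": f"AR-JWT {token}"}
resp = requests.get(
    f"{BASE}/api/arsys/v1/entry/HPD:Help Desk",   # example form name
    headers=headers,
    params={"q": "'Status' = \"Assigned\""},      # Remedy qualification syntax
)
resp.raise_for_status()
for entry in resp.json()["entries"]:
    print(entry["values"])

# 3. Release the token when done.
requests.post(f"{BASE}/api/jwt/logout", headers=headers)
```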
You can query the Oracle back-end directly, with a few caveats. It should only be for reading data, not writing or modifying data. Otherwise, you could break data integrity as well as bypass workflow that should be fired. Also, this direct access does not enforce any permissions, nor does it translate any of the data. For example, selection fields come back as a number instead of their string value, dates are in epoch format, etc.
There is a Remedy ODBC driver, which isn't being updated and doesn't support joins. However, you can open multiple connections with it and join the results manually. Plus, it does handle permissions and translations for you.
https://docs.bmc.com/docs/ars1911/odbc-database-access-introduction-896318914.html
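If you go the ODBC route from code, a minimal read-only sketch with pyodbc might look like the following; the DSN name, credentials, and form name are placeholders, and the exact identifier quoting rules depend on the driver:

```python
import pyodbc

# Placeholder DSN configured for the Remedy ODBC driver.
conn = pyodbc.connect("DSN=AR System ODBC Data Source;UID=svc-reporting;PWD=secret")
cursor = conn.cursor()

# The driver exposes forms as tables; permissions and field
# translations (selection labels, dates) are handled for you.
cursor.execute('SELECT "Incident Number", "Status" FROM "HPD:Help Desk"')
for row in cursor.fetchall():
    print(row)

conn.close()
```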
If you know in advance what joins you will be doing, you should set up join forms within Remedy. That way the joins are done efficiently in the database. Otherwise, you are stuck with either of the above solutions or with one of the APIs, which don't support ad-hoc joins.
I am evaluating databases for my new project, where the existing data is stored in a Cosmos DB SQL database.
For our use case a graph DB seems to be a good solution. My options are the Gremlin API or TigerGraph.
I've heard that the Gremlin API is built on top of a document database, so queries will be slower because graph queries are first converted to NoSQL queries. Is that a true statement? Any pointers here?
The Gremlin query language is part of Apache TinkerPop. TinkerPop does not dictate how a back-end store and query engine must be built. It ships with a reference in-memory graph written in Java that actually uses a simple HashMap to store data.
TinkerPop has been ported to many different back-end stores and storage models. It is not very common, however, for that to be a document store, and there is no need to convert Gremlin queries to SQL unless perhaps you port TinkerPop on top of a relational or similar store and implement it that way. Most graph DBs I have worked with that implement support for Apache TinkerPop use custom-built graph engines and have their own query optimizers that have nothing to do with SQL.
I should add that I am not familiar with how the CosmosDB Gremlin support is implemented. The main point is that TinkerPop does not dictate the type of store to be used or how the Gremlin support is implemented.
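One way to see that Gremlin is store-agnostic: the same gremlinpython traversal code runs unchanged against any TinkerPop-enabled endpoint, whatever storage engine sits behind it. A small sketch, with a placeholder endpoint:

```python
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Works unchanged against any TinkerPop-enabled endpoint:
# a local Gremlin Server, Neptune, a CosmosDB Gremlin endpoint, etc.
conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")  # placeholder endpoint
g = traversal().with_remote(conn)

# Count vertices, then find a person's friends-of-friends.
print(g.V().count().next())
print(g.V().has("person", "name", "alice").out("knows").out("knows").values("name").to_list())

conn.close()
```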
I have a DynamoDB table that contains key-value pairs that will be read by a number of applications. On startup, each application will read the entire table and cache it in-memory.
The problem I'm trying to solve is that of getting the applications to update their cache if one or more items in the DynamoDB table have been modified.
DynamoDB streams initially seemed to be the right approach to solving the problem. I have implemented the consumer using Kinesis Client Library (KCL) as recommended by AWS. While implementing it, however, I have encountered some problems that make me believe that I'm on the wrong track. Specifically:
When I create a new consumer using KCL, it creates a new DynamoDB table to do the housekeeping of leases and checkpoints, so that when the application is restarted, KCL knows which records have been consumed and which have not. This is not what I need for this problem. Any stream records created while the application is offline are irrelevant, since the entire table is read upon application startup.
Several instances of the same application are running at the same time, and each of them needs to be notified of table updates. To implement that in KCL, I need to assign a unique application name to each of them; otherwise they will share the lease table and only one of the applications will get notified. One lease table for each application instance doesn't seem right. Also, I would then need something to remove unused tables.
I also implemented it using the low-level API instead. That works fine when there's a single shard. My implementation doesn't handle re-sharding like KCL does, however, so it's too fragile. It seems wrong to have to implement re-sharding handling for the simple problem I'm trying to solve.
I'm beginning to consider other solutions like:
Implementing a Lambda function that gets triggered on updates to the table. The function sends a notification to an SNS topic. Consumers create SQS subscriptions on the topic and get notified that way. This solution has too many moving parts for my liking.
Making the applications periodically re-read the entire table and determine for themselves whether changes have been made. This solution feels a bit primitive, but seems to be the simplest.
All solutions that I have considered so far have quite significant drawbacks. What am I missing?
It depends on how your KCL consumer pushes to the dependent apps, but I believe the SQS path is the correct choice.
You can add a presumably infinite number of consumers without being throttled.
When you do add another dependent app, it won't require changing your KCL consumer to push to it; the new app will simply watch the SQS queue.
You gain the ability to monitor the queue when issues happen.
More moving parts to set up, but once you have the Streams -> SNS -> SQS pipe in place, it's basically bulletproof.
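On the consumer side, each application instance would own an SQS queue subscribed to the SNS topic and long-poll it for invalidation messages. A rough sketch, assuming a hypothetical queue URL and a message payload like {"key": ...}:

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-app-cache-queue"  # placeholder

def poll_for_invalidations(cache: dict) -> None:
    """Long-poll this instance's queue and evict changed keys from the in-memory cache."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])         # SNS envelope (without raw message delivery)
        change = json.loads(body["Message"])   # original notification payload
        cache.pop(change["key"], None)         # assumed payload shape: {"key": ...}
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```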
Just my 2¢.
Nowadays, an AWS AppSync GraphQL API with subscriptions may be the simplest approach to power this type of application, with the fewest moving parts.
Whenever one of your applications starts up, it connects to your AppSync GraphQL API using the Amplify framework or the AppSync SDK and subscribes to the updates it's interested in. Then, whenever an application updates information in the table via your GraphQL API, all your other applications will be notified of the change, along with the relevant changed data.
AppSync integrates well with DynamoDB out of the box, allowing you to generate DynamoDB tables with appropriate indexes alongside your GraphQL API, or to generate GraphQL from your existing DynamoDB tables if you so choose. Amplify can even generate an AppSync GraphQL API at a higher level, with associated DynamoDB tables, indexes, entity relationships, and more, such as Elasticsearch-backed search capabilities, using its GraphQL transformers.
I am trying to create an app that receives an SQLite database from a server for offline use with cloud synchronization. The server has a Postgres database with information from many clients.
1) Is it better to delete the SQLite database and create a new one from a query, or to try to synchronize and update the existing separate SQLite files (or is there a better solution)? The refreshes will happen a few times a day per client.
2) If it is the latter, could you give me any leads to resources on how I could do this?
I am pretty new to database applications so please excuse my ignorance and let me know if there is any way I could clarify.
There is no one size fits all approach here. You need to carefully consider exactly what needs to be done, what you are replicating, how much data is involved, and what your write models are, all before you build a solution. Along the way you have to decide how to handle write conflicts and more.
In general, the one thing I would say is that such synchronization works best with append-only write models (i.e. inserts only, no deletes, no updates), and one way to do it is to log the changes that need to be made and replicate those changes.
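As a sketch of that change-log idea on the SQLite side (the table names, the sync_log bookkeeping table, and the change payload shape are all assumptions for illustration):

```python
import sqlite3

def apply_changes(db_path: str, changes: list[dict]) -> None:
    """Apply server-logged changes to a local SQLite replica.
    Each change is assumed to look like:
    {"seq": 42, "table": "orders", "row": {...}}  (inserts only)."""
    conn = sqlite3.connect(db_path)
    try:
        # sync_log is an assumed local table tracking applied change sequence numbers.
        (last_seq,) = conn.execute(
            "SELECT COALESCE(MAX(seq), 0) FROM sync_log"
        ).fetchone()
        for change in changes:
            if change["seq"] <= last_seq:
                continue  # already applied
            row = change["row"]
            cols = ", ".join(row)
            marks = ", ".join("?" for _ in row)
            conn.execute(
                f'INSERT INTO "{change["table"]}" ({cols}) VALUES ({marks})',
                list(row.values()),
            )
            conn.execute("INSERT INTO sync_log (seq) VALUES (?)", (change["seq"],))
        conn.commit()
    finally:
        conn.close()
```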
However, master-master replication is difficult on the best of days and with the best of tools available. Jumping between databases with very different capabilities will introduce a number of additional problems. You are in for a big job.
Here's an open source product that claims to solve this for many database types including Postgres. I have no affiliation or commercial interest in this company.
https://github.com/sqlite-sync/SQLite-sync.com
http://sqlite-sync.com/
If you're able and willing to step outside relational databases to use an object store, you might want to have a look at CouchDB, and perhaps PouchDB, which use an MVCC-based replication protocol designed to support multi-master replication, including conflict resolution. Under the covers, PouchDB uses adapters for SQLite, IndexedDB, local storage, or a remote CouchDB instance to persist client-side data. It auto-selects the best client-side storage option for the given desktop or mobile browser. The SQLite engine can be either WebSQL or a Cordova SQLite plugin.
http://couchdb.apache.org/
https://pouchdb.com/