I have a use case where I need to query based on a 2i value and retrieve the full Riak objects instead of only the keys. Doing this with a MapReduce operation took quite a long time and a lot of computation.
Is there any other solution for this?
Since a 2i search returns a list of matching keys, you can then make multiple parallel requests to fetch the associated objects.
Riak is not like a traditional RDBMS where you try to minimize the number of fetch requests you make. I recommend benchmarking to find the number of parallel requests that maximizes your object-fetch throughput.
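As a rough illustration, here is a minimal sketch of that pattern using a fixed-size thread pool. The fetchObject helper is hypothetical and would wrap whatever fetch-by-key call your Riak client library exposes; tune poolSize with your benchmarks.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetch {

    // Hypothetical helper: wrap your Riak client's fetch-by-key call here,
    // e.g. client.execute(new FetchValue.Builder(location).build()) in the Java client.
    static byte[] fetchObject(String key) {
        throw new UnsupportedOperationException("wire up your Riak client here");
    }

    // Fetches the objects for all keys returned by the 2i query,
    // issuing up to poolSize requests in parallel.
    static List<byte[]> fetchAll(List<String> keys, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<byte[]>> futures = new ArrayList<>();
            for (String key : keys) {
                futures.add(pool.submit(() -> fetchObject(key)));
            }
            List<byte[]> objects = new ArrayList<>();
            for (Future<byte[]> future : futures) {
                objects.add(future.get());
            }
            return objects;
        } finally {
            pool.shutdown();
        }
    }
}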
Finally, please ensure that you are using a load balancer between your application and Riak so that these fetch requests are balanced among the nodes in your cluster.
For example, suppose I was building an airline booking system and all of my seats were individual documents in a Cosmos container, with a PartitionKey of FlightNumber_DepartureDateTime (e.g. UAT123_20220605T1100Z) and an id of SeatNumber (e.g. 12A).
A request comes in to allocate a single seat (any seat without a preference).
I want to be able to query the Cosmos container for seats where allocated: false and allocate the first one to the request by setting allocated: true, allocatedTo: ticketReference. But I need to do this in a thread-safe way so that no two requests get the same seat.
Does Cosmos DB (SQL API) have a standard pattern to solve this problem?
The solution I thought of was to query a document and then update it with a check on its ETag, so that if another thread got in first the update would fail. If it fails, I query another document and keep trying until I can successfully update one to claim the seat for this thread.
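Roughly what I have in mind, as a sketch only (this assumes the azure-cosmos v4 Java SDK and a seat document shaped like the Seat class below; the exact SDK calls are from memory and may need adjusting):

import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.CosmosException;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.annotation.JsonProperty;

public class SeatAllocator {

    // Hypothetical seat document: id = SeatNumber, pk = FlightNumber_DepartureDateTime.
    public static class Seat {
        public String id;
        public String pk;
        public boolean allocated;
        public String allocatedTo;
        @JsonProperty("_etag")
        public String etag;
    }

    static Seat allocate(CosmosContainer container, String flightKey, String ticketReference) {
        CosmosQueryRequestOptions queryOpts = new CosmosQueryRequestOptions();
        queryOpts.setPartitionKey(new PartitionKey(flightKey));    // stay inside one flight

        Iterable<Seat> freeSeats = container.queryItems(
                "SELECT * FROM c WHERE c.allocated = false", queryOpts, Seat.class);

        for (Seat seat : freeSeats) {
            seat.allocated = true;
            seat.allocatedTo = ticketReference;
            CosmosItemRequestOptions opts = new CosmosItemRequestOptions();
            opts.setIfMatchETag(seat.etag);                        // optimistic concurrency check
            try {
                container.replaceItem(seat, seat.id, new PartitionKey(flightKey), opts);
                return seat;                                       // this request claimed the seat
            } catch (CosmosException e) {
                if (e.getStatusCode() == 412) {                    // ETag mismatch: another request won
                    continue;                                      // try the next free seat
                }
                throw e;
            }
        }
        throw new IllegalStateException("no free seats left on " + flightKey);
    }
}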
Is there a better way?
You could achieve this by using transactions. Cosmos DB allows you to write stored procedures that are executed in an atomic transaction, basically serializing concurrent seat reservation operations for you within a logical partition.
Quote from "Benefits of using server-side programming" in the link above:
Atomic transactions: Azure Cosmos DB database operations that are performed within a single stored procedure or a trigger are atomic. This atomic functionality lets an application combine related operations into a single batch, so that either all of the operations succeed or none of them succeed.
Bear in mind, though, that transactions come with a cost: they limit the scalability of those operations. However, in your scenario, where data is partitioned per flight and those operations are very fast, this might be the preferable and most reliable option.
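For illustration, calling such a stored procedure from the Java SDK could look roughly like the sketch below. The procedure id (allocateSeat), its parameter, and its behaviour are assumptions: the procedure itself would be written in JavaScript, registered on the container, and would find a document with allocated = false, set allocated = true and allocatedTo, and return it, all within one transaction.

import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosStoredProcedureRequestOptions;
import com.azure.cosmos.models.CosmosStoredProcedureResponse;
import com.azure.cosmos.models.PartitionKey;
import java.util.Arrays;

public class SeatProcCaller {

    // Executes a hypothetical "allocateSeat" stored procedure. The transaction
    // is scoped to the flight's logical partition, so concurrent calls for the
    // same flight are serialized by Cosmos DB.
    static String allocateSeat(CosmosContainer container, String flightKey, String ticketReference) {
        CosmosStoredProcedureRequestOptions opts = new CosmosStoredProcedureRequestOptions();
        opts.setPartitionKey(new PartitionKey(flightKey));

        CosmosStoredProcedureResponse response = container.getScripts()
                .getStoredProcedure("allocateSeat")
                .execute(Arrays.<Object>asList(ticketReference), opts);

        return response.getResponseAsString();   // e.g. the allocated seat document as JSON
    }
}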
I have done something similar with Service Bus queues, essentially queueing the bookings to be saved. This lets you run the availability logic before you save the booking, guaranteeing no overbookings.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-queues-topics-subscriptions
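As a rough sketch of the producer side with the azure-messaging-servicebus Java SDK (the connection string, queue name, and message payload are placeholders; a separate consumer would drain the queue one booking at a time, run the availability check, and only then persist the booking):

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class BookingQueue {
    public static void main(String[] args) {
        // Placeholder connection string and queue name.
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString("<service-bus-connection-string>")
                .sender()
                .queueName("bookings")
                .buildClient();

        // Enqueue the booking request; the consumer decides whether it can be honoured.
        sender.sendMessage(new ServiceBusMessage(
                "{\"flight\":\"UAT123_20220605T1100Z\",\"ticket\":\"ABC123\"}"));
        sender.close();
    }
}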
Situation:
A web service with an API to read records from DynamoDB. It uses eventually consistent reads (the GetItem default mode).
An integration test consisting of two steps:
create test data in DynamoDB
call the service to verify that it is returning the expected result
I worry that this test is bound to be fragile due to eventual consistency of the data.
If I attempt to verify the data immediately after writing using GetItem with ConsistentRead=true, that only guarantees the data has been written to the majority of the DB copies, not all of them, so the service under test still has a chance to read from a non-updated copy in the next step.
Is there a way to ensure that the data has been written to all DynamoDB copies before proceeding?
The data usually reaches all storage replicas within a second. My suggestion is to wait (in Java terms, sleep) for a couple of seconds after inserting the data into the DynamoDB table before calling the web service; that should produce the desired result.
Eventually Consistent Reads (Default) – the eventual consistency option maximizes your read throughput. However, an eventually consistent read might not reflect the results of a recently completed write. Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data.
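To make the test step a little less arbitrary than a fixed sleep, the test could poll with a strongly consistent read until the item is visible and then allow a short grace period before calling the service under test. A minimal sketch with the AWS SDK for Java v1 (the table name, key attribute, and timings are placeholders):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import java.util.Collections;
import java.util.Map;

public class TestDataWaiter {

    // Polls with ConsistentRead=true until the freshly written item is visible,
    // then sleeps briefly so eventually consistent replicas can catch up.
    static void waitForItem(String tableName, String id) throws InterruptedException {
        AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();
        Map<String, AttributeValue> key =
                Collections.singletonMap("id", new AttributeValue().withS(id));

        for (int attempt = 0; attempt < 10; attempt++) {
            GetItemRequest request = new GetItemRequest()
                    .withTableName(tableName)
                    .withKey(key)
                    .withConsistentRead(true);
            if (dynamo.getItem(request).getItem() != null) {
                Thread.sleep(2000);   // grace period for eventually consistent readers
                return;
            }
            Thread.sleep(200);        // not visible yet, retry shortly
        }
        throw new IllegalStateException("test item never became visible: " + id);
    }
}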
In the Riak documentation, there are often examples showing how you could model your e-commerce datastore in a certain way. But here is what is written:
In a production Riak cluster being hit by lots and lots of concurrent writes, value conflicts are inevitable, and Riak Data Types are not perfect, particularly in that they do not guarantee strong consistency and in that you cannot specify the rules yourself.
From http://docs.basho.com/riak/latest/theory/concepts/crdts/#Riak-Data-Types-Under-the-Hood, last paragraph.
So, is it safe enough to use Riak as the primary datastore in an e-commerce app, or is it better to use another database with stronger consistency?
Riak out of the box
In my opinion, out of the box Riak is not safe enough to use as the primary datastore in an e-commerce app. This is because of the eventually consistent nature of Riak (and of a lot of the NoSQL solutions).
According to the CAP theorem, distributed datastores (Riak being one of them) can only guarantee at most two of:
Consistency (all nodes see the same data at the same time)
Availability (a guarantee that every request receives a response about whether it succeeded or failed)
Partition tolerance (the system continues to operate despite arbitrary partitioning due to network failures)
Riak specifically errs on the side of availability and partition tolerance, accepting eventual consistency of the data held in its datastore.
What Riak can do for an e-commerce app
Used out of the box, Riak would be a good store for the content about the items being sold in your e-commerce app (content that is written once and read a lot is a great use case for Riak). However, maintaining:
the count of how many items are left
the money in a user's account
needs to be handled carefully in a distributed datastore.
Implementing consistency in an eventually consistent datastore
There are several methods you can use; they include:
Implementing a serialization method when writing updates to values that you need to be consistent (i.e. going through a single, controlled service that guarantees it will only update a single item sequentially); this would need to be done outside of Riak, in your API layer
Changing the replication properties of your consistent buckets so that you can 'guarantee' you never retrieve out-of-date data (see the sketch below)
At the bucket level, you can choose how many copies of data you want to store in your cluster (N, or n_val), how many copies you wish to read from at one time (R, or r), and how many copies must be written to be considered a success (W, or w).
The above method is similar to using the strong consistency model available in the latest versions of Riak.
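With the Basho Riak Java client 2.x, tightening those properties on a bucket might look roughly like this (the bucket name is made up and the builder method names are from memory of that client, so verify them against the version you use):

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.buckets.StoreBucketProperties;
import com.basho.riak.client.core.query.Namespace;

public class ConsistentBucketSetup {
    public static void main(String[] args) throws Exception {
        RiakClient client = RiakClient.newClient("127.0.0.1");

        // Hypothetical "inventory" bucket: keep 3 copies and require all of them
        // for both reads and writes (R = W = N), trading some availability for
        // never serving a stale stock count from a single lagging replica.
        Namespace inventory = new Namespace("inventory");
        StoreBucketProperties props = new StoreBucketProperties.Builder(inventory)
                .withNVal(3)
                .withR(3)
                .withW(3)
                .build();
        client.execute(props);
        client.shutdown();
    }
}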
Important note: in all of these datastore systems (distributed or not) you will in general:
Read the current data
Make a decision based on the current value
Change the data (decrement the Item count)
If all three of the above actions cannot be done in an atomic way (either by locking, or by failing the third step when the value has been changed by something else), an e-commerce app is open to abuse. This issue exists in traditional SQL storage solutions too (which is why you have SQL transactions).
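To make that pattern concrete, here is an in-process analogy of the check-and-set loop, with a plain Java AtomicInteger standing in for whatever conditional-update primitive your datastore offers:

import java.util.concurrent.atomic.AtomicInteger;

public class StockDecrement {

    // In-process stand-in for the item count held in the datastore.
    static final AtomicInteger itemsLeft = new AtomicInteger(5);

    // Returns true if this caller successfully claimed one item.
    static boolean claimOne() {
        while (true) {
            int current = itemsLeft.get();                    // 1. read the current data
            if (current <= 0) {
                return false;                                 // 2. decide: nothing left to sell
            }
            // 3. change the data, but only if nobody changed it in the meantime
            if (itemsLeft.compareAndSet(current, current - 1)) {
                return true;
            }
            // compareAndSet failed: another buyer got in first, so re-read and retry
        }
    }
}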
We are currently testing out Riak. We have a huge bucket in Riak with millions of keys, and I need to query all the keys and save them to a file.
We are using the Java API.
Is there any way to get the results of the query back in pages?
Listing millions of keys in Riak is not recommended in production environments as it is a very expensive operation. If you still need to do it, it is best to use the list keys function as this allows Riak to stream results back to the client and will work for any backend.
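A sketch of that approach with the Riak Java client 2.x (the host and bucket name are placeholders; newer client versions also offer a streaming execute variant that avoids buffering the whole response in memory):

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.kv.ListKeys;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;
import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DumpKeys {
    public static void main(String[] args) throws Exception {
        RiakClient client = RiakClient.newClient("127.0.0.1");   // placeholder host
        try (BufferedWriter out = Files.newBufferedWriter(Paths.get("keys.txt"))) {
            ListKeys listKeys = new ListKeys.Builder(new Namespace("huge-bucket")).build();
            for (Location location : client.execute(listKeys)) {
                out.write(location.getKeyAsString());
                out.newLine();
            }
        } finally {
            client.shutdown();
        }
    }
}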
While it is possible to perform paging for secondary index queries if you are using LevelDB or the memory backend in version 1.4 onwards, this requires sorting on the server side and is therefore not recommended for such large result sets.
I would like to get your opinion regarding a design implementation for data sharing.
I am working on an embedded Linux device (MIPS, 200 MHz), and I want some sort of data sharing between multiple processes, which can each read or write multiple parameters at once.
This data holds ~200 string parameters which are updated every second.
A process may access the data around 10 times per second.
I would very much like to try and make the design efficient (CPU / Mem).
This data is not required to be persistent and will be recreated every reboot.
Currently, I am considering two options:
Using shared-memory IPC (SHM) + a semaphore (locking the whole SHM region).
Using a SQLite in-memory DB.
For either option, I will supply a C interface library which will perform all the logic of DB operation.
For SHM, this means locking/unlocking the semaphore and accessing the parameters, which can be treated as an indexed array.
For SQLite, my library will be a wrapper around the SQLite interface library, so the processes will not have to know SQL syntax (some parsing will be needed for queries and replies).
I believe that shared memory is more efficient:
No need to use and parse SQL, and it is accessed as an array.
That said, there are some pros to using SQLite as well:
Already working and debugged (DB level).
Adds flexibility.
Used widely in many embedded systems.
Getting to the point: performance-wise, I have no experience with SQLite, so I would appreciate it if you could share your opinions and experience.
Thanks
SQLite's in-memory databases cannot be shared between processes, but you could put the DB file into tmpfs.
However, SQLite does not do any synchronization between processes. It does lock the DB file to prevent update conflicts, but if one process finds the file already locked, it just waits for a random amount of time.
For efficient communication between processes, you need to use a mechanism like SHM/semaphores or pipes.