I've created a table in AWS DynamoDB with only a hash key. It currently holds over 20 million items, and every day a few thousand new items are inserted.
I now want to export this data from DynamoDB to my local disk every day, so I wrote a small program that uses Scan operations to save it. The total size of the data is not huge, about 10 GB, but the scan takes nearly 5 hours each day. Of course, to keep expenses down, I didn't provision much read throughput.
My question is: is there a way to scan this data incrementally, so that I only need to copy the newly inserted items rather than the entire table? I tried withExclusiveStartKey, but it couldn't find the newly inserted data; this might be because LastEvaluatedKey only describes the last key of a specific scan segment.
You can create an LSI on the table and then query the table through that index. "ScanIndexForward" is true by default, which returns results in ascending order; if you want them in descending order, set "ScanIndexForward" => false.
E.g.:
$response = $this->dbClient->query(array(
    "TableName" => $this->tableName,
    "IndexName" => "TableNameIndex",
    "KeyConditions" => array(
        "Id" => array(
            "ComparisonOperator" => ComparisonOperator::EQ,
            "AttributeValueList" => array(
                array(Type::NUMBER => $this->getId())
            )
        )
    ),
    "ScanIndexForward" => false,
));
This will give you the results in descending order.
If you only want the top 50 records, you can also set a limit:
"Limit" => 50,
Hope this helps.
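If the goal is specifically an incremental daily export, one common pattern (a sketch, not tested against your table) is to add an insert date attribute to every new item and create a global secondary index on it; you can then Query just one day's worth of items instead of scanning all 20 million. The insert_date attribute and the insert_date-index name below are assumptions, not part of your existing schema:

```php
<?php
// Sketch: build the Query parameters for an incremental export, assuming each
// item stores an "insert_date" string (e.g. "2016-01-15") and that a GSI named
// "insert_date-index" exists with insert_date as its hash key. Both names are
// assumptions for illustration.
function buildIncrementalQuery($tableName, $date, $exclusiveStartKey = null)
{
    $params = array(
        "TableName" => $tableName,
        "IndexName" => "insert_date-index",
        "KeyConditions" => array(
            "insert_date" => array(
                "ComparisonOperator" => "EQ",
                "AttributeValueList" => array(
                    array("S" => $date)
                )
            )
        ),
    );
    // Resume pagination from the previous page's LastEvaluatedKey, if any.
    if ($exclusiveStartKey !== null) {
        $params["ExclusiveStartKey"] = $exclusiveStartKey;
    }
    return $params;
}
```

Call $dbClient->query($params) in a loop, feeding each response's LastEvaluatedKey back in as $exclusiveStartKey until it is absent; that way you read only the few thousand new items instead of the whole table.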
I have an entity and repository to fetch details:
$subscriptions = $userSubscriptionRepo->findBy(
    ['isDeleted' => false, 'disabled' => false],
    null,
    $this->paginationLimit, // 5
    0
);
On initial load, this retrieves the first 5 items in the DB. But in our product new subscriptions are created all the time, and my hope is to lazy-load further subscriptions with another call.
The problem is that if more items are added in the meantime, the next limit/offset call will retrieve five results shifted by the number added (could be 2 to 5). So the next five could repeat items I've already shown, or skip well into the list, which won't be an accurate representation. Is there a condition within findBy that allows something similar to "start from ID", so that the limit and offset start from there?
Example:
$subscriptions = $userSubscriptionRepo->findBy(
    ['isDeleted' => false, 'disabled' => false],
    null,
    $this->paginationLimit, // 5
    0,
    ['startFromID' => '5'] // so that no matter how many are added, it starts from this ID
);
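findBy only supports equality (and IN) comparisons, so there is no built-in "start from ID" option. The usual fix is keyset (seek) pagination: remember the last id you returned and ask only for rows with a larger id. Below is a runnable plain-PHP sketch of the idea over an in-memory array; with Doctrine itself you would express the same condition with Criteria::expr()->gt('id', $lastId) passed to $repo->matching(), or with a QueryBuilder:

```php
<?php
// Sketch of keyset ("seek") pagination: instead of an offset, remember the
// last id you handed out and fetch only rows with a larger id. Shown over a
// plain array so it runs standalone; the data shape mirrors the question's
// subscription entity (id, isDeleted, disabled).
function nextPage(array $subscriptions, $lastId, $limit)
{
    // Keep only active rows with an id greater than the last one seen.
    $remaining = array_filter($subscriptions, function ($s) use ($lastId) {
        return !$s['isDeleted'] && !$s['disabled'] && $s['id'] > $lastId;
    });
    // Stable ordering by id, so newly inserted rows cannot shift the page.
    usort($remaining, function ($a, $b) { return $a['id'] - $b['id']; });
    return array_slice($remaining, 0, $limit);
}
```

Because the page boundary is the id itself rather than an offset, rows inserted after the first call can no longer shift or duplicate the next page.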
I have a table (User) which is associated with many tables. When saving data, it gets saved in all associated tables as well. But in some scenarios I need to save only the base table (User), not the associated tables.
In CakePHP 2 we had the option callback => false, but how can we achieve this in CakePHP 3?
You may specify the associated tables into which you want to save (cf: CakePHP ORM Documentation).
You could then do:
$this->Users->save($user, ['associated' => false]);
to disable saving into associated tables. (I have not tested this as I'm at work; I will edit my answer if it does not work for me!)
The following code worked for me:
$entity = $this->Users->newEntity($this->request->data, ['ignoreCallbacks' => true, 'associated' => []]);
$result = $this->Users->save($entity);
Does votingapi_set_votes handle both voting and unvoting? I will try to use the same logic as in Drupal Creating Votes in Voting API Through Code to create a vote. What I am asking is how to handle voting and unvoting.
Generally you add votes using votingapi_set_votes and delete votes using votingapi_delete_votes.
For both of these functions you need base criteria, something like this:
$criteria = array(
    'entity_type' => 'node',
    'entity_id' => $node->nid,
    'uid' => $user->uid,
    'value_type' => 'points',
    'tag' => 'vote',
);
For setting a vote you need its value; the $votes array usually differs from the criteria only by the value field:
$votes = $copy_of_criteria;
$votes['value'] = 666;
Then votingapi_set_votes($votes, $criteria) will delete all votes matching $criteria and then add the new votes specified by $votes. This function also takes care of recalculating the vote cache (i.e. the aggregated values).
For deleting votes ("unvoting") you first need to select the required votes and then pass them into the votingapi_delete_votes function:
$votes = votingapi_select_votes($criteria);
votingapi_delete_votes($votes);
This function does not recalculate the voting cache, so you need to call votingapi_recalculate_results('node', $node->nid) yourself.
I am trying to search with the params below, and I am wondering why some of them cause this exception to be thrown.
Only a few params fail; all the others work.
?q=220v+0+ph => Not working
?q=220v+1+ph => Not working
?q=220v+2+ph => Not working
?q=220v+3+ph => Not working
?q=220v+4+ph => Working
?q=220v+5+ph => Working
?q=220v+6+ph => Working
?q=220v+7+ph => Working
?q=220v+8+ph => Working
?q=220v+9+ph => Working
I am varying the middle character; the search fails only for 0, 1, 2 and 3.
Query: +(title:480v* content:480v title:3* content:3 title:ph* content:ph)
One or more of your wildcard queries is generating too many term matches. Wildcard queries are rewritten by enumerating all of the matching terms and creating a primitive query for each, combined into a BooleanQuery.
For instance, the query title:foo*, could be rewritten to title:foobar title:food title:foolish title:footpad, in an index containing those terms.
By default, a BooleanQuery allows a maximum of 1024 clauses. If the index contains more than 1024 distinct terms matching title:0*, for instance, that is likely your problem; terms starting with 0, 1, 2 and 3 (part numbers, quantities, years) are probably far more common in your index than terms starting with 4 through 9, which would explain the pattern you see. You can raise the limit with BooleanQuery.setMaxClauseCount() (details vary by Lucene version), but very broad wildcards will still be slow.
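If this index is running under Solr (an assumption; the question does not say), the clause limit is configured in solrconfig.xml rather than in code. A sketch of raising it, where 4096 is just an example value; the trade-off is slower, more memory-hungry wildcard queries:

```xml
<!-- solrconfig.xml: raise the BooleanQuery clause limit (default 1024) -->
<query>
  <maxBooleanClauses>4096</maxBooleanClauses>
</query>
```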
I'm using WordPress, specifically WooCommerce, and creating a new plugin to allow the user to store multiple shipping addresses. I'm currently storing these new shipping addresses as a serialized array in the user meta table. I need a way to store each of them with some sort of ID.
What's the best way to do this: give each address a key with a unique number? Try to increment the last highest ID? I'm not sure what to do.
Why not just keep the serialized array but extend it to be multidimensional?
array(
    0 => array('city' => 'Dallas', 'state' => 'TX'),
    1 => array('city' => 'Madison', 'state' => 'WI')
);
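If you keep the serialized array, note that positional keys (0, 1, ...) become fragile once addresses can be deleted, since re-indexing renumbers them. A sketch of one alternative, using uniqid() to mint a stable string key per address; the function names here are made up for illustration, and in WordPress you would persist the array with update_user_meta() and read it back with get_user_meta() (both serialize and unserialize arrays for you, so no manual serialization is needed):

```php
<?php
// Sketch: give each stored address a stable string ID instead of a numeric
// position, so deleting one address never renumbers the others. The array is
// kept in memory here so the sketch runs standalone; in the plugin it would
// be the value stored in user meta.
function addAddress(array $addresses, array $address)
{
    $id = uniqid('addr_', true);    // unique key, e.g. "addr_5f3c9..."
    $addresses[$id] = $address;
    return array($addresses, $id);  // hand the ID back to the caller
}

function removeAddress(array $addresses, $id)
{
    unset($addresses[$id]);         // other addresses keep their IDs
    return $addresses;
}
```

An address can then be referenced, edited, or removed by its ID without disturbing the others.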