I am writing some fixed-length records to a QUEUE database in Berkeley DB, and I get back the record number after each put. So, for example, if I put 4 messages on the queue I get back 1, 2, 3, 4.
Now I would like to retrieve a message from the queue based on its key.
So if I try:
db_recno_t keyval;
DBT key, data;
memset(&key, 0, sizeof(DBT));
memset(&data, 0, sizeof(DBT));
keyval = 2;
key.data = &keyval;
key.ulen = sizeof(keyval);
ret = q->get(q, NULL, &key, &data, DB_CONSUME);
printf("Key peek = %i\n", keyval);
printf("Data peek = %s\n", data.data);
I keep getting back the first record in the queue, not the one I specify with the key (in this case 2).
I know the keys are 1, 2, 3, 4 on the queue, so I am wondering what stupid thing I am doing here?
Thanks for the help, much appreciated ;-)
Lynton
DB_CONSUME always returns the record at the head of the queue and deletes it; with that flag the key is an output parameter, so the key value you pass in is ignored. To read a specific record number from a queue without consuming it, call get() with a flags value of 0 (and set key.size = sizeof(keyval)). If you need general random access, try some other database format than DB_QUEUE, such as DB_BTREE or DB_RECNO.
For the sake of argument, assume that I have a very simple database:
CREATE TABLE test(idx integer primary key autoincrement, count integer);
This has one row. The database is accessed by a CGI script, which is called by Apache. The script reads the current value of count, increments it, and writes it back. I can run the script as
curl http://localhost/cgi-bin/test
and it tells me what the new value of count is. The script is actually C++; the basic stripped-down code looks like this:
// 'callback' sets 'count' to the current value of count
sqlite3_exec(con, "select count from test where idx=1", callback, &count, 0);
++count;
command << "update test set count=" << count << " where idx=1";
sqlite3_exec(con, command.str().c_str(), 0, 0, 0);
If I write a bash script that runs 20 instances of curl in the background, then I get lots of messages that the database is locked, and the counter is only incremented to 2 or 3, instead of 20. Ok, that's not very surprising, but how do I fix this?
After some experimenting, I've put both sqlite3_exec statements inside an exclusive transaction:
while (true) {
    rc = sqlite3_exec(con, "begin exclusive transaction", 0, 0, 0);
    if (rc != SQLITE_BUSY)
        break;
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
if (rc != SQLITE_OK)
    error();
...the select and update code shown above, followed by:
sqlite3_exec(con, "end transaction", 0, 0, 0);
This appears to be rock-solid, but I can't make much sense of the relevant bits of the SQLite docs, and I'm not convinced. Is there anything else I need to think about? Note that I don't have any rollbacks or any other sqlite3 calls apart from sqlite3_open_v2, sqlite3_errmsg, and sqlite3_close; there is no WAL, and I only test for SQLITE_BUSY. For testing, I run the bash script below with $1 set to 1000 (i.e. 1000 curl instances all running the CGI code). This completes in 10 or 11 seconds, and every time I run it it shows the final value of count as 1000, so it appears to be working.
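For comparison, the same retry-around-an-exclusive-transaction pattern can be sketched with Python's built-in sqlite3 module (increment is a hypothetical helper name; the table layout matches the one above):

```python
import sqlite3
import time

def increment(db_path="test.db"):
    """Read, increment and write back 'count' inside an exclusive transaction."""
    con = sqlite3.connect(db_path, isolation_level=None)  # autocommit; we manage transactions
    try:
        # Keep retrying until the exclusive lock is acquired, mirroring the C++ loop.
        while True:
            try:
                con.execute("begin exclusive transaction")
                break
            except sqlite3.OperationalError:  # the Python-level face of SQLITE_BUSY
                time.sleep(0.1)
        count = con.execute("select count from test where idx=1").fetchone()[0]
        con.execute("update test set count=? where idx=1", (count + 1,))
        con.execute("end transaction")
        return count + 1
    finally:
        con.close()
```

Note that SQLite can also do the waiting for you: sqlite3_busy_timeout() (or PRAGMA busy_timeout) makes a busy BEGIN EXCLUSIVE retry internally for up to the given number of milliseconds, which removes the need for a hand-rolled sleep loop.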
Test script:
#!/bin/bash
sqlite3 /var/www/cgi-bin/test.db <<EOF
update test set count=0 where idx=1;
EOF
for ((c=0; c<$1; c++ ))
do
curl http://localhost/cgi-bin/test > /dev/null 2>&1 &
done
wait
sqlite3 /var/www/cgi-bin/test.db <<EOF
select count from test where idx=1;
EOF
Using:
- IronPython
- AutoDesk
- Revit (PyRevit)
- Revit API
- SQLite3
My code is as follows:
import sqlite3

conn = None
try:
    conn = sqlite3.connect('SQLite_Python.db')
    c = conn.cursor()
    print("connected")
    Insert_Volume = """INSERT INTO Column_Coordinates
                       (x, y, z)
                       VALUES
                       (1, 2, 3)"""
    c.execute(Insert_Volume)
    conn.commit()
    print("Volume values inserted", c.rowcount)
    c.close()
except sqlite3.Error as error:
    print("Failed to insert data into sqlite table", error)
finally:
    if conn:
        conn.close()
        print("The SQLite connection is closed")
This code used to work within PyRevit, but now does not, with the following error:
Exception : System.IO.IOException: Could not add reference to assembly IronPython.SQLite
Please advise, this is one of the early steps of a large project and therefore is delaying my work quite a bit.
I look forward to your reply.
When I tried to use the code below to retrieve the state by linearId, I got 2 records returned: one is the consumed one, the other is the unconsumed one. The initial linearId was passed in from the web API.
val linearId: UniqueIdentifier = UniqueIdentifier(null, UUID.fromString(legalContractState.legalContract.linearId))
val linearIds = listOf(linearId)
val linearStateCriteria = QueryCriteria.LinearStateQueryCriteria(linearId = listOf(linearIds.first(), linearIds.last()))
val states = serviceHub.vaultQueryService.queryBy(LegalContractState::class.java, linearStateCriteria).states
val inputState: StateAndRef<LegalContractState> = serviceHub.vaultQueryService.queryBy(LegalContractState::class.java, linearStateCriteria).states.single()
But the sample code on the vault API page says this will return an unconsumed state for a given linearId. I also checked the data in the H2 database VAULT_STATES table: there are 2 records, one with a CONSUMED_TIMESTAMP and a STATE_STATUS of 1, the other with a null CONSUMED_TIMESTAMP and a STATE_STATUS of 0. This is an unshared state, meaning it is stored only in my database, and I executed one update on it, so I would indeed expect one consumed state and one new output state in the db. So now I am not sure what's wrong here.
Query for unconsumed linear states for given linear ids:
val linearIds = issuedStates.states.map { it.state.data.linearId }.toList()
val criteria = LinearStateQueryCriteria(linearId = listOf(linearIds.first(), linearIds.last()))
val results = vaultQuerySvc.queryBy<LinearState>(criteria)
This is a bug that will be fixed in release M14. See https://github.com/corda/corda/issues/949.
According to this blog post, Firebase array keys are created using a timestamp:
It does this by assigning a permanent, unique id based on the current timestamp (offset to match server time).
Is there a way to recover this timestamp for use later, given the key?
As I said in my comment, you should not rely on decoding the timestamp from the generated id. Instead of that, you should simply store it in a property in your Firebase.
That said, it turns out to be fairly easy to get the timestamp back:
// DO NOT USE THIS CODE IN PRODUCTION AS IT DEPENDS ON AN INTERNAL
// IMPLEMENTATION DETAIL OF FIREBASE
var PUSH_CHARS = "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz";

function decode(id) {
    id = id.substring(0, 8);
    var timestamp = 0;
    for (var i = 0; i < id.length; i++) {
        var c = id.charAt(i);
        timestamp = timestamp * 64 + PUSH_CHARS.indexOf(c);
    }
    return timestamp;
}

var key = prompt("Enter Firebase push ID");
if (key) {
    var timestamp = decode(key);
    console.log(timestamp + "\n" + new Date(timestamp));
    alert(timestamp + "\n" + new Date(timestamp));
}
I'll repeat my comment, just in case somebody thinks it is a good idea to use this code for anything else than as an exercise in reverse engineering:
Even if you know how to retrieve the timestamp from the key, it would be a bad idea to do this in production code. The timestamp is used to generate a unique, chronologically ordered sequence. If somebody at Firebase figures out a more efficient way (whichever subjective definition of efficiency they happen to choose) to accomplish the same goal, they might change the algorithm for push. If your code needs a timestamp, you should add the timestamp to your data; not depend on it being part of your key.
Update
Firebase documented the algorithm behind Firebase push IDs. But the above advice remains: don't use this as an alternative to storing the date.
Here's a version of Frank's code rewritten in Swift (4.2 at the time of writing).
Just to be clear, my use case for this was to patch my old models that had no timestamps (createdAt, updatedAt). I could have just thrown random dates in them to save myself some headaches, but those wouldn't have been relevant to the models. I knew there is an element of time baked into these auto-IDs, based on what I've read in other articles.
let PUSH_CHARS = "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"

func decode(autoId: String) -> TimeInterval {
    let prefix = autoId.prefix(8)
    var timestamp = 0
    for c in prefix {
        // Index of the character in the 64-character alphabet.
        let index = PUSH_CHARS.distance(from: PUSH_CHARS.startIndex,
                                        to: PUSH_CHARS.firstIndex(of: c)!)
        timestamp = (timestamp * 64) + index
    }
    return TimeInterval(timestamp)
}
Grab the Playground-ready code here: https://gist.github.com/mkval/501c03cbb66cef12728ed1a19f8713f7.
And in Python:
PUSH_CHARS = "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"

def get_timestamp_from_id(id):
    timestr = id[0:8]
    timestamp = 0
    for ch in timestr:
        timestamp = timestamp * 64 + PUSH_CHARS.index(ch)
    return timestamp / 1000
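As a sanity check on the decoders above, here is a hypothetical encoder that builds the 8-character timestamp prefix the same way (per the algorithm Firebase documented), so a round trip should return the original millisecond value; encode_timestamp and decode_timestamp are illustration names, not Firebase APIs:

```python
PUSH_CHARS = "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"

def encode_timestamp(ms):
    """Encode a millisecond timestamp as the 8-character push ID prefix."""
    chars = []
    for _ in range(8):
        chars.append(PUSH_CHARS[ms % 64])
        ms //= 64
    return "".join(reversed(chars))

def decode_timestamp(push_id):
    """Recover the millisecond timestamp from the first 8 characters."""
    ms = 0
    for ch in push_id[:8]:
        ms = ms * 64 + PUSH_CHARS.index(ch)
    return ms

# The 12 random characters after the prefix do not affect the timestamp.
ts = 1500000000000
assert decode_timestamp(encode_timestamp(ts) + "AAAAAAAAAAAA") == ts
```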
I have a QMap data structure, and I want to insert QVariant values into it. The data is inserted based on a priority, for example priority 1, 2, etc.
This priority is the key in the QMap. However, I can have duplicate keys, i.e. the same priority: priority 1 and 1 can map to different QVariants. To allow this I am using insertMulti rather than insert. The difficulty is that the last insertMulti with a given key ends up in front of the previously inserted value. How can I reverse that?
QMap<int, QVariant> grp;
grp.insertMulti(0, "HELLO");
grp.insertMulti(0, "Hi");
On reading the values, it first returns "Hi". However, I want it to return "HELLO". How can I do so?
Please don't give answers using other data structures. This is a snippet of a very complex problem.
How the values are stored internally in the map is not the problem, but rather how to retrieve them in the required 'priority' order.
As you've stated that the "data is inserted based on the priority" and you want the values retrieved in the same order, you can use the QMap::values(const Key &key) const function, for which the docs state:
Returns a list containing all the values associated with key key, from the most recently inserted to the least recently inserted one.
I would do it in the following way:
QMap<int, QVariant> grp;
grp.insertMulti(0, "HELLO");
grp.insertMulti(0, "Hi");
QList<QVariant> vl = grp.values(0);
QString firstInserted = vl.last().toString(); // Returns the 'HELLO'.
Problem: if multiple values are stored under the same key, the most recently added value for that key takes precedence. As per the documentation: "Returns a list containing all the values associated with key key, from the most recently inserted to the least recently inserted one."
Solution: to get them back into the original order, traverse the QMap and insert each pair into a second QMap. The values associated with the same key are then reversed again, based on the same rule that the most recently inserted value sits at the front for a given key.
QMap<int, QVariant> grp;
grp.insertMulti(0, "HELLO");
grp.insertMulti(0, "Hi");
Solution:
QMap<int, QVariant> pgrp;
QMap<int, QVariant>::const_iterator pGroupServiceIterator = grp.constBegin();
while (pGroupServiceIterator != grp.constEnd())
{
    pgrp.insertMulti(pGroupServiceIterator.key(), pGroupServiceIterator.value());
    ++pGroupServiceIterator;
}
Now, when pgrp is traversed and printed, you will get "HELLO" first rather than "Hi".
QMultiMap<int, QVariant> grp;
grp.insertMulti(0, "HELLO");
QMultiMap<int, QVariant> map {{ 0, "HI" }};
grp = map.unite(grp);
QList<QVariant> vl = grp.values(0);
QString firstInserted = vl.last().toString();