Is emplace always more efficient than insert when adding an element into a map?

My thinking is that emplace is always a better option than insert, because in some cases emplace can save a constructor call, and in the other cases it can be called in the same form as insert.
But now I have a map into which I add an element using insert with an initializer list:
map<string, map<int,int>> m1;
if (auto it = m1.find("c1"); it == m1.end()) {
    m1.insert({ "c1", {{10, 1}} });
}
But with emplace I cannot think of any form that is as concise as the insert call and still gets the benefits that emplace provides. Something like the line below? It is ill-formed and fails to compile.
auto [it, inserted] = m1.try_emplace("c1", 10, 1);
So how do I emplace in my case? And is it true that emplace is always better than insert?
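For what it's worth, one form that does compile is to hand try_emplace something the inner map<int,int> can be constructed from. A minimal sketch (the "c2" entry and its values are purely illustrative):

#include <map>
#include <string>
#include <utility>
using namespace std;

int main() {
    map<string, map<int,int>> m1;

    // try_emplace forwards its trailing arguments to the mapped_type
    // (map<int,int>) constructor, so an explicit initializer_list works:
    auto [it, inserted] = m1.try_emplace(
        "c1", initializer_list<pair<const int, int>>{{10, 1}});

    // Alternative: construct the inner map as a temporary.
    m1.try_emplace("c2", map<int,int>{{20, 2}});
}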

Related

DynamoDB Java SDK query to match items in a list

I'm trying to use something like SQL's IN clause in DynamoDB. I tried using withFilterExpression but I'm not sure how to do it. I looked at similar questions, but they were too old. Is there a better method to do this? This is the segment of code I have. I have used a static List as an example, but it is actually dynamic.
def getQuestionItems(conceptCode: String) = {
  val qIds = List("1", "2", "3")
  val querySpec = new QuerySpec()
    .withKeyConditionExpression("concept_id = :c_id")
    .withFilterExpression("question_id in :qIds") // obviously wrong
    .withValueMap(new ValueMap()
      .withString(":c_id", conceptCode));
  questionsTable.query(querySpec);
}
I need to pass qID list to fetch results similar to IN clause in SQL Query.
Please refer to this answer. Basically you need to build the placeholder list and value map dynamically:
.withFilterExpression("question_id in (:qId1, :qId2, ... , :qIdN)")
.withValueMap(new ValueMap()
.withString(":qId1", ..) // just do this for each element in the list in a loop programmatically
....
.withString(":qIdN", ..)
);
Mind that there is a restriction on the maximum number of operands allowed in an IN expression.
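As a sketch of doing that programmatically, adapted to the Scala snippet from the question (the :qIdN placeholder names are just illustrative, and qIds is assumed to be the dynamic list):

def getQuestionItems(conceptCode: String) = {
  val qIds = List("1", "2", "3") // actually dynamic

  // Collect all values in one ValueMap, starting with the key condition value.
  val valueMap = new ValueMap().withString(":c_id", conceptCode)

  // One named placeholder per element, e.g. ":qId0, :qId1, :qId2",
  // registering each value in the ValueMap as we go.
  val placeholders = qIds.zipWithIndex.map { case (id, i) =>
    val name = s":qId$i"
    valueMap.withString(name, id)
    name
  }

  val querySpec = new QuerySpec()
    .withKeyConditionExpression("concept_id = :c_id")
    .withFilterExpression(s"question_id in (${placeholders.mkString(", ")})")
    .withValueMap(valueMap)

  questionsTable.query(querySpec)
}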

Find ONLY soft deleted rows with Sequelize

I'm running a database with Sequelize and SQLite, and I use soft deletes to basically archive the data.
I'm aware that with .findAll({ paranoid: false }) I can find all rows, including the soft-deleted ones. However, I would like to find ONLY the soft-deleted ones.
Is there any way to achieve this? Or is there perhaps a way to do "set operations" on two result sets, like finding the relative complement of one in the other?
For this, you can add the following condition to the where clause:
deletedAt: { [Op.not]: null }
For example like this:
const projects = await db.Project.findAndCountAll({
  paranoid: false,
  order: [['createdAt', 'DESC']],
  where: { employer_id: null, deletedAt: { [Op.not]: null } },
  limit: parseInt(size),
  offset: (page - 1) * parseInt(size),
});
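Note that Op here is Sequelize's operators object; if it is not already in scope, it is typically imported like this:
const { Op } = require('sequelize');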

Create a perl hash from a db select

Having some trouble understanding how to create a Perl hash from a DB select statement.
$sth = $dbh->prepare(qq{select authorid, titleid, title, pubyear from books});
$sth->execute() or die DBI->errstr;
while (@records = $sth->fetchrow_array()) {
    %Books = (%Books, AuthorID => $records[0]);
    %Books = (%Books, TitleID  => $records[1]);
    %Books = (%Books, Title    => $records[2]);
    %Books = (%Books, PubYear  => $records[3]);
    print qq{$records[0]\n};
    print qq{\t$records[1]\n};
    print qq{\t$records[2]\n};
    print qq{\t$records[3]\n};
}
$sth->finish();
while (($key, $value) = each(%Books)) {
    print qq{$key --> $value\n};
}
The print statements work in the first while loop, but I only get the last result in the second key,value loop.
What am I doing wrong here? I'm sure it's something simple. Many thanks.
OP needs to better specify the question and do some reading on the DBI module.
DBI has a fetchall_hashref call; perhaps OP could put it to some use.
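A minimal sketch of what that could look like here (assuming the driver returns lower-case column names and that titleid is unique per row):

# $dbh is the existing database handle; fetchall_hashref keys the result
# on the named column, here titleid.
my $sth = $dbh->prepare(q{select authorid, titleid, title, pubyear from books});
$sth->execute() or die $dbh->errstr;

my $books = $sth->fetchall_hashref('titleid');
for my $id (sort keys %$books) {
    print "$id --> $books->{$id}{title}\n";
}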
In the shown code an assignment of a record to a hash with the same keys overwrites the previous one, row after row, and the last one remains. Instead, they should be accumulated in a suitable data structure.
Since there are a fair number of rows (351, we are told), one option is a top-level array, with a hashref for each book:
my @all_books;
while (my @records = $sth->fetchrow_array()) {
    my %book;
    @book{qw(AuthorID TitleID Title PubYear)} = @records;
    push @all_books, \%book;
}
Now we have an array of books, each indexed by the four parameters.
This uses a hash slice to assign multiple key-value pairs to a hash.
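For instance, a per-book printout could then look like this (a sketch continuing from the loop above):

for my $book (@all_books) {
    print "$book->{AuthorID}\n";
    print "\t$book->{Title} ($book->{PubYear})\n";
}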
Another option is a top-level hash with keys for the four book-related parameters, each holding as its value an arrayref with the entries from all records:
my %books;
while (my @records = $sth->fetchrow_array()) {
    push @{$books{AuthorID}}, $records[0];
    push @{$books{TitleID}},  $records[1];
    ...
}
Now one can go through authors/titles/etc, and readily recover the other parameters for each.
Adding some checks is always a good idea when reading from a database.

Gremlin graph traversal backtracking

I have some code that generates gremlin traversals dynamically based on a query graph that a user supplies. It generates gremlin that relies heavily on backtracking. I've found some edge cases that are generating gremlin that doesn't do what I expected it to, but I also can't find anything online about the rules surrounding the usage of some of these pipes (like 'as' and 'back' in this case). An example of one of these edge cases:
g.V("id", "some id").as('1').inE("edgetype").outV.has("#class", "vertextype").as('2').inE("edgetype").outV.has("#class", "vertextype").filter{(it.getProperty("name") == "Bob")}.outE("edgetype").as('target').inV.has("id", "some id").back('2').outE("edgetype").inV.has("id", "some other id").back('target')
The goal of the traversal is to return the edges that were 'as'd as 'target'. When I run this against my orientdb database it encounters a null exception. I narrowed down the exception to the final pipe in the traversal: back('target'). I'm guessing that the order of the 'as's and 'back's matters and that the as('target') went 'out of scope' as soon as back('2') was executed. So a few questions:
Is my understanding (that 'target' goes out of scope because I backtracked to before it was 'as'd) correct?
Is there anywhere I can learn the rules surrounding backtracking without having to reverse engineer the gremlin source?
edit:
I found the relevant source code:
public static List<Pipe> removePreviousPipes(final Pipeline pipeline, final String namedStep) {
    final List<Pipe> previousPipes = new ArrayList<Pipe>();
    for (int i = pipeline.size() - 1; i >= 0; i--) {
        final Pipe pipe = pipeline.get(i);
        if (pipe instanceof AsPipe && ((AsPipe) pipe).getName().equals(namedStep)) {
            break;
        } else {
            previousPipes.add(0, pipe);
        }
    }
    for (int i = 0; i < previousPipes.size(); i++) {
        pipeline.remove(pipeline.size() - 1);
    }
    if (pipeline.size() == 1)
        pipeline.setStarts(pipeline.getStarts());
    return previousPipes;
}
It looks like the BackFilterPipe does not remove any of the aspipes between the back pipe and the named as pipe, but they must not be visible to pipes outside the BackFilterPipe. I guess my algorithm for generating the code is going to have to be smarter about detecting when the target is within a meta pipe.
Turns out the way I was using 'as' and 'back' made no sense. When you consider that everything between the 'back('2')' and the 'as('2')' is contained within the BackFilterPipe's inner pipeline, it becomes clear that meta pipes can't be overlapped like this: as('2'), as('target'), back('2'), back('target').

CouchDB View with 2 Keys

I am looking for a general solution to a problem with CouchDB views.
For example, I have a view result like this:
{"total_rows":4,"offset":0,"rows":[
{"id":"1","key":["imported","1"],"value":null},
{"id":"2","key":["imported","2"],"value":null},
{"id":"3","key":["imported","3"],"value":null},
{"id":"4","key":["mapped","4"],"value":null},
{"id":"5,"key":["mapped","5"],"value":null}
]
1) If I want to select only "imported" documents I would use this:
view?startkey=["imported"]&endkey=["imported",{}]
2) If I want to select all imported documents with an id higher than 2:
view?startkey=["imported",2]&endkey=["imported",{}]
3) If I want to select all imported documents with an id between 2 and 4:
view?startkey=["imported",2]&endkey=["imported",4]
My question is: how can I select all rows with an id between 2 and 4?
You can try to extend the solution above, but prepend keys with a kind of "emit index" flag like this:
map: function (doc) {
emit ([0, doc.number, doc.category]); // direct order
emit ([1, doc.category, doc.number]); // reverse order
}
so you will be able to request them with
view?startkey=[0, 2]&endkey=[0, 4, {}]
or
view?startkey=[1, 'imported', 2]&endkey=[1, 'imported', 4]
But 2 different views will be better anyway.
I ran into the same problem a little while ago so I'll explain my solution. Inside of any map function you can have multiple emit() calls. A map function in your case might look like:
function(doc) {
emit([doc.number, doc.category], null);
emit([doc.category, doc.number], null);
}
You can also use ?include_docs=true to get the documents back from any of your queries. Then your query to get back rows 2 to 4 would be
view?startkey=[2]&endkey=[4,{}]
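Combining that with include_docs, a request for rows 2 to 4 together with their documents could look like:
view?startkey=[2]&endkey=[4,{}]&include_docs=true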
You can view the rules for sorting at CouchDB View Collation