Is it possible to query just part of a value and get the data? Say I have the value 12345678; can I somehow search for just 1234?
data.whereEqualTo("data", 1234);
This is needed because the last digits change while the first ones don't.
Sure it is. When it comes to Firestore, you can simply use .startAt() as seen in the following query:
db.collection("collName").orderBy("data").startAt(1234);
When it comes to the Realtime Database, you can use .startAt() too, though the syntax differs, as seen below:
db.child("nodeName").orderByChild("data").startAt(1234);
But remember, both queries return every element greater than or equal to 1234. Since 12345678 is greater, it will be present in the result set, but so will values like 99999 that don't start with 1234. To restrict the results to the 1234 prefix you also need an upper bound, as in the sketch below.
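For example, here is a minimal Python sketch using the firebase_admin SDK (the collection, node, and field names come from the snippets above; the database URL and the fixed 8-digit width are assumptions) that bounds the range on both ends so only values starting with 1234 come back:

import firebase_admin
from firebase_admin import credentials, db, firestore

firebase_admin.initialize_app(credentials.ApplicationDefault(), {
    "databaseURL": "https://example.firebaseio.com"  # hypothetical project URL
})

# Firestore: order by the field and bound the range on both ends, so only
# 8-digit values whose first digits are 1234 are returned.
fs = firestore.client()
docs = (fs.collection("collName")
          .order_by("data")
          .start_at({"data": 12340000})
          .end_at({"data": 12349999})
          .stream())

# Realtime Database: the same idea with order_by_child/start_at/end_at.
snapshot = (db.reference("nodeName")
              .order_by_child("data")
              .start_at(12340000)
              .end_at(12349999)
              .get())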
I've been reading the DynamoDB docs and was unable to work out whether it makes sense to query a Global Secondary Index using the contains operator.
My problem is as follows: my DynamoDB document has a list of embedded objects, and every object has a code field which is unique:
{
"entities":[
{"code":"entity1Code", "name":"entity1Name"},
{"code":"entity2Code", "name":"entity2Name"}
]
}
I want to be able to get all documents that contain entities with entity.code = X.
For this purpose I'm considering adding a Global Secondary Index that would contain all the entity codes present in the document, separated by commas. So the example above would look like:
{
"entities":[
{"code":"entity1Code", "name":"entity1Name"},
{"code":"entity2Code", "name":"entity2Name"}
],
"entitiesGlobalSecondaryIndex":"entityCode1,entityCode2"
}
And then I would like to apply a filter expression on entitiesGlobalSecondaryIndex, something like: entitiesGlobalSecondaryIndex contains entity1Code.
Would this be efficient, or does a Global Secondary Index make no sense used this way, so that DynamoDB would simply check the condition against every document, which is essentially a scan?
Any help is much appreciated, thanks.
The contains operator of a query cannot be run on a partition key. For a query to use any sort of operator (contains, begins_with, >, <, etc.) you must have a range attribute, a.k.a. your sort key.
You can very well set up a GSI with some value as your PK and this code as your SK. However, GSIs are replications of the table, so there is a slight potential for the data in a GSI to lag behind the master copy. If you don't run this query against the GSI very often, you're probably safe from that.
However, if you are trying to do this across the entire table at once, then it's no better than a scan.
If what you need is for a specific code to return all its documents at once, you could build a GSI with that code as the PK. If you add a date field as the SK of this GSI, the results will even be time sorted. Query against that code in the index and you'll get every single one of them.
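As a minimal boto3 sketch of such a query (the table name, index name, and key names are hypothetical):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Documents")  # hypothetical table name

# Query a GSI whose partition key is the entity code and whose sort key is
# a date, returning every document for that code, newest first.
resp = table.query(
    IndexName="code-date-index",         # hypothetical GSI name
    KeyConditionExpression=Key("code").eq("entity1Code"),
    ScanIndexForward=False,              # descending by the date sort key
)
items = resp["Items"]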
Since you may have multiple codes, and if there aren't too many per document, you could maybe use a sparse index: if a document has an entity with code "AAAA", then it also has an attribute named AAAA (or AAAAflag or similar) which is null/absent unless the entities list contains that code. A GSI on this AAAAflag attribute will only contain documents that have that entity code and will ignore all documents where the attribute does not exist. This may work for you if you can also provide a good PK to keep the numbers well partitioned, and if you don't have too many codes.
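A rough sketch of the write side of that sparse-index idea (the table name, key names, and flag attribute are hypothetical); the flag is only set when the code is present, so a GSI keyed on it stays sparse:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Documents")  # hypothetical table name

item = {
    "pk": "DOC#123",  # hypothetical partition key
    "entities": [
        {"code": "AAAA", "name": "entity1Name"},
        {"code": "BBBB", "name": "entity2Name"},
    ],
}

# Set the flag attribute only when the code is present; documents without it
# never appear in a GSI keyed on AAAAflag, which keeps that index sparse.
if any(e["code"] == "AAAA" for e in item["entities"]):
    item["AAAAflag"] = "1"

table.put_item(Item=item)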
Filter expressions, by the way, are different from all of the above. A filter expression is run on the data that would be returned, after it has already been read out of the table. This is useful if you have a multi-access-pattern setup but don't want a particular call to get all the documents associated with a particular PK, in the interest of keeping the data your code works with concise. A query with a filter expression still reads everything the query matches, but only returns what makes it past the filter.
If you are only querying against a particular PK at any given time and you want to know whether it contains any entities with code X, then a filter expression would work perfectly; a sketch follows. Of course, this is only per PK and not for your entire table.
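A hedged boto3 sketch of that per-PK filter (the table and key names are hypothetical; the attribute name comes from the question):

import boto3
from boto3.dynamodb.conditions import Attr, Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Documents")  # hypothetical table name

# The query still reads every item under the PK; the filter only trims what
# is returned to the client, not what is consumed from the table.
resp = table.query(
    KeyConditionExpression=Key("pk").eq("DOC#123"),  # hypothetical PK
    FilterExpression=Attr("entitiesGlobalSecondaryIndex").contains("entity1Code"),
)
matching = resp["Items"]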
If all you need is numbers, then you could keep a count attribute on the document, or a meta document in that partition that holds these values and can be queried directly.
Lastly, and I have no idea whether this works, if your entities attribute is a map type you might very well be able to filter against the entity code, and maybe even use entities.code.contains(value) if it were an SK.
I need to pass the result of the first query as an input parameter to my second query, and I also want to know how to write multiple queries.
In my use case, the second query can only be traversed using the result of the first query, and then only inside a loop (similar to a for loop).
const query1 = g.V().hasLabel('Province').has('code',market).inE('partOf').outV().has('type','state').values('code').as('state')
After executing query1,the result is
res = [{id1}, {id2}, ...]
query2 = select('state').repeat(has('code',res[0]).inE('partOf').outV().has('type',city).value('name')).times(${res.length-1}).as('city')
I made the assumption that your first query tries to find "states by market", where market is a variable you intend to pass to your query. If that is correct, then your first query simplifies to:
g.V().hasLabel('Province').has('code',market).
in('partOf').
has('type','state').values('code')
As a rule, prefer in() to inE().outV() when no filtering on edge properties is needed.
Your second query doesn't look like valid Gremlin, but maybe you were just trying to provide an example of what you wanted to do. You wrote:
select('state').
repeat(has('code',res[0]).
inE('partOf').outV().
has('type',city).value('name')).
times(${res.length-1}).as('city')
and I assume that means you want to use the states found in the first query to find their cities. If that's what you're after you can simplify this to a single query of:
g.V().hasLabel('Province').has('code',market).
in('partOf').has('type','state').
in('partOf').has('type','city').
values('name')
If you need data about the state and the city as part of the result then consider project():
g.V().hasLabel('Province').has('code',market).
in('partOf').has('type','state').
project('state','cities').
by('code').
by(__.in('partOf').has('type','city').
values('name').
fold())
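If you're running this from Python, a minimal gremlinpython sketch of that last traversal (the server endpoint and the market value are hypothetical) could look like:

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Connect to a Gremlin Server endpoint (URL is hypothetical).
conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

market = "ABC"  # hypothetical market code

# Note: in() is spelled in_() in gremlinpython to avoid the Python keyword.
result = (g.V().hasLabel("Province").has("code", market)
           .in_("partOf").has("type", "state")
           .project("state", "cities")
           .by("code")
           .by(__.in_("partOf").has("type", "city").values("name").fold())
           .toList())

conn.close()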
I have a simple query like this:
SELECT * FROM CUSTOMERS WHERE CUSTID LIKE '~' AND BANKNO LIKE '~'
The problem is, the CUSTOMERS table may or may not contain the BANKNO column, depending on circumstances I have no control over. If BANKNO is not a column in CUSTOMERS, this query fails.
So my question is: is it possible to test whether the BANKNO column exists and, if so, include it in the query, and if not, exclude it?
The query really has to be flexible.
A non-existent column in a SELECT will always fail in sqlite3.
One option might be to put the "full" SQL in a try block and, if it errors, execute the other SQL.
Or, you could query PRAGMA table_info('CUSTOMERS') and interrogate the result to see whether the column in question exists (sketched below). The SQLite doc is here: https://www.sqlite.org/pragma.html#pragma_table_info.
I'm sure there are other options, but the bottom line is you need to know before the sql is executed that it contains only valid column names.
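For instance, a small Python sketch of the PRAGMA approach (the database file name is hypothetical; the '~' placeholders come from the question) that builds the statement from only the columns that actually exist:

import sqlite3

conn = sqlite3.connect("customers.db")  # hypothetical database file

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk);
# collect the names of the columns actually present in CUSTOMERS.
cols = {row[1] for row in conn.execute("PRAGMA table_info('CUSTOMERS')")}

where = ["CUSTID LIKE ?"]
params = ["~"]
if "BANKNO" in cols:
    where.append("BANKNO LIKE ?")
    params.append("~")

sql = "SELECT * FROM CUSTOMERS WHERE " + " AND ".join(where)
rows = conn.execute(sql, params).fetchall()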
How should a projection query be written in Objectify so that the id of the entity also comes along in the result? (Projection query because my table has a lot of columns.)
The query that I've written is
ofy().load().type(Item.class).filter("shopId",shopId)
.filter("name >=",name)
.filter("name <=",name+"\ufffd")
.order("-creationTime")
.project("name","imageUrl").list();
I've read that putting id in the project function doesn't work. What's the workaround, so that I get the name, the imageUrl and the id as well?
My bad. The id does come along in the result. You don't need to put id in the project function.
I'm trying to run
SELECT DISTINCT description FROM products;
which fails with the error "The field is too small to accept the amount of data you attempted to add." This is odd because I'm not inserting data, I'm pulling data with a query.
However, running the following doesn't produce the error:
SELECT description FROM products;
So I'm confused as to what the issue is. I'm using OleDbDataReader and reading data out of an .mdb database file.
This might be related to: http://support.microsoft.com/kb/896950/us
This problem occurs because when you set the UniqueValues query property to Yes, a DISTINCT keyword is added to the resulting SQL statement. The DISTINCT keyword directs Access to perform a comparison between records. When Access performs a comparison between two Memo fields, Access treats the fields as Text fields that have a 255-character limit. Sometimes Memo field data that is larger than 255 characters will generate the error message that is mentioned in the "Symptoms" section. Sometimes only 255 characters are returned from the Memo field.
Workaround:
To work around this problem, modify the original query by removing the Memo field. Then, create a second query that is based on both the table and the original query. This new query uses all the fields from the original query, and it uses the Memo field from the table. When you run the second query, the first query runs, and its output is used to run the second query. This returns the Memo field data based on the returned data of the first query.
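Alternatively, for the question's exact query, here is a rough Python sketch of a client-side deduplication workaround (the pyodbc connection string is hypothetical; the question itself uses OleDbDataReader from .NET). The plain SELECT without DISTINCT succeeds per the question, so the distinct set is computed in code and Access never compares Memo fields:

import pyodbc

# Hypothetical connection string to the .mdb file.
conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\products.mdb"
)
cur = conn.cursor()

# Deduplicate client-side; this compares the full Memo values rather than
# Access's 255-character truncation.
seen = set()
distinct_descriptions = []
for (description,) in cur.execute("SELECT description FROM products"):
    if description not in seen:
        seen.add(description)
        distinct_descriptions.append(description)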