I want to suppress the Cosmos DB system information in the following result set. How can that be done?
{
"id": null,
"_rid": null,
"_self": null,
"_ts": 0,
"_etag": null,
"topLevelCategory": "Shorts,Skirt"
},
This is just an extract, of course, but I don't want to show the id etc. as they serve no purpose in this result, and I cannot figure out how to suppress that info.
I expect the following
{
"topLevelCategory": "Shorts,Skirt"
},
The query looks as follows:
$"SELECT DISTINCT locales.categories[0] AS topLevelCategory " +
$"FROM c JOIN locales in c.locales " +
$"WHERE locales.country = '{apiInputObject.Locale}' " +
$"AND locales.language = '{apiInputObject.Language}'";
The interesting thing is that if I cast the result as a JObject I don't get the system data; I only get it if I call CreateDocumentQuery as Document. So a workaround would be as follows:
IQueryable<JObject> queryResultSet = client.CreateDocumentQuery<JObject>(UriFactory.CreateDocumentCollectionUri(databaseName, databaseCollection), parsedQueryObject.SqlStatement, queryOptions);
but that has other async issues. Still, the above does not show the system-generated IDs, while the one below does:
var query = client.CreateDocumentQuery<Document>(UriFactory.CreateDocumentCollectionUri(databaseName, databaseCollection), parsedQueryObject.SqlStatement, queryOptions).AsDocumentQuery();
var result = await query.ExecuteNextAsync<Document>();
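For what it's worth, the two approaches can be combined. A minimal sketch (assuming the same client, parsedQueryObject and queryOptions as above, plus using directives for Microsoft.Azure.Documents.Linq and Newtonsoft.Json.Linq) would project into JObject and still page asynchronously:
var jObjectQuery = client.CreateDocumentQuery<JObject>(
        UriFactory.CreateDocumentCollectionUri(databaseName, databaseCollection),
        parsedQueryObject.SqlStatement,
        queryOptions)
    .AsDocumentQuery();

// Each page contains only the projected properties, e.g. { "topLevelCategory": "Shorts,Skirt" }
var page = await jObjectQuery.ExecuteNextAsync<JObject>();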
These are system-generated properties of items in Cosmos DB.
Surely, you could filter them out in the SQL: use SELECT c.topLevelCategory FROM c, don't mention them, and avoid SELECT * FROM c. Filtering in SQL is the best method, better than post-processing the result set.
Updated answer:
Your situation is that, executing the exact same query, the JObject does not show the system data but the Document does.
My explanation is below:
The Document class is a built-in base class of the DocumentDB .NET package. It carries the system-generated properties you see in your result (id, _rid, _self, _ts, _etag).
The SDK tries to map the result data, item by item, onto the entity class you defined in CreateDocumentQuery<T>.
So actually, you have already found the solution: define a custom class to receive the result data, and include only the properties you want inside it, like:
class Pojo
{
    public string id { get; set; }
    public string name { get; set; }
}
That way you keep the fields that have business meaning and no redundant ones. Hope I'm clear on this.
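As a minimal sketch of that idea against the query in the question (CategoryResult is a hypothetical name; the client, database names and queryOptions are assumed to be the same as above):
public class CategoryResult
{
    public string topLevelCategory { get; set; }
}

IQueryable<CategoryResult> categories = client.CreateDocumentQuery<CategoryResult>(
    UriFactory.CreateDocumentCollectionUri(databaseName, databaseCollection),
    parsedQueryObject.SqlStatement,
    queryOptions);

// Each CategoryResult is serialized with only topLevelCategory, so the system metadata never appears.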
Related
I am learning Prisma and I can't figure out how to use the Prisma types correctly when the returned data includes a sub-model.
For example, I have the following two models:
model Services {
  id              Int            @id @default(autoincrement())
  service_name    String         @db.VarChar(255)
  description     String         @db.MediumText
  overall_status  ServiceStatus  @default(OPERATIONAL)
  deleted         Boolean        @default(false)
  sub_services    SubServices[]
}

model SubServices {
  id              Int            @id @default(autoincrement())
  name            String         @db.VarChar(255)
  description     String         @db.MediumText
  current_status  ServiceStatus  @default(OPERATIONAL)
  service         Services?      @relation(fields: [service_id], references: [id], onDelete: Cascade)
  service_id      Int?
}
I am then pulling data from the Services model using the following:
const services = await prisma.services.findMany({
where: {
deleted: false
},
include: {
sub_services: true
}
});
On the client side I am then referencing the Services model, but the IDE isn't detecting that Services can include sub_services. I can use it and it works, but the IDE always shows a squiggly line as if the code were wrong; an example is below:
import { Services } from "@prisma/client";

const MyComponent: React.FC<{ service: Services }> = ({ service }) => {
    return (
        <>
            {service.sub_services.map(subService => {
            })}
        </>
    )
}
But in the above example sub_services is underlined with the error TS2339: Property 'sub_services' does not exist on type 'Services'.
So how would I type it in a way that the IDE can see that I can access sub_services from within the services model?
UPDATE
I found a way to do it, but I'm not sure if this is the correct way or not as I am creating a new type as below:
type ServiceWithSubServices<Services> = Partial<Services> & {
    sub_services: SubServices[]
}
and then change the const definition to the below
const ServiceParent : React.FC<{service: ServiceWithSubServices<Services>}> = ({service}) => {
Although this does seem to work, is this the right way to do it, or is there something more Prisma-specific that can do it without creating a new type?
In Prisma, by default only scalar fields are included in the generated type. So in your case the generated Services type includes all the scalar fields but not sub_services, because sub_services is a relation field.
To include relation fields, you would need to use Prisma.validator; here's a guide on generating types that include relation fields.
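As a rough sketch (assuming the generated client matches the schema above, so Prisma emits a ServicesGetPayload helper for the Services model), the relation-aware type can be derived from the same include shape used in the findMany call:
import { Prisma } from "@prisma/client";

// Type that mirrors findMany({ include: { sub_services: true } })
type ServiceWithSubServices = Prisma.ServicesGetPayload<{
    include: { sub_services: true };
}>;

The component prop can then be typed as ServiceWithSubServices instead of Services, without hand-writing the intersection type; Prisma.validator can be layered on top of this to keep the include object and the type in a single definition.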
I am a complete beginner with AWS DynamoDB. I want to scan the table with SENDTO.emailAddress = "first@first.com" as the FilterExpression.
The DB structure looks like this:
{
ID
NAME
MESSAGE
SENDTO[
{
name
emailAddress
}
]
}
Sample data:
{
ID: 1,
NAME: "HELLO",
MESSAGE: "HELLO WORLD!",
SENDTO: [
{
name: "First",
emailAddress: "first#first.com"
},
{
name: "Second",
emailAddress: "second#first.com"
}
]
}
I want to retrieve the documents that match an emailAddress. I tried a scan with a filter expression, and here is my code to retrieve the data. I am using the AWS JavaScript SDK.
let params = {
TableName : "email",
FilterExpression: "SENDTO.emailAddress = :emailAddress",
ExpressionAttributeValues: {
":emailAddress": "first#first.com",
}
}
let result = await ctx.docClient.scan(params).promise();
In order to find the item by the SENDTO attribute, you need to know the values of both the name and emailAddress attributes. DynamoDB can't find the data by just one of the attributes in an object (i.e. the emailAddress value alone).
The contains function can be used to find the data in a List data type.
CONTAINS is supported for lists: When evaluating "a CONTAINS b", "a"
can be a list; however, "b" cannot be a set, a map, or a list.
Sample code using contains:
var params = {
TableName: "email",
FilterExpression: "contains (SENDTO, :sendToVal)",
ExpressionAttributeValues: {
":sendToVal": {
"name" : "First",
"emailAddress" : "first#first.com"
}
}
};
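For completeness, here is a rough sketch of running that filter with the DocumentClient from the question (table name and values taken from the sample above):
let scanParams = {
    TableName: "email",
    FilterExpression: "contains (SENDTO, :sendToVal)",
    ExpressionAttributeValues: {
        ":sendToVal": {
            name: "First",
            emailAddress: "first@first.com"
        }
    }
};
let scanResult = await ctx.docClient.scan(scanParams).promise();
// scanResult.Items holds the matching documents; note that a scan still reads the whole table.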
If you don't know the values of both the name and emailAddress attributes, you may need to remodel the data to fulfill your use case.
I think that you should create two tables, one for users and one for messages.
The user table has partition key user_id and sort key email, plus a field with an array of the user's message ids.
The message table has partition key message_id and a field with an array of user ids.
When you get the array of user ids, you can use a BatchGet request to get all users of one message.
When you get the array of message ids, you can use a BatchGet request to get all messages of one user.
If you want to get one user by email, you can use the Query method (see the sketch below).
Docs
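A rough sketch of that last point, assuming a global secondary index named "email-index" whose partition key is the email attribute (the index name is hypothetical):
let queryParams = {
    TableName: "users",
    IndexName: "email-index",
    KeyConditionExpression: "email = :email",
    ExpressionAttributeValues: {
        ":email": "first@first.com"
    }
};
let queryResult = await ctx.docClient.query(queryParams).promise();
// queryResult.Items contains the user items whose email matches, without scanning the table.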
I want to delete the records where company_id is "****" and gamer_id is "****".
How do I write the query for this?
public List<CompanyGamer> unfollowcompany(CompanyGamerForm companyGamerForm) throws NotFoundException {
    String company_id = companyGamerForm.getCompany_id();
    String gamer_id = companyGamerForm.getGamer_id();
    Iterable<Key<CompanyGamer>> allKeys = ofy().load()
            .type(CompanyGamer.class)
            .filter("company_id", company_id)
            .filter("gamer_id", gamer_id)
            .keys();
    ofy().delete().keys(allKeys);
}
Please let me know what should be defined as the return value.
Have a look at the Objectify documentation for queries (https://github.com/objectify/objectify/wiki/Queries), at the bottom of the section "Executing Queries": "You can query for just keys, which will return Key objects much more efficiently than fetching whole objects".
Iterable<Key<CompanyGamer>> allKeys = ofy().load().type(CompanyGamer.class).filter("company_id", company_id).filter("gamer_id", gamer_id).keys();
And then you can delete all entities corresponding to the keys:
ofy().delete().keys(allKeys);
Or if you do want to execute the query that returns the entities and not the keys, you could iterate over the Query and do:
ofy().delete().entity(thing); // asynchronous
or
ofy().delete().entity(thing).now(); // synchronous
However, it would be less efficient than the first approach.
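Putting that together, a rough sketch of the whole method (assuming the usual static import of ofy() and returning nothing, since the deleted entities are of no further use to the caller):
public void unfollowCompany(CompanyGamerForm form) {
    String companyId = form.getCompany_id();
    String gamerId = form.getGamer_id();

    // Load only the keys of the matching entities, which is cheaper than loading the entities.
    Iterable<Key<CompanyGamer>> allKeys = ofy().load()
            .type(CompanyGamer.class)
            .filter("company_id", companyId)
            .filter("gamer_id", gamerId)
            .keys();

    // Delete by key; .now() makes the delete synchronous.
    ofy().delete().keys(allKeys).now();
}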
I am writing an app using the Realm.io database that will pull data from another database on a server. The server database has some tables whose primary keys are composed of more than one field. Right now I can't find a way to specify a multi-column key in Realm, since the primaryKey() function only returns an optional String.
This one works:
//index
override static func primaryKey() -> String?
{
return "login"
}
But what I would need looks like this:
//index
override static func primaryKey() -> [String]?
{
return ["key_column1","key_column2"]
}
I can't find anything in the docs on how to do this.
Supplying multiple properties as the primary key isn't possible in Realm. At the moment, you can only specify one.
Could you potentially use the information in those two columns to create a single unique value that you could use instead?
It's not natively supported but there is a decent workaround. You can add another property that holds the compound key and make that property the primary key.
Check out this conversation on github for more details https://github.com/realm/realm-cocoa/issues/1192
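A minimal Swift sketch of that workaround (ServerRecord, keyColumn1 and keyColumn2 are hypothetical names; the derived compoundKey is the actual Realm primary key):
import RealmSwift

class ServerRecord: Object {
    @objc dynamic var keyColumn1 = ""
    @objc dynamic var keyColumn2 = ""
    // Derived property that Realm uses as the primary key.
    @objc dynamic var compoundKey = ""

    override static func primaryKey() -> String? {
        return "compoundKey"
    }

    convenience init(keyColumn1: String, keyColumn2: String) {
        self.init()
        self.keyColumn1 = keyColumn1
        self.keyColumn2 = keyColumn2
        // Keep the separator out of the key values so the combination stays unique.
        self.compoundKey = "\(keyColumn1)|\(keyColumn2)"
    }
}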
You can do this, conceptually, by using a hash derived from two or more fields.
Let's assume that the two fields 'name' and 'lastname' are used as a composite primary key. Here is some sample pseudo-code:
StudentSchema = {
name: 'student',
primaryKey: 'pk',
properties: {
pk: 'string',
name: 'string',
lastname: 'string',
schoolno: 'int'
}
};
...
...
// Create a hash string derived from the related fields. Before hashing, combine the fields in order.
myname="Uranus";
mylastname="SUN";
myschoolno=345;
hash_pk = Hash( Concat(myname, mylastname ) ); /* Hash(myname + mylastname) */
// Create a student object
realm.create('student',{pk:hash_pk,name:myname,lastname:mylastname,schoolno: myschoolno});
If an ObjectId is necessary, then see "Convert string to ObjectID in MongoDB".
In RavenDB I can store objects of type Products and Categories and they will automatically be located in different collections. This is fine.
But what if I have two logically completely different types of products that use the same class? Or, instead of two, I could have any number of different types of products. Would it then be possible to tell Raven to split the product documents up into collections, let's say based on a string property available on the Product class?
Thank you in advance.
EDIT:
I have created and registered the following store listener that changes the collection the documents are stored in at runtime. This results in the documents correctly being stored in different collections, giving a nice logical grouping of the documents.
public class DynamicCollectionDefinerStoreListener : IDocumentStoreListener
{
    public bool BeforeStore(string key, object entityInstance, RavenJObject metadata)
    {
        var entity = entityInstance as EntityData;
        if (entity == null)
            throw new Exception("Cannot handle object of type " + entityInstance.GetType());

        metadata["Raven-Entity-Name"] = RavenJToken.FromObject(entity.TypeId);
        return true;
    }

    public void AfterStore(string key, object entityInstance, RavenJObject metadata)
    {
    }
}
However, it seems I have to adjust my queries too in order to get the objects back. My typical query used to look like this:
session => session.Query<EntityData>().Where(e => e.TypeId == typeId)
With 'typeId' being the name of the new Raven collection (and also the name of the entity type, saved as a separate field on the EntityData object).
How would I go about querying my objects back? I can't find the spot where I can define my collection at runtime prior to executing my query.
Do I have to execute raw Lucene queries? Or can I maybe implement a query listener?
EDIT:
I found a way of storing, querying and deleting objects using dynamically defined collections, but I'm not sure this is the right way to do it:
Document store listener:
(I use the class defined above)
Method resolving index names:
private string GetIndexName(string typeId)
{
    return "dynamic/" + typeId;
}
Store/Query/Delete:
// Storing
session.Store(entity);
// Query
var someResults = session.Query<EntityData>(GetIndexName(entity.TypeId)).Where(e => e.EntityId == entity.EntityId);
var someMoreResults = session.Advanced.LuceneQuery<EntityData>(GetIndexName(entityTypeId)).Where("TypeId:Colors AND Range.Basic.ColorCode:Yellow");
// Deleting
var loadedEntity = session.Query<EntityData>(GetIndexName(entity.TypeId))
    .Where(e => e.EntityId == entity.EntityId)
    .SingleOrDefault();
if (loadedEntity != null)
{
    session.Delete<EntityData>(loadedEntity);
}
I have the feeling it's getting a little dirty, but is this the way to store/query/delete when specifying the collection names at runtime? Or am I trapping myself this way?
Stephan,
You can provide the logic for deciding on the collection name using:
store.Conventions.FindTypeTagName
This is handled statically, using the generic type.
If you want to make that decision at runtime, you can provide it using a DocumentStoreListener.
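For reference, a minimal sketch of the static convention against the older client API used above (the "EntityDatas" tag name and the pluralizing fallback are just placeholders):
store.Conventions.FindTypeTagName = type =>
    type == typeof(EntityData)
        ? "EntityDatas"        // fixed, per-type decision made once at startup
        : type.Name + "s";     // naive fallback for every other type

// Because the convention only sees the Type, a per-instance decision such as one based
// on entity.TypeId still needs the IDocumentStoreListener approach shown in the question.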