ASP.NET action to receive complex JSON with a complex collection

This is a tough one: basically, I want to map a complex JSON object, with a collection consisting of different but similar complex types, into a C# model.
This is the default JSON my ASP.NET Web API will receive:
{
"Id": "000111222333",
"Move": {
"Address2": {
"City": "CityName",
"Street": "StreetName",
"PostalCode": "4444",
"HouseNumber": "4",
"Country": "NLD",
"IsInhabited": "true"
},
"Contracts": {
"ElectricityContract": {
"ContractId": "000000031",
"ContractNumber": "00000011",
"EANCode": "53123123123123",
"StartDate": "2000-01-20",
"EndDate": "2017-06-06",
"IsBuildingComplex": "false",
"ForceSingleTariff": "false"
},
"CableContract": {
"ContractId": "456454546353",
"ContractNumber": "12312312313",
"StartDate": "2000-01-20",
"EndDate": "2017-01-23"
}
}
}
}
My problem here is that I can't seem to receive the different kinds of Contracts as a single collection.
I've tried to map the "Contracts" sequence to an ArrayList, but it's always null.
I've tried to map it to ICollection/IEnumerable/List<IContract>, where all contracts are a type of IContract, but this doesn't work either.
I've started to manually map from the JSON string, but I hope there's a better way to do this? All help appreciated.

To get a Contracts collection inside this object you need to change your JSON to this one:
{
...
"Contracts": [
{
"ContractId": "000000031",
"Type": "ElectricityContract",
"ContractNumber": "00000011",
"EANCode": "53123123123123",
"StartDate": "2000-01-20",
"EndDate": "2017-06-06",
"IsBuildingComplex": "false",
"ForceSingleTariff": "false"
},
{
"ContractId": "456454546353",
"Type" : "CableContract",
"ContractNumber": "12312312313",
"StartDate": "2000-01-20",
"EndDate": "2017-01-23"
}
]
}
So Contracts will be a JSON array rather than an object, and you can map it by using a custom binding to ICollection/IEnumerable/List<IContract>. This binding should create a different IContract implementation according to the Type property.
For how to do this, take a look at this post.
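For illustration, here is a minimal sketch of such a binding as a Json.NET JsonConverter. The IContract, ElectricityContract and CableContract names follow the question; everything else (the property stubs, the error handling) is an assumption:

using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public interface IContract { string ContractId { get; set; } }
public class ElectricityContract : IContract { public string ContractId { get; set; } /* ... */ }
public class CableContract : IContract { public string ContractId { get; set; } /* ... */ }

public class ContractConverter : JsonConverter
{
    public override bool CanConvert(Type objectType) => objectType == typeof(IContract);
    public override bool CanWrite => false;

    public override object ReadJson(JsonReader reader, Type objectType,
                                    object existingValue, JsonSerializer serializer)
    {
        // Buffer the object so the "Type" discriminator can be inspected first.
        var obj = JObject.Load(reader);
        IContract contract;
        switch ((string)obj["Type"])
        {
            case "ElectricityContract": contract = new ElectricityContract(); break;
            case "CableContract": contract = new CableContract(); break;
            default: throw new JsonSerializationException("Unknown contract type");
        }
        // Fill the chosen concrete instance from the same JSON object.
        serializer.Populate(obj.CreateReader(), contract);
        return contract;
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        => throw new NotSupportedException();
}

With that in place, decorating the model's collection property with [JsonProperty(ItemConverterType = typeof(ContractConverter))] List<IContract> Contracts makes the converter run for each array element during model binding.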

Related

Query Firebase using a REST call

I am querying Firestore with this request:
https://firestore.googleapis.com/v1/projects/myproject/databases/(default)/documents/mycollection
I am getting the following JSON. Can someone please help me filter the query on the Rate field? I am writing the following query and it doesn't work:
https://firestore.googleapis.com/v1/projects/myproject/databases/(default)/documents/mycolletion?Rate="15"
{
"documents": [
{
"name": "0C45nDuozgQDOwEx5xHR",
"fields": {
"Clinic": {
"stringValue": "American Hospital"
},
"Rate": {
"stringValue": "140"
}
},
"createTime": "2020-06-28T20:32:18.776123Z",
"updateTime": "2020-07-22T21:19:24.061647Z"
},
{
"name": "Jm3tNVWmk4Q1pk87KL1m",
"fields": {
"Clinic": {
"stringValue": "Cleaveland clinic"
},
"Rate": {
"stringValue": "150"
}
},
"createTime": "2020-06-28T20:28:03.726819Z",
"updateTime": "2020-07-22T21:19:05.073019Z"
}
]
}
The problem is that the Rate field is an object (nested inside another object, to make it worse), so to achieve this you would either need to update your Firestore structure so you can do it all in the request URL, or use a structured query in the body of the request.
Change the structure:
In order to work with the request you already have, you will need to change the structure to the following (using the first document as an example):
{
"name": "0C45nDuozgQDOwEx5xHR",
"Clinic": "American Hospital",
"Rate": "140",
"createTime": "2020-06-28T20:32:18.776123Z",
"updateTime": "2020-07-22T21:19:24.061647Z"
}
which in my opinion makes it simpler, although I don't have the full picture.
Use a structured query in your request body:
To keep the structure you already have, you will need to use this URL:
https://firestore.googleapis.com/v1/projects/myproject/databases/(default)/documents:runQuery
And add this to the body of the request:
"structuredQuery": {
"from": [{
"collectionId": "mycollection"
}],
"where": {
"fieldFilter": {
"fields": {
"Rate": {
"stringValue": "15"
}
}
}
}
}
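For example, assuming you have an OAuth access token in $TOKEN and have saved the body above as query.json (both assumptions; the project name comes from the question), the whole call could look like this sketch:

curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @query.json \
  "https://firestore.googleapis.com/v1/projects/myproject/databases/(default)/documents:runQuery"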
You can find more details in the runQuery documentation and the StructuredQuery documentation for what else you can do with these options.

Some NoSQL CosmosDB advice required

I am looking for some advice on designing an application using NoSQL CosmosDB or a similar technology.
The data structure currently looks like the following:
{
"accounts": [{
"name": "name1",
"type": "type1"
},
{
"name": "name2",
"type": "type2"
}
],
"categories": [{
"master": "mastername",
"child": [
"child1name",
"child2name"
]
},
{
"master": "mastername2",
"child": [
"child3name",
"child4name"
]
}
],
"charts": {
},
"grouping": [{
"2018": [{
"06": {
"property1": "value1",
"property2":"value2"
},
"07": {
"property1": "value2",
"property2":"value2",
"property3":"value3"
}
}]
}],
"ItemsList": [{
"id": "2018051720",
"dateMonth": "201807",
"property1": "value2",
"date": "17/07/2018",
"Description": "description2"
},
{
"id": "2018051720",
"datemonth": "201807",
"property1": "value1",
"date": "17/07/2018",
"Description": "description"
}
],
"id": "7b786960c93cc9a8"
}
Currently I have decided to use a single collection because of budget concerns, and inside it I will store multiples of the data structure you see above, like a list of them.
My question is: is this a good design? The reason for asking is that the following elements can grow quite substantially over time: ItemList and Grouping.
ItemList will grow every month as users add to it, and Grouping will grow once a month for every year and month, but will be updated as ItemList items are added. Categories and accounts could also change, but irregularly.
If I keep this in one collection, I was thinking I could somehow have the following structure:
// Main Object
{
"accounts": [{
"name": "name1",
"type": "type1"
},
{
"name": "name2",
"type": "type2"
}
],
"categories": [{
"master": "mastername",
"child": [
"child1name",
"child2name"
]
},
{
"master": "mastername2",
"child": [
"child3name",
"child4name"
]
}
],
"charts": {
},
"id": "7b786960c93cc9a8"
}
// Groupings list
{
"grouping": [{
"userid": "7b786960c93cc9a8",
"grouping": {
"2018": [{
"06": {
"property1": "value1",
"property2": "value2"
},
"07": {
"property1": "value2",
"property2": "value2",
"property3": "value3"
}
}]
}
},
{
"userid": "sfkjehffkjwhf34343",
"grouping": {
"2018": [{
"04": {
"property1": "value1",
"property2": "value2"
},
"05": {
"property1": "value2",
"property2": "value2",
"property3": "value3"
},
"06": {
"property1": "value2",
"property2": "value2",
"property3": "value3"
}
}]
}
}
]
}
// Item List List
{
"ItemLists": [{
"userid": "7b786960c93cc9a8",
"itemlist": [{
"id": "2018051720",
"dateMonth": "201807",
"property1": "value2",
"date": "17/07/2018",
"Description": "description2"
},
{
"id": "2018051720",
"datemonth": "201807",
"property1": "value1",
"date": "17/07/2018",
"Description": "description"
}
]
},
{
"userid": "sfkjehffkjwhf34343",
"itemlist": [{
"id": "2018051720",
"dateMonth": "201807",
"property1": "value2",
"date": "17/07/2018",
"Description": "description2"
},
{
"id": "2018051720",
"datemonth": "201807",
"property1": "value1",
"date": "17/07/2018",
"Description": "description"
}
]
}
]
}
As you can see, I will basically have the main object list growing as normal, and then the other JSON objects for ItemList and Grouping, which can grow independently from the main object. But that would then require two or even three reads for the website. Working on only having 400 RU's a month basically, it's not a lot of user base and objects?
What is the best way to do this while keeping cost in mind? If money were no problem, I would probably have gone with a collection for each, where the main object just references the other collections by id or something.
Hope it makes a bit of sense, in my head it does :)
Imho you're making the age-old mistake of worrying about optimization before a problem arises. Also, your sentence "Working on only having 400 RU's a month" somehow makes me feel like you should read up more on the topic of RUs.
Check here for information about RUs and tools to estimate your throughput.
400 RUs, which caps your collection's throughput, might slow down your end users' experience (there might be other bottlenecks, usually their on-premise internet connection).
You can always watch the usage of your collections in the Azure portal and upscale within minutes, so you cannot go wrong by starting with 400 RUs.
Every request not made is the biggest possible boost to performance.
Requests in CosmosDB are already bloated with headers for security; you will not get notable performance boosts by shaving a few bytes off your objects here and there. But local caching (be it on your web server or on the user's machine) will, and it's very easy to do if you simply store the whole JSON objects as key-value pairs (basically what CosmosDB does).
What would be wrong in my opinion is considering multiple collections. I think you have misunderstood the concept there a little. One collection per customer/project is usually the way to go, so don't worry: everything is indexed and uniquely ID'd internally, so separating things is no problem. One collection per "object type" makes the advantage of any NoSQL database moot.
If you worry about your "internal lists" getting too long, just save them separately and only store their ids in the original object. Then you load them on demand in your application. Generally speaking, many small objects are better than a few large objects, if you are able to load them cleverly in your application.
So instead of this:
{
"userid": "sfkjehffkjwhf34343",
"grouping": {
"2018": [{
"04": {
"property1": "value1",
"property2": "value2"
},
"05": {
"property1": "value2",
"property2": "value2",
"property3": "value3"
},
"06": {
"property1": "value2",
"property2": "value2",
"property3": "value3"
}
}]
}
}
you could do this instead:
{
"userid": "sfkjehffkjwhf34343",
"grouping": {
"2018": ["x1","x2","x3"]
}
}
{
"groupingid": "x1",
"month":"04",
"values": {
"property1": "value1",
"property2": "value2"
}
}
{
"groupingid": "x2",
"month":"05",
"values": {
"property1": "value1",
"property3": "value3",
"property2": "value2"
}
}
{
"groupingid": "x3",
"month":"06",
"values": {
"property1": "value1",
"property2": "value2"
}
}
Load them only if needed, cache them according to their internal id (which changes on every update if you leave it out), and you won't believe how performant this can be.
You should also read up on stored procedures; they are really powerful and in some cases a true gold mine for performance improvements. A sketch of the idea follows.
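Purely as an illustration (none of this is from the question), here is a minimal CosmosDB stored procedure that inserts a batch of grouping documents in one round trip, so the request overhead is paid once; the name bulkInsertGroupings and the parameter shape are made up:

// Server-side JavaScript, registered on the collection.
function bulkInsertGroupings(groupings) {
    var collection = getContext().getCollection();
    var response = getContext().getResponse();
    if (!groupings || !groupings.length) throw new Error("groupings array is required");
    var created = 0;
    createNext();
    function createNext() {
        if (created >= groupings.length) {
            response.setBody(created); // all documents written
            return;
        }
        var accepted = collection.createDocument(
            collection.getSelfLink(),
            groupings[created],
            function (err) {
                if (err) throw err;
                created++;
                createNext();
            });
        // Not accepted means the procedure is out of time; report progress
        // so the client can resume with the remaining documents.
        if (!accepted) response.setBody(created);
    }
}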
There is a lot of good information from Microsoft out there, though admittedly it's not always easy to find.
CosmosDB is frankly an incredibly powerful tool if used correctly, but I encourage you to read up on it a little more so you can use it effectively, performance- and cost-wise.

How to associate nested relationships with attributes for a POST in JSON API

According to the spec, resource identifier objects do not hold attributes.
I want to do a POST to create a new resource which includes other nested resource.
These are the basic resources: club (with a name) and many positions (with a type). Think of a football club with positions like goalkeeper, goalkeeper, striker, striker, etc.
When I make this association, I want to set some attributes, such as whether the position is required for this particular team. For example, I only need one goalkeeper, but I want a team with many reserve goalkeepers. When I model these entities in the DB, I'll set the required attribute in a linkage table.
This is not compliant with JSON API:
{
"data": {
"type": "club",
"attributes": {
"name": "Backyard Football Club"
},
"relationships": {
"positions": {
"data": [{
"id": "1",
"type": "position",
"attributes": {
"required": "true"
}
}, {
"id": "1",
"type": "position",
"attributes": {
"required": "false"
}
}
]
}
}
}
}
This is also not valid:
{
"data": {
"type": "club",
"attributes": {
"name": "Backyard Football Club",
"positions": [{
"position_id": "1",
"required": "true"
},
{
"position_id": "1",
"required": "false"
}]
}
}
}
So what is the best way to approach this association?
The best approach here will be to create a separate resource for club_position.
Creating a club will return a URL for creating club_positions; you then POST club_positions to that URL with relationship identifiers for the position and club resources. A sketch of such a request body is below.
An added benefit is that club_positions creation can be parallelized.
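For example, a club_position POST body could look like the sketch below; the exact resource and field names (club_position, required) are assumptions, not mandated by the spec:

{
"data": {
"type": "club_position",
"attributes": {
"required": true
},
"relationships": {
"club": {
"data": { "type": "club", "id": "10" }
},
"position": {
"data": { "type": "position", "id": "1" }
}
}
}
}

This keeps required as an attribute of the club_position resource itself, so the resource identifier objects stay attribute-free as the spec demands.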

How to filter by nested attributes in JSONAPI?

Assuming we have the following data structure
"data": [
{
"type": "node--press",
"id": "f04eab99-9174-4d00-bbbe-cdf45056660e",
"attributes": {
"nid": 130,
"uuid": "f04eab99-9174-4d00-bbbe-cdf45056660e",
"title": "TITLE OF NODE",
"revision_translation_affected": true,
"path": {
"alias": "/press/title-of-node",
"pid": 428,
"langcode": "es"
}
...
}
The data returned is compliant with the JSON API standard, and I have no problem retrieving and processing it, except that I need to be able to filter the returned nodes by the path pid.
How can I filter my data by path.pid?
I have tried:
- node-press?filter[path][pid]=428
- node-press?filter[path][pid][value]=428
to no avail
It's not well defined in the filters section of the specification, but other parameters such as include describe accessing nested keys with dot notation. You could try ?filter[path.pid]=428 and parse the filter that way. For example, a nested structure like this:
"field_country": {
"data": {
"type": "taxonomy_term--country",
"id": "818f11ab-dd9d-406b-b1ca-f79491eedd73"
}
}
can be filtered with ?filter[field_country.id]=818f11ab-dd9d-406b-b1ca-f79491eedd73.

Redux + Normalizr: Adding and deleting normalized entities in Redux state

I have an API response with a lot of nested entities. I use normalizr to keep the Redux state as flat as possible. For example, the API response looks like this:
{
"id": 1,
"docs": [
{
"id": 1,
"name": "IMG_0289.JPG"
},
{
"id": 2,
"name": "IMG_0223.JPG"
}
],
"tags": [
{
"id": "1",
"name": "tag1"
},
{
"id": "2",
"name": "tag2"
}
]
}
This response is normalized with normalizr using the schema given below:
import { schema } from 'normalizr';

const OpeningSchema = new schema.Entity('openings', {
  tags: [new schema.Entity('tags')],
  docs: [new schema.Entity('docs')]
});
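For completeness, the flattened result below comes from calling normalizr's normalize with that schema (apiResponse standing in for the JSON above):

import { normalize } from 'normalizr';

const normalized = normalize(apiResponse, OpeningSchema);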
and this is how it looks afterwards:
{
result: "1",
entities: {
"openings": {
"1": {
"id": 1,
"docs": [1,2],
"tags": [1,2]
}
},
"docs": {
"1": {
id: "1",
"name": "IMG_0289.JPG"
},
"2": {
id: "2",
"name": "IMG_0223.JPG"
}
},
"tags": {
"1": {
"id": 1,
"name": "tag1"
},
"2": {
"id": 2,
"name": "tag2"
}
}
}
}
The Redux state now looks something like this:
state = {
"opening" : {
id: 1,
tags: [1,2],
docs: [1,2]
},
"tags": [
{
"id":1,
"name": "tag1"
},
{
"id":2,
"name": "tag2"
}
],
"docs": [
{
"id":1,
"name": "IMG_0289.JPG"
},
{
"id":2,
"name": "IMG_0223.JPG"
}
]
}
Now if I dispatch an action to add a tag, it adds a tag object to state.tags, but it doesn't update the state.opening.tags array. The same happens when deleting a tag.
I keep opening, tags and docs in three different reducers.
This is an inconsistency in the state. I can think of the following ways to keep the state consistent:
Dispatch an action to update tags, listen for it in both the tags reducer and the opening reducer, and update the tags in both places.
The PATCH request that updates the opening with tags returns the opening response. I can again dispatch the action that normalizes the response and sets tags, opening, etc. with proper consistency.
What is the right way to do this? Shouldn't the entities be observing changes to related entities and updating themselves? Or are there other patterns that could be followed for such actions?
First, to summarise how normalizr works: normalizr flattens a nested API response into entities defined by your schemas. So, when you made your initial GET openings API request, normalizr flattened the response and created your Redux entities and the flattened objects: openings, docs, tags.
Your suggestions are viable, but I find normalizr's real benefit is in separating API data from UI state, so I don't update the data in the Redux store myself. All my API data is kept in entities and is not altered by me; it is vanilla back-end data. All I do is a GET upon state-changing API operations and normalize the GET response. There is a small exception for the DELETE case that I'll expand on later. A middleware will deal with such cases, so you should use one if you haven't been. I created my own middleware, but I know redux-promise-middleware is quite popular.
In your data set above, when you add a new tag, I assume you are making an API POST to do so, which in turn updates the back-end. Then you should do another GET openings, which will update the entities for openings and all its nested schemas.
When you delete a tag, e.g. tags[2], upon sending the DELETE request to the back-end you should nullify the deleted object in your entities state, i.e. entities.tags[2] = null, before making the GET openings again to update your normalizr entities.
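A minimal sketch of that DELETE handling in a tags entities reducer; the action type names are assumptions, in the style of redux-promise-middleware:

// Keeps entities.tags in sync: null out a deleted tag immediately,
// and replace the whole slice whenever a fresh GET is normalized.
const tagsReducer = (state = {}, action) => {
  switch (action.type) {
    case 'DELETE_TAG_FULFILLED':
      // action.payload.id is the id of the tag that was just deleted.
      return { ...state, [action.payload.id]: null };
    case 'GET_OPENINGS_FULFILLED':
      // action.payload is the normalizr output of the GET response.
      return action.payload.entities.tags || {};
    default:
      return state;
  }
};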
