I have a task to extend my web application to give users the ability to segment their own data (i.e., choose their own fields and combine criteria with AND/OR, etc.), so I'm creating something similar to a query-builder tool but lighter. I'm not worrying about the front end for the moment; I'm just trying to focus on how to do this on the back end.
My only thought so far is to store their "Segment" as an XML document (serialized in the DB) which contains all of their columns and criteria and how they map to the database. When the segment is called, a mapping class deserializes this XML document, maps the fields, builds a SQL query, and returns the query results. The problem I see with this is that if the database setup changes (likely), I'm left with a serialized XML document that knows nothing about those changes.
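Roughly, the shape I'm picturing is something like the sketch below (every name here is a placeholder). The criteria would store logical field names, with a separate mapping translating those to real columns:

// Hypothetical segment model, serialized to XML for storage in the DB.
// Criteria use logical field names; a mapping layer translates them to
// real columns, so a schema change only means updating that map.
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Segment
{
    public List<Criterion> Criteria { get; set; } = new List<Criterion>();
}

public class Criterion
{
    public string Field { get; set; }       // logical name, e.g. "CustomerName"
    public string Operator { get; set; }    // "=", "<", "LIKE", ...
    public string Value { get; set; }
    public string Conjunction { get; set; } // "AND"/"OR" joining to the previous criterion
}

public static class SegmentStore
{
    // Serialize a segment to an XML string for the DB column.
    public static string ToXml(Segment segment)
    {
        var serializer = new XmlSerializer(typeof(Segment));
        using var writer = new StringWriter();
        serializer.Serialize(writer, segment);
        return writer.ToString();
    }
}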
Has anyone tackled a similar situation?
I had a similar problem and posted a question on here; the answers could be a potential solution to your own issue:
Dynamic linq query with multiple/unknown criteria
See how you get on with that.
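The gist of that approach is building the filter predicates at runtime. Here is a minimal sketch using expression trees (System.Linq.Expressions ships with .NET; the Customer type and property names are made up):

using System;
using System.Linq;
using System.Linq.Expressions;

public static class PredicateBuilder
{
    // Builds x => x.<property> == value for any entity type T.
    public static Expression<Func<T, bool>> EqualTo<T>(string property, object value)
    {
        var param = Expression.Parameter(typeof(T), "x");
        var body = Expression.Equal(
            Expression.Property(param, property),
            Expression.Constant(value)); // types must match, or wrap with Expression.Convert
        return Expression.Lambda<Func<T, bool>>(body, param);
    }
}

// usage: customers.Where(PredicateBuilder.EqualTo<Customer>("City", "London"));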
I am attempting to use the ODBC Schema Editor to connect to several Cosmos DB collections for reporting purposes (using Power BI). While I can successfully generate a schema for one collection, another is not working correctly.
The document in question includes a request object. Within request there should be multiple fields. When I sample my collection in Schema Editor, the resulting schema is missing every array of objects (and anything that contains an array of objects) that should appear under the request object – they are simply not listed. Several other fields are properly split out into their own tables, but those tables are always empty when the schema is applied (this does not reflect the underlying data – I would expect to see rows in those tables). The behavior does not change if the same collection is re-sampled.
Here's an example:
[screenshot: JSON selection]
Does anyone know how I can get the schema editor to recognize all of my data? I'm not sure what to share that would be helpful but I'm happy to provide more if there's something that would be informative.
EDIT: Unless I'm misunderstanding how to query Cosmos DB, the issue shows up even when I query the data directly through Data Explorer. Below, you can see that if I select c.request.preparedBy, preparedBy has a property mail:
[screenshot: preparedBy]
However, if I try to query c.request.preparedBy.mail directly then I see nothing but blanks, which is exactly what appeared in the Schema Editor:
[screenshot: preparedBy.mail]
Thinking that maybe there was a limit to how many layers of depth I could query, I tried selecting from request instead of the entire collection. Interestingly, even though I see preparedBy when I select * from request, request.preparedBy again returns nothing but empty braces.
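For concreteness, these are the two queries I'm comparing, here issued through the v3 .NET SDK (the connection string, database, and collection names are placeholders):

using System;
using Microsoft.Azure.Cosmos;

var client = new CosmosClient("<connection-string>");
var container = client.GetContainer("mydb", "mycollection");

// This returns documents where preparedBy clearly carries a "mail" property:
var withParent = new QueryDefinition("SELECT c.request.preparedBy FROM c");

// ...yet drilling one level deeper returns nothing but empty results:
var withLeaf = new QueryDefinition("SELECT c.request.preparedBy.mail FROM c");

using var iterator = container.GetItemQueryIterator<dynamic>(withLeaf);
while (iterator.HasMoreResults)
{
    foreach (var doc in await iterator.ReadNextAsync())
        Console.WriteLine(doc); // prints {} for every document
}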
In a REST framework, we can access resources using the GET method, which is fine if I know the key of my resource. For example, to get a transaction I can pass transaction_id and fetch the resource for that transaction. But when I want to access all transactions between two dates, how should I write my REST method using GET?
For getting the transaction with a given transaction_id: GET /transaction/{id}
For getting the transactions between two dates: ???
Also, if there are other conditions, such as the latest 10 transactions or the oldest 10 transactions, how should I write my URL, which is the main key in REST?
I tried looking on Google but wasn't able to find an approach that is completely RESTful and solves my queries, so I'm posting my question here. I have a clear understanding of POST and DELETE, but if I want to do an update using PUT on some resource based on a condition, how do I do that?
There are collection and item resources in REST.
If you want to get a representation of an item, you usually use a unique identifier:
/books/123
/books/isbn:32t4gf3e45e67 (not a valid isbn)
or with templates:
/books/{id}
/books/isbn:{isbn}
If you want to get a representation of a collection, or a reduced collection, you use the unique identifier of the collection and add some filters to it:
/books/since:{fromDate}/to:{toDate}/
/books/?since={fromDate}&to={toDate}
The filters can go into the path or into the query string part of the URL.
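For instance, the query-string variant maps naturally onto a controller action. A hedged ASP.NET Core sketch (BookStore and PublishedOn are invented stand-ins for whatever backs the collection):

using System;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("books")]
public class BooksController : ControllerBase
{
    // GET /books?since=2023-01-01&to=2023-12-31 (both filters optional)
    [HttpGet]
    public IActionResult List([FromQuery] DateTime? since, [FromQuery] DateTime? to)
    {
        var books = BookStore.All()   // placeholder for whatever backs the collection
            .Where(b => since == null || b.PublishedOn >= since)
            .Where(b => to == null || b.PublishedOn <= to);
        return Ok(books);
    }
}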
In the response you should add links with these URLs (aka HATEOAS), which the REST clients can follow. You should use link relations, for example the IANA link relations, to describe those links, and linked data, for example schema.org, to describe the data in your representation. There are other vocabularies as well, for example GoodRelations, and of course you can write your own vocabulary for your application.
I'm writing a simple Wordpress plugin for work and am wondering if using the Transients API is practical in this case, or if I should seek out another way.
The plugin's purpose is simple. I'm making a call to the USZip Web Service (http://www.webservicex.net/uszip.asmx?op=GetInfoByZIP) to retrieve data. Our sales team uses a Lead Intake sheet that the plugin will run on.
I wanted to reduce the number of API calls, so I thought of setting a transient for each zip code, with the zip as the key, storing the incoming data (city and zip). If the corresponding data for a given zip code already exists, there's no need to make an API call.
Here are my concerns:
1. After a quick search, I realized that transient data is stored in the wp_options table, and storing this data would balloon that table in no time. Would this cause a significant performance issue if the DB becomes huge?
2. Is it horrible practice to create this many transient keys? They could easily become thousands in a few months' time.
If using Transient is not the best way, could you please help point me in the right direction? Thanks!
P.S. I opted for the Transients API over the Options API. I know zip codes don't change often, but they sometimes do, so I set an expiration time of 3 months.
A less-inflated solution would be:
Store a single option called uszip with a serialized array inside it
Grab the entire array each time and simply check whether the zip code exists
If it doesn't exist, fetch the data and save the whole option again
You should make sure you don't hit the upper bound of a serialized array in this table (9,000 elements), considering 43,000 zip codes exist in the US. However, you will most likely be dealing with a very localized subset of zip codes.
I have an ASP.NET application that accesses a MySQL database. For that I made a class with all the queries I need to retrieve data from the database.
In order to bring back from the database just the info I need, I have a lot of queries:
For example:
One query that gets the NAME and DATE from the table NEWS
Another query that gets the NAME, DATE and TEXT from the table NEWS
I do this because on some pages I just need the name and date, and on others I also need the text.
What do you think would be better for performance: having one query that gets all the information, even if some pages don't use some of the fields, or having a separate query for each case?
This has been a very simple example; in some cases I have many more fields...
Thanks.
It really depends on how often you create a connection to the database. For example, if your page loads and some parts of the page use the first query while others use the second, there is a benefit to executing the second query only once and distributing the data as needed: you save an unnecessary connection, and that does result in a performance gain. However, if different pages call different methods and you cannot reduce the number of calls, you can keep both methods and call the one that selects only what you need.
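A minimal sketch of the two-method shape, assuming the MySql.Data connector (table and column names taken from the question; everything else is illustrative):

using System;
using System.Collections.Generic;
using MySql.Data.MySqlClient;

public class NewsRepository
{
    private readonly string _connStr;
    public NewsRepository(string connStr) => _connStr = connStr;

    // Narrow query for pages that only show headlines.
    public List<(string Name, DateTime Date)> GetHeadlines()
    {
        var rows = new List<(string, DateTime)>();
        using var conn = new MySqlConnection(_connStr);
        conn.Open();
        using var cmd = new MySqlCommand("SELECT `NAME`, `DATE` FROM NEWS", conn);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            rows.Add((reader.GetString(0), reader.GetDateTime(1)));
        return rows;
    }

    // Wider query (SELECT `NAME`, `DATE`, `TEXT` FROM NEWS) reserved for
    // pages that actually render the article text.
}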
I'd like some advice on designing a REST API which will allow clients to add/remove large numbers of objects to a collection efficiently.
Via the API, clients need to be able to add items to the collection and remove items from it, as well as manipulating existing items. In many cases the client will want to make bulk updates to the collection, e.g. adding 1000 items and deleting 500 different items. It feels like the client should be able to do this in a single transaction with the server, rather than requiring 1000 separate POST requests and 500 DELETEs.
Does anyone have any info on the best practices or conventions for achieving this?
My current thinking is that one should be able to PUT an object representing the change to the collection URI, but this seems at odds with the HTTP 1.1 RFC, which suggests that the data sent in a PUT request should be interpreted independently of the data already present at the URI. This implies that the client would have to send a complete description of the new state of the collection in one go, which may well be very much larger than the change, or even be more than the client knows when it makes the request.
Obviously, I'd be happy to deviate from the RFC if necessary but would prefer to do this in a conventional way if such a convention exists.
You might want to think of the change task as a resource in itself. So you're really PUT-ing a single object, which is a Bulk Data Update object. Maybe it's got a name, owner, and big blob of CSV, XML, etc. that needs to be parsed and executed. In the case of CSV you might want to also identify what type of objects are represented in the CSV data.
List jobs, add a job, view the status of a job, update a job (probably in order to start/stop it), delete a job (stopping it if it's running) etc. Those operations map easily onto a REST API design.
Once you have this in place, you can easily add different data types that your bulk data updater can handle, maybe even mixed together in the same task. There's no need to have this same API duplicated all over your app for each type of thing you want to import, in other words.
This also lends itself very easily to a background-task implementation. In that case you probably want to add fields to the individual task objects that allow the API client to specify how they want to be notified (a URL for you to GET when it's done, an e-mail address to notify, etc.).
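A hedged sketch of what that job resource might look like as an API surface (ASP.NET Core assumed; BulkJob and JobQueue are hypothetical placeholders for your own job model and queue):

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("bulk-jobs")]
public class BulkJobsController : ControllerBase
{
    // Submit a new bulk update (name, owner, blob of CSV/XML, ...).
    [HttpPost]
    public IActionResult Create([FromBody] BulkJob job)
    {
        var id = JobQueue.Enqueue(job); // parsed and executed in the background
        return AcceptedAtAction(nameof(Status), new { id }, new { id });
    }

    // Poll for progress.
    [HttpGet("{id}")]
    public IActionResult Status(string id) => Ok(JobQueue.Find(id));

    // Cancel the job, stopping it if it's running.
    [HttpDelete("{id}")]
    public IActionResult Cancel(string id)
    {
        JobQueue.Cancel(id);
        return NoContent();
    }
}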
Yes, PUT creates/overwrites, but it does not partially update.
If you need partial update semantics, use PATCH (since standardized as RFC 5789). See http://greenbytes.de/tech/webdav/draft-dusseault-http-patch-14.html.
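For instance, the change could be sent as a JSON Patch document (RFC 6902). A sketch with an invented endpoint:

using System;
using System.Net.Http;
using System.Text;

var client = new HttpClient();
var patch = new HttpRequestMessage(new HttpMethod("PATCH"),
    "https://api.example.com/collections/42")
{
    // Remove one item and append another in a single request.
    Content = new StringContent(
        "[{\"op\":\"remove\",\"path\":\"/items/500\"}," +
        "{\"op\":\"add\",\"path\":\"/items/-\",\"value\":{\"name\":\"new item\"}}]",
        Encoding.UTF8, "application/json-patch+json")
};
var response = await client.SendAsync(patch);
Console.WriteLine(response.StatusCode);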
You should use AtomPub. It is specifically designed for managing collections via HTTP. There might even be an implementation for your language of choice.
For the POSTs, at least, it seems like you should be able to POST to a list URL and have the body of the request contain a list of new resources instead of a single new resource.
As far as I understand it, REST means REpresentational State Transfer, so you should transfer state from client to server.
If that means too much data going back and forth, perhaps you need to change your representation. A collectionChange structure would work, with a series of deletions (by id) and additions (with embedded full XML representations), POSTed to a handling interface URL. The interface implementation can choose its own method for applying the deletions and additions server-side.
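One way that collectionChange structure could be shaped (a sketch only; all names are hypothetical):

using System.Collections.Generic;

public class CollectionChange
{
    // ids of items to remove from the collection
    public List<string> Deletions { get; set; } = new List<string>();

    // full representations of items to add
    public List<Item> Additions { get; set; } = new List<Item>();
}

public class Item
{
    public string Id { get; set; }
    public string Name { get; set; }
    // ... the rest of the item's state ...
}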
The purest version would probably be to define the items by URL and have the collection contain a series of URLs. The new collection can be PUT after changes by the client, followed by a series of PUTs for the items being added, and perhaps a series of deletions if you want to actually remove the items from the server rather than just remove them from that list.
You could introduce a meta-representation of existing collection elements that don't need their entire state transferred, so in some abstract code your update could look like this:
{existing elements 1-100}
{new element foo with values "bar", "baz"}
{existing element 105}
{new element foobar with values "bar", "foo"}
{existing elements 110-200}
Adding (and modifying) elements is done by defining their values, deleting elements is done by not mentioning them in the new collection, and reordering elements is done by specifying the new order (if order is stored at all).
This way you can easily represent the entire new collection without having to re-transmit the entire content. Using an If-Unmodified-Since header makes sure that your idea of the content indeed matches the server's (so that you don't accidentally remove elements that you simply didn't know about when the request was submitted).
The best way is:
1. Pass only an array of the IDs of the deletable objects from the front-end application to the Web API.
2. Then you have two options:
2.1 Web API way: find all the collections/entities using the ID array and delete them in the API, but you need to take care of dependent entities, like foreign-key related table data, too. (See the sketch below.)
2.2 Database way: pass the IDs to your database side, find all records in the foreign-key tables and primary-key tables, and delete them in the same order, i.e. F-key table records first, then P-key table records.
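A hedged sketch of option 2.1 (ASP.NET Core with EF Core assumed; AppDbContext and the entity names are placeholders). Child rows go first so the foreign-key constraints hold:

using System.Linq;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    private readonly AppDbContext _db; // placeholder EF Core context
    public OrdersController(AppDbContext db) => _db = db;

    // DELETE /orders with a JSON body like [1, 2, 3]
    [HttpDelete]
    public IActionResult BulkDelete([FromBody] int[] ids)
    {
        var lines  = _db.OrderLines.Where(l => ids.Contains(l.OrderId)); // F-key rows
        var orders = _db.Orders.Where(o => ids.Contains(o.Id));          // P-key rows

        _db.OrderLines.RemoveRange(lines);   // delete dependants first
        _db.Orders.RemoveRange(orders);      // then the parents
        _db.SaveChanges();

        return NoContent();
    }
}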