I have an app that can have one or more streams
Example:
Book of author A
Book of author B
Book of author C
So my queries can have one or more relationship filters.
Assuming that I would like to use only one template for multiple views, and a view can have multiple streams (so I can't hard-code the name of each one in my template), how can I do that?
Basically, in my template I would like to have a single de-duplicated list even if I get multiple streams
AsDynamic(Data["Default"]) //This should get all the streams in my data
Is that possible? Maybe by aggregating them in the Visual Query?
I tried to have one Out stream coming from many streams, but when I give them the same name I get an error.
At the moment this is not possible (2sxc 8.5.6). There are a few problems related to this idea:
the same item could occur multiple times, which is not supposed to happen in a stream
you would probably lose the "which author was this for" information
For now, I recommend just merging them in JS or in server-side code if this is what you need.
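For reference, such a merge is only a few lines of client-side code. This is a minimal sketch with made-up Book/Id fields (the real items would come from your streams); it combines the lists and drops duplicates by Id:

interface Book { Id: number; Title: string; }

function mergeStreams(...streams: Book[][]): Book[] {
  const seen = new Map<number, Book>();
  for (const stream of streams) {
    for (const item of stream) {
      // keep the first occurrence only, so one item can't appear twice
      if (!seen.has(item.Id)) seen.set(item.Id, item);
    }
  }
  return [...seen.values()];
}

// usage (authorA/authorB/authorC being the lists fetched from each stream):
// const all = mergeStreams(authorA, authorB, authorC);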
I have multiple flat files (CSV), each with multiple records, and the files will be received in random order. I have to combine their records using unique ID fields.
How can I combine them if there is no unique field common to all the files, and I don't know which one will be received first?
Here are some example files:
In reality there are 16 files, and the fields and records are far more numerous than in this example.
I would avoid trying to do this purely in XSLT/BizTalk orchestrations/C# code. These are fairly simple flat files. Load them into SQL, and create a view to join your data up.
You can still use BizTalk to pickup/load the files. You can also still use BizTalk to execute the view or procedure that joins the data up and sends your final message.
There are a few questions that might help guide how this would work here:
When do you want to join the data together? What triggers that (a time of day, a certain number of messages received, a certain type of message, a particular record, etc)? How will BizTalk know when it's received enough/the right data to join?
What does a canonical version of this data look like? Does all of the data from all of these files truly get correlated into one entity (e.g. a "Trade" or a "Transfer" etc.)?
I'd probably start by defining my canonical entity, and then work towards getting a "complete" picture of that canonical entity, using SQL for this kind of case.
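To make the "when has BizTalk received enough data?" question concrete, here is a rough sketch of tracking canonical-entity completeness. The source names, fields, and correlation key are invented for illustration; the real rules would come from the actual file specs and would more likely live in SQL staging tables than in application code:

type SourceName = "trades" | "accounts" | "settlements"; // stand-ins for the 16 real files

interface PartialRecord {
  key: string;                      // whatever correlates this record to the entity
  source: SourceName;
  fields: Record<string, string>;
}

const EXPECTED: SourceName[] = ["trades", "accounts", "settlements"];
const entities = new Map<string, { seen: Set<SourceName>; fields: Record<string, string> }>();

// Fold one record in; return the finished entity once every source has contributed.
function absorb(rec: PartialRecord): Record<string, string> | null {
  const e = entities.get(rec.key) ?? { seen: new Set<SourceName>(), fields: {} };
  Object.assign(e.fields, rec.fields);
  e.seen.add(rec.source);
  entities.set(rec.key, e);
  return EXPECTED.every(s => e.seen.has(s)) ? e.fields : null;
}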
I'm looking for best practices for handling the same type of data in a Redux store when we can have different queries for it.
Imagine a WordPress website: on different pages we have different queries for posts. On the homepage, for example, we fetch the 10 most recent posts and save them to the Redux store, but for a category page we have to fetch posts again into the posts store, and now we might have different posts which may or may not include the ones we had before.
And this applies to many different pages, like tag, taxonomy, author, date, etc.
So basically, creating a separate store for each case doesn't seem like a good solution, since it might end up with many duplicate values.
This is one of the standard reasons why the Flux concept was invented in the first place. Per Dan Abramov's article on The Case for Flux, caching queries is an excellent use case for a Flux-type architecture.
Going beyond that, the Redux principle of having a "single source of truth" applies here, as does the idea of normalizing data in the store.
Overall, you'd probably want to store your data in a normalized form, with multiple "tables" in your state. Add each set of results into the state to cache them, and have different parts of the UI read out the specific posts they're interested in as needed.
See the Redux FAQ entry on organizing nested or duplicate data for links to more information.
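As a rough sketch of that normalized shape (the names here are illustrative, not a fixed convention): one "table" keyed by post ID acts as the single source of truth, plus a list of IDs per query:

interface Post { id: number; title: string; }

interface PostsState {
  byId: Record<number, Post>;        // single source of truth for post data
  queries: Record<string, number[]>; // e.g. "recent" or "category:7" -> ordered post IDs
}

// Reducer helper: cache a query's results without duplicating post objects.
function receivePosts(state: PostsState, queryKey: string, posts: Post[]): PostsState {
  const byId = { ...state.byId };
  for (const p of posts) byId[p.id] = p; // upsert; overlapping posts collapse into one entry
  return {
    byId,
    queries: { ...state.queries, [queryKey]: posts.map(p => p.id) },
  };
}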
Some background:
My question is very similar to this clarification question about denormalization, but I want to change the situation a bit.
In the Considerations section of this blog post on denormalization, the Firebase people say the following about updating data.
Let’s discuss some consequences of a [denormalized data structure]. You will need to ensure that every time some data is created (in this case, a comment) it is put in the right places.
The example includes three paths, one to store the comment's data, and two paths under which to store pointers to that comment.
...
Modification of comments is easy: just set the value of the comment under /comments to the new content. For deletion, simply delete the comment from /comments — and whenever you come across a comment ID elsewhere in your code that doesn’t exist in /comments, you can assume it was deleted and proceed normally:
But this only works because, as the answer to the other question says,
The structure detailed in the blog post does not store duplicate comments. We store comments once under /comments then store the name of those comments under /links and /users. These function as pointers to the actual comment data.
Basically, the content is only stored in one location.
The question:
What if the situation were such that storing duplicate data is necessary? In that case, what is the recommended way to update data?
My attempt at an answer:
An answer to this question exists, but it is directed at MongoDB, and I'm not sure it quite addresses the issue in Firebase.
The most sensible way I could think of, just for reference, is as follows.
I have a helper class to which I give a catalog of paths in Firebase, which somewhat resembles a schema. This class has methods that wrap Firebase methods, so that I can perform writes and updates under all the paths specified by my schema. The helper class iterates over every path where there is a reference to the object, and at each location performs a write, update, or delete. In my case, no more than 4 paths exist for any individual operation like that, and most have 2.
Example:
Imagine I have three top-level keys, Users, Events, and Events-Metadata. Users post Images to Events, and both Events and Users have a nested record for all their respective Images. Events-Metadata is its own top-level key for the case where I want to display a bunch of events on a page, but I don't want to pull down potentially hundreds of Image records along with them.
Images can have captions, and thus, when updating an Image's caption, I should update these paths:
new Firebase("path/to/eventID/images/imageID/caption"),
and
new Firebase("path/to/userID/images/imageID/caption")
I give my helper class both of these paths and a wrapper method, so that any time a caption is updated, I can call helperclass.updateCaption(imageObj, newCaptionData), and it iteratively updates the data at each path.
Images are stored with attributes including eventID, userID, and imageID, so that the skeletons of those paths can be filled in correctly.
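For illustration, here is a stripped-down sketch of such a helper. The path templates and method names are mine, not an official pattern, and I'm assuming an SDK version that supports multi-location update(); on older SDKs you would loop and set() each path instead:

declare class Firebase {
  constructor(url: string);
  update(value: Record<string, unknown>): void;
}

interface ImageRecord { eventID: string; userID: string; imageID: string; }

class FanOutHelper {
  // every place a caption lives, expressed as a path template
  private captionPaths = [
    (img: ImageRecord) => `events/${img.eventID}/images/${img.imageID}/caption`,
    (img: ImageRecord) => `users/${img.userID}/images/${img.imageID}/caption`,
  ];

  constructor(private root: Firebase) {}

  updateCaption(img: ImageRecord, newCaption: string): void {
    // one multi-location update keeps all the copies consistent
    const update: Record<string, unknown> = {};
    for (const path of this.captionPaths) update[path(img)] = newCaption;
    this.root.update(update);
  }
}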
Is this a recommended and/or appropriate way to approach this issue? Am I doing this wrong?
This is a purely theoretical question (at least until I start trying to implement it) but here goes.
I wrote a web form a long time ago which has a configurable section for getting information. Basically for some customers there are no fields, for other customers there are up to 20 fields. I got it working by dynamically creating the fields at just the right time in the page lifecycle and going through a lot of headaches.
2 years later, I need to make some pretty big updates to this web form, and there are some nifty new technologies. I've worked with ASP.NET Dynamic Data just a bit and, well, a half-crazed plan just occurred to me:
The Ticket object has a one-to-many relationship to ExtendedField, we'll call that relationship Fields for brevity.
Using that, the idea would be to create a FieldTemplate that dynamically generated the list of fields and displayed it.
The big questions here would probably be:
1) Can a single field template resolve to multiple web controls without breaking things?
2) Can dynamic data handle updating/inserting multiple rows in such a fashion?
3) There was a third question I had a few minutes ago, but coworkers interrupted me and I forgot. So now the third question is: what is the third question?
So basically, does this sound like it could work or am I missing a better/more obvious solution?
Did you try creating a FieldTemplate that had a "ListView" of all the Fields? (the ListView would use Dynamic Data to determine which FieldTemplate to display for each field.)
I don't see why this would not be possible. Although, "out of the box", you may have to hit "edit" on each row of the new FieldTemplate's ListView to edit the values. It would be like replacing the "Order Details" link in the Orders List, with an inline List of the "Order Details".
1.) Not very nicely. Can you imagine showing a DateTime, integers, phone numbers, URLs, etc. with just ONE user control, like text.ascx? Why not have multiple field templates and use UIHint to specify usage per column?
2.) Yes.
3.) Define basically?
Regarding one-to-many relationships, you might have a look at the ListDetails.aspx Page Template in Dynamic Data. Hope this helps.
I'd like some advice on designing a REST API which will allow clients to add/remove large numbers of objects to a collection efficiently.
Via the API, clients need to be able to add items to the collection and remove items from it, as well as manipulating existing items. In many cases the client will want to make bulk updates to the collection, e.g. adding 1000 items and deleting 500 different items. It feels like the client should be able to do this in a single transaction with the server, rather than requiring 1000 separate POST requests and 500 DELETEs.
Does anyone have any info on the best practices or conventions for achieving this?
My current thinking is that one should be able to PUT an object representing the change to the collection URI, but this seems at odds with the HTTP 1.1 RFC, which suggests that the data sent in a PUT request should be interpreted independently of the data already present at the URI. This implies that the client would have to send a complete description of the new state of the collection in one go, which may well be very much larger than the change, or even be more than the client would know when they make the request.
Obviously, I'd be happy to deviate from the RFC if necessary but would prefer to do this in a conventional way if such a convention exists.
You might want to think of the change task as a resource in itself. So you're really PUT-ing a single object, which is a Bulk Data Update object. Maybe it's got a name, owner, and big blob of CSV, XML, etc. that needs to be parsed and executed. In the case of CSV you might want to also identify what type of objects are represented in the CSV data.
List jobs, add a job, view the status of a job, update a job (probably in order to start/stop it), delete a job (stopping it if it's running) etc. Those operations map easily onto a REST API design.
Once you have this in place, you can easily add different data types that your bulk data updater can handle, maybe even mixed together in the same task. There's no need to have this same API duplicated all over your app for each type of thing you want to import, in other words.
This also lends itself very easily to a background-task implementation. In that case you probably want to add fields to the individual task objects that allow the API client to specify how they want to be notified (a URL they want you to GET when it's done, or send them an e-mail, etc.).
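Sketching that out (all names here are illustrative), the job resource and its REST surface might look like:

interface BulkJob {
  id: string;
  owner: string;
  status: "pending" | "running" | "done" | "failed";
  contentType: "text/csv" | "application/xml"; // what kind of payload to parse
  payload: string;                             // the big blob of CSV/XML to execute
  notifyUrl?: string;                          // optional: GET this URL when the job finishes
}

// The lifecycle then maps onto ordinary REST calls:
//   POST   /jobs        create a job (the bulk change itself)
//   GET    /jobs        list jobs
//   GET    /jobs/{id}   view a job's status
//   PATCH  /jobs/{id}   update it, e.g. start/stop
//   DELETE /jobs/{id}   delete it (stopping it if it's running)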
Yes, PUT creates/overwrites, but does not partially update.
If you need partial update semantics, use PATCH. See http://greenbytes.de/tech/webdav/draft-dusseault-http-patch-14.html.
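For example, a partial update via PATCH might look like this (the endpoint and body format are illustrative; the PATCH draft deliberately leaves the patch format up to the media type):

async function patchCollection(): Promise<void> {
  const res = await fetch("https://api.example.com/collections/42", {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      add: [{ id: 1001, name: "new item" }], // items to create
      remove: [17, 23],                      // ids of items to delete
    }),
  });
  if (!res.ok) throw new Error(`PATCH failed: ${res.status}`);
}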
You should use AtomPub. It is specifically designed for managing collections via HTTP. There might even be an implementation for your language of choice.
For the POSTs, at least, it seems like you should be able to POST to a list URL and have the body of the request contain a list of new resources instead of a single new resource.
As far as I understand it, REST means REpresentational State Transfer, so you should transfer the state from client to server.
If that means too much data going back and forth, perhaps you need to change your representation. A collectionChange structure would work, with a series of deletions (by id) and additions (with embedded full xml Representations), POSTed to a handling interface URL. The interface implementation can choose its own method for deletions and additions server-side.
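One possible shape for that collectionChange structure (field names are illustrative):

interface CollectionChange<T> {
  deletions: string[]; // ids of the items to remove
  additions: T[];      // full representations of the items to add
}

// POSTed as a single request to the handling interface URL, e.g.:
// POST /collections/42/changes  with a CollectionChange<Item> body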
The purest version would probably be to define the items by URL and have the collection contain a series of URLs. The new collection can be PUT after changes by the client, followed by a series of PUTs of the items being added, and perhaps a series of deletions if you want to actually remove the items from the server rather than just remove them from that list.
You could introduce a meta-representation of existing collection elements that don't need their entire state transferred, so in some abstract code your update could look like this:
{existing elements 1-100}
{new element foo with values "bar", "baz"}
{existing element 105}
{new element foobar with values "bar", "foo"}
{existing elements 110-200}
Adding (and modifying) elements is done by defining their values, deleting elements is done by not mentioning them in the new collection, and reordering elements is done by specifying the new order (if order is stored at all).
This way you can easily represent the entire new collection without having to re-transmit the entire content. Using an If-Unmodified-Since header makes sure that your idea of the content indeed matches the server's idea (so that you don't accidentally remove elements that you simply didn't know about when the request was submitted).
The best way is:
1. Pass only an array of IDs for the objects to delete from the front-end application to the Web API.
2. Then you have two options:
2.1 Web API way: find all the entities using the ID array and delete them in the API, but you need to take care of dependent entities, like foreign-key related table data, too.
2.2 Database way: pass the IDs to your database side, find all records in the foreign-key tables and the primary-key tables, and delete them in that order, i.e. F-key table records first, then P-key table records.
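A sketch of option 2.1 (the entity names and the data-access layer are hypothetical; the point is only the delete order, children before parents):

interface Table<T> {
  deleteWhere(pred: (row: T) => boolean): Promise<void>;
}

interface Db {
  orderItems: Table<{ id: number; orderId: number }>; // F-key (child) table
  orders: Table<{ id: number }>;                      // P-key (parent) table
}

async function deleteByIds(db: Db, ids: number[]): Promise<void> {
  // 1. delete dependent (foreign-key) rows first...
  await db.orderItems.deleteWhere(item => ids.includes(item.orderId));
  // 2. ...then the parent (primary-key) rows themselves
  await db.orders.deleteWhere(order => ids.includes(order.id));
}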