Validating empty Guids in Api layer - asp.net

Let's say I have an API endpoint GET api/customers. My API request model contains a Guid.
In the business layer I throw an error if that Guid is empty.
Should I catch this error in the API layer, or let it propagate up from the business layer?
Should I check that Guid against empty in the API layer?
using annotations
a classic if check before calling the business logic
Is there any standard way to handle empty Guids in API design?

It is better to define the property as Nullable<Guid> (i.e. Guid?) in your API request model, so that a missing value shows up as null instead of Guid.Empty.
You can also add checking logic in your API controller before calling the business logic.
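A minimal sketch of that approach in ASP.NET Core (the type, route, and property names are made up for illustration):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// Guid? distinguishes "not provided" (null) from a provided-but-empty value.
public class GetCustomerRequest
{
    public Guid? CustomerId { get; set; }
}

[ApiController]
[Route("api/customers")]
public class CustomersController : ControllerBase
{
    [HttpGet]
    public IActionResult Get([FromQuery] GetCustomerRequest request)
    {
        // Fail fast in the API layer, before touching the business logic.
        if (request.CustomerId is null || request.CustomerId == Guid.Empty)
            return BadRequest("customerId is required and must not be empty.");

        // ... call the business layer with request.CustomerId.Value ...
        return Ok();
    }
}
```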

Generally speaking, a controller's action should be responsible for the following:
Receive and try to parse incoming data (header, route, form)
Perform preliminary checks
Call service / business layer
Translate the response from the below layer to a http response
In my opinion, checking against Guid.Empty is the same as checking against whitespace or null. If the provided data does not meet the requirements, then the best thing you can do is fail fast or use a fallback.
I have participated in several discussions about where this kind of validation belongs. As always, it depends. But my personal rule of thumb (which I have followed through many projects) looks like this:
Presentation layer should check against syntactic issues (check against default values as well)
Business layer should check against semantic issues (cross property / cross entity checks as well)
Persistence layer should check against dataset consistency
My point is that at the end of the day you want the following:
Responsibilities of different layers are understood by everyone
Each validation logic can be easily determined where it should belong
Validation failures can easily be traced back
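The syntactic/semantic split above might look like this in an ASP.NET project (all type and member names here are hypothetical):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Presentation layer: syntactic checks, wired into model binding.
public class CreateOrderRequest
{
    [Required]                    // rejects a missing (null) value; note that
    public Guid? CustomerId { get; set; }  // Guid.Empty still needs an explicit check

    [Range(1, int.MaxValue)]      // rejects non-positive quantities
    public int Quantity { get; set; }
}

// Business layer: semantic, cross-entity checks.
public interface ICustomerRepository { bool Exists(Guid id); }

public class OrderService
{
    private readonly ICustomerRepository _customers;

    public OrderService(ICustomerRepository customers) => _customers = customers;

    public void Create(Guid customerId, int quantity)
    {
        // The presentation layer guarantees the value is present and well-formed;
        // here we check what it means.
        if (!_customers.Exists(customerId))
            throw new InvalidOperationException("Unknown customer.");
        // ... create the order ...
    }
}
```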

How can we access the NetworkMapCache in the Contract-States library of a CorDapp?

I am trying to implement a Validator class in the Contract-States library of a CorDapp, which has several validation methods that are inherited by model classes and called in their init() fun, so that each time a model class is initialized the validation happens on the spot.
Now I'm stuck at a point where I need to validate whether the incoming member name (through a model class) matches the organisation name of the node, and I need to access the NetworkMap for that. How can I do that?
In the Work-Flow library, each flow extends the FlowLogic class, which exposes the ServiceHub, and through that we can access the NetworkMap; but how can I do that in the Contract-States library?
P.S. - I'm trying to avoid any circular dependency (Contract-States lib should not depend on Work-Flow lib)
The short answer: you can't.
The long answer:
The difference between flow validation and contract validation is that the latter (contracts) must be deterministic, meaning that for the same input a contract must always give the same output, whether it runs now or in 100 years, on the current node or on any other node.
The reason is that whenever a node receives a transaction (even far in the future), it must validate that transaction. That includes validating its inputs, which in turn requires validating the transactions that created those inputs, and so on, until you get a fully validated graph of all the transactions that led to the inputs that were consumed along the way.
That's why the contract should return the same result any time, and that's why it should be deterministic, and that's why contracts (unlike flows) don't have access to external resources (like HTTP calls, or even the node's database).
Imagine if the contract relied on the node's database for some validation rule. As you know, states are distributed on a need-to-know basis (i.e. only to participants of the state), so one node might have the state that you're using as a validation source while another node won't. The contract's output (transaction valid/invalid) would then differ between nodes, and that breaks the deterministic concept.
Contracts only have access to the transaction components: inputs, outputs, attachments, signatures, time-windows, reference states.
Good news: there are other ways to implement your requirement:
Using an attachment that contains the list of nodes that are allowed to be part of the transaction. This method should be used if the blacklist is not updated frequently; you can see an example here.
Using reference states, where you create a state that holds the allowed parties and require the existence of that reference state in your transaction. This method should be used when the blacklist is updated more frequently. You can read about reference states here.
Using Oracles. This option applies when there is a world organization (or, for instance, the Ministry of Trade of some country) that provides an Oracle returning the list of blacklisted parties, and you use that Oracle in your transaction. You can read about Oracles here.

Filter api model properties by user permission level in ASP.NET Core

We want to filter / hide / clear specific API model properties based on the user's permission level.
The model itself will not differ; those properties simply won't be returned.
I found a lot of different ideas online:
switch(userrole) and call different logic methods
pass user role to logic
reflection to clear properties in the response (I hate this idea)
middleware to redirect the user to different actions
What is the recommended way to filter api model properties for specific roles?
It's hard to give any real specifics with what you've provided, but generally, I'd say this should go into your mapping logic. Utilize view models/DTOs to accept data from client requests. Then map that data from the view model/DTO to your entity instance. During this process, you can make decisions based on the user's role/permission set about what properties should or should not be mapped over. All of this logic can be factored out into a separate class or library of classes. Ultimately it doesn't really matter what the client sends. You can't control that anyway. You just need to ensure that they ultimately can't set any properties they don't have "access" to, and mapping logic is a great place for that.
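A sketch of such role-aware mapping logic (the entity, DTO, role, and property names are all illustrative):

```csharp
using System.Security.Claims;

public class Customer
{
    public string Name { get; set; }
    public decimal CreditLimit { get; set; }
}

public class CustomerDto
{
    public string Name { get; set; }
    public decimal? CreditLimit { get; set; }   // nullable: absent unless permitted
}

public static class CustomerMapper
{
    // Decide per property, based on the caller's roles, what gets mapped.
    public static CustomerDto ToDto(Customer entity, ClaimsPrincipal user)
    {
        var dto = new CustomerDto { Name = entity.Name };

        if (user.IsInRole("Admin"))
            dto.CreditLimit = entity.CreditLimit;   // only admins see this

        return dto;   // configure the serializer to omit null properties if desired
    }
}
```

The same pattern works in the other direction: when mapping an incoming DTO onto an entity, skip any properties the caller's role isn't allowed to set.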

how to pass sensitive data from view to controller

In order to construct an entity with quite a lot of information, I need to perform a sequence of form submissions. Every time I return a view from a controller, I need to pass some ids about the not-yet-established entity. Right now I inject these pieces of info into hidden fields, and when they are posted back to the server, I continue constructing the entity.
This scenario continues for a few times.
I'm not satisfied with this way of passing sensitive information, and was wondering if there are other, more appropriate ways of doing it. I use authorization and authentication, but I'm still worried about scenarios in which a user could tamper with these ids before sending them back to the server, thereby modifying the wrong entity.
Also, it seems like a lot of work to pass the same data back and forth. I ruled out sessions, because they introduce a different kind of data-corruption threat (when using more than one browser at a time).
How should I perform the mentioned operation?
You can use a secure hash of the data, stored in another hidden field, to detect tampering with the values.
Here is an example of how to generate a cryptographically secure hash: http://www.bytemycode.com/snippets/snippet/379/
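In the same spirit, a keyed hash (HMAC) is harder to forge than a plain hash, because the client would need the server-side key to recompute it after changing the ids. A minimal sketch, assuming the id values are concatenated into one string before signing (all names here are illustrative):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class HiddenFieldProtector
{
    // Secret key known only to the server (load it from configuration in practice).
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("server-side-secret");

    // Render this signature alongside the hidden fields in the view.
    public static string Sign(string hiddenFieldValues)
    {
        using (var hmac = new HMACSHA256(Key))
        {
            return Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(hiddenFieldValues)));
        }
    }

    // On post-back, recompute and compare; a mismatch means the ids were tampered with.
    // (A constant-time comparison would be preferable in production.)
    public static bool Verify(string hiddenFieldValues, string signature)
    {
        return Sign(hiddenFieldValues) == signature;
    }
}
```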
You can secure your data using many approaches; I discussed some of them in these posts:
http://miroprocessordev.blogspot.com/2012/03/save-aspnet-mvc-application-against.html
http://miroprocessordev.blogspot.com/2012/03/safe-aspnet-mvc-application-against.html
Use an anti-forgery (CSRF) token, so that every time you post information it contains the same token that was generated on the server side and returned in your HTML.

DataSet Validation vs. ASP.NET Forms Validation

A general question on where to put validation.
I have an ASP.NET form that gets/sets data from/to a DataSet.
Currently, the fields in the form are validated by the form itself (e.g. for invalid length, range, etc.).
Is it a good (or better) idea to move these validation checks into the DataSet?
The downside is that I need to trigger update calls on the DataSet in order to get the columns with errors.
With forms validation, I can catch the error earlier.
The main reason I'd prefer to do this is that I'd be using this DataSet assembly in another project (a WCF service?), and I'd like to re-use the same validation code where possible.
If you found anything similar to what I prefer to do, please give a link. Thanks!
Validation needs to happen at the page level (e.g. using JavaScript) and also at the database level. Put it in your database APIs (e.g. stored procedures). Don't rely solely on front-end validation, and don't commit any data without validation.
You can perform additional checks at Business layer level if need be.
Use both. DataSet validation is more reliable, but ASP.NET forms validation is faster for the user, who doesn't have to wait for a server response with the validation results. Form validation is also easy to cheat: one could craft the request manually and send it to the server, bypassing any form validation.
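On the DataSet side, one place to hang such checks is the DataTable.ColumnChanging event, which fires before a proposed value is committed to a row (the table and column names here are illustrative):

```csharp
using System;
using System.Data;

var customers = new DataTable("Customers");
customers.Columns.Add("Name", typeof(string));

// Inspect each proposed value before it is committed to the row.
customers.ColumnChanging += (sender, e) =>
{
    if (e.Column.ColumnName == "Name" &&
        string.IsNullOrWhiteSpace(e.ProposedValue as string))
    {
        e.Row.SetColumnError(e.Column, "Name must not be empty.");
    }
};

var row = customers.NewRow();
row["Name"] = "   ";        // fails the check above
customers.Rows.Add(row);
// The error can then be inspected via row.HasErrors / row.GetColumnError("Name"),
// from this project or from any other project that references the DataSet assembly.
```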

What's the justification behind disallowing partial PUT?

Why does an HTTP PUT request have to contain a representation of a 'whole' state and can't just be a partial?
I understand that this is the existing definition of PUT - this question is about the reason(s) why it would be defined that way.
i.e:
What is gained by preventing partial PUTs?
Why was preventing idempotent partial updates considered an acceptable loss?
PUT means what the HTTP spec defines it to mean. Clients and servers cannot change that meaning. If clients or servers use PUT in a way that contradicts its definition, at least the following might happen:
PUT is by definition idempotent. That means a client (or an intermediary!) can repeat a PUT any number of times and be sure that the effect will be the same. Suppose an intermediary receives a PUT request from a client, and when it forwards the request to the server there is a network problem. The intermediary knows, by definition, that it can retry the PUT until it succeeds. If the server used PUT in a non-idempotent way, these potential multiple calls would have undesired effects.
If you want to do a partial update, use PATCH or use POST on a sub-resource and return 303 See Other to the 'main' resource, e.g.
POST /account/445/owner/address
Content-Type: application/x-www-form-urlencoded
street=MyWay&zip=22222&city=Manchester
303 See Other
Location: /account/445
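For comparison, the partial update via PATCH could look like this, assuming the server accepts JSON Merge Patch (RFC 7386) as the patch format:

```
PATCH /account/445/owner HTTP/1.1
Content-Type: application/merge-patch+json

{ "address": { "city": "Manchester" } }
```

Only the fields present in the patch document are changed; everything else on the resource is left as-is.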
EDIT: On the general question of why partial updates cannot be idempotent:
A partial update cannot be idempotent in general, because idempotency depends on the media type semantics. In other words, you might be able to specify a format that allows for idempotent patches, but PATCH cannot be guaranteed to be idempotent in every case. Since the semantics of a method cannot be a function of the media type (for orthogonality reasons), PATCH needs to be defined as non-idempotent, and PUT (being defined as idempotent) cannot be used for partial updates.
Because, I guess, this would have translated into inconsistent "views" when multiple concurrent clients access the state. There isn't a "partial document" semantics in REST as far as I can tell, and probably the benefits of adding one, weighed against the complexity of dealing with that semantics in the context of concurrency, weren't worth the effort.
If the document is big, there is nothing preventing you from building multiple independent documents and having an overarching document that ties them together. Furthermore, once all the bits and pieces are collected, a new document can be collated on the server, I guess.
So, considering one can work around this limitation, I can understand why this feature didn't make the cut.
Short answer: ACIDity of the PUT operation and the state of the updated entity.
Long answer:
RFC 2616, Section 9.5: the "POST method requests the enclosed entity to be accepted as a new subordinate of the requested URL". Section 9.6: the "PUT method requests the enclosed entity to be stored at the specified URL".
Since the semantic of every POST is to create a new entity instance on the server, POST constitutes an ACID operation. But repeating the same POST twice with the same entity in the body might still result in a different outcome (if, for example, the server has run out of storage for the new instance that needs to be created); thus, POST is not idempotent.
PUT, on the other hand, has the semantic of updating an existing entity. There's no guarantee that a partial update, even if idempotent, is also ACID and results in a consistent and valid entity state. Thus, to ensure ACIDity, the PUT semantic requires the full entity to be sent. Even if it was not a goal for the HTTP protocol authors, the idempotency of the PUT request falls out as a side effect of the attempt to enforce ACIDity.
Of course, if the HTTP server has intimate knowledge of the semantics of the entities, it can allow partial PUTs, since it can ensure the consistency of the entity through server-side logic. This, however, requires tight coupling between the data and the server.
With a full document update, it's obvious, without knowing any details of the particular API or its limitations on document structure, what the resulting document will be after the update.
If a certain method were known to never be a partial content update, and an API only supported that method, then it would always be clear what someone using the API has to do to change a document to a given set of valid contents.
