I am very new to Giraffe.
Assuming we have a big app with lots of modules and pages (i.e. tens of web pages and hundreds or thousands of web API actions), what is the best way to specify the routing without creating a mess?
As an example, we have these business modules (let's say we can map them to subfolders with the same names):
HR
    Employees
        Display Page
        CRUD actions:
            Add Employee
            Update
            Remove
        Get reference data actions
    Postings
    ...
Payroll
    ...
Admin
    ...
The routing examples here are minimalistic: https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#routing. Real applications usually have big routing tables. I assume we will have to have a subroute for each module.
Thanks
I'm not aware of any "best practices" discussed in the community (if I were you I'd also go to the F# Slack and start a discussion about this topic in the #web channel), but I usually stick to composing the application routes from module-specific routers.
At the top level I tend to have very general routes (error routes, OIDC logout, etc.) as well as the top module routers:
let webApp =
    choose [
        route "/error"  >=> handleError
        route "/logout" >=> logout
        moduleARoutes
        moduleBRoutes
    ]
A module route could look like this:
let moduleARoutes : HttpHandler =
    subRoute "/api/moduleA" (
        authorize >=> choose [
            GET >=> choose [
                routef "/%O" handleGet
                routef "/%O/things" handleGetThings
            ]
            POST >=> choose [
                routef "/%O" handleCreate
                routef "/%O/things" handleThingCreation
            ]
            subModuleA1Routes
        ])
Submodules and other modules on the same level as moduleARoutes are done exactly the same way.
The only thing you have to be very careful about is where you compose web parts that handle special processing like authorization and authentication. I personally like to handle it at the level of the top modules, but this is very much a matter of taste and use case.
All in all you have a lot of freedom and choice with Giraffe route definitions - everything is fully composable. One caveat could be performance though - I'm not entirely sure how your routing design (especially in an enormous app like you describe) impacts endpoint resolution. I'd experiment, measure and adjust accordingly.
Related
I have a REST API that will be facilitating CRUD operations across multiple databases. These databases all represent the same data for different locations within the organization (i.e. we have 20 or so implementations of a software package and we want to read from all of the supporting databases via one API).
I was wondering what the "best practice" would be for specifying which database to access resources from?
For example, right now in my request headers I have a custom "X-" header that would represent the database id. Unfortunately, this sort of thing feels a bit like a workaround.
I was thinking of a few other options:
I could bake the Database Id into the URI (/:db_id/resource/...)
I could modify the Accept Header like someone would with an API version
I could split up the API to be one service per database
Would one of the aforementioned options be considered "better" than the others, and if not what is considered the "best" option for this sort of architecture?
I am, at the moment, using ASP.NET Web API 2.
These databases all represent the same data for different locations within the organization
I think this is the key to your answer - you don't want to expose internal implementation details (like database IDs etc.) outside your API. What if you consolidate, or change your internal implementation one day?
However, this sentence reveals a distinction that is meaningful to the business - the location.
So - I'd make the location part of the URI:
/api/location/{locationId}/resource...
Then map the locationId internally to a database ID. The locationId could also be a name, a code, or anything unique that would be meaningful to the API client.
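For illustration, in Web API 2 with attribute routing that mapping might look something like this (the controller name, route shape and connection strings are all made up):

using System.Collections.Generic;
using System.Web.Http;

[RoutePrefix("api/location/{locationId}")]
public class ResourceController : ApiController
{
    // Internal lookup from a business-meaningful location to a physical
    // database; in practice this would live in configuration or a directory table.
    private static readonly Dictionary<string, string> LocationToConnectionString =
        new Dictionary<string, string>
        {
            { "chicago", "Server=db01;Database=OrgChicago;Integrated Security=true" },
            { "berlin",  "Server=db07;Database=OrgBerlin;Integrated Security=true" }
        };

    [Route("resource/{id:int}")]
    public IHttpActionResult GetResource(string locationId, int id)
    {
        string connectionString;
        if (!LocationToConnectionString.TryGetValue(locationId, out connectionString))
            return NotFound(); // unknown location - no database ID ever leaks out

        // ... query the mapped database using connectionString ...
        return Ok(new { location = locationId, id });
    }
}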
Then - if you later consolidate multiple locations to the same database or otherwise change your internal implementation, the clients don't have to change.
In addition, whoever is configuring the client applications can do so thinking about something meaningful to the business - the location they are interested in.
I am looking to develop a Spring MVC integration with HATEOAS. I've searched the web and didn't find any working example through which I can understand the HATEOAS concept.
I only found this resource, which itself has lots of code and is really difficult to understand. Is a complete working sample available?
You may have a look at this sample Spring/Boot HATEOAS project: https://github.com/opencredo/spring-hateoas-sample and some explanation in the related blog post: Implementing HAL hypermedia REST API using Spring HATEOAS
The sample shows a simple but not-so-trivial API.
The API represents a fictional library with a catalogue of books, related to authors and publishers.
All resources include examples of links.
The Book GET also shows how to return different levels of detail, either embedding or linking related resources.
Beyond GET examples for all resources, it also includes other "command" endpoints, such as adding a book to the collection, and borrowing and returning books.
There's a pretty basic example in this repository. A more advanced showcase can be found in Spring RESTBucks.
Here's a sample EchoService described step by step with explanations in the code. It uses Spring Boot HATEOAS and shows a sample Spock test with TestRestTemplate.
HATEOAS means (at least in my mind :-) ) that you treat an HTTP resource as a state machine, which means that it can change depending on its (system-)internal state.
The most common example is a bank account as a resource. Accessing the resource (account) returns various information about it and links to the operations that can be performed on it. And those operations (hence the available links) depend on the account's state. If a user has money then the links could be { "deposit": "deposit-url", "withdraw": "withdraw-url" }. When a user has no money in the account, then the returned links (available actions) could be { "deposit": "deposit-url" }. So the list of available operations/actions/links varies and depends on the resource's state.
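Language aside, a minimal sketch of that idea (the types and URLs here are made up, not any particular framework's API) could look like this:

using System.Collections.Generic;

public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }
}

public class AccountResource
{
    public decimal Balance { get; set; }
    public Dictionary<string, string> Links { get; set; }
}

public static class AccountRepresentation
{
    // The links returned with the resource depend on its current state
    public static AccountResource ToResource(Account account)
    {
        var links = new Dictionary<string, string>
        {
            { "deposit", "/accounts/" + account.Id + "/deposit" }
        };

        // Withdrawing is only offered while the account has funds
        if (account.Balance > 0)
            links.Add("withdraw", "/accounts/" + account.Id + "/withdraw");

        return new AccountResource { Balance = account.Balance, Links = links };
    }
}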
Another common example is having different menu items depending on the user's role/permissions. In apps that generate the whole page on the server side, you can generate links to different actions in the page template with simple checks: if (isAdmin(currentUser)) { {generate secret link} } else { ... }. But when using REST services most clients are JavaScript apps, where you can't do such permission checks. Here HATEOAS helps by returning the menu actions (links) appropriate to the user's role/permissions from the server side, so the REST client doesn't have to worry about it.
I am creating an MVC 3 application (although this is just as applicable to other technologies, e.g. ASP.NET Forms) and was just wondering whether it is feasible (performance-wise) to serve images from code rather than using the direct virtual path (as usual).
The idea is that I improve the common method of serving files to:
Apply security checks
Standardised method of serving files based on route values
Returning modified images (if requested), e.g. different dimensions (OK, this would only be used sparingly, so don't relate this to the performance question above)
Perform business logic before allowing access to the resource
I know HOW to do it but I don't know IF I should do it.
What are the performance issues (if any)?
Does something weird happen, e.g. images only loading sequentially? (Maybe that's how HTML does it currently, I am not sure - exposing my ignorance here.)
Anything else you can think of.
Hope this all makes sense!
Thanks,
Dan.
UPDATE
OK - let's get specific:
What are the performance implications of using this type of method for serving all images in MVC 3 using a memory stream? Note: the image URL would be GenericFetchImage/image1 (and, just for simplicity, all my images are JPEGs).
public FileStreamResult GenericFetchImage(string RouteValueRefToImage)
{
    // Load the image from its file location into a memory stream
    MemoryStream ms = GetImageAndPutIntoMemoryStream(RouteValueRefToImage);

    // Return the stream contents as a file
    return new FileStreamResult(ms, "image/jpeg");
}
I know that this method works, because I am using it to dynamically generate an image based on a session value for a captcha image. It's pretty neat - but I would like to use this method for all image retrieval.
I guess I am wondering whether the above example is OK to do, or whether it requires more processing, and if so, how much? For example, if the number of visitors were to multiply by 1000, would the server then be heavily burdened by the delivery of images?
THANKS!
A similar question was asked before (Can an ASP.Net MVC controller return an Image?) and it appears that the performance implications are very small to serving images out of actions vs directly. As the accepted answer noted, the difference appears to be on the order of a millisecond (in that test case, about 13%). You could re-run the test locally and see what the difference is on your hardware.
The best answer to your question of if you should be using it is from this answer to (another) similar question (emphasis mine):
DO worry about the following: you will need to re-implement a caching strategy on the server, since IIS manages that for static files requested directly. You will also need to make sure you manage your client-side caching with the correct headers included in the response. Ultimately, just ask yourself if re-inventing a method of serving static files from a server is something that serves your application's needs.
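For instance, re-adding client-side caching in an MVC image action might look something like this (a minimal sketch - the controller name, path layout and cache duration are all illustrative):

using System;
using System.Web;
using System.Web.Mvc;

public class ImagesController : Controller
{
    public ActionResult Fetch(string name)
    {
        // Assumed layout: images live under App_Data and are always JPEGs
        string path = Server.MapPath("~/App_Data/Images/" + name + ".jpg");

        // Re-implement the client-side caching IIS would give static files:
        // let browsers and proxies cache the response for a week
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromDays(7));
        Response.Cache.SetLastModified(System.IO.File.GetLastWriteTimeUtc(path));

        return File(path, "image/jpeg");
    }
}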
To address the specific cases you provided with the question:
Apply security checks
You can already do this using the IIS 7 integrated pipeline. The relevant bit from the documentation:
Allowing services provided by both native and managed modules to apply to all requests, regardless of handler. For example, managed Forms Authentication can be used for all content, including ASP pages, CGIs, and static files.
Standardised method of serving files based on route values
If I'm reading the documentation correctly, you can insert a module early enough in the pipeline to re-write incoming URLs to point directly at static resources and let IIS handle the request from there. (For the sake of completeness there is also this related question regarding mapping routes to images: How do I route images using ASP.Net MVC routing?)
Empowering ASP.NET components to provide functionality that was previously unavailable to them due to their placement in the server pipeline. For example, a managed module providing request rewriting functionality can rewrite the request prior to any server processing, including authentication.
There are also some pretty powerful URL rewrite features that come with IIS more or less out of the box.
Returning modified images (if requested), e.g. different dimensions (OK, this would only be used sparingly, so don't relate this to the performance question above)
It looks like a module that does this is already available for IIS. Not sure if that would fall under serving images from code or not though, I guess it might.
Perform business logic before allowing access to the resource
If you're performing business logic to generate said resources (like a chart) or as you mentioned a captcha image then yeah, you basically have no choice but to do it this way.
So I have an application I am working on at work that we have a few hundred clients running on. We are working on a brand spanking new ASP.NET MVC 3 app for it, and I am working on the routes for this app.
I posted recently about a solution I came up with for dynamic routes, and it works fine on the few entries I have in a SQL Express DB. Essentially it creates routes for every entry that I have in this DB.
So, my question is: if I were to implement this on an enterprise application, would the creation of several hundred if not thousands of routes in my application have any negative consequences?
Concerning the dynamic route table, there is a recommendation which you seem to follow already:
Use named routes. Named routes are an optional feature of routing. The names only apply to URL generation - they are never used for matching incoming URLs. When you specify a name when generating a URL we will only try to match that one route. This means that even if the named route you specified is the 100th route in the route table we'll jump straight to it and try to match.
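For example (route name and values invented for illustration), registering a named route and then generating a URL by name in ASP.NET MVC:

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // A named route: the name matters only for URL generation,
        // never for matching incoming requests
        routes.MapRoute(
            name: "ProductDetails",
            url: "products/{id}",
            defaults: new { controller = "Products", action = "Details" }
        );
    }
}

// In a view or controller, generating by name tries only this one route,
// no matter where it sits in the route table:
//     Url.RouteUrl("ProductDetails", new { id = 42 })   // -> "/products/42"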
Besides the number of customers / routes, you should also consider the estimated number of requests per day (which, IMHO, you should be more worried about), and take into account the scalability of your web server (worker threads, hardware, ...) accordingly.
If your clients use their own domains with your application, use a custom IRouteConstraint in your routes to check the request domain and match only the relevant routes. This also protects the routing from collisions.
The best way to speed up both tasks - routing requests and building links - is to use cached routing. You can inherit from and extend the default MVC Route class:
To speed up link building: override GetVirtualPath to calculate a hash from the RouteData values and use it to put and get URL values to and from a cache.
To speed up routing: override GetRouteData to likewise use cached RouteData keyed by a URL hash.
This solution may require more memory, but in most cases you have a limited set of URLs on your pages.
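A rough sketch of that idea (the class and cache shapes are hypothetical; caching RouteData across requests is only safe if nothing mutates it afterwards):

using System.Collections.Concurrent;
using System.Linq;
using System.Web;
using System.Web.Routing;

// Hypothetical sketch of cached routing: memoize the results of GetRouteData
// and GetVirtualPath. Real code must ensure cached values are never mutated
// and account for ambient request values during link generation.
public class CachedRoute : Route
{
    private static readonly ConcurrentDictionary<string, RouteData> RouteCache =
        new ConcurrentDictionary<string, RouteData>();
    private static readonly ConcurrentDictionary<string, VirtualPathData> PathCache =
        new ConcurrentDictionary<string, VirtualPathData>();

    public CachedRoute(string url, IRouteHandler routeHandler)
        : base(url, routeHandler) { }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        // Key the cache on the incoming app-relative URL
        string key = httpContext.Request.AppRelativeCurrentExecutionFilePath;
        return RouteCache.GetOrAdd(key, _ => base.GetRouteData(httpContext));
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        // Key the cache on a hash of the requested route values
        string key = string.Join("|", values.Select(kv => kv.Key + "=" + kv.Value));
        return PathCache.GetOrAdd(key, _ => base.GetVirtualPath(requestContext, values));
    }
}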
Your question is slightly unclear. By "dynamic route" do you mean you go to the DB tier on a request to resolve the route, or do you query your DB to create the source data for the route table?
In the first case performance should be constant. (The overhead of checking the DB will not change with the number of users you have.) So you should see any performance effects right away.
In the second case I expect the routing code will be slower if it has that many items to check - but it is easy to test.
There is definitely a performance hit once you get over a certain threshold of routes. I don't have any hard benchmarks on this, but I have redesigned a few poorly performing sites now.
The more you can use the same route for many different URLs via parameters, the better.
Just from observation it seems when you start to get close to 1k routes is when it starts really bottoming out.
I'm considering an SOA architecture for a set of services to support a business that I'm consulting for. Previously we used database integration, where each application picked out what it needed from a shared MS SQL database and worked with it, etc. We had various apps integrating with the monster database, including Java, .NET and Microsoft Access; there was referential integrity, as everything was tightly coupled.
I'm a bit confused about how to support data sharing between services.
Let's take the Product Service, which sits on top of the Product database provided by the wholesaler each month. We build a domain model and sit this on top of the database with Hibernate or whatever; implementation-wise, Product is a large object graph, given the information provided by the wholesaler about the product.
Now let's say the Review Service, Pricing Service, Shipping Service, and Stock Service will subscribe to ProductUpdated, ProductAdded, ProductDeleted. The problem is that each service only needs some parts of the information about the Product. Shipping might only need the dimensions and weight. Pricing might only need the product ID, wholesale cost, volume discount, and price-effective-to date. Review might need the product ID, product name, and producer.
Is it standard practice just to publish the whole Product (suitable non-subscriber-specific contracts, e.g. ProductUpdated, with a suitable schema representing the whole product object graph) and let the subscribers map whatever they need to their domain models (or heck, do what they want with it; they might not even have a domain model)...
Or as I write this I'm thinking maybe:
Product Service publishes a ProductAdded message (which does not include product details, just the ID of the product and maybe a timestamp)
Pricing Service subscribes to ProductAdded and publishes a RequestPricingForProduct message
Product Service publishes a ResultForPricingForProduct message
Hmm... seems a little better... but it feels like I'm building the contract for the Product Service based on which other services I can identify and what they are going to need; perhaps in the future an XYZ Service will require something different. I'm going to stop there, as I think it's getting clearer where I'm confused... perhaps the above will work, because I should expose a way to return whatever should be public... hmm, right.
Any comments or direction greatly appreciated. Sorry if this appears half baked.
I actually think the solution to this problem is to NOT share the data. SOA means that data is owned by a service - it is the technical authority of that data. I suggest reading a few Pat Helland articles, such as Data On The Inside, Data On The Outside.
The only thing that should be shared between these different services is the primary key - the ProductId in your example. Otherwise, for each service, the data that needs to be transactionally consistent goes together.
There does not need to be one "Product". Each service can have a different view of the product in their service. For the Pricing service, you have a productId and a price. For the review service, a productId and a review. And so on.
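For example (type names invented for illustration), those per-service views might be as small as:

using System;

// Each service owns only the slice of "product" it is the authority for;
// the ProductId is the only thing they share.
public class PricingProduct            // lives inside the Pricing service
{
    public Guid ProductId { get; set; }
    public decimal WholesaleCost { get; set; }
    public decimal Price { get; set; }
}

public class ReviewedProduct           // lives inside the Review service
{
    public Guid ProductId { get; set; }
    public string ProductName { get; set; }
    public string ReviewText { get; set; }
}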
Where this starts to confuse people is how to display this data in the UI if it's from all these disparate services. How can you show a list of reviews for a product that has the product name from the ProductService and the review text from the ReviewService?
The answer to that is to compose the UI from all the different services. Get the product from the product service and get the review data from the review service and then combine that data in the UI.
I was in your position recently. The problem with directly exposing the underlying object through the service is that you increase coupling between layers, and there becomes little point in using a Service Oriented Architecture at all. You would not be able to change these objects or business rules without affecting the web service too.
It sounds like you are on the right track. If you are serious about separating your layers, then the most common pattern is to create a new, separate set of message classes just for the web service (potentially for each service) and translate your internal objects back and forth.
For an example of how to set up your service layer in this manner see the "Service Interface" pattern. On the client side of the service, there is an opposite pattern called "Service Gateway".
The Application Architecture Guide 2.0 has a whole chapter dedicated to the types of the decisions you are making (http://apparchguide.codeplex.com/Wiki/View.aspx?title=Chapter%2013%20-%20Service%20Layer%20Guidelines). I would download the whole guide.
Here is the portion most relevant to you. Long story short, if you take the time to create new coarse-grained methods and message-based objects, you'll end up with a much better web service (a sketch of what this might look like follows the guidelines):
Consider the following guidelines when designing a service interface:
Consider using a coarse-grained interface to batch requests and minimize the number of calls over the network.
Design service interfaces in such a way that changes to the business logic do not affect the interface.
Do not implement business rules in a service interface.
Consider using standard formats for parameters to provide maximum compatibility with different types of clients.
Do not make assumptions in your interface design about the way that clients will use the service.
Do not use object inheritance to implement versioning for the service interface.
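As a rough sketch of the first two guidelines (all names here are invented), a coarse-grained, message-based contract might look like:

using System;
using System.Collections.Generic;

// Request/response messages decouple the contract from internal objects and
// let one call batch many items instead of requiring one call per product.
public class ProductQueryRequest
{
    public List<Guid> ProductIds { get; set; }
    public bool IncludePricing { get; set; }
}

public class ProductSummary
{
    public Guid ProductId { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }   // only filled in when IncludePricing is true
}

public class ProductQueryResponse
{
    public List<ProductSummary> Products { get; set; }
}

public interface IProductService
{
    // One coarse-grained call instead of many fine-grained ones
    ProductQueryResponse QueryProducts(ProductQueryRequest request);
}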