TL;DR
Is there a tool that can record all the network activity as I visit a website and create a mock server that responds to those requests with the same responses?
I'm investigating ways of mocking the complex backend for our React application. We're currently developing against the real backend (plus test/staging environments). I've looked around a bit and it looks like there are a number of tools for mocking individual endpoints/features and sending the rest through to the real API (Mirage is leading the pack at the moment).
However, the Platonic ideal would be to mock the entire server so that a front end dev can work without an internet connection (again: Platonic ideal). It's a crazy lofty goal, I know. And of course it would require mocking not only our backend but also requests to any 3rd-party data sources. And of course the data would be thin and dumb and stale. But this is just for ultra-speedy front end development; it's just mocking. The data doesn't need to be rich, and it'll be up to us to make it as useful/realistic as we need it to be.
Probably the quickest way would be to recreate the responses the backend is already sending, and then modify them as needed for new features or features under test, etc.
To do this, we might go into Chrome DevTools and recreate everything on the Network tab: mock every request that was made by hardcoding the response that was returned. From there, we could do smarter things, like use URL pattern matching to return a simple placeholder image for any request for a user's avatar.
What I want to know is: is there any tool out there that does this automatically? That can watch as I load the site, click a bunch of stuff, take a bunch of actions, and spit out or set up a mock that recreates all the responses? And then we could edit any of them as we saw fit to simplify.
Does something like this exist? Maybe it's a browser tool. Maybe it's webpack middleware. Maybe it's a magic rooster.
PS. I imagine this may not be a specific, actionable enough question for SO. I'll understand if it's closed, but I'd really appreciate being directed somewhere where such questions/discussions would fit? I'm new enough to this world that SO is all I know!
There is a practice called service virtualization - a subset of the test double family.
Wikipedia has a list of tools you can use to do that. Here are a couple of examples from that list:
Open source: WireMock will let you record the mocks and edit the responses programmatically
Commercial: Traffic Parrot will let you record the mocks and edit the responses via a UI and/or programmatically
https://mswjs.io/ can mock all the requests for you. It intercepts all of your client's requests and returns your defined mock data.
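For example, a minimal in-browser setup with MSW might look like the sketch below (this assumes the v1 setupWorker/rest API; the endpoints and payloads are placeholders, not your real ones). It also shows the URL-pattern-matching trick for avatars:

    import { setupWorker, rest } from 'msw';

    const worker = setupWorker(
      // A hardcoded response recreated from a real backend response.
      rest.get('/api/projects', (req, res, ctx) =>
        res(ctx.json([{ id: 1, name: 'Stale but useful test project' }]))
      ),

      // URL pattern matching: one placeholder image for every avatar request.
      rest.get('/api/users/:id/avatar', (req, res, ctx) =>
        res(
          ctx.set('Content-Type', 'image/svg+xml'),
          ctx.body('<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32"/>')
        )
      )
    );

    worker.start(); // matching fetch/XHR calls are now served without a network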
We are using a microservices architecture where top-level services expose REST APIs to the end user and backend services do the work of querying the database.
When we get one user request, we make ~30k requests to a backend service. We are using RxJava for the top-level service, so all 30k requests get executed in parallel.
We are using HAProxy to distribute the load between backend services.
However, when we get 3-5 user requests, we start getting network connection exceptions, No Route to Host exceptions, and socket connection exceptions.
What are the best practices for this kind of use case?
Well, you've ended up with the classical microservice mayhem. It's completely irrelevant what technologies you employ - the problem lies in the way you've applied the concept of microservices!
It is natural in this architecture for services to call each other (preferably asynchronously!!). Since I know only a little about your service APIs, I'll have to make some assumptions about what went wrong in your backend:
I assume that a user makes a request to one service. This service will now (obviously synchronously) query another service and receive these 30k records you described. Since you probably have to know more about these records you now have to make another request per record to a third service/endpoint to aggregate all the information your frontend requires!
This shows me that you probably got the whole thing with bounded contexts wrong! So much for the analytical part. Now to the solution:
Your API should return all the required information along with the query that enumerates the records! Sometimes that can seem like a contradiction of the kind of isolation and authority over data/state that the microservices pattern prescribes - but it is not. Isolating data/state in one service only leads to exactly the problem you currently have: all other services HAVE to query that data every time to be able to return correct data to the frontend! It is, however, possible to duplicate the data as long as the authority over it remains clear!
Let me illustrate that with an example: let's assume you have a classical shop system. Articles are grouped. Now you would probably write two microservices - one that handles articles and one that handles groups! And you would be right to do so! You might have already decided that the group-service will hold the relation to the articles assigned to a group! Now if the frontend wants to show all items in a group, here's what happens: the group-service receives the request and returns 30,000 article numbers in a beautiful JSON array that the frontend receives. This is where it all goes south: the frontend now has to query the article-service for every single article it received from the group-service!!! Aaand you're screwed!
Now there are multiple ways to solve this problem. One is (as previously mentioned) to duplicate article information into the group-service: every time an article is assigned to a group using the group-service, it reads all the information for that article from the article-service and stores it, so it can return it with the get-me-all-the-articles-in-group-x query. This is fairly simple, but keep in mind that you will need to update this information when it changes in the article-service, or you'll be serving stale data from the group-service. Event sourcing can be a very powerful tool in this use case and I suggest you read up on it! You can also use simple messages sent from one service (in this case the article-service) to a message bus of your preference and make the group-service listen and react to these messages.
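A minimal sketch of that message-driven duplication (TypeScript for illustration; the bus and store interfaces are hypothetical stand-ins for your broker client and the group-service's own storage):

    // Hypothetical event and infrastructure types.
    interface ArticleUpdated { id: string; name: string; price: number }

    interface Bus {
      subscribe(topic: string, handler: (e: ArticleUpdated) => Promise<void>): void;
    }

    interface GroupStore {
      upsertArticleCopy(article: ArticleUpdated): Promise<void>;
    }

    // group-service wiring: mirror article data locally so the
    // get-me-all-the-articles-in-group-x query never leaves the service.
    function syncArticleCopies(bus: Bus, store: GroupStore): void {
      bus.subscribe('article.updated', async (event) => {
        // Authority stays with the article-service; this is only a read copy.
        await store.upsertArticleCopy(event);
      });
    }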
Another very simple, quick-and-dirty solution could be to provide a new REST endpoint on the article-service that takes an array of article IDs and returns the information for all of them in one response. This could probably solve your problem very quickly.
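A sketch of that batch endpoint (Express and the route name are my assumptions, purely for illustration):

    import express from 'express';

    // Hypothetical data access; in reality this is the article-service's storage.
    const articleStore = {
      async findByIds(ids: string[]) {
        return ids.map((id) => ({ id, name: `Article ${id}` })); // stand-in data
      },
    };

    const app = express();
    app.use(express.json());

    // One round trip for N articles instead of N round trips.
    app.post('/articles/batch', async (req, res) => {
      const ids: string[] = req.body.ids ?? [];
      res.json(await articleStore.findByIds(ids));
    });

    app.listen(3000);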
A good rule of thumb in a backend with microservices is to aim for a constant number of these cross-service calls, meaning the number of calls that cross service boundaries should never be directly related to the amount of data that was requested! We closely monitor which service calls are made as a result of a given request that comes through our API, to keep track of which services call which other services and where our performance bottlenecks will arise or have been caused. Whenever we detect that a service makes many calls to other services (there is no fixed threshold, but every time I see >4 I start asking questions!), we investigate why and how this could be fixed! There are some great metrics tools out there that can help you with tracing requests across service boundaries!
Let me know if this was helpful or not, and whatever solution you implemented!
I'm just at the high level of forming a concept here, so I'm really just looking for thoughts/ideas/suggestions about how this might be done -- whether there's already something that does this, whether I need to roll my own, or whether there are a couple of separate projects that could be cobbled together to achieve this. Any and all input is appreciated :)
What I'd like to be able to do is have a "stream" (broadcast? I'm not sure what else to call it) of data that will be served over an HTTP connection. This stream will serve events/updates/other data to clients that have subscribed to it (it doesn't really matter what the data is, just that the client has subscribed in some fashion). This is nothing even remotely new, but what I'm having trouble tracking down is a way for multiple users/clients/connections to basically share the same stream/connection. In other words, I'm looking for a way for a web server to send data once and have all subscribed clients receive it. That way, for high-traffic applications, the web server doesn't have to send data explicitly to each and every listening connection.
I really hope that made sense.
Are there webservers/webserver-plugins that can already do this?
Would it be possible/feasible to adapt some form of video streaming library to achieve this?
Is this something I'll probably have to manage on my own (code that tracks subscribed connections, receives new data from some other service, and transparently replicates said data to each individual connection)?
Any other ideas, thoughts, concerns, caveats, etc?
Take a look at http://wiki.nginx.org/HttpPushStreamModule - it seems to answer what you're after in terms of existing functionality to track subscribed connections and relaying data to each client. It looks like it would take a bit to set up your data source as a channel, but beyond that it handles the rest.
I haven't used that module before, but have used Nginx itself -- it's quite nice and handles concurrent connections well.
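For reference, a minimal setup might look something like this (directive names are from the module's documentation as I recall it; the paths and sizes are illustrative, so double-check against the docs):

    http {
        push_stream_shared_memory_size 32m;

        server {
            listen 80;

            # your data source POSTs each event here exactly once
            location /pub {
                push_stream_publisher admin;
                push_stream_channels_path $arg_id;
            }

            # each client holds a connection here; nginx fans the event out
            location ~ /sub/(.*) {
                push_stream_subscriber;
                push_stream_channels_path $1;
            }
        }
    }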
Some background
I am planning to write a REST service which helps facilitate collaboration between multiple client systems. Similar to how git or hg handle things, I want the client to perform all merging locally and the server to reject new changes unless they have been merged with existing changes.
How I want to handle it
I don't want clients to have to upload all of their change sets before being told they need to merge first. I would like to do this by performing a POST with the Expect: 100-continue header. The server can then verify that it can accept the change sets based on the header information (not hard for me in this case) and either reject the request or send the 100 Continue status through to the client, who will then upload the changes.
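On the wire, the exchange I'm after would look roughly like this (the revision header is just an illustration of what I mean by "header information"):

    POST /changesets HTTP/1.1
    Host: example.org
    Expect: 100-continue
    X-Base-Revision: 4f2a9c          (hypothetical header carrying merge state)
    Content-Length: 10485760

    (client pauses here; the body has not been sent yet)

    HTTP/1.1 100 Continue            -> client goes ahead and uploads the change sets
    ...or...
    HTTP/1.1 409 Conflict            -> client must merge first; nothing was uploaded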
My problem
As far as I have been able to figure out so far, ASP.NET doesn't support this scenario: by the time you see the request in your controller actions, the POST body has normally already been completely uploaded. I've had a brief look at WCF REST, but I haven't been able to see a way to do it there either; their conditional PUT example receives the full request body before rejecting the request.
I'm happy to use any alternative framework that runs on .net or can easily be made to run on Windows Azure.
I can't recommend WcfRestContrib enough. It's free, and it has a lot of abilities.
But I think you need to use OpenRasta instead of WCF in order to do what you're wanting. There's a lot of stuff out there on it, like the wiki, blog post 1, and blog post 2. It might be a lot to take in, but it's a .NET framework that's truly focused on being RESTful, and not RPC-like as WCF is. And it has the ability to work with headers, like you asked about. It even has PipelineContributors, which have access to the whole context of a call and can halt execution, handle redirections, or even render something different than what was expected.
EDIT:
As far as I can tell, this isn't possible in OpenRasta after all, because "100 continue is usually handled by the hosting environment, not by OR, so there’s no support for it as such, because we don’t get a chance to respond in the asp.net pipeline"
I have a Flash-based game that has a high score system implemented with a SOAP service. There are prizes involved, and I want to prevent someone from using Firebug or similar to discover the web service path and submit fake scores.
I considered using some kind of encryption on the data, but am aware that someone could decompile the SWF and work out how I did it.
I also considered using an IP whitelist, but since the incoming data will come from the user's IP and not the server's, that won't work. (I'm sure I'm missing something obvious here...)
I know that there is a tried and tested solution for this, but I don't seem to be asking google the right questions to get to it.
Any help and suggestions will be appreciated, thank you
What you want to achieve is impossible. You can only make it harder for people to do. The best you can do is to use encryption and encrypt the SWF itself, which usually results in a larger file size and poorer performance.
The safest method is to evaluate or even run the whole game on the server. You can try to determine whether what the client sends you is possible at all. Rather than making sure people use your client, you're making sure people play the game according to your rules.
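For example, a server-side plausibility check can be as simple as the sketch below (TypeScript for illustration; the scoring-rate ceiling is an assumption about your particular game):

    // Reject scores the game mechanics could not have produced.
    const MAX_POINTS_PER_SECOND = 50; // assumed ceiling for legitimate play

    function isPlausible(score: number, playSeconds: number): boolean {
      return score >= 0
        && playSeconds > 0
        && score <= playSeconds * MAX_POINTS_PER_SECOND;
    }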
greetz
back2dos
All security is based on making things hard; it never makes things impossible. How about having your game register with a separate service when it starts up? It could use client information to build some kind of special code that would be unique for each iteration of the game. The game could morph the code in a way that would be hard to emulate. Then when the game is over, the score gets submitted with the morphed code and validated on the server side.
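A rough sketch of that idea using an HMAC (Node's crypto module, just to illustrate; note the secret still has to ship inside the SWF in some form, which is exactly why this only raises the bar rather than making forgery impossible):

    import { createHmac, randomBytes } from 'crypto';

    const SECRET = 'also-embedded-in-the-swf'; // the unavoidable weak point

    // Issued when the game registers at startup; unique per game session.
    function newSessionToken(): string {
      return randomBytes(16).toString('hex');
    }

    // The game computes this on submit; the server recomputes and compares.
    function signScore(token: string, score: number): string {
      return createHmac('sha256', SECRET).update(`${token}:${score}`).digest('hex');
    }

    function verifyScore(token: string, score: number, signature: string): boolean {
      return signScore(token, score) === signature;
    }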
We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method - essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should think of your layers as being like an onion: dependencies should always point inwards, never outwards.
So your GUI/app layer should depend on the business logic layer, the business logic layer should depend on the data access layer, and so on.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services will share a lot of code or call a single internal service with different parameterization depending on what outer service was called.
From a design perspective, this is no different than having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way, because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread -- it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one, I don't know of one in the framework right now.
This is a hard topic to discuss without a solid example.
You are right for feeling that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self-explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system, which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases, knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request - something like "attended"/"secure" (agent client) vs. "unattended"/"not secure" (web client). After that you exchange a session key (an HTTP cookie or an application-level session ID). Sessions obviously don't work if you need to be 100% stateless - especially if you want to scale out without session replication... if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't put a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently. Whoever is using the function makes the decision which one to call.
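To make that contrast concrete, here's a small sketch (all names invented for illustration):

    type Order = { id: string; total: number };
    type OrderSummary = { id: string };

    function allOrders(): Order[] {
      return [{ id: 'a1', total: 10 }]; // stand-in data
    }

    // Magic parameter: the function has to know every caller and branch internally.
    function getOrders(clientType: 'web' | 'agent'): Order[] | OrderSummary[] {
      return clientType === 'agent'
        ? allOrders()
        : allOrders().map(({ id }) => ({ id }));
    }

    // Preferred: self-describing operations; each caller picks what it needs.
    function getFullOrders(): Order[] {
      return allOrders();
    }

    function getOrderSummaries(): OrderSummary[] {
      return allOrders().map(({ id }) => ({ id }));
    }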
So why is client type so wrong? Client type has no specific meaning on its own. It has many meanings and they may change over time. It's simply informational which is why it is a handy thing to log. An operating environment does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different, in a way that would break compatibility with the original request? Now you have two requests: two clients use Request A and one client uses Request B. If you pass a client type into each request, the server is expected to work for every possible client type. Much harder to test and maintain!!