Approach on API/System design

I was watching a video on "Design Uber api mock interview".
For most of the APIs, the candidate just used a "userId" and left it to the system to resolve the "rideId".
Example - cancelRide(userId: string)
For this API, the caller just passes a userId to the cancelRide endpoint, and the system then has to resolve the rideId to actually cancel the ride.
This may solve the problem at hand, but in the future Uber may want to allow multiple rides for a single user (you could book a ride for yourself and one for your mom at the same time).
With this kind of API design, we would then have to change the cancelRide endpoint to accept a rideId as well:
cancelRide(userId: string, rideId: string)
Should we design only for the problem at hand and change the API/design when a new requirement comes in, or should we at least account for some obvious/likely future requirements and changes?

You should think ahead: what requirements or changes are likely in the future?
The reason is that you are developing an API.
Assume you have more than 100K external clients of your API; any change will force them to update, modify, and rebuild their apps, which is a lot of work.
Thus, the YAGNI principle is a poor fit for API design; try the Open/Closed Principle (OCP) instead.
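For illustration, here's a minimal sketch (TypeScript, hypothetical names) of an OCP-friendly shape for this endpoint: the ride is identified explicitly from day one, and the request object can grow new optional fields without breaking existing clients.

```typescript
// Hypothetical sketch: design cancelRide around the actual resource (the
// ride) and an extensible request object, so fields can be added later
// without breaking existing callers.

interface CancelRideRequest {
  rideId: string;   // identify the resource being acted on
  userId: string;   // who is requesting the cancellation
  reason?: string;  // optional fields can be added over time
}

interface CancelRideResponse {
  rideId: string;
  status: "cancelled" | "not_found" | "forbidden";
}

// The signature never changes; the request type is open for extension
// (new optional fields) but closed for modification.
function cancelRide(req: CancelRideRequest): CancelRideResponse {
  // ...server-side lookup and authorization would happen here
  return { rideId: req.rideId, status: "cancelled" };
}
```

Designed this way, supporting multiple concurrent rides per user later requires no signature change at all.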

Managing a connection to an external component in an Aggregate

I just discovered Axon framework and am very eager to use it as it answers a lot of my questions and concerns about DDD.
The first project I'd like to use it on involves small cameras which are controlled via the GigEVision protocol (over TCP and UDP for the control and stream channels). I think my problem would be the same for any case where we maintain a connection to an external component or, more generally, want to tie an external component's lifecycle to Axon's lifecycles.
I'd like to have an Aggregate named Camera to which I can send Commands to grab 1 image or start grabbing N images at a certain FPS.
What I'm not sure about is how to manage the connection to an external component in my Aggregate.
- Should I inject the client for my camera into my Camera Aggregate and consider connecting to it part of my protocol / business commands? In that case, how would I link the camera lifecycle (a camera gets disconnected all of a sudden) to the aggregate lifecycle (create a corresponding CameraDisconnectedEvent)?
- Should the connection be handled in a sidecar Saga which gets the camera client injected, the Saga starting on a ConnectionRequestedEvent and stopping as soon as we get a connection error from the camera? I would run into the same issue of linking the connection lifecycle to the lifecycle of the Saga, I think.
- Am I leaking implementation details into the business layer, and should I manage the issue another way?
- Am I just using the wrong tool for this job and should not try to force it into Axon?
Thank you very much in advance, hope my message and issues make sense.
Best regards,
First and foremost, you should ensure that the language spoken by the GigEVision protocol by no means leaks into your other domain.
These two should be separate and remain so, as they cover different concerns.
This brings to light the necessity to have a translation layer of some sort.
More formally, this is called a context mapping. From a DDD perspective, you would take this even further by introducing an Anti-Corruption Layer.
The name already says it, you add a layer to ensure you are not corrupting your domain logic with that from another domain.
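As a rough, non-Axon-specific illustration (all names here are hypothetical, not framework APIs), such a translation layer might look like this:

```typescript
// Sketch of an anti-corruption layer: protocol-level types never cross into
// the domain; the translator emits domain commands and events instead.

// Protocol side (infrastructure)
interface GigEVisionFrame {
  channel: "control" | "stream";
  payload: Uint8Array;
}

// Domain side
interface GrabImagesCommand { cameraId: string; count: number; fps: number; }
interface CameraDisconnectedEvent { cameraId: string; at: Date; }

class GigEVisionTranslator {
  constructor(
    private cameraId: string,
    private publish: (event: CameraDisconnectedEvent) => void,
  ) {}

  // Translate a domain command into protocol traffic.
  toProtocol(cmd: GrabImagesCommand): GigEVisionFrame {
    // ...encode the grab request into a control-channel payload
    return { channel: "control", payload: new Uint8Array([cmd.count, cmd.fps]) };
  }

  // Translate a transport-level failure into a domain event.
  onConnectionLost(): void {
    this.publish({ cameraId: this.cameraId, at: new Date() });
  }
}
```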
Another useful topic to read up on here would be the notion of a Bounded Context.
I digress though, let's go back to the problem at hand: where to position this anti-corruption layer.
What is currently unclear to me is which domain requirement dictates that the connection be maintained at all times when requesting images.
Is the command you want to send requesting for a live feed? Or just "some" images from a given time frame?
In both scenarios, I am not immediately convinced that either operation requires validation through a single Camera aggregate, to be honest.
You could still model this in a command and event format, as the messaging paradigm is very helpful to allow clean segregation of concerns.
But given the current description, I am uncertain whether you need DDD's idea of an Aggregate here at all.
I might be wrong on this note, but I just don't know enough about your domain at this stage.
That's my two cents to the situation, hoping this helps!

Something in ngrx (redux pattern) that I still don't get for large applications

I've been building data-driven applications for about 18 years, and for the past two I've been successfully using Angular for my large forms/CRUD-based apps. You know, the classic SQL Server DB with hundreds of tables and millions of records. So far, so good.
Now I'm porting/re-engineering a desktop app with about 50 forms, all complex, all fully functional, "smart". My approach for the last couple years was to simply work tightly with the backend rest API to retrieve, insert or update data as needed and everything works fine.
Then I stumbled across ngrx and I understand exactly how it works, what it does and why it is good for a "reactive" app.
My problem is the following: in the usual lifecycle of the kind of systems I mentioned, you always have to deal with fresh data and always have to tell the server everything. Almost no data in such apps can safely be "stored" locally, since transactional systems rely on centralized data interactions. There's no such thing as "hey, let's keep this employee's sales here for later use".
So why would it be so important to manage a local 'store' when most of my data is volatile? I understand why it would be useful for global app data like the user profile or general UI-related state, but for the core data itself? I don't get it. You query for data, plug that data into the form, it gets processed by the user and sent back to the server. That data is no longer needed, and if you do need it, you ask for it again, as it could have changed its state since the last time you interacted with it.
I do not understand the great lengths I have to go to in order to maintain a local store, and all the boilerplate, if that state is so volatile.
They say change detection does not scale, but I've built some really large web apps with a simple "http service" pattern and it works just fine, because most of the component tree is destroyed anyway as you navigate elsewhere in the app, and any previous subscriptions become useless. Even with large, bulky forms, the inner workings of a form are never such a problem as to require external "aid" from a store. The way I see it, the "state" of a form is a concern of that form in that moment alone. Is it to keep the component tree in sync? I've never had problems with that before... even for complicated trees with lots of shared data, master-detail is a fairly flat pattern in the end if all the data is there.
For other components, such as grids, charts, reports, etc., the same thing applies. They get the data they need and then "poof", gone.
So now you see my mindset. I AM trying to change it to something better. What am I missing about the redux pattern?
I have a bit of experience here! It's all subjective, so what I've done may not suit you. I work on a complex system that sounds like it's on a similar scale to yours. I battled at first with the same issues of "why build complex logic on the front end and back end" and "why bother keeping stuff in state".
A redux/NGRX approach works for me because there are multiple ways data can be changed - perhaps it's a single user using the front end, perhaps it's another user making a change and I want to respond to that change straight away to avoid concurrency issues down the track. Perhaps there are multiple parts within my front end that can manipulate the same data.
At the back end, I use a CQRS pattern instead of a traditional REST API. Typically, one might suggest re-implementing the commands/queries to "reduce" changes into the state; however, I opted for a different approach. I don't just want to send a big object graph back to the server and have it blindly insert, and I don't want to re-implement logic on the client and the server.
My basic "use case" life cycle looks a bit like the following (a sketch of the resulting state shape follows the list):
Load a list of data (limited size, not all attributes).
User selects item from list
Client requests "full" object/view/dto from server
Client stores response in object entity state
User starts modifying data
These changes are stored as "in progress" changes in a different part of state. The system is now responding to the data in the "in progress" part
If another change comes in from server, it doesn't overwrite the "in progress" data, but it does replace what is in the object entity state.
If required, UI shows that the underlying data has changed / is different to what user has entered / whatever.
User clicks on the "perform action" button, or otherwise triggers a command to be sent to server
Server performs the command; any errors are returned, or success.
Server notifies the client that the change was successful; the client clears the "in progress" information.
Server notifies the client that Entity X has been updated; the client re-requests Entity X and puts it into the object entity state. This notification is sent to all connected clients, so they can all behave appropriately.
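A minimal sketch of how that state might be shaped (plain TypeScript, ngrx-style but hand-rolled; entity and field names are hypothetical):

```typescript
// Server-confirmed entities live in an entity map; unsaved edits live in a
// separate "in progress" slice, so a server push never clobbers user input.

interface Employee { id: string; name: string; salary: number; }

interface AppState {
  // What the server last told us ("Client stores response in object entity state").
  entities: { [id: string]: Employee };
  // Unsaved user edits, keyed by entity id ("User starts modifying data").
  inProgress: { [id: string]: Partial<Employee> };
}

// Reducer-style handlers (sketch, not the actual @ngrx/store API):
function onEntityLoaded(state: AppState, employee: Employee): AppState {
  // A change pushed from the server replaces the entity but leaves
  // in-progress edits untouched; the UI can then flag the difference.
  return { ...state, entities: { ...state.entities, [employee.id]: employee } };
}

function onCommandSucceeded(state: AppState, id: string): AppState {
  // Server confirmed the command: clear the in-progress edits for that entity.
  const { [id]: _cleared, ...rest } = state.inProgress;
  return { ...state, inProgress: rest };
}
```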

I am using Microsoft's Face API. It requires creating a DB on their server. Is it possible to use our own DB rather than creating one at their end?

I am trying to use Microsoft's Face API for facial recognition of my company's employees. I see that you need to create a database on Microsoft's servers.
Is there a way to use their APIs against our own company database (without creating another DB on their server), so that any changes we make to our DB are taken care of automatically?
If not, then how do you handle the changes you need? (I know that there are delete API calls as well, but won't that be cumbersome?)
I think you may have misunderstood how this API works. Whenever there is a change to your list of employees, the PersonGroup (a friendly name for the image classifier model) must be retrained in order for it to start recognizing the added faces and stop recognizing removed ones. So even if there were a way to store the model locally (there isn't), you would still need to track adds/removes and take the additional step of training.
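For reference, kicking off that retraining is a single REST call. A sketch using fetch, with the region, group ID, and key below as placeholders:

```typescript
// Sketch: retraining a PersonGroup after adding/removing faces.
// The endpoint shape follows the Face API v1.0 REST docs; values are
// placeholders, not working credentials.
async function trainPersonGroup(personGroupId: string): Promise<void> {
  const endpoint = "https://westus.api.cognitive.microsoft.com";
  const res = await fetch(
    `${endpoint}/face/v1.0/persongroups/${personGroupId}/train`,
    {
      method: "POST",
      headers: { "Ocp-Apim-Subscription-Key": "<your-key>" },
    },
  );
  // 202 Accepted means training was queued; poll the group's /training
  // status endpoint before calling Identify again.
  if (res.status !== 202) throw new Error(`Training failed: ${res.status}`);
}
```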

Simple Shopping Cart Using Session Variables Now Using AJAX

I know there are a million questions out there on how to implement a shopping cart in your site. However, I think my problem may be somewhat different. I currently have a working shopping cart that I wrote back in the 1.1 days that uses ASP.NET session variables to keep track of everything. This has been in place for about 6 years and has served its purpose well. However, it has come time to upgrade the site, and part of what I have been tasked with is creating a more, erm, user-friendly site. Part of this is removing UpdatePanels and implementing real AJAX solutions.
My problem comes in where I need to persist this shopping cart over several pages. Sure I could use cookies, but I would like to keep track of carts for statistical purposes (abandonment stats, items added but not bought, those kinds of stats) as well as user-friendliness, like persisting their cart so that if they come back it is remembered. This is easy enough if a user is logged in, but I don't want to force a user to create an account if they don't want.
Additionally, the way we were processing orders was a bit, ehh, slapped together. All of the details (color selected, type selected, etc.) are passed to PayPal via their description string, which for the most part is OK, but if a product has selections that are too long for the string (255 chars, I believe), they are cut off and we have to call the customer to confirm what they bought. If I were to implement a more "solid" shopping cart, we wouldn't have to do that, because all the customer's choices would be stored, in addition to the order automatically being entered into the order processing system (they're currently entered into an Excel spreadsheet by hand. I know, right?).
I want to do this the right way, but I don't want to use any sort of overblown software that won't really work with our current business model. Do I use a cookie to "label" each visitor to match them with their cart (give them a cookie with a GUID) across pages, keep their whole cart client side, keep the cart server side and just pull it from the db on each page refresh? Any help would be greatly appreciated.
Thanks!
So this isn't really the answer to your question, but it is part of the answer. I tried to find an exact duplicate of this but didn't find one; I did recognize the subject matter, though: you can keep a lot of the same code if you'll use IRequiresSessionState.
Handler with IRequiresSessionState does not extend session timeout
Other answers:
ASP.NET: How to access Session from handler?
Authentication in ASP.NET HttpHandler
So what you really want to do is implement PageMethods in your pages; that can remove a LOT of the overhead of communicating with the pages. If you want to migrate away from what you're doing now, start implementing handlers (configured to use JSON - there's a decorator for that), which you can use with the likes of jQuery.ajax() as a direct URL while keeping everything within the scope of your project. Note that the browser will send the cookies for you by default, so that's no big deal. The reason I mention that is that the cookie carries the Forms identifier that lets the session be identified.
So if you're using IRequiresSessionState, you can still use all the session state information that you're used to. There's nothing wrong with using Session in combination with AJAX; the two really don't have a lot to do with each other. One is used for server storage, the other for server communication.
Now, if you're trying for a completely clientside app and a RESTful server solution, then you're going to need to start passing complex JSON structures back and forth (not a big deal, just a matter of making sure you define your datatypes for yourself pretty well in your own documentation) and you can keep everything restricted to only what's passed.
I actually use all three of these approaches in my applications on the same server, depending on what I'm trying to do with each of my apps. I have found that each has its own strengths and weaknesses. (OK, I don't use the session part, because I handle my state in other ways, but I could use session state.)
What can I clarify here?
There are a few ways you can handle this. The main question is how long you need to persist the shopping cart information (30 minutes, 1 hour, 1 day, 1 week, etc.).
Short Storage Requirements, Easiest Implementation (30 min - 1 hr)
You can use Session with Page Methods via HttpContext.Current.Session["key"], so your session storage can keep working the way it currently does. You can call these Page Methods using jQuery AJAX pretty easily, which eliminates the need for UpdatePanels, a ScriptManager, etc. So it gets you halfway there, in my opinion. Your pages will load faster and be more responsive, and you don't have to throw away any of the code involved with caching stuff in Session. The main downside is that you are still using Session, so you really don't want to persist sessions for too long, as that will bog down the server hosting the site if it is fairly active.
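As a sketch of the client side (the page and method names are hypothetical), a Page Method call without an UpdatePanel might look like:

```typescript
// Sketch: calling an ASP.NET Page Method directly. Page Methods expect a
// JSON POST with Content-Type application/json; the session cookie is sent
// automatically on same-origin requests.
async function addToCart(productId: string, quantity: number): Promise<number> {
  const res = await fetch("/Cart.aspx/AddToCart", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ productId, quantity }),
  });
  if (!res.ok) throw new Error(`AddToCart failed: ${res.status}`);
  // ASP.NET AJAX wraps the return value in a "d" property.
  const { d } = await res.json();
  return d; // e.g. the updated cart item count
}
```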
Long Storage Requirements, Server-Side Implementation
The same stuff above applies, except you are not using Session, and you can use stateless web services if you like. You would generate a GUID for each visitor and store that GUID in a cookie. On each AJAX call you would send this GUID along with the data to persist. This info would get stored in a database, identified by the GUID. If the customer completes their order, the information can be moved from the cache database to the completed-orders database. In this implementation you would want to write a service or scheduled job that deletes cached (not completed) orders after a certain amount of time to keep the cache database lean.
Nice thing about this solution is you can have a pretty long lived cache, write some reports that key off this cached data, and the load on your web server will be reduced. Additionally if your site becomes more popular it is easy to scale out because you don't have to worry about keeping sessions in sync across multiple web servers.
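A minimal sketch of the GUID labeling and persistence call (endpoint and names hypothetical):

```typescript
// Sketch: label an anonymous visitor with a GUID cookie so the server can
// associate cached cart rows with them across pages and visits.
function getOrCreateCartId(): string {
  const match = document.cookie.match(/(?:^|;\s*)cartId=([^;]+)/);
  if (match) return match[1];
  const id = crypto.randomUUID();
  // Persist for 30 days so a returning visitor gets the same cart.
  document.cookie = `cartId=${id}; max-age=${60 * 60 * 24 * 30}; path=/`;
  return id;
}

// Each AJAX call carries the GUID so the server can upsert cart rows
// keyed by it (hypothetical endpoint).
async function saveCartItem(productId: string, quantity: number): Promise<void> {
  await fetch("/api/cart", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ cartId: getOrCreateCartId(), productId, quantity }),
  });
}
```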
Long Storage Requirements, Client-Side Implementation
This approach still uses web services or Page Methods, but there is no caching database involved. Essentially you jam all your information into a cookie, or set of cookies, and key off that. You might still be able to gather some information if you read out the contents of the cookie(s) on each POST and store that somewhere to report off of.
If you don't need to track which customers added things but didn't order, then the major benefit of this solution is that you can cut down the number of POSTs you have to make by a lot. You can write to the cookie(s) in JavaScript and just POST everything when the order is being completed. Just be careful not to put any sensitive information in the cookies unencrypted (contact info, billing info, etc.), as there are ways to mine data in cookies from other domains in some less secure browsers. For the sensitive stuff, you would POST it to the server and have it return the encrypted information for storage in the cookie(s).
The downside to this solution is that if the information you need to store is large, you could run up against the max cookie size and/or the max number of cookies per domain. With a good strategy (i.e. storing product IDs, not product descriptions) you will probably be OK.
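A sketch of that client-side variant, storing only product IDs and quantities (the encoding is illustrative):

```typescript
// Sketch: keep the whole cart client-side in one cookie, storing only IDs
// and quantities to stay under the ~4 KB per-cookie limit.
interface CartLine { id: string; qty: number; }

function saveCart(lines: CartLine[]): void {
  // Compact encoding: "id:qty|id:qty|..."
  const encoded = lines.map(l => `${l.id}:${l.qty}`).join("|");
  document.cookie = `cart=${encodeURIComponent(encoded)}; path=/`;
}

function loadCart(): CartLine[] {
  const match = document.cookie.match(/(?:^|;\s*)cart=([^;]+)/);
  if (!match) return [];
  return decodeURIComponent(match[1]).split("|").filter(Boolean).map(pair => {
    const [id, qty] = pair.split(":");
    return { id, qty: Number(qty) };
  });
}
```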
Let me know if any of the above is unclear or if you have additional questions.
EDIT: I didn't see the answer above that essentially lays out the Short Storage Requirements option I describe. If that is the accepted solution, give him the check mark (he beat me to it (= ). I'm leaving my answer as it lays out some additional options.

How to hack proof a data submission program

I am writing a score submission system for games where I need to ensure that reports back to the server are not falsified (aka, hacked).
I know that I can store a password or private passkey in the program to authenticate or encrypt the request but if the program is decompiled, a crafty hacker can extract the password/passkey and use it to falsify reports.
Does a perfect solution exist?
Thanks in advance.
No. All you can do is make it difficult for cheaters.
You don't say what environment you're running on, but it sounds like you're trying to solve a code authentication problem*: knowing that the code that is executing is actually what you think it is. This is a problem that has plagued online games forever and does not have a good solution.
Common ways in which such systems are broken:
Capture, modification and replay of submissions to the server
Modifying the binary to allow cheating
Using a debugger to modify the submission in-memory before the program applies signatures/encryption/whatever
Punkbuster is an example of a system which attempts to solve some of these problems: http://en.wikipedia.org/wiki/PunkBuster
Also consider http://en.wikipedia.org/wiki/Cheating_in_online_games
Chances are, this is probably too hard for your game. Hiding a signing key in your binary and signing everything that leaves it will probably put you well ahead of the pack, security-wise.
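As a sketch of that "raise the bar" approach, here is an HMAC over each submission with a key embedded in the client (Node-style TypeScript; names are illustrative):

```typescript
// Sketch: sign a score submission with a key shipped in the client. This
// only raises the bar: anyone who extracts the key can still forge reports,
// exactly as discussed above.
import { createHmac } from "crypto";

const EMBEDDED_KEY = "not-actually-secret-once-shipped"; // placeholder

function signSubmission(payload: { player: string; score: number }): string {
  const body = JSON.stringify(payload);
  const signature = createHmac("sha256", EMBEDDED_KEY)
    .update(body)
    .digest("hex");
  // The server recomputes the HMAC over the body and compares signatures.
  return JSON.stringify({ body, signature });
}
```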
* Apologies, I don't actually remember what the formal name for this is. I keep thinking "running code authentication", but Google comes up with nothing for the term.
There is one thing you can do - record all of the user inputs and send those to the server as part of the submission. The server can then replay the inputs through a local copy of the game engine to determine the score. Obviously this isn't appropriate for every type of game, though. Depending on the game, you may need to include replay protection.
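A sketch of that idea, with a toy deterministic engine standing in for real game logic (all names hypothetical):

```typescript
// Sketch: the server replays the recorded inputs through its own copy of the
// game logic and recomputes the score, rejecting submissions that don't match.
interface Input { tick: number; action: "jump" | "collect" | "finish"; }

class GameEngine {
  score = 0;
  private rngState: number;

  constructor(seed: number) {
    this.rngState = seed;
  }

  apply(input: Input): void {
    // Real logic must be bit-for-bit deterministic: fixed timestep, seeded
    // RNG, no platform-dependent floating point.
    this.rngState = (this.rngState * 1103515245 + 12345) >>> 0; // toy LCG
    if (input.action === "collect") this.score += 10;
  }
}

function verifyScore(seed: number, inputs: Input[], claimed: number): boolean {
  const engine = new GameEngine(seed);
  for (const input of inputs) engine.apply(input);
  return engine.score === claimed;
}
```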
Another method that may be appropriate for some types of games is to include a video recording of the high-scoring play within the submission. Provide links to the videos from the high score table, along with a link to report suspicious entries. This will let you "crowd-source" cheat detection - if a cheater's score hits the table at number 1, then the players behind scores 2 through 10 have a pretty big incentive to validate the video for you. If a score is reported enough times, you can check the video yourself and decide if it should be removed (and the user banned).
