Assume you are developing a web application that shows currency exchange rates for a bank's web site. The rates are stored on an IBM mainframe at the bank and are exposed through web services. Each time a user accesses the rates page, the page makes a request to the mainframe, which generates too much load on the mainframe, especially since most of the time the rates returned are the same. How would you design a caching architecture that minimises trips to the web service? At the same time, rates may fluctuate within the day, and if they have changed, the rates page should not display the cached values but make a fresh request to the web service. How would you design such a caching architecture and make sure the cache is invalidated when rates change? Please explain with a diagram.
Can you tell me how, in this scenario, ASP.NET will know that the values have changed? What should I do?
The mainframe must notify the web tier that the values have changed.
So you could, for example, implement another web service, InvalidateCache(), that empties the cache when called.
When the rates change, the mainframe calls the InvalidateCache service, which empties the cache, so that the next request to the rates service goes back to the mainframe for fresh rates.
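The invalidation flow is language-agnostic; here is a minimal sketch of it in Python (the class and method names are illustrative, not from any real API), where `invalidate()` plays the role of the InvalidateCache() service and `fetch_rates` stands in for the expensive call to the mainframe:

```python
import threading

class RateCache:
    """Caches the latest rates until the mainframe signals a change."""

    def __init__(self, fetch_rates):
        self._fetch_rates = fetch_rates  # expensive call to the mainframe service
        self._lock = threading.Lock()
        self._rates = None

    def get_rates(self):
        with self._lock:
            if self._rates is None:          # cache miss: go to the mainframe once
                self._rates = self._fetch_rates()
            return self._rates

    def invalidate(self):
        """Called by the InvalidateCache web service when rates change."""
        with self._lock:
            self._rates = None

# Usage: the mainframe calls invalidate(); the next page hit refetches.
calls = []
def fetch():
    calls.append(1)
    return {"USD": 1.0, "EUR": 0.92}

cache = RateCache(fetch)
cache.get_rates(); cache.get_rates()   # only one mainframe round-trip
cache.invalidate()
cache.get_rates()                      # rates changed: fetch again
```

The point of the design is that the expensive call happens only on a miss, and the mainframe, which is the only party that knows when rates change, is the one that triggers the miss.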
Here is the approach:
Create a new web service that stores just a token; the token could be anything, such as a datetime, or you can generate it with any other algorithm.
When the user opens the page that shows currency rates, the browser calls this new service, passing its token.
If the client's token matches the server's, the service returns a true response; otherwise it returns false and sends the updated token.
If the response is true, the browser serves the data from its cache.
If it is false, the browser stores the updated token, calls the main web service, gets the data from there, and stores it in its cache.
Whenever the currency rates change, the token in the new web service must be updated.
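The token handshake described in these steps can be sketched as a toy in-process model in Python (`TokenService`, `check`, and `rates_changed` are made-up names standing in for the new web service and its operations):

```python
import uuid

class TokenService:
    """Tiny stand-in for the token web service described above."""

    def __init__(self):
        self._token = uuid.uuid4().hex   # could also be a timestamp or a hash of the rates

    def check(self, client_token):
        """Return (is_current, server_token): True means the client's cache is still valid."""
        return (client_token == self._token, self._token)

    def rates_changed(self):
        """Call this whenever the currency rates are updated."""
        self._token = uuid.uuid4().hex

# Client-side flow
service = TokenService()
my_token = None
valid, my_token = service.check(my_token)   # first visit: token mismatch -> fetch rates
assert valid is False
valid, my_token = service.check(my_token)   # same token -> serve from browser cache
assert valid is True
service.rates_changed()                     # rates updated on the server
valid, my_token = service.check(my_token)   # mismatch again -> refetch and re-cache
assert valid is False
```

The cheap token check replaces the expensive rates call on every page view; the expensive call happens only when the token says something changed.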
The web application I'm working on receives HTTP POST data from an external system run by another company, which essentially tells us when one of our customers performs a particular action. At present, we log the data, update relevant statuses, and it works nice and quickly.
I now have to integrate this into another system so that these updates are pushed out to other third-party businesses via HTTP, directly to their systems. I'm conscious that these external services may suddenly become unavailable due to technical issues, especially as some businesses run them on nothing more than DSL lines.
Is it possible, purely from an ASP.NET page, to call the external web services asynchronously while still returning an OK response to the original service provider, without the delay incurred by waiting for those extra calls?
The service provider, and my web app, only care that we receive the data. If my app times out, they will send the data again after a short period, which we don't want (there are cases where the same status is legitimately sent multiple, but different, times for the same customer, but we don't want erroneous duplicates).
The third parties receiving the data want the updates immediately. The only other way I've thought of is to run an internal Windows service (an exe), open a TCP connection to it from my web app, and have the new service accept the data and disconnect my web application quickly, then forward the data afterwards; but that seems like more complexity than should be required.
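One common way to get the quick acknowledgement is to queue the incoming update and forward it from a background worker, so the response never waits on the third parties. A minimal in-process sketch in Python (a production version would persist the queue so updates survive a restart, and would add retries; all names here are illustrative):

```python
import queue
import threading

outbox = queue.Queue()
delivered = []

def forwarder():
    """Background worker: pushes updates to third parties without blocking the response."""
    while True:
        update = outbox.get()
        # here you would POST the update to each third party, retrying on failure
        delivered.append(update)
        outbox.task_done()

worker = threading.Thread(target=forwarder, daemon=True)
worker.start()

def handle_incoming_post(data):
    """Log the update, queue it for forwarding, and return 200 immediately."""
    outbox.put(data)          # enqueue; slow third parties no longer delay this response
    return "OK"               # the original provider gets its acknowledgement right away

status = handle_incoming_post({"customer": 42, "action": "signup"})
outbox.join()                 # wait for delivery only so the example is deterministic
```

The Windows-service-over-TCP idea in the question is essentially this same pattern with the queue moved out of process, which is what you need if updates must survive an app-pool recycle.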
We're developing an agenda feature on our platform. We implemented syncing with Google Calendar, which works correctly except that it only works with public calendars, not private ones.
We implemented everything as Google documents it and use the OAuth 2.0 protocol.
We are migrating to HTTPS and hope that this will solve our issue.
Do you have any idea why it's blocked when the calendar is private?
You can implement synchronization by sending an HTTP request:
GET https://www.googleapis.com/calendar/v3/calendars/calendarId/events
and adding path parameters and optional query parameters as shown in Events: list.
In addition to that, referring to Synchronize Resources Efficiently, you can keep data for all calendar collections in sync while saving bandwidth by using the "incremental synchronization".
As highlighted in the documentation:
A sync token is a piece of data exchanged between the server and the client, and has a critical role in the synchronization process.
As you may have noticed, the sync token plays a major part in both stages of incremental synchronization. Make sure to store this syncToken for the next sync request. As discussed:
Initial full sync is performed once at the very beginning in order to fully synchronize the client’s state with the server’s state. The client will obtain a sync token that it needs to persist.
Incremental sync is performed repeatedly and updates the client with all the changes that happened ever since the previous sync. Each time, the client provides the previous sync token it obtained from the server and stores the new sync token from the response.
More information and examples of how to synchronize efficiently can be found in the linked documentation.
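The two stages (an initial full sync, then incremental syncs that pass the persisted token) can be sketched like this. This is a simplified Python model, not the Google client library; `fetch_page` stands in for the Events: list request, and the response shape is reduced to the fields that matter here (`items`, `nextPageToken`, `nextSyncToken`):

```python
def sync(fetch_page, sync_token=None):
    """One sync pass: full sync if sync_token is None, incremental otherwise.

    fetch_page(page_token, sync_token) stands in for
    GET https://www.googleapis.com/calendar/v3/calendars/calendarId/events
    and returns a dict shaped like a reduced Events: list response.
    """
    events, page_token = [], None
    while True:
        page = fetch_page(page_token, sync_token)
        events.extend(page.get("items", []))
        page_token = page.get("nextPageToken")
        if not page_token:
            # the last page carries nextSyncToken: persist it for the next call
            return events, page["nextSyncToken"]

# Simulated server: full sync returns everything, incremental returns only changes.
def fake_fetch(page_token, sync_token):
    if sync_token is None:
        return {"items": [{"id": "a"}, {"id": "b"}], "nextSyncToken": "T1"}
    return {"items": [{"id": "b", "updated": True}], "nextSyncToken": "T2"}

events, token = sync(fake_fetch)          # initial full sync; persist token
changes, token = sync(fake_fetch, token)  # incremental: only what changed since T1
```

The key discipline is the token handoff: every response's nextSyncToken must be stored and echoed back on the next request, otherwise you fall back to full syncs.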
I'm trying to model a request submission/approval/completion scenario. I'm using a flowchart workflow hosted as a service in a console app using WorkflowServiceHost. The workflow has a service reference to a WCF service hosted in IIS; this second service interacts with the application database. I have an ASP.NET front end with a service reference to the hosted workflow service, and I call its methods from a proxy client.
The workflow is using a persistence database that I have created using the scripts provided.
The scenario is that a request for a service is made by a user. The request must be approved once by a specific person (I'm using a pick with a delay in one branch to remind the person if no decision arrives, the other branch is receive decision). For some services the request must have a second approval which can be done by any one of a pool of approvers. Once approval is all finished the request goes to a different pool of people for completion.
I have it working but 3 questions:
On the ASP.NET home page I have a list of requests, with links to pages to approve/complete as appropriate; these call methods on the proxy and then redirect back. But because it's all asynchronous, I have to manually refresh the home page to see the changed list. Am I stuck with forcing the page to refresh itself every x seconds, or is there a way to make it synchronous, check the state of the workflow, or wait for a message back? It's not terribly interactive to hit a button and not know whether the action succeeded.
Is there a way to stop someone approving a request just after someone else in the pool has approved it? At the moment nothing happens for the second person when they hit the button (which is good). In the workflow persistence database I can see that the blocking bookmark is the next activity along (presumably set by the person who got there first) so it looks as though the second receive just doesn't happen. I have concurrency checking code in the WCF data service but this never fires because there is no attempt to update the database. I would like to be able to warn the second person that another user got there first.
My homepage list in the web app is built by querying the application database, but is it possible to query the workflow to find the status of each item, passing the item's id (I'm using the id as the correlation handle)? Is it normal to do this or do people usually just query the application database?
I guess you could create an Ajax call that checks whether any state change has occurred and only refreshes the page when that is the case.
If you send a WCF request for an operation that is no longer valid, you should receive an error, unless you are using one-way messaging, in which case there is no message to send the error back on. Mind you, due to a bug in WF4 the error can show up as a timeout after 60 seconds. There is no real way to avoid the problem completely, because you are checking the workflow state as persisted and letting the user act based on that; even as you query the state, the workflow could have been resumed but not saved yet.
Either can work, but I normally query the workflow instance store, as that is closest to the actual workflow state.
I am new to programming, especially web programming, and I want to learn best practices for state management techniques. I mean:
When should we create sessions?
When should we use sessions, and how do we check for null sessions?
When should we use cookies?
When should we use hidden fields?
What are the differences between them all?
Which technique should be used when?
How can an application crash due to unsuccessful state management?
What do we need to keep in mind about state management when developing web applications?
There are so many questions; perhaps you know the answers. Please help me sort out my confusion.
Thanks in advance!
http://www.thedevheaven.com/2012/05/state-management.html
State management is the process by which you maintain state and page information over multiple requests for the same or different pages.
Types of State Management
There are two types of state management:
Client-Side State Management
This stores information on the client's computer by embedding it in a Web page, a uniform resource locator (URL), or a cookie. The techniques available to store state information on the client are listed below:
a. View State – ASP.NET uses view state to track the values in controls. You can add custom values to the view state. The ASP.NET page framework uses it to automatically save the values of the page and of each control just prior to rendering the page. When the page is posted back, one of the first tasks performed by page processing is to restore the view state.
b. Control State – If you create a custom control that requires view state to work properly, you should use control state to ensure other developers don’t break your control by disabling view state.
c. Hidden fields – Like view state, hidden fields store data in an HTML form without displaying it in the user's browser. The data is available only when the form is processed.
d. Cookies – Cookies store a value in the user's browser that the browser sends with every page request to the same server. Cookies are the best way to store state data that must be available for multiple Web pages on a web site.
e. Query Strings - Query strings store values in the URL that are visible to the user. Use query strings when you want a user to be able to e-mail or instant message state data with a URL.
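To make (e) concrete: query-string state is nothing more than URL-encoded key/value pairs appended to the address, which is exactly why a user can e-mail or instant-message it. A small illustration using Python's standard library (the URL and state values are made up):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# State the user should be able to share by sending the URL itself
state = {"page": 3, "sort": "price", "q": "usb cable"}
url = "https://example.com/search?" + urlencode(state)

# On the next request, the server simply parses the state back out of the URL
restored = parse_qs(urlparse(url).query)
```

Because the state lives entirely in the URL, the server needs no memory of the user between requests, but everything in it is visible and editable by the user.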
Server-Side State Management
a. Application State - Application State information is available to all pages, regardless of which user requests a page.
b. Session State – Session State information is available to all pages opened by a user during a single visit.
Both application state and session state information is lost when the application restarts. To persist user data between application restarts, you can store it using profile properties.
Advantages
Advantages of Client-Side State Management:
Better Scalability: With server-side state management, each client that connects to the Web server consumes memory on the Web server. If a Web site has hundreds or thousands of simultaneous users, the memory consumed by storing state management information can become a limiting factor. Pushing this burden to the clients removes that potential bottleneck.
Supports multiple Web servers: With client-side state management, you can distribute incoming requests across multiple Web servers with no changes to your application because the client provides all the information the Web server needs to process the request. With server-side state management, if a client switches servers in the middle of the session, the new server does not necessarily have access to the client’s state information. You can use multiple servers with server-side state management, but you need either intelligent load-balancing (to always forward requests from a client to the same server) or centralized state management (where state is stored in a central database that all Web servers access).
Advantages of Server-Side State Management:
Better security: Client-side state management information can be captured (either in transit or while it is stored on the client) or maliciously modified. Therefore, you should never use client-side state management to store confidential information, such as a password, authorization level, or authentication status.
Reduced bandwidth: If you store large amounts of state management information, sending it back and forth to the client can increase bandwidth utilization and page load times, potentially increasing your costs and reducing scalability. The increased bandwidth usage affects mobile clients most of all, because they often have very slow connections. Instead, you should store large amounts of state management data (say, more than 1 KB) on the server.
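A common way to keep some of the security benefit while still storing state on the client is to sign the value so tampering can be detected (the data is still readable by the user, so it must not be confidential). Here is a sketch of that idea in Python, with a made-up secret and cookie value; ASP.NET does something similar internally when view state MAC validation is enabled:

```python
import hashlib
import hmac

SECRET = b"server-side secret, never sent to the client"

def sign(value: str) -> str:
    """Produce 'value.signature', suitable for storing in a cookie."""
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{sig}"

def verify(cookie: str):
    """Return the value if the signature checks out, else None (tampered)."""
    value, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None

cookie = sign("role=member")
assert verify(cookie) == "role=member"
assert verify(cookie.replace("member", "admin")) is None  # tampering is detected
```

Signing prevents modification but not reading; if the value itself is sensitive, it belongs in server-side state instead.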
Follow these links:
Client - server difference in state management:
http://www.techbaba.com/q/858-difference+clint+side+management+server+side+management.aspx
http://www.dotnetfunda.com/articles/article61.aspx
Caching best practices :
http://msdn.microsoft.com/en-us/library/aa478965.aspx
state management Best practices :
http://msdn.microsoft.com/en-us/library/z1hkazw7.aspx
Use state management techniques in c# :
http://www.c-sharpcorner.com/UploadFile/freelance91/ASPNETstatemanagementtechniques01012007212655PM/ASPNETstatemanagementtechniques.aspx
It sounds like you just need to do some reading.
Pro ASP.NET - This book has a chapter about state management, but I am betting the rest of the book would be helpful to you as well since you are a beginner.
Also, MSDN has some good information about state management and when to use what.
The web site I am developing will be sending tens of thousands of emails daily (and that number will be growing) - registration, notifications, alerts, etc. I will have a dedicated server box that will be actually generating and sending emails by request from the asp.net application (asp.net app calls a WCF method on the email box and provides various parameters for an email).
Now I am trying to figure out the best way of queueing those email jobs on the email server. The call from the ASP.NET app has to be asynchronous so that the app doesn't wait for the email server to create and send the actual email.
Originally I was creating a worker thread for each email job request, but the number of emails is going to be really high and I'm not sure creating hundreds of simultaneous threads is a good idea performance-wise. My next thought was to use MSMQ, but I'm not sure about its performance and scalability.
Any ideas/production examples?
Thanks!
At a previous job, we had to queue messages for delivery, much like you are describing. We decided to create a database record representing each message. At message-creation time, we built the mail message in .NET and then saved it into the database. A separate process (a Windows service built in .NET) would periodically check whether there were messages to be sent (delivery date in the past and status unsent). It would then re-create the mail message from the information returned by the stored procedure and send it along its merry way.
The procedure that returned the messages ready for sending also performed throttling logic based on the day and time of the call (we allowed more of our bandwidth to be used at night and the weekends than during the day).
We also had need for tracking bouncebacks, message opens, and click-throughs which meant having a database record that represented the email was necessary so we could relate events (bounce, open, click) with individual emails and recipients.
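The pattern in this answer, a database table as the outbox plus a periodic sender with throttling, can be sketched roughly like this (Python with an in-memory SQLite database; the table and column names are invented for illustration, and the actual SMTP hand-off is left as a comment):

```python
import datetime
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE outbox (
    id INTEGER PRIMARY KEY,
    recipient TEXT, body TEXT,
    deliver_after TEXT,            -- don't send before this time
    status TEXT DEFAULT 'unsent'
)""")

def enqueue(recipient, body, deliver_after=None):
    """The web app only inserts a row; sending happens in a separate process."""
    when = (deliver_after or datetime.datetime.utcnow()).isoformat()
    db.execute("INSERT INTO outbox (recipient, body, deliver_after) VALUES (?,?,?)",
               (recipient, body, when))

def send_batch(limit):
    """Run periodically by the sender service; limit provides the throttling."""
    rows = db.execute(
        "SELECT id, recipient, body FROM outbox "
        "WHERE status='unsent' AND deliver_after <= ? ORDER BY id LIMIT ?",
        (datetime.datetime.utcnow().isoformat(), limit)).fetchall()
    for row_id, recipient, body in rows:
        # here the real service would hand the message to the SMTP server
        db.execute("UPDATE outbox SET status='sent' WHERE id=?", (row_id,))
    return len(rows)

enqueue("a@example.com", "welcome")
enqueue("b@example.com", "alert")
sent = send_batch(limit=1)   # e.g. daytime: allow only one message per pass
```

A nice property of this design, as the answer notes, is that the same row later gives you somewhere to record bounces, opens, and click-throughs per message.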