Keeping track after the back button - http

I want to write a web app order system using the REST methodology for the first time. I understand the concept of the "message id" when things get posted to a page but this scenario comes up. Once a user posts to the web app, you can keep track of their state with an id attached to the URI but what happens if they hit the back button of the browser to the entry point of the app when they didn't have any id? They then lose their state in the transaction.
I know you can always give them a cookie but you can't do that if they have cookies turned off and, worst case thinking here, they also have javascript turned off.
Now, I understand the answer may be "Yes, that's what will happen", that's the end of the story, and I can live with that but, being new to this, is there something I'm missing?

REST doesn't really have states server-side; you simply point to resources. User sessions aren't tracked; instead cookies are used to track application state. However, if you find that you really do need session state, then you are going to have to break REST and track it on the server.
A few things to consider:
How many of your users have cookies disabled anyway? How many even know how to do that?
Is it really likely that your users will have JS turned off? If so, how will you accomplish PUT (edit) and DELETE (delete) without AJAX?
EDIT: Since you do not want to force cookies and JavaScript, then you cannot have a truly RESTful system. But you can somewhat fake it. You are going to need to track a user server-side. You could do this with a session object, as found in your language/framework of choice or by adding a field to the database for whatever you want to know. Of course, when the user hits the back button, they will likely be going to a cached page. If that's not OK, then you will need to modify the headers to disallow caching. Basically, it gets more complicated if you don't use cookies, but not unmanageable.
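If you do go the header route, a minimal ASP.NET sketch of disallowing caching might look like this (this uses the standard Response.Cache API; where you hook it in is up to you):
// Tell browsers and proxies not to cache this page, so the back
// button forces a fresh request instead of a stale cached copy.
protected void Page_Load(object sender, EventArgs e)
{
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore();
    Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1)); // already expired
}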
What about the missing PUT and DELETE HTTP methods? You can fake those with POSTs and a hidden parameter specifying whether you are creating something new, editing something, or deleting a record. The GET shouldn't really change.
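As a rough sketch of that hidden-parameter dispatch (the "_method" field name and the Create/Update/DeleteRecord helpers here are hypothetical, not anything built in):
// Dispatch a POST based on a hidden "_method" field, since a plain
// HTML form can only send GET and POST without JavaScript.
protected void Page_Load(object sender, EventArgs e)
{
    if (!Request.HttpMethod.Equals("POST", StringComparison.OrdinalIgnoreCase))
        return; // a GET just renders the page
    string method = Request.Form["_method"] ?? "POST";
    switch (method.ToUpperInvariant())
    {
        case "PUT": UpdateRecord(Request.Form); break;    // hypothetical helper
        case "DELETE": DeleteRecord(Request.Form); break; // hypothetical helper
        default: CreateRecord(Request.Form); break;       // hypothetical helper
    }
}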

The answer is that your application (in a REST scenario) simply doesn't keep track of what happens. All state is managed by the client, and state transitions are effected through URI navigation. The "State Transfer" part of REST refers to client navigation to new URIs which are new states.
A URI accessed with GET is effectively a read-only operation as per both the HTTP spec and the REST methodology. That means if the client "backs up" to some previous URI, "the worst" that happens is another GET is made and more data is loaded.
Let's say the client does this (using highly simplified pseudo-HTTP)...
GET //site.com/product/123
This retrieves information (or maybe a page) about product ID 123, which presumably includes a reference to a URI which can be used to POST that item into the user's shopping cart. So the user decides to buy the item. Again, it's oversimplified but:
POST //site.com/shoppingcart/
{productid = 123}
The return from this might be a representation of the shopping cart, or a reference to the added item (which could be used on the shoppingcart URI with DELETE to remove the item again), or a variety of other things (such as deeper XML describing the cart contents with other URIs pointing to the cart items and back to the original products). It's all up to you.
But the "state" is defined by whatever the client is doing. It isn't tracked on the server at all, although you will certainly keep a copy of his shopping cart contents in your database for some period of time. (I once returned to a website two years later and my shopping cart items were still there...) But it's up to him to keep track of the ID. To your server app it's just another record, presumably with some kind of expiration.
In this way you don't need cookies, and javascript is entirely dependent on the client implementation. It's difficult to build a decent REST client without script -- you could probably build something with XSLT and only return XML from the server, but that's probably more pain than anyone needs.
The starting point is to really understand REST, then design the system, then build it. It definitely doesn't lend itself to building it on the fly like most other systems do (right or wrong).
This is an excellent article that gives you a fairly "pure" look at REST without getting too abstract and without bogging you down with code:
http://www.infoq.com/articles/subbu-allamaraju-rest

It is true that the "S" in REST stands for "state" and the "T" for "transfer". But the state is kept on the client, not on the server. The client always has all the information necessary to decide for himself in which direction he wants to change the state.
The way you describe it, your system is not restful.


Simple Shopping Cart Using Session Variables Now Using AJAX

I know there are a million questions out there on how to implement a shopping cart in your site. However, I think my problem may be somewhat different. I currently have a working shopping cart that I wrote back in the 1.1 days that uses ASP.NET session variables to keep track of everything. This has been in place for about 6 years and has served its purpose well. However, it has come time to upgrade the site and part of what I have been tasked with is creating a more, erm, user-friendly site. Part of this is removing updatepanels and implementing real AJAX solutions.
My problem comes in where I need to persist this shopping cart over several pages. Sure I could use cookies, but I would like to keep track of carts for statistical purposes (abandonment stats, items added but not bought, those kinds of stats) as well as user-friendliness, like persisting their cart so that if they come back it is remembered. This is easy enough if a user is logged in, but I don't want to force a user to create an account if they don't want.
Additionally, the way we were processing orders was a bit, ehh, slapped-together. All of the details (color selected, type selected, etc.) are passed to PayPal via their description string, which for the most part is OK, but if a product has selections that are too long for the string (255 chars, I believe), they are cut off and we have to call the customer to confirm what they bought. If I were to implement a more "solid" shopping cart, we wouldn't have to do that, because all the customer's choices would be stored, in addition to the order automatically being entered into the order processing system (they're manually entered into an Excel spreadsheet. I know, right).
I want to do this the right way, but I don't want to use any sort of overblown software that won't really work with our current business model. Do I use a cookie to "label" each visitor to match them with their cart (give them a cookie with a GUID) across pages, keep their whole cart client side, keep the cart server side and just pull it from the db on each page refresh? Any help would be greatly appreciated.
Thanks!
So this isn't really the answer to your question, but it is part of one: you can keep a lot of the same code if you'll use IRequiresSessionState. I went looking for a duplicate that this question might be of and didn't find any exact duplicates, but I recognized the subject matter.
Handler with IRequiresSessionState does not extend session timeout
other answers:
ASP.NET: How to access Session from handler?
Authentication in ASP.NET HttpHandler
So what you really want to do is look at implementing PageMethods in your pages, which can cut a LOT of the overhead of communicating with them. If you want to migrate away from what you're doing now, you should start implementing handlers (and configure them to use JSON - there's a decorator for that), which you can call from the likes of jQuery.ajax() as a direct URL while keeping everything within the same scope of your project. Note that the browser will send the cookies for you by default, so that's no big deal. The reason I mention that is that the cookie carries the Forms identifier that lets the session be identified.
So if you're using IRequiresSessionState, then you can still use all the session state information that you're used to using. There's nothing wrong with using Session in combination with AJAX. The two really don't have a lot to do with each other: one is used for server storage, the other for server communication.
Now, if you're trying for a completely clientside app and a RESTful server solution, then you're going to need to start passing complex JSON structures back and forth (not a big deal, just a matter of making sure you define your datatypes for yourself pretty well in your own documentation) and you can keep everything restricted to only what's passed.
I actually use all three types of these methods in my applications on the same server, depending on what I'm trying to do with each of my apps. I have found that each has its own strengths and weaknesses. (OK, I don't use the session part, because I handle my state in other ways, but I could use session state.)
What can I clarify for you here?
There are a few ways you can handle this. The main question is how long you need to persist their shopping cart information (30 minutes, 1 hour, 1 day, 1 week, etc.).
Short Storage Requirements, Easiest Implementation (30 min - 1 hr)
You can use session with Page Methods by using HttpContext.Current.Session["key"], so you can keep your session storage the same as it currently works for you. You can call these Page Methods using jQuery AJAX pretty easily, which would eliminate the need for UpdatePanels, a ScriptManager, etc. So it gets you halfway there, in my opinion. Your pages will load faster and be more responsive, and you don't have to throw away any of the code involved with caching stuff in session. The main downside is that you are still using session, so you really don't want to persist sessions for too long, as that will bog down the server hosting the site if it is fairly active.
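For illustration, a minimal sketch of such a Page Method (the "Cart" session key is an assumption; assumes using System.Collections.Generic, System.Web and System.Web.Services):
// A Page Method, callable from jQuery as a POST to Cart.aspx/AddItem
// with contentType "application/json". EnableSession lets it reach
// the same session storage the rest of the site already uses.
[WebMethod(EnableSession = true)]
public static int AddItem(int productId)
{
    var cart = HttpContext.Current.Session["Cart"] as List<int> ?? new List<int>();
    cart.Add(productId);
    HttpContext.Current.Session["Cart"] = cart;
    return cart.Count; // new item count for the UI
}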
Long Storage Requirements, Server Side Implementation
The same stuff above applies, except you are not using session, and you can use stateless web services if you like. You would generate a GUID for each visitor and store that GUID in a cookie. On each AJAX call you would send this GUID along with the data to persist. This info would get stored in a database, identified by the GUID. If the customer completes their order, then the information can be moved from the cache database to the completed orders database. In this implementation you would want to write some service or scheduled job that deletes cached orders (not completed) after a certain amount of time, to keep the cache database lean.
Nice thing about this solution is you can have a pretty long lived cache, write some reports that key off this cached data, and the load on your web server will be reduced. Additionally if your site becomes more popular it is easy to scale out because you don't have to worry about keeping sessions in sync across multiple web servers.
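A rough sketch of the GUID-cookie half of this (the cookie name and the seven-day expiry are assumptions):
// Give each visitor a stable identifier that keys their cart rows
// in the cache database, whether or not they ever log in.
private Guid GetOrCreateCartId()
{
    HttpCookie cookie = Request.Cookies["CartId"];
    if (cookie == null || string.IsNullOrEmpty(cookie.Value))
    {
        Guid id = Guid.NewGuid();
        cookie = new HttpCookie("CartId", id.ToString());
        cookie.Expires = DateTime.Now.AddDays(7); // assumed cache window
        Response.Cookies.Add(cookie);
        return id;
    }
    return new Guid(cookie.Value);
}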
Long Storage Requirements, Client Side Implementation
This approach still uses web services or page methods but there is no caching database involved. Essentially you jam all your information into a cookie or set of cookies and key off that. You might still be able to get some information if you read out the contents of the cookie(s) on each POST and store that somewhere to report off of.
If you don't need to track which customers added things but didn't order, then the major benefit of this solution is that you can cut down the number of POSTs you have to do by a lot. You can write to the cookie(s) in JavaScript and just POST everything when they are completing their order. Just be careful not to put any sensitive information in the cookies unencrypted (contact info, billing info, etc.), as there are ways to mine data in cookies from other domains in some less secure browsers. For the sensitive stuff, you would POST it to the server and have it return the encrypted information for storage in the cookie(s).
The downside to this solution is that if the information you need to store is large, you could run up against the max cookie size and/or the max number of cookies per domain. With a good strategy (i.e. storing product IDs, not product descriptions) you will probably be OK.
Let me know if any of the above is unclear or if you have additional questions.
EDIT: Didn't see the answer above that essentially lays out the Short Storage Requirements one I have. If that is the accepted solution give him the check mark (he beat me to it (= ). Leaving my answer as it lays out some additional options.

Why are hidden fields used?

I have always seen a lot of hidden fields used in web applications. I have worked with code which is written to use a lot of hidden fields and the data values from the visible fields sent back and forth to them. Though I fail to understand why the hidden fields are used. I can almost always think of ways to resolve the same problem without the use of hidden fields. How do hidden fields help in design?
Can anyone tell me what exactly is the advantage that hidden fields provide? Why are hidden fields used?
Hidden fields are just the easiest way; that is why they are used quite a bit.
Alternatives:
storing data in a session server-side (with sessionid cookie)
storing data in a transaction server-side (with transaction id as the single hidden field)
using URL path instead of hidden field query parameters where applicable
Main concerns:
the value of the hidden field cannot be trusted to not be tampered with from page to page (as opposed to server-side storage)
big data needs to be posted back every time, which can be a problem, and is not possible for some data (for example, uploaded images)
Main advantages:
no sticky sessions that spill between pages and multiple browser windows
no server-side cleanup necessary (for expired data)
accessible to client-side scripts
Suppose you want to edit an object. Now it's helpful to put the ID into a hidden field. Of course, you must never rely on that value (i.e. make sure the user has appropriate rights upon insert/update).
Still, this is a very convenient solution. Showing the ID in a visible field (e.g. read-only text box) is possible, but irritating to the user.
Storing the ID in a session / cookie is prohibitive, because it disallows multiple opened edit windows at the same time and imposes lifetime restrictions (session timeout leads to a broken edit operation, very annoying).
Using the URL is possible, but breaks design rules, i.e. use POST when modifying data. Also, since it is visible to the user it creates uglier URLs.
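For example, a hedged sketch of the "never rely on the hidden ID" rule above (the control names and the UserCanEdit/UpdateRecord helpers are hypothetical):
// The hidden field carries the record ID between requests, but the
// server still verifies rights before applying the update.
protected void Save_Click(object sender, EventArgs e)
{
    int recordId = int.Parse(RecordIdHiddenField.Value);
    if (!UserCanEdit(User.Identity.Name, recordId)) // hypothetical authorization check
        throw new UnauthorizedAccessException();
    UpdateRecord(recordId, TitleTextBox.Text); // hypothetical data access
}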
The most typical use I see/use is for IDs and other things that really don't need to be on the page for any reason other than that they're needed at some point to be sent back to the server.
-edit, should've included more detail-
Say, for instance, you have some object you want to update -- the UI sends back a collection of values, and the server at that point may or may not know "hey, this is a customer object," so you fire off a request to the server and say "hey, give me ID 7," and now you have your customer object as the system knows it. The updates are applied, validated, whatever, and now your UI gets the completed result.
I guess a good excuse/argument is using LINQ. Try to update an object in LINQ without getting it from the DB first. It has no real idea that it's something it can keep track of until you get the full object.
Here's one reason: it's a convenient way of passing data between client code (JavaScript) and the server side.
There are many useful scenarios.
One is to "store" some data on a page which should not be entered by a user. For example, store the user ID when generating the page; this value will then be auto-submitted with the form back to the server.
One other scenario is security. Add some hidden token to the page and check its existence on the server. This will help identify whether a form was submitted via the browser or by some bot which just posted to some URL on your site.
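A minimal sketch of such a token check in ASP.NET (keeping the expected value in session is just one approach; the control and key names are assumptions):
// On render: put a one-time token in the page and remember it server-side.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        string token = Guid.NewGuid().ToString("N");
        Session["FormToken"] = token;
        FormTokenHiddenField.Value = token; // assumed HiddenField control
    }
}
// On submit: reject the post if the token is missing or doesn't match,
// which catches bots that POST directly to the URL.
protected void Submit_Click(object sender, EventArgs e)
{
    if (FormTokenHiddenField.Value != (string)Session["FormToken"])
        return;
    // ... process the form ...
}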
It keeps things out of the URL (as in the querystring) so it keeps that clean. It also keeps things out of Session that may not necessarily need to be in there.
Other than that, I can't think of too many other benefits.
They are generally used to store state as an interaction progresses. Cookies could be used instead, but some people disable them. Could also use a single hidden field to point at server-side state, but then there are session-stickiness issues.
Using a hidden field in a form adds one more control to the form, increasing its weight.
If there is no need for a hidden field, you shouldn't use one: it is weak from a security point of view, it isn't good programming practice, and it can also affect the performance of the application.

Storing IDs between pages for up to a day?

I need to store IDs (Contact IDs, Claim IDs, etc.) between multiple .aspx pages.
At the moment I am storing the ID in the Session and have set the Session timeout to 300 minutes. However, I am still getting errors because users are attempting to perform operations after the Session has expired.
I think users are leaving their web browsers open, locking their computers, going home for the evening, coming in the next morning and attempting to pick up where they left off.
I don't want to use the Querystring. Cookies are more for User IDs than Contact IDs and Claim IDs. Viewstate is only maintained per page. Persisting Session to a database seems unnecessarily complicated. I don't want to extend the Session timeout too much.
I'd ideally like them to be able to pick up where they left off in the morning.
What are the best practices for dealing with storing IDs between pages? How can I do this without them receiving a message saying their session has expired? Am I overlooking something obvious?!
I would put all this information into a database. This is a classic use of a database: a means to store information needed by the application.
Cookies are bad because you have to assume the user will be at the same PC from session to session. You have seen the problem with sessions.
If you do not want to use a database, then you can use a file on the server and read from that file.
You might actually find it much easier than you'd expect to create a sort of "landing pad" database where a user's item history, or MRU list, is stored. I wouldn't tie it to their immediate session or cookie, though. Let the session maintain the user state; let the database handle where they were when they went home or took that 7 hour lunch break.
As @David said, this is a classic use of databases.
If you are in need of doing this without using databases, you can fall back on some of the "tricks" (read hacks) that are used in non-dynamic languages:
Create a hidden field which exists on every page, name the field appropriately for the data you are storing.
When you render the page, create an XML fragment containing your IDs and other data, encode it for transport (using base64, etc.), and place the value into the hidden field you created (see the sketch after these steps).
When the page is posted back, reverse #2 and recover your data, then apply the page changes.
Rinse and repeat. :)
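A minimal sketch of steps 2 and 3 (the XML shape and the hidden field name are just for illustration):
// Step 2: pack the IDs into XML and base64-encode it into the hidden field.
string xml = "<state><contactId>42</contactId><claimId>7</claimId></state>";
StateHiddenField.Value = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(xml));
// Step 3: on postback, reverse the encoding and recover the IDs.
string recovered = System.Text.Encoding.UTF8.GetString(
    Convert.FromBase64String(StateHiddenField.Value));
var doc = new System.Xml.XmlDocument();
doc.LoadXml(recovered);
int contactId = int.Parse(doc.SelectSingleNode("/state/contactId").InnerText);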
Really, this is not a very scalable solution, as the XML and base64 encoding will add 50% or more to your data size, and if there is significant data (hundreds of KB) this process can get out of control.
The last time I was asked to use this solution it was for moving a list of well defined data around with an application that did not always have access to the DB. It worked ok for the purposes, but was definitely not ideal.
You could go ahead and store the IDs in ViewState on the page, then on each page that requires the ID have a settable property for it. Before redirecting to the page, instantiate it and set the ID property.
You're correct, storing this type of data in session isn't really what it's meant for. Viewstate is a better mechanism for this.
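For what it's worth, a small sketch of such a ViewState-backed property (the name is assumed):
// Each page that needs the ID exposes it as a property backed by
// ViewState, so it round-trips with the page rather than the session.
public int ContactId
{
    get { return ViewState["ContactId"] == null ? 0 : (int)ViewState["ContactId"]; }
    set { ViewState["ContactId"] = value; }
}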

GET vs. POST does it really really matter?

Ok, I know the difference in purpose. GET is to get some data. Make a request and get data back. POST should be used for CRUD operations other than read I believe. But when it comes down to it, does the server really care if it's receiving a GET vs. POST in the end?
According to the HTTP RFC, GET should not have any side-effects, while POST may have side-effects.
The most basic example of this is that GET is not appropriate for anything like a purchase-transaction or posting an article to a blog, while POST is appropriate for actions-that-have-consequences.
By the RFC, you can hold a user responsible for actions done by POST (such as a purchase), but not for GET actions. 'Bots always use GET for this reason.
From the RFC 2616, 9.1.1:
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
It does if a search engine is crawling the page, since they will be making GET requests but not POST. Say you have a link on your page:
http://www.example.com/items.aspx?id=5&mode=delete
Without some sort of authorization check performed before the delete, it's possible that Googlebot could come in and delete items from your page.
Since you're the one writing the server software (presumably), then it cares if you tell it to care. If you handle POST and GET data identically, then no, it doesn't.
However, the browser definitely cares. Refreshing or clicking back to a page you got as a response to a POST pops up the little "Are you sure you want to submit data again" prompt, for example.
GET has data limit restrictions based on the sending browser:
The spec for URL length does not dictate a minimum or maximum URL length, but implementation varies by browser. On Windows: Opera supports ~4050 characters, IE 4.0+ supports exactly 2083 characters, Netscape 3 -> 4.78 support up to 8192 characters before causing errors on shut-down, and Netscape 6 supports ~2000 before causing errors on start-up
If you use a GET request to alter back-end state, you run the risk of bad things happening if a webcrawler of some kind traverses your site. Back when wikis first became popular, there were horror stories of whole sites being deleted because the "delete page" function was implemented as a GET request, with disastrous results when the Googlebot came knocking...
"Use GET if: The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup)."
"Use POST if: The interaction is more like an order, or the interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or the user be held accountable for the results of the interaction."
source
You should be aware of a few subtle security differences. See my question:
GET versus POST in terms of security?
Essentially the important thing to remember is that GET will go into the browser history and will be transmitted through proxies in plain text, so you don't want any sensitive information, like a password, in a GET.
Obvious maybe, but worth mentioning.
By HTTP specifications, GET is safe and idempotent and POST is neither. What this means is that a GET request can be repeated multiple times without causing side effects.
Even if your server doesn't care (and this is unlikely), there may be intermediate agents between your client and the server, all of which have this expectation - for example, proxies that cache data at your ISP or other providers for improved performance. The same expectation is true for accelerators, for example, a prefetching plugin for your browser.
Thus a GET request can be cached (based on certain parameters), and if it fails, it can be automatically repeated without any expectation of harmful effects. So, really, your server should strive to fulfill this contract.
On the other hand, POST is not safe, not idempotent and every agent knows not to cache the results of a POST request, or retry a POST request automatically. So, for example, a credit card transaction would never, ever be a GET request (you don't want accounts being debited multiple times because of network errors, etc).
That's a very basic take on this. For more information, you might consider the "RESTful Web Services" book by Ruby and Richardson (O'Reilly press).
For a quick take on the topic of REST, consider this post:
http://www.25hoursaday.com/weblog/2008/08/17/ExplainingRESTToDamienKatz.aspx
The funny thing is that most people debate the merits of PUT v POST. The GET v POST issue is, and always has been, very well settled. Ignore it at your own peril.
GET has limitations on the browser side. For instance, some browsers limit the length of GET requests.
I think a more appropriate answer is that you can pretty much do the same things with both. It is not so much a matter of preference, however, but a matter of correct usage. I would recommend you use your GETs and POSTs as they were intended to be used.
Technically, no. All GET does is put the data in the first line of the HTTP request, while POST puts it in the body.
However, how the "web infrastructure" treats the differences makes a world of difference. We could write a whole book about it. However, I'll give you some "best practises":
Use "POST" for when your HTTP request would change something "concrete" inside the web server. Ie, you're editing a page, making a new record, and so on. POSTS are less likely to be cached, or treated as something that's "repeatable without side-effects"
Use "GET" for when you want to "look at an object". Now, such a look might change something "behind the scenes" in terms of caching or record keeping, but it shouldn't change anything "substantial". Ie, I could repeat my GET over and over and nothing bad would happen, except for inflated hit counts. GETs should be easily bookmarkable, so a user can go back to that same object later on.
The parameters to the GET (the stuff after the ?, traditionally) should be considered "attributes to the view" or "what to view" and so on. Again, it shouldn't actually change anything: use POST for that.
And, a final word, when you POST something (for example, you're creating a new comment), have the processing for the post issue a 302 to "redirect" the user to a new URL that views that object. Ie, a POST processes the information, then redirects the browser to a GET statement to view the new state. Displaying information as a result of a POST can also cause problems. Doing the redirection is often used, and makes things work better.
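A hedged sketch of that POST-then-redirect pattern in ASP.NET (SaveComment and the page names are hypothetical):
// Process the POST, then 302 the browser to a GET view of the new
// state, so refresh/back don't resubmit the form.
protected void Submit_Click(object sender, EventArgs e)
{
    int commentId = SaveComment(CommentTextBox.Text); // hypothetical data access
    Response.Redirect("ViewComment.aspx?id=" + commentId); // issues a 302
}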
Should the user be able to bookmark the resulting page? Another thing to think about is some browsers/servers incorrectly limit the GET URI length.
Edit: corrected char length restriction note - thanks ars!
It depends on the software at the server end. Some libraries, like CGI.pm in Perl, handle both by default. But there are situations where you more or less have to use POST instead of GET, at least for pushing data to the server: large amounts of data (where the corresponding GET URL would become too long), binary data (to avoid lots of encoding/decoding trouble), multipart files, non-parsed headers (for continuous updates, pre-AJAX style...) and similar.
The server technically couldn't care one way or the other about what kind of request it receives. It will blindly execute any request coming across the wire.
Which is the problem. If you have an action that destroys or modifies data in a GET action, Google will tear your site up as it crawls through indexing.
The server usually doesn't care. But it's mostly about following good practices, as you mentioned. The client side also matters - as mentioned, you usually cannot bookmark a POSTed page, and some browsers have limits on the length of the URL for really long GET queries.
Since GET is intended for specifying the resource you want to get, depending on the exact software on the server side, the web server (or the load balancer in front of it) may have a size limit on GET requests to prevent denial-of-service attacks...
Be aware that browsers may cache GET requests but will generally not cache POST requests.
Yes, it does matter. GET and POST are quite different, really.
You are right in that normally, GET is for "getting" data from the server and displaying a page, while POST is for "posting" data back to the server. Internally, your scripts get the same data whether it's GET or POST, so no, the server doesn't really care.
The main difference is that GET parameters are specified in URLs, while POST data is not. This is why POST is used for signup and login forms - you don't want your password in a URL. Similarly, if you're viewing different pages or displaying a specific view of some data, you normally want a unique URL.
It really does matter. I have gathered like 11 things you should know about them.
11 things you should know about GET vs POST
No, the server shouldn't care, except for the points in @jbruce2112's answer and that uploading files requires POST.

Preventing Users from Working on the Same Row

I have a web application at work that is similar to a ticket working system. Some users enter new issues. Other workers choose and resolve issues. All of the data is maintained in MS SQL server 2005.
The users working to resolve issues go to a page where they can view open issues. Because up to twenty people can be looking at this page at the same time, one potential problem I had to address was what happens if someone picks an issue that someone else picked just after their page loaded.
To address this, I did two things. First, the gridview displaying the issues to select uses an AJAX timer to update every second. Once an issue has been selected, it disappears one second later at most. In case they select one within this second, they get a message asking them to choose another.
The problem is that the AJAX part of this is sending too many updates (this is what I am assuming) and it is affecting the performance of the page and database. In addition, the updates are not performing every second. I find the timer to be unreliable when working to trigger stored procedures.
There has to be a better way, but I can't seem to find one. Does anyone have experience with a situation like this or have suggestions to keep multiple users from selecting the same record to maintain? I really do not want to disable the AJAX part entirely because I feel the message alone would make the application frustrating to use.
Thanks,
Put a lock timestamp field on the row in the database. Write a stored proc that returns true or false based on whether the lock timestamp is older than a specific time. Set the sessions in your web app to expire in the same time, a minute or two. When a user selects a row, they hit the stored proc, which helps the app decide whether it should let the user modify it.
Hope that makes sense....
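As a hedged sketch of that check from the web app side (inlining the SQL here rather than a stored proc; the table/column names and the two-minute window are assumptions; assumes using System.Data.SqlClient):
// Try to take the lock: succeeds only if the row is unlocked or the
// previous lock has expired. One row affected means we got it.
static bool TryLockIssue(int issueId, string userName, SqlConnection conn)
{
    const string sql =
        @"UPDATE Issues
          SET LockedBy = @user, LockedAt = GETDATE()
          WHERE IssueId = @id
            AND (LockedBy IS NULL OR LockedAt < DATEADD(minute, -2, GETDATE()))";
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@user", userName);
        cmd.Parameters.AddWithValue("@id", issueId);
        return cmd.ExecuteNonQuery() == 1;
    }
}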
Two things can help mitigate your problem.
First, after-selection notification that the case has been taken is needed regardless of your ajax update time frame. Even checking every second doesn't mean two people cannot click the same case at what they perceive to be the same time. In such cases, one of the users needs to be notified that their selection is invalid even though it appeared valid when selected. This notification doesn't need to be elaborate; keeping a light, helpful tone can improve user perception even in the light of disappointment. And if you identify the user who selected that record already, that will not only help your users coordinate in future but also divert attention from your program to the user who snaked the juicy case. (indeed, management may like giving your users the occasional collision as it will motivate them to select cases faster)
Second, a small tweak to how you display your cases can reduce selection collisions. Adding a random element to display order and/or filtering out every other case on display will help your users select different cases naturally. Human pattern recognition and task selection isn't really random so small changes to presentation can equal big changes to selection behavior. Reductions in collision chance keeps your collision notifications rare (and thus less frustrating to your users). This is even better if your users can be separated into classifications that can help determine useful case ordering/filtering.
Okay, a third thing that will help you over time is if you keep a log of when collisions occur (with helpful meta data about the collision—like who was involved and selection timing). Armed with solid collision data, you can find what works and what doesn't. Over time, you can hone your application to your actual use cases as well as identify potential problems early. Nothing reassures your users more than being on top of a problem (and able to explain your plans to solve it) before they're even aware it exists.
With these mitigating patterns, you'll probably find you can safely reduce your ajax query timeframe without affecting user experience. And with useful logging, you'll have the assurance that any tweaks you put in place are actually working (or not—which is maybe even more useful to know).
I did something similar where, once a user opened a ticket (row), it assigned that ticket to that user and set a value on that record, like an FK to that particular user, so if anyone else tried to open that ticket (row) it would let them know it has already been assigned to someone else.
If possible, limit the system so that they just get the next open issue off the work queue, as opposed to having them be able to choose from all open issues.
If that isn't possible, I suppose you could check upon the choosing of an issue to see if it is still available. If it's not available, then make it disappear after the user clicks on it. This way you are only requesting when they actually click on something as opposed to constant polling of the data.
Have you tried increasing the time between refreshes? I would expect that once every 30 seconds would be sufficient. 40 requests/minute is a lot less load than 1,200/minute. Your users may not even notice the difference.
If they do, how about providing a refresh button on the page so the users can manually refresh the list just prior to selecting an item to avoid the annoying message if they choose.
I'm failing to see the issue, especially after you mentioned you are already flagging tickets as in progress/being maintained and have a timestamp/version on the item.
Isn't the following enough:
User browses the tickets and sees a list of available tickets i.e. this excludes ones that are in the db as in progress. If you want the users to also see tickets in progress, you indicate it clearly in the ticket status and disable the option to take it.
User either flags a ticket as in progress explicitly or implicitly by opening the ticket (depends on the user experience / how its presented to the users).
User explicitly moves the ticket to a different status i.e. completed, invalid, awaiting for feedback, etc.
When the items are retrieved at 1, you include a timestamp/version. When 2 happens, you use an optimistic concurrency approach to make sure that if two people try to take the ticket at the same time, only the first one will be successful.
What will happen is that for the second person, the update ... where ... timestamp = #timestamp will not find any records to update and you will report back that the ticket was already taken.
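A minimal sketch of that optimistic update (a SQL Server rowversion column and the table/column names are assumptions; assumes using System.Data.SqlClient):
// Take the ticket only if nobody changed it since we read it:
// zero rows updated means someone else got there first.
static bool TryTakeTicket(int ticketId, byte[] rowVersion, int userId, SqlConnection conn)
{
    const string sql =
        @"UPDATE Tickets
          SET Status = 'InProgress', AssignedTo = @user
          WHERE TicketId = @id AND RowVersion = @version";
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@user", userId);
        cmd.Parameters.AddWithValue("@id", ticketId);
        cmd.Parameters.AddWithValue("@version", rowVersion);
        return cmd.ExecuteNonQuery() == 1;
    }
}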
If you want, you can build on top of the above to update the UI as tickets are grabbed. This could be by just doing a full refresh of the current page of tickets after x time (maybe alerting/prompting the user), or even by retrieving, with AJAX, a list of changed tickets for the page being shown. You still have the earlier steps in place, as this modification is just a convenience for the users.
