Suppose I have an ASP.NET page where a customer can select a product from a drop-down list, and on that change event the corresponding price, quantity, etc. fields are updated to the appropriate values. These values are obtained from a server-side ASP.NET page using jQuery's "$.post(....)" method. On the same page there is another section which shows live market statistics for the products. This section obtains the live market product values by making a request to the server-side ASP.NET page every 20 seconds, controlled by a timer.
Now suppose that this timer goes off and a request is being processed at the server. At the same time the customer selects a different product from the drop-down list, which fires another AJAX request.
Is there any chance that the responses from these two different requests can get mixed up? I mean, could the response intended for the live update section be treated as the response for the product catalog section? If that is the case, how can I tell, before making an AJAX request, whether another request is still being processed at the server, and if necessary abort it?
I don't want to use ASP.NET AJAX for this, because it generates a lot of unnecessary script/data, which increases the page size, and the customers of this site have a bandwidth of only 2-4 kbps. :|
It depends how the code is written. So long as you aren't using globals in your JS you should be fine.
The first example at http://www.jibbering.com/2002/4/httprequest.html uses a global (xmlhttp); don't do that. Pass variables around instead, and make use of the this keyword inside your onreadystatechange callback function.
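For example, here is a minimal sketch (the page names and element IDs are made up) of how the two requests stay independent with jQuery, and how you could abort a pending live update if you really wanted to - each $.post call returns its own jqXHR object and invokes its own callback, so nothing gets mixed up as long as no shared globals are involved:
var liveUpdateRequest = null;

function refreshLiveMarket() {
    liveUpdateRequest = $.post('LiveMarket.aspx', { section: 'live' }, function (data) {
        $('#liveMarket').html(data);        // only ever touches the live section
    });
}

function loadProductDetails(productId) {
    // Optional: cancel a pending live update; not required for correctness.
    if (liveUpdateRequest) {
        liveUpdateRequest.abort();
    }
    $.post('ProductDetails.aspx', { id: productId }, function (data) {
        $('#productDetails').html(data);    // only ever touches the catalog section
    });
}

setInterval(refreshLiveMarket, 20000);      // the 20-second timer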
I've setup a Zapier automation to fire an event every time a new deal is made on a 3rd party CRM. The automation triggers fine, and retrieves the GA Client ID stored in the CRM. The goal of this automation is to add the value of the deal to the client's session history. This works completely fine on a new test GA View I made as well as the original one (the one left without any filters).
However, there's one GA View which has both the anti-bot/spider setting and 3 filters set up. I tried disabling all four of them, yet the event still wasn't being fired - not in real-time, nor in User Explorer. I'm wondering what could be the cause of this. All views are, of course, on the same property. Are there any other view-specific filters (besides the anti-bot/spider setting and view filters) or options I may have missed that would cause events sent by Zapier not to show up in just this one view?
Any help is appreciated!
Settings changes, in particular those relating to filters, may not take effect immediately. If you leave the filters disabled, check after midnight (or a few hours after midnight) whether you see that data in the reports.
This happens because the data is reprocessed after midnight, so for that day (which has by then become the previous one), if you have removed the filters, you should find all the data.
Say I have two pages on a site called “Page 1” and “Page 10”. I'd like to be able to see the paths visitors take to get from “Page 1” to “Page 10” with full URLs intact. Many of the URLs (including those for “Page 1” and “Page 10”) will include query strings that are important.
Is this possible? If so, how?
Try using the Behavior Flow report. The report basically shows you how visitors click through your website. There are a lot of ways to customize the report, and you will need to play around with them to really answer your question. By default, the behavior flow focuses on the entry and exit points of visitors, regardless of how many times they hit the different subpages in between. However, I'm sure you can set appropriate filters and settings to answer your question.
I use two methods for tracking where people have been on my website:
Track and store the information in my own SQL database. (details below)
Lead Forensics (paid subscription, but you can do a trial).
For tracking and storing my own data, I record unique visitors based upon the IP Address they're connecting from and then have a separate table that records all page views that links back to the unique visitor table.
Lead Forensics data simply allows me to link up those unique visitors with actual companies that have viewed my website.
Doing it yourself means you don't have to rely on Google for your records to work. In my experience Google Analytics tends to round numbers, so you don't get a true indication of the figures, and by tracking the user agent string you can also remove bots and website trawlers from your data.
As a somewhat ugly hack you could use transaction tracking. If you use the same transaction id multiple times, subsequent products will be added to the existing data. So assign an ID at the start of the visit and on each page record a transaction with the current page URL as the product name (and the ID as the transaction id). This will give you the complete path per user (I am frankly not too sure how useful this is - at some point you probably want aggregated data. Plus each transaction and product counts towards your quota for interaction counts, so on a large site you might run over the 10 million hits limit).
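If you want to experiment with that hack, a rough sketch using the analytics.js ecommerce plugin could look like this (generating and storing the visit ID via sessionStorage is just one possible approach, and this assumes the ga() tracker snippet is already on the page):
// Reuse one pseudo "transaction" ID for the whole visit.
var visitId = sessionStorage.getItem('visitId');
if (!visitId) {
    visitId = 'visit-' + Date.now() + '-' + Math.floor(Math.random() * 1e6);
    sessionStorage.setItem('visitId', visitId);
}

ga('require', 'ecommerce');
ga('ecommerce:addTransaction', { id: visitId });
ga('ecommerce:addItem', {
    id: visitId,                                // same transaction id on every page
    name: location.pathname + location.search,  // current page URL as the product name
    quantity: '1'
});
ga('ecommerce:send');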
You can do it programmatically:
Keep a map in the backend which stores the userId (assuming you assign each user a unique ID at login) against a list of strings (each string being a URL visited by that user).
Whenever the user hits another URL starting from Page 1 (and only from Page 1; check this using JS), send a POST request to the backend with the new URL in its data section.
In the backend, check whether the URL is that of Page 10, and if not, add the URL as a string to the map for that user.
Finally, when the user reaches the Page 10 URL, you know the URLs visited on the way from Page 1 to Page 10 and can use them.
If I consider the JS side, and I have not misunderstood your question, you can also get the previous URL from the request header information using document.referrer.
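A small client-side sketch of that POST (the /track-visit endpoint and the currentUserId variable are placeholders, and jQuery is assumed to be available):
// Runs on each page between Page 1 and Page 10: report the current URL
// (and the referrer, for context) to the backend for this user.
$.post('/track-visit', {
    userId: currentUserId,          // assumed to be rendered into the page at login
    url: window.location.href,
    referrer: document.referrer
});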
Are you trying to do it from Google Tag Manager? I am not sure whether you are trying to trace the URLs client side or server side.
Ok! So I have spoken to a google representative about this issue, however since I am not enterprise level, he can't push me to tech support and suggested that I use the SO for answers. Here is the question...
In Google Maps Terms it states the following:
(b) No Pre-Fetching, Caching, or Storage of Content. You must not pre-fetch, cache, or store any Content, except that you may store: (i) limited amounts of Content for the purpose of improving the performance of your Maps API Implementation if you do so temporarily (and in no event for more than 30 calendar days), securely, and in a manner that does not permit use of the Content outside of the Service; and (ii) any content identifier or key that the Maps APIs Documentation specifically permits you to store. For example, you must not use the Content to create an independent database of "places" or other local listings information.
This led me to originally believe that google would not allow caching of any type of information. However, then I read the following:
When to Use Client-Side Geocoding
The basic answer is "almost always." As geocoding limits are per user session, there is no risk that your application will reach a global limit as your userbase grows. Client-side geocoding will not face a quota limit unless you perform a batch of geocoding requests within a user session. Therefore, running client-side geocoding, you generally don't have to worry about your quota.
Two basic architectures for client-side geocoding exist.
Run the geocoding and display entirely in the browser. For instance, the user enters an address on your page. Your application geocodes it. Then your page uses the geocode to create a marker on the map. Or your app does some simple analysis using the geocode. No data is sent to your server. This reduces load on your server, but doesn't give you any sense of what your users are doing.
Run the geocode in the browser and then send it to the server. For instance, the user enters an address. Your application geocodes it in the browser. The app then sends the data to your server. The server responds with some data, such as nearby points of interest. This allows you to customize a response based on your own data, and also to cache the geocode if you want. This cache allows you to optimize even more. You can even query the server with the address, see if you have a recently cached geocode for it, and if you do, use that. If you don't, return no result to the browser, and let it geocode the address and send the result back to the server for caching.
So one side says you cannot cache, and the other tells you that you should. Another suggestion is to always use client-side geocoding when you can, but that becomes a grey area as well, because both examples state that the user must input the data. What if jQuery read the data from a div or span and then geocoded that information? The user wouldn't have actually initiated the geocode, but it was still done client-side. I'm trying to create a site that has a bunch of events generated by users, and this site could get pretty loaded, so I am trying to determine the best practice for doing this. Google suggested posting here, so before you go and say this is "off-topic", please note that this is where they told me to post.
Any feedback would be greatly appreciated.
The first quote does not explicitly forbid caching data at all. It is ambiguous as to how much you can cache (what number explicitly is "limited amounts"?) but it does not forbid caching.
You are allowed to cache the data if it helps improve the performance of your site as long as you retain the data for no longer than 30 days and do not make it available in any way to any other service except the service that originally retrieved the data.
Regarding user interaction - if your user explicitly enters a page with the expectation that they will be shown geocoded information I would assume that this would fulfill "user interaction".
As an example from a project I worked on last year I had it set up to do the following:
- Show markers on the map
- If the user clicked a marker they were shown a popup with data from the cache if available, otherwise a geocode would be performed and the returned information would be cached along with the date/time of the cache.
Another page of the site showed a history of these markers at 5 minute intervals throughout the day. If cached data was present (from clicking the map marker as in the previous part) this would be shown, otherwise a geocode would be performed and the data cached as before. The user clicking to run the report was (in my opinion) enough "user interaction" to not count as pre-fetching as the user had to manually select a timeframe before the report would be displayed.
A cronjob then ran every day at midnight which would go through each record with cached data over 25 days old and remove it.
As it was I was caching much less than 10% of the marker positions being shown (20+ markers being updated every minute, but the report was being run on maybe 3-5 markers each day and only geocoding data for every 5th point).
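To illustrate the cache-or-geocode flow described above, here is a rough sketch (the /geocode-cache endpoints and payloads are assumptions, not a prescribed API):
function getLatLngForAddress(address, callback) {
    // 1. Ask our own server whether we already have a cached geocode (less than 30 days old).
    $.getJSON('/geocode-cache', { address: address }, function (cached) {
        if (cached && cached.lat && cached.lng) {
            callback(new google.maps.LatLng(cached.lat, cached.lng));
            return;
        }
        // 2. Otherwise geocode in the browser, use the result, and send it back for caching.
        new google.maps.Geocoder().geocode({ address: address }, function (results, status) {
            if (status === 'OK' && results.length) {
                var loc = results[0].geometry.location;
                callback(loc);
                $.post('/geocode-cache', { address: address, lat: loc.lat(), lng: loc.lng() });
            }
        });
    });
}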
I am using ASP.NET 2.0. I have 5 web pages in my project. I want to calculate the length of time for which a visitor views each page.
Outline for a tracking system:
Server-side: When the visitor requests the page, generate a unique id and include it in the page. Save the unique id with the user, page, and any other information you wish to track.
Client-side using javascript: When window.onunload fires, send an ajax call to the server to say the user has finished with the page identified by the unique id. Looking up the id saved in step 1, update the length of time visited.
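A minimal client-side sketch of that second step (the endpoint name and the pageViewId variable are placeholders; navigator.sendBeacon is used here because an ordinary AJAX call can be cancelled while the page is unloading, so older browsers would need a synchronous XMLHttpRequest instead):
// pageViewId is the unique id generated server-side and written into the page.
window.addEventListener('unload', function () {
    // sendBeacon queues the request so it survives the page being torn down.
    navigator.sendBeacon('/page-view-ended.aspx', 'id=' + encodeURIComponent(pageViewId));
});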
Alternatively, use something like Google Analytics, which does a stellar job of tracking visitors.
In ASP.NET MVC, what is a good way (the preferred way??) to time how long a user has been on a specific page? For example, I want the user to select something and then only allow the user to do something for 30 seconds. Good links or a reference to a page of a book would be much appreciated.
Thanks in advance!
You can keep track of when a user went somewhere and how long they have been there (or had the page open while balancing their checkbook... you get the idea). The problem is that you need some form of client-side control to let them see XYZ for 30 seconds and then redirect them to the next page. So if you want the user to see a resource for X amount of time, you need to employ a client-side JavaScript timer to take them away from the resource when their time has expired. This can be done with the time statically coded into the client (which could be changed by the client), or by making an AJAX request to the server to see if the time has expired, or with an embedded Flash player. The key here is that your server side doesn't have as much power over the client side as what you are requiring. Most testing sites deploy some form of this client-side JavaScript to keep track of what the user is doing, when, and for how long.
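As a rough sketch of the "ask the server whether time is up" variant (the /session-expired URL and its response shape are made up, and jQuery is assumed):
// Poll the server every few seconds; the server compares the stored start time
// against the 30-second allowance and answers with { expired: true } or { expired: false }.
var poll = setInterval(function () {
    $.getJSON('/session-expired', { page: 'XYZ' }, function (result) {
        if (result.expired) {
            clearInterval(poll);
            window.location.href = '/time-is-up';
        }
    });
}, 5000);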
One pretty easy way is to store the last time the user was on the page in a database - the table could have, for example, the fields UserID, Page, and TimeStamp. Whenever the user tries to do whatever you only want to allow for 30 secs, you check against the database if the time has passed or not. (For such short periods of time as 30 seconds, a database might be a little too slow, though... Depends on your requirement of precision, I guess...).
You could use JavaScript's setTimeout() function:
// Called once the 30 seconds have elapsed
var timeOut = function () {
    alert('Time is up!');
};

setTimeout(timeOut, 30000); // 30000 ms = 30 seconds
Or you could use a <meta> tag:
<meta http-equiv="refresh" content="30;url=http://www.example.com/time-is-up.html">