I have an ASP.NET website where I need to implement Most Viewed, Most Shared, and Most Downloaded features. We decided to go with Urchin for this. I don't know how to start, and I've googled a lot but haven't found any resources on the subject. Can anyone suggest some links or explain how to get started?
Waiting for your reply.
Thanks,
If you've decided to use Urchin, this is a good place to start:
https://secure.urchin.com/helpwiki/en/Help_Center.html
then here for advanced issues:
https://secure.urchin.com/helpwiki/en/Data_API.html
Urchin generates reports by reading web server log files. Create a Log Source in Urchin that specifies the location of your IIS logs, add the Log Source to a Profile, and you will see data in the reports.
Urchin Profiles have two options for tracking methods: UTM and IP+UA. UTM requires JavaScript page tags, and IP+UA uses the existing data in your log files. If you're new to Urchin, you probably don't have UTM code on your website so IP+UA is your best option for seeing data in the reports.
If I wanted to build a scraper that pings each URL on a site and stores the Adobe (or Google) analytics image request, how would I go about it? That is, I just want something that grabs all the parameters in the URL posted to Adobe and saves them to a CSV or something similar. I'm familiar with building simple web scrapers, but how do I grab the URL I see in, for example, Fiddler that contains all the variables being sent to the analytics solution?
If I could do this, I could run a script that lists all URLs with the corresponding tracking events being fired, and it would make QAing much more manageable.
You should be able to query the DOM for the image object created by the tag request. I'm more familiar with the IBM Digital Analytics (Coremetrics) platform, where you can find the tag requests by accessing the array document.cmTagCtl.cTI in the Web Console on a Coremetrics-tagged page. I used this method when building a Selenium WebDriver test case that needed to verify the analytics tags.
I don't have the equivalent for Adobe or GA at the moment, since it depends on the library implementation; I'm trying to do the same as you for GA.
Cheers,
Jamie
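A rough C# sketch of the approach above, assuming the Selenium WebDriver bindings for .NET and that each entry in document.cmTagCtl.cTI is the beacon URL as a string (this is Coremetrics-specific; the Adobe/GA equivalents would need a different expression). The URLs and output path are placeholders, and CSV values are not escaped here:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Web;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class TagRequestScraper
    {
        // Visits each URL, reads the Coremetrics tag-request array mentioned above,
        // and writes one CSV row per parameter of each beacon request.
        public static void Run(IEnumerable<string> urls, string csvPath)
        {
            using (IWebDriver driver = new ChromeDriver())
            using (var csv = new StreamWriter(csvPath))
            {
                csv.WriteLine("page,tagRequest,parameter,value");

                foreach (var url in urls)
                {
                    driver.Navigate().GoToUrl(url);

                    var js = (IJavaScriptExecutor)driver;
                    var requests = js.ExecuteScript(
                        "return document.cmTagCtl ? document.cmTagCtl.cTI : [];")
                        as System.Collections.ObjectModel.ReadOnlyCollection<object>;
                    if (requests == null) continue;

                    foreach (var item in requests)
                    {
                        // Assumption: each entry is the full beacon URL; split off its query string.
                        var request = Convert.ToString(item);
                        int q = request.IndexOf('?');
                        var parameters = HttpUtility.ParseQueryString(
                            q >= 0 ? request.Substring(q + 1) : string.Empty);

                        foreach (string key in parameters)
                            csv.WriteLine("{0},{1},{2},{3}", url, request, key, parameters[key]);
                    }
                }
            }
        }
    }

The same loop could be dropped into a Selenium test case so every page you QA also asserts which tracking parameters were fired.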
I have a website like www.example.com with dynamic pages like www.example.com/page?1 and www.example.com/page?2, and more pages are created every hour. I need to generate a sitemap.xml file automatically, save it to a server path, and submit my latest pages to the Google search engine. How can I do this in ASP.NET? Any clues would be appreciated.
I was looking for similar information recently, and there is a similar topic. What you need is called a "web crawler": it works by finding all URLs in the HTML of a page, excluding links to other sites, and building a list of the links it finds. It then repeats those steps for each URL in the list, and as a result you get a list of addresses for all your web pages. From that list you can build the Sitemap.xml file; I used the .NET Framework class XmlTextWriter for this (a sketch follows below). As for automatically updating the sitemap, you can set a timer and regenerate the file, for example, once a day, or do it yourself every day. Good luck.
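A minimal XmlTextWriter sketch for the sitemap-building step, assuming you already have the list of page URLs produced by the crawler; the lastmod value and file path are illustrative:

    using System;
    using System.Collections.Generic;
    using System.Text;
    using System.Xml;

    class SitemapWriter
    {
        // Writes a minimal sitemap.xml for the given page URLs.
        public static void Write(string path, IEnumerable<string> pageUrls)
        {
            using (var writer = new XmlTextWriter(path, Encoding.UTF8))
            {
                writer.Formatting = Formatting.Indented;
                writer.WriteStartDocument();
                writer.WriteStartElement("urlset", "http://www.sitemaps.org/schemas/sitemap/0.9");

                foreach (var url in pageUrls)
                {
                    writer.WriteStartElement("url");
                    writer.WriteElementString("loc", url);
                    writer.WriteElementString("lastmod", DateTime.UtcNow.ToString("yyyy-MM-dd"));
                    writer.WriteEndElement(); // </url>
                }

                writer.WriteEndElement(); // </urlset>
                writer.WriteEndDocument();
            }
        }
    }

The timer job (or a scheduled task) would then call something like SitemapWriter.Write(Server.MapPath("~/sitemap.xml"), crawledUrls) once a day.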
I need to do this:
1) log in to a certain website using my clients' usernames and passwords - they are aware of that, of course :)
2) navigate inside, a matter of 6 steps
3) download a .csv file from the site
It's a water meter reading site, and I want to update my DB automatically every hour.
Using WebBrowser in C#, it works great. But I need it to run on a server all the time so the info stays up to date.
Web services aren't an option because the reading site has nothing to do with me (it's a 3rd-party company, etc.).
So basically, what I need is to mimic the WebBrowser control; what I found on CodeProject didn't help me.
I'm checking whether CGI can do the trick, but perhaps I'm off track here.
thanks for your help!
You can use the HttpWebRequest/HttpWebResponse classes in the System.Net namespace. They don't mimic the WebBrowser control, but they do allow you to make the requests you want (a sketch follows below).
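A minimal sketch of that approach, assuming a simple form-based login and that the CSV is reachable by URL once the session cookie is set; the URLs and form field names below are hypothetical and would have to match the real site:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class MeterReadingDownloader
    {
        // Hypothetical URLs and form field names -- adjust to the real site.
        const string LoginUrl = "https://example.com/login";
        const string CsvUrl   = "https://example.com/readings/export.csv";

        public static void Download(string user, string pass, string savePath)
        {
            var cookies = new CookieContainer();

            // 1) Post the login form so the session cookie ends up in the container.
            var login = (HttpWebRequest)WebRequest.Create(LoginUrl);
            login.Method = "POST";
            login.ContentType = "application/x-www-form-urlencoded";
            login.CookieContainer = cookies;

            byte[] body = Encoding.UTF8.GetBytes(
                "username=" + Uri.EscapeDataString(user) +
                "&password=" + Uri.EscapeDataString(pass));
            using (var stream = login.GetRequestStream())
                stream.Write(body, 0, body.Length);
            using (login.GetResponse()) { /* discard the post-login page */ }

            // 2) Request the CSV with the same cookie container (same session).
            var csv = (HttpWebRequest)WebRequest.Create(CsvUrl);
            csv.CookieContainer = cookies;
            using (var response = csv.GetResponse())
            using (var responseStream = response.GetResponseStream())
            using (var file = File.Create(savePath))
                responseStream.CopyTo(file);
        }
    }

The six intermediate navigation steps would become additional requests chained between these two, all sharing the same CookieContainer; Fiddler is useful for finding out exactly which URLs and form fields each step posts.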
I have this ASP.NET web site that allows users to download program installation packages (just normal files). I want to be able to track when a download is completed (i.e. the file has been fully downloaded to the user's computer) and then invoke a Google Analytics script that reports a completed download as a 'Goal' (obviously, one of my goals is to increase file downloads).
The problem is that I need to support direct file URLs, as opposed to the "redirect page" solution. This is because a lot of traffic comes from software download sites that explicitly demand a direct file URL when submitting a product. Perhaps they do their own file analysis (e.g. virus checking). But with this set of limitations, a typical scenario is:
The user visits my product listing on a software download site
The user clicks the "Download" button on this site
The "Download" page is typically a redirect that finally brings the user to my file via the direct URL I've initially submitted, i.e. http://www.ko-sw.com/somefile.exe
If, under these conditions, an exact solution for monitoring is not possible, maybe there is a workaround? What comes to my mind is temporarily storing the number of completed downloads on the server and then accessing an administrative page that somehow reports this number to Google Analytics and finally resets it to zero. With this workaround, there is at least no need to try to attach a JavaScript handler to a non-HTML resource. But even then there are issues:
How to track if a download has completed?
How to track user geolocation and browser capabilities to make them further visible in the reports?
Thanks everybody in advance
According to awstats, an aborted download has HTTP status code 206 (Partial Content), so if you analyze the server log for that status code you can find the downloads that were not completed (see the sketch below).
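If your logs are in the IIS W3C format, a sketch of counting a given status code per URL could look like this; the field names follow the standard #Fields directive, and which codes you treat as "aborted" is up to you, based on the suggestion above:

    using System;
    using System.Collections.Generic;
    using System.IO;

    class LogStatusCounter
    {
        // Counts how often each requested path was answered with a given status code
        // (e.g. "206" for partial/aborted transfers vs. "200" for full responses).
        public static Dictionary<string, int> CountByStatus(string logPath, string statusCode)
        {
            var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
            int uriIndex = -1, statusIndex = -1;

            foreach (var line in File.ReadLines(logPath))
            {
                if (line.StartsWith("#Fields:"))
                {
                    // The #Fields directive names the columns used by the entries that follow.
                    var fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                    uriIndex = Array.IndexOf(fields, "cs-uri-stem");
                    statusIndex = Array.IndexOf(fields, "sc-status");
                    continue;
                }
                if (line.StartsWith("#") || uriIndex < 0 || statusIndex < 0)
                    continue;

                var parts = line.Split(' ');
                if (parts.Length <= Math.Max(uriIndex, statusIndex))
                    continue;

                if (parts[statusIndex] == statusCode)
                {
                    var uri = parts[uriIndex];
                    int n;
                    counts.TryGetValue(uri, out n);
                    counts[uri] = n + 1;
                }
            }
            return counts;
        }
    }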
#Kerido, I'm curious what the business case is here. Are you trying to track installs or downloads? If installs, go with #SamMeiers' solution.
However, if you're trying to track downloads, then the next question is what webserver base are you using? IIS? Apache? Something else?
In IIS, assuming you're using version 7 (or later), you could (fairly easily) write an HttpHandler that checks for the last bytes of the file being sent and, at that point, records a log entry somewhere (see the sketch below).
On Apache, just set up logging to tell you how many bytes were transferred (a trivial change in httpd.conf), then parse the logs daily (awstats, amongst others, is pretty good for this, but you might have to write a sed/awk script) and find out how many full transfers were completed. It just depends on how thorough you're trying to be.
But I go back to, what's the business case for this? What does it matter if there were unfinished downloads?
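A hedged sketch of the HttpHandler idea: stream the file in chunks and only record the download once the last chunk has been written while the client is still connected. Writing the last byte doesn't strictly prove the client saved the file, but it's a reasonable approximation; the log path is illustrative, and the handler would need to be registered for the download extension in web.config:

    using System;
    using System.IO;
    using System.Web;

    public class DownloadHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string path = context.Server.MapPath(context.Request.FilePath);
            context.Response.ContentType = "application/octet-stream";
            context.Response.BufferOutput = false; // stream instead of buffering the whole file

            bool completed = false;
            using (var file = File.OpenRead(path))
            {
                var buffer = new byte[64 * 1024];
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (!context.Response.IsClientConnected)
                        return; // client went away: do not count this download
                    context.Response.OutputStream.Write(buffer, 0, read);
                    context.Response.Flush();
                }
                completed = true;
            }

            if (completed)
            {
                // Record the completed download; a database would work just as well as a flat file.
                File.AppendAllText(context.Server.MapPath("~/App_Data/downloads.log"),
                    DateTime.UtcNow.ToString("o") + "\t" + context.Request.FilePath + Environment.NewLine);
            }
        }
    }

The administrative page mentioned in the question could then read this log, report the totals to Google Analytics, and truncate the file.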
It's possible to track links as a goal, which may be of use to you. However, this won't track when the download was completed.
http://www.google.com/support/analytics/bin/answer.py?answer=55529
Hope this helps.
Cheers
Tigger
I think #SamMeiers' solution is very good, but you could optimize it by calling a web service after the installation completes. You might run into a small problem if the user installs the app in an environment without internet access, so you may have to check whether a connection is available.
You can set a flag when the installation starts, and then, when it finishes, check whether that start flag exists; if it does, the app has been both downloaded and installed. A minimal sketch follows below.
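A minimal sketch of the "ping a web service after install" idea, called at the end of the installer; the tracking URL is hypothetical and would be a page on your own site that records the completed install (and could forward it to Google Analytics server-side):

    using System;
    using System.Net;

    static class InstallTracker
    {
        // Hypothetical endpoint on your own site that records a completed install.
        const string TrackingUrl = "http://www.example.com/track-install.ashx?product=somefile";

        public static void ReportInstallCompleted()
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(TrackingUrl); // fire-and-forget ping
                }
            }
            catch (WebException)
            {
                // No internet connection (or the site is unreachable): the install still
                // succeeds, we simply lose this data point. A "report on next run" flag
                // could be persisted here instead.
            }
        }
    }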
I am testing an ASP.NET website, and for that I have turned logging on in IIS 6.0.
Following are the observations during testing:
Each link, PNG image, MS Chart image, and CSS file is requested separately, one after another.
A request for, say, the login page takes around 30-45 seconds to complete; that page contains only 6 images, yet the log file shows a separate request for each image, one after another.
Can anybody help me improve the site's performance? I would also like to know whether it is possible for all requests to be sent to the server in parallel.
Yes, it is possible to improve the app's speed by parallelizing the downloads!
I recommend going through Google Page Speed and Yahoo's YSlow and reading the practices they propose; I found them informative.
http://code.google.com/speed/page-speed/
http://developer.yahoo.com/yslow/help/index.html
Thanks
First of all, have you checked the website's Performance tab? Limits could have been set there. Also check that keep-alives are enabled (on the Web Site tab).
Then you should profile your server using System Monitor.
If everything mentioned is ok, you should check client side and what's between client and server.
What's happening is that the browser makes HTTP requests to the server for each object it finds on the page. You can eliminate those requests, or reduce how often they happen, by enabling client-side caching. For static files, you can configure that in IIS.
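For content that is served through ASP.NET code (an image handler, a charting handler, etc.), a hedged sketch of enabling client-side caching looks like this; for plain static files the same effect comes from IIS's static-content settings rather than code, and the handler name, path, and 7-day lifetime below are illustrative:

    using System;
    using System.Web;

    public class CachedImageHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Tell the browser (and proxies) it may reuse this response for 7 days,
            // so repeat page views skip the extra image/CSS requests entirely.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(7));
            context.Response.Cache.SetMaxAge(TimeSpan.FromDays(7));

            context.Response.ContentType = "image/png";
            context.Response.WriteFile(context.Server.MapPath("~/images/logo.png")); // illustrative path
        }
    }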
You can parallelize requests for images (not JS files) by assigning them to different domains; if they are all in a single domain, the browser will request only two at a time.
However, your question opens the door to a big subject. In an attempt to provide a detailed answer, I ended up writing a book on the subject, called Ultra-Fast ASP.NET. I cover the answer to the question from the OP in great detail in Chapter 2.