What is the best tool to see a website's traffic peak? [closed] - google-analytics

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
I want to know at what time my website had the most visitors. How can I check this peak? Can I do it with Google Analytics?

Yes, you can do this with Google Analytics. Note that Google takes steps to prevent reporting at a level of detail that could violate a user's privacy, so you may not get exactly the answer you want if you are looking at too long a history, or too few users visited over the period you are analyzing.
The other thing to be careful of is that most of the data is geared toward sessions, whereas your question asks about visitors.
The best way to measure this sort of data is if you have unique advertising identifiers, cookies, or a login page captured in a way that lets you follow a unique visitor as they navigate through your site, and treat that separately from a user who double-clicked on a particular page. If you have a database, log files, an event stream, or are even able to capture the IP addresses the visits are coming from, along with timestamps, this can get you a better answer than can be achieved directly from Google Analytics.
Another caveat with GA is that data may continue to trickle in over a 24-hour period or longer, so if the spike just happened, you'll want to wait a while before making a final analysis.
With Google Analytics, if you are trying to investigate a single peak in your data on a single day, and that day occurred at least 24 hours ago, then here's an example of one way to get time-segmented traffic data for a website:
Log in to Google Analytics.
On the left-hand side, expand the "Audience" tab and click "Overview".
Select the date range in the top right, for instance April 1 – April 1, 2016.
Slightly below that, select "Hourly" instead of "Day" (top right of the chart).
In the top left, click "Users" instead of "Sessions".
You should now see an hourly view of the traffic over the course of April 1.
I've attached a screenshot of this final page, taken from an old Google Analytics account that is tracking a defunct website (which is why there are so few data points).
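If you prefer to pull the same hourly numbers programmatically, here is a minimal sketch using the Analytics Reporting API v4 and a service-account key. The view ID, key file name, and date are placeholders; you would substitute your own, and the API must be enabled for your project.

```python
# Minimal sketch: hourly user counts for one day via the Analytics Reporting API v4.
# "key.json" and "VIEW_ID" are placeholders for your service-account key and GA view.
from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "key.json", scopes=["https://www.googleapis.com/auth/analytics.readonly"]
)
analytics = build("analyticsreporting", "v4", credentials=credentials)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "VIEW_ID",
        "dateRanges": [{"startDate": "2016-04-01", "endDate": "2016-04-01"}],
        "metrics": [{"expression": "ga:users"}],
        "dimensions": [{"name": "ga:hour"}],
    }]
}).execute()

# Print users per hour; the hour with the largest count is your peak.
for row in response["reports"][0]["data"].get("rows", []):
    print(row["dimensions"][0], row["metrics"][0]["values"][0])
```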

Related

Firebase spend and AdMob ads [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 3 years ago.
I would like to ask about the cost of Firebase versus AdMob ad earnings for an Android app. I am building a social networking app that uses the Firebase Realtime Database heavily: it has messaging, multi-person group messaging, profile visits, and more. I will only use Firebase as a database, so a lot of data will be stored there. For now, the only way the app will make money is AdMob banner ads (which will appear on all app screens) and some interstitial ads. It is a social network where users will spend a lot of time, so ads will be displayed all the time.
The problem is that I'm afraid Firebase's monthly bill will end up higher than the month's ad revenue (I'll use the Blaze pay-as-you-go plan). Can the ad impressions a user generates be worth the cost of that same user's reads, writes, and deletes in the Realtime Database? What do you think?
It depends on how long it takes you to get many people using the app.
Each time a screen containing a banner ad loads, it counts as a page view.
The more people use your app, the more page views you accumulate, and earnings are paid by Page RPM (Revenue Per Mille, i.e. per 1,000 views).
You need a separate banner ad unit for each page, so if the app has a few pages that load regularly, the views should add up quickly with many users.
The amounts are small, though, usually a couple of dollars per 1,000 page views; interstitial ads pay much more but are an inconvenience for the user.
People get used to ignoring banner ads as long as they don't interfere with the functionality of the app; interstitial ads need to be placed strategically.
My two apps, published within the last 10 days, don't have many users yet and have totalled around 3,000 page views in that time, which looks to be only around $6 (for those 10 days).
In the same period there were only 300 impressions, which pay around $20 per 1,000.
I have 3 banners and two interstitials.
I'm trying to build up the number of users at the moment, which will cost more in advertising too.
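For a rough sense of how those figures relate, the RPM arithmetic is simply views divided by 1,000, times the rate. A quick sketch (the rates are the approximate ones quoted above, not guaranteed AdMob payouts):

```python
def estimated_revenue(views, rpm):
    # RPM = revenue per 1,000 views or impressions.
    return views / 1000 * rpm

# Roughly matches the figures above (assumed rates, not quotes from Google):
print(estimated_revenue(3000, 2.0))   # ~3,000 page views at ~$2 RPM        -> ~$6
print(estimated_revenue(300, 20.0))   # ~300 impressions at ~$20 per 1,000  -> ~$6
```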

Is it ok to scrape data from Google results? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 5 years ago.
I'd like to fetch results from Google using curl to detect potential duplicate content.
Is there a high risk of being banned by Google?
Google disallows automated access in its Terms of Service, so if you accept those terms you would be breaking them.
That said, I know of no lawsuit from Google against a scraper.
Even Microsoft scraped Google; they used it to power their search engine Bing, and were caught red-handed in 2011 :)
There are two options to scrape Google results:
1) Use their API
UPDATE 2020: Google has deprecated its previous APIs (again) and has new prices and new limits. Now (https://developers.google.com/custom-search/v1/overview) you can query up to 10k results per day at 1,500 USD per month; more than that is not permitted, and the results are not what Google displays in normal searches.
You can issue around 40 requests per hour. You are limited to what they give you; it's not really useful if you want to track ranking positions or see what a real user would see. That's something you are not allowed to gather.
If you want a higher number of API requests you need to pay: 60 requests per hour cost 2,000 USD per year, and more queries require a custom deal.
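For completeness, here is a minimal sketch of querying the Custom Search JSON API mentioned above with Python's requests library. The API key and search engine ID (cx) are placeholders you would create in the Google Cloud console, and the daily quota and pricing limits described above still apply.

```python
import requests

API_KEY = "YOUR_API_KEY"   # placeholder: API key from the Google Cloud console
CX = "YOUR_ENGINE_ID"      # placeholder: Programmable Search Engine id

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={"key": API_KEY, "cx": CX,
            "q": '"some phrase to check for duplicates"', "num": 10},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["link"], "-", item["title"])
```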
2) Scrape the normal result pages
Here comes the tricky part. It is possible to scrape the normal result pages.
Google does not allow it.
If you scrape at a rate higher than 8 keyword requests per hour (updated from 15) you risk detection; higher than 10/h (updated from 20) will get you blocked, in my experience.
By using multiple IPs you can raise the rate, so with 100 IP addresses you can scrape up to 1,000 requests per hour, about 24k a day (updated).
There is an open-source search engine scraper written in PHP at http://scraping.compunect.com
It scrapes Google reliably, parses the results properly, and manages IP addresses, delays, and so on.
So if you can use PHP it's a nice kickstart; otherwise the code will still be useful for learning how it is done.
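As a rough illustration of option 2 only (remember this violates Google's TOS and risks blocks, as noted above), here is a hedged sketch of spreading keyword requests over time and over a pool of proxies. The proxy addresses and keywords are placeholders, and the HTML parsing is left out.

```python
import random
import time
import requests

PROXIES = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]  # placeholder proxy pool
KEYWORDS = ["keyword one", "keyword two"]                    # placeholder queries

for keyword in KEYWORDS:
    proxy = random.choice(PROXIES)
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": keyword},
        headers={"User-Agent": "Mozilla/5.0"},   # browser-like UA; still easily detected
        proxies={"http": proxy, "https": proxy},
        timeout=30,
    )
    # ...parse resp.text with an HTML parser of your choice...
    # Stay well under the ~8-10 requests per hour per IP mentioned above.
    time.sleep(random.uniform(360, 600))
```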
3) Alternatively use a scraping service (updated)
Recently a customer of mine had a huge search engine scraping requirement, but it was not "ongoing"; it was more like one huge refresh per month.
In this case I could not find a self-made solution that was "economic".
I used the service at http://scraping.services instead.
They also provide open-source code, and so far it's running well (several thousand result pages per hour during the refreshes).
The downside is that such a service means your solution is "bound" to one professional supplier; the upside is that it was a lot cheaper than the other options I evaluated (and faster in our case).
One option to reduce the dependency on a single company is to use two approaches at the same time: the scraping service as the primary source of data, falling back to a proxy-based solution as described in 2) when required.
Google will eventually block your IP when you exceed a certain number of requests.
Google itself thrives on scraping the websites of the world, so if scraping were "so illegal" then even Google wouldn't survive. Of course, other answers mention ways of mitigating IP blocks by Google. One more way to avoid captchas might be to scrape at random times (I haven't tried it). Moreover, I have a feeling that if you provide novelty or some significant processing of the data, it sounds fine, at least to me; if you are simply copying a website, or hampering its business or brand in some way, then it is bad and should be avoided. On top of it all, if you are a startup, no one will fight you, as there is no benefit; but if your entire premise rests on scraping even once you are funded, then you should eventually think of more sophisticated ways or alternative APIs. Also, Google keeps releasing (or deprecating) fields in its API, so what you want to scrape now may be on the roadmap of new Google API releases.

Using GoogleAnalytics to keep track of sessions [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 9 years ago.
Is it possible to use Google Analytics to keep track of individual user sessions? I mean not the average time spent by users on the website, but how long each user stayed on the website in total.
My hosting plan does not include databases, so I'd like to use Google Analytics to keep track of user sessions. Can it be done? I was unable to find anything satisfying on the web. :(
No, not really - Google Analytics doesn't track on a per-user basis (because that's actually quite useless, among other things).
You could do a workaround, however. Assign each visitor an id (or pull the visitor id from the Analytics cookie via JavaScript). Then trigger an e-commerce transaction with that id as the transaction id and the time on site (use the timestamps from the GA cookie) as the transaction value; Google will add up transaction values when the transaction id is the same. This idea is untested and obviously needs some work, but it should be doable.
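The answer above describes a purely client-side, cookie-based trick. As an illustration only, here is roughly the same idea expressed as a server-side hit via the Universal Analytics Measurement Protocol, which is a different mechanism than the one described (and one Google has since retired); the property id, client id, and duration are placeholders.

```python
import requests

payload = {
    "v": "1",                 # Measurement Protocol version
    "tid": "UA-XXXXXX-Y",     # placeholder: your UA property id
    "cid": "555.1234567890",  # placeholder: client id read from the visitor's GA cookie
    "t": "transaction",       # hit type
    "ti": "555.1234567890",   # transaction id = client id, so GA sums values per visitor
    "tr": "42",               # "revenue" = seconds on site for this visit (placeholder)
}
requests.post("https://www.google-analytics.com/collect", data=payload, timeout=10)
```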
Which leaves the question: why? What kind of business decision do you want to base on a vast list of useless info? The average values are much more useful.
Update: after reading your comment, you're going about it the wrong way. You want an advanced segment that excludes visits with a duration smaller (or greater, whatever) than 10 seconds.
Google Analytics does not allow you to track individual users.
See this thread:
http://productforums.google.com/forum/#!topic/analytics/tTaqssN7sY8
Try out Woopra: http://www.woopra.com/
You might be able to filter by advanced segments or use custom reports to do your own filtering. Or create a custom metric that itself is filtered.

How long should I wait to see a Sprint Burndown chart in RTC? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
We've just set up a Rational Team Concert v3 system. The data was loaded on Friday, but there was an issue connecting to the report data warehouses that was not fixed until today (Monday). We've fixed it, and the data load operations seem to be finishing correctly now.
I'm desperately eager to see a burndown chart - even though I know that in 24 hours we won't really have enough data to make it useful. I'm also eager to see just about any report from the RTC server, as we want to be able to share as much information as possible with the customer, and this is a trial for RTC as a large team tool.
How long should one expect it to take before RTC is able to show reports relating to work items? We've already cached several data updates - but only within the last few hours.
Should we wait 24 hours? 48? should it show up immediately? Haven't found any good heuristics for this on the Rational site.
You need a few things to happen to get a decent burndown chart in RTC:
Run the Data Warehouse job (this happens every 24 hours automatically, or you can trigger it manually from the Reports page in the Admin section).
Get some work done - complete tasks, set Stories to Completed, etc. The burndown is a graph of work done over time.
You should see progress on the chart after the two events above occur.
Another thing to check: is that specific burndown chart pointed at the right project and team?
If that does not work, you may want to raise the question with IBM support (it sounds like something is wrong), or raise it on the RTC forum at jazz.net.
Closure - It turns out we had several problems. Problems included:
- incorrect account setup for the account syncing between RTC and the data warehouse - we had to both make a new account and set up more privileges for it.
- a truly messed-up set of sprints. I don't know what went wrong with the sprints that were first set up (by default!) with the project, but they never synced properly. Moving tasks to a newly made sprint caused them to show up properly in reports (after a sync), but the original sprints were simply broken. The eventual workaround was to make new sprints with the same dates and move all assigned stories/tasks to them.
The final answer was - the data should show up instantly after a sync. If you think your sync shows new data and you don't see a change in your report, then you have a problem.
Other notes - the data in the selection fields of "edited" reports is based on the data in the warehouse. If you don't see a sprint or release there, it means the warehouse has no data in the column you are looking for. Report business logic seems to vary by report; in some cases, not being able to select a sprint (or not having a sprint in the data that matches the "current iteration") will cause empty reports.

Is anybody happily using Google Analytics with big websites? (million+ pages, million+ monthly visitors) [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 9 years ago.
I was a happy customer of Google Analytics starting from the Urchin times. But something strange happened a few months ago and GA started showing a fake URL called "(other)" that is credited between 5% and 45% of all site traffic. I've tried filtering out some URL parameters to reduce the number of pages. Currently GA shows only 150,000 pages on my site, which is well below the half million limit that some people are talking about. Still, the page "(other)" is showing as the most popular page on my site.
Is anybody else struggling with this issue? I am wondering whether this could be a scalability issue. My site has been growing over the years, and currently doing 1.25 million unique monthly visitors and over 10 million pageviews. The site itself has around half a million pages. If you are successfully using GA with a bigger website than mine, please share your story. Are you using the Sampling feature of their tracking script?
Thanks!
For a huge website like yours I would not use a free analytics product. I would use something like Webtrends or another paid analytics tool. We cannot blame GA for this, after all it's a free service ;-)
GA has page view limits too (5 million page views).
Just curious: how long did it take you to add the analytics code to your pages? ;-)
In Advanced Web Metrics with Google Analytics, Brian Clifton writes that above a certain number of page views, Google Analytics is no longer able to list all the separate pages and starts aggregating the low-volume ones under the "(other)" entry.
By default, Google Analytics collects pageview data for every visitor. For very high traffic sites, the amount of data can be overwhelming, leading to large parts of the “long tail” of information to be missing from your reports, simply because they are too far down in the report tables. You can diminish this issue by creating separate profiles of visitor segments—for example, /blog, /forum, /support, etc. However, another option is to sample your visitors.
I get about 3.5 million hits a month on one of my sites using GA. I don't see (other) listed anywhere. Specifically what report are you viewing? Is (other) the title or URL of the page?
You can get a loooonnnngggg way on Google Analytics. I had a site doing about 25mm uniques/mo. and it was working for us just fine. The "other" bucket fills up when you hit a certain limit of pageviews/etc. The way around this is to create different filters on the data.
For a huge website (millions of page views per day), you should try out SnowPlow:
https://github.com/snowplow/snowplow
This will give you granular data down to the individual page URLs (unlike Google Analytics at that volume) and, because it is based on Hadoop/Hive/Infobright, it will happily scale up to billions of page views.
It's more to do with a daily limit on the number of unique values for a metric they will report on. If your site uses query-string parameters, all those unique values and parameter variations are seen as separate pages and push the report over the limit of 50,000 unique values per day for a metric. To eliminate this, add all the big query-string culprits to the ignore list, making sure, however, not to add any search query-string names if site search is in use.
In the Profile Settings, add them to the "Exclude URL Query Parameters" text box, delimited by commas. Once I did this, the (other) entry went away from the reports. It takes effect from the point they are added; previous days will still show (other).
