I'm trying to import a list from:
https://www.ebay.com/sch/i.html?_from=R40&_nkw=silver+chain&_sacat=0&rt=nc&LH_Sold=1&LH_Complete=1
into a Google Spreadsheet using the =IMPORTHTML function. The formula I am using is below:
A1:
https://www.ebay.com/sch/i.html?_from=R40&_nkw=silver+chain&_sacat=0&rt=nc&LH_Sold=1&LH_Complete=1
A2:
=IMPORTHTML(A1,"list",4)
But it returns incorrect results.
For example:
eBay - Alloy (34,237)
Google Sheets - Alloy (42,069)
Can somebody help me? I'm new to Google Sheets scripting and I'd be really grateful for any help.
You may be getting a discrepancy in results based on your location.
If you are located overseas, for example, eBay may limit the number of results to sellers who will ship to your region. When the Google server makes the request, it does so from a location different from your own, and thus may get a different total.
I run across this from time to time. If you're logged into eBay, you may try changing your shipping address and see what it does to the total results you get.
I just started using Google Analytics and I want to import the data from my existing analytics tool (AWStats) into GA.
I see that GA has a submenu where one can import data, but I don't seem to find any export menu in AWStats.
I asked my webmaster before bothering you, but he didn't have an idea.
I checked the web and couldn't find any export-related topics.
Attached is a screenshot of the submenus for AWStats.
Has anybody tried this before?
Thanks in advance.
GA data import cannot create hits; it can only amend already existing hits with additional data. So the answer is pretty much "no".
(You could pipe in new hit data via the Measurement Protocol, but you cannot set a date/timestamp, and there is a limited hit quota per second, so this would take a long time and you would lose the original time information on your hits, which would probably render the data useless.)
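For context, this is roughly what a single Measurement Protocol hit looks like; a minimal Python sketch, where the property ID and client ID are placeholders you would replace with your own values:

    import requests

    # One pageview hit via the Measurement Protocol (v1 endpoint).
    # "UA-XXXXX-Y" and the client ID are placeholders, not real values.
    payload = {
        "v": "1",                # protocol version
        "tid": "UA-XXXXX-Y",     # your GA property ID
        "cid": "35009a79-1a05",  # an anonymous client ID you generate
        "t": "pageview",         # hit type
        "dp": "/imported-page",  # document path for the hit
    }

    resp = requests.post("https://www.google-analytics.com/collect", data=payload)
    print(resp.status_code)  # GA answers 200 even for invalid hits; validate via /debug/collect

Note that the hit is stamped with the time GA receives it, which is exactly why historical AWStats data cannot be replayed with its original dates.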
For my recent project, I'm trying to develop a brand new report for Google Analytics using the sessions data over a period of time.
When I compare the numbers I get from https://ga-dev-tools.appspot.com/query-explorer/ against the report we have created in analytics.google.com, the session numbers are off. They don't match exactly; they are off by about 1%. What might be the reason for this?
Can someone please help me here?
I can give more details if needed.
Thanks
Adding to the above, one more thing I noticed:
This happens only when I add a segment filter, to be specific. Without the segments, the numbers for all users look fine.
I had a call with Google Analytics support. They say the REST API and the front-end report are not expected to match exactly: the front-end report in Google Analytics has built-in logic to filter out certain personal information (age, gender, etc.), which is not applied when the REST API pulls in the data.
So this is the reason why the numbers are always off by about 1%.
Hope this helps.
Thanks
I'm doing some complex reports for Google Analytics and would like to ask you whether the following is possible. The client wants to have just organic data for a bunch of metrics, like pageviews, visitBounceRate, etc. The query I ended up with is the following:
https://www.googleapis.com/analytics/v3/data/ga?dimensions=ga:source,ga:medium,ga:keyword,ga:day,ga:month,ga:year&end-date=2013-11-20&fields=columnHeaders/name,rows,totalResults,totalsForAllResults&filters=ga:medium==organic&ids=ga:79067749&metrics=ga:pageviews,ga:pageviewsPerVisit,ga:visitors,ga:avgTimeOnSite,ga:newVisits,ga:visitBounceRate&start-date=2013-10-20
However, the response is as follows:
'{"totalResults":0,"columnHeaders":[{"name":"ga:source"},{"name":"ga:medium"},{"name":"ga:keyword"},{"name":"ga:day"},{"name":"ga:month"},{"name":"ga:year"},{"name":"ga:pageviews"},{"name":"ga:pageviewsPerVisit"},{"name":"ga:visitors"},{"name":"ga:avgTimeOnSite"},{"name":"ga:newVisits"},{"name":"ga:visitBounceRate"}],"totalsForAllResults":{"ga:pageviews":"0","ga:pageviewsPerVisit":"0.0","ga:visitors":"0","ga:avgTimeOnSite":"0.0","ga:newVisits":"0","ga:visitBounceRate":"0.0"}}'
Can the dimensions ga:source,ga:medium,ga:keyword be mixed with the above metrics? It seems they can't, since if I omit them the API returns an array of values, one per day within the specified range.
Where can I find more information about this and which combinations are valid? https://developers.google.com/analytics/devguides/reporting/core/dimsmets just shows all the available dimensions and metrics but does not explain how they can be combined or which requests would be valid. I'm new to the Analytics API, and any kind of help or guidance would be great.
Thanks a lot
Google Analytics Query Explorer is your friend for playing around with analytics dimensions/metrics/filters ;-)
Try http://ga-dev-tools.appspot.com/explorer/?dimensions=ga:source,ga:medium,ga:keyword,ga:day,ga:month,ga:year&metrics=ga:pageviews,ga:pageviewsPerVisit,ga:visitors,ga:avgTimeOnSite,ga:newVisits,ga:visitBounceRate&filters=ga:medium%253D%253Dorganic&start-date=2013-10-20&end-date=2013-11-20&max-results=100
Some thoughts:
Those dimensions & metrics should work -- maybe there was no organic data recorded during that time range?
Try removing the ga:medium==organic filter and see what your data looks like.
Does the profile you're using (ga:79067749) have any filters on it? If so, maybe try a different profile that has unfiltered data. (Analytics best practices -- make sure you have a profile with no filters applied that captures all data.)
As Mike said, there is no problem with the combination of metrics and dimensions you are using.
If you are entering the URL query directly in the browser, the problem might be a lack of URL encoding in your query string. For example, you need to convert == to %253D%253D
For example, instead of ga:medium==organic, you need ga:medium%253D%253Dorganic
If you build your query in the Google Analytics Query Explorer as Mike suggests, you can grab the direct link to your report by clicking the link symbol in the upper left:
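If you call the API from code rather than pasting the URL into the browser, an HTTP library will do the encoding for you. A minimal sketch in Python with requests; the access token is a placeholder and is assumed to come from your own OAuth 2.0 flow:

    import requests

    ACCESS_TOKEN = "ya29.PLACEHOLDER"  # obtain via your own OAuth 2.0 flow

    params = {
        "ids": "ga:79067749",
        "start-date": "2013-10-20",
        "end-date": "2013-11-20",
        "metrics": ("ga:pageviews,ga:pageviewsPerVisit,ga:visitors,"
                    "ga:avgTimeOnSite,ga:newVisits,ga:visitBounceRate"),
        "dimensions": "ga:source,ga:medium,ga:keyword,ga:day,ga:month,ga:year",
        "filters": "ga:medium==organic",  # requests URL-encodes this for you
    }

    resp = requests.get(
        "https://www.googleapis.com/analytics/v3/data/ga",
        params=params,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    print(resp.json().get("totalResults"))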
I think this question has been answered here before, but I could not find the desired topic. I am a newbie at web scraping. I have to develop a script that will take all the Google search results for a specific name. Then it will grab the related data for that name, and if more than one result is found, the data will be grouped according to the names.
All I know is that Google has some kind of restriction on scraping. They provide a Custom Search API. I still have not used that API, but I'm hoping to get all the result links for a query from it. However, I could not understand what the ideal process would be for scraping the information from those links. Any tutorial link or suggestion is very much appreciated.
You should have provided a bit more about what you have tried; it does not sound like you even attempted to solve it yourself.
Anyway, if you are still on it:
You can scrape Google in two ways; one is allowed, and one is not.
a) Use their API; you can get around 2k results a day.
You can up it to around 3k a day for 2,000 USD/year. You can up it further by getting in contact with them directly.
You will not be able to get accurate ranking positions from this method. If you only need a lower number of requests and are mainly interested in getting some websites for a keyword, that's the choice.
Starting point would be here: https://code.google.com/apis/console/
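To illustrate route a), here is a minimal sketch using the Custom Search JSON API via google-api-python-client; the API key and the custom search engine ID (cx) are placeholders you create in the Google console, and the query is just an example:

    from googleapiclient.discovery import build

    API_KEY = "YOUR_API_KEY"  # placeholder
    CX_ID = "YOUR_CX_ID"      # placeholder custom search engine ID

    service = build("customsearch", "v1", developerKey=API_KEY)

    # Fetch the first page of results for a name; each call returns up to 10 items.
    response = service.cse().list(q="John Doe", cx=CX_ID, num=10).execute()
    for item in response.get("items", []):
        print(item["title"], item["link"])

You can page through further results by passing the start parameter in subsequent calls.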
b) You can scrape the real search results
That's the only way to get the true ranking positions, for SEO purposes or to track website positions. It also lets you get a large number of results, if done right.
You can Google for code; the most advanced free (PHP) code I know of is at http://scraping.compunect.com
However, there are other projects and code snippets (a bare-bones sketch follows at the end of this answer).
You can start off at 300-500 requests per day, and this can be multiplied by using multiple IPs. Look at the linked article if you want to go that route; it explains things in more detail and is quite accurate.
That said, if you choose route b) you break Google's terms, so either do not accept them or make sure you are not detected. If Google detects you, your script will be blocked by IP ban or CAPTCHA. Not getting detected should be a priority.
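For completeness, a bare-bones illustration of route b) with requests and BeautifulSoup; keep the warning above in mind, and note that the markup assumed here (result titles as h3 tags nested inside links) changes frequently:

    import requests
    from bs4 import BeautifulSoup

    # Illustrative only: this breaks Google's terms and will get blocked quickly
    # without delays, proxies, and a realistic User-Agent.
    headers = {"User-Agent": "Mozilla/5.0"}
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": "some specific name", "num": 20},
        headers=headers,
    )
    soup = BeautifulSoup(resp.text, "html.parser")

    for h3 in soup.find_all("h3"):   # assumed: result titles are h3 tags
        link = h3.find_parent("a")   # assumed: each h3 sits inside the result link
        if link and link.get("href"):
            print(h3.get_text(), link["href"])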
I'll be working on a project that requires a live count of the tweets users have posted under a hashtag on Twitter, as well as the tweets themselves. Something along the lines of MTV's Twitter Tracker: http://vma-twittertracker.mtv.com/live/#buzz.
What intrigued me about this site is how they can constantly make API calls to Twitter without breaching the request limit.
I'd appreciate it if anyone could guide me on the most effective way to accomplish this. From the research I've carried out so far, I presume I will need to use Twitter's Streaming API.
Since there is a chance that the number of tweets output to my page could be in the thousands (AJAX loaded), along with stats on the number of retweets/favourites, what would be the most scalable approach within my .NET site? Any examples or guidance would be appreciated.
Check out Linq2Twitter. It is a great wrapper around the Twitter API, and provides two mechanisms that will help you:
There is a search function that allows you to search for hashtags, etc., which limits the amount of data you get back.
You have the option to request only data since a certain tweet ID. You can therefore search the feed incrementally by performing a search and, in subsequent calls, searching from the ID you left off on.
I have used this many times to search the public feed and have not had any issues to date. I think the search function is key to not requesting too much. Good luck!
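The answer above is about Linq2Twitter (.NET), but the incremental since_id pattern it describes is library-agnostic. A rough Python sketch against the v1.1 search endpoint, with the bearer token as a placeholder from your own app-only auth:

    import time
    import requests

    BEARER_TOKEN = "PLACEHOLDER"  # app-only auth token for your Twitter app
    SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

    def search_since(query, since_id=None):
        """One incremental call: only tweets newer than since_id are returned."""
        params = {"q": query, "count": 100}
        if since_id:
            params["since_id"] = since_id
        resp = requests.get(SEARCH_URL, params=params,
                            headers={"Authorization": "Bearer " + BEARER_TOKEN})
        resp.raise_for_status()
        return resp.json().get("statuses", [])

    last_seen = None
    while True:
        tweets = search_since("#vma", since_id=last_seen)
        if tweets:
            last_seen = max(t["id"] for t in tweets)  # remember where we left off
            print(len(tweets), "new tweets, latest id", last_seen)
        time.sleep(15)  # poll well inside the rate limit window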
You can look into the Storm framework. Below are a few links for further reference:
http://storm-project.net/
https://github.com/nathanmarz/storm
Thanks for all your responses.
It looks like sites that display a lot of Twitter stats/data use approved third-party providers that have direct access to Twitter's Firehose API.
I have managed to get in contact with an approved provider to supply us with the feeds of data required (and it ain't cheap!).