Organic Keyword Data vs. SEO Queries in Google Analytics - google-analytics

What is the difference between Keywords>Organic data and Search Engine Optimization>Queries data? Aren't they both showing (or in the case of Organic Keywords, not showing) keywords that visitors clicked to go to your URLs?

Organic search data in the acquisition reports is the search term plucked from the referring search engine URL that brought a visitor to the page, and it is connected to other data points (so, for example, for a provided search keyword you can see whether it led to a conversion).
SEO data is Google search keywords imported from Google Webmaster Tools. This report shows you what has been searched for in connection with your site (even if it didn't result in a visit), but it is not really connected to other data points in your reports.
To be a bit more concise: organic keywords are what brings visitors to your site; SEO queries are what makes your site show up in Google's search result pages.

Related

Track Internal Number of Search Results per search with Google Analytics

This is for a WordPress / WooCommerce setup.
I recently updated my site to Google Analytics 4 and have attempted to set up tracking of search results AND the number of search results per search. I have out-searched Google attempting to find any information on how to set this up...
To be more clear: I have tracking set up to track the search terms users search for and how many times a search term has been searched, BUT I'd really like to be able to track how many results are returned when someone searches for something. For example, if someone searches for "Pink Blanket", then I'd like to have a column for Search Results, showing that I have 4 pink blankets for sale on my site.
Why, you ask? Let's say over the course of the previous 30 days, I've had 18 searches for "Pinc Blanket", 32 searches for "Pink Blinket", and 1 search for "Pink Blanke". This would tell me people can't spell correctly, and I can use a few of those highly searched terms on my site to return results.
After out-searching Google, I came across this older blog article (https://mixedanalytics.com/blog/number-search-results-google-analytics-gtm/) that does exactly what I want, but it's for use with Universal Analytics and not GA4. I tried to set it up for GA4 but had no luck, as the two versions are quite different. Instead, I reverted back to GA3 / Universal Analytics and attempted the setup, but I'm still having issues getting the "Search Results" or the "Avg. Search Results" columns to collect any data:
Does anyone have any idea how to go about doing this?
Ideally, I'd like to get this working with GA4 since Universal Analytics will no longer work come July of 2023.
I feel like this information should be easily obtainable online but perhaps I am not searching for the right answers to my questions.
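For what it's worth, here is a minimal GA4-flavoured sketch of the idea from that article: read the result count off the results page and attach it to a search event as a custom parameter. The event and parameter names below are my own choice (not GA4 built-ins), and the `.woocommerce-result-count` selector is an assumption about the theme's markup:

```javascript
// Sketch: attach the number of results to a GA4 search event.
// Assumes gtag.js is loaded; 'search_results_count' would need to be
// registered as a custom metric in GA4 admin before it appears in reports.
function buildSearchEvent(searchTerm, resultCount) {
  return {
    search_term: searchTerm,
    search_results_count: resultCount
  };
}

// On a WooCommerce results page, the count could be scraped like this:
// var text = document.querySelector('.woocommerce-result-count').textContent;
// var count = parseInt((text.match(/\d+/) || ['0'])[0], 10);
// gtag('event', 'view_search_results', buildSearchEvent(query, count));
```

With the count sent as an event parameter and registered as a custom metric, it shows up as a column next to the search term in GA4 explorations.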

How to track SEO Conversion in Google Analytics?

I've been trying to track the conversion rate of users acquired through organic search to direct (users who discover the site through an organic medium and eventually start coming back to the site on their own as direct traffic). The way I decided to go about this was to create a segment with a sequence of the following kind:
The results that I get are very counter-intuitive which leads me to think that maybe my understanding of how sequencing works is not correct. What could I change to get a measure of organic to direct search conversion?
Changing the scope of the segment from 'Sessions' to 'Users' should fetch you the desired result.

Google Analytics Keyword shows URLs instead of keywords

I'm having some trouble with GA reports for one of my sites. Since the beginning of the month, GA is reporting URLs instead of keywords under Traffic Sources > Sources > Search > Organic, in the "keyword" column.
What can be causing this, and how can I fix it?
--
Additional info:
Have not set up custom filters in this account.
I get mixed keywords and URLs in the same "keyword" column.
Having the same issue, I scrutinized my server logs to get to the bottom of this. It seems that most organic searches with the domain as the keyword do originate from Google Maps (and other Google services): when people click the link in the marker on Maps, the URL is transmitted as the search term.
You need to select "Keyword" for the primary dimension. You've likely selected "Landing Page".
What you are observing is a normal, possible scenario. In the organic search report, under the keyword column, GA lists the search keywords that surfaced your site in the search engine results and subsequently led to incoming traffic.
Let the website you are tracking with Analytics be http://www.abc.com . Now assume someone searches, say in Google, for a full URL like http://www.xyz.com instead of just a word like xyz, and your site shows up among the search results. If this person clicks on your site's link in the results, Google Analytics will log the visit with traffic source search > organic and keyword http://www.xyz.com , as used for the search. If the person instead searches for http://www.xyz.com + someword and your site appears in the results and gets clicked, then in Analytics you will see the keyword http://www.xyz.com + someword, a combination of URL and word. This happens mostly when your site abc.com has a page that links to xyz.com or provides the full link, so your site is also indexed by search engines for searches containing that URL.

Google analytics: how to count occurrences of each item in internal search listing

At the company I work for, I am building a website that presents users' announcements. Announcements can be searched through our internal search engine. So far we have implemented the Google Analytics API to present our users with 'pageviews' information for their announcements, but they also want to know how often their announcement has been shown in search listings (probably to compare with pageviews and later modify some information, like the title or thumbnail of an announcement, to improve CTR). How can we collect such data? I obviously tried to google it, but couldn't find any information.
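One possible approach (a sketch, not a confirmed GA feature: the event and parameter names below are invented for illustration and would need to be registered as custom dimensions) is to fire an event for each announcement rendered on a results page, then compare those impression counts against pageviews:

```javascript
// Sketch: build one impression event per announcement shown in a
// search listing, so impressions can later be compared with pageviews.
function buildImpressionEvents(announcementIds) {
  return announcementIds.map(function (id) {
    return { name: 'listing_impression', params: { announcement_id: id } };
  });
}

// After rendering a results page (assumes gtag.js is loaded):
// buildImpressionEvents(visibleIds).forEach(function (e) {
//   gtag('event', e.name, e.params);
// });
```

Dividing an announcement's pageviews by its impression count then gives the listing CTR the users are asking for.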

Basic site analytics doesn't tally with Google data

After being stumped by an earlier question: SO google-analytics-domain-data-without-filtering
I've been experimenting with a very basic analytics system of my own.
MySQL table:
hit_id, subsite_id, timestamp, ip, url
The subsite_id lets me drill down to a folder (as explained in the previous question).
I can now get the following metrics:
Page Views - Grouped by subsite_id and date
Unique Page Views - Grouped by subsite_id, date, url, IP (not necessarily how Google does it!)
The usual "most visited page", "likely time to visit" etc etc.
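For reference, those two aggregations can be sketched over a list of hit rows like this (field names follow the table above; the grouping logic is mine and, as noted, not necessarily how Google computes uniques):

```javascript
// Sketch: page views and unique page views per (subsite_id, date),
// using the table's columns: hit_id, subsite_id, timestamp, ip, url.
function aggregateHits(hits) {
  var out = {};
  hits.forEach(function (h) {
    var day = h.timestamp.slice(0, 10);             // 'YYYY-MM-DD'
    var key = h.subsite_id + '|' + day;
    if (!out[key]) out[key] = { pageViews: 0, uniqueKeys: {} };
    out[key].pageViews += 1;
    out[key].uniqueKeys[h.url + '|' + h.ip] = true; // unique by url + IP
  });
  Object.keys(out).forEach(function (k) {
    out[k].uniquePageViews = Object.keys(out[k].uniqueKeys).length;
    delete out[k].uniqueKeys;
  });
  return out;
}
```

The same logic is a single `GROUP BY subsite_id, DATE(timestamp)` with a `COUNT(DISTINCT CONCAT(url, ip))` in MySQL.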
I've now compared my data to that in Google Analytics and found that Google has lower values for each metric, i.e. my own setup is counting more hits than Google.
So I've started discounting IP's from various web crawlers, Google, Yahoo & Dotbot so far.
Short questions:
Is it worth me collating a list of all major crawlers to discount, and is any list likely to change regularly?
Are there any other obvious filters that Google will be applying to GA data?
What other data would you collect that might be of use further down the line?
What variables does Google use to work out entrance search keywords to a site?
The data is only going to be used internally for our own "subsite ranking system", but I would like to show my users some basic data (page views, most popular pages, etc.) for their reference.
Lots of people block Google Analytics for privacy reasons.
Under-reporting by the client-side rig versus server-side seems to be the usual outcome of these comparisons.
Here's how I've tried to reconcile the disparity when I've come across these studies:
Data sources recorded in server-side collection but not client-side:
hits from mobile devices that don't support JavaScript (this is probably a significant source of disparity between the two collection techniques; e.g., a Jan '07 comScore study showed that 19% of UK Internet users access the Internet from a mobile device)
hits from spiders and bots (which you mentioned already)
Data sources/events that server-side collection tends to record with greater fidelity (far fewer false negatives) compared with JavaScript page tags:
hits from users behind firewalls, particularly corporate firewalls: firewalls block the page tag, plus some are configured to reject/delete cookies.
hits from users who have disabled JavaScript in their browsers (five percent, according to the W3C data)
hits from users who exit the page before it loads. Again, this is a larger source of disparity than you might think. The most frequently-cited study to support this was conducted by Stone Temple Consulting, which showed that the difference in unique visitor traffic between two identical sites configured with the same web analytics system, differing only in whether the JS tracking code was placed at the bottom or at the top of the pages, was 4.3%.
FWIW, here's the scheme I use to remove/identify spiders, bots, etc.:
monitor requests for our robots.txt file: then of course filter all other requests from the same IP address + user agent (not all spiders will request robots.txt of course, but with minuscule error, any request for this resource is probably a bot).
compare user agents and IP addresses against published lists: iab.net and user-agents.org publish the two lists that seem to be the most widely used for this purpose.
pattern analysis: nothing sophisticated here; we look at (i) page views as a function of time (i.e., clicking a lot of links with 200 ms on each page is probative); (ii) the path by which the 'user' traverses our site, i.e., whether it is systematic and complete or nearly so (like following a back-tracking algorithm); and (iii) precisely-timed visits (e.g., 3 am each day).
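The first two checks above can be sketched as a simple log filter. The field names and the tiny bot-agent list below are illustrative assumptions; a real deployment would use the published lists mentioned above:

```javascript
// Sketch: flag log entries as likely bots, using (a) the set of
// ip|user-agent pairs that previously requested robots.txt and
// (b) a list of known bot user-agent substrings (lowercase).
function isLikelyBot(entry, robotsRequesters, knownBotAgents) {
  var key = entry.ip + '|' + entry.userAgent;
  if (robotsRequesters[key]) return true;
  return knownBotAgents.some(function (frag) {
    return entry.userAgent.toLowerCase().indexOf(frag) !== -1;
  });
}

// Building the robots.txt requester set while scanning the log:
// if (entry.url === '/robots.txt') {
//   robotsRequesters[entry.ip + '|' + entry.userAgent] = true;
// }
```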
The biggest reasons are that users have to have JavaScript enabled and load the entire page, as the tracking code is often in the footer. AWStats and other server-side solutions like yours will get everything. Plus, Analytics does a really good job identifying bots and scrapers.
