How can you achieve targeted yet evenly displayed banners in OpenX?

Many banners are tied to a single zone. Each of these banners has different targeting requirements based on site:variable values (I say "requirements" loosely, since a banner can still be displayed even when its requirements are not matched). The reason is that all banners must ultimately receive a roughly equal number of impressions; along the way, however, the system should use the best available targeting whenever possible.
An example of the desired logic is below:
Given -
Banner 1 Targeting: IncomeGreaterThan20k=1, FishingIndustry=1
Banner 2 Targeting: IncomeLessThan20k=1, FishingIndustry=1
Visitor Profile: IncomeGreaterThan20k=1, FishingIndustry=1
Case 1 -
Banner 1 Impressions = 999
Banner 2 Impressions = 1000
Zone Rendered to Visitor 1 - Banner 1 is displayed
Why? Banner 1's targeting is better than that of the other ads (more matches on site:variables), and "best-targeted banner has impressions less than or equal to the other banners" is true, so show Banner 1.
Case 2 -
Banner 1 Impressions = 1000
Banner 2 Impressions = 1000
Zone Rendered to Visitor 1 - Banner 1 is displayed
Why? Banner 1's targeting is better than that of the other ads (more matches on site:variables), and "best-targeted banner has impressions less than or equal to the other banners" is true, so show Banner 1.
Case 3 -
Banner 1 Impressions = 1001
Banner 2 Impressions = 1000
Zone Rendered to Visitor 1 - Banner 2 is displayed
Why? Banner 1's targeting is better than that of the other ads (more matches on site:variables), but "best-targeted banner has impressions less than or equal to the other banners" is false, so show Banner 2.
When there are more than 2 banners, the logic should be extended based on the number of targeted variables matched and the number of impressions.
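To make the rule concrete, here is a rough sketch of the selection logic I am describing (written as TypeScript purely for illustration; this is my own pseudocode, not actual OpenX behaviour, and the Banner shape is hypothetical):

```typescript
interface Banner {
  id: string;
  impressions: number;
  targeting: Record<string, string>; // site:variable requirements, e.g. { IncomeGreaterThan20k: "1" }
}

// Count how many of a banner's site:variable requirements match the visitor profile.
function matches(banner: Banner, visitor: Record<string, string>): number {
  return Object.entries(banner.targeting)
    .filter(([key, value]) => visitor[key] === value).length;
}

// Desired rule: prefer the best-targeted banner, but only while its impression
// count is less than or equal to every other banner's; otherwise fall back to
// the banner that is furthest behind on impressions.
function selectBanner(banners: Banner[], visitor: Record<string, string>): Banner {
  const best = [...banners].sort((a, b) => matches(b, visitor) - matches(a, visitor))[0];
  const minImpressions = Math.min(...banners.map((b) => b.impressions));
  if (best.impressions <= minImpressions) {
    return best; // Cases 1 and 2: best targeting and not ahead on impressions
  }
  return banners.find((b) => b.impressions === minImpressions)!; // Case 3: fall behind, so catch up
}
```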
How can you configure the banner targeting to accomplish this?
If this can be accomplished, is there a way to put importance weights on the various site:variables?
If this can be accomplished, can you adjust the threshold for how large an impression difference is allowed between the ads? Rule: no ad should be rendered more than 10 times more often than any other ad.

The number of targeting fields matching does not affect ad selection.
If 4 banners in a zone end up with their targeting as 'true' (as in, all targeting criteria are met) then they are all considered for delivery.
After that, if all 4 are remnant banners from different campaigns, the only thing that adjusts the ad selection is the campaign weight. If they all have equal weight, they all have an equal chance of selection. If campaign 1 has double the weight of campaigns 2, 3, and 4, then it has double their chance of being selected.
To do exactly what you wish would require a plugin which alters the ad selection process.
1) Set all campaign weights equal (let's say weight=10), and set all campaigns as remnant
2) Once all banners with targeting=false are thrown away, analyze the remaining banners and give more weight to ones with more targeting criteria
3) During hourly maintenance, analyze the stats and give a higher weight to banners that are falling behind. You don't want to do this during delivery, because querying stats during delivery adds a lot of overhead to the delivery process, which should be as fast as possible and avoid DB calls
Using weights does not guarantee equal impressions: if two banners each have a 50/50 chance of delivering, there is a chance bannerA will deliver 1005 impressions and bannerB 995, etc. It generally works out well, but since you are altering weights based on targeting, you are working against the 'deliver evenly' idea. Perhaps pausing an ad once it has gone above the 10x threshold, and re-activating it once it is back within 5x (or similar), is a better approach.
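As a rough illustration of that pause/re-activate idea (step 3 above plus the 10x rule), the hourly maintenance job could run something like the sketch below. This is only the logic, not real OpenX plugin code - the pauseBanner/activateBanner hooks are hypothetical stand-ins for whatever the plugin would actually call:

```typescript
interface BannerStats {
  id: string;
  impressions: number;
  active: boolean;
}

// Hypothetical hooks into the ad server; in a real OpenX plugin these would be
// calls into the plugin/maintenance layer, which is not shown here.
declare function pauseBanner(id: string): void;
declare function activateBanner(id: string): void;

// Run during hourly maintenance: pause any banner that is more than 10x ahead
// of the slowest banner, and re-activate it once it is back within 5x.
function rebalance(banners: BannerStats[]): void {
  const slowest = Math.max(1, Math.min(...banners.map((b) => b.impressions)));
  for (const b of banners) {
    const ratio = b.impressions / slowest;
    if (b.active && ratio > 10) {
      pauseBanner(b.id);
      b.active = false;
    } else if (!b.active && ratio <= 5) {
      activateBanner(b.id);
      b.active = true;
    }
  }
}
```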
Note - unfortunately, making plugins for OpenX isn't very easy unless you have someone who already knows their way around. It's not a matter of knowing PHP, it's a matter of knowing the OpenX plugin architecture.

Related

Google Analytics - Product List Views - how google calculate the metric?

Recently I tried to prepare a Product List analysis. I was interested in how clickable products in particular positions are, and then how often they are added to the basket and finally purchased.
One of the metrics that affects effectiveness is Product List Views - the number of times a given item in the list was viewed by the user. But when I started to analyze the data, one thing disturbed me. On our website, immediately after entering, 18 products are loaded automatically, 3 per row (and we send an ec:impression list to GA for each product). Strangely, according to Product List Views, each item has a different number of views. Stranger still, all products are loaded immediately; even more bizarre, even products in the same row have different view counts. So the question is: how does GA calculate that metric?
I also checked the Google Merchandise Store (to be sure this is not just our implementation error). They have similar results: the ec:impression list is sent when the page is opened for 12 products, 4 per row, yet in GA the results for each item are different. I checked only on desktop devices at 1440x900 resolution to minimize the effect of window size and the number of products per row. Nothing - the results still differ between products.
It is also not related to scrolling. In the GMS they fire tags at 25, 50 and 75% of the page, but those events are not tied to the product rows. On my page there are no scroll events at all, because of the infinite scroll we use on product list pages.
So, once again: how does GA calculate this metric?

Analytics | Measuring page velocity for page value?

I'm trying to give my pages a value equal to page velocity. I'm reading through this guide which says:
https://online-behavior.com/analytics/page-velocity
"To measure Page Velocity, we will need to send an Ecommerce transaction with an arbitrary value of $1 on every pageview so it receives credit for the future pageviews as well."
How would I go about doing this?
I hate to say this, but the implementation 'depends' on the use case. The blog you have shared is primarily intended for content-heavy sites, not ecommerce sites. The first step would be to create a completely separate view so that these ecommerce numbers do not mess up your original numbers.
The second step would be to assign a page value of $1 to the goal pages (main content pages) or to all pages, depending on how you want to do it.
You just need to remember: page value = (transaction value + goal value) / unique pageviews.
Unique pageviews is the number of sessions during which a given page was viewed at least once. Hope this makes sense.
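As for the mechanical part of the original question (sending the $1 transaction on every pageview, as the guide describes), a minimal sketch using the classic analytics.js ecommerce plugin could look like the following, ideally sent only to the separate view mentioned above. The transaction ID scheme here is just an assumption:

```typescript
// Assumes analytics.js is already loaded on the page and exposes window.ga.
declare function ga(...args: unknown[]): void;

// Send a $1 ecommerce transaction alongside every pageview so the page
// "earns" credit that Page Value can attribute to it.
function sendPageVelocityTransaction(): void {
  // Unique transaction ID per pageview (arbitrary scheme, purely illustrative).
  const transactionId = `pv-${Date.now()}-${Math.floor(Math.random() * 1e6)}`;

  ga('require', 'ecommerce');
  ga('ecommerce:addTransaction', {
    id: transactionId,
    revenue: '1.00', // the arbitrary $1 value from the guide
  });
  ga('ecommerce:send');
}

ga('send', 'pageview');
sendPageVelocityTransaction();
```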

Google Analytics: Making an Experiment respond to earnings

I'm currently in the process of creating an A/B test in Google Analytics (aka an "Experiment"). It's for a ticket purchasing page where there are several levels of tickets for sale (eg. Standard, Premium, First Class, etc).
We want the winning page variant to be the one that earns the most money, not necessarily just the one that triggers the most Goals (ie. number of successful transactions).
For example:
Variant A
Sales: 20
Total Sales Value: $100
(20 x Standard tickets)
Variant B
Sales: 5
Total Sales Value: $1000
(5 x First Class tickets)
We would want Variant B to win, even though the number of events triggered (aka "Goals") was higher with Variant A.
I cannot find this issue discussed anywhere, but surely the average value of an Event should be factored into the success of a Variant?
We have the Goal configured like this at the moment:
One thing I've considered is increasing the Value threshold to be larger than the average sale, so the Goal doesn't trigger unless the page performs better than average. This isn't ideal for obvious reasons, though.
Is there another way?

Using events as page section usage

I'm currently researching a solution to monitor the performance of specific sections of a page. For example, you have a simple page with 2 images with links to other pages. You are driving lots of traffic to this page and you are experimenting with different contents on that page.
Six months later, you want to see which section of the page performed better, and with which specific images.
Let's imagine you require a report that tells you the following: on average, the first spot performs better, but last week the image was bad, and that's why you had fewer conversions from that spot.
I'd like to use such a system on a high-traffic homepage of an eCommerce website, in order to better monitor the usage of the selling spots.
I was thinking of using Google Analytics events with a positioning scheme (splitting the page into columns and rows and giving each cell an identification ID such as a1 for column a, row 1) and keeping a local data warehouse of creatives (images, promotions, etc.). However, above 10,000,000 hits per month, Analytics recommends the premium version, which is quite pricey (12k USD per month, with a 1-year upfront payment).
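A rough sketch of what I mean (TypeScript, with placeholder event category/action names that I invented for illustration):

```typescript
// Assumes analytics.js is already loaded on the page (window.ga).
declare function ga(...args: unknown[]): void;

// Build a cell ID such as "a1" for column a, row 1.
function cellId(columnIndex: number, rowIndex: number): string {
  return `${String.fromCharCode(97 + columnIndex)}${rowIndex + 1}`;
}

// Track an impression and a click for a selling spot; the creative ID points
// back to the local data warehouse of images/promotions.
function trackSpotImpression(col: number, row: number, creativeId: string): void {
  ga('send', 'event', 'homepage-spot', 'impression', `${cellId(col, row)}:${creativeId}`, {
    nonInteraction: true, // impressions should not affect bounce rate
  });
}

function trackSpotClick(col: number, row: number, creativeId: string): void {
  ga('send', 'event', 'homepage-spot', 'click', `${cellId(col, row)}:${creativeId}`);
}
```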
I was thinking about PIWIK as an alternative, but there is no event tracking there - or am I missing anything?
Looking forward to hearing your input on this matter.
You're better off with a provider like Optimizely for this use case. Still gonna be expensive, but it'll more quickly get you the information you need to make decisions.
We normally use multivariate tests or A/B tests to measure the success of user interfaces. Google Analytics has this feature and it is free.
These links may be useful:
https://www.youtube.com/watch?v=yDWTMOC_Dp4
https://support.google.com/analytics/answer/1745147?hl=en

How can I determine the "correct" number of steps in a questionnaire where branching is used?

I have a potential maths / formula / algorithm question which I would like help on.
I've written a questionnaire application in ASP.Net that takes people through a series of pages. I've introduced conditional processing, or branching, so that some pages can be skipped dependent on an answer, e.g. if you're over a certain age, you will skip the page that has Teen Music Choice and go straight to the Golden Oldies page.
I wish to display how far along the questionnaire someone is (as a percentage). Let's say I have 10 pages to go through and my first answer takes me straight to page 9. Technically, I'm now 90% of the way through the questionnaire, but the user can think of themselves as being on page 2 of 3: the start page (with the branching question), page 9, then the end page (page 10).
How can I show that I'm on 66% and not 90% when I'm on page 9 of 10?
For further information, each page can have a number of questions on it, each of which can have one or more conditions that will send the user to another page. By default, the next page will be the next one in the collection, but that can be overridden (e.g. entire sets of pages can be skipped).
Any thoughts? :-s
The simple answer is that you can't. From what you have said, you won't know how many pages the user is going to see until they have reached the end, so you can't display a truly accurate result.
What you could do to get a better result, as in your example, is to assume the user will go through all of the remaining pages. In that case, on any page you would have:
Number of pages gone through so far including current (visited_pages)
Number of the current page (page_position)
Total number of pages (total_pages)
The maximum number of pages is now:
max_pages = total_pages - page_position + visited_pages
You can think of total_pages-page_position as being the number of pages left to visit which makes the max_pages quite intuitive.
So in the 10-page example you gave, visited_pages = 2 (page 1 and page 9), page_position = 9 and total_pages = 10.
So max_pages = 10 - 9 + 2 = 3.
Then, to work out how far through the questionnaire the user is, you just do:
progress = visited_pages/max_pages*100
One thing to note: if you were to go through pages 1, 2, 3, 4, 9, 10, your progress would read 10%, 20%, 30%, 40%, 83%, 100%, so you would still get a strange jump that may confuse people. This is pretty much inevitable, though.
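If it helps, here is the same calculation written out (a quick TypeScript sketch of the formula above, using the numbers from the example):

```typescript
// Estimate progress by assuming the user will visit every remaining page.
function progressPercent(visitedPages: number, pagePosition: number, totalPages: number): number {
  const maxPages = totalPages - pagePosition + visitedPages;
  return (visitedPages / maxPages) * 100;
}

// The 10-page example: jump from page 1 straight to page 9.
console.log(progressPercent(2, 9, 10));  // ~66.7% on page 9
console.log(progressPercent(3, 10, 10)); // 100% on the final page

// Visiting pages 1, 2, 3, 4, 9, 10 in order gives
// 10%, 20%, 30%, 40%, ~83%, 100% - note the jump between page 4 and page 9.
```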
Logically, unless the user's question graph is predetermined, you cannot predetermine their percentage complete.
That being said.
What you can do is build a full graph of the user's expected path based on the information you know - what questions they have completed and what questions they still have to take - and simply calculate a percentage.
This will probably involve a data structure such as a linked list to track where they have been and what they have left to complete; the implementation is up to you.
The major caveat here is that you have to accept that if the user's question graph changes, so will their percentage complete. Theoretically they could be shown 90%, then the graph changes and they are shown 50%.
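A very rough sketch of that idea, assuming each page simply knows which page it currently expects to come next (a linked-list-style structure; the types are made up for illustration):

```typescript
interface PageNode {
  id: number;
  // Expected next page given the answers known so far; null at the end.
  next: PageNode | null;
}

// Walk forward from the current page to count how many pages are expected to
// remain, then combine that with the pages already visited to get a percentage.
function expectedProgress(visited: number[], current: PageNode): number {
  let remaining = 0;
  for (let node: PageNode | null = current.next; node !== null; node = node.next) {
    remaining++;
  }
  const expectedTotal = visited.length + 1 + remaining; // visited + current + remaining
  return ((visited.length + 1) / expectedTotal) * 100;
}

// Whenever an answer changes the expected path, rebuild the `next` links and
// recompute - which is why the displayed percentage can move backwards.
```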
