Ads System - ASP.NET

I am creating an ad system for an ASP.NET website. The website has a section for advertisers: they register and post their ads, and they pay the maximum budget for an ad up front. There is also a daily budget, so the advertiser can control his spending. There will be a lot of ads from different advertisers to show on the website. Each ad has two attributes: maximum budget and daily budget. How can I select ads, and how many times can an ad be displayed? Can anyone give me a method or algorithm for that?

Hey Priyan,
here's how we handle it in AdServerBeans (http://www.adserverbeans.com - it's open source, you can check the source code):
DROP FUNCTION IF EXISTS get_speed;
CREATE FUNCTION get_speed(from_date DATETIME, to_date DATETIME, views_limit INT, views_served INT, now_date_time TIMESTAMP)
RETURNS DOUBLE
DETERMINISTIC
NO SQL
BEGIN
DECLARE banner_total_serving_time INTEGER;
DECLARE banner_served_time INTEGER;
DECLARE percent_time_served DOUBLE;
DECLARE percent_ad_events_served DOUBLE;
-- no views limit configured: signal "no pacing" to the caller
IF (views_limit IS NULL OR views_limit = 0) THEN RETURN -1; END IF;
IF (views_served IS NULL) THEN SET views_served = 0; END IF;
SET banner_total_serving_time = TIMESTAMPDIFF(SECOND, from_date, to_date);
SET banner_served_time = TIMESTAMPDIFF(SECOND, from_date, now_date_time);
-- guard against division by zero for zero-length campaigns;
-- this check must come after banner_total_serving_time is set
IF (banner_total_serving_time = 0) THEN SET banner_total_serving_time = 1; END IF;
SET percent_time_served = (100 * banner_served_time) / banner_total_serving_time;
SET percent_ad_events_served = (100 * views_served) / views_limit;
RETURN percent_ad_events_served - percent_time_served;
END
;;
This MySQL function returns a negative or positive number: negative if we are underperforming, positive if overperforming. For example, a banner halfway through its flight (50% of time elapsed) that has served only 40% of its views returns 40 - 50 = -10, i.e. underperforming.
If underperforming - serve; if overperforming - skip to the next banner or don't serve.
The input parameters are self-explanatory, I hope.
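For illustration, here is the same pacing check re-expressed in TypeScript - a minimal sketch, not the actual AdServerBeans code; the banner shape and field names are my own assumptions:

// Returns < 0 when the banner is behind its even-delivery pace (serve it),
// > 0 when it is ahead of pace (skip it), and -1 when no views limit is set.
function getSpeed(
  fromDate: Date,
  toDate: Date,
  viewsLimit: number | null,
  viewsServed: number | null,
  now: Date,
): number {
  if (!viewsLimit) return -1; // no limit configured: no pacing
  const served = viewsServed ?? 0;
  // avoid division by zero for zero-length campaigns
  const totalSecs = Math.max(1, (toDate.getTime() - fromDate.getTime()) / 1000);
  const elapsedSecs = (now.getTime() - fromDate.getTime()) / 1000;
  const percentTimeServed = (100 * elapsedSecs) / totalSecs;
  const percentViewsServed = (100 * served) / viewsLimit;
  return percentViewsServed - percentTimeServed;
}

// Selection: serve the first banner that is behind pace (hypothetical shape).
interface Banner { from: Date; to: Date; viewsLimit: number; viewsServed: number; }
const pick = (banners: Banner[]) =>
  banners.find((b) => getSpeed(b.from, b.to, b.viewsLimit, b.viewsServed, new Date()) < 0);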

I would recommend looking at scheduling algorithms.
For example, you could use the budget to determine a number of impressions per period (day/week/etc.), and use that as a weighting factor in a weighted round robin schedule. This would be a simple way to balance out requests from different advertisers evenly through time. (Note: weighted round robin is usually described in the context of network packet scheduling, but the basic algorithm works here too.)
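A minimal weighted round robin sketch in TypeScript (the "smooth" variant; deriving the weight from the daily budget is my assumption, not something prescribed here):

type Ad = { id: string; weight: number; current: number };

// Smooth weighted round robin: on every pick each ad gains its weight,
// the ad with the highest running total is served and then pays back the
// total weight, so over time ads are served in proportion to their weights.
function pickAd(ads: Ad[]): Ad {
  const totalWeight = ads.reduce((sum, ad) => sum + ad.weight, 0);
  let best = ads[0]; // assumes a non-empty list
  for (const ad of ads) {
    ad.current += ad.weight;
    if (ad.current > best.current) best = ad;
  }
  best.current -= totalWeight;
  return best;
}

// With weights 5:1:1 (say, dollars of daily budget), seven consecutive
// picks yield A, A, B, A, C, A, A - evenly interleaved, not bursty.
const ads: Ad[] = [
  { id: 'A', weight: 5, current: 0 },
  { id: 'B', weight: 1, current: 0 },
  { id: 'C', weight: 1, current: 0 },
];
const served = Array.from({ length: 7 }, () => pickAd(ads).id);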

I think you should consider several algorithms for your problem. Normally in that kind of system you have:
ASAP (The campaign will be shown as soon as possible without limitation)
Even distribution/redistribute if overdelivered (The campaign's traffic will be evenly distributed through the indicated period of time. If the campaign is overdelivered due to any modification, a new traffic distribution will be calculated so the remaining traffic amount is evenly distributed in the remaining period of time)
Even distribution in two halves (The even distribution in two halves function is similar to the even distribution/redistribute option. However, it allows the user to assign an amount of traffic to the first half of the campaign and another to the second half. The halves are calculated by taking into account the total duration of the campaign, its weekly timetable and its multiple begin and end dates, if applicable. Then, the system takes the total amount of traffic, multiplies it by the corresponding percentage set for each half, and assigns it uniformly within each half.)
Adaptable even distribution (Traffic will be assigned according to the traffic distribution of the sites where the campaign will run. More traffic will be assigned to peak hours and less to off-peak hours. When the campaign is near its scheduled end date, traffic distribution will be accelerated to ensure that goals are met.)
If that many algorithms is more than you want to deal with, then implement only ASAP: an ad that can win keeps winning until its daily budget is exhausted (see the sketch below).
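A sketch of that ASAP rule in TypeScript (the field names and the fixed cost-per-view are assumptions, just to make the two budget checks concrete):

type Campaign = {
  id: string;
  maxBudget: number;   // prepaid total budget
  dailyBudget: number; // advertiser-set cap per day
  spentTotal: number;
  spentToday: number;  // reset at midnight
  costPerView: number;
};

// A campaign is eligible while both budgets have room for one more view.
const eligible = (c: Campaign): boolean =>
  c.spentToday + c.costPerView <= c.dailyBudget &&
  c.spentTotal + c.costPerView <= c.maxBudget;

// ASAP: serve the first eligible campaign and charge it immediately;
// once its daily budget is exhausted it drops out until tomorrow.
function serveAsap(campaigns: Campaign[]): Campaign | undefined {
  const winner = campaigns.find(eligible);
  if (winner) {
    winner.spentToday += winner.costPerView;
    winner.spentTotal += winner.costPerView;
  }
  return winner;
}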

Related

Google Analytics: How to properly filter ga:1dayUsers and ga:30dayUsers

Question: What is the right way to filter active users based on the presence of an event?
I'm trying to report on a count of users that have performed a particular action (purchased an item) on my site.
The aim is to have a Daily Unique Buyer (akin to DAU or 1dayUsers) and Monthly Unique Buyer (akin to MAU or 30dayUsers) metric.
For the Daily Unique Buyer metric I have tried two separate approaches and I am getting different results for both.
Approach 1) Use ga:Users metric and apply filter ga:eventCategory=="Purchase"
Approach 2) Create a custom segment, ensure that the advanced filter condition is for Users (not Sessions), and set the same filter ga:eventCategory=="Purchase"
The first approach seems to yield the desired result when compared to the second.
Unfortunately, the first approach does not extend to computing the same metric for Monthly Unique Buyers.
Most posts on Stack Overflow suggest that creating a segment (approach 2) is the right way forward. This, however, yields more users than events, which can't be correct.
Even more perplexing, applying the segment in the Audience -> Active Users interface yields a different result than the programmatic Apps Script query below:
const optArgs = {
  'dimensions': 'ga:date',
  'sort': '-ga:date',
  'start-index': '1',
  'max-results': 250,
  'segment': 'gaid::xxxx',
};
Analytics.Data.Ga.get(
  myViewId, startDate, endDate, 'ga:1dayUsers', optArgs
);
Update: for those who have struggled with this - I don't claim to understand why, but I was able to get the correct numbers by querying the desired metrics 1dayUsers and 30dayUsers one date at a time.
Running the report over a date range failed. I checked this against the list of actual active users (under User Explorer in the interface), and both the 1 day and 30 day metrics are correct.
Would love for someone to explain why this is needed.
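For reference, this is roughly what the per-date workaround looks like (written here as TypeScript for Apps Script; the view ID and segment ID are the same placeholders as above):

// Query ga:1dayUsers one day at a time instead of over a date range.
function dailyUniqueBuyers(myViewId: string, dates: string[]): number[] {
  return dates.map((day) => {
    const result = Analytics.Data.Ga.get(
      myViewId, day, day, 'ga:1dayUsers',
      { segment: 'gaid::xxxx' } // the same purchase segment as above
    );
    // totalsForAllResults maps each metric to its total for the single day
    return Number(result.totalsForAllResults['ga:1dayUsers']);
  });
}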

Complex Queries in Firestore / Realtime database and modeling for querying

I have a problem doing complex queries in the Firestore database. I have read the documentation countless times, so I know where the limitations are, but I wonder whether there is a way to structure the data so that it supports my use cases. Let me explain the use cases first:
I have a list of jobs and a list of users, and I want to be able to list/filter jobs according to some criteria and to list/filter users according to some criteria.
Job
JOB ID
- job type (1 of predefined values)
- salary (any number value)
- location (any value)
- long
- lat
- rating (1 - 5)
- views (any number value)
- timeAdded (any timestamp value)
- etc.
User
User ID
- experiences (0, 1 or more of predefined values)
- experience1
- jobCategory
- jobName
- timeEmployed
- experience2
- etc
- languages (0, 1 or more of predefined values)
- language1
- languageName
- proficiency
- language2
- etc.
- location (any value)
- long
- lat
- rating (1 - 5)
- views (any number value)
- timeLastActive (any timestamp value)
- etc.
Filtering by a field which can only have one value is fine, even when I add sorting by "timeAdded" or a range filter.
1) The problem is when I introduce a second range filter, such as jobs with "salary" higher than 15 bucks and at the same time "rating" higher than 4. Is there a way to get around this problem when I have N range filters?
2) The second problem is that I cannot use logical OR. Let's say, filter jobs where "jobCategory" is Bartender or Server.
3) Another problem is filtering by fields which can have more than one value, e.g. a user can speak more than one language. If I want to filter users who speak English, it is not possible, not to mention filtering users who speak e.g. English OR French.
I know I can model the data so that the language becomes the name of a field, like -english = true (see the sketch at the end of this question), but when I introduce a range filter on top of this, I need to create a Firestore index, which is very inconvenient since I can have around 20 languages and around 50 job types at the same time, and I would have to create indexes for all the combinations together with different range filters.. is this assumption correct?
4) How would I filter jobs which are up to 20 km from a certain position? The radius and the position are for the user to choose.
5) What if I want to filter by all those fields at the same time? E.g. filter by a certain "jobCategory", location and radius, "salary" higher than something and "rating" higher than something, and sort it all by "timeAdded".
Is this possible with Firestore / Realtime Database? Can I model the data in some way to support this, or do I have to look for an alternative DB solution? I really like the real-time aspect of it; it will come in handy when it is time to implement a chat feature in the app. Is it solvable with Cloud Functions? I am trying to avoid doing multiple requests, merging them together and sending that to the client, since there can be any combination of filters.
If it's not doable with Firebase, do you know of any alternatives similar to Firestore with better querying options? I really hope I am just missing something :)
Thank you!
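To make the boolean-map modeling from point 3 concrete, here is a sketch using the Firestore web SDK (collection and field names are made up; it also shows where the combinatorial index explosion described above comes from):

import firebase from 'firebase/app';
import 'firebase/firestore';

// assumes firebase.initializeApp({...}) has already been called with your config
const db = firebase.firestore();

// Each user document stores languages as a map of booleans:
//   { languages: { english: true, french: true }, rating: 4.5, ... }
// Equality filters on map fields can be combined with one range filter,
// but every (language field, range field) pair needs its own composite
// index - hence ~20 languages x N range fields worth of indexes.
db.collection('users')
  .where('languages.english', '==', true) // membership test via boolean field
  .where('rating', '>=', 4)               // the single allowed range filter
  .get()
  .then((snapshot) => snapshot.forEach((doc) => console.log(doc.id, doc.data())));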

Google analytics - metrics mismatch while exporting data via API with various set of dimensions

I am working on GA reporting metrics in Power BI via the Reporting API.
When I create a query with some very basic attributes like sessions and users, I get the same values as I can see directly in the Google Analytics dashboard.
But when I add more dimensions and attributes - say, user type, pageviews or gender, along with users and sessions - the values of users and sessions are inflated.
I have gone through various documentation, and I know there are restrictions such that not all dimensions and attributes can be combined, but in this case GA has allowed me to add these basic attributes together and the results still do not match.
Is there any documentation to explain this behaviour, or has anyone experienced anything like this?
Does this have something to do with binning? Though even if the difference were due to different binning on different counters, I would expect the difference to be small - not what I am getting, which is huge (multiple times off), not just a few percent of error.
I have come across this problem, and the reason is a limit on the Google Analytics Core Reporting API:
Sampling thresholds
Default reports are not subject to sampling.
Ad-hoc queries of your data are subject to the following general thresholds for sampling:
Analytics Standard: 500k sessions at the property level for the date range you are using
Analytics 360: 100M sessions at the view level for the date range you are using
i.e. once the data you are requesting returns more than 500k sessions / rows in a query, Google Analytics will return sampled rather than exact data.
The way I work around this limit is to break the query down into separate queries (to make sure each returns fewer than 500k rows) with a date filter (per year, month or day, depending on data volume) applied to each of them, then append all the queries back into one (e.g. with Table.Combine in Power Query).
Sample M code:
(year as number, month as number) =>
let
    Source = GoogleAnalytics.Accounts(),
    ...,
    #"Added Items" = Cube.Transform(#"...", {
        {Cube.AddAndExpandDimensionColumn, "ga:pagePath", {"ga:pagePath"}, {"Page"}},
        {Cube.AddAndExpandDimensionColumn, "ga:pageDepth", {"ga:pageDepth"}, {"Page Depth"}},
        {Cube.AddAndExpandDimensionColumn, "ga:pageTitle", {"ga:pageTitle"}, {"Page Title"}},
        {Cube.AddAndExpandDimensionColumn, "ga:date", {"ga:date"}, {"Date"}},
        {Cube.AddMeasureColumn, "Page Load Time (ms)", "ga:pageLoadTime"}
    }),
    #"Filtered Rows" = Table.SelectRows(#"Added Items",
        each [Date] >= #date(year, month, 1) and [Date] <= Date.EndOfMonth(#date(year, month, 1)))
in
    #"Filtered Rows"

AX2009 manual NumSeq on business relation?

Does anyone know why the number sequence on business relations (AX 2009) should not be manual, according to the standard application code?
Table method smmBusRelTable.checkNumberSequence()
if (numberSequenceReference)
{
    numberseq = NumberSequenceTable::find(numberSequenceReference.NumberSequence);

    if (numberseq)
    {
        if (numberseq.Manual)
        {
            // Business relation number sequence must not be manual
            ret = checkFailed("#SYS81360");
        }
    }
    ...
A manual number sequence can be used, but of course the warning "Business relation number sequence must not be manual" will pop up every time.
My guess is it was written in some spec that prospects should have an automatic number sequence.
Also, AX users rarely care about the number of a prospect, but may care about the number of a customer, maybe using phone numbers or something else. Prospects are usually imported from external sources, and there may be 10 or 1,000 times as many prospects as customers.

Oracle year change trigger

I'm stuck on a problem that I can't figure out. I'm building an application in C++Builder 2009 and Oracle 11g. I have some calculated data that depends on a user's age. What I want to do is re-calculate these data every new year. I thought I could have a trigger do this, but I don't know which event I should catch, and I didn't find anything on the internet.
My table is:
ATHLETE (name, ......, birthdate, Max_heart_frequency)
Max_heart_frequency is the field that depends on age. On insertion I calculate the athlete's age, but what about next year?
Can anyone help?
How is the max_heart_frequency calculated?
If it is a simple formula, I would create a view that returns that information. There is no need to store values that can easily be calculated:
CREATE VIEW v_athlete
AS
select name,
       case
           -- younger than 20 years
           when (MONTHS_BETWEEN(sysdate, birthdate) / 12) < 20 then 180
           -- younger than 40 years
           when (MONTHS_BETWEEN(sysdate, birthdate) / 12) < 40 then 160
           -- younger than 60 years
           when (MONTHS_BETWEEN(sysdate, birthdate) / 12) < 60 then 140
           -- everyone else
           else 120
       end as max_heart_frequency
from athlete;
Then you only need to select from the view and it will always be accurate.
You can use Oracle Scheduler to run a procedure at specific intervals (minutes, hours, daily, yearly, etc. - any time span).
Check this link: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schedover.htm
You have two options:
Have a stored procedure that calculates and updates the Max_Heart_Frequency of all the athletes every 1st of January (using yearly scheduling of the procedure)
Have a stored procedure that runs daily and calculates and updates the Max_Heart_Frequency of all the athletes every day (using daily scheduling of the procedure)
If Max_Heart_Frequency changes over time because the user is getting older, why are you storing it in the table in the first place? Why not just call the function that computes the maximum heart rate at runtime when you need the value? Potentially, it may make sense to have a view on top of the Athlete table that adds the computed Max_Heart_Frequency column to hide from the callers that this is a computed column.