How to get assisted interaction breakdown by date? - google-analytics

I'm using this query:
https://www.googleapis.com/analytics/v3/data/mcf?ids=ga%3A99364917&start-date=2016-11-05&end-date=2016-12-05&metrics=mcf%3AfirstInteractionConversions&dimensions=mcf%3Asource&key={YOUR_API_KEY}
to get a first-interaction analysis, but I only get the summary/total. I would like to get that number broken down by date.
Meaning, instead of getting:
Facebook: 100, Direct: 100
I would like to get:
Facebook: 12/1/16:50,12/2/16:50
Direct: 12/1/16:30, 12/2/16:70
How can I do that?

I think you can just add mcf:conversionDate to the dimensions in your query. The returned rows will then be segmented by both MCF source and conversion date.
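For illustration, the modified request would look something like this (mcf:conversionDate is a documented MCF dimension; the view ID, dates, and metric are just the ones from the question):
https://www.googleapis.com/analytics/v3/data/mcf?ids=ga%3A99364917&start-date=2016-11-05&end-date=2016-12-05&metrics=mcf%3AfirstInteractionConversions&dimensions=mcf%3Asource%2Cmcf%3AconversionDate&key={YOUR_API_KEY}
Each returned row should then carry a source plus a conversion date, which you can regroup client-side into the per-source date series shown above.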

Grouping, missing data - Cognos Report Studio

In IBM Cognos Report Studio I have data structured like so (a plain dump of the customer details):
Account|Type|Value
123-123| 19 |2000
123-123| 20 |2000
123-123| 21 |3000
If I remove the Type from my report I get:
Account|Value
123-123|2000
123-123|3000
It seems to have treated the two rows with the amount 2000 as duplicates and removed one of them from my report.
My assumption was that Cognos would aggregate the data automatically, giving:
Account|Value
123-123|7000
I am lost as to what it is doing. Any pointers? If it is not grouping, I would at least expect 3 rows:
Account|Value
123-123|2000
123-123|2000
123-123|3000
In any case I would like to end up with 1 line. The behaviour I'm getting is something I can't figure out. Thanks for any help.
Gemmo
The 'Auto-group & Summarize' feature is the default on new queries. This will find all unique combinations of attributes and roll up all measures to these unique combinations.
There are three ways to disable auto-group & summarize behavior:
1. Explicitly turn it off at the query level
2. Include a grain-level unique column, e.g. a key, in the query
3. Include no measures in the query
My guess is that your problem is #3. The [Value] column in your example has to have its 'Aggregate Function' property set to an aggregate function or 'Automatic' for the auto-group behavior to work. It's possible that the column's 'Aggregate Function' property is set to 'None'. That is the standard setting for an attribute value and would prevent the roll-up from occurring.
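A rough SQL analogy may help picture the difference (illustrative only, with a hypothetical table name, not the SQL Cognos actually generates):
-- With [Value] treated as a measure, auto-group & summarize acts like:
SELECT Account, SUM(Value) AS Value
FROM CustomerDetails
GROUP BY Account;   -- yields 123-123 | 7000
-- With [Value]'s 'Aggregate Function' set to 'None', it acts like:
SELECT DISTINCT Account, Value
FROM CustomerDetails;   -- yields 123-123 | 2000 and 123-123 | 3000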

Google Spreadsheet IF and AND

I'm trying to find an easy formula to do the following:
=IF(AND(H6="OK";H7="OK";H8="OK";H9="OK";H10="OK";H11="OK";);"OK";"X")
This actually works, but I want to apply it to a range of cells within a column (H6:H11) instead of having to list each and every one of them. However, trying it as a range:
=IF(AND(H6:H11="OK";);"OK";"X")
Does not work.
Any insights?
Thanks.
=ArrayFormula(IF(AND(H6:H11="OK");"OK";"X"))
also works
Array formulas work the same way they do in Excel; they just need an ArrayFormula() wrapper to work (it is added automatically when you press Ctrl+Shift+Enter, as in Excel).
In google sheets the formula is:
=ArrayFormula(IF(SUM(IF(H6:H11="OK";1;0))=6;"OK";"X"))
in excel:
=IF(SUM(IF(H6:H11="OK";1;0))=6;"OK";"X")
And confirm with Ctrl-Shift-Enter
This basically counts the number of times the range matches the criterion and compares it to the number it should be. So if the range is extended, increase the 6 to match (or see the variant below that avoids hardcoding it).
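A variant that avoids the hardcoded 6 by deriving the expected count from the range itself (COUNTIF and ROWS are standard functions in both Sheets and Excel; semicolon argument separators as in the locale above):
=IF(COUNTIF(H6:H11;"OK")=ROWS(H6:H11);"OK";"X")
If the range is later extended, the comparison adjusts automatically.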

Subtract payment until amount due is zero

Using PowerShell I read in a text file that contains the check amount. I then run a query to get the amount due. The complication is that a buyer can have multiple balances for different products, so a check could cover A but not B and C.
$remainAmount = $currentAmount[0] - $checkAmount
How can I do this without producing a negative number, i.e. force it to stop subtracting once zero is reached?
One solution would be to use the [Math]::Max() function like this:
$remainamount = [Math]::Max($currentamount[0] - $checkamount,0)
That will give you the higher of the two numbers: if they still owe something it returns that amount, otherwise it returns 0.
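Extending that idea to the multiple-balance case in the question, a minimal sketch might look like this ($balances and $checkAmount are hypothetical stand-ins for the values read from the query and the text file):
# Hypothetical values standing in for the query results and the check file.
$balances = @(120.00, 75.50, 40.00)   # amounts due for products A, B, C
$checkAmount = 150.00
for ($i = 0; $i -lt $balances.Count -and $checkAmount -gt 0; $i++) {
    # Never apply more than is owed on this balance.
    $applied = [Math]::Min($balances[$i], $checkAmount)
    $balances[$i] -= $applied
    $checkAmount -= $applied
}
# $balances now holds the remaining amounts due (never negative),
# and $checkAmount holds any unapplied remainder.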

Transform for graphite counter

I'm using the incr function from the python statsd client. The key I'm sending for the name is registered in graphite but it shows up as a flat line on the graph. What filters or transforms do I need to apply to get the rate of the increments over time? I've tried an apply function > transform > integral and an apply function > special > aggregate by sum but no success yet.
The function you want is summarize - see http://graphite.readthedocs.org/en/latest/functions.html
To get the totals over time, use summarize with alignToFrom=true.
For example, you can use the following for a 1-day period:
summarize(stats_counts.your.metrics.path,"1d","sum",true)
See "graphite summarize datapoints" for more details.
The data is there; it just needs hundreds of counts before it becomes visible on the graph. Taking the integral also works and shows the cumulative number of hits over time, though I've had to multiply it by 100 to get approximately the correct value.
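For reference, that integral-plus-scaling combination can be written as a single graphite expression (scale() and integral() are standard graphite functions; the metric path and the x100 factor are specific to the poster's statsd setup, not universal):
scale(integral(stats_counts.your.metrics.path), 100)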

What is the best way to determine what articles are available for a given usenet group?

I was wondering what the most efficient way is to get the available articles for a given NNTP group. The method I have implemented works as follows:
(i) Select the group:
GROUP group.name.subname
(ii) Get a list of article numbers from the group (pushed back into a vector 'codes'):
LISTGROUP
(iii) Loop over codes and grab articles (e.g. headers)
for code in codes do
HEAD code
end
However, this doesn't scale well with large groups with many article codes.
In RFC 3977, the GROUP command is indicated as also returning the 'low' and 'high' article numbers. For example,
[C] GROUP misc.test
[S] 211 1234 3000234 3002322 misc.test
where 3000234 and 3002322 are the low and high water marks. I'm therefore thinking of using these rather than initially fetching all article numbers. But can these numbers be relied upon? Is 3000234 definitely the first article number in the selected group, and likewise is 3002322 definitely the last, or are they just estimates?
Many thanks,
Ben
It turns out I was thinking about this all wrong. All I need to do is
(i) set the group using GROUP
(ii) execute the NEXT command followed by HEAD for however many headers I want (up to count):
for c : count do
    articleId <- NEXT
    HEAD articleId
end
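For illustration, the wire exchange for that loop looks roughly like this, reusing the misc.test numbers from the question (the message-id is hypothetical):
[C] GROUP misc.test
[S] 211 1234 3000234 3002322 misc.test
[C] NEXT
[S] 223 3000235 <abc123@example.com> retrieved
[C] HEAD 3000235
[S] 221 3000235 <abc123@example.com>
[S] (header lines follow, terminated by a line containing a single '.')
Note that GROUP already positions the current-article pointer at the first article (3000234), so calling NEXT before the first HEAD skips that article; issuing one HEAD before the loop picks it up.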
EDIT: I'm sure there must be a better way but until anyone suggests otherwise I'll assume this way to be the most effective. Cheers.
