I'm attempting to export leads from a Marketo list into my system, and I'd like to be able to display a progress bar with the total number of leads in the Marketo list. Is there a way to find the total number of leads in a given list using the Marketo REST API?
The Get Import Lead Status (http://developers.marketo.com/documentation/rest/get-import-lead-status/) call returns numOfLeadsProcessed, which you can compare to the number of rows in your list to get an approximation of total progress. You shouldn't need the existing size of the destination list to know how much progress you've made.
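If it helps, a rough polling sketch against that endpoint might look like this in Python (the base URL, access token, batch ID and row count are placeholders; the endpoint path and field names follow the linked Get Import Lead Status documentation, so double-check them against your API version):

    import time
    import requests

    # Placeholders you'd supply yourself:
    MUNCHKIN_BASE = "https://<munchkin-id>.mktorest.com"   # your instance's REST base URL
    ACCESS_TOKEN = "<access-token>"                        # from the identity/OAuth endpoint
    BATCH_ID = 1234                                        # returned when the bulk import was started
    TOTAL_ROWS = 5000                                      # rows in the file/list you submitted

    def poll_import_status():
        """Call Get Import Lead Status and return (status, numOfLeadsProcessed)."""
        url = f"{MUNCHKIN_BASE}/bulk/v1/leads/batch/{BATCH_ID}.json"
        resp = requests.get(url, params={"access_token": ACCESS_TOKEN})
        resp.raise_for_status()
        result = resp.json()["result"][0]
        return result.get("status"), result.get("numOfLeadsProcessed", 0)

    status, processed = poll_import_status()
    while status not in ("Complete", "Failed"):
        print(f"{processed}/{TOTAL_ROWS} leads processed ({100.0 * processed / TOTAL_ROWS:.1f}%)")
        time.sleep(30)  # poll at a modest interval to stay within API rate limits
        status, processed = poll_import_status()
    print(f"Import finished with status: {status}")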
I'm logging some custom metrics in Application Insights using the TelemetryClient.TrackMetric method in .NET, and I've noticed that occasionally some of the events are duplicated when I view them in the Azure portal.
I've drilled into the data, and the duplicate events have the same itemId and timestamp, but if I show the ingestion time by adding | extend ingestionTime = ingestion_time() to the query then I can see that the ingestion times are different.
This GitHub issue indicates that this behavior is expected, as Application Insights uses at-least-once delivery.
I plot these metrics in charts in the Azure portal using a sum aggregation; however, these duplicates are creating trust issues with the charts, as each duplicate is simply treated as a separate event.
Is there a way to de-dupe the events based on itemId before plotting the data in the Azure portal?
Update
A more specific example:
I'm running an algorithm, triggered by an event, which results in a reward. The algorithm may be triggered several dozen times a day, and the reward is a positive or negative floating point value. It logs the reward each time to Application Insights as a custom metric (called say custom-reward), along with some additional properties for data splitting.
In the Azure portal I'm creating a simple chart by going to Application Insights -> Metrics and customising the chart. I select my custom-reward metric in the Metric dropdown, and select Sum as the aggregation. I may or may not apply splitting. I save the chart to my dashboard.
This simple chart gives me a nice way of monitoring the system to make sure nothing unexpected is happening, and the Sum value in the bottom left of the chart allows me to quickly see whether the sum of the rewards is positive or negative over the chart's range, and by how much.
However, on occasion I've been surprised by the result (say over the last 12 hours the sum of the rewards was surprisingly negative), and on closer inspection I discovered that a few large negative results have been duplicated. Further investigation shows this has been happening with other events, but with smaller results I tend not to notice.
I'm not that familiar with the advanced querying side of Application Insights; I actually used it for the first time today to dig into the events. But it does sound like there might be something I can do there to create a query that I can then plot, with the results deduped?
Update 2
I've managed to make progress with this thanks to the tips by @JohnGardner, so I'll mark that as the answer. I've deduped and plotted the results by adding the following line to the query:
| summarize timestamp=any(timestamp), value=any(value), name=any(name), customDimensions=any(customDimensions) by itemId
Update 3
Adding the following line to the query allowed me to split on custom data (in this case splitting by algorithm ID):
| extend algorithmId = tostring(customDimensions.["algorithm-id"])
With that line added, when you select "Chart" in the query results, algorithmId now shows up as an option in the split dropdown. After that you can click "Pin to dashboard". You lose the handy "sum over the time period" indicator in the bottom left of the chart which you get via the simple "Metrics" chart, but I'm sure I'll be able to recreate that in other ways.
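For reference, the full query I ended up with looks roughly like this (it assumes the metric lands in the standard customMetrics table, and the time window, binning and chart type are just examples):

    customMetrics
    | where timestamp > ago(12h)
    | where name == "custom-reward"
    | summarize timestamp=any(timestamp), value=any(value), name=any(name), customDimensions=any(customDimensions) by itemId
    | extend algorithmId = tostring(customDimensions.["algorithm-id"])
    | summarize sum(value) by algorithmId, bin(timestamp, 1h)
    | render timechart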
If you are doing your own queries, you would generally be using something like summarize or make-series to do this deduping for a chart. You wouldn't generally plot individual items unless you are looking at a very small time range.
So instead of something like
summarize count() ...
you could do
summarize dcount(itemId) ...
Or you might add a "fake" summarize to a query that didn't need one before, with by itemId to coalesce multiple rows into just one, using any(x) to grab an individual row's value for each column for each itemId.
But it really depends on what you are doing in your specific query. If you were using something like sum(itemCount) to also deal with sampling, you now have other odd cases, where the at-least-once delivery might have duplicated sampled items. (Updating your question to add a specific query and a hypothetical result would possibly lead to a more specific answer.)
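For example, a count-style chart could be deduped with dcount like this (a hypothetical customEvents source and time range, just to show the shape of the query):

    customEvents
    | where timestamp > ago(1d)
    | summarize dcount(itemId) by bin(timestamp, 1h)
    | render timechart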
After how many clicks will the Core Reporting API start to sample the click data, when using samplingLevel=LARGE?
I'm trying to retrieve data from a large account (i.e. more than 30,000 clicks/day on average) and the number of clicks doesn't always match what I can see on Google Analytics. This, however, seems to happen only on this large account, and not every day. Strangely, on those days where the click count doesn't match, transactions and revenue match what I can see in Google Analytics.
In my query, I'm only trying to retrieve the data for a given account, without applying any filter.
EDIT: If I retrieve the data aggregated at the account level (i.e. not including ga:adwordsCampaignID, ga:adwordsAdGroupID and ga:adwordsCriteriaID in the dimensions), I can retrieve all the clicks.
EDIT2: If I add the ga:deviceCategory dimension, along with ga:adwordsCampaignID, ga:adwordsAdGroupID and ga:adwordsCriteriaID, I can retrieve all clicks. I'm not sure if this can help narrow down the issue.
Google Analytics has a cardinality limit of 50K; beyond that you'll receive (other).
Based on the EDIT, I can safely assume the reason is that when a click doesn't have a ga:adwordsCampaignID associated with it, that click will not be retrieved. This happened to me with custom dimensions.
EDIT: Try using the 'include-empty-rows' parameter on your query. https://developers.google.com/analytics/devguides/reporting/core/v3/reference#includeEmptyRows
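For illustration, a v3 Core Reporting request with that parameter could look roughly like this in Python (plain HTTP via the requests library; the view ID, dates and access token are placeholders, and you'd normally obtain the token through Google's OAuth flow):

    import requests

    ACCESS_TOKEN = "<oauth2-access-token>"   # placeholder: obtain via Google's OAuth 2.0 flow
    PROFILE_ID = "ga:12345678"               # placeholder: your view (profile) ID

    params = {
        "ids": PROFILE_ID,
        "start-date": "2015-06-01",
        "end-date": "2015-06-30",
        "metrics": "ga:adClicks,ga:transactions,ga:transactionRevenue",
        "dimensions": "ga:adwordsCampaignID,ga:adwordsAdGroupID,ga:adwordsCriteriaID",
        "samplingLevel": "HIGHER_PRECISION",  # v3 accepts DEFAULT, FASTER or HIGHER_PRECISION
        "include-empty-rows": "true",         # keep rows whose metric values are all zero
        "access_token": ACCESS_TOKEN,
    }

    resp = requests.get("https://www.googleapis.com/analytics/v3/data/ga", params=params)
    resp.raise_for_status()
    data = resp.json()
    print(data.get("totalsForAllResults"))
    print(data.get("containsSampledData"))   # whether this particular response was sampled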
If I make a request to the Google Analytics API using only the metric ga:users, the result is different from the one returned in the totalsForAllResults field when I add a dimension.
Does anyone know the explanation for this and which is the correct result?
You cannot sum up the user counts per dimension and get a grand total, as one and the same user can appear in multiple dimension values. For a detailed explanation look here. If you want to get the total users value, repeat the API request without the dimension. Apparently, Google makes the mistake of blindly summing the values itself in the totalsForAllResults field of the Core Reporting API response, which can be highly misleading.
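In practice that means issuing the same query twice, once with and once without the dimension. A small sketch against the v3 endpoint (placeholder view ID and token):

    import requests

    BASE = "https://www.googleapis.com/analytics/v3/data/ga"
    common = {
        "ids": "ga:12345678",                     # placeholder: your view (profile) ID
        "start-date": "2015-06-01",
        "end-date": "2015-06-30",
        "metrics": "ga:users",
        "access_token": "<oauth2-access-token>",  # placeholder
    }

    # Per-dimension breakdown: the same user can appear in several rows here.
    by_device = requests.get(BASE, params={**common, "dimensions": "ga:deviceCategory"}).json()

    # The unduplicated total: same query, no dimension.
    total = requests.get(BASE, params=common).json()

    print(by_device.get("rows", []))
    print("totalsForAllResults with a dimension (can over-count):",
          by_device["totalsForAllResults"]["ga:users"])
    print("actual total users:", total["totalsForAllResults"]["ga:users"])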
I'd like to limit the number of markers that appear on the map in the right hand panel to something like 10 at any zoom level.
How can this be achieved?
The library and examples can be found here:
http://storelocator.googlecode.com/git/index.html
I am following the code example given here:
http://storelocator.googlecode.com/git/examples/panel.html
There is a code reference here:
http://storelocator.googlecode.com/git/reference.html
But it's still not clear to me exactly how I can customise the example I am following so that it only shows a maximum of 10 markers at any one time.
EDIT: Why I want to do this
I sell a product wholesale to many salons. With this map I am trying to show customers which salons they can go to in order to buy the products I supply.
However, in the example given by Google, the full list of salons appears as markers on the map. This is not good, because it then becomes possible for competitors to glean an entire list of salons that they can market competing products to.
The solution I'd like would be to show a maximum of 10 markers at a time, according to whichever are closest to the entered address.
For me the example (http://storelocator.googlecode.com/git/examples/panel.html) always shows only up to 10 entries in the panel. There is a hardcoded limit of 10, so it's not possible to change that without modifying store-locator.min.js.
But if you want to display fewer than 10 entries, it is possible via CSS:
/* limit the displayed entries to 5 */
.store-list li:nth-child(n+6){display:none}
If you want to apply a higher limit (or need it to be compatible with IE < 9), edit this part of store-locator.min.js (line 28):
m=e.min(10,c[E]);
(set the 10 to the desired value)
To limit the total number of results, edit this line in MedicareDataSource.prototype.parse_:
for (var i = 1, row; row = rows[i]; i++)
and set it to
for (var i = 1, row; row = rows[i],i<XXX; i++)
(where XXX is the limit +1, so e.g. setting XXX to 11 will apply a limit of 10)
There are a few general approaches, and the best solution depends a bit on the total number of stores you have and how hard you want to make it for someone to scrape your data.
You could continue to use the static data feed like in this example (which means sending all stores to the browser on load), and then add some logic to only display the closest 10 (such as setting the map to null for all markers that aren't also shown in the panel), but this is not a good solution if:
there are lots of stores (more than a thousand or so) since it will be unnecessarily slow to load them all when only displaying a few.
you don't want someone to look at your code and just grab the full CSV you're sending down the wire with all your data.
Given your scraping concern, a better method is probably to implement the store locator using a dynamic data source that only returns the closest N records for a given lat/lng, so you don't expose the entire thing at once. Using Google services you could use Maps Engine, which has an API, and the store locator includes a Google Maps Engine example you could start with. The security concern here is that if these queries are publicly available for anyone to hit directly, the table is also public, and someone could then query for the full table. So you'd want to put a proxy in between to avoid that type of query hack (although of course someone could just feed you lots of locations to eventually get all your stores if they really wanted to).
Other options (again just looking at Google's stack, although there are lots of alternatives for this kind of thing, like CartoDB and many more) include App Engine's Search API, which also returns the N closest items (but would require some server-side coding, which Maps Engine would not), or even putting the data into Google Spreadsheets and implementing a basic Apps Script web service, where your script takes the lat/lng and does some basic math to find the closest stores.
Again if you don't love the server side aspect then Maps Engine is probably your best bet for a quick start especially given there's a working sample in the storelocator code.
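If you do end up building your own dynamic data source, the core of it is fairly small. A rough sketch, assuming a hypothetical Flask endpoint and a hard-coded store list (in reality you'd query your own database and put authentication and rate limiting in front of it):

    import math
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical store data; in practice this would come from your database.
    STORES = [
        {"id": 1, "name": "Salon A", "lat": 51.507, "lng": -0.128},
        {"id": 2, "name": "Salon B", "lat": 53.480, "lng": -2.242},
    ]

    def haversine_km(lat1, lng1, lat2, lng2):
        """Great-circle distance between two points, in kilometres."""
        radius = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lng2 - lng1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * radius * math.asin(math.sqrt(a))

    @app.route("/stores/nearest")
    def nearest_stores():
        lat = float(request.args["lat"])
        lng = float(request.args["lng"])
        limit = min(int(request.args.get("limit", 10)), 10)  # never hand out more than 10
        ranked = sorted(STORES, key=lambda s: haversine_km(lat, lng, s["lat"], s["lng"]))
        return jsonify(ranked[:limit])

    if __name__ == "__main__":
        app.run()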
I am trying to create some charts in Flex/FlashBuilder 4.5
The issue I have is that the information I wish to display is the number of events within an area. I am using an HTTPService to access a Rails controller, which returns an XML list of the events.
I need to figure out how to chart the number of events, e.g. the number of records returned. There is no numeric value in the data to chart, as there would be if I were charting sales or prices, for example.
I'm a little stumped as to the best way to do this?
Use a dataFunction. Check out http://flexdiary.blogspot.com/2008/08/charting-example.html for an example that uses one.