How to only share stats for a specific URL? - google-analytics

So I know it's possible to see analytics stats for a specific URL, but is it possible to share stats for specific URLs with other people through permissions?
Preferably I'd like to do this in AdSense so others can see how much revenue that URL generated, but if that's not possible, being able to share URL-specific stats in Analytics is also fine.

You could create a data view that excludes all but the desired URL and grant access only to that view. You could also build something via the Core Reporting API, though that involves programming, which makes this tenuously on-topic. Easier than API programming (and more economical than creating a view, since the number of views is limited) would be to create a report in Google Data Studio (which is free) that filters on that single URL, and then share it with view-only rights.
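Whichever tool applies the filter, the underlying operation is the same: drop every row whose page path isn't the one you want to share. A minimal sketch in plain Python (the row format here is an illustrative assumption, not the actual API response shape):

```python
# Filter analytics report rows down to a single page path, mimicking
# what a URL-filtered view or Data Studio filter would expose.
def filter_rows_by_path(rows, path):
    """Keep only the rows whose 'pagePath' matches the given URL path."""
    return [row for row in rows if row["pagePath"] == path]

rows = [
    {"pagePath": "/article-1", "pageviews": 120},
    {"pagePath": "/article-2", "pageviews": 45},
]
shared = filter_rows_by_path(rows, "/article-1")
```

The person you share with then only ever sees the filtered result, never the full property.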

Related

Add meta tags to a SPA html page with Cloud Functions

I've developed a SPA using VueJS, Quasar, and Firebase. Currently, the app is hosted on Firebase and uses the Storage and Firestore Database features. I've encountered an issue regarding a simple yet very useful feature: Facebook/Twitter/LinkedIn social share buttons (HTML 'a' tags that pass the URL to be shared). After much reading into the matter, I've come to understand these social platforms use crawlers to extract certain Open Graph meta tags from the served HTML to form a cohesive post that includes an image, title, and description.
The obstacle I'm facing is that, this being a SPA, the same static HTML is always served by the hosting service (Firebase). This means the meta tags, which would need to be unique per route, can't be changed at runtime, e.g. injected using JavaScript. I've reached this conclusion after hours of painstakingly trying to find a workaround using different injection methods and libraries, even trying to see if Nuxt.js would somehow be of help. Now I understand that logically this cannot work, since the same HTML file will always be served in a SPA.
Based on my current understanding, which might be wrong, I will need to use Cloud Functions (serverless code run by Google) in order to intercept the HTML requests from these crawlers and somehow serve a server-rendered HTML file containing these meta tags.
My questions for anyone that has stumbled upon something similar would be:
First of all, is there something I'm missing, and is there a way of serving a rendered HTML file with the current setup?
Is there a way of implementing Cloud Functions so that they intercept requests from these crawlers while otherwise continuing the current SPA flow?
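The usual pattern for the second question is user-agent sniffing: the function checks whether the request comes from a known crawler and, if so, returns a small pre-rendered HTML shell with route-specific tags; otherwise it serves the normal SPA entry point. A language-agnostic sketch of that core logic in Python (the crawler list and the route-to-metadata mapping are illustrative assumptions; in a real app the metadata would come from Firestore):

```python
# Decide whether a request comes from a social crawler and, if so,
# serve pre-rendered HTML containing route-specific Open Graph tags.
CRAWLER_AGENTS = ("facebookexternalhit", "Twitterbot", "LinkedInBot")

# Hypothetical per-route metadata; a real app would load this from a database.
ROUTE_META = {
    "/post/42": {"title": "My Post", "image": "https://example.com/42.png"},
}

def is_crawler(user_agent):
    """True if the user agent matches a known social-platform crawler."""
    return any(bot.lower() in user_agent.lower() for bot in CRAWLER_AGENTS)

def render_meta_html(path):
    """Build a minimal HTML shell carrying the route's Open Graph tags."""
    meta = ROUTE_META.get(path, {"title": "My App", "image": ""})
    return (
        "<html><head>"
        f'<meta property="og:title" content="{meta["title"]}">'
        f'<meta property="og:image" content="{meta["image"]}">'
        "</head><body></body></html>"
    )

def handle_request(path, user_agent):
    if is_crawler(user_agent):
        return render_meta_html(path)  # server-rendered shell for crawlers
    return "spa-index.html"            # normal SPA entry point for humans
```

Crawlers never execute JavaScript, so they get the shell with correct tags, while human visitors still receive the unchanged SPA.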

Download/Upload of Page Remarkup in Phabricator Wiki (Phriction)

The company I work for uses the "Phriction" wiki in Phabricator for a considerable amount of documentation. I'd like to be able do the following, programmatically, in order of importance:
Download (e.g., with curl or wget) the reStructuredText (RST) to a local file where I can edit it, diff it, etc. Ideally I should be able to download either the latest version or any specific version.
Locally render (e.g., in a local graphical web browser) the markup as Phabricator would render it. If relative links can link correctly back to the original wiki, that's a bonus.
Upload new versions of the wiki page.
If you don't know how to do any of this exactly, but have information or tool suggestions that would help me get started on writing software to do the above, please mention them. (If you're worried about too many answers that don't actually answer any of the questions above, try adding to or editing a single community answer for this sort of information.)
I would do the following in your situation:
Download the individual Phriction pages using the API (Conduit) methods in the Phriction section.
For that you need a Conduit API token, which you can create in the profile settings of your Phabricator instance.
Then take a look at the phriction.info method: it needs the page slug as a parameter. In this example I use the /changelog/ page.
You can choose between Arcanist, cURL, or PHP to use the REST API. Additionally, you can use any other way to perform REST API commands in the cURL syntax.
If you need more examples of how to run the Conduit method, you can toggle between some variations at the bottom of the output page.
Transform the page content as you like.
Upload the page again with the conduit methods (phriction.edit).
You can edit documents and upload them the same way you downloaded the content, but this method needs a few more parameters.
Personally, I try all Conduit methods via the web interface first and then turn them into a script.
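To make the download/upload round trip concrete, here is a sketch that only builds the form payloads a Conduit call would POST (the token value is a placeholder, no request is actually sent, and the exact parameter set for phriction.edit should be checked against your instance's Conduit console):

```python
# Build the URL-encoded form payloads for Conduit's Phriction methods.
# "api.token" is the standard Conduit authentication parameter.
from urllib.parse import urlencode

def phriction_info_payload(api_token, slug):
    """Parameters for phriction.info, which fetches a wiki page by slug."""
    return urlencode({"api.token": api_token, "slug": slug})

def phriction_edit_payload(api_token, slug, title, content):
    """Parameters for phriction.edit, which writes a new page version."""
    return urlencode({
        "api.token": api_token,
        "slug": slug,
        "title": title,
        "content": content,
    })

payload = phriction_info_payload("api-xxxx", "changelog/")
# POST this body to https://<your-instance>/api/phriction.info
```

The response is JSON, so the downloaded markup lands in the result field, where you can diff or edit it locally before sending it back via phriction.edit.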

How would you go about writing a custom script that grabs the Adobe or Google Analytics image request?

If I wanted to build a scraper that pings each URL on a site and stores the Adobe (or Google) image request, how would I go about this? I.e., I just want something that grabs all the parameters in the URL posted to Adobe into a CSV or something similar. I'm familiar with how to build simple web scrapers, but how do I grab the URL I see in, for example, Fiddler that contains all the variables being sent to the analytics solution?
If I could do this, I could run a script that lists all URLs with the corresponding tracking events being fired, and it would make QA much more manageable.
You should be able to query the DOM for the image object created by the tag request. I am more familiar with the IBM Digital Analytics (Coremetrics) platform, where you can find the tag requests by accessing the array document.cmTagCtl.cTI in the web console on a Coremetrics-tagged page. I used this method when building a Selenium WebDriver test case and wanted to test for the analytics tags.
I don't have the equivalent for Adobe or GA at the moment, since it depends on the library implementation; I am trying to do the same as you for GA.
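Once you have captured a beacon URL (from Fiddler, the browser's network panel, or a Selenium-driven proxy), splitting it into the individual tracking variables is straightforward. A small Python sketch, with an illustrative GA-style sample URL:

```python
# Extract the query parameters from a captured analytics image request
# into a dict that can then be written out as one CSV row per page.
from urllib.parse import urlparse, parse_qs

def beacon_params(url):
    """Return the tracking parameters of an analytics beacon URL."""
    query = urlparse(url).query
    return {key: values[0] for key, values in parse_qs(query).items()}

url = "https://www.google-analytics.com/collect?v=1&tid=UA-12345-1&t=pageview&dp=%2Fhome"
params = beacon_params(url)
```

Pairing each crawled page URL with its `beacon_params` dict gives exactly the page-to-tracking-event listing described above.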
Cheers,
Jamie

Push facebook images to ASP.net website?

Does anyone know if it is possible to develop a Facebook application which could detect images uploaded to a specific album and somehow upload them to an ASP.NET website automatically?
Any input would be much appreciated as I don't have any experience of FB development yet.
I am not aware of an existing application that accomplishes this; however, it is very possible to develop an app to do it. The reasonable steps may be as follows:
Retrieve data (to database) from album for current images
Poll the album at a certain interval to check for changes
If change exists, compare the album contents with the saved data
Download (to the desired location) the images that did not exist already
Update application records (database) to reflect the new items
Repeat.
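The comparison step in the loop above can be sketched as a simple set difference (the photo-ID strings are illustrative; in practice they would be the IDs returned by the Facebook API):

```python
# Compare the album's current photo IDs with the ones already recorded
# in the database, and report which photos are new since the last poll.
def find_new_photos(saved_ids, current_ids):
    """Return IDs present in the album but not yet in our records."""
    return sorted(set(current_ids) - set(saved_ids))

saved = ["p1", "p2"]            # what the database already knows about
current = ["p1", "p2", "p3"]    # what the album contains right now
new = find_new_photos(saved, current)  # download and record these
```

Everything returned by `find_new_photos` gets downloaded and appended to the records, after which the poll sleeps until the next interval.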
This could be produced rather quickly if desired for personal use, and you don't need the finishing touches that an application for general consumption may need.
If this is the route you decide to take, the Facebook Developers API reference is very complete, as is the information available at Stack Overflow.

Facebook API and Drupal

I'm trying to use the drupal module called FB (http://drupal.org/project/fb). I just want to know if I'm on the right track. I've installed the module, setup the keys and so forth. All I want is the following:
To make a call into facebook
Retrieve all MY notes
Retrieve all the COMMENTS on my notes
So my questions are:
Is it necessary to write an app if I just want to make simple call like this?
Is there an easier way than the module I'm using?
You may find that the Activity Stream Facebook extension does what you need.
Making some assumptions here: 1) you're familiar with programming drupal modules, 2) not so much with the facebook api, 3) you want to export your notes to display on a personal site.
The short answer is Yes, you need to create an application. You need the facebook client API files, and an API key/secret, which you only get by creating an app. Sounds like you've already gotten this far anyway.
However, from first glance, it looks like you only need to enable the DFF facebook API module, and then you can write your own module, using the global $fb to access facebook. The client API methods are mostly self-explanatory, if a little tweaky.
Possible problem: if your Facebook details are restricted to friends only, visitors to your personal site won't have access to your notes via Facebook API calls. The API only exposes what's publicly accessible, or it exposes the (user-specific) details & friends when a visitor logs in (prompted by calling the require_login method). So, I imagine you'd need to store the notes locally, updating them by logging in yourself.
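The "store the notes locally" idea can be sketched language-agnostically (the note structure here is an assumption; in Drupal this logic would sit in your own module, fetching via the global $fb object and writing to the database):

```python
# Merge freshly fetched notes into a local cache keyed by note ID, so
# the site can display them without visitors needing Facebook access.
def update_local_notes(cache, fetched_notes):
    """Overwrite or add each fetched note in the local cache."""
    for note in fetched_notes:
        cache[note["id"]] = note
    return cache

cache = {}
fetched = [{"id": "n1", "title": "First note", "comments": []}]
update_local_notes(cache, fetched)
```

Running the fetch-and-merge step yourself, while logged in, keeps the local copy current even though anonymous visitors can't query the API directly.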
You could feasibly bypass the DFF modules altogether & just include the FB client API from your own module, but DFF looks like it handles all the weird & badly-documented fb behaviour - particularly that related to require_login - that'd otherwise have you smacking your head against a wall for three days. Good luck.
Caveat: I've never used the DFF modules. I used FBConnect, which was sufficient, but I spent a lot of time smacking my head against a wall.
