Embed API Working with Custom Components fails to display - google-analytics

I am working from the Google Analytics "Working with Custom Components" demo code.
When I am logged in to Google Chrome as the owner of the Analytics account, navigating to that demo page displays my Google Analytics data correctly.
I followed the instructions on the page and embedded the code into a simple page of my own.
Authentication works, as indicated by the displayed message "You are logged in as: me(at)gmail.com", but there is nothing more: no chart and no other message.
I am reasonably certain that the page is coded correctly, as I have:
Basic Dashboard (basic.html),
Multiple Views (multipleviews.html), and
Interactive Charts (ic.html)
all working and displaying correctly (they display, though not styled like the demo).
Why will the page not display the graphics?

As Eike pointed out in the comments, you've simply copied and pasted the code from the demo without downloading the components to your own server. If you open up your JavaScript console, you'll notice 404 errors saying the browser can't find those components; I see the same errors when loading your site.
To add those components to your site, you have a number of options. I've answered a similar question in one of the repo's GitHub issues, but I'll copy it here for convenience.
The built and minified versions of those components are located in the build/javascript/embed-api/components directory. You can simply download those files and add them as script tags on your page, or include them in your site's main bundled script.
If you're using an AMD script loader like RequireJS, you can also just point to those built files, as they're wrapped in a UMD wrapper.
If you're using a tool like Browserify or webpack, you can npm install this repo and require the files in the src/javascript/embed-api/components directory.
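For example, here is roughly what the script-tag option could look like once you've copied the built files to your own server. This is only a sketch: the file name date-range-selector.js, the assets/embed-api/ path, the gapi.analytics.ext namespace, and the container id are my assumptions about how the demo components are wired up, so verify them against the files you actually download from the repo.

<script src="/assets/embed-api/date-range-selector.js"></script>
<script>
  // Wait until the Embed API core library has loaded, then use the custom component
  gapi.analytics.ready(function () {
    // The demo components register themselves under gapi.analytics.ext
    var dateRange = new gapi.analytics.ext.DateRangeSelector({
      container: 'date-range-container'   // id of an element on your page (assumed name)
    });
    dateRange.execute();
  });
</script>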

Related

Download a file from a permalink URL, and not a direct exe URL

I am using Inno Setup 6, which natively supports downloading files from the internet during installation. I have figured out how to download files given a direct link, from this thread: Inno Setup: Install file from Internet.
However, I can't for the life of me figure out how to download the latest version of a file given a permalink URL. My specific example is to download the Microsoft Hosting package.
https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer
Going to this page automatically downloads the latest package.
Inno doesn't like this link (or I don't know how to get Inno to use it) since it doesn't point to the direct file. If I use the direct link (https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe) this works for obvious reasons.
I'd like to always download the latest, but I'm not sure how to accomplish this. Any suggestions?
Here is the very basic code being used:
DownloadPage.Clear;
DownloadPage.Add('https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer', 'dotnet-hosting.exe', '');
DownloadPage.Show;
You would have to retrieve the HTML page, find the URL in the HTML code and use it in your download code.
See Inno Setup - HTTP request - Get www/web content
It would be quite unreliable. Microsoft can change the HTML any time.
You would be better off setting up your own web page (web service) that provides an up-to-date link to your installer. That page can even do what I suggested above: retrieve the URL from Microsoft's download page. If Microsoft changes the HTML, you can fix your web page at any time, which is something you cannot do with an installer that has already shipped.
Without realizing it, you are asking two different questions here. That is because these "permalinks" aren't really permalinks, but redirects to some dynamic resource that contains a link to what you are looking for.
So first, addressing the Microsoft "permalink": under the hood you are accessing a URL that redirects to a page which points to the latest version. That page then invokes a JavaScript function (if you are accessing it via a web browser) to download the installer. Note that both the page pointed to and the code that invokes the installer will eventually change. In fact, the code itself logs a "warning" when people attempt to download directly:
If you do a view source you'll see:
<script>
    $(function () {
        recordDownload('.NET', 'runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer');
        window.open("https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe", "_self");
    });
    function recordManualDownload() {
        ga("send", "event", "Download.Warning", "Direct Link Used", "runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer");
    }
</script>
So you can download the HTML from this page and use some regex to get the direct download link, but beware: the link is going to change every time Microsoft releases a new version. Furthermore, when (not if) MS decides to rebrand, this entire process might break. So the best you can do here is download the HTML and try to parse the download URL out of this "permalink", as sketched below.
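For what it's worth, here is a rough sketch of that parsing idea. It is plain Node.js rather than Inno Setup's Pascal scripting (you would need to port the same logic, e.g. using the HTTP-request approach from the linked answer), and it assumes the page keeps embedding the direct download.visualstudio.microsoft.com link the way the quoted source shows:

// fetch-dotnet-link.js - Node.js, no external packages
const https = require('https');

function fetchHtml(url, callback) {
  https.get(url, (res) => {
    // The "permalink" answers with a redirect to the real download page
    if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
      return fetchHtml(res.headers.location, callback); // assumes an absolute Location header
    }
    let html = '';
    res.on('data', (chunk) => { html += chunk; });
    res.on('end', () => callback(html));
  });
}

const permalink = 'https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer';

fetchHtml(permalink, (html) => {
  // Grab the direct .exe URL that the page passes to window.open
  const match = html.match(/https:\/\/download\.visualstudio\.microsoft\.com\/[^"]+?\.exe/);
  console.log(match ? match[0] : 'Download link not found - the page layout must have changed');
});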
As an alternative, you can download the latest .NET PowerShell install script as described here.
If possible, execute that script directly. If not, look at the function Get-AkaMSDownloadLink within the install script to see how it builds the URL for the latest version. You would probably be better served building and using that URL rather than attempting to scrape it out of some arbitrary HTML.
Now, on to the second question you might not have realized you were asking: how to automate this for any random installer. The answer is that you can't. Some installers might have a permalink that points directly to the latest version, but you are always going to find cases like Microsoft's. The best you can do is hard-code some links in a service of your own, as @martin-prikryl suggested, and when they break, update the links in that service.

Blogdown site pages do not render properly

I'm using RStudio Blogdown/GitHub/Netlify to maintain my blog site, with the Academic theme. When I push changed .Rmd files to GitHub, the changed pages do not seem to deploy, but if I build the entire site and push that, the site deploys on Netlify without any problem. Unfortunately, it takes about three minutes to build the entire site, so I'm looking for a faster solution.
I think I should be able to build a single directory, which would be super fast, but when I build a directory with blogdown::build_dir("content/project/cont_imp"), the HTML document does not build properly. It seems to render as one long piece of JavaScript, and since all of the metadata in the YAML header is wrapped into that script, the page on Netlify does not deploy properly: things like the date and subtitle are missing, and it is not formatted like the rest of my site.
I have one bad page that I built with build_dir on GitHub, so you can view both the .Rmd source and the rendered HTML: https://github.com/grself/icochise/tree/master/content/project/cont_imp. You can see this project page on my live site at https://icochise.com/ (scroll down to the "Projects" section and notice the difference between the "Continuous Improvement" link, which has no text, just an image of a hand and a whiteboard, and the "Blogdown and Bookdown" link). I just now noticed that the HTML document seems to be some sort of self-extracting JavaScript, so after a couple of seconds the source code looks normal. Maybe there is some kind of setting on Netlify I need to change so it will extract the JavaScript as it deploys the page?
I checked the settings in my "Configure Build Tools" and unchecked "Preview site after building" and "Re-knit current preview..." but that didn't help. I also tried changing the Project build tools dropdown from "Website" to "Custom" and specified the Hugo executable. None of these things helped.
I also tried running "Serve Site" while I worked, thinking that would continuously render the HTML pages, but that tool seemed to hang and would not display the site once I made changes to an .Rmd file. In fact, it was hung up so badly that I had to kill RStudio with the Windows Task Manager.
Finally, I also tried to update Hugo, hoping that there was something fouled up in my Hugo install, but that did not help.
I suspect that I'm doing some simple thing wrong, but have tried everything I can think of to fix this and would appreciate any suggestions.

Remove unwanted CSS from one stylesheet by inspecting all my website pages, not only one

I got a project to work on that includes a lot of unwanted CSS in a stylesheet.
I used a few tools like "Audits" (Chrome), "CSS Usage" (Firefox) and the "uncss" Node.js npm package.
They all output the unused CSS only for the current page, i.e. the page loaded in the browser or the one passed on the command line (uncss usage looks like this: uncss https://example.com > style.css).
I thought of doing this per template, but the website I am working on doesn't have a CMS with organized templates the way WordPress does; it is built with the Zend MVC framework and there are no specific, organized "templates".
What is the most efficient way to clear unused CSS across my entire website?
I am working on a tool, https://www.bleachcss.com/, that detects unused CSS based on real user actions.
Thanks to a little snippet of JavaScript, the tool detects which CSS selectors are used when your users interact with the page and then sends a report back to our server.
We then aggregate all the reports sent by all of your users and automatically create a pull request to remove the unused CSS from your code.
By using real user actions, we can support any kind of website, even pure JavaScript applications. Moreover, we do not slow down your build system by adding headless browser runs or static analysis to it.
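This is not our actual snippet, but a rough sketch (with a hypothetical /css-usage-report endpoint) of the underlying idea: on user interaction, walk the page's stylesheets, record which selectors currently match at least one element, and send that list off for aggregation.

// Sketch only: collect the selectors that match at least one element right now
function collectUsedSelectors() {
  var used = [];
  Array.prototype.forEach.call(document.styleSheets, function (sheet) {
    var rules;
    try {
      rules = sheet.cssRules || [];
    } catch (e) {
      return; // cross-origin stylesheets cannot be inspected
    }
    Array.prototype.forEach.call(rules, function (rule) {
      if (!rule.selectorText) return;
      try {
        if (document.querySelector(rule.selectorText)) {
          used.push(rule.selectorText);
        }
      } catch (e) {
        // ignore selectors this browser cannot parse
      }
    });
  });
  return used;
}

// Re-check on interaction and report the result (endpoint name is hypothetical)
document.addEventListener('click', function () {
  navigator.sendBeacon('/css-usage-report', JSON.stringify(collectUsedSelectors()));
});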
We are still in beta right now but I would love to learn more about your app, so please contact us if you are interested in giving it a spin!

JavaScript file is not loaded properly

I have a nice html, css template (source code here).
I am going to use this template in my Angular 2 app (source code here).
I got the HTML template from this repository (index.html).
My problem is in the Angular 2 source code.
You need to clone the Angular source.
Run npm install
Run ng serve
Unfortunately, it seems that the <script src="assets/js/main.js"></script> in index.html is not loaded properly. Although there is no error in the console, the left menu is broken. I know that this kind of problem occurs when main.js is not loaded.
In the original template the header and menu render correctly; in the Angular page they are broken.
The code is identical, but I have decomposed the HTML template into 3 components (header, menu, and app (the main content)).
Instead of trying to figure out what happened with your CSS, I took the original template, converted it to Angular 2 with the angular-cli, and fixed the CSS issues. It all works now, and the complete source is at https://github.com/Boyan-Kostadinov/angular2-miminium
When you broke apart index.html it's likely that you also altered some file paths.
The relative path would go from src="assets/js/main.js" to something like src="../assets/js/main.js".
Prepending ../ to the path will back out of the current directory to the next level up. As you have it now, the browser is looking for the assets directory in what I assume you have compartmentalized as an htmlComponents directory.
Consider using the absolute path to main.js, at least to diagnose the issue.
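For example, assuming a conventional layout where index.html sits at the site root and the component templates live in a sub-folder, the difference looks like this:

<!-- Relative to the including document: breaks once the markup moves into a sub-folder -->
<script src="assets/js/main.js"></script>

<!-- One level up, e.g. from an htmlComponents/ folder -->
<script src="../assets/js/main.js"></script>

<!-- Absolute: resolved from the site root no matter where the markup lives -->
<script src="/assets/js/main.js"></script>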
I ran into a similar issue with the same file. In my case, I have a complicated application that is developed in stages, and I installed my Angular seed in a subdirectory. Because of my file structure, when I run npm start, the live server that it starts has bad relative link locations. For example, the application was trying to find styles.css at http://localhost:3000/medface/RecordWriter/styles.css; however, it should have been looking at http://localhost:3000/styles.css, because the root of the web server created by npm start is at /medface/RecordWriter/.
With respect to your project: the key to finding the problem with your link is to open the developer panel and inspect the actual network request. If you share a screenshot, we may be able to help you inspect your instance with more insight.
What Worked for Me
In my case, I reconfigured my local web server to handle any unserved pages in the Angular 2 folder and return the index instead. When I run npm start, I close the browser page that opens and use my regular web server. Instead of viewing my application on localhost:3000, I view it at localhost/medface/RecordWriter/ (which is equivalent to localhost:80/medface/RecordWriter).
The downside to my makeshift approach is that the page must be refreshed before changes appear, but it loads all resources predictably and reliably, and it allows my Angular 2 code to run alongside the older code base in other areas of the website that have not been converted to Angular 2. Regardless, this may work for you also.

Google Analytics Receiving Data -- but no analytics in view source

My client created a website and a Google Analytics account. The report indicates that the account is receiving data, and yet, when we do a view-source on the pages of the site, there is definitely no Analytics code there. How is this possible?
It is possible that the Analytics code is added via JavaScript, so it does not appear in the "View Source" page. It is also possible for it not to appear in the inspector either.
I do not know how this happens, but I have encountered scripts that exist and run although they are not displayed in either the source page or the live DOM inspector (in Google Chrome). This happened to me while loading a PHP template containing JavaScript through an Ajax request.
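One quick way to confirm this is to check from the browser's JavaScript console whether a tracker is actually present at runtime, even though nothing shows up in View Source. This assumes the standard analytics.js or gtag.js loaders; a Google Tag Manager setup exposes different globals.

// Run in the DevTools console on the live site
typeof window.ga;   // "function" when the classic analytics.js tracker is loaded
window.dataLayer;   // an array when gtag.js / Tag Manager is pushing events

// List any Google Analytics script elements that were injected at runtime
Array.prototype.filter.call(document.scripts, function (s) {
  return /google-analytics\.com|googletagmanager\.com/.test(s.src);
});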
If you have access to the source code of your website, search the entire project for the Analytics ID (here's how to find it: https://support.google.com/analytics/answer/1032385?hl=en), and you'll locate your tracking code.
If your project is running on a Linux server, here's a post about how to quickly find a keyword (like the Analytics ID) in a folder: How do I find all files containing specific text on Linux?
