We are in the process of implementing Sign In With Google functionality on our website. In the tutorial code snippet, the external script is loaded from Google's server:
<script src="https://accounts.google.com/gsi/client" async defer></script>
Is it possible to host this library locally? Where can I find all the files that I need to download?
EDIT:
I tried saving the JavaScript file content locally. However, it still tries to load the styles from the external URL (https://accounts.google.com/gsi/style). I guess I could modify the JavaScript source code so that it loads this CSS from my server, but that seems like an ugly solution to me. Is there any other way besides modifying their source code?
So I am using Inno Setup 6, which natively supports downloading files from the Internet during installation. I have figured out how to download files given a direct link, from this thread: Inno Setup: Install file from Internet
However, I can't for the life of me figure out how to download the latest version of a file given a permalink URL. My specific example is downloading the Microsoft .NET Hosting Bundle.
https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer
Going to this page automatically downloads the latest package.
Inno doesn't like this link (or I don't know how to get Inno to use it) since it doesn't point to the direct file. If I use the direct link (https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe) this works for obvious reasons.
I'd like to always download the latest, but I'm not sure how to accomplish this. Any suggestions?
Adding super basic code being used...
DownloadPage.Clear;
DownloadPage.Add('https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer', 'dotnet-hosting.exe', '');
DownloadPage.Show;
You would have to retrieve the HTML page, find the URL in the HTML code and use it in your download code.
See Inno Setup - HTTP request - Get www/web content
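For illustration, a minimal sketch of that approach in the [Code] section, along the lines of the linked answer. It uses the WinHTTP COM object that ships with Windows; the function name is just an example and error handling is omitted:

function DownloadPageText(const Url: string): string;
var
  WinHttpReq: Variant;
begin
  { WinHTTP follows the permalink's redirects by default and returns the final page HTML }
  WinHttpReq := CreateOleObject('WinHttp.WinHttpRequest.5.1');
  WinHttpReq.Open('GET', Url, False);
  WinHttpReq.Send('');
  Result := WinHttpReq.ResponseText;
end;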
It would be quite unreliable. Microsoft can change the HTML any time.
You would be better off setting up your own web page (web service) that provides an up-to-date link to your installer. The web page can even do what I suggested: retrieve the URL from Microsoft's download page. If Microsoft changes the HTML, you can fix your web page at any time, which is something you cannot do with an installer that has already shipped.
Without realizing it, you are asking two different questions here. That is because these "permalinks" aren't really permalinks but redirects to some dynamic resource that contains a link to what you are looking for.
So first, addressing the Microsoft "permalink": you need to realize that under the hood you are accessing a URL that redirects to a page pointing at the latest version. That page then invokes a JavaScript function, IF YOU ARE ACCESSING IT VIA A WEB BROWSER, to download the installer. Note that both the page pointed to and the code that invokes the installer WILL eventually change. In fact, the code itself logs a "warning" when people attempt to download directly:
If you view the page source you'll see:
<script>
$(function () {
recordDownload('.NET', 'runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer');
window.open("https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe", "_self");
});
function recordManualDownload() {
ga("send", "event", "Download.Warning", "Direct Link Used", "runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer");
}
</script>
So you can download the HTML from this page and use some regex to get the direct download link, but beware: the link is going to change every time Microsoft releases a new version. Furthermore, WHEN (not if but when) MS decides to rebrand, this entire process might break. So the best you can do here is download the HTML and try to parse the download URL out of this "permalink".
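For example, since Inno Setup's Pascal Script has no built-in regex, a plain string search over the downloaded HTML works too. A sketch, assuming the page HTML is already in a string (fetched e.g. with WinHTTP as in the other answer); ExtractDirectUrl is a made-up name and the window.open marker is tied to the current page markup, so it will break when Microsoft changes the page:

function ExtractDirectUrl(const Html: string): string;
var
  P, Q: Integer;
  Rest: string;
begin
  Result := '';
  { Find the window.open("...") call shown in the page source above }
  P := Pos('window.open("', Html);
  if P > 0 then
  begin
    Rest := Copy(Html, P + Length('window.open("'), Length(Html));
    Q := Pos('"', Rest);
    if Q > 0 then
      Result := Copy(Rest, 1, Q - 1);  { everything up to the closing quote }
  end;
end;

The result can then be fed to the existing download page, e.g. DownloadPage.Add(ExtractDirectUrl(Html), 'dotnet-hosting.exe', '');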
As an alternative, you can download the latest .NET PowerShell install script as described here.
If possible, execute that script directly. If not, look at the Get-AkaMSDownloadLink function within the install script to see how it builds the URL for the latest version. You would probably be better served building and using that URL than attempting to scrape a download link out of some arbitrary HTML.
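If you go the script route, a rough sketch of invoking it from Inno Setup is below. It assumes dotnet-install.ps1 has already been downloaded to {tmp} (for instance with the same TDownloadWizardPage), and you should adjust -Channel/-Runtime to your needs; note that the install script installs the runtime rather than the full Hosting Bundle:

procedure RunDotNetInstallScript;
var
  ResultCode: Integer;
begin
  { Run the previously downloaded official install script via PowerShell }
  if Exec('powershell.exe',
    '-NoProfile -ExecutionPolicy Bypass -File "' +
      ExpandConstant('{tmp}\dotnet-install.ps1') + '" -Channel Current -Runtime aspnetcore',
    '', SW_SHOW, ewWaitUntilTerminated, ResultCode) then
    Log('dotnet-install.ps1 exited with code ' + IntToStr(ResultCode));
end;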
Now, on to the second question you might not have realized you were asking: how do you automate this for any arbitrary installer? The answer is that you can't. Some vendors might provide a permalink that points directly to the latest version, but you are always going to find cases like Microsoft's. The best you can do is hard-code some links in a service of your own, as @martin-prikryl suggested, and update the links in that service when they break.
I am attempting to load the Glyphicons font files associated with Bootstrap 3 into JxBrowser, however, the network requests appear to be timing out and getting canceled by Chromium. See screenshot of devtools linked below. Observed with Java 1.8.0_121, JxBrowser 6.14.2 using JavaFX.
I do not encounter this problem in a dev environment, i.e. reading Bootstrap and the associated font files directly from the file system. It only occurs when attempting to load the files from an EXE, and more specifically, whenever the request is initiated from CSS via an @font-face rule. I attempted to preload the font from HTML using:
<link rel="preload" as="font" type="font/woff2" href="path/to/resource/in/exe">
That appears to have worked, as can be seen from the 200 response, also in the screenshot linked below. However, Bootstrap appears to be unaware that the font has loaded and attempts to load it itself, which subsequently fails.
https://imgur.com/a/w8wd0nr
According to the screenshot, the relative path points to a resource located inside a JAR archive. Please note that Chromium cannot load resources from inside an archive such as a JAR. You have two options:
Extract the required resources to a directory and load them from this directory.
Implement a custom protocol handler that intercepts URL requests for jar:// resources, reads the content of the required resource using the standard Java API, and sends a response to the web page as if it came from a web server. For more details, please see the example at https://github.com/TeamDev-IP/JxBrowser-Examples/blob/master/network/src/main/java/JarProtocolHandler.java
I started using ES6 JavaScript modules in my ASP.NET MVC application, but IIS Express refuses to serve a JavaScript file of type module in a script tag. I'm getting 401 Unauthorized.
<script src="~/Scripts/index.js" type="module"></script>
When I remove type="module" from the script tag, it works fine.
Are request filters involved? Can you please help me set them right?
I ran into the same issue in a pretty bare-bones Apache setup for personal use. I was pretty dumbfounded until I came across this.
Check out the small section titled "Server Considerations", which mentions the use of crossorigin="use-credentials" in the <script> tag. It sounded only vaguely relevant, since I'm working exclusively on a local origin, but I had nothing else to go on, so I tried it on a whim and it worked.
I can't pretend to understand why, or speak to any unintended consequences, so I would suggest diving into those aspects before pasting this into deployment.
When loading a website I get a Google Maps warning about a missing API key, and I can't find the origin of the call.
In the Network pane I found that the request was initiated by js:45, but I have no idea where js:45 is coming from.
At this point I'm stuck.
It is a WordPress site and I can't seem to find the function/plugin which loads those scripts.
Any ideas how to find the function that originally loads those scripts?
The following is the script that is being loaded and will eventually log that warning due to a missing API key.
<script type="text/javascript" charset="UTF-8" src="https://maps.googleapis.com/maps-api-v3/api/js/25/4/intl/en_gb/common.js"></script>
You'd need to look for the file that contains this script tag on your server. It's not possible for us to work out which file it originates from on the client side, because the server sends a single generated response assembled from multiple resources on the server.
A) <script src="https://apis.google.com/js/api:client.js"></script>
versus
B) <script src="https://apis.google.com/js/client.js"></script>
The only difference is the api: before client.js.
CDN A is used in the Google Sign-In for Websites docs in the Building a button with a custom graphic section.
CDN B is used in the Google API Client Library for JavaScript (Beta) docs.
They both appear to work interchangeably.
Short answer: there is no difference
Long answer:
The Google JS client CDN is a bit weird because the actual JS you get is dynamically created based on the file name you provide.
You can load multiple components of the library by constructing the URL as module1:module2:module3.js
api is the core part and is always loaded even if you don't add it to the list of modules, because it handles loading the other modules.
Theoretically, you could include just api.js and then dynamically load extra modules by calling gapi.load("module", callback), which is exactly what happens when you load api:client.js or just client.js.
If, for example, you wanted to use the API Client Library together with the new sign-in methods, you could include api:client:auth2.js or client:auth2.js.
And for extra confusion, you could even include https://apis.google.com/js/.js, which is the same as https://apis.google.com/js/api.js.
Use links only from the documentation!
It is simple to check this:
1) Add this script to the header of your page:
<script src="https://apis.google.com/js/client.js"></script>
Open DevTools -> Network and look at the requests that are made.
2) Change the link to the other script:
<script src="https://apis.google.com/js/api.js"></script>
Open DevTools -> Network again and compare the requests.
api.js is the core, while client.js is a module.
And https://apis.google.com/js/platform.js serves completely different content.