I recently implemented a site.webmanifest and a service worker on the new version of my blog.
https://nextjs.marcofranssen.nl/
Although Lighthouse reports that I meet 100% of the PWA requirements, including installability, I do not see the install button in the address bar of my browser.
On the current version of my blog it does show up.
https://marcofranssen.nl
The following picture shows the button when navigating to my current blog.
Now I'm wondering which requirement I'm missing.
I also reviewed the install criteria at https://web.dev/install-criteria/.
Does anybody have a clue what I'm missing or overlooking?
See here for the manifest file https://nextjs.marcofranssen.nl/site.webmanifest
This manifest file is also referenced in the head section.
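The reference itself is nothing special; simplified, it boils down to something like this (a sketch using next/head, not the exact component from my codebase):

// Simplified sketch of the manifest reference in the document head (via next/head).
import Head from 'next/head';

export default function Meta() {
  return (
    <Head>
      <link rel="manifest" href="/site.webmanifest" />
      <meta name="theme-color" content="#000000" />
    </Head>
  );
}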
My old blog is fully statically generated HTML. My new blog is built using Next.js, so it is not entirely static, although I don't think that should matter.
I ran a Lighthouse test on https://nextjs.marcofranssen.nl/ and the PWA section flagged a few problems. As detailed above, your web app manifest is missing start_url; with it added, the manifest looks like this:
{
  "name": "Marco Franssen - Blog",
  "short_name": "MF Blog",
  "description": "Blog by Marco Franssen, covering software development!",
  "icons": [],
  "theme_color": "#000000",
  "background_color": "#ffffff",
  "display": "standalone",
  "start_url": "/"
}
For more details about the manifest structure, please refer to Web app manifests.
After that you need to implement a service worker; for that you can use the next-pwa package or the next-offline package, and both use Google's Workbox under the hood.
I prefer next-pwa, because it works out of the box and doesn't need much configuration.
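A minimal next.config.js with next-pwa looks roughly like this; the exact shape depends on which next-pwa version you install, so treat it as a sketch rather than the definitive setup:

// next.config.js - sketch for a next-pwa v5-style configuration
const withPWA = require('next-pwa');

module.exports = withPWA({
  pwa: {
    dest: 'public', // where the generated service worker and Workbox files are emitted
    disable: process.env.NODE_ENV === 'development', // optional: skip the service worker in dev
  },
});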
So I am using Inno Setup 6, which natively supports downloading files from the internet during installation. I have figured out how to download files given a direct link, from this thread: Inno Setup: Install file from Internet
However, I can't for the life of me figure out how to download the latest version of a file given a permalink URL. My specific example is to download the Microsoft Hosting package.
https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer
Going to this page automatically downloads the latest package.
Inno doesn't like this link (or I don't know how to get Inno to use it) since it doesn't point to the direct file. If I use the direct link (https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe) this works for obvious reasons.
I'd like to always download the latest, but I'm not sure how to accomplish this. Any suggestions?
Adding the super basic code being used (DownloadPage is the TDownloadWizardPage created earlier with CreateDownloadPage):
DownloadPage.Clear;
DownloadPage.Add('https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer', 'dotnet-hosting.exe', '');
DownloadPage.Show;
You would have to retrieve the HTML page, find the URL in the HTML code and use it in your download code.
See Inno Setup - HTTP request - Get www/web content
It would be quite unreliable. Microsoft can change the HTML any time.
You would be better off setting up your own web page (web service) that provides an up-to-date link to your installer. The web page can even do what I suggested: retrieve the URL from Microsoft's download page. If Microsoft changes the HTML, you can fix your web page at any time, which is something you cannot do with an installer that has already shipped.
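For illustration, such a service can be tiny. A sketch in Node.js (Node 18+ for the built-in fetch; any stack works, and the regular expression is an assumption that has to track Microsoft's current HTML):

// Minimal resolver service: scrapes Microsoft's page and redirects to the current installer.
const http = require('http');

const PERMALINK =
  'https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer';

http.createServer(async (req, res) => {
  try {
    // Follow the permalink and fetch the HTML of the download page.
    const html = await (await fetch(PERMALINK)).text();
    // Pull the first direct installer link out of the page.
    // This pattern is a guess and breaks whenever Microsoft changes the markup.
    const match = html.match(
      /https:\/\/download\.visualstudio\.microsoft\.com\/[^"']*dotnet-hosting[^"']*\.exe/);
    if (!match) throw new Error('download link not found');
    // Redirect the caller to the real file; the installer keeps pointing at this stable URL.
    res.writeHead(302, { Location: match[0] });
    res.end();
  } catch (err) {
    res.writeHead(502);
    res.end(String(err));
  }
}).listen(8080);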
Without realizing it, you are asking two different questions here. That is because these "permalinks" aren't really permalinks, but redirects to a dynamic resource that contains a link to what you are looking for.
First, addressing the Microsoft "permalink": under the hood you are accessing a URL that redirects to a page which points to the latest version. That page then, if you are accessing it via a web browser, invokes a JavaScript function to download the installer. Note that both the page pointed to and the code that invokes the installer will eventually change. In fact, the code itself logs a "warning" when people attempt to download directly.
If you view the page source you'll see:
<script>
  $(function () {
    recordDownload('.NET', 'runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer');
    window.open("https://download.visualstudio.microsoft.com/download/pr/24847c36-9f3a-40c1-8e3f-4389d954086d/0e8ae4f4a8e604a6575702819334d703/dotnet-hosting-5.0.6-win.exe", "_self");
  });
  function recordManualDownload() {
    ga("send", "event", "Download.Warning", "Direct Link Used", "runtime-aspnetcore-5.0.6-windows-hosting-bundle-installer");
  }
</script>
So you can download the HTML from this page and use a regex to get the direct download link, but beware: the link is going to change every time Microsoft releases a new version. Furthermore, when (not if, but when) MS decides to rebrand, this entire process might break. So the best you can do here is download the HTML and try to parse the download URL out of this "permalink".
As an alternative, you can download the latest .NET PowerShell install script as described here.
If possible, execute that script directly. If not, look at the function Get-AkaMSDownloadLink within the install script to see how it builds the URL for the latest version. You would probably be better served building and using that URL rather than attempting to scrape a link out of arbitrary HTML.
Now, on to the second question you might not have realized you were asking: how to automate this for any random installer. The answer is that you can't. Some products might have a permalink that points directly to the latest version, but you are always going to find cases like Microsoft's. The best you can do is hard-code some links in a service of your own, as @martin-prikryl suggested, and update the links in that service when they break.
I have a link that looks like this:
https://mywebsite.com/#/new-account?jk=-LOLgLiyfxANW-ojMKrf
jk is the variable that I would like to read and use on this page. The variable will change for each user.
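Reading the value itself is not the hard part; once the page actually loads, something along these lines would do (just a sketch):

// Read ?jk=... out of a hash-style URL like https://mywebsite.com/#/new-account?jk=...
function getHashParam(name) {
  const hash = window.location.hash;        // e.g. "#/new-account?jk=-LOLgLiyfxANW-ojMKrf"
  const queryStart = hash.indexOf('?');
  if (queryStart === -1) return null;
  return new URLSearchParams(hash.slice(queryStart + 1)).get(name);
}

const jk = getHashParam('jk');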
The problem is that when this link is clicked, the user is redirected to the main page of the application. The app is a PWA developed using Ionic. I checked my code carefully and there is nothing that would cause such redirect behavior.
So far I have tried deploying the app again with the "single page" option deselected during Firebase init, but the problem persists.
thanks
I contacted Firebase support and they suggested I solve it by including this in firebase.json, within the hosting section:
"redirects": [{
"source": "/#/new-account?:vars*",
"destination": "https://mywebsite.com/#/new-account?:vars*",
"type": 301
}]
Firebase Hosting captures whatever text the :vars* placeholder matches and carries it over to the destination. Otherwise, it returns a 404 for any link that doesn't exactly match the URL of an existing page.
Maybe there is a more elegant way to do this but it worked well for my situation. More info is available at this link:
https://firebase.google.com/docs/hosting/full-config#redirects
I'm using RStudio Blogdown/GitHub/Netlify to maintain my blog site, with the Academic theme. When I push changed .RMD files to GitHub, the changed pages do not seem to deploy, but if I build the entire site and push it, the site deploys on Netlify without any problem. Unfortunately, it takes about three minutes to build the entire site, so I'm looking for a faster solution.
I think I should be able to build a single directory, which would be super fast, but when I build a directory with blogdown::build_dir("content/project/cont_imp"), the HTML document does not build properly. It seems to render as a single long JavaScript file, and since all of the metadata in the YAML header is wrapped into the script, the page on Netlify does not deploy properly: things like the date and subtitle are missing and it is not formatted like the rest of my site.
I have one bad page that I built with build_dir on GitHub, so you can view both the .RMD source and .HTML rendered documents: https://github.com/grself/icochise/tree/master/content/project/cont_imp. You can see this project page on my live site at https://icochise.com/ (scroll down to the "Projects" section and notice the difference between the "Continuous Improvement" link (no text there, just an image of a hand and a whiteboard) and the "Blogdown and Bookdown" link). I just now noticed that the HTML document seems to be some sort of self-extracting JavaScript, so after a couple of seconds the source code looks normal. Maybe there is some kind of setting on Netlify I need to change so it will extract the JavaScript as it is deploying the page?
I checked the settings in my "Configure Build Tools" and unchecked "Preview site after building" and "Re-knit current preview..." but that didn't help. I also tried changing the Project build tools dropdown from "Website" to "Custom" and specified the Hugo executable. None of these things helped.
I also tried running "Serve Site" while I worked, thinking that would continuously render the HTML page, but that tool seemed to hang and would not display the site once I made changes to an .RMD file. In fact, it was hung up so badly that I had to kill RStudio with the Windows Task Manager.
Finally, I also tried to update Hugo, hoping that there was something fouled up in my Hugo install, but that did not help.
I suspect that I'm doing some simple thing wrong, but have tried everything I can think of to fix this and would appreciate any suggestions.
Currently, we are using the Professional Plan (Monthly), which gives us the ability to deploy the website to our own URL. I followed the documentation provided on the official Shiny website:
https://shiny.rstudio.com/articles/custom-domains.html
After following this tutorial, we ran into one problem: the 'full-screen' button for the Leaflet map no longer works, because we get an error like this:
Mixed Content: The page at '<URL>' was loaded over HTTPS, but requested an insecure image '<URL>'. This content should also be served over HTTPS.
Are there any possible solutions for this? Note: we set up the CNAME using Bluehost and deploy the application to the Shiny web server.
Google Analytics demo code at .
When logged in to Google Chrome as the owner of the Analytics account, navigating to that page displays my Google Analytics data correctly.
I followed the instructions on the page and embedded the code into a simple page .
Authentication works, as indicated by the displayed message "You are logged in as: me(at)gmail.com", but there is nothing more: no graph, no message.
I am reasonably certain that the page is coded correctly as I have:
Basic Dashboard (basic.html)
Multipleviews (multipleviews.html) and
Interactive Charts (ic.html)
all working and displaying correctly (they display, though not styled like the demo).
Why will the page not display the graphics?
As Eike pointed out in the comments, you've simply copied and pasted the code from the demo without downloading the components to your own server. If you open up your JavaScript console, you'll notice that you have 404 errors saying the browser can't find those components. Here's a screenshot of what I see on your site:
To add those components to your site, you have a number of options. I've answered a similar question in one of the repo's GitHub issues, but I'll copy it here for convenience.
The built and minified versions of those components are located in the build/javascript/embed-api/components directory. You can simply download those files and add them as script tags on your page, or include them in your site's main, bundled script (see the sketch after these options).
If you're using an AMD script loader like RequireJS, you can also just point to those built files as they're wrapped in a UMD wrapper.
If you're using a tool like browserify or webpack, you can npm install this repo and require the files in the src/javascript/embed-api/components directory.
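For the script-tag route, it boils down to something like this; the paths and filenames below are only placeholders for wherever you copy the component files:

<!-- Illustrative paths: point these at the files you copied from
     build/javascript/embed-api/components in the repo. -->
<script src="/scripts/embed-api-components/active-users.js"></script>
<script src="/scripts/embed-api-components/date-range-selector.js"></script>
<!-- Load the component scripts before the inline dashboard code that instantiates them. -->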