Firebase serve does not update Service Worker

When I run firebase init at the command line, create a standard web page, run firebase serve, and then open http://localhost:5000, I usually get a web page that I was working on at a previous time. I am almost certain this is served by a previous version of serviceworker.js.
The result also depends on which browser I am using on my Mac (Safari, Chrome, Firefox and Opera); each one shows something different. My feeling is that there should be a clear or reset command for serviceworker.js so that a new serviceworker.js is picked up. So the question: is there a service worker reset command somewhere, ideally at the command line?
Or am I just nuts?

Since the Service Worker registration lives entirely within the browser, there's no way for the firebase serve command to know that you've made changes.
To clear all locally stored data, including Service Worker registrations (for Chrome at least), you can open the web inspector, go to the "Application" tab, and click the "Clear site data" button. Safari does not yet support Service Workers, so you shouldn't see the same behavior there.
If you're working on multiple web apps at the same time, you might want to consider using different ports for each, e.g. firebase serve -p 5001 for a second app.
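If you'd rather not click through DevTools every time, a quick sketch you can paste into the browser console on the page (it only uses the standard navigator.serviceWorker and Cache Storage APIs) is shown below; it unregisters every service worker for the origin and deletes its caches so the next load fetches fresh files:

// Unregister every service worker registered for this origin, then delete
// all Cache Storage entries so the next page load fetches fresh files.
navigator.serviceWorker.getRegistrations()
  .then(registrations => Promise.all(registrations.map(r => r.unregister())))
  .then(() => caches.keys())
  .then(keys => Promise.all(keys.map(key => caches.delete(key))))
  .then(() => console.log('Service workers unregistered and caches cleared'));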

Related

Puppeteer is downloading unrequested files that cannot be intercepted or stopped

I am using Puppeteer along with a proxy service, and after seeing inexplicably high bandwidth usage I used a local proxy server to monitor the requests that were generating this bandwidth. I discovered that almost 90% of the traffic was used to request some crx files/updates.
My project requires me to open a few thousand browsers every hour, in order to keep each task with its own cookies and proxy. Every Chromium browser I open will eventually download ~10-15MB of files, using the proxy that is passed as an arg to puppeteer.launch.
puppeteer.launch({
  headless: false,
  args: [
    `--proxy-server=http://${this.proxy.host}:${this.proxy.port}`
  ]
});
These requests do not appear in the Network section of DevTools and cannot be intercepted using:
await page.setRequestInterception(true);
page.on("request", cb);
I started a local proxy server and passed it to Puppeteer via the launch args, in order to monitor the requests Chrome made through it. This is how I found out about these downloads. I blocked the first domain that Chromium was using to download these crx files, but Chromium started to download them from another domain, and so on. Some of these domains and URLs are:
http://redirector.gvt1.com/edgedl/chromewebstore/L2Nocm9tZV9leHRlbnNpb24vYmxvYnMvYjFkQUFWdmlaXy12MHFUTGhWQUViMUVlUQ/0.57.44.2492_hnimpnehoodheedghdeeijklkeaacbdc.crx
http://dl.google.com/chromewebstore/L2Nocm9tZV9leHRlbnNpb24vYmxvYnMvYjFkQUFWdmlaXy12MHFUTGhWQUVi
https://google.com/dl/something/something.crx
There were even more. When I block one domain, Puppeteer finds another. These files are downloaded for every new browser launched, using expensive proxy bandwidth.
Is there a way to stop these downloads, or at least make Chromium download them only once, not for every new browser launched? Can I at least instruct Chrome to download these files without using the proxy?
This happens for both v5.5.0 and v8.0.0.
After a lot of time trying to find out what this extension is that Chrome always has to download, I found out about Chromium Components, which can be inspected at chrome://components. It looks like these are also shipped as crx files.
In my particular case Chrome was downloading "pnacl". The only way I was able to find this was by recognising the version number from the first link I posted in my question (0.57.44.2492). Using chrome://components in a browser instance launched by Puppeteer with the headless option set to false, I found that pnacl had exactly the same version.
I was able to prevent Chrome from downloading this component using the flag --disable-component-update. This flag is used by default by some webdrivers, but not by the one that Puppeteer (v5.5.0 or v8.0.0) downloads.
If anybody else encounters this problem, yours may be related to an extension instead of a component, so you may also need a flag to disable extension updates. There is no such flag, so I use --disable-extensions and --disable-default-apps just to make sure.
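Putting it together, a launch call that combines the proxy argument from the question with the flags mentioned above might look like the sketch below (proxyHost and proxyPort are placeholders for your own proxy settings; adjust the flag list to your needs):

const puppeteer = require('puppeteer');

// Sketch: launch Chromium with component and extension updates disabled so it
// does not pull crx files through the metered proxy on every start.
async function launchBrowser(proxyHost, proxyPort) {
  return puppeteer.launch({
    headless: false,
    args: [
      `--proxy-server=http://${proxyHost}:${proxyPort}`,
      '--disable-component-update', // stops components such as pnacl from re-downloading
      '--disable-extensions',
      '--disable-default-apps'
    ]
  });
}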

What to use instead of Azure Web Apps to allow installation of google chrome in app environment?

I've just created a feature for our application which generates a PowerPoint report from the data a given user has in our system.
In short, the server spawns an instance of Google Chrome using Selenium's ChromeDriver, and from there scrapes out the charts from our application running in Chrome. It was done this way to ensure the charts in the report look exactly the same as they appear in the clients' browsers.
We use Azure Web Apps to host our development and production environments, and while my reporting feature works fine in local environments, it doesn't work once deployed to any other environment, because it depends on Chrome being installed, and I can't get it installed in the Azure Web App sandboxed environment.
(you can see this other question of mine for a bit of a reference to where things are going wrong: PowerShell StartProcess: invalid handle )
SO
What I pretty much want to know is: if an Azure Web App environment isn't going to allow me to install Google Chrome, where should I look next?
It looks like using Service Fabric may allow me to install what I need appropriately (https://learn.microsoft.com/en-us/azure/app-service/choose-web-site-cloud-service-vm), but it seems like a big change to make just to be able to facilitate this small part of the feature.
Another option is to just re-architect the feature so it doesn't depend on the server spawning an instance of Google Chrome, but I'd prefer to avoid that if there's a straightforward way to get what I have working.
Ideally, there'd just be a way to get google chrome installed in the given environment, but I've spent a good 10 hours trying to get that to happen now, and it's not looking promising.
There are a couple of solutions that would work, depending on your code and framework dependencies.
IMO, the simplest way would be to build your code into a Docker container (one that runs the Selenium ChromeDriver) and deploy it either through the container features of Web Apps, or run it on demand through ACI (Azure Container Instances) and have it create the report and drop it in Azure Storage. In a container you have far more freedom in what you can install and in how you run it. Spinning up an ACI on demand to do the job can be done in multiple ways (e.g. from code, through Logic Apps, or via PowerShell/Azure Automation).
Here are some links on running containers in your App Service:
https://learn.microsoft.com/en-us/azure/app-service/containers/
https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image
You could start off by building and adding your code from this image: https://github.com/SeleniumHQ/docker-selenium
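As a quick sanity check before wiring this into App Service or ACI, you can run that stock image locally and point your WebDriver at it. A minimal sketch using the image's documented defaults (port 4444; the increased shared-memory size is the project's own recommendation for Chrome):

docker run -d -p 4444:4444 --shm-size="2g" selenium/standalone-chrome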
There are other alternatives, of course: you could have a VM that you can install on and do what you want with on demand; however, that would add more management overhead and other implications to think about.
Many options, but in the regular Web App sandbox you're limited.
I have run into this problem myself, with chromedriver.exe needing a real Chrome. As I cannot install Chrome in Azure App Service, I am trying a portable version of Chrome. When using the Chrome WebDriver, I tell it where to find the Chrome binary.
using OpenQA.Selenium.Chrome;

var options = new ChromeOptions();
options.AddArguments("headless"); // any options you need
options.BinaryLocation = "YOUR CHROME BINARY PATH HERE"; // path to the portable chrome.exe
var driver = new ChromeDriver("YOUR CHROME DRIVER PATH HERE", options);
You should be able to just copy the portable Chrome files, as no installation is required. It is heavy though, about 250 MB, because it includes the non-portable version inside.
Be sure to use a Chrome version compatible with your ChromeDriver, as pointed out in the documentation.

Chromium profile directory is already used by another BrowserContext instance or process

I am using an evaluation version of JxBrowser, version 6.14, and wrote a demo with it, but I have a problem.
I use the demo app to start an application that shows a web UI and keep this application open, but when I start the demo app again, the system throws the exception below:
chromium profile directory is already used by another BrowserContext instance or process
Can JxBrowser not start two clients on one PC? If it can, how do I resolve this?
We strongly recommend that you don't use several BrowserContext instances with the same profile directory. The Chromium engine wasn't designed for such usage and doesn't support it. Even if you don't see any issues right now, issues will appear later in end-user environments. For example, on macOS you will get Chromium's error message dialog every time you run an application instance developed this way.
Since it's a critical requirement of the Chromium engine, I don't think we will make it configurable in upcoming versions. This is how the Chromium engine works, and it is a recommendation we have to follow when working with it.

Windows Explorer not refreshing after CreateFolder (new folder)

We have built a WebDAV service with your engine and have one problem when we create a new folder or file:
The new folder/file is created successfully, but it does not show up in Windows Explorer. Only when you press F5 does the new folder/file appear (with its name already selected for editing).
This behavior is reproducible even with a blank WebDAV solution.
We can reproduce this on Windows 7 and Windows 8 (8.1) using WebDAV .NET Server 3.8 and the latest 3.9.
Is there a way to get around this “refresh-problem”?
I solved this issue by clicking in File Explorer on View > Options, then restoring the defaults, and everything was back to normal.
I assume this issue is in Windows Explorer on a single computer. Most likely the WebDAV server-side code is failing with some exception. Here are some ideas on how to detect what is wrong:
Unmount network connections by executing 'net use * /DELETE' in a command prompt; this will unmount WebDAV connections too and simulate a 'clean' environment.
Retry reproducing the issue and examine your WebDAV log file. By default it is located in the /App_Data/WebDAV/Logs/ folder. Are there any exceptions in it?
Use the Fiddler tool or any other debugging proxy to capture and examine HTTP requests. Are there any failed requests?
In case you are creating a folder/file on one computer using Windows Explorer (the Microsoft Mini-Redirector driver) or IT Hit Ajax File Browser and expect the file list to refresh automatically on another computer, this will not work. The Mini-Redirector does not support any notifications from the server, and WebDAV does not send any, so you need to refresh the file list manually to see the newly created items.
I found this video on YouTube that explains in great detail how to fix this problem: https://www.youtube.com/watch?v=UUiCPsQquqc
It is a bit lengthy, so I'll just quickly sum it up here:
The reason for these problems is one or more (broken) shell extensions that prevent Windows Explorer from refreshing.
To fix it, open up regedit.exe (requires admin privileges), do a search for the Registry Key "DontRefresh". If it is "1", set it to "0". There might be multiple matches for that Key, so repeat until all Keys have the value "0".
This might not take effect immediately; you may have to kill and restart your explorer.exe process (easiest to do with Task Manager), or you can simply reboot your computer. In my case, it worked immediately.
According to the video, the Keys should only be located under HKEY_CLASSES_ROOT/CLSID, but in my case I could only find such Keys in HKEY_LOCAL_MACHINE/Classes/Wow6432Node/CLSID.
I figured it makes the most sense to simply search the complete registry; it does not take very long.
I tried a lot of hacks, from scanning the system to recreating the profile to hacking registry keys and hives.
Finally what worked for me -
Right click on desktop
Select Personalize
Click Themes
Click Change desktop icons
Click Restore default & OK
And instantly it began to auto-refresh on new folder, rename, delete, copy, etc.

not seeing changes after publishing changes

I have an ASP.NET 2.0 site which is compiled and has been put in the IIS virtual directory.
However, the new changes can't be seen by some users, while other users can see them on the web site. I have tried deleting the cache, cleaning internet settings, and trying with Mozilla and Chrome, and still can't see the new changes.
We also tried cleaning the connection pool on IIS.
Is there anything more I can possibly do?
Have you confirmed that the machines are hitting the right server? Ping the URL from the "bad" machines and see what it resolves to.
The JavaScript changes may require browsers to download the newer script that was published (cached JavaScript might be older).
For session-based changes, if users retained an older session, the changes may not be reflected until they restart the session.
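One common way to avoid the stale cached-script issue mentioned above is to version the script URL so that every publish looks like a new resource to the browser. A minimal sketch (the helper name, file path, and version value are placeholders, not part of the original answer):

// Hypothetical helper: append a version stamp to the script URL so a new
// publish changes the URL and bypasses the browser's cached copy.
function loadVersionedScript(src, version) {
  var script = document.createElement('script');
  script.src = src + '?v=' + encodeURIComponent(version);
  document.getElementsByTagName('head')[0].appendChild(script);
}

loadVersionedScript('/Scripts/site.js', '2.0.1'); // bump the version on each deploy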
I found the answer: the server was split across two machines, and when the server admin updated the files he only did so on one of the servers, not both. Now both are updated and running OK. Thanks everyone for the suggestions.
Two things to try (if Bill Gregg's suggestion doesn't work):
1) Bounce the web server services.
2) Download a freeware app called CCleaner (formerly called Crap Cleaner... it was seriously called that). This app (when configured correctly) will clean all the junk web files off client machines.
