How to enable caching in Google Chrome in Cypress - automated-tests

I have an application that loads a main.js file. When running tests in Cypress, every cy.visit() downloads it again, but only in Chrome; other browsers don't have this problem, they cache the file. How do I get Cypress to cache it in Chrome?
I can't find information about this problem anywhere. It confuses me that everything works as it should in other browsers.
I want this file to be cached while a test runs; otherwise every cy.visit() downloads it again.
I use "cypress": "^10.3.0".

You can do this by using the Chrome command-line switches listed here. The one you are looking for is --app-cache-force-enabled.
To use it in Cypress, you have to configure it in your config file, in a before:browser:launch handler.
Examples of how to do that are available here. Your code should look something like this:
if (browser.family === 'chromium' && browser.name !== 'electron') {
  launchOptions.args.push('--app-cache-force-enabled');
}
return launchOptions;
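For context, here is a minimal sketch of where that handler lives in a complete cypress.config.js (assuming Cypress 10+ and its e2e testing type; the switch itself comes from the answer above):

// cypress.config.js -- minimal sketch for Cypress 10+
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      // Adjust the browser's launch arguments before each run.
      on('before:browser:launch', (browser, launchOptions) => {
        if (browser.family === 'chromium' && browser.name !== 'electron') {
          launchOptions.args.push('--app-cache-force-enabled');
        }
        return launchOptions;
      });
      return config;
    },
  },
});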

Found a solution to the problem.
Chrome's security policy disables caching when there is a problem with the site's certificate; see this Chromium bug https://bugs.chromium.org/p/chromium/issues/detail?id=110649#c8 and this Cypress issue https://github.com/cypress-io/cypress/issues/7307.
The problem can be solved by adding the certificate to your trusted certificates. The certificates can be found under cy/production/proxy/certs/.

Related

Why is my raw source code easily accessible via the Debugger's Network tab?

I have been working on my website for a month and just realized that there is this extra _N_E server providing access to the raw source code used for each page.
I am using NextJS and suspect that Sentry may be responsible, but I cannot find anything in their documentation about it. This is a risk because it happens not only in development but in production as well, and I do not want users to have access to my raw source code.
Has anyone ever seen this before?
Can anything be done about it and still get accurate results from Sentry?
Publishing source maps publicly means anyone (including Sentry) has access to them.
There are two ways you can address this:
Set up a CDN rule that only allows Sentry's servers to fetch the source maps, a.k.a. IP whitelisting.
Upload the source maps to Sentry directly - https://docs.sentry.io/platforms/javascript/guides/react/sourcemaps/uploading/ (see the sketch after this list).
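A minimal sketch of the upload approach using @sentry/webpack-plugin (v1-style API; the org, project, and include values are placeholders, and an auth token is assumed to be supplied via SENTRY_AUTH_TOKEN or .sentryclirc):

// webpack.config.js -- sketch of uploading source maps to Sentry at build time
const SentryWebpackPlugin = require('@sentry/webpack-plugin');

module.exports = {
  // 'hidden-source-map' generates maps without referencing them in the bundle,
  // so browsers never fetch them while Sentry still gets a copy.
  devtool: 'hidden-source-map',
  plugins: [
    new SentryWebpackPlugin({
      org: 'your-org',          // placeholder
      project: 'your-project',  // placeholder
      include: './dist',        // build output containing the .map files
    }),
  ],
};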
Here is a ticket describing this problem and how to resolve it.
Make sure to use @sentry/nextjs >= 6.17.1.
In your Next config file, you want to set the hideSourceMaps flag. This boolean determines whether source maps are hidden from client bundles (they are still uploaded to Sentry, so error reports stay readable). For instance, you may want to set it conditionally for preview deploys.
// next.config.js
const nextConfig = {
  // ... other options
  sentry: {
    hideSourceMaps: process.env.NEXT_PUBLIC_VERCEL_ENV === "production",
  },
};

module.exports = nextConfig;
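Note that the sentry block only takes effect when the config is wrapped with withSentryConfig from @sentry/nextjs, which applies Sentry's webpack plugin. A minimal sketch (the second argument takes optional webpack-plugin options such as silent):

// next.config.js -- sketch with the @sentry/nextjs wrapper applied
const { withSentryConfig } = require("@sentry/nextjs");

const nextConfig = {
  sentry: {
    hideSourceMaps: process.env.NEXT_PUBLIC_VERCEL_ENV === "production",
  },
};

// withSentryConfig injects Sentry's webpack plugin, which reads the
// sentry block above and uploads source maps during next build.
module.exports = withSentryConfig(nextConfig, { silent: true });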
One thing to note: previously I was using v7.6.0 and was able to get the source map files. I have now upgraded to v7.14.1 and am no longer able to get the source files to display on deploys, regardless of the flag's condition. Not sure if this is a regression or just a partially implemented feature.

Google Chrome localhost error NET::ERR_CERT_AUTHORITY_INVALID without option to dismiss

I am developing a website using Roots.io stack (Trellis + Bedrock + Sage).
I am working locally on several sites and they were all working fine. Then today I rebooted my computer, executed vagrant up, and attempted to access the local development URL https://mysite.dev, but suddenly got an error in Chrome stating "NET::ERR_CERT_AUTHORITY_INVALID".
Normally I do get a similar error, but with the option to dismiss it. Now I do not.
Via BrowserSync, I can access the site via localhost:3000 but not using the development URL.
If you're familiar with Roots, you know that Trellis generates the SSL certificate locally as a self-signed certificate in an automated process, so I know very little about how it works outside of their documentation.
I understand that this issue seems to be a mismatch in the local SSL certificates, but I don't really know how to troubleshoot that. I'm thinking there is a local file that needs to be deleted and replaced, but I don't know how to generate a replacement if that's the case.
I have spent about an hour reading any articles I could find on the topic but none seem to really explain precisely what's going on in a way I can apply.
Update: Ultimately I'm looking for a way to add an exception for the certificate in Chrome. I was able to do it in Firefox and it's working there.
Thank you.
Short answer: Don't use the .dev extension in your local URLs; it is now a real top-level domain, no longer reserved for local development, and Chrome forces it to HTTPS via a preloaded HSTS rule.
Long Answer: https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/
You can either:
Import the certificate using Chrome's Options > Manage Certificates > Import.
Or simply ignore SSL errors by launching Chrome with the --ignore-certificate-errors switch, e.g. /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --ignore-certificate-errors &> /dev/null & (not recommended).

Phabricator: running over https, doesn't load any images. Firefox reports blocking unencrypted content

If I click the little shield icon next to 'https' and select "Disable protection for now" via the "Options" button, things seem to work fine.
I added the https:// URL to phabricator.production-uri and phabricator.allowed-uris with no luck.
Found it:
bin/config set phabricator.base-uri https://<your-base-url>
bin/phd restart
I had previously added that https URL to phabricator.production-uri and phabricator.allowed-uris (I don't know if that mattered).
Warning: At one point I managed to completely mess up the login screen, probably because I didn't run bin/phd restart. If that happens, restore phabricator.base-uri to its previous value.
In addition to setting phabricator.base-uri, you may also need to change security.alternate-file-domain to use HTTPS. Read https://secure.phabricator.com/book/phabricator/article/configuring_file_domain/ to learn more about this setting.
Alternatively, you can simply delete the setting by running bin/config delete security.alternate-file-domain.
This same issue occurred to me after installing a TLS certificate.
Setting the base-uri option did not work for me, nor did the production or allowed uri options.
What solved it was setting the security.alternate-file-domain parameter to the https url, as explained here: https://secure.phabricator.com/book/phabricator/article/configuring_file_domain/
Perhaps this isn't the optimal solution, but it's not clear what else to do.
My setup: Bitnami Phabricator pre-configured instance over AWS.
Looks like now the way to go is to create a support/preamble.php which contains nothing but
<?php
$_SERVER['HTTPS'] = true;
as described here

What can be preventing this cdn file from loading on my webpage?

I wish I had a more generic way of asking this question but I really can't figure out what could be going on.
Using dev-channel Chrome 26 (and IE 10), I'm hitting a simple HTML site in my public Dropbox here.
In my browser, Handlebars.js (from cdnjs.com) never loads and I get an error. According to the Network tab, it never even tries to load it. Yet clicking through the source to the script file shows it is definitely a live link. Why Handlebars? Additionally, the exact same site served from a local server loads just fine.
I'm at a loss as to what could possibly have this effect. You'd think the issue would be serving the site from Dropbox, but it seems to be the browser itself misbehaving. And why on earth does it not make any request at all?
My repo, by the way, is on GitHub on the preformance-tuning branch.
It looks like Chrome is throwing an insecure-content warning on your scripts, most likely because you are trying to access content hosted over HTTP while your site is served from Dropbox over SSL (HTTPS). Chrome silently blocks scripts it considers "insecure" mixed content, without making any request at all. The fix is to reference the CDN script with an https:// (or protocol-relative //) URL so it matches the page's scheme.
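To confirm which tags are affected, here is a small check you can paste into the DevTools console on the https page (plain JavaScript, no libraries assumed):

// List every script or stylesheet requested over plain http on this page --
// these are the mixed-content candidates Chrome blocks on an https page.
[...document.querySelectorAll('script[src^="http:"], link[href^="http:"]')]
  .forEach(el => console.warn('Mixed content:', el.src || el.href));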

Artifactory web interface doesn't render properly

I installed Artifactory (2.6.5) on a Linux server and started it, and now I want to configure some things through the web interface. But the web interface doesn't look like it should.
The only thing I edited after the installation was the default port in the jetty.xml file.
I have no idea what the reason for this appearance (see screenshot) is, so any help is appreciated.
A couple of things could cause this:
Residual browser caches (very likely).
UI Resources blocked by the browser (quite unlikely).
The first case can easily be solved by clearing the browser's cache and reloading the page.
The second case might require you to investigate which resources are being blocked (using the browser's console) and tweak the security settings/rules.
