We have a Google Optimize redirect test running. We noticed that the page is being loaded twice (that is, reloaded by the Google Optimize JavaScript right after it first loads) even when there's no redirect taking place and the URL is the same.
Is there any way to turn this off?
Unfortunately, this happens even after we end the particular test (why?). We keep getting these double loads. We really want to avoid this because it's messing with some pages that use session flash messages.
I am experiencing something in production which isn't reproducible locally.
The Link component from Next.js (version 12.3.x) works well in development and when running the production build locally, meaning that navigation happens without a full page reload. But when deployed with Terraform, all Link components cause a full page reload. Everything else works as expected.
I have a mixture of Link children across the application: sometimes it's an a, but other times it's a button or simply a div or span. In every case, the full page refresh happens. That's why I suspect it must be something related to the configuration rather than the Link usage; however, I am not sure where to start debugging and I am looking for a hint in the right direction.
Back with an answer to this. In my case, nothing was wrong with the Link components themselves, nor with the build. The problem was a path rewrite in our Terraform configuration (it was rewriting everything under /_next/*).
It appears that, on client-side navigation, Next.js fetches JSON data files for pages that use getServerSideProps and uses them to render the page. Their paths were being rewritten, causing a 403 error, which made the page do a full reload instead of letting me navigate seamlessly as I am used to with Next.
This problem was very specific to my configuration, but my general recommendation is to check whether you are rewriting the paths of the JSON files Next.js requests during navigation, in case you are experiencing the same problem only in staging/production.
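If you want to confirm whether the same thing is hitting you, a rough way to check from the browser console looks like this (the route is a placeholder for one of your own pages that uses getServerSideProps):

```js
// Rough check, run from the browser console on a deployed page.
// Assumes the Pages Router; '/some-page' is a placeholder for a route
// in your app that uses getServerSideProps.
const buildId = window.__NEXT_DATA__.buildId;
const dataUrl = `/_next/data/${buildId}/some-page.json`;

fetch(dataUrl).then((res) => {
  // Anything other than a 200 here (e.g. a 403 from an overeager rewrite)
  // forces <Link> navigations to fall back to a full page load.
  console.log(dataUrl, res.status);
});
```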
I am trying to restrict the user from downloading the page as an .html or .aspx file from the browser.
Or is there a way to change the content of the file if it's downloaded?
This is a complex area, with lots of moving parts. The short answer is "there is no way to do this with 100% success; there are a few things you can do which make it harder".
Firstly, you can include JavaScript to disable the right-click context menu. This doesn't stop Ctrl+S, but it might discourage casual attempts.
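For completeness, that part is just something like this (it only hides the menu; it doesn't prevent saving in any meaningful way):

```js
// Suppress the right-click context menu on the whole document.
// Note: this does nothing about Ctrl+S, View Source, or the dev tools.
document.addEventListener('contextmenu', function (event) {
  event.preventDefault();
});
```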
Secondly, you can use DRM in the browser (though this is primarily aimed at protecting media content). As browser support is all over the place, this isn't realistic right now.
Thirdly, you could write your site as a single page web application, and build some degree of authentication into the "retrieve content" logic. This way, saving the page to disk wouldn't bring the content along, just the "page furniture". However, any mechanism you include to only download content when you think you should is likely to be easily subverted by anyone who is moderately motivated.
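As a rough illustration of that approach (the /api/content endpoint and the response shape are made up for the example):

```js
// Minimal sketch: the HTML ships as an empty shell and the real text is
// fetched only for an authenticated session, so a saved copy of the page
// contains just the empty #content element.
// '/api/content' and the response shape are hypothetical.
async function loadContent() {
  const response = await fetch('/api/content', {
    credentials: 'include', // send the session cookie
  });
  if (!response.ok) {
    document.getElementById('content').textContent = 'Please log in.';
    return;
  }
  const { html } = await response.json();
  document.getElementById('content').innerHTML = html;
}

document.addEventListener('DOMContentLoaded', loadContent);
```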
Also, any steps you take to stop people persisting your pages locally are likely to break the caching mechanisms on which the internet depends for performance, so your site would likely be dramatically slower.
No, you can't stop them.
Consider how the web actually works here: once the user has visited your website and loaded your page into their browser, they have already downloaded it - the web page was transmitted from your server to their computer and appeared on their screen.
All they have to do then is click the Save button to keep it permanently on their disk. That doesn't involve downloading it again; it just copies the page data from a temporary folder to a permanent one. Of course, it's also possible for people to use another HTTP client (i.e. not a browser, but maybe an existing program, or some code they wrote themselves) to visit the URL of your page and save the returned contents.
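For example, a few lines are enough to fetch the page and keep whatever comes back (a sketch assuming Node 18+ with its built-in fetch; the URL is a placeholder):

```js
// Node.js (18+) sketch: request the page like any other HTTP client and
// save the returned HTML to disk. The URL is a placeholder.
const fs = require('node:fs/promises');

fetch('https://example.com/some-page')
  .then((res) => res.text())
  .then((html) => fs.writeFile('page.html', html));
```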
It's not clear what problem you think you would solve by stopping people from saving pages. Saving the page is something done within the browser - you as a site developer don't control the user's browser, so you can't prevent that. And if you stop them from downloading your page in the first place then - by definition - you also stop them from using your website...which kind of defeats the point of having one :-).
If you've got some sort of worry about security, you'll have to clarify exactly what you are concerned about, and maybe you can get advice about a sensible way to deal with it.
If I add a tracking pixel that suddenly 404s or contains an error, does the parent page still load normally? Will some browsers hang with a perpetual 'loading' indicator?
A 404 would never result in a permanent loading indicator (after all, a 404 is a pretty definitive response). If by "pixel" you mean an actual image, then a 404 will not affect the original page (other than that the browser's network tab will show the error, but this will only be visible to developers).
If you mean "tracking pixel" generically for tracking script and include Javascript tracking tags the answer is only marginally different - a 404 will not affect page load, but if your site calls a function that is included in the external file then there will be problems.
If a script you include directly in GTM throws an error, GTM will not allow you to publish the tag in question (however, logic errors and variable name collisions are very much possible).
If an error is introduced in a linked (i.e. loaded from a remote location) 3rd party script, it will affect the parent page - GTM does not check linked scripts, and they behave no differently than if they were referenced directly in the source code. The nature of the error will depend on the script, so as a second-worst case the hosting page will die (the real worst case is if this is not an error but a malicious attack that injects JavaScript which runs in the context of the page - in that case the script might steal cookies, send your clients' credit card numbers to scammers, and deface your website. Many people don't seem aware of this, so I thought I'd just mention it).
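If you want to be at least a little defensive about load failures, you can inject the tag asynchronously and guard any calls into it (a generic sketch; the URL and the thirdPartyTracker global are made-up stand-ins for whatever your vendor actually provides):

```js
// Load the vendor script asynchronously so a 404 or a slow response
// cannot block rendering of the parent page.
const script = document.createElement('script');
script.src = 'https://tracking.example.com/pixel.js'; // placeholder URL
script.async = true;
script.onerror = function () {
  console.warn('Tracking script failed to load');
};
document.head.appendChild(script);

// Whenever the page wants to record an event, check that the vendor's
// global actually exists before calling it.
function track(eventName) {
  if (typeof window.thirdPartyTracker === 'function') {
    window.thirdPartyTracker(eventName);
  }
}
```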
I have an ASP.NET application I've inherited and am trying to debug. I'm using Page Inspector in VS 2012 Express to work on a particular page that has lots of JS in it. Unfortunately, that page is opened as a popup whose URL is dynamically generated by JS. Page Inspector does not seem to handle this well.
If it just popped out into the new window, that would be OK because I could then get the URL and paste it back into the main PI window. However, it seems to lose the session reference when it pops up, because it logs me out of the application, and when I log back in I lose the location I was at.
I've tried changing the function that does the URL generation / window opening to use window.location.href instead, but that doesn't seem to work either; it just stays on the same page.
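To illustrate the kind of change I tried (the function and page names here are just stand-ins for the real ones in the app):

```js
// Roughly what the page does today: build the URL in JS and open a popup.
function openReport(id) {
  var url = 'Report.aspx?id=' + encodeURIComponent(id);
  window.open(url, 'report', 'width=800,height=600');
}

// What I tried instead: navigate the current window to the same URL.
function openReportInline(id) {
  var url = 'Report.aspx?id=' + encodeURIComponent(id);
  window.location.href = url;
}
```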
The references to the URL generation functions are done in such a way that it would be a lot of work to switch it from calling a JS function to just a straight link on the page, especially since I would have to switch it back for production.
Any ideas on how I can configure Page Inspector to handle popup windows better?
Thanks.
I am encountering a strange issue which is only affecting several users out of an over-7000 user base. Having searched the web for several hours to no avail, I'm hoping someone here can help!
I have an ASP.NET 2.0 website, and when certain users try to access the home page (Default.aspx) they receive a white screen with no content loaded. This issue occurs both in the production environment and if I run the solution against a copy of production data, so I am able to replicate the exact same issue when I impersonate the problematic users.
When I debug the application in VS2005 and set a breakpoint in the code-behind of Default.aspx, the breakpoints are hit, so I know the request is being processed. The problem seems to be that once the server has finished serving the request, the response back to the client/browser is empty.
Here's another strange thing I've noticed. If I alter the HTML in Default.aspx by adding a new blank line or some whitespace, the page will load fine for the same set of users. I thought I had resolved the issue with this fix, but unfortunately the white screen issue just manifests itself once again.
Within Default.aspx, there are some AJAX requests using jQuery's .load function, but this can't be the issue because this functionality exists for every user of the site. The only variable is that the amount of content returned by this request can vary depending on the user. But why would the problem resolve itself when I put whitespace or a blank line in the page and then manifest itself again hours later?
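For reference, the calls are along these lines (the selector and URL are placeholders for the real values; I've included a completion callback here only to show where a failed response would surface):

```js
// jQuery .load with a completion callback so failed responses show up
// in the console. '#panel' and 'UserContent.aspx' are placeholders.
$('#panel').load('UserContent.aspx', function (response, status, xhr) {
  if (status === 'error') {
    console.error('Load failed: ' + xhr.status + ' ' + xhr.statusText);
  }
});
```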
Another thing to note is that it's only Default.aspx that is encountering this issue. If I browse to another page by typing its address in the address bar, the page is served OK.
Hope someone can point me in the right direction on how I can debug or even resolve the issue.
It sounds like your AJAX is the cause, but without seeing some code it's difficult to know why.
It could be a timeout, or an error that is preventing the AJAX from completing its function.
You need to use a tool like Charles or Fiddler to debug what is happening while the page loads when logged in as these users. In a nutshell, a tool like Charles will display all the details surrounding the requests made and the responses served to the browser, including any failed responses.
I think it has to do with HTTP headers, caching, or encoding, but I cannot tell more without code.
Is output caching enabled for this page?
Can you give us the raw http headers for both the request and response?
If a white screen appears, will it be fixed by pressing Ctrl+F5?