We are having an issue in a Blazor WebAssembly app which uses PayPal buttons for users to Add Funds to the system's wallet.
The issue is intermittent: sometimes the popup closes immediately after opening and does not even let users log in to PayPal to continue with the checkout process.
This is the JS file: https://github.com/efonsecab/FairPlayTube/blob/main/src/FairPlayTubeSln/FairPlayTube.Client/wwwroot/js/paypal/paypal.js
This is the Blazor component which calls the JS to render the buttons:
https://github.com/efonsecab/FairPlayTube/tree/main/src/FairPlayTubeSln/FairPlayTube.Client/CustomComponents/Paypal
It happens constantly in dev, but much less often in prod.
This is the error shown in the console:
No ack for postMessage wn() in https://www.sandbox.paypal.com in 10000ms
The thing is that sometimes everything works successfully on the first try, but other times it shows this error.
Any ideas how to solve it?
If it closes immediately after opening, the createOrder function's invocation of actions.order.create is failing, likely due to an invalid request. Log everything, look at the console errors/output, and find out what you are doing wrong.
You can first save the object you are passing to actions.order.create to a variable, log it with console.log(JSON.stringify(somevar, null, 2)), and then return actions.order.create(somevar).
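For example, a minimal sketch against the PayPal JS SDK's Buttons API; the amount and container id are placeholders:

paypal.Buttons({
    createOrder: function (data, actions) {
        // build the request first so the exact payload can be inspected
        var orderRequest = {
            purchase_units: [{ amount: { value: "10.00" } }] // placeholder amount
        };
        // log the exact payload being sent to PayPal
        console.log(JSON.stringify(orderRequest, null, 2));
        return actions.order.create(orderRequest);
    },
    onError: function (err) {
        // surfaces the reason the popup closed
        console.error("PayPal button error:", err);
    }
}).render("#paypal-button-container");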
After several hours searching and trying numerous suggestions, I finally found the answer with this post: https://stackoverflow.com/a/66473740
For me, this solution works 100% of the time so far. I disabled JavaScript debugging.
I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers and then sends each time back to the server. The server inserts the finish time in the db, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on the screen.
It uses jQuery AJAX calls so the page doesn't get reloaded at all.
Everything works fine if the data connection is good.
If the data connection is bad, my first version of this page would put up a message saying that the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the browser whether it is online or offline (I realize this may not be the best way yet, but it works for concept testing).
When a new finish time is recorded (or updated) and we are offline, the JS just adds a class of notSent to the tag of the finish time. The finish place, and all of the finish places that would normally come from the server, are greyed out, indicating the data is no longer valid (until it can communicate with the server).
When the browser finds itself back online, a simple jQuery each loop over every notSent element re-sends the AJAX requests, and if they all complete, it processes the returned finish place information and displays it as up to date.
It also disables all external links on the page when the browser is offline, as sketched below. This keeps the user from accidentally losing the data entry page by clicking a link that will give them a page not found error.
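A minimal sketch of that flow, assuming the queued times are marked with the notSent class; the /api/finish endpoint and its response shape, the external class on links, and the data-time attribute are all hypothetical:

// grey things out and disable external links while offline
$(window).on("offline", function () {
    $("a.external").addClass("disabled");
});

// when connectivity returns, re-send everything queued while offline
$(window).on("online", function () {
    $("a.external").removeClass("disabled");
    $(".notSent").each(function () {
        var $cell = $(this);
        $.ajax({
            url: "/api/finish",                    // hypothetical endpoint
            method: "POST",
            data: { time: $cell.data("time") },    // hypothetical attribute
            success: function (result) {
                $cell.removeClass("notSent")       // mark as synced
                     .text(result.finishPlace);    // show the recalculated place
            }
        });
    });
});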
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they will lose the data entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers give the old "you really shouldn't, and if you think you need to, you should rethink your design" warning.
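For reference, the pattern those answers describe amounts to a confirmation prompt rather than a true disable; a minimal sketch, gated on the same online/offline flag:

window.addEventListener("beforeunload", function (e) {
    if (!navigator.onLine) {   // only warn while offline
        e.preventDefault();
        e.returnValue = "";    // some browsers require this to show the prompt
    }
});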
So, rethinking my design, I started learning about:
HTML5 localStorage (decided I don't need it, since my data is already stored in an input box)
AppCache manifest, for controlling the cache of the page so that if it is reloaded in the browser offline, it would get the cached version. After much reading I came to the conclusion that this could work for a static page but not for mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway.
Service Workers seem to be the likely future for controlling offline caching, but not all browsers support them, they are pretty cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring learning Service Workers until there is more support and there are better examples for dynamic content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and server, and to use Background Sync to wake a Service Worker when you regain connectivity. If Service Workers are not present in your browser, it can sync the next time your user opens the browser.
You have a similar example of deferred requests explained in the Service Worker Cookbook.
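The Background Sync registration itself is small; a sketch, where sw.js, the resend-finish-times tag, and resendQueuedTimes are placeholder names (and note Background Sync is Chromium-only today):

// page script: register a one-off sync
if ("serviceWorker" in navigator && "SyncManager" in window) {
    navigator.serviceWorker.register("/sw.js");
    navigator.serviceWorker.ready.then(function (registration) {
        return registration.sync.register("resend-finish-times");
    });
}

// sw.js: the browser fires "sync" when connectivity returns,
// even if the page has since been closed
self.addEventListener("sync", function (event) {
    if (event.tag === "resend-finish-times") {
        event.waitUntil(resendQueuedTimes());
    }
});

async function resendQueuedTimes() {
    // placeholder: read queued finish times (e.g., from PouchDB/IndexedDB)
    // and POST them back to the server
}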
I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a Custom Event after a user has authenticated, in order to track login behavior. My custom event DOES log to the Application Insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar have any (application) insights? I couldn't help myself ;)
There are some things to check:
Are you logging to one resource (iKey) and searching on another? (A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect, and searching in the place you expect.)
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. It could be that there's something wrong with the data you're sending, or maybe you're getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200, and will generally tell you the reason any rejected items weren't accepted.
If the data is getting sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service.
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
What do you mean, "it just disappears"? If you see it in the output window, then the SDK saw it, and it will get sent, precluding any of the above three items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently has a cap of the most recent 250 items that have occurred during the debug session.
I have a project that has worked fine for the past 3 weeks. All of a sudden, out of nowhere, the fetch requests have stopped working.
I haven't run any updates or npm installs, so I'm lost as to what could be the cause of this.
My fetch request is pretty standard. It's simplified here with a console.log instead of setState in the response handler.
fetch("https://54fd510.ngrok.io/api/v1")
.then((response)=>response.json())
.then((responseData)=>{
console.log(responseData);
}).done();
All I get now when doing any sort of request is "Network Request Failed", with onload, _sendload, and setReadyState as the stack trace items.
In fact, any and all of my fetch requests in the app aren't working.
So I tried boiling it down and just created a new blank app with only a button and a fetch request linked to the button, to make sure it has nothing to do with my particular app. And this request fails as well.
So it appears it's React Native itself that is failing; it isn't anything to do with my code.
No matter what I try, the network request is always failing.
I tried downgrading and got the same result.
I'm really lost as to where to look now.
Any ideas?
I have an ASP.NET application using custom errors. There is one error that occurs that I can't seem to get any information about. When the error happens, it does trigger the custom error page, but for some reason it doesn't trigger Page_Error or Application_Error. If I turn customErrors off completely, I get absolutely no feedback at all. The only feedback I have ever gotten up until now is the redirect to the custom error page.
Scenario: ASP.NET WebForms application. One of the pages has an UpdatePanel with a Submit button. The page works fine. But if I walk away for a while (seems to be 30-60 minutes) and then come back and click the Submit button, nothing happens: no error, no response from the page, nothing. I have not been able to get this to happen while running in the Debugger; it only happens when it is hosted in IIS (7.5). [I've seen other SO posts about this issue but none of the suggestions worked for me.]
When the error happens with customErrors=On or RemoteOnly, the redirect to the custom error page works; with RemoteOnly, from the server, I get no feedback at all, similar to a remote connection with customErrors=Off. I was really hoping for the YSOD error page.
I tried trapping the error in Page_Error by logging the errors to a database, but that didn't work either. I know Page_Error and the DB portion are working, because if I change the submit code-behind to do a divide by zero, the error is logged in the DB. Also, the divide-by-zero error will be displayed to the client with customErrors=Off, and with it on, the custom error page is displayed. But after removing the divide by zero, waiting 30 minutes or so, and trying again, the Page_Error code is not hit, even though it does redirect to the custom error page. I then tried moving the code from Page_Error to Application_Error, but the exact same thing happens: the forced divide-by-zero error works, but this seemingly timeout-related error does not.
So, are there certain errors that can still trigger the redirect to the custom error page, but will not trigger the Application_Error event?
Thank you John and Sergey! This really was a can of worms. Sergey, you were exactly right: the session timed out. John, your idea made me think to look at the Windows Application Event Log rather than trying to track it down in the code. It turned out that my ViewState was expiring when the session timed out, which was set to 20 minutes by default. The exact event log error was:
"Viewstate verification failed. Reason: The viewstate supplied failed integrity check."
Now I was able to manually recycle the app pool and force the error to happen at my leisure, which made solving it easier. I wasn't using Session, though, at least not on the page in question, so why did this matter? This link was helpful in troubleshooting what was happening:
http://support.microsoft.com/default.aspx?scid=kb;en-us;829743
Buried deep into the article was the following sentence describing a scenario causing the worker process to recycle:
"The application pool is running under an identity other than the Local System account, the Network Service account, or an administrative-level account."
My AppPool is in fact running under a local user account which I created specifically for this purpose. When I changed the AppPool to run as ApplicationPoolIdentity and recycled the AppPool, the ViewState error went away. Then I set the AppPool back to the local user account, gave that account local admin privileges, and this also fixed it. Not wanting this user to be a local admin, I ended up going with a different solution: generating a machine key for this app so the ViewState MAC is always the same, rather than using the default of auto-generating a new one every time the pool recycles or the session times out (see the sketch below). Note this is also what you typically need to do if you have multiple web servers behind a load balancer.
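The fixed key goes in web.config; a sketch, with placeholder values to be replaced by keys from IIS Manager's Machine Key generator or a similar tool:

<system.web>
  <!-- placeholders: generate real keys; never copy published values -->
  <machineKey validationKey="[generated validation key]"
              decryptionKey="[generated decryption key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>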
I have an aspx page with a simple form to send emails to pre-defined lists of users. On the longer lists the page usually times out before the emails finish sending, but this has never been an issue.
Today something weird happened, and each user got four emails. In the log I could see three new threads crank up, one at a time, and start sending again from the beginning of the list.
Any ideas? I absolutely know I didn't intentionally refresh the web page myself, and certainly not three times. But could the browser (IE8) have done it? Would it post again, trying to re-establish a connection, when it timed out? Or when I switched back to the browser window from another app? I have never seen behavior like this before.
The first question would be whether there is any reason to run a long-running task synchronously, i.e. to lock up a thread that should be serving web requests for something that could be done in the background, while the browser sits and waits for a response that it's probably not going to get. I'd look into running this asynchronously unless there's a very deliberate reason not to.
Secondly, have you looked into creating some kind of locking mechanism so that the process can't be started more than once? I have processes where I add a token to the application cache (and remove it when I'm done), so that if the token exists the process won't run again (the call to the async task isn't made), and that does the job. That way it doesn't matter how many clients call your code; you prevent things from happening more often than they should.