Is there a way to check the status of the Nest servers?
They appear to be down right now. Currently I'm checking by firing a GET request to:
https://developer-api.nest.com/?auth=...
This works fine; I can just set a timeout and check the status code.
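For what it's worth, here is a minimal sketch of that kind of health check, written in Node.js for brevity (the 5-second timeout and the token placeholder are arbitrary choices, not anything from the Nest docs):

    // Ping the Nest API and treat a timeout or a non-2xx status as "down".
    const https = require('https');

    function checkNest(token, timeoutMs, callback) {
      let done = false;
      const finish = (up) => { if (!done) { done = true; callback(up); } };
      const req = https.get(
        'https://developer-api.nest.com/?auth=' + token,
        (res) => {
          finish(res.statusCode >= 200 && res.statusCode < 300);
          res.resume(); // discard the body
        }
      );
      req.setTimeout(timeoutMs, () => { finish(false); req.destroy(); });
      req.on('error', () => finish(false));
    }

    checkNest('YOUR_TOKEN_HERE', 5000, (up) => console.log(up ? 'up' : 'down'));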
I'm using the Firebase API (OS X) and I'm wondering whether there is a better way I can check. I don't see anything in their API, and observeEventType:withBlock:withCancelBlock: never gets called.
Also, will the Firebase observeEventType: block automatically start being called once the servers are back?
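I can't speak for the OS X SDK, but for comparison, the legacy Firebase JavaScript SDK exposes the same pattern: the third argument to on() is a cancel callback that fires if the listener is rejected (for example by security rules), and the special .info/connected location reports connection state. A sketch, assuming the Nest endpoint behaves like a standard Firebase database:

    // Legacy Firebase JS SDK: observe a value, with a cancel callback.
    var ref = new Firebase('https://developer-api.nest.com/');
    ref.on('value', function (snapshot) {
      console.log('data:', snapshot.val());
    }, function (error) {
      // Fires if the listener is cancelled, e.g. permission denied.
      console.log('listener cancelled:', error);
    });

    // The special .info/connected location re-fires whenever the
    // client's connection state changes, including after a reconnect.
    var connRef = new Firebase('https://developer-api.nest.com/.info/connected');
    connRef.on('value', function (snap) {
      console.log(snap.val() === true ? 'connected' : 'disconnected');
    });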
After two days the block appears to have been lifted. I tried contacting Nest two days ago and never got a reply. Perhaps they lifted the block and didn't reply, or it happened automatically.
I believe I was blocked because I was using my real account, with a real device. And obviously because I was in development I was logging in/out and changing values a lot.
I didn't realise until after the block that you can create virtual devices (on a new account). More information here: https://developer.nest.com/documentation/cloud/chrome-extension
Moral of the story: use virtual devices!
I'm hosting a website that serves global regions, and recently a weird issue came up.
I've already checked other posts on the Internet, including the much-discussed Stack Overflow question Chrome net::ERR_HTTP2_PROTOCOL_ERROR 200 after a reconnect, but none of the answers helped.
The website is built on legacy ASP.NET Web Forms, as a "website" project (not a web application project).
There's an important function that performs several processes once a user clicks a button on the website.
Let's say there are 100 lines of code in that function, and I've added some flags to log which steps have been hit and processed.
The weird situation is:
Only users in China are facing the issue (the website is not hosted in China).
Some users are on Firefox, and it returned the error below; in English it is "Secure Connection Failed".
I checked several posts, including the Firefox documentation, and there should be an error code on screen such as ssl_error_no_cypher_overlap, but there is nothing.
[Firefox error screenshot]
Some users are on other, Chromium-based browsers, which return:
[Chrome error screenshot]
Additionally, I checked the process logs in these user reports; most of them do not finish all the code. In other words, of the 100 lines of code, some runs just stopped at line 50.
The website has TLS 1.2 enabled, and the HTTP/2 protocol (h2) is in use when I check via the Chrome Network tab.
I'm wondering whether it's possible that the client browser shutting down the connection for some reason would end with the result I see (execution stopped in the middle of the code flow). In my opinion, once a request is posted to the server, the process should finish the entire flow no matter what the client does.
Any ideas or thoughts will be appreciated!
I was just dealing with that exact situation.
From what I read in various posts on the HTTP2_PROTOCOL_ERROR, I think what happens is the response is started but code problem(s) prevent the server from completing the response. The incomplete response gives the protocol error in Chrome, and, because it's over TLS, Firefox sees it as a security error. (I'd share links, but I've already closed all those windows - sorry.)
Somehow my code was preventing the server from completing the response without causing an exception.
I was able to track down the offending code by commenting out the body of every code-behind procedure on the page and then bringing them back one at a time.
Good luck to you!
I can't give you a concrete example, but in my case, there was no problem on the application side.
Has your in-house infrastructure engineer recently added or changed any settings?
For example, have you added WAF settings? You may want to check.
I'm using Firebase storage buckets to host some files. The bucket itself is in the US region, and it seems to be accessible from anywhere in the world. Except that earlier in the week a user from the Philippines showed me that no image would load (on the web as well as in the app), and it was this that led me to think it was geo-related. We flipped on a VPN to appear in the US, and the images started to load... so I'm confused: are there geo-restrictions on storage buckets, and is there a way we can know of them? Could this be some other issue, if anyone else has encountered something like it?
I contacted the Firebase support team and they sent me this:
"We have received similar reports with some ISPs (PLDT) and one of their subsidiaries (Smart communications) in the Philippines. However since the issue is caused by something outside the Google network, there is nothing much we can do. Would you mind trying to try using another network to test other ISPs?
So far, I have created an internal escalation for measurement purposes and to see if there is something that we can do to help, but the general recommendation is to report this directly to the ISP, a couple of other developers have reached out to them and they are waiting for a response, but I think that pushing harder could help here."
I still haven't fixed the issue on my part though.
I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers, then sends each time back to the server. The server inserts the finish time in the database, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on screen.
It uses jQuery Ajax calls, so the page never gets reloaded.
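For context, the round trip looks roughly like this (save_finish.php, the field names, and the element IDs are hypothetical placeholders, not my actual code):

    // Send a finish time to the server; display the returned finish place.
    function sendFinishTime(racerId, finishTime) {
      $.ajax({
        url: 'save_finish.php',                  // hypothetical endpoint
        type: 'POST',
        data: { racer: racerId, time: finishTime },
        dataType: 'json'
      }).done(function (resp) {
        $('#place-' + racerId).text(resp.place); // no page reload needed
      }).fail(function () {
        $('#status').text('Connection problem - finish time not sent');
      });
    }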
Everything works fine if the data connection is good.
If the data connection was bad, my first version of this page would put up a message saying the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the page whether the browser is online or offline (I realize this may not be the best way yet, but it works for concept testing).
When a new finish time is recorded (or updated) while we are offline, the JS just adds a notSent class to the finish time's tag. The finish place, and all the finish places that would normally come from the server, are greyed out, indicating the data is no longer valid (until the page can communicate with the server again).
When the browser finds itself back online, a simple jQuery each loop over the notSent elements re-sends the AJAX requests; if they all complete, it processes the returned finish place information and displays it as up to date. (A sketch of this handling follows below.)
It also disables all external links on the page when the browser is offline. This keeps the user from accidentally losing the data entry page by clicking a link that would give them a page-not-found error.
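A rough sketch of that online/offline handling, using the window online/offline events (the selectors and the endpoint are the same hypothetical placeholders as above):

    // Going offline: grey out places and disable external links.
    window.addEventListener('offline', function () {
      $('.finishPlace').addClass('stale');
      $('a.external').on('click.offline', function (e) {
        e.preventDefault(); // keep the user on the data entry page
      });
    });

    // Back online: re-enable links and re-send anything tagged notSent.
    window.addEventListener('online', function () {
      $('a.external').off('click.offline');
      $('.notSent').each(function () {
        var $el = $(this);
        $.ajax({
          url: 'save_finish.php',               // hypothetical endpoint
          type: 'POST',
          data: { racer: $el.data('racer'), time: $el.data('time') },
          dataType: 'json'
        }).done(function (resp) {
          $el.removeClass('notSent');
          $('#place-' + $el.data('racer'))
            .text(resp.place)
            .removeClass('stale');              // data is valid again
        });
      });
    });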
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they lose the data entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers come with the old "you really shouldn't, and if you think you need to, you should rethink your design" warning.
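You can't fully disable reload or close, but browsers do let you ask for confirmation before the page is torn down; a minimal sketch, only active while unsent data exists:

    // Warn before reload/close while unsent finish times exist.
    // Browsers show their own generic message; custom text is ignored.
    window.addEventListener('beforeunload', function (e) {
      if ($('.notSent').length > 0) {
        e.preventDefault();
        e.returnValue = ''; // some browsers require this to show the prompt
      }
    });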
So, rethinking my design, I started learning about:
HTML5 local storage (decided I don't need it, since my data is already stored in an input box).
The AppCache manifest, for controlling the page cache so that a reload while offline would get the cached version. After much reading I concluded that this could work for a static page, but not for mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway.
Service workers, which seem to be the future for controlling offline caching, but not all browsers support them, they are pretty cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring learning service workers until there is more support and better examples for dynamic content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and server, and to use Background Sync to wake a service worker when you regain connectivity. If service workers are not present in your browser, it can sync the next time your user opens the browser.
There is a similar example of deferred requests explained in the Service Worker Cookbook.
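For reference, the Background Sync part itself is small; a minimal sketch (replayQueuedRequests is a hypothetical placeholder for whatever drains your queue, e.g. a PouchDB sync):

    // In the page: request a one-off sync when there is queued data.
    navigator.serviceWorker.ready.then(function (reg) {
      return reg.sync.register('send-queued-data');
    });

    // In the service worker (sw.js): runs when connectivity returns,
    // even if the page has since been closed.
    self.addEventListener('sync', function (event) {
      if (event.tag === 'send-queued-data') {
        event.waitUntil(replayQueuedRequests()); // hypothetical drain function
      }
    });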
I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a custom event after a user has authenticated, in order to track login behavior. My custom event DOES log to the Application Insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar have any (application) insights? I couldn't help myself ;)
There are some things to check:
Are you logging to one resource (iKey) and searching on another? (A lot of people send data to one resource in dev/debug and a different resource in release/prod environments.) Make sure you're sending to the place you expect and searching the place you expect.
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. It could be that there's something wrong with the data you're sending, or that you're being capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200, and will generally tell you why any rejected items weren't accepted.
If the data is getting sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service.
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up the telemetry. It just disappears.
What do you mean, "it just disappears"? If you see it in the output window, then the SDK saw it, and it will get sent, precluding any of the above three items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently has a cap of the most recent 250 items that have occurred during the debug session.
I am accessing a Drupal Views feed through xmlrpc. The script has worked in the past and my goal today was solely to access another feed. In theory, there was nothing to do except to change the name of the feed. The endpoint had not changed, my domain had not changed, I can log in to the remote site so my user credentials there are valid.
I am scratching my head as to what may have changed. Is there an obvious question that I have missed? What could have changed on the Drupal end that I should be taking into account?
I can also get a session id for an anonymous user okay.
The failure comes during the complicated authentication (which has worked in the past).
Any suggestions?
Thanks.
Ah... in case anyone else has the same problem: as I worked through my script, printing out its effect at each line, I came across a comment I had made when I wrote it.
Make sure the client and the remote are on the same time, preferably the time provided by www.time.is.
My PC was running a minute slow. The default resynchronise task on Windows 7 runs at 1am on a Sunday; change that to a more sensible time.
And for an immediate fix, change the PC time to within a few seconds of www.time.is.
That was the problem. Authenticated login uses a timestamp; if the remote server regards your time as too inaccurate, it will reject your login. Make sure the client is running with an accurate clock.
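If you want a quick way to spot this kind of skew programmatically, here is a minimal Node.js sketch that compares the local clock against any server's Date header (the header only has one-second resolution, but that's plenty to catch a minute of drift):

    // Estimate local clock skew from an HTTPS response's Date header.
    const https = require('https');

    https.get('https://www.google.com/', (res) => {
      const serverTime = new Date(res.headers.date).getTime();
      const skewSeconds = (Date.now() - serverTime) / 1000;
      console.log('Estimated clock skew: ' + skewSeconds.toFixed(1) + ' s');
      res.resume(); // discard the body
    });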