Firebase storage - inaccessible from the Philippines?

I'm using Firebase Storage buckets to host some files. The bucket itself is in the US region, and it seems to be accessible from anywhere in the world - except that earlier in the week a user from the Philippines showed me that no image would load, on the web as well as in the app, which is what led me to think it was geo-related. We switched on a VPN with a US endpoint, and the images started to load... so I'm confused. Are there geo-restrictions on storage buckets, and if so, is there a way to find out about them? Or could this be some other issue - has anyone else encountered something like it?

I contacted the Firebase support team and they sent me this:
"We have received similar reports with some ISPs (PLDT) and one of their subsidiaries (Smart communications) in the Philippines. However since the issue is caused by something outside the Google network, there is nothing much we can do. Would you mind trying to try using another network to test other ISPs?
So far, I have created an internal escalation for measurement purposes and to see if there is something that we can do to help, but the general recommendation is to report this directly to the ISP, a couple of other developers have reached out to them and they are waiting for a response, but I think that pushing harder could help here."
I still haven't been able to fix the issue on my end, though.
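For anyone who hits the same thing: one way to check whether the problem is in the network path rather than in your app is to probe the storage download URL directly from the affected network and again over a VPN, then compare. A minimal Python sketch (the bucket name and object path below are placeholders, not my real ones; the requests package is assumed installed):

# probe_storage.py - is a Firebase Storage download URL reachable from here?
# Run it on the affected network and again over a VPN, then compare.
import requests  # third-party; pip install requests

URL = ("https://firebasestorage.googleapis.com/v0/b/"
       "YOUR_BUCKET.appspot.com/o/images%2Fexample.png?alt=media")

try:
    resp = requests.get(URL, timeout=10)
    print(f"HTTP {resp.status_code}, {len(resp.content)} bytes")
except requests.exceptions.RequestException as exc:
    # A timeout or reset here, but not over the VPN, points at the
    # network path (e.g. the ISP) rather than at Firebase itself.
    print(f"request failed: {exc}")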

What is dca0.com and why is our site making GET requests to subdomains of dca0.com?

So we are using some RUM metrics on our site now, and one error that has started cropping up is as follows:
XHR error GET https://l9.dca0.com/srv-id/?uid=a1729baf-1b2b-1c5c-b50a-bfb5d1bf04e8
Failed to load
Additionally, our RUM metrics capture a series of these errors.
I've touched base with everyone on my team and we do not know what dca0.com is or why multiple different subdomains are being called. I did a fair amount of googling and was not able to find anything on that URL beyond some WHOIS lookups that yielded no useful info.
Does anyone know what this URL is, or what it's used for? As best I can tell, this error only comes from devices running Apple operating systems, either iOS or Mac OS. Is this perhaps some kind of Mac functionality that I'm unfamiliar with?
Any help is appreciated, even just a thread to pull on as I'm at a wall on this topic!
After some intensive debugging I found this is related to one of our marketing services: AdRoll. I would check whether you have the same or similar retargeting services.
I was able to confirm that this is an XHR request made after the site loads, which is why it is tricky to find via normal methods. RUM does a nice job of capturing this.
From the findings, this looks to be an event tracker - likely a collector for augmenting an ad farm.
Doing a WHOIS lookup for this domain returns a private registration. Tracing the IP back returns various AWS points in Oregon and California (it can be routed through many more). This is typical of this type of tracker.
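If you want to reproduce that trace yourself, a rough sketch using only the Python standard library (the hostname is taken from the error above):

# trace_host.py - resolve a suspicious hostname, then reverse-look-up the
# resulting IPs to see who hosts them (AWS, etc.).
import socket

HOST = "l9.dca0.com"
try:
    infos = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)
    for ip in sorted({info[4][0] for info in infos}):
        try:
            rdns = socket.gethostbyaddr(ip)[0]  # e.g. ec2-*.amazonaws.com
        except socket.herror:
            rdns = "(no reverse DNS)"
        print(ip, "->", rdns)
except socket.gaierror as exc:
    print("lookup failed:", exc)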

Checking server status?

Is there a way to check the status of the Nest servers?
They appear to be down right now. Currently I'm checking by firing a GET request to:
https://developer-api.nest.com/?auth=...
This works fine; I can just set a timeout and check the status codes.
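In case it's useful to anyone, that check is only a few lines; a sketch in Python (the requests package is assumed installed, and AUTH_TOKEN is a placeholder for a real token):

# nest_status.py - crude "are the servers up?" check: a GET with a timeout
# and a look at the status code.
import requests

def nest_is_up(token: str, timeout: float = 5.0) -> bool:
    try:
        resp = requests.get(
            "https://developer-api.nest.com/",
            params={"auth": token},
            timeout=timeout,
        )
        return resp.status_code == 200
    except requests.exceptions.RequestException:
        return False  # timeout or connection error: treat as down

print("up" if nest_is_up("AUTH_TOKEN") else "down")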
I'm using the Firebase API (OS X) and I'm wondering whether there is a better way to check? I don't see anything in their API, and observeEventType:withBlock:withCancelBlock: never gets called.
Also, will the Firebase observeEventType: block automatically start being called once the servers are back?
After two days the block appears to have been lifted. I tried contacting Nest two days ago and never got a reply. Perhaps they lifted the block and didn't reply, or it happened automatically.
I believe I was blocked because I was using my real account, with a real device - and obviously, because I was in development, I was logging in and out and changing values a lot.
I didn't realise until after the block that you can create virtual devices (on a new account). More information here: https://developer.nest.com/documentation/cloud/chrome-extension
Moral of the story: use virtual devices!

Memory quota exceeded using WordPress on a Shared Azure Website

I'm trying to wrap my head around a memory quota violation. In the wild, if I have a VM and I try to run something beyond its memory limits (SSMS, for instance, on my VPS), SSMS simply crashes and says "not enough memory, dude."
Apparently on Microsoft Azure, if a request takes you beyond your allocated memory... IT TURNS YOUR SITE OFF FOR AN HOUR.
I can't explain how awful that is, and judging from the other similar questions I've seen about Azure memory quotas, most of you can't either. BUT...
Is there anyone out here with WordPress experience on Azure who knows how to keep memory usage down? Alternatively, is there anyone here with WordPress experience on any platform who can explain what kinds of activities might draw more than 512 MB at a time?
Any help would be good help.
Thanks.
Closing this question because, as the first responder said, there isn't a satisfactory answer. I ended up going with a different hosting company that offers dedicated WordPress hosting, and have had no issues whatsoever.
I love MS. I use their technology stack whenever feasible, but sometimes you gotta call a spade a spade: I am not sold on Azure yet, though not for lack of trying.

ASP.NET page to reflect server status

I'm looking to create a webpage that will reflect the status of one of my company's servers automatically. Frequently there will be a minor error that only lasts 2-3 minutes, and it would be great to have this reflected on a self-generated page, which might prevent 50-60 unhappy clients from calling in simultaneously and asking what's wrong.
I'm not quite sure where to begin - would anyone have suggestions for good resources to study? Programming examples? I'm not referring to the basics of writing an ASP.NET page, of course, but rather to process interaction in Windows.
Thanks.
To pull this off, you'd need a separate page that essentially runs server diagnostics; otherwise the page wouldn't know whether the server was up or down. Also, the page would need to be isolated from the sorts of problems that kill other people's requests, such as cache problems, memory starvation, high CPU usage, and insufficient bandwidth. So ideally the diagnostics would run in a separate app pool, a separate virtual directory, or even on a separate machine.
Many of the interesting diagnostics would require a WMI call, but some you can get from the My.Computer namespace.
Also, are you going to do this on every server, or do you want one web server to display the status of several different servers?
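The answer above is .NET-centric, but just to illustrate the kind of numbers such a diagnostics page would gather, here is a rough platform-neutral sketch using Python's psutil package (an assumption on my part; in ASP.NET the equivalents would come from WMI or performance counters):

# diagnostics.py - the sort of snapshot a status page might collect.
import psutil  # third-party; pip install psutil

def snapshot() -> dict:
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),   # 1-second sample
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,  # root/system drive
    }

print(snapshot())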
It also depends on the type of errors your servers are encountering.
If they are going down completely, or are losing their internet connection, then pinging them at an interval will tell you whether they are up or not.
If you have a specific process running on a server that becomes unavailable, that can be a little more tricky.
Your best bet is to find a way to do a simple request against the services/applications that are important and see if you get a response; if you do, the service is likely up, and if not, it likely isn't. There's a sketch of this approach below.
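A minimal Python sketch of that poll-and-check approach (the URLs are placeholders for your own services, and the requests package is assumed installed):

# poll_servers.py - hit a cheap endpoint on each server at an interval
# and record up/down.
import time
import requests

SERVERS = {
    "web": "https://server1.example.com/health",
    "api": "https://server2.example.com/health",
}

def check_all() -> dict:
    status = {}
    for name, url in SERVERS.items():
        try:
            # .ok is True for any status code below 400
            status[name] = requests.get(url, timeout=5).ok
        except requests.exceptions.RequestException:
            status[name] = False
    return status

while True:
    print(check_all())  # a real status page would persist this history
    time.sleep(60)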
Anything you can do to reduce the number of support calls you get is a good idea, but I'd also spend some time trying to figure out why your servers are going down so often.
Also, telling your users that the server is down but not giving a reason why may not have the effect you are looking for. Users will still be confused and frustrated when they can't get their work done.
I know you were looking to build a webpage to display the server diagnostics, but there are plenty of server monitoring tools that produce webpages for an easy dashboard view of the history.
A quick google returned the following link:
http://www.webdesignbooth.com/10-really-useful-server-monitoring-tools/

Multiple requests to server question

I have a DB with user accounts information.
I've scheduled a cron job which updates the DB with new user data it fetches from their accounts.
I was thinking that this may cause a problem since all requests are coming from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? Should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user accounts information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Are you fetching data for a user from a remote server using screen scraping or something?
If this is the case, you may want to set up a few different cron jobs and do the fetching in batches. That way you reduce the load on the remote server and lower the chance that wherever you are getting this data from will block your access.
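A sketch of that batching idea in Python (the account list, the fetch function, and the pacing numbers are all placeholders):

# batched_fetch.py - fetch account data in small batches with pauses,
# instead of hammering the remote server in one burst.
import time

def fetch_account(account_id):
    ...  # placeholder for the real remote call

def run_batches(account_ids, batch_size=20, pause_seconds=60):
    for i in range(0, len(account_ids), batch_size):
        for account_id in account_ids[i:i + batch_size]:
            fetch_account(account_id)
        time.sleep(pause_seconds)  # spread the load out over the day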
Edit
Okay, so if you have not got permission to do the scraping, obviously you are going to want to do it responsibly (no matter the site). Try to gather as much data as you can from as few requests as possible, and spread them out over the course of the whole day, or even during times that are likely to be low-load. I wouldn't try to use a proxy; it wouldn't really help the remote server, and it would be a pain in the ass for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data so all the source traffic isn't from the same IP. Just an idea; otherwise, just try to be a bit discreet.
Here are some tips from Jeff regarding the scraping of Stack Overflow, but I'd imagine that the rules are similar for any site (a small sketch applying the first two tips follows the list).
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits, which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less.
Identify yourself. Add something useful to the user-agent (ideally, a link to a URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
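The promised sketch, in Python, applying the first two tips (the URL and contact address are placeholders; the requests package negotiates gzip by default, it is just spelled out here to match the tip):

# polite_scraper.py - gzip accepted, and an identifiable User-Agent.
import requests

session = requests.Session()
session.headers.update({
    "Accept-Encoding": "gzip, deflate",
    "User-Agent": "my-sync-bot/1.0 (+https://example.com/bot-info)",
})

resp = session.get("https://example.com/feed.json", timeout=10)
print(resp.headers.get("Content-Encoding"), len(resp.content))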
