I am using Firebase to host a small site (~6 MB), yet I noticed the usage meter is showing a much higher number. I realized that old deployments count towards storage, so I deleted them. However, the usage meter still shows the same amount of storage.
Is there a time period before the old deployments are truly deleted? I can't imagine you are forced to keep consuming storage space.
Thanks.
It turns out it took roughly 24 hours for the changes to take effect.
Others' mileage may vary, as this is anecdotal.
Update: After 9 months of back-and-forth emails (over 40 of them), Google has acknowledged that they have found some bugs that may be responsible for the high bandwidth usage, but bandwidth usage is still too high. Resolving this issue does not appear to be a priority for Google/Firebase (it took them 1.5 months to respond to the last email). In light of similar complaints such as https://news.ycombinator.com/item?id=14356409, and many others across a wide range of teams/developers, hopefully the situation will improve some day.
I'm just starting a Firebase project and have not accessed the database from any client. I have only created a single tiny test key-value pair in the database (using the console), which uses 23 B of data storage. Surprisingly, the console shows that I have used 215.9 KB, including during periods when I was not touching Firebase at all. This number continues to grow every hour even though I am not using Firebase or even refreshing the data tab in the console!
Here is a screenshot of the console bandwidth usage chart: [Firebase console bandwidth usage screenshot]
Others appear to be having the same problem, but there has been no response from Firebase/Google. What's going on? Any help would be greatly appreciated.
The usage chart takes time to update. You may be seeing bandwidth from a few minutes to a few hours ago.
Also, this reminds me of the old Google Analytics referrer issue. The default rules for Firebase look something like this:

    {
      "rules": {
        ".read": true,
        ".write": "auth != null"
      }
    }

This means that anyone anywhere can read from your database, and that anyone authenticated (even anonymously) can write to it. Since it is a NoSQL database with JSON support, the traffic is quite possibly just crawlers, the equivalent of Google Analytics referral spam.
We've been developing with Firebase for a couple of months and recently we've seen some long delays in downloading data (e.g. 20 seconds). During those times the "forge" web UI is also tremendously slow to respond.
After a while, it seems to clear up and go back to its lightning-fast self.
Could this be because I'm using a significant portion of the free quota (80 MB / 100MB of storage and 1.6 GB / GB in bandwidth)? Are there undocumented rate limits we're hitting?
The last time this happened we had 6 concurrent users, and our all-time peak so far has been 13.
The short answer is no, dev accounts aren't rate-limited. They're capped on connections, data storage, and monthly data transfer, but there's no rate-limiting.
If you're having performance issues, your best bet would be to email support@firebase.com with details of what you're seeing and the name of your Firebase so that we can investigate. Typically, delays are the result of large data transfers going into or out of your Firebase (e.g. downloading your entire Firebase, which can be accidentally triggered by opening Forge), and there are usually mitigation strategies that we can help you with.
Is anyone aware of an application I might be able to use to record or data-log network delays? Or failing that, would it be feasible to write such a program?
I work for a big corporation that has recently deployed a remote file management platform, which is causing severe productivity issues for staff in our branch. We work directly off a server, and every time a file is saved now, there is a significant delay (generally between 5 and 15 seconds, but sometimes timing out altogether). Everything is extremely unresponsive and slow, and it discourages people from saving files often, so when crashes occur, quite a bit of work is lost.
And these delays don't only occur on save operations; they also occur when navigating the network file structure. A 2-3 second pause/outage between each folder hop is incredibly frustrating and adds up to a lot of lost time.
When these delays occur, they freeze out the rest of the system: clicking anywhere on screen, or on another application, does nothing until the delay has run its course.
What I am trying to do is to have some sort of data-logger running which records the cumulative duration of these outages. The idea is to use it for a bit, and then take the issue higher with evidence of the percentage of lost time due to this problem.
I suspect this percentage would surprise managers. They appear to be burying their heads in the sand and pretending it only takes away a couple of minutes a day. By my rough estimates, we should be talking hours lost per day (per employee), not minutes. :/
Any advice would be greatly appreciated.
You could check whether the process is responding using C#. Just modify the answer from the other question to check for the name of the application (or the process ID, if that's possible with the C# API) and sum up the waiting times.
This might be a bit imprecise, because Windows gives a process a grace period before declaring it "not responding", but depending on the waiting times it might be enough to make your point.
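For illustration, here's a minimal sketch of that idea, assuming the file-management client shows up as a local process named "FileClient" (a hypothetical name; substitute whatever Task Manager shows). It polls Process.Responding and accumulates the time spent not responding:

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Threading;

    class ResponsivenessLogger
    {
        static void Main()
        {
            const string processName = "FileClient"; // hypothetical; use the real process name
            var interval = TimeSpan.FromMilliseconds(500);
            var notResponding = TimeSpan.Zero;

            while (true)
            {
                // Grab the first matching process, if it is running at all.
                var proc = Process.GetProcessesByName(processName).FirstOrDefault();
                if (proc != null)
                {
                    proc.Refresh(); // Responding is cached, so refresh before checking
                    if (!proc.Responding)
                    {
                        notResponding += interval;
                        Console.WriteLine($"{DateTime.Now:HH:mm:ss}  UI hung, cumulative: {notResponding}");
                    }
                }
                Thread.Sleep(interval);
            }
        }
    }

Run it in the background for a day and the cumulative figure gives you the evidence you're after.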
I'm not sure how your remote file management platform works. In the easiest scenario, where you can access files directly on the platform, you could just create a simple script that opens files, navigates the file system, and lists the directories/files they contain.
In case it's something more intricate, I would suggest using a tool like Wireshark to capture the network traffic, filter out the relevant packets, and do some analysis on their delay. I'm not sure if this can be done directly in Wireshark; otherwise I'd suggest exporting it as a CSV spreadsheet and doing your analysis on that.
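For the first scenario, a minimal sketch (assuming the platform is reachable at a UNC path, here the placeholder \\server\share) that times directory listings and a small write round trip could look like this:

    using System;
    using System.Diagnostics;
    using System.IO;

    class ShareLatencyProbe
    {
        static void Main()
        {
            const string root = @"\\server\share"; // placeholder; point at the real share
            var sw = new Stopwatch();

            // Time how long each directory listing takes (the "folder-hop" delay).
            foreach (var dir in Directory.EnumerateDirectories(root))
            {
                sw.Restart();
                var entries = Directory.GetFileSystemEntries(dir);
                sw.Stop();
                Console.WriteLine($"{dir}: {entries.Length} entries listed in {sw.ElapsedMilliseconds} ms");
            }

            // Time a small write + delete to approximate a "save" round trip.
            var testFile = Path.Combine(root, "latency-probe.tmp");
            sw.Restart();
            File.WriteAllText(testFile, DateTime.Now.ToString("o"));
            File.Delete(testFile);
            sw.Stop();
            Console.WriteLine($"Write/delete round trip: {sw.ElapsedMilliseconds} ms");
        }
    }

Logging those numbers to a file on a schedule would give you the percentage-of-time-lost data you want to show management.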
This has been a hard problem to diagnose and I was hoping someone could help me out.
We have an AIR app (www.trainerroad.com) that uses NativeProcess to read data from a command-line program fed by a USB stick. It's a cycling app; the data is heart rate, cadence, speed, and power.
While someone works out, we record that data to a SQLite database every second. We keep an open SQLite connection to the database, and a timer that ticks every second triggers our inserts.
We have a few hundred users. A few of them have seen the app run fine for 45 minutes, then hang for 30 seconds to a couple of minutes. Then it releases, and the clock in the workout ticks down really fast, which means all the queued timer events are firing at once.
This makes me think that something is hanging the app, but I can't figure out what. The app executes the same code every second for 45 or so minutes, then suddenly freezes. (BTW, it isn't EVERY 45 minutes, but it does happen after the workout has been running for a while.)
I'm thinking it has to do with SQLite. Maybe the database is being compacted while it's being executed against? I'm stumped and I can't reproduce this locally.
Does anyone have any ideas of how to debug this or what area of my app might be causing this? I know it's a hard one.
Update
I ran the memory profiler for 20 minutes while it was running and I didn't see peak memory usage increase at all. I could tell it was garbage collecting by looking at the peak vs current memory usage.
I've added two System.gc() calls every 60 ticks. I'm hoping this doesn't make it jittery, but I'm not doing much while it plays, so I think I should be good. Not sure if this is what was causing the problem, though.
It looks like the manual garbage collection has fixed the issue. We haven't seen this since we put that upgrade in.
I read somewhere that for a high-traffic site (I guess that is a murky term as well), 30-60 seconds is a good value. Obviously I could do a load test and vary the values, but I couldn't find any documentation on this. Most samples use a minute or a couple of minutes; there's no recommended range. Is there something on MSDN or anywhere else that talks about this?
This all depends on whether or not the content changes frequently. For slowly changing or non-mutating content, a longer value works perfectly. However, you may need to shorten the value for constantly changing data, or risk serving stale output.
It all depends on how often a user requests your resource, and how big the resource is.
First, it is important to understand that when you cache something, that resource will stay the same until the cache duration runs out. A short cache duration will tax the web server more than a longer one, but it will serve more up-to-date data should the requested resource change.
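The question doesn't say which caching mechanism is in play, but assuming ASP.NET MVC's OutputCache attribute, setting the duration is just this (ProductsController and its data are made-up placeholders):

    using System.Web.Mvc;

    public class ProductsController : Controller
    {
        // Cache the rendered result for 60 seconds: every request inside that
        // window is served from the cache instead of re-running the action.
        [OutputCache(Duration = 60, VaryByParam = "none")]
        public ActionResult Index()
        {
            ViewBag.Products = new[] { "Bike", "Helmet", "Pump" }; // placeholder data
            return View();
        }
    }

Bumping Duration from 60 to 300 only changes how stale the page is allowed to get, which is why the right number depends on your content rather than on a documented recommendation.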
Obviously you want to cache database queries as much as possible, prioritizing those that are called often. But all caching takes memory on the server, and as resources run low the cache will be evicted. Take this into consideration when caching large items for long durations.
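For caching data (rather than whole pages), a minimal sketch using System.Runtime.Caching.MemoryCache with an absolute expiration might look like this; LoadProductsFromDatabase is a hypothetical stand-in for your expensive query:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;

    static class ProductCache
    {
        static readonly MemoryCache Cache = MemoryCache.Default;

        public static IList<string> GetProducts()
        {
            // Return the cached copy if it hasn't expired yet.
            if (Cache.Get("products") is IList<string> cached)
                return cached;

            var products = LoadProductsFromDatabase(); // hypothetical expensive query
            Cache.Set("products", products, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(60)
            });
            return products;
        }

        static IList<string> LoadProductsFromDatabase() =>
            new List<string> { "Bike", "Helmet", "Pump" }; // placeholder data
    }

MemoryCache also evicts entries under memory pressure, which is the behaviour described above.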
If you want data on how often users request a resource you can use Google Analytics, which is extremely easy to set up.
For very exhaustive analytics you can use Piwik. It requires a local server, though.
For rapidly changing resources, don't cache at all, unless producing them is really resource-heavy and real-time freshness isn't vital.
Giving you an exact number or recommendation would be doing you a disservice; there are too many variables involved.