What is considered as data transfer in Firebase Hosting?

What is considered data transfer in Firebase Hosting, according to the docs:
Data transfer (GB/month) — The amount of data transferred to end users from our CDN. Every Hosting site is automatically backed by our global CDN at no charge
So when a user goes to my domain and Firebase sends them my website, which let's say for the sake of argument is 100 MB, will every 10 requests cost me 0.15 USD according to the Firebase pricing?
Data transfer $0.15/GB
My current concern is that my React build folder is about 3 MB, since it has too many PNGs in it … so will I pay for the transfer of this build folder to the end client each time a client loads the site?

Let's say for the sake of argument it has 100 MB, so each 10 requests will cost me 0.15 USD
Yes, you are charged for the data downloaded from the CDN/server, so 10 visits that each transfer 100 MB add up to 1 GB, or $0.15 at the listed rate.
My current concern is that my React build folder is about 3 MB since it has too many PNGs in it
It's best to optimise and compress your images to reduce costs and also improve loading speed.
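As one option (not part of the original answer), a small Node script using the sharp library could batch-convert the build's PNGs to WebP before you build; the library choice, the src/assets path and the quality setting are all assumptions for illustration:

```ts
// compress-images.ts -- hypothetical one-off script; sharp, the folder path and the
// quality value are assumptions, not something the original answer prescribes.
import { readdir } from "node:fs/promises";
import { join, extname } from "node:path";
import sharp from "sharp";

const ASSETS_DIR = "src/assets"; // hypothetical location of the PNGs

async function main(): Promise<void> {
  for (const name of await readdir(ASSETS_DIR)) {
    if (extname(name).toLowerCase() !== ".png") continue;
    const input = join(ASSETS_DIR, name);
    const output = input.replace(/\.png$/i, ".webp");
    // Re-encode as WebP; quality 80 is usually far smaller than PNG for photos and screenshots.
    await sharp(input).webp({ quality: 80 }).toFile(output);
    console.log(`compressed ${name} -> ${output}`);
  }
}

main().catch(console.error);
```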
so will I pay for the transfer of this build folder to the end client each time a client loads the site
Some static assets should get cached locally, so the next time the user loads the site the browser may serve them from cache instead of hitting the server. So it won't always be the full 3 MB.
You can get a rough estimate of the data being transferred in the Network tab of the browser's developer tools.
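If you prefer a scripted version of that rough estimate, something along these lines using the browser's Resource Timing API should work (a sketch only; transferSize is reported as 0 for cache hits and for opaque cross-origin responses):

```ts
// Rough per-page transfer estimate; paste into the browser console or run in app code.
const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
const navigations = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

const totalBytes =
  navigations.reduce((sum, e) => sum + e.transferSize, 0) + // the HTML document itself
  resources.reduce((sum, e) => sum + e.transferSize, 0);    // JS, CSS, images, fonts, ...

console.log(`~${(totalBytes / 1024).toFixed(1)} KB transferred for this page load`);
```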

Related

Multiple wordpress site installation

Guys, I have a question.
Let's say I want to upload hundreds of thousands of posts/products to WordPress, which will slow down the website's performance, and the database will also keep getting bigger.
What if I split the WordPress site into several different installations in different subdirectories, based on the product or post category, so that one website only contains 25-30k posts/products, but there are around 10 of those separate installations? That way each database will be a lot smaller.
Do you think that will give better performance than putting everything in a single website?
My server has around 16 GB of RAM and 8 CPU cores.
I don't think it will make any difference, given you will run it on the same hardware. With multiple machines and one ingress node/load balancer you could route requests to different backend servers based on the product requested, but if you have only one server hosting everything (web server, database, etc.) you will hit the limits of CPU/RAM/etc. much sooner than the size of a database table becomes a problem (given it's properly designed, has indices and so on).
However, you can measure the performance in both cases using a load testing tool and see how response time, resource usage and the database slow query log look in each deployment scenario.
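As a very rough illustration (a dedicated load testing tool such as k6 or JMeter is the better choice), a throwaway Node 18+ script like the one below fires batches of concurrent GET requests at a page and reports average and p95 response times; the URL, concurrency and round counts are placeholders:

```ts
// load-probe.ts -- crude concurrency probe, not a replacement for a real load testing tool.
import { performance } from "node:perf_hooks";

const TARGET = "https://example.com/shop/some-product"; // hypothetical product page
const CONCURRENCY = 50;
const ROUNDS = 10;

async function timedGet(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so the timing covers the full download
  return performance.now() - start;
}

async function main(): Promise<void> {
  const samples: number[] = [];
  for (let round = 0; round < ROUNDS; round++) {
    const batch = await Promise.all(
      Array.from({ length: CONCURRENCY }, () => timedGet(TARGET)),
    );
    samples.push(...batch);
  }
  samples.sort((a, b) => a - b);
  const avg = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  const p95 = samples[Math.floor(samples.length * 0.95)];
  console.log(`requests=${samples.length} avg=${avg.toFixed(1)}ms p95=${p95.toFixed(1)}ms`);
}

main().catch(console.error);
```

Running it against both layouts, alongside resource usage and the slow query log, gives you the comparison described above.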
Data size doesn't have to slow the site. It becomes a matter of how fast you can get the data from the DB. A few things to consider:
Place the database on a dedicated host. If locally hosted, dedicate a crossover cable from the web tier to the DB tier, with a second IP for admin on the database host. You might also consider a managed instance of your database with a cloud provider.
Indexes are your friend. Larger datasets result in longer indexes, but you can make deliberately shortened indexes. Choose a database that supports partitioned indexes. Combine partitioned indexes with the higher I/O operations per second of SSDs for your index partitions, and ensure that all lookups go through an index, and performance won't suffer even for large data sets. How does a partitioned index increase access speed? Instead of having to traverse an index from A to S for an index-supported query with an S-based WHERE clause, with a partitioned index you might have 26 indexes: one for A, then B, then C, and so on. You jump straight to the S partition for the lookup.
Shape your pool size on the PHP/web tier. You have already increased the pool size by pulling the database onto its own host. The next thing to do is to effectively manage your cache of fixed assets, the items that do not change across user sessions: commonly style sheets, images, fonts, JavaScript files, and so on. Minimally, look at a cache node in front of your WordPress site. Take a look at Varnish or Nginx for this. I am partial to Varnish, but either should do the trick. If you pair this with a CDN for a multigenerational cache, all the better. If you are in the cloud, then you have built-in CDN options with each cloud provider. You can also widen your bandwidth by placing these fixed assets on a dedicated host and then caching that one host, but this would require a lot of base modification of your WordPress image.
There is no reason why you cannot have multiple web front ends with a common database back end. You would need a load balancer to distribute the load, and your first-generation cache would sit in front of the load balancer. Realistically, if all of your queries are index supported and your cache is effectively managed, then you can easily scale to hundreds of concurrent users on moderate hardware. Your most taxing item is going to be the PHP execution that pulls dynamic data for user sessions. Make the queries respond as fast as possible and you have only a small lock window on PHP for each session.
Watch your locks per session! You may be at the mercy of a template and how it manages your finite resource pool, but in general: (a) unless 50%+1 of sessions use something, do not allocate it early; (b) be merciless in cutting sessions to release the session-based locks on memory; (c) pinch your assets until they bleed: no 45 MB images on the front page when a color-optimized 120 KB compressed image will do the job; (d) watch the repeat-access problem: this applies to subqueries in the database as well as to building a web page with hundreds of assets to resolve.
Have you considered other options, such as Drupal? The setup is a bit more complex, but I can validate running a dozen distinct websites out of a single Drupal instance with no degradation in performance, using the above dedicated database and cache nodes, with hundreds of concurrent users on fairly moderate hardware (mini-ITX Atom-based PCs).

Reliable way to count total used space of firebase storage per user

I'm trying to figure out an approach that will guarantee a correct count of the size of files that users upload from a web client to Firebase Storage.
The core requirements are these:
it must be stable: it must be guaranteed to count every uploaded file
it must be scalable: I can't read "all stored files" to calculate the size at once, as the number of files can be huge
it must be secure: I can't rely on the browser; the calculations must be performed on the server.
So far the approach is this:
a user of my web app can upload multiple files
those files are uploaded using the Firebase web client SDK
on the server I listen to functions.storage.object().onFinalize() to get the file size for each file
then I update a dedicated document in Firestore to add the new size to the total (let's call it totalStorageSizeDoc)
This approach addresses security and scalability, but the problem I see is that when a user uploads a bunch of small files, this can easily trigger lots of functions.storage.object().onFinalize() calls within 1 second, while one write per second per document is a 'not-hard-but-still' limit in Firestore. At that point the writes to totalStorageSizeDoc will be performed too fast, with the risk that some of them get rejected.
Is there a way to easily queue the writes from onFinalize() to ensure the totalStorageSizeDoc is not overwhelmed? Or should I take a completely different approach? Maybe there are best practices for counting used storage size that I've missed?
Any advice is much appreciated.
The easiest thing to do would be to enable retries for your function, so that if it fails for whatever reason (like exceeding some limit), the system will simply retry it until it succeeds.
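A minimal sketch of that retry-based approach with 1st-gen Cloud Functions might look like the following; the totalStorage/usage document path and the totalBytes field are made-up names, and the sketch does not guard against double-counting if a retried execution had already committed its increment:

```ts
// index.ts -- sketch only; assumes the firebase-functions v1 API and firebase-admin.
import * as functions from "firebase-functions/v1";
import * as admin from "firebase-admin";

admin.initializeApp();

export const trackUploadSize = functions
  .runWith({ failurePolicy: true }) // enable retries, per the answer above
  .storage.object()
  .onFinalize(async (object) => {
    const size = Number(object.size); // object.size is a string in the object metadata
    // Atomically add the new file's size to the running total (hypothetical doc/field names).
    await admin
      .firestore()
      .doc("totalStorage/usage")
      .set({ totalBytes: admin.firestore.FieldValue.increment(size) }, { merge: true });
  });
```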

Firebase hosting static file serve very slow

My user took 1.6 minutes to download a 900 KB static file from Firebase Hosting.
May I know is there any way to optimize this?
Thanks to Doug Stevenson for pointing out that the main reason for the slow TTFB and content download speed is that Fastly's CDN performs poorly in my country (Malaysia). Switching to Cloudflare resolved the problem.
The image below, a CloudHarmony test, shows that Fastly's CDN has a really bad result, especially downlink for files > 256 KB.
There really is no way to optimize the time it takes for Firebase Hosting to serve static content, other than taking steps to reduce the amount of data to transfer (maybe compressing it differently), or splitting the data among more than one concurrent request.
It will take as long as it takes to transfer the content to the user. The user's internet connection speed makes a huge difference, and if their connection is slow, that situation can't be improved.

Limitation of free plan in firebase

What is the meaning of the following?
50 Max Connections, 5 GB Data Transfer, 100 MB Data Storage.
Can anyone explain this to me? Thanks.
EDIT - Generous limits for hobbyists
Firebase has now updated the free plan limits
Now you have
100 max connections
10 GB data transfer
1 GB storage
That means you can have only 50 active users at once, transfer only 5 GB of data within one month, and store only 100 MB of your data.
E.g. you have an online web store: only 50 users can be there at once, only 100 MB of data (titles, prices, item images) can be stored in the DB, and only 5 GB of transfer means your website will be able to deliver only 5 GB of data to users (i.e. if your page is 1 MB in size, users will be able to load that page only about 5,000 times).
UPD: to check the size of a certain page (to work out whether 5 GB is enough for you), in Google Chrome right-click anywhere on the page, choose "Inspect Element" and switch to the "Network" tab. Then refresh the page. The bottom status bar will show the amount of transferred data (for reference, the current Stack Overflow page is about 25 KB).
From the same page where the question was copied/pasted:
What is a concurrent connection?
A connection is a measure of the number of users that are using your app or site simultaneously. It's any open network connection to our servers. This isn't the same as the total number of visitors to your site or the total number of users of your app. In our experience, 1 concurrent corresponds to roughly 1,400 monthly visits.
Our Hacker Plan has a hard limit on the number of connections. All of the paid Firebases, however, are "burstable", which means there are no hard caps on usage. REST API requests don't count towards your connection limits.
Data transfer refers to the number of bytes sent back and forth between the client and server. This includes all data sent via listeners--e.g. on('child_added', ...)--and read/write ops. This does not include hosted assets like CSS, HTML, and JavaScript files uploaded with firebase deploy.
Data storage refers to the amount of persistent data that can live in the database. This also does not include hosted assets like CSS, HTML, and JavaScript files uploaded with firebase deploy.
These limits, mentioned and discussed in the answers, are per project.
The number of free projects is not documented. Since this is an abuse vector, the number of free projects is based on some super secret sauce--i.e. your reputation with Cloud. Somewhere in the range of 5-10 seems to be the norm.
Note also that deleted projects take around a week to be deleted and they continue to count against your quota for that time frame.

Building ASP.Net web page to test user's connection (bandwidth)

We need to add a feature to our website that allows the user to test their connection speed (upload/download). After some research I found that the way this is usually done is by downloading/uploading a file and dividing the file size by the time required for that task, as here: http://www.codeproject.com/KB/IP/speedtest.aspx
However, this is not possible in my case: it has to be a web application, so I can't read/write files on the user's machine for obvious security reasons. From what I have read, the download/upload tasks should live on the server, but how?
This website has exactly what I have in mind, but in PHP: http://www.brandonchecketts.com/open-source-speedtest. Unfortunately, I don't have any PHP background and this task is really urgent.
I really don't need to use stuff like speedtest
thanks
The formula for calculating the download speed in bit/s:
data size in bytes * 8 / (time in seconds to download - latency to the server)
Download:
Host a blob of data (an HTML document, image or whatever) of known download size on your server, make an AJAX GET request to fetch it, and measure the time between the start of the download and when it finishes.
This can be done with JavaScript code alone, but I would recommend including a timestamp of when the payload was generated in order to calculate the latency.
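A browser-side sketch of that download measurement might look like this; the /speedtest/blob.bin endpoint and its 1,000,000-byte size are assumptions (serve any payload of known size from your ASP.NET backend), and the latency correction from the formula above is left out for brevity:

```ts
// Download speed sketch -- endpoint and size are placeholders you control on the server.
const BLOB_URL = "/speedtest/blob.bin";
const BLOB_BYTES = 1_000_000;

async function measureDownloadMbps(): Promise<number> {
  const start = performance.now();
  const res = await fetch(`${BLOB_URL}?nocache=${Date.now()}`, { cache: "no-store" });
  await res.arrayBuffer(); // wait until the whole body has arrived
  const seconds = (performance.now() - start) / 1000;
  // data size in bytes * 8 / time in seconds, e.g. 1,000,000 B over 2 s is roughly 4 Mbit/s
  return (BLOB_BYTES * 8) / seconds / 1_000_000;
}

measureDownloadMbps().then((mbps) => console.log(`~${mbps.toFixed(1)} Mbit/s down`));
```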
Upload:
Make an AJAX POST request to the server filled with junk data of a specific size, and calculate the time it took to perform the POST.
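And a matching sketch for the upload direction, assuming a hypothetical /speedtest/upload endpoint on the ASP.NET side that simply reads and discards the body:

```ts
// Upload speed sketch -- endpoint and payload size are placeholders.
const UPLOAD_URL = "/speedtest/upload";
const UPLOAD_BYTES = 500_000;

async function measureUploadMbps(): Promise<number> {
  const junk = new Uint8Array(UPLOAD_BYTES); // zero-filled junk data of a known size
  const start = performance.now();
  await fetch(UPLOAD_URL, { method: "POST", body: junk, cache: "no-store" });
  const seconds = (performance.now() - start) / 1000;
  return (UPLOAD_BYTES * 8) / seconds / 1_000_000;
}

measureUploadMbps().then((mbps) => console.log(`~${mbps.toFixed(1)} Mbit/s up`));
```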
List of things to keep in mind:
You will not be able to measure a higher bandwidth than your server's.
Concurrent users will drain the bandwidth; build in a limit if this will be a problem.
It is hard to build a service that measures bandwidth precisely; the one you linked to reports 5 Mbit/s for me, while my actual speed is around 30 Mbit/s according to Bredbandskollen.
Other links on the subject
What's a good way to profile the connection speed of Web users?
Simple bandwidth / latency test to estimate a users experience
Note: I have not built any service like this, but it should do the trick.
