Multicloud load balancer (Firebase and Digital Ocean)

I'm currently running my web app on Firebase Hosting under the free tier, but I'm about to start advertising a little, which means I will drive consistent traffic to the website, knowing full well that the conversion rate, aka purchases, will be really low (I'd be happy if 0.1% converts xD).
To avoid incurring huge costs from Firebase without a corresponding return, would it be possible to spin up
1 Load Balancer
1 Instance (small, 5 EUR/month)
on Digital Ocean with a replica of the website, and also drive traffic there with one of the following patterns?
50-50, halving the load on Firebase at the very least
(best, but not sure if possible) hit Firebase Hosting only when Digital Ocean is reaching saturation
Is it feasible in any way? Do you have previous experience with this?
Thank you

It's definitely possible to implement a load balancer like that, but it's quite an involved and advanced project, and I'd seriously consider whether it's worth it for this type of optimization.
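For a sense of what's involved, here's a minimal sketch of the 50-50 pattern as a redirect-based splitter in Python. The origin URLs are hypothetical placeholders, and a real setup would more likely use a reverse proxy in front of both replicas:

    # Minimal 50-50 traffic splitter: redirects each request to one of two
    # replicas. Origin URLs are hypothetical placeholders.
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ORIGINS = {
        "https://example-app.web.app": 0.5,         # Firebase Hosting replica
        "https://example-app.example-do.com": 0.5,  # Digital Ocean replica
    }

    class RedirectSplitter(BaseHTTPRequestHandler):
        def do_GET(self):
            # Pick an origin according to the configured weights.
            origin = random.choices(list(ORIGINS), weights=list(ORIGINS.values()))[0]
            self.send_response(302)
            self.send_header("Location", origin + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectSplitter).serve_forever()

Note that whatever host runs the splitter now sits in front of all your traffic, so you've added a new single point of failure just to save on bandwidth; that's part of what makes this involved.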
From the Digital Ocean docs:
Droplets include free outbound data transfer, starting at 1000 GB/month for the smallest plan. Excess data transfer is billed at $.01/GB.
And for Firebase Hosting:
Free/Spark plan: GB transferred: 10 GB/month
Metered/Blaze plan: GB transferred: $0.15/GB (the first 10 GB/month are free)
If you're going for pure bandwidth cost, your Digital Ocean droplet is always going to be cheaper. Also: the free quota on Firebase is 100x smaller than the bandwidth included in your Digital Ocean plan. Are you really looking to shed load from Digital Ocean to Firebase for a 1% increase in quota?
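To make that concrete, here is a quick back-of-the-envelope comparison using the rates quoted above (a sketch; actual bills depend on region and plan details):

    # Monthly bandwidth cost at the quoted rates.
    def digitalocean_cost(gb, base=5.0, included_gb=1000, overage_per_gb=0.01):
        return base + max(0.0, gb - included_gb) * overage_per_gb

    def firebase_cost(gb, free_gb=10, rate_per_gb=0.15):
        return max(0.0, gb - free_gb) * rate_per_gb

    for gb in (50, 500, 1000, 2000):
        print(f"{gb:>5} GB/mo  DO: ${digitalocean_cost(gb):6.2f}  Firebase: ${firebase_cost(gb):6.2f}")

Even at 2000 GB/month, the droplet costs $15 (base plus overage) while the Firebase Blaze bill would be around $298.50.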
For hosting static content, I'd recommend finding a single host with a pricing plan that fits what you're willing to pay. It's just not worth the overhead of building your own load balancer.
If instead you are looking for a load balancer that somebody else already built (or a tutorial on how to build one), those questions are off-topic on Stack Overflow and a better fit for your favorite search engine. If you get stuck while implementing the load balancer, post back with a concrete question about where you got stuck.

Related

How can I tell how much it costs to run a low traffic WordPress site on GCP?

Started a GCP free trial and migrated two WordPress sites with almost zero traffic to test the service. Here's what I'm running for each of the two sites:
VM: g1-small (1 vCPU, 1.7 GB memory), 10 GB SSD
Package: bitnami-wordpress-5-2-4-1-linux-debian-9-x86-64
After about 1-2 months, it seems that $46 has been deducted from the $300 free-trial credit. Is this accurate/typical? Am I looking at paying $20+ per month to process perhaps 100 hits to the site from myself, plus whatever normal bot crawling happens? That is roughly 10 times more expensive than a shared-hosting multi-domain account from other web hosts.
Overall, how can I tell how much it will actually cost, when GCP reports about $2 of resource consumption per month, a $2 credit, and somehow a $254 balance from $300? One of the billing pages also says the average monthly cost is 17 cents, which differs from both the $2 and the $46 figures. I can't find any entry that would explain all the other resources that were paid/credited.
Does anyone else have experience how much it should cost to run the Bitnami WordPress package provided on GCP marketplace?
Current Usage:
Running 2x g1-small (1 vCPU, 1.7 GB memory, 10 GB SSD) packages 24x7 should have deducted around ~$26* USD from your free trial.
I presume you also need MySQL, which would cost you a minimum of $7.67* per instance; assuming you used 2x MySQL instances, that would have cost you ~$15.
So $26 compute + $15 DB + $5 (other network, DNS costs, etc.) comes to about $46. Please note that the price would go up if you ran the compute for less than a full month.
* You can get a sustained use discount if you run an instance for a full month; if you plan to use it for even longer, you can get a bigger discount with committed use.
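As a sanity check, the breakdown above reproduces the observed bill (the figures are this answer's approximations, not exact GCP rates):

    # Rough reconstruction of the ~$46 deduction.
    compute = 26.00    # 2x g1-small, 24x7 (approx.)
    mysql = 2 * 7.67   # assumed 2x smallest MySQL instances
    other = 5.00       # network, DNS, etc. (approx.)
    print(round(compute + mysql + other, 2))  # 46.34, close to the $46 observed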
Optimise for Cost
Have a look at the cost calculator link to plan your usage.
https://cloud.google.com/products/calculator/
Compute and relational storage are the most cost-prohibitive factors for you. If you are tech-savvy and open to experimentation, you can try Cloud Run, which should reduce your cost significantly but might add extra latency in serving requests. The link below shows how to set this up:
https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Currently there is no way around using a database. Serverless databases could help bring your cost down, but GCP does not offer this at this point. AWS has this offering, so GCP might come up with it in the future.
Scalability
When your user base grows, you might want to use a CDN, which would help with your network cost.
Saving images to Cloud Storage would also help bring your cost down, as disks are more expensive, less scalable, and need more maintenance; see the sketch below.
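As a rough sketch of the image-offloading idea, here is what an upload looks like with the google-cloud-storage Python client (the bucket and file names are hypothetical):

    # Offload a media file from the VM's disk to a Cloud Storage bucket.
    from google.cloud import storage

    client = storage.Client()  # uses application default credentials
    bucket = client.bucket("my-wordpress-media")  # hypothetical bucket name
    blob = bucket.blob("uploads/2019/11/header.png")
    blob.upload_from_filename("wp-content/uploads/2019/11/header.png")
    print(blob.public_url)  # serve this URL instead of reading the VM's disk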
Hope this helps.

What Google Cloud configuration do I need for hosting a WordPress site with 20,000 visits daily?

I have a WordPress site hosted on a dedicated server with the configuration below.
CPU (8 cores): Intel Xeon CPU E3-1265L v3 @ 2.50GHz
Memory: 24 GB
Currently used storage: 350 GB
MySQL size: 3 GB
I have a maximum of 20,000 daily visitors, and the maximum number of concurrent users at any point would be 400.
I would like to know which Google Cloud Compute Engine machine type I should choose to cater to that many requests without compromising performance. Also, what other resources do I need to buy?
Is AWS better than GCP for this case?
Nobody can give you an answer to this type of question; it depends entirely on the application setup and the pattern of usage. The best way to determine this is to create an automated test system that simulates usage in a pattern similar to how your website will be used, then monitor the system (CPU, RAM usage, etc.) and measure performance.
Alternatively, you can choose to oversize the system and monitor real-life metrics, then scale the system down so that the metrics stay within acceptable ranges. It is relatively cheap to oversize systems in the cloud, given that it might only be for a week.
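As a concrete starting point for the simulated-usage approach, here is a minimal script for Locust, a Python load-testing tool (pip install locust); the paths and task weights are placeholders to be shaped to your real traffic:

    # locustfile.py: simulate visitors browsing the WordPress site.
    from locust import HttpUser, task, between

    class WordPressVisitor(HttpUser):
        wait_time = between(1, 5)  # think time between page views

        @task(3)
        def homepage(self):
            self.client.get("/")

        @task(1)
        def sample_post(self):
            self.client.get("/sample-post/")  # hypothetical URL

    # Run: locust -f locustfile.py --host https://your-site.example.com
    # Ramp up towards ~400 concurrent users while watching CPU/RAM on the VM.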

ASP.NET website on an AWS EC2 t2.medium

We have an ASP.NET application currently running at a shared hosting provider.
The application specifications are
.NET 4.5 Web Forms website
1 MSSQL database, size: 100 MB
Web application size, including images: 1.2 GB
Page visits/day: ~2,500
Bandwidth consumed/day: ~200 MB
We are not happy with the hosting provider and have faced a lot of issues recently. Our application requires the Full trust level for some of its services, and the hosting provider is unwilling to set the application's trust level to Full.
Statistics obtained from the hosting provider over one week: around 2,500 page visits a day on weekdays and 500 page visits on weekends.
Based on the above information, we are planning to move to AWS. I was thinking of going with a t2.medium reserved Windows instance (2 cores, 4 GB RAM) and EBS storage, which will cost us around 50 USD a month.
Since we have a small database by enterprise standards, I was thinking of hosting it on the same instance. We are also planning to go with SQL Server Express, as it does not require licensing.
I have read a lot of documentation and am unable to reach a conclusion about the instance size to go for, and whether a t2.medium EC2 instance will serve our purpose.
1. It would be great if anyone could tell me whether the above t2.medium EC2 infrastructure will suit our needs.
2. Do I need a t2.medium instance, or can I go for a lower-level configuration?
Open to suggestions on changing the above-mentioned infrastructure as well.
Instead of quoting page visits, etc., you are better off measuring RAM and CPU utilisation.
A couple of things to consider:
1 - The t series uses burstable-throughput CPUs. That means you gain 'credits' when your CPU is idle and spend them when you use it (I can't remember exactly, but possibly above ~10% utilisation?). So if your average workload is either very low or very bursty, the t series is a good choice, but if your workload is fairly constant and high, it is a very bad choice. A quick way to check an existing instance is shown in the sketch after this list.
2 - How static is the website? You might see benefit in decoupling the database onto an RDS instance that matches your needs, and then running the website on a spot instance to reduce cost, for example.
3 - With respect to configuration requirements, only you can answer this.
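For point 1, one way to see whether an existing t2 instance is draining its credits is to pull the CPUCreditBalance metric from CloudWatch; a sketch with boto3 (the instance ID is a placeholder):

    # Print the hourly average CPU credit balance for the last 24 hours.
    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))

A steadily falling balance means your load is constant rather than bursty, i.e. the case where the t series is a bad fit.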

Network Internet Egress Price

I've never quite understood Google's pricing policy; it's a little confusing for me. I'm currently testing Google Compute Engine to try to understand how it all works.
As a simple example, when launching WordPress from Cloud Launcher there is a sustained-use forecast of $4,49 for a VM instance with 1 shared vCPU + 0.6 GB of memory (f1-micro) and a 10 GB standard disk.
In less than 10 days of testing, with me as the only user, the instance online throughout the period, and very little usage, I began tracking the billing details.
Look at the numbers:
Generic Micro instance with burstable CPU, no scratch disk: 4.627 minutes, $0,62
Storage PD Capacity: 1,92 GB-month, $0,08
And my big surprise:
Network Internet Egress from Americas to Americas: 12,82 GB, $1,54
I am aware that this amount is very small; that much is clear. But imagine, for example, 100 people making the same use in the same period: the Network Internet Egress from Americas to Americas would jump to $154,00.
Is my reasoning correct? Is there a way to lower this value?
One more doubt: which has the lower cost, Google Compute Engine or Google App Engine?
Don't buy a web server on a cloud platform unless you know the pricing strategy inside out.
Yes, GCP and other cloud platforms charge a hefty sum for egress/outgoing traffic if you are not careful; e.g. if your website gets hit by a DDoS, you will be doomed with a huge bill. As your billing line items show, GCP charges $0,12/GB for egress:
# Europe using "," as decimal separator
12,82GB x $0,12/GB = $1,5384 ~ $1,54
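The same arithmetic, including the 100-user projection from the question, in Python (rates as quoted; real egress pricing varies by destination and volume):

    rate = 0.12             # $/GB, Americas egress as quoted
    single_user_gb = 12.82  # measured over ~10 days
    print(round(single_user_gb * rate, 2))        # 1.54
    print(round(100 * single_user_gb * rate, 2))  # 153.84, i.e. ~$154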
If you expect more traffic, then you should look into Google's Cloud CDN offering, which charges a lower egress price. However, you need to be careful when using a CDN: it will also charge for the ingress traffic that fills it, so only allow traffic between your storage repo and the CDN.
It is a good idea to set up alarm/alerting services to warn you about abnormal traffic.
Since you are in the cloud, you should also compare the prices of CDN services.
(Update)
Google App Engine (GAE) is a Platform as a Service, where Google gives you free daily resources, i.e. 1 GB of egress per day. The $0,12/GB price still applies above that limit. In addition, you are limited to the offerings given; there is no general-purpose web server offering in GAE at the moment.

ASP.NET application hosting in the Amazon cloud

We are hosting a .NET application on Amazon EC2. What would be the optimum configuration for a group that has 525 employers and around 85,000 employees? I have been googling this for the past week but could not find a reliable answer.
You might want to consider hosting your application on AppHarbor. We'll seamlessly scale your application, and you won't have to worry about sizing your infrastructure up front.
(disclaimer, I'm co-founder of AppHarbor)
Perhaps you need to provide more information to get better answers. For example: what does your application do? How many users does it have? What is the relevance of "525 employers and around 85,000 employees" (does it indicate the amount of data, or the number of users)? How many users will be concurrent at a time? What will be the average request time? What will be the usage pattern? How much memory does it need? Is your app CPU-intensive or IO-intensive? If it's IO-intensive, where exactly is your data stored?
That said, you need not worry too much on the provisioning/scaling front. Amazon EC2 offers on-demand resourcing, so you can easily scale your configuration up as your needs grow.
If you really want to find the optimal configuration, the only way is to load test your application (with typical usage patterns/scenarios). Decide on your parameters, such as average response time, and find the user limits served by, say, 1, 4, and 8 ECUs (Elastic Compute Units). You can load test using the standard instance sizes: small, large, and extra large. You can then interpolate to project your actual ECU and memory needs (a sketch follows below) and choose the optimal configuration from that.
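A sketch of that interpolation step; the measured numbers here are placeholders, not real benchmarks:

    # Project how many ECUs a target concurrency needs from measured limits.
    measured = {1: 120, 4: 500, 8: 1050}  # ECU -> max concurrent users (hypothetical)

    def ecus_needed(target_users):
        users_per_ecu = measured[8] / 8  # rough linear model from the largest point
        return target_users / users_per_ecu

    print(round(ecus_needed(2000), 1))  # ~15.2 ECUs, so size up and re-test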
You can try off-site load testing, considering that, as per Amazon:
EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
You can arrange hardware equivalent to, say, 1, 2, and 4 ECUs and do your load testing while watching memory consumption with performance counters. That should give you some clue as to what is needed. IMO, though, you will be better off load testing in the actual EC2 environment.
