Network Internet Egress Price - WordPress

I've never fully understood Google's pricing policy; it's a little confusing to me. I'm currently testing Google Compute Engine to try to understand how it all works.
As a simple example, when deploying WordPress from Cloud Launcher, the estimated monthly cost is $4,49, using a VM instance with 1 shared vCPU + 0.6 GB of memory (f1-micro) and a standard 10 GB disk.
After less than 10 days of testing, during which I was the only user, the instance stayed up the whole time, and my usage was very light, I began tracking the billing details.
Look at the numbers:
Generic Micro instance with burstable CPU, no scratch disk 4.627 Minutes $0,62
Storage PD Capacity 1,92 GB-month $0,08
And my big surprise
Network Internet Egress from Americas to Americas 12,82 GB $1,54
I am aware that this amount is very small; that much is clear.
But imagine 100 people making the same use of the site over the same period:
Network Internet Egress from Americas to Americas would jump to $154,00
Is my reasoning correct?
Is there a way to lower this value?
Another question:
Which has the lower cost, Google Compute Engine or Google App Engine?

Don't buy a web server on a cloud platform unless you know the pricing model inside out.
Yes, GCP and other cloud platforms charge a hefty sum for egress/outgoing traffic if you are not careful; e.g. if your website gets hit by a DDoS, you will be doomed with a huge bill. As shown in the table, GCP charges $0,12/GB for egress:
# Europe using "," as decimal separator
12,82GB x $0,12/GB = $1,5384 ~ $1,54
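The arithmetic above, including the question's scale-up to 100 similar users, can be restated as a tiny cost model. The $0,12/GB rate is the one quoted in this thread (normalised here to "." decimals for the code); check the current pricing page before relying on it:

```python
# Rough egress cost model for GCE "Americas to Americas" traffic.
# The rate comes from the billing report quoted above and may change.
EGRESS_RATE_USD_PER_GB = 0.12

def egress_cost(gb, rate=EGRESS_RATE_USD_PER_GB):
    """Return the egress charge in USD for a given traffic volume in GB."""
    return gb * rate

print(round(egress_cost(12.82), 2))        # one user over ~10 days: 1.54
print(round(egress_cost(12.82 * 100), 2))  # 100 similar users: 153.84
```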
If you expect more traffic, you should look into Google's Cloud CDN offering, which charges a lower egress price. However, you need to be careful when using a CDN: it will charge for ingress traffic as well (so only allow traffic between your storage repo and the CDN).
It is a good idea to set up alarms/billing alerts to warn you about abnormal traffic.
Since you are in the cloud, you should also compare prices across CDN services.
(Update)
Google App Engine (GAE) is just a Platform as a Service, for which Google gives you free daily resources, i.e. 1 GB of egress per day. The $0,12/GB price still applies if you go above that limit. In addition, you are limited to the offerings provided; there is no such web service on GAE at the moment.


AWS wordpress - calculating network data transfer charge

I'm trying to calculate the price of network data transfer in and out from an AWS WP website.
Everything is behind CloudFront. EC2/RDS serves dynamic resources and a few static ones; S3 serves only static resources. The Application Load Balancer is there just for autoscaling purposes.
Even if everything seems simple, experience has taught me that the devil is in the details.
So, at the end of my little journey (reading blogs and docs), I would like to share the result of my research and hear what the community thinks.
Here is the architecture, all created within the same region/availability zone (let's say Europe/Ireland):
At the time of writing, the network data transfer charges are:
the traffic out from CloudFront (first 10 TB per month at $0.15/GB, etc.)
the traffic in and out of the Application Load Balancer (processed bytes: 1 GB per hour for an EC2 instance costs ~$7.00/GB)
For the rest, traffic within the same region is free of charge, and CloudFront does not charge for incoming data.
For example: within the same region, there should be no charge between an EC2 and an RDS DB Instance.
Does anyone know if I'm missing something? Are there subtle costs I have overlooked?
Your question is very well described. Thanks for the little diagram you drew to help clarify the overall architecture. After reading your question, here are the things I want to point out.
The link to the CloudFront data transfer price is very outdated. That blog post was written by Jeff Barr in 2010. The latest CloudFront pricing page is linked here.
The data transfer from CloudFront out to the origin S3 is not free. This is listed in "Regional Data Transfer Out to Origin (per GB)" section. In your region, it's $0.02 per GB. Same thing applies to the data from CloudFront to ALB.
You said "within the same region, there should be no charge between an EC2 and an RDS DB Instance". This is not accurate. Only the data transfer between RDS and EC2 Instances in the same Availability Zone is free. [ref]
Also be aware that S3 has request and object retrieval fees. These will still apply in your architecture.
In addition, here is a nice graph made by the folks at lastweekinaws that visually lists all the AWS data transfer costs.
Source: https://www.lastweekinaws.com/blog/understanding-data-transfer-in-aws/
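Putting these corrections together, a back-of-the-envelope monthly estimate might look like the sketch below. The per-GB rates are the ones quoted in this thread (the possibly outdated $0.15/GB CloudFront first tier and the $0.02/GB "Data Transfer Out to Origin" fee); treat them as placeholders and check the current pricing pages:

```python
# Hypothetical monthly CloudFront transfer estimate using the per-GB rates
# quoted in this thread; both rates change over time.
def cloudfront_monthly_cost(gb_to_viewers, gb_to_origin,
                            viewer_rate=0.15,   # first-tier egress, per GB
                            origin_rate=0.02):  # transfer out to origin, per GB
    """Return (viewer egress cost, origin fetch cost, total) in USD."""
    viewer = gb_to_viewers * viewer_rate
    origin = gb_to_origin * origin_rate
    return viewer, origin, viewer + origin

# e.g. 100 GB served to viewers, 10 GB of cache misses fetched from S3/ALB
viewer, origin, total = cloudfront_monthly_cost(100, 10)
print(f"${total:.2f}")  # roughly $15.20
```

The point the answer makes shows up clearly here: the origin-fetch side is small but not zero, and it is easy to forget.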

How can I tell how much it costs to run a low traffic WordPress site on GCP?

Started a GCP free trial, migrated two WordPress sites with almost zero traffic to test the service. Here's what I'm running for each of the two sites:
VM: g1-small (1 vCPU, 1.7 GB memory), 10 GB SSD
Package: bitnami-wordpress-5-2-4-1-linux-debian-9-x86-64
After about 1-2 months, it seems that $46 has been deducted from the $300 free-trial credit. Is this accurate/typical? Am I looking at paying $20+ per month to serve perhaps 100 hits to the site from myself, plus whatever normal bot crawling happens? That is roughly 10 times more expensive than a shared multi-domain hosting account from other web hosts.
Overall, how can I tell how much it will actually cost, when GCP reports about $2 of resource consumption per month, a $2 credit, and somehow a $254 balance out of $300? One of the billing pages also says the average monthly cost is 17 cents, which matches neither the $2 nor the $46 figure. I can't find any entry that would explain all the other resources that were paid/credited.
Does anyone else have experience how much it should cost to run the Bitnami WordPress package provided on GCP marketplace?
Current Usage:
Running 2x g1-small (1 vCPU, 1.7 GB memory, 10 GB SSD) packages 24x7 should have deducted around ~$26* USD from your free trial.
I presume you also need MySQL, which would cost you a minimum of $7.67* per instance:
Assuming you used 2x MySQL instances, that would have cost you ~$15.
So $26 compute + $15 DB + $5 (other network, DNS costs, etc.) comes to about $46. Please note that the price would go up if you used compute for less than a month.
*
1. As you can see from the image, you can get a sustained use discount if you run the instance for a full month.
If you are planning to use it even longer, you can get a bigger discount with committed use.
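The arithmetic behind the ~$46 figure above can be restated as a sketch. All inputs are the approximate estimates from this answer, not exact GCP list prices:

```python
# Reconstructing the ~$46/month figure from the rough estimates above.
compute_2x_g1_small = 26.0  # two g1-small VMs running 24x7, with sustained-use discount
cloud_sql_2x = 15.0         # two MySQL instances at ~$7.67 each, rounded up
misc = 5.0                  # other network, DNS, etc.

total = compute_2x_g1_small + cloud_sql_2x + misc
print(f"estimated monthly cost: ${total:.2f}")  # $46.00
```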
Optimise for Cost
Have a look at the cost calculator link to plan your usage.
https://cloud.google.com/products/calculator/
Compute and relational storage are the most cost-prohibitive factors for you. If you are tech-savvy and open to experimentation, you can try Cloud Run, which should reduce your cost significantly but might add extra latency when serving requests. The link below shows how to set this up:
https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Currently there is no way around using a database. A serverless database could help bring down your cost, but GCP does not offer one at this point. AWS has this offering, so GCP might come up with it in the future.
Scalability
When your user base grows, you might want to use:
a CDN, which would help with your network costs.
Saving images to Cloud Storage would also help bring down your costs, since disks are more expensive, less scalable, and require more maintenance.
Hope this helps.

Multicloud load balancer (Firebase and Digital Ocean)

I'm currently running my webapp on Firebase Hosting under the free tier, but I am about to start advertising a little, meaning I will consistently drive traffic to the website, knowing full well that the conversion rate, aka purchases, will be really low (I'd be happy if 0.1% converts xD).
To avoid incurring huge costs from Firebase without a corresponding return, would it be possible to spin up
1 Load Balancer
1 Instance (small, 5 EUR/month)
on DigitalOcean with a replica of the website and drive traffic there as well, using one of the following possible patterns?
50-50, halving the load on Firebase at the very least
(best, but not sure if possible) hit Firebase Hosting only when Digital Ocean is reaching saturation
Is it feasible in any way? Do you have previous experience on this?
Thank you
It's definitely possible to implement a load balancer like that. But that's a quite involved and advanced project, and I'd seriously consider whether it's worth it for this type of optimization.
From the Digital Ocean docs:
Droplets include free outbound data transfer, starting at 1000 GB/month for the smallest plan. Excess data transfer is billed at $.01/GB.
And for Firebase Hosting:
Free/Spark plan: GB transferred: 10 GB/month
Metered/Blaze plan: GB transferred: $0.15/GB (with the first 10GB/month being non-charged)
If you're going for pure bandwidth cost, it seems that your Digital Ocean droplet is always going to be cheaper. Also: the free quota on Firebase is 100x smaller than the included bandwidth in your Digital Ocean plan. Are you really looking to shed load from Digital Ocean to Firebase for a 1% increase in the quota?
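To make the comparison concrete, here is a small sketch of the two bandwidth pricing models, using the figures quoted above (which may have changed since):

```python
# Monthly bandwidth cost under the two plans quoted in this answer.
def digitalocean_cost(gb, included_gb=1000, overage_rate=0.01):
    """Smallest droplet: first 1000 GB included, $0.01/GB after that."""
    return max(0.0, gb - included_gb) * overage_rate

def firebase_blaze_cost(gb, free_gb=10, rate=0.15):
    """Blaze plan: first 10 GB free, $0.15/GB after that."""
    return max(0.0, gb - free_gb) * rate

for gb in (10, 100, 1000):
    print(gb, digitalocean_cost(gb), round(firebase_blaze_cost(gb), 2))
```

At any volume the droplet is cheaper on bandwidth alone, which is the point above: there is little reason to shed load toward the more expensive side.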
For hosting of static content, I'd recommend finding a single host that has a pricing plan that fits with what you're willing to pay. It's just not worth the overhead of coming up with your own load balancer.
If instead you are looking for a load balancer that somebody else already built (or a tutorial on how to build one), those questions are off-topic on Stack Overflow and are a better fit for your favorite search engine. If you get stuck while implementing the load balancer, post back with the concrete question about where you got stuck.

What's the Google Cloud configuration I need for hosting a WordPress site with 20,000 visits daily?

I have a WordPress site hosted on a dedicated server with the configuration below.
CPU (8 cores): Intel Xeon CPU E3-1265L v3 @ 2.50GHz,
Memory: 24GB,
Currently Used Storage: 350GB,
MySQL size: 3GB,
I have maximum daily visitors of 20,000 and maximum concurrent users at any point would be 400.
I would like to know which Google Cloud Compute Engine machine type I should choose to serve this many requests without compromising performance. Also, what other resources do I need to buy?
Is AWS better for this than GCP in this case?
Nobody can give you an answer to this type of question. It depends entirely on the application setup and the usage pattern. The best way to determine this is to create an automated test system that simulates usage in a pattern similar to how your website will be used, then monitor the system (CPU, RAM usage, etc.) and measure performance.
Alternatively, you can choose to oversize the system and monitor real-life metrics, then scale the system down so that the metrics stay within acceptable ranges. It is relatively cheap to oversize systems in the cloud, given that it might only be for a week.
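As a starting point for the automated-test idea, a load test can be as simple as firing concurrent requests and looking at the latency distribution. This is a generic sketch: `make_request` is a stub standing in for a real HTTP GET against the site, and the 400-user figure comes from the question:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def make_request():
    """Stub for a real HTTP request to the site under test."""
    time.sleep(0.01)  # stand-in for the network round trip
    return 200        # pretend status code

def load_test(concurrent_users=400, total_requests=2000):
    """Fire requests from a pool of simulated users; return statuses and p95 latency."""
    latencies = []

    def one_call(_):
        start = time.perf_counter()
        status = make_request()
        latencies.append(time.perf_counter() - start)  # list.append is thread-safe in CPython
        return status

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        statuses = list(pool.map(one_call, range(total_requests)))
    latencies.sort()
    return statuses, latencies[int(len(latencies) * 0.95)]
```

While a test like this runs, watch CPU and RAM on the instance; if the 95th-percentile latency stays acceptable at 400 concurrent users, the machine type is big enough.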

Tuning the cost of data transfer with NGINX

I have a streaming setup using nginx, and I would like to know how to fine-tune the data transfer. Say I have the setup in the following diagram.
You can see one person is connected via a media player, but nobody is watching their stream; it stays connected constantly, and even if I reboot nginx it will reconnect. It is currently at 56.74 GB but can reach 500 GB or more. Does this get charged as data transfer on my hosting bill, or am I OK to forget about it?
I just want to understand best practices for nginx live streaming and try to reduce the cost of users using my server as much as possible.
Would love some good advice on this from anyone doing something similar.
Thanks
When hosting providers procure traffic capacity wholesale for their clients, they usually pay on a 95th-percentile utilisation basis. That means that if their 5-minute average utilisation is at or below 5 Gbps 95% of the time, they pay the 5 Gbps rate for all of their traffic, even though consumption at around 04:00 in the morning is well below 1 Gbps, and at certain times of day it spikes well above 5 Gbps for many minutes at a time. They still pay for 5 Gbps, which is their 95th percentile on a 5-minute-average basis.
Another consideration is that links are usually symmetrical, whereas most hosting providers that host websites see very asymmetrical traffic patterns: an average HTTP request is likely to be about 1 KB, whereas a response will likely be around 10 KB or more.
On the first point: since it is relatively difficult to calculate the 95th-percentile usage for each client individually, providers absorb the cost and charge their retail clients on a TB/month basis. On the second point: this basically means that in most circumstances the incoming capacity is already paid for many times over and nobody is using it, so most providers only really charge for outgoing traffic.
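The 95th-percentile billing rule described above is easy to state in code: sort a month's worth of 5-minute samples, throw away the top 5%, and bill at the highest remaining sample. A minimal sketch:

```python
def billable_rate(samples_gbps):
    """95th-percentile rate from a list of 5-minute average samples (Gbps)."""
    ordered = sorted(samples_gbps)
    cutoff = int(len(ordered) * 0.95)  # samples kept after dropping the top 5%
    return ordered[cutoff - 1] if cutoff else 0.0

# 100 samples: mostly 3 Gbps with five spikes to 8 Gbps; the spikes fall in
# the discarded top 5%, so the billable rate stays at 3 Gbps.
print(billable_rate([3.0] * 95 + [8.0] * 5))  # 3.0
```

This is why short spikes (like an idle media player reconnecting) cost the provider little, while sustained transfer moves the whole percentile up.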
