I'm trying to figure out how much bandwidth SPICE requires for office and multimedia usage, but every post I read gives a different figure. In one place I read that 150 kilobits per second is enough for office work and 1 megabit per second is enough for multimedia, but elsewhere I read 4 kilobytes per second for office and 1 megabyte per second for multimedia.
Then I decided to try it myself: I installed Proxmox and configured SPICE, but I measured something quite different again. For 1080p videos on YouTube I saw about 2 megabytes per second. I'm not an expert at configuring SPICE, so maybe my configuration was not optimized.
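To put the conflicting figures (and my own measurement) into the same unit, here is a quick conversion, assuming 1 byte = 8 bits and decimal prefixes; the numbers are just the ones quoted above:

```typescript
// Convert kilobytes per second to kilobits per second for comparison.
const toKbit = (kiloBytesPerSecond: number): number => kiloBytesPerSecond * 8;

console.log(toKbit(4));    // 4 KB/s office figure     =    32 kbit/s (vs. 150 kbit/s elsewhere)
console.log(toKbit(1000)); // 1 MB/s multimedia figure =  8000 kbit/s (vs. 1 Mbit/s elsewhere)
console.log(toKbit(2000)); // my measured 2 MB/s       = 16000 kbit/s (~16 Mbit/s)
```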
What should I expect for office use and for multimedia (video editing, CAD, etc.)? Is there any good documentation on how to properly set up SPICE on QEMU?
Based on this article, when implementing a WebRTC solution without a server (by which I assume they mean an SFU), the bottleneck is that it only works for 4-6 participants.
Is there a way to work around this? For example, I want to use Firebase as the only backend, mainly for signaling, with no SFU. What is the general implementation strategy for reaching at least 25-50 participants in WebRTC?
Update: This GitHub project makes a different claim. It states, "A full mesh is great for up to ~100 connections".
Your real bottleneck with MESH is that each RTCPeerConnection will do its own video encoding in the browser.
The p2p concept naturally includes the requirement that both peers should adjust encoding quality based on network conditions. So, when your browser sends two streams to peers X (good download speed) and Y (bad download speed), the encodings for X and Y will be different - Y will receive lower framerate and bitrate than X.
Sounds reasonable, right? But, unfortunately, it mandates a separate video encoding for each peer connection.
If multiple peer connections could re-use the same video encoding, then MESH would be much more viable. But Google didn't provide that option in the browser. Simulcast requires an SFU, so that doesn't apply to your case.
So how many concurrent video encodings can a browser perform on a typical machine for 720p 30 fps video? 5-6, not more. For 640x480 at 15 fps? Maybe 20 encodings.
In my opinion, the encoding layer and networking layer could be separated in WebRTC design, and even getUserMedia could be extended to getEncodedUserMedia, so that you could send the same encoded content to multiple peers.
So that's the real practical reason people use an SFU for multi-peer WebRTC.
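To make that concrete, here is a minimal, hypothetical mesh sketch for the browser side: one RTCPeerConnection per remote participant, all fed the same camera track. The `signal` callback is a placeholder for whatever signaling channel is used (Firebase writes, a WebSocket, etc.), and answer/remote-candidate handling is omitted. The point is that even though the track object is shared, each connection still runs its own encoder.

```typescript
// Hypothetical mesh setup: N peer connections, one per remote participant.
async function startMesh(
  peerIds: string[],
  signal: (to: string, data: unknown) => void
): Promise<Map<string, RTCPeerConnection>> {
  const local = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const connections = new Map<string, RTCPeerConnection>();

  for (const id of peerIds) {
    const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

    // The same MediaStreamTrack is attached to every connection, but the
    // browser still runs a separate encoder per connection, adapting each
    // one to that peer's bandwidth estimate.
    local.getTracks().forEach(track => pc.addTrack(track, local));

    pc.onicecandidate = e => { if (e.candidate) signal(id, { candidate: e.candidate }); };
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signal(id, { sdp: pc.localDescription }); // answer handling omitted for brevity

    connections.set(id, pc);
  }
  return connections;
}
```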
If you want to run a conference with 25 people all sending their video, then a regular WebRTC mesh setup will not work unless you massively lower your video quality. The reason is that every participant would need to send 24 separate streams, one to every other client. So if your stream is 128 KB/s, you would need 3 MB/s of upload speed available, which isn't always the case, and you would be downloading roughly the same amount.
The problem is that this isn't scalable. That's why you need an SFU: you only send a single stream upstream and receive the others from the server. The other positive thing about SFUs is that you can use simulcast, which adapts the quality of the streams you receive to your network speed.
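As a rough sanity check on those numbers, here is the back-of-envelope upload math for mesh vs. SFU (the 128 KB/s per-stream figure is just the assumption used above):

```typescript
// Upload bandwidth needed by one participant in a 25-person call.
const participants = 25;
const perStreamKBps = 128; // assumed outgoing bitrate per stream

const meshUploadKBps = (participants - 1) * perStreamKBps; // 24 * 128 = 3072 KB/s (~3 MB/s)
const sfuUploadKBps = perStreamKBps;                       // a single stream to the SFU

console.log(`mesh upload: ${meshUploadKBps} KB/s, SFU upload: ${sfuUploadKBps} KB/s`);
```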
You can use the Janus gateway or mediasoup, for example. Here is an already set up, scalable mediasoup video conferencing application: github repository
I started a GCP free trial and migrated two WordPress sites with almost zero traffic to test the service. Here's what I'm running for each of the two sites:
VM: g1-small (1 vCPU, 1.7 GB memory), 10 GB SSD
Package: bitnami-wordpress-5-2-4-1-linux-debian-9-x86-64
After about 1-2 months it seems to show that $46 has been deducted from the $300 free trial credit. Is this accurate / typical? Am I looking at paying $20+ per month to process perhaps 100 hits to the site from myself, plus any normal bot crawling that happens? This is roughly 10 times more expensive than a shared hosting multi domain account available from other web hosts.
Overall, how can I tell how much it will actually cost, when it looks to me that GCP reports about $2 of resource consumption per month, a $2 credit, and somehow a $254 balance from $300? Also GCP says average monthly cost is 17 cents on one of the billing pages, which is different from the $2 and the $46 figures. I can't find any entry that would explain all the other resources that were paid/credited.
Does anyone else have experience with how much it should cost to run the Bitnami WordPress package provided on the GCP Marketplace?
Current Usage:
Running 2x g1-small (1 vCPU, 1.7 GB memory) instances with 10 GB SSD 24x7 should have deducted around ~$26* USD from your free trial.
I presume you need MySQL, which would cost you a minimum of $7.67* per instance.
Assuming you used 2x MySQL instances, that would have cost you ~$15.
So $26 compute + $15 DB + $5 (network, DNS, etc.) comes to about $46; a rough sketch of this arithmetic follows after the note below. Please note that the price would go up if you used compute for less than a full month.
* You can get a sustained use discount if you run the instance for a full month; if you plan to use it even longer, you can get a bigger discount with committed use.
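As a rough reconstruction of that total (the per-item figures are approximations taken from the breakdown above, not from an actual GCP invoice):

```typescript
// Rough reconstruction of the ~$46 monthly estimate (all figures approximate).
const computePerG1Small = 13;   // ~$13/month per g1-small, so 2x ~= $26
const mysqlPerInstance = 7.67;  // minimum MySQL estimate used above
const otherCosts = 5;           // network egress, DNS, etc. (assumption)

const monthlyTotal = 2 * computePerG1Small + 2 * mysqlPerInstance + otherCosts;
console.log(`estimated monthly total: ~$${monthlyTotal.toFixed(2)}`); // ~$46.34
```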
Optimise for Cost
Have a look at the cost calculator link to plan your usage.
https://cloud.google.com/products/calculator/
Compute and relational storage are the most cost-prohibitive factors for you. If you are tech-savvy and open to experimentation, you can try Cloud Run, which should reduce your cost significantly but might add some latency when serving requests. The link below shows how to set this up:
https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Currently there is no way around using a database. A serverless database could help bring down your cost, but GCP does not offer this at the moment. AWS has such an offering, so GCP might introduce one in the future.
Scalability
When your user base grows, you might want to use a CDN, which would help with your network cost.
Saving images to Cloud Storage would also help bring down your cost, as disks are more expensive, less scalable, and require more maintenance.
Hope this helps.
I am currently looking at 1 Gb/s download and 35 MB/s upload over coax. We are looking at setting up some VOIP services, etc., which will be impacted by such a low upload speed. How do I determine what the maximum bandwidth usage for the day was? I'm aware that netstat, netsh, and Network Monitor provide information about individual processes, but I cannot find the data I need to determine whether upgrading to fiber would be marginally beneficial or entirely necessary. Any help would be greatly appreciated.
Netstat, netsh, performance monitor, network monitor
I can obtain information about any particular connection, but I need something more akin to overall statistics so that I can make an informed decision regarding our network limitations (fiber vs. coax). Do we need an additional 200 Mb/s? etc.
Typical VOIP services only require a few kilobytes per second of upload bandwidth per phone call. Do you anticipate having many (hundreds of) concurrent phone calls, which would add up to 35 MBytes/s (or, more likely, 35 Mbits/s)? As an aside, network bandwidth is typically expressed with a big M and a little b (e.g. Mb) to denote megabits per second.
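For illustration, here is the rough per-call arithmetic; the ~87 kbit/s figure assumes uncompressed G.711 with RTP/UDP/IP overhead (a common planning number), so a compressed codec would fit even more calls:

```typescript
// How many concurrent VOIP calls fit in the stated upload capacity.
const perCallKbps = 87;        // assumption: G.711 plus packet overhead, one direction
const uploadKbps = 35 * 1000;  // 35 Mbit/s, the more likely reading of the question

const concurrentCalls = Math.floor(uploadKbps / perCallKbps);
console.log(`~${concurrentCalls} concurrent calls fit in 35 Mbit/s upstream`); // ~402
```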
I would suggest first using a utility like SolarWinds RealTime Bandwidth Monitor to look at your router/gateway's utilization.
I'm wondering how media servers work. Do they require a lot of bandwidth if you are doing, let's say, live streaming like Ustream and there are 10k people watching? Do you need a lot of bandwidth, or is it something like P2P?
I'm more on the client development side with Flash than server admin, but more than likely, yes, you would need a lot of bandwidth to have 10k people watching. The good thing is that with streaming video, you're only downloading the data you watch (unlike progressive download). More of an issue would be the number of concurrent connections you can handle per FMS install; 10k would probably require a lot more than one server running FMS apps. I'm currently working on a project where we are streaming from two installs (beyond the installations of FMS, I'm not sure how they load-balanced it) with the hope of supporting something like 2k concurrent connections. I found this article to be pretty helpful (users + bandwidth stats):
http://www.adobe.com/devnet/flashmediaserver/articles/performance_tuning_webcasts.html
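For a rough sense of the bandwidth side, here is the back-of-envelope math; the 500 kbit/s bitrate is an assumed single rendition, not a figure from the article:

```typescript
// Aggregate egress for a live stream served directly (no P2P, no CDN).
const viewers = 10_000;
const bitrateKbps = 500; // assumption: one 500 kbit/s rendition

const totalMbps = (viewers * bitrateKbps) / 1000;
console.log(`~${totalMbps} Mbit/s (~5 Gbit/s) of egress for ${viewers} viewers`);
```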
The part where "code" meshes with server administration can get pretty daunting (if you ask me)... and every client wants "YouTube but with X feature." At $1K a license plus bandwidth, this can get super pricey.
Depending on your needs, you may want to use a third-party FMS company to handle your streaming (especially if it's just for a single event; you can get 'per-event' pricing). Also, I recently used the justin.tv API to create a streaming video feed in Flex. It was pretty painless and all the bandwidth is on them :)
The good part is that once FMS is running, it's super easy to develop with in ActionScript :)
I am studying various ASP.NET deployment approaches, and I have a basic question: is there any rule of thumb for defining the environment? What would be called a 'good' setup if I have to support 1000 concurrent users (requests)?
I understand that there are many factors, such as how the application is designed. But assuming everything else is great, what configuration should I look for: which processor, how much RAM, etc.?
Also, how many concurrent users should the configuration below be able to support?
CPU: Dual 3.40 GHz Intel Xeon (Hyper-Threaded)
Memory : 3GB
OS: Windows Server 2003 SP2
Thanks for the help.
Having been on both sides of the equation (web developer and hardware engineer), my current opinion is that the answer involves both of those sides as well.
Your hardware needs to be not only sufficient for general usage, but it also has to cope with reasonable unexpected peaks and failures - which means that it needs to be redundant, and in excess of your capacity planning.
Your software needs to be designed so it's easily made redundant - there's no point in speccing a tiered hardware architecture (now or for future planning) if the software is going to require a significant amount of changes to handle it.
Your software also needs to be designed so that sudden, unexpected peaks in resource usage don't become a regular occurrence without an external reason (e.g. a marketing campaign).
I know that you say you understand the non-hardware factors, but the real answer to your question is that there is no real way to answer it without knowing the other factors - each situation and circumstance is unique, and requires a unique solution.
However, in an effort to add generalised recommendations, try these:
CPU - choose something with a lot of cache, and individual cache per core as well. This will do wonders to speed up the system. I typically go for dual-core, dual-processor at a minimum (for a total of 4 cores on two separate physical CPUs). Processor speed ratings don't really matter as much as you think these days.
Memory - fast memory, a minimum of 8 GB of it. Use the smallest DIMMs possible for the server.
Hard disk - SAS 15K RPM at a minimum, RAID 6 for the data partition on one controller, RAID 1 or 6 for the system partition on another controller. Choose a good quality controller backed by a good support or warranty package - your controller is no good if it dies in three years' time and you can't get a replacement.
But above all, don't just install the OS and app and let it be: profile the setup as much as possible, and don't be afraid of making changes to optimise for the individual setup (within reason). Move your ASP.NET temporary files to a fast disk (or a RAM disk - if they are going to be rebuilt anyway, there's no point worrying about losing them). Move the database to a second server, with a crossover 1 Gbit link between the two. Turn off disk maintenance in the OS, and turn off services you do not need.
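If you want a starting point for the numbers rather than the hardware, a back-of-envelope estimate with Little's Law can help; the think time and response time below are pure assumptions that you would replace with values measured during profiling:

```typescript
// Illustrative capacity estimate (not a benchmark). Little's Law:
// concurrency = arrival rate * time in system.
const concurrentUsers = 1000;  // from the question
const thinkTimeSec = 10;       // assumption: average pause between a user's requests
const responseTimeSec = 0.2;   // assumption: average server time per request

// Sustained request rate the server(s) must handle:
const requestsPerSecond = concurrentUsers / (thinkTimeSec + responseTimeSec);

// Requests actually executing at any instant (a rough input for CPU/thread sizing):
const inFlightRequests = requestsPerSecond * responseTimeSec;

console.log(`~${requestsPerSecond.toFixed(0)} req/s, ~${inFlightRequests.toFixed(1)} in flight`);
// => ~98 req/s, ~19.6 executing concurrently
```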
Good luck!