I am new to OpenCPU. Looking at the documentation at https://www.opencpu.org/, it appears that OpenCPU can process HTTP requests concurrently. Is that right? I ask because R itself is single-threaded, so how many requests can it actually process concurrently?
Thanks.
If you run the Apache-based opencpu-server, there is no fixed limit on the number of concurrent requests; each request is handled by one of Apache's worker processes, and you can tweak the number of workers in the prefork settings.
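For reference, on a Debian-style Apache install those knobs live in the prefork MPM configuration; the file path and numbers below are an illustrative sketch, not OpenCPU defaults:

    # /etc/apache2/mods-available/mpm_prefork.conf (path varies by distro)
    <IfModule mpm_prefork_module>
        StartServers            5      # worker processes started at boot
        MinSpareServers         5      # idle workers kept in reserve
        MaxSpareServers        10
        MaxRequestWorkers     150      # upper bound on concurrent requests
        MaxConnectionsPerChild  0      # 0 = never recycle a worker
    </IfModule>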
The local single-user server in R, on the other hand, uses only a single R process. You can still make concurrent requests, but they will automatically be queued and processed one after the other.
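You can observe the queuing by timing two simultaneous calls against the single-user server. A minimal TypeScript sketch, assuming the local server runs on its default port 5656 and Node 18+ with built-in fetch (run as an ES module for top-level await):

    // Fire two requests at once; with a single R process the second
    // response only arrives after the first call has finished.
    const url = "http://localhost:5656/ocpu/library/base/R/Sys.sleep";

    async function call(id: number): Promise<void> {
      const start = Date.now();
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ time: 2 }), // ask R to sleep for 2 seconds
      });
      console.log(`request ${id}: HTTP ${res.status} after ${Date.now() - start} ms`);
    }

    // Both start together; expect the completions roughly 2 seconds apart.
    await Promise.all([call(1), call(2)]);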
Either way, you shouldn't need to worry about this in the client.
Related
I have a public website that processes data and runs different schedulers on that data, e.g. to save data into the DB and send notifications to users.
All these processes run independently through the schedulers, and they run concurrently.
I tried to run these schedulers over HTTP, but the problem was running out of available TCP ports on the system, because I have to process a huge amount of data, which means running millions of schedulers at certain times. I have implemented rate limiting as well.
I have also tried to run the schedulers using curl, but it starts giving the error
too many open files
even though I have increased the open-file limit on my system to 1 million. curl also occupies too many resources, so I am avoiding it.
For more clarification on the data: let's say 10,000 schedulers are running concurrently, and inside each of these schedulers, 10-20 more schedulers run in parallel to send notifications. I am thinking of running these internal schedulers by some method other than HTTP or curl.
Note: I have to pass different data to each scheduler.
I am thinking of running these schedulers internally, inside the application itself; can I do that? (A sketch of one such approach follows below.)
Is there a better solution for running the schedulers that does not use HTTP or curl?
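One in-process pattern the question hints at, sketched in TypeScript under the assumption that the per-scheduler work can be expressed as a function call (the Job shape and sendNotification are placeholders): a bounded worker pool draining an in-memory queue, which consumes no TCP ports or file handles for internal calls.

    type Job = { userId: string; payload: unknown }; // placeholder job shape

    async function sendNotification(job: Job): Promise<void> {
      // placeholder for the real per-job work (DB write, push message, ...)
    }

    // A fixed number of workers drain a shared queue, so concurrency
    // stays bounded no matter how many jobs are queued.
    async function runPool(jobs: Job[], workers: number): Promise<void> {
      const queue = [...jobs];
      async function worker(): Promise<void> {
        for (let job = queue.shift(); job !== undefined; job = queue.shift()) {
          await sendNotification(job);
        }
      }
      await Promise.all(Array.from({ length: workers }, worker));
    }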
I want to create a load test for a feature of my app, which uses Google App Engine and a VM. Users send HTTP requests to the App Engine service, and it is realistic for it to receive thousands of requests within a few seconds. So I want to create a load test that sends 20,000-50,000 requests in a timeframe of 1-10 seconds.
How would you solve this problem?
I started by trying Google Cloud Tasks, because it seems perfect for this: you schedule HTTP requests for a specific point in time. The docs say there is a limit of 500 tasks per second per queue, and that if you need more tasks per second, you can split the tasks across multiple queues. I did this, but Google Cloud Tasks does not execute all the scheduled tasks at the given time; one queue needs 2-5 minutes to execute 500 requests that are all scheduled for the same second.
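For context, the multi-queue fan-out looks roughly like this; a sketch using the @google-cloud/tasks Node client, where the project, location, and queue names are placeholders and the queues are assumed to already exist:

    import { CloudTasksClient } from "@google-cloud/tasks";

    const client = new CloudTasksClient();
    const project = "my-project";                   // placeholder
    const location = "europe-west1";                // placeholder
    const queues = ["load-0", "load-1", "load-2"];  // pre-created queues

    // Spread tasks round-robin over queues to stay under the
    // 500 dispatches/second/queue limit.
    async function schedule(urls: string[], runAtSeconds: number): Promise<void> {
      await Promise.all(urls.map((url, i) => {
        const parent = client.queuePath(project, location, queues[i % queues.length]);
        return client.createTask({
          parent,
          task: {
            httpRequest: { httpMethod: "POST", url },
            scheduleTime: { seconds: runAtSeconds }, // epoch seconds
          },
        });
      }));
    }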
I also tried a TypeScript script that fires asynchronous node-fetch requests, but 5,000 requests take 77 seconds on my MacBook.
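A script of that kind looks roughly like the sketch below (the target URL, batch size, and total are placeholders); a keep-alive agent and bounded batches usually help throughput compared to opening one connection per request:

    import fetch from "node-fetch";
    import { Agent } from "http";

    const url = "http://example.com/endpoint";    // placeholder target
    const agent = new Agent({ keepAlive: true }); // reuse TCP connections

    // Send `total` GET requests in batches of `batch` concurrent calls.
    async function blast(total: number, batch: number): Promise<void> {
      for (let sent = 0; sent < total; sent += batch) {
        await Promise.all(
          Array.from({ length: Math.min(batch, total - sent) }, () =>
            fetch(url, { agent }).then((res) => res.status)
          )
        );
      }
    }

    await blast(5000, 500); // run as an ES module for top-level await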
I don't think you can get 50,000 HTTP requests "in a few seconds" from "your macbook"; it's better to consider a dedicated load-testing tool (which can be deployed onto a GCP virtual machine in order to minimize network latency and traffic costs).
The tool choice is up to you: either you need a machine type powerful enough to generate 50k requests "in a few seconds" from a single virtual machine, or the tool needs to support running in clustered mode so you can spin up several machines that send their requests together at the same moment in time.
Given that you mention TypeScript, you might want to try the k6 tool (it doesn't scale across machines, though), or check out Open Source Load Testing Tools: Which One Should You Use? to see what the other options are. None of them provides a JavaScript API, but several don't require programming knowledge at all.
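A minimal k6 script for this kind of scenario might look like the following, where the target URL, virtual-user count, and duration are placeholders to adjust:

    // save as loadtest.js and run with: k6 run loadtest.js
    import http from "k6/http";

    export const options = {
      vus: 1000,       // concurrent virtual users (placeholder)
      duration: "10s", // test length (placeholder)
    };

    export default function () {
      http.get("https://target.example.com/endpoint"); // placeholder URL
    }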
A tool you could consider using is siege.
siege is Linux-based, and running it from inside GCP avoids the additional cost of sending test traffic from a system outside GCP.
You could deploy siege on a relatively large machine, or on a few machines, inside GCP.
It is fairly simple to set up, but since you mention that you need 20-50k requests in a span of a few seconds, note that siege by default caps you at 255 concurrent users. You can raise this limit, though, so it can fit your needs.
You would then need to experiment with how many connections a machine can establish, since each machine has a limit based on CPU, memory, and the number of network sockets. Keep increasing the -c number until the machine reports something like "Error: system resources exhausted", and see what your virtual machine on GCP can handle.
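For example, something along these lines (the numbers and URL are illustrative; raise the limit setting in ~/.siege/siege.conf first if you go above its default cap):

    siege -b -c 1000 -t 10S https://your-app.example.com/endpoint

Here -b runs in benchmark mode (no delay between requests), -c sets the number of concurrent users, and -t the test duration.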
I'm trying to understand servlets and the advantages they offer over CGI. It was mentioned that with CGI a new process is started for every request, which is slow compared to a servlet. Can someone explain what exactly a process is here, and how a servlet is beneficial compared to CGI?
A CGI program can be thought of as a normal executable: it's a program that is run, does something, then ends, like a DOS or shell command. The issue is that there is a small amount of overhead in starting such an executable, with the operating system allocating memory, loading the program into memory, running it, then deallocating everything. If you're running a website with many hundreds of requests per second, this overhead becomes significant, and potentially many copies of the CGI program end up in memory when many concurrent HTTP requests hit the server.
Servlets, on the other hand, have their resources allocated just once, for one single instance held in memory. This single instance can process many HTTP requests concurrently, sharing its allocated resources between all requests. This can be an issue: instance and static variables may be corrupted if two requests access them at the same time. However, the advantages in efficiency and speed far outweigh this.
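Servlet code itself is Java, but the shared-state hazard is easy to sketch in any language. In this TypeScript analogue, two overlapping requests both read the shared counter before either writes it back, so one update is lost; servlet instance variables fail the same way under concurrent access unless they are synchronized or avoided:

    // Shared mutable state, playing the role of a servlet instance variable.
    let hits = 0;

    async function handleRequest(): Promise<void> {
      const current = hits;                        // read shared state
      await new Promise((r) => setTimeout(r, 10)); // do some "work"
      hits = current + 1;                          // write it back (racy)
    }

    // Two concurrent "requests": both read hits = 0, so the final
    // count is 1 instead of 2 - a lost update.
    await Promise.all([handleRequest(), handleRequest()]);
    console.log(hits); // prints 1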
I understand that R is single-threaded and does not support concurrent requests. The same issue arises when we use rplumber:
R is a single-threaded programming language, meaning that it can only do one task at a time. This is still true when serving APIs using Plumber, so if you have a single endpoint that takes two seconds to generate a response, then every time that endpoint is requested, your R process will be unable to respond to any additional incoming requests for those two seconds.
What about rapache? Does it support concurrent requests? Can I use rapache as a server for rplumber or jug?
How do you test the performance of an HTTP server that serves and accepts only JSON requests (POST and GET)? I'm new to web testing, so tell me if I'm going about it the wrong way.
I want to test if:
the server is capable of handling hundreds of simultaneous connections;
the server is capable of serving thousands of requests per second;
the server does not crash or get stuck when the number of requests exceeds its capacity, and continues to run normally when the number of requests drops back below average.
One way is to write some logic that repeats certain actions per run, and then run multiple instances of it in parallel; a sketch of this idea follows below.
PS: Ideally, the tool/method should support compression like gzip as an option.
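As a concrete starting point, here is a hedged TypeScript sketch of that repeat-and-run-in-parallel idea: it fires batches of concurrent JSON POSTs, requests gzip-compressed responses, and counts how many succeed per round. The endpoint URL, payload, and batch sizes are placeholders (Node 18+ with built-in fetch, run as an ES module):

    const url = "http://localhost:8080/api";     // placeholder endpoint
    const body = JSON.stringify({ ping: true }); // placeholder payload

    // One round = `concurrency` parallel POSTs; returns the success count.
    async function round(concurrency: number): Promise<number> {
      const results = await Promise.allSettled(
        Array.from({ length: concurrency }, () =>
          fetch(url, {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              "Accept-Encoding": "gzip", // exercise compression support
            },
            body,
          })
        )
      );
      return results.filter((r) => r.status === "fulfilled").length;
    }

    for (let i = 0; i < 10; i++) {
      console.log(`round ${i}: ${await round(200)}/200 succeeded`);
    }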
You can try JMeter and its HTTPSampler.
About gzip: I've never used it in JMeter myself, but it seems JMeter can handle it:
How to get JMeter to request gzipped content?
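Once the test plan is built in the GUI, you can run it non-interactively from the command line (the plan and log file names here are placeholders):

    jmeter -n -t testplan.jmx -l results.jtl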
Apache Bench (ab) is a command-line tool that's great for these kinds of things: http://en.wikipedia.org/wiki/ApacheBench
ab -n 100 -c 10 http://www.yahoo.com/
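For a JSON POST with gzip negotiation, something along these lines should work (the payload file and URL are placeholders):

    ab -n 1000 -c 50 -p payload.json -T application/json -H "Accept-Encoding: gzip" http://localhost:8080/api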
If you are new to web testing, then there are a lot of factors to take into account. At the most basic level, you want to do the things you have outlined.
Beyond this, you need to think about how badly behaving clients might impact your service, e.g. keeping connections alive or sending malformed requests. These may translate into exceptions on the server, which might in turn have additional impact (due to logging or slower execution). This means you have to think of ways to break the service, and to monitor events that have an impact at higher scales.
Microsoft has a fairly good introduction to performance testing for web applications.