Parse.com has a very useful tool that graphs the number of requests per second made to your application over a given time period. For an Nginx configuration, is there any tool that does the same thing?
Using Nginx Plus would be another option, besides parsing the logs.
You can use the ngx_http_stub_status module (http://nginx.org/en/docs/http/ngx_http_stub_status_module.html) to export basic information, combined with collectd's nginx plugin (https://collectd.org/wiki/index.php/Plugin:nginx).
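To give a feel for how stub_status maps to requests per second: the module's third output line contains a cumulative request counter, so sampling it twice and dividing the delta by the interval gives the rate. Below is a minimal Python sketch of that idea; the status URL is an assumption, adjust it to wherever your config exposes stub_status.
#!/usr/bin/env python3
# Rough requests-per-second estimate from nginx stub_status.
import time
import urllib.request

STATUS_URL = "http://127.0.0.1/nginx_status"  # assumption: your stub_status location
INTERVAL = 10  # seconds between samples

def total_requests():
    body = urllib.request.urlopen(STATUS_URL).read().decode()
    # Third line of stub_status output: "<accepts> <handled> <requests>"
    accepts, handled, requests = body.splitlines()[2].split()
    return int(requests)

if __name__ == "__main__":
    previous = total_requests()
    while True:
        time.sleep(INTERVAL)
        current = total_requests()
        print(f"{(current - previous) / INTERVAL:.2f} req/s")
        previous = current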
There is an nginx web server that serves API calls from different User-Agents. I want to parse the nginx logs and collect statistics about API calls per User-Agent.
I'm going to write a Python script to parse nginx's access.log, along the lines of https://gist.github.com/sysdig-blog/22ef4c07714b1a34fe20dac11a80c4e2#file-prometheus-metrics-python-py
Is there a more suitable solution?
I highly discourage this approach.
Parsing logs is an old task, and there are many tools out there that are more than capable of doing this in an efficient way.
For me personally, I had success with Fluentd - Open Source Data Collector, but there are other tools, depending on your specific needs.
The community around a tool, i.e. the number and quality of its plugins/add-ons, is also relevant when choosing one.
So if googling fluentd prometheus turns up results on GitHub and from the developers themselves, that may well be the right course of action.
When an application doesn't expose whitebox monitoring endpoints, parsing the logs is the only solution.
From there, you have multiple choices depending on the scale and the budget of your setup:
centralizing logs (in Elasticsearch, for example) using a sidecar like Filebeat to parse and ship them; you can then run queries to export statistics
log parsers that expose statistics: fluentd, telegraf and mtail are good examples
regular execution of a script that dumps the data into a .prom file to be collected by the node exporter, which is also a cheap solution (a sketch follows after this answer)
Rolling your own script would be a last resort: if you need statistics you cannot get from off-the-shelf tools, or statistics that need context to be extracted. But it comes at the cost of handling painful scenarios; in your case, following the file when it rolls can be an issue.
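To make the third option above concrete, here is a minimal sketch that counts requests per User-Agent and dumps them into a .prom file. It assumes the default combined log format and a node_exporter textfile collector directory; the paths and metric name are illustrative, not prescriptive.
#!/usr/bin/env python3
# Count API calls per User-Agent in an nginx access.log (combined format)
# and write them where node_exporter's textfile collector can pick them up.
import re
from collections import Counter

ACCESS_LOG = "/var/log/nginx/access.log"  # assumption
PROM_FILE = "/var/lib/node_exporter/textfile_collector/nginx_api_calls.prom"  # assumption

# In the combined format the User-Agent is the last double-quoted field.
UA_RE = re.compile(r'"([^"]*)"\s*$')

counts = Counter()
with open(ACCESS_LOG) as log:
    for line in log:
        match = UA_RE.search(line)
        if match:
            counts[match.group(1)] += 1

with open(PROM_FILE, "w") as prom:
    prom.write("# HELP nginx_api_calls_total API calls per User-Agent since the last rotation\n")
    prom.write("# TYPE nginx_api_calls_total counter\n")
    for agent, count in counts.items():
        label = agent.replace("\\", "\\\\").replace('"', '\\"')  # escape for the Prometheus text format
        prom.write(f'nginx_api_calls_total{{user_agent="{label}"}} {count}\n')
Run it from cron; note that re-reading the whole file on every run is only cheap for modestly sized logs, which is exactly why the off-the-shelf tools above are usually worth it.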
Need some advice before starting to develop something. I have 15 WordPress websites on separate installs, and I have a remote server that collects data from them 24/7.
I've reached the point where I want the server to modify the websites based on the data it has calculated.
The questions are:
Should I allow the server to access the WP database remotely and modify things directly, without WordPress in the loop?
Or should I use the WP REST API and provide some secured routes that expose data, accept data, and make those changes?
My instinct is to use the WP API, but after all it is PHP (behind nginx/Apache), which has its limits (timeouts, for example), and I find it hard to run heavy, long-running processes on WordPress itself.
I can divide the tasks into different stages, for example:
fetching data (a simple GET)
doing the processing on the remote server
looping and pushing the modifications in small batches to another route
My concern is that this loop requires a perfect match between the remote server and the WP API, and any change or fix on the WP side means a plugin update on all the websites, which is not much fun.
I'd appreciate any ideas and suggestions on how to move forward.
"use WP REST API and supply some secured routes which provide data and accept data and make those changes", indeed.
I don't know why timeouts or other limits would cause a problem, but using the API is the best approach for this kind of case. You can avoid timeout problems with some adjustments on the web server side.
Or you can increase the memory and execution-time limits exclusively for requests coming from your main server.
For example:
// Raise PHP limits only for requests from the trusted main server
if ($_SERVER['REMOTE_ADDR'] === 'YOUR_MAIN_SERVER_IP') {
    ini_set('max_execution_time', 1000);
    ini_set('memory_limit', '1024M');
}
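To make the "loop and modify in small batches" idea from the question concrete, here is a rough sketch of what the remote server's side could look like. The route name, the use of a WordPress application password, and the payload shape are all assumptions for illustration; Python is used only because the remote server is not WordPress.
#!/usr/bin/env python3
# Push calculated changes to a hypothetical secured WP REST route in small batches,
# so every request stays short and no long-running PHP process is needed.
import requests

SITE = "https://example-site.com"
ROUTE = "/wp-json/my-sync/v1/apply"                # hypothetical custom route
AUTH = ("sync-bot", "application-password-here")   # assumption: WP application password
BATCH_SIZE = 25

# Hypothetical payload produced by the remote server's calculations.
changes = [{"post_id": i, "meta": {"score": 0.5}} for i in range(1, 101)]

for start in range(0, len(changes), BATCH_SIZE):
    batch = changes[start:start + BATCH_SIZE]
    resp = requests.post(SITE + ROUTE, json={"items": batch}, auth=AUTH, timeout=30)
    resp.raise_for_status()  # fail loudly so a broken contract between server and plugin is noticed
On the WordPress side the matching route would be registered with register_rest_route() and a permission callback; keeping the contract to one small "apply a batch" route also limits how often the plugin has to change across all sites.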
We have a plugin for WordPress that we've been using successfully for many customers: the plugin syncs stock numbers with our warehouse and exports orders to our warehouse.
We have recently had a client move to WP Engine, which seems to impose a hard 30-second limit on the length of a running request. Because sometimes we have many orders to export, the script simply hits a 502 Bad Gateway error.
According to WP-Engine documentation, this cannot be turned off on a client by client basis.
https://wpengine.com/support/troubleshooting-502-error/
My question is, what options do I have to get around a host's 30-second timeout limit? Calling set_time_limit() has no effect (as expected, since it is the web server killing the request, not PHP). The only thing I can think of is to make heavy modifications to the plugin so that it acts as an API and we simply pull the data from the client's system, but this is a last resort.
The long-process timeout is 60 seconds.
This cannot be turned off on shared plans, only on plans with dedicated servers. You will not be able to get around it by attempting to modify it, as it is enforced directly in Apache, outside of your particular install.
Your options are:
1. 'Chunk' the upload to be smaller
2. Upload the sql file to your sFTP _wpeprivate folder and have their support import it for you.
3. Optimize the import so the content is imported more efficiently.
I can see three options here.
1. Change the web host (the easy option).
2. Modify the plugin to process the sync in batches (a sketch follows below). However, with a hard script execution time limit this still won't give you a 100% guarantee: something may get lost in one or more batches and you won't even know.
3. Contact WP Engine and ask them to raise the limit for this particular client.
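As a sketch of option 2 (in Python purely for illustration; the real plugin would do this in PHP): each run exports a small batch well under the time limit and marks an order as exported only after the warehouse confirms it, so a killed request can never silently skip orders. The helpers are hypothetical stand-ins for the plugin's existing code.
# Hypothetical stand-ins for the plugin's existing data layer.
pending_orders = [{"id": i} for i in range(1, 101)]  # orders not yet exported
exported_ids = set()                                 # would be persisted between runs

def fetch_unexported(limit):
    return [o for o in pending_orders if o["id"] not in exported_ids][:limit]

def send_to_warehouse(order):
    return True  # pretend the warehouse accepted the order

BATCH_SIZE = 20

def run_one_batch():
    # One invocation, e.g. triggered by cron or WP-Cron every minute until done.
    batch = fetch_unexported(BATCH_SIZE)
    for order in batch:
        if send_to_warehouse(order):
            exported_ids.add(order["id"])  # record progress only after confirmation
    return bool(batch)  # True means another run is still needed

while run_one_batch():
    pass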
We are using FTE agents to transfer files, and we want to configure a scheduled transfer that only runs during certain hours of the day.
For example, if *.txt files are in the folder, transfer those files between 08:00 AM and 12:00 PM.
So far we have tried several design patterns to solve this (such as using Ant to determine the current hour, and using a trigger file that is different from the *.txt files), but without success.
Any suggestions?
I do not believe there is currently an option in WebSphere MQ FTE/MFT that provides exactly what you are looking for. From my understanding, what you are basically requesting is the Resource Monitor functionality (see the link below), but with an extra option to have the Resource Monitor active only between two times of day.
http://www-01.ibm.com/support/knowledgecenter/SSEP7X_7.0.4/com.ibm.wmqfte.doc/resource_monitoring.htm
Currently, a Resource Monitor is active when the FTE/MFT agent hosting the Resource Monitor is running.
You would need a system that requests these transfers manually at the times you want them to be processed.
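To illustrate that, here is a minimal sketch of a script a scheduler such as cron could run every few minutes: it only submits the transfer when the current hour is inside the window and matching files exist. The fteCreateTransfer arguments are a placeholder; substitute whatever command line your MFT setup already uses, and adjust the folder path.
#!/usr/bin/env python3
import datetime
import glob
import subprocess

WINDOW_START, WINDOW_END = 8, 12          # 08:00 to 12:00
SOURCE_GLOB = "/data/outbound/*.txt"      # assumption: the monitored folder

now = datetime.datetime.now()
if WINDOW_START <= now.hour < WINDOW_END and glob.glob(SOURCE_GLOB):
    # Placeholder invocation -- replace "..." with your real fteCreateTransfer options.
    subprocess.run(["fteCreateTransfer", "..."], check=True)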
Perhaps you would like to consider raising a Request For Enhancement (RFE) against the product:
https://www.ibm.com/developerworks/rfe/?BRAND_ID=181
Hi, we are developing a multi-tenant application in ASP.NET with a separate database for each tenant. One of the requirements is to monitor the bandwidth usage of each tenant.
I have searched but not found much help on the topic. We want to monitor exactly how much bandwidth each tenant is using, while each tenant can have its own top-level domain, a subdomain, or a combination of both.
So what are the available options? The ones I can think of are:
IIS log monitoring, i.e. a separate application that calculates the bandwidth for each tenant.
Logging each request and response for a tenant from within the application, then calculating the total bandwidth usage from that.
Using some third-party component, if one is available.
So which do you think would be the best approach? And is there any other way to do this?
OK, here is an idea (that I have not tested; I leave that to you).
In Global.asax,
use one of these handlers (find the one in which the final size is valid):
Application_PostRequestHandlerExecute
Application_ReleaseRequestState
and get the size you have sent with
Response.Filter.Length
Needless to say, you get the file name of the call using
HttpContext.Current.Request.Path
These handlers are called on every single request, so you can capture the size there and do the rest yourself.
I must note that you first need to test this idea to see if it works, and perhaps improve it. Keep in mind that if you compress pages on the server, this length will not be correct, and you may need to do the compression in Global.asax to get the actual length.
Hope this helps.
Well, since the IIS logs already contain the request size and response size, it doesn't seem like too much trouble to develop a small tool to parse them and calculate the total per day/week/month/whatever.
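For instance, assuming W3C extended logging with the cs-host, sc-bytes and cs-bytes fields enabled (the byte counters are not logged by default) and the host header identifying the tenant, a small script along these lines could total the traffic per tenant; the log path is a placeholder:
#!/usr/bin/env python3
# Sum bytes in + bytes out per tenant (host header) from an IIS W3C log file.
from collections import defaultdict

LOG_FILE = r"C:\inetpub\logs\LogFiles\W3SVC1\u_exYYMMDD.log"  # placeholder path

def to_int(value):
    try:
        return int(value)
    except (TypeError, ValueError):
        return 0  # field missing or logged as "-"

totals = defaultdict(int)
fields = []

with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]  # column names for the rows that follow
            continue
        if line.startswith("#") or not fields:
            continue
        row = dict(zip(fields, line.split()))
        host = row.get("cs-host", "unknown")
        totals[host] += to_int(row.get("sc-bytes")) + to_int(row.get("cs-bytes"))

for host, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{host}: {total / (1024 * 1024):.1f} MiB")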
Trying to segment traffic based on host is difficult in my experience. Instead, if you give each tenant their own IP(s) for the applications you should be able to find programs that will monitor bandwidth based on IP.
ADDITION: Is your IIS structure such that one website serves all tenants and, on login, the system forks to the proper database? If so, this may create problems with respect to versioning: all tenants' sites would have to have exactly the same schema and would all need to be updated simultaneously whenever an application update requires a schema change.
Another structure, which sounds like what you may have, is that each tenant has their own website, like so:
tenant1_site/appvirtualdir
tenant2_site/appvirtualdir
...
Where appvirtualdir points to the same physical path for all tenants' sites. When all clients are on the same application version, they are literally running the same code. If you have this scenario and some sort of authentication, then you will need one IP per tenant anyway because of SSL: SSL binds only to IP and port, unlike non-SSL, which binds to IP, port and host. If that is the case, then monitoring traffic based on IP will still be simpler and more accurate, as it can be done at the router or with a network monitor.