I'm trying to log the number of file descriptors nginx is using. The docs suggest I can access this data with nginx.workers.fds_count, but that doesn't return any useful data. How do I access it?
When an error occurs, I want to see the data payload uploaded by users.
I wasn't able to find the POST/PUT/PATCH data in the APM report (Kibana).
Is there an option I need to turn on for this?
Most agents have a CaptureBody config option; with that you can capture the request body. It's off by default - you can set it to errors.
I linked the Java docs; you should be able to find the same config for (I think) all other agents.
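For example, for the Java agent, this is a minimal sketch (the exact option name varies slightly between agents, so check your agent's configuration docs):

    # elasticapm.properties (or -Delastic.apm.capture_body=errors on the JVM command line)
    capture_body=errors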
I am looking into GoReplay as a way to reproduce part of yesterday's production traffic.
The traffic I want to reproduce has been recorded with nginx, and I can save it as a .log or .csv file.
From what I can tell from the replay HTTP traffic docs, it is possible to reproduce traffic using a command like:
sudo gor --input-file request.gor --output-http="http://localhost:3001"
but this requires a .gor file.
My question is: is traffic reproduction with GoReplay restricted to .gor files, or could I use nginx .log files instead?
If this is not possible, and given that I don't have a .gor file describing yesterday's requests, would you recommend writing a file-conversion script to turn the log files into .gor files, or can you recommend a better approach?
After asking this question on the GoReplay GitHub page, I got the answer that:
* there is no way to reproduce traffic directly from logs;
* you must use .gor files to recreate the traffic;
Thus, the only way to replay that traffic is to create a .log-to-.gor file converter.
link to official answer: https://github.com/buger/goreplay/issues/668
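If you do end up writing that converter, here is a minimal sketch in Python. Assumptions to verify: your access log uses the default nginx "combined" format, and the .gor layout is a header line of the form "1 <id> <timestamp-ns>" followed by the raw HTTP request, with payloads separated by the 🐵🙈🙉 delimiter (check this against your goreplay version). Also note that access logs contain neither request bodies nor most headers, so only simple GET traffic can be reconstructed this way; the Host value below is a placeholder.

    import re
    import time
    import uuid

    # default nginx "combined" format:
    # $remote_addr - $remote_user [$time_local] "$request" $status
    #   $body_bytes_sent "$http_referer" "$http_user_agent"
    LINE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
        r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
        r'\d+ \d+ "[^"]*" "(?P<ua>[^"]*)"')

    SEPARATOR = "\n🐵🙈🙉\n"  # goreplay's payload delimiter

    def to_gor(line, host="upstream.example.com"):
        m = LINE.match(line)
        if not m or m.group("method") != "GET":  # bodies aren't logged, skip non-GET
            return None
        header = "1 %s %d" % (uuid.uuid4().hex, int(time.time() * 1e9))
        request = ("%s %s %s\r\nHost: %s\r\nUser-Agent: %s\r\n\r\n"
                   % (m.group("method"), m.group("path"), m.group("proto"),
                      host, m.group("ua")))
        return header + "\n" + request

    with open("access.log") as src, open("requests.gor", "w") as dst:
        dst.write(SEPARATOR.join(p for p in map(to_gor, src) if p))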
I've found that I can use another package to replay the logs I have, as-is, locally. At the same time, you can have goreplay listen for traffic and save what it captures to .gor files. Then you can run goreplay with those newly created files, updating the domain and whatever else you need.
Let me know if you want me to provide a step-by-step.
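Roughly, that capture-then-replay flow looks like this (the port and file names are just examples):

    # capture live traffic on port 80 and save it as a .gor file
    sudo gor --input-raw :80 --output-file=requests.gor

    # later, replay the capture against another host
    gor --input-file requests.gor --output-http="http://localhost:3001"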
Please could you help me:
Can I make NGINX write its access log directly to an SQLite table with the same fields as access.log?
I know I can try to do it with Lua, but I do not know how to make nginx trigger a Lua script for every record that would go into the access.log file.
You would use the log_by_lua phase to write access logs, as it runs last and gives you access to variables like $upstream_response_time, etc.
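For example, a minimal sketch, assuming OpenResty (or nginx built with the lua module) plus the lsqlite3 Lua binding, and a hypothetical access_log table. Be aware that blocking file I/O in the log phase ties up the worker, so under real load you would buffer rows (e.g. in a lua_shared_dict) and flush them from an ngx.timer instead of opening the database on every request:

    location / {
        log_by_lua_block {
            -- hypothetical schema:
            -- CREATE TABLE access_log(remote_addr, request, status, bytes_sent, user_agent);
            local sqlite3 = require("lsqlite3")
            local db = sqlite3.open("/var/log/nginx/access.db")
            local stmt = db:prepare("INSERT INTO access_log VALUES (?, ?, ?, ?, ?)")
            stmt:bind_values(ngx.var.remote_addr, ngx.var.request,
                             ngx.var.status, ngx.var.body_bytes_sent,
                             ngx.var.http_user_agent)
            stmt:step()
            stmt:finalize()
            db:close()
        }
    }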
I can see from querying our Elasticsearch nodes that they contain internal statistics showing, for example, disk, memory, and CPU usage (via the GET _nodes/stats API).
Is there any way to access these in Kibana 4?
Not directly, as Elasticsearch doesn't natively push its internal statistics to an index. However, you could easily set something like this up on a *nix box:
1. Poll your Elasticsearch box via REST periodically (say, once a minute). The /_status or /_cluster/health endpoints probably contain what you're after.
2. Pipe these to a log file in a simple CSV format along with a timestamp (see the sketch below).
3. Point Logstash at these log files and forward the output to your Elasticsearch box.
4. Graph your data.
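A minimal sketch of steps 1 and 2 in Python (the node address, the chosen fields, and the file path are assumptions; adjust them for your cluster and run it from cron):

    import csv
    import json
    import time
    import urllib.request

    ES = "http://localhost:9200"  # your Elasticsearch node

    # step 1: poll the cluster (e.g. once a minute from cron)
    with urllib.request.urlopen(ES + "/_cluster/health") as resp:
        health = json.load(resp)

    # step 2: append a timestamped CSV row for Logstash to pick up
    with open("/var/log/es-stats.csv", "a", newline="") as f:
        csv.writer(f).writerow([int(time.time()), health["status"],
                                health["number_of_nodes"],
                                health["active_shards"]])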
I would like to implement the following functionality:
1. downloading all the files from a specified remote directory to a local directory;
2. after downloading all the files, I need a list file which contains all the downloaded files.
(I only want this list file when all the files have been downloaded successfully.)
Point 1:
Let's say we have around 10 files in the remote directory.
I can use an int-sftp:inbound-channel-adapter component to download all the files but 10 poll cycles are needed to download all of them since the inbound component is only able to download 1 file per poll request.
Spring Integration creates 10 File messages one by one.
Questions:
How can I identify the last file (message) received from the FTP server?
I don't want to let users access the list file until all the files from the FTP server have been successfully received.
How can I achieve this?
I can write file names into a list file using the int-file:outbound-channel-adapter, but users could read temporary information from that file before the download process is finished.
How can I trigger an event once all the files on the FTP server have been downloaded?
Thanks for your advice,
Ferenc
First of all, this isn't correct:
the inbound component is only able to download 1 file per poll request
You can configure it to download all remaining files during a single poll: max-messages-per-poll="-1". Anyway, it is the default option on <poller>.
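For example (a sketch; the channel, directories, and session factory are placeholders):

    <int-sftp:inbound-channel-adapter id="sftpInbound"
            channel="files"
            session-factory="sftpSessionFactory"
            remote-directory="/remote/dir"
            local-directory="/local/dir"
            auto-create-local-directory="true">
        <int:poller fixed-rate="5000" max-messages-per-poll="-1"/>
    </int-sftp:inbound-channel-adapter>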
That said, if you really do want to download one file per poll, you can go ahead with that requirement.
Since any messaging system tries to follow a stateless paradigm, it is normal that one message doesn't know anything about another, and so messages don't impact each other. The asynchronous scenario is the best fit for messaging; with it, the second message may even be processed more quickly than the first one.
Your requirement is interesting enough, and I won't dare to call it strange, because any business case may have its place.
Since you are going to process several downloaded files as one group, you will need some marker on the remote server. It could be a time frame which you extract from the file timestamps, or a marker file stored on the remote server to indicate that a set of files is finished, so that your application can process their local versions. It would be great if that marker file contained the list of file names in the group.
Otherwise we don't have any hook by which to group the messages for those files.
On the other hand, you can consider using <int-sftp:outbound-gateway> with the MGET command: http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/sftp.html#sftp-outbound-gateway
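That fits your case well: with command="mget" the gateway downloads the whole remote directory in one operation and replies with a single message whose payload is the list of downloaded files, so that one message is your "all files are downloaded" trigger and can drive the writing of the list file. A sketch (all names are placeholders):

    <int-sftp:outbound-gateway id="mgetGateway"
            session-factory="sftpSessionFactory"
            request-channel="downloadRequests"
            reply-channel="downloadedFiles"
            command="mget"
            expression="payload"
            local-directory="/local/dir"/>

Send a message whose payload is the remote path pattern (e.g. "/remote/dir/*") to downloadRequests; the reply on downloadedFiles carries the List<File> of everything that was fetched.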