From a client machine running syslog-ng, I want to send multiple log files to a remote syslog-ng server.
Is there any macro that can tell me the source file name, so that on the remote server I can separate the logs out into separate log files?
Or, if not by filename, is there any other way I can separate the log messages?
Basically there should be a 1-to-1 mapping: logs from a.log on the client go to x.log on the remote server, b.log -> y.log, and so on.
I could solve it with the configuration below.
Client-side configuration in syslog-ng:
file("/var/log/shell.log" log_prefix("shell: "));
Server-side configuration in syslog-ng:
filter f_shell { match("shell" value("MSGHDR")); };
destination d_shell { file("/var/log/syslog-ng/shell.log"); };
log { source(demo_tls_src); filter(f_shell); destination(d_shell); flags(final); };
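The same pattern repeats once per file to get the 1-to-1 mapping from the question. A sketch for the second pair, b.log -> y.log (the "app" prefix string and the s_app/f_app/d_app names are arbitrary; note that log_prefix() is deprecated in newer syslog-ng releases in favor of program_override()):
Client side:
source s_app { file("/var/log/b.log" log_prefix("app: ")); };
Server side:
filter f_app { match("app" value("MSGHDR")); };
destination d_app { file("/var/log/syslog-ng/y.log"); };
log { source(demo_tls_src); filter(f_app); destination(d_app); flags(final); };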
I have enabled security for my Hadoop cluster and it works fine. But when I visit http://namenode_host:8020, it shows:
It looks like you are making an HTTP request to a Hadoop IPC port. This is not the correct port for the web interface on this daemon.
But I don't want such behavior, because the message is unencrypted and our company policy is to encrypt the data on all ports. 8020 is an RPC port of Hadoop. Any idea how to disable HTTP requests to the Hadoop RPC port?
Take a look at the Data Confidentiality section of the Apache docs; I think you are looking for RPC encryption.
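Concretely, RPC encryption is controlled by hadoop.rpc.protection in core-site.xml. A minimal sketch, assuming Kerberos/SASL authentication is already enabled on the cluster (the setting must match on servers and clients):
<property>
  <!-- "privacy" enables SASL encryption of RPC traffic; the other
       levels are "authentication" and "integrity" -->
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>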
8020 is the default port of the Hadoop file system; it listens for IPC calls from HDFS clients to the Hadoop NameNode for HDFS metadata operations. You should not try to access it directly through HTTP. If you want to work with your data on HDFS over the web, you have to use the WebHDFS API, which allows you to perform web requests against the data in the file system.
I just started to test Google Compute Engine. Now I'm trying to deploy my Go (golang) application on it, so that it can be reached from outside. I use Compute Engine instead of App Engine, since my application requires a MongoDB database.
I did the following:
create a Compute Engine instance
set up the firewall so that port 1234 is open, and make the IP static (see the gcloud sketch after this list)
install MongoDB
upload my application
start it
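For reference, opening the port can be done with a firewall rule like the following (a sketch; the rule name allow-1234 is arbitrary):
gcloud compute firewall-rules create allow-1234 --allow tcp:1234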
The application starts just fine, but I cannot reach it from outside if I open ip:1234 in my browser. I also tried to start it on port 80 as the root user, but this didn't work either.
The server is configured as following:
{
    "host": "localhost:1234",
    "dbhost": "localhost",
    "db": "dbname",
    "logfile": "log"
}
When I'm using an Apache server, it serves port 80 and the page is displayed... The OS is Ubuntu 14.04.
The main function simply adds some handlers to a mux and adds a FileServer for the public directory:
mux := http.NewServeMux()
mux.Handle("/", http.FileServer(http.Dir(public_dir)))
// [...]
if err := http.ListenAndServe(cfg.Host, mux); err != nil {
    panic(err)
}
So what's the issue here?
Try changing host from localhost to 0.0.0.0, because right now it is only listening for "inside" (loopback) requests.
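Applied to the configuration from the question, that is a one-line change (port kept as-is):
"host": "0.0.0.0:1234",
An empty host part, ":1234", would also make Go's net/http listen on all interfaces.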
I have a Kiwi syslog server running on a PC. When I add the IP address of this syslog server to my device, I am able to see the syslogs on the server side. If I add the host name of the syslog server to the syslog-ng.conf file of my device instead, I do not see my logs on the server side.
I added the lines below to the syslog-ng.conf file:
destination df_remote_1 { udp("target_host"); };
log { source(s_all); filter(f_remote); destination(df_remote_1); };
I have also added entries to the /etc/resolv.conf file, which holds the DNS configuration.
I'm able to ping the host name from my device, but I do not see the logs. Could someone please guide me on this?
Check whether the messages reach the target host using tcpdump/wireshark. Maybe there is a firewall rule somewhere that blocks the packets.
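For example, on the target host (a sketch, assuming syslog over UDP on the default port 514, as in the udp() destination above):
tcpdump -n udp port 514
If packets show up with the IP-based configuration but not with the hostname-based one, name resolution inside syslog-ng is the likely culprit.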
I'm trying to set up a central Logstash configuration. However, I would like to send my logs through syslog-ng and not through third-party shippers. This means that my Logstash server accepts, via syslog-ng, all the logs from the agents.
I then need to install a Logstash process that will read from /var/log/syslog-clients/* and grab all the log files that are sent to the central log server. These logs will then be sent to Redis on the same VM.
In theory, I also need to configure a second Logstash process that will read from Redis, start indexing the logs, and send them to Elasticsearch.
My question:
Do I have to use two different Logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to have just one Logstash configuration and have the process read from syslog-ng ---> write to Redis, and also read from Redis ---> output to Elasticsearch?
Diagram of my setup:
[client] --syslog-ng--> [log server] --syslog-ng <-- logstash-shipper --> redis <-- logstash-server --> elasticsearch <-- kibana
Do I have to use two different logstash processes (shipper & server) even if I am in the same box (I want one log server instance)?
Yes.
Is there any way to just have one logstash configuration and have the process read from syslog-ng ---> write to redis and also read from redis ---> output to elastic search ?
Not that I have seen yet.
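That said, running both on one box only means two small configurations. A sketch of the pair (the paths, the Redis list key, and the single-host addresses are assumptions; the option names follow the 1.x-era Logstash config syntax):
shipper.conf (tails the syslog-ng output files and pushes to Redis):
input { file { path => "/var/log/syslog-clients/*" } }
output { redis { host => "127.0.0.1" data_type => "list" key => "logstash" } }
indexer.conf (pops from the same Redis list and indexes into Elasticsearch):
input { redis { host => "127.0.0.1" data_type => "list" key => "logstash" } }
output { elasticsearch { host => "127.0.0.1" } }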
Why would you want this? I have a single-machine config and a remote-machine config, and they work extremely reliably, with a small footprint. Maybe you could explain your reasoning a bit; I know I would be interested to hear about it.
I need to upload a file using FTP or HTTP. I checked downloading from servers like ftp.qt.nokia.com, where the download was successful (even though the downloaded file(s) were broken, they were almost the same size as on the web server). I tried to upload a file to servers like qt.nokia.com and ftp.trolltech.com. When I used HTTP, I got an HTML document as a response, with an error count of 0 but an error string of "unknown error". When I used FTP, the connection was established and logged in, but it didn't close the connection, and again the error string was "unknown error". My point is: how can I know whether my file was uploaded to a server successfully? Can I check it on my own system by setting up a server (for example, an Apache Tomcat server)?
Set up a local FTP or HTTP server; you should not rely on any server you don't have full control over. A virtual machine is probably the most hassle-free solution.
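For the FTP case, a local vsftpd instance is enough to test against. A sketch of the two key lines in /etc/vsftpd.conf on a stock install, assuming you upload as a local user:
local_enable=YES
write_enable=YES
After the upload, compare checksums on both ends to confirm the file arrived intact, e.g. run md5sum on the file on each machine.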