I've got a remote server on eapps.com that I'm using as my "production" server. I have my own computer at home that I'm using as my "development" server. I'm trying to use JNDI over HTTP to do some batch processing. The following works at home, but not on the eapps machine.
I'm connecting to some EJBs (stateless session), and have my jndi.properties set to this:
(this is for the eapps machine)
java.naming.factory.initial=org.jboss.naming.HttpNamingContextFactory
java.naming.provider.url=http://my.prodhost.com:8080/invoker/JNDIFactory
java.naming.factory.url.pkgs=org.jboss.naming.client:org.jnp.interfaces
# timeout is in milliseconds
jnp.timeout=15000
jnp.sotimeout=15000
jnp.maxRetries=3
(this is for my machine at home)
java.naming.factory.initial=org.jboss.naming.HttpNamingContextFactory
java.naming.provider.url=http://localhost:8080/invoker/JNDIFactory
java.naming.factory.url.pkgs=org.jnp.interfaces
java.naming.factory.url.pkgs=org.jboss.naming.client
# timeout is in milliseconds
jnp.timeout=15000
jnp.sotimeout=15000
jnp.maxRetries=3
As I said, it works at home, but when I try it remotely, I get:
Can not get connection to server. Problem establishing socket connection for InvokerLocator [socket://my.prodhost.com:4446//?dataType=invocation&enableTcpNoDelay=true&marshaller=org.jboss.invocation.unified.marshall.InvocationMarshaller&socketTimeout=600000&unmarshaller=org.jboss.invocation.unified.marshall.InvocationUnMarshaller]
...
Caused by: java.net.ConnectException: Connection timed out: connect
Am I doing something wrong here, or is it possibly a firewall issue? To the best of my knowledge, port 4446 is not blocked.
Are the differences in the jndi.properties intentional (at the java.naming.factory.url.pkgs property level)?
Also, can you run a netstat -a | grep 4446 on both machines and update the question with the output?
Update: If the netstat command didn't return anything for port 4446 (JBoss was running, right?), then the JBoss Remoting Connector for the UnifiedInvoker service is very likely not listening on your eApps host, hence the connection timeout. Maybe this service has been disabled by eApps; you should contact their support and discuss it with them.
Just in case, a sample Connector configuration can be found in jboss-service.xml under the server node's conf directory. Maybe compare the remote one (if you have access to it) with your local file to confirm this (but if it's disabled, there must be a reason; discuss it with support).
And by the way, this is what I get when I run the netstat command with JBoss 4.2.3.GA started on my GNU/Linux machine (default configuration):
$ netstat -a | grep 4446
tcp 0 0 localhost:4446 *:* LISTEN
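You can also probe the port from the client side to tell the failure modes apart: a timeout usually means a firewall silently dropping packets, while "connection refused" means the port is reachable but nothing is listening. A quick sketch, assuming nc (netcat) is available on your development machine:
# probe the invoker port with a 5-second timeout
$ nc -zv -w 5 my.prodhost.com 4446
# plain telnet shows the same distinction if netcat is missing
$ telnet my.prodhost.com 4446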
Related
I've configured a shadowsocks system by running ss-server on a VPS and ss-local on my client machine.
Then I wrote a simple SOCKS5 client in C which connects to ss-local and handles the SOCKS requests.
That all works well, but when I run ss-tunnel instead of ss-local, my SOCKS5 client can't connect to ss-tunnel:
the TCP connection terminates as soon as it is established.
I'm not sure of the exact reason. This is how I run it:
ss-tunnel -c config.json -L <destaddr:port>
But it does work when I run ss-local instead:
ss-local -c config.json
Below is my config file.
{
"server":"xxx.xxx.xxx.xxx",
"server_port":443,
"local_address": "127.0.0.1",
"local_port":10800,
"password":"xxxxxxxxxx",
"timeout":60,
"method":"chacha20-ietf-poly1305",
"workers":8,
"plugin":"obfs-local",
"plugin_opts":"obfs=tls;obfs-host=www.google.com"
}
Is there any difference between the protocols of ss-local and ss-tunnel? I assume there isn't, but I can't figure out what's going wrong.
Thanks.
ss-tunnel establishes a complete tunnel with ss-server; all traffic sent to ss-tunnel is relayed directly to ss-server without any SOCKS request/response processing.
After I removed the SOCKS handshake from my client program, it worked properly.
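To illustrate the difference, a sketch using curl (the port comes from your config; example.com:80 is an assumed -L destination):
# through ss-local, the client has to speak SOCKS5 first
$ curl --socks5 127.0.0.1:10800 http://example.com/
# through ss-tunnel -L example.com:80, bytes are relayed as-is, so plain HTTP works
$ curl http://127.0.0.1:10800/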
I used FileZilla to connect to one of my Linux servers via the SFTP protocol, but got the error trace below.
Status: Connecting to <server_ip>...
Response: fzSftp started, protocol_version=5
Command: keyfile "C:\ruifeng_ibm.ppk"
Command: open "root@<server_ip>" 22
Status: Connected to <server_ip>
Error: Connection timed out after 20 seconds of inactivity
Error: Could not connect to server
On the server, when I ran lsof -i, I could see the established sshd connection.
sshd 12333 root 3u IPv4 109406 0t0 TCP <server_hostname>:ssh-><workstation_ip>:54315 (ESTABLISHED)
How could the directories not be listed when the connection is successful? I had no idea how to debug it either.
It turned out to be a silly problem.
I had put the welcome message below in my .bashrc file:
echo -e "\n\nHello Ruifeng...Welcome to the Arena! \n#>>------>---->>"
The likely reason: SFTP runs as a subsystem over the SSH connection and treats stdout as a binary protocol channel, so anything a login script prints gets mixed into the protocol stream and confuses the client. After removing this message, the connection worked and the directories were listed.
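If you want to keep the banner, a common fix (a minimal sketch, assuming bash) is to print it only in interactive shells, so non-interactive SFTP sessions stay clean:
# in .bashrc: print the banner only when the shell is interactive
if [[ $- == *i* ]]; then
    echo -e "\n\nHello Ruifeng...Welcome to the Arena! \n#>>------>---->>"
fi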
Is there a way to check whether the local host is making an FTP connection to another server?
The requirement is like this: local host -> serverA,
remote server -> serverB.
I need to check whether serverA is making an FTP connection to serverB,
and whenever it does, get notified.
I tried ps -ef | grep -i ftp; however, the grep process itself shows up in the output, so I can't use this in a shell script. Is there a better way to check whether serverA is making FTP connections to serverB and, if so, to get notified or log it to a file?
Thanks
Your problem of ps -ef | grep -i ftp also reporting its own pipeline comes from grep searching for the string "ftp", which appears in grep's own command line. It would also match a lot of other processes that merely have the word 'ftp' somewhere in their command lines.
To fix that, check whether you have the procps tools pgrep and pkill installed. They are very helpful for 'grepping' running processes and their command lines, for example:
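A quick sketch (exact flags can vary between procps versions):
# match the exact process name, so the grep pipeline itself never matches
$ pgrep -x ftp
# list matching PIDs together with their full command lines
$ pgrep -a ftp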
To solve your initial problem, you might check whether you have the ss command (show sockets, from the iproute2 package) installed.
Its output might be useful (11.22.33.44 is your local IP, 130.133.3.130 the remote):
root:sigkill:~/# ss -p|cat
State Recv-Q Send-Q Local Address:Port Peer Address:Port
[...]
ESTAB 0 0 11.22.33.44:43681 130.133.3.130:ftp users:(("ftp",19729,4),("ftp",19729,3))
[...]
There are a few approaches that you could take:
You could poll running processes for ftp. This wouldn't catch other FTP clients (if you care about that), and it wouldn't catch very short ftp sessions that slip between polls.
If your system supports execution logging, you could log all executions of ftp. Again, this wouldn't catch other FTP clients.
You could watch for outbound connections on port 21/tcp using some mechanism provided by your system (for instance, on Linux, an iptables rule that matches outbound FTP connections to any servers you care about and logs them using the LOG target; see the sketch below). This would catch all connections regardless of client, but tracking down the process and user would be a little more complicated.
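A hedged example of such a rule (10.0.0.2 stands in for serverB's address, an assumption; adjust the chain to your setup):
# log new outbound FTP control connections to serverB
$ iptables -A OUTPUT -p tcp -d 10.0.0.2 --dport 21 -m state --state NEW -j LOG --log-prefix "FTP-OUT: "
Matched packets then appear in the kernel log (dmesg or /var/log/messages), from which you can recover the source port, though not directly the process.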
You can use grep ftp /etc/services to look up the port numbers assigned to the FTP-related services (this lists name-to-port mappings, not current connections):
$ grep ftp /etc/services
ftp-data 20/tcp
ftp-data 20/udp
...
ftp 21/tcp
ftp 21/udp fsp fspd
...
sftp 115/tcp
sftp 115/udp
...
ftp-data 20/sctp # FTP
ftp 21/sctp # FTP
...
ftps-data 989/tcp # ftp protocol, data, over TLS/SSL
ftps-data 989/udp # ftp protocol, data, over TLS/SSL
ftps 990/tcp # ftp protocol, control, over TLS/SSL
ftps 990/udp # ftp protocol, control, over TLS/SSL
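To turn those port numbers into a live-connection check, a sketch assuming lsof is installed:
# list established TCP connections on the ftp control port (21)
$ lsof -iTCP:ftp -sTCP:ESTABLISHED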
Use netstat to see the actual open connections, e.g. for plain FTP:
$ netstat -tan | grep \:21
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN
tcp 0 0 :::21 :::* LISTEN
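To get the notification/logging the question asks for, one rough sketch is to poll and append matches to a file (10.0.0.2 stands in for serverB's address; the interval and log path are arbitrary):
$ while sleep 5; do netstat -tan | grep "10.0.0.2:21" | grep ESTABLISHED >> /var/log/ftp-watch.log; done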
I want to back up folders from my remote server (an Ubuntu server) to another remote server (a Linux server), but when I run this command from the first server it displays an error message:
rsync -raz --progress firstdirectoy root@serverIP:/home
The displayed message is:
ssh: connect to host <serverIP> port 22: Connection timed out
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7]
But the same command from server 2 to server 1 works fine, and the folder is nicely copied onto server 1.
How can I get past the connection error so that I can copy my folder from server 1 to server 2 with rsync?
Seems like server2 has no active ssh daemon while server1 has one.
Try starting the ssh daemon, or use the raw rsync protocol with an rsync daemon.
If it's a connection timeout because your SSH server is slow to respond, you can tweak the timeout in rsync:
rsync -raz --progress -e 'ssh -o ConnectTimeout=120' firstdirectoy root@serverIP:/home
Otherwise it may be a missing SSH daemon (sshd) on server 2, as stated by @geov, or a closed port on your firewall. You may start by testing an SSH login:
ssh user@serverIP
and see whether it works or not. nmap serverIP will probably help you too, by telling you whether SSH is running.
And please do NOT use the root user for your rsync copy!
If you wait for a long time, the prompt appears.
I think your server2 IP is wrong.
For me, this error appeared when attempting to rsync between two AWS EC2 instances that were not part of the same security group.
Overview of how to create security groups
How to change the security groups of the instances
Allow instances within the same security group to communicate
I have box A, and it has a consumer on it that listens on a RabbitMQ server.
I have box B that will publish a message to the listener.
As long as all of this is on box A and I start the RabbitMQ server with defaults, it works fine.
The defaults are host=127.0.0.1 on port 5672, but
when I telnet box.a.ip.addy 5672 from box B I get:
Trying box.a.ip.addy...
telnet: connect to address box.a.ip.addy: No route to host
telnet: Unable to connect to remote host: No route to host
telnet on port 22 is fine; I can ssh into box A from box B.
So I assume I need to change the IP that the RabbitMQ server uses.
I found this: http://www.rabbitmq.com/configure.html and I now have a config file in the location the documentation said to use, named rabbitmq.config, which contains:
[
{rabbit, [{tcp_listeners, {"box.a.ip.addy", 5672}}]}
].
So I stopped the server and started RabbitMQ again. It failed. Here are the errors from the error logs. It's a little over my head (in fact, most of this is).
=ERROR REPORT==== 23-Aug-2011::14:49:36 ===
FAILED
Reason: {{case_clause,{{"box.a.ip.addy",5672}}},
[{rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1},
{rabbit_networking,boot_tcp,0},
{rabbit_networking,boot,0},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
{rabbit,run_boot_step,1},
{rabbit,'-start/2-lc$^0/1-0-',1},
{rabbit,start,2},
{application_master,start_it_old,4}]}
=INFO REPORT==== 23-Aug-2011::14:49:37 ===
application: rabbit
exited: {bad_return,{{rabbit,start,[normal,[]]},
{'EXIT',{rabbit,failure_during_boot}}}}
type: permanent
and here is some more from the startup log:
Erlang has closed
Error: {node_start_failed,normal}
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})
Please help
Did you try adding
RABBITMQ_NODE_IP_ADDRESS=box.a.ip.addy
to the /etc/rabbitmq/rabbitmq.conf file?
Per http://www.rabbitmq.com/configure.html#customise-general-unix-environment
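For instance, a minimal sketch (assuming a service-style init; on some installs the environment file is /etc/rabbitmq/rabbitmq-env.conf instead):
$ echo 'RABBITMQ_NODE_IP_ADDRESS=box.a.ip.addy' | sudo tee -a /etc/rabbitmq/rabbitmq.conf
$ sudo service rabbitmq-server restart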
Also, per that documentation, the default is to bind to all interfaces. Perhaps a configuration setting or environment variable already set on your system restricts the server to localhost, overriding anything else you do.
UPDATE: After reading again, I realize that the telnet should have returned "Connection refused", not "No route to host". I would also check whether you are having a firewall-related issue.
You need to open up the TCP port on your firewall.
On Linux, find the iptables config file:
eric@dev ~$ find / -name "iptables" 2>/dev/null
/etc/sysconfig/iptables
Edit the file:
sudo vi /etc/sysconfig/iptables
Fix the file by adding a rule that accepts the port:
# Generated by iptables-save v1.4.7 on Thu Jan 16 16:43:13 2014
*filter
# AMQP listens on 5672 by default (15672 is the management UI)
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
COMMIT
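Then reload the rules and re-test from box B; a sketch assuming a RHEL-style init (where /etc/sysconfig/iptables lives):
$ sudo service iptables restart    # or: sudo iptables-restore < /etc/sysconfig/iptables
$ telnet box.a.ip.addy 5672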