nmap does not show all open ports [closed] - networking

I have a YARN cluster running in EMR. When I ssh into the master node and run nmap 10.0.0.254, I get the following result:
Starting Nmap 5.51 ( http://nmap.org ) at 2015-06-10 00:17 UTC
Nmap scan report for ip-10-0-0-254.ec2.internal (10.0.0.254)
Host is up (0.00045s latency).
Not shown: 987 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
3306/tcp open mysql
8443/tcp open https-alt
8649/tcp open unknown
8651/tcp open unknown
8652/tcp open unknown
9000/tcp open cslistener
9101/tcp open jetdirect
9102/tcp open jetdirect
9103/tcp open jetdirect
9200/tcp open wap-wsp
14000/tcp open scotty-ft
I know the YARN resource manager is running on 10.0.0.254:9026, but I do not see it in the result above. However, when I run nmap -p 9026 10.0.0.254 I get:
Starting Nmap 5.51 ( http://nmap.org ) at 2015-06-10 00:18 UTC
Nmap scan report for ip-10-0-0-254.ec2.internal (10.0.0.254)
Host is up (0.000055s latency).
PORT STATE SERVICE
9026/tcp open unknown
Why does nmap not include the service running on 9026 when I run the first command?

By default, Nmap scans the 1,000 most common ports for each protocol (TCP in your case), and 9026 is not one of them.
Here's how to specify ports to scan:
http://nmap.org/book/man-port-specification.html
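For example, to make the first scan include port 9026 you could name it explicitly, scan a range, or scan everything (a quick sketch against the same target as above):
# scan an explicit list of ports
nmap -p 22,80,9026 10.0.0.254
# scan a range that covers the Hadoop/YARN ports seen here
nmap -p 9000-9200 10.0.0.254
# scan all 65535 TCP ports (slower)
nmap -p- 10.0.0.254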

Related

Can't establish connection over second NIC (two hops)

We are having trouble with the network routing configuration on Ubuntu Xenial.
We have many servers running both Debian 8.4 (Jessie) and Ubuntu 16.04.2 (Xenial)
with the exact same networking setup (or at least as far as we can see).
They all have two NICs attached to two VLANs (say "A" and "B"), both reachable
from other VLANs, for example from VLAN "C".
Both /etc/network/interfaces files are of the form:
NOTE: I faked names and IPs for the sake of better readability.
# VLAN A
auto eth0
iface eth0 inet static
address 192.168.111.xxx
netmask 255.255.255.0
broadcast 192.168.111.255
network 192.168.111.0
gateway 192.168.111.254
dns-nameservers 192.168.111.25 192.168.111.26
# VLAN B
auto eth1
iface eth1 inet static
address 192.168.222.xxx
netmask 255.255.255.0
broadcast 192.168.222.255
network 192.168.222.0
gateway 192.168.222.254 # <-- (Commented out in Ubuntu machine)
dns-nameservers 192.168.111.25 192.168.111.26
...say xxx is 100 for the Debian machine and 200 for the Ubuntu machine, and I'm
trying to ping from 192.168.1.10 in VLAN "C" to the following addresses:
192.168.111.100: Works fine.
192.168.222.100: Works fine.
192.168.111.200: Works fine.
192.168.222.200: NO Answer!!
The "B" vlan is used mostly for backup and other "background" traffic to
avoid saturation problems in vlan "A".
I know that having two network paths to access same machine is not an usual
setup and I must say that only being able to connect thought one of them from
other networks is not a big problem nowadays. But what stucks to me is why
I can access to Debian Machines and not to Ubuntu ones?
On the other hand, if it were working on both platforms, we could consider
closing some services (such as ssh and backend interfaces) on NIC "A" to
improve security (our firewall only allows access to VLAN "B" from our IT
staff VLAN).
Of course, as noted in the interfaces snippet above, the gateway row is
commented out on the Ubuntu machines, but that is because networking
initialization fails on those machines otherwise. That is, in fact, what we are
trying to solve.
But both machines' routing tables are almost identical. The only difference
I could see was the onlink flag on the Ubuntu machine:
myUser@debianMachine:~$ sudo ip route
default via 192.168.111.254 dev eth0
192.168.111.0/24 dev eth0 proto kernel scope link src 192.168.111.100
192.168.222.0/24 dev eth1 proto kernel scope link src 192.168.222.100
myUser@ubuntuMachine:~$ sudo ip route
default via 192.168.111.254 dev eth0 onlink
192.168.111.0/24 dev eth0 proto kernel scope link src 192.168.111.200
192.168.222.0/24 dev eth1 proto kernel scope link src 192.168.222.200
...but I was able to remove it with the following command:
myUser@ubuntuMachine:~$ sudo ip route replace default via 192.168.111.254 dev eth0
myUser@ubuntuMachine:~$ sudo ip route
default via 192.168.111.254 dev eth0
192.168.111.0/24 dev eth0 proto kernel scope link src 192.168.111.200
192.168.222.0/24 dev eth1 proto kernel scope link src 192.168.222.200
And it didn't fix the problem.
After that, I also tried to uncomment the gateway row of 'VLAN B' which, as I
said, was commented out in the /etc/network/interfaces file, and tried to
restart networking, but this is what happened:
myUser@ubuntuMachine:~$ sudo /etc/init.d/networking restart
[....] Restarting networking (via systemctl): networking.serviceJob for networking.service failed because the control process exited with error code. See "systemctl status networking.service" and "journalctl -xe" for details.
failed!
...and the onlink flag came back again.
As a note, after commenting that line out again and issuing a new
/etc/init.d/networking restart command, the output stays the same until the
machine is rebooted (and networking, apart from the VLAN B default gateway
issue, keeps working as usual).
The following is the output of the suggested commands:
myUser@ubuntuMachine:~$ sudo systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Drop-In: /run/systemd/generator/networking.service.d
└─50-insserv.conf-$network.conf
Active: failed (Result: exit-code) since jue 2017-12-21 14:55:29 CET; 42s ago
Docs: man:interfaces(5)
Process: 8552 ExecStop=/sbin/ifdown -a --read-environment --exclude=lo (code=exited, status=0/SUCCESS)
Process: 8940 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 8934 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-envi
Main PID: 8940 (code=exited, status=1/FAILURE)
dic 21 14:55:29 ubuntuMachine systemd[1]: Stopped Raise network interfaces.
dic 21 14:55:29 ubuntuMachine systemd[1]: Starting Raise network interfaces...
dic 21 14:55:29 ubuntuMachine ifup[8940]: RTNETLINK answers: File exists
dic 21 14:55:29 ubuntuMachine ifup[8940]: Failed to bring up eth1.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILUR
dic 21 14:55:29 ubuntuMachine systemd[1]: Failed to start Raise network interfaces.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Unit entered failed state.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Failed with result 'exit-code'.
...and the meaningful part of sudo journalctl -xe:
dic 21 14:55:29 ubuntuMachine sudo[8922]: myUser : TTY=pts/0 ; PWD=/home/myUser ; USER=root ; COMMAND=/etc/init.d/networking restart
dic 21 14:55:29 ubuntuMachine sudo[8922]: pam_unix(sudo:session): session opened for user root by myUser(uid=0)
dic 21 14:55:29 ubuntuMachine systemd[1]: Stopped Raise network interfaces.
-- Subject: Unit networking.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit networking.service has finished shutting down.
dic 21 14:55:29 ubuntuMachine systemd[1]: Starting Raise network interfaces...
-- Subject: Unit networking.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit networking.service has begun starting up.
dic 21 14:55:29 ubuntuMachine ifup[8940]: RTNETLINK answers: File exists
dic 21 14:55:29 ubuntuMachine ifup[8940]: Failed to bring up eth1.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
dic 21 14:55:29 ubuntuMachine systemd[1]: Failed to start Raise network interfaces.
-- Subject: Unit networking.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit networking.service has failed.
--
-- The result is failed.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Unit entered failed state.
dic 21 14:55:29 ubuntuMachine systemd[1]: networking.service: Failed with result 'exit-code'.
dic 21 14:55:29 ubuntuMachine sudo[8922]: pam_unix(sudo:session): session closed for user root
I googled a lot and was able to find some related information, but none of it
fully answered my question:
An explanation of the "onlink" flag that seemed to point to the possibility
that this flag was responsible for a "wrong return routing": it «tells the
kernel that it does not have to check if the gateway is reachable directly by
the current machine», so (I figured) the kernel might think it could (or
should) route the answers to incoming connections from VLAN C through the
default gateway instead of through the same NIC the connection came in on.
But, as I said, removing the "onlink" flag didn't seem to change
anything.
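A quick way to check which route the kernel would actually pick for the reply traffic is ip route get (just a diagnostic sketch, using the faked addresses above):
# route the Ubuntu machine would use to answer 192.168.1.10 when the
# packet came in on eth1 addressed to 192.168.222.200
sudo ip route get 192.168.1.10 from 192.168.222.200 iif eth1
# and, for comparison, the route for a locally generated packet
sudo ip route get 192.168.1.10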
This Unix StackExchange answer seems to solve the problem (I haven't tested it
yet) by using multiple routing tables and rules (to tell the kernel which table
to use). But it doesn't explain why the Debian machines work fine (I checked
the /etc/iproute2/rt_tables file on both machines and they are identical too):
myUser@bothMachines:~$ sudo cat /etc/iproute2/rt_tables
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
So my final hypothesis is that it could just be an implementation difference
between kernel versions and, given that the Ubuntu one is much more recent,
this could be the correct behaviour, so in modern kernels I need to use two
different routing tables (but I'm not sure, and I don't know why...).
myUser@debianMachine:~$ sudo uname -a
Linux debianMachine 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
myUser@ubuntuMachine:~$ sudo uname -a
Linux ubuntuMachine 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
And, hence, the question is:
Are we doing something wrong on the Ubuntu machines (or is there some bug in them)? Or, conversely, is this the correct behaviour, and are we forced to set up a more complex routing schema (either with per-VLAN routes or with two routing tables) to make two default gateways work again?
EDIT:
Now I tried to add a static route to fix the problem:
myUser@ubuntuMachine:~$ sudo ip route add 192.168.1.0/24 via 192.168.222.254 dev eth1
...but that froze my ssh connection (through NIC A), although I could then connect through NIC B (at 192.168.111.200).
Having both routes at the same time doesn't seem to be possible:
myUser@ubuntuMachine:~$ sudo ip route add 192.168.1/24 via 192.168.111.254 dev eth0
myUser@ubuntuMachine:~$ sudo ip route add 192.168.1/24 via 192.168.222.254 dev eth1
RTNETLINK answers: File exists
EDIT 2:
I finally found the Linux Advanced Routing & Traffic Control HOWTO, which seems more accurate than all the other documentation I found, and specifically in its Chapter 4. Rules - routing policy database I see the following text:
If you want to use this feature, make sure that your kernel is
compiled with the "IP: advanced router" and "IP: policy routing"
features
...so I think everything points to my previous hypothesis of a kernel implementation difference being right, and that difference is, concretely, whether those two features are compiled in.
Not an authoritative answer, but my first working attempt (applying what I managed to understand):
sudo ip route add 192.168.1.0/24 via 192.168.222.254 from 192.168.222.200 dev eth1 table 253
sudo ip rule add from 192.168.222.200 table 253
Update: the from and dev arguments in the ip route command aren't required (it works perfectly well without them).
...after issuing the first command I couldn't connect yet, but after issuing the second one I could.
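The result can be double-checked afterwards (just a sanity check, assuming the same table id):
sudo ip rule list               # the new "from 192.168.222.200 lookup 253" rule should be listed
sudo ip route show table 253    # the route added to table 253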
The logic behind that comes from this text I found in this document:
Linux-2.x can pack routes into several routing tables identified by a number in the range from 1 to 255 or by name from the file /etc/iproute2/rt_tables By default all normal routes are inserted into the main table (ID 254) and the kernel only uses this table when calculating routes.
Actually, one other table always exists, which is invisible but even more important. It is the local table (ID 255). This table consists of routes for local and broadcast addresses. The kernel maintains this table automatically and the administrator usually need not modify it or even look at it.
In fact, I finally ended up using another routing table, identified by its id (253) instead of by what I now understand is just an alias (defined in the /etc/iproute2/rt_tables file).
...and checking that file again, I now see that there was already an alias ("default") defined for that routing table (next to the "main" one, which is indeed 254, as the text fragment I pasted previously says).
What I don't know yet is the logic behind this naming (the "default" alias for table 253, I mean) and whether, for any reason, it is better to use lower routing table ids (1, 2, 3...) like the solution already mentioned in the question does.
But, for the sake of simplicity, if we aren't going to build complex routing policies and just want to fix this connectivity issue, I guess it could be a good solution to use something like (not yet tested):
gateway 192.168.222.254 table 253
post-up ip rule add from 192.168.222.200 table 253
I still need to test whether I need an additional via 192.168.222.254 in the gateway row, or whether it won't work at all and I need to add it with another post-up command instead.
I will update this answer with the results.
Edit 1: The same works with default routes:
sudo ip route add default from 192.168.222.200 via 192.168.222.254 table 253
sudo ip rule add from 192.168.222.200 table 253
Edit 2: First (now fully¹) working approach
After playing for a while with a testing machine, I think the best solution is to add the following rows to the second NIC's configuration in the /etc/network/interfaces file:
gateway 192.168.222.254 table 1
post-up ip rule add from 192.168.222.200 table 1
pre-down ip rule del from 192.168.222.200 table 1
post-up ip route add 192.168.222.0/24 dev eth1 src 192.168.222.200 table 1
Comments:
Adding table 1 to the gateway keyword worked well, so an additional (less readable) post-up command to add that default route was not necessary.
...in fact, using a specific table (other than main) for the first NIC, together with a rule similar to the one we used for the second NIC, would be a bad idea, because that rule would only apply when 192.168.111.200 is used as the source address, so there would be no "default default gateway". Leaving the first NIC's configuration in the main routing table makes all ("locally generated") outgoing connections to remote LANs go through our first default gateway by default.
The first post-up command adds a rule saying that packets with the source address of that NIC should be routed using table 1 (otherwise our new default gateway wouldn't be used).
The pre-down command removes that rule. It is not mandatory, but without it every networking service restart duplicates the rule.
I also tried to use dev eth1 instead of from 192.168.222.200 (to avoid having to repeat the address), but it didn't work. I guess the NIC to use for "response" packets was "not yet decided" at that point.
I used table 1 for eth1 (our second NIC), and I could use table 2 for an eventual third one, and so on. There was no need to specify any table/rule for the first NIC because it stays in the main table (not "default": see the note below).
Finally(¹), the second post-up command makes everything work because (as I now realize) only one routing table (the first matching one) is used, so the network route automatically created when the interface is brought up doesn't apply, since it was created in the main table.
I still don't know if there is a way to force it to be created directly in table 1.
NOTE: With the command sudo ip rule list we can see the current routing rules:
0: from all lookup local
32765: from 192.168.222.200 lookup 1
32766: from all lookup main
32767: from all lookup default
As far as I understand, they are added with decreasing priorities from 32767
down to 0 and tried in increasing order until one matches. The last two and
the "0" one were already defined by default. The former because of the logic
I previously cited from this document, but that document says rules start from
"1", so I guess "0" must be some predefined "default starting point".
Edit 3:
As I said in Edit 2 of the question, I found the Linux Advanced Routing & Traffic Control HOWTO, which helped me a lot in clarifying things.
In particular, the Routing for multiple uplinks/providers chapter was very useful for understanding setups with "network loops" (even though in our case we aren't acting as a router to the Internet).
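For reference, the pattern described there boils down to one extra routing table plus one source-based rule per uplink, which is essentially what the /etc/network/interfaces configuration in Edit 2 above does for eth1 (a sketch with the same faked addresses):
# the second uplink gets its own table with its own connected and default routes...
sudo ip route add 192.168.222.0/24 dev eth1 src 192.168.222.200 table 1
sudo ip route add default via 192.168.222.254 table 1
# ...and a rule sends traffic sourced from that NIC through that table
sudo ip rule add from 192.168.222.200 table 1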

Get current public IP / host used while connecting via SSH with ansible [closed]

I have an ansible host file (called my_host_file) similar to this:
[my_group_name]
MY_PUBLIC_IP_FOR_VM_XYZ
Then I am attempting a few different approaches in a YAML playbook (called my_playbook.yml) similar to this:
---
- hosts: my_group_name
sudo: yes
tasks:
- debug: var=hostvars
- setup:
register: allfacts
- debug: var=allfacts
- debug: var=ansible_default_ipv4.address
- debug: var=ansible_hostname
- command: bash -c "dig +short myip.opendns.com @resolver1.opendns.com"
register: my_public_ip_as_ansible_var
I run everything like this: ansible-playbook -v -i my_host_file my_playbook.yml
I would like to get the public IP address from the my_host_file file (MY_PUBLIC_IP_FOR_VM_XYZ) at runtime in a different way than running the dig command against OpenDNS and storing the result in the variable my_public_ip_as_ansible_var.
After all, this address has been used by ansible itself to establish the SSH session, so it may be stored somewhere.
I cannot find this information in any of these:
the hostvars (actually I can find it here, but I can also see all the other hosts, so I have no way to recognize the current SSH session among the group of hosts)
the allfacts variable (registered from setup:), which only has the IP address on the private network, among lots of other useful info about that VM (disk size, networking, OS kernel version, etc.)
ansible_default_ipv4.address (this is the IP on the private network)
ansible_hostname (this is the host name, not the public IP I've used in my_host_file)
Is there a cleaner / more ansible-ish way of getting the host used for the SSH session, as it appears in my_host_file?
inventory_hostname : host name declared in your inventory (can be the IP, the DNS name or a logical name)
inventory_hostname_short : the same, but with everything after the first dot removed
ansible_nodename : hostname of the host (result of the command hostname)
ansible_hostname : short hostname of the host (result of the command hostname --short)
ansible_fqdn : full hostname of the host, with domain (result of the command hostname --fqdn)
ansible_default_ipv4.address : IPv4 address used to reach 8.8.8.8 from the host
ansible_ethX.ipv4.address : IPv4 address of the ethX interface of the host
ansible_ssh_host : hostname or IP used to access the host with SSH, if defined in the inventory
Example :
# hosts
[mygroup]
myremote.foo.bar ansible_ssh_host=my-machine.mydomain.com
inventory_hostname: myremote.foo.bar
inventory_hostname_short: myremote
ansible_nodename: my-host
ansible_hostname: my-host
ansible_fqdn: my-host.domain.local
ansible_default_ipv4.address: 1.2.3.4
ansible_eth1.ipv4.address: 5.6.7.8
ansible_ssh_host: my-machine.mydomain.com
To get the host alias from the inventory file you would use the inventory_hostname variable.
There is also the ansible_host variable, because the inventory alias and the actual host may differ.
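A quick way to see what these two resolve to for every host in the group, without writing a playbook, is an ad-hoc debug call (assuming the inventory from the question):
ansible my_group_name -i my_host_file -m debug -a "var=inventory_hostname"
ansible my_group_name -i my_host_file -m debug -a "var=ansible_host"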

Why is openvpn responding with "could not read Auth username from stdin?" [closed]

I just did an update on my system and for some reason I can no longer log into my VPN service. I'm running Gentoo.
Here's my /etc/openvpn/openvpn.conf:
client
dev tun
proto udp
remote myvpnguys.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
tls-client
remote-cert-tls server
comp-lzo
verb 1
reneg-sec 0
crl-verify crl.pem
keepalive 10 300
auth-user-pass
I start my service on gentoo as follows:
$ sudo /etc/init.d/openvpn start
* Caching service dependencies ... [ ok ]
* Starting openvpn ... [ ok ]
* WARNING: openvpn has started, but is inactive
And here is the log file, which shows the username prompt, but it's as if OpenVPN just keeps going without waiting for input.
$ sudo cat ./openvpn.log
Sat Aug 15 00:57:32 2015 OpenVPN 2.3.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [MH] [IPv6] built on Aug 15 2015
Sat Aug 15 00:57:32 2015 library versions: OpenSSL 1.0.1p 9 Jul 2015, LZO 2.08
Enter Auth Username:
Sat Aug 15 00:57:32 2015 ERROR: could not read Auth username from stdin
Sat Aug 15 00:57:32 2015 Exiting due to fatal error
This is a bug in 2.3.7 and fixed in 2.3.8:
https://community.openvpn.net/openvpn/ticket/248
Add this line to /etc/portage/package.keywords:
=net-misc/openvpn-2.3.8
and install 2.3.8.
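If upgrading is not immediately possible, a common workaround (independent of this particular bug) is to let OpenVPN read the credentials from a file instead of stdin, since the init script detaches it from any terminal. This is only a sketch; the file path and credentials are placeholders, and your provider must allow password auth:
# in /etc/openvpn/openvpn.conf, give auth-user-pass a file argument
auth-user-pass /etc/openvpn/credentials
# the file holds the username on the first line and the password on the second
printf 'myuser\nmypassword\n' | sudo tee /etc/openvpn/credentials
sudo chmod 600 /etc/openvpn/credentials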

How to parse Cisco IPS configuration? [closed]

I need a tool or script to parse a Cisco IPS configuration. I know there is a tool called nipper for parsing firewall and switch configurations, but it doesn't support Cisco IPS, and I googled without finding any good results.
You should use ciscoconfparse.
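If you don't have the library installed yet, it is available from PyPI (assuming a working Python/pip environment):
pip install ciscoconfparse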
The following example uses the Cisco configuration below... I can't use an IPS config unless the OP posts one, so this uses a Cisco IOS configuration.
The following script will load a configuration file from /tftpboot/bucksnort.conf and use CiscoConfParse.find_lines() to parse it for the names of all serial interfaces. Note that the ^ symbol at the beginning of the search string is a regular expression anchor; ^interface Serial tells python to limit its search to lines that begin with interface Serial.
[mpenning@typo tmp]$ python
Python 2.6.6 (r266:84292, Sep 11 2012, 08:34:23)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from ciscoconfparse import CiscoConfParse
>>> parse = CiscoConfParse("/tftpboot/bucksnort.conf")
>>> serial_intfs = parse.find_lines("^interface Serial")
>>>
>>> serial_intfs
['interface Serial1/0', 'interface Serial1/1', 'interface Serial1/2']
>>>
>>> qos_intfs = parse.find_parents_w_child( "^interf", "service-policy output QOS_1" )
>>> qos_intfs
['interface Serial1/1']
! Filename: /tftpboot/bucksnort.conf
!
policy-map QOS_1
class GOLD
priority percent 10
class SILVER
bandwidth 30
random-detect
class default
!
interface Ethernet0/0
ip address 1.1.2.1 255.255.255.0
no cdp enable
!
interface Serial1/0
encapsulation ppp
ip address 1.1.1.1 255.255.255.252
!
interface Serial1/1
encapsulation ppp
ip address 1.1.1.5 255.255.255.252
service-policy output QOS_1
!
interface Serial1/2
encapsulation hdlc
ip address 1.1.1.9 255.255.255.252
!
class-map GOLD
match access-group 102
class-map SILVER
match protocol tcp
!
access-list 101 deny tcp any any eq 25 log
access-list 101 permit ip any any
!
access-list 102 permit tcp any host 1.5.2.12 eq 443
access-list 102 deny ip any any
!
logging 1.2.1.10
logging 1.2.1.11
logging 1.2.1.12

JNDI over HTTP on JBoss 4.2.3GA

I've got a remote server on eapps.com that I'm using as my "production" server. I have my own computer at home that I'm using as my "development" server. I'm trying to use JNDI over HTTP to do some batch processing. The following works at home, but not on the eapps machine.
I'm connecting to some EJBs (stateless session), and have my jndi.properties set to this:
(this is for the eapps machine)
java.naming.factory.initial=org.jboss.naming.HttpNamingContextFactory
java.naming.provider.url=http://my.prodhost.com:8080/invoker/JNDIFactory
java.naming.factory.url.pkgs=org.jboss.naming.client:org.jnp.interfaces
# timeout is in milliseconds
jnp.timeout=15000
jnp.sotimeout=15000
jnp.maxRetries=3
(this is for my machine at home)
java.naming.factory.initial=org.jboss.naming.HttpNamingContextFactory
java.naming.provider.url=http://localhost:8080/invoker/JNDIFactory
java.naming.factory.url.pkgs=org.jnp.interfaces
java.naming.factory.url.pkgs=org.jboss.naming.client
# timeout is in milliseconds
jnp.timeout=15000
jnp.sotimeout=15000
jnp.maxRetries=3
As I said, it works at home, but when I try it remotely, I get:
Can not get connection to server. Problem establishing socket connection for InvokerLocator [socket://my.prodhost.com:4446//?dataType=invocation&enableTcpNoDelay=true&marshaller=org.jboss.invocation.unified.marshall.InvocationMarshaller&socketTimeout=600000&unmarshaller=org.jboss.invocation.unified.marshall.InvocationUnMarshaller]
...
Caused by: java.net.ConnectException: Connection timed out: connect
Am I doing something wrong here, or is it possibly a firewall issue? To the best of my knowledge, port 4446 is not blocked.
Are the differences in the jndi.properties intentional (at the java.naming.factory.url.pkgs property level)?
Also, can you run a netstat -a | grep 4446 on both machines and update the question with the output?
Update: If the netstat command didn't return anything for port 4446 (JBoss was running, right?), then the JBoss Remoting Connector for the UnifiedInvoker service is very likely not listening on your eApps host, hence the connection timeout. Maybe this service has been disabled by eApps; you should contact their support and discuss it with them.
Just in case, a sample Connector configuration can be found in jboss-service.xml under the server node's conf directory. Maybe compare the remote one (if you have access to it) with your local file to confirm this (but if it's disabled, there must be a reason; discuss it with the support team).
And by the way, this is what I get when I run the netstat command with JBoss 4.2.3.GA started on my GNU/Linux machine (default configuration):
$ netstat -a | grep 4446
tcp 0 0 localhost:4446 *:* LISTEN
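To narrow down firewall versus nothing-listening from the client side, a quick connectivity check could also help (a sketch, using the host name from the question):
# "connection refused" usually means nothing is listening on 4446;
# a timeout instead points to a firewall or the connector being disabled
nc -vz my.prodhost.com 4446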
