Unable to access a newly set up EJBCA PKI - networking

I have just finished installing EJBCA Community Edition on top of WildFly.
The EJBCA server is a VM in the Azure cloud.
Everything went fine during the build: the build was successful for all 3 deployment steps:
- ant deployear
- ant runinstall
- ant deploy-keystore
Versions:
Wildfly 18.0
EJBCA 7.4.3.2
Ant 1.10.10
Mysql Ver 15.1 Distrib 10.3.27-MariaDB
JDBC connector : mariadb 2.7.3
Debian 10 buster
However, I am unable to reach the destination:
https://<public ip address>:8443/ejbca/
Error message:
The connection has timed out
The server at <my public ip #> is taking too long to respond.
So I started checking which ports are open:
**Remote** nmap scan from my local VM to the remote EJBCA VM:
nmap -Pn8080,22,8442,8443,9990,3306 52.188.59.103
Host is up (0.037s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
21/tcp open ftp
80/tcp open http
443/tcp open https
554/tcp open rtsp
1723/tcp open pptp
Nmap done: 1 IP address (1 host up) scanned in 5.62 seconds
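(Note that the output above lists nmap's default top ports rather than the ones given on the command line, which suggests the port list was not actually applied. The intended scan would presumably need an explicit -p flag, along these lines, with the same target and the ports assumed from the command above:)
nmap -Pn -p 22,3306,8080,8442,8443,9990 52.188.59.103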
On the EJBCA VM, a local port scan shows that ports 8443 and 8080 are open:
rDNS record for 127.0.0.1: localhost
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
3306/tcp open mysql
8080/tcp open http-proxy
8443/tcp open https-alt
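(It may also be worth checking which address those ports are bound to on the VM, i.e. 0.0.0.0 versus 127.0.0.1 only. A sketch, assuming the iproute2 ss tool is available:)
sudo ss -tlnp | grep -E ':8080|:8442|:8443'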
Azure connectivity tests from my IP to the EJBCA host are OK for every port tested.
However, an online port checker says ports 8443 and 8442 are closed:
https://portchecker.co/
So I don't know which test to trust.
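(Since the VM is in Azure, one thing worth ruling out is the Network Security Group attached to the VM's NIC or subnet. A sketch using the Azure CLI; the resource group, NSG and VM names below are placeholders, and az vm open-port adds an allow rule to the NSG associated with the VM:)
az network nsg rule list --resource-group <myResourceGroup> --nsg-name <myNsg> --output table
az vm open-port --resource-group <myResourceGroup> --name <myVm> --port 8443 --priority 1010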
I tried disabling both my local firewall and my proxy but it didn't make any difference.
I did a tcpdump on the EJBCA server while trying to access the EJBCA URL, but nothing was captured.
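(A capture along these lines is the kind of check meant here; the interface name eth0 is an assumption. If nothing shows up while the URL is retried, the packets are being dropped before they reach the VM, for example by an Azure NSG:)
sudo tcpdump -ni eth0 'tcp port 8443 or tcp port 8442'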
What am I missing here?
What other tests can I perform?
EDIT:
Server log (errors and warnings):
Web admin error:
2021-06-14 13:00:07,332 ERROR [org.jboss.as.jsf] (MSC service thread 1-2) WFLYJSF0002: Could not load JSF managed bean class: org.ejbca.ui.web.admin.peerconnector.PeerConnectorsMBean
2021-06-14 13:00:07,433 ERROR [org.jboss.as.jsf] (MSC service thread 1-2) WFLYJSF0002: Could not load JSF managed bean class: org.ejbca.ui.web.admin.peerconnector.PeerConnectorMBean
Deprecated lib:
2021-06-14 13:00:14,598 WARN [org.jboss.weld.Bootstrap] (MSC service thread 1-4) WELD-000146: BeforeBeanDiscovery.addAnnotatedType(AnnotatedType<?>) used for class com.sun.faces.flow.FlowDiscoveryCDIHelper is deprecated from CDI 1.1!
Severe errors:
2021-06-14 13:00:15,967 SEVERE [javax.enterprise.resource.webcontainer.jsf.flow] (MSC service thread 1-4) Unable to obtain CDI 1.1 utilities for Mojarra
2021-06-14 13:00:15,971 SEVERE [javax.enterprise.resource.webcontainer.jsf.application.view] (MSC service thread 1-4) Unable to obtain CDI 1.1 utilities for Mojarra
Warnings:
2021-06-14 13:00:16,770 INFO [org.ejbca.core.ejb.StartupSingletonBean] (ServerService Thread Pool -- 94) Init, EJBCA 7.4.3.2 Community (67479006a69140e81d66e39871bed8255362effc) startup.
2021-06-14 13:00:16,780 WARN [io.undertow.servlet] (ServerService Thread Pool -- 66) UT015020: Path /* is secured for some HTTP methods, however it is not secured for [HEAD, POST, GET]
2021-06-14 13:00:16,780 WARN [io.undertow.servlet] (ServerService Thread Pool -- 73) UT015020: Path /* is secured for some HTTP methods [...]

During startup, WildFly should log something like the following, so you can verify that it is configured to listen on these ports on all IP addresses.
16:58:12,890 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) WFLYUT0006: Undertow HTTPS listener httpspriv listening on 0.0.0.0:8443
16:58:12,920 INFO [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0006: Undertow HTTPS listener httpspub listening on 0.0.0.0:8442
You can also try connecting to port 8442, to check that the problem is not simply a missing client certificate in your browser.
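If those listener lines show 127.0.0.1 instead of 0.0.0.0, the bind address is set via the "public" interface in standalone.xml, or it can be overridden at startup. A sketch, assuming a typical install path such as /opt/wildfly (the EJBCA-provided configuration may differ):
grep -A 2 '<interface name="public">' /opt/wildfly/standalone/configuration/standalone.xml
/opt/wildfly/bin/standalone.sh -b 0.0.0.0   # bind the public interface to all addresses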

Related

I can't connect Watcher to OpenStack

I have the following scenario:
server: Ubuntu 20.04.3 LTS
OpenStack installed with DevStack
Watcher 2.2.0
Every service seems to be working and I can see the Watcher dashboard on localhost.
But I think I have an auth problem between Watcher and Keystone:
Error contacting Watcher server: Unable to establish connection
to http://x.x.x.x:9322/v1/services: HTTPConnectionPool(host='x.x.x.x', port=9322):
Max retries exceeded with url: /v1/services (Caused by NewConnectionError('<urllib3.connection.
HTTP Connection object at 0x7fba33b1be50>: Failed to establish a new connection: [Errno 111] Connexion refusée [Connection refused]')).
Attempt 6 of 6
I'm testing this on a VM, and I used the same IP and the same password everywhere.
Where should I look first?
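(Connection refused on port 9322 usually means nothing is listening there, so before digging into Keystone auth it may be worth confirming that watcher-api is running and that its catalog endpoint points at the expected host and port. A sketch, assuming Watcher's service type is infra-optim and that the DevStack systemd unit is named devstack@watcher-api:)
sudo ss -tlnp | grep 9322                       # is watcher-api actually listening?
openstack endpoint list --service infra-optim   # does the catalog match the host/port in the error?
sudo journalctl -u devstack@watcher-api -n 50   # recent watcher-api logs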

WSO2 Identity Server and WSO2 API Manager integration - java.rmi.server.ExportException: Port already in use: 9999;

I have integrated WSO2 Identity Server and WSO2 API Manager.
While starting the WSO2 Identity Server, I get the error below in the console:
ERROR {org.wso2.carbon.core.init.JMXServerManager} - Could not create the RMI local registry
java.rmi.server.ExportException: Port already in use: 9999; nested exception is:
java.net.BindException: Address already in use: JVM_Bind
at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:341)
at sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:249)
at sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:411)
at sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
at sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:208)
at sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:152)
at sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:137)
Can anyone help?
But it starts successfully with the following messages:
INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Server : WSO2 Identity Server-5.2.0
[2016-12-27 15:31:13,744] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 57 sec
[2016-12-27 15:31:14,909] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL : https://localhost:9443/carbon/
[2016-12-27 15:31:14,948] INFO {org.wso2.carbon.identity.authenticator.x509Certificate.internal.X509CertificateServiceComponent} - X509 Certificate Servlet activated successfully..
Before I started wso2server.bat, nothing was listening on that port.
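(Since wso2server.bat implies Windows, something along these lines shows what, if anything, is already holding port 9999 just before startup; the PID column from netstat can then be resolved to a process name:)
netstat -ano | findstr :9999
tasklist /FI "PID eq <pid-from-netstat>"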
My machine's JAVA_OPTS was set to debug mode, which caused WSO2 to start in debug mode and therefore listen on port 9999.
I removed the JAVA_OPTS and am now able to start it properly.
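(For anyone hitting the same thing, inspecting and clearing JAVA_OPTS in the same console session before launching is enough to confirm this; a debug agent option in there, for example one with address=9999, would explain the clash:)
echo %JAVA_OPTS%
set JAVA_OPTS=
wso2server.bat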
Related issue:
WSO2 Identity Server listening to port 9999

Passing hostname to Netty

Background: I've got two machines with identical hostnames. I need to set up a local Spark cluster for testing; setting up a master and a worker works fine, but trying to run an application with the driver causes problems: Netty doesn't seem to pick the correct host (regardless of what I put in there, it just picks the first host).
Identical hostname:
$ dig +short corehost
192.168.0.100
192.168.0.101
Spark config (used by master and the local worker):
export SPARK_LOCAL_DIRS=/some/dir
export SPARK_LOCAL_IP=corehost   # tried various values like 192.168.0.x
export SPARK_MASTER_IP=corehost  # for local, master and the driver
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_DIR=/some/dir
Spark starts up and I can see the worker in the web-ui.
When I run the spark "job" below:
val conf = new SparkConf().setAppName("AaA")
// tried 192.168.0.x and localhost
.setMaster("spark://corehost:7077")
val sc = new SparkContext(conf)
I get this exception:
15/04/02 12:34:04 INFO SparkContext: Running Spark version 1.3.0
15/04/02 12:34:04 WARN Utils: Your hostname, corehost resolves to a loopback address: 127.0.0.1; using 192.168.0.100 instead (on interface en1)
15/04/02 12:34:04 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/04/02 12:34:05 ERROR NettyTransport: failed to bind to corehost.home/192.168.0.101:0, shutting down Netty transport
...
Exception in thread "main" java.net.BindException: Failed to bind to: corehost.home/192.168.0.101:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/04/02 12:34:05 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
Process finished with exit code 1
Not sure how to proceed... it's a whole jungle of IP addresses.
Not sure if this is a Netty issue either.
My experience with this identical problem is that it revolves around the local setup. Try being more explicit in your Spark driver code and add the local IP and the driver host IP to the config:
val conf = new SparkConf().setAppName("AaA")
.setMaster("spark://localhost:7077")
.set("spark.local.ip","192.168.1.100")
.set("spark.driver.host","192.168.1.100")
This should tell Netty which of the two identical hosts to use.
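(The same settings can also be passed without changing the code, for example when launching through spark-submit; the IP address and jar name below are placeholders:)
export SPARK_LOCAL_IP=192.168.1.100
spark-submit --master spark://corehost:7077 \
  --conf spark.driver.host=192.168.1.100 \
  your-app.jar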

Cannot start Plone production instances normally with plone.app.async enabled

After adding plone.app.async, I cannot start my production instances normally using 'bin/instance start'. However, the instances run fine using 'foreground' and I can start the production instances on my development machine just fine. (The machines have almost identical configurations but the production machine has almost 100GB of data in blob storage.)
Additionally, I can start the instances normally if I remove support for plone.app.async, specifically the zcml-additional section, from my buildout. And I can start the worker instance for plone.app.async just fine. It uses almost all the same sections as the regular instances, except that its 'zcml-additional' is for the worker instead of the instance.
This happens with both single and multi db for plone.app.async.
The instance log shows that it gets trapped in some sort of cycle during startup. Here is the log of what happens:
....
2012-02-09T18:31:27 INFO ZServer HTTP server started at Thu Feb 9 18:31:27 2012
Hostname: 0.0.0.0
Port: 8081
2012-02-09T18:31:32 INFO ZServer WebDAV server started at Thu Feb 9 18:31:32 2012
Hostname: 0.0.0.0
Port: 1980
2012-02-09T18:31:32 INFO Zope Set effective user to "plone"
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage ClientStorage (pid=16331) created RW/normal for storage: '1'
2012-02-09T18:31:34 INFO ZEO.cache created temporary cache file '<fdopen>'
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 8100)>
2012-02-09T18:31:34 INFO ZEO.zrpc.Connection(C) (127.0.0.1:8100) received handshake 'Z3101'
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage Connected to storage: ('localhost', 8100)
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage No verification necessary -- empty cache
2012-02-09T18:31:45 INFO ZServer HTTP server started at Thu Feb 9 18:31:45 2012
Hostname: 0.0.0.0
Port: 8081
2012-02-09T18:31:50 INFO ZServer WebDAV server started at Thu Feb 9 18:31:50 2012
Hostname: 0.0.0.0
Port: 1980
....
This repeats forever.
With a logging level of debug, I receive the following output: http://pastebin.com/nnyekuRA
Around line 58 is what I think is the culprit:
2012-02-09T17:18:22 DEBUG ZEO.ClientStorage pickled inval None '\x03\x94X\x8a\xa8\xe9\xf6\xee'
------
2012-02-09T17:18:22 BLATHER ZEO.zrpc (15892) CM.connect_done(preferred=1)
------
2012-02-09T17:18:22 BLATHER ZEO.zrpc (15892) CT: exiting thread: Connect([(2, ('127.0.0.1', 8100))])
But I have no idea why this is happening or even if this is correct.
Here is the buildout for deployment:
http://pastebin.com/u8D7swJs
The permissions were set incorrectly on the Plone 'parts' directory. This prevented 'uuid.txt' from being written in 'parts/instance/'. There were no error messages to indicate this problem.
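(As a sketch of the fix, assuming the buildout lives in /opt/plone and should be owned by the effective user from the log, 'plone', re-owning the parts tree and checking that uuid.txt can be written would look roughly like:)
sudo chown -R plone:plone /opt/plone/parts
sudo -u plone touch /opt/plone/parts/instance/uuid.txt && echo writable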

JNDI over HTTP on JBoss 4.2.3GA

I've got a remote server on eapps.com that I'm using as my "production" server. I have my own computer at home that I'm using as my "development" server. I'm trying to use JNDI over HTTP to do some batch processing. The following works at home, but not on the eapps machine.
I'm connecting to some EJBs (stateless session), and have my jndi.properties set to this:
(this is for the eapps machine)
java.naming.factory.initial=org.jboss.naming.HttpNamingContextFactory
java.naming.provider.url=http://my.prodhost.com:8080/invoker/JNDIFactory
java.naming.factory.url.pkgs=org.jboss.naming.client:org.jnp.interfaces
# timeout is in milliseconds
jnp.timeout=15000
jnp.sotimeout=15000
jnp.maxRetries=3
(this is for my machine at home)
java.naming.factory.initial=org.jboss.naming.HttpNamingContextFactory
java.naming.provider.url=http://localhost:8080/invoker/JNDIFactory
java.naming.factory.url.pkgs=org.jnp.interfaces
java.naming.factory.url.pkgs=org.jboss.naming.client
# timeout is in milliseconds
jnp.timeout=15000
jnp.sotimeout=15000
jnp.maxRetries=3
As I said, it works at home, but when I try it remotely, I get:
Can not get connection to server. Problem establishing socket connection for InvokerLocator [socket://my.prodhost.com:4446//?dataType=invocation&enableTcpNoDelay=true&marshaller=org.jboss.invocation.unified.marshall.InvocationMarshaller&socketTimeout=600000&unmarshaller=org.jboss.invocation.unified.marshall.InvocationUnMarshaller]
...
Caused by: java.net.ConnectException: Connection timed out: connect
Am I doing something wrong here, or is it possibly a firewall issue? To the best of my knowledge, port 4446 is not blocked.
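(A quick way to tell a firewall problem apart from a nothing-is-listening problem is to probe port 4446 from the client side; a sketch, assuming netcat is available on the development machine:)
nc -vz -w 5 my.prodhost.com 4446
A timeout here points at filtering along the way, while an immediate "connection refused" means the host is reachable but nothing is bound to that port.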
Are the differences in the jndi.properties intentional (at the java.naming.factory.url.pkgs property level)?
Also, can you run a netstat -a | grep 4446 on both machines and update the question with the output?
Update: If the netstat command didn't return anything for port 4446 (JBoss was running, right?), then the JBoss Remoting Connector for the UnifiedInvoker service is very likely not listening on your eApps host, hence the connection timeout. Maybe this service has been disabled by eApps; you should contact their support and discuss this with them.
Just in case, a sample Connector configuration can be found in jboss-service.xml under the server node's conf directory. Maybe compare the remote one (if you have access to it) with your local file to confirm this (but if it's disabled, there must be a reason; discuss it with the support).
And by the way, this is what I get when I run the netstat command with JBoss 4.2.3.GA started on my GNU/Linux machine (default configuration):
$ netstat -a | grep 4446
tcp 0 0 localhost:4446 *:* LISTEN
