Deploy Spring Boot TCP app on PWS - spring-mvc

I am trying to deploy a Spring Boot TCP server application on Pivotal Web Services (Cloud Foundry).
The following is in my manifest.yml file:
applications:
- name: myapp-api
  path: target/myapp-api-0.0.1-SNAPSHOT.jar
  host: app
  domain: cf-tcpapps.io
  memory: 1G
  instances: 1
When I cf push, I get this error:
FAILED
Server error, status code: 400, error code: 310009, message: You have exceeded the total reserved route ports for your organization's quota.
When I run cf router-groups, I get:
FAILED
Failed fetching router groups.
Server error, status code: 401, error code: UnauthorizedError, message: You are not authorized to perform the requested action
How can one deploy a Spring MVC API that exposes a TCP port?

The PWS docs indicate:
Note: By default, PWS only supports routing of HTTP requests to applications.
This implies that maybe they do, if you get a special dispensation. It may be worth contacting PWS support.

The cf router-groups command is an admin-only command.
If cf domains returns the cf-tcpapps.io domain, a router group has already been set up, but your org or space has not been given permission (i.e., a quota of available TCP ports) to use it.
Ask your platform operator/admin. They may be able to increase your quota, or create a new TCP domain for you to use.
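For reference, here is a hedged sketch of the cf CLI commands involved; the quota and domain names are illustrative, and the first two require admin privileges:

# Admin: reserve TCP route ports in the org's quota
cf update-quota my-org-quota --reserved-route-ports 2
# Admin: create a shared TCP domain backed by a router group
cf create-shared-domain tcp.example.com --router-group default-tcp
# Developer: map a TCP route to the app on a randomly assigned port
cf map-route myapp-api tcp.example.com --random-port

Once the route is mapped, clients connect to the TCP domain on the assigned port and the TCP router forwards the traffic to the app.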

Related

Trouble connecting to gRPC server on AWS Fargate

I have a Python gRPC server running on AWS Fargate (configured very similarly to this AWS guide here), and another AWS Fargate task (call it the "client") that attempts to make a connection to my gRPC server (also using Python gRPC). However, the client is unable to make a call to my server, failing with the following error:
<_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"#1619057124.216955000","description":"Failed to pick subchannel",
"file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":5397,
"referenced_errors":[{"created":"#1619057124.216950000","description":"failed to connect to all addresses",
"file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc",
"file_line":398,"grpc_status":14}]}"
Based on my reading online, it seems like there are myriad situations in which this error is thrown, and I'm having trouble figuring out which one pertains to my case. Here is some additional information:
When running the client and server locally, I can connect successfully by having the client connect to localhost:[PORT].
I have configured an application load balancer target group (following the guide from AWS here) that makes health check requests to the / route of my gRPC server using the gRPC protocol, expecting gRPC response code 12 (UNIMPLEMENTED). These health checks come back as expected, which I believe implies the load balancer is able to communicate with the server successfully (although I could be misunderstanding).
I configured a service discovery system (following this guide here) that should allow me to reach my gRPC server within my VPC via the name service-name.dev.co.local. I can confirm that the corresponding DNS record exists in Route 53, and when I SSH into my VPC, I am indeed able to ping service-name.dev.co.local successfully.
Anyone have any ideas? Would appreciate any and all advice, and I'm happy to answer any further questions.
Thank you for your help!
On your gRPC server, bind to 0.0.0.0:[port] and expose this port with TCP on your container.
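For illustration, a minimal sketch of a Python gRPC server bound to all interfaces; the port and the commented-out servicer registration are placeholders for your own generated code:

import grpc
from concurrent import futures

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    # Register your servicer here, e.g.:
    # my_service_pb2_grpc.add_MyServiceServicer_to_server(MyServicer(), server)
    # Bind to 0.0.0.0 so the server accepts traffic from outside the
    # container, not just from localhost inside it.
    server.add_insecure_port("0.0.0.0:50051")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()

The same port then needs to be exposed as TCP in the container/task definition so the load balancer and other tasks can reach it.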

Corda CENM network map server failing to connect to database after a few weeks running

We operate CENM (1.2, run on a k8s cluster using Helm templates) to construct our own private network, and after the CENM network map server has been running for a few weeks, launching new nodes starts failing.
On further investigation, it appears that a request timeout for http://nmap:10000/network-map causes the problem.
In the nmap server's log, we found the following output when accessing the above URL with curl:
[NMServer] - Error while handling socket client message com.r3.enm.servicesapi.networkmap.handlers.LatestUnsignedNetworkParametersRetrievalMessage#760c53ea: HikariPool-1 - Connection is not available, request timed out after 30000ms.
netstat shows there are at least 3 established connections to the database from the container running the network map server, and I can also connect to the database directly using the CLI.
So I don't think it is either database saturation or a network configuration problem.
Does anyone have an idea why this happens? I think a restart would probably solve the problem, but I want to know the root cause...
regards,
Please test the following options.
Since it is the HikariCP (connection pool) component that is throwing the error, it would be worth seeing if increasing the pool size in the network map configuration helps (see below).
Corda uses HikariCP to create the connection pool. To configure the connection pool, any custom properties can be set in the dataSourceProperties section:
dataSourceProperties = {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
...
maximumPoolSize = 10
connectionTimeout = 50000
}
Has a health check been conducted to verify there are sufficient resources on that Postgres database, i.e. basic diagnostic checks?
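As a generic starting point (not CENM-specific), connection usage on the Postgres side can be checked with queries along these lines:

-- How many connections are open, and in what state?
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;
-- Compare the total against the configured limit
SHOW max_connections;

If connections pile up toward max_connections, the database may refuse new ones, and the pool will then time out waiting for a free slot, which would be consistent with the HikariCP error above.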
Another option to get more information logged from the network map service is to run with TRACE logging also:
From https://docs.corda.net/docs/cenm/1.2/troubleshooting-common-issues.html
Enabling debug/trace logging
Each service can be configured to run with a deeper log level via command line flags passed at startup:
java -DdefaultLogLevel=TRACE -DconsoleLogLevel=TRACE -jar <enm-service-jar>.jar --config-file <config-file>

AMQP handshake timeout error while deploying AWS Corda Enterprise Template

I am deploying the AWS Corda Enterprise Template. The Quick Start deployed the stack as per the defined CloudFormation template. I can see 2 AWS instances up and running as Corda nodes, in a hot-cold setup with a load balancer.
However the Log for Corda node has following ERROR related to AMQP communication.
[ERROR] 2018-10-18T05:47:55,743Z [Thread-3 (ActiveMQ-scheduled-threads)] core.server.lambda$channelActive$0 - AMQ224088: Timeout (10 seconds) while handshaking has occurred. {}
What can be the possible reason for this error? It keeps occurring at a certain time interval, so it looks like a connectivity issue to me.
Note: The load balancer shows the status of these AWS Corda instances as healthy (In Service), so I believe the Corda nodes have booted up successfully.
The ERROR message isn't necessarily tied to AMQP. Perhaps you were confused by the "AMQ" in the error ID (AMQ224088)?
In any event, this error indicates that something on the network is connecting to the ActiveMQ Artemis broker, but it's not completing any protocol handshake. This is commonly seen with, for example, load balancers that do a health check by creating a socket connection without sending any real data just to see if the port is open on the target machine.
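As a rough illustration of that pattern, this Python sketch opens a TCP connection to a broker port and holds it open without sending any data, which is essentially what such a health check does; the host and port are placeholders:

import socket
import time

# Placeholder address; substitute your node's ActiveMQ/P2P host and port.
conn = socket.create_connection(("corda-node.example.com", 10002), timeout=5)
# Hold the connection open without sending any bytes; after its 10-second
# handshake window the broker gives up and logs AMQ224088.
time.sleep(15)
conn.close()

If the timestamps of the errors line up with your load balancer's health check interval, that is a strong hint this is what is happening.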

API Management 2018.1 and DataPower 7.7

I am trying to add DataPower 7.7 into API Management 2018.1.
I need to configure the API Connect Gateway Service in DataPower (the new APIC 2018.1 doesn't work with the XML Management Service).
After configuration, I got an error:
8:07:19 mgmt notice 959 0x00350015 apic-gw-service (default): Operational state down
8:07:19 apic-gw-service error 959 0x88e00001 apic-gw-service (default): Unexpected queue error: Domain check failed! Please ensure that the 'default' domain exists and is enabled. Also, please verify that the API Gateway Service is configured with the correct domain and SOMA credentials.
8:07:19 apic-gw-service error 959 0x88e000a0 apic-gw-service (default): Failed to initialize gateway environment: datapower
DP version is 7.7.
Please advise if you have any information or manuals.
Note: the domain exists and the main services are enabled.
It's hard to tell what exactly the problem is based on the log messages shown above.
Update to original answer:
See also the documentation that is now available in the IBM API Connect Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSMNED_2018/com.ibm.apic.install.doc/tapic_install_datapower_gateway.html
However, here are the basic steps for configuring a DataPower gateway to work with API Connect 2018.x.
You will need to ensure:
DataPower is running DP 7.7.0.0 or higher.
You have the AppOpt license installed. (Use the "show license" command in the DataPower CLI to confirm.)
You have a shared certificate and a private key for securing the communication between the API Connect management server and the gateway.
On DataPower, you need to:
Create an application domain. All of the subsequent configuration should be done in the application domain.
Enable statistics
Upload your private key and shared certificate to the cert:// directory in the application domain.
Create a crypto key object, a crypto certificate and a crypto identification credentials object using your key and certificate.
Create an SSL client profile and an SSL server profile that reference the crypto identification credential object.
Configure a gateway-peering object.
Configure and enable the API Connect Gateway Service in the application domain.
At that point, you should be able to configure the gateway in the API Connect cloud manager.
Here are the DataPower CLI commands to create a basic configuration. In the configuration below, IP address 1.1.1.1 represents a local IP address on your DataPower appliance. Traffic from the API Connect management server to the gateway will be sent to port 3000. API requests will go to port 9443 (but you can change it to the more standard port, 443, if you prefer.)
For a production environment, you will want to build on this configuration to ensure you are running with at least 3 gateways in the peer group, but this will get you started.
Create the application domain called apiconnect
top; configure terminal;
domain apiconnect; visible default; exit;
write mem
Use the Web GUI to upload your private key and shared certificate to the cert:// folder in the apiconnect domain
Then run these commands to create the configuration in the apiconnect domain
switch apiconnect
statistics
crypto
key gw_to_apic cert:///your-privkey.cer
certificate gw_to_apic cert:///your-sscert.cer
idcred gw_to_apic gw_to_apic gw_to_apic
ssl-client gwd_to_mgmt
idcred gw_to_apic
no validate-server-cert
exit
ssl-server gwd_to_mgmt
idcred gw_to_apic
no request-client-auth
validate-client-cert off
exit
exit
gateway-peering apic
admin-state enabled
local-address 1.1.1.1
local-port 15379
monitor-port 25379
priority 100
enable-ssl off
enable-peer-group off
persistence local
exit
apic-gw-service
admin-state enabled
local-address 0.0.0.0
local-port 3000
api-gw-address 0.0.0.0
api-gw-port 9443
v5-compatibility-mode on
gateway-peering apic
ssl-server gwd_to_mgmt
ssl-client gwd_to_mgmt
exit
write mem
The problem you are seeing is an issue with creating your API Connect service in the default domain. To work around it, just put your API Gateway Service in a domain other than default.

Flex AIR Application Connecting to RemoteObject Through Proxy

I am having some issues trying to make an AIR application connect to a RemoteObject when the application is run in a domain that has proxy servers for outbound connections.
The error provided is as below:
[RPC Fault faultString="Send failed" faultCode="Client.Error.MessageSend" faultDetail="Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Failed: url: 'http://myTestService.org:8080/default/message/amf'"]
Any ideas? I think the proxy server may be preventing the application from accessing the Remote Object. How do I work around this?
Thanks.
Edit:
I saw a post quite similar to this:
Remoting with AIR
And I did declare the endpoint and destination on my RemoteObject.
In application/WEB-INF/flex/services-config.xml, give only relative paths; do not use an IP address and port number. You can look here for details: moving to production server
And here: send failed error
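For illustration, a relative-path channel definition in services-config.xml might look like the following sketch; the channel id and path are placeholders, not a verified configuration:

<channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
    <endpoint url="/yourapp/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/>
</channel-definition>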
In your case, the channel URL should be "/default/message/amf".
Drupal RPC Fault looks to be somewhat the same as your problem, and it has issues with crossdomain.xml.
Do check it.
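For reference, a wide-open crossdomain.xml served from the server root looks like the following; this allows access from any domain, so it is suitable for testing only:

<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" />
</cross-domain-policy>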
