I have a WSO2 API Manager 2.6.0 setup with 2 instances that has been running for a while.
Instance one - Traffic Manager, Store, Publisher, Key Manager
Instance two - Gateway
For certain reasons I need to start using the Store and Publisher on the gateway node for one particular case. According to the manuals there should not be any issue with that, since I haven't done any optimization or profile-based startup. It is just like an HA setup, but without heavy traffic on the second gateway's Store/Publisher.
The issue is, when I open https://Instance two:9444/store (port offset +1 on that server) I see only 2 APIs out of the 7 published on Instance one.
The DB datasources are configured, and the Synapse files are on the same server. In the Instance two startup logs I can see that the APIs are initialized.
Any ideas?
TID: [-1234] [] [2019-08-20 09:53:56,759] INFO {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker} - Endpoint : URApiProxy--v2.0.0_APIproductionEndpoint was added to the Synapse configuration successfully {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker}
TID: [-1234] [] [2019-08-20 09:53:56,760] INFO {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker} - Endpoint : URApiProxy--v2.0.0_APIsandboxEndpoint was added to the Synapse configuration successfully {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker}
further
TID: [-1234] [] [2019-08-20 09:53:57,528] INFO {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker} - API : admin--URApiProxy:v2.0.0 was added to the Synapse configuration successfully {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker}
further
TID: [-1234] [] [2019-08-20 09:54:20,692] INFO {org.apache.synapse.rest.API} - Initializing API: admin--URApiProxy:v2.0.0 {org.apache.synapse.rest.API}
Make sure the registry.xml configuration on instance 2 matches instance 1. Optionally, you can try re-indexing the server. Please refer to "WSO2 API Manager issues with Solr".
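If re-indexing is needed, the commonly used approach for API-M 2.x is sketched below. This is a hedged example: the path is relative to <APIM_HOME> and the new lastAccessTimeLocation value is only an illustrative placeholder. Stop the server, back up and remove the <APIM_HOME>/solr directory, point the indexer at a fresh last-access-time resource in registry.xml so a full re-index is triggered, and then restart.

<!-- <APIM_HOME>/repository/conf/registry.xml : indexingConfiguration (excerpt) -->
<indexingConfiguration>
    <!-- Change this registry path to a new, unused location so indexing starts
         from scratch on the next startup. The "_1" suffix is just an example. -->
    <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1</lastAccessTimeLocation>
</indexingConfiguration>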
We have configured high availability for API endpoints as mentioned here: https://apim.docs.wso2.com/en/3.2.0/learn/design-api/endpoints/high-availability-for-endpoints/#configuring-load-balancing-endpoints
In the Load balance and Failover Configurations we have chosen the "Endpoint Type" as "Load Balanced". We can see that requests are routed to these load-balanced endpoints successfully. However, when we stop one of the endpoint nodes, 2 requests are still routed to the stopped node before the remaining requests are successfully routed to the active node. This happens again and again as new requests come in, and the failed endpoint is never marked as inactive or down.
The error response is:
{"fault":{"code":101503,"type":"Status report","message":"Runtime Error","description":"Error connecting to the back end"}}
The entries from the carbon logs are attached below:
TID: [-1234] [] [2022-12-14 09:51:23,307] WARN {org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler} - Error while getting throttling information for resource and http verb
TID: [-1] [] [2022-12-14 09:51:23,308] WARN {org.apache.synapse.transport.passthru.ConnectCallback} - Connection refused or failed for : /100.66.2.32:7010
TID: [-1234] [] [2022-12-14 09:51:23,309] WARN {org.apache.synapse.endpoints.EndpointContext} - Endpoint : NewMCMInboundChannel-RESTAPIService--vv2_APIproductionEndpoint_1 with address http://100.66.2.32:7010/mcm-provider will be marked SUSPENDED as it failed
TID: [-1234] [] [2022-12-14 09:51:23,309] WARN {org.apache.synapse.endpoints.EndpointContext} - Suspending endpoint : NewMCMInboundChannel-RESTAPIService--vv2_APIproductionEndpoint_1 with address http://100.66.2.32:7010/mcm-provider - current suspend duration is : 30000ms - Next retry after : Wed Dec 14 09:51:53 UTC 2022
TID: [-1234] [] [2022-12-14 09:51:23,310] WARN {org.apache.synapse.endpoints.LoadbalanceEndpoint} - Endpoint [NewMCMInboundChannel-RESTAPIService--vv2_APIproductionEndpoint] Detect a Failure in a child endpoint : Endpoint [NewMCMInboundChannel-RESTAPIService--vv2_APIproductionEndpoint_1]
TID: [-1234] [] [2022-12-14 09:51:23,310] INFO {org.apache.synapse.mediators.builtin.LogMediator} - {api:admin--NewMCMInboundChannel-RESTAPIService:vv2} STATUS = Executing default 'fault' sequence, ERROR_CODE = 101503, ERROR_MESSAGE = Error connecting to the back end
What you are experiencing here is the default behavior of endpoint suspension. Any endpoint created with API Manager can be in one of 3 states:
Active
Timeout
Suspended
In your configuration, since you have configured two endpoints in a load-balanced manner, both endpoints are initially in the active state and share the load. Once endpoint 2 is stopped, the next request routed to it fails with an error code, and that moves the endpoint from the active state to the suspended state.
There are four configurations you can set for this suspension behavior:
Error code (the error code which puts the endpoint from the active to the suspended state)
Initial suspension duration
Maximum duration
Progression factor
In the default configuration, the initial suspension duration is set to 30 seconds. This means the server will remove the suspended state of the endpoint 30 seconds after the failure and put it back into the active state. That's why you can observe the endpoint becoming active from time to time; this is expected, as the server retries to determine whether the endpoint is back up.
You can increase this suspension time through the configuration; the suspension time is calculated from the other three settings as follows:
Endpoint suspension time = Min(current suspension duration * progressionFactor, maximumDuration)
With each failed attempt, the progression factor increases the suspension duration until the maximum duration is reached. The duration resets once the endpoint serves at least one successful request.
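As an illustration, these settings map to the suspendOnFailure block of the endpoint's underlying Synapse configuration. The sketch below uses placeholder names, URIs, and values, not your actual endpoint definition:

<endpoint name="ExampleProductionEndpoint">
    <address uri="http://backend.example.com:8080/service">
        <suspendOnFailure>
            <!-- error codes that move the endpoint from active to suspended -->
            <errorCodes>101503,101504</errorCodes>
            <!-- first suspension period after a failure, in milliseconds -->
            <initialDuration>30000</initialDuration>
            <!-- each further failure multiplies the current duration by this factor -->
            <progressionFactor>2.0</progressionFactor>
            <!-- the suspension period never exceeds this value, in milliseconds -->
            <maximumDuration>300000</maximumDuration>
        </suspendOnFailure>
    </address>
</endpoint>

With these example values, consecutive failures would suspend the endpoint for 30 s, then 60 s, 120 s, 240 s, and finally cap at 300 s, resetting once a request succeeds.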
You can configure all of these values in the Publisher UI, under the Endpoints section.
More information on endpoint suspension can be found here [1].
[1] https://docs.wso2.com/display/EI660/Endpoint+Error+Handling
I was facing an issue while logging in to the Carbon Management Console.
Version: WSO2 IS 5.10.0 as Key Manager
Changes:
I have made MySQL database changes for WSO2AM_DB, WSO2Shared_DB, and WSO2User_db in the deployment.toml file.
When I try to log in to the Management Console with the default admin/admin credentials, I get this error in the UI:
Login failed! Please recheck the username and password and try again.
ERROR {org.wso2.carbon.core.services.authentication.AuthenticationAdmin} - System error while Authenticating/Authorizing User : com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The user store changes in deployment.toml also cause an error.
Changes:
[user_store]
type = "database"
[user_store.properties]
TenantManager="org.wso2.carbon.user.core.tenant.JDBCTenantManager"
ReadOnly=false
ReadGroups=true
WriteGroups=true
scim_enabled = true
[realm_manager]
data_source = "WSO2USER_DB"
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure normally means the configured database is not accessible.
Did you define the database.user configuration in the deployment.toml according to the WSO2 JDBC user store configuration guidelines? If this is already configured, check the configured connection URL and the network connectivity from WSO2 IS to the user store database.
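For reference, a minimal sketch of what a MySQL user store entry in deployment.toml usually looks like; the host name, database name, and credentials below are placeholders that must be adjusted to your environment:

# deployment.toml (excerpt) - placeholder values
[database.user]
url = "jdbc:mysql://db.example.com:3306/WSO2USER_DB?useSSL=false"
username = "wso2carbon"
password = "wso2carbon"
driver = "com.mysql.cj.jdbc.Driver"

[realm_manager]
data_source = "WSO2USER_DB"

If the Communications link failure persists with correct values, verify that the MySQL server is reachable from the IS host (port open, user permitted to connect from that host) and that the MySQL JDBC driver JAR has been copied to <IS_HOME>/repository/components/lib.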
I am using WSO2 APIM 2.6.0 with the default database configuration (H2) and other default settings. I have been a user of APIM 2.5.0 and things are working fine there.
However, we have a requirement to install the SSL certificate of the backend server for the APIs I create in APIM 2.5.0.
On reading the documentation [https://docs.wso2.com/display/AM260/Dynamic+SSL+Certificate+Installation] and [https://docs.wso2.com/display/AM260/RESTful+APIs], I understand that 2.6.0 has the capability to add a new SSL certificate to the APIM client truststore using the REST API.
I have tested this and it seems to work fine (adding the certificate using the REST API). Once you add the certificate, it has to be loaded on the gateway nodes, which happens every 10 minutes by default (the interval can be changed in the axis2.xml file).
However, even after adding the certificate to the client-truststore, when I click the endpoint TEST button in the API Publisher it says 'Invalid Endpoint.' The certificate does not seem to get loaded, even though there are logs like the following:
TID: [-1234] [] [2019-09-18 14:44:51,302] INFO {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl} - Certificate is successfully added to the Publisher client Trust Store with Alias 'devcertificate' {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl}
TID: [-1234] [] [2019-09-18 14:44:51,341] INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - 'admin#carbon.super [-1234]' logged in at [2019-09-18 14:44:51,341+0000] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [-1234] [] [2019-09-18 14:44:51,365] INFO {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl} - The Alias 'devcertificate' exists in the Gateway Trust Store. {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl}
TID: [-1234] [] [2019-09-18 14:44:51,365] INFO {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl} - The Transport Sender will be re-initialized in few minutes. {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl}
TID: [-1234] [] [2019-09-18 14:44:51,365] INFO {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl} - The certificate with Alias 'devcertificate' is successfully added to the Gateway Trust Store. {org.wso2.carbon.apimgt.impl.certificatemgt.CertificateManagerImpl}
TID: [-1] [] [2019-09-18 14:49:12,582] INFO {org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent} - Running DB sync task. {org.wso2.andes.kernel.disruptor.inbound.InboundDBSyncRequestEvent}
TID: [-1] [] [2019-09-18 14:53:28,348] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender} - PassThroughHttpSender reloading SSL Config.. {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender}
TID: [-1] [] [2019-09-18 14:53:28,352] INFO {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder} - customSSLProfiles configuration is loaded from path: /opt/new/test/apim/fresh/usr/lib/wso2/wso2am/2.6.0/repository/resources/security/sslprofiles.xml {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder}
TID: [-1] [] [2019-09-18 14:53:28,352] INFO {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder} - HTTPS Loading custom SSL profiles for the HTTPS sender {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder}
TID: [-1] [] [2019-09-18 14:53:28,358] INFO {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder} - HTTPS Custom SSL profiles initialized for 1 servers {org.apache.synapse.transport.nhttp.config.ClientConnFactoryBuilder}
TID: [-1] [] [2019-09-18 14:53:28,358] INFO {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender} - Pass-through HTTPS Sender updated with Dynamic Configuration Updates ... {org.apache.synapse.transport.passthru.PassThroughHttpSSLSender}
So my question is: do we have to restart the server for certificates added to the client truststore to take effect? Or, as per the documentation, once the certificate is loaded on the gateway node after 10 minutes, does it take effect without restarting the server so that communication with the backend server works? Am I missing anything here?
Can someone please help me with this.
Thanks
Are there multiple nodes here?
When dynamic SSL certificates are uploaded through the Publisher console, they are added to /repository/resources/security/sslprofiles.xml and /repository/resources/security/client-truststore.jks on the current node. If the setup is clustered, these 2 files need to be synced between the nodes so that the dynamically added certificates are picked up, as mentioned in the doc.
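On the reload interval mentioned in the question: the pass-through HTTPS sender periodically re-reads sslprofiles.xml, and that interval is controlled in axis2.xml. A hedged sketch of the relevant parameter is shown below; 600000 ms corresponds to the usual 10-minute default and can be lowered if you need the certificate picked up sooner:

<!-- <APIM_HOME>/repository/conf/axis2/axis2.xml : HTTPS transportSender (excerpt) -->
<parameter name="dynamicSSLProfilesConfig">
    <filePath>repository/resources/security/sslprofiles.xml</filePath>
    <!-- interval, in milliseconds, at which the sender re-reads the SSL profiles -->
    <fileReadInterval>600000</fileReadInterval>
</parameter>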
I am working with WSO2 APIM 2.6.0 and trying to integrate it with the WSO2 APIM Analytics Server as per this link: [https://docs.wso2.com/display/AM260/Configuring+APIM+Analytics#MSSQL-AM_USAGE_UPLOADED_FILES]
We already have a working deployment of WSO2 APIM 2.5.0 with Analytics attached to it, and the data is generated as it should be. However, due to a technical roadblock in APIM 2.5.0 (adding certificates using the REST APIs), I am trying to migrate APIM from 2.5.0 to 2.6.0.
APIM was migrated as described in the documentation: [https://docs.wso2.com/display/AM260/Upgrading+from+the+Previous+Release#code]
But when I try to integrate it with Analytics, it constantly throws errors like the one below:
[2019-09-09 10:03:17,367] INFO {org.wso2.carbon.databridge.core.DataBridge} - user admin connected
[2019-09-09 10:03:17,368] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting loganalyzer:1.0.0 of event bundle with events 1
    at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:188)
    at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
    at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:72)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId loganalyzer:1.0.0 present in cache
    at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:171)
    ... 7 more
[2019-09-09 10:14:02,374] INFO {org.wso2.extension.siddhi.io.mgwfile.task.MGWFileCleanUpTask} - Uploaded API Usage data in the db will be cleaned up to : 2019-09-04 10:14:02.374
[2019-09-09 10:18:17,343] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting loganalyzer:1.0.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:188)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId loganalyzer:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:171)
... 7 more
Can someone please let me know whether compatibility exists between APIM 2.6.0 and Analytics? While going through the migration process, I read the following about migrating the Analytics part of APIM, in the Note section of [https://docs.wso2.com/display/AM260/Upgrading+from+the+Previous+Release#code]:
Step 3.1 - Note that it is mandatory to use a WUM updated WSO2 API Manager Analytics 2.6.0 pack when migrating the configurations for WSO2 API-M Analytics.
Can someone please let me know why these constant errors are thrown on the Analytics server? I have started the worker node of the Stream Processor, and as I understand it there should be a carbon app that can receive the LogAnalyzer events from APIM.
Thanks
Log analyzer analytics is deprecated in APIM Analytics 2.6.0, which is why there is no stream definition for loganalyzer:1.0.0 on the Analytics side. Remove the log analyzer publisher configuration from APIM 2.6.0 and these errors will stop.
I'm trying to connect my WSO2 API Manager to an external LDAP/Active Directory. The LDAP connection is fine, but I'm getting this error while starting the server.
(9714 is the SSL port.)
[2018-07-13 14:52:03,250] ERROR - JMSListener Unable to continue server startup as it seems the JMS Provider is not yet started. Please start the JMS provider now.
[2018-07-13 14:52:03,251] ERROR - JMSListener Connection attempt : 5 for JMS Provider failed. Next retry in 320 seconds
[2018-07-13 14:52:27,889] WARN - DataEndpointGroup No receiver is reachable at reconnection, will try to reconnect every 30 sec
[2018-07-13 14:52:27,906] INFO - DataBridge user admin connected
[2018-07-13 14:52:27,914] ERROR - AuthenticationServiceImpl Invalid User : admin
[2018-07-13 14:52:27,915] ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://10.10.183.27:9714.
org.wso2.carbon.databridge.agent.exception.DataEndpointLoginException: Cannot borrow client for ssl://10.10.183.27:9714.
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:134)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:59)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointLoginException: Error while trying to login to data receiver :/10.10.183.27:9714
    at org.wso2.carbon.databridge.agent.endpoint.binary.BinaryDataEndpoint.login(BinaryDataEndpoint.java:50)
    at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:128)
    ... 6 more
This error is usually due to the Analytics configuration or the Throttling configuration.
Try disabling the Analytics publisher in the api-manager.xml file if it is enabled, or review the connection details under the DASServerURL element:
<Analytics>
<!-- Enable Analytics for API Manager -->
<Enabled>false</Enabled>
<!-- Server URL of the remote DAS/CEP server used to collect statistics. Must
be specified in protocol://hostname:port/ format.
An event can also be published to multiple Receiver Groups each having 1 or more receivers. Receiver
Groups are delimited by curly braces whereas receivers are delimited by commas.
Ex - Multiple Receivers within a single group
tcp://localhost:7612/,tcp://localhost:7613/,tcp://localhost:7614/
Ex - Multiple Receiver Groups with two receivers each
{tcp://localhost:7612/,tcp://localhost:7613},{tcp://localhost:7712/,tcp://localhost:7713/} -->
<DASServerURL>{tcp://localhost:7612}</DASServerURL>
<!--DASAuthServerURL>{ssl://localhost:7712}</DASAuthServerURL-->
<!-- Administrator username to login to the remote DAS server. -->
<DASUsername>${admin.username}</DASUsername>
<!-- Administrator password to login to the remote DAS server. -->
<DASPassword>${admin.password}</DASPassword>
</Analytics>
You can also try to disable the Advanced Throttling feature, or review the configuration of its ReceiverUrlGroup and AuthUrlGroup elements:
<ThrottlingConfigurations>
<EnableAdvanceThrottling>false</EnableAdvanceThrottling>
<DataPublisher>
<Enabled>true</Enabled>
<Type>Binary</Type>
<ReceiverUrlGroup>tcp://${carbon.local.ip}:${receiver.url.port}</ReceiverUrlGroup>
<AuthUrlGroup>ssl://${carbon.local.ip}:${auth.url.port}</AuthUrlGroup>
<Username>${admin.username}</Username>
<Password>${admin.password}</Password>
</DataPublisher>
</ThrottlingConfigurations>
I faced the same issue. Check the Traffic Manager log to see whether there are any exceptions. Killing and restarting the Traffic Manager helped me.