I'm trying to run a simple Sqoop import using Oozie from Hue (Cloudera VM). A few seconds after submitting, the job hangs forever with a "Heart beat" issue. I did some searching and found this thread: https://community.cloudera.com/t5/Batch-Processing-and-Workflow/Oozie-launcher-never-ends/td-p/13330. I added the XML properties mentioned there to all of the yarn-site.xml files below, not knowing which specific file applies, but to no avail; I'm still facing the same issue. Can anyone give some insights on this?
/etc/hive/conf.cloudera.hive/yarn-site.xml
/etc/hadoop/conf.empty/yarn-site.xml
/etc/hadoop/conf.pseudo/yarn-site.xml
/etc/spark/conf.cloudera.spark_on_yarn/yarn-conf/yarn-site.xml
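For reference, the kind of yarn-site.xml tuning commonly suggested for this symptom looks like the sketch below (illustrative values only, not a reproduction of the linked thread; the idea is to stop the Oozie launcher's ApplicationMaster from grabbing every container on a single-node VM). On the QuickStart VM, the active file is normally the one /etc/hadoop/conf points at (an alternatives symlink, usually conf.pseudo):
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>2048</value>
</property>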
job log
'12480 [main] INFO org.apache.sqoop.mapreduce.ImportJobBase - Beginning import of order_items
13225 [main] WARN org.apache.sqoop.mapreduce.JobBase - SQOOP_HOME is unset. May not be able to find all job dependencies.
16314 [main] INFO org.apache.sqoop.mapreduce.db.DBInputFormat - Using read commited transaction isolation
18408 [main] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://quickstart.cloudera:8088/proxy/application_1484596399739_0002/
18409 [main] INFO org.apache.hadoop.mapreduce.Job - Running job: job_1484596399739_0002
25552 [main] INFO org.apache.hadoop.mapreduce.Job - Job job_1484596399739_0002 running in uber mode : false
25553 [main] INFO org.apache.hadoop.mapreduce.Job - map 0% reduce 0%
Heart beat
Heart beat
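When the log stalls at "Heart beat" like this, a common cause on a single-node VM is that the Oozie launcher job is holding the only available container, so the child MapReduce job never gets scheduled. One way to confirm this (a diagnostic sketch, not part of the original post):
# If the launcher shows RUNNING while the Sqoop child job sits in ACCEPTED,
# the cluster has no containers left for the child job.
yarn application -list -appStates RUNNING,ACCEPTED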
workflow XML
<workflow-app name="Oozie_Test1" xmlns="uri:oozie:workflow:0.5">
<start to="sqoop-e57e"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="sqoop-e57e">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>import --m 1 --connect jdbc:mysql://quickstart.cloudera:3306/retail_db --username=retail_dba --password=cloudera --table order_items --hive-database sqoopimports --create-hive-table --hive-import --hive-table sqoop_hive_order_items</command>
<file>/user/oozie/share/lib/mysql-connector-java-5.1.34-bin.jar#mysql-connector-java-5.1.34-bin.jar</file>
</sqoop>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>
Thanks
Mx
This thread helped me to resolve the issue:
https://community.cloudera.com/t5/Batch-Processing-and-Workflow/Oozie-sqoop-action-in-CDH-5-2-Heart-beat-issue/td-p/22181/page/2
After getting past this error I got stuck on a "Failure to launch flume" issue, and this thread helped me fix it:
oozie Sqoop action fails to import data to hive
I am running a Corda network on Kubernetes (Corda version 4.4) and I am trying to install and run a CorDapp.
The CorDapp I am trying to run is the Heartbeat one (from the GitHub Corda samples repository).
But whenever I try to start the flow using the command start StartHeartbeatFlow,
I get the following error message:
[INFO] 11:00:32+0200 [pool-2-thread-11] shell.StartShellCommand.main - Executing command "start StartHeartbeatFlow <no arguments>",
start StartHeartbeatFlow: exception: com.heartbeat.StartHeartbeatFlow
Tue Apr 07 11:00:32 CEST 2020>>> [ERROR] 11:00:32+0200 [pool-2-thread-11] command.CRaSHSession.execute - Error while evaluating request 'start StartHeartbeatFlow' start StartHeartbeatFlow: exception: com.heartbeat.StartHeartbeatFlow [errorCode=1oe81or, moreInformationAt=https://errors.corda.net/OS/4.4/1oe81or]
Which doesn't really help me figure out how to solve the issue.
Running flow list does list the StartHeartbeatFlow, so it's not an issue with the installation of the CorDapp.
Has anyone encountered the same kind of issue?
Thanks!
Edit: these are the logs from the Corda node when I execute the flow start StartHeartbeatFlow command.
corda#corda-node-corda-node-0:~/logs$ tail -f corda-node.log | grep -A 10 -B 10 "heartbeat"
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] realm.AuthenticatingRealm. - Looked up AuthenticationInfo [rpcuser] from doGetAuthenticationInfo
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] realm.AuthenticatingRealm. - AuthenticationInfo caching is disabled for info [rpcuser]. Submitted token: [org.apache.shiro.authc.UsernamePasswordToken - rpcuser, rememberMe=false].
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] credential.SimpleCredentialsMatcher. - Performing credentials equality check for tokenCredentials of type [[C and accountCredentials of type [java.lang.String]
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] credential.SimpleCredentialsMatcher. - Both credentials arguments can be easily converted to byte arrays. Performing array equals comparison
[DEBUG] 2020-04-07T13:21:09,767Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] authc.AbstractAuthenticator. - Authentication successful for token [org.apache.shiro.authc.UsernamePasswordToken - rpcuser, rememberMe=false]. Returned account [rpcuser]
[DEBUG] 2020-04-07T13:21:09,768Z [Thread-8 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] artemis.BrokerJaasLoginModule. - Login for rpcuser succeeded
[DEBUG] 2020-04-07T13:21:09,771Z [Thread-12 (ActiveMQ-client-global-threads)] rpc.RPCServer. - -> RPC by rpcuser -> registeredFlows
[DEBUG] 2020-04-07T13:21:09,772Z [Thread-12 (ActiveMQ-client-global-threads)] rpc.RPCServer. - Arguments: [] {actor_id=rpcuser, actor_owning_identity=OU=Regular Node, O=organization, L=Brussels, C=BE, actor_store_id=NODE_CONFIG, invocation_id=aac88106-cf60-4c63-b1b9-c5fac224b89a, invocation_timestamp=2020-04-07T13:21:09.771Z, origin=rpcuser, session_id=df6cc401-6f9f-41b5-9a18-790c28e33b06, session_timestamp=2020-04-07T13:11:30.204Z}
[DEBUG] 2020-04-07T13:21:09,772Z [rpc-server-handler-pool-1] realm.AuthorizingRealm. - No authorizationCache instance set. Checking for a cacheManager... {actor_id=rpcuser, actor_owning_identity=OU=Regular Node, O=organization, L=Brussels, C=BE, actor_store_id=NODE_CONFIG, invocation_id=aac88106-cf60-4c63-b1b9-c5fac224b89a, invocation_timestamp=2020-04-07T13:21:09.771Z, origin=rpcuser, session_id=df6cc401-6f9f-41b5-9a18-790c28e33b06, session_timestamp=2020-04-07T13:11:30.204Z}
[DEBUG] 2020-04-07T13:21:09,772Z [rpc-server-handler-pool-1] realm.AuthorizingRealm. - No cache or cacheManager properties have been set. Authorization cache cannot be obtained. {actor_id=rpcuser, actor_owning_identity=OU=Regular Node, O=organization, L=Brussels, C=BE, actor_store_id=NODE_CONFIG, invocation_id=aac88106-cf60-4c63-b1b9-c5fac224b89a, invocation_timestamp=2020-04-07T13:21:09.771Z, origin=rpcuser, session_id=df6cc401-6f9f-41b5-9a18-790c28e33b06, session_timestamp=2020-04-07T13:11:30.204Z}
[DEBUG] 2020-04-07T13:21:09,773Z [rpc-server-sender] rpc.RPCServer. - <- RPC <- RpcReply(id=10ea96d9-5c19-4200-a64e-1eb3903835ce, timestamp: 2020-04-07T13:21:09.748Z, entityType: Invocation, result=Success([com.heartbeat.StartHeartbeatFlow, net.corda.core.flows.ContractUpgradeFlow$Authorise, net.corda.core.flows.ContractUpgradeFlow$Deauthorise, net.corda.core.flows.ContractUpgradeFlow$Initiate]), deduplicationIdentity=9c974c01-08af-44d0-bdef-c609efee11a8)
[DEBUG] 2020-04-07T13:21:09,775Z [Thread-62 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] artemis.BrokerJaasLoginModule. - Processing login for SystemUsers/NodeRPC
[DEBUG] 2020-04-07T13:21:09,775Z [Thread-62 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#2f4a89fa)] artemis.BrokerJaasLoginModule. - Login for SystemUsers/NodeRPC succeeded
[DEBUG] 2020-04-07T13:21:09,809Z [Network Map Updater Thread-1] pool.PoolBase. - HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection#6fa7296c
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - pollDirectory /opt/corda/additional-node-infos
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - Examining /opt/corda/additional-node-infos/nodeInfo-FEBE485DF04D12B91F70740AC3EDDDB1A0C5058B017C6DD6046A1AF37AB1687D
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - Read 0 NodeInfo files from /opt/corda/additional-node-infos
[DEBUG] 2020-04-07T13:21:10,560Z [RxIoScheduler-2] network.NodeInfoWatcher. - Number of removed NodeInfo files 0
[DEBUG] 2020-04-07T13:21:11,824Z [Network Map Updater Thread-1] pool.PoolBase. - HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection#6fa7296c
[DEBUG] 2020-04-07T13:21:13,844Z [Network Map Updater Thread-1] pool.PoolBase. - HikariPool-1 - Reset (autoCommit) on connection org.postgresql.jdbc.PgConnection#6fa7296c
[DEBUG] 2020-04-07T13:21:15,559Z [RxIoScheduler-2] network.NodeInfoWatcher. - pollDirectory /opt/corda/additional-node-infos
Can you update your question with more stack trace? Are there any other errors in your node's log file? I'm asking because I just tried the example and it worked for me.
Here's what I did:
Built the Java version:
# Browse to the Java files.
cd /heartbeat/contracts-java
# Build the nodes (Notary and PartyA).
./../gradlew deployNodes
Start the nodes (I don't like using the runnodes task, so I start each node individually):
# Terminal 1 (Notary).
cd /heartbeat/contracts-java/build/nodes/Notary
# Start the Notary.
java -jar corda.jar
# Terminal 2 (PartyA).
cd /heartbeat/contracts-java/build/nodes/PartyA
# Start PartyA.
java -jar corda.jar
Start the flow inside of PartyA's terminal. Notice that I use flow start instead of just start (like in your case); it's worth trying flow start, even though both should work:
flow start StartHeartbeatFlow
You will see that the flow completed (i.e. it created the SchedulableState that will start the flow again, which leads to an endless loop until you shut down the node).
Now I can watch that flow being called again and again by typing the below in PartyA's terminal:
flow watch
I could invoke the flow from the standalone shell. I was having some weird issues with the /cordapp folder holding my CorDapp locally; I deleted it and recreated it, and now it works.
I have the following problem with WSO2 API Manager, and I suspect that it could be related to the certificate of the final HTTPS endpoint that was registered.
Let me explain my situation in detail:
First thing: I changed this section of the repository/conf/axis2/axis2.xml file so that the registered endpoint is contacted over HTTPS on port 443 instead of the default 8243 (at the moment I can't change the registered endpoint port and I can't install a reverse proxy, but I have to test whether the system works as expected; basically I need to call the final endpoint on port 443 and obtain the JSON response).
The original section that I changed is:
<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
<parameter name="port" locked="false">8243</parameter>
<parameter name="non-blocking" locked="false">true</parameter>
<!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
<!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
<parameter name="httpGetProcessor" locked="false">org.wso2.carbon.mediation.transport.handlers.PassThroughNHttpGetProcessor</parameter>
<parameter name="keystore" locked="false">
<KeyStore>
<Location>repository/resources/security/wso2carbon.jks</Location>
<Type>JKS</Type>
<Password>wso2carbon</Password>
<KeyPassword>wso2carbon</KeyPassword>
</KeyStore>
</parameter>
<parameter name="truststore" locked="false">
<TrustStore>
<Location>repository/resources/security/client-truststore.jks</Location>
<Type>JKS</Type>
<Password>wso2carbon</Password>
</TrustStore>
</parameter>
<!--<parameter name="SSLVerifyClient">require</parameter>
supports optional|require or defaults to none -->
</transportReceiver>
I changed it in this way:
<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
<parameter name="port" locked="false">443</parameter>
<parameter name="non-blocking" locked="false">true</parameter>
<!--parameter name="bind-address" locked="false">hostname or IP address</parameter-->
<!--parameter name="WSDLEPRPrefix" locked="false">https://apachehost:port/somepath</parameter-->
<parameter name="httpGetProcessor" locked="false">org.wso2.carbon.mediation.transport.handlers.PassThroughNHttpGetProcessor</parameter>
<parameter name="keystore" locked="false">
<KeyStore>
<Location>repository/resources/security/wso2carbon.jks</Location>
<Type>JKS</Type>
<Password>wso2carbon</Password>
<KeyPassword>wso2carbon</KeyPassword>
</KeyStore>
</parameter>
<parameter name="truststore" locked="false">
<TrustStore>
<Location>repository/resources/security/client-truststore.jks</Location>
<Type>JKS</Type>
<Password>wso2carbon</Password>
</TrustStore>
</parameter>
<!--<parameter name="SSLVerifyClient">require</parameter>
supports optional|require or defaults to none -->
</transportReceiver>
Basically, I only changed the default 8243 port to 443, the standard HTTPS port used to expose the final endpoint.
Now, executing the API from the Store portal, I obtain a cURL command that targets the expected port 443:
curl -k -X POST "https://ENDPOINT_IP_ADDRESS:443/puntualitest/v1.0.0/puntuali" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer XXXXX-YYYY-ZZZZ-KKKK-WWWW" -d "{ \"header\": { \"msgUid\": \"a36bea3f-6dc6-49d7-9376-f31692930ba9\", \"timestamp\": 1567060509108, \"metadata\": { \"TRACKER_BIZID_REV_CODICE\": \"7175\", \"TRACKER_BIZID_REV_NUMERO\": \"545/2019\" }, \"codApplication\": null, \"codEnte\": null, \"invocationContext\": null, \"caller\": \"SRW\", \"user\": null, \"service\": \"crediti.invioPosizioneCreditoria\" }, \"body\": { \"#dto\": \"binary\", \"content\": \"PD94bWwgdmVyc2..............................+\" }}"
This seems correct: performing the previous cURL command from the bash shell of the machine on which WSO2 API Manager is installed, I obtain the following JSON response from the API:
{"timestamp":"2020-02-29T12:13:54.630+0000","status":404,"error":"Not Found","message":"No message available","path":"/puntualitest/v1.0.0/puntuali"}
It contains an error message, but I think that is caused by a "wrong" payload; anyway, it seems that the final registered API endpoint received my request, processed it, and returned a JSON message (is this reasoning correct?).
The problem is that, trying to perform the request directly from inside the Store portal of WSO2 API Manager, I obtain the following error message:
<am:fault xmlns:am="http://wso2.org/apimanager">
<am:code>101500</am:code>
<am:type>Status report</am:type>
<am:message>Runtime Error</am:message>
<am:description>Error in Sender</am:description>
</am:fault>
Reading the documentation, it seems to me that error code 101500 could be related to a certificate problem:
WSO2 ESB 4.9.0: what means error 101500
The previous link refers to the ESB product and not API Manager, but I suspect that the problem could be the same. I suspect it also because in my log file (/usr/lib/wso2/wso2am/2.6.0/repository/logs/wso2carbon.log), when I perform the previous request from the Store portal, I obtain the following error message:
TID: [-1] [] [2020-02-29 13:34:58,686] ERROR {org.apache.synapse.transport.passthru.SourceHandler} - I/O error: Received fatal alert: certificate_unknown {org.apache.synapse.transport.passthru.SourceHandler}
javax.net.ssl.SSLException: Received fatal alert: certificate_unknown
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1647)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1615)
at sun.security.ssl.SSLEngineImpl.recvAlert(SSLEngineImpl.java:1781)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:1070)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:896)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:766)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doUnwrap(SSLIOSession.java:245)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:280)
at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(SSLIOSession.java:410)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:119)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:159)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:338)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
at java.lang.Thread.run(Thread.java:748)
TID: [-1234] [] [2020-02-29 13:34:58,827] WARN {org.wso2.carbon.apimgt.keymgt.service.thrift.APIKeyValidationServiceImpl} - Invalid session id for thrift authenticator. {org.wso2.carbon.apimgt.keymgt.service.thrift.APIKeyValidationServiceImpl}
TID: [-1234] [] [2020-02-29 13:34:58,829] ERROR {org.wso2.carbon.apimgt.keymgt.service.thrift.APIKeyValidationServiceImpl} - Error in invoking validate key via thrift.. {org.wso2.carbon.apimgt.keymgt.service.thrift.APIKeyValidationServiceImpl}
TID: [-1234] [] [2020-02-29 13:34:58,830] WARN {org.wso2.carbon.apimgt.gateway.handlers.security.thrift.ThriftKeyValidatorClient} - Login failed.. Authenticating again.. {org.wso2.carbon.apimgt.gateway.handlers.security.thrift.ThriftKeyValidatorClient}
TID: [-1234] [] [2020-02-29 13:34:58,846] INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - 'admin#carbon.super [-1234]' logged in at [2020-02-29 13:34:58,845+0100] from IP address {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [-1] [] [2020-02-29 13:34:58,941] ERROR {org.apache.synapse.transport.passthru.TargetHandler} - I/O error: General SSLEngine problem {org.apache.synapse.transport.passthru.TargetHandler}
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1521)
at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:528)
at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1197)
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1165)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doWrap(SSLIOSession.java:237)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:271)
at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(SSLIOSession.java:410)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:119)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:159)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:338)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1709)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:318)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:970)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:967)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1459)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doRunTask(SSLIOSession.java:255)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:293)
... 9 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:302)
at sun.security.validator.Validator.validate(Validator.java:262)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1626)
... 17 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392)
... 23 more
TID: [-1234] [] [2020-02-29 13:34:58,948] WARN {org.apache.synapse.endpoints.EndpointContext} - Endpoint : SirPuntuali--vv1.0.0_APIproductionEndpoint with address https://ENDPOINT_IP_ADDRESS/cmd/j/ will be marked SUSPENDED as it failed {org.apache.synapse.endpoints.EndpointContext}
TID: [-1234] [] [2020-02-29 13:34:58,948] WARN {org.apache.synapse.endpoints.EndpointContext} - Suspending endpoint : SirPuntuali--vv1.0.0_APIproductionEndpoint with address https://ENDPOINT_IP_ADDRESS/cmd/j/ - last suspend duration was : 30000ms and current suspend duration is : 30000ms - Next retry after : Sat Feb 29 13:35:28 CET 2020 {org.apache.synapse.endpoints.EndpointContext}
TID: [-1234] [] [2020-02-29 13:34:58,949] INFO {org.apache.synapse.mediators.builtin.LogMediator} - STATUS = Executing default 'fault' sequence, ERROR_CODE = 101500, ERROR_MESSAGE = Error in Sender {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1] [] [2020-02-29 13:34:58,979] INFO {org.wso2.carbon.databridge.core.DataBridge} - user admin connected {org.wso2.carbon.databridge.core.DataBridge}
So it seems that WSO2 API Manager is trying to send the request to the correct endpoint, but there is a certificate problem. Is this reasoning correct?
If this is the problem, I have some doubts about what to do to solve it:
Do I have to obtain a certificate generated on the server hosting the final endpoint and install it on my WSO2 API Manager, or, on the contrary, do I have to generate a certificate on the WSO2 API Manager machine and provide it to the machine hosting the final API?
Reading the documentation, it seems to me that I have to obtain a certificate from the API hosting machine and upload it into the WSO2 API Manager application (as shown here: https://apim.docs.wso2.com/en/latest/learn/design-api/endpoints/certificates/). But I am not sure of this assumption.
Is a self-signed certificate OK? If so, what is the procedure to generate it, and what kind of certificate do I need to obtain? (I have to provide precise information to the people working on the final API machine.)
Probably a trivial question: the Store portal generates a cURL request using the -k option, which ignores certificate validation (in fact, performing it directly in the shell seems to work fine). Why is sending the request from the Store portal not working? I suspect that it generates a cURL request for test purposes, but that under the hood the API Manager is not performing a simple cURL request.
The behavior is a bit strange. Just to explain what happens here, there are 2 HTTP calls involved:
The client (curl or UI) to the gateway
The gateway to the backend
As per the 2nd error trace, the problem is with the connection between the gateway and the backend. Answering your first question: to resolve this, you have to get the certificate of the backend endpoint and install it in the APIM's client-truststore.jks. You can either do it per API via the UI, or install it directly into the jks file (a sketch of the direct route follows).
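A minimal sketch of the direct import, assuming the default wso2carbon store password; ENDPOINT_IP_ADDRESS and the backend-endpoint alias are placeholders for your backend:
# Fetch the backend's certificate (replace host and port with your endpoint's).
openssl s_client -connect ENDPOINT_IP_ADDRESS:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > backend.pem
# Import it into the API Manager's client truststore (default password: wso2carbon).
keytool -import -alias backend-endpoint -file backend.pem -keystore repository/resources/security/client-truststore.jks -storepass wso2carbon
# Restart the API Manager so the gateway picks up the new trust entry.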
However, since this is independent of the client you use, you should see the same behavior for both cURL and UI. I don't get how it works for cURL.
Answering your 3rd question, the UI does not use curl inside to make the call to the gateway. And it has nothing to do with the above error either.
I am using Oozie on the Hue interface, and I would like to get email alerts for any killed/failed/long-running jobs.
Is there any way to get this?
Below is the component list:
Component Version
Hue 2.6.1
HDP 2.3.6
Hadoop 2.7.1
Oozie 4.2.0
Ambari 2.6.0
You can use the email action below:
<action name="ErrorHandler">
<email xmlns="uri:oozie:email-action:0.1">
<to>${notify}</to>
<subject>FAILURE - Automated email notification for process</subject>
<body>The upload process for ${processName} failed.</body>
</email>
<ok to="error"/>
<error to="error"/>
</action>
A few points to remember (see the sketch below):
In job.properties, set the required email address for the notify variable.
Set the SMTP hostname and port in oozie-site.xml.
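A minimal sketch of those two pieces, with hypothetical values (smtp.example.com, team@example.com, and the processName variable referenced in the email body above are all illustrative):
# job.properties
notify=team@example.com
processName=daily_upload
<!-- oozie-site.xml: SMTP settings for the email action -->
<property>
<name>oozie.email.smtp.host</name>
<value>smtp.example.com</value>
</property>
<property>
<name>oozie.email.smtp.port</name>
<value>25</value>
</property>
<property>
<name>oozie.email.from.address</name>
<value>oozie@example.com</value>
</property>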
Error: IO_ERROR : java.io.IOException: Error while connecting Oozie server. No of retries = 4. Exception = Connection refused
1) Check that the Oozie service is running by typing:
oozie admin -oozie http://OOZIE_IP:11000/oozie -status
If it doesn't show System mode: NORMAL, Oozie has not started correctly.
2) Set the following in core-site.xml in the Hadoop config:
<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.oozie.groups</name>
<value>*</value>
</property>
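After changing core-site.xml, HDFS and YARN need to pick up the new proxyuser settings. A service restart works, or (a sketch, assuming a reasonably recent Hadoop) they can be refreshed in place:
# Refresh the proxyuser configuration without a full restart.
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration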
Hope this works
On HDP 2.3.4, using Oozie 4.2.0 and Sqoop 1.4.2, I'm trying to create a coordinator app that will execute Sqoop jobs on a daily basis. I need the sqoop action to execute saved jobs because these are incremental imports.
I've configured sqoop-site.xml and started the sqoop-metastore, and I'm able to create, list, and delete jobs via the command line, but the workflow encounters the error: Cannot restore job: streamsummary_incremental
stderr
Sqoop command arguments :
job
--exec
streamsummary_incremental
Fetching child yarn jobs
tag id : oozie-26fcd4dc0afd8f53316fc929ac38eae2
2016-02-03 09:46:47,193 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at <myHost>/<myIP>:8032
Child yarn jobs are found -
=================================================================
>>> Invoking Sqoop command line now >>>
2241 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2016-02-03 09:46:47,404 WARN [main] tool.SqoopTool (SqoopTool.java:loadPluginsFromConfDir(177)) - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2263 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.6.2.3.4.0-3485
2016-02-03 09:46:47,426 INFO [main] sqoop.Sqoop (Sqoop.java:<init>(97)) - Running Sqoop version: 1.4.6.2.3.4.0-3485
2552 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage - Cannot restore job: streamsummary_incremental
2016-02-03 09:46:47,715 ERROR [main] hsqldb.HsqldbJobStorage (HsqldbJobStorage.java:read(254)) - Cannot restore job: streamsummary_incremental
2552 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage - (No such job)
2016-02-03 09:46:47,715 ERROR [main] hsqldb.HsqldbJobStorage (HsqldbJobStorage.java:read(255)) - (No such job)
2553 [main] ERROR org.apache.sqoop.tool.JobTool - I/O error performing job operation: java.io.IOException: Cannot restore missing job streamsummary_incremental
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.read(HsqldbJobStorage.java:256)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:198)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:283)
at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:197)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:177)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:241)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
sqoop-site.xml
<property>
<name>sqoop.metastore.client.enable.autoconnect</name>
<value>false</value>
<description>If true, Sqoop will connect to a local metastore
for job management when no other metastore arguments are
provided.
</description>
</property>
<property>
<name>sqoop.metastore.client.autoconnect.url</name>
<value>jdbc:hsqldb:hsql://<myhost>:12345</value>
<description>The connect string to use when connecting to a
job-management metastore. If unspecified, uses ~/.sqoop/.
You can specify a different path here.
</description>
</property>
<property>
<name>sqoop.metastore.client.autoconnect.username</name>
<value>SA</value>
<description>The username to bind to the metastore.
</description>
</property>
<property>
<name>sqoop.metastore.client.autoconnect.password</name>
<value></value>
<description>The password to bind to the metastore.
</description>
</property>
<property>
<name>sqoop.metastore.server.location</name>
<value>/tmp/sqoop-metastore/shared.db</value>
<description>Path to the shared metastore database files.
If this is not set, it will be placed in ~/.sqoop/.
</description>
</property>
<property>
<name>sqoop.metastore.server.port</name>
<value>12345</value>
<description>Port that this metastore should listen on.
</description>
</property>
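For context, the shared metastore that this configuration points at is served by the sqoop-metastore process itself. A quick sanity check that it is up and serving jobs (assuming the same host and port as above; note the /sqoop database name, which the resolution below also needed):
# Start the shared metastore in the background, if it is not already running as a service.
sqoop metastore &
# List saved jobs through the shared metastore; streamsummary_incremental should appear.
sqoop job --meta-connect jdbc:hsqldb:hsql://<myhost>:12345/sqoop --list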
workflow.xml
<action name="sqoop-import-job">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${outputDir}"/>
</prepare>
<arg>job</arg>
<arg>--exec</arg>
<arg>${jobId}</arg>
</sqoop>
<ok to="hive-load"/>
<error to="kill-sqoop"/>
</action>
Additional info:
We're only running a single-node cluster.
Only Sqoop Client is
installed.
I'm thinking maybe Oozie isn't able to connect to the metastore because we don't have a Sqoop server? Could anyone confirm this? If not, could I have missed anything else?
Thanks!
I managed to resolve this issue with the help of @SamsonScharfrichter in the comments. I explicitly passed the metastore URL in the Oozie workflow and it worked:
<arg>job</arg>
<arg>--meta-connect</arg>
<arg>jdbc:hsqldb:hsql://<myhost>:12345/sqoop</arg>
<arg>--exec</arg>
<arg>myjob</arg>
It seems that Oozie tries to connect to a local metastore, because it doesn't have a copy of sqoop-site.xml and so it doesn't know the metastore URL (even though I'm running a single-node configuration).
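To avoid hard-coding the URL in the workflow, the same fix can be parameterized through job.properties (a sketch; metastoreUri is a name I'm introducing, not from the original post):
# job.properties
metastoreUri=jdbc:hsqldb:hsql://<myhost>:12345/sqoop
The sqoop action arguments then become:
<arg>job</arg>
<arg>--meta-connect</arg>
<arg>${metastoreUri}</arg>
<arg>--exec</arg>
<arg>${jobId}</arg>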