I am using Oozie through the Hue interface and I would like to get email alerts for any killed, failed, or long-running jobs.
Is there any way to do this?
Below is the component list:
Component   Version
Hue         2.6.1
HDP         2.3.6
Hadoop      2.7.1
Oozie       4.2.0
Ambari      2.6.0
You can use an email action like the one below:
<action name="ErrorHandler">
<email xmlns="uri:oozie:email-action:0.1">
<to>${notify}</to>
<subject>FAILURE - Automated email notification for process</subject>
<body>The upload process for <Process name> failed.</body>
</email>
<ok to="error"/>
<error to="error"/>
</action>
A few points to remember:
In job.properties, set the notify variable to the required email address.
Set the SMTP hostname and port in oozie-site.xml.
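For reference, a minimal sketch of those two pieces, using the standard Oozie email-action property names; the hostname, port, and addresses below are placeholders to adapt:
# job.properties
notify=alerts@example.com

<!-- oozie-site.xml -->
<property>
    <name>oozie.email.smtp.host</name>
    <value>smtp.example.com</value>
</property>
<property>
    <name>oozie.email.smtp.port</name>
    <value>25</value>
</property>
<property>
    <name>oozie.email.from.address</name>
    <value>oozie@example.com</value>
</property>
The Oozie server has to be restarted after changing oozie-site.xml so the email action picks up the SMTP settings.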
I get the following error when trying to execute a NETCONF command
in Robot Framework. Please let me know if I'm missing or doing something wrong. Thanks.
Testcase - Netconf Operation Command in Junos Router
    # Manually turn on NETCONF in the Juniper Networks router first:
    # command = set system services netconf ssh, commit
    # Juniper Networks router's IP address
    ${dev_ip} =    Set Variable    192.168.0.1
    ${netconf_cmd} =    Catenate    SEPARATOR=
    ...    echo '\\<rpc\\>\\n
    ...    \\<get-interface-information/\\>\\n
    ...    \\</rpc\\>\\n
    ...    ' | sshpass -p a_password ssh admin@${dev_ip} netconf
    ${result} =    Run Process    ${netconf_cmd}    shell=True
    Log To Console    ${\n}Netconf command output: $${result.stdout}${\n}
The output is not correct; it only captures the NETCONF server's initial greeting message instead of the reply to the data request.
Output:
Netconf command output: $<!-- No zombies were killed during the creation of this user interface -->
<!-- user regress, class j-superuser -->
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
<capability>urn:ietf:params:netconf:capability:candidate:1.0</capability>
<capability>urn:ietf:params:netconf:capability:confirmed-commit:1.0</capability>
<capability>urn:ietf:params:netconf:capability:validate:1.0</capability>
<capability>urn:ietf:params:netconf:capability:url:1.0?scheme=http,ftp,file</capability>
<capability>urn:ietf:params:xml:ns:netconf:base:1.0</capability>
<capability>urn:ietf:params:xml:ns:netconf:capability:candidate:1.0</capability>
<capability>urn:ietf:params:xml:ns:netconf:capability:confirmed-commit:1.0</capability>
<capability>urn:ietf:params:xml:ns:netconf:capability:validate:1.0</capability>
<capability>urn:ietf:params:xml:ns:netconf:capability:url:1.0?protocol=http,ftp,file</capability>
<capability>urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring</capability>
<capability>http://xml.juniper.net/netconf/junos/1.0</capability>
<capability>http://xml.juniper.net/dmi/system/1.0</capability>
</capabilities>
<session-id>93467</session-id>
</hello>
]]>]]>
<!-- netconf error: unknown command -->
<!-- session end at 2020-05-12 11:02:26 PDT -->
From the output I suspect something is wrong with the XML input. Please verify the RPC message that was sent; the response reports a NETCONF error.
The NETCONF session completed the hello exchange but failed while sending the RPC.
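One way to verify the RPC outside Robot Framework is to pipe it to the device directly. A minimal sketch using the same assumed address and credentials as in the question; printf avoids the literal backslashes and \n that a plain echo would pass through, and ]]>]]> is the NETCONF 1.0 end-of-message marker:
printf '<rpc><get-interface-information/></rpc>\n]]>]]>\n' | sshpass -p a_password ssh admin@192.168.0.1 netconf
If that returns an <rpc-reply> with interface data, the problem is in how the Robot test builds the command string rather than on the device side.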
I can't execute the AMI action FilterList in Asterisk 13.
https://wiki.asterisk.org/wiki/display/AST/Asterisk+13+ManagerAction_FilterList
I assume it is because I'm missing a module (an app or a res) in modules.conf, but I can't find which one.
Any idea?
CLI> manager show commands
Here is how you can check which module provides it:
[arheops@pro-sip]# grep FilterList /usr/src/asterisk-13.24.0/*/*c -Ri
/usr/src/asterisk-13.24.0/main/manager.c: <ref type="manager">FilterList</ref>
/usr/src/asterisk-13.24.0/main/manager.c: <manager name="FilterList" language="en_US">
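The grep above only turns up the documentation entries in main/manager.c. To see whether the action is actually registered on your running Asterisk, a quick check via the standard remote console (a sketch):
# list all registered manager actions and look for FilterList
asterisk -rx "manager show commands" | grep -i filterlist
# show the synopsis of that single action, if it is registered
asterisk -rx "manager show command FilterList"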
I have configured an ELK stack (Elasticsearch, Logstash, and Kibana) cluster as a centralized logging system fed by Filebeat. Now I have been asked to reconfigure it as EFK (Elasticsearch, Fluentd, and Kibana), still with Filebeat. I have disabled Logstash and installed Fluentd, but I'm not able to get Fluentd to accept input from Filebeat. I installed the Fluentd plugin for Filebeat and modified /etc/td-agent/td-agent.conf, but it does not seem to be working.
td-agent.conf
<source>
  @type beats
  tag record['@metadata']['beat']
  port 5044
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    # @type file
    @type elasticsearch_dynamic
    logstash_format true
    logstash_prefix ${tag_parts[0]}
    type_name ${record['type']}
  </store>
  <store>
    @type file
    logstash_format true
    logstash_prefix ${tag_parts[0]}
    type_name ${record['type']}
    path /var/log/td-agent/data_logs.*.log
  </store>
</match>
A source is an input, not an output, in Fluentd. You would want a match with the corresponding Fluentd tags, so that the events arriving from Filebeat are matched and shipped out to Elasticsearch.
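For what it's worth, a minimal sketch of the shape that usually works, assuming the fluent-plugin-beats input and the fluent-plugin-elasticsearch output; the tag, host, port, and index prefix are placeholders:
<source>
  @type beats          # fluent-plugin-beats accepts the protocol Filebeat's Logstash output speaks
  port 5044
  bind 0.0.0.0
  tag filebeat         # placeholder static tag, used for routing below
</source>

<match filebeat>
  @type elasticsearch  # fluent-plugin-elasticsearch
  host localhost       # assumed Elasticsearch endpoint; adjust for your cluster
  port 9200
  logstash_format true
  logstash_prefix filebeat
</match>
On the Filebeat side this assumes the usual output.logstash section pointing at the Fluentd host on port 5044, since the beats input understands the same protocol Logstash does.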
I'm trying to run a simple Sqoop import using Oozie from Hue (Cloudera VM). A few seconds after submitting, the job hangs forever with "Heart beat" messages. I did some searching and found this thread: https://community.cloudera.com/t5/Batch-Processing-and-Workflow/Oozie-launcher-never-ends/td-p/13330. I added the XML properties mentioned there to all of the yarn-site.xml files below, not knowing which specific file applies, but it made no difference; I'm still facing the same issue. Can anyone give some insight on this?
/etc/hive/conf.cloudera.hive/yarn-site.xml
/etc/hadoop/conf.empty/yarn-site.xml
/etc/hadoop/conf.pseudo/yarn-site.xml
/etc/spark/conf.cloudera.spark_on_yarn/yarn-conf/yarn-site.xml
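If it helps narrow down which of those copies is actually in effect: on CDH the active Hadoop configuration directory is managed through the Linux alternatives system, so a quick check like the following (a sketch for the quickstart VM) shows where /etc/hadoop/conf really points:
# show which Hadoop conf directory is currently selected
alternatives --display hadoop-conf
ls -l /etc/hadoop/conf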
Job log:
12480 [main] INFO org.apache.sqoop.mapreduce.ImportJobBase - Beginning import of order_items
13225 [main] WARN org.apache.sqoop.mapreduce.JobBase - SQOOP_HOME is unset. May not be able to find all job dependencies.
16314 [main] INFO org.apache.sqoop.mapreduce.db.DBInputFormat - Using read commited transaction isolation
18408 [main] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://quickstart.cloudera:8088/proxy/application_1484596399739_0002/
18409 [main] INFO org.apache.hadoop.mapreduce.Job - Running job: job_1484596399739_0002
25552 [main] INFO org.apache.hadoop.mapreduce.Job - Job job_1484596399739_0002 running in uber mode : false
25553 [main] INFO org.apache.hadoop.mapreduce.Job - map 0% reduce 0%
Heart beat
Heart beat
Workflow XML:
<workflow-app name="Oozie_Test1" xmlns="uri:oozie:workflow:0.5">
    <start to="sqoop-e57e"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="sqoop-e57e">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>import --m 1 --connect jdbc:mysql://quickstart.cloudera:3306/retail_db --username=retail_dba --password=cloudera --table order_items --hive-database sqoopimports --create-hive-table --hive-import --hive-table sqoop_hive_order_items</command>
            <file>/user/oozie/share/lib/mysql-connector-java-5.1.34-bin.jar#mysql-connector-java-5.1.34-bin.jar</file>
        </sqoop>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>
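For completeness, a sketch of the kind of job.properties such a workflow is usually submitted with; every value here is a placeholder based on quickstart VM defaults, and oozie.use.system.libpath=true is the setting that matters for Sqoop actions so the sharelib jars are found:
nameNode=hdfs://quickstart.cloudera:8020
jobTracker=quickstart.cloudera:8032
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/cloudera/oozie/Oozie_Test1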
Thanks
Mx
This thread helped me resolve the issue:
https://community.cloudera.com/t5/Batch-Processing-and-Workflow/Oozie-sqoop-action-in-CDH-5-2-Heart-beat-issue/td-p/22181/page/2
After getting past this error I got stuck on a "Failure to launch flume" issue, and this thread helped me fix it:
oozie Sqoop action fails to import data to hive
I have a program running on a remote host that I need to connect to, handshake with, and then listen to for messages. I have set up the following Camel routes:
<route>
    <from uri="netty:tcp://localhost:50001?decoders=#decoders&amp;sync=false" />
    <bean ref="TransformMessage" method="inboundDecoder" />
    <to uri="eventadmin:messages/aacus/inbound" />
</route>

<route>
    <from uri="eventadmin:messages/aacus/outbound" />
    <bean ref="TransformMessage" method="outboundEncoder" />
    <to uri="netty:tcp://192.168.0.111:50001?allowDefaultCodec=false&amp;sync=false" />
</route>
My question is: how do I make this work? If I establish the route using
<from uri="netty:tcp://192.168.0.111:50001?decoders=#decoders&amp;sync=false" />
it fails with a binding error.
How can I set up the connection to respond on a specific port without modifying the server?
This is not possible with either camel-mina or camel-netty at the time of writing. A consumer can only bind to a local server. There is a JIRA ticket at Apache to implement such a feature in the future: https://issues.apache.org/jira/browse/CAMEL-1077
Use the following workaround:
Instead of 192.168.0.111, use localhost.
Then install "socat" and start it as follows:
socat -s -u tcp4:192.168.0.111:50001 tcp4:localhost:50001
This will tunnel your remote connection to the local service you created with Camel/Netty.
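Putting the pieces together, a sketch of the wiring with the addresses and port from the question: the Camel consumer keeps binding locally, and socat opens a client connection to the remote program and relays its traffic one way into that local listener.
# Camel consumer stays bound locally, e.g.
#   <from uri="netty:tcp://localhost:50001?decoders=#decoders&amp;sync=false" />
# socat connects out to the remote host and forwards its data into the
# local consumer (-u = unidirectional, left to right; -s = continue on errors)
socat -s -u tcp4:192.168.0.111:50001 tcp4:localhost:50001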