I created an Oozie bundle consisting of several coordinators and their respective workflows. The bundle ran fine previously, but after adding a new workflow it stopped working completely.
For simplification and debugging I stripped the bundle down to the absolute minimum consisting of one coordinator starting one workflow.
The XMLs seem to be valid (validated with oozie), and the coordinator and workflow each work fine on their own (with matching properties files).
The problem is that I do not get any meaningful errors from -dryrun or run.
Dryrun produces the error Error: E1310 : E1310: Bundle Job submission Error: [null], which does not lead me anywhere.
Just running the job results in the bundle being submitted and marked as "FAILED" with no coordinator started, so I get no error reports from the coordinator to work with.
After experimenting with the coordinator, the workflow, and the propagation of variables from the bundle.properties file down to the coordinator and workflow, I found a couple of important things that solved my problem in the end:
-dryrun on a bundle does not seem to work as intended. The above error persisted even after the bundle was fixed and ran fine in Oozie. I could not find anything documenting that dryrun is unsupported on bundles, but the [null] suggests that dryrun cannot handle them.
HDFS paths have to include the port number to work correctly. I had several paths in the format hdfs://nodename/hdfs/dir/... that did not seem to be propagated correctly; after changing them to the format hdfs://nodename:8020/hdfs/dir/..., they worked fine.
I had missed a couple of variables in the bundle.xml that were used in the coordinator.xml. Oozie did not report this at all; instead the coordinator was simply never started. The bundle is just listed by -info with status "RUNNING" and no scheduled coordinators. This is pretty hard to debug because of the missing feedback from Oozie. Make sure to test your coordinator with its own properties file, and use that working properties file as a reference to check the bundle.properties and bundle.xml for any missed variables.
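To illustrate the last two points, here is a minimal sketch of a bundle.properties / bundle.xml pair that uses a full HDFS URI (with port) and passes every variable the coordinator needs down from the bundle. All host names, ports, paths and variable names here are made-up examples, not taken from my actual setup:

```
# bundle.properties (sketch) -- note the explicit :8020 on the HDFS URI
nameNode=hdfs://namenode.example.com:8020
jobTracker=jobtracker.example.com:8032
oozie.bundle.application.path=${nameNode}/user/me/bundles/my-bundle
```

```xml
<!-- bundle.xml (sketch) -- every variable referenced in coordinator.xml
     must be passed down explicitly, or the coordinator silently never starts -->
<bundle-app name="my-bundle" xmlns="uri:oozie:bundle:0.2">
  <coordinator name="my-coord">
    <app-path>${nameNode}/user/me/coordinators/my-coord</app-path>
    <configuration>
      <property><name>nameNode</name><value>${nameNode}</value></property>
      <property><name>jobTracker</name><value>${jobTracker}</value></property>
    </configuration>
  </coordinator>
</bundle-app>
```

If a property used in coordinator.xml is missing from that configuration block, the bundle shows "RUNNING" with no scheduled coordinators, exactly as described above.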
I've been trying to merge a source branch into a target branch, but I have consistently gotten the following error message on my failed job(s):
$ firebase use project_name --token $FIREBASE_TOKEN
Error: Invalid project selection, please verify project project-name exists and you have access.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
I have followed the advice from this thread and logged out/in from Firebase to use the project again, which unfortunately hasn't worked:
firebase logout
firebase login
firebase use project_name
I've triple-checked that I'm using the correct project name in Firebase, rather than the name of my Gitlab repo.
Unsure if it's related, but when setting up the merge request, GitLab notes that "The source branch is 3 commits behind the target branch." I don't believe this is part of the issue, but it's worth bringing up.
Merging branches has never been an issue until today, and this is the first time I'm seeing this particular error cause failed jobs. Any advice is appreciated.
EDIT:
I added a screenshot of the projects list, showing I've logged into the necessary project by Project ID on Firebase. Everything should be connected, but I can't see what I'm missing that's causing the failed jobs.
UPDATE:
I've added firebase projects:list within the pipeline editor and get the following error message.
The issue I have here is that I cannot find a firebase-debug.log file. I've searched for ways to locate it, and tried to recreate it by commenting out # firebase-debug.log* in my .gitignore file and running firebase init, following solutions from posts like this. Any thoughts on the original merging issue, or on how to find firebase-debug.log to move closer to a solution, are greatly appreciated.
The setup appears incorrect for CI; e.g. one doesn't need to pass $FIREBASE_TOKEN explicitly, since the CLI picks up the FIREBASE_TOKEN environment variable on its own.
The issue might even stem from using --token once and then not passing it anymore.
Please refer to the user manual: https://github.com/firebase/firebase-tools/#general
Using a service account might be the least troublesome.
There's also a login --reauth option; and login:ci.
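A minimal sketch of the service-account approach in GitLab CI. The job name, image, and the SERVICE_ACCOUNT_KEY variable (a file-type CI/CD variable holding the service account's JSON key) are assumptions for illustration, not taken from the question; the CLI does honor GOOGLE_APPLICATION_CREDENTIALS for authentication:

```yaml
# .gitlab-ci.yml (sketch) -- SERVICE_ACCOUNT_KEY is a hypothetical file-type
# CI variable containing the JSON key of a service account with access
# to the Firebase project
deploy:
  image: node:18
  script:
    - npm install -g firebase-tools
    # point the CLI at the service account key instead of using --token
    - export GOOGLE_APPLICATION_CREDENTIALS="$SERVICE_ACCOUNT_KEY"
    - firebase use your-project-id
    - firebase deploy
```

With this setup there is no token to expire or to forget on later commands, which sidesteps the "used --token once and then not anymore" failure mode.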
I am trying to install OpenStack through Ansible for a single node using the all-in-one (AIO) setup.
When I run the setup-everything.yml file, I receive the following error:
ERROR: config_template is not a legal parameter in an Ansible task or handler
Can you please help with this issue?
I know this answer is a little late, but I found this question and thought I'd try to help others who run into the same problem.
It's very likely that the system was not bootstrapped. We see this error from time to time when the action plugin is not present on the system. With OpenStack-Ansible you need to retrieve the roles and plugins listed in the ansible-role-requirements.txt file. After you've cloned the software, the first step in deployment is usually running ./scripts/bootstrap-ansible.sh, which installs Ansible into a venv, retrieves your roles, libraries, and plugins, and then creates the openstack-ansible CLI wrapper. You can also simply use the ansible-galaxy command with ansible-role-requirements.txt if you don't want to run that script. Once you have the roles and libs, you will likely no longer see that error. More documentation on getting started can be found here: https://docs.openstack.org/developer/openstack-ansible/developer-docs/quickstart-aio.html
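The steps above can be sketched roughly as follows; the clone URL and target directory are examples and may differ for your release, so treat this as an outline rather than exact commands:

```
# sketch -- assumes the standard OpenStack-Ansible repo layout
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
./scripts/bootstrap-ansible.sh

# or, without the bootstrap script, fetch just the roles:
ansible-galaxy install -r ansible-role-requirements.txt
```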
You can find the source code of the config_template module here: https://github.com/openstack/openstack-ansible-plugins/blob/master/action/_v2_config_template.py should you have issues with the module itself, or jump into the #openstack-ansible channel on Freenode, where there's generally someone available to help out.
config_template is a custom module developed by the OpenStack team. If you get ERROR: config_template is not a legal parameter in an Ansible task or handler, it usually means that Ansible cannot find the module; it could be an indentation/syntax error as well. Check whether the config_template module is on the path in your ANSIBLE_LIBRARY environment variable. You can also pass the path on the command line with --module-path.
Also, the pull request for this module was closed by the Ansible developers, so it is likely that you can find similar functionality in a module supported by the Ansible developers.
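If you keep the plugins checked out locally, you can also point Ansible at them in ansible.cfg instead of the environment variable. The paths below are examples; use wherever you cloned openstack-ansible-plugins:

```
# ansible.cfg (sketch)
[defaults]
library        = /opt/openstack-ansible-plugins/library
action_plugins = /opt/openstack-ansible-plugins/action
```

This is equivalent to setting ANSIBLE_LIBRARY or passing --module-path on every invocation, but keeps the setting with the project.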
I am using Hue to run my workflow, which uses parameters. I would like the workflow to pick up parameters from a job.properties file without prompting the user. I intend to generate/modify this job.properties with new parameter values before every run.
In my current setup, I have manually created a job.properties file in the same working directory as workflow.xml. I have not added the parameters to the Hive action, since that results in a prompt. But the Hive SQL uses the same parameters as specified in the job.properties file.
When I run the workflow, it fails because it is unable to resolve the parameters. I believe it is not picking up my job.properties file for some reason.
Any pointers would really help; I've been beating my head against this for almost 2 days now!
Are you using the Workflow Editor? At this time (Hue 3.7), job.properties is only picked up when submitting a workflow from the File Browser.
Properties need to be entered as 'Oozie parameters' in the Properties section of the workflow. Would just doing this solve your problem?
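For reference, submitting from the command line does pick the file up, which is one way to script fresh parameter values before every run. The host, port, and parameter names below are placeholders:

```
# job.properties (example -- names are placeholders)
nameNode=hdfs://namenode.example.com:8020
hiveTable=my_table
oozie.wf.application.path=${nameNode}/user/me/my-workflow

# submit with the properties file applied
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run
```

Within the Hue editor itself, the equivalent is entering each name/value pair as an 'Oozie parameter' in the workflow's Properties section.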
How can Meteor run on multiple ports? For example, if one Meteor app runs on port 3000, I need another Meteor app to run from the same terminal. Please help me.
You can use the --port parameter:
`meteor run --port 3030`
To learn more about command line parameters, run `meteor help <command>`, e.g. `meteor help run`.
I see you've tagged your question meteor-up. If you're actually using mup, check out the env parameter in the config file.
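For example, with classic mup the port is set per app through the env block of the config file. The file name and values below are a sketch and depend on your mup version:

```json
{
  "env": {
    "PORT": 3030,
    "ROOT_URL": "http://example.com"
  }
}
```

Each deployed app gets its own config file, so two apps on the same server simply declare different PORT values.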
I think the OP was referring to the exceptions caused by locks on the Mongo DB. I have only been on this platform for the last week and am learning as quickly as I can. But when I tried running my application from the same project directory as two different users on two different ports, I got an exception about MongoDB:
Error: EBUSY, unlink 'D:\test\.meteor\local\db\mongod.lock'
The root of the issue isn't running on different ports; it is the files shared between the two instances, specifically the database.
I don't think any of your answers actually helped him out. And .. neither can I yet.
I see two options -
First -
I am going to experiment with links to see if I can get the two users to use different folders for the .meteor\local tree, so both of us can work on the same code at the same time without impacting each other when testing.
But I doubt that is what the OP was referring to either (different users, same app)...
Second - trying to find a way to inject into run-mongo.js some notion of the URL / port number I am running on, so that mongod.lock (and the db, of course) are named something like mongod.lock-3000.
I don't like the second option because then I am maintaining my own version of the standard scripts.
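A rough sketch of the first option on Mac/Linux, using a copy rather than links so each user gets a private `.meteor/local` (and therefore a private Mongo lock and database). All paths here are made-up examples, not from the thread:

```
# copy the project but leave out the per-instance state
rsync -a --exclude '.meteor/local' /shared/app/ ~/my-app/
cd ~/my-app
meteor run --port 3004   # this instance's Mongo then uses the next port up
```

Since the two checkouts no longer share `.meteor/local/db`, the EBUSY unlink error on mongod.lock should not occur.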
B
No: by default Meteor uses port 3000 (or whatever you specify at startup) for the app, and the next port (+1) for Mongo.
That is, the next application should be run 2 ports away, e.g. on 3002; likewise 2 ports below the previous one would be 2998.
The check can be very simple (Mac, Linux):
`ps | grep meteor`
I'm trying to set up one of my AX environments to have an XPO imported whenever the server process is started up. Because the environment is updated from production on a regular basis, and the code needs to be unique to this (non-production) environment, it seems the best option would be to use the AOTImport command on startup. However, I'm having trouble identifying the exact syntax/setup to get this to work.
Looking through the system code, the syntax seems to be aotimport_[path to file]. Is this correct? The server does not seem to complain about this command, but the file in question never gets imported. I have also tried variations on this command, but have not seen it work.
I suppose you're trying to execute a command via the SysStartupCmd classes. If so, these methods are fired when the AX client starts, not the AOS. It's documented on this page:
http://msdn.microsoft.com/en-us/library/aa569641(v=ax.50).aspx
If you want to automate this import, it can be done by scheduling an execution of the AX client (ax32.exe) in your build workflow that runs the import (it's recommended to run a full compilation after importing). This is discussed in other questions here on SO.
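For example, the client can be launched with a startup command from a scheduled task or build script. The path below is a placeholder, and exact flags can vary by AX version, so treat this as a sketch:

```
rem launch the AX client, import the XPO, then exit (path is a placeholder)
ax32.exe "-startupcmd=aotimport_C:\builds\environment-code.xpo"
```

Because this runs a full client session, it needs an account with access to the environment and should be followed by a compile step as noted above.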