Setup task scheduler to run actions but throws error on open - asp.net

I've set up a class library in VS and run its functions using command-line arguments.
I've built the project and copied the files in the 'debug' folder over to the server.
I copied over the database and changed the connection string to match.
On the server, I've set up a Windows Task Scheduler task to run the function every 15 minutes.
In the 'Create a Basic Task' wizard, under 'Start a program':
'Program/script': I chose the .exe file in the folder and removed .config from the end of it.
'Add arguments': DownloadPOS
'Start in': I chose the same folder the project files are in.
The problem is that when the scheduler runs I get this error and have no idea why:
Please advise.

Related

airflow logs: how to view them live on the server directly

I am using Airflow and running a DAG.
I have the following in airflow.cfg:
[core]
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repository. This path must be absolute.
dags_folder = /usr/local/airflow/dags
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /usr/local/airflow/logs
I have a long-running task in Airflow.
It's very difficult to use the web interface to check logs of that size.
Instead I want to check them on the Airflow server directly.
But I don't see a log file being created until the task fails or completely succeeds.
While the task is running, I can't see any 1.log file created on the server at the path mentioned in the cfg.
So, given I have access to the server, how can I check the logs of the Airflow task live?
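For what it's worth, here is a minimal sketch of how one could follow a task attempt's log from the server itself, assuming the default per-attempt layout under base_log_folder (dag_id/task_id/execution_date/try_number.log); the DAG id, task id and timestamp below are placeholders, not values from the question:
#!/bin/sh
# Follow a running task attempt's log, assuming the default Airflow layout:
# <base_log_folder>/<dag_id>/<task_id>/<execution_date>/<try_number>.log
# "my_dag", "my_task" and the timestamp are placeholders.
BASE_LOG_FOLDER=/usr/local/airflow/logs
tail -f "$BASE_LOG_FOLDER/my_dag/my_task/2021-01-01T00:00:00+00:00/1.log"
Note that the log file is written by whichever worker actually executes the task, so it will only appear on that machine.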

(Dagster) Schedule my_hourly_schedule was started from a location that can no longer be found

I'm getting the following Warning message when trying to start the dagster-daemon:
Schedule my_hourly_schedule was started from a location Scheduler that can no longer be found in the workspace, or has metadata that has changed since the schedule was started. You can turn off this schedule in the Dagit UI from the Status tab.
I'm trying to automate some pipelines with Dagster and created a new project using dagster new-project Scheduler, where "Scheduler" is my project name.
This command, as expected, created a directory with some hello_world files. Inside it I put the dagster.yaml file with the configuration for a Postgres DB to which I want to write the logs. The whole thing looks like this:
However, whenever I run dagster-daemon run from the directory where the workspace.yaml file is located, I get the message above. I tried running the daemon from other folders, but then it complains that it can't find any workspace.yaml files.
I guess I'm running into a beginner mistake, but could anyone help me with this?
I appreciate any counsel.
One thing to note is that the dagster.yaml file will not do anything unless you've set your DAGSTER_HOME environment variable to point at the directory that this file lives in.
That being said, I think what's going on here is that you don't have the Scheduler package installed into the python environment that you're running your dagster-daemon in.
To fix this, you can run pip install -e . in the Scheduler directory, although the README.md inside that directory has more specific instructions for working with virtualenvs.
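Putting the answer together as commands, roughly (the ~/Scheduler path is a placeholder for wherever the project and its dagster.yaml actually live):
# Point DAGSTER_HOME at the directory that contains dagster.yaml
# (~/Scheduler is a placeholder for the real project path).
export DAGSTER_HOME=~/Scheduler
# Install the Scheduler package into the environment the daemon runs in.
cd ~/Scheduler
pip install -e .
# Run the daemon from the directory that contains workspace.yaml.
dagster-daemon run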

Qt GUI application not starting automatically on startup in Ubuntu 14.04

I have two Qt applications: a non-GUI one called "App1" and a GUI one called "App2". I need to start "App1" on startup of an Ubuntu 14.04 machine.
"App1" runs a shell script called "myshfile.sh", and I start "App2" from that script with /opt/myprojectname/App2 &.
To do this I made a .sh file called "myupstart.sh" containing /opt/myprojectname/App1 &, copied it to /etc/init.d/, and gave it +x permission so that "App1" starts on startup.
When I restart the machine it runs "App1" (the Qt non-GUI app) automatically on startup and runs "myshfile.sh" as expected. Up to this point everything works fine, but the problem starts here.
As mentioned above, "App1" runs "myshfile.sh", and I start "App2" from that script with /opt/myprojectname/App2 &, but "App2" (the Qt GUI app) does not start.
When I simply run /opt/myprojectname/App1 from a terminal, everything works fine: it calls "myshfile.sh", and "myshfile.sh" also starts "App2".
So what I found is that running it manually from a terminal works, while starting it via /etc/init.d/myupstart.sh launches only the Qt non-GUI application and not the Qt GUI application on startup.
Kindly suggest where I am going wrong.
Thanks.
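(Not part of the original question: a sketch of the init script described above, with the one thing most worth checking spelled out as comments. Scripts under /etc/init.d/ run outside the user's X session, so a GUI app launched from them usually has no DISPLAY or XAUTHORITY set; the values below are assumptions for illustration, not a confirmed fix.)
#!/bin/sh
# /etc/init.d/myupstart.sh - sketch of the setup described in the question.
# A GUI app started from an init script has no X session environment by default,
# so export DISPLAY/XAUTHORITY explicitly; the display and user name are placeholders.
export DISPLAY=:0
export XAUTHORITY=/home/myuser/.Xauthority
/opt/myprojectname/App1 &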

yeoman: grunt server waiting... can't go back to command line

I installed Yeoman and am trying to use the command grunt server to preview my application.
Everything is fine except that I can't get back to the command line to continue typing commands. Below is the output after I run grunt server:
Running "autoprefixer:dist" (autoprefixer) task
Prefixed file ".tmp/styles/main.css" created.
Running "connect:livereload" (connect) task
Started connect web server on 127.0.0.1:9000.
Running "watch" task
Waiting...
Does anybody know how to stop it?
This is intentional; the Yeoman-generated grunt server command gives you a complete testing environment, with files being compiled and a server to preview your app on. If you stop it, you'll have to start it again with the same command. If you're on a Mac, I'd recommend using something like http://www.iterm2.com/ so you can keep the process running in a separate window; that way you don't have to keep stopping and starting it.
Nonetheless, you can stop the task at any time with CTRL + C.
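If you would rather stay in a single terminal instead of a separate window, one generic alternative (not from the original answer) is to background the task or run it in a detachable session:
# Option 1: background the task and capture its output in a file;
# bring it back with `fg` or stop it with `kill %1`.
nohup grunt server > grunt-server.log 2>&1 &

# Option 2: run it inside a detachable screen session.
screen -dmS grunt-server grunt server
# Reattach later with: screen -r grunt-server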

Running hadoop job without creating a jar file

I have written a simple Hadoop job. Now I want to run it without creating a jar file, unlike the approach in most tutorials found on the net.
I am calling it from a shell script on an Ubuntu machine which runs a Cloudera CDH4 distribution of Hadoop (2.0.0+91).
I can't create a jar file for the job because it depends on several third-party jars and configuration files which are already centrally deployed on my machine and are not accessible at the time of creating the jar. Hence I am looking for a way to include these custom jar files and configuration files.
I also can't use the -libjars and DistributedCache options because they only affect the map/reduce phase, but my driver class also uses these jars and configuration files. My job uses several in-house utility classes which internally use these third-party libraries and configuration files, which I can only read from a centrally deployed location.
Here is how I am calling it from a shell script:
sudo -u hdfs hadoop x.y.z.MyJob /input /output
It shows me:
Caused by: java.lang.ClassNotFoundException: x.y.z.MyJob
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
My calling shell script successfully sets the Hadoop classpath and includes all my required third-party libraries and configuration files from the centrally deployed location.
I am sure that my class x.y.z.MyJob and all the required libraries and configuration files are present in both the $CLASSPATH and $HADOOP_CLASSPATH environment variables, which I set before calling the Hadoop job.
Why can't my program find the class when the script runs?
Can't I run the job as a normal Java class? All my other normal Java programs use the same classpath and can always find the classes and configuration files without any problem.
Please let me know how I can access the centrally deployed Hadoop job code and execute it.
EDIT: Here is my code to set the classpath:
CLASSES_DIR=$BASE_DIR/classes/current
BIN_DIR=$BASE_DIR/bin/current
LIB_DIR=$BASE_DIR/lib/current
CONFIG_DIR=$BASE_DIR/config/current
DATA_DIR=$BASE_DIR/data/current
CLASSPATH=./
CLASSPATH=$CLASSPATH:$CLASSES_DIR
CLASSPATH=$CLASSPATH:$BIN_DIR
CLASSPATH=$CLASSPATH:$CONFIG_DIR
CLASSPATH=$CLASSPATH:$DATA_DIR
LIBPATH=`$BIN_DIR/lib.sh $LIB_DIR`
CLASSPATH=$CLASSPATH:$LIBPATH
export HADOOP_CLASSPATH=$CLASSPATH
lib.sh is the script that concatenates all the third-party jars into a :-separated string, and CLASSES_DIR contains my job code (the x.y.z.MyJob class).
All my configuration files are under CONFIG_DIR.
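lib.sh itself is not shown in the question; as a rough idea, such a script might simply join every jar under the given directory with ':', along these lines (a hypothetical reconstruction, not the actual script):
#!/bin/sh
# lib.sh (hypothetical): print all jars under $1 as one ':'-separated string.
LIB_DIR="$1"
LIBPATH=""
for jar in "$LIB_DIR"/*.jar; do
    LIBPATH="$LIBPATH:$jar"
done
echo "${LIBPATH#:}"   # strip the leading ':'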
When I print my CLASSPATH and HADOOP_CLASSPATH they show the correct values. However, when I call hadoop classpath just before executing the job, it shows the following output:
$ hadoop classpath
/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:myname:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/*:/usr/lib/hadoop-0.20-mapreduce/.//*
$
It obviously does not have any of the previously set $CLASSPATH and $HADOOP_CLASSPATH values appended. Where did these environment variables go?
Inside my shell script I was running the hadoop jar command as Cloudera's hdfs user:
sudo -u hdfs hadoop jar x.y.z.MyJob /input /output
The script was actually being run by a regular Ubuntu user, which set the CLASSPATH and HADOOP_CLASSPATH variables as shown above. But at execution time the hadoop jar command was not run as that same regular Ubuntu user, so those variables were not visible to it, hence the exception indicating that the class was not found.
So you have to run the job with the same user who is actually setting the CLASSPATH and HADOOP_CLASSPATH environment variables.
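In command form, that boils down to something like the following (whether sudo preserves or accepts these variables depends on the sudoers policy, so treat the second option as something to verify):
# Option 1: run the job as the same user that exported CLASSPATH and
# HADOOP_CLASSPATH, i.e. without switching users.
hadoop x.y.z.MyJob /input /output

# Option 2: keep the switch to the hdfs user but pass the classpath along
# explicitly (subject to the sudoers SETENV/env_keep settings).
sudo -u hdfs HADOOP_CLASSPATH="$HADOOP_CLASSPATH" hadoop x.y.z.MyJob /input /output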
Thanks all for your time.
