When nodes in my DAG fail, I want to see the logs, but the page just keeps showing the loading symbol (spinner) and no logs are actually loaded.
Is there any other way I can check the logs, or a way to correct this issue?
There is a directory named 'logs' at the same location as the 'dags' directory,
default: /etc/airflow/logs
Inside it there are directories named after your DAGs, then their tasks, and then the execution dates. You can look there.
example: /etc/airflow/logs/controller/trigger/2018-12-10T17:34:19.234871+00:00
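If the UI will not load them, you can also read the task log files directly from the shell. A hedged example building on the path above (the per-attempt file name 1.log is an assumption; it depends on your Airflow version and base_log_folder setting):
$ cat /etc/airflow/logs/controller/trigger/2018-12-10T17:34:19.234871+00:00/1.log
# or follow the log while the task is still running:
$ tail -f /etc/airflow/logs/controller/trigger/2018-12-10T17:34:19.234871+00:00/1.log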
I'm using hydra to log hyperparameters of experiments.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_name="config", config_path="../conf")
def evaluate_experiment(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))
    ...
Sometimes I want to do a dry run just to check something. For that I don't need any saved parameters, so I'm wondering how I can disable writing to the filesystem completely in this case?
The answer from Omry Yadan works well if you want to solve this using the CLI. However, you can also add these flags to your config file so that you don't have to type them every time you run your script. If you want to go this route, make sure you add the following items to your root config file:
defaults:
  - _self_
  - override hydra/hydra_logging: disabled
  - override hydra/job_logging: disabled

hydra:
  output_subdir: null
  run:
    dir: .
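With those items in your root config, a plain run like the following (the script name is hypothetical) should write nothing to the filesystem: no run directory, no .hydra subdirectory, and no log files:
$ python my_app.py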
There is an enhancement request aimed at Hydra 1.1 to support disabling working directory management.
Working directory management does several things:
Creating a working directory for the run
Changing the working directory to the created dir.
There are other related features:
Saving log files
Saving files like config.yaml and hydra.yaml into .hydra in the working directory.
Different features have different ways to be disabled:
To prevent the creation of a working directory, you can override hydra.run.dir to . (the current directory).
To prevent saving the files into .hydra, override hydra.output_subdir to null.
To prevent the creation of log files, you can disable the logging output of hydra/hydra_logging and hydra/job_logging.
A complete example might look like:
$ python foo.py hydra.run.dir=. hydra.output_subdir=null hydra/job_logging=disabled hydra/hydra_logging=disabled
Note that as always you can also override those config values through your config file.
I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script, with a connection of local; I guess that therefore means SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
    #echo $file
    cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of the recursive ls output (names, sizes, timestamps)
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_before.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
    ]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_after.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
    ]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task, so the task will not be run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
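A hedged sketch of what the WordPress HTTPS task might look like with creates (the wordpress-https directory name is an assumption about the zip's top-level folder):
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https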
My solution is to modify the checksum script and make it a permanent feature of the Ansible process (a rough sketch of the idea is below). It feels a bit hacky to do my own checksumming when Ansible should do it for me, but it works.
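A minimal sketch of the idea, assuming the checksum script above is saved as scripts/wp-checksum.sh (that path, the task names, and the changed_when comparison are illustrative; the unarchive task itself will still report "changed"):
- name: Run checksum command before install
  command: bash scripts/wp-checksum.sh
  register: checksum_before
  changed_when: false

- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no

- name: Run checksum command after install
  command: bash scripts/wp-checksum.sh
  register: checksum_after
  changed_when: checksum_after.stdout != checksum_before.stdout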
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising amount of things to think about in a real WordPress installation) but in general I found the process much nicer than using orchestration tools.
I want to send my WebLogic logs to syslog. Here is what I have done so far.
1. Included the following log4j.properties in the managed server classpath:
log4j.rootLogger=DEBUG,syslog
log4j.appender.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.syslog.Threshold=DEBUG
log4j.appender.syslog.Facility=LOCAL7
log4j.appender.syslog.FacilityPrinting=false
log4j.appender.syslog.Header=true
log4j.appender.syslog.SyslogHost=localhost
log4j.appender.syslog.layout=org.apache.log4j.PatternLayout
log4j.appender.syslog.layout.ConversionPattern=[%p] %c:%L - %m%n
2. Added the following to the managed server arguments:
-Dlog4j.configuration=file :<path to log4j properties file> -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger -Dweblogic.log.Log4jLoggingEnabled=true
3. Added wllog4j.jar and log4j-1.2.14.jar to the domain's lib folder.
4. Then, from the Admin console, changed the logging implementation by doing the following: "my_domain_name" ---> Configuration ---> Logging ---> (Advanced options) ---> Logging implementation: Log4J.
Then I restarted the managed server.
I used this as a reference, but didn't get anything in syslog (/var/log/messages). What am I doing wrong?
I would recommend a couple of items to check:
Remove the space in DEBUG, syslog in the file
Your last two server arguments have a space between the - and the D, so make sure that wasn't just a copy-and-paste error in this post.
Double-check that the log4j files are actually on the classpath.
Double-check, via a ps command, that the -D options made it correctly into the start command that was executed (see the sketch after this list).
Make sure that the managed server has its own copy of the JARs, as they should get synchronized from the admin server during the restart.
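For the ps check, something like the following (hedged; the grep pattern is an assumption and may need adjusting for how your server process is named) lists the -D flags that actually made it onto the running JVM:
$ ps -ef | grep weblogic.Server | grep -o -- '-D[^ ]*'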
Hopefully something in there will help or give an idea of what to look for.
--John
I figured out the problem. My appender was working fine; the problem was in rsyslog.conf. The following directives were commented out, so I just uncommented them:
# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
We were sending the messages, but the listener was absent, so rsyslog didn't know what to do with them.
Once UDP reception is enabled, the rule *.debug;mail.none;authpriv.none;cron.none /var/log/messages tells rsyslog to redirect any matching (debug, in this case) messages to the /var/log/messages file.
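As a hedged follow-up check (the commands assume the util-linux logger and a SysV-style service wrapper; adjust as needed), you can restart rsyslog and send a test message over UDP to confirm the listener is up:
$ sudo service rsyslog restart
$ logger -d -n localhost -P 514 -p local7.debug "syslog UDP test"
$ tail /var/log/messages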
Showing currently applied configuration values
In v2.0+ of Riak there is a new command option: riak config effective
I read this as meaning it would tell you the values that Riak is currently running with.
At any time, you can get a snapshot of currently applied
configurations through the command line. For a listing of all of the
configs currently applied in the node
Config changes applied only on start of each node?
In multiple locations in the Riak documentation there are statements like:
Remember that you must stop and then re-start each node when you
change storage backends or modify any other configuration
Problem:
However, when I make a change to a setting (I've tested this in both riak.conf and advanced.config) without restarting the node, I still see the newest value when running riak config effective.
i.e.:
Start node: riak start
View current setting for log level: riak config effective | grep log.console.level
log.console.level = info
Change the level to debug (something that will output a lot to console.log)
Re-run: riak config effective | grep log.console.level, we get:
log.console.level = debug
Checking the console log file for debug output: cat /var/log/riak/console.log | grep debug gives no results (indicating the config change has not actually been applied)
So the question is, how can I retrieve and verify what config setting each Riak node is running under?
When Riak starts, it creates two files: 'app.<timestamp>.config' and 'vm.<timestamp>.config'. The default location is in a 'generated.configs' directory under the platform data directory (usually /var/lib/riak).
These files will contain the settings that were in place when Riak was started. The command riak config effective processes the current riak.conf and advanced.config files.
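For example, to see exactly what was applied at the last start (the path below is the default platform data directory mentioned above; the timestamp portion of the file names will differ on your node):
$ ls -lt /var/lib/riak/generated.configs/
$ less /var/lib/riak/generated.configs/app.<timestamp>.config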
I'm new to Salt. I added my first server (wx-1) and it works, but when I add a different server, test.ping is OK; however, when I execute salt 'qing' state.highstate, it fails with this error:
No Top file or external nodes data matches found
Here is my top.sls:
base:
  'wx-1':
    - bin.nginx
    - git
    - web
    - mongo
    - redis
  'qing':
    - bin.nginx
qing is a new server and its config is different from wx-1's; I don't know if this is OK. Thanks for your help :)
If you make changes to your sls files, make sure that you restart the master so it picks them up. This solved my problem when I received the same error.
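For example (the service name and init system are assumptions; adjust for your distribution):
$ sudo service salt-master restart
# or, on systemd hosts:
$ sudo systemctl restart salt-master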
You didn't give much information, but here are a few things to check:
Test whether salt 'qing' state.sls bin.nginx works; if not, continue reading.
Make sure file_roots:base in the master config points to /srv/salt (see the sketch at the end of this answer).
Use salt-master --version and salt-minion --version to check the Salt versions and make sure they are the same, because different versions might behave differently.
Give further info if you tried all the above.
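For reference, a hedged sketch of the relevant section of the master config (conventionally /etc/salt/master); /srv/salt is the usual default and should match wherever your top.sls actually lives:
file_roots:
  base:
    - /srv/salt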