Disable file output of Hydra

I'm using Hydra to log hyperparameters of experiments.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_name="config", config_path="../conf")
def evaluate_experiment(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))
    ...
Sometimes I want to do a dry run to check something. For this I don't need any saved parameters, so I'm wondering how I can disable saving to the filesystem completely in this case?

The answer from Omry Yadan works well if you want to solve this using the CLI. However, you can also add these flags to your config file such that you don't have to type them every time you run your script. If you want to go this route, make sure you add the following items in your root config file:
defaults:
  - _self_
  - override hydra/hydra_logging: disabled
  - override hydra/job_logging: disabled

hydra:
  output_subdir: null
  run:
    dir: .
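With those overrides in place, a run should print the config without creating an outputs/ directory or a .hydra/ subfolder. A quick check might look like this (the script name is hypothetical):
$ python evaluate.py
$ ls -a   # no outputs/ or .hydra/ should have appeared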

There is an enhancement request aimed at Hydra 1.1 to support disabling working directory management.
Working directory management does many things:
Creating a working directory for the run
Changing the working directory to the created dir.
There are other related features:
Saving log files
Saving files like config.yaml and hydra.yaml into .hydra in the working directory.
Different features have different ways to disable them:
To prevent the creation of a working directory, you can override hydra.run.dir to . (the current directory).
To prevent saving the files into .hydra, override hydra.output_subdir to null.
To prevent the creation of log files, disable the logging output of hydra/hydra_logging and hydra/job_logging, as in the example below.
A complete example might look like:
$ python foo.py hydra.run.dir=. hydra.output_subdir=null hydra/job_logging=disabled hydra/hydra_logging=disabled
Note that as always you can also override those config values through your config file.

Related

Flywaydb multiple config files for migration is failing

We have tried to migrate some SQL versions in a single database and it went well. When we tried to implement the migrations for multiple databases at the same time by passing multiple config files, it failed.
The issue is that when multiple config files are passed in the "-configFiles" parameter, Flyway takes only the last config file, and the migration is performed only for the database mentioned in that file.
Below is the output; it took only the flywayconfdb.conf file and ignored the other files.
[oracle@localhost flyway-5.1.4]$ ./flyway -configFiles=/home/oracle/flyway/flyway-5.1.4/conf/flyway.conf,/home/oracle/flyway/flyway-5.1.4/conf/flywayjiradb.conf,/home/oracle/flyway/flyway-5.1.4/conf/flywayconfdb.conf info
Flyway Community Edition 5.1.4 by Boxfuse
Database: jdbc:oracle:thin:@//XXXXXXXXX:1521/confdb (Oracle 12.2)
Schema version: << Empty Schema >>
+----------+---------+-------------+------+--------------+-------+
| Category | Version | Description | Type | Installed On | State |
+----------+---------+-------------+------+--------------+-------+
| No migrations found |
+----------+---------+-------------+------+--------------+-------+
Please help us in resolving this.
Flyway merges the config files. It doesn't do a separate migration for each one.
For each config file, Flyway adds the content to a Properties map. Properties has only one value per key, so if the same key appears in a second config file it would overwrite the previous value. This is why it seems like just the settings from the last config file are used.
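For example, if both config files set flyway.url (hypothetical contents):
# flywayjiradb.conf
flyway.url=jdbc:oracle:thin:@//XXXXXXXXX:1521/jiradb
# flywayconfdb.conf
flyway.url=jdbc:oracle:thin:@//XXXXXXXXX:1521/confdb
then the merged map ends up holding only the confdb URL, which matches the output in the question: only the database from the last file is migrated.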
This merging allows you to define common settings somewhere, for example in ~/flyway.conf, and combine them with more specific settings, e.g. in individual projects.
But it doesn't allow you to migrate multiple databases in a single run. You need to run Flyway once per database:
./flyway -configFiles=/home/oracle/flyway/flyway-5.1.4/conf/flywayjiradb.conf info
./flyway -configFiles=/home/oracle/flyway/flyway-5.1.4/conf/flywayconfdb.conf info
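If you have more than a couple of databases, a small shell loop over the config files (a sketch, assuming the same conf directory layout as above) avoids repeating the command:
for conf in /home/oracle/flyway/flyway-5.1.4/conf/flyway*db.conf
do
    ./flyway -configFiles="$conf" info
done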
The documentation describes the Overriding Order as:
Command-line arguments
Environment variables
Custom config files
<current-dir>/flyway.conf
<user-home>/flyway.conf
<install-dir>/conf/flyway.conf
Flyway command-line defaults
With settings defined higher up the list having greater precedence.
The documentation gives the following example:
This means that if for example flyway.url is both present in a config
file and passed as -url= from the command-line, the command-line
argument will take precedence and be used.
The Custom config files (-configFiles) line could be expanded as:
Command-line arguments
Environment variables
Custom config file n
...
Custom config file 2
Custom config file 1
<current-dir>/flyway.conf
<user-home>/flyway.conf
<install-dir>/conf/flyway.conf
Flyway command-line defaults
And a corresponding example could be:
This means that if, for example, flyway.url is present in both custom config file 1 and custom config file 2, the setting from custom config file 2 will take precedence and be used.
Similarly, if flyway.url were also in custom config file n, that would override the setting from custom config file 2.
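As a concrete illustration (the URL is hypothetical): a -url passed on the command line sits at the top of the list, so it overrides any flyway.url coming from a config file:
./flyway -configFiles=/home/oracle/flyway/flyway-5.1.4/conf/flywayjiradb.conf -url=jdbc:oracle:thin:@//XXXXXXXXX:1521/otherdb info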

Can Ansible unarchive be made to write static folder modification times?

I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script with a connection of local, so I assume SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
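One way to sketch that idea, using find instead of ls so that file sizes and paths get hashed but timestamps do not (GNU find assumed):
find wp-content/plugins -type f -printf '%s %p\n' | sort | md5sum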
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
    #echo $file
    cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_before.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_after.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task. Thus, next time the task will not be run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
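A sketch for the plugin above, in the same inline style as the question (the main plugin file used as the creates marker is an assumption about the zip's layout):
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https/wordpress-https.php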
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising amount of things to think about in a real WordPress installation) but in general I found the process much nicer than using orchestration tools.

Symfony2 Composer and environment variables

I would like to set the configuration of my symfony2 project using environment variables.
In the server I have defined:
SYMFONY__DATABASE__USER
SYMFONY__DATABASE__PASSWORD
SYMFONY__DATABASE__NAME
SYMFONY__DATABASE__HOST
SYMFONY__DATABASE__DRIVER
My parameters.yml.dist looks like this:
#app/config/parameters.yml.dist
parameters:
    database_host: "%database.host%"
    database_port: ~
    database_name: "%database.name%"
    database_user: "%database.user%"
    database_password: "%database.password%"
    database_driver: "%database.driver%"
When I run composer I get an exception:
composer install --dev --no-interaction --prefer-source
[Symfony\Component\DependencyInjection\Exception\ParameterNotFoundException]
You have requested a non-existent parameter "database.driver". Did you mean one of these: "database_user", "database_driver"?
These variables are defined on the server, so I could modify parameters.yml.dist to hard-code these values. But that does not seem the right way, because what I really want to use are the environment variables.
Note: I want to read these environment variables in Travis, Heroku and my Vagrant machine. I only want to have the Vagrant machine variables in the repository.
Which is the proper way to do this?
How should look my parameters.yml.dist?
Looks like you are doing everything okay.
Here is the complete documentation for Setting Environment Variables which I believe you already read.
What is important to note is this:
Also, in order for your console to work (which does not use Apache),
you must export these as shell variables. On a Unix system, you can
run the following:
$ export SYMFONY__DATABASE__USER=user
$ export SYMFONY__DATABASE__PASSWORD=secret
I remember I once had a similar issue: I was setting everything in Apache, but when running console commands it wasn't working because I forgot to export the variables on the system.
Be aware that using export is a temporary solution; if you restart your server those values will be lost, and you will need to set them up in a permanent way according to your OS.
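For Apache, for example, the permanent equivalent is SetEnv in the virtual host configuration (the values are placeholders):
SetEnv SYMFONY__DATABASE__USER user
SetEnv SYMFONY__DATABASE__PASSWORD secret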
I think you solved this a long time ago, but the problem is actually that you have two underscores between DATABASE and USER, and the parser has a string replace that turns every __ into a dot (.).
For your example to work, you should have written it like this:
SYMFONY__DATABASE_USER -> database_user
SYMFONY__DATABASE__USER -> database.user
You can try this bundle if your Symfony version is >= 2.6.2:
This bundle provides a way to read parameters from environment
variables at runtime. The value defined in the container parameter is
used as fallback when the environment variable is not available.

Flex Localization: Could not find compiled resource bundle

I tried every solution I found on the internet.
I'm using Flex 4.5. This is what I'm doing:
created a directory locale/en_US in my src directory
added a resources.properties file to that directory with some mappings.
added -locale en_US -source-path=./locale/{locale} -allow-source-path-overlap=true to the compiler args.
checked in the framework that the en_US locale directory appears.
added metadata:
<fx:Metadata>
[ResourceBundle("resources")]
</fx:Metadata>
starting the app gives me the exception:
Error: Could not find compiled resource bundle 'resources' for locale 'en_US'.
These are some of the main solutions I tried:
uncheck "Remove unused RSLs" from the build path.
add the directory as a source path.
using the argument -include-resource-bundles and giving my directory there (using the argument -resource-bundle-list to get all bundles).
Any idea what else I can do?
Here is my structure for a mobile app (Android and iOS):
In src/locale I have 3 subdirs: de_DE, en_US, ru_RU
And in compiler options: -locale=ru_RU,en_US,de_DE -source-path=locale/{locale}
For another mobile app I have:
In src/locale 4 subdirs: en_US, hr_HR, sr_RS, sl_SI.
I had to add the latter 3 dirs with the copylocale command.
And in compiler options: -locale hr_HR sr_RS sl_SI en_US -allow-source-path-overlap=true
Both apps work well for me with the latest Apache Flex SDK.
Here is the contents of a src/locale/hr_HR/resources.properties file:
# resources.properties file for locale hr_HR
navbar.tables=Stolovi za igranje:
navbar.all=Svi
navbar.vacant_long=Slobodni
navbar.vacant_short=Slb.
navbar.full_long=Su puni
navbar.full_short=Su puni
comments.good_long=dobri
comments.good_short=Dbr.
comments.bad_long=loši
comments.bad_short=loši
comments.without_long=neutralni
comments.without_short=ntr.
help.title=Pomoć
OK, I found a solution here:
http://www.nbilyk.com/flex-localization-example
I'm really not sure why it should be that difficult.
Anyway, if someone ever needs help with that: after you successfully compile the file using ant (as described in the link), if you want to load it dynamically like I needed, just use (for example):
resourceManager.localeChain = ["en_US", "es_ES"];
resourceManager.loadResourceModule("Resources_en_US.swf");
resourceManager.loadResourceModule("Resources_es_ES.swf");
This worked well for me; no need to add anything to the compiler args for that solution.
Try using the fully qualified directory path name. If you're using ant you can use ${basedir}/src/locale/{locale}
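For instance, instead of the relative ./locale/{locale}, the compiler argument would carry the full path (the project location here is hypothetical):
-locale en_US -source-path=C:/projects/myapp/src/locale/{locale} -allow-source-path-overlap=true
In an ant build, ${basedir} expands to the project root, so -source-path=${basedir}/src/locale/{locale} achieves the same thing.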

trying to use log4j.xml file within WinRun4j

Has anyone tried to use a log4j.xml reference within a WinRun4j service configuration? Here is a copy of my service.ini file. I have tried many configuration combinations; this is just my latest attempt:
service.class=org.boris.winrun4j.MainService
service.id=SimpleBacnetIpDataTransfer
service.name=Simple Backnet IP DataTransfer Service
service.description=This is the service for the Simple Backnet IP DataTransfer.
service.startup=auto
classpath.1=C:\Inbox\DataTransferClient-1.0-SNAPSHOT-jar-with-dependencies.jar
classpath.2=WinRun4J.jar
classpath.3=C:\Inbox\log4j-1.2.16.jar
arg.1=C:\Inbox\DataTransferClient.xml
log=C:\WinRun4J-Service\SimpleBacnetIpDataTransfer\NBP-DT-service.log
log.overwrite=true
log.roll.size=10MB
[MainService]
class=com.shiftenergy.ws.App
vmarg.1=-Xdebug
vmarg.2=-Xnoagent
vmarg.3=-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
vmarg.4=-Dlog4j.configuration=file:C:\Inbox\log4j.xml
Within the log4j.xml file there is a reference to a log file for when the application runs. If I run java -jar -Dlog4j.configuration=file:C:\Inbox\log4j.xml ...., the log file is created accordingly. If I register my service and start the service, the log file does not get created.
Has anyone had success using the -D log4j configuration with WinRun4j?
Thanks
I think you provided the vmarg.4 parameter incorrectly. In your case it has to be:
vmarg.4=-Dlog4j.configurationFile=[Path for log4j.xml]
I am also using the same, and in my case it works perfectly fine. See the example below:
vmarg.1=-Dlog4j.configurationFile=.\log4j2.xml
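Applied to the service.ini from the question, that would be the line below. Note this is a sketch assuming log4j 2 is the logging backend; for log4j 1.x (such as the log4j-1.2.16 jar on the classpath in the question) the original -Dlog4j.configuration property is the correct one.
vmarg.4=-Dlog4j.configurationFile=C:\Inbox\log4j.xml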
Have you tried setting the path in your code instead:
System.setProperty("log4j.configurationFile", "config/log4j.xml");
I'm using a relative path to a folder named config that contains log4j.xml. An absolute path is not recommended, but may work as well.
Just be sure to set this before making any calls to log4j, including any log4j config settings or static method calls!
System.setProperty("log4j.configurationFile", "config/log4j.xml");
final Logger log = Logger.getLogger(Main.class);
log.info("Starting up");
I didn't specify the log4j path in the ini file; I only placed the log4j.xml file in the same place as the jar.
I also didn't specify
System.setProperty("log4j.configurationFile", "config/log4j.xml");
In the Java project the file was stored in src/main/resources and is included in the jar, but that copy will not be the one used if another log4j.xml is placed outside the jar.
