Make sbt update resources in test

I'm having trouble with resource loading during sbt test.
If I put a Typesafe Config file in my test resources directory, e.g. src/test/resources/test.conf, then when I run test it loads fine. But if I edit src/test/resources/test.conf and run test again, I don't see the edits. I have to exit sbt and start it again to see the changes I made. Doing clean and test:clean doesn't help.
I also tried doing
ConfigFactory.invalidateCaches()
ConfigFactory.load("test")
in my test code when I load the config, and tried adding fork in test := true to my build, but none of it makes any difference.
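For reference, here is a minimal sketch of that combination (the config key is hypothetical):

// build.sbt
fork in test := true

// in the test code
import com.typesafe.config.ConfigFactory

ConfigFactory.invalidateCaches()          // drop Typesafe Config's internal cache
val config = ConfigFactory.load("test")   // re-parse test.conf from the test classpath
config.getString("some.key")              // hypothetical key; still returns the stale value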
Any help's appreciated, thanks!

Related

(Dagster) Schedule my_hourly_schedule was started from a location that can no longer be found

I'm getting the following Warning message when trying to start the dagster-daemon:
Schedule my_hourly_schedule was started from a location Scheduler that can no longer be found in the workspace, or has metadata that has changed since the schedule was started. You can turn off this schedule in the Dagit UI from the Status tab.
I'm trying to automate some pipelines with dagster and created a new project using dagster new-project Scheduler, where "Scheduler" is my project name.
This command, as expected, created a directory with some hello_world files. Inside it I put the dagster.yaml file with the configuration for a Postgres DB to which I want to write the logs.
However, whenever I run dagster-daemon run from the directory where the workspace.yaml file is located, I get the message above. I tried running the daemon from other folders, but it then complains that it can't find any workspace.yaml files.
I guess I'm running into a "beginner mistake", but could anyone help me with this?
I appreciate any counsel.
One thing to note is that the dagster.yaml file will not do anything unless you've set your DAGSTER_HOME environment variable to point at the directory that this file lives in.
That being said, I think what's going on here is that you don't have the Scheduler package installed into the python environment that you're running your dagster-daemon in.
To fix this, you can run pip install -e . in the Scheduler directory, although the README.md inside that directory has more specific instructions for working with virtualenvs.
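Putting those two pieces together, the setup might look like this (paths hypothetical):

export DAGSTER_HOME=/home/me/Scheduler   # must point at the directory containing dagster.yaml
cd /home/me/Scheduler
pip install -e .                         # install the Scheduler package into the active environment
dagster-daemon run                       # run from the directory containing workspace.yaml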

Force sbt to reload whole build definition

sbt.version 0.13.15
Question:
How can I force sbt to reload and recompile the whole build definition, regardless of whether it has changed or not?
The story:
I have a multiproject build consisting of multiple sbt and scala files. When I first loaded the project (I am not the author) with sbt -v in the console, I got a list of deprecation warnings (mainly for operators like <<= <+= <++=). I want to fix all of these, but now when I start sbt -v again I don't get those warnings any more. Not even after reload.
I tried modifying a build file (removing an import) and then reloading. It showed me errors caused by the removed import, but no warnings. After restoring the import, it still showed no warnings.
The only way I was able to see the list again was to change a build file in a way that directly affects the build definition and then reload...
Bottom line
So yes, I could edit every single sbt file and then reload. But there must be a better way: some way to either force-reload the whole build definition or to recall that list of warnings.
Thanks for ideas on this!
I found out that the compiled build definition is stored in project/target. So to achieve what I want, it is enough to remove this directory and then reload. It's not an sbt command, but it works.
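In practice, from the project root, that amounts to:

rm -rf project/target   # discard the compiled build definition
sbt -v                  # the next load recompiles every *.sbt and project/*.scala file, surfacing the warnings again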

Script does not exist at specified location

"Script does not exist at specified location: /opt/codedeploy-agent/deployment-root/76b33ccc-594b-4d58-a1b8-e40d054c64b7/d-AVYMCK28I/deployment-archive/scripts/Applicationstoptest.sh"
This is the error I am getting. Can anyone please help me resolve this issue?
Make sure you're using relative paths in your appspec.yml.
I had the same issue. I followed this process and it helped me.
The ApplicationStop hook is being called from the previously installed deployment before trying to run the current deployment's appspec.yml file.
In order to prevent this from happening, you'll have to remove any previously installed deployments from the server:
Stop the CodeDeploy agent: sudo service codedeploy-agent stop
Clear all deployments under /opt/codedeploy-agent/deployment-root
Restart the CodeDeploy agent: sudo service codedeploy-agent start
Copied the answer from here so you don't have to open a new link. Thanks @paul.
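As a concrete sequence for the steps above (double-check the directory contents before deleting anything):

sudo service codedeploy-agent stop
sudo rm -rf /opt/codedeploy-agent/deployment-root/*   # clears previously installed deployments
sudo service codedeploy-agent start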
The paths used in source in the AppSpec file are relative paths, starting from the root of your revision. Also please make sure the appspec.yml file and the other files in the application bundle are not wrapped inside another folder.
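For example, a minimal appspec.yml along those lines (destination path hypothetical; the hook script matches the one named in the error):

version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp
hooks:
  ApplicationStop:
    - location: scripts/Applicationstoptest.sh   # relative to the revision root, not an absolute path
      timeout: 300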
I found my answer here: https://stackoverflow.com/a/27925591/1056283
The ApplicationStop hook uses a previous deployment to look for the script to invoke, so if the structure of your deploy package changes, there may not be a script to call at the specified location yet. The deploy fails, and the script never arrives at the new location. See the linked answer for the steps to resolve it.

Building Brackets Shell (After running the grunt build command)

On Windows, after running the grunt build command to create brackets-shell, it finishes with "Done, without errors", but I don't see any .exe file generated.
What might be the problem?
Here are some possible solutions:
Are you following the full brackets-shell build instructions, including all prerequisites?
Make sure Brackets isn't running at the same time. The build will fail silently if the .exe file is currently in use (see bug).
Try with a fresh git clone of the repo. If your brackets-shell local copy has been around for a while, sometimes the build & deps folders can get in a bad state. (I'm assuming you haven't modified the source at all. If you have, try with an unmodified copy of the source first to make sure it builds correctly without any of your changes).
Check that python --version shows 2.7.x
Verbose build output would also be helpful in diagnosing issues like this, but unfortunately there's not yet an easy way to get that...
If you follow the instructions on brackets-shell's wiki page, the Windows executable should be created in the Release directory.

Capifony deploy runs some commands against previous release

I'm running a Capifony deployment. However, I notice that Capifony's built-in commands are running against the previous release, whereas my custom commands are correctly targeting the current release.
For example, if I run cap -d staging deploy, I see some commands output like this (linebreaks added):
--> Updating Composer.......................................
Preparing to execute command: "sh -c 'cd /home/myproj/releases/20130924144349 &&
php composer.phar self-update'"
Execute ([Yes], No, Abort) ? |y|
You'll see that this is referring to my previous release - from 2013.
I also see commands referring to this new release's folder (from 2014):
--> Running migrations......................................
Preparing to execute command: "/home/myproj/releases/20140219150009/
app/console doctrine:migrations:migrate --no-interaction"
Execute ([Yes], No, Abort) ? |y|
In my commands, I use the #{release_path} variable, whereas looking at Capifony's code, it's using #{latest_release}. But obviously I can't change Capifony's code.
This issue against Capistrano talks about something similar, but I don't think it really helps, as again I can't change Capifony's code.
If I delete my releases folder on the server, I have a similar problem - #{latest_release} doesn't have any value, so it attempts to do things like create a folder /app/cache (since the code is something like mkdir -p #{latest_release}/app/cache).
(Assuming I don't delete the current symlink and the release folder, the specific error I see is when it fails to copy vendors: cp: cannot copy a directory, /home/myproj/current/vendor, into itself. However, this is just the symptom of the bigger problem - if it thinks the new release is actually the previous one, that explains why current also points there!)
Any ideas? I'm happy to provide extracts from my deploy.rb or staging.rb (I'm using the multistage extension) but didn't just want to dump in the whole thing, so let me know what you're interested in! Thanks
I finally got to the bottom of this one!
I had a step set to run before deployment:
before "deploy", "maintenance:enable"
This maintenance step (correctly) sets up maintenance mode on the existing site (in the example above, my 2013 one).
However, the maintenance task was referring to the previous release by using the latest_release variable. Since the step was running before deployment, latest_release did indeed refer to the 2013 release. However, once latest_release has been used, its value is set for the rest of the deployment run - so it remained set to the 2013 release!
I therefore resolved this by changing the maintenance code so that it didn't use the latest_release variable. I used current_release instead (which doesn't seem to have this side-effect). However, another approach would be to define your own variable which gets its value in the same way as latest_release would:
set :prev_release, exists?(:deploy_timestamped) ? release_path : current_release
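For illustration, the first approach (switching the maintenance task to current_release) might look something like this; the task body is hypothetical, and the point is only which variable gets referenced:

namespace :maintenance do
  task :enable, :roles => :web do
    # current_release did not show the stuck-value behaviour described above
    run "touch #{current_release}/web/maintenance.html"
  end
end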
I worked out how latest_release was being set by looking in the Capistrano code. In my environment, I could find this by doing bundle show capistrano (since it was installed with bundler), but the approach will differ for other setups.
Although the reason for my problem was quite specific, my approach may help others: I created an entirely vanilla deployment following the Capifony instructions and gradually added in features from my old deployment until it broke!
