I'm trying to create some unit tests with Mocha and Velocity.
However, when I go to my webpage and click the Velocity bullet, I get the message "mocha - mirror starting".
It is stuck there for around 10 minutes, but the CPU is at 100%, so it is doing something. When I look in the logs I don't find anything, and when I browse to the location where Velocity is started, I get an empty response. Eventually the tests do run, but this is not acceptable; unit tests are supposed to be fast...
I20151223-11:43:59.571(1)? [velocity] * Meteor Tools is installing *
I20151223-11:43:59.572(1)? This takes a few minutes the first time.
I20151223-11:43:59.572(1)? [velocity] You can see the mirror logs at: tail -f D:\Work\coderepository\meteor\sandwich-app.meteor\local\log\mocha.log
I20151223-11:52:36.853(1)? mocha: 2 tests passed (11ms)
Any help? I'm running on a Windows 10 machine.
I'm new to Meteor and Velocity so I could be doing something wrong. I only have the sample-tests added...
I tried to run Meteor in debug mode, but this doesn't add any more information to the mocha.log file.
Rob
I've been looking at JupyterLab's GitHub repository (issues, roadmap, etc.), and I couldn't find any indication of progress toward resilience to network issues or browser reloads.
I first noticed this with Z2JH v2.0, which I use at work. The same issue occurs with the latest version of JupyterLab (v3.4.8) installed with pip on my machine. Running the sample below and reloading the browser in the middle of execution ends up at the last saved state, and the cell's output does not continue updating. Any other changes after saving are also lost.
import time

def long_running_task():
    for i in range(10):
        time.sleep(1)
        print(f'Done #{i+1}!')

long_running_task()
I came across several hacks/workarounds, such as writing to a file and the IPython magic %%capture. However, a long-running task becomes frustrating when I need to move around, close the laptop, and migrate to a different location.
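For illustration, the write-to-file workaround could look like the sketch below; the log file name and parameters are my own choices, not anything JupyterLab-specific. The progress log survives a browser reload, although the notebook's own cell output still won't catch up:

```python
# Sketch of the write-to-file workaround: append progress to a log file so
# it survives a browser reload. File name and parameters are illustrative.
import time

def long_running_task(log_path='progress.log', steps=10, delay=1.0):
    # Line-buffered append, so each line reaches the file as it is written
    with open(log_path, 'a', buffering=1) as log:
        for i in range(steps):
            time.sleep(delay)
            log.write(f'Done #{i+1}!\n')

long_running_task(steps=3, delay=0.1)
```

After reconnecting, you can follow the file from a terminal (e.g. tail -f progress.log) or read it from another cell.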
Alternatives? Suggestions?
I am evaluating a self-hosted Artifactory installation on a trial license. I followed the official installation instructions for the Docker container and for the Linux archive file. Neither of these installation options is working: the Artifactory service fails to start.
I have opened an issue to track the problem: https://www.jfrog.com/jira/browse/RTFACT-27182
TL;DR: a component fails, a nasty stack trace appears in the logs, and eventually the services stop.
It would seem that there is a bug in Artifactory; I have traced it back through multiple versions, and the issue spans multiple years.
The problem appears to be that Artifactory cannot get past the bootstrapping/initialization phase when started with artifactoryctl. At a certain point (around 2-5 minutes in) all the services stop and a PID file is left over, which is bad.
The workaround I have found is that the service can pass this initialization phase only after multiple start/stop cycles (three, to be exact). In other words, we call artifactoryctl start, wait for all the failures, then call artifactoryctl stop, and repeat two more times. On the fourth and final start, the service comes online (in about 150-190s). From then on, the service starts correctly with a single call to artifactoryctl start.
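If you need to automate this, the cycling can be scripted. The sketch below is only a rough illustration of the procedure described above; the control command, the wait time, and the health probe (e.g. an HTTP ping of Artifactory's REST API) are assumptions to adjust for your install:

```python
# Rough sketch of the start/stop cycling workaround described above.
# The control command, wait time, and health probe are assumptions.
import subprocess
import time

def cycle_until_up(ctl='artifactoryctl', probe=lambda: False,
                   max_attempts=4, wait_seconds=200):
    """Start the service, wait, and probe it; stop and retry on failure.

    Returns the attempt number that succeeded, or None if all failed.
    """
    for attempt in range(1, max_attempts + 1):
        subprocess.run([ctl, 'start'], check=False)
        time.sleep(wait_seconds)  # let the bootstrap phase run (or fail)
        if probe():               # e.g. HTTP GET of a health/ping endpoint
            return attempt
        subprocess.run([ctl, 'stop'], check=False)
    return None
```

In practice probe() would check something like an HTTP 200 from the service's ping endpoint before declaring the start successful.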
I have not yet looked at the systemd unit file. My guess would be that it has, or could be made to have, a number of retries to work around this issue, and perhaps the issue does not exist when using the service wrapper.
I have also not yet looked again at the Docker container, which appears to be failing for the same reason. A workaround off the top of my head would be to modify the entrypoint script. If you were to docker exec into the container and try the workaround above, it would likely terminate the root process and kill the container.
I frequently work in an R Server environment. However, whenever I come back to my work after the last working day, the system often gets stuck on 'resuming r session'. This can take upwards of 5-15 minutes. I try to terminate R or restart R, but often this doesn't really do anything.
I'm looking for a workaround, as it is very frustrating to go to the R Server URL and have to wait forever to get started again. Ideally, I'd be able to pick up right where I left off. However, if that can't be done, I guess that is OK...
I was looking around at the folder structure and I noticed that there is a folder called "Suspended-R-Session".
Within this folder are a few files such as:
'options',
'lib paths',
'history',
'environment_vars',
'environment',
and 'settings'.
Should I delete these files in order to speed up the load time?
As described in this link https://support.rstudio.com/hc/en-us/community/posts/200638878-resuming-session-hangup, in my case for R version 3.5:
cd ~/.rstudio/sessions/active/session-45204d30
rm -rf suspended-session-data
I'm trying to set up my integration flow, and I have some tests that are quite destructive, using the velocity-cucumber package.
The first issue I've found is that these tests are run on the standard Meteor DB. That is fine on localhost and dev, but not so great for production. As far as I can tell, velocity-cucumber doesn't do anything with mirrors yet.
Because of this I have two cases where I need Meteor to launch in a specific way.
1) On the CI server I need for JUST the tests to run then exit (hopefully with the correct exit code).
2) On the production server I need Meteor to skip all tests and just launch.
Is this currently possible with Meteor command line arguments? I'm contemplating making demeteorize a part of the process, and then use standard node.js testing frameworks.
To run velocity tests and then exit, you can allegedly run meteor with the --test option:
meteor run --test
This isn't working for me, but that's what the documentation says it is supposed to do.
To disable velocity tests, run meteor with the environment variable VELOCITY set to 0. This will skip setting up the mirror, remove the red/green dot, etc.:
VELOCITY=0 meteor run
I'm working on a Nexus 1.9.2.3 implementation and we're trying to run the Scheduled Task "Cleanup Old Snapshots".
The task runs anywhere from 2-5 mins and then fails with an "Error [XmYs]" in Last result (where X and Y are minutes/seconds values).
Logging shows that the task starts and is waiting; after that, no failure results are shown in the logs (neither nexus.log nor wrapper.log).
We're trying to remove an excessively large collection of snapshots from the system that have been allowed to accumulate over the years, and ultimately move to just keeping the last 10. Next step will be to perform an upgrade to a newer version. But this has become a bit of a work blocker.
I'm at my wits' end on this and am close to just manually deleting all the files, but I know that doesn't cleanly clear out all the metadata, so it's a last-resort option.
Any help or advice would be greatly appreciated!
Remove the files on the filesystem and run a scheduled task to update the Maven metadata.
And then upgrade Nexus as soon as possible. You are using a VERY old version.
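If you do go the manual route, it may help to dry-run the deletion first. The sketch below only lists candidate directories and deletes nothing; the storage path layout and the age cutoff are assumptions (Maven snapshot version directories conventionally end in -SNAPSHOT), so verify the list before removing anything, and run the metadata task afterwards:

```python
# Dry-run sketch: list Maven snapshot version directories older than a
# cutoff under a repository storage root. Layout and cutoff are
# assumptions; verify before deleting, and rebuild metadata afterwards.
import os
import time

def old_snapshot_dirs(root, max_age_days=365):
    cutoff = time.time() - max_age_days * 86400
    hits = []
    for dirpath, dirnames, _files in os.walk(root):
        for name in dirnames:
            if name.endswith('-SNAPSHOT'):
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) < cutoff:
                    hits.append(path)
    return sorted(hits)
```

Reviewing this list (and perhaps sampling a few directories by hand) before any rm keeps the cleanup reversible up to the last moment.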