Since I started using git I've had my RPROMPT set up to show the current branch. I've recently been using some of the "fancy" scripts to show staged/unstaged file counts and other useful at-a-glance info (https://github.com/olivierverdier/zsh-git-prompt/tree/master).
After using this for a week or two, its performance started to bother me.
Are there faster ways of getting this info, or perhaps ways to write the RPROMPT asynchronously? I don't want to wait to type a command while the RPROMPT is computed, and I'd be perfectly happy with it popping in slightly later than my main PROMPT.
No offense to the aforementioned script; it's great. I'm just impatient.
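For what it's worth, the usual zsh trick is to render the prompt immediately, compute the git info in a background job, and redraw once it arrives. A bare-bones sketch of that pattern (the helper name and temp-file path are illustrative, and a real setup would also clean up stale files):

# Compute the slow git info outside the input path, then signal
# the parent shell to redraw the prompt.
ASYNC_PID=0
precmd() {
    async_git_rprompt() {
        git rev-parse --abbrev-ref HEAD 2>/dev/null > "/tmp/zsh_rprompt_$$"
        kill -s USR1 $$    # tell the parent shell fresh data is ready
    }
    # Drop any worker still running from the previous prompt.
    (( ASYNC_PID != 0 )) && kill -s HUP "$ASYNC_PID" 2>/dev/null
    async_git_rprompt &!
    ASYNC_PID=$!
}
TRAPUSR1() {
    RPROMPT="$(cat "/tmp/zsh_rprompt_$$" 2>/dev/null)"
    ASYNC_PID=0
    zle && zle reset-prompt    # redraw without waiting for input
}

The prompt appears instantly and the git info "pops in" a moment later, which is exactly the behavior described above.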
I'm coding in RStudio and my workflow is along these lines:
make a new branch using RStudio's UI
add some code or fix a bug
commit the code when I'm satisfied and push to GitHub
merge the new code into master on GitHub
pull the latest master code from GitHub into RStudio using its UI
delete any local/remote branches via the command line, because RStudio doesn't have that functionality and doesn't sync with GitHub when it comes to remote branch deletion (the command-line equivalent of the whole cycle is sketched below)
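For reference, a minimal command-line version of that cycle might look like this (the branch name is illustrative):

git checkout -b my_feature              # create and switch to a new branch
# ...add some code or fix a bug...
git add -A                              # stage the changes
git commit -m "fix the bug"             # commit them
git push -u origin my_feature           # push the branch to GitHub
# merge into master via a pull request on GitHub, then:
git checkout master
git pull origin master                  # update local master
git branch -d my_feature                # delete the local branch...
git push origin --delete my_feature     # ...and the remote one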
This might not be the most efficient way of doing things (I'm new to git), but it works well enough, except for the problem I'm having. Twice now, seemingly at random, I've created a new branch, worked on it, and when I've gone back to check something in master, the two were identical: the code changes I've made in the branch had already synced to master.
This is what the last two lines of the History look like:
[screenshot: Git history]
independant_erp_norm_regressions is the last branch I merged into master, while preprocess_select_global_pars is the current branch, which is unexpectedly in sync with master.
I'm at a loss as to what is going on, because I'm doing the same thing I usually do, and I haven't been able to find any similar questions on Stack Overflow.
Any help would be greatly appreciated (as well as any ways in which I can streamline my workflow).
OK, thanks for the responses, guys. As per Tim's reply, I decided to commit the changes made to the new branch via RStudio's UI and then check on the command line to see what happened behind the scenes. (After that, I figured I would do an entire branch/merge cycle via the command line to see whether the problem persisted or it was an RStudio bug.)

Just before committing the changes, RStudio's git interface showed that master and my branch were still in sync, up to and including having the same staged files selected. After committing, I used "git show-branch" on the command line, and it showed that only the correct branch had a new commit; this was mirrored in RStudio's Git history interface, and after merging via GitHub all is well. So it seems like it was just an RStudio/git bug of sorts.
So I wrote my father a neat little R script that pulls financial indicators on stocks and outputs the info to a CSV...
I would like to set it up so that the script runs automatically once a day, skipping weekends if possible. I looked around for a while online, and it seems as though the Mac "Automator" app is what I'm looking for.
However, after reading many guides and posts (like this one https://www.r-bloggers.com/how-to-source-an-r-script-automatically-on-a-mac-using-automator-and-ical/) I cannot get it to work...
In trying to replicate what the author did, I get an error that the first path is a directory, while the latter returns things like "cat: Rscript: No such file or directory".
So I was wondering if anyone could recommend either some good free software that will let me do this, or a way to run an R script from the /bin/bash shell.
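For the second part: a stock R installation ships an Rscript binary, so running from a shell is one line, and plain cron can handle the weekday-only scheduling (paths below are illustrative; "which Rscript" prints the right one):

# Run the script once from a shell:
/usr/local/bin/Rscript /Users/dad/scripts/stock_indicators.R

# Schedule it: run "crontab -e" and add a line like this, which fires
# at 8:00 on days 1-5 (Monday-Friday), skipping the weekend:
0 8 * * 1-5 /usr/local/bin/Rscript /Users/dad/scripts/stock_indicators.R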
EDIT:
The suggested solution isn't really answering my problem. The issue is making this as easy as possible for my dad to run, so that he doesn't have to do anything, specifically use the terminal. Ideally I could just schedule a task that repeats every morning, but the cronR package requires a daemon, and the others are just command-line tools.
I had a similar experience. I created an Automator calendar alarm, added an Execute AppleScript action, and used the following code:
on run {input, parameters}
    try
        tell application "R"
            activate
            -- allow up to 25 hours so a long-running script doesn't time out
            with timeout of 90000 seconds
                cmd "source(\"Dropbox/RScripts/CV19/liibre_coronabr.R\")"
            end timeout
        end tell
    end try
    return input
end run
When you save it, just choose the date and time for it to run and select the option to repeat every day.
That's it!
P.S.: No, I do not want to debug my script. It is pretty awesome.
The problem is the application under test. If I place a few orders, it crashes. So what I want to do is: mid-execution, when I see that the application has crashed, pause the test script, bring the application back up and running, and then resume the test.
I know that this is not the point in time when I should be running the test scripts, as the application is not stable enough, but the developers are working on it and hopefully they will fix it soon. I am just curious whether there is a solution, because I couldn't find one. Of course I could build restarting the application after a crash into the tests themselves, but that is not what I want to do.
My system:
OS: Linux Mint
Tests: Watir (Ruby) + Cucumber on Chrome
I run the tests in a Linux terminal using Cucumber tags.
I just want to know, in general, if there is any way to pause and resume execution. For example, when I want to stop all the tests, I send the interrupt Ctrl + C on the command line. Is there a similar interrupt for pausing and resuming?
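(For what it's worth, standard Unix job control offers exactly this pair of operations, though pausing at the moment of a crash still has to be triggered by hand:)

# In the terminal running the tests:
#   Ctrl + Z   -> sends SIGTSTP, pausing the foreground process
#   fg         -> resumes it in the foreground
# Or from another terminal, by process ID:
kill -STOP <cucumber_pid>    # pause
kill -CONT <cucumber_pid>    # resume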
Okay, since you want a "general" answer, here goes...
Based on your context, you are looking for a "crashed" condition in your project.
My own approach to solving this problem would involve writing a helper method that would look for this condition and, if true, it would "pause". For example...
def pause_if_crashed
  # If a known element has gone missing, assume the app crashed
  # and give it time to come back up before continuing.
  sleep 30 if @browser.product_price.nil?
end
Then I would sprinkle this helper method in likely "crash" spots in my other functional methods.
Without more specifics about your needs, this is about as helpful as I can get, I think.
I currently have a Plone 4.3.8 site where editing a portlet causes a deadlock.
I'm trying to find tools to fix this, but most deadlock tools don't work, and I'm not getting good information (IMO) from those that at least run.
I've tried:
z3c.deadlockdebugger => can't get to a stack trace
ZopeHealthWatcher => can't see the results on the command line (or the web page)
Products.LongRequestLogger => perhaps the best so far; it gives me some log output, but its stack traces focus on Diazo code, and the problem still occurs when Diazo isn't in scope (running against 127.0.0.1)
gdb attach => just landed me in C code
winpdb => it can't attach to running processes the way gdb can (only to processes started with the intention of attachment by winpdb)
Products.signalstack (or Products.signalstacklogger) => the USR1 signal just shuts down a Zope process!
Note: z3c.deadlockdebugger (and things that depend on it) needs its source checked out to drop the threadframe dependency.
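(Side note: on any modern Python the threadframe functionality is built into the standard library as sys._current_frames(), so a bare-bones stack dumper needs no extra packages; a sketch:)

import sys
import traceback
import signal

def dump_threads(signum=None, frame=None):
    # Print a stack trace for every running thread.
    for thread_id, stack in sys._current_frames().items():
        print("Thread %s:" % thread_id)
        print("".join(traceback.format_stack(stack)))

# Wire it to a signal so a hung process can be poked from outside,
# e.g. with "kill -USR2 <zope_pid>":
signal.signal(signal.SIGUSR2, dump_threads)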
My situation seems to be linked to product upgrades, probably one or both of plone.app.contenttypes and plone.app.multilingual; an empty site doesn't have this issue, but obviously I need my site data!
What should I do to progress this?
EDIT:
I believe Maurits' answer to be the most correct one, but it didn't work in my case. What I ended up doing was using pdb to track down the point at which the code was hanging (in plone.app.debugtoolbar, as it happens).
You say that using the USR1 signal shuts down Zope when using Products.signalstack. But no special packages should be necessary, so I wonder if adding signalstack has this side effect of shutting down Zope.
At least for me, a few weeks ago, this worked fine on a Plone 4.3.something site:
kill -USR1 $(cat var/zeoclient.pid)
Although Maurits' answer is right (and the simplest one), sometimes I had trouble seeing the traceback resulting from the kill command: sometimes it shows up in the event log, sometimes on the shell.
I prefer integrating haufe.requestmonitoring into the buildout, also configuring its feature for monitoring long-running requests.
You will see the deadlocked traceback in your event log, and you also gain a tool for spotting low performance on your Plone site.
I have created tests using Selenium 2; I'm also using the Selenium standalone server to run them.
The problem is that if I run one test, it works. If I run multiple tests, some of them fail. If I try then to run a failed test, it works.
Could the tests be running on threads?
I've used the NUnit GUI and TeamCity to run the tests ... both give the same result: different tests fail; run again, and other tests fail.
Any thoughts?
EDIT
The tests shouldn't depend on one another. The database is emptied and repopulated for every test.
I guess the only problem could be that the database is not emptied correctly ... but then if I run the same test multiple times it should also fail.
EDIT2
The tests fail with "element not found".
I'll try adding a "WaitForElement" that retries every few milliseconds, and maybe that will fix it.
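In case it helps, the retry-until-found idea looks roughly like this in Selenium's Ruby bindings (a sketch only; the .NET bindings used with NUnit offer the same concept as WebDriverWait):

require "selenium-webdriver"

# Poll for an element until it exists or the timeout expires;
# NoSuchElementError is ignored between polls by default.
def wait_for_element(driver, locator, timeout = 10)
  wait = Selenium::WebDriver::Wait.new(timeout: timeout, interval: 0.2)
  wait.until { driver.find_element(locator) }
end

# Usage (locator is illustrative):
# element = wait_for_element(driver, { id: "order-button" })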
Without knowing the exact errors that are thrown, it's hard to say. The usual causes of flakiness are waits that aren't given a decent time, or a web server that can't handle that many requests.
If the DB is on the same machine as the web server (and on a build box, why shouldn't it be?), clearing it out can be intensive.
I would recommend going through each of the errors and making the tests bulletproof against that one, then moving on to the next. I know people who run their tests all the time without flakiness, so it's definitely an environmental thing that can be sorted.
I know I'm a bit late to the party here, but are you using a single window to run your tests? I had a similar issue: the site I'm testing has only one page-load event, so waiting for elements or pausing the test became very dodgy, and different tests passed each time. Adding a ton of wait time didn't work at all until I simply opened a new "clean" browser for each test. Testing does get slower, but it worked.