In R, I like to use reverse search (Ctrl+R) to redo infrequent but complex commands without a script. Frequently, I run so many other commands in between that the command history discards the old command. How can I change the default length of the command history?
This is platform and console specific. From the help for ?savehistory:
There are several history mechanisms available for the different R
consoles, which work in similar but not identical ways...
...
The history mechanism is controlled by two environment variables:
R_HISTSIZE controls the number of lines that are saved (default 512),
and R_HISTFILE sets the filename used for the loading/saving of
history if requested at the beginning/end of a session (but not the
default for these functions). There is no limit on the number of lines
of history retained during a session, so setting R_HISTSIZE to a large
value has no penalty unless a large file is actually generated.
So, in theory, you can read and set R_HISTSIZE with:
Sys.getenv("R_HISTSIZE")
Sys.setenv(R_HISTSIZE = new_number)
But, in practice, this may or may not have any effect.
See also ?Sys.setenv and ?EnvVar
Take a look at the help page for history(). This is apparently set by the environment variable R_HISTSIZE, so you can set it for the session with Sys.setenv(R_HISTSIZE = XXX). I'm still digging to find where to change this default for all R sessions, but presumably it will be related to ?Startup or your R profile.
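For example, to make this persist across sessions, one approach (untested on every console, as the help page warns) is to set the variable in your ~/.Renviron file, which R reads at startup, or from your ~/.Rprofile; the size 10000 below is an arbitrary example:
# In ~/.Renviron (plain NAME=value lines, read when R starts):
#   R_HISTSIZE=10000
# Or equivalently, near the top of ~/.Rprofile:
Sys.setenv(R_HISTSIZE = "10000")   # 10000 is an arbitrary example size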
?history
"There are several history mechanisms available for the different R
consoles, which work in similar but not identical ways. "
Furthermore, there may even be two history mechanisms on the same machine. I have .Rhistory files saved from the console, and the Mac R GUI has its own separate system. You can increase the number of GUI-managed history entries in the Preferences panel.
There is an incremental history package:
http://finzi.psych.upenn.edu/R/library/track/html/track.history.html
In ESS you can set comint-input-ring-size:
(add-hook 'inferior-ess-mode-hook
          (lambda ()
            (setq comint-input-ring-size 9999999)))
I have two questions on the case recorder.
1. I am not sure how to restart an optimization from where the recorder left off. I can read in the case recorder SQL file, but I cannot see how this can be fed back into the Problem to restart.
2. This may be due to my lack of knowledge of Python, but how can one access the iteration number from within an OpenMDAO component? (One way is to read the SQL file that is constantly being updated, but there should be a more efficient way.)
You can load a case back in via the load_case method on the problem.
See the docs for it here.
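A minimal sketch, assuming a recorder file named cases.sql and a Problem named prob that has already been set up the same way as the original run (both names are placeholders):
import openmdao.api as om

cr = om.CaseReader("cases.sql")        # recorder output from the earlier run
case_ids = cr.list_cases()             # case ids in recorded order
last_case = cr.get_case(case_ids[-1])  # the state where the recorder left off

prob.load_case(last_case)              # push those values back into the model
prob.run_driver()                      # continue the optimization from there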
I'm not completely sure what you mean by accessing the iteration count, but if you just want to know the number of times your components are called, you can add a counter to them yourself (see the sketch below).
There is no programmatic API for accessing the iteration count in OpenMDAO as of version 2.3.
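As a sketch of that do-it-yourself counter (the component here is a made-up example; exec_count is a plain Python attribute, not OpenMDAO API):
import openmdao.api as om

class CountingComp(om.ExplicitComponent):
    def setup(self):
        self.add_input('x', val=0.0)
        self.add_output('y', val=0.0)
        self.exec_count = 0            # incremented on every compute() call

    def compute(self, inputs, outputs):
        self.exec_count += 1
        outputs['y'] = 2.0 * inputs['x']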
I'm using a 3rd-party application that uses Berkeley DB for its local datastore (called BMC Discovery). Over time, its BDB files fragment and become ridiculously large, and BMC Software provides a compaction utility that essentially pipes db_dump into db_load with a new file name, and then replaces the original file with the rebuilt one.
For large files this can take insanely long, sometimes many hours, while other files of the same size take half that time. It seems to really depend on the level of fragmentation in the file and/or the type of data in it (I assume?).
The provided utility uses a crude method to guesstimate the duration based on the total size of the datastore (which is composed of multiple BDB files): e.g. if larger than 1 GB it says "will take a few hours", and if larger than 100 GB it says "will take many hours". This doesn't help at all.
I'm wondering if there is a better, more accurate way, using the commands provided with Berkeley DB 6.0 (on Red Hat), to estimate the duration of a db_dump/db_load operation for a specific BDB file.
Note: even though this question mentions a specific 3rd-party application, that is just to give you context. The question is generic to Berkeley DB.
db_dump/db_load are the usual (portable) way to defragment.
Newer BDB (from roughly the last 4-5 years, certainly db-6.x) has a db_hotbackup(8) command that might be faster by avoiding hex conversions.
(solutions below would require custom coding)
There is also a DB->compact(3) call that "optionally returns unused Btree, Hash or Recno database pages to the underlying filesystem". This will likely leave a sparse file, which will appear ridiculously large to "ls -l" but actually only uses the blocks necessary to store the data.
Finally, there are db_upgrade(8) / db_verify(8), both of which can be customized with DB->set_feedback(3) to run a callback (e.g. a progress bar) during long operations.
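For instance, a minimal C sketch of wiring such a feedback callback into DB->verify (the file name is a placeholder; Berkeley DB invokes the callback with a percent-complete value during verify/upgrade):
#include <stdio.h>
#include <db.h>

/* Called periodically by Berkeley DB during DB->verify / DB->upgrade. */
static void progress(DB *dbp, int opcode, int percent)
{
    fprintf(stderr, "\r%s: %d%%",
            opcode == DB_UPGRADE ? "upgrade" : "verify", percent);
}

int main(void)
{
    DB *dbp;
    if (db_create(&dbp, NULL, 0) != 0)
        return 1;
    dbp->set_feedback(dbp, progress);
    /* "datastore.db" is a placeholder; DB->verify also releases the handle. */
    int ret = dbp->verify(dbp, "datastore.db", NULL, NULL, 0);
    fputc('\n', stderr);
    return ret;
}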
Before anything, I would check configuration using db_tuner(8) and db_stat(8), and think a bit about tuning parameters in DB_CONFIG.
I am using the Lidgren networking library to create a real-time multiplayer game.
What I am trying to do is save all packets arriving at a peer (including all their bytes) to a binary file. Later, when I need to debug some weird networking behavior, I can load this file and have it rebuild the saved packets sequentially. This way, I can find out exactly how the weird behavior occurred.
My question is: how do I recreate a packet when I load it from the file?
It is a NetIncomingMessage that I need to recreate, I assume, and so far I have thought of either creating one anew or, if that fails, sending a NetOutgoingMessage to self in the hope that it has the same effect.
The way I solved this is by creating a wrapper object around NetIncomingMessage which contains, among other members, the data byte array. One thread fills a list of these objects according to their saved arrival times, while another thread requests and removes (dequeues) them, as in the sketch below.
See https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem
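The pattern in outline (a minimal Python sketch, since the idea is language-agnostic; the actual wrapper would be C# around NetIncomingMessage, and all names here are illustrative):
import queue
import threading
import time

replay_queue = queue.Queue()

def producer(saved_packets):
    # saved_packets: list of (arrival_offset_seconds, data_bytes) read from the file
    start = time.monotonic()
    for offset, data in saved_packets:
        delay = offset - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)          # reproduce the original arrival timing
        replay_queue.put(data)

def consumer():
    while True:
        data = replay_queue.get()      # blocks until the producer enqueues a packet
        # ... hand 'data' to the message-handling code under test ...
        replay_queue.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer([(0.0, b'hello'), (0.5, b'world')])   # toy replay data
replay_queue.join()                    # wait until everything has been consumed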
I'm a new DM user and I need to transfer data (pixel brightness values) between DigitalMicrograph and R, to process and model an image.
Specifically, I need to extract the bright pixels from an original image, send them to R for processing, and return the result to DM to display the new image.
I would like to know whether this is possible and how to do it from a script in DM.
Many thanks. Regards.
There is very little direct connection between DM (scripting) and the outside world, so the best solution is quite likely the following (DM-centric) route:
A script is started in DM, which does:
all the UI needed
extract the intensities etc.
save all required data in a suitable format on disc at a specific path (raw data/text data/...)
call an external application (anything you can call from a command prompt, including .bat files) and wait until that command has finished
Have all your R code written in a way that it can be called from a command prompt, potentially with command-line parameters (e.g. a configuration file), so that it can:
read data from the specific path
process it as required (without a UI, so it runs 'silently')
save the results on disc at a specific path
close the application
At this point, the script in DM continues, reading in the results (and potentially doing some clean-up of files on disc.)
So, in essence, the important thing is that your R code can work as a "stand-alone" black-box executable, fully controlled by command-line parameters.
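A minimal sketch of such a batch-mode R script (the file paths and the processing step are placeholders; it would be launched from DM as something like "Rscript process.R in.csv out.csv"):
#!/usr/bin/env Rscript
args    <- commandArgs(trailingOnly = TRUE)
infile  <- args[1]                      # path written by the DM script
outfile <- args[2]                      # path the DM script will read back

pixels    <- as.matrix(read.csv(infile, header = FALSE))  # intensities from DM
processed <- pixels / max(pixels)                         # placeholder processing
write.csv(processed, outfile, row.names = FALSE)          # results for DM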
The command you will need to launch an external application can be found in the help documentation under "Utility Functions" and is LaunchExternalProcess. It was introduced with GMS 2.3.1.
You might also want to try the commands ScrapCopy() and ScrapPasteNew() to copy an image (or image subarea) into the clipboard, but I am not sure exactly how the data is handled there.
I am a Unix addict, and many of the machines I use (at home and at work) are now quickly passing the 10,000-command mark. I like to keep all of the commands I issue readily available, which is why I have set the upper limit to something like 100,000 entries, but it is becoming tedious to recall particular recent entries, as I have to write something like !12524 in the shell to expand that one.
Sure, I can use the shortcuts to recall the last command or even the tenth-last command, but it's impossible to keep track of that, so 90% of the time I am doing things like history | grep 'configure --prefix' (to review how I configured something last, etc.) and then using whatever history index that spits back.
Can I reverse it, so that command #10000 corresponds to the command from ten thousand commands ago, and command #1 is the last command?