How, in PyTorch, do I set the second GPU as the default within a Jupyter notebook?

I have two GPUs and want to open a second Jupyter notebook and ensure everything within it runs only on the second GPU rather than the first. Ideally I'd like to do this by running a cell at the start rather than passing device=1 in multiple places.

The answer in this scenario is to call set_device:
import torch
torch.cuda.set_device(1)
The docs, however, discourage it. According to the GitHub note on the check-in (https://github.com/pytorch/pytorch/issues/260), my scenario is the only reasonable use case. The preferred method is to set an environment variable, which is also possible in the notebook case:
import os
os.environ['CUDA_VISIBLE_DEVICES']='1'
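Putting the two together: a minimal first cell, assuming nothing in the kernel has touched CUDA yet (the environment variable must be set before the first CUDA call). With only one device visible, the second physical GPU shows up as device 0:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # must run before CUDA is initialized

import torch
print(torch.cuda.device_count())      # 1 -- only the second GPU is visible
print(torch.cuda.current_device())    # 0 -- it is now the default device
print(torch.cuda.get_device_name(0))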

Related

How do I turn off report generation in OpenMDAO 3.17?

How can I turn off the report generation in OpenMDAO 3.17?
I've tried adding the following code to the top of my script:
import os
os.environ['OPENMDAO_REPORTS'] = 'none'
In place of 'none', I've also tried '0', 'false', and 'off', as mentioned in the OpenMDAO documentation. This is the only way I've seen to change the environment variables.
Are there other ways to permanently change them such as through the command prompt? I'm relatively new to python, so spelling it out for me would be helpful.
Also, I know this is a repost, but I don't have enough points to add a comment to that post. So I've had to create a new post. That post also mentioned a PR, but the summary indicates the report generation was only fixed for some Dymos functionality.
When you change the setting via a Python script (as your question shows), you are only changing it for that active Python session, not permanently.
You don't say whether you're on Windows or Linux, which changes the specific method you would use to achieve your goal. Generally, though, you can set the environment variable so that it is set whenever you open a terminal. This will have the effect you desire.
On Windows, you set environment variables via a small system GUI. On Linux you change them by adding a line (e.g. export OPENMDAO_REPORTS=0) to a config file (usually .bashrc). On a Mac the file is sometimes named .bash_profile or .profile.
I couldn't get them to switch off via environment variables, whether setting them in the Python script or from the terminal, but this worked when creating the problem:
p = om.Problem(model=om.Group(), reports=False)
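For reference, a minimal sketch combining both routes (the empty model is just a placeholder, and setting the variable before the import is an assumption about when the flag is read, not something the answers confirm):
import os
os.environ['OPENMDAO_REPORTS'] = '0'  # assumption: set before importing openmdao

import openmdao.api as om

# The per-problem switch that worked above; it doesn't depend on the
# environment at all.
p = om.Problem(model=om.Group(), reports=False)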

Is it possible to detect a terminal needing a `reset`?

A program that I am writing often crashes, leaving the terminal in a bad S-Lang library state (similar to ncurses). I would like to add a check to the development branch's Makefile main all target for such an improper state, with an automatic invocation of the reset command. How should I approach this task?
reset does two things; you can write a shell script which detects one of them:
it resets the terminal I/O modes (which a shell script can inspect using stty; a sketch of this check follows below)
it sends escape sequences to the terminal (which more often than not will alter things in the terminal for which there is no suitable query/report escape sequence).
For the latter, you would find it challenging to determine the currently selected character set, e.g., line-drawing. If the terminal's set to show line-drawing characters, that's the usual reason for someone wanting to run reset (e.g., cat'ing a binary file to the screen).
There happens to be a VT320 feature (DECCIR) which can provide this information, and which xterm implements; with regard to other terminals, though, you're unlikely to find it implemented (iTerm2 doesn't do it, and nor does VTE). You can read an extract from the VT510 manual describing the feature on vt100.net.
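As for the first check (the I/O modes), here is a minimal sketch using Python's termios module in place of stty; the choice of flags is an assumption about what a crashed curses/S-Lang program typically leaves switched off:
import subprocess
import sys
import termios

def terminal_needs_reset(fd=None):
    # Inspect the terminal's local modes; a crashed full-screen program
    # usually leaves echo and/or canonical input disabled.
    if fd is None:
        fd = sys.stdin.fileno()
    lflag = termios.tcgetattr(fd)[3]
    return not (lflag & termios.ECHO) or not (lflag & termios.ICANON)

if terminal_needs_reset():
    subprocess.run(['reset'])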

Is putting all "using" statements at top of file (Julia) bad?

I am unfamiliar with good coding practices in Julia. Working in a Jupyter notebook in Python, I typically put all import statements in one cell at the top of the file, which helps me easily see what the dependencies are.
Is it advisable to do the same with 'using' statements in Julia (I'm also working in a Jupyter notebook for now)?
Yes, you should. There is only one case I know of where you might not want to: a large module that is used only under rare conditions -- for example, a program that can optionally produce a plot (or perform similar optional functions) with a slow-to-load module in some rare usage scenarios, but normally never uses it.
Even then, you can easily wind up with errors at run time, because functions that have already been run get redefined when the module is finally loaded.
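In the Python workflow the question describes, the analogous exception would be a deferred import inside the function that needs the slow module; a sketch (matplotlib is just an illustrative stand-in for any slow-to-load dependency):
def maybe_plot(data, show=False):
    if not show:
        return
    import matplotlib.pyplot as plt  # loaded only if plotting is requested
    plt.plot(data)
    plt.show()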

Getting list of windows in Tk and destroying specific ones (R)

I was wondering if it is possible to get a list of windows in Tk, and destroy specific ones. I am working in R using the tcltk interface, and am calling a function written by someone else a long time ago (that I cannot edit) which is producing additional windows that I don't want.
From the documentation, it seems that new Toplevel windows are children of .TkRoot by default. I know that Python has a winfo_children method, which I was thinking of trying to call on .TkRoot, but I don't think that method is implemented in the tcltk library. I tried using tcl("winfo", "children", .TkRoot) but I am getting an error: [tcl] bad window path name "{}" (I'm not familiar with actual tcl, so I'm probably messing this command up).
Additionally, if there is a way to call winfo children, what's the best way to process the result to identify specific windows and then destroy them?
Looking at the R sources, I see that you should be doing:
tkwinfo("children", .TkRoot)
Except that I think that won't work either, as .TkRoot doesn't have a corresponding widget on the Tk side of things. Instead, use the string consisting of a single period (.) as the root of the search; that's the name for the initial window on the Tcl side of things. And I suspect that you'll just get back a raw Tcl list of basic Tk widget names without the R wrapping, as I just can't see where that conversion is applied…
Be aware that the results may include widgets that you don't expect, and that you need to call it on every window recursively if you want to find everything.
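In R, per the advice above, the call would be tkwinfo("children", "."). For comparison, here is the Python equivalent the question mentions, rooted at the same initial window (a sketch; the extra Toplevel is just a stand-in for the unwanted window):
import tkinter as tk

root = tk.Tk()
tk.Toplevel(root)  # stand-in for the unwanted extra window

for widget in root.winfo_children():    # children of '.'
    print(widget)                        # e.g. .!toplevel
    if isinstance(widget, tk.Toplevel):
        widget.destroy()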

Automatically "sourcing" function on change

While I am writing .R functions I constantly need to manually run source("funcname.r") to get the changes reflected in the workspace. I am sure it must be possible to do this automatically. What I would like is to make changes to my function, save it, and be able to use the new function in the R workspace without manually sourcing it. How can I do that?
UPDATE: I know about selecting appropriate lines of code and pressing CTRL+R in the R Editor (RGui), or using Notepad++ and executing the lines in R. But this approach has the disadvantage of making my workspace console "muddled". I would like to stop this practice if at all possible.
You can use RStudio, which has a source-on-save option.
If you are prepared to package your functions into a package, you may enjoy exploring Hadley's devtools package. This provides a suite of tools to write, test, and document packages.
https://github.com/hadley/devtools
This approach offers many advantages, but mainly reloading the package with a minimum of retyping.
You will still have to type load_all("yourpackage") but I find this small amount of typing is small beer compared to the advantages of devtools.
For additional information, including how to setup devtools, have a look at https://github.com/hadley/devtools/wiki/development
If you're using Eclipse + StatET, you can press CTRL+R+S, which saves your script and sources it. As close to automatic as I can get.
If you can get your text editor to run a system command after it saves the file, then you could use something like AutoIt (on Windows) or a batch script (on a UNIX derivative) to pass a call to source off to all running copies of R. But that's a heck of a lot of work for not much gain.
Still, I think it's much more likely to work being event-driven on the text-editor end vs. having R constantly scan for updates (or somehow interface with the OS's file-change notification system).
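As a sketch of the change-detection half only (a simple mtime poll; the path and interval are illustrative, and actually delivering the source() call into a running R session remains the hard part described above):
import os
import subprocess
import time

path = 'funcname.r'            # the file to watch
last = os.path.getmtime(path)
while True:
    time.sleep(1)              # poll once per second
    mtime = os.path.getmtime(path)
    if mtime != last:
        last = mtime
        # placeholder action; replace with whatever delivers
        # source('funcname.r') to your running R session
        subprocess.run(['echo', path + ' changed'])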
This is likely not possible (automatically detecting disk changes without intervention or without running at least one line). R needs to read functions into memory, so a change on disk won't be reflected in the workspace without reloading your functions.
If you are into developing R functions, some amount of messiness during your development process will likely be inevitable, but perhaps I could suggest that you try writing an R package to house your functions?
This has the advantage of being able to robustly document your functions, and of lazy loading, so that you have access to your functions/datasets immediately without sourcing them.
Don't be afraid of making a package, it's easy with package.skeleton() and doesn't have to go on CRAN but could be for your own personal use without distribution! Just have fun!
Try to accept some messiness during development knowing you are working towards your goal and fighting the good fight of code organization and documentation!
We are only imperfect people, in an imperfect world, but we mean well!
