I am trying to run a 'detection' demo of ChainerCV from here:
https://github.com/chainer/chainercv/tree/master/examples/detection
I am running this demo in BitFusion Ubuntu 14.04 Chainer AMI on AWS with a p2.xlarge instance which uses a single GPU.
When I first try to run this demo, I get this error:
no display name and no $DISPLAY environment variable
So I researched it on the web and found a suggestion to use matplotlib with the Agg backend, so I imported matplotlib and switched to Agg.
That does solve the $DISPLAY problem, and the demo now runs successfully, but the only output I get is a blank white image.
Can anyone tell me the reason behind this?
The problem seems to be that you are running the demo program on a remote machine where X is not set up properly.
You can get proper matplotlib output if you connect to your remote machine with ssh -X <your.remote.machine.address>, but it will take time to show the result.
If you want to run the demo on the remote machine quickly, I recommend setting MPLBACKEND=Agg, saving the plot figure as an image with plt.savefig(<imagepath>), and not calling plt.show() in the demo program.
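For example, a minimal sketch of that approach (the plotted data and the output path are just placeholders):

import matplotlib
matplotlib.use("Agg")            # select the non-interactive Agg backend before pyplot is imported
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])   # placeholder figure
plt.savefig("result.png")        # write the figure to a file instead of calling plt.show()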
I started reading about how we could run Spark batch jobs using Airflow.
I have tried using SparkSubmitOperator in local mode and it works fine. However, I need a recommendation on whether we can use it in cluster mode.
The only problem I see when using it in cluster mode is that the application status cannot be tracked; the reference is shared in the link below:
https://albertusk95.github.io/posts/2019/12/airflow-tracks-spark-driver-status-cluster-deployment/
Please let me know if anyone has tried using this operator and whether it works well in cluster mode, or if there are any issues using it.
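For context, here is roughly how I am invoking it in local mode (the import path varies with the Airflow version, and the DAG id, application path, and connection id are placeholders):

from datetime import datetime
from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(dag_id="spark_batch_example",
         start_date=datetime(2020, 1, 1),
         schedule_interval=None) as dag:
    submit_job = SparkSubmitOperator(
        task_id="submit_spark_job",
        application="/path/to/job.py",   # placeholder path to the Spark application
        conn_id="spark_default",         # Spark connection; master/deploy mode come from its settings
        verbose=True,
    )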
I am trying to run an R notebook on Microsoft's Azure Notebooks cloud service.
When I try to run all cells, it displays Loading required package: ggplot2 in the last cell, and then the kernel systematically crashes. I get:
The kernel appears to have died. It will restart automatically.
But the kernel does not restart automatically.
How can I get a log describing the encountered issue? Is there a way to activate a debugger?
When you're running Jupyter, you'll usually see messages about kernel issues in the standard I/O of the console you launched it from. In Azure Notebooks this gets redirected to a file at ~/.nb.log. You can open a new terminal by clicking on the Jupyter icon, then doing New->Terminal, and running cat ~/.nb.log. You could also start a new Python notebook for this purpose and run "!cat ~/.nb.log" - but unfortunately you can't do that from an R notebook, since R notebooks don't support the "magic" ! commands.
Usually that gives you a good starting point. If that doesn't help much, you could try invoking R directly from the terminal, running the repro steps there, and seeing if that's more useful.
I have R code that I am trying to call over HTTP using OpenCPU, but for long-running code it times out. I came across https://github.com/joelkuiper/aplomb
Unfortunately, the documentation is not detailed and I am unable to figure out how to make it work once it is deployed and the container is running.
Look in the file /etc/opencpu/server.conf
You should see the parameters timelimit.get and timelimit.post (values are in seconds). Increase them to something that seems reasonable for your code, and save the file.
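For example, the relevant entries might look something like this (the exact file format and sensible values depend on your OpenCPU version, so treat these numbers as placeholders; leave the other settings in the file unchanged):

{
  "timelimit.get": 300,
  "timelimit.post": 600
}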
Then restart the service:
sudo service opencpu restart
Then try again - hope it works!
I am running Grizzly on a two-node configuration. If I use the standard images, i.e. cirros-0.3.0-x86_64-disk.img or any of the ubuntu-server-cloudimg-amd64-disk1.img images, I have no problems; the console log is there. But if I create an image using KVM and any of the standard Ubuntu ISO files, then I don't get a console log. The instances always run OK and I can access them via the dashboard login screen, and there are no error messages about the console; console.log is always 0 bytes in these cases. Is there any difference in those cloud images that I need to add to my image creation process? I have already tried adding libvirt, unsuccessfully.
Thanks for the help
Short answer
Inside your virtual machine, edit /etc/default/grub so it has the line:
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"
Do
sudo update-grub
Longer answer
Grub needs to be configured to write the boot messages to the serial device (ttyS0). In particular, on Ubuntu, in your /boot/grub/grub.cfg, there should be a line that has console=ttyS0, like this:
linux /boot/vmlinuz-3.2.0-49-virtual root=UUID=6d2231e4-0975-4f35-a94f-56738c1a8150 ro console=ttyS0
However, you shouldn't edit this file directly. Instead, you should edit /etc/default/grub to specify the additional parameters to be passed to the kernel and then run update-grub, which will update the files in /boot/grub for you. Specify the console=ttyS0 argument by editing the GRUB_CMDLINE_LINUX_DEFAULT variable defined in /etc/default/grub.
I have a small app that I'm trying to build for Windows machines. The program creates an OpenVPN connection. If I build the program and run it, it first opens a console for the program's output. If I pass the -w parameter to pyinstaller so it doesn't build with a console attached, the program fails to run at all. It opens all right, but the VPN connection is never created.
With the console, everything works perfectly.
I also have basic logging in place for the application to see where my code might stop, and nothing gets written. With the console on, my program spits out all kinds of logs.
I just don't know why my program performs perfectly with a console but does nothing without one. Any ideas?
I'm going to answer this myself. Make sure you don't print anything, and redirect all stdout to a logger, a file, or anything else instead of the console.
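As a rough illustration of that idea (the log file name is a placeholder), something like this near the top of the program catches anything that would otherwise go to the missing console:

import sys

# With pyinstaller -w there is no console, so sys.stdout/sys.stderr may be
# missing or unusable; send everything to a log file instead.
if sys.stdout is None or sys.stderr is None:
    log_file = open("app.log", "a", buffering=1)
    sys.stdout = log_file
    sys.stderr = log_file

print("this now goes to app.log instead of a console")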
I was having a similar problem, but couldn't find any print/stdout statements going to the console. I was using subprocess.Popen and redirecting stdout=subprocess.PIPE. I subsequently added stderr=subprocess.STDOUT and stdin=subprocess.PIPE, and my program worked. This page (Python subprocess.call() fails when using pythonw.exe) on subprocess failures helped me get it working.
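For reference, a minimal sketch of that fix (the command line is a placeholder for the actual OpenVPN invocation); redirecting all three standard handles keeps the child process from trying to inherit console handles that don't exist in a windowed build:

import subprocess

proc = subprocess.Popen(
    ["openvpn", "--config", "client.ovpn"],  # placeholder command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
output, _ = proc.communicate()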