Background info:
I have a situation where I start a computation on one device (say, device A). After some time I would like to move to another device (say, mobile device B) and continue the same computational process I started on device A.
Problem:
When I open the notebook on device B, there is a problem with the currently running cell. No continuous output is visible on device B (there should be a checkpoint visible every 5 seconds or so). It is as if the cell were not even running: it is marked ([ ]) instead of ([*]). This way I have no idea when the computation will stop or what the output looks like on device B.
My effort:
What I have tried so far is to reconnect to the kernel on device B, with no results. I can also see that on device B the kernel is not active, which in my opinion means I have to somehow (re)connect to the kernel running on device A. There is an option to switch kernels, with the possibility to "choose kernel from other session" (I assume this is the solution), but I cannot find a way to connect to a kernel from the session on device A.
Bonus info:
I use BinderHub, not plain Jupyter notebooks.
I followed an idea by @Rock and came up with a workaround: transfer all the output to a log file. This way I got real-time output on device B, which was sufficient for my use case.
I currently have firmware that can reach an average deep sleep current of ~130uA. I can reach this level reproducibly on one of the boards I have.
(screenshot: successful deep sleep)
Trouble is, when I try to clone this chip onto other chips using the nRF Programmer (Connect) app, I get extremely high power consumption, an average of ~20mA at all times; the device doesn't seem to reach deep sleep properly. I have tried this on several other boards, so I don't believe it's simply a problem of something shorting. Strangely, the application runs just fine; the current is merely several times normal for the same functionality.
(screenshot: unsuccessful deep sleep)
Does anyone have any ideas on how I can truly clone the flash of one device onto another? Clearly the "save as file" option in nRF Connect isn't doing this, and neither erasing everything and re-uploading, nor starting from a blank chip and writing, has helped.
FYI, I'm using the nRF52840 module by Raytac (MDBT50Q), implemented on a custom board. This board SHOULD be capable of going down to ~33uA, which I have observed in the past with this very board after some combination of erasing everything, reprogramming, setting the 3.3V logic level (nrfjprog --memwr 0x10001304 --val 5), etc.
For posterity, I did actually find the solution! For anyone else in a similar boat to me, the winning command is:
nrfjprog.exe --readcode --readuicr --readram [filename.hex]
Apparently --readram was the winning flag; without it, the sketch simply doesn't run at the same current consumption, for whatever reason. But now I can reproducibly image and transfer identical firmware, which was what I was after.
In my pintool, I check NtReadFile() and NtCreateFile() system calls using the PIN API:
PIN_AddSyscallEntryFunction()
PIN_AddSyscallExitFunction()
But the output seems to be polluted with many unexpected additional interceptions, which I would like to filter out.
The problem is that the SYSCALL_ENTRY_CALLBACK functions do not give you access to the information needed to deduce where the system call was spawned from (the calling site), even at entry. Checking the value of REG_EIP (the instruction pointer) just before the system call is executed, I see I am way off the calling site (outside the address range of the image I am instrumenting, although the system call is made from within this image).
I also tried to instrument instructions with INS_IsSyscall() at IPOINT_BEFORE and check the instruction's address, but it seems that is also too late (the address is outside the image's low and high addresses).
What would be the correct way to instrument only the system calls originating from the image I am instrumenting?
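For reference, here is a minimal sketch of one possible filtering approach (untested, and the stack handling is an assumption): record the main image's address range at load time, then, in the syscall entry callback, read the return address from the top of the stack and keep only the syscalls whose return address falls inside that range. On Windows the immediate return address usually lands in the ntdll wrapper (which matches the out-of-range addresses described above), so in practice you may have to walk further up the stack:

#include "pin.H"
#include <iostream>

/* Address range of the main executable image (filled in at load time). */
static ADDRINT mainImgLow = 0, mainImgHigh = 0;

VOID ImageLoad(IMG img, VOID *v)
{
    if (IMG_IsMainExecutable(img))
    {
        mainImgLow  = IMG_LowAddress(img);
        mainImgHigh = IMG_HighAddress(img);
    }
}

VOID SyscallEntry(THREADID tid, CONTEXT *ctxt, SYSCALL_STANDARD std, VOID *v)
{
    /* Read the return address sitting on top of the stack at syscall entry. */
    ADDRINT sp = PIN_GetContextReg(ctxt, REG_STACK_PTR);
    ADDRINT retAddr = 0;
    if (PIN_SafeCopy(&retAddr, (VOID *)sp, sizeof(retAddr)) != sizeof(retAddr))
        return;

    /* Keep only syscalls whose (approximate) calling site lies in the main image. */
    if (retAddr >= mainImgLow && retAddr < mainImgHigh)
    {
        std::cout << "syscall " << PIN_GetSyscallNumber(ctxt, std)
                  << " from 0x" << std::hex << retAddr << std::dec << std::endl;
    }
}

int main(int argc, char *argv[])
{
    if (PIN_Init(argc, argv)) return 1;
    IMG_AddInstrumentFunction(ImageLoad, 0);
    PIN_AddSyscallEntryFunction(SyscallEntry, 0);
    PIN_StartProgram(); /* never returns */
    return 0;
}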
I am trying to make a program in Scilab that plots, in real time, data received from a serial port.
My idea is to execute a new plot command after every portion of data received, but I am afraid this is too much work for the computer, and that Scilab will not keep up and will miss data.
Do you know of a way to plot real-time data from a serial COM port, in Scilab or another free program?
It is perfectly possible to run such a hardware setup from a Scilab session and plot live data.
We do it at ENSIM, for practicals in optics: we move a translation actuator step by step (Scilab driver at https://fileexchange.scilab.org/toolboxes/255000, plugged into port #1), and for each step we read the transmitted signal with an optical powermeter (driver at https://fileexchange.scilab.org/toolboxes/223000, plugged into port #2). The refresh frequency of the powermeter is 1-2 Hz, so there is no problem getting a live plot of the received data.
We have also written a Scilab driver for the very popular M38XR multimeter (at https://fileexchange.scilab.org/toolboxes/232000). A syntax is implemented to continuously display the live data coming from the multimeter (at the same low refresh rate, ~1 Hz).
Etc.
Two new Scilab drivers are coming soon for new instruments (a furnace and another popular multimeter). All our drivers presently on FileExchange will be updated for Scilab 6 and gathered into a single ATOMS module (which will be easier to document and maintain).
As the title says, when I run my OpenCL kernel, the entire screen stops redrawing: the image displayed on the monitor remains the same until my program is done with its calculations (this is true even if I unplug the monitor from my notebook and plug it back in; the same image is always displayed), and the computer does not seem to react to mouse movement either - the cursor stays in the same position.
I am not sure why this happens. Could it be a bug in my program, or is this standard behaviour?
While searching on Google, I found this thread on AMD's forum, where some people suggested it's normal, as the GPU can't refresh the screen while it is busy with computations.
If this is true, is there still any way to work around it?
My kernel computation can take up to several minutes, and having my computer practically unusable for that whole time is really painful.
EDIT1: This is my current setup:
- the graphics card is an ATI Mobility Radeon HD 5650 with 512 MB of memory, with the latest Catalyst beta driver from AMD's website
- the graphics is switchable (Intel integrated / ATI dedicated), but I have disabled switching in the BIOS, because otherwise I could not get the driver working on Ubuntu
- the operating system is Ubuntu 12.10 (64-bit), but this happens on Windows 7 (64-bit) as well
- my monitor is plugged in via HDMI (but the notebook screen freezes too, so this should not be the problem)
EDIT2: So after a day of playing with my code, I took the advice from your responses and changed my algorithm to something like this (in pseudocode):
/* total_size = num_chunks * chunk_size, in elements */
for (cl_ulong chunk = 0; chunk < total_size; chunk += chunk_size)
{
    /* set the kernel arguments that differ for each chunk */
    clSetKernelArg(/* ... */);

    /* schedule the kernel for execution on this chunk */
    clEnqueueNDRangeKernel(cmd_queue, kernel, 1, NULL, &global_work_size, NULL, 0, NULL, NULL);

    /* blocking read: wait for the chunk to finish, then append its results
       to the output array on the host */
    clEnqueueReadBuffer(cmd_queue, of_buf, CL_TRUE, 0, chunk_size, output + chunk, 0, NULL, NULL);
}
So now I split the whole workload on the host and send it to the GPU in chunks. For each chunk of data I enqueue a new kernel, and the results I get back from it are appended to the output array at the correct offset.
Is this how you meant the calculation should be divided?
This seems to remedy the freeze problem, and even better, I am now able to process data much larger than the available GPU memory. I will still have to make some good performance measurements to find a good chunk size...
Whenever a GPU is running an OpenCL kernel, it is completely dedicated to OpenCL. Some modern NVIDIA GPUs are the exception - I think from the GeForce GTX 500 series onwards - as they can run multiple kernels, provided those kernels do not use all available compute units.
Your solutions are to divide your calculation into multiple short kernel calls, which is the best all-round solution since it will work even on single-GPU machines, or to invest in a cheap GPU for driving your display.
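For a one-dimensional job, the division can be as simple as sliding the global work offset across a sequence of short launches. Here is a minimal sketch (run_chunked, total, and chunk are made-up names; a non-NULL global work offset requires OpenCL 1.1 or later, and error checking is omitted):

#include <CL/cl.h>

/* Run one long 1-D job as many short launches so the GPU is free to
   service the display between chunks. */
void run_chunked(cl_command_queue queue, cl_kernel kernel,
                 size_t total, size_t chunk)
{
    for (size_t offset = 0; offset < total; offset += chunk)
    {
        size_t gsize = (total - offset < chunk) ? total - offset : chunk;

        /* get_global_id(0) inside the kernel already includes the offset */
        clEnqueueNDRangeKernel(queue, kernel, 1, &offset, &gsize,
                               NULL, 0, NULL, NULL);

        /* wait for this chunk to finish, letting the driver redraw in between */
        clFinish(queue);
    }
}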
If you are going to run long kernels on your GPUs, then you must either disable timeout detection and recovery for the GPU or make the timeout delay longer than the maximum kernel runtime (the latter is better, as bugs can still be caught); see here for how to do this.
Every time I have had a display freeze or a "Display driver stopped responding and has recovered" error, it has been due to a bug. It can freeze the whole system, and the only thing I can do is reset. Instead, I now develop on the CPU first; this never crashes my whole system, and it's easier to debug as well, since I can use printf. Once I have my code working bug-free on the CPU, I try it on the GPU.
I am new to OpenCL and encountered a similar problem: I found that a short calculation works OK, but a longer one freezes the mouse cursor. In my case, Windows leaves a yellow triangle in the tray area and puts a message in the event log about "Display driver stopped responding and has recovered". The solution I found is to break the calculation up into small parts that take no more than a couple of seconds each. These run back to back, yet apparently let the video driver in often enough to keep it happy. If I set global_work_size to a value high enough to maximize throughput, the video response is painfully slow, but the driver restart/mouse freeze problem never occurs.
I have a simple idea, but I guess it's hard to implement in Simulink. I created a TCP/IP server between a BeagleBone and a Simulink block using C code. I have a switch connected to the BeagleBone as an input, and my idea is to have a display on the Simulink block showing whether the switch is closed or open. I couldn't do it, because my client (the Simulink block) is C code, and it does the job only once: the C code ends the function execution after returning the value of the switch. Do you know of any Simulink transfer mode or fancy C trick to transfer data continuously between the Simulink client block and the display?
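One common fix is to keep the socket open and poll in a loop, instead of connecting, reading once, and returning. Here is a minimal sketch (the BeagleBone's IP address, the port, and the one-byte '0'/'1' protocol are all assumptions):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in srv = {};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                        /* assumed port */
    inet_pton(AF_INET, "192.168.7.2", &srv.sin_addr);  /* assumed BeagleBone IP */
    if (connect(fd, (sockaddr *)&srv, sizeof(srv)) != 0)
    {
        perror("connect");
        return 1;
    }

    /* Keep the connection open and poll, instead of returning after one read. */
    for (;;)
    {
        char state = 0;
        if (recv(fd, &state, 1, 0) <= 0) break;  /* server closed or error */
        printf("switch is %s\n", state == '1' ? "closed" : "open");
    }

    close(fd);
    return 0;
}

In a Simulink C S-function, the same idea would map to opening the socket in mdlStart, reading the latest value in mdlOutputs, and closing it in mdlTerminate, so the connection persists across simulation steps.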
In a simulator I've been working on, we used a Sink block with a remote IP address and port to export data calculated in real time to an external listener (on the same machine or connected via the network). In our case this was a computer doing the graphics rendering: the data was picked up by C++ code written with Ogre 3D, and by another Simulink model using a Source block.
We also had an interactive chartplotter display (GPS position indication, if you will). We could access the values generated by Simulink by calling the following command whenever it was needed:
variable = get_param('Simulator/Chartplotter/YDot','RuntimeObject');
You could also call "set_param" to modify a constant value located inside the simulink model.
I don't have experience with the BeagleBone, but I imagine you could have your C code execute a MATLAB script that modifies a logical constant in your Simulink model to indicate whether the switch is open or not.
Alternatively, you could explore the first option, but we only used it to get data out of Simulink into the C program, not the other way around as you wish to do. Unfortunately I don't have direct access to the receiving C code, but if you are really stuck, I could ask my colleagues to send me the important bits.
Hope this helps.