I'm trying to build a distributed application in Ada using the DSA and after hours of trial and error I finally managed to get it to compile correctly. However, now I have problems with the naming server.
My application is composed of two partitions: one hosts a simple RCI unit, and the other is the client that calls it. After compilation, I start the name server with po_cos_naming and it comes up correctly.
I then start the executable that corresponds to my RCI partition, and here is where the problem pops up. On the name server console, these lines appear, about one every second:
cosnaming.namingcontext: look for "AAAA polyorb.dsa_p.partitions RCI;"
cosnaming.namingcontext: look for "AAAA polyorb.dsa_p.partitions RCI;"
cosnaming.namingcontext: look for "AAAA polyorb.dsa_p.partitions RCI;"
[the same line repeats, about once per second]
After that, the RCI partition executable prints:
raised SYSTEM.RPC.COMMUNICATION_ERROR : lookup of RCI polyorb.dsa_p.partitions failed
and then exits.
So basically, the naming server is being contacted, but it can't find that partition. Note that this partition is not part of my application; I assume it's something po_gnatdist adds, but I can't tell what is failing here.
I didn't post code because it's fairly large; if it's needed to debug this, let me know and I'll try to trim it down to a smaller sample.
Well, I found the problem shortly after posting.
In my DSA configuration file I had designated the "main" procedure as the one in the client. It turns out it needs to be in the "server", i.e. in the partition that exposes the RCI packages.
I am running an .ipynb notebook to build an ML model. Execution takes more than 2 hours. I want to mark a code cell as "do not execute" so that the notebook skips over it, while preserving the previous output for that particular cell.
In Jupyter with an ipython kernel, is there a canonical way to execute cells in a non-blocking fashion?
Ideally I'd like to be able to run a cell
%%background
time.sleep(10)
print("hello")
such that I can start editing and running the next cells and in 10 seconds see "hello" appear in the output of the original cell.
I have tried two approaches, but haven't been happy with either.
(1) Create a thread by hand:

import threading
import time

def foo():
    time.sleep(10)
    print("hello")

threading.Thread(target=foo).start()
The problem with this is that "hello" is printed in whatever cell is active 10 seconds later, not necessarily in the cell where the thread was started.
(2) Use an ipywidgets.Output widget:

import threading
import time
import ipywidgets
from IPython.display import display

def foo(out):
    time.sleep(10)
    out.append_stdout("hello")

out = ipywidgets.Output()
display(out)
threading.Thread(target=foo, args=(out,)).start()
This works, but there are problems when I want to update the output (think of monitoring something like memory consumption):
import datetime

def foo(out):
    while True:
        time.sleep(1)
        out.clear_output()
        out.append_stdout(str(datetime.datetime.now()))

out = ipywidgets.Output()
display(out)
threading.Thread(target=foo, args=(out,)).start()
The output now constantly switches between 0 and 1 lines in size, which results in flickering of the entire notebook.
This should be solvable by passing wait=True in the call to clear_output. Alas, for me it results in the output never showing anything.
I could have asked specifically about that issue, which seems to be a bug, but I wondered whether there is another solution that doesn't require me doing all of this by hand.
I've experienced similar issues when plotting to an output widget; it looks like you have followed the examples in the ipywidgets documentation on asynchronous output widgets.
The other approach I have found sometimes helpful (particularly if you know the size of the desired output) is to fix the height of your output widget when you create it.
out = ipywidgets.Output(layout=ipywidgets.Layout(height='25px'))
Sometimes I execute a method that takes a long time to run:
In [1]:
long_running_invocation()
Out[1]:
Often I am interested in knowing how much time it took, so I have to write this:
In[2]:
import time
start = time.time()
long_running_invocation()
end = time.time()
print(end - start)
Out[2]: 1024
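(Not a notebook feature, just a hand-rolled workaround: the boilerplate above can be folded into a small context manager, though I still have to remember to invoke it every time:)

```python
import time
from contextlib import contextmanager

@contextmanager
def timed():
    # Print the wall-clock time of the enclosed block when it finishes,
    # even if the block raises.
    start = time.time()
    try:
        yield
    finally:
        print(f"{time.time() - start:.2f}s")
```

so that timing a call becomes with timed(): long_running_invocation().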
Is there a way to configure my IPython notebook so that it automatically prints the execution time of every call I am making like in the following example?
In [1]:
long_running_invocation()
Out[1] (1.59s):
This ipython extension does what you want: https://github.com/cpcloud/ipython-autotime
load it by putting this at the top of your notebook:
%install_ext https://raw.github.com/cpcloud/ipython-autotime/master/autotime.py
%load_ext autotime
Once loaded, every subsequent cell execution will include the time it took to execute as part of its output.
I haven't found a way to have every cell output the time it takes to execute, but instead of what you have, you can use the cell magics %time or %timeit.
ipython cell magics
You can now just use the %%time magic at the beginning of the cell like this:
%%time
data = sc.textFile(sample_location)
doc_count = data.count()
doc_objs = data.map(lambda x: json.loads(x))
which when executed will print out an output like:
CPU times: user 372 ms, sys: 48 ms, total: 420 ms
Wall time: 37.7 s
The simplest way to configure your IPython notebook to automatically show the execution time, without running %%time, %%timeit or time.time() in each cell, is to use the ipython-autotime package.
Install the package at the beginning of the notebook:
pip install ipython-autotime
and then load the extension by running:
%load_ext autotime
Once you have loaded it, any cell run after this will show the execution time of the cell.
And don't worry if you want to turn it off; just unload the extension by running:
%unload_ext autotime
It is pretty simple and easy to use whenever you want.
And if you want to check out more, you can refer to the ipython-autotime documentation or its GitHub source.
Say I print something huge, like str(dataset) for a dataset with 100 columns. The output is too large to see in one shot. Is it possible to get the output in a pager-like form, where whatever fits the screen comes up first, and pressing return brings up the next batch?
Something like "more" in a linux console?
The page command might do what you're looking for:
page("dataset with 100 columns")
If I read the documentation correctly, this should call file.show, which pipes the data to the default pager (less on Unix/Linux systems).
I'm writing a program that performs several tests on a hardware unit, and logs both the results of each test and the steps taken to perform the test. The trick is that I want the program to log these results to a text file as they become available, so that if the program crashes the results that had been obtained are not lost, and the log can help debug the crash.
For example, assume a program consisting of two tests. If the program has finished the first test and is working on the second, the log file would look like:
Results:
Test 1 Result A: Passed
Test 1 Result B: 1.5 Volts
Log:
Setting up instruments.
Beginning test 1.
[Steps in test 1]
Finished test 1.
Beginning test 2.
[whatever test 2 steps have been completed]
Once the second test has finished, the log file would look like this:
Results:
Test 1 Result A: Passed
Test 1 Result B: 1.5 Volts
Test 2 Result A: Passed
Test 2 Result B: 2.0 Volts
Log:
Setting up instruments.
Beginning test 1.
[Steps in test 1]
Finished test 1.
Beginning test 2.
[Steps in test 2]
Finished test 2.
All tests complete.
How would I go about doing this? I've been looking at the help files for QFile and QTextStream, but I'm not seeing a way to insert text in the middle of existing text. I don't want to create separate files and merge them at the end because I'd end up with separate files in the event of a crash. I also don't want to write the file from scratch every time a change is made because it seems like there should be a faster, more elegant way of doing this.
QFile::readAll will read the entire file into a QByteArray. On the QByteArray you can then use insert to add text in the middle, and then write it back to the file again.
Or you could use the classic C style that can modify files in the middle with the help of file pointers.
As @Roku pointed out, there is no built-in way to insert data into a file without rewriting it. However, if you know the size of the region, i.e., if the text you want to write has a fixed length, then you can write empty space in the file and replace it later. Check this discussion on overwriting part of a file.
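A minimal sketch of that placeholder idea, shown in Python for brevity (a Qt version would do the same with QFile::seek followed by a write; the file path and field width here are arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "results.txt")
WIDTH = 32  # fixed width reserved for the not-yet-known result

# First pass: write a blank, fixed-size region where the result will go.
with open(path, "wb") as f:
    f.write(b"Test 1 Result A: ")
    offset = f.tell()              # remember where the region starts
    f.write(b" " * WIDTH + b"\n")  # placeholder, overwritten later
    f.write(b"Log:\n")
    f.write(b"Beginning test 1.\n")

# Later: overwrite only the reserved region; bytes after it are untouched,
# so this stays safe even after more log lines have been appended.
with open(path, "r+b") as f:
    f.seek(offset)
    f.write(b"Passed".ljust(WIDTH))
```

The limitation is exactly the one noted above: the result must fit in the region you reserved up front.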
I ended up going with the "write the file from scratch" method that I mentioned being hesitant about in my question. The benefit of this technique is that it results in a single file, even in the event of a crash since the log and the results are never placed in different files to begin with. Additionally, rewriting the file only happens when adding new results (an infrequent occurrence), whereas updating the log means simply appending text to the file as usual. I'm still a bit surprised that there isn't a way to have the OS insert text into a file for you.
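In outline, the scheme looks like this (a Python sketch of the idea; the names are made up, and the actual implementation does the same thing with QFile/QTextStream):

```python
class TestLogger:
    """Rewrite-on-result logging: results live in memory and the whole
    file is rewritten when a result is added; log lines are appended."""

    def __init__(self, path):
        self.path = path
        self.results = []    # (name, value) pairs
        self.log_lines = []
        self._rewrite()

    def add_result(self, name, value):
        # Infrequent: rewrite the file so the result lands in the
        # "Results:" section above the log.
        self.results.append((name, value))
        self._rewrite()

    def add_log(self, line):
        # Frequent: append to the in-memory log and to the file.
        self.log_lines.append(line)
        with open(self.path, "a") as f:
            f.write(line + "\n")

    def _rewrite(self):
        with open(self.path, "w") as f:
            f.write("Results:\n")
            for name, value in self.results:
                f.write(f"{name}: {value}\n")
            f.write("Log:\n")
            for line in self.log_lines:
                f.write(line + "\n")
```

After a crash, the file on disk always holds every result recorded so far plus the log up to the last appended line.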
Oh, and for those of you who absolutely must have this functionality as efficiently as possible, the following might be of use:
http://www.codeproject.com/Articles/17716/Insert-Text-into-Existing-Files-in-C-Without-Temp
You just cannot add more data into the middle of a file. I would go with two separate files: one for the results and another for the logs.