I'm writing a convnet using torch and cudnn, and I'm having some memory issues.
I tried debugging the script with cuda-memcheck, only to notice that it actually runs when fed through cuda-memcheck (albeit more slowly than it should).
It turns out that if cuda-memcheck is running in the background, a separate instance of the script also runs fine.
Any idea what might be happening here?
I have a program that I created with pyinstaller, using qt5; I converted the GUI file into a .py file.
I noticed the problem after I added a function and distributed the program to other computers of mine. When I ran it to check that everything was good, I found that it runs just fine on almost all of the computers, but on two it does not, and this message appears:
pyinstaller failed to execute script
Just in case, I always leave the previous version on each computer before deleting it, and I found that the previous version does not work on those computers either.
I generated the file again, this time without the --noconsole parameter and with --debug=all added.
The program then ran just fine, no errors, nothing, so I'm at a loss as to what the problem with those computers is.
edit:
Forgot to mention, but I'm also using OpenCV. The program is compiled with Python 3.7 and the OS is Windows 10 64-bit.
While generating log.html from output.xml after executing Robot Framework tests, we are observing the Python process hang, and Windows suggests closing the program.
This happens when executed on Windows. Help is required, as we are not able to proceed.
I have created tests using Selenium 2, and I'm also using the Selenium standalone server to run the tests.
The problem is that if I run one test, it works. If I run multiple tests, some of them fail. If I try then to run a failed test, it works.
Could the tests be running on threads?
I've used the NUnit GUI and TeamCity to run the tests; both give the same results: different tests fail, and when I run them again, other tests fail.
Any thoughts?
EDIT
The tests shouldn't depend on one another. The database is emptied and repopulated for every test.
I guess the only problem could be that the database is not emptied correctly... but then, if I run the same test multiple times, it should also fail.
EDIT2
The tests fail with "element not found".
I'll try adding a "WaitForElement" that retries every few milliseconds; maybe that will fix it.
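A generic retry loop of the kind described can be sketched in Python. The helper name `wait_for` and its parameters are illustrative, not a Selenium API; Selenium's own Python bindings ship an equivalent in `WebDriverWait(...).until(...)`:

```python
import time

def wait_for(condition, timeout=10.0, poll_interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    `condition` is a zero-argument callable, e.g. a lambda that looks up an
    element and returns it (or None/False while it is still missing).
    Exceptions from the lookup (such as a "not found" error) are swallowed
    and retried until the deadline passes.
    """
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            result = condition()
            if result:
                return result
        except Exception as exc:  # e.g. an element-not-found exception
            last_error = exc
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout}s") from last_error
```

The point of the timeout ceiling is that a passing test never waits longer than it has to, while a slow page still gets its full grace period, which is exactly what fixed-length sleeps cannot give you.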
Without knowing the exact errors that are thrown, it's hard to say. The usual causes of flakiness are waits that are not set to a decent time, or a web server that can't handle that many requests.
If the DB is on the same machine as the web server (and why shouldn't it be, on a build box), clearing it out can be intensive.
I would recommend going through each of the errors, making the test bulletproof against that one, and then moving on to the next. I know people who run their tests all the time without flakiness, so it's definitely an environmental thing that can be sorted.
I know I'm a bit late to the party here, but are you using a single window to run your tests? I had a similar issue: since the site I'm testing has only one page-load event, waiting for elements or pausing the test became very dodgy, and different tests passed each time. Adding a ton of wait times didn't help at all until I simply opened a new, "clean" browser for each test. Testing does get slower, but it worked.
I use a third-party DLL (FTD2xx) to communicate with an external device. Using Qt4, everything works fine in debug mode, but the release build crashes silently after successfully completing a called function. It seems to crash at the return, but if I write something to the console (with qDebug) at the end of the function, sometimes it does not crash there but a few, or a few dozen, lines later.
I suspect a not-properly-cleaned stack, which the debug build can survive but the release build chokes on. Has anyone encountered a similar problem? The DLL itself cannot be changed, as the source is not available.
It seems reducing the optimization level was the only workaround. The DLL itself might have problems, as a program that does nothing but call a single function from that DLL crashes the same way when optimization is turned on.
Fortunately, the size and speed lost by the change in optimization level is negligible.
Edit: for anyone with similar problems on Qt 5.0 or higher: If you change the optimization level (for example, to QMAKE_CXXFLAGS_RELEASE = -O0), it's usually not enough to just rebuild the application. A full "clean all" is required.
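As a concrete illustration, a minimal .pro file with that change might look like the sketch below; the target name and source file are placeholders, and only the `QMAKE_CXXFLAGS_RELEASE` lines reflect the actual fix described above:

```
# myapp.pro -- placeholder project; only the flags below matter
TEMPLATE = app
TARGET   = myapp
SOURCES += main.cpp

# Drop the default release optimization and force -O0.
# After changing this on Qt 5.0+, run a full "clean all" and rebuild;
# a plain rebuild may keep stale optimized object files.
QMAKE_CXXFLAGS_RELEASE -= -O2
QMAKE_CXXFLAGS_RELEASE += -O0
```

Using `-=`/`+=` rather than a plain `=` preserves any other release flags qmake sets while still replacing the optimization level.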
Be warned - the EPANET library is not thread safe, it contains a lot of global variables.
Are you calling two methods of that library from different threads?
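If the library does have to be used from multiple threads, one common workaround is to serialize every call through a single lock so that no two threads are ever inside the library at once. A minimal Python sketch of the idea, where `call_serialized` and the lock are hypothetical helper names, not part of EPANET's API:

```python
import threading

# One global lock guarding every call into the non-thread-safe library.
_epanet_lock = threading.Lock()

def call_serialized(func, *args, **kwargs):
    """Invoke `func` (an EPANET binding, for example) while holding the
    lock, so two threads never execute library code concurrently."""
    with _epanet_lock:
        return func(*args, **kwargs)
```

Every thread then routes its library calls through `call_serialized` instead of invoking the bindings directly; the global variables inside the library are only ever touched by one thread at a time.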
I'm observing this behavior: when I run the same code a second time with different parameters in WebForms with IronPython, it runs quite a bit faster. At first I thought this had to do with ASP.NET temporary files, but when I restart the server it gets slow again on the first run. It's quite a lot of code it has to run, so that is reasonable, but it would be great if I could get the speed of the second execution. CPython compiles files into .pyc files so they load faster, and I was wondering what IronPython does to make the code run faster the second time.
Is there anything I can do to get the execution speed of the second run right after I restart the server?
Greetings, Pablo
I don't do a lot of work with .NET or IronPython, so this might be wildly off the mark, but why not just precompile your files using aspnet_compiler.exe before you redeploy?