I've cross-compiled Qt, created an SD card image, and mounted it using losetup. Compilation is much faster now compared to a direct sshfs mount. The application runs OK. Now I want to debug, which is dead slow, and it appears to be copying the files back to the dev machine for debugging. I see this suggestion:
File transfers from remote targets can be slow. Use "set sysroot" to access files locally instead.
I'm using gdb-multiarch on the host and gdbserver on the target board.
I'm kind of lost here. Where do I set this option? I've supplied the --sysroot argument to the binary, but it made no difference. Any help is really appreciated.
Update: I'm using Qt Creator for development.
sysroot is a gdb setting. You can set it in gdb with the set sysroot command. For example:
(gdb) help set sysroot
Set an alternate system root.
The system root is used to load absolute shared library symbol files.
For other (relative) files, you can add directories using
`set solib-search-path'.
This setting controls how gdb tries to find various files it needs, and in particular the executable and shared libraries that you are debugging.
Recent versions of gdb default sysroot to target:, which means "fetch the files from the target". If you're debugging locally, this is just local filesystem access; but if you are debugging remotely and have a slow connection, this can be a bit painful. In order to make this faster, the idea is to keep a local copy of all the files you'll need, and then use set sysroot to point gdb at this local copy.
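For a remote session against gdbserver, it looks roughly like this (the address, port, and paths are placeholders for your own setup):

On the target:
gdbserver :2345 ./myapp

On the host:
$ gdb-multiarch ./myapp
(gdb) set sysroot /path/to/local/sysroot
(gdb) target remote 192.168.1.50:2345

With sysroot pointing at a local copy of the target's root filesystem, gdb reads the executable and shared libraries from local disk instead of transferring them over the remote connection.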
The main issue with this approach is that if your local copy is out of sync with the remote, you can end up confusing gdb and getting nonsense results. I am not certain but maybe enabling build-ids alleviates this problem somewhat (certainly in theory gdb can detect build-id mismatches and warn, I just don't recall whether it actually does).
As Tom Tromey suggested, adding set sysroot {my local sysroot path} as a startup command in the debugger worked for me.
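In my case that meant a single line in the GDB startup commands box in Qt Creator's debugger options (Tools > Options > Debugger > GDB > Additional Startup Commands in the version I'm using), with the path replaced by your own local sysroot:

set sysroot /path/to/local/sysroot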
In gRPC, when building for ARM, I need to disable these three variables:
-DRUN_HAVE_STD_REGEX=OFF
-DRUN_HAVE_POSIX_REGEX=OFF
-DRUN_HAVE_STEADY_CLOCK=OFF
It is not super clear to me what they do, so I wonder:
Why is it that CMake cannot detect them automatically when cross-compiling?
What is the impact of disabling them, say on a system that does support them? Will it sometimes crash? Reduce performance in some situations?
Because they are not auto-detected by CMake, it would be easier for me to always disable them, if that works everywhere without major issues for my use case.
gRPC uses CMake's try_run to automatically detect whether the platform supports a feature. However, when cross-compiling, some variables need to be supplied manually. From the documentation (emphasis added):
When cross compiling, the executable compiled in the first step usually cannot be run on the build host. The try_run command checks the CMAKE_CROSSCOMPILING variable to detect whether CMake is in cross-compiling mode. If that is the case, it will still try to compile the executable, but it will not try to run the executable unless the CMAKE_CROSSCOMPILING_EMULATOR variable is set. Instead it will create cache variables which must be filled by the user or by presetting them in some CMake script file to the values the executable would have produced if it had been run on its actual target platform.
Basically, it's saying that CMake won't try to run the compiled executable on the build machine, so some test results must be specified manually (tests that would otherwise have been run on the target machine). The tests below will usually cause problems:
-DRUN_HAVE_STD_REGEX
-DRUN_HAVE_GNU_POSIX_REGEX
-DRUN_HAVE_POSIX_REGEX
-DRUN_HAVE_STEADY_CLOCK
Hopefully that answers your first question. I do not know how to answer your second question, as I have always just set those variables manually to match the features of whatever system I've compiled for.
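For example, a cross-compiling configure step might look like this (the toolchain file path is a placeholder, and the ON/OFF values should reflect what your target actually supports):

cmake .. \
  -DCMAKE_TOOLCHAIN_FILE=/path/to/arm-toolchain.cmake \
  -DRUN_HAVE_STD_REGEX=OFF \
  -DRUN_HAVE_GNU_POSIX_REGEX=OFF \
  -DRUN_HAVE_POSIX_REGEX=OFF \
  -DRUN_HAVE_STEADY_CLOCK=OFF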
I have a simple application using
QT += core gui network webkitwidgets
I've used windeployqt.exe to generate the 32-bit release on my Windows 10 64-bit computer. When I put the folder on a Windows 7 64-bit desktop and double-click app.exe, it never starts.
I can see it in the Task Manager, but I can't kill it, and if I try, I can no longer close the Explorer window from which I double-clicked it.
I've checked the usual suspects: the platform plugin (qwindows.dll), ICU, and so on.
http://doc.qt.io/qt-5/windows-deployment.html
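A windeployqt invocation of this kind typically looks something like the following (the options here are illustrative, not necessarily the exact ones I used):

windeployqt --release --compiler-runtime release\test.exe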
EDIT (clarifications):
I've compiled with the default 32-bit kit: "build-Test-Desktop_Qt_5_5_1_MinGW_32bit-Release" with "mingw492_32".
I have a package "release" generated by windeployqt.exe using the --webkit switch. I start a command prompt:
> set path=
> set mingw=
Then I make sure that no Qt/MinGW entries remain in my environment variables.
I also rename "c:\Qt" to "c:\__Qt".
I move my release folder to my desktop.
I start release\test.exe (from the clean-path shell).
Everything runs fine! So release\test.exe has everything it needs without the PATH/MinGW variables.
But as soon as I put the folder on another Windows machine (7 instead of 10), it never starts.
I tried Dependency Walker. It shows a lot of "API-MS-WIN*.dll" files missing...
It even shows many more missing DLLs on the "good" machine than on the bad one!
Every single "missing DLL" on the "bad" target machine is actually present in system32 on that machine.
Thanks, any advice is welcome, I'm a bit desperate... :)
Edit
It seems to be related to the machine itself. I have successfully deployed this (very small) app to two non-developer machines, on Win 7 and Win 8 respectively. But the "bad machine" above still resists running it...
Edit
The problem seems not to be general but related to this one particular machine. Hence, feel free to close this or move it to the appropriate forum, as it is not related to Qt/windeployqt. If I figure out a solution and the question is closed, I'll simply add a last edit. Safe Boot and Malwarebytes are my next actions.
After a long investigation:
Do not believe Dependency Walker; it used to be a top-notch tool, but it is now outdated.
If a DLL is really missing, the system will prompt you with "cannot load dll xxx.dll" anyway.
Your best shot, in case a piece of software runs on machine X but not on machine Y, is:
1. Start in safe mode (run msconfig and choose Diagnostic startup).
2. Turn off any antivirus or non-Microsoft/driver software.
3. Run the application "as administrator".
If it runs after step 3, proceed by elimination:
Run without admin rights.
Turn the anti-spyware back on, etc...
Add an appropriate exception to your antivirus if it is the root cause.
If the antivirus is not the root cause, run Process Monitor on both machines, then compare: what failed on one machine and not on the other? Read the Windows event log and compare any error messages on both machines (see the PowerShell example after this list).
Run sfc /scannow to check system files.
Run a complete anti-spyware scan / PC-repair tool (Malwarebytes, ComboFix, ...).
Make sure you really have the very same package on both machines, make sure you are not trying to run an exe on macOS, make sure your computer is on.
Call the oracle, you are in the matrix...
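For the event-log comparison mentioned in the list above, a quick way to pull the recent Application-log errors on both machines is PowerShell (a sketch; adjust the log name and the number of events as needed):

Get-WinEvent -FilterHashtable @{LogName='Application'; Level=2} -MaxEvents 20

Run it on the good machine and the bad machine right after trying to start the application, and compare what shows up.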
In my case the problem was Avast, and it was solved by adding an appropriate exception.
Can R be run from a CD-ROM drive? The computer is a stand-alone (no network or Internet connection) and I can't install anything on it, nor can I use a flash drive.
Thanks.
What do you mean by "can't install"?
You don't need to install R; you can just run it from a folder copied from somewhere else. If you have hard disk storage on the PC, then you can copy C:\Program Files\R from one machine onto a CD-ROM, take the CD-ROM to the cripplebox, copy it to wherever you store your files, and run it from there. Worst case scenario is you have to change the R_HOME environment variable. Works for Linux and Windows (you didn't say what OS you are on).
...unless your sysadmins have disabled executable permissions for your hard disk storage. Which is a real BOFH thing to do.
...but if they've done that I'd also suspect they've disabled executables from CD-ROM too.
...and if you don't have any writable hard disk storage, how the heck are you going to do any analysis?
...the real fix may be to keep kicking the sysadmins until they accept that you can't do your job without R installed on the machine.
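Assuming you do end up with a writable folder, on Windows the whole thing is copy-and-run; the drive letters and version number here are made up, and the exact location of Rgui.exe can differ between R versions (it may sit under bin\i386 or bin\x64):

> xcopy /E /I D:\R C:\work\R
> set R_HOME=C:\work\R\R-x.y.z
> C:\work\R\R-x.y.z\bin\Rgui.exe

Setting R_HOME is usually unnecessary; only bother if R complains that it can't find its home directory.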
You may have trouble with packages, but otherwise, the instructions for installing R on a USB key should be pertinent.
I'm having problems compiling applications with remote Ant, something similar to this. However, the Flex compiler seems to have problems with it: when I run the same script locally, everything compiles without any problems, but when I try the remote Ant build, it fails without giving any more information.
Things to look for on the remote machine:
The paths in the build file need to be valid (I'm pretty sure the script file you use to build the project on your computer will not work on the remote one, because of a possible difference in paths)
You need the Flex SDK installed
You need Java JDK installed
You might also need some environment variables set correctly (like JAVA_HOME)
Ant binaries on the remote machine (but I assume you already have this, probably coming with the system you're using)
The sources to be compiled, obviously (I assume you already have these too, probably gotten by your system from a repository)
Also I find it hard to believe it would fail without any error. There should be at least a log file somewhere to give you an idea of what went wrong.
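To squeeze more detail out of the failing remote build, run Ant with verbose (or debug) output and capture it to a log file; the target name here is just an example:

ant -verbose -logfile remote-build.log compile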
I am running mxmlc on the command line with -incremental=true. Flex builds the cache file using a checksum the first time. Subsequent compilations fail with this message:
Failed to match the compile target with path_to_cache/projectname_329043.cache. The cache file will not be reused.
path_to_cache exists
the cache file exists in path_to_cache
the compiler is not trying to create a new cache file, so I assume it is generating the same checksum
My environment:
Flex 3.0
Mac OS X 10.4.x
I just ran across this issue myself, and after not finding the answer anywhere on the web, I bashed my head against mxmlc in practically trial-and-error fashion until finding the answer. In my case, I was regenerating the Flex config XML file each time I compiled from within Ant. It turns out that this is the error you get when the compiler thinks the config has changed. You can test this by simply touching your config file and running against unmodified sources. So, if the timestamp on your Flex config.xml is changing between compiles, that is likely the culprit.
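A quick way to reproduce it (the file names here are placeholders for your own config and application):

$ touch my-flex-config.xml
$ mxmlc -load-config+=my-flex-config.xml -incremental=true Main.mxml

If the "Failed to match the compile target" message appears even though no sources changed, the config file's timestamp is what is invalidating the cache.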
It could be a permissions issue. Have you tried running with sudo? I wouldn't recommend doing that permanently, but if using sudo makes the error message go away, then you know it's a permissions issue; and you can move on to the proper way to resolve it.
You could also try going into Disk Utility and doing a check/repair of disk permissions. OSX has been notorious for needing this done occasionally.