As we know, there are two methods of library loading.
1) Static libraries (.a): a library of object code which is linked with, and becomes part of, the application.
2) Dynamically linked shared object libraries (.so), which are linked in when the application executes and can be used in two ways:
a) Dynamically linked at run time but statically aware: the dependencies are recorded in the executable at link time.
b) Dynamically loaded/unloaded and linked during execution (e.g. a browser plug-in) using the dynamic linking loader functions dlopen/dlsym/dlclose (see the sketch below).
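For illustration, type 'b' loading usually goes through the dlfcn interface; a minimal sketch (libplugin.so and plugin_entry are made-up placeholder names):

/* build: g++ probe.cpp -ldl */
#include <dlfcn.h>
#include <cstdio>

int main()
{
    /* Nothing about this library appears in the executable's Dynamic Section;
       it is pulled in only when this call actually runs. */
    void *handle = dlopen("libplugin.so", RTLD_LAZY);
    if (!handle) {
        std::fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    /* Look a function up by name and call it through a pointer. */
    void (*entry)(void) = (void (*)(void)) dlsym(handle, "plugin_entry");
    if (entry)
        entry();

    dlclose(handle);
    return 0;
}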
After compilation, we can check the type 'a' library dependencies as shown below:
objdump -x usr/bin/flashcp
.....
Dynamic Section:
NEEDED libgcc_s.so.1
NEEDED libc.so.6
My question is: how can I check/detect type 'b' library dependencies? Is there any way to detect them before execution?
Thanks in advance
Thiru
There's no general way to check for libraries that are loaded dynamically and whose functions are called via function pointers.
In some special cases, as a hack, you can attempt various ways of reverse-engineering the executable, e.g. statically analyzing the code around calls to LoadLibrary and GetProcAddress on Windows, or dlopen and dlsym on Linux. You could derive heuristics that work on many executables, but there's no way that's guaranteed to work, other than executing the code in a virtual machine and intercepting all calls to LoadLibrary/dlopen as they happen.
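On Linux, a lightweight way to get that run-time information without a full virtual machine is to make the runtime linker itself report every shared object it loads, dlopen'd ones included, for example:

LD_DEBUG=libs ./your_program 2> loaded-libs.log

This is still a dynamic check: it only reports the libraries that a particular run actually loads, so it does not answer the "before execution" part of the question.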
Is it documented somewhere how ASP.Net sets up search paths for native DLLs? I need to be able to replicate the logic in my own code.
For more background: I'm maintaining a managed library (say Managed.DLL) that wraps a native library (say Native.DLL) that in turn uses another native DLL (say Driver.DLL). So far Managed.DLL has been importing functions from Native.DLL using .Net's DllImport attribute, but now I have to change this to hand-coded calls to LoadLibrary and GetProcAddress to get more control; in particular, I need to be able to call FreeLibrary to unload Native.DLL and I can't do this when Native.DLL has been loaded via DllImport.
And here comes the problem: While just DllImport("Native.DLL") is sufficient to locate both Native.DLL and Driver.DLL, calling LoadLibrary("Native.DLL") fails with ERROR_FILE_NOT_FOUND when Managed.DLL is used in an ASP.Net application, because the directory containing Managed.DLL is not on the search path for native code DLLs.
My first thought was to use Assembly.GetExecutingAssembly().Location and then issue the LoadLibrary call with a full path, but then Native.DLL fails to find Driver.DLL, because the directory containing them both is still not on the search path.
I could work around this by using the Assembly.GetExecutingAssembly().Location value to set the native DLL search path with SetDllDirectory, but this has two major drawbacks:
1) SetDllDirectory changes a process-wide Win32 setting and could interfere with other code within the same ASP.Net worker process that also uses native code DLLs; I have also verified that using the DllImport attribute does not touch this setting, so changing it now could indeed break something that was working before.
2) It would still not work when debugging ASP.Net applications from within Visual Studio, because VS copies the managed assemblies into a temporary directory but leaves the native DLLs in the project build directory, so they end up in different locations during the debugging session (and the temporary directory is wiped for every debugging session, so copying the native DLLs into it manually does not work either; I had to copy the native DLLs into IIS's directory for the debugging session to find them, which is clearly not an acceptable solution).
I would really like to do the compatible thing here, but so far I haven't been able to find out what that is, and after a couple of days of fruitless searching any pointers would be greatly appreciated.
To answer my own question:
1) The keyword I was missing regarding the copying of Managed.DLL was "shadow copying" (James Schubert explains it much better than any official Microsoft documentation I've seen), and the trick is to use Assembly.CodeBase instead of Assembly.Location, because the former gives the original location of Managed.DLL and the latter the location of the shadow copy (John Sibly and Sneal shared nice code snippets for extracting the directory name from the URI in Assembly.CodeBase).
2) The way to make dependencies of the native DLLs available is to explicitly load them using LoadLibrary before they are needed (and since this increments their reference counts, also release them with FreeLibrary when done).
So the loading sequence is:
// Assembly.CodeBase survives ASP.Net shadow copying, unlike Assembly.Location
string dir = Assembly.GetExecutingAssembly().CodeBase;
dir = new Uri(dir).LocalPath;
dir = Path.GetDirectoryName(dir);
// LoadLibrary/FreeLibrary are the kernel32 functions, declared elsewhere via [DllImport("kernel32.dll")]
IntPtr driver = LoadLibrary(Path.Combine(dir, "Driver.DLL"));   // load the dependency first
IntPtr native = LoadLibrary(Path.Combine(dir, "Native.DLL"));
and the unloading sequence:
FreeLibrary(native);
FreeLibrary(driver);
(note the order of the LoadLibrary and FreeLibrary calls: the dependency is loaded first and released last).
I usually follow the unofficial Meteor FAQ on how to structure my codebase, but I can't figure out where I should put my global constants.
To give an example: I have some database entries with a constant GUID that I need to reference at many points in my app. So far I have just attached the constants to the relevant collection, so that in collections/myCollectionWithGuids.coffee it would say:
@MyCollectionWithGuids = new Meteor.Collection "myCollectionWithGuids"
@MyCollectionWithGuids.CONSTANT_ID = "8e7c2fe3-6644-42ea-b114-df8c5211b842"
This approach worked fine, until I needed to use it in the following snippet, located in client/views/myCollectionWithGuidsView.coffee, where it says:
Session.setDefault "selectedOption", MyCollectionWithGuids.CONSTANT_ID
...which is unavailable because the file is being loaded before the Collections are created.
So where should I put my constants then, such that they are always loaded first without hacking in a bunch of subdirectories?
You could rely on the fact that a directory named lib is always treated first when it comes to load order.
So I would probably advise you to organize your code as follows:
lib/collections/collection.js
client/views/view.js
In your particular use case this is going to be okay, but you might find cases where you have to use a lib directory in your client directory as well, and since the load order rules stack (subdirectories being loaded first), it will be loaded BEFORE the lib folder residing in your project root.
For the moment, the only way to have full control over the load order is to rely on the package API, so you would have to make your piece of code a local package of your app (living in the packages directory of your project root).
It makes sense because you seem to have a collection and a view that are somehow related; plus, splitting your project into a bunch of collaborative local packages tends to be an elegant design pattern after all.
Creating a local package is really easy now that Meteor 0.9 provides documentation for the package.js API.
http://docs.meteor.com/#packagejs
I would put your collection definitions in a lib directory. File structure documentation explains that all files under the lib directory get loaded before any other files, which means your variable would be defined when you attempt to access it in your client-side code.
Generally speaking, you always want your collections to be defined before anything else in your application is loaded or executed, since your application will most likely heavily depend upon the use of the collection's cursor.
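A layout along these lines (file names taken from the question) would make the constant available everywhere:

lib/
    collections/
        myCollectionWithGuids.coffee      (the collection plus MyCollectionWithGuids.CONSTANT_ID)
client/
    views/
        myCollectionWithGuidsView.coffee  (can now read the constant in Session.setDefault)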
I've developed a program and I am trying to make this program work with a controllable light source manufactured by some other company. I've emailed the company and they have agreed to send me their external library as a DLL.
I have developed all of my software using Qt 4.8.1 and it has been compiled using MSVC2008.
The controllable light's DLL was compiled in Visual Studio 2008 and was written in either C++ or C# (the manufacturer is uncertain). All I have been given is the DLL and a text file saying that I must:
Add the DLL as a reference to my project
Add using LightName; to the top of the class
Instantiate an instance of the object like so: LightName *ln = new LightName();
Call the function void turnOn() on the newly created LightName instance.
Firstly, I find it odd that an external library requires me to instantiate an instance of their object, especially when it's for a simple piece of hardware.
Secondly, the other company has provided me with no interface files.
My question is:
How can I possibly link to a C++ DLL and expose the functions nested in this library without having an interface header file in a Qt environment? Is there some way to make an interface for an external library?
I have already attempted using QLibrary and doing the following:
QLibrary myLib("mylib");
typedef void (*MyPrototype)();
MyPrototype myFunction = (MyPrototype) myLib.resolve("mysymbol");
if (myFunction)
myFunction();
However, this doesn't work because the DLL I was given was not a C DLL, and since I have no interface, Qt doesn't have a clue about which symbols it needs to resolve.
I've attempted to display all the definitions exported from my DLL using the dumpbin /EXPORTS command. Unfortunately this was unable to produce anything. I was hoping that I would get some sort of mangled C++ from this that I could then unscramble to make my own header.
I've attempted to use Dependency Walker (a very useful tool); however, it couldn't resolve any symbols to give me function definitions.
QLibrary only helps you if the library has functions exported as C symbols. If it is C++, you can dump the symbol table and see whether that is sufficient for you. Names must be demangled: look for dumpbin or a similar tool. It is, however, possible that you can't do this; it depends on how the symbols have been defined. In that case you'll have to ask for the header: read this.
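For example, assuming the DLL is called LightName.dll (a guess based on the class name in the question), the exported decorated names can be listed and then demangled with undname.exe, which ships with Visual Studio:

dumpbin /EXPORTS LightName.dll
undname <decorated-name-copied-from-the-dumpbin-output>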
Well, it's absolutely legitimate to ask you to instantiate an instance of their object. It was simply their design decision to make the DLL interface object oriented (as opposed to plain extern "C"). QtCore.dll is just someone else's DLL too, and you are instantiating their objects all the time.
But it also means that you will have a harder time calling the DLL. The symbols are not "C" symbols (you can't export a class that way), so QLibrary can't do anything for you. You can try dumpbin /EXPORTS on the DLL and then tediously demangle the names to reconstruct the class declaration. Luckily, there are tools to help you (even online ones).
But not providing a header file for such a DLL is completely dumb in the first place.
I am writing a Qt application that calls QProcess::startDetached("wscript.exe script.vbs") to show the delete confirmation dialog in Windows.
this is the script:
Set objShell = CreateObject("Shell.Application")
Set objFolder = objShell.Namespace("-")
Set objFolderItem = objFolder.ParseName("-")
objFolderItem.InvokeVerb("Delete")
The arguments for Namespace and ParseName come from the arguments passed to the script.
This may be inefficient because it launches an external application before running the script. I was wondering if I can run VBScript directly from a Qt application.
If not, what alternatives do I have?
My VBScript is very weak, so I'm not 100% sure I understand what you are trying to do. My assumption is that you are trying to delete a folder, but want to give the user the normal confirmation box and animation while the action is occurring. If that is not correct, please let me know and I will remove this answer.
A few ideas:
You could call the Windows API directly within your C++ code to do this. I believe the correct call would be to use IFileOperation (Vista and later) or SHFileOperation (pre-Vista); a sketch follows after this list.
Qt already has message box dialogs. Although you might not get the exact same functionality as the native shell, you could use this (QMessageBox::warning) and then delete the folder using QDir. This would also be cross-platform portable.
If you stick with the VBScript, I doubt you would see any performance issues unless this is being called many, many times in a loop or something. You know, the old "premature optimization is the root of all evil" thing.
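For the first idea, here is a minimal sketch of the pre-Vista SHFileOperation route (the function name shellDelete and the example path are my own; link against shell32):

// Delete a directory through the shell so the user gets the usual confirmation
// prompt and the item goes to the Recycle Bin.
#include <windows.h>
#include <shellapi.h>

bool shellDelete(const wchar_t *path)           // e.g. L"C:\\temp\\old-folder"
{
    wchar_t from[MAX_PATH + 2] = { 0 };         // pFrom must be double-null-terminated
    lstrcpynW(from, path, MAX_PATH);

    SHFILEOPSTRUCTW op = { 0 };
    op.wFunc  = FO_DELETE;
    op.pFrom  = from;
    op.fFlags = FOF_ALLOWUNDO;                  // send to the Recycle Bin; omitting FOF_NOCONFIRMATION keeps the prompt
    return SHFileOperationW(&op) == 0;
}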
You should read up on the IActiveScript COM interface. You can create an instance of an interpreter that implements IActiveScript to provide a runtime for evaluating scripts. VBScript and JScript can both be used for this and a number of other third-party scripting languages also provide IActiveScript support.
The overview for working with this is you create a language runtime (an instance of VBScript for instance) then add some custom objects to it. Typically if you are embedding an interpreter into your application then exposing an Application object is a good place to start. This can be just an IDispatch interface or something more concrete with an IDL generated typelibrary and all the trimmings. Once you have added the necessary named items into the runtime you load one or more scripts. Any public functions or subroutines declared in the scripts now get exposed via the IDispatch interface of the live runtime once you switch its state to active or running. To actually run the script program, I invoke the Main function for my stuff - you could choose some other scheme as applicable to your environment.
The nice thing about Active Scripting is that to change the language you just change the runtime CLSID. So if people prefer Perl they can use PerlScript, or PythonScript, etc. Your Application object remains the same, hence you don't have to write additional code to support the new languages. The only requirement is that everything is COM.
When using HP-UX I can use the chatr utility to report on various internal attributes of a shared library. It will also allow me to modify the internal attributes of shared libraries that I have built.
The chatr utility can report, and in some cases modify, such things as:
the run-time binding behaviour,
the embedded library path list created at build time,
whether the library is subject to run-time path lookup,
internal names,
etc., etc.
Is such a utility available for Solaris?
Edit: Freaky! Thanks to mark4o's answer below, I revisited the elfdump output for a typical system .so (libm.so.2 on Sol 10). However, and here's the freaky part, I mistyped the command and entered:
elfdump libm.so.2 | moe
In an amazing stroke of serendipity, this gave me back the usage message for a utility called moe whose man page description section says:
The moe utility manifests the optimal expansion of a path-name containing reserved runtime linker tokens. These tokens can be used to define dependencies, filtees and runpaths within dynamic objects. The expansion of these tokens at runtime, provides a flexible mechanism for selecting objects and search paths that perform best on this machine.
This will help me resolve why a libm.so.2 shared library is not compatible across two different machines, leaving my incomplete executable unable to start on one of the servers.
For displaying the information, see the Solaris elfdump and pvs utilities. For debugging binding issues, lari and moe may also be helpful. However, these utilities do not have the ability to modify the library.
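For example (the library path is illustrative):

elfdump -d /usr/lib/libm.so.2     (dumps the dynamic section: NEEDED, RUNPATH and similar entries)
pvs -d /usr/lib/libm.so.2         (lists the version definitions the library provides)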
Starting with Solaris 11 (and some of the OpenSolaris & Solaris Express releases leading up to it, but not Solaris 10 or older), there is now an elfedit tool for modifying runtime paths and similar attributes.
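For instance, the recorded runpath of a library you have built can be changed with something along these lines (the path and file name are placeholders):

elfedit -e 'dyn:runpath /opt/mylibs' libfoo.so.1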