I am creating an application in Qt that users can use to read a confidential text file. The idea is that if a user wants to access this file, he can only do so through this application and cannot read it directly. I am planning to embed this file using the Qt resource system (qrc).
What I would like to know is:
Is it possible for a user of the application to somehow "extract" the embedded resource from the compiled executable?
If so, in order to prevent this, is it possible to encrypt or hash said resource before compiling?
P.S. Maybe someone out there has already faced this scenario and came up with a better solution than what I am thinking of. If so, new ideas are always welcome.
Depending on your level of expertise, you could make retrieving the text a bit more difficult, but you won't get a secure result this way.
rcc (Qt's resource compiler) tries to compress each resource; if the result is less than 30% of the original size, the compressed version goes into your executable, otherwise the resource is stored uncompressed. As a starting point, you could persuade rcc to always compress by calling it with the option -threshold 1.
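For example, invoking rcc directly (the file names here are placeholders):

rcc -threshold 1 -o qrc_resources.cpp resources.qrc

If you build with qmake, the same flag can be passed via QMAKE_RESOURCE_FLAGS += -threshold 1 in your .pro file.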
Next you will have to make sure that all debug symbols are stripped from your delivery; otherwise an astute code reader will do something like this:
objdump -all-headers your.app/Contents/MacOS/your | grep qrc
and will get something like this:
00000001002162f0 l F __TEXT,__text __GLOBAL__sub_I_qrc_resources.cpp
Where 00000001002162f0 is a good starting point for disassembling your executable.
Still: even if you remove all debug symbols, your resources will always pop up in the DATA section of your executable.
So even if you follow this and further advice people might give, it's just obfuscation. Welcome to the wonderful world of cryptology and steganography.
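If you do decide to encrypt the resource before compiling, the decryption key has to live somewhere in the executable, so this only raises the bar. A minimal sketch, assuming a resource at :/data/secret.enc that was XOR-scrambled at build time (both the path and the key are made up):

#include <QByteArray>
#include <QFile>

// Load and "decrypt" an obfuscated resource. XOR is NOT security:
// anyone with a disassembler can recover the key from the binary.
QByteArray loadSecretText()
{
    QFile f(":/data/secret.enc");
    if (!f.open(QIODevice::ReadOnly))
        return QByteArray();
    QByteArray data = f.readAll();
    const QByteArray key = "not-actually-secret";   // embedded key, recoverable
    for (int i = 0; i < data.size(); ++i)
        data[i] = char(data[i] ^ key[i % key.size()]);
    return data;
}

A real cipher (e.g. AES via a crypto library) changes nothing fundamental here: the key is still shipped inside the binary.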
Can someone please give me a document describing the kernel source folders: their structure, functionality, and how they are organized?
Specifically, what's the use of the folder include/uapi/**?
Thanks.
The uapi folder is supposed to contain the user space API of the kernel. Then upon kernel installation, the uapi include files become the top level /usr/include/linux/ files. (I'm not entirely clear on what exceptions remain.)
The other headers are then, in theory, private to the kernel. This allows a clean separation of the user-visible and kernel-only structures, which previously were intermingled in a single header file.
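To see the user-space end of this: headers under include/uapi/linux/ in the kernel tree are what make headers_install puts under /usr/include/linux/. For instance, <linux/input.h> comes from include/uapi/linux/input.h, and a plain user-space program can consume it directly (a trivial sketch):

#include <stdio.h>
#include <linux/input.h>   /* installed from include/uapi/linux/input.h */

int main(void)
{
    /* struct input_event is part of the kernel's user-space ABI */
    struct input_event ev;
    printf("sizeof(struct input_event) = %zu, EV_KEY = %d\n", sizeof ev, EV_KEY);
    return 0;
}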
The best discussion I have seen of this is located at a Linux Weekly News article that predates the patch landing.
The UAPI patch itself landed with kernel 3.7. Linus's quick and dirty summary is:
the "uapi" include file cleanups. The idea is that the stuff
exported to user space should now be found under include/uapi and
arch/$(ARCH)/include/uapi.
Let's hope it actually works. Because otherwise this was just a
totally pointless pain in the *ss. And regardless, I'm definitely done
with these kinds of "let's do massive cleanup of the include files"
forever.
So this is the situation. Our company has its own standard code and windows (for commonly used routines and for inheriting) that we use in developing applications. This standard code and these windows are saved in their own library (PBL). Normally when we deploy our software to a client we just compile it to PBDs and EXEs, but this time our client also wants the source code of the software. The thing is, we don't want our standard code and windows to be visible when we give the source code to the client. So is there a way to encrypt (shield, hide, etc.) the code?
I hope someone can point me to where I should start researching.
The .pbl files contain sources, resources and binaries, while the .pbd files contain no source.
If you do not want to leak any source code, just give the .pbd and .exe files.
If you do want to give the source code of the application minus the source code of your standard library, give all the .pbl files except your standard library, plus the .pbd of the standard library. Your client will then even be able to recompile the app (provided that the standard library objects are called by, but do not call, other objects from the application).
Please note that like Java, the PowerBuilder objects can be decompiled from binaries with the right tool.
I am not aware of a means to encrypt PB source code, but there is the possibility of obfuscating the objects through PB-Protect. I never used it and I cannot tell more about it.
If they're actually looking for insurance should you disappear, perhaps a code escrow service might be acceptable? My company escrows our source at customer request as a paid-for contract line item, I think with Iron Mountain.
I am trying to trace a possible memory leak in a very large ASP.NET application, and I want to familiarize myself with WinDBG before attempting to use the tool in the live environment.
I have followed the instructions in the following article, which I found very helpful: http://humblecoder.co.uk/uncategorized/spotting-a-memory-leak-with-windbg-in-net. I am able to create a memory dump file of the ASP.NET process and show that the delegate is causing the memory leak as specified in the article. I refer to the paragraph in the article that starts: "Next we need the symbols". I did not add the symbol files using File > Symbol File Path in WinDBG, and yet I still seem to be able to debug the application and follow the remaining steps of the article. Are symbol paths not required with an ASP.NET application?
Because .NET assemblies contain metadata, including the name of every method and its parameters, symbols aren't necessary to obtain a readable stack trace of a managed thread.
One thing symbols can provide is the file name and line number of each statement, so you can more easily figure out which frames in the stack trace correspond to which lines in your source code.
As Michael says, symbols are not strictly necessary for managed code, as most of the relevant information is available at runtime as metadata, but if you're digging into native code it is very useful to have symbols.
For many scenarios you can just do .symfix which will tell WinDbg to use Microsoft's public symbol server. That will give you access to symbols for all the CLR and Win32 specific calls in your code. Remember to do a .reload if you set the path.
If your code includes native non-Microsoft assemblies as well, you need to append the location of the corresponding PDB files to the symbol path. Use the .sympath command for that.
To troubleshoot symbol loading use the !sym noisy command.
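Putting those together, a typical session setup looks something like this (the cache and PDB paths are placeholders):

.symfix C:\symcache      $$ use Microsoft's public symbol server, caching downloads locally
.sympath+ C:\myapp\bin   $$ append the folder that holds your own native PDBs
.reload                  $$ reload modules so the new symbol path takes effect
!sym noisy               $$ log symbol-loading details if something fails to resolve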
For more information see this.
We are creating ReST Web Services using ASP.NET and OpenRasta.
Is there any tool that could help us:
create a WADL file,
and/or create human-readable API documentation which describes the resources, the HTTP methods supported for each resource, etc.?
Looks like REST Describe & Compile should do the trick.
On the WADL developer site Marc Hadley maintains a command line tool named WADL2Java. The ambitious goal of REST Describe & Compile is to provide sort of WADL2Anything. So what REST Describe & Compile does is that it:
Generates new WADL files in a completely interactive way.
Lets you upload and edit existing WADL files.
Allows you to compile WADL files to source code in various programming languages.
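For reference, a minimal WADL document describing a single resource looks roughly like this (the base URI and resource are made up):

<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="http://example.com/api/">
    <resource path="orders/{id}">
      <method name="GET">
        <response>
          <representation mediaType="application/xml"/>
        </response>
      </method>
    </resource>
  </resources>
</application>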
For OpenRasta, it'd be possible to use a UriDecorator to have help-like URIs defined for your resources (such as /myResource$help). You can then rewrite the URI before parsing to something you can document easily, parse the URI, find the resource type, and rewrite it to /help/{resourcetype}.
From there you register a resource for your help system (the generic type names below are placeholders):
ResourceSpace.Has.ResourcesOfType<ResourceDocumentation>()
    .AtUri("/help/{resourceType}")
    .HandledBy<HelpHandler>()
    .RenderedByXxx();
Then you can create your handler to return the documentation about a resource. You could for example use the IOperationCreator service to know which HTTP methods are available and with what input arguments, use the ICodecRepository to see what media types may be accepted as input, and potentially show what a media type serialization would look like by calling the codec and generating an HTML-friendly view of it.
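A minimal handler sketch to go with that registration (the class and property names are made up; OpenRasta dispatches to a public method named after the HTTP verb, binding {resourceType} by name):

public class ResourceDocumentation
{
    public string Name { get; set; }
    public string[] Methods { get; set; }
}

public class HelpHandler
{
    // Handles GET /help/{resourceType}
    public ResourceDocumentation Get(string resourceType)
    {
        // A real implementation would interrogate IOperationCreator and
        // ICodecRepository here instead of returning canned data.
        return new ResourceDocumentation { Name = resourceType, Methods = new[] { "GET" } };
    }
}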
That's definitely an area we're going to work on for the next version.
I want to know a bit more about xmlns:mx="http://www.adobe.com/2006/mxml". Generally namespaces act as pointers to a component's location, but I've always seen them pointing to resources within the local directory structure. When xmlns:mx="http://www.adobe.com/2006/mxml" is used, is a new connection set up with an Adobe server, or is it just a convention?
If an actual connection were made, the application should not compile without an internet connection, but in reality we can compile and run our application without an internet connection as well! Please correct me if I am wrong somewhere.
Please help me understand its significance.
Thanks in advance.
Ashine.
It's just an identifier that, with the use of the flex-config.xml file (you can find it in your $SDK_HOME/frameworks folder), points to an mxml-manifest.xml file which contains definitions of the classes you can use by "importing" the specific namespace.
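For illustration, the two pieces look roughly like this (excerpts, not complete files):

<!-- flex-config.xml: maps the namespace URI to a manifest -->
<namespaces>
  <namespace>
    <uri>http://www.adobe.com/2006/mxml</uri>
    <manifest>mxml-manifest.xml</manifest>
  </namespace>
</namespaces>

<!-- mxml-manifest.xml: lists the classes behind that namespace -->
<componentPackage>
  <component id="Button" class="mx.controls.Button"/>
</componentPackage>

So xmlns:mx is resolved entirely at compile time against these local files; no network connection is involved.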
It's just a convention. Try actually following the URI: the page doesn't exist!
Namespaces aren't the same as directory structures, by the way; the ActionScript compiler cheats a lot to make it look that way.
The URL is known as a namespace URL. Not all namespaces are directory structures, but it takes quite a bit more work to create a namespace URL, whereas directory namespaces are almost automatic.
To create a namespace URL, you need to use a library project and add a manifest.xml file.
The documentation is really light on this topic, but I demo this in the last episode of The Flex Show's screencast series on creating custom components and in an episode of the Flextras Friday Lunch.