What's in include/uapi of kernel source project - networking

Can someone please point me to a document that describes the kernel source folders: their structure, their functionality, and how they are organized?
Specifically, what is the use of the folder include/uapi/**?
Thanks.

The uapi folder is supposed to contain the user-space API of the kernel. When the kernel headers are installed (via make headers_install), the uapi include files become the top-level /usr/include/linux/ files. (I'm not entirely clear on what exceptions remain.)
The other headers are then, in theory, private to the kernel. This allows a clean separation of the user-visible and kernel-only structures, which previously were intermingled in a single header file.
The best discussion I have seen of this is a Linux Weekly News article that predates the patch landing.
The UAPI patch itself landed with kernel 3.7. Linus's quick and dirty summary is:
the "uapi" include file cleanups. The idea is that the stuff
exported to user space should now be found under include/uapi and
arch/$(ARCH)/include/uapi.
Let's hope it actually works. Because otherwise this was just a
totally pointless pain in the *ss. And regardless, I'm definitely done
with these kinds of "let's do massive cleanup of the include files"
forever.
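
To make the split concrete: once the exported headers are installed (for example via make headers_install), a user-space program picks them up from /usr/include/linux/, while the kernel-private headers in include/linux/ are never installed. A minimal sketch, using one of the exported networking headers:

/* sketch: user-space code consuming an exported uapi header */
#include <stdio.h>
#include <linux/if_ether.h> /* installed from include/uapi/linux/if_ether.h */

int main(void)
{
    /* ETH_ALEN and ETH_P_IP are part of the kernel's user-space ABI */
    printf("MAC address length: %d bytes\n", ETH_ALEN);
    printf("IPv4 ethertype: 0x%04x\n", ETH_P_IP);
    return 0;
}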


Is it possible to read .qrc Qt resource file from compiled executable?

I am creating an application in Qt which can be used by users to read some confidential text file. The idea is that if a user wants to access this file, he can only do so through this application and not read it directly. I am planning to add this file using a qrc resource.
What I would like to know is that:
Is it possible for a user of the application to somehow "extract" the embedded resource from the compiled executable?
If so, in order to prevent this, is it possible to encrypt or hash the said resource before compiling?
P.S. Maybe someone out there has already faced this scenario and came up with a better solution than what I am thinking of. If so, new ideas are always welcome.
Depending on your level of expertise, you could make retrieving the text a bit more difficult, but you won't get a secure result this way.
rcc (Qt's resource compiler) tries to compress each resource; if the resource compresses to less than 30% of its original size, the compressed version is used, otherwise the resource goes uncompressed into your executable. As a starting point, you could persuade rcc to always compress by calling it with the option -threshold 1.
Next you will have to make sure that all debug symbols are stripped from your delivery; otherwise an astute code reader will do something like this:
objdump -all-headers your.app/Contents/MacOS/your | grep qrc
and will get something like this:
00000001002162f0 l F __TEXT,__text __GLOBAL__sub_I_qrc_resources.cpp
Where 00000001002162f0 is a good starting point for disassembling your executable.
Still: Even if you remove all debug symbols, your resources will always pop up in the DATA section of your code.
So even if you follow this and any further advice people might give, it is just obfuscation. Welcome to the wonderful world of cryptology and steganography.
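
If obfuscation is enough for your purposes, one option is to store the file in the .qrc already XOR-ed with a key and undo that in memory when reading it. A minimal sketch, where the resource alias :/secret/confidential.enc and the key are made-up examples (this is not real security, since both the key and the data live in the binary):

#include <QByteArray>
#include <QFile>
#include <QString>

QString loadConfidentialText()
{
    QFile f(":/secret/confidential.enc"); // XOR-obfuscated file from the .qrc
    if (!f.open(QIODevice::ReadOnly))
        return QString();
    const QByteArray key = "not-a-real-secret"; // embedded key: obfuscation only
    QByteArray data = f.readAll();
    for (int i = 0; i < data.size(); ++i)
        data[i] = data[i] ^ key[i % key.size()]; // undo the build-time XOR
    return QString::fromUtf8(data);
}

Anyone willing to disassemble the executable can recover both the key and the data; real confidentiality requires keeping the secret off the client machine entirely.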

the concepts of git ignore and not committing, but still using, global variables

I'm a bit of a newbie, but already running apps with Meteor.js. Since I'm now working with API keys I'm finally realizing that security is a thing, so I placed my keys in a settings.json, and am instructed not to commit it, or rather to .gitignore the file. But despite reading the documentation, this all seems very counter-intuitive. If I need the variables to make my HTTP requests, then how can my app possibly function without adding my keys, in some form, to the repo? I know the answer is "it can", but how? (In general terms; I don't need a Meteor specialist yet.)
Typing this question out makes me feel pretty ignorant for the stage I'm at, but the docs out there for some reason are not clarifying this for me.
You can generate the file with sensitive information on git checkout.
That is called a smudge script, part of a content filter driver, set up through a .gitattributes declaration.
(see the smudge/clean filter diagram in "Customizing Git - Git Attributes", from the "Pro Git" book)
That 'smudge' script (which you have to write) would need to:
fetch the right key (from a source outside the repo, so there is no risk of adding and pushing it by mistake)
generate settings.json from a tracked template, settings.json.tpl, which contains placeholder values to replace
That means:
the template settings.json.tpl is added to the git repo
the generated file settings.json is declared in the .gitignore file and never versioned.
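
One possible wiring, as a sketch. The filter name apikeys, the script name, the @API_KEY@ placeholder and the ~/.secrets/api_key location are all made-up examples to adapt.

.gitattributes (tracked):

settings.json.tpl filter=apikeys

One-time local setup (never committed):

git config filter.apikeys.smudge ./generate-settings.sh

generate-settings.sh — git pipes the template in on stdin and expects it back on stdout; writing settings.json with the real key happens as a side effect:

#!/bin/sh
tmpl=$(cat)
printf '%s\n' "$tmpl" |
  sed "s/@API_KEY@/$(cat "$HOME/.secrets/api_key")/" > settings.json
printf '%s\n' "$tmpl"

A clean script (filter.apikeys.clean) is not strictly needed here, since the tracked template never contains the real key.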

Where do I properly put my constants in Meteor

I usually follow the unofficial Meteor FAQ on how to structure my codebase, but I can't figure out where I should put my global constants.
To give an example: I have some database entries with a constant GUID that I need to reference in many points of my app. So far I just attached the constants to the relevant collection, such that in collections/myCollectionWithGuids.coffee it would say:
@MyCollectionWithGuids = new Meteor.Collection "myCollectionWithGuids"
@MyCollectionWithGuids.CONSTANT_ID = "8e7c2fe3-6644-42ea-b114-df8c5211b842"
This approach worked fine, until I needed to use it in the following snippet, located in client/views/myCollectionWithGuidsView.coffee, where it says:
Session.setDefault "selectedOption", MyCollectionWithGuids.CONSTANT_ID
...which is unavailable because the file is being loaded before the Collections are created.
So where should I put my constants then, such that they are always loaded first without hacking in a bunch of subdirectories?
You could rely on the fact that a directory named lib is always loaded first.
So I would probably advise you to organize your code as follows:
lib/collections/collection.js
client/views/view.js
In your particular use case this is going to be okay, but you might find cases where you have to use a lib folder in your client directory as well, and since the load-order rules stack (subdirectories being loaded first), it will be loaded BEFORE the lib folder residing in your project root.
For the moment, the only way to have full control over the load order is to rely on the package API, so you would have to make your piece of code a local package of your app (living in the packages directory of your project root).
It makes sense because you seem to have a collection and a view that are somehow related; plus, splitting your project into a bunch of collaborating local packages tends to be an elegant design pattern after all.
Creating a local package is really easy now that Meteor 0.9 provides documentation for the package.js API.
http://docs.meteor.com/#packagejs
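A minimal package.js sketch — the package name, file name and exported symbol are made-up examples, and the exact api.use dependencies vary with your Meteor version:

// packages/my-collections/package.js
Package.describe({
  name: 'my-collections',
  version: '0.0.1'
});

Package.onUse(function (api) {
  api.use(['meteor', 'mongo']);        // collection support; names vary by version
  api.addFiles('collections.js');      // files load in the order listed here
  api.export('MyCollectionWithGuids'); // expose the global to the rest of the app
});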
I would put your collection definitions in a lib directory. The file structure documentation explains that all files under the lib directory get loaded before any other files, which means your variable will be defined when you attempt to access it in your client-side code.
Generally speaking, you always want your collections to be defined before anything else in your application is loaded or executed, since your application will most likely heavily depend upon the use of the collection's cursor.
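
Concretely, with this layout the definition from the question just moves into lib/ unchanged:

lib/collections/myCollectionWithGuids.coffee:

@MyCollectionWithGuids = new Meteor.Collection "myCollectionWithGuids"
@MyCollectionWithGuids.CONSTANT_ID = "8e7c2fe3-6644-42ea-b114-df8c5211b842"

After that, the Session.setDefault call in client/views/myCollectionWithGuidsView.coffee will find MyCollectionWithGuids already defined.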

WinDBG - Symbols path not included

I am trying to trace a possible memory leak in a very large ASP.NET application. I am trying to familiarize myself with WinDBG before attempting to use this tool in the live environment.
I have followed the instructions in the following article, which I found very helpful: http://humblecoder.co.uk/uncategorized/spotting-a-memory-leak-with-windbg-in-net. I am able to create a "memory dump" file of the ASP.NET process and show that the delegate is causing the memory leak as specified in the article. I refer to the paragraph in the article that starts: "Next we need the symbols". I did not add the symbol files using File > Symbol File Path in WinDBG, and yet I still seem to be able to debug the application and follow through the remaining steps of the article. Are symbol paths not required with an ASP.NET application?
Because .NET assemblies contain metadata, including the name of every method and its parameters, symbols aren't necessary to obtain a readable stack trace of a managed thread.
One thing symbols can provide is the file name and line number of each statement, so you can more easily figure out which frames in the stack trace correspond to which lines in your source code.
As Michael says, symbols are not strictly necessary for managed code, as most of the relevant information is available at runtime as metadata; but if you're digging into native code it is very useful to have symbols.
For many scenarios you can just do .symfix which will tell WinDbg to use Microsoft's public symbol server. That will give you access to symbols for all the CLR and Win32 specific calls in your code. Remember to do a .reload if you set the path.
If your code includes native non-Microsoft assemblies as well, you need to append the location of the corresponding PDB files to the symbol path. Use the .sympath command for that.
To troubleshoot symbol loading use the !sym noisy command.
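Putting the above together, a typical session setup looks something like this (the cache and PDB paths are made-up examples):

$$ point at Microsoft's public symbol server, caching under c:\symbols
.symfix c:\symbols
$$ append the folder holding your own native PDBs
.sympath+ c:\myapp\pdbs
$$ turn on verbose symbol-loading output before reloading, to troubleshoot
!sym noisy
$$ reload module symbols using the new path
.reload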
For more information see this.

Better way to handle page that links to hundreds of binaries?

I've struggled to find a better solution for the following setup. I'm not actively working on this, but I know some who might appreciate other ways of handling it.
Setup:
Tridion-managed page has a single "linked list" component
Single component has component links to other components in Tridion
Linked-to components often link to a multimedia component (MM)
An XSLT component template (XSLT CT) renders XML with above content and with links to PDF
XSL document() function used to grab embedded (linked-to) content, all content converted to XML nodes and attributes
TCMScriptAssistant namespace with publishBinary() publishes related PDF and other media
Page template just outputs the result of the CT
Business requirements:
improved publishing (last I worked on this, some of these files created a 2GB publishing transaction because of the PDFs)
published XML content file must reference the associated PDFs; hyperlinks work but identifiers might not help because of...
no Tridion content delivery APIs, mainly for independence from the storage database but also to avoid Tridion-specific code on the presentation server (loosely coupled setup and less training for developers)
The biggest issue is the huge transport package during publishing. The second problem is publishing any of the linked-to PDFs will cause the page to republish.
How could this setup be improved or re-engineered, preferably without too many changes to the existing templates, though modular templating could be considered.
Dynamic component presentations could possibly work, but would need to be published to the file system and not use dynamic linking or broker objects (e.g. no criteria filters, binary metadata, etc).
There are indeed 2 questions. I will handle them in reverse order.
To prevent the page from being republished when you publish a binary, you can use the event system in older versions of Tridion (pre-2011) to turn off link resolving; with newer versions you can use a custom resolver instead. There is an article by Nuno which explains this (http://nunolinhares.blogspot.com/2011/10/tridion-publisher-and-custom-resolvers.html).
Your second question is a bit tougher, in no small part because of your criterion of not using the SDL Tridion CD APIs. I would have suggested publishing the binaries separately (this would keep the size of your transaction package down), and using Binary Linking to resolve the paths at request time.
Given this is not an option, I think the only way I would approach it would be to still use dynamic component presentations, and then use predictable, unique file names for the PDFs (i.e. something like 317-12345.pdf, based on the item's URI), with one directory for all the binaries. That way you could write the paths to the binaries in your XSLT template, as you know where the binaries will be located later. You could then use a custom resolver to publish the binaries when you publish the main list component or page.
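
As a sketch, the predictable-path idea inside the XSLT CT could look like this — the /binaries/ web path is a made-up example, and the assumption that the multimedia link arrives as an xlink:href attribute holding a tcm URI is a simplification:

<!-- turn an item URI like tcm:317-12345 into /binaries/317-12345.pdf -->
<xsl:attribute name="href">
  <xsl:value-of select="concat('/binaries/', substring-after(@xlink:href, 'tcm:'), '.pdf')"/>
</xsl:attribute>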
Hope that helps
Chris
