find declaration in unix dev toolchain

I recently read a lot of opinions on whether one should use an IDE or the unix toolchain to develop C++ programs. One thing I was wondering is whether you can somehow follow symbols in gvim or any editor that you like. This is a feature I use a lot in Visual Studio and Eclipse.
Or, put another way: assume you browse some foreign code and you spot an unknown function name. How do you find out which library it comes from without manually searching all the includes?

Here are two popular tools for finding symbols.
ctags
cscope
I believe Emacs and Vim both have support for these two tools.
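In Vim, for instance, running ctags -R . at the project root produces a plain-text tags file, and Ctrl-] (or :tag) on an identifier jumps to its definition. As a rough illustration of what that index contains, here is a minimal sketch that reads the tab-separated tags format directly (my_function is a made-up name):

    # Minimal sketch: look a symbol up in a "tags" file produced by "ctags -R ."
    # (format: name<TAB>file<TAB>pattern; lines starting with "!" are metadata).
    def find_symbol(tagfile, symbol):
        with open(tagfile, encoding="utf-8", errors="replace") as f:
            for line in f:
                if line.startswith("!"):
                    continue
                name, filename, _pattern = line.split("\t", 2)
                if name == symbol:
                    print("%s is defined in %s" % (symbol, filename))

    find_symbol("tags", "my_function")   # "my_function" is just an example name

Editors like Vim and Emacs do exactly this lookup for you against the same file.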

Related

GDB MI Interface parser

I am trying to write a GDB frontend. I use GDB for debugging embedded targets, especially with ARM processors. I have used Eclipse before, but I didn't like it very much. At work, we use Lauterbach Trace32, which is one of the most comfortable debuggers I have ever used. That's why I started a project where I try to implement a very similar application as a GDB frontend, specialised for embedded debugging.
So far I have implemented some very basic commands, like viewing the source code and viewing the target's registers, and they work well. Currently I am implementing breakpoints, and for this I need some help.
I use Qt (with Qt Creator) on Linux for my project, so I have set up a QProcess for the gdb running in the background. I use the GDB MI2 interface to communicate with gdb. Using write() I can send commands, and via a signal I receive the responses from GDB. This works well, but the answers from GDB in this MI format are very ugly to parse. Could anyone give me a hint how this can be done in an elegant way? I tried Boost to build a parser, which worked, but that approach is much too complicated in my opinion. Is there some better way to parse the GDB responses?
There's no truly great way to parse MI. MI is quite old and so predates the widespread adoption of XML and JSON. Also, even more unfortunately for you, there are a few known MI emission bugs in gdb (search bugzilla and the list archives) -- spots where gdb's MI output doesn't conform to the MI grammar. These have gone unfixed because it was judged that changing the output (without bumping the MI version number, something nobody seems to want to drive) would break existing parsers without much corresponding benefit.
The good news is that there are a few reusable parsers already in existence. So I would suggest using one of those. The parser in nemiver is written in C++ and is a reusable library. However as you're already using Qt you might check out the parsers in Qt Creator or KDevelop.
It might also be possible to roll your own gdb interface by programming gdb in Python. However, there are likely to be some holes in gdb's Python API, so I would only recommend this if you are also willing to get into hacking on the gdb Python layer.
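For a flavour of that route, here is a minimal sketch of a custom command written against gdb's Python API (the command name and file name are made up; you would load it inside gdb with the source command):

    # listbps.py -- hypothetical file; load inside gdb with: (gdb) source listbps.py
    import gdb

    class ListBreakpoints(gdb.Command):
        """Print each breakpoint's location and whether it is enabled."""
        def __init__(self):
            gdb.Command.__init__(self, "list-bps", gdb.COMMAND_BREAKPOINTS)

        def invoke(self, arg, from_tty):
            # gdb.breakpoints() returns None when no breakpoints exist (older gdb)
            for bp in gdb.breakpoints() or []:
                print("%s enabled=%s" % (bp.location, bp.enabled))

    ListBreakpoints()   # registering the command is done by instantiating it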
There is the pygdbmi Python library for exactly this purpose (it did not exist when you originally asked your question): https://github.com/cs01/pygdbmi
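A minimal sketch of its use, going by the project README (exact constructor arguments have changed between releases, so treat this as illustrative):

    # pip install pygdbmi -- spawns gdb with the MI interpreter and parses its output
    from pygdbmi.gdbcontroller import GdbController

    gdbmi = GdbController()
    responses = gdbmi.write("-break-insert main")   # any MI command
    for r in responses:
        # each response arrives as a dict with keys like type, message, payload
        print(r["type"], r["message"], r["payload"])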
gdbgui is a frontend that utilizes pygdbmi: https://github.com/cs01/gdbgui
Have fun!
Disclaimer: I am the developer.
Qt Creator itself has a parser for (some superset of) the MI protocol, see plugins/debugger/debuggerprotocol.{h,cpp} in its sources. This (superset) protocol is also used to capture the data that Qt Creator's "pretty printing" produces, and it also serves as the debugger->frontend channel for the LLDB integration.
The protocol implementation is fairly well separated from the rest of Qt Creator's debugger plugin. It should be re-usable.

Does Django require an IDE and does ASP.NET require Visual Studio?

I wasn't sure how to ask this, but from my limited understanding and experience, interpreted languages like Python and PHP are quicker to build with and to change, because you can edit the code in, well, Notepad if you need to and fix something quickly.
From what I know about ASP.NET, you can do the same, but with a simple text editor it is very complicated, and using Visual Studio is just about the only way to be productive with .NET. I just remember LOTS of XML files produced automatically by Visual Studio based upon LINQ, or a BLL, or whatever the current technology is (no slam intended there). I'd throw the Java stack in the same category as .NET for the purposes of this question. I like Visual Studio, but I don't like how I have to use it to recompile the whole project each time I change something in my model or controller (I'm sure that is an exaggeration, but I don't know).
Are these statements accurate?
The need to recompile your code while using Visual Studio is due to the design of the programming language used - not due to Visual Studio. Java is the same, whether you are using NetBeans, Eclipse, or Notepad.
An IDE is typically not required, but as you pointed out, it makes life easier. I started out using vim to program in Python - but now use Eclipse.
Django comes with built-in command line tools to help you build your initial project, but in theory, you could manually create these files.
If you're comfortable using notepad to do your PHP work, then you wouldn't need an IDE to do Django.
That said, there are many IDEs for Django (and PHP) that can make the process easier for some.
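To make the "a text editor is enough" point concrete, here is a minimal sketch of a Django view plus its URL wiring, typed into a plain editor (the app and file names are just examples, and path() is the modern form; older Django versions used url() with a regex):

    # views.py -- nothing here needs an IDE
    from django.http import HttpResponse

    def hello(request):
        return HttpResponse("Hello from a plain text editor")

    # urls.py (a separate file) -- maps a URL to the view above
    from django.urls import path
    from myapp.views import hello   # "myapp" is a placeholder app name

    urlpatterns = [path("hello/", hello)]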

Qt: how to make help system?

I need to provide a help system for my application. The app mostly runs on computers without any Qt installed. I would like some way (a tool, etc.) to create a professional-looking help system. I mean I need to provide the regular kind of help system most applications have. It should look like a regular CHM file (with index, search, etc.).
I tried to use the QtAssistant class and created an .adb file, but when I run the Assistant utility it doesn't know the -profile option, so I cannot even check whether I made the file properly.
I'm a little confused because I can see both QtAssistant and QHelp classes, and I don't know which one is more suitable for my purpose.
Thanks a lot
If you do not care about using Microsoft's CHM files, then go ahead and use the QtHelp API - if you are using Qt 4.4 or newer. The QAssistant API has been superseded by QtHelp starting with version 4.4, so don't start with old or deprecated interfaces. The QAssistant help files will still be readable from a QtHelp-based implementation.
If you do need to read chm files, then a chmlib-based approach with a customized QWebBrowser would be suitable, but I don't think that's what you are looking for.

What is currently the best build system [closed]

A few years ago I looked into using some build system that isn't Make, and tools like CMake and SCons seemed pretty primitive. I'd like to find out if the situation has improved. So, under the following criteria, what is currently the best build tool:
platform agnostic: should work on Windows, Linux, Mac
language agnostic: should have built-in support for common things like building C/C++ and other statically compiled languages. I guess it doesn't need to support the full autotools suite.
extensible: I need to be able to write rules to generate files, for example from reStructuredText, LaTeX, custom formats, etc. I don't really care what language I have to write the rules in, but I would prefer a real language rather than a DSL.
I would prefer to avoid writing any XML by hand, which I think Ant, for example, requires.
Freely available (preferably open source)
The term "best" is slightly subjective, but I think answers can be rated objectively by the criteria above.
I'd definitely put my vote in for premake. Although it is not as powerful as its older brothers, its main advantage is absurd simplicity and ease of use. It makes writing multi-compiler, multi-platform code a breeze, and natively generates Visual Studio solutions, XCode projects, Makefiles, and others, without any additional work needed.
So, judging purely by the criteria set forth in the question, the build system that seems like the best fit is probably waf - pure Python, provides support for C++ and other languages, general, powerful, not a DSL.
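For reference, a waf project is driven by a wscript file that is plain Python; a minimal C++ example, following the shape of waf's own samples (the exact API may vary by waf version):

    # wscript -- minimal waf script for one C++ program
    def options(ctx):
        ctx.load("compiler_cxx")

    def configure(ctx):
        ctx.load("compiler_cxx")

    def build(bld):
        bld.program(source="main.cpp", target="app")

You then run ./waf configure build from the project root.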
However, from my personal experience, I prefer CMake for C++ projects. (I tried CMake, SCons, and waf, and liked them in roughly that order). CMake is a general solution, but it has built-in support for C++ that makes it nicer than a more generic solution when you're actually doing C++.
CMake's build model for C++ is more declarative and less imperative, and thus, to me, easier to use. The CMake language syntax isn't great, but a declarative build with odd syntax beats an imperative build in Python. Of the three, CMake also seems to have the best support for "advanced" things like precompiled headers. Setting up precompiled headers reduced my rebuild time by about 70%.
Other pluses for CMake include decent documentation and a sizable community. Many open source libraries have CMake build files either in-tree or provided by the CMake community. There are major projects that already use CMake (OGRE comes to mind), and other major projects, like Boost and LLVM, are in the process of moving to CMake.
Part of the issue I found when experimenting with build systems is that I was trying to build a NPAPI plugin on OS X, and it turns out that very few build systems are set up to give XCode the exact combination of flags required to do so. CMake, recognizing that XCode is a complex and moving target, provides a hook for manually setting commands in generated XCode projects (and Visual Studio, I think). This is Very Smart as far as I'm concerned.
Whether you're building a library or an application may also determine which build system is best. Boost still uses a jam-based system, in part because it provides the most comprehensive support for managing build types that are more complex than "Debug" and "Release." Most boost libraries have five or six different versions, especially on Windows, anticipating people needing compatible libraries that link against different versions of the CRT.
I didn't have any problems with CMake on Windows, but of course your mileage may vary. There's a decent GUI for setting up build dependencies, though it's clunky to use for rebuilds. Luckily there's also a command-line client. What I've settled on so far is to have a thin wrapper Makefile that invokes CMake from an objdir; CMake then generates Makefiles in the objdir, and the original Makefile uses them to do the build. This ensures that people don't accidentally invoke CMake from the source directory and clutter up their repository. Combined with MinGW, this "CMake sandwich" provides a remarkably consistent cross-platform build experience!
Of course that depends on what your priorities are. If you are looking primarily for ease of use, there are at least two new build systems that hook into the filesystem to automatically track dependencies in a language agnostic fashion.
One is tup:
http://gittup.org/tup/
and the other is fabricate:
http://code.google.com/p/fabricate/
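fabricate build scripts are plain Python; a minimal sketch along the lines of its documentation (file names are illustrative):

    # build.py -- fabricate runs the commands and traces them to discover dependencies
    from fabricate import *

    def build():
        run("gcc", "-c", "program.c")
        run("gcc", "-o", "program", "program.o")

    def clean():
        autoclean()

    main()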
The one that seems to be the best performing, portable, and mature (and the one I have actually used) is tup. The guy who wrote it even maintains a toy Linux distro where everything is a git submodule, and everything (including the kernel) is built with tup. From what I've read about the kernel's build system, this is quite an accomplishment.
Also, Tup cleans up old targets and other cruft, and can automatically maintain your .gitignore files. The result is that it becomes trivial to experiment with the layout and names of your targets, and you can confidently jump between git revisions without rebuilding everything. It's written in C.
If you know Haskell and are looking for something for very advanced use cases, check out shake:
http://community.haskell.org/~ndm/shake/
Update: I haven't tried it, but this new "buildsome" tool also hooks into the filesystem, and was inspired by tup, so is relevant:
https://github.com/ElastiLotem/buildsome
CMake
CMake is an extensible, open-source system that manages the build process in an operating system and in a compiler-independent manner.
Gradle seems to match all the criteria mentioned above.
It's a build system that combines the best of Maven and Ant. To me, that's the best.
The Selenium project is moving over to Rake, not because it's the best but because it handles multiple languages slightly better than all the other build tools and is cross-platform (developed in Ruby).
All build tools have their issues, and people learn to live with them. Something that runs on the JVM tends to be really good for building apps, so Ant, Maven (I know it's hideous), Ivy, and Rake are all worth a look.
FinalBuilder is well known in the Windows world.
smooth build matches most of your requirements.
platform agnostic: yes, it's written in Java
language agnostic: it doesn't support C/C++ yet, only Java, but it is extensible via plugins written in Java, so adding support for more compilers is not a problem
extensible: yes, you can implement a smooth function via a Java plugin; you can also create a smooth function by defining it as an expression built from other smooth functions.
I would prefer to avoid writing any XML: you won't see a single line of it in smooth build
Freely available: yes, Apache 2 license
disclosure: I'm the author of smooth build.

How do small software patches correct big software?

One thing I've always wondered about is how software patches work. A lot of software seems to just release new versions of their binaries that need to be installed over older versions, but some software (operating systems like Windows in particular) seems to be able to release very small patches that correct bugs or add functionality to existing software.
Most of the time, the patches I see are far too small to be replacing entire applications, or even the smaller files used within them. To me it seems like the actual binary is being modified.
How are these kinds of patches actually implemented? Could anyone point me to any resources that explain how this works, or is it just as simple as replacing small components such as linked libraries in an application?
I'll probably never need to do a deployment in this manner, but I am curious to find out how it works. If I'm correct in my understanding that patches can really modify only portions of binary files, is this possible to do in .NET? If it is I'd like to learn it since that's the framework I'm most familiar with and I'd like to understand how it works.
This is usually implemented using binary diff algorithms -- diff the most recently released version against the new code. If the user's running the most recent version, you only need to apply the diff. Works particularly well against software, because compiled code is usually pretty similar between versions. Of course, if the user's not running the most recent version you'll have to download the whole thing anyway.
There are a couple implementations of generic binary diff algorithms: bsdiff and xdelta are good open-source implementations. I can't find any implementations for .NET, but since the algorithms in question are pretty platform-agnostic it shouldn't be too difficult to port them if you feel like a project.
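As a sketch of the round trip (using the third-party bsdiff4 Python bindings to the same algorithm, just for illustration; file names are made up):

    # pip install bsdiff4 -- Python bindings around bsdiff/bspatch
    import bsdiff4

    old = open("app-1.0.bin", "rb").read()
    new = open("app-1.1.bin", "rb").read()

    patch = bsdiff4.diff(old, new)              # the small delta you ship to users
    open("update.patch", "wb").write(patch)

    # on the user's machine: old binary + patch => new binary
    rebuilt = bsdiff4.patch(old, patch)
    assert rebuilt == new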
If you are talking about patching Windows applications, then what you want to look at are .MSP files. These are similar to an .MSI but just patch an application.
Take a look at Patching and Upgrading in the MSDN documents.
What an .MSP file does is load updated files into an application's install. These are typically updated DLLs and resource files, but it could include any file.
In addition to patching the installed application, the repair files located in C:\WINDOWS\Installer are updated as well. Then, if the user selects "Repair" from Add/Remove Programs, the updated patch files are used too.
I'm thinking that the binary diff method discussed by John Millikin must be what is used on other operating systems. Although you could make it work on Windows, it would be somewhat alien.

Resources