I come from the UNIX world; I'm quite familiar with Linux, Solaris, Cygwin and MinGW development. Recently I ported one of my big projects (cppcms) to support MSVC, including building static and dynamic libraries with CMake.
And I keep running into absolutely weird issues: I had CMake build problems because Windows programming lacks a naming convention for import and static libraries.
Now I've discovered that I should use different builds of ICU (debug/release) according to the actual build I do (Debug/RelWithDebInfo should use the debug ICU, Release the release ICU), and so I have to change the conventions for searching for libraries according to debug/release mode, and only under MSVC. Otherwise the application just won't start, giving an error about a missing DLL.
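In CMake terms, what this seems to force on me is per-configuration library selection, roughly like this (a sketch only: the find_library variables, the ICU_ROOT hint and the target name are illustrative, not something a stock find module provides):

# Sketch: pick the debug ICU for Debug builds and the release ICU for the rest.
find_library(ICU_UC_RELEASE NAMES icuuc  PATHS "${ICU_ROOT}/lib")
find_library(ICU_UC_DEBUG   NAMES icuucd PATHS "${ICU_ROOT}/lib")

target_link_libraries(cppcms
    debug     ${ICU_UC_DEBUG}      # used for the Debug configuration
    optimized ${ICU_UC_RELEASE})   # used for the non-Debug configurations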
I don't have any such issues under MinGW or Cygwin with GCC, OpenSolaris with Sun Studio, or Linux with the GCC or Intel compilers.
And I still have numerous weird issues and weird bugs and very strange behavior -- even some trivial things do not work under MSVC builds, while everything works absolutely fine under Solaris/Linux/Cygwin/MinGW using GCC from 3.4 up to 4.4, Sun Studio and the Intel compilers. But not under MSVC.
To be honest, I have no idea how to deal with the last one, because to me it looks more like an environment issue.
I know that the question is not really well defined. I think I'm quite an experienced developer and I know how to write portable, good C++ code. But using the Microsoft native tools drives me crazy with issues I just don't know how to solve.
Question: What should an experienced Unix programmer with a fairly good grounding in the Win32 API know when starting to use the genuine Microsoft tools?
P.S.: Can someone explain why "Release With Debug Info" requires the Debug version of the MSVC runtime? And why do two versions of the runtime exist at all?
P.P.S.: Please note I don't have issues with the Win32 API; in fact, the Windows GCC build works absolutely fine.
Clarifications:
I'm looking for pitfalls that a programmer coming from the Unix world may fall into.
For example, when moving from Linux to Solaris: make sure you compile code with -mt or -pthreads when building multithreaded programs; linking with -lpthread is not enough.
P.S.: Can someone explain why "Release With Debug Info" requires the Debug version of the MSVC runtime?
It doesn't.
And why do two versions of the runtime exist at all?
Because the debug version does more error checking.
And I still have numerous weird issues and weird bugs and very strange behavior -- even some trivial things do not work under MSVC builds,
* What am I doing wrong?
Not telling us what "weird issues and weird bugs and very strange behavior" you get.
* Where should I start?
By telling us the specific errors and problems you encounter.
* What do I miss?
Reading the documentation and learning the tools.
If your question is "What do I read to become a good Windows programmer?" then my answer is: Everything from Jeff Richter, as a start.
There is no magic bullet which will automatically make you an experienced Windows developer. Windows is a very different land compared to Unix. There are lots of quirks, weird behavior, and stuff which is just plain different. The only way to get out with your sanity intact is to tackle the transition one small problem at a time. Concentrate on a specific problem and try to understand the problem. Don't just "get it to work", but really understand what is happening. A good book about Windows programming will help.
There are huge amounts of Windows knowledge and experience accumulated in the SO community, but the only way to access it is to ask concrete questions about specific problems.
The release and debug versions of DLLs use different ways of allocating memory; that is why it is not advisable to mix release and debug versions. If you allocate something in a debug-mode DLL and pass it back to an application that was compiled in release mode, you may get into trouble.
In the case of your naming issues, you may want to have different directories where you place your static libraries / DLLs. You can do this in Visual Studio by using the Configuration Manager; I'm not sure how it works under the Express edition.
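Since the question mentions CMake, the rough equivalent there is to give each kind of output its own directory and let the Visual Studio generator append the configuration name (Debug/, Release/) as a subdirectory on its own; the layout below is only an example:

# Sketch: separate .lib files (static and import) from .dll/.exe outputs.
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/lib")
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/lib")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin")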
I think you need to try to actually understand the new toolset rather than just try to squish it into your current understanding of your existing tools. For that, the best way, IMHO, is to start using Visual Studio as Microsoft intended; once you can build a simple project in the IDE, you can move to building it with your preferred make system, but do so with an understanding of how the IDE uses its own build system to set things up for that build (which WILL work).
So, for example, for part 1 of your question, create a simple static library project and a simple DLL project and look at the linker options tabs. Jump to the 'Command Line' view and you'll see that a DLL uses the /OUT linker option to set the name and location of the DLL file and the /IMPLIB linker option to set the name and location of the import library. With a static library only the /OUT option is used, and it indicates the name of the static lib. It's true that if you're building a static lib and a DLL from the same source, and for the DLL you have /IMPLIB set to MyCrossPlatformCode.lib and /OUT set to MyCrossPlatformCode.dll, then you may have problems if you also build a static lib with an /OUT switch of MyCrossPlatformCode.lib... Simply don't do that: either build the static libs to a different output directory (which is what OpenSSL does), or, better (IMHO), mangle the names somewhat so that you have MyCrossPlatformCode.lib/.dll and MyCrossPlatformCode_static.lib (which is what STLPort does).
Note that you might also want to mangle in (or account for) building with different versions of the Microsoft tool chain (so you might end up with stlport_vc8_x64d_static.5.1, perhaps).
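If you're driving this from CMake rather than raw linker switches, a sketch of the STLPort-style mangling might look like this (the target names, the source variable and the postfix are all illustrative):

# Sketch: build a DLL and a static library from the same sources without the
# import library (.lib) colliding with the static library (.lib).
add_library(mycode        SHARED ${CROSS_PLATFORM_SOURCES})
add_library(mycode-static STATIC ${CROSS_PLATFORM_SOURCES})

# DLL:        mycode.dll plus import library mycode.lib
# Static lib: mycode_static.lib
set_target_properties(mycode-static PROPERTIES OUTPUT_NAME "mycode_static")

# Optional "d" postfix so debug and release binaries can live side by side.
set_target_properties(mycode mycode-static PROPERTIES DEBUG_POSTFIX "d")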
An alternative approach, if you really can't face the thought of understanding your toolset, is that you could take a look at some of the popular open source systems that build quite fine on Windows and Unix systems; OpenSSL and STLPort for a start, perhaps.
Related
We started testing OE12.4 to migrate our systems from 11.7, but the 12.4 licenses don't include 32-bit versions of PROWIN32 and AppBuilder anymore. Therefore, we're unable to update and develop new applications using the MSCOMCTL TreeView OCX, as we did until now. This isn't a problem for future programs, but for the migration it is. We're unable to run programs from PROWIN (x64) that use MSCOMCTL TreeView OCXs, as well as all other programs that use 32-bit DLLs. The programs compile like a charm, flawlessly, but don't execute.
I tried to register the MSCOMCTL OCX through CMD in the c:\windows\system32 directory but, as expected, it didn't work. Since this DLL is a discontinued resource, Microsoft doesn't provide 64-bit compiled code for it.
I'm aware that there is an open project called PureAblTreeView, which works pretty well and doesn't rely on DLL dependencies, but this object was built on ADM2 and the majority of my old programs are ADM1. Therefore, I'm unable to use that solution without rewriting my programs, which can't be done at this time. I even tried to "rebuild" PureAblTreeView as a SmartV8Object, but it didn't work. Too many differences to resolve and too little time to reach a running solution.
Is there any workaround for this situation, besides rebuilding the programs?
Thank you all in advance.
If I am reading Knowledge Base article 000153835 correctly, then your issue should have been addressed in 12.4:
The 12.2.5 update and the 12.4 release have re-introduced the AppBuilder to 32-bit OpenEdge. It is now available to run from the installed Program Group icons.
I'm hoping someone has come across this problem before too.
I am trying to use Visual Studio to develop for Linux with G++.
I am trying to include math.h and use tanf().
If I compile with the g++ compiler, "arm-none-linux-gnueabi-g++", everything works.
But if I add the include directory which the docs say is the right one,
"CodeSourcery\Sourcery G++ Lite\arm-none-linux-gnueabi\libc\usr\include\",
and then include math.h,
Visual Studio does not recognize any of the math functions, namely tanf().
Does anyone have any idea why?
Thanks for any help.
edit:
the same app successfully compiles with this command line:
arm-none-linux-gnueabi-g++ -o test main.cpp "-I%PALMPDK%\include" "-I%PALMPDK%\include\SDL" "-L%PALMPDK%\device\lib" -Wl,--allow-shlib-undefined -lSDL -lGLESv2 -lpdl
I am trying to use visual studio to develop for Linux with G++.
Don't.
It looks like you are trying to use a cross compiler to build for an embedded ARM machine. Likely, you won't be able to get away with just the compiler - you'll need a whole root environment in order to link to anything more than libc. Visual Studio, while a good IDE, really can't be molded into this role. You will really need an actual Linux machine, with your corresponding root environment (be it home grown, buildroot, openembedded, etc).
1) In C++, you include <cmath>, not math.h.
2) In C++, you use tan with float arguments (there is an overload), not tanf.
I would guess that Visual Studio sees <math.h>, and thinks that must refer to the Microsoft math header (which is fundamentally antediluvian and lacks support for C99 niceties such as tanf). This is just guessing though, since you haven't posted the actual error that you're encountering; what exactly do you mean when you say "visual studio does not recognize any of the math functions"? Does it fail to compile? To link? What is the exact text of the error message? What are the exact options that are being passed to the compiler or linker?
I found a solution!
I downloaded and installed MinGW instead, and that works fine. I have all the function prototypes for extra stuff like gettimeofday(), and all the regular stuff like tanf() is still working fine.
P.S.: Visual Studio even has a button for "Use Output Window" where it nicely dumps any errors that are generated by "arm-none-linux-gnueabi-g++".
I would like to release my app with both 32- and 64-bit support. I am using elmah and SQLite. Both packages have a separate binary for 32 and 64 bits. I can't add both the 32-bit and 64-bit DLLs. I tried adding both the 32- and 64-bit DLLs with different filenames in my bin/release folder, and I get a bad image format error. (I tested by running on a 64-bit Windows server with both DLLs in the directory, and on my own system, which is 32-bit.)
How do I release the app so that the same folder can be run as either 32-bit or 64-bit?
Having just fought with 32- vs. 64-bit not long ago, I'll take a shot at this, at least with some general observations. I know this question is over a year old, but I hope the answer helps someone anyway, regardless of whether you "accept" the answer (which, in contrast to some, is not why I answer questions on StackOverflow).
First off, will 32-bit-only work in a WOW64 context in your situation? Often it will, and that can simplify your situation.
There are, however, situations in which third-party libraries make WOW64 not workable, at least according to their documentation, which is the situation I was facing. To solve the problem, I had to have both a 32-bit and a 64-bit build. If there is a way to release them both in the "same folders" somehow, I did not find it. However, it really was not too difficult to do so. I did have to edit the Visual Studio project files by hand. If I recall, the basic steps were:
Set up my build definitions in Visual Studio carefully, so that both release and debug versions had individual projects set correctly. That meant, in my case, that anything interacting with the native libraries at all had to be built in either x86 or x64 format, not Any CPU. Projects that are MSIL-only can be Any CPU, as far as I can tell.
Edit your project files (e.g., .csproj) so that the correct third-party DLLs are put in the correct folders based on the build. If there was an easier way to accomplish this, I didn't see it. There may be an obvious project file within your solution for this to happen, though it can really happen anywhere in your solution, so long as:
In your actual web project, make sure you add the project with the third-party DLL copy as a reference, even if the web project does not use this other project directly. That way, the files will get copied to your build directory. This seems like a bug in either MSBuild or Visual Studio to me, but as of VS2010, it does not seem to have been fixed. Also, if you need to copy the third-party DLLs to a directory such as App_Code, you may need to edit the web project file to accomplish that.
I would have to be at work to look at the project files and see what changes I specifically made, but these were the basic steps. Unless you can get away with your entire project being compiled as Any CPU, I think you will need to compile both a 32-bit and 64-bit version and deploy the one you need. Your actual code likely will not change, unless you are using native or unmanaged code.
Someone may come along who knows more about this than I, but I hope this helps someone. I'll try to answer any questions left in the comments.
I downloaded the Qt embedded demo source code recently on my Linux machine. Here is what happened when I ran the program:
I compiled it statically on my x86 machine and ran the application on that x86 machine; it runs fine. But when I took the statically compiled binary to another machine with an Atom platform, it ran with some widgets missing. I found that the plugins can't be ported with static compilation. Can anybody tell me if this is true? If not, can anybody tell me the steps to do it?
I compiled it dynamically with shared libraries and got an executable on Linux. I ran "ldd MyAppName" and it shows me the shared library files it is using, but I don't know how to package these. Can anybody tell me the steps to package it?
I checked the article on deploying Qt applications on X11/Linux platforms, but it's not complete. Can anybody give me the detailed steps?
Any help will be appreciated.
You either have a distro that doesn't support Atom, or libraries that are not compiled with support for it. Either way, something somewhere on your system (or your Qt) is not compiled for Atom.
The problem is that you are compiling your app, and its libraries (static or dynamic) work for x86, not for Atom. Perhaps you are able to create some sort of fat binary (lipo?) so that pieces of your app will function on x86 and Atom, but bits using the x86-only libraries will not function on Atom. (Right? That's a concise definition of your problem?)
If you have the source code for the libraries that don't run on Atom, and they're important to you, you should consider porting the code to Atom. If it's open-source code, you can contribute to the project. While you didn't give many details, my (very generic) approach to this would be to get the code on an Atom machine, write a very short test application for the library, and work out the issues.
Re #2: There's little difference between compiling an app and linking to shared libraries or dynamic libraries. On your x86 machine, if you have this code (these "plugins") compiled as dynamic libraries, it's pretty much the same as statically linking those binaries into the app. These libraries will work on x86, whether they're dynamically or statically linked.
I'm not sure if that helps very much -- if you're getting binary Qt plugins as static or dynamic libraries without source, you're out of luck. Submit a bug report. If you have source code, you can do a lot more.
I just compiled my application dynamically and ported it to the Atom platform. I found the dependencies, ported them as well, and set the LD_LIBRARY_PATH environment variable on the target machine to point to my ported shared libraries, and it worked. Thanks everybody for your suggestions.
I'm mostly a spoiled Windows + Visual Studio (or Borland C++ or whatever, in the past) developer. Although my first contact with Unix was around 20 years ago, and I've used Linux on-and-off for some years, I have only a very limited idea of how to set up a build on a *nix system.
For example, I'm OK with the basics of make - I can get a number of files to compile and link. But I don't really know how to set things up to cope with multiple configurations - how to get all the object files and targets for the release version to go to different folders from the debug version etc etc. Yes, I can RTFM and improvise something, but it's a fair guess that I'd improvise something stupid, overcomplex, fragile and WTF, where it'd make so much more sense to copy a common convention if only I knew what the common conventions are.
Also, I can run a configure script, and I'm vaguely aware that they're associated with autoconf, whatever that is, but I have little idea if/why/how I should set this kind of stuff up in my own projects.
Hopefully, this is enough to give the general idea of what I'm looking for. Of course I could ask/search specific questions here, but that assumes I know all the right questions to ask which I almost certainly don't. So - any pointers?
EDIT
Just thought I'd update this with some longer-term experience.
I tried using premake for a while, but couldn't live with it in the long run. In substantial part there's a dislike of Lua behind that.
Now, I'm using cmake. It generates makefiles/visual studio projects/whatever. It has (so far) handled everything I've needed to do, including support for unit testing and custom build steps. And as I got used to the cmake way of doing things, I found it was a good way, allowing me to easily use multiple sets of tools at once - I can be checking test coverage in MinGW GCC while simultaneously debugging in Visual Studio.
That reveals, of course, that I'm still mostly working in Windows - but switching back and forth is easier than ever.
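For the record, a minimal CMakeLists.txt of the kind I mean looks roughly like this (the project name, file names and the test command are made up, and the version floor is just an example):

cmake_minimum_required(VERSION 2.8)
project(MyApplication CXX)

add_executable(myapp main.cpp)

# Debug/Release configurations come for free in the generated Visual Studio
# projects; for makefile builds pick one with -DCMAKE_BUILD_TYPE=...
enable_testing()
add_test(NAME smoke COMMAND myapp --self-test)   # hypothetical self-test flag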
The downsides of cmake...
Although it generates makefiles/whatever, it can't really be seen as a makefile generator. The resulting makefiles are dependent on cmake being installed. To be honest, I don't really understand why they don't drop makefiles altogether for makefile platforms and just do the building directly, slightly reducing the potential for problems.
It wasn't easy to get started.
The second point has mostly been resolved by asking questions here...
How do I fix this cmake file? - problem linking to imported library
How to apply different compiler options for different compilers in cmake?
For the cmake "include" command, what is the difference between a file and a module?
How to adapt my unit tests to cmake and ctest?
I'd strongly suggest using one of the newer cousins of Make instead of autoconf or makefiles for smaller projects. One of the best ones for you (and the one I mostly love) could be premake4. Why do I suggest it? Because it's extremely simple to use, yet quite powerful, and capable of producing GNU Makefiles, Visual Studio projects, Code::Blocks projects and many more. And the premake files are very clear and readable, using the Visual Studio nomenclature that you're already familiar with.
Here's an example:
-- A solution contains projects, and defines the available configurations
solution "MyApplication"
   configurations { "Debug", "Release" }

   -- A project defines one build target
   project "MyApplication"
      kind "ConsoleApp"
      language "C++"
      files { "inc/**.h", "src/**.cpp", "main.cpp" }

      configuration "Debug"
         defines { "DEBUG" }
         flags { "Symbols" }

      configuration "Release"
         defines { "NDEBUG" }
         flags { "Optimize" }
I would stay away from autoconf until you actually need it as it is pretty complex to use, and in most small projects, make is all you need...
I don't have a tutorial, but I will give you an extremely amazing (and portable between both BSD and GNU make) makefile to start with for small projects. I dug it out of the original BSD 4.3 assembler (as).
HDRS = project.h
OBJS = main.o
LDFLAGS =

project: ${OBJS}
	${CC} ${LDFLAGS} ${OBJS} -o project

.c.o: ${HDRS}
	${CC} ${CFLAGS} -c $*.c

clean:
	rm -f *.o ${OBJS} project *.core a.out errs core
Just replace "project" with your project's name and such...
edit:
technically, you need a BSD 3 clause license for Berkeley... as per:
* Copyright (c) 1982 Regents of the University of California.
* All rights reserved. The Berkeley software License Agreement
* specifies the terms and conditions for redistribution.