VC6 project missing msvcp71d.dll? - dynamic-linking

I've been searching for a while now, and I came to understand that msvcp71d.dll is the debug version of the dynamically linked CRT library for VS.NET 2003, is that right?
Now the project is to be used in VS.NET 2003 OR VC6; that's what's written in the readme file.
So I opened the project in VC6 (which I'm more used to, where C++ is concerned), and I don't understand why it requires the 71d DLL when debugging; shouldn't it be asking for msvcpd.dll or msvcp60d.dll?
I think the project is being linked against the wrong CRT library, probably due to having been converted to VC.NET 2003. So I'm wondering: is there a way to link the project back to the older CRT library of VC6?

You need to point the linker at the right .lib you want to use.
.libs always come in pairs:
- a big static one, or
- a small import library that will require a DLL at execution time; each small one corresponds to a specific DLL. So you need to tell the linker:
please use this mylib..v6...lib so that I'll need the corresponding mylib..6...dll when executing.
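To make the CRT side of this concrete (a minimal sketch of my own, not the original poster's project): which msvcp*.dll a program asks for while debugging is decided by the compiler that built it and by its runtime-library switch, not by anything in the source.

    // runtime_demo.cpp -- same source, different CRT dependency per toolchain.
    //
    //   VC6:          cl /MDd runtime_demo.cpp  -> needs msvcrtd.dll / msvcp60d.dll
    //   VS.NET 2003:  cl /MDd runtime_demo.cpp  -> needs msvcr71d.dll / msvcp71d.dll
    //   Either:       cl /MTd runtime_demo.cpp  -> CRT linked statically, no CRT DLL
    //
    // So a project converted to and built with VS.NET 2003 will require the
    // 7.1 debug DLLs even though the code itself never changed.
    #include <iostream>
    #include <string>

    int main()
    {
        std::string msg = "linked against the C++ runtime of the compiler that built me";
        std::cout << msg << std::endl;
        return 0;
    }

In other words, to get back to the VC6 runtime the project has to be rebuilt with the VC6 toolchain (or with the static /MT or /MTd runtime); editing DLL references alone won't do it.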

Related

Static link sqlite in Lazarus

I am building an application with Lazarus where I use a sqlite database to store thousands of records. Right now I am linking to the sqlite library dynamically via the sqlite3.dll.
Is it possible to link to it statically? Where can I find the Lazarus compatible lib file to do that?
Note:
I only started using Lazarus and Free Pascal a month ago, so something that might look very obvious to you might not be for me. So bear with me a bit.
Cheers
Actual static linking is difficult since the TSQLite3Connection component is inherently designed to actively load the SQLite3 DLL. In other words, it's not linking against the library when you compile the program, the component is coded to dynamically load the DLL at run time.
If you are looking to have a totally self-contained program, then you can accomplish this in two different ways:
1. Create a new TSQLite3Connection component that links statically against sqlite3 instead of loading the DLL dynamically.
2. Include the sqlite3.dll as a resource in your program and have your program automatically deploy it before it runs.
Solution #1 is not trivial and not for the faint of heart. I've done it, and I intended to include a link to the component, but the result isn't stable. The problem is that you have to compile a static version of sqlite3, which isn't a real problem, but you have to do it with something like gcc under MinGW and that introduces issues. Compiling with gcc under MinGW means you have to then link in libgcc.a, and because FreePascal's internal linker doesn't know how to interpret stdcall symbols properly, you also have to link against MinGW's libkernel32.a, and libmsvcrt.a. The result just isn't stable. Crashes galore.
Solution #2 should be fairly easy, but the Lazarus maintainers make it a little hard. The part where you store the DLL inside the executable as a resource is easy enough to do, and so is writing it out as a temp file. The problem is that you can't tell the TSQLite3Connection component where to find it afterwards: it looks in the executable's folder or in system folders, neither of which can necessarily be written to by the executable. The only place you can guarantee your program will be able to write to is a temp folder. So what I did was create a new version of the TSQLite3Connection component, called TSQLite3DynConnection, to which you can dynamically specify where the DLL is. I made a published property called ClientLibrary where you can specify the location of the DLL (it doesn't have to end in .dll, so you can use system temp filename generation routines). You can get this component at: http://icculus.org/~kfitzner/misc/sqlite3dyndll.zip. It will compile against Lazarus 1.6.2 / FP 3.0.0, or Lazarus 1.0.6 / FP 2.6.0, which are the two versions I use.
I'll update this answer if I can get the statically linked version stable.
2 Dec 2016 update: I managed to get a static version stable.

Determine Version of .NET Software Running

This may be a stupid question and/or a futile effort -- you've been warned...
I have an ASP.NET application (with the VB parts compiled to a DLL). This application has been around a while, and the person who wrote it apparently messed up the old source code repository system. He is no longer around, and I'm not clear on whether the source code I was given was a rewrite or an older version (or, by some strange luck, the actual version of the website that is running).
Given that part of this website is running as a DLL, what is the best way I can go about determining whether the version of the source code I have matches what is running? I'm unable to set up an IIS server to throw this on (licensing/server cost/time/etc.).
Is there a better way than compiling the project and then finding some disassembler and doing a comparison?
Is there a better way than compiling the project and then finding some disassembler and doing a comparison?
That's what I've done in the past in your situation.
1. Open each compiled assembly using ILSpy, and use the option "File / Save Code" to generate source files.
2. Build the source code from your source code repository, and use ILSpy to generate source files.
3. Compare the results of 1 and 2.
Obviously this won't give you the whole picture - you'll also need to compare aspx files, config files, ..., but it's the only approach I know.

What should an experienced Unix programmer be aware of when using Microsoft tools?

I come from the UNIX world: I'm quite familiar with Linux, Solaris, Cygwin, and MinGW development. Recently I ported one of my big projects (cppcms) to support MSVC, including building static and dynamic libraries with CMake.
And I keep running into absolutely weird issues:
- I had CMake build issues because Windows programming lacks a naming convention for import and static libraries.
- Now I've discovered that I should use different builds of ICU (debug/release) according to the actual build I do (Debug/RelWithDebInfo should use the debug ICU, Release the release ICU), and so I have to change the conventions for searching libraries according to debug/release mode, but only under MSVC. Otherwise the application just would not start, giving an error about a missing DLL. I don't have any such issues under MinGW or Cygwin with GCC, OpenSolaris with Sun Studio, or Linux with GCC or Intel compilers.
- And I still have numerous weird issues and weird bugs and very strange behavior -- even some trivial things do not work under MSVC builds, when everything works absolutely fine under Solaris/Linux/Cygwin/MinGW using GCC from 3.4 up to 4.4, Sun Studio, and the Intel compilers. But not under MSVC.
To be honest, I have no idea how to deal with the last one, because to me it looks more like an environment issue.
I know that the question is not really well defined. I think I'm quite an experienced developer and I know how to write portable and good C++ code, but using the Microsoft native tools drives me crazy with issues I just don't know how to solve.
Question: What should an experienced Unix programmer, with a quite good grounding in the Win32 API, know when starting to use genuine Microsoft tools?
P.S.: Can someone explain why "Release With Debug Info" requires the Debug version of the MSVC runtime? And why do two versions of the runtime exist at all?
P.P.S.: Please note I don't have issues with the Win32 API; in fact, the Windows GCC build works absolutely fine.
Clarifications:
I'm looking for pitfalls that a programmer coming from the Unix world may fall into.
For example, when moving from Linux to Solaris: make sure you compile code with -mt or -pthreads when building multi-threaded programs; linking with -lpthread is not enough.
P.S.: Can someone explain why "Release With Debug Info" requires the Debug version of the MSVC runtime?
It doesn't.
And why do two versions of the runtime exist at all?
Because the debug version does more error checking.
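For example (a sketch of my own, not from the question): the debug CRT guards heap blocks, fills freed memory with known patterns, and can report leaks at process exit, none of which the release runtime does. A minimal illustration using the CRT debug-heap API from <crtdbg.h>:

    // crtdbg_demo.cpp -- checks that only exist in the debug CRT.
    // Build with:  cl /MDd crtdbg_demo.cpp   (debug runtime: checks enabled)
    //      versus  cl /MD  crtdbg_demo.cpp   (release runtime: no such checks)
    #include <crtdbg.h>
    #include <cstdlib>

    int main()
    {
    #ifdef _DEBUG
        // Check the heap on every allocation/free and dump any blocks
        // that are still allocated when the process exits.
        _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF |
                       _CRTDBG_CHECK_ALWAYS_DF |
                       _CRTDBG_LEAK_CHECK_DF);
    #endif

        int* block = static_cast<int*>(std::malloc(10 * sizeof(int)));
        block[0] = 42;   // never freed: the debug CRT reports the leak at exit,
                         // the release CRT says nothing
        return 0;
    }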
And I still have numerous weird issues and weird bugs and very strange behavior -- even some trivial things do not work under MSVC builds,
* What am I doing wrong?
Not telling us what "weird issues and weird bugs and very strange behavior" you get.
* Where should I start?
By telling us the specific errors and problems you encounter.
* What do I miss?
Reading the documentation and learning the tools.
If your question is "What do I read to become a good Windows programmer?" then my answer is: Everything from Jeff Richter, as a start.
There is no magic bullet which will automatically make you an experienced Windows developer. Windows is a very different land compared to Unix. There are lots of quirks, weird behavior, and stuff which is just plain different. The only way to get out with your sanity intact is to tackle the transition one small problem at a time. Concentrate on a specific problem and try to understand the problem. Don't just "get it to work", but really understand what is happening. A good book about Windows programming will help.
There are huge amounts of Windows knowledge and experience accumulated in the SO community, but the only way to access it is to ask concrete questions about specific problems.
The release and debug versions of the runtime DLLs use different ways of allocating memory; that is why it is not advisable to mix release and debug builds. If you allocate something in a debug-mode DLL and pass it back to an application that was compiled in release mode, you may get into trouble.
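A sketch of that classic failure mode (the names here are made up, purely for illustration): memory allocated by one module must be released by that same module, because each CRT has its own heap. The usual convention is for the DLL to export a matching free function.

    // heap_mixing.cpp -- why raw pointers should not cross a debug/release
    // module boundary. All names are illustrative.
    #include <cstddef>

    // ---- DLL side (imagine this built against the debug CRT) ----
    extern "C" __declspec(dllexport) char* make_buffer(std::size_t n)
    {
        return new char[n];          // comes from the DLL's CRT heap
    }

    extern "C" __declspec(dllexport) void free_buffer(char* p)
    {
        delete[] p;                  // returned to the same heap it came from
    }

    // ---- EXE side (imagine this built against the release CRT) ----
    void caller()
    {
        char* buf = make_buffer(128);

        // WRONG: would free the block with the EXE's CRT heap, which is not
        // the heap the debug DLL allocated it from -- heap corruption.
        // delete[] buf;

        // RIGHT: give the pointer back to the module that allocated it.
        free_buffer(buf);
    }

    int main()
    {
        caller();
        return 0;
    }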
In the case of your naming issues, you may want to have different directories where you place your static libraries / DLLs. You can do this in Visual Studio by using the Configuration Manager; I'm not sure how it is under the Express edition.
I think you need to try and actually understand the new toolset rather than just try and squish it into your current understanding of your existing tools. For that, the best way, IMHO, is for you to try and start to use Visual Studio as Microsoft intended and then once you can build a simple project in the IDE you can move to building it using your preferred make system but do so with an understanding of how the IDE is using its make system to set things up for that build (which WILL work).
So, for example, for part 1 of your question you want to create a simple static library project and a simple DLL project and look at the linker options tabs. Jump to the 'Command Line' view and you'll see that a DLL uses the /OUT linker option to set the name and location of the DLL file and the /IMPLIB linker option to set the name and location of the import library. With a static library only the /OUT option is used, and it indicates the name of the static lib. It's true that if you're building a static lib and a DLL from the same source, and you have the /IMPLIB for the DLL set to MyCrossPlatformCode.lib and /OUT set to MyCrossPlatformCode.dll, then you may have problems if you also build a static lib with an /OUT switch of MyCrossPlatformCode.lib... Simply don't do that; either build the static libs to a different output directory (which is what OpenSSL does), or, better (IMHO), mangle the names somewhat so that you have MyCrossPlatformCode.lib/.dll and MyCrossPlatformCode_static.lib (which is what STLPort does).
Note that you might also want to mangle in (or account for) building with different versions of the Microsoft tool chain (so you might end up with stlport_vc8_x64d_static.5.1, perhaps).
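One way to make such a naming scheme painless for users of the library is an auto-link header, which is roughly the approach STLPort and Boost take; a hedged sketch, with made-up macro and library names:

    // mylib_autolink.h -- sketch of an MSVC auto-link header.
    // Macro and library names are illustrative only.
    #pragma once

    #if defined(_MSC_VER) && !defined(MYLIB_NO_AUTOLINK)
    #  if defined(MYLIB_STATIC)          // caller wants the static variant
    #    if defined(_DEBUG)
    #      pragma comment(lib, "MyCrossPlatformCode_staticd.lib")
    #    else
    #      pragma comment(lib, "MyCrossPlatformCode_static.lib")
    #    endif
    #  else                              // default: the DLL import library
    #    if defined(_DEBUG)
    #      pragma comment(lib, "MyCrossPlatformCoded.lib")
    #    else
    #      pragma comment(lib, "MyCrossPlatformCode.lib")
    #    endif
    #  endif
    #endif

Any source file that includes such a header then pulls in the right .lib for its own configuration, so debug and release builds can't pick up each other's libraries by accident.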
An alternative approach, if you really can't face the thought of understanding your toolset, is that you could take a look at some of the popular open source systems that build quite fine on Windows and Unix systems; OpenSSL and STLPort for a start, perhaps.

Automatic BizTalk Versioning in My Build Process

In all of my other .NET apps, my build process (a mixture of NAnt and custom tasks) automatically updates the [AssemblyVersionAttribute] in AssemblyInfo.cs with the current build number before the call to msbuild, stamping the build number into the version number.
I'm now working on my first BizTalk project and I'd like to do the same thing with the version numbers of the BizTalk assemblies, but I've run into trouble!
First of all, the assembly version numbers are stored in the btproj files, so I did some googling and found www.codeplex.com/biztalk, which looked like the answer to my problem, but there is a deeper problem!
I have a project for my schemas and another for my pipelines; the pipelines project references my schemas project, as I have flat file dis/assemblers. The problem comes when I update the version numbers: updating them, even from within Visual Studio, does not update the pipeline components' references to the schemas.
So if I update all the version numbers manually in the VS IDE from 1.0.0.0 to 1.1.0.0, the build fails, as the pipeline components' flat file dis/assemblers still reference the old 1.0.0.0 version of the schemas! They don't automatically update!
Is this really a manual process of updating the version numbers of the BizTalk projects in the property pages, then building the projects and manually updating the references to them in the properties of all the pipeline components that reference them?
This means that I can't have my build process control the build number part of my version numbers!
Or is there a better method of managing the version numbers of the BizTalk assemblies?
I'm sorry to disappoint you, but I've been down the exact same road and I had to give up. I guess it could be possible to achieve, but it would require a lot of changes to both the binding files and other XML files (as you mentioned, and even more if you have published services etc.).
Maybe it could be possible to wrap all these necessary changes in a build step (an MSBuild step or similar in other build frameworks) - that would be useful!
Developer- :)
We had a similar problem, and we ended up developing a small utility which would change the version number in all the projects, i.e. *.csproj (AssemblyInfo.cs) and *.btproj, accordingly. Apart from this, it would open and modify the *.btp files with the new version of the schemas. In a nutshell, all you have to do is configure this utility in your VS.NET Tools menu and execute it.
I guess it's not very difficult to develop such a utility in any .NET language.
Caveat: Do not forget to save the files after updates with the same encoding as they were originally.
Cheers!
Gutted, thought that might be the case. Maybe BizTalk 2009 projects will play more nicely when updating references when changing version numbers.
I started to go through and automate it manually, and when I realised what needed to be done, I took a biiig step back when I realised just how many places I'd have to modify to get it working. Thank god for Undo Checkout.
I do have a standard C# class library included in my project (various helper functions), which I am able to update the version number of during my build process, so I'm basically using that one assembly to version the whole application. If anyone wants to know what version is in any environment, check out the version number of that one assembly.
Not ideal, but it's working.
We've done this successfully on our project - I'll see if I can get the developer of the tool to post details...
This problem arises when you perform an integration build against the latest versions of your dependent components referenced as file references (the schemas here).
Keep in mind that upgrading the assembly version must always be performed manually; that way you are always in charge of changes to assembly versions.
A possible solution to the build-break issue is to file-reference a specific version of a dependent component's build, rather than the latest version, and to use a subst drive and a copy script to get the latest component builds.
For example:
SchemaA, assembly version 1.0.0.0
PipelineA (with pipelinecomponent XMLValidator for example), assembly version 1.0.0.0
PipelineA has a file reference to a subst drive (say the R drive, which maps to a workspace D:\MyComponents) and to version 1.0.0.0 of SchemaA, as follows:
R:\SchemaA\1.0.0.0\SchemaA.dll.
The copy script copies the build output of SchemaA locally to your R drive.
When SchemaA updates to version 1.1.0.0 you don't have any issues, because you still use version 1.0.0.0 and YOU have the choice to use the 1.1.0.0 version of your schema. When you want to upgrade, you alter your copy script and replace the file reference with R:\SchemaA\1.1.0.0\SchemaA.dll.

How do you handle versioning on a Web Application?

What are the strategies for versioning of a web application/ website?
I notice that here in the Beta there is an svn revision number in the footer and that's ideal for an application that uses svn over one repository. But what if you use externals or a different source control application that versions separate files?
It seems easy for a Desktop app, but I can't seem to find a suitable way of versioning for an asp.net web application.
NB I'm not sure that I have been totally clear with my question.
What I want to know is how to build and auto-increment a version number for an ASP.NET application.
I'm not interested in how to link it with svn.
I think what you are looking for is something like this: How to auto-increment assembly version using a custom MSBuild task. It's a little old but I think it will work.
For my big apps I just use an incrementing version number id (1.0, 1.1, ...) that I store in a comment in the main file (usually index.php).
For just websites I usually just have a revision number (1,2,3,...).
I have a tendency to stick with basic integers at first (1,2,3), moving onto rational numbers (2.1, 3.13) when things get bigger...
Tried using fruit at one point; that works well for a small office. Oh, the 'banana' release? Looks over in the corner... "yeah, that's getting pretty old now..."
Unfortunately, confusion started to set in when the development team grew: is it an Orange, or a Mandarin, or a Tangelo? It looks OK. What do you mean, "rotten on the inside"?
... but in all honesty: set up a separate repository as a master; development goes on in various repositories. For every scheduled release everything is checked into the master repository, so that you can quickly roll back when something goes wrong.
(I'm assuming dev/test/production are all separate servers, and dev is never allowed to touch production or the master repository....)
I maintain a system of web applications with various components that live in separate SVN repos. To be able to version track the system as a whole, I have another SVN repo which contains all other repos as external references. It also contains install / setup script(s) to deploy the whole thing. With that setup, the SVN revision number of the "metarepository" could possibly be used for versioning the complete system.
In another case, I include the SVN revision via SVN keywords in a class file that serves no other purpose (to avoid the risk of keyword substitution breaking my code). The class in that file contains a string variable that is manipulated by SVN and parsed by a class method.
An inconvenience with both approaches is that the revision number is not automatically updated by changes in the externals (approach 1) or the rest of the code (approach 2).
During internal development, I'm using milestone numbers (M1, M2, M3...). After release, I'll probably just update dates ("the January 2009 update").
