I have to create a dynamically linked library on z/OS. What options need to be passed to the compiler?
Also, how can I check whether a library on z/OS is dynamically linked against (dependent on) other libraries?
We have ldd on Linux, which shows this linkage. Is there an 'ldd' equivalent in z/OS land?
You don't say it directly, but I assume you mean a C/C++ DLL. You can do shared libraries in other languages as well (even assembler), but the steps would be different.
First, you need to decide what you want to export. A lot of the IBM examples use the compiler EXPORTALL directive, but be aware this can lead to very slow executables, depending on your coding style. If you don't do EXPORTALL, you'll need #pragma export for anything (code or data) you want to export. Don't forget you can export data (variables) as well as executable functions...sometimes you'll need this to share data with DLL functions.
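For example, a minimal sketch (the names here are made up for illustration) of exporting both a function and a variable:

    /* mydll.c -- a sketch, not a complete example */
    #pragma export(add_counter)      /* export an executable function */
    #pragma export(shared_counter)   /* data can be exported as well  */

    int shared_counter = 0;

    int add_counter(int n)
    {
        shared_counter += n;
        return shared_counter;
    }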
Then, you need to set your compile options on both client (caller) and DLL to use the DLL linkage...this is the -Wc,DLL compile option and when enabled, it generates extra logic in your program to load and manage the DLL. It's a good idea to also include #pragma csect for your exported functions if you think you'll ever have the need to update the DLL without replacing it entirely.
When you link your DLL, be sure to specify the -Wl,DLL option (there are lots of ways...this part is different if you link in batch - I'm assuming you're building in a make file of some sort). The link will generate the actual DLL, as well as a "side deck" containing "IMPORT" statements for all of your exported functions. You'll need these to link any of the client-side programs that you expect to call the DLL. For example, if your imports are in a file called AAA.x, c89 -Wc,DLL myapp.c AAA.x would compile the calling code, with awareness that functions in AAA.x are off in some sort of DLL.
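As a rough sketch of the two steps under UNIX System Services (file names are made up, and the exact flags depend on your setup):

    # build the DLL; the link also writes the side deck (e.g. mydll.x)
    c89 -Wc,DLL -Wl,DLL -o mydll mydll.c

    # build a caller, feeding it the side deck so imports are resolved
    c89 -Wc,DLL -o myapp myapp.c mydll.x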
To your point about DLLs calling other DLLs, don't forget that a DLL can both "serve" and "consume" functions...by including the side deck for functions in other DLLs, you can have a DLL that provides some functions while calling other DLLs to access others.
The actual DLL itself can be in several places depending on the nature of your app. If you're UNIX Services friendly, it's just an executable in LIBPATH. It can also be STEPLIB, LNKLST, LPA and so forth.
If you need to, you can access your DLLs explicitly at runtime using dlopen(), dlsym() and so forth. Generally, this lets you control exactly which DLL you're using (sometimes handy if the user can provide one himself), and it gives you what amounts to function pointers that are resolved within the DLL.
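A small sketch of that explicit route (library and symbol names are illustrative):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("mydll", RTLD_NOW);   /* or a full path */
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* dlsym gives back what amounts to a function pointer */
        int (*add_counter)(int) = (int (*)(int)) dlsym(handle, "add_counter");
        if (add_counter != NULL)
            printf("result: %d\n", add_counter(5));

        dlclose(handle);
        return 0;
    }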
There are some other basic things to consider when linking, such as ensuring that your code is reentrant. Most of these are spelled out in the IBM documentation, and if you build with things like "c89" (or equivalent), the correct options are usually set up for you automatically (in fact, to get a good idea of what's going on, turn on the verbose output and see all the parameters for yourself).
If you need to build up a cross reference of what calls what, the UNIX Services "nm" command can give you that information. If you produce detailed link-edit listings, all the data is in there too when you're building your DLLs.
Good luck!
Qt's resource system lets you build an executable that has resources (.qml's, images...) embedded inside it. I'd like to load the files from the filesystem instead, and ship them alongside my executable. Is this a technique supported by Qt? Any gotchas? Any advantages to one way or the other?
Qt doesn't limit you in what you can do here. It's your choice. Qt's resource system is there if you want it, it's not forced down your throat. Not using it doesn't make you automatically wrong.
If you want to deploy files along with your application, go for it - if it makes sense for your particular application.
My personal preference for small (<0.25GB) applications is for a nice monolithic portable executable on Windows, with everything inside, that you don't have to install if you don't wish - mimicking how an app bundle would be on OS X.
The portability helps, as does the slightly stronger locality of reference: most filesystems will attempt to keep a file's blocks together a bit harder than they do for files that merely are in the same folder.
If there's any utility in power users tweaking the contents of the deployed files, then certainly using the files over using resources has advantages. You could also use the resource system as a fallback for files that are missing - that way a user could provide a replacement file optionally, if it's something you could use.
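For instance, a tiny sketch of that fallback idea (the file name is illustrative):

    #include <QCoreApplication>
    #include <QFile>
    #include <QString>

    // Prefer a file the user shipped next to the executable; fall back to
    // the copy embedded via the resource system if it is missing.
    QString resolveThemePath()
    {
        QString onDisk = QCoreApplication::applicationDirPath() + "/theme.qml";
        return QFile::exists(onDisk) ? onDisk : QStringLiteral(":/theme.qml");
    }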
To make it short: if it is applicable to your application and 'business model', it is fine.
Besides the filesystem approach being supported by Qt (using QFile, for example), Qt's resource system has some advantages:
compression (ZIP)
simple usage in the application
no care about missing files (typically)
If adding all resources to the executable will not work for you and you want to keep them separate, look at Qt's option of separate (binary) resource files.
I am building an application with Lazarus where I use a sqlite database to store thousands of records. Right now I am linking to the sqlite library dynamically via the sqlite3.dll.
Is it possible to link to it statically? Where can I find the Lazarus compatible lib file to do that?
Note:
I only started using Lazarus and Free Pascal a month ago so something that might look very obvious to one, might not be for me. So bear with me a bit.
Cheers
Actual static linking is difficult since the TSQLite3Connection component is inherently designed to actively load the SQLite3 DLL. In other words, it's not linking against the library when you compile the program, the component is coded to dynamically load the DLL at run time.
If you are looking to have a totally self contained program, then you can accomplish this two different ways.
Create a new TSQLite3Connection component that links statically against sqlite3 instead of loading the DLL dynamically.
Include the sqlite3.dll as a resource in your program and have your program automatically deploy it before it runs.
Solution #1 is not trivial and not for the faint of heart. I've done it, and I intended to include a link to the component, but the result isn't stable. The problem is that you have to compile a static version of sqlite3, which isn't a real problem, but you have to do it with something like gcc under MinGW and that introduces issues. Compiling with gcc under MinGW means you have to then link in libgcc.a, and because FreePascal's internal linker doesn't know how to interpret stdcall symbols properly, you also have to link against MinGW's libkernel32.a, and libmsvcrt.a. The result just isn't stable. Crashes galore.
Solution #2 should be fairly easy, but the Lazarus maintainers make it a little hard. The part where you store the dll inside the executable as a resource is easy enough to do. And so is writing it out as a temp file. The problem is that you can't tell the TSQLite3Connection component where to find it afterwards. It looks in the executable's folder, or in system folders, neither of which can necessarily be written to by the executable. The only place you can guarantee that your program will be able to write to is a temp folder. So what I did is create a new version of the TSQLite3Connection component called TSQLite3DynConnection, meaning you can dynamically specify where the DLL is. I made a published property called ClientLibrary where you can specify the location of the dll (it doesn't have to end in .dll, so you can use system temp filename generation routines). You can get this component at: http://icculus.org/~kfitzner/misc/sqlite3dyndll.zip. It will compile against Lazarus 1.6.2 / FP 3.0.0, or Lazarus 1.0.6 / FP 2.6.0, which are the two versions I use.
I'll update this answer if I can get the statically linked version stable.
2 Dec 2016 update: I managed to get a static version stable.
I'm frustrated by the lack of flexibility in Visual Studio projects/solutions, but I realized that now that it uses MSBUILD it might be quite powerful, just not exposed through the IDE. So I took a look at the MSBUILD docs and don't know where to start! I wish there were a Nutshell book for it. Is there any good tutorial someone could point me to?
More specifically, here are the kinds of things I want to do:
Run a utility pre-processor to generate .CPP and .H files, which are then used by a regular C++ project. There are multiple inputs (whose dependencies need to be tracked; in particular, the step should rerun if a normal .h file it uses has changed) and multiple outputs (at least one .cpp and one .h file) that are used as files in another project.
FWIW, the most complex case involves using Qt in a "normal" C++ project that can be built using VS Express 2010 or MSBUILD directly from a script on a server. Since that is a common library, there might be some guides or whatever to help? Note that a VS plug-in is not useful for the building stage, but could be used to initially generate project files that then rely only on MSBUILD and stuff included with the source code.
Would somebody please point me in the right direction?
--John
It gets worse from there, but that's my first goal.
I found the kind of information I was looking for in a book MSBuild Trickery: 99 Ways to Bend the Build Engine to Your Will by Brian Kretzler.
In the first 18 pages I found a few key pieces of information that, along with the online documentation I'd already gone through, clear things up enough to start tackling my project. Details of interest include the order in which MSBuild reads and processes the parts of the file, quick points on when wildcards in items are expanded, how to handle generated files, and how to see what's happening in practical cases or even step through in the debugger.
FWIW, I managed to attack my problem without using the murky ".targets"/rules files that I have yet to understand, using only better documented and better exemplified features. In particular, a Target with wildcard items doesn't care that the file name extension isn't mentioned in any ".targets" file; it is simple enough to copy from an example, and it still lets the files be seen in the IDE project and added to the list using the IDE; again, the FileExtension there just works.
What are best practices for writing large software projects in OCaml?
How do you structure your projects?
What features of OCaml should and should not be used to simplify code management? Exceptions? First-class modules? GADTs? Object types?
Build system? Testing framework? Library stack?
I found great recommendations for Haskell, and I think it would be good to have something similar for OCaml.
I am going to answer for a medium-sized project in the conditions that I am familiar with, that is between 100K and 1M lines of source code and up to 10 developers. This is what we are using now, for a project started two months ago in August 2013.
Build system and code organization:
one source-able shell script defines PATH and other variables for our project
one .ocamlinit file at the root of our project loads a bunch of libraries when starting a toplevel session
omake, which is fast (with -j option for parallel builds); but we avoid making crazy custom omake plugins
one root Makefile contains all the essential targets (setup, build, test, clean, and more)
one level of subdirectories, not two
most subdirectories build into an OCaml library
some subdirectories contain other things (setup, scripts, etc.)
OCAMLPATH contains the root of the project; each library subdirectory produces a META file, making all OCaml parts of the projects accessible from the toplevel using #require.
only one OCaml executable is built for the whole project (saves a lot of linking time; still not sure why)
libraries are installed via a setup script using opam
local opam packages are made for software that is not in the official opam repository
we use an opam switch which is an alias named after our project, avoiding conflicts with other projects on the same machine
Source-code editing:
emacs with opam packages ocp-indent and ocp-index
Source control and management:
we use git and github
all new code is peer-reviewed via github pull requests
tarballs for non-opam non-github libraries are stored in a separate git repository (that can be blown away if history gets too big)
bleeding-edge libraries existing on github are forked into our github account and installed via our own local opam package
Use of OCaml:
OCaml will not compensate for bad programming practices; teaching good taste is beyond the scope of this answer. http://ocaml.org/learn/tutorials/guidelines.html is a good starting point.
OCaml 4.01.0 makes it much easier than before to reuse record field labels and variant constructors (i.e. type t1 = {x:int} type t2 = {x:int;y:int} let t1_of_t2 ({x}:t2) : t1 = {x} now works)
we try to not use camlp4 syntax extensions in our own code
we do not use classes and objects unless mandated by some external library
in theory since OCaml 4.01.0 we should prefer classic variants over polymorphic variants
we use exceptions to indicate errors and let them go through happily until our main server loop catches them and interprets them as "internal error" (default), "bad request", or something else
exceptions such as Exit or Not_found can be used locally when it makes sense, but in module interfaces we prefer to use options.
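A minimal sketch of that preference (the names are illustrative): the exception stays internal and the interface exposes an option.

    module Cache : sig
      val find : string -> int option
    end = struct
      let table : (string, int) Hashtbl.t = Hashtbl.create 16

      let find key =
        try Some (Hashtbl.find table key)   (* Not_found stays internal *)
        with Not_found -> None
    end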
Libraries, protocols, frameworks:
we use Batteries for all commodity functions that are missing from OCaml's standard library; for the rest we have a "util" library
we use Lwt for asynchronous programming, without the syntax extensions; the bind operator (>>=) is the only operator we use (if you have to know, we do reluctantly use camlp4 preprocessing for better exception tracking on bind points); see the short sketch after this list
we use HTTP and JSON to communicate with 3rd-party software and we expect every modern service to provide such APIs
for serving HTTP, we run our own SCGI server (ocaml-scgi) behind nginx
as an HTTP client we use Cohttp
for JSON serialization we use atdgen
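The Lwt sketch referred to above, with plain (>>=) and no syntax extension (the channel handling is illustrative):

    let ( >>= ) = Lwt.bind

    let greet_client in_chan out_chan =
      Lwt_io.read_line in_chan >>= fun name ->
      Lwt_io.write_line out_chan ("hello, " ^ name) >>= fun () ->
      Lwt_io.flush out_chan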
"Cloud" services:
we use quite a lot of them as they are usually cheap, easy to interact with, and solve scalability and maintenance problems for us.
Testing:
we have one make/omake target for fast tests and one for slow tests
fast tests are unit tests; each module may provide a "test" function; a test.ml file runs the list of tests (a small sketch follows this list)
slow tests are those that involve running multiple services; these are crafted specifically for our project, but they cover as much of a production service as possible. Everything runs locally either on Linux or MacOS, except for cloud services, for which we find ways not to interfere with production.
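The sketch of the fast-test convention referred to above (module and file names are made up):

    (* parser.ml *)
    let parse s = int_of_string (String.trim s)
    let test () = assert (parse " 42 " = 42)

    (* test.ml runs the list of fast tests *)
    let tests = [ ("parser", Parser.test) ]
    let () =
      List.iter (fun (name, run) -> run (); print_endline (name ^ ": ok")) tests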
Setting this all up is quite a bit of work, especially for someone not familiar with OCaml. There is no framework taking care of all that yet, but at least you get the choice of the tools.
OASIS
To add to Pavel's answer:
Disclaimer: I am the author of OASIS.
OASIS also has oasis2opam, which can help to create an OPAM package quickly, and oasis2debian to create Debian packages. This is extremely useful if you want to create a 'release' target that automates most of the tasks needed to upload a package.
OASIS also ships with a script called oasis-dist.ml that automatically creates tarballs for upload.
Have a look at all of this at https://github.com/ocaml.org.
Testing
I use OUnit to do all my tests. This is simple and pretty efficient if you are used to xUnit testing.
Source control/management
Disclaimer: I am the owner/maintainer of forge.ocamlcore.org (aka forge.o.o)
If you want to use git, I recommend using GitHub. It is really efficient for review.
If you use darcs or subversion, you can create an account on forge.o.o.
In both cases, having a public mailing list where you send all commit notifications is a must-have, so that everyone can see and review them. You can use either Google Groups or a mailing list on forge.o.o.
I recommend having a nice web page (GitHub or forge.o.o) with OCamldoc documentation built every time you commit. If you have a huge code base, this will help you to use the OCamldoc-generated documentation right from the beginning (and to fix it quickly).
I recommend creating tarballs when you reach a stable stage. Don't just rely on checking out the latest git/svn version. This tip has saved me hours of work in the past. As Martin said, store all your tarballs in a central place (a git repository is a good idea for that).
This one probably doesn't answer your question completely, but here is my experience regarding build environment:
I really appreciate OASIS. It has a nice set of features, helping not only to build the project, but also to write documentation and to support a test environment.
Build system
OASIS generates a setup.ml file from the specification (the _oasis file), which basically works as a build script. It accepts -configure, -build, -test, and -distclean flags. I got quite used to these while working with various GNU and other projects that usually use Makefiles, and I find it convenient that all of them are available here automatically (a short sketch follows below).
Makefiles. Instead of generating setup.ml, it is also possible to generate Makefile with all options described above available.
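The setup.ml workflow mentioned above roughly looks like this when run from the project root:

    ocaml setup.ml -configure
    ocaml setup.ml -build
    ocaml setup.ml -test
    ocaml setup.ml -distclean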
Structure
Usually a project of mine that is built by OASIS has at least these directories: src, _build, scripts, and tests.
In the src directory, all source files are stored in one place: source (.ml) and interface (.mli) files are kept together. If the project is very large, it may be worth introducing more subdirectories.
The _build directory is under the control of the OASIS build system. It stores both source and object files, and I like that build output is not mixed in with the source files, so I can easily delete the directory if something goes wrong.
I store multiple shell scripts in the scripts directory. Some of them are for test execution and interface file generation.
I store all input and output files for tests in a separate directory.
Interfaces/Documentation
The use of interface files (.mli) has both advantages and drawbacks for me. They really help to find type errors, but you also have to edit them when making changes or improvements to your code; sometimes forgetting this causes nasty errors.
But the main reason why I like interface files is documentation. I use ocamldoc to generate HTML pages with documentation automatically (OASIS supports this feature with the -doc flag). In my opinion it is enough to write comments describing each function in the interface, without inserting comments in the middle of the code. In OCaml, functions are usually short and concise, and if there is a need for extra comments inside one, maybe it is better to split the function.
Also be aware of the -i flag for ocamlc. The compiler can automatically generate an interface file for a module.
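For example (the file name is illustrative), you can redirect its output to get a starting point for an interface file:

    ocamlc -i cache.ml > cache.mli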
Tests
I didn't find a fully satisfying solution for supporting tests (I would like to have some ocamltest application), so I am using my own scripts for executing and verifying use cases. Fortunately, OASIS supports executing custom commands when setup.ml is run with the -test flag.
I haven't used OASIS for very long, so if anyone knows of other cool features, I would also like to hear about them.
Also, if you are not aware of OPAM, it is definitely worth looking at. Without it, installing and managing new packages is a nightmare.
Why does every source package that uses a makefile come with a ./configure script, and what does it do? As far as I can tell, it actually generates the makefile?
Is there anything that can't be done in the makefile?
configure is usually a result of the 'autoconf' system. It can generate headers, makefiles, really anything. 'Usually,' since some are hand-crafted.
The reason for these things is to allow source code to be compiled in disparate environments. There are many variations on Unix / Linux, with slightly different system headers and libraries. configure scripts allow code to auto-adapt.
The configure step is a sort of meta build. It generates the makefile that will work on the specific hardware / distribution you're running. For instance it determines the name of the C or C++ compiler and embeds that in the makefile.
The configure step will also frequently take a set of parameters, the values of which may determine what libraries need to be linked against. For instance, if you compile Apache HTTP with SSL enabled, it needs to link against more shared libraries than if you don't. Since linking is handled by the makefile, you need an extra step to create a custom makefile (rather than requiring the make command to take dozens or hundreds of options).
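A typical parameterized run looks something like this (the exact options depend on the package):

    ./configure --prefix=/usr/local --enable-ssl
    make
    make install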
Everything can be done from within the makefile but some build systems were implemented otherwise.
I don't personally use configure files for my projects but I admit I mostly export Erlang & Python based projects.
I don't think of the makefile as a script, I think of it as an input to the make utility.
The configure script (as its name suggests) configures the makefile, including as you say resolving dependencies.
If only from the idea of avoiding self-modifying code, the things in the configure script don't really belong in the makefile.
The point is that autoconf, autoheader, and automake form an integrated system that makes cross-platform building on Unix relatively straightforward. The docs are really bad and there are lots of horrible gotchas, but on the other hand there are a lot of working samples.
When I first came across this stuff I thought "ha, I can do that with a nice clean makefile" and proceeded to rework the source that way. Big mistake. Learn to write and edit configure.ac and Makefile.am files; you will be happy in the end.
To answer your question, configure is good for:
checking whether function foo is available on this platform and, if so, which include and library are needed
letting the builder choose whether they want feature wizzbang included, in a nice, simple, consistent way
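As a tiny configure.ac sketch of both points (the function and feature names are made up):

    # is strlcpy available, and does cos need libm?
    AC_CHECK_FUNCS([strlcpy])
    AC_CHECK_LIB([m], [cos])

    # let the builder decide whether to include the "wizzbang" feature
    AC_ARG_ENABLE([wizzbang],
      [AS_HELP_STRING([--enable-wizzbang], [build with wizzbang support])],
      [enable_wizzbang=$enableval],
      [enable_wizzbang=no])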