I want to upload COBOL files to Nexus. These COBOL files can be given a groupId, artifactId, and version; for packaging/type I would use cobol.
Is there any harm in doing this (these files are not zipped the way jar, war, ear, and zip files are)?
At the moment we are using Nexus 2.14, but I would like to use this method in the future with Nexus 3.x or Artifactory as well.
The reason: our developers often have to release wars/ears and COBOL simultaneously, and I would like to handle them in a similar way.
Repository managers are, at their core, somewhat like hard drives. There's no harm in putting anything in them, but there can be problems at usage time, which is generally the bigger concern. Without knowing the specific usage, it's really impossible to say.
NXRM3 does have a "raw" format which is intended for "everything else" use, so as not to tangle with your actual repository formats. Documentation here: https://help.sonatype.com/display/NXRM3/Raw+Repositories+and+Maven+Sites
That being said, there are use cases where formats need to intermingle with things outside their norm, and thus it is helpful to store them side by side. The default security of an NXRM repository prevents this, but it can be lowered to accept anything (temporarily or permanently). See the "Strict Content Validation Type" field in the repository configuration. This is at your own risk =)
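For what it's worth, the upload itself can be done with the standard Maven deploy plugin; something like this (coordinates, file name, and repository URL are placeholders):

    mvn deploy:deploy-file \
        -Dfile=PAYROLL.cbl \
        -DgroupId=com.example.legacy \
        -DartifactId=payroll \
        -Dversion=1.0.0 \
        -Dpackaging=cobol \
        -Durl=http://nexus.example.com/content/repositories/releases \
        -DrepositoryId=releases

Nexus stores and serves the file under those coordinates like any other artifact; the non-zipped content is not a problem for storage itself.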
We are using Flyway to keep up-to-date many databases in our test environments with sql scripts and it works fine.
But we have a special need to also update databases with csv files.
I know Flyway offers some Java based migrations to handle more complicated updates.
But the problem is that these Java classes have the wanted version in their names, that would oblige us to recompile the class each time we want to use it.
It would be more simple if we could drop our csv files in migration directories exactly like we do with sql files.
Then some specific Java code would handle these csv files to do the right update.
So how can we extend Flyway with this specific code that would handle our csv files ?
Thanks
There is currently no support for this. It sounds like the same issue as https://github.com/flyway/flyway/issues/469
I am still not sure how to resolve this without exposing too much of Flyway's internals.
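For reference, the Java-based migration route mentioned in the question looks roughly like this (a minimal sketch against the JdbcMigration API; table and data are made up). The version lives in the class name, which is exactly the recompile-per-version problem described above:

    package db.migration;

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import org.flywaydb.core.api.migration.jdbc.JdbcMigration;

    // Applied as version 1.2, just like a V1_2__*.sql script would be
    public class V1_2__Load_reference_data implements JdbcMigration {
        public void migrate(Connection connection) throws Exception {
            // A real implementation would open and parse the CSV here
            // and batch one insert per row.
            try (PreparedStatement stmt = connection.prepareStatement(
                    "INSERT INTO country (code, name) VALUES (?, ?)")) {
                stmt.setString(1, "FR");
                stmt.setString(2, "France");
                stmt.execute();
            }
        }
    }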
I am interested in testing Flyway, and if I am not mistaken I read that it supports DB changes through both Java and SQL. I am a DBA and familiar with SQL, but not Java.
I read through the "Getting Started" page and wanted to try out the sample application available under the "Downloads tab" link; however, I couldn't find any readme document explaining the available downloads, which appeared to contain .jar files.
Q) Is there an instruction manual for newbies explaining how to put together this sample application?
Q) Can one use Flyway without knowing Java? If yes, please provide any how-to URLs/notes/documents available. If not, do you have any how-to for getting started with just enough Java to operate this tool?
Thanks Bob
I think you might find the command line tool useful:
http://flywaydb.org/documentation/commandline/
As it says on the website:
The Flyway command-line tool is meant for users who:
- do not run their applications on the JVM
- wish to migrate their database from the command line without having to install Maven
You may need to browse the source code to figure out some more details:
https://github.com/flyway/flyway
Although I think you should be able to adapt the regular documentation to the CLI option.
Try starting here:
http://flywaydb.org/getstarted/existingDatabaseSetup.html
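Once you've unzipped the command-line distribution, a first run looks something like this (URL, credentials, and paths are placeholders) - your .sql scripts go in the sql/ folder, and no Java is involved:

    # Apply all pending .sql migrations
    flyway -url=jdbc:h2:file:./target/testdb -user=sa -password= \
           -locations=filesystem:sql migrate

    # Show which migrations have been applied and which are pending
    flyway -url=jdbc:h2:file:./target/testdb -user=sa -password= info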
Having an argument with my team. We are developing an application using SQLite, and some want to add the database file to the repo (Git) and some don't. Previously, with client-server RDBMSs, there was no perceived benefit to using VCS on the DB. However, SQLite is a self-contained file with no external dependencies, so I assume that, even though it is binary, a commit of the project code plus the SQLite file will give an accurate snapshot of the state of play at that point.
I also assume that a branch and merge would work as well.
Has anyone actually done this and if so does it work?
You'd get more benefit from Git's versioning facilities if you stored a dump of the SQLite database (i.e. the SQL commands required to recreate it) rather than the database file itself. That way you could look at the history of the dump file and see tables or data being added, etc.
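For example (file names are placeholders):

    # Export the database as plain SQL text that Git can diff and merge
    sqlite3 app.db .dump > app.sql

    # Recreate a database from the committed dump
    sqlite3 app_restored.db < app.sql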
Generally speaking, it's preferable to include the full set of dependencies in a VCS repository. This makes your life a whole lot simpler.
If you're after versioning DB schema, check out Wizardby.
Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
1. Get the source and place it in a local directory, including the DLLs the project needs.
2. Get the config files and rename them as needed (we're storing them in a special subdirectory that isn't part of the actual application, and they are named according to use).
3. Build using Visual Studio.
4. Precompile using the command line, copying into what will be a "build" directory.
5. Copy to the destination.
6. Get any necessary additional resources - mostly things like documents, images, and reports associated with the project (and put them into the directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm going to copy only changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in earlier steps.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? I'll note that we're not currently using a Deployment Project; I presume that would remove some of the steps necessary in this build (like the web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time; otherwise it can feel overwhelming.
First, get your code compiling in one step using an automated build tool (e.g. NAnt or MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
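As a sketch of what that first step can look like with NAnt (the solution name and msbuild path are placeholders), checked in right next to the solution:

    <?xml version="1.0"?>
    <project name="MyApp" default="build">
      <!-- One-step build: compile the whole solution in Release mode -->
      <target name="build">
        <exec program="C:\Windows\Microsoft.NET\Framework\v2.0.50727\msbuild.exe">
          <arg value="MyApp.sln" />
          <arg value="/p:Configuration=Release" />
        </exec>
      </target>
    </project>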
Figure out how you want your automated build to be triggered, whether that is hooking it up to CruiseControl or running a nightly build using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools to make this step easier. CruiseControl is free, and TeamCity is free up to a point, past which you might have to pay depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, etc.
Hope this helps.
I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple; it is mostly a call to msbuild, and it also creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - This is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as a part of packaging)
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
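The sanity check itself is only a couple of lines of PowerShell (the machine name is a placeholder):

    # Refuse to run anywhere but the intended target machine
    if ($env:COMPUTERNAME -ne 'PROD-WEB-01') {
        throw "This package is for PROD-WEB-01, not $env:COMPUTERNAME - aborting."
    }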
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
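Put together, it looks something like this (the key is made up); entries in the external file override the checked-in defaults:

    <!-- web.config (checked in): defaults plus a pointer to local overrides -->
    <appSettings file="Local.config">
      <add key="SmtpServer" value="localhost" />
    </appSettings>

    <!-- Local.config (lives only on the production machine, never checked in) -->
    <appSettings>
      <add key="SmtpServer" value="smtp.internal.example.com" />
    </appSettings>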
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building Visual Studio solutions can be done by just specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites), you can just create a new class in .NET deriving from Task that overrides the Execute function, and then reference it from your build XML file.
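A skeleton of such a task, and the build-file reference to it, looks roughly like this (class, property, and path names are hypothetical):

    // BuildTasks/DeploySite.cs - compiled into BuildTasks.dll
    using Microsoft.Build.Framework;
    using Microsoft.Build.Utilities;

    public class DeploySite : Task
    {
        [Required]
        public string TargetDir { get; set; }

        public override bool Execute()
        {
            Log.LogMessage(MessageImportance.High, "Deploying to {0}", TargetDir);
            // ...copy the site, run smoke tests, etc...
            return true; // returning false fails the build
        }
    }

    <!-- In the main build XML file -->
    <UsingTask TaskName="DeploySite" AssemblyFile="BuildTasks.dll" />
    <Target Name="Deploy">
      <DeploySite TargetDir="\\webserver\site" />
    </Target>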
There is a pretty good introduction here:
introduction
I've only worked on a couple of .NET projects (I've done mostly Java), but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE; it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down into individual steps, because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches, so consider that as well in whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out / gets the latest "main" build script and then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control or only checking it in sporadically. If you make the build process "get" it from source control, it will force you to keep the latest and greatest build script in there.
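With SourceSafe (as in the question) that bootstrap can be as small as this (the project path and build runner are placeholders):

    rem Get the latest "main" build script, then hand off to it
    ss Get $/MyApp/Build/main.build -I-
    nant -buildfile:main.build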
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
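As a rough sketch of that self-generating arrangement (the sed pattern and names are illustrative only; recipe lines must start with a tab):

    # Pull the compile list out of the .csproj, emit it as a sub-makefile
    # with a build rule appended, then re-invoke make on the generated file.
    generated.mk: MyApp.csproj
    	sed -n 's/.*Compile Include="\([^"]*\)".*/SOURCES += \1/p' $< > $@
    	echo 'all: ; csc /out:MyApp.exe $$(SOURCES)' >> $@

    build: generated.mk
    	$(MAKE) -f generated.mk all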
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT