I have implemented a DfsRepository using jgit-2.0.0.201206130900. It works great but I want to repack it so that I only have one packfile. How do I do that via jgit?
Got this working. DfsGarbageCollector basically does the equivalent of repack -d. To get the repack -a behavior, use DfsPackCompactor:
void repack(DfsRepository repo) throws IOException {
    DfsGarbageCollector gc = new DfsGarbageCollector(repo);
    gc.pack(null);
    // update the list of packs for getPacks() below,
    // otherwise not all packs are compacted
    repo.scanForRepoChanges();
    // only compact if there are multiple pack files
    DfsPackFile[] packs = repo.getObjectDatabase().getPacks();
    if (packs.length > 1) {
        DfsPackCompactor compactor = new DfsPackCompactor(repo);
        for (DfsPackFile pack : packs) {
            compactor.add(pack);
        }
        compactor.compact(null);
    }
}
That's not quite all though.
DfsGarbageCollector creates a separate packfile for the garbage.
The easiest way I found to "delete" the garbage packfile was to return a DfsOutputStream from my DfsObjDatabase.writePackFile() implementation that simply threw away the data if the pack file's source was PackSource.UNREACHABLE_GARBAGE.
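The discard trick is just an OutputStream that swallows writes. Here is a minimal plain-java.io sketch of that idea (DiscardingOutputStream is an illustrative name, not a jgit class; in the real implementation you would return a DfsOutputStream that does the same from your writePackFile() override when the source is PackSource.UNREACHABLE_GARBAGE):

```java
import java.io.OutputStream;

// Sketch: an OutputStream that silently discards everything written to it,
// mirroring the "throw away UNREACHABLE_GARBAGE packs" trick described above.
class DiscardingOutputStream extends OutputStream {
    private long discarded = 0;

    @Override
    public void write(int b) {
        discarded++; // drop the byte, just count it
    }

    @Override
    public void write(byte[] buf, int off, int len) {
        discarded += len; // drop the whole buffer
    }

    long discardedBytes() {
        return discarded;
    }
}
```

In writePackFile() you would return the discarding stream only for the garbage pack source and delegate to the normal pack-writing stream otherwise.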
First, let me explain why I need to do this.
I have an inbound port with an EDIReceive pipeline configuration. It receives EDI X12 837I files and disassembles them into 837I messages.
One file failed with the error description below:
The following elements are not closed: ns0:X12_00501_837_I. Line 1, position 829925.
It looks like the incoming file has a structural issue that prevents the disassembler from producing the message correctly. The error itself doesn't help locate the issue, and no TA1 or 999 was generated to help us find it either.
So I created a little console application using the Pipeline Component Test Library to run this file through the EdiDisassembler pipeline component and see if I can find what causes the error.
The code is pretty straightforward:
namespace TestEDIDasm
{
    using System;
    using System.IO;
    using Microsoft.BizTalk.Edi.Pipelines;
    using Microsoft.BizTalk.Message.Interop;
    using Winterdom.BizTalk.PipelineTesting;
    using Microsoft.BizTalk.Edi.BatchMarker;

    class Program
    {
        static void Main(string[] args)
        {
            var ediDasmComp = new EdiDisassembler();
            ediDasmComp.UseIsa11AsRepetitionSeparator = true;
            ediDasmComp.XmlSchemaValidation = true;
            var batchMaker = new PartyBatchMarker();
            IBaseMessage testingMessage = MessageHelper.LoadMessage(@"c:\temp\{1C9420EB-5C54-43E5-9D9D-7297DE65B36C}_context.xml");
            ReceivePipelineWrapper testPipelineWrapper = PipelineFactory.CreateEmptyReceivePipeline();
            testPipelineWrapper.AddComponent(ediDasmComp, PipelineStage.Disassemble);
            testPipelineWrapper.AddComponent(batchMaker, PipelineStage.ResolveParty);

            var outputMessages = testPipelineWrapper.Execute(testingMessage);
            if (outputMessages.Count <= 0)
            {
                Console.WriteLine("No output message");
                Console.ReadKey();
                return;
            }

            var msg = outputMessages[0];
            StreamReader sr = new StreamReader(msg.BodyPart.Data);
            Console.WriteLine(sr.ReadToEnd());
            Console.ReadKey();
        }
    }
}
I added some breakpoints but ended up with the following error in the message context:
"X12 service schema not found"
Clearly, the EdiDisassembler component relies on some other infrastructure to do its job.
Now to my questions:
Is there any way to make the EdiDisassembler work in a testing environment?
Is there any other way to debug/trace how the disassembler component processes a file, other than the Pipeline Component Test Library?
Theoretically, sure, but you have to replicate a lot of engine context that exists during Pipeline execution. The EDI components have issues running inside Orchestrations so it's likely a pretty tall order.
Have you tried a Preserve Interchange Pipeline with the Fallback Settings? That's about as simple as you can get with the EDI Disassembler.
I've got a weird one (to me): I'm using Nexus 2.11.4-01, and another piece of software (Talend) is interfacing with it.
When Talend tries to talk to Nexus it throws an error, looks like it's trying to hit a URL of the form http://servername:8081/nexus/service/local/repositories/scratch/content which throws a 403 when browsed to with Chrome.
The Nexus logs show:
2015-09-07 15:47:30,396+0000 WARN [qtp131312334-65] admin org.sonatype.nexus.security.filter.authz.NexusTargetMappingAuthorizationFilter - Cannot translate request to Nexus repository path, expected pattern /service/local/repositories/([^/]*)/content/(.*), request: GET http://servername:8081/nexus/service/local/repositories/scratch/content
This happens for any repo that I try. Now, "scratch" should match the pattern, and the source (for Nexus 2.11.3, admittedly), which I found via some googling, suggests it should work too:
http://grepcode.com/file/repo1.maven.org/maven2/org.sonatype.nexus/nexus-core/2.11.3-01/org/sonatype/nexus/security/filter/authz/NexusTargetMappingAuthorizationFilter.java
private String getResourceStorePath(final ServletRequest request) {
    String path = WebUtils.getPathWithinApplication((HttpServletRequest) request);
    if (getPathPrefix() != null) {
        final Pattern p = getPathPrefixPattern();
        final Matcher m = p.matcher(path);
        if (m.matches()) {
            path = getPathReplacement();
            // TODO: hardcoded currently
            if (path.contains("#1")) {
                path = path.replaceAll("#1", Matcher.quoteReplacement(m.group(1)));
            }
            if (path.contains("#2")) {
                path = path.replaceAll("#2", Matcher.quoteReplacement(m.group(2)));
            }
            // and so on... this will be reworked to be dynamic
        }
        else {
            // what happens here: router requests are formed as: /KIND/ID/REPO_PATH
            // where KIND = {"repositories", "groups", ...}, ID is a repo ID, and REPO_PATH is a repository path
            // being here, means we could not even match anything of these, usually having newline in string
            // as that's the only thing the "dotSTAR" regex would not match (it would match any other character)
            log.warn(formatMessage(request, "Cannot translate request to Nexus repository path, expected pattern {}"), p);
            return null;
        }
    }
    return path;
}
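The mismatch is reproducible with plain java.util.regex: the path Talend requests ends at .../content with no trailing slash, while the filter's pattern requires a /content/ segment followed by a (possibly empty) repository path:

```java
import java.util.regex.Pattern;

public class PatternCheck {
    public static void main(String[] args) {
        // The pattern quoted in the Nexus log line above:
        Pattern p = Pattern.compile("/service/local/repositories/([^/]*)/content/(.*)");
        // The path Talend actually requests -- no trailing "/", so no match:
        System.out.println(p.matcher("/service/local/repositories/scratch/content").matches());  // false
        // With a trailing slash the pattern matches (group 2 is empty):
        System.out.println(p.matcher("/service/local/repositories/scratch/content/").matches()); // true
    }
}
```

So the filter never extracts a repository path from Talend's request, and the 403 follows.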
So my question is what am I doing wrong, what am I missing?
The solution: the version of Nexus shipped with Talend 5.6 (and that Talend is written to interface with) is quite old, and newer versions of Nexus use a different interface.
I have a Qt5 application which uses QNetworkAccessManager for network requests which is accessible via a singleton and QPluginLoader to load extensions which add the functionality to the program. Currently I'm using static linking for plugins and everything works just fine.
However I want to switch to using dynamic libraries to separate the core functionality from other parts of the app. I've added the necessary declspec's via macro, and made necessary adjustments in my .pro files.
The problem is that very often (like 3 out of 4 starts), when used from DLLs, QNetworkAccessManager just returns an empty reply or a null pointer. No data, no error string, no headers.
This is the code I'm using for loading plugins:
template <typename PluginType>
static QList<PluginType*> loadModules() {
    QList<PluginType*> loadedModules;
    foreach (QObject* instance, QPluginLoader::staticInstances()) {
        PluginType* plugin = qobject_cast<PluginType*>(instance);
        if (plugin) {
            loadedModules << plugin;
        }
    }
    QDir modulesDir(qApp->applicationDirPath() + "/modules");
    foreach (QString fileName, modulesDir.entryList(QDir::Files)) {
        QPluginLoader loader(modulesDir.absoluteFilePath(fileName));
        QObject *instance = loader.instance();
        PluginType* plugin = qobject_cast<PluginType*>(instance);
        if (plugin) {
            loadedModules << plugin;
        }
    }
    return loadedModules;
}
which is used in this non-static, non-template overload called during startup:
bool AppController::loadModules() {
    m_window = new AppWindow();
    /* some unimportant connection and splashscreen updating */
    QList<ModuleInterface*> loadedModules = loadModules<ModuleInterface>();
    foreach (ModuleInterface* module, loadedModules) {
        m_splash->showMessage(tr("Initializing module: %1").arg(module->getModuleName()),
                              Qt::AlignBottom | Qt::AlignRight, Qt::white);
        module->preinit();
        QApplication::processEvents();
        // [1]
        ControllerInterface *controller = module->getMainController();
        m_window->addModule(module->getModuleName(),
                            QIcon(module->getIconPath()),
                            controller->primaryWidget(),
                            controller->settingsWidget());
        m_moduleControllers << controller;
    }
    m_window->addGeneralSettings((new GeneralSettingsController(m_window))->settingsWidget());
    m_window->enableSettings();
    /* restoring window geometry & showing it */
    return true;
}
However, if I insert QThread::sleep(1); at the line marked [1], it works okay, but loading slows down, and I highly doubt that is a stable solution that will work everywhere.
Also, the site I'm sending requests to is MyAnimeList.
All right, I have finally debugged it. It turned out I was deleting the internal QNetworkAccessManager in one of the classes that needed unsynchronized access. That, and updating to Qt 5.3, seem to have solved my problem.
I have some jar file (custom) which I need to publish to Sonatype Nexus repository from Groovy script.
I have jar located in some path on machine where Groovy script works (for instance: c:\temp\module.jar).
My Nexus repo url is http://:/nexus/content/repositories/
On this repo I have folder structure like: folder1->folder2->folder3
When publishing my jar, I need to do the following in folder3:
Create a new directory named after the module's revision (my Groovy script knows this revision)
Upload the jar to this directory
Create pom, md5, and sha1 files for the uploaded jar
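For the md5/sha1 part of the last step: the sidecar files a Maven-layout repository expects are just the lowercase hex digest of the jar's bytes, so they can be produced with java.security.MessageDigest alone (callable as-is from Groovy). A sketch; the class name and the byte source are illustrative:

```java
import java.security.MessageDigest;

public class Checksums {
    // Returns the lowercase hex digest of the given bytes,
    // for algorithm names such as "MD5" or "SHA-1".
    static String hexDigest(String algorithm, byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(data)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for Files.readAllBytes(pathToJar):
        byte[] jarBytes = "dummy jar content".getBytes();
        // These strings would be written to module.jar.md5 / module.jar.sha1:
        System.out.println(hexDigest("MD5", jarBytes));
        System.out.println(hexDigest("SHA-1", jarBytes));
    }
}
```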
After several days of investigation I still have no idea how to create such a script, but this approach looks much cleaner than direct uploading.
I found http://groovy.codehaus.org/Using+Ant+Libraries+with+AntBuilder and some other material (a non-script Stack Overflow solution).
I figured out how to create ivy.xml in my Groovy script, but I don't understand how to create build.xml and ivysettings.xml on the fly and set the whole system up to work.
Could you please help me understand the Groovy way?
UPDATE:
I found that the following command works fine for me:
curl -v -F r=thirdparty -F hasPom=false -F e=jar -F g=<my_groupId> -F a=<my_artifactId> -F v=<my_artifactVersion> -F p=jar -F file=@module.jar -u admin:admin123 http://<my_nexusServer>:8081/nexus/service/local/repositories
As I understand it, curl performs a POST request to the Nexus service. Am I correct?
Now I'm trying to build the same HTTP POST request using Groovy's HTTPBuilder.
How should I transform the curl command's parameters into a Groovy HTTPBuilder request?
Found a way to do this with Groovy's HTTPBuilder, based on info from Sonatype and a few other sources.
This works with http-builder version 0.7.2 (not with earlier versions) and also needs an extra dependency: 'org.apache.httpcomponents:httpmime:4.2.1'.
The example also uses basic auth against Nexus.
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import groovyx.net.http.ContentType
import groovyx.net.http.HttpResponseException
import org.apache.http.HttpRequest
import org.apache.http.HttpRequestInterceptor
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.FileBody
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.protocol.HttpContext

class NexusUpload {
    def uploadArtifact(Map artifact, File fileToUpload, String user, String password) {
        def path = "/service/local/artifact/maven/content"
        HTTPBuilder http = new HTTPBuilder("http://my-nexus.org/")
        String basicAuthString = "Basic " + "$user:$password".bytes.encodeBase64().toString()
        http.client.addRequestInterceptor(new HttpRequestInterceptor() {
            void process(HttpRequest httpRequest, HttpContext httpContext) {
                httpRequest.addHeader('Authorization', basicAuthString)
            }
        })
        try {
            http.request(Method.POST, ContentType.ANY) { req ->
                uri.path = path
                MultipartEntity entity = new MultipartEntity()
                entity.addPart("hasPom", new StringBody("false"))
                entity.addPart("file", new FileBody(fileToUpload))
                entity.addPart("a", new StringBody("my-artifact-id"))
                entity.addPart("g", new StringBody("my-group-id"))
                entity.addPart("r", new StringBody("my-repository"))
                entity.addPart("v", new StringBody("my-version"))
                req.entity = entity
                response.success = { resp, reader ->
                    if (resp.status == 201) {
                        println "success!"
                    }
                }
            }
        } catch (HttpResponseException e) {
            e.printStackTrace()
        }
    }
}
Ivy is an open source library, so one approach would be to call its classes directly. The problem with that approach is that there are few examples of how to invoke Ivy programmatically.
Since Groovy has excellent support for generating XML, I favour the slightly dumber approach of creating the files I understand as an Ivy user.
The following example is designed to publish files into Nexus, generating both the ivy and ivysettings files:
import groovy.xml.NamespaceBuilder
import groovy.xml.MarkupBuilder

// Methods
// =======
def generateIvyFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        xml = new MarkupBuilder(writer)
        xml."ivy-module"(version: "2.0") {
            info(organisation: "org.dummy", module: "dummy")
            publications() {
                artifact(name: "dummy", type: "pom")
                artifact(name: "dummy", type: "jar")
            }
        }
    }
    return file
}

def generateSettingsFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        xml = new MarkupBuilder(writer)
        xml.ivysettings() {
            settings(defaultResolver: "central")
            credentials(host: "myrepo.com", realm: "Sonatype Nexus Repository Manager", username: "deployment", passwd: "deployment123")
            resolvers() {
                ibiblio(name: "central", m2compatible: true)
                ibiblio(name: "myrepo", root: "http://myrepo.com/nexus", m2compatible: true)
            }
        }
    }
    return file
}

// Main program
// ============
def ant = new AntBuilder()
def ivy = NamespaceBuilder.newInstance(ant, 'antlib:org.apache.ivy.ant')

generateSettingsFile("ivysettings.xml").deleteOnExit()
generateIvyFile("ivy.xml").deleteOnExit()

ivy.resolve()
ivy.publish(resolver: "myrepo", pubrevision: "1.0", publishivy: false) {
    artifacts(pattern: "build/poms/[artifact].[ext]")
    artifacts(pattern: "build/jars/[artifact].[ext]")
}
Notes:
More complex? Perhaps... However, if you're not generating the ivy file (i.e., not using Ivy to manage your dependencies), you can easily call the makepom task to generate the Maven POM files prior to uploading into Nexus.
The REST APIs for Nexus work fine. I find them a little cryptic and of course a solution that uses them cannot support more than one repository manager (Nexus is not the only repository manager technology available).
The "deleteOnExit" File method call ensures the working files are cleaned up properly.
Is it possible to output the db migration to an SQL file instead of directly invoking database changes in flyway?
Most of the time this will not be needed, as with Flyway the DB migrations themselves will usually already be written in SQL.
Yes, it's possible, and as far as I'm concerned the feature is an absolute must for DBAs who don't want to allow Flyway in prod.
I made do by modifying code from here; it's a dry-run command for Flyway, to which you can add a FileWriter and write out the migration details:
https://github.com/killbill/killbill/commit/996a3d5fd096525689dced825eac7a95a8a7817e
I did it like so. Project structure (just copied out of the killbill project, with the package renamed to flywaydr):
.
./main
./main/java
./main/java/com
./main/java/com/flywaydr
./main/java/com/flywaydr/CapturingMetaDataTable.java
./main/java/com/flywaydr/CapturingSqlMigrationExecutor.java
./main/java/com/flywaydr/DbMigrateWithDryRun.java
./main/java/com/flywaydr/MigrationInfoCallback.java
./main/java/com/flywaydr/Migrator.java
./main/java/org
./main/java/org/flywaydb
./main/java/org/flywaydb/core
./main/java/org/flywaydb/core/FlywayWithDryRun.java
In Migrator.java, add the following (implement the callback and put it in DbMigrateWithDryRun.java):
} else if ("dryRunMigrate".equals(operation)) {
    MigrationInfoCallback mcb = new MigrationInfoCallback();
    flyway.dryRunMigrate();
    MigrationInfoImpl[] migrationDetails = mcb.getPendingMigrationDetails();
    if (migrationDetails.length > 0) {
        writeMasterScriptToFile(migrationDetails);
    }
}
Then to write stuff to file something like:
private static void writeMasterScriptToFile(MigrationInfoImpl[] migrationDetails) {
    FileWriter fw = null;
    try {
        String masterScriptLoc = "path/to/file";
        String scriptsPathLoc = "path/to/scripts/dir";
        fw = new FileWriter(masterScriptLoc);
        LOG.info("Writing output to " + masterScriptLoc);
        StringBuilder masterScript = new StringBuilder();
        for (final MigrationInfoImpl migration : migrationDetails) {
            Path file = Paths.get(migration.getResolvedMigration().getPhysicalLocation());
            // if you want to copy the actual script files parsed by flyway:
            Files.copy(file, Paths.get(scriptsPathLoc, file.getFileName().toString()), REPLACE_EXISTING);
            // or just collect the sql into the master script:
            masterScript.append(new String(Files.readAllBytes(file))).append("\n");
        }
        fw.write(masterScript.toString());
    } catch (Exception e) {
        LOG.error("Could not write to file, io exception was thrown.", e);
    } finally {
        try { fw.close(); } catch (Exception e) { LOG.error("Could not close file writer.", e); }
    }
}
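Stripped of Flyway's internals, the master-script step boils down to concatenating the pending .sql files in order. A self-contained sketch of just that part using plain java.nio (directory and file names are placeholders; Flyway itself determines the real migration order):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MasterScript {
    // Concatenates the given migration files into one SQL string,
    // prefixing each script with a comment header naming its source file.
    static String concatenate(List<Path> migrations) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (Path p : migrations) {
            sb.append("-- source: ").append(p.getFileName()).append('\n');
            sb.append(new String(Files.readAllBytes(p))).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("sql"); // placeholder: directory of pending migrations
        if (!Files.isDirectory(dir)) {
            System.err.println("no sql/ directory; nothing to do");
            return;
        }
        try (Stream<Path> files = Files.list(dir)) {
            List<Path> migrations = files
                    .filter(p -> p.toString().endsWith(".sql"))
                    .sorted() // simple name order here; Flyway sorts by version
                    .collect(Collectors.toList());
            Files.write(Paths.get("master.sql"), concatenate(migrations).getBytes());
        }
    }
}
```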
One last thing to mention: I compile and package this into a jar "with dependencies" (aka a fatjar) via Maven (see the assembly plugin's "jar-with-dependencies" descriptor) and run it via a command like the one below. Alternatively, you can include it as a dependency and call it via the mvn exec:exec goal, which is something I had success with as well.
$ java -jar /path/to/flywaydr-fatjar.jar dryRunMigrate -regular.flyway.configs -etc -etc
I didn't find a way, so I switched to MyBatis Migrations. Looks quite nice.