I'm trying to use CheckoutCommand the following way:
Git git = new Git(repository);
CheckoutCommand checkoutCommand = git.checkout();
checkoutCommand.setUpstreamMode(CreateBranchCommand.SetupUpstreamMode.SET_UPSTREAM);
checkoutCommand.setStartPoint("origin/" + branchName);
checkoutCommand.setCreateBranch(true);
checkoutCommand.setForce(true);
checkoutCommand.call();
I tried using SetupUpstreamMode.TRACK as well, but it still failed.
This results in strange behaviour: the repository contents are deleted and, instead, a clone of each remote branch is created.
Can you please advise?
You are not setting the name of the branch to create, which should result in an exception on call() (is it possible that you are swallowing that?). Add a call like this:
checkoutCommand.setName(branchName);
See the documentation of CheckoutCommand (the above is mentioned here).
Also note that these calls can be chained, so you could also write it like this:
git.checkout().setCreateBranch(true).setName(branchName) // ...
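Putting it all together, a minimal sketch of the corrected call chain (assuming repository and branchName are defined as in your snippet):
import org.eclipse.jgit.api.CreateBranchCommand;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.Ref;

// Create a local branch named branchName, track origin/<branchName>, and check it out.
try (Git git = new Git(repository)) {
    Ref checkedOut = git.checkout()
            .setCreateBranch(true)
            .setName(branchName)  // the missing call
            .setUpstreamMode(CreateBranchCommand.SetupUpstreamMode.SET_UPSTREAM)
            .setStartPoint("origin/" + branchName)
            .setForce(true)
            .call();
}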
Consider the following code:
mt.resolve().getQualifiedSignature();
Here mt is of type MethodDeclaration, and it might come from a MethodCallExpr.
Now, in order for it to work accurately, I need to set the following:
CombinedTypeSolver combinedTypeSolver = new CombinedTypeSolver();
combinedTypeSolver.add(new ReflectionTypeSolver());
combinedTypeSolver.add(new JavaParserTypeSolver(new File("src/java1/")));
combinedTypeSolver.add(new JavaParserTypeSolver(new File("src/java2/
")));
This is easy, except that I have two issues in my scenario.
1) I cannot set the root source directories manually; I need to find them automatically.
2) I cannot give paths like the above, because I am using JGit to check out different commits. The paths are not fixed and may vary between checkouts, so they need to be read from the JGit tree.
Any help would be highly appreciated.
I'm using Insomnia to make calls to the Artifactory API.
I have the following query, which works really well:
items.find({"repo":{"$eq":"my-repository-virt"}}, {"$and":[{"#my.fileType":{"$match": "jar"}},{"#my.otherType":{"$match": "type2"}},{"#prodVersion":{"$match": "false"}}]})
But I have a problem in that there are duplicate files in some sub-folders with the same properties/filename that I would like to exclude.
I would like to add a path to this query, but then I never get any results returned.
The repository is a virtual repository that links to 3 other real repositories.
One of my colleagues can call the following query with the command line tool and get the expected results:
jfrog rt search my-repo-snapshots/myproject/subfolder/jars/*.jar
I have tried adding the path parameter to my query, and I've tried removing everything except the repo and the path, like this:
items.find({"repo":{"$eq":"my-repo-snapshots"}},{"path" : "my-repo-snapshots/myproject/subfolder/jars/*.jar"})
I've tried with just the path, with variations on the path, including/excluding the repo name, and using the virtual repo and the actual repo, but I always get a successful search with 0 results returned.
How can I build this query to search the virtual repo, along a certain path, and including certain properties?
EDIT:
I've also tried:
items.find({"repo":{"$eq":"my-repo-snapshots"}},{"path" : {"$match":"my-repo-snapshots/myproject/subfolder/jars/*.jar"}})
Both with the repo in the path and without, I still get 0 results.
OK I figured it out.
The path part needs to be added inside the {"$and": ...} section where the properties are included, like so:
items.find({"repo":{"$eq":"my-repository-virt"}},
{"$and":[
{"path":{"$match":"path/to/relevant/folders/*"}},
{"#my.fileType":{"$match": "jar"}},
{"#my.otherType":{"$match": "type2"}},
{"#prodVersion":{"$match": "false"}}
]})
The easier fix would have been:
items.find({"repo":{"$eq":"my-repo-snapshots"}},{"path":{"$eq":"myproject/subfolder/jars"}},{"name":{"$match":"*.jar"}})
So the problem with your initial attempt is that "path" should match the folder (relative to the repository root) and "name" should match the filename.
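For example, a (hypothetical) artifact stored at my-repo-snapshots/myproject/subfolder/jars/app-1.0.jar would have repo = "my-repo-snapshots", path = "myproject/subfolder/jars", and name = "app-1.0.jar" in AQL terms.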
I'm creating several commands programmatically and want to avoid having to add key mappings for them explicitly in keymap.cson.
The Flight Manual page for Keymap Manager shows an add method. It doesn't give an example of how to actually use this method, so my guess is that this should work:
atom.keymaps.add('atom-text-editor',{'alt-1':'custom:my-command'});
However, this does not appear to work. When I run this in the developer console, I get this message:
Encountered an invalid key binding when adding key bindings from 'atom-text-editor' 'custom:my-command'.
I get this message even if I change alt to ctrl.
What does the correct method call on atom.keymaps look like?
I agree, the docs are not detailed enough. However, through trial and error, I managed to figure it out:
atom.keymaps.add('foo', {
'atom-text-editor' : {
'alt-1': 'custom:my-command',
'#': 'application:about'
// etc
}
});
Explanation:
atom.keymaps.add(source, bindings, priority);
The source argument is not the same as what is referred to as the selector in Atom speak. Instead, it's an identifier that can be used to remove the keybindings, should you wish to (except it seems they haven't actually implemented a remove method!).
Instead, the selector should go inside the bindings argument, as shown above.
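As a quick sanity check, you can query the registered bindings afterwards with findKeyBindings, which is also on KeymapManager (a sketch; I'm assuming the keystrokes option here):
atom.keymaps.findKeyBindings({keystrokes: 'alt-1'}); // should list custom:my-command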
I have a table with several methods; one of them is validateWrite: when field A is set to value X, fields B and C have to be filled in.
Suddenly, validateWrite has stopped working, without any code changes (I have compared the code with the test environment, where it does work).
I have tried to recompile the table, but that did not work.
Any idea why it suddenly stopped working (without other modifications in this environment, or generating a CIL), and what I can try to solve it?
If some piece of code is calling table.doInsert() (or table.doUpdate()), it skips the validateWrite() method.
If the environments are truly identical, then I would try closing your AX client and deleting your user caches (see http://dynamics-ax-live.blogspot.com/2010/03/more-information-about-auc-file.html), i.e. deleting all of the *.auc files located at C:\Users\[Username]\AppData\Local.
In addition to what that tells you to delete, I'd also remove the *.kti file and all of the files & folders inside of C:\Users\[UserName]\AppData\Local\Microsoft\Dynamics Ax
Then open AX and see if the problem still exists. If it does, run a full system compile and a CIL build, and delete your usage data.
The preferred route though would be to just drop a breakpoint in and debug the code to see what the execution stack is.
I am trying to use filters to select specific tables to replicate.
I tried running this with the installer
./tools/tungsten-installer --master-slave -a \
...
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo"
and got this exception in trepctl status after the master had failed to install properly:
Plugin class name property is missing or null: key=replicator.filter.replicate
Which file is this properties file? How do I find it? Moreover, when specifying the settings for the filter, how do I know exactly what to put?
I discovered that I am supposed to modify the configuration template file prior to configuration, according to Issue 219, but what changes am I supposed to make in tungsten-replicator-2.0.5-diff that will later be patched into the extraction?
Issue 254 suggests that if you want to apply a filter out of the box, you can use these options with tungsten-installer:
-a --property=replicator.filter.Replicate.ignoreFilter=schema_x.tablex,schema_x,tabley,schema_y,tablez
--svc-thl-filter=Replicate
However, when I try using this for --property=replicator.filter.replicate.do, the problem is still the same:
pendingExceptionMessage: Plugin class name property is missing or null: key=replicator.filter.replicate
Your assistance will be greatly appreciated.
Rumbi
Update:
Hi
I had a look at this file: /root/tungsten/tungsten-replicator/samples/conf/filters/default/tableignore.tpl. According to this sample, a static-SERVICE_NAME.properties file is supposed to have something like this configured; please confirm whether this is the correct syntax:
replicator.filter.tabledo=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.tabledo.script=${replicator.home.dir}/samples/scripts/javascript-advanced/tabledo.js
replicator.filter.tabledo.tables=foo(database).bar(table)
replicator.stage.thl-to-dbms.filters=tabledo
However, I did not find tabledo.js (or anything similar) in the directory where tableignore.js exists. Could I please have the location of this file? If there is an alternative way of specifying --property=replicator.filter.replicate.do=test without the use of this .js file, your suggestions are most welcome.
Download the latest version of Tungsten Replicator. The missing .tpl file was added about a month ago. After installation, the filtered tables should be added to static-SERVICE_NAME.properties under the FILTERS section.
Locate your replicator configuration file in static-YOUR_SERVICE_NAME.properties, e.g.
/opt/continuent/tungsten/tungsten-replicator/conf/static-mysql2vertica.properties
Make sure the individual dbms properties are set, in particular the setting replicator.applier.dbms:
# Batch applier basic configuration information.
replicator.applier.dbms=com.continuent.tungsten.replicator.applier.batch.SimpleBatchApplier
replicator.applier.dbms.url=jdbc:mysql:thin://${replicator.global.db.host}:${replicator.global.db.port}/tungsten_${service.name}?createDB=true
replicator.applier.dbms.driver=org.drizzle.jdbc.DrizzleDriver
replicator.applier.dbms.user=${replicator.global.db.user}
replicator.applier.dbms.password=${replicator.global.db.password}
replicator.applier.dbms.startupScript=${replicator.home.dir}/samples/scripts/batch/mysql-connect.sql
# Timezone and character set.
replicator.applier.dbms.timezone=GMT+0:00
replicator.applier.dbms.charset=UTF-8
# Parameters for loading and merging via stage tables.
replicator.applier.dbms.stageTablePrefix=stage_xxx_
replicator.applier.dbms.stageDirectory=/tmp/staging
replicator.applier.dbms.stageLoadScript=${replicator.home.dir}/samples/scripts/batch/mysql-load.sql
replicator.applier.dbms.stageMergeScript=${replicator.home.dir}/samples/scripts/batch/mysql-merge.sql
replicator.applier.dbms.cleanUpFiles=false
Depending on the database you are replicating to you may have to omit/modify some of the lines.
For more information see:
https://code.google.com/p/tungsten-replicator/wiki/Replicator_Batch_Loading
I don't know if this problem is still open or not.
I am using version 2.0.6-xxx, and installing the service using these parameters works for me.
I would like to point out that, as the name says, "--svc-extractor-filters" defines an extractor filter, meaning that the parameters will guide the extraction of data on the master server.
If you intend to use it on the slave service, you should use the "--svc-applier-filters".
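For example (a sketch that simply moves the flags from the question over to the applier side):
--svc-applier-filters=replicate \
--property="replicator.filter.replicate.do=test,*.foo"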
The parameters
--svc-extractor-filters=replicate \
--property="replicator.filter.replicate.do=test,*.foo"
are supposed to create the following in the properties file:
# This is the filter set up.
replicator.filter.replicate=com.continuent.tungsten.replicator.filter.ReplicateFilter
replicator.filter.replicate.ignore=
replicator.filter.replicate.do=test,*.foo
And you should also be able to find the
replicator.stage.binlog-to-q.filters=replicate
parameter set.
If you intend to use this filter in the slave, please find the line with:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave
and change it to:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave,replicate
Hope this brief description helps!