JGit: how to get the files in a specific commit

I am trying to get the list of commits during a specific period and the list of files changed in each of those commits. I tried the code below:
gitRepo = Git.cloneRepository().setURI("****")
        .setCredentialsProvider(new UsernamePasswordCredentialsProvider("PRIVATE-TOKEN", "***"))
        .setDirectory(new File("test"))
        .setNoCheckout(true)
        .call();
ObjectId masterId = gitRepo.getRepository().exactRef("refs/remotes/origin/master").getObjectId();
Date since = new SimpleDateFormat("yyyy-MM-dd").parse("2021-12-08");
Date until = new SimpleDateFormat("yyyy-MM-dd").parse("2021-12-10");
RevFilter between = CommitTimeRevFilter.between(since, until);
for (RevCommit commit : gitRepo.log().add(masterId).setRevFilter(between).call()) {
    System.out.println("* "
            + commit.getId().getName()
            + " "
            + commit.getShortMessage()
            + " "
            + commit.getAuthorIdent().getName());
    ObjectId lastCommitId = gitRepo.getRepository().resolve(commit.getId().getName());
    RevTree tree = commit.getTree();
    TreeWalk treeWalk = new TreeWalk(gitRepo.getRepository());
    treeWalk.addTree(tree);
    treeWalk.setRecursive(false);
    while (treeWalk.next()) {
        System.out.println("File Name = " + treeWalk.getPathString());
    }
}
It is listing all files in the repository instead of only the files changed as part of the specific commit. Not sure what I am missing?
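A TreeWalk over commit.getTree() walks the complete tree of the repository at that commit, so it always prints every file in that snapshot. To list only the files a commit changed, one option is to diff the commit's tree against its parent's tree. Here is a minimal sketch of that idea, meant to replace the TreeWalk part inside the loop above, assuming each commit has at least one parent:
import org.eclipse.jgit.diff.DiffEntry;
import org.eclipse.jgit.diff.DiffFormatter;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.revwalk.RevCommit;
import org.eclipse.jgit.revwalk.RevWalk;
import org.eclipse.jgit.util.io.DisabledOutputStream;

// inside the for (RevCommit commit : ...) loop
Repository repo = gitRepo.getRepository();
try (RevWalk revWalk = new RevWalk(repo);
     DiffFormatter diffFormatter = new DiffFormatter(DisabledOutputStream.INSTANCE)) {
    diffFormatter.setRepository(repo);
    if (commit.getParentCount() > 0) {
        // commits returned by log() may carry unparsed parents, so parse explicitly
        RevCommit parent = revWalk.parseCommit(commit.getParent(0).getId());
        for (DiffEntry entry : diffFormatter.scan(parent.getTree(), commit.getTree())) {
            // for deletions getNewPath() is "/dev/null"; use getOldPath() in that case
            System.out.println(entry.getChangeType() + " " + entry.getNewPath());
        }
    }
}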

Related

How can I insert an apostrophe ( ' ) into an sqlite database from a txt file

I use an sqlite database to save my data. I backed up an sqlite table to a txt file. When I try to restore from the txt file to sqlite I get a syntax error (code 1), because the data contains an apostrophe ( ' ), for example when I write (Türk'ler geldi).
The error is: android.database.sqlite.SQLiteException: near "ler": syntax error (code 1): , while compiling: .......
How can I insert it? I used ( \' ) but it didn't work.
Here is my code that writes from sqlite to the txt file:
semptoms = db.allSemptom();
int say = semptoms.size();
StringBuilder message = new StringBuilder("");
for (int i = 0; i < say; i++) {
    Semptom semptom = semptoms.get(i);
    String name = semptom.getName();
    message.append(name + "\n");
}
saveToSymptom(message.toString());
I tried to add .replace(" ' ", " \ ' ") but it didn't work. I mean like this:
message.append(name.replace(" ' ", " \ ' ") + "\n");
Thanks to forpas for this answer. I used the code below, as forpas said in a comment, and now the problem is solved:
message.append(name.replace("'", "''") + "\n");
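Doubling the single quote is how an apostrophe is escaped inside an SQL string literal. An alternative that avoids escaping entirely is to bind the value instead of concatenating it into the SQL. A minimal sketch, assuming an Android SQLiteDatabase obtained from a SQLiteOpenHelper and a hypothetical table/column called semptom/name:
import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;

SQLiteDatabase db = helper.getWritableDatabase(); // assumption: helper is your SQLiteOpenHelper
db.beginTransaction();
try {
    for (String name : namesFromBackupFile) {     // assumption: lines read back from the txt file
        ContentValues values = new ContentValues();
        values.put("name", name);                 // value is bound, apostrophes need no escaping
        db.insert("semptom", null, values);       // hypothetical table name
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}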

Google Appmaker createItem failing with could not select element

I have a temporary table where I let the user copy the record that needs to be edited. Once the edit is complete, I copy it back.
I am getting an error when I try to copy the original record to the temporary table for editing. Here's the code I am using:
console.log('copyOriginalToTemp ' + tempRecord.ID + ' options ' + JSON.stringify(options));
var myCreateDatasource = app.datasources.RadiosTemp.modes.create;
console.log('# of items in myCreateDatasource ' + myCreateDatasource.items.length);
var draft = myCreateDatasource.item;
draft.BatchId = options.BatchId;
draft.County = tempRecord.County;
... // lot of assignments
console.log('About to create item');
myCreateDatasource.createItem(function(createdRecord) {
    console.log('Creating the Item ' + createdRecord._key);
    app.datasources.RadiosTemp.query.filters.BatchId._equals = options.BatchId;
    .....
});
The error message tells me that the newly created item cannot be selected, but I have no idea why. If I change the datasource to Manual Save, I get the same error with no key, since it is in manual save mode.

Failing to write offset data to zookeeper in kafka-storm

I was setting up a Storm cluster to calculate real-time trending and other statistics. However, I have some problems introducing the "recovery" feature into this project: I want the offset that was last read by the kafka-spout (the source code for kafka-spout comes from https://github.com/apache/incubator-storm/tree/master/external/storm-kafka) to be remembered. I start my kafka-spout this way:
BrokerHosts zkHost = new ZkHosts("localhost:2181");
SpoutConfig kafkaConfig = new SpoutConfig(zkHost, "test", "", "test");
kafkaConfig.forceFromStart = false;
KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("test" + "spout", kafkaSpout, ESConfig.spoutParallelism);
The default settings should be doing this, but it is not happening in my case. Every time I start my project, the PartitionManager tries to look up the offset information, and nothing is found:
2014-06-25 11:57:08 INFO PartitionManager:73 - Read partition information from: /storm/partition_1 --> null
2014-06-25 11:57:08 INFO PartitionManager:86 - No partition information found, using configuration to determine offset
Then it starts reading from the latest possible offset, which is okay if my project never fails, but not exactly what I wanted.
I also looked a bit more into the PartitionManager class, which uses the ZkState class to write the offsets, in this code snippet:
PartitionManager
public void commit() {
    long lastCompletedOffset = lastCompletedOffset();
    if (_committedTo != lastCompletedOffset) {
        LOG.debug("Writing last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
        Map<Object, Object> data = (Map<Object, Object>) ImmutableMap.builder()
                .put("topology", ImmutableMap.of("id", _topologyInstanceId,
                        "name", _stormConf.get(Config.TOPOLOGY_NAME)))
                .put("offset", lastCompletedOffset)
                .put("partition", _partition.partition)
                .put("broker", ImmutableMap.of("host", _partition.host.host,
                        "port", _partition.host.port))
                .put("topic", _spoutConfig.topic).build();
        _state.writeJSON(committedPath(), data);
        _committedTo = lastCompletedOffset;
        LOG.debug("Wrote last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
    } else {
        LOG.debug("No new offset for " + _partition + " for topology: " + _topologyInstanceId);
    }
}
ZkState
public void writeBytes(String path, byte[] bytes) {
    try {
        if (_curator.checkExists().forPath(path) == null) {
            _curator.create()
                    .creatingParentsIfNeeded()
                    .withMode(CreateMode.PERSISTENT)
                    .forPath(path, bytes);
        } else {
            _curator.setData().forPath(path, bytes);
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
I could see that for the first message the writeBytes method gets into the if block and tries to create the path, and for the second message it goes into the else block, which seems to be fine. But when I start the project again, the same message as mentioned above shows up: no partition information can be found.
I had the same problem. It turned out I was running in local mode, which uses an in-memory ZooKeeper and not the ZooKeeper that Kafka is using.
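In other words, if the topology is submitted with LocalCluster, the spout commits offsets to the in-process ZooKeeper that Storm starts for local mode, and they are gone when the JVM exits. A minimal sketch of the distinction, assuming the old backtype.storm package names that match the storm-kafka code above (the topology name and Config are illustrative):
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;

Config conf = new Config();

// Local mode: Storm runs an in-process ZooKeeper, so offsets written by the
// spout's ZkState disappear when the process stops.
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("trending-topology", conf, builder.createTopology());

// Cluster mode: the topology (and its committed offsets) lives on the real
// Storm/ZooKeeper setup and survives restarts.
// StormSubmitter.submitTopology("trending-topology", conf, builder.createTopology());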
To make sure that KafkaSpout doesn't use Storm's ZooKeeper for the ZkState that stores the offset, you need to set the SpoutConfig.zkServers, SpoutConfig.zkPort, and SpoutConfig.zkRoot in addition to the ZkHosts. For example
import org.apache.zookeeper.client.ConnectStringParser;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;
import storm.kafka.KeyValueSchemeAsMultiScheme;
...
final ConnectStringParser connectStringParser = new ConnectStringParser(zkConnectStr);
final List<InetSocketAddress> serverInetAddresses = connectStringParser.getServerAddresses();
final List<String> serverAddresses = new ArrayList<>(serverInetAddresses.size());
final Integer zkPort = serverInetAddresses.get(0).getPort();
for (InetSocketAddress serverInetAddress : serverInetAddresses) {
    serverAddresses.add(serverInetAddress.getHostName());
}
final ZkHosts zkHosts = new ZkHosts(zkConnectStr);
zkHosts.brokerZkPath = kafkaZnode + zkHosts.brokerZkPath;
final SpoutConfig spoutConfig = new SpoutConfig(zkHosts, inputTopic, kafkaZnode, kafkaConsumerGroup);
spoutConfig.scheme = new KeyValueSchemeAsMultiScheme(inputKafkaKeyValueScheme);
spoutConfig.zkServers = serverAddresses;
spoutConfig.zkPort = zkPort;
spoutConfig.zkRoot = kafkaZnode;
I think you are hitting this bug:
https://community.hortonworks.com/questions/66524/closedchannelexception-kafka-spout-cannot-read-kaf.html
The comment from the colleague above fixed my issue. I also added some newer libraries.

How can I resolve the error (Could not find a part of the path 'E:\Work Station\Works\NewThrissurDiary\images\'.) that I got in my update query?

I want to update the files which I uploaded to the database. This is my query, please help:
if (FileUpload1.HasFile || FileUpload2.HasFile || FileUpload3.HasFile || FileUpload4.HasFile || FileUpload5.HasFile)
{
    string filename1 = Path.GetFileName(FileUpload1.PostedFile.FileName);
    FileUpload1.SaveAs(Server.MapPath("~/images/" + filename1));
    string filename2 = Path.GetFileName(FileUpload2.PostedFile.FileName);
    FileUpload2.SaveAs(Server.MapPath("~/images/" + filename2));
    string filename3 = Path.GetFileName(FileUpload3.PostedFile.FileName);
    FileUpload3.SaveAs(Server.MapPath("~/images/" + filename3));
    //string filename4 = Path.GetFileName(FileUpload4.PostedFile.FileName);
    //FileUpload4.SaveAs(Server.MapPath("~/images/" + filename4));
    string filename5 = Path.GetFileName(FileUpload5.PostedFile.FileName);
    FileUpload5.SaveAs(Server.MapPath("~/images/" + filename5));
}
Even though it is an old topic, I have recently gotten the same error. The error means the code tries to access a path which is not available. Let me explain with my example:
I have a folder (Fa) which belongs to my experts, and the main path where record upload files go is another folder (Fb).
At first I wrote this code:
string strname2;
strname2 = ("Fa/" + FileUpload2.PostedFile.FileName.Substring(FileUpload2.PostedFile.FileName.LastIndexOf("//") + 1));
LabelFile.Text = strname2;
and when I used the following code I got that deadly error:
if (strname2 != "Fa/")
{
FileUpload2.PostedFile.SaveAs(Server.MapPath(strname2));
}
else
{
LabelFile.Text = "";
}
After some testing and trying I changed both pieces of code above to the following, and guess what? Everything works like a charm:
1:
string strname2;
strname2 = ("../Fb/Fa/" + FileUpload2.PostedFile.FileName.Substring(FileUpload2.PostedFile.FileName.LastIndexOf("//") + 1));
LabelFile.Text = strname2;
And then:
2:
if (strname2 != "../Fb/Fa/")
{
FileUpload2.PostedFile.SaveAs(Server.MapPath(strname2));
}
else
{
LabelFile.Text = "";
}
So the solution is to check your folders and see whether they are available or not.
PS: Look especially for this folder: NewThrissurDiary
Hope this helps you.

Show log of a file in JGit when committing it as a BLOB

I implemented the logic below to create a new commit (with no parents) that contains only my file. Commits made this way are fast compared to CommitCommand commit = git.commit();
But I am not able to get the log of a particular file or the number of times it has been updated/revised, and every time I go for Constants.HEAD, I get null.
Any help will be of great advantage.
Git git = jGitUtil.openRepo();
Repository repository = git.getRepository();
ObjectInserter repoInserter = repository.newObjectInserter();
ObjectId commitId = null;
try
{
    byte[] fileBytes = FileUtils.readFileToByteArray(sourceFile);
    // Add a blob to the repository
    ObjectId blobId = repoInserter.insert(org.eclipse.jgit.lib.Constants.OBJ_BLOB, fileBytes);
    // Create a tree that contains the blob as file "hello.txt"
    TreeFormatter treeFormatter = new TreeFormatter();
    treeFormatter.append(actualFileName, FileMode.REGULAR_FILE, blobId);
    ObjectId treeId = treeFormatter.insertTo(repoInserter);
    System.out.println("File comment : " + relativePath + PortalConstants.FILESEPARATOR + actualFileName + PortalConstants.EP_DELIMETER + userComments);
    // Create a commit that contains this tree
    CommitBuilder commit = new CommitBuilder();
    PersonIdent ident = new PersonIdent(user.getFirstName(), user.getUserId());
    commit.setCommitter(ident);
    commit.setAuthor(ident);
    commit.setMessage(relativePath + PortalConstants.FILESEPARATOR + actualFileName + PortalConstants.EP_DELIMETER + userComments);
    commit.setTreeId(treeId);
    commitId = repoInserter.insert(commit);
    System.out.println(" commitId : " + commitId.getName());
    repoInserter.flush();
    System.out.println("Flush Done");
}
catch (IOException ioe)
{
    log.logError(StackTraceUtil.getStackTrace(ioe));
    System.out.println(StackTraceUtil.getStackTrace(ioe));
}
finally
{
    repoInserter.release();
}
return commitId.getName();
}
I would probably use Add File and Commit File porcelain commands instead of trying to implement it myself. Then retrieving the log should be possible via something like this:
Iterable<RevCommit> logs = new Git(repository).log()
        .all()
        .call();
for (RevCommit rev : logs) {
    System.out.println("Commit: " + rev + " " + rev.getName() + " " + rev.getId().getName());
}
From there you can retrieve the additional information based on the commit id. I have now also added this as a new snippet, Show Log.
Note that if you are creating a commit with no parents, you are saying that there is no history. Therefore you won't be able to see any other changes from that history because it doesn't exist.
You should probably create the commit with the current tip of the branch as its parent so that you'll be able to get more information.
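A minimal sketch of that porcelain approach, assuming the file lives inside the repository's work tree (the path and message strings are illustrative, and jGitUtil.openRepo() is the same helper used in the question):
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.revwalk.RevCommit;

Git git = jGitUtil.openRepo();

// Stage and commit the file; the new commit gets the current HEAD as its parent.
git.add().addFilepattern("docs/hello.txt").call();
git.commit().setMessage("update docs/hello.txt").call();

// Per-file history: restrict the log to that path.
Iterable<RevCommit> fileLog = git.log().addPath("docs/hello.txt").call();
for (RevCommit rev : fileLog) {
    System.out.println(rev.getName() + " " + rev.getShortMessage());
}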
