[Solved: it seems there was a bug affecting Alfresco 3.3.0 which is no longer present in Alfresco 3.3.0g]
Hi,
I'm using OpenCMIS to retrieve data from Alfresco 3.3, but I'm seeing very strange behaviour with CMISQL queries. I've googled for anybody else with the same problem, but it seems I'm the first one in the whole world :), so I guess it's my fault, not OpenCMIS's.
This is how I'm querying Alfresco:
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.QueryResult;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisTest {

    private static Session sesion;
    private static final String QUERY =
            "select cmis:objectid, cmis:name from cmis:folder where cmis:name='MyFolder'";

    public static void main(String[] args) {
        // Open a CMIS session with Alfresco
        Map<String, String> params = new HashMap<String, String>();
        params.put(SessionParameter.USER, "admin");
        params.put(SessionParameter.PASSWORD, "admin");
        params.put(SessionParameter.ATOMPUB_URL, "http://localhost:8080/alfresco/s/api/cmis");
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        params.put(SessionParameter.REPOSITORY_ID, "fa9d2553-1e4d-491b-87fd-3de894dc7ca9");
        sesion = SessionFactoryImpl.newInstance().createSession(params);

        // Ugly bug in Alfresco which raises an exception if we request more data than is available
        // See https://issues.alfresco.com/jira/browse/ALF-2859
        sesion.getDefaultContext().setMaxItemsPerPage(1);

        // We repeat the same query 20 times and count the number of elements retrieved each time
        for (int i = 0; i < 20; i++) {
            List<QueryResult> result = doQuery();
            System.out.println(result.size() + " folders retrieved");
        }
    }

    public static List<QueryResult> doQuery() {
        List<QueryResult> result = new LinkedList<QueryResult>();
        try {
            int page = 0;
            while (true) {
                ItemIterable<QueryResult> iterable = sesion.query(QUERY, false).skipTo(page);
                page++;
                for (QueryResult qr : iterable) {
                    result.add(qr);
                }
            }
        } catch (Exception e) {
            // We will always get an exception when Alfresco has no more data to retrieve... :(
            // See https://issues.alfresco.com/jira/browse/ALF-2859
        }
        return result;
    }
}
As you can see, we just execute the same query up to 20 times in a row. You would expect the same result each time, wouldn't you? Unfortunately, this is a sample of what we get:
1 folders retrieved
1 folders retrieved
1 folders retrieved
0 folders retrieved
0 folders retrieved
0 folders retrieved
0 folders retrieved
0 folders retrieved
1 folders retrieved
1 folders retrieved
Sometimes we get twenty 1s in a row, sometimes it's all 0s. We have never got a "mix" of 1s and 0s, though; we always get a run of one or the other.
It does not matter if we create the session before each query; we still see the random behaviour. We have tried against two different Alfresco servers (both 3.3 Community, clean installations), and they both fail randomly. We also measured the time taken by each query, but it doesn't seem to bear any relation to whether the result is wrong (0 folders retrieved) or right (1 folder retrieved).
Alfresco itself seems to be working fine: if we go to "Administration --> Node browser" and launch the CMISQL query from there, it always retrieves one folder, which is correct. So it must be our code, or an OpenCMIS bug...
Any ideas?
I can't reproduce this behavior. It runs fine against http://cmis.alfresco.com . The issue https://issues.alfresco.com/jira/browse/ALF-2859 states that there have been bug fixes. Are you running the latest Alfresco version?
Florian
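As an aside, OpenCMIS can also page through query results without relying on the exception-driven loop, using ItemIterable's paging API. A minimal sketch, reusing sesion and QUERY from the question (whether this sidesteps ALF-2859 on an affected server is untested):

List<QueryResult> result = new LinkedList<QueryResult>();
long skip = 0;
boolean more = true;
while (more) {
    // getPage() returns only the current page instead of iterating to the end
    ItemIterable<QueryResult> page = sesion.query(QUERY, false).skipTo(skip).getPage();
    for (QueryResult qr : page) {
        result.add(qr);
    }
    skip += page.getPageNumItems();
    more = page.getHasMoreItems();
}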
I have written code to extract tables and name-value pairs from a PDF using Amazon Textract. I followed this example:
https://docs.aws.amazon.com/textract/latest/dg/async-analyzing-with-sqs.html
which was written for version 1 of the AWS SDK for Java. I have refactored it for version 2.
This is an async process that only applies to multi-page documents. When I get the results back, they are pretty accurate for the first page, but the subsequent pages are mostly empty rows. The documents I parse are scanned, so the quality is not great. However, if I take a JPG of each individual page and use the single-page operation, i.e. AnalyzeDocumentRequest, each page comes out fine. The Amazon Textract "Try it" service also renders the pages correctly.
So the error must be in my code, but I can't see where.
As you can see, it all happens here:
GetDocumentAnalysisRequest documentAnalysisRequest = GetDocumentAnalysisRequest.builder()
        .jobId(jobId)
        .maxResults(maxResults)
        .nextToken(paginationToken)
        .build();
response = textractClient.getDocumentAnalysis(documentAnalysisRequest);
and I can't really intervene there.
The most likely place for a mistake would be the util file that gathers the page and table blocks, i.e. here:
PageModel pageModel = tableUtil.getTableResults(blocks);
But that works perfectly for the first page, and I can also see, in the response object above, that far fewer blocks are returned for the later pages.
Here is the full code:
private DocumentModel getDocumentAnalysisResults(String jobId) throws Exception {
    int maxResults = 1000;
    String paginationToken = null;
    GetDocumentAnalysisResponse response = null;
    boolean finished = false;
    int pageCount = 0;
    DocumentModel documentModel = new DocumentModel();

    // Loop until the pagination token is null
    while (!finished) {
        GetDocumentAnalysisRequest documentAnalysisRequest = GetDocumentAnalysisRequest.builder()
                .jobId(jobId)
                .maxResults(maxResults)
                .nextToken(paginationToken)
                .build();
        response = textractClient.getDocumentAnalysis(documentAnalysisRequest);

        // Show blocks, confidence and detection times
        List<Block> blocks = response.blocks();
        PageModel pageModel = tableUtil.getTableResults(blocks);
        pageModel.setPageNumber(pageCount++);
        Map<String, String> keyValues = formUtil.getFormResults(blocks);
        pageModel.setKeyValues(keyValues);
        documentModel.getPages().add(pageModel);

        paginationToken = response.nextToken();
        if (paginationToken == null) {
            finished = true;
        }
    }
    return documentModel;
}
Has anyone else encountered this issue?
Many thanks
If the response has a NextToken, then you need to call Textract again, passing in the NextToken, to get the next batch of Blocks.
I am not sure how to do this in Java, but here is the Python example from the AWS repo:
https://github.com/aws-samples/amazon-textract-serverless-large-scale-document-processing/blob/master/src/jobresultsproc.py
For my solution, I did a simple check: if response['NextToken'] is present, call the method again and concatenate response['Blocks'] onto my current list.
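In Java (SDK v2), a minimal sketch of the same idea might look like this: keep calling getDocumentAnalysis and concatenate response.blocks() until nextToken() is null, and only afterwards group the accumulated blocks into document pages (for example by each Block's page() attribute). It reuses textractClient and jobId from the question; allBlocks and the other local names are illustrative:

List<Block> allBlocks = new ArrayList<>();
String token = null;
do {
    GetDocumentAnalysisRequest request = GetDocumentAnalysisRequest.builder()
            .jobId(jobId)
            .maxResults(1000)
            .nextToken(token) // null on the first call
            .build();
    GetDocumentAnalysisResponse response = textractClient.getDocumentAnalysis(request);
    allBlocks.addAll(response.blocks()); // concatenate; each batch is not a document page
    token = response.nextToken();
} while (token != null);
// Only now split allBlocks into document pages, e.g. by each Block's page() attribute.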
I am creating a JavaFX application for an Openfire chat client.
I am using Smack 4.1 RC1 to connect to the server.
I am able to connect to the server, send presence information to others, and send messages to other users as well.
However, I am not able to iterate through the roster.
When I get the roster object and debug it, it shows a hash map of 3 roster entries, which means the roster is being loaded into the Roster object. However, when I use roster.getEntries() to store the entries in a Collection of RosterEntry, it contains 0 objects; even roster.getEntryCount() returns 0, although I can see the roster user names in the debug view.
try {
    config = XMPPTCPConnectionConfiguration.builder()
            .setUsernameAndPassword(mUserName + "@" + Domain, mPassword)
            .setServiceName(HostName)
            .setHost(HostName)
            .setPort(PortName)
            .setResource(Resource)
            .setSecurityMode(ConnectionConfiguration.SecurityMode.disabled)
            .build();
    mXmppConnection = new XMPPTCPConnection(config);
    mXmppConnection.connect();
    mXmppConnection.login();

    Presence presence;
    if (mPresence) {
        presence = new Presence(Presence.Type.available);
    } else {
        presence = new Presence(Presence.Type.unavailable);
    }
    presence.setStatus("On Smack");
    XMPPConnection conn = (XMPPConnection) mXmppConnection;

    Chat chat = ChatManager.getInstanceFor(mXmppConnection).createChat("monika@ipaddress");
    chat.sendMessage("Howdy from smack!");

    // Send the packet (assume we have an XMPPConnection instance called "con").
    mXmppConnection.sendPacket(presence);
    System.out.println("Connected successfully");

    Roster roster = Roster.getInstanceFor(conn);
    Collection<RosterEntry> entries = roster.getEntries();
    int i = 0;
    for (RosterEntry entry : entries) {
        System.out.println(entry);
        i++;
    }
    System.out.println("Rosters Count - " + i + roster.getEntryCount());
} catch (Exception e) {
    e.printStackTrace();
}
Has anyone encountered the same problem before?
You may have to check whether the roster is loaded before calling getEntries():
Roster roster = Roster.getInstanceFor(connection);
if (!roster.isLoaded())
    roster.reloadAndWait();
Collection<RosterEntry> entries = roster.getEntries();
Thanks to Deepak Azad. Here is the full code:
public void getRoaster(final Callback<Collection<RosterEntry>> callback) {
    final Roster roster = Roster.getInstanceFor(connection);
    if (!roster.isLoaded()) {
        try {
            roster.reloadAndWait();
        } catch (SmackException.NotLoggedInException | SmackException.NotConnectedException
                | InterruptedException e) {
            e.printStackTrace();
        }
    }
    Collection<RosterEntry> entries = roster.getEntries();
    for (RosterEntry entry : entries) {
        android.util.Log.d(AppConstant.PUBLIC_TAG, entry.getName());
    }
}
I have just solved this problem. I'm using Openfire as the XMPP server. I checked the "Subscription" field of the users in the roster and it was "None". After changing it to "Both", it worked, and the entries are now being fetched.
Hope it helps!
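For reference, the subscription can also be established from code rather than the Openfire admin console. A minimal sketch with Smack 4.1 (the JID "monika@example.com" and the display name are placeholders):

try {
    Roster roster = Roster.getInstanceFor(connection);
    // createEntry sends the roster set plus a presence subscription request;
    // the contact (or the server, if configured to auto-accept) must approve
    // it before the subscription becomes "both".
    roster.createEntry("monika@example.com", "Monika", null);
} catch (Exception e) {
    e.printStackTrace();
}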
On a GitHub project, when we go to any branch's page, we can see graphs describing how many commits the branch is ahead of/behind master.
How can we determine those ahead/behind numbers using JGit?
I used the BranchTrackingStatus class for this, but the BranchTrackingStatus object is always null.
Here is the code I used:
private static List<Integer> getCounts(Repository repository, String branchName) throws IOException {
    BranchTrackingStatus trackingStatus = BranchTrackingStatus.of(repository, branchName);
    List<Integer> counts = new ArrayList<Integer>();
    if (trackingStatus != null) {
        counts.add(trackingStatus.getAheadCount());
        counts.add(trackingStatus.getBehindCount());
    } else {
        counts.add(0);
        counts.add(0);
    }
    return counts;
}

public void show(String repoName, String baseBranchName) throws IOException, GitAPIException {
    Repository repository = repoManager.openRepository(new Project.NameKey(repoName));
    List<Ref> call = new Git(repository).branchList().call();
    for (Ref ref : call) {
        List<Integer> counts = getCounts(repository, ref.getName());
        System.out.println("Commits ahead : " + counts.get(0));
        System.out.println("Commits behind : " + counts.get(1));
    }
}
BranchTrackingStatus.of() assumes that branchName denotes a local branch, either by its full name (e.g. refs/heads/master) or its short name (e.g. master). It returns null if the given branchName cannot be found, or if the tracking branch is not configured or does not exist.
To compare two arbitrary branches, you could adapt the BranchTrackingStatus code like so:
void calculateDivergence(Ref local, Ref tracking) throws IOException {
    try (RevWalk walk = new RevWalk(repository)) {
        RevCommit localCommit = walk.parseCommit(local.getObjectId());
        RevCommit trackingCommit = walk.parseCommit(tracking.getObjectId());
        walk.setRevFilter(RevFilter.MERGE_BASE);
        walk.markStart(localCommit);
        walk.markStart(trackingCommit);
        RevCommit mergeBase = walk.next();
        walk.reset();
        walk.setRevFilter(RevFilter.ALL);
        aheadCount = RevWalkUtils.count(walk, localCommit, mergeBase);
        behindCount = RevWalkUtils.count(walk, trackingCommit, mergeBase);
    }
}
Your code looks fine to me. I took some of it and added it as a sample in the jgit-cookbook to see if it works; locally, I see counts whenever there are actual differences to the remote repository.
Please note how BranchTrackingStatus works: it does not perform a remote request to fetch the latest commits from the remote side; it only compares what the local git repository has on the remote-tracking branches to the local branches.
That is, on a repository that has updates on the remote side, you probably need to do a git fetch first so that the remote-tracking branches are updated; only then will BranchTrackingStatus show values.
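A minimal sketch of that fetch-then-compare sequence (the repository path, the remote name "origin", and the branch "master" are placeholders):

static void printTrackingStatus() throws IOException, GitAPIException {
    try (Git git = Git.open(new File("/path/to/repo"))) {
        // Update the remote-tracking branches first...
        git.fetch().setRemote("origin").call();
        // ...so the comparison reflects the current remote side.
        BranchTrackingStatus status = BranchTrackingStatus.of(git.getRepository(), "master");
        if (status != null) {
            System.out.println("ahead: " + status.getAheadCount()
                    + ", behind: " + status.getBehindCount());
        }
    }
}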
I have a very unusual problem.
I'm trying to create a simple database (6 tables, 4 of which only have 2 columns).
I'm using an in-house database library which I've used in a previous project, and it does work there.
However, with my current project there are occasional bugs. Basically, the database isn't created correctly: it is added to the SD card, but when I access it I get a DatabaseException.
When I access the device from the desktop manager and try to open the database (with SQLite Database Browser v2.0b1), I get "File is not a SQLite 3 database".
UPDATE
I found that this happens when I delete the database manually from the SD card.
Since there's no way to stop a user from doing that, is there anything I can do to handle it?
CODE
public static boolean initialize() {
    boolean memory_card_available = ApplicationInterface.isSDCardIn();
    String application_name = ApplicationInterface.getApplicationName();
    if (memory_card_available) {
        file_path = "file:///SDCard/" + application_name + ".db";
    } else {
        file_path = "file:///store/" + application_name + ".db";
    }
    try {
        uri = URI.create(file_path);
        FileClass.hideFile(file_path);
    } catch (MalformedURIException mue) {
        // Ignored: file_path is built from known-good components
    }
    return create(uri);
}

private static boolean create(URI db_file) {
    boolean response = false;
    try {
        db = DatabaseFactory.create(db_file);
        db.close();
        response = true;
    } catch (Exception e) {
        // Ignored: create() reports failure via the return value
    }
    return response;
}
My only suggestion is to keep a default database in your assets: if there is a problem with the one on the SD card, attempt to recreate it by copying the default one.
Not a very good answer, I expect.
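A minimal sketch of that idea, assuming the JSR-75 FileConnection API and a default database bundled as a resource named /default.db (the resource name and target path are placeholders; FileClass is reused from the question):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;

private static void restoreDefaultDatabase(String targetPath) throws IOException {
    // Read the pristine database bundled with the application...
    InputStream in = FileClass.class.getResourceAsStream("/default.db");
    // ...and write it over the missing or corrupt one on the SD card.
    FileConnection fc = (FileConnection) Connector.open(targetPath, Connector.READ_WRITE);
    try {
        if (!fc.exists()) {
            fc.create();
        }
        OutputStream out = fc.openOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        out.close();
    } finally {
        fc.close();
        in.close();
    }
}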
Since it looks like your problem is that the user is deleting your database, just make sure to catch exceptions when you open it (or access it, wherever you're getting the exception):
try {
    URI uri = URI.create("file:///SDCard/Databases/database1.db");
    sqliteDB = DatabaseFactory.open(uri);
    Statement st = sqliteDB.createStatement("CREATE TABLE 'Employee' ( " +
            "'Name' TEXT, " +
            "'Age' INTEGER )");
    st.prepare();
    st.execute();
} catch (DatabaseException e) {
    System.out.println(e.getMessage());
    // TODO: decide if you want to create a new database here, or
    // alert the user if the SDCard is not available
}
Note that even though it's probably unusual for a user to delete a private file that your app creates, it's perfectly normal for the SD card to be unavailable because the device is connected to a PC via USB. So you really should always test for this condition (a file open error).
See this answer regarding checking for SDCard availability.
Also, read this about SQLite db storage locations, and make sure to review this answer by Michael Donohue about eMMC storage.
Update: SQLite Corruption
See this link describing the many ways SQLite databases can be corrupted. It definitely sounded to me like maybe the .db file was deleted but not the journal / WAL file. If that was the case, you could try deleting database1* programmatically before you create database1.db. But your comments seem to suggest it was something else. Perhaps you could look into the file-locking failure modes, too.
If you are desperate, you might try changing your code to use a different name (e.g. database2, database3) each time you create a new db, to make sure you're not picking up artifacts from the previous db.
I have an ASP.NET application that caches some business objects. When a new object is saved, I call Remove on the key to clear the cached objects; the new list should then be lazy-loaded the next time a user requests the data.
Except there is a problem: different clients see different views of the cache.
Two users are browsing the site.
A new object is saved by user 1 and the cache entry is removed.
User 1 sees the up-to-date view of the data.
User 2 is also using the site but, for some reason, does not see the new cached data after user 1 has saved the new object; they continue to see the old list.
This is a shortened version of the code:
public static JobCollection JobList
{
    get
    {
        if (HttpRuntime.Cache["JobList"] == null)
        {
            GetAndCacheJobList();
        }
        return (JobCollection)HttpRuntime.Cache["JobList"];
    }
}

private static void GetAndCacheJobList()
{
    using (DataContext context = new DataContext(ConnectionUtil.ConnectionString))
    {
        var query = from j in context.JobEntities
                    select j;
        JobCollection c = new JobCollection();
        foreach (JobEntity i in query)
        {
            Job newJob = new Job();
            ....
            c.Add(newJob);
        }
        HttpRuntime.Cache.Insert("JobList", c, null, Cache.NoAbsoluteExpiration,
            Cache.NoSlidingExpiration, CacheItemPriority.Default, null);
    }
}

public static void SaveJob(Job job, IDbConnection connection)
{
    using (DataContext context = new DataContext(connection))
    {
        JobEntity ent = new JobEntity();
        ...
        context.JobEntities.InsertOnSubmit(ent);
        context.SubmitChanges();
        HttpRuntime.Cache.Remove("JobList");
    }
}
Does anyone have any ideas why this might be happening?
Edit: I am using Linq2SQL to retrieve the objects, though I am disposing of the context.
I would ask you to make sure you do not have multiple production servers for load-balancing purposes. If you do, you will have to use some external dependency architecture for invalidating/removing the cache items.
That's because you don't synchronize your cache operations. You should lock when writing your list to the cache (possibly even build the list inside the lock) and also when removing it from the cache. Otherwise, even if reading and writing are individually synchronized, there's nothing to prevent the old list from being stored right after your call to Remove. Let me know if you need a code example.
I would also check, if you haven't already, that the old data they're seeing hasn't somehow been cached in ViewState.
You have to make sure that User 2 sent a new request. Maybe the content they saw came from their browser's cache, not the cache on your server.