I am having a hard time figuring out how to get workflow data from FileNet. I tried using the Process Engine and the Content Engine, but I am lost on where to look. Should I use PE or CE? And which part of the API?
I can already get the list of object stores from CE, and I can already get the list of search parameters and their data from PE, but I am lost on how to get the workflow step properties and their data, and possibly update them, through the Java API.
You need to query for the work items using the PE API. Assuming the work items are in a queue, create a query on that queue. Note that fetchType must be VWFetchType.FETCH_TYPE_STEP_ELEMENT for the query to return VWStepElement objects:
VWQueueQuery vwQueueQuery = yourQueue.createQuery(indexName, firstValues, lastValues, queryFlags, filter, substitutionVars, fetchType);
then
while (vwQueueQuery.hasNext()) {
    VWStepElement vwStepElement = (VWStepElement) vwQueueQuery.next();
    // lock the work item if you want to modify it
    vwStepElement.doLock(true);
    // once you have the step element, there are different ways to get properties
    String[] properties = vwStepElement.getParameterNames(); // all properties exposed for that queue
    // to get a specific property, use
    Object specificParameter = vwStepElement.getParameterValue("propName");
    // to set a value
    vwStepElement.setParameterValue(parameterName, parameterValue, compareValue);
    // finally, to save and dispatch to the next step
    vwStepElement.setSelectedResponse(response);
    vwStepElement.doSave(true);
    vwStepElement.doDispatch();
}
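For completeness, here is how the session, queue, and query are typically set up before the loop above. This is a minimal sketch, not a drop-in solution: the credentials, router URL, and queue name are placeholders, and depending on your PE version you may bootstrap the session via the CE URI instead.
import filenet.vw.api.*;

// Placeholders: replace with your credentials, router URL and queue name.
// Newer releases typically use vwSession.setBootstrapCEURI(...) plus logon() instead.
VWSession vwSession = new VWSession("user", "password", "routerURL");
VWQueue queue = vwSession.getQueue("MyQueue");
// Query everything in the queue, fetching step elements so they can be
// locked, modified and dispatched as shown above:
VWQueueQuery vwQueueQuery = queue.createQuery(
        null,        // indexName: null uses the default index
        null, null,  // firstValues / lastValues: no key range
        0,           // queryFlags: no options
        null,        // filter
        null,        // substitutionVars
        VWFetchType.FETCH_TYPE_STEP_ELEMENT);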
I am working on building streaming APIs for client/server communication using Axon and Server-Sent Events, and I am not sure whether it is possible to stream and identify multiple different events using the Axon query update emitter and a subscription query.
I am using Axon's QueryUpdateEmitter.emit to emit updates from a projection based on different events. The emitter emits in the projection, whereas the subscription query takes place in the REST API that is supposed to stream the server-sent events to the client.
For example,
I want to emit 3 different events for a use case that creates, updates, and deletes an entity.
I am wondering if I can emit different types of data from different events but still combine them in one stream, i.e. send the actual object upon entity create and update, but, since I don't have any entity/data to emit in the case of delete, can I send a simple message for delete instead?
I also want a way to specify the type of event while emitting, so that when the ServerSentEvent is built from the subscription query, I can specify the type/action (for example, differentiate between a create and an update event) along with the data.
The main idea is to emit different events and add them to one stream, despite knowing that not all events return exactly the same data (create and update vs. delete), as part of one subscription query, and to be able to accurately identify each event and tag it in the stream of ServerSentEvents with the appropriate event type.
Any ideas on how I can achieve this?
Here's how I am emitting an event upon creation using QueryUpdateEmitter:
@EventHandler
public void on(LibraryCreatedEvent event, @Timestamp Instant timestamp) {
    final LibrarySummaryEntity librarySummary = mapper.createdEventToLibrarySummaryEntity(event, timestamp);
    repository.save(librarySummary);
    log.debug("On {}: Saved the first summary of the library named {}", event.getClass().getSimpleName(), event.getName());
    queryUpdateEmitter.emit(
        AllLibrarySummariesQuery.class,
        query -> true,
        librarySummary
    );
    log.debug("emitted library summary: {}", librarySummary.getId());
}
Since I need to distinguish between create and update, I tried using GenericSubscriptionQueryUpdateMessage.asUpdateMessage upon the update event and added some metadata along with it, but I am not sure that is the right direction, as I don't know how to retrieve that information during the subscription query.
Map<String, String> map = new HashMap<>();
map.put("Book Updated", event.getLibraryId());
queryUpdateEmitter.emit(
    AllLibrarySummariesQuery.class,
    query -> true,
    GenericSubscriptionQueryUpdateMessage.asUpdateMessage(librarySummary).withMetaData(map));
Here's how I am creating subscription query:
SubscriptionQueryResult<List<LibrarySummaryEntity>, LibrarySummaryEntity> result =
    queryGateway.subscriptionQuery(
        new AllLibrarySummariesQuery(),
        ResponseTypes.multipleInstancesOf(LibrarySummaryEntity.class),
        ResponseTypes.instanceOf(LibrarySummaryEntity.class));
And here is the part where I am building the server-sent event:
(.event is where I want to specify the type of event - create/update/delete - and send the applicable data accordingly)
Flux<ServerSentEvent<LibrarySummaryResponseDto>> sseStream = result.initialResult()
    .flatMapMany(Flux::fromIterable)
    .map(value -> mapper.libraryEntityToResponseDto(value))
    .concatWith((streamingTimeout == -1)
        ? result.updates().map(value -> mapper.libraryEntityToResponseDto(value))
        : result.updates().take(Duration.ofMinutes(streamingTimeout))
              .map(value -> mapper.libraryEntityToResponseDto(value)))
    .log()
    .map(created -> ServerSentEvent.<LibrarySummaryResponseDto>builder()
        .id(created.getId())
        .event("library creation")
        .data(created)
        .build())
    .doOnComplete(() -> log.info("streaming completed"))
    .doFinally(signal -> result.close());
As long as the object you return matches the expected update type when making the subscription query, you should be good!
Note that this means you will have to make a response object that can fit all your scenarios. Whether that response is something you'd emit as the update (through the QueryUpdateEmitter) or something you map to where you return the subscription query is a different question, though.
Ideally, you'd decouple your internal messages from what you send outward, as with SSE. To move toward a more specific solution, you can benefit from the fact that the update stream is a Flux: simply attach mapping operations to adjust the responses emitted by the QueryUpdateEmitter to your desired SSE format.
Concluding, the short answer is "yes, you can," as long as the emitted response object matches the expected update type when dispatching the subscription query on the QueryGateway. A sketch of the wrapper idea follows.
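To illustrate, here is a minimal sketch of that idea. LibraryUpdateDto and its Action enum are made-up names for this example, not part of your code or the Axon API; the point is that the wrapper becomes the single update type for the subscription query, so create, update, and delete can all travel down one stream.
// Hypothetical wrapper used as the subscription query's update type.
public class LibraryUpdateDto {
    public enum Action { CREATED, UPDATED, DELETED }

    private final Action action;
    private final LibrarySummaryEntity summary; // null for DELETED
    private final String libraryId;

    public LibraryUpdateDto(Action action, LibrarySummaryEntity summary, String libraryId) {
        this.action = action;
        this.summary = summary;
        this.libraryId = libraryId;
    }

    public Action getAction() { return action; }
    public LibrarySummaryEntity getSummary() { return summary; }
    public String getLibraryId() { return libraryId; }
}

// In the projection, e.g. on delete, where there is no entity left to send
// (assumes the delete event exposes the library id, as in your metadata example):
queryUpdateEmitter.emit(
    AllLibrarySummariesQuery.class,
    query -> true,
    new LibraryUpdateDto(LibraryUpdateDto.Action.DELETED, null, event.getLibraryId()));

// The subscription query then expects the wrapper as its update type,
// while the initial result can stay a list of entities:
SubscriptionQueryResult<List<LibrarySummaryEntity>, LibraryUpdateDto> result =
    queryGateway.subscriptionQuery(
        new AllLibrarySummariesQuery(),
        ResponseTypes.multipleInstancesOf(LibrarySummaryEntity.class),
        ResponseTypes.instanceOf(LibraryUpdateDto.class));

// And the SSE builder can set the event type from the action, e.g.
// .event(update.getAction().name().toLowerCase()).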
What would a Cosmos DB stored procedure look like that sets the PumperID field of every record to a default value?
We need to do this to repair some data, so the procedure should visit every record that has a PumperID field (not all docs have one) and set it to a default value.
Assuming a one-time data maintenance task, arguably the simplest solution is to create a single-purpose .NET Core console app and use the SDK to query for the items that require changes and perform the updates. I've used this approach to rename properties, for example. It works for any Cosmos database and doesn't require deploying stored procedures or anything else; a hedged sketch follows this paragraph.
Ideally, the app is designed to be idempotent so it can be run multiple times, in case several passes are required to catch new data coming in. If the item count is large, you could optionally use the SDK operations to scale up throughput on start and scale back down when finished. For performance, run it close to the endpoint, on an Azure virtual machine or Function.
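A minimal sketch of that console approach, assuming the v3 .NET SDK (Microsoft.Azure.Cosmos). The endpoint, key, database/container names, the MyType shape, and its Id/PartitionKey property names are all placeholders you would adapt:
using Microsoft.Azure.Cosmos;

// Top-level statements (C# 9+); placeholders throughout.
var client = new CosmosClient("https://myaccount.documents.azure.com:443/", "myAuthKey");
var container = client.GetContainer("myDatabase", "myContainer");

// Only fetch documents that actually carry a PumperID field.
var query = new QueryDefinition("SELECT * FROM c WHERE IS_DEFINED(c.PumperID)");
var iterator = container.GetItemQueryIterator<MyType>(query);
while (iterator.HasMoreResults)
{
    foreach (var item in await iterator.ReadNextAsync())
    {
        item.PumperID = "some default value";
        await container.ReplaceItemAsync(item, item.Id, new PartitionKey(item.PartitionKey));
    }
}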
For scenarios where you want to iterate through every item in a container and update a property, the best way to accomplish this is to use the Change Feed Processor and run the operation in an Azure Function or on a VM. See Change Feed Processor to learn more, and the examples to get started.
With Change Feed you will want to start reading from the beginning of the container. To do this, see Reading Change Feed from the beginning.
Then, within your delegate, you read each item off the change feed, check its value, and call ReplaceItemAsync() to write it back if it needs to be updated.
static async Task HandleChangesAsync(IReadOnlyCollection<MyType> changes, CancellationToken cancellationToken)
{
    Console.WriteLine("Started handling changes...");
    foreach (MyType item in changes)
    {
        if (item.PumperID == null)
        {
            item.PumperID = "some value";
            // call ReplaceItemAsync() to persist the change, etc.
        }
    }
    Console.WriteLine("Finished handling changes.");
}
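The write-back inside that delegate might look like the following sketch. It assumes "container" is a Microsoft.Azure.Cosmos.Container for the monitored container, and that MyType exposes its id and partition key; those property names are assumptions for illustration:
static async Task WriteBackAsync(Container container, MyType item, CancellationToken cancellationToken)
{
    // Set the default and replace the document in place.
    item.PumperID = "some default value";
    await container.ReplaceItemAsync(
        item,
        item.Id,
        new PartitionKey(item.PartitionKey),
        cancellationToken: cancellationToken);
}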
I'm researching how to use Realm DB for my new project and I have run into some issues. Please share your experience from working with Realm DB. Sorry about my long question list.
First, please refer to my sample code.
Dog class
Dog.h
@interface Dog : RLMObject
@property NSString *name;
@property NSInteger age;
@end
RLM_ARRAY_TYPE(Dog) // define RLMArray<Dog>
And the Person class
Person.h
@interface Person : RLMObject
@property NSString *name;
@property NSInteger age;
// to-many relationship
@property RLMArray<Dog *><Dog> *dogs;
@end
RLM_ARRAY_TYPE(Person) // define RLMArray<Person>
Then I create data for persons and dogs (1,000,000 records), where 1 person has 1 to many dogs. And I got stuck on some cases:
How do I get the people who have a dog named "Rex"? I researched this but found no guide for Objective-C.
The performance of deleting an object seems to be slow.
I tried to delete every person named "Ethan", and performance is slow; the app crashes when it has executed half of the list. I guess I used an incorrect way to delete objects.
RLMResults *people = [Person objectsWhere:@"name == 'Ethan'"];
// Get the default Realm
RLMRealm *realm = [RLMRealm defaultRealm];
[realm beginWriteTransaction];
for (int i = 0; i < people.count; i++) {
    Person *aPerson = [people objectAtIndex:i];
    [realm deleteObject:aPerson];
}
[realm commitWriteTransaction];
The result is >52,000 records with the name "Ethan", and the app only deletes half of them (26,000).
I don't know how to delete records with a condition in Realm. For my question #2, I would like to write code like this:
[Person deleteWhereObject:@"name = 'Ethan'"];
It is not yet clear how to rename, delete, or add a new column to the DB after it is created (apart from the simplest way, which is to delete the DB and recreate it).
The tool for browsing the data file created on the desktop (Realm Browser) doesn't provide much flexibility in querying data. It only allows browsing through the data, not querying with a condition. Please guide me on querying data with this tool if I missed something.
For troubleshooting, I may have an existing DB from a client app that I want to import into my project to troubleshoot the client's bugs. How can I do that with Realm DB?
After inserting 1,000,000 records into the Person and Dog tables, the DB size is 52.8 MB. But the DB size increases to 92.3 MB after I delete all the data:
// delete all objects
[realm beginWriteTransaction];
[realm deleteAllObjects];
[realm commitWriteTransaction];
Then I insert the data again and the file size continues to increase. I don't know what's wrong with my steps.
Hope to get your support soon!
RLMResults *people = [Person objectsWhere:@"ANY dogs.name == 'Rex'"];
Use deleteObjects: with the retrieved RLMResults instead of deleting every single object yourself (see the first sketch after this list). If you enumerate, I'd recommend using NSFastEnumeration with for (… in …). Note that RLMResults auto-update, so if you make concurrent changes, you can run into surprises.
See my answer to question 2.
Change your schema, bump the schemaVersion of your RLMRealmConfiguration, and provide a migration block if you rename properties or change their type (see the second sketch after this list).
This isn't possible yet, but it is still in the making. It is tracked by issue #28 in the realm-browser-osx repo.
Put the realm file in the app bundle and copy it at runtime into the user data directory (e.g. RLMRealm.defaultConfiguration.path) if you want to be able to write to the file.
The file size blows up because no compaction is happening. You can enforce compaction by writing a compacted copy with writeCopyToPath:. You could do this e.g. at app start, because otherwise it can be non-trivial to make sure that all RLMRealm instances are torn down first.
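For question 2, a minimal sketch of the bulk delete (deleteObjects: accepts the RLMResults directly, so no index loop is needed):
RLMRealm *realm = [RLMRealm defaultRealm];
RLMResults *people = [Person objectsWhere:@"name == 'Ethan'"];
[realm beginWriteTransaction];
[realm deleteObjects:people];
[realm commitWriteTransaction];
And for question 4, a hedged sketch of a migration; the schema version number and the fullName property are made up for illustration:
RLMRealmConfiguration *config = [RLMRealmConfiguration defaultConfiguration];
config.schemaVersion = 2; // bump whenever the schema changes
config.migrationBlock = ^(RLMMigration *migration, uint64_t oldSchemaVersion) {
    if (oldSchemaVersion < 2) {
        [migration enumerateObjects:Person.className block:^(RLMObject *oldObject, RLMObject *newObject) {
            // e.g. carry data over into a renamed property;
            // added/removed properties are handled automatically
            newObject[@"fullName"] = oldObject[@"name"];
        }];
    }
};
[RLMRealmConfiguration setDefaultConfiguration:config];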
I have a primary node in my database called 'questions'. When I create a ref to that node and bring it into my project with $asObject(), I can modify the individual questions and $save() the collection without any problems. However, as soon as I try to limit the object, by priority, the $save() deletes everything off the object!
this works fine:
db.questions = $firebase(fb.questions).$asObject();
// later :
db.questions.$save();
// db.questions is an object with many 'questions', which I can edit and resave as I please
but as soon as I switch my code to this:
db.questions = $firebase(fb.questions.startAt(auth.user.id).endAt(auth.user.id)).$asObject();
// later :
db.questions.$save();
// db.questions is an empty firebase object without any 'questions'!
Is there some limitation on limited objects (pun not intended) and their ability to be changed and saved? The save actually writes the updates to the questions to the database, but somehow nukes the local $firebase object...
First line of synchronized arrays ($asArray) documentation:
Synchronized arrays should be used for any list of objects that will be sorted, iterated, and which have unique ids.
First line of synchronized objects ($asObject) documentation:
Objects are useful for storing key/value pairs, and singular records that are not used as a collection.
As demonstrated, if you are going to work with a collection and employ limit, it would behoove you to use a tool designed for collections (i.e. $asArray).
If you were to recreate the behavior of $save using the Firebase SDK, it would look like this:
var ref = new Firebase(URL).limit(10);
// ref.set(data); // throws an error!
ref.ref().set(data); // replaces the entire path; same as $save
Thus, the behavior here exactly matches the SDK. You cannot, technically, call set() on a query instance, and it wouldn't make sense anyway. What does limit(10) mean to a JSON object? If you call set, which 10 unordered keys should be set? There is no correlation here; limit() really only makes sense for a collection of data, not a list of key/value pairs.
Hope that helps.
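As a minimal sketch of the $asArray route, hedged against the AngularFire-era API used in your question ($getRecord and the someKey variable are illustrative placeholders):
var list = $firebase(fb.questions.startAt(auth.user.id).endAt(auth.user.id)).$asArray();
// later, after editing a single question:
var item = list.$getRecord(someKey); // look up one record by its key
item.title = "updated title";
list.$save(item); // writes back just this record, not the whole path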
My problem, simplified:
I have a DataGrid with a dataProvider "documents".
A column of the DataGrid has a labelFunction that takes the project_id field of the document and returns the project name, from a bindable variable "projects".
Now, I dispatch the events to download the documents and the projects from the server, but if the documents get downloaded before the projects, the label function throws an error (no "projects" variable).
Therefore, I must serialize the commands being executed: the getDocuments command must execute only after the getProjects command.
In the real world, though, I have dozens of resources being downloaded, and those commands are not always grouped together (so I can't, for example, execute the second command from the onSuccess() method of the first, because they must not always be executed together).
I need a simple solution. I need an idea.
If I understand you correctly, you need to serialize the replies from the server. I have done that using AsyncToken.
The approach: before you call the remote function, add a "token" to it, for instance an id. The reply from the server for that particular call will then include that token. That way you can keep several calls separate and create chains of remote calls.
It's quite cool actually:
private var service:RemoteObject;
// ...
var call:AsyncToken = service.theMethod.send();
call.myToken = "serialization id";

private function onResult(event:ResultEvent):void
{
    // Fetch the serialization id and do something with it
    var serId:String = event.token.myToken;
}
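Applied to your case, a minimal sketch of the chaining. getProjects and getDocuments are the remote methods from your question; the ArrayCollection casts and the projects/documents variables are assumptions for illustration:
// Kick off getProjects first; only when its reply arrives do we request the documents.
var projectsCall:AsyncToken = service.getProjects();
projectsCall.myToken = "projects";

private function onResult(event:ResultEvent):void
{
    if (event.token.myToken == "projects")
    {
        projects = event.result as ArrayCollection; // the labelFunction is now safe
        var documentsCall:AsyncToken = service.getDocuments();
        documentsCall.myToken = "documents";
    }
    else if (event.token.myToken == "documents")
    {
        documents = event.result as ArrayCollection;
    }
}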