I used the following code in Alfresco 4.1, but after upgrading to Alfresco 4.2 it stopped working and throws an exception like "org.activiti.engine.ActivitiObjectNotFoundException: Process instance activiti$401 doesn't exist". I've checked the process id in the database and it does exist. Any ideas how to create attachments?
The failing code is below:
// 1. find task by params
WorkflowTask task = workflowService.findTask(roomId, assignee, PrintOrderModel.BASKET_TASK);
// 2. create attachment
Attachment attachment = taskService.createAttachment("url", null, instaceId, name, "", url);
Activiti itself does not use the "activiti$" prefix; it is only used by the Alfresco APIs. When calling TaskService.createAttachment, use
Attachment attachment = taskService.createAttachment("url", null, "401", name, "", url);
and not activiti$401.
I have been using .NET Core and the RabbitMQ client to publish messages to a queue. I am using the official RabbitMQ Docker image, and I can verify that the BasicPublish call works and that the message is delivered. However, when I open the admin dashboard, I do not see any queues or messages there.
This is the first time I am using RabbitMQ, so can someone please point out where I am making a mistake? I am also pasting the code for reference.
channel.QueueDeclare(queue: queueName, durable: false, exclusive: false, autoDelete: false, arguments: null);
var message = JsonConvert.SerializeObject(publishModel);
var body = Encoding.UTF8.GetBytes(message);
IBasicProperties properties = channel.CreateBasicProperties();
properties.Persistent = true;
properties.DeliveryMode = 2;
channel.ConfirmSelect();
channel.BasicPublish(exchange: "", routingKey: queueName, mandatory: true, basicProperties: properties, body: body);
channel.WaitForConfirmsOrDie();
channel.ConfirmSelect();
RabbitMQ Admin Dashboard
Let me know if more details are required.
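For reference, here is a minimal sketch (assuming the same channel and queueName as in the code above) that asks the broker itself how many messages are sitting in the queue; if this reports the message but the management dashboard shows nothing, it is worth checking whether the dashboard is pointed at a different broker instance or virtual host than the one the publisher connects to.
// Minimal sketch, assuming the channel and queueName from the code above.
// QueueDeclarePassive throws if the queue does not exist; otherwise it
// returns the current message and consumer counts for that queue.
var queueInfo = channel.QueueDeclarePassive(queueName);
Console.WriteLine($"Queue '{queueName}': {queueInfo.MessageCount} message(s), {queueInfo.ConsumerCount} consumer(s)");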
I have looked everywhere and I can't find out how to do this; I'm so frustrated...
How can I allow the user to send (via email) the SQLite db file?
That's it in a nutshell. I can convert it to a string and attach that, but I want to send the actual db file. And I'm using a new phone that doesn't have an external SD card.
The app is just a form that the user fills out, which is then saved to a SQLite database. That works wonderfully, as does printing the db to a string (text) and then sending it. But I want the user to email the actual db file (so I can use C# to read it, process it, and "recreate" a real form).
Or should I be using something other than SQLite?
Edit: This is as far as I've gotten. It seems to work, but it does not actually attach the file, or rather the attached file is "blank/empty". The debug log says "no such file or directory". Screenshot of the debug log here: http://imgur.com/oyzdtuJ
//trying again to send a SQL db file
//this seems to work and shows that it's attaching a file, but the file is empty so it won't attach
//gmail will say "can't attach empty file"
private void sendFile(String email) {
    File myFile = this.getFileStreamPath("testresults.db");
    if (myFile != null) {
        Log.d("LOG PRINT SHARE DB", "File found, here is the file location: " + myFile.toString());
    } else {
        Log.w("Tag", "file not found!");
    }
    Uri contentUri = FileProvider.getUriForFile(this, "com.columbiawestengineering.columbiawest.MainActivity", myFile);
    Log.d("LOG PRINT SHARE DB", "contentUri got, here is contentUri: " + contentUri.toString());
    //grant permission to the app with package "com.columbiawestengineering.columbiawest", e.g. before starting the other app via intent
    this.grantUriPermission("com.columbiawestengineering.columbiawest", contentUri, Intent.FLAG_GRANT_WRITE_URI_PERMISSION | Intent.FLAG_GRANT_READ_URI_PERMISSION);
    Log.d("LOG PRINT SHARE DB", "permission granted, here is contentUri: " + contentUri.toString());
    Intent shareIntent = new Intent();
    shareIntent.setAction(Intent.ACTION_SEND);
    shareIntent.setType("application/octet-stream");
    shareIntent.putExtra(Intent.EXTRA_SUBJECT, "blaaa subject");
    String[] to = { email };
    shareIntent.putExtra(Intent.EXTRA_EMAIL, to);
    shareIntent.putExtra(Intent.EXTRA_TEXT, "blah blah message");
    shareIntent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
    shareIntent.putExtra(Intent.EXTRA_STREAM, contentUri);
    startActivityForResult(Intent.createChooser(shareIntent, "Send mail..."), 1252);
    //revoke permissions
    this.revokeUriPermission(contentUri, Intent.FLAG_GRANT_WRITE_URI_PERMISSION | Intent.FLAG_GRANT_READ_URI_PERMISSION);
}
See this answer: Android Utility to send sqlite db to server
You could do this any number of ways. I'd say posting it to a web service is easiest. If you can only use email, then I'd compress and encode it and attach it to an email, but that sounds painful.
Solved. FileProvider cannot access the database directory. The db file must be copied to the files directory before it is attached. See the solution here: Android: FileProvider "Failed to find configured root"
I wanted to use the Google Drive API with a simple WEB API 2 project.
Somehow GoogleWebAuthorizationBroker.cs is missing.
What I use:
Visual Studio 2013 Update 4
Empty Template with WEB API
My steps:
Creating the empty project including WEB API
Building the project
Updating packages via the NuGet Package Manager
Install-Package Google.Apis.Drive.v2 (using this guide: https://developers.google.com/drive/web/quickstart/quickstart-cs)
Copy and paste the code from the above link into a clean API controller:
public IEnumerable<string> Get()
{
    UserCredential credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
        new ClientSecrets
        {
            ClientId = "228492645857-5599mgcfnhrr74a7er1do1chpam4rnbt.apps.googleusercontent.com",
            ClientSecret = "onoyJQaUazQK4VsKUjD63sDu",
        },
        new[] { DriveService.Scope.Drive },
        "user",
        CancellationToken.None).Result;

    // Create the service.
    var service = new DriveService(new BaseClientService.Initializer()
    {
        HttpClientInitializer = credential,
        ApplicationName = "Drive API Sample",
    });

    File body = new File();
    body.Title = "My document";
    body.Description = "A test document";
    body.MimeType = "text/plain";

    byte[] byteArray = System.IO.File.ReadAllBytes(@"C:\Projects\VS\DataAnime\DataAnime\document.txt");
    System.IO.MemoryStream stream = new System.IO.MemoryStream(byteArray);

    FilesResource.InsertMediaUpload request = service.Files.Insert(body, stream, "text/plain");
    request.Upload();

    File file = request.ResponseBody;
    return new string[] { file.Id, "value2" };
}
Building
6.1 Error: GoogleWebAuthorizationBroker.cs is missing
6.2 Google shows the following error in the browser:
That’s an error.
Error: redirect_uri_mismatch
Application: Project Default Service Account
You can email the developer of this application at: xxxx@gmail.com
The redirect URI in the request: http://example.com:63281/authorize/ did not match a registered
redirect URI.
http://example.com:63281/authorize/ was neither the URL I am using for my project nor the URL I registered in my developer console (the port shown in this error changes every time I run the project).
Does anyone have an idea why that is?
No other sources helped fix this weird issue.
I solved it by creating a new project on https://console.developers.google.com for a native application instead of a web client project, even though I am using a web client.
There is just one weird thing:
If I debug my code, it still says that GoogleWebAuthorizationBroker.cs is missing.
Without debugging, I can do everything I want.
Thank you very much for your help.
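For anyone comparing notes, here is a minimal sketch of the same authorization call when the credentials created in the console are of the native/installed-application type; the client_secrets.json file name and the FileDataStore folder name are hypothetical placeholders, not part of the original project.
// Minimal sketch, assuming installed-application credentials exported from the
// developer console as client_secrets.json (file and data-store names are hypothetical).
// Requires the Google.Apis.Auth.OAuth2, Google.Apis.Util.Store, System.IO and
// System.Threading namespaces.
UserCredential credential;
using (var stream = new FileStream("client_secrets.json", FileMode.Open, FileAccess.Read))
{
    credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
        GoogleClientSecrets.Load(stream).Secrets,
        new[] { DriveService.Scope.Drive },
        "user",
        CancellationToken.None,
        new FileDataStore("Drive.Api.Sample.Store")).Result;
}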
I'm attempting to upload a file to my Amazon S3 bucket and all the code examples give me something like this:
///////////////////////////////////
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
string accessKey = "put your access key here!";
string secretKey = "put your secret key here!";
AmazonS3Config config = new AmazonS3Config();
config.ServiceURL = "objects.dreamhost.com";
AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(
    accessKey,
    secretKey,
    config
);
////////////////////////////////////
The problem is that the AmazonS3 type can't be resolved (the assembly or reference isn't found), and when I click the options to help bind it, the only option that pops up is to generate a class for AmazonS3. I have the AWS SDK for .NET installed through NuGet. Besides "AmazonS3", everything else references fine.
////////////////UPDATE///////////////////
This has not been fun; it has wasted the first half of my day :( I'm going to post this so that maybe someone else can use the code. I don't know why I couldn't find anything on the internet about it. Here it is:
AmazonS3Config config = new AmazonS3Config();
config.ServiceURL = "http://s3-us-west-2.amazonaws.com(your service URL)";
Amazon.S3.IAmazonS3 s3Client = AWSClientFactory.CreateAmazonS3Client("Id", "Key", config);
String S3_KEY = "Test.txt";
PutObjectRequest request = new PutObjectRequest();
request.BucketName = "Your Bucket Name";
request.Key = S3_KEY;
request.ContentBody = "This is body of S3 object.";
s3Client.PutObject(request);
It appears that in the AWS console app template that ships with the AWS SDK, they do not actually use the AmazonS3 class directly; what they do is use the IAmazonS3 interface instead. Try replacing AmazonS3 with IAmazonS3 and it should work.
I am facing a problem while creating components through the TOM API using .NET/COM Interop.
Actual issue:
I have 550 components to be created through a custom page. I am able to create between 400 and 470 components, but after that it fails with an error message saying:
Error: Thread was being aborted.
Any idea / suggestion why it is failing?
OR
Is there any restriction in Tridion 2009?
UPDATE 1:
As per @user978511's request, below is the error from the Application event log:
Event code: 3001
Event message: The request has been aborted.
...
...
Process information:
Process ID: 1016
Process name: w3wp.exe
Account name: NT AUTHORITY\NETWORK SERVICE
Exception information:
Exception type: HttpException
Exception message: Request timed out.
...
...
...
UPDATE 2:
@Chris: This is my common function, which is called in a loop, passing a list of params. I am using the Interop DLLs here.
public static bool CreateFareComponent(.... list of params ...)
{
    TDSE mTDSE = null;
    Folder mFolder = null;
    Component mComponent = null;
    bool flag = false;
    try
    {
        mTDSE = TDSEInitialize();
        mComponent = (Component)mTDSE.GetNewObject(ItemType.ItemTypeComponent, folderID, null);
        mComponent.Schema = (Schema)mTDSE.GetObject(constants.SCHEMA_ID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadAll);
        mComponent.Title = compTitle;
        ...
        ...
        ...
        ...
        mComponent.Save(true);
        flag = true;
    }
    catch (Exception ex)
    {
        CustomLogger.Error(String.Format("Logged User: {0} \r\n Error: {1}", GetRemoteUser(), ex.Message));
    }
    return flag;
}
Thanks in advance.
Sounds like a timeout, most likely in IIS, which is hosting your custom page.
Are you creating them all in one synchronous request? Because that is indeed likely to time out.
You could instead create them in batches, or make sure your operations are done asynchronously and then poll the status regularly.
The easiest would be to create only, say, 10 components in one request, wait for it to finish, and then create another 10 (perhaps with a nice progress bar? :))
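For illustration only, a rough sketch of what that batching could look like on the server side; the list of fare ids and the createOne callback are hypothetical placeholders, not the actual custom-page code.
// Rough sketch of the batching idea (hypothetical names, not the actual page code).
// The page calls this once per request with the index it has reached, so no single
// request runs long enough to hit the IIS execution timeout.
public static int CreateBatch(IList<string> fareIds, int startIndex, int batchSize, Action<string> createOne)
{
    int end = Math.Min(startIndex + batchSize, fareIds.Count);
    for (int i = startIndex; i < end; i++)
    {
        createOne(fareIds[i]);   // e.g. a thin wrapper around the CreateFareComponent helper
    }
    // The caller stores the returned index and passes it back with the next request.
    return end;
}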
How are you calling the TDSE object? I would like to mention the "Marshal.ReleaseComObject" procedure here: failing to release COM objects can lead to enormous memory leaks.
Here is code for creating a component:
private Component NewComponent(string componentName, string publicationID, string parentID, string schemaID)
{
    Publication publication = (Publication)mTdse.GetObject(publicationID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Folder folder = (Folder)mTdse.GetObject(parentID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Schema schema = (Schema)mTdse.GetObject(schemaID, EnumOpenMode.OpenModeView, publicationID, XMLReadFilter.XMLReadContext);
    Component component = (Component)mTdse.GetNewObject(ItemType.ItemTypeComponent, folder, publication);
    component.Title = componentName;
    component.Schema = schema;
    return component;
}
After that, please do not forget to release mTdse (in my case, the previously created TDSE object). Releasing the Component objects once you have finished working with them can also be useful.
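To make that concrete, here is a minimal sketch of the release pattern, assuming the NewComponent helper and the mTdse object shown above; Marshal.ReleaseComObject comes from System.Runtime.InteropServices.
// Minimal sketch of releasing the COM objects after use (assumes the NewComponent
// helper, the mTdse field and the ID variables from the code above).
Component component = null;
try
{
    component = NewComponent("Test component", publicationID, parentID, schemaID);
    component.Save(true);
}
finally
{
    if (component != null)
    {
        Marshal.ReleaseComObject(component);
    }
}
// Once the whole batch is finished, release the TDSE object as well:
Marshal.ReleaseComObject(mTdse);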
For large Tridion batch operations I always use a Console Application and run it directly on the server.
Use Console.WriteLine to write to the output window and Console.ReadLine as the last line of code in the app (so the window stays open). I also use Log4Net as the logger.
This is by far the best approach if you have access to a remote session on the server - or can ask an admin to run it for you and give you access to the log folder via a network share.
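As a rough illustration (class and message names are placeholders, and the log4net configuration is omitted), this is the shape of such a console app:
// Bare-bones skeleton of the console-app approach (placeholder names; log4net
// configuration omitted).
using System;
using log4net;

class BatchComponentCreator
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(BatchComponentCreator));

    static void Main()
    {
        Log.Info("Starting batch component creation...");
        // ... create the components here, logging progress as you go ...
        Console.WriteLine("Done.");
        Console.ReadLine();   // last line of code, so the window stays open
    }
}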
As per @Chris's suggestion, and as an immediate fix, I have changed the execution timeout in my web.config to 8000 seconds.
<httpRuntime executionTimeout="8000"/>
With this change, the custom page is able to cope for now.
If anyone has a better suggestion, please post it.