I have a rule on a folder that executes JavaScript code whenever a new document enters the folder. The issue is that the rule doesn't run automatically when a document enters the folder; I have to run it manually.
I have tried running the script in the background too. A rule on update works automatically; the problem is only with the creation or arrival of new documents in the folder. I am using Alfresco Community 4.2.f (Share).
Please advise.
Thanks.
I cannot recreate this problem in Alfresco Community Edition 4.2.f. Make sure that:
1. All of the rules are enabled.
2. The person putting the document in Folder 1 has permission to create new documents in Folder 2.
3. The criteria for the rules are valid.
4. The script in Folder 2 is actually running. The best way to validate this is to turn on the server-side JavaScript debugger by editing $TOMCAT_HOME/webapps/alfresco/WEB-INF/classes/log4j.properties, as shown below.
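Concretely, that is this single line in log4j.properties:

log4j.logger.org.alfresco.repo.web.scripts.AlfrescoRhinoScriptDebugger=on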
Alternatively, for #4, you could switch to an out-of-the-box action, such as another Move that moves the document to Folder 3. That's what my test does. If that works for you as it does for me, you can narrow your troubleshooting down to a problem with the custom script.
When running scripts fired by rules, you can't rely on search to find the new document, because indexing hasn't finished when the script runs. If SOLR is configured as the search engine, indexing is executed asynchronously, outside the repository, every 15 seconds. You may already know that you can get the name directly from the script node?
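For example, a minimal sketch of a rule script that reads the triggering node directly, with no search involved (the log message is only illustrative):

// In a rule-fired script, 'document' is bound to the node that triggered
// the rule, so its properties are available immediately -- no search
// (and therefore no index round-trip) is needed.
var name = document.properties["cm:name"];
logger.log("Rule fired for new document: " + name);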
I already made a pipeline, which was working fine. Suddenly it gives an error like:
[2015-12-18 02:39:08.091 GMT] ERROR system.core ISH-CORE-2368 Sites-SiteGenesis-Site core Storefront [uuid] [request-id]-0-00 [timestamp] "Error executing pipeline: Hello com.demandware.beehive.core.capi.pipeline.PipelineExecutionException:Pipeline not found (Hello) for current domain (Sites-SiteGenesis-Site)"
Does anybody know how to solve this?
In the event that your pipeline cannot be found for the selected domain, please go through and verify all of the following:
Double check Pipeline-Node naming
Pipeline URLs are generated from the pipeline name and your desired entry node. In this scenario, I would expect a file named Hello.xml in your cartridge's pipeline directory, with a start node named Start, which would be accessed via {instanceURL}/on/demandware.store/Sites-mySite-Site/Hello-Start
Try to force an upload of your cartridges
Occasionally the files on the server are not updated correctly when a save is made; to force an update, right-click your project, then click Demandware > Upload Cartridges.
Check your Cartridge Path
If you are using a shared instance, or your instance has been re-provisioned, you may need to check your cartridge path to be sure your custom cartridge(s) are still there.
Check your Code Versions
Occasionally you may increment or change your code version - if you do, make sure that the code version you target in Studio is the one that is active in Business Manager.
Tech Support
Should you still have issues after following the four steps above, please file a support ticket and the tech support team will be able to provide you with more assistance.
I was wondering what the easiest way would be to allow users to edit HTML and CSS files that were uploaded to the Meteor server, simulating what we can do when inspecting code in the browser, and then to save the changes. I'm not talking about dynamic editing of styles or simple DOM changes where we grab the element we want and change/add/remove attributes.
Or, putting it another way: how do we write and save files on the Meteor server by accessing the app that is running on those files?
That's an interesting question.
There is a relatively easy way to allow users to edit CSS and see the changes applied to the Meteor app: store the CSS in the database and apply it to the CSSOM when the page loads or whenever the user makes edits.
Here is a method to append arbitrary CSS.
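A minimal client-side sketch, assuming the CSS lives in a Mongo collection (the collection name cssDocs is hypothetical, and it assumes autopublish or an appropriate subscription):

// Collection holding the user-editable CSS documents.
var CssDocs = new Mongo.Collection('cssDocs');

// One <style> element that we keep up to date.
var styleEl = document.createElement('style');
document.head.appendChild(styleEl);

Tracker.autorun(function () {
  // Re-runs reactively whenever a document in the collection changes.
  var css = CssDocs.find().fetch().map(function (doc) {
    return doc.css;
  }).join('\n');
  styleEl.textContent = css;
});

Because Tracker.autorun is reactive, any edit a user saves to the collection is applied to the page immediately, which gives the live-preview feel of editing in the browser inspector.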
Check out Meteorpad. I think it writes the app files to and from the database, but it somehow manages to compile them and run the app, as you would in a normal development environment, by restarting the server or pushing code to the client. Quite clever, and I am also trying to learn more about how they did it.
I've developed an ASP.NET v4 web app which I am trying to get to write to a folder in the webroot.
For the life of me I cannot get Windows 7 to allow it to write a file; I get an "Access is Denied" error each time.
What I have tried on the folder is
attrib -r /foldernameandpath
attrib -r -s /foldernameandpath
Removed the check from the Read-only attribute in the folder properties
Changed the owner of the folder to the current user, who is also the admin
Changed the owner to Everyone
Added the Everyone user to the folder with all permissions
Changed the folder to the Public Documents directory
Changed to the root of the C drive (out of desperation)
Nothing seems to work and all I want to do is test that the write works.
Now I know the system works and writes fine, because I had this same web app writing the PDFs to the C drive without issue on another machine. On this second machine I just cannot get it to allow the write.
Oh, please, for the love of all things holy, put the keyboard and mouse down and stop what you are doing. Some of the actions in your list seriously frighten me.
All the things you list as your actions are most likely useless or desperately random at best.
As a web developer, you shouldn't have to start messing around changing the ownership of folders, especially not the web root folder normally used by the web server. When you do, you'll end up in a dark, cold and lonely place. Lord only knows what other desperate measures you have taken. For your own sake, I recommend you do a system restore or even a system reinstall to make sure you are in a healthy state.
Let's go back one step and try to sort out what you are doing.
First: what is the path you are trying to write to?
Second: are you running this application directly from within Visual Studio, or in IIS?
If it's the first option, then your current user account is the account that tries to write to the path in question. Do you have write permissions there?
One thing you can try is to start Visual Studio with elevated permissions and see if that works.
If you are running in IIS, it is the associated application pool account that needs write permissions.
Are you doing impersonation by any chance? If so, make sure the impersonated account has the proper permissions.
You should only write to a predefined folder within your app path, like ..\App_Data for example.
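As a quick sanity check, here is a minimal sketch that writes a test file to App_Data (the file name test.txt is hypothetical; the point is only to prove the write works):

using System.IO;
using System.Web.Hosting;

public static class WriteTest
{
    // Resolve App_Data relative to the application root and write a test file.
    public static void Run()
    {
        // HostingEnvironment.MapPath works inside and outside a request context.
        string path = HostingEnvironment.MapPath("~/App_Data/test.txt");
        File.WriteAllText(path, "write test");
    }
}

If this succeeds under IIS, the application pool account has write access to App_Data; if it throws, you know exactly which account and path to grant.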
The easiest way to find out which account you are running as, and what path you are trying to write to, is to fire up Sysinternals Process Monitor and find the entry with the ACCESS DENIED result. Open the properties for that entry and find out which account tried to do the write action. When you have determined that, right-click the folder in question and give the proper account write permissions.
I've been testing Alfresco for a few days but seem not to find a solution for this.
In Alfresco, when a user uploads a file to a folder where he has the Contributor role, everything works nicely, but he can also delete that file. I want to avoid that, so I have defined a rule that takes ownership of every file uploaded. The script is defined as
document.setOwner('admin');
This way the file can be deleted only by admin. But if the rule is active, the upload fails for every other user. Does anybody know what it takes to take ownership of a file in that situation using a script?
I am using Alfresco 4.0a
Thanks.
I guess you'd better do the following:
document.setOwner("");
Admin already has full authority in Alfresco, so there is no need to assign ownership to him; setting the owner to an empty string should be enough to keep the uploading user from deleting the file.
I've been messing around a bit with various solutions to what I see as a fairly common problem, but I've not yet been able to solve it in a satisfactory way.
What I wish to achieve is some kind of functionality where a user can upload new files, or select existing files to reuse them.
What I've been using so far is a combination of the filefield, filefield_sources, imce and ckeditor modules. I guess ckeditor isn't really important for the solution, but I need to be able to embed images from the archive somehow, and this is done with IMCE. Since I do not want everything to be accessible from the file browser, I created a subdirectory and set full access to it in the IMCE settings; let's call it default/files/site.
This worked fine as long as all file handling was done through IMCE, but when I uploaded files directly from the filefield, my files ended up in the default/files root. So I set up folders for my fields, for example default/files/site/movies for a field that allowed the .flv format. This worked fine too, as long as I didn't try to access the files through IMCE. It appears the folders created by filefield are not accessible from IMCE?
I'm also in a position where I need to support large uploads (200MB+), and from my experience in other projects, allowing file uploads through FTP is usually a life-saver. But from what I understand, IMCE won't support files that were not uploaded through Drupal in some way, since they are not present in the database (giving the message: "The selected file could not be used because the file does not exist in the database.").
I'm aware that I don't really have a clear question to my problem, but somehow I need to figure this out pretty fast. How would I preferably solve this? I'm aware that I'm not the first to have this problem, but I have not yet been able to find a nice and stable solution. What am I missing?
Also check this thread (http://drupal.org/node/438940) and the reference to John Locke's work at: http://www.freelock.com/blog/john-locke/2010-02/using-file-field-imported-files-drupal-drush-rescue
Well, I'm not personally familiar with IMCE off the top of my head, but if you need files that have been uploaded via FTP to be added to the files table, then my impulse would be to write a small module that lets the user click a button and kick off a batch process. (This assumes you are using Drupal 6, as the Batch API doesn't exist in 5.)
Said batch process would then iterate over all of the files in the appropriate directory (the one I assume you uploaded the files to), use file_copy() (from Drupal's file API) to copy the files to default/files/site, and then add said files to the files table, which is actually quite simple with drupal_write_record().
It might not even need to use the Batch API - it somewhat depends on whether you're uploading 10-30 really big files or 200-300 smaller ones.
For using the Batch API, I'd look at http://drupal.org/node/180528 - it has a fairly basic example of how the Batch API works, which basically consists of telling the API that you want to keep calling function_a, and then, inside function_a, updating your progress in the context array until you're done, at which point the batch process finishes. Then you just have whoever uploads the files via FTP hit a button on the website to move and register the files, along the lines of the sketch below.
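A rough Drupal 6 sketch of what that module code could look like (all mymodule_* names and the ftp_incoming directory are hypothetical; error handling and incremental batch progress are omitted):

<?php
// Submit handler (e.g. for the button): queue the batch.
function mymodule_register_ftp_files_submit($form, &$form_state) {
  batch_set(array(
    'title' => t('Registering FTP-uploaded files'),
    'operations' => array(array('mymodule_register_ftp_files_op', array())),
  ));
}

// Batch operation: copy each FTP-uploaded file and register it in the files table.
function mymodule_register_ftp_files_op(&$context) {
  // Directory the files were FTP'd into (hypothetical).
  $incoming = file_directory_path() . '/ftp_incoming';
  foreach (file_scan_directory($incoming, '.*') as $found) {
    $source = $found->filename; // full path in Drupal 6
    // Copy into the IMCE-visible directory; file_copy() updates $source
    // to point at the new location.
    if (file_copy($source, 'site', FILE_EXISTS_RENAME)) {
      $file = new stdClass();
      $file->uid = 1;
      $file->filename = basename($source);
      $file->filepath = $source;
      $file->filemime = file_get_mimetype($source);
      $file->filesize = filesize($source);
      $file->status = FILE_STATUS_PERMANENT;
      $file->timestamp = time();
      // Insert a row into the files table so IMCE will accept the file.
      drupal_write_record('files', $file);
    }
  }
  // A real implementation would track progress in $context['sandbox'] and
  // set this fractionally; here everything happens in one pass.
  $context['finished'] = 1;
}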