Can I remove/disable all Share functionality by removing share.war file? - alfresco

In my new project we are going to use Alfresco as the back end and Angular as the front end, so we wish to remove/disable Share completely if possible. I read a bit on the internet, and some people just removed the share.war file. Is this safe? Is it the correct way of doing this? Will any errors appear later because of this?

Yes, you can just remove it. You will, of course, not have the fancy front end, but if you are only using Alfresco for back-end work it will be fine. There are no dependencies, and you should get no errors.

Yes, you can definitely remove share.war, as it is completely separate from alfresco.war. Removing it won't give you any errors.

As said above, yes, you can live with only the alfresco WAR installed. Where I work we don't use Share; we only use the repository via its API.
Keep in mind, though, that in recent versions of Alfresco you no longer have access to the old repository UI, and the Share UI gives you access to a lot of repository configuration that you can change without a restart. I would keep Share, disable all of its services (the properties in alfresco-global.properties that enable Share-related services, such as thumbnail/preview generation, the SWF transformers, the activity feed, etc.), and keep it private to your admins.
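For those going API-only, here is a hedged sketch of how a client might talk to the repository without Share, using Alfresco's public v1 REST API. The host, port, and admin credentials are made-up illustrations, and the endpoint path assumes a reasonably recent Alfresco version:

```python
import base64
import urllib.request

def repo_request(base_url: str, user: str, password: str,
                 node_id: str = "-root-") -> urllib.request.Request:
    """Build an authenticated request listing a node's children via
    Alfresco's public v1 REST API (no Share involved)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    url = (f"{base_url}/alfresco/api/-default-/public/alfresco"
           f"/versions/1/nodes/{node_id}/children")
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

# Example (not executed here): urllib.request.urlopen(req) would return
# JSON describing the children of the repository root.
req = repo_request("http://localhost:8080", "admin", "admin")
```

An Angular front end would issue the equivalent HTTP calls directly; the point is simply that everything Share did through the repository is reachable over this API.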

Related

Adobe CQ5 Setup in production

I am not a CQ guy. I have to use CQ5 for one of my projects. I have a CAT and a production environment. I have the following doubts:
1. I want to use the author instance of my CAT only. Once I publish content in CAT, it should be published in production as well. Is this possible?
2. When I deploy a new build of Adobe CQ to production (code changes, etc.), will my content be lost?
3. I read somewhere about content packages in CQ5. Can I separate content changes and code changes in one CQ5 environment?
Thanks in advance.
To answer question 1...
This is not a recommended setup, and it reflects a common misconception among those unfamiliar with AEM/CQ5. The author and publish instances should be part of the same environment. For example, you should have a production author, probably behind your firewall, and a production publish instance to serve pages to the public.
Your CAT environment should have the same thing. You want your testing environment to match as closely as possible to your production environment, including web server and dispatcher setup, to ensure quality.
Consider this. You can use one production publish instance, but it's a single point of failure. It's a general best practice to load balance across at least two. Two is sufficient for most websites. If you do this, you'd want to mimic the architecture in CAT.
To answer question 2...
If your code is written, built, and deployed correctly, it should not delete your content. Just make sure you never deploy anything to /content (to avoid deleting content), and stay away from /libs and most of /etc to avoid overriding platform functionality. AEM/CQ5 is a very open product, so you can do very bad things; but if you know what not to do, you are safe.
Code deployments should typically be done as part of a CRX Content Package, which brings me to...
To answer question 3...
The way we build and deploy code is to have Maven compile the Java, package everything up in a CRX Package, then deploy to the instance using the Package Manager REST API. Adobe provides a Maven Archetype that will facilitate this.
A CRX Package is a file system representation of your content repository, wrapped in what is effectively an annotated Zip file. Your compiled Java code is included in that file system representation, in a folder (to become a node) typically named "install". That compiled Java is an OSGi bundle, which is an annotated JAR. When CRX Package Manager deploys all those nodes to the system, OSGi accepts the bundle, assuming it's valid. This is why you can do "hot" deployments to live, production AEM/CQ5 instances with very little risk.
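To make that layout concrete, here is a small sketch that assembles a minimal CRX-style package zip in memory. The package name, filter root, and bundle path are made-up illustrations, not a complete Vault package; note that the filter deliberately stays under /apps and away from /content and /libs:

```python
import io
import zipfile

# META-INF/vault/filter.xml tells Package Manager which repository
# paths the package owns; content under these roots gets replaced.
FILTER_XML = """<?xml version="1.0" encoding="UTF-8"?>
<workspaceFilter version="1.0">
    <filter root="/apps/myproject"/>
</workspaceFilter>
"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("META-INF/vault/filter.xml", FILTER_XML)
    # jcr_root mirrors the repository tree; the OSGi bundle (a JAR)
    # lands in an install folder so the OSGi installer picks it up.
    pkg.writestr("jcr_root/apps/myproject/install/myproject-core-1.0.jar", b"")

with zipfile.ZipFile(buf) as pkg:
    names = pkg.namelist()
```

In practice the Adobe Maven archetype generates this structure for you; the sketch just shows why a package is "a file system representation wrapped in a zip".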
So...
This is a very high level answer to some very big topics. I encourage you to do a lot more research before you set this up. There are many good blog posts and documentation pages out there to help you get this set up according to best practice. Good luck!

PHPStorm - Link external SFTP-folders into project

The issue I am fighting through is a bit complicated. I'll explain the setup environment to you first.
I am using PhpStorm to work on a Symfony2 project.
My Apache is hosted on a Debian VM connected to PhpStorm via the Deployment Tool.
/* So far: I can edit code and have the server update automatically on save. Works. */
My problem now is that I am using Composer, which is meant to get the right bundles into the vendor folder.
I WANT to create a kind of symlink from the server directly into the project.
I DON'T WANT to download the vendor folder from the server into the project.
COMPACT:
I want to create a symbolic link within a PhpStorm project, linking a folder from a server into the project. The linked-in folder should be unidirectionally updated on source changes, and its classes and namespaces should be known to the project.
Is there any native way to get this done?
Or does anyone know a plugin which could handle such affairs?
I hope I expressed my point clearly. Please ask if anything is unclear.
Greetings and thanks upfront.
It's not possible to do this directly from PhpStorm; see the related issue. You can use a third-party tool like ExpanDrive to map a server directory to a drive letter over SFTP, and then add that local directory as a content root in your PhpStorm project. Note that this may affect performance dramatically.

Zope Management Interface know-how for better Plone development

As a typical 'integrator' programmer customising Plone, what should I know about the ZMI to help me code more effectively? What are the settings, tools, pitfalls, shortcuts and dark corners that will save me time and help me write better code?
Edit: take it as read that I am coding on the filesystem, using GenericSetup profiles to make settings changes. I know making changes in the ZMI is a bad idea and generally steer clear. But occasionally the ZMI sure is useful: for inspecting a workflow, or examining a content item's permissions, or installing only one part of a profile via portal_setup. Is there really nothing worth knowing about the ZMI? Or are there other useful little tidbits in there?
There are a few places in the ZMI that I find myself returning to for diagnostic information:
/Control_Panel/Database: Select a ZODB mountpoint. Cache Parameters tab shows how much of your designated ZODB cache size has been used. Activity tab shows how many objects are being loaded to cache and written over time.
/Control_Panel/DebugInfo/manage: Lots of info, including showing what request each thread is serving at the current moment. The 'Cache detail' and 'Cache extreme detail' links give info on what classes of objects are currently in the ZODB cache.
Components tab of the Plone site root: Quick way to see what local adapters and utilities are registered. DON'T HIT THE APPLY BUTTON!
Undo tab of most objects: See who has committed transactions affecting the object lately.
Security tab: See what permissions are actually in effect for an object. You really don't want to change permissions here 90% of the time; it's too hard to keep track of where permissions are set and they are liable to be reset by workflow. Use the Sharing tab in the Plone UI to assign local roles instead. (The one exception is that I often find it handy to enable the add permission for a particular type in specific contexts.) In Zope 2.12, there is a new feature on this tab to enter a username and see what permissions and roles would be in effect for that user, which is handy.
Catalog tab of portal_catalog: See what index data and metadata is stored for a particular path. (Can also remove bogus entries from the index.)
Index tab of portal_catalog: Select an index, then click its Browse tab to get an overview of what keys are indexed and which items are associated with each key.
The key thing to know is that while many ZMI tools provide quick, through-the-web customization, the customizations that you make this way are hard to export out of the database. So, they don't move easily from development to production environments or from one deployment to another.
Ideally, a new developer should use the ZMI to explore and find points of intervention, and then learn how to implement the same changes in policy add-ons (products) that move from one deployment to another much more reproducibly.
If you want to write code for Plone, it's best to avoid the ZMI. The concept of doing things through the ZMI is very limited and discouraged - more and more things are not available in there and it will go away at some point.
The actual Plone control panels offer you most of the configuration options you can use. For anything else the file system is the best place to look.
I agree with the other posters that you shouldn't configure too much via the ZMI, as it's not in version control and you can easily lose track of the changes.
But the ZMI is still very useful for debugging and to see specific site configurations.
Here are some tools in the ZMI that I regularly consult:
portal_javascripts: To turn debugging on/off. Check which scripts are there, what their conditions for rendering are, and whether they are found.
portal_css: Basically the same as portal_javascripts, but for stylesheets.
portal_types: To see what a type's properties are. Can it be created globally? What types can you create inside it? What is its default view? Etc.
portal_catalog: What indexes are there? What metadata is in the catalog? You can clear and rebuild the catalog, and even browse it.
portal_workflow: What states/transitions/permissions are there in a certain workflow? What workflow is active on a certain type?
portal_properties/site_properties: View and set site-wide properties. A lot of these settings are in the plone_control_panel (i.e. outside of the ZMI), but here they are on one page and the ZMI is quicker to navigate.
portal_skins: See which skins folders are installed and the ordering of the skin layers (via the Properties tab). You can also edit the templates, stylesheets and JavaScripts in the skins directories. Not recommended, but useful for debugging!
portal_setup: Some very big and complex Plone websites can break if you just willy-nilly add/remove/reinstall add-ons. Often it's safer to run a specific GenericSetup step. For example, if you have added a new portlet, rather import the specific step (portlets.xml) via portal_setup (the Import tab) than reinstall the whole product.
portal_actions: Configure which actions are visible/present.
portal_quickinstaller: Quickly reinstall or uninstall add-ons. Often quicker and more lightweight than loading the Plone Control Panel's equivalent.
acl_users: Sometimes when using an add-on like LDAPUserFolder, you'll have to dig around in acl_users to configure and test it. You can also create users here, although it's better to do this via the Plone Control Panel (i.e. not in the ZMI).
There are many more tools and things to tweak (and break your site with) in the ZMI, but the above ones are what I use 90% of the time.
The portal_historiesstorage tool can eat a lot of disk space. Any content type set to save revisions saves them here, and by default Plone keeps all revisions (see the portal_purgepolicy tool).
I want all revisions on the production Data.fs, but after taking a copy for development the first thing I do is purge portal_historiesstorage. The procedure is:
Go to your Plone site in the ZMI
Delete the portal_historiesstorage tool
Go to portal_setup, Import tab
Under 'Select Profile or Snapshot' choose 'CMFEditions'
Select the step with handler Products.GenericSetup.tool.importToolset
Uncheck 'Include dependencies?'
Hit 'Import selected steps' to re-add portal_historiesstorage
Pack the Data.fs and delete the resulting Data.fs.old from the filesystem
On my 3 GB Data.fs, this little sequence removes 2.5 GB!
I have only ever done this on a development Data.fs. Without advice from someone who really knows, I don't recommend doing this on your production site.
There is usually no reason for an integrator or a developer to touch the ZMI other than for occasional maintenance tasks. Almost any customization can be done using Python or a GenericSetup profile. The advantages of profiles are: repeatability, being maintainable on the filesystem, and being able to put files under revision control.
Working and configuring things through the ZMI partly works against Plone, especially when Plone is doing extra work under the hood. So the only recommendation can be: stay out of the ZMI if you can. The ZMI is not a suitable replacement for the Plone UI and should only be touched if you really know what you are doing.
Yep, the ZMI is for the occasional maintenance task or, when pressed, a quick-and-dirty CSS or template tweak. It's not meant for any real "coding" work, and in the context of Plone is best thought of as an odd and minimally useful leftover from Zope history.
portal_actions is also useful for more flexible top-level navigation, but again it is best configured via GenericSetup.

Using Subversion with Flex 4 - problem importing services

I'm using the Subversive plugin for Eclipse/Flex and I can commit the files correctly, but I have to rebuild Data/Services each time and reconfigure return types for each, etc. Does Subversion not provide a way to check/in out Data/Services or must these be rebuilt each time?
If I understand your comment to your question correctly, it seems to me that this is not a problem with Subversion/Subversive, but with Flash Builder's code generator, which is regenerating and overriding your customized return types.
Maybe there are some Flex project settings files that are not committed. That would explain why you need to rebuild Data/Services each time you open the project.
By the way, if you do commit the project settings files, make sure all the paths are relative paths, so that the project settings can be shared among several developers.
You might find value in this Adobe devnet article about Flex project settings
My partner and I had different local names for the project we were working on so we had conflicts with the settings file.

Why does JavaScript not work on my site under an existing virtual directory?

I deployed my ASP.NET application under an existing virtual directory. The new deployment will have some features using JavaScript. The new features are not working.
If I deploy this build under a new virtual directory, the features using JavaScript are working.
I restarted the IIS Admin service. The problem continues.
What could be going wrong here?
Since JavaScript runs on the client, not on the server, I doubt that IIS, per se, has anything to do with your problem.
What have you done to attempt to diagnose the problem? Have you looked at the network interaction between the browser and the server? Perhaps some script files are not being found.
Have you turned on any debugging tools (for instance, Firebug or the F12 command in IE8)? You may be getting errors you don't know about.
Sounds like it could be a caching issue on the browser.
Is the code that calls the JavaScript routines being generated dynamically? If so, it might be a path assumption. Your description was a bit vague. For instance, in ASP.NET you should use "~" to represent the application's current path, since that path might change. If you have code that just refers to "/" or another path (perhaps the second attempted path), then perhaps it's just a bad assumption?
Please provide more specifics. There are a hundred possible scenarios that fit your description.
Check the IIS application pool in IIS Manager and the project's target framework in Visual Studio, and try to make them match.
If JavaScript features stop working after a deployment, it may be because the browser is executing a script it has already cached. In that case, try changing the JavaScript file's src, for example by appending a version query string to it. This should force the browser to load a new copy of the JS file.
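A small sketch of the query-string cache-busting idea (the file name and version scheme are assumptions for illustration):

```python
from urllib.parse import urlencode

def bust_cache(src: str, version: str) -> str:
    """Append a version parameter so browsers treat the script as a new URL
    and fetch it fresh instead of serving the cached copy."""
    sep = "&" if "?" in src else "?"
    return f"{src}{sep}{urlencode({'v': version})}"

# <script src="/js/app.js"> becomes <script src="/js/app.js?v=2.1.3">;
# bumping the version on each deploy forces a fresh download.
```

Any scheme works (a build number, a file hash, a timestamp), as long as the value changes whenever the file's contents do.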
