We're working with the SVN-based deployment synchronizer on our WSO2 AM 1.10.0 cluster.
We noticed that for the super-tenant directory (-1234) it attempts to commit thousands of files, most of which we are not even sure we need to commit (for example, jaggeryapps and the like; we do not need those to be shared).
In general, we're only interested in sharing the tenant information, to allow gateway nodes in the cluster to run APIs created at the manager node.
As our SVN is a SaaS offering with a limit of 5,000 files per commit, we're getting an error on this and the -1234 commit fails.
Is there a way to filter the files WSO2 attempts to commit to the SVN server from the -1234 directory? How can we configure it to commit only the minimum required set of files?
I have two organizations in Azure DevOps. I need to create a build pipeline in one organization using a Git repository from the other.
For "Get Sources" I created a connection of type "Other Git" and specified my username/password. Running the pipeline fails:
fatal: Authentication failed for 'https://:#abc.com/abc.Kernel/_git/ABC.Kernel/'
[warning]Git fetch failed with exit code 128, back off 2.662 seconds before retry.
Is there any solution?
I created a personal access token (PAT) and used it instead of the password. That worked.
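For anyone who hits the same wall, here is a minimal sketch of the working setup, assuming a PAT with read access to Code; the organization, project, and repository names are placeholders:

    # Use the PAT as the password in the clone URL; the username part is ignored.
    # 'myorg', 'MyProject' and 'MyRepo' are placeholders for your own values.
    git clone https://anything:MY_PAT@dev.azure.com/myorg/MyProject/_git/MyRepo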
When it comes to Azure DevOps, there are several ways to manage Git repositories. First make sure that you are using the source type that will get you the result you expect. "Other Git" is listed as a "generic" protocol (see supported-repository-types). I would recommend using a service connection that has read access to the other organization.
https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops
If you really want to use "Other Git", you will have to make sure access to the repository is enabled from the organization's settings page; see change-application-access-policies.
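If the pipeline is defined in YAML, checking out the repository from the other organization through such a service connection looks roughly like the sketch below; the alias and connection name are placeholders, not anything Azure DevOps provides by default:

    # Sketch: write an azure-pipelines.yml fragment for a cross-organization checkout.
    # 'OtherOrgConnection' is a placeholder service connection with read access.
    cat > azure-pipelines.yml <<'EOF'
    resources:
      repositories:
        - repository: kernel              # local alias for the repository
          type: git
          name: ABC.Kernel/ABC.Kernel     # Project/Repository in the other organization
          endpoint: OtherOrgConnection    # the service connection to that organization
    steps:
      - checkout: kernel
    EOF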
We use a configuration management tool (Chef) to install WSO2 API Manager (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes/patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since these still exist in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration is stored in the database, which is evidently not the case.
One API-specific file we have noticed is:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation, or is there a way to regenerate these files from the information stored in the database?
We have considered using the "API import/export tool" (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the contents of the server folder (<wso2am>/repository/deployment/server). For this, you can use SVN-based deployment synchronization (dep-sync). Once you enable dep-sync by pointing it at an SVN server location, all the server-specific data will be written to that SVN server.
When you install the newer pack, point it at the same SVN location and the same database. (I hope you are using a production-ready database rather than the built-in H2.)
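For reference, dep-sync is enabled in <wso2am>/repository/conf/carbon.xml; a sketch of the relevant element follows (the SVN URL and credentials are placeholders, and if I remember correctly you also need to drop the SVNKit library into the server's lib directory):

    # Sketch: print the DeploymentSynchronizer element to set in
    # <wso2am>/repository/conf/carbon.xml (URL and credentials are placeholders).
    cat <<'EOF'
    <DeploymentSynchronizer>
        <Enabled>true</Enabled>
        <AutoCommit>true</AutoCommit>
        <AutoCheckout>true</AutoCheckout>
        <RepositoryType>svn</RepositoryType>
        <SvnUrl>https://svn.example.com/wso2-depsync/</SvnUrl>
        <SvnUser>svnuser</SvnUser>
        <SvnPassword>svnpass</SvnPassword>
        <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>
    EOF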
I have one SVN repository with around 100 users who have read/write access to it. Recently I noticed that some of them are syncing data from this repository to their own local repositories using the svnsync command. This looks like a security breach to me, and I am wondering whether there is a way to block svnsync commands on the SVN server side. Any help appreciated.
This is not a security breach.
The users have access to the repository and its items according to the path-based authorization rules an administrator has set for them. They do not gain any unauthorized access to the repository unless your server is misconfigured.
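If the concern is that certain users should not be able to read certain paths at all, that is exactly what path-based authorization controls. A minimal sketch of an authz file (group, user, and path names are placeholders); point svnserve.conf's authz-db, or Apache's AuthzSVNAccessFile, at it:

    # Sketch: a path-based authorization file (all names are placeholders).
    cat <<'EOF'
    [groups]
    devs = alice, bob

    [/]
    @devs = rw
    * = r

    [/restricted]
    * =
    @devs = r
    EOF

Note that any client with read access can copy data out; to the server, svnsync, checkout, and export are just ordinary read requests, so the only real lever is read access itself.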
We are using Tridion 2011 SP1 without any hotfixes, and a .NET web application (httpupload.aspx) to deploy content to the file system.
We monitored the setup and found two issues:
1) Sometimes pages that show as published successfully in the publishing queue are not uploaded/updated on the file system.
2) No transport package is created for the pages that fail, with the error:
Deploying FailedPhase: Deployment Processing Phase failed, Could not initialize class com.tridion.storage.StorageManagerFactory, Could not initialize class com.tridion.storage.StorageManagerFactory
Also, neither the deployer log file nor the transporter log file contains any reference to the failed item's transaction ID.
Can anyone help me out with this?
You must have some more detail on the failure in your logs than just this.
"Could not initialize class com.tridion.storage.StorageManagerFactory" typically points to a misconfigured cd_storage_conf.xml or a missing JAR.
If you only get this occasionally, then there must be something that fails occasionally (like your database connection or file system).
Please scan through your deployer and/or core logs for additional information.
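A couple of quick checks along those lines, sketched as Unix-style commands (the install path is a placeholder; use the Windows equivalents if your deployer runs there):

    # Sketch: verify the storage config is well-formed XML and a JDBC driver exists.
    # DEPLOYER_HOME is a placeholder for your actual deployer location.
    DEPLOYER_HOME=/usr/local/tridion/deployer
    xmllint --noout "$DEPLOYER_HOME/conf/cd_storage_conf.xml" && echo "storage config is well-formed"
    ls "$DEPLOYER_HOME/lib" | grep -i jdbc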
[UPDATE]
I think you may have a second deployer "listening" to the same incoming directory, and that 2nd deployer is broken.
Hints of that:
You say no transport package is created. I assume you mean you can't find the transport package; it must have been created on the Content Manager, otherwise the transaction couldn't fail. This means "someone" picked it up.
"Sometimes they're published, sometimes not" == Sometimes they're picked up by the right deployer, sometimes they're picked up by the wrong one.
No references to the transaction in the logs
Search your server for every cd_deployer_conf.xml and compare all their "incoming" folder settings. You can only have one deployer per incoming folder.
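A quick way to do that comparison, as a sketch (Unix-style; on Windows, dir /s and findstr do the same job):

    # Sketch: list every deployer config on the machine and its incoming-folder setting.
    find / -name cd_deployer_conf.xml 2>/dev/null | while read -r conf; do
        echo "== $conf"
        grep -i "Location" "$conf"    # in Tridion 2011 the incoming path is typically a Location/@Path attribute
    done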
Try the following (quick command-line checks for 2) and 3) are sketched after this list):
1) In the Windows event logs, identify the path of the deployer that is actually being loaded. Generally it should be defined by the Tridion_Home variable, but there is also roll-up logic in place: the deployer path may be picked up from your application's configuration with higher priority if you have placed the deployer config and bin folders within your application's bin folder for processing by the Tridion Content Delivery API.
2) Check that the updated SQL JDBC JAR file is present in the deployer bin folder.
3) Verify that you do not have a JRE version between 1.6.0_26 and 1.6.0_30 installed on the CMA and/or CDA server; check both the 32-bit and 64-bit versions.
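As a sketch, the checks for 2) and 3) boil down to something like this (the bin path is a placeholder):

    # Sketch: quick checks for items 2) and 3) above.
    DEPLOYER_BIN=/path/to/deployer/bin        # placeholder
    java -version 2>&1 | head -n 1            # avoid JREs 1.6.0_26 through 1.6.0_30
    ls "$DEPLOYER_BIN" | grep -i sqljdbc      # is the updated SQL JDBC JAR present?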
Can you give some pointers on the best way, or best practices, to install a web application on Unix systems?
For example:
where to place the application and its data,
how to configure it to be secure and easy to back up,
etc.
One suggestion I know of is to create a unique user for each application.
The app in question is JIRA on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server
To avoid having to open extra firewall ports, set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL (a sketch of this vhost and the backup step follows below)
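To make the last two list items concrete, here is a sketch; all host names, ports, and paths are hypothetical:

    # Sketch: the Apache vhost for the mod_proxy item (requires mod_proxy and
    # mod_proxy_http to be loaded; names and ports are placeholders).
    cat <<'EOF'
    <VirtualHost *:80>
        ServerName jira.myhost.com
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>
    EOF

    # Sketch: the remote backup step - dump MySQL, then ship the dump and
    # JIRA's XML backups to the backup server (all paths/hosts hypothetical).
    mysqldump --single-transaction -u jira -p jiradb | gzip > /backup/jiradb-$(date +%F).sql.gz
    rsync -a /backup/ backupuser@backup.example.com:/backups/jira/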
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.