How to block svn sync - unix

I have one SVN repository with around 100 users having read/write access to it. Recently I noticed that some of these folks are syncing data from this repository to their own local repositories using the svnsync command. This looks like a security breach to me, and I am wondering whether there is a way to block svnsync from the SVN server side. Any help is appreciated.

This is not a security breach.
The users have access to the repository, and to the paths within it, according to the path-based authorization rules an administrator set for them; svnsync simply reads whatever a user is already authorized to read. The users do not get unauthorized access to the repository unless your server is misconfigured.
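A minimal sketch of such path-based rules, in the authz file used by svnserve or mod_authz_svn (the repository, group, and user names here are hypothetical):

[groups]
team = alice, bob

# Everyone in the team can read and write the repository...
[myrepo:/]
@team = rw
* =

# ...except for this subtree, which is hidden from them.
[myrepo:/secret]
@team =

Any path a user can read can also be mirrored with svnsync, so restricting read access is the practical server-side lever.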


Practical Use of Artifactory Repositories

In the near future I will start using Artifactory in my project. I have been reading about local and remote repositories and I am a bit confused about their practical use. In general, as far as I understand:
Local repositories are for pushing and pulling artifacts. They have no connection to a remote repository (e.g. the npm registry at https://www.npmjs.com/).
Remote repositories are for pulling and caching artifacts on demand. They work only one way; it is not possible to push artifacts to them.
If I am right up to this point, then practically it means you only need a remote repository for npm if you do not develop npm modules yourself but only use them to build your application. In contrast, if you need to both pull and push Docker container images, you need one local repository for pushing and pulling custom images and one remote repository for pulling official images.
Question #1
I am confused because our Artifactory admin created a local npm repository for our project. When I discussed the topic with him, he told me that I first need to get packages from the internet to my PC and then push them to the Artifactory server. This does not make any sense to me, because I have seen remote repositories on the same server and all we need is to pull packages from npm. Is there a point I am missing?
Question #2
Are artifacts in the remote repository cache kept until they are intentionally deleted? Is there a default retention policy (e.g. delete packages older than 6 months)? I ask because it is important to keep packages until a meteor hits the servers (per the company's archiving policy).
Question #3
We will need to get official Docker images and customize them for CI. It would be a bit hard to maintain one local repo for pulling and pushing custom images and one remote repo for pulling official images. Let's say I need to pull the official latest Ubuntu image, modify it, push it, and finally pull the custom image back. In this case the base image would be pulled from the remote repository, the custom image pushed to the local repo, and then pulled again from the local repo. Is it possible to use virtual repositories to do this seamlessly, as one repo?
Question #1: This does not make any sense to me, because I have seen remote repositories on the same server and all we need is to pull packages from npm. Is there a point I am missing?
Generally, you would want to use a remote repository for this. You point your client at the remote repository, and JFrog Artifactory grabs packages from the remote site and caches them locally, as needed.
In some very secure environments, corporate policy does not even allow this (the servers may not even be connected to the internet); instead, third-party libraries are manually downloaded, vetted, and then uploaded to a local repository. I don't think that is your case; your admin may just not understand the intended usage.
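For example, with a remote npm repository the client is simply pointed at Artifactory, and packages are fetched from the npm registry and cached on first request (the host and repository key here are hypothetical):

npm config set registry https://myarturl/artifactory/api/npm/npm-remote/
npm install lodash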
Question #2: Are artifacts in the remote repository cache kept until intentionally deleted? Is there a default retention policy?
They will not be deleted unless you actively configure Artifactory to do so.
Some repository types have built-in retention mechanisms, such as a maximum number of snapshots or tags, but not all of them do, and even where available these must be actively turned on. Different organizations have different policies for how long artifacts must be retained. There are many ways to clean up old artifacts, but ultimately it depends on your own requirements.
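As one example of explicit cleanup, a cached artifact can be deleted through the REST API; Artifactory exposes a remote repository's cache under a -cache suffix (the host, repository, and package path here are hypothetical):

curl -u admin:password -X DELETE "https://myarturl/artifactory/npm-remote-cache/lodash/-/lodash-4.17.21.tgz"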
Question #3: Is it possible to use virtual repositories to do this seamlessly, as one repo?
A virtual repository lets you aggregate your local and remote repositories so that they appear as a single source. So you can do something like:
docker pull myarturl/docker/someimage:sometag
... docker build ...
docker push myarturl/docker/someimage:sometag-my-modified-version
docker pull myarturl/docker/someimage:sometag-my-modified-version
It is also security-aware: if a user only has access to the local repository and not the remote one, they will only be able to reach the local artifacts, even though they are using the virtual repository that contains both.
That said, I don't see why it would be any harder to use the two repositories explicitly:
docker pull myarturl/docker-remote/someimage:sometag
... docker build ...
docker push myarturl/docker-local/someimage:sometag-my-modified-version
docker pull myarturl/docker-local/someimage:sometag-my-modified-version
This also has the added advantage that you know users can only pull your modified version of the image and not the remote one (though you can also accomplish that by setting up the correct permissions).

Connect one Artifactory to another Artifactory

Our setup includes a company-wide Artifactory instance that holds in-house-built artifacts and also fetches publicly available ones. I'm trying to set up a local Artifactory at our location that would fetch publicly available artifacts directly from the internet but would connect to the company-wide Artifactory for our in-house-built artifacts. Is this possible?
In my local Artifactory setup, I added the company-wide Artifactory URL as a remote repository. I can hit the Test button and it tells me that it connected successfully. However, when I go to download an artifact, it does not work. I should mention that publicly available artifacts can be fetched through my local Artifactory, so at least I can get to jcenter.bintray.
Can one Artifactory instance be connected to another? If yes, is there a way to test whether this connection works?
I don't think we would use all the contents of the company-wide Artifactory, so I don't want to do an export and import into the local instance, or set up replication. I would prefer to fetch on demand. Is this possible?
Edit: Thanks to @DarthFennec for pointing me to Smart Remote Repositories, I have solved my problem. For others who have the same problem:
Follow the steps on the page mentioned above to set up the Smart Remote Repository. In my case Artifactory did not detect that the remote was another Artifactory instance and did not offer the extra options to set, but I was not interested in those anyway.
Note: you can always click the Test button to make sure that your connection to the remote repository works.
Next, go to Admin -> Virtual Repositories, select your repository key, and move your smart repository from Available Repositories into Selected Repositories. Click Save & Finish at the bottom and you should be good to go.
I'm not sure exactly what your problem ended up being, but if you want one Artifactory repository to proxy a repository on another Artifactory instance, it should be a smart remote repository. This is when Artifactory detects that a remote repository points at another Artifactory instance, which enables a number of extra features, like download statistics, property replication, and remote browsing.
An important thing to keep in mind when configuring a smart remote repository is that depending on the package type, you might need to point the remote at <artifactory>/api/<type>/<repo>, rather than just <artifactory>/<repo>. This is the case for Bower, Chef, CocoaPods, Docker, Go, NuGet, Npm, Php Composer, Puppet, Pypi, RubyGems, and Vagrant repositories. Other repository types should use the standard <artifactory>/<repo> URL.
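For example, for two repositories on the remote Artifactory instance (the host and repository names here are hypothetical):

https://artifactory.example.com/artifactory/api/npm/npm-local (npm, needs the api/npm form)
https://artifactory.example.com/artifactory/libs-release-local (Maven, standard form)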

Alfresco Community 5.1 does not start and keeps loading index.html forever

For some unexplained reason I haven't been able to use Alfresco since yesterday.
Let me describe what happens.
First of all, I didn't change any configuration file or anything like that.
I started the Tomcat and PostgreSQL services and then tried to load localhost:8080/share, but it kept loading forever.
I checked the log files too, but to no avail: there are no error messages, nothing unusual.
After that, I deleted the alfresco and share folders inside webapps, just in case, but that failed too.
Finally, I can't stop these services from the service manager, because I am at work and have no access privileges.
My main concern is that I don't even know the cause of this issue, so I don't even know how to ask for help.
Note that you don't have permission to delete the folders (share + alfresco) or to stop the services, and without stopping the services you can't completely delete the files from the alfresco and share folders.
You need to find out whether the problem is in Alfresco Share, the Alfresco repository, the database, or Tomcat.
Check Tomcat
Open http://localhost:8080 and check whether Tomcat is running.
Check Database
Check from the service manager, or connect via the pgAdmin tool, whether the database service is running.
Check Repo
http://localhost:8080/alfresco - this should display some basic information about the Alfresco repository; if it doesn't, the repository itself has failed to start.
Check Share
http://localhost:8080/share - this should display the login page if everything works fine.
Logs
Check the alfresco.log, share.log, solr.log, catalina, Tomcat stdout, and Tomcat stderr log files, and share them. Some error information will almost certainly have been recorded in one of these files.
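If you have command-line access, the repository check and a quick look at the logs can be scripted; a minimal PowerShell sketch, assuming alfresco.log sits in the installation root where it is written by default:

Invoke-WebRequest http://localhost:8080/alfresco -UseBasicParsing | Select-Object StatusCode
Get-Content -Tail 200 .\alfresco.log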

Where should a .NET web application store its (non-database) settings?

I am building a web application that will be installed many times. The application needs to be able to save certain settings itself upon request.
I have an installer (Inno Setup), but I want to be very careful about what permissions I give the web application.
I can't use a database.
A default install always leaves web.config read-only (most secure).
The registry can be problematic, unless there is a set of keys a .NET web app can always write to by default (IIS_IUSR)...
I was considering App_Data, but the default permissions are no longer useful, and Inno Setup can't easily fix them correctly:
https://support.microsoft.com/en-us/kb/2005172
Security and ease of setup are both big issues.
I also don't want to make a mess of the machines I install to.
A FAILED solution was to write to the user portion of the registry:
// Create (or open) the key under HKEY_CURRENT_USER; CreateSubKey already
// returns a writable key, so there is no need to reopen it.
var reg = Microsoft.Win32.Registry.CurrentUser.CreateSubKey("Software\\MyCo\\MyApp");
// Write the setting as a string value.
reg.SetValue("MyValue", (string)dataString, Microsoft.Win32.RegistryValueKind.String);
But I found out that writing to HKEY_CURRENT_USER is also not allowed by default on Server 2012, and likely on other versions. The server error page is helpful and offers options such as giving IUSR_{MachineName} explicit permission, but this is a no-go for me.
So my final solution is to have the installer create a user-configurable folder and assign all users read/write access to that folder. The administrator can always lock it down further if they want.
If anyone has a better option, let me know.
With Inno Setup I created a new wizard page to suggest and collect a data folder from the user. The installer then:
Created that folder and gave all users read/write access,
Added an HKLM registry key telling the web app where to look for the folder,
Notified the user that they should lock the folder down further to prevent abuse.
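At runtime the application can then resolve the folder from that key. A minimal sketch, assuming the installer stored the path in a value named DataFolder under Software\MyCo\MyApp (both names are hypothetical):

using System.IO;
using Microsoft.Win32;

// Read the folder path the installer stored under HKLM (key and value names assumed).
string dataDir;
using (var key = Registry.LocalMachine.OpenSubKey(@"Software\MyCo\MyApp"))
{
    dataDir = (string)key?.GetValue("DataFolder");
}

// Settings live in a file inside that folder, which all users can read and write.
string settingsPath = Path.Combine(dataDir, "settings.json");
File.WriteAllText(settingsPath, "{ \"example\": true }");
string saved = File.ReadAllText(settingsPath);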

Issues with adding/deleting local permissions in Alfresco

I have been trying to add/delete local permissions on a file/folder in Alfresco using OpenCMIS, with mixed success. My tests were done on Alfresco 4.2f/5.0a with Apache Chemistry OpenCMIS 0.10 on a mixture of Windows and *nix platforms.
When adding roles to the local permissions list, I am seeing Alfresco-generated roles such as All, roles.write, roles.read, etc. being created. I have been trying to clean this up by resetting the ACL after adding permissions, but with only mixed success.
Secondly, when deleting the last local role from the permissions list, I notice that the inherited permissions are made local for some reason. For instance, if the inherited permissions contained user1 as a Coordinator, user1 will now also appear in the local permissions list.
I understand that CMIS has some limitations with handling permissions. How do I go about resolving the things I am seeing?
