How do permissions on a PlasticSCM repository work in a DVCS scenario?

So I've been working on a rather large project and using PlasticSCM as my VCS. I use it with a DVCS model, but so far it's pretty much just been me syncing between my office machine and home.
Now we're getting other people involved in the project, and what I would like to do is restrict the other developers to specific branches so that only I can merge branches into /main.
So I went to my local repository and made the permissions changes (that part's pretty straightforward). But now how does that work with the other developers? When they sync up, are the permissions replicated to their local repositories? If they attempt to merge into /main on their local repository, does it allow that, and then they get an error when they attempt to push the changes to my repository?
This is my first foray into DVCS so I'm not quite sure how this kind of thing works.

Classic DVCSs (Mercurial, Git) don't include ACLs, meaning a clone wouldn't keep any ACL restrictions.
This is usually enforced through hooks on the original repo (meaning you might be able to modify the wrong branch in a cloned repo, but you wouldn't be able to push it back to the original repo).
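In Git, for example, that hook is typically a server-side pre-receive script. A minimal sketch follows; the $PUSH_USER variable is an assumption here, since bare Git has no built-in notion of the pushing user and hosting layers expose it in their own ways:

    #!/bin/sh
    # pre-receive hook on the central repo: stdin gets one "oldrev newrev refname"
    # line per updated ref; reject any update to main from anyone but the owner.
    # $PUSH_USER is assumed to be set by the hosting/authentication layer.
    while read oldrev newrev refname; do
        if [ "$refname" = "refs/heads/main" ] && [ "$PUSH_USER" != "owner" ]; then
            echo "Only the repository owner may push to main." >&2
            exit 1
        fi
    done
    exit 0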
As the security page mentions, this isn't the case for PlasticSCM: a clone should retain the ACLs (caveat below) set on an object, and objects inherit those ACLs through two hierarchies: the file system hierarchy (directories, subdirectories, files) and the repository object hierarchy.
The caveat in a DVCS setting is that there must be a mechanism in place to translate users and groups from one site to another.
The Plastic replication system supports three different translation modes:
Copy mode: it is the default behaviour. The security IDs are just copied between repositories on replication. It is only valid when the servers hosting the different repositories involved work in the same authentication mode.
Name mode: translation between security identifiers is done by name. For example, suppose user daniel has to be translated by name from repA to repB. At repB the Plastic server will try to locate a user named daniel and will introduce its LDAP SID into the table if required.
Translation table: this also performs a translation based on name, but driven by a table. The table, specified by the user, tells the destination server how to match names: how a source user or group name has to be converted into a destination name, even between different authentication modes.
Note: a translation table is just a plain text file with two names per line separated by a semicolon ";". The first name is the user or group to be translated (source) and the second is the destination name.
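For example (hypothetical names), a table that maps a local user and a local group to their counterparts on the destination server could look like this:

    daniel;dcastro
    localdevs;MYDOMAIN\developers

Here daniel on the source server would be translated to dcastro on the destination, and the localdevs group to the directory group MYDOMAIN\developers.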

Related

When migrating from an old Artifactory instance to a new one, what is the point of copying $ARTIFACTORY_HOME/data/filestore?

Artifactory recommends the steps outlined here when moving from an old Artifactory server to a new one: https://jfrog.com/knowledge-base/what-is-the-best-way-to-migrate-a-large-artifactory-instance-with-minimal-downtime/
Under both methods it says that you're supposed to copy over $ARTIFACTORY_HOME/data/filestore, but then you just go ahead and export the old data and import it into the new instance, and in the first method you also rsync the files. This seems like you're just doing the exact same thing three times in a row. JFrog really doesn't explain why each of these steps is necessary, and I don't understand what each does differently that cannot be done by the others.
When migrating an Artifactory instance we need to take two things into consideration:
Artifactory Database - Contains the information about the binaries, configurations, security information (users, groups, permission targets, etc)
Artifactory Filestore - Contains all the binaries
Apart from your questions, I would like to add that, from my experience, in the case of a big filestore (500GB+) it is recommended to use a skeleton export (export the database only, without the filestore; this can be done by checking "Exclude Content" in Export System) and copy the filestore with the help of a 3rd-party tool such as rsync.
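As a sketch, the filestore copy with rsync could look like the following (the server name is a placeholder, and the path stands in for $ARTIFACTORY_HOME/data/filestore on each machine):

    # -a preserves permissions and timestamps, -v is verbose, -z compresses in transit
    rsync -avz /var/opt/jfrog/artifactory/data/filestore/ \
          newserver:/var/opt/jfrog/artifactory/data/filestore/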
I hope this clarifies further.
The main purpose of this article is to provide a somewhat faster migration compared to a simple full export & import.
The idea of both methods is to select "Exclude Content". The content we exclude is exactly what is stored in $ARTIFACTORY_HOME/data/filestore/.
The difference between the methods is that Method #1 involves some downtime, as you will have to shut down Artifactory at a certain point, sync the diffs, and start the new one.
Method #2 involves a somewhat more complex process that uses in-app replication to sync the diffs.
Hope that makes more sense.

Unique link to an artifact deployed to JFrog Artifactory

I'm a new Artifactory user. My company just set up Artifactory v6.5.2 and I'm looking to use it for managing software deployed for our production team. What I need is a download link, documented in our product management system, that points directly to the exact file that was deployed for Production to use. I was anticipating this would look like this:
https://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v1_b30b890becfb4a02510ed12a7283c676.tgz
I'm not seeing that Artifactory can do this for me. What I see is I can do this:
http://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v1.tgz
However, if another artifact is deployed with the same name, that's not reflected in the download link. This means that the link could return different results over time.
Am I missing something, or am I asking Artifactory to do something it's not intended to do?
Artifactory returns the URL based on the filename and the path (as any web server would do). Here are two options to achieve what you need:
Name the artifacts uniquely (timestamps are the simplest). Instead of naming the artifact mypkgfile_v1.tgz, name it mypkgfile_v1-1553038888.tgz (I used the Unix epoch time, but anything unique enough will do).
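For example, a small deploy script could append the epoch time at upload (the host and repository path are taken from the question; the credentials are placeholders):

    # name the artifact with the current Unix epoch time and deploy it with a simple PUT
    TS=$(date +%s)
    curl -u myuser:mypassword -T mypkgfile_v1.tgz \
      "https://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v1-${TS}.tgz"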
This one is more involved but doesn't require you to change the naming scheme.
First, configure a custom repository layout to match your versioning.
Once you've done that, every time you deploy an artifact, attach a unique identifier to the artifact as property during deployment (using matrix params, for example), deploying your artifact as mypkgfile_v1;timestamp=1553038888.
On retrieval, use the token for the latest release together with the timestamp you need as a matrix param: mypkgfile_v[RELEASE];timestamp=1553038888
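A rough sketch with curl (credentials are placeholders, the timestamp is the example value used above, and the [RELEASE] token only resolves once the custom repository layout is in place):

    # deploy, attaching a timestamp property to the artifact via a matrix parameter
    curl -u myuser:mypassword -T mypkgfile_v1.tgz \
      "https://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v1.tgz;timestamp=1553038888"

    # later, retrieve the latest release carrying that timestamp property
    curl -u myuser:mypassword -o mypkgfile_latest.tgz \
      "https://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v[RELEASE].tgz;timestamp=1553038888"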

Copying Business Setup Data to a new instance

Re: AX 2012 R2 Service Pack 2
I am looking for options for copying Business Setup data (e.g. Companies, Chart of Accounts, Locations, etc.) from one AX instance to another. The instances are currently in development, so we don't have to worry about wiping out data. As of now a single instance has a master copy of all Business Setup information. What I want to do is copy that data to multiple development instances.
I can refresh the Model Store and Business Data with no problem. But the Business Setup data is a different beast entirely. In AX 2009, there was a feature to copy this information, but it has since been removed in AX 2012. I've read multiple articles on this but each one warns that it's either not recommended or only for testing. I need something that is safe to use.
Has anyone found a clean process to copy business setups from one instance to another in AX 2012?
The really simple solution is to have an AX setup database/environment without any transactions.
To deploy, do a SQL Server backup of the setup database and restore it over the target database, followed by a full AX dictionary synchronize in the target application (a rough SQL sketch follows the list below).
This will of course wipe out any transactions, setup, etc. done in the target database.
After the restore, a script is run to change file locations so that they no longer point to the setup environment's locations, etc.
You may need to save some data beforehand using Administration > Periodic > Data Export/Import > Export, then import it again after the wipe.
Below is the actual procedure used by one of my customers. The setup is done in the production environment and the target may be the flow testing environment.
Sketch list:
Backup setup
Export target data
Stop target AOS
Restore to target
Run SQL script
Start target AOS
Database synchronization
Run AX script
Import target data
This procedure is also great for making a test environment of a production environment.
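As a rough sketch of the backup/restore steps above (database names, logical file names, and paths are invented for illustration; your own will differ):

    -- on the setup (source) SQL Server
    BACKUP DATABASE AxSetup
        TO DISK = N'D:\Backup\AxSetup.bak' WITH INIT;

    -- on the target SQL Server, with the target AOS stopped:
    -- overwrite the target AX database with the setup copy
    RESTORE DATABASE AxTarget
        FROM DISK = N'D:\Backup\AxSetup.bak'
        WITH REPLACE,
             MOVE 'AxSetup'     TO N'D:\SQLData\AxTarget.mdf',
             MOVE 'AxSetup_log' TO N'D:\SQLData\AxTarget_log.ldf';

After the restore, start the target AOS, run the database synchronization, and then run the AX cleanup script and re-import the saved data as in the sketch list.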
There is a Microsoft document, "Microsoft Dynamics AX 2012: Tech Domain New Features Module 5: Moving Between Environments" on this subject (Google that entire title and it will be the first hit).
The procedure that Jan outlines in his answer is correct but rather high level. There is data in several tables that store configuration information which you want to wipe out after the restore and replace with the rows that were there prior to the restore. That is, there are some tables you need to export before you do the restore, so that you can create SQL scripts to re-insert that data.
You can do it without these procedures, but then you have a messy environment with AOS servers listed that aren't in the environment, and if you have a reporting server, help server, etc., you have to go through and change all of that manually. If you save the data prior to the restore, then you have scripts that will get the configuration where it needs to be and it just takes one second.
You can copy commonly used values from an existing company to a new one, as long as both companies are in the same database. For example, if you have a standard list of symptom codes that is common to all your service management implementations, you can copy the codes easily from one company to another.
To copy data to a new company using RapidStart Services:
Open the new company.
Choose the Lightbulb that opens the Tell Me feature icon, enter Configuration Worksheet, and then choose the related link.
Choose the Copy Data from Company action.
On the Copy Company Data page, choose a company to copy from in the Copy From field, and then choose the OK button.
Select a table from one of the configuration packages that you have imported, and then choose the Copy Data action.
This works for me with Microsoft's RapidStart Services.

How to Export AzMan XML Store to another Machine?

Hi
I'm trying to find a way to recreate all of the roles, tasks, operations, etc. that are currently defined in an AzMan XML store on our Dev box in an XML store on our QA box. Of course, simply copying the XML file to the QA box does not work.
Does anyone know if there is a tool to export or recreate the AzMan XML store on another machine?
thanks
Actually, copying the XML file from one location to another works fine. That is how we are doing it right now. However, copying the file will also copy all users as well, so if you have test users defined in the test XML they will all be copied to the other machine.
Copying the XML file will not work IF your group assignments use different domains. Our QA box is in a different domain from our dev and production boxes, so the SIDs do not match, and, as you probably already know, AzMan uses only SIDs for assignments into groups. If everybody and all machines are in the same domain, then the SIDs won't change, and the XML store can be copied from Dev to QA.
Also, check that the permissions for your newly copied XML file on the QA box are appropriate (accessible by the process running your app, and possibly by the users running your app).

ASP.NET connection string deployment best practice

I've collected a (hopefully useful) summary of the ways I've researched to accomplish the subject of this post, as well as the problems I have with them. Please tell me if you've found other ways you like better, especially if they resolve the problems that the methods I mention do not.
Leave connection strings in web.config and use XDT/msdeploy transformation to replace them with settings according to my active build configuration (for example, a web.PublicTest.config file). My problem with this is I merge and bury a few server-specific settings into an otherwise globally identical file with many configuration elements. Additionally, I cannot share connection string definitions among multiple peer-level applications.
Specify a configSource="DeveloperLocalConnectionStrings.config" value for connection strings in web.config, and XDT-transform this value to point to one of multiple environment-specific files in my code base (a config sketch of this appears after the list). My problem with this is that I send passwords for all my environments to all destinations (in addition to SVN, of course) and have unused config sections sitting on servers waiting to be accidentally used.
Specify connection strings in the machine.config file rather than web.config. Problem: who the heck expects to find connection strings in machine.config, and the probability of surprise name collisions as a result is high.
Specify a configSource="LocalConnectionStrings.config", do not transform the value, and edit the project xml to exclude deployment of the connection string config. http://msdn.microsoft.com/en-us/library/ee942158.aspx#can_i_exclude_specific_files_or_folders_from_deployment - It's the best solution I've found to address my needs for a proprietary (non-distributed) web application, but I'm paranoid another team member will come one day and copy the production site to test for some reason, and voila! Production database is now being modified during UAT. (Update: I've found I can't use one-click publish in this scenario, only msdeploy command line with the -skip parameter. Excluding a file as above is the same as setting it to "None" compile action instead of "Content", and results in the package deleting it from the deployment target.)
Wire the deployment package up to prompt for a connection string if it isn't already set (I don't know how to do this yet but I understand it is possible). This will have similar results to #4 above.
Specify a configSource="..\ConnectionStrings.config". Would be great for my needs, since I could share the config among the apps I choose, and there would be nothing machine-specific in my application directory. Unfortunately parent paths are not allowed in this attribute (like they are for 'appSettings file=""' - note also that you can spiffily use file= inside a configSource= reference).
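To make options 2 and 6 concrete, here is roughly what the config fragments would look like. DeveloperLocalConnectionStrings.config, web.PublicTest.config, and ..\ConnectionStrings.config come from the options above; PublicTestConnectionStrings.config and SharedAppSettings.config are hypothetical names, and the xdt namespace is assumed to be declared on the transform file's root element, as in the default Visual Studio transforms.

    <!-- web.config: point the section at the developer's local file (option 2) -->
    <connectionStrings configSource="DeveloperLocalConnectionStrings.config" />

    <!-- web.PublicTest.config: swap that file per environment with an XDT transform -->
    <connectionStrings configSource="PublicTestConnectionStrings.config"
                       xdt:Transform="SetAttributes(configSource)" />

    <!-- option 6: a parent path is accepted for appSettings file=... -->
    <appSettings file="..\SharedAppSettings.config" />

    <!-- ...but the same thing with configSource fails with a configuration error at runtime -->
    <connectionStrings configSource="..\ConnectionStrings.config" />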
p.s. some of these solutions are discussed here: ASP.Net configuration file -> Connection strings for multiple developers and deployment servers
When using SQL Server, you can also use Integrated Security / SSPI and add the web server's computer account as a login on the SQL Server.
That way you don't have to expose anything in the web.config, and you can grant roles to that login like you would to any other DB user.
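For example (the connection string name, server, and database below are placeholders):

    <connectionStrings>
      <!-- no credentials stored; the application pool's Windows identity is used -->
      <add name="MyDb"
           connectionString="Data Source=SQLSERVER01;Initial Catalog=MyDatabase;Integrated Security=SSPI;"
           providerName="System.Data.SqlClient" />
    </connectionStrings>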
Though you have to understand the implications and the security considerations involved, because any malicious code executed as that machine account will have access to the SQL Server.
with regards
Ole
Use the hostname as the key for the connection string; that way you can choose the data source automagically. Make sure the choosing routine is not buggy (change the hostname and test!)...
Don't put it in the web.config; write an INI file instead, so there is no XML encoding to deal with.
Encrypt the password therein, with private/public key (RSA/PGP). Don't ever use cleartext, or a symmetric key, which is just as bad.
Check my following blog post: Protecting asp.net machine keys and connection strings
If you do use Quandary's answer, use a key that's not in the site's folder, just like asp.net does with protected config sections.
We manually approve changes to the web.config that go into staging/production. We use integrated security instead of username-based where possible, but an option we've used in the latter case is to just have placeholders for the usernames/passwords in SVN.
We've used separate config files in the past, but we have run into other types of issues with web.config modifications, so we have been keeping everything in a single file lately.
