Perforce directory structure

In Perforce, I notice that my workspace is linked to a specific directory (location) on my local hard drive. Is it possible to change the location of this mapping for each file? For example, if I have two scripts in two completely different directories locally -
C:/File1.pl & D:/File2.pl
And I want to map these two scripts under the same folder in Perforce. Is this possible?

The root directory must be the same for all files in a single workspace.
However, you can define multiple workspaces, one rooted on your C:\ drive and one rooted on your D:\ drive.
Generally, a single workspace is used for a single project, and generally all files for a single project are located together in a single area of your workstation. I'm having trouble thinking of a scenario in which you'd want files to be part of a single project and yet be scattered around various places on your workstation. Can you explain your scenario further?
There are techniques (the SUBST command, using Windows Junction Points, etc.) which can be used to create aliases for files on a different disk, but given what you've described, using multiple workspaces seems like the clearest approach to me.
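For illustration, a rough sketch of what the two client specs could look like, assuming the files live under a depot path of //depot/scripts/ and the workspaces are named dima-c and dima-d (all of these names are invented for this example):

Client: dima-c
Root:   C:\
View:
        //depot/scripts/File1.pl  //dima-c/File1.pl

Client: dima-d
Root:   D:\
View:
        //depot/scripts/File2.pl  //dima-d/File2.pl

Each spec is edited with p4 client <name>, and you switch between workspaces by setting P4CLIENT before running p4 sync or p4 submit.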

Related

Use Fossil for system files?

As a new user of Fossil, I'm curious whether there are any negative implications of using Fossil to store things like /etc/ and /usr/local/etc files from Unix-like systems such as FreeBSD & OpenBSD. If I'm doing this for multiple systems, I think I'd create a branch for each hostname to track those files.
Q1: Have you done this? Do you prefer a different VCS to handle the system files?
Q2: Lots of changes have happened in Fossil over the years, and I'm curious whether it's possible to restrict who can merge branches into trunk. From reading earlier threads it wasn't possible, but there are two workarounds:
a) tell people not to merge to trunk
b) have people clone, and the trunk maintainer picks up changes from their repos
System configuration files stored in /etc, /var or /usr/local/etc can generally only be edited by the root user. But since root has complete access to the whole system, a mistaken command there can have dire consequences.
For that reason I generally use another location to keep edited configuration files: a directory in my home directory that I call setup, which is under git control. Since I have multiple machines running FreeBSD, each machine gets its own subdirectory. There is a special subdirectory of setup called shared for configuration files that are used on multiple machines. Maintaining multiple copies of identical files in separate repositories or even branches can be a lot of extra work.
My workflow is the following:
1. Edit a configuration file in my repository.
2. Copy it to its proper location.
3. Test the changes. If problems occur, go back to step 1.
4. Commit the changes to the revision control system. Copy the committed files to their proper location.
Initially I had a shell script (basically a list of install commands) to install the files for me. But I also wanted to see the differences between the working tree and the installed files.
So for my convenience, I wrote a script called deploy to help me with this. It can tell me which files in the repo are different from the installed files and can show me the differences. It can also install files to their proper locations.
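The deploy script itself isn't reproduced here, but a minimal shell sketch of the same idea might look like the following (the setup/<hostname> and setup/shared layout, and the assumption that everything is installed into /etc, are taken from the description above rather than from the actual script):

#!/bin/sh
# deploy-style helper (sketch): compare repo files with the installed copies,
# show differences, or install them.
REPO=$HOME/setup
HOST=$(hostname -s)

for src in "$REPO/$HOST"/etc/* "$REPO"/shared/etc/*; do
    [ -f "$src" ] || continue
    dst="/etc/$(basename "$src")"
    case "$1" in
        diff)    diff -u "$dst" "$src" ;;                          # show differences
        install) install -o root -g wheel -m 644 "$src" "$dst" ;;  # needs root
        *)       cmp -s "$src" "$dst" || echo "differs: $dst" ;;   # list changed files
    esac
done

Invoked without arguments it lists the files that differ, with diff it shows the changes, and with install it copies the committed files into place.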

How do I connect to the data that's in my folder without writing the specific path, like "C:\\Users\\Dima\\Desktop\\NewData\\..."?

I am writing a script that requires data stored in a folder on my computer.
But eventually this script will be used on another computer, by another person.
I can't tell them to change all the paths to the data in the script.
How do I connect to the data that's in my folder without writing the specific path,
like "C:\Users\Dima\Desktop\NewData\..."?
The best way of making your code shareable depends upon your use case.
As Carl Witthoft pointed out, most code should be encapsulated in functions. These functions can then be packaged into packages and easily redistributed to other people's machines. Writing packages is easier than you think.
For one-off analyses, scripts are appropriate. How you make them user-independent depends on who your users are. If you are sharing the script with colleagues, try to keep your data on a network drive; then the link to the data will be the same for everyone. If you are sharing your script with the world, then keep your data on the internet, and the link to the data will be a hyperlink, again the same for everyone.
If you are sharing your script with a few people who don't have access to a common drive, and you can't put your data on the internet, then some directory manipulation is acceptable.
Change your working directory to the root of where your project files are.
setwd("c:/Users/Dima/My Project")
Then you can reference the location of the data using relative paths.
data_file <- "Data/My data file.csv"
my_data <- read.csv(data_file)
Assuming that you keep the directory structure within your project the same, then you only need to change the call to setwd on each machine.
Also note that the special location "~" refers to your user home directory. Try
normalizePath("~")
That way, if you keep your project in that location, you can avoid reference to "Dima" entirely.

Compare files within folders between local computer and remote server

I have one folder on my computer and one folder on a remote server. I transferred a large number of files, but for some reason I now have two more files in my own folder than on the server, so I would like to check which ones these are instead of going through them all manually.
I looked into directory comparison and found the diff command for displaying differences, but when I tried it on my two folders it couldn't find the directory on the remote server. This is what I tried:
diff /Volumes/TC1-SIMDATA/Parallel/ModelWSSim/ fraukje@localhost:parallel/ModelWSSim/
Could anyone hint at what I am doing wrong here?
The diff command works only with files and folders that are accessible through the file system (i.e. mounted folders, generally speaking).
If you can mount the folders, you'll be able to compare them with diff; otherwise you need to invest some time in finding a good diff/merge tool that supports FTP, SFTP, or whatever access protocol you need.
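For example, one way to do this (a sketch, assuming sshfs is available and that fraukje@localhost and the paths from the question are correct) is to mount the remote folder and compare the two directories locally:

# Mount the remote folder through sshfs
mkdir -p ~/remote-ModelWSSim
sshfs fraukje@localhost:parallel/ModelWSSim/ ~/remote-ModelWSSim

# Report only the files that differ or exist on one side only
diff -rq /Volumes/TC1-SIMDATA/Parallel/ModelWSSim/ ~/remote-ModelWSSim

# Unmount when done
umount ~/remote-ModelWSSim

Alternatively, a dry-run rsync with checksums avoids mounting altogether and lists the files that would need to be transferred:

rsync -rvnc /Volumes/TC1-SIMDATA/Parallel/ModelWSSim/ fraukje@localhost:parallel/ModelWSSim/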

.goutputstream-XXXXX - possible to relocate?

I've been trying to create a union file system for a college project. One of the features that differentiates it from unionfs is that there are no copy-ups. This means that if a file is located in a certain branch, it will remain there even if it is written to.
But my current problem is that .goutputstream-XXXXX files are created, renamed, and deleted whenever a write operation occurs. This is actually OK if the file being written to is in the highest-priority branch (i.e. the default branch where files can be created), but it makes my kernel crash if I try to write to a file in a lower branch.
How do I deal with this? How can I rig it so that all .goutputstream-XXXXX files are written to only one location? These .goutputstream-XXXXX files seem to be intricately connected to the files they correspond to, and seem to work only in the same directory as the file being written to.
I also noticed that .goutputstream-XXXXX files appear when a directory is read. What are they for, anyway?
There has been a bug submitted to the Ubuntu Launchpad in which the creation of .goutputstream-XXXXX files is discussed:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/984785
From what I can see, these files are created when shutting down without a preceding logout, but several other programs may produce them as well, such as Evince or maybe gedit.
Maybe LightDM has something to do with the creation of these files.
Which distribution are you using? Maybe changing the distribution would help.
.goutputstream-XXXXX files are created by gedit, and there is no simple way (via a menu or settings) to relocate them.

Junctions or Virtual Directories for Web Applications?

I see that junctions are a common way of referencing shared code in many projects. However, I have not seen them used in web applications before.
Our team is exploring the possibility of abandoning virtual directories in favor of junctions to simplify our build process. My goal is to compile a list of pros and cons in order to make an informed decision regarding this change.
Is it more appropriate to use junctions or virtual directories on web application projects?
Environment is ASP.NET, IIS6/IIS7, VS.NET.
Comparing virtual directories to junctions is like comparing apples to pears: both create a sort of virtual copy of a directory, just as apples and pears are both fruit, but the comparison ends there.
First off, since Windows Vista, the new thing is symbolic links (which are essentially the same as junctions but can also point to a file or remote SMB path).
Symbolic links enable you to, for example, share every part of a web application except its Web.config and a stylesheet. This is something virtual directories can never do.
Also, virtual directories participate in ASP.NET's change monitoring. If you try to delete a (file or) directory from within your application, for instance, ASP.NET kills your app after the request completes, resulting in loss of session state, etc. If instead of a virtual directory you use a symbolic link, the change will not be noticed and your app will keep on churning.
It's important to keep in mind that symbolic links are not an everyday feature in Windows. Yes, you can see in Explorer that a file or directory is a link, but it's not instantly visible what it links to. Also, from code it is much harder to tell whether a file is a link, so if you accidentally delete the file that a million symbolic links point to, all those symbolic links suddenly 'stop existing'.
Symbolic links also speed up deployment of multiple instances of the same application, since the only thing you have to do is copy a few actual files, and then create symbolic links to the source files for all the rest.
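As a concrete sketch (the site layout and paths here are invented), deploying a second instance with symbolic links from an elevated Windows command prompt could look like this:

rem Per-instance files are real copies
mkdir C:\Sites\Instance2
copy C:\Sites\Shared\Web.config C:\Sites\Instance2\Web.config
copy C:\Sites\Shared\styles.css C:\Sites\Instance2\styles.css

rem Everything else is a symbolic link back to the shared source files
mklink /D C:\Sites\Instance2\bin C:\Sites\Shared\bin
mklink C:\Sites\Instance2\Default.aspx C:\Sites\Shared\Default.aspx

IIS can then point a site or application at C:\Sites\Instance2 like any ordinary folder.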
With virtual directories you need IIS installed in each environment. However, with both approaches you have to manually maintain all references after each change (for example, when someone adds one more reference), which is not convenient.
Consider using a VCS with a referencing mechanism, for example SVN with externals (see the sketch after this list). In that case you will have:
Automatic updating of references in each environment.
The ability to pin references to different versions of the external code, which avoids having to change all dependent applications after every change to the external code.
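A sketch of how that could look with svn:externals (the repository URL, revision number, and folder name are invented for this example):

# Pin the shared code as an external folder inside the web project
svn propset svn:externals "Shared -r 1234 https://svn.example.com/repo/shared/trunk" .
svn commit -m "Reference shared code via svn:externals"

# Every working copy then pulls in (and updates) the reference automatically
svn update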

Resources