I have two dynamic views in ClearCase which, as far as I know, are supposed to be "equal".
One is supposed to look at the "Main branch" and one at some other branch (let's call it A).
I did a merge from A to Main (in the Main view) and for some reason the code at the A view compiles while Main does not.
Is there a way to compare the views for differences?
The simplest way is to use an external diff tool on those two views (like WinMerge or BeyondCompare on Windows, KDiff3 on Unix or Windows, ...).
I would actually create two new views (with the same config spec as the two initial views) to remove any "cache" effect, and start the comparison there.
Once that initial examination is done, I would start the compilation in those two views and see whether one of them still doesn't compile.
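For example, on Unix, something like this compares the two dynamic views recursively (main_view, a_view and myvob are hypothetical names here; expect some noise from view-private files such as build artifacts):

# dynamic views are visible under /view/<tag>
diff -r /view/main_view/vobs/myvob /view/a_view/vobs/myvob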
Don't forget that merging A to Main will not always result in the same set of files in the two views after the merge.
They would be identical only if no evolution has taken place on Main since A started (or since the last merge from A to Main).
The setcs -current you mention will:
-cur/rent
causes the view_server to flush its caches and reevaluate the current config spec, which is stored in file config_spec in the view storage directory. This includes:
Evaluating time rules with nonabsolute specifications (for example, now, Tuesday)
Reevaluating -config rules, possibly selecting different derived objects than previously
Re-reading files named in include rules
If your config spec depends on an "include file" which was itself selected at the wrong version, the first setcs selects that include file at the right version, and the second one reads its (now correct) content and selects the right versions for the rest.
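In practice, that means running the command twice from within the view (cleartool setcs -current and cleartool catcs are standard cleartool commands):

cleartool setcs -current   # flush caches, re-evaluate the config spec (re-selects the include file)
cleartool setcs -current   # second pass: re-read the now correctly selected include file
cleartool catcs            # display the config spec currently in effect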
I'm working with appBuilder and procedure editor in Progress Release 11.6.
As mentioned in some previous questions, I am regularly having problems with the appBuilder: it refuses to open files, corrupts them (deleting parts of the source code), and so on. One of the causes now seems to be the limit that a procedure cannot exceed 32K, comments included.
At first I thought "Are we back in the stone age?", pardon my reaction.
But now I'm starting to think that we are completely abusing the whole concept, so I'd like to present my view on W-, P- and I-files; please confirm (or correct):
W-files are meant only to contain GUI definitions, like a form with some frames, buttons, fill-in fields, ...; any real programming needs to be done in P-files.
P-files contain the real intelligence: that is where the procedures and functions are implemented, to be used by the other P-files or, finally, by the W-files.
I-files are just there to include general behaviour.
Let me give you an example:
W-file:
DEFINE VARIABLE combo_information AS CHARACTER NO-UNDO VIEW-AS COMBO-BOX. /* with some information on the content, if this is static */
...
ON CHOOSE OF combo_information DO:
RUN very_large_procedure.
END.
...
{about.i} /* see here-after */
...
P-file:
PROCEDURE very_large_procedure:
  DO: /* a lot */
  END.
END PROCEDURE.
I-file (about.i):
/* Describes the help-about menu item */
While working like this (putting only the GUI-related things in the W-file and letting the "real" programming be done in the P-files), the mentioned 32K limit will never be reached. On top of that, adding a procedure can be done easily: the appBuilder will not delete it, as the appBuilder never opens the P-file.
Is my view correct (and what about the I-files)?
If so, one technical question: how can I launch a procedure from a P-file inside a W-file? (Obviously, the example above can't work, as in the W-file I did not mention where to look for very_large_procedure.)
The naming is arbitrary and you may occasionally find other extensions being used. Having said that:
"W" is for "window", it is supposed to contain code that is related to making a GUI work. It is very often abused to contain any sort of code. It is usually abused in that way by people who learned to code on the app builder or who have never coded on anything but Windows.
"P" is for "Progress" and retconned to "Procedure". It was the standard back in the old days prior to the appearance of Windows GUI code. Any "headless" code or character mode code would typically go into a dot-p file.
"I" is for "include". This is a very old school way to create reusable code snippets and common "header files". Include files are commonly parameterized. Potentially with either named or positional arguments.
Another major extension is ".cls" files. These are for OO4GL classes (OpenEdge 10 and above).
Launching procedures is achieved by running them:
RUN myproc.p.
or
RUN guiproc.w.
Or, if by "launch", you mean "start a session" then you use the "-p procedureName" startup parameter along with prowin32.exe or prowin.exe for Windows GUI code or _progres.exe for batch or character code.
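If the question is specifically how to call very_large_procedure, which lives in a P-file, from a W-file: one common pattern is to run the P-file persistently and call its internal procedures through the handle. A minimal sketch, where logic.p is a hypothetical name for the P-file:

/* in the W-file: load the P-file persistently, e.g. in the main block */
DEFINE VARIABLE hLogic AS HANDLE NO-UNDO.
RUN logic.p PERSISTENT SET hLogic.

ON CHOOSE OF combo_information DO:
  RUN very_large_procedure IN hLogic. /* call the internal procedure via the handle */
END.

/* in logic.p */
PROCEDURE very_large_procedure:
  /* a lot of logic */
END PROCEDURE.

If very_large_procedure is the whole P-file rather than an internal procedure, a plain RUN very_large_procedure.p is enough.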
One of the ancillary tools bundled with quilt is guards, which processes a list of guard symbols plus a configuration file matching guards to files, and outputs the list of files whose guard specifications match the provided guards.
However, I can't figure out how they're supposed to fit together: quilt(1) doesn't show any way to invoke a command to generate series files, I didn't find examples in the man pages or the working copy, and the internets are less than helpful (all the hits talk about bedding).
I feel like guards has to be manually invoked whenever its "dependencies" change, with the series file overwritten each time; is that the case? If so, how is data fed back the other way around? E.g. when adding a new patch to the series, does it have to be manually synchronised to the guards file?
Background: a few years back I used mq quite a bit, but it integrates guards natively, so the synchronisation back and forth is not an issue at all.
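To make my working assumption concrete (file names are hypothetical, and this assumes the guards(1) dialect shipped with quilt): I picture a master configuration with guarded entries, manually regenerated into the series file whenever the active guard set changes:

# series.conf: one patch per line, optionally prefixed with +symbol / -symbol guards
#   +featureX  patches/add-featureX.patch
#   -debug     patches/quiet-logging.patch
#   patches/always-applied.patch
guards featureX < series.conf > patches/series   # regenerate the active series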
Basically, I want to be able to generate class definitions, compile the system, and save it for reuse. Would that involve a code walker, or is there a simpler option?
(save-lisp-and-die "isn't going to work for me")
Expanding to explain. I'm generating systems based on OpenAPI definitions, so a system roughly corresponds to an API client.
There will be dozens, if not hundreds of these.
The idea is to NOT keep them all in the image, but to load them at run time as required.
I see two possible routes here, and to some extent, I suspect they mainly differ in "the last mile" (as it were).
The route you seem to have settled on: run-time definition of classes and functions.
A route whereby you generate your function/class forms, but don't go all the way to making them "live" in the image, and instead emit the form(s) to a file.
I suspect that most of the generating code could be shared between the two: for the first route, a wrapping macro effectively returns a PROGN; for the second, a function pretty-prints what the macro would have returned onto a stream.
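A minimal sketch of that shared-generator idea; make-client-forms and the forms it returns are hypothetical stand-ins for your real OpenAPI-driven generator:

(defun make-client-forms (api-name)
  ;; hypothetical generator: would build the defclass/defun forms
  ;; from an OpenAPI definition
  (list `(defclass ,api-name () ())
        `(defun ,(intern (format nil "MAKE-~A" api-name)) ()
           (make-instance ',api-name))))

;; route 1: define everything live in the image
(defmacro define-client (api-name)
  `(progn ,@(make-client-forms api-name)))

;; route 2: emit the same forms to a file, then compile and load on demand
(defun emit-client (api-name pathname)
  (with-open-file (out pathname :direction :output :if-exists :supersede)
    (dolist (form (make-client-forms api-name))
      (pprint form out)))
  (load (compile-file pathname)))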
That said, building a tailored environment and saving it to a "core" file is a pretty good way of getting excellent startup times.
I have a Plone instance which contains some structures which I need to copy to a new Plone instance (but much more which should not be copied). Those structures are document trees ("books" of Archetypes folders and documents) which use resources (e.g. images and animations, by UID) outside those trees (in a separate structure which of course contains lots of resources not needed by the ones which need to be copied).
I tried already to copy the whole data and delete the unneeded parts, but this takes very (!) long, so I'm looking for a better way.
Thus, the idea is to traverse my little forest of document trees and transfer them and the resources they need (sparsely rebuilding that separate structure) to the new Plone instance. I have full access to both of them.
Is there a suggested way to accomplish this? Or should I export all of them, including the resources structure, and delete all unneeded stuff afterwards?
I found out that each time that I make this type of migration by hand, I make mistakes that force me to do it again.
OTOH, if migration is automated, I can run it, find out what I did wrong, fix the migration, and do it all over again until I am satisfied.
In this context, to automate your migration, I advise you to look at collective.transmogrifier.
I recommend jsonmigrator, which is a twist on the collective.transmogrifier mentioned by Godefroid. See my blog post on it.
You can even use it to migrate from Archetypes to Dexterity types (you just need matching field names and, roughly speaking, matching types).
Selecting the resources to import will be tricky, though. Perhaps you can find a way to iterate through your document trees and "touch" (in a Unix sense) any resource you are using, then copy across only the resources whose "timestamp" indicates they have been touched.
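A variant of that "touch" idea that avoids timestamps: collect the UIDs actually referenced from the document trees, then export only those resources. A rough sketch, assuming an Archetypes-era site where body text stores links as resolveuid/<uid>; tree_paths and the function names are hypothetical:

import re

UID_RE = re.compile(r'resolveuid/([0-9a-f]{32})')

def collect_used_uids(portal, tree_paths):
    # scan every object under the document trees for resolveuid links
    catalog = portal.portal_catalog
    used = set()
    for path in tree_paths:
        for brain in catalog(path=path):
            obj = brain.getObject()
            text = obj.getText() if hasattr(obj, 'getText') else ''
            used.update(UID_RE.findall(text))
    return used

def used_resources(portal, tree_paths):
    # map each UID back to its object via the Archetypes reference_catalog
    return [portal.reference_catalog.lookupObject(uid)
            for uid in collect_used_uids(portal, tree_paths)]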
The problem is described below:
Suppose I have a list of files in one version (say A, B, C, D). In the next version I have the following files (A, E, F, G). There are some similarities in their contents. The files in the later version come from the previous version through file renaming, content addition, deletion, partial modification, or no change at all (for example, A is not changed).
I take a block of text from a file (E, in the 2nd version) and check which files in the 1st version contain this text block. I find that B, C and D contain the text fragment. I want to determine from which file (B, C or D) this text block actually comes. (I assume that E is a file whose name changed in the second version.)
Since contents may be changed, added or deleted in the later version, I use the LCS algorithm to determine similarity. But I cannot map each file to its previous version.
I think one possible approach might be to use the location information of the matched text blocks, but this heuristic does not always work. Does any research or algorithm exist for this problem? Any direction would be helpful. Thanks in advance.
I think it may be helpful to take a look at Subversion and its capability to track file renaming between versions: http://svnbook.red-bean.com/
It's tried and tested, because it's used by so many developers. Renaming has to be done with Subversion tools, but there are many of those (command line, file-explorer integration for different OSes, GUIs, IDEs, you name it). It also covers moving files between directories and merging several lines of changes (branches).
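If recording renames in a version-control tool isn't an option, matching files across versions by overall content similarity (essentially what git's rename detection does) is a reasonable baseline. A minimal sketch using Python's standard difflib; file contents are passed in as plain strings:

import difflib

def best_match(old_files, new_text, threshold=0.5):
    # old_files: dict mapping version-1 file name -> content
    # new_text:  content of the version-2 candidate (e.g. E)
    # returns the version-1 name whose content is most similar, or None
    best_name, best_score = None, threshold
    for name, text in old_files.items():
        score = difflib.SequenceMatcher(None, text, new_text).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# usage: decide whether E came from B, C or D
# old = {name: open("v1/" + name).read() for name in ("B", "C", "D")}
# print(best_match(old, open("v2/E").read()))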