Maintaining the same piece of code in multiple files - build process

I have an unusual environment in a project where I have many files that are each independent standalone scripts. All of the code required by the script must be in the one file and I can't reference outside files with includes etc.
There is a common function in all of these files that does authorization; it is the last function in each file. If this function changes at all (as it does now and then) it has to be changed in all the files, and there are plenty of them.
Initially I was thinking of keeping the authorization function in a separate file and running a batch process that produced the final files by combining the auth file with each of the others. However, this is extremely cumbersome when debugging because the auth function needs to be in the main file for this purpose. So I'd always be testing and debugging in the folder with the combined file and then have to copy changes back to the uncombined files.
Can anyone think of a way to solve this problem? i.e. maintain an identical fragment of code in multiple files.

I'm not sure what you mean by "the auth function needs to be in the main file for this purpose", but a typical Unix solution might be to use make(1) and cpp(1) here.

Not sure what environment/editor you're using, but one thing you can do is use pre-build events. Create a start tag/end tag which defines the import region, and then in the pre-build event copy the common code between the tags and then compile...
//$start-tag-common-auth
..... code here .....
//$end-tag-common-auth
In your pre-build event just find those tags, replace the block between them with the shared code, and then finish compiling.
Visual Studio supports pre-/post-build events which can call external processes (like batch files or scripts), but these do not directly interact with the environment.
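A minimal sketch of such a pre-build step, assuming Python and hypothetical file names (auth_common.txt for the shared function, scripts/*.js for the standalone files), might look like this:
import glob

START = "//$start-tag-common-auth"
END = "//$end-tag-common-auth"

# Hypothetical layout: auth_common.txt holds the shared auth function,
# scripts/*.js are the standalone files to rebuild.
common = open("auth_common.txt").read().rstrip("\n")

for path in glob.glob("scripts/*.js"):
    text = open(path).read()
    start = text.index(START) + len(START)
    end = text.index(END)
    # Splice the shared code between the markers, keeping the markers themselves
    # so the same file can be rebuilt again after the next change.
    rebuilt = text[:start] + "\n" + common + "\n" + text[end:]
    with open(path, "w") as f:
        f.write(rebuilt)
Because the markers are preserved, the step is idempotent: running it again simply replaces the region with the current shared code.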

Instead of keeping the authentication code in a separate file, designate one of your existing scripts as the primary or master script. Use this one to edit/debug/work on the authentication code. Then add a build/batch process like you are talking about that copies the authentication code from the master script into all of the other scripts.
That way you can still debug and work with the master script at any time, you don't have to worry about one more file, and your build/deploy process keeps everything in sync.
You can use a technique like the one Priyank Bolia suggested to make it easy to find/replace the required bit of code.

An ugly way I can think of:
Have the original code in all the files, and surround it with markers like:
///To be replaced automatically by the build process to the latest code
String str = "my code copy that can be old";
///Marker end.
This code block can be replaced automatically by the build process, from one common code file.

Related

Standardized filenames when passing folders between steps in pipeline architecture?

I am using AzureML pipelines, where the interface between pipeline steps is through a folder or file.
When I am passing data into the pipeline, I point directly to a single file. No problem at all. Very useful when passing in configuration files which all live in the same folder on my local computer.
However, when passing data between different steps of the pipeline, I can't provide the next step with a file path. All the steps get is a path to some folder that they can write to. Then that same path is passed to the next step.
The problem comes when the following step is then supposed to load something from the folder.
Which filename is it supposed to try to load?
Approaches I've considered:
Use a standardized filename for everything. The problem is that I want to be able to run the steps locally too, independent of any pipeline, and this makes for a very poor UX for that use case.
Check if the path is a file; if it isn't, check all the files in the folder. If there is only one file, use it; otherwise throw an exception (see the sketch at the end of this question). This is maybe the most elegant solution from a UX perspective, but it sounds overengineered to me. We also don't structurally share any code between the steps at the moment, so either we will have repetition or we will need to find some way to share code, which is non-trivial.
Allow custom filenames to be passed in optionally, otherwise use a standard filename. This helps with the UX, but often the filenames are supposed to be defined by the configuration files being passed in, so while we could do some bash scripting to get the filename into the command, it feels like a sub-par solution.
Ultimately it feels like none of the solutions I have come up with are any good.
It feels like we are making things more difficult for ourselves in the future if we assume some default filename. For example, we work with multiple file types, so it would need to omit an extension.
But any way to do it without default filenames would also cause maintenance headaches down the line, or incur substantial upfront cost.
The question is: am I missing something? Any potential traps, better solutions, etc. would be appreciated. It definitely feels like I am somewhat under- and/or overthinking this.
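For reference, a minimal sketch of the second approach (plain Python with pathlib; the helper name is just illustrative) could look like this:
from pathlib import Path

def resolve_input(path_str: str) -> Path:
    # Hypothetical helper: accept either a direct file path (local runs)
    # or a step-output folder expected to contain exactly one file.
    path = Path(path_str)
    if path.is_file():
        return path
    files = [p for p in path.iterdir() if p.is_file()]
    if len(files) == 1:
        return files[0]
    raise ValueError(
        f"Expected a file, or a folder containing exactly one file; found {len(files)} files in {path}"
    )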

How to run a script referring to an externally stored script in R?

I am looking for a way to run a script with one part of it stored in a separate file - like a "normal" script, but with one part referring to an external script.
The script stored in a separate file will be generic to several scripts and will be regularly updated, and that is the reason for keeping this part of the script separate.
I have not found anything about how to solve this. Maybe someone else has a solution to this?
It seems you're looking for
source("[file location]")
Be aware that this will automatically run the entire script at that file location, so take care with the names you give objects in that script and in the script you're working with (e.g. if you open a dataframe in your current script and not in the external script, but you're working with that dataframe in the external script too, the name needs to be the same).
Alternatively, you could write the external script in such a way that it is loaded into your workspace as a function, so that you can refer to that function in the 'current' script.

How do I use Atom's linter-jshint when code is split up across multiple files?

I'm writing a single-page JavaScript application and I'm using Atom as my text editor. (It's an Electron application, but that's beside the point.)
I'm also using the linter-jshint plugin for Atom. This is great, as it shows immediately in the text-editor when I make a typo in a variable, among other useful things.
Lately, my app has been getting very long. Naturally, I want to try and split it up across multiple files. After doing some research on StackOverflow, I've determined that I can use Grunt to automatically concatenate JavaScript files together. This is great because I don't have to refactor my code - I can just copy paste my existing functions into separate files. Easy!
However, once I do this, Atom fills up with warnings and errors from JSHint, because it can't find variables and functions that are located in the other files!
Now, I could just abandon the JSHint plugin in Atom altogether and use the JSHint plugin for Grunt after the concatenation has already occurred. But that sucks! I want the code that I'm going to be writing to be checked on the fly like in a real IDE.
Is there a way to tell Atom/JSHint to assume that a bunch of JavaScript files will all be concatenated together? Or am I just approaching this problem completely wrong?
You can split your Electron application with Node CommonJS modules, and use require('./state.js'); within your application.
Although I don't use Atom, that should allow it to understand how you're using your variables and functions in other files.
Also, this should eliminate your need for concatenation, as the single-page app will have all its dependencies accounted for.

XML or text file logging in ASP.NET with thread safety

I need very simple text file logging. I'll only append lines to it, never change existing ones nor delete them. If it were an XML file it would be easier to bind to grids to view the entries, but the question remains the same for both text files and XML files, as they live in the file system.
On the web server there will be file locking while appending log entries, and maybe also while reading them, so this method has to be thread-safe. Multiple instances can write data to the file at the same moment.
I know there are some third-party tools like Serilog etc., but I want to know:
How can I append (not change) lines to a text file (or XML file) without worrying about file locks?
If I read the XML file into a DataSet, add a new row to it and save it as XML, I would lose entries made by other instances in the meantime.
If I open a text file with a StreamWriter and append a line to it, other instances will get a lock error.
If I get the list of logs from the admin panel, again the file will be locked and instances won't be able to append logs.
Any ideas?
After long research hours and experiments I found out that using NLog is the best option for me. The most important thing is that people who use it are very happy. I created a small example page that writes a log entry every time it is called, and tested it with a multithreaded application that calls this sample page again and again. It was fast enough that I could not follow the thread counts going up, and no problem has been raised so far.
So, I'll stick with NLog.
Best.

Unix touch command usage

I know you can use touch to create a new empty file.
I just learned that touch can be used to update the access and modification time of a file. I don't quite know in what situations, and why, you would need to update the access and modification time of a file, i.e. what the usefulness of this particular function is.
Thanks!
Some utilities depend on the timestamp of a file.
For example, make uses timestamps to check whether it is required to do something (usually build), based on the timestamps of the source code and of the output (executable, object files, ...).
By touching the source file and then running make, you can force a rebuild.
In addition, touch has a -d option that can fake the modification time.
If one "knows what he's doing" she can avoid long build time, due to unnecessary re-compilations.
For example, when adding a declaration to a common header file,
that does not change any old API, one can fake the header real modification time,
and bypass Makefile's dependencies.
