Is there a way to create a directory using Common Lisp? I want to first create a folder and then put my .txt and .png files in it. I know that I could create the folder externally and then create my files inside it using with-open-file and so on. What I want is a Common Lisp solution for this.
(ensure-directories-exist "/path/name/")
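For example, a minimal sketch (the /tmp/myapp path is just a placeholder). Note the trailing slash, which tells ensure-directories-exist that the whole pathname is a directory:

(ensure-directories-exist "/tmp/myapp/images/")   ; creates /tmp/myapp/ and /tmp/myapp/images/
(with-open-file (stream "/tmp/myapp/notes.txt"
                        :direction :output
                        :if-exists :supersede)
  (write-line "hello" stream))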
This page seems to be a nice writeup, explaining all the nuances of the file I/O issue that CL needs to address.
Imagine you have multiple processes running simultaneously, which might sometimes need to line up for a program (resource) that cannot run in parallel. Then creating a directory can be an efficient way to lock the resource. ensure-directories-exist is not good for that because it is not atomic. In this situation I use (in SBCL)
(sb-unix:unix-mkdir "/name/of/directory" #o777)
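A rough sketch of the locking idea (try-lock is a hypothetical name, and this assumes the usual sb-unix convention of returning NIL plus an errno on failure):

(defun try-lock (dir)
  ;; mkdir is atomic at the filesystem level, so exactly one process
  ;; succeeds in creating the directory; the others get NIL and must
  ;; wait. Deleting the directory later releases the lock.
  (sb-unix:unix-mkdir dir #o777))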
I am using AzureML pipelines, where the interface between pipeline steps is through a folder or file.
When I am passing data into the pipeline, I point directly to a single file. No problem at all. Very useful when passing in configuration files which all live in the same folder on my local computer.
However, when passing data between different steps of the pipeline, I can't provide the next step with a file path. All a step gets is a path to some folder that it can write to. That same path is then passed to the next step.
The problem comes when the following step is then supposed to load something from the folder.
Which filename is it supposed to try to load?
Approaches I've considered:
Use a standardized filename for everything. The problem is that I want to be able to run the steps locally too, independent of any pipeline, and this makes for a very poor UX for that use case.
Check if the path is to a file; if it isn't, check all the files in the folder. If there is only one file, use it; otherwise throw an exception (see the sketch after this list). This is maybe the most elegant solution from a UX perspective, but it sounds overengineered to me. We also don't structurally share any code between the steps at the moment, so either we will have repetition or we will need to find some way to share code, which is non-trivial.
Allow custom filenames to be passed in optionally, otherwise use a standard filename. This helps with the UX, but often the filenames are supposed to be defined by the configuration files being passed in, so while we could do some bash scripting to get the filename into the command, it feels like a sub-par solution.
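For what it's worth, the folder-inspection approach is only a few lines in Python; a minimal sketch (resolve_input and the error message are hypothetical, assuming standard pathlib semantics):

from pathlib import Path

def resolve_input(path: str) -> Path:
    p = Path(path)
    if p.is_file():
        return p  # local run: the caller pointed directly at a file
    files = [f for f in p.iterdir() if f.is_file()]
    if len(files) == 1:
        return files[0]  # pipeline run: the step's output folder holds one file
    raise ValueError(f"expected exactly one file in {p}, found {len(files)}")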
Ultimately it feels like none of the solutions I have come up with are any good.
It feels like we are making things more difficult for ourselves in the future if we assume some default filename. E.g. we work with multiple file types, so a default name would need to omit the extension.
But any way to do it without default filenames would also cause maintenance headaches down the line, or incur substantial upfront cost.
The question is: am I missing something? Any potential traps, better solutions, etc. would be appreciated. It definitely feels like I am somewhat under- and/or overthinking this.
Over the past few weeks I have been getting into Ada, for various reasons; my personal motivations for using it are out of scope for this question.
The other day I started using the gprbuild command that comes with the Windows version of GNAT, in order to get the benefits of managing my applications in a project-related manner. That is, being able to define certain attributes on a per-project basis, rather than manually setting up the compile phase myself.
Currently I name my files according to what seems to be the standard for gprbuild, although I could very well be wrong: a period (in the package structure) becomes a - in the file name, and an underscore stays an _. As such, a package by the name App.Test.File_Utils would have the file names app-test-file_utils.ads and app-test-file_utils.adb.
In the .gpr project file I have specified:
for Source_Dirs use ("app/src/**");
so that I am allowed to use multiple directories for storing my files, rather than needing to have them all in the same directory.
The Problem
The problem that arises, however, is that file names tend to get very long. As I am already putting the files in a directory based on the package name contained by the file, I was wondering if there is a way to somehow make the compiler understand that the package name can be retrieved from the file's directory name.
That is, rather than having to name the App.Test.File_Utils' file name app-test-file_utils, I would like it to reside under the app/test directory by the name file_utils.
Is this doable, or will I be stuck with the horror of eventually having to name my files along the lines of app-test-some-then-one-has-more_files-another_package-knew-test-more-important_package.ads? That is, assuming I have not missed something about how an Ada application should actually be structured.
What I have tried
I tried looking for answers in the package Naming configuration of the gpr files in the documentation, but to no avail. I have also been browsing the web for information, but decided it might be better to get help through Stack Overflow, so that other people who might struggle with this problem in the future (granted it is a problem in the first place) might also get help.
Any pointers in the right direction would be very helpful!
In the top-secret GNAT documentation there is a description of how to use non-default file names. It's a great deal of effort. You will probably give up, use the default names, and put them all in a single directory.
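For a flavour of what it involves, the per-unit clauses in the project file look roughly like this (a sketch only; the clauses give base file names, while the directories still come from Source_Dirs):

package Naming is
   --  One clause per compilation unit, which is why it does not scale.
   for Spec ("App.Test.File_Utils") use "file_utils.ads";
   for Body ("App.Test.File_Utils") use "file_utils.adb";
end Naming;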
You can also simplify much of the effort by using GPS and letting it build your project file as you add files to your source directories.
I have written personal functions in R that are not specific to one (or a few) projects.
Where is the best place (in R) to put those kinds of functions?
Is the best way to have one file that gets sourced at startup? Or is there a better (recommended) way to deal with this situation?
Create a package named "utilities", put the utility functions in that package, aim for one function per file, and store the package in a source control system (e.g., Git, SVN). It will save you time in the long run.
P.S. .Rprofile tends to get accidentally deleted.
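A minimal sketch of the one-time setup (usethis and devtools are one way to do it; package.skeleton() from base R works too, and the paths here are placeholders):

usethis::create_package("~/r-packages/utilities")
# put one function per file under R/, e.g. R/read_config.R, then:
devtools::document("~/r-packages/utilities")
devtools::install("~/r-packages/utilities")
library(utilities)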
If you have many, it would be good to make them into a package that you load each time you start working.
It is probably not a good idea to have a monolithic script with a bunch of functions. Instead break the file up into several files each of which either has only one function (my preference) or has a group of functions that are logically similar. That makes it easier to find things when you need to make changes.
Most people use the .Rprofile file for this. Here are two links which talk about this file in some detail.
http://www.statmethods.net/interface/customizing.html
http://blog.revolutionanalytics.com/2013/10/sample-rprofile.html
At the top of my .Rprofile file I call library() for the various libraries which I normally use. I also have some personal handy functions which I've come to rely on. Because this file is sourced on startup, they are available to me every session.
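For example, a ~/.Rprofile along these lines (the package names and the helper function are just placeholders):

# attach packages I use every session
library(data.table)
library(ggplot2)

# a personal helper, available in every session
pastem <- function(...) paste(..., sep = "")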
In my experience, a package is the best choice for personal functions. At first I put all new functions into a personal package, which I call My. When I find that some functions are similar and worth becoming an independent package, I create a new package and move them there.
I am new to UniVerse and I have to write a UniVerse program which will check the permissions of folders, subfolders and files. For example, we have a folder called A and a subfolder A1 with files in A1. I have to check whether their permissions are set correctly.
Let's say the files in subfolder A1 are supposed to be rwxrwxr-x (775) but they are rwxrwxrwx (777). Based on this I need to report that the files in folder A1 are not set correctly.
So far, a little push in the right direction (ideas, references, code snippets, etc.) would really help.
Thanks a million in advance for your help.
I'm more of a UniData person, but in UniVerse Basic it looks like you could leverage the STATUS statement, which returns a dynamic array that contains the UNIX permissions in numeric form (e.g. 777).
More information available in the UniVerse Basic reference manual here: http://www.rocketsoftware.com/u2/products/universe/resources/technical-manuals/universe-11.1.11-documentation/basicref-v11r1.pdf
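A rough sketch of the idea in UniVerse Basic (the field position of the permissions in the STATUS array is a placeholder here; check the reference manual above for the exact layout):

OPEN '', 'A1' TO F.A1 THEN
   STATUS ST.INFO FROM F.A1 THEN
      PERMS = ST.INFO<5>  ;* placeholder field position - see the manual
      IF PERMS # 775 THEN
         PRINT 'Permissions on A1 are not set correctly: ':PERMS
      END
   END ELSE
      PRINT 'STATUS failed'
   END
END ELSE
   PRINT 'Cannot open A1'
END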
One of the best resources for UniVerse and RetrieVe questions is this site: http://www.mannyneira.com/universe/
If your system allows it, you should look into trying to write a script that executes via Shell. You can write UniVerse scripts that duck in and out of Shell. According to the UniVerse and Linux page on that site, you should be able to access it via the SH command.
When writing a Shell program to interact with UniVerse, you're typically going to want to use uvsh to output the data and then pipe it to something else (such as col) to manipulate it. If you pass a string to the uvsh command, it will execute it - so you can pass it commands to read file data (such as from voc pointers).
Keep in mind that every time you run the SH or uvsh command, you're nesting another shell within your current one, not switching between them.
However, it sounds like the file permission information you're interested in could be handled purely on the Shell side of things...
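For example, with standard Unix find (the path is a placeholder):

# list every file under A whose mode is not exactly 775
find /path/to/A -type f ! -perm 775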
I need a better file attachment function. Ideally, if you upload files via FTP and they have a name similar to the name of a node (containing the same word), they would appear under that node, so you don't have to attach each file separately when there are many nodes. Can you think of a solution? Alternatively, something that would not be as tedious as always adding them manually.
Dan.
This would take a fair bit of coding. Basically you want to implement hook_cron() and run a function that loops through every file in your FTP folder. The function will look for names of files that have not already been added to any node and then decide which node to add them to.
Bear in mind there will be a delay between uploading your files and their being attached to a node, since that only happens when the next cron job runs.
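If you did go down that road anyway, the skeleton would look something like this (Drupal 7 style; the module name and path are placeholders, and the hard part, deciding which node gets which file, is deliberately left out):

<?php
/**
 * Implements hook_cron().
 */
function mymodule_cron() {
  // Scan the FTP drop folder for candidate files.
  $files = file_scan_directory('sites/default/files/ftp', '/.*/');
  foreach ($files as $uri => $file) {
    // Skip files Drupal already tracks (the file_managed table in D7),
    // then try to match $file->filename against node titles and attach.
  }
}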
This is not a good solution and if I could give you any advice it would be not to do it - The reason you upload files through the Drupal interface is so that they are tracked in the files table and can be re-used.
Also the way you're proposing leaves massive amounts of ambiguity as to which file will go where. Consider this:
You have two nodes, one about cars and one about motorcycle sidecars. Your code will have to be extremely complex to make the decision of which node to add to if the file you've uploaded is called 'my-favourite-sidecar.jpg'.