Protect.exe for AutoLISP code protection - encryption

I am developing an architectural LISP-based package for a member of the IntelliCAD consortium. Per recommendations I have found on websites, I have used the Kelvinator to deformat and disguise some of the code. Now I am attempting to use Protect.exe to encrypt the code. The exe seemed to work until I tried to use a folder name in the output file name, like this:
protect es.lsp L kelvinated\protected\es.lsp
First of all, can I do this? Will protect.exe work like this, or do the input and output files have to be in the same folder?
Also, one time I tried this and I got a "stack overflow" error. Therefore, I am here.

Kelvinator/protect et al. are pretty old utilities; do you know the last time they were updated? Subtext: they may expect old-school 8.3 file/folder names.
As for "will this work?", I cannot say, as I use different schemes to protect my work when writing lisp for others (vlx/fas, bricscad's encryptor, my own loader / obfuscators ...).
A stack overflow in this context suggests a recursion error, perhaps when it tries to reconcile the pathing you're providing.
Have you tried using the DOS short path? Putting the path in quotes? Using forward slashes? Using double backslashes?
What happens if you pass "/?" (and alternates) on the command line? Does it provide any help?
Finally, if it refuses to process the files unless they share the same directory, you could always front-end it with a batch file that does the housekeeping for you.
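Something along these lines, for example (a rough sketch in Python rather than an actual batch file, assuming protect.exe is on the PATH and takes the arguments exactly as in your command line; es_protected.lsp is just a placeholder name):
import pathlib
import shutil
import subprocess

# run protect with input and output in the same folder ...
subprocess.run(["protect", "es.lsp", "L", "es_protected.lsp"], check=True)

# ... then move the result into the folder you actually wanted
out_dir = pathlib.Path(r"kelvinated\protected")
out_dir.mkdir(parents=True, exist_ok=True)
shutil.move("es_protected.lsp", str(out_dir / "es.lsp"))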
Michael.

Related

Standardized filenames when passing folders between steps in pipeline architecture?

I am using AzureML pipelines, where the interface between pipeline steps is through a folder or file.
When I am passing data into the pipeline, I point directly to a single file. No problem at all. Very useful when passing in configuration files which all live in the same folder on my local computer.
However, when passing data between different steps of the pipeline, I can't provide the next step with a file path. All the steps get is a path to some folder that they can write to. Then that same path is passed to the next step.
The problem comes when the following step is then supposed to load something from the folder.
Which filename is it supposed to try to load?
Approaches I've considered:
Use a standardized filename for everything. Problem is that I want to be able to run the steps locally too, independent of any pipeline, and this makes for a very poor UX in that use case.
Check if the path is to a file; if it isn't, check all the files in the folder. If there is only one file, then use it; otherwise throw an exception (see the sketch after this list). This is maybe the most elegant solution from a UX perspective, but it sounds overengineered to me. We also don't structurally share any code between the steps at the moment, so either we will have repetition or we will need to find some way to share code, which is non-trivial.
Allow custom filenames to be passed in optionally, otherwise use a standard filename. This helps with the UX, but often the filenames are supposed to be defined by the configuration files being passed in, so while we could do some bash scripting to get the filename into the command, it feels like a sub-par solution.
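For concreteness, the second approach would look something like the sketch below (resolve_input is just an illustrative name, nothing AzureML-specific):
from pathlib import Path

def resolve_input(path: str) -> Path:
    # Accept either a direct file path (local runs) or a folder containing
    # exactly one file (pipeline runs); anything else is ambiguous.
    p = Path(path)
    if p.is_file():
        return p
    files = [f for f in p.iterdir() if f.is_file()]
    if len(files) == 1:
        return files[0]
    raise ValueError(f"expected a file or a folder with exactly one file, got {len(files)} files in {p}")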
Ultimately it feels like none of the solutions I have come up with are any good.
It feels like we are making things more difficult for ourselves in the future if we assume some default filename. For example, we work with multiple file types, so the default name would have to omit the extension.
But any way to do it without default filenames would also cause maintenance headaches down the line, or incur substantial upfront cost.
The question is: am I missing something? Any potential traps, better solutions, etc. would be appreciated. It definitely feels like I am somewhat under- and/or overthinking this.

Is it possible for a server to provide the same file to two people, one of whom understands only UTF-8 and the other only ISO-8859?

INTRO:
I'm in a bind because, when uploading an inventory feed to Amazon in 2021, they still don't understand UTF-8 encoding.
Here we have a file, in a WordPress installation, used as the image for a visual product.
Example url : https://wordpresssite.com/uploads/Café-à-la-crème.jpg
WordPress displays it fine.
Amazon reads a bunch of gibberish and can't find the file and gives an error.
Can we leave the file name on the source server as is, and yet do something in cPanel or in the Excel file that lists this URL, in a way that Amazon can also read it?
Is this ultimately as simple as telling Excel to encode that column differently before uploading?
Thank you in advance!
UPDATE : What I am trying now, is to export the Excel to CSV and then run it through line by line using PHP with a combination of tricks hoping to do a passable job of it. From what I see, there are many ways that "sorta" work, but nothing is sure.
UPDATE 2 : I realize that this doesn't solve my problem, because if Amazon changes the file name, changing an "é" to an "e", then it won't find the image either, so I'll have to go through all the images and find the ones with accents that I'm using.
QUESTION ABOUT PROCEDURE : I haven't been able to quite understand the way things work. I thought originally that this is about trying to get help when stuck. I have explained the problem and code isn't necessary. If I'm wrong, please tell me how it changes THIS situation? I'm using Excel, WordPress and I have to lose the UTF-8 accented characters that seem to cause Amazon's systems such grief (no judgement to Amazon, except that this resistance to UTF-8 is giving me brain shudders at the moment).
MORE INFO: If this helps, I'm writing in English but certain art products have a lot of French and some German in their names. I thought my example sufficient to illustrate what I was up against.
My problem is not how to convert the code but how to put the steps together to do what I need. It's because this whole process is not a simple iconv() vs. utf8_decode() in PHP that it's extra stressful. Once I get the big picture sorted, the smaller steps are written about in many places where I could find more specific details if I needed them.
I'm not snarking here, but it seems that this kind of comment is just kicking someone when they are down. You are not the first to make such a suggestion over the years but again, I am curious how I could have explained any more than I have already — in a way that pertains to my actual problem.
Thanks for your response.
That URI is not properly encoded as per RFC 3986 (see also Wikipedia: percent/URL encoding). You cannot expect a server to blindly assume a requested URI to be UTF-8 encoded, but you can expect every server to support percent encoding:
https://wordpresssite.com/uploads/Caf%C3%A9-%C3%A0-la-cr%C3%A8me.jpg
In PHP this can be achieved through rawurlencode(); in JavaScript it would be encodeURI().
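For illustration, the same thing in Python, in case you end up preprocessing your list of URLs with a script (urllib.parse.quote is the rough counterpart of rawurlencode() here; only the file name is encoded):
from urllib.parse import quote

filename = "Café-à-la-crème.jpg"
url = "https://wordpresssite.com/uploads/" + quote(filename)
print(url)  # https://wordpresssite.com/uploads/Caf%C3%A9-%C3%A0-la-cr%C3%A8me.jpg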
Not sure what you want with Excel and CSV, but from what I understood it is unrelated to your actual problem.

Open an XML file not knowing the complete name and parse the XML

I am using Robot Framework with RIDE, and for a test I need to find an XML file on my computer and open it to parse the XML and be able to use the data.
The thing is that I don't know the exact name of the file; the format is numberNameOfTheFile, so it could be 1NameOfTheFile or 25NameOfTheFile.
How can I use regexp in my keyword? Or any other way to achieve this?
Thank you
How would you do it manually - how would you pick the file to use for the verification?
I presume you are going to look at all the files that match a specific name pattern; in Robot Framework you can do that with OperatingSystem's List Files In Directory keyword, which supports passing a name pattern:
${the files}= List Files In Directory /the/path/to/the/dir *NameOfTheFile.xml
Now you have a list object with the filenames that match; if it's empty - there's no such file, which may be a problem (depends on your test/reqs, I don't know). If it has a single member - great, that's your file.
And if there are multiple files - that's another "problem". How would you pick the right file manually? It could be that the newest file is the target one - for that you would go over all of them and find it through OperatingSystem's Get Modified Time; or it could be the largest; or the number in its prefix would be the biggest. This really depends on your requirements, and what you are trying to achieve.
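If "newest wins" turns out to be your rule, the whole selection also fits in a few lines of plain Python, e.g. as a custom keyword library (a sketch only - the pattern and the selection rule are assumptions on my part):
import glob
import os
import xml.etree.ElementTree as ET

def load_matching_xml(directory, pattern="*NameOfTheFile.xml"):
    # collect every file matching the pattern, fail if there is none,
    # otherwise parse the most recently modified one
    matches = glob.glob(os.path.join(directory, pattern))
    if not matches:
        raise FileNotFoundError(f"no file matching {pattern} in {directory}")
    newest = max(matches, key=os.path.getmtime)
    return ET.parse(newest).getroot()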
"How would you do it manually" is probably the most important question to ask. Think and break down to steps the individual tasks you would do, and now you have the algorithm; see how to put that in code - and presto, the implementation. This applies to scripts, test cases, and business process automation (e.g. software).
I was tempted to mark the question for closing, because precisely this - the algorithm - was missing, only the end goal is stated - while SO is for helping in the implementation part. But, here we are :)

Ada `Gprbuild` Shorter File Names, Organized into Directories

Over the past few weeks I have been getting into Ada, for various different reasons. But there is no doubt that information regarding my personal reasons as to why I'm using Ada is out of scope for this question.
As of the other day I started using the gprbuild command that comes with the Windows version of GNAT, in order to get the benefits of a system for managing my applications in a project-related manner. That is, being able to define certain attributes on a per-project basis, rather than manually setting up the compile-phase myself.
Currently, my file names are based on what seems to be the standard for gprbuild, although I could very much be wrong: periods in the package structure become a - in the file name, and underscores stay as _. As such, a package by the name App.Test.File_Utils would have the file names app-test-file_utils.ads and app-test-file_utils.adb.
In the .gpr project file I have specified:
for Source_Dirs use ("app/src/**");
so that I am allowed to use multiple directories for storing my files, rather than needing to have them all in the same directory.
The Problem
The problem that arises, however, is that file names tend to get very long. As I am already putting the files in a directory based on the package name contained by the file, I was wondering if there is a way to somehow make the compiler understand that the package name can be retrieved from the file's directory name.
That is, rather than having to name the App.Test.File_Utils' file name app-test-file_utils, I would like it to reside under the app/test directory by the name file_utils.
Is this doable, or will I be stuck with the horrors of eventually having to name my files along the lines of: app-test-some-then-one-has-more_files-another_package-knew-test-more-important_package.ads? That is, assuming I have not missed something about how an Ada application should actually be structured.
What I have tried
I tried looking for answers in the package Naming configuration of the gpr files in the documentation, but to no avail. Furthermore, I have been browsing the web for information, but decided it might be better to get help through Stack Overflow, so that other people who might struggle with this problem in the future (granted it is a problem in the first place) might also get help.
Any pointers in the right direction would be very helpful!
In the top-secret GNAT documentation there is a description of how to use non-default file names. It's a great deal of effort. You will probably give up, use the default names, and put them all in a single directory.
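For reference, a per-unit naming exception looks roughly like this (just a sketch; and, if I remember correctly, gprbuild still wants simple file names to be unique across all of your Source_Dirs, which is part of why it is so much effort):
package Naming is
   -- one exception per compilation unit
   for Spec ("App.Test.File_Utils") use "file_utils.ads";
   for Body ("App.Test.File_Utils") use "file_utils.adb";
end Naming;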
You can also simplify much of the effort by using GPS and letting it build your project file as you add files to your source directories.

Mass Thunderbird folder to Gnus nnfolder conversions

I'm pondering the idea of importing a few thousand Thunderbird folders, each folder containing many emails of course, as a set of Emacs' Gnus mailgroups. Each mailgroup name would be derived from the folder hierarchy. Because of the quantity, the work is going to be fairly tedious, so I would automate this massive import if possible.
Among the available backends, nnfolder seems the most promising in this case. I presume it would be better to populate the mailgroups from within Gnus. Otherwise, I would have to thoroughly understand the nnfolder format, and this might require many iterations before I really get it right. Moreover, as email continues to flow in, iterations may become difficult to properly organize without losing anything.
I guess I have to respool everything, under the constraint that the selected mailgroup is a function of the Thunderbird origin, overriding the standard Gnus selection mechanism. I did some Gnus coding in the past, but since I did not touch Emacs for a dozen years, it is all very rusty. I'm a bit lost about how to approach this task as efficiently and quickly as possible. So my question: how would you handle it? Or is there some clever Gnus hidden corner that I should explore more deeply? :-)
François
P.S. After I wrote this question, I found out that Gnus has a nice helper function towards this goal. The idea is to first copy all Thunderbird folder files into the ~/Mail directory, as they are content-wise, but properly renamed. Once this is done, M-x nnfolder-generate-active-file does it all at once: for each copied folder it edits the contents, leaves a ~ backup, generates NOV data, creates one mailgroup and, of course, adjusts the ~/Mail/active file.
To copy the folders underneath the ~/.thunderbird/LOGIN/Mail/Local Folders/ directory, I wrote a small Python script. It ignores all .msf files and recurses into .sbd directories. The folder path name, relative to Local Folders/, has all its .sbd/ strings turned into periods to produce the mailgroup name, also lowercasing it, turning spaces and underscores into dashes, and handling other special characters appropriately. In particular, non-ASCII characters are not handled properly; nnfolder is confusing UTF-8 and ISO-8859-1 here and there. The script also has to skip msgfilterrules.dat and likely drafts, junk and such things.
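For the curious, the renaming logic boils down to something like the sketch below (a simplified reconstruction, not the actual script; LOGIN stands for the real profile directory, and the special-character handling is omitted):
import os
import re
import shutil

SRC = os.path.expanduser("~/.thunderbird/LOGIN/Mail/Local Folders")
DST = os.path.expanduser("~/Mail")
os.makedirs(DST, exist_ok=True)

def group_name(rel_path):
    # e.g. "Lists.sbd/Emacs Devel" -> "lists.emacs-devel"
    name = rel_path.replace(".sbd/", ".").lower()
    return re.sub(r"[ _]", "-", name)

for dirpath, _, filenames in os.walk(SRC):
    for fname in filenames:
        if fname.endswith(".msf") or fname.lower() == "msgfilterrules.dat":
            continue  # skip index files and filter rules
        full = os.path.join(dirpath, fname)
        rel = os.path.relpath(full, SRC)
        shutil.copy2(full, os.path.join(DST, group_name(rel)))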
I notice two details requiring attention:
Thunderbird itself can be used to compact folders before copying them, otherwise one might unwillingly recover messages which were already deleted.
(setq nnmail-use-long-file-names t) is needed in ~/.emacs prior to the whole operation.
The batch transformation aborted, saying it was not able to decrypt one of the messages. I moved the offending folder out of the way, and then the lengthy operation succeeded.
