Is it possible for a server to provide the same file to 2 people, one who understands only UTF-8 and the other, only ISO-8859? - wordpress

INTRO:
I'm in a bind because, when uploading an inventory feed to Amazon in 2021, their system still doesn't understand UTF-8 encoding.
Here we have a file, in a WordPress installation, serving as the image for a product.
Example url : https://wordpresssite.com/uploads/Café-à-la-crème.jpg
WordPress displays it fine.
Amazon reads a bunch of gibberish, can't find the file, and gives an error.
Can we leave the file name on the source server as-is and yet do something in cPanel, or in the Excel file that lists this URL, so that Amazon can also read it?
Is this ultimately as simple as telling Excel to encode that column differently before uploading?
Thank you in advance!
UPDATE : What I am trying now is to export the Excel file to CSV and then run it through line by line with PHP, using a combination of tricks and hoping to do a passable job of it. From what I see, there are many ways that "sorta" work, but nothing is sure.
UPDATE 2 : I realize that this doesn't solve my problem, because if Amazon changes the file name, turning an "é" into an "e", then it won't find the image either, so I'll have to go through all the images I'm using and find the ones with accents.
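For illustration, a minimal sketch of the kind of line-by-line pass I mean (the column index and file names are placeholders, and iconv's //TRANSLIT behaviour depends on the locale):

    <?php
    // Sketch only: read the exported CSV and strip accents from the image-URL column.
    setlocale(LC_CTYPE, 'en_US.UTF-8');            // //TRANSLIT needs a UTF-8 aware locale
    $in  = fopen('feed.csv', 'r');                 // placeholder file names
    $out = fopen('feed-ascii.csv', 'w');
    while (($row = fgetcsv($in)) !== false) {
        // Assume column 5 holds the image URL: é -> e, à -> a, è -> e, ...
        $row[5] = iconv('UTF-8', 'ASCII//TRANSLIT//IGNORE', $row[5]);
        fputcsv($out, $row);
    }
    fclose($in);
    fclose($out);

Even so, as noted above, the files on the server would have to be renamed to the same accent-free names, or Amazon still won't find them.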
QUESTION ABOUT PROCEDURE : I haven't been able to quite understand the way things work. I thought originally that this is about trying to get help when stuck. I have explained the problem and code isn't necessary. If I'm wrong, please tell me how it changes THIS situation? I'm using Excel, WordPress and I have to lose the UTF-8 accented characters that seem to cause Amazon's systems such grief (no judgement to Amazon, except that this resistance to UTF-8 is giving me brain shudders at the moment).
MORE INFO: If this helps, I'm writing in English but certain art products have a lot of French and some German in their names. I thought my example sufficient to illustrate what I was up against.
My problem is not how to convert the code but how to put the steps together to do what I need. It's because this whole process is not a simple iconv vs utf8_decode() in PHP that it's extra stressful. Once I get the big picture sorted, the smaller steps are written about in many places where I could find more specific details if I needed to.
I'm not snarking here, but it seems that this kind of comment is just kicking someone when they are down. You are not the first to make such a suggestion over the years but again, I am curious how I could have explained any more than I have already — in a way that pertains to my actual problem.
Thanks for your response.

That URI is not properly encoded as per RFC 3986 (see also Wikipedia: percent/URL encoding). You cannot expect a server to blindly assume a requested URI to be UTF-8 encoded, but you can expect every server to support percent encoding:
https://wordpresssite.com/uploads/Caf%C3%A9-%C3%A0-la-cr%C3%A8me.jpg
In PHP this can be achieved through rawurlencode(); in JavaScript it would be encodeURI().
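For example (using the URL from the question, and encoding only the file-name segment, since rawurlencode() would also escape the slashes in the path):

    <?php
    $base = 'https://wordpresssite.com/uploads/';
    $file = 'Café-à-la-crème.jpg';                 // stored in UTF-8
    echo $base . rawurlencode($file), PHP_EOL;
    // https://wordpresssite.com/uploads/Caf%C3%A9-%C3%A0-la-cr%C3%A8me.jpg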
Not sure what you want with Excel and CSV, but from what I understood it is unrelated to your actual problem.

Related

VSCode : mvbasic extension on editing Unidata code with MV marks in code, ie CHAR(253), CHAR(254)

I have searched for a setting within the mvbasic extension within VSCode but I may have hit a dead end. I am new to using VSCode with the rocket mvbasic extension and still in the learning process, so please bear with me.
Our development for the most part has always been directly on the server, using the editor within it to code and develop on a Unix/AIX platform with Unidata. Some of our code has array assignments with CHAR(253)/CHAR(254) characters within them. See the link to the image that shows how it's done. Now, I didn't write this code; the original software developer did this many, many years ago, and we just aren't going to go and change it all.
How code looks on actual server
The issue is that when pulling the code into VSCode to edit, the extension changes it; I uploaded it back without paying attention, it went into production incorrectly, and that created a few bugs.
ALIST="H�V�P�R�M�D"
How code looks in VSCode
How code looks after uploaded back to server from VSCode
Easy to fix, no biggie, but now to my question.
Does anyone else have this issue, or have a direction to point me in? Maybe I need to create a setting to keep the characters in the correct ASCII format so that this doesn't happen again by mistake.
VSCode defaults to the sane choice for character encoding in 2022: UTF-8, but sometimes you have to deal with legacy stuff.
https://code.visualstudio.com/docs/editor/codebasics#_file-encoding-support
If you click on the UTF-8 in the bottom right corner you can choose "Reopen with Encoding":
After that, you can select a different encoding. I chose DOS (CP437) as a guess; literal MV characters are then displayed as superscript 2 (²), and I can save to the server and confirm those characters remain as #VM after a round trip (though in my terminal emulator they appear as }, which is useful).
You can edit preferences and set "files.encoding": "cp437". One other thing that can be helpful, if your programs don't have a standard extension (like .bas), as most don't, is to set the default language mode to basic so most of what you're editing will identify as MVbasic; a quick Ctrl+K M switches to any other mode if you're just pasting in something else like SQL.
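For reference, the encoding entry in settings.json looks like this (anything beyond files.encoding, such as a language association for extension-less programs, depends on which MV extension you use, so I'm not showing a specific one):

    {
        // Open and save files as CP437 instead of the UTF-8 default
        "files.encoding": "cp437"
    }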
Some useful links - the Rocket forums are helpful and the folks there are always super nice
https://community.rocketsoftware.com/forums/multivalue?CommunityKey=521bce2e-71d5-4d32-b560-dfa95e950eb5
The MV Extensions community is a good group and has always been helpful when I've had issues. I've made some small contributions - they're very open. I prefer their extension, but honestly haven't done a deep comparison.
https://github.com/mvextensions

Bulk upload of Microsoft Word files to WordPress pages

I have been asked to upload 200 Microsoft Word documents — many of them containing lengthy, complex math problems or scientific notation — into a WordPress setting. Each Word file would become a separate WordPress post.
I would clearly prefer not to cut and paste each file one by one into a post and then save it. Does anyone know of a way to automate the process while ensuring the accuracy of the translation, or at least minimizing the number of issues we might find when converting from Word to WordPress? Or am I dreaming the impossible dream?
Thanks for any input you can offer.
Sounds like an interesting problem. I have an idea that might be worth exploring. There are a number of free or shareware tools that can convert Word docs to HTML.
If you can manage to convert them into decently clean markup with one of those tools, I would recommend using the HTML Import 2 WordPress plugin. It can take a batch of HTML files and create Posts / Pages out of them.
It's a two step process, but I bet it'll work. (And certainly be faster than copy/paste 200 times).
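For the first (conversion) step, assuming LibreOffice is installed (any comparable converter works), a whole folder can be converted in one go from the command line:

    soffice --headless --convert-to html --outdir html-out *.docx

The resulting HTML files are what you would then feed to HTML Import 2; the math and scientific notation is the part most likely to need manual cleanup afterwards.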
Hope that helps, have fun!
Well, I got a solution which works for me; it's a bit manual but still saves a lot of time.
Here are the steps.
Connect your blog to MS Word 2007/2013.
Make sure remote publishing is enabled in WP.
Copy all posts into one Word document, or use merge to make one single DOC.
Now set the default posting category in WP and save it.
Now, from MS Word, copy each post and publish them one by one.
Tips:
Make a shortcut key for publishing to WP.
Use Ctrl+C to copy the text before publishing.

Is there any way to download all Google Webfonts in all formats?

First of all, I am well aware of this question, and I have a strong feeling that it was closed because it was asked by a Google critic.
My sub-question is: am I right to assume that the processing described here is done after the TTF creation, meaning it would not be possible to just batch-convert them all together? If it is possible, please tell me how (on Linux).
Even if it were, I would prefer the actual files Google serves rather than having web services or tools convert them, because Google has probably done this the absolute best way possible!
I have a feeling that there is something out there, somewhere, because it may just need a script that changes the browser's User-Agent (which is what Google keys the served format on), requests the CSS from Google, downloads the referenced files, and renames them based on the 'local' font name in the CSS. In addition, some web scraping to get the font names and the possible variations (bold/normal/...).
This is why I am trying this again. In addition to that: I am a web programmer and I need this for my localhost web development, so this is a perfectly valid question for Stack Overflow in my mind, and I am sure many have the desire to get their hands on all the fonts in an easy way. So please at least migrate it somewhere if you think it's not a valid question!
Extract the list of available fonts from https://fonts.google.com/metadata/fonts. Then use goog-webfont-dl to download all the formats of each font. I used the command cat ~/fonts.txt | while read line; do goog-webfont-dl -a -f "$line"; done to download each font in the list. Here is a gist with all the fonts up to 2016-07-12.

Protect.exe for AutoLISP code protection

I am developing an architectural LISP-based package for a member of the IntelliCAD consortium. Per recommendations I have found on websites, I have used the Kelvinator to deformat and disguise some of the code. Now I am attempting to use Protect.exe to encrypt the code. The exe seemed to work until I tried to use a folder name in the output file path, thus:
protect es.lsp L kelvinated\protected\es.lsp
First of all, can I do this? Will protect.exe work like this, or do the input and output file have to be in the same folder?
Also, one time I tried this and I got a "stack overflow" error. Therefore, I am here.
Kelvinator/protect et al. are pretty old utilities; do you know the last time they were updated? Subtext: they may expect old-school 8.3 file/folder names.
As for "will this work?", I cannot say, as I use different schemes to protect my work when writing lisp for others (vlx/fas, bricscad's encryptor, my own loader / obfuscators ...).
A stack overflow in this context suggests a recursion error, perhaps when it tries to reconcile the pathing you're providing.
Have you tried to use the DOS short path? Putting the path in quotes? Using forward slashes? Using double backslashes?
What happens if you pass "/?" (and alternates) on the command line, does it provide any help?
Finally, if it refuses to process the files unless they share the same directory, you could always front-end it with a batch file that does the housekeeping for you.
Michael.

Reading a COBOL DAT file

I have been given a set of COBOL DAT, IDX and KEY files and I need to read the data in them and export it into Access, XLS, CSV, etc. I do not know the version, vendor of the COBOL code as I only have the windows executable that created the files.
I have tried Easysoft and Parkway ODBC drivers but I have not been successful in reading the data from the files.
I do not have access to the source code as the company that was distributing this product shut down.
I have successfully read some of the dat files using http://www.cobolproducts.com/datafile just now which I came to know through another forum. Most probably I will work with them to help me read the rest of the files that I am having an issue with.
A few possibilities.
1/ See if you can find the names of the people that worked for the company. They may be helpful.
2/ Open the DAT file in a text editor. The data may be decodable from that. If the basic format can be discerned, quick'n'dirty code can be written to extract it (see the sketch below).
3/ Open up the executable in an editor, there may be strings in there that indicate which compiler was used, then you can search for info on its file formats. If it's a DOS application, there's a good chance it was either Microsoft or Fujitsu COBOL.
4/ Consider placing job requests on work sites like elance or rentacoder; I don't think there's a cost if the work can't be done successfully.
5/ Hire someone to examine it and advise on the likelihood of recovery.
6/ Get a screen dump of the record contents for every active record and re-construct it from that.
Some of these are pretty hard so your mileage may vary.
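If you go the quick'n'dirty route from point 2, a rough dump like this (PHP here only because it's handy; the record length and file name are pure guesswork you adjust until the rows line up) makes the layout much easier to see:

    <?php
    // Sketch: dump a DAT file as fixed-length records, text and hex side by side.
    const RECORD_LEN = 128;                      // assumption - adjust until rows align
    $fh = fopen('CUSTOMER.DAT', 'rb');           // placeholder file name
    while (($rec = fread($fh, RECORD_LEN)) !== false && strlen($rec) > 0) {
        // Non-printable bytes (COMP fields, markers) shown as '.'
        echo preg_replace('/[^\x20-\x7E]/', '.', $rec), PHP_EOL;
        echo chunk_split(bin2hex($rec), 2, ' '), PHP_EOL, PHP_EOL;
    }
    fclose($fh);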
Good luck.
I have read COBOL DAT files only with the FD. When I do not have the FD, I open the file in a text editor, try to guess the columns, and try again until I have it working. The big problem with this approach is when the DAT file has COMP columns, which can be any kind of COMP type, but with a little patience I could get this done.
I had tried Parkway ODBC, but without success.
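If one of the guessed columns turns out to be packed decimal (COMP-3), a rough decoder is short; this sketch assumes the usual packed layout (two digits per byte, sign in the final low nibble) and made-up sample bytes:

    <?php
    // Sketch: decode a COMP-3 (packed decimal) field from a fixed-width record.
    function unpackComp3(string $bytes, int $scale = 0): float
    {
        $digits = '';
        $sign   = 1;
        $len    = strlen($bytes);
        for ($i = 0; $i < $len; $i++) {
            $byte    = ord($bytes[$i]);
            $digits .= (string)($byte >> 4);            // high nibble = digit
            if ($i < $len - 1) {
                $digits .= (string)($byte & 0x0F);      // low nibble = digit
            } elseif (($byte & 0x0F) === 0x0D) {
                $sign = -1;                             // 0x0D = negative, 0x0C/0x0F = positive
            }
        }
        return $sign * (int)$digits / (10 ** $scale);
    }

    // Example: bytes 12 34 5C with two implied decimals => 123.45
    echo unpackComp3("\x12\x34\x5C", 2), PHP_EOL;

Binary COMP fields are a different story again (plain integers, often big-endian), so each one has to be checked against values you already know.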
For anyone going through this journey, I found this on SourceForge: Cobol and RPG data reader and converter
http://sourceforge.net/projects/cobol2j/
I'm about to try it; it sounds kind of promising.
