We have been using the usual code in VB6 to read a complete file into a string and then parse it. The files are ANSI text, but encoded using whatever code page the user was on at the time (we have Chinese and English users, for example). This is the code:
Open FileName For Binary As nFileUnit
sContents = StrConv(InputB(LOF(nFileUnit), nFileUnit), vbUnicode)
However, we have discovered this is VERY slow when reading a file from a server running Unix/Linux, particularly when the owner of the file is not the same as the process doing the reading.
I have rewritten the above using Get and discovered it is much faster and does not suffer from any issues with file ownership. I appreciate that this might be solved by reconfiguring the server somehow, but since, even without that issue, the Get method is still much faster than InputB, I'd like to replace my existing code with Get.
I wonder if someone could tell me whether this will really do the same thing. In particular, is it correctly doing the ANSI to Unicode conversion, and will this always be true? My testing suggests the following replacement code does the same thing, but faster:
Open FileName For Binary As nFileUnit
sContents = String(LOF(nFileUnit), " ")
Get #nFileUnit, , sContents
I also realise I could use a byte array, but again my tests suggest the above is simpler and works. So does the buffer work correctly? (If you believe the online help for Get, it talks of characters being returned; clearly this would cause problems when reading an ANSI file written on the Chinese code page, with 2-byte Chinese characters in it.)
The following might be of interest, because the InputB approach is commonly given as the method to read a complete file, yet it is much slower. Examples:
Reading a 380 KB file across the network from the Unix server:
InputB (file owned) = 0.875 sec
InputB (not owned) = 72.8 sec
Get (either) = 0.0156 sec
Reading a 9 MB file across the network from the Unix server:
InputB (file owned) = 19.65 sec
Get (either) = 0.42 sec
Thanks
Jonathan
InputB() is CVar(InputB$()), and is known to be horribly slow. My suspicion is that InputB$() reads the bytes and converts them to Unicode using the current codepage via some stock logic for reading text from disk, then does another conversion back to ANSI using the current codepage.
You might be far ahead to use ADODB.Stream.LoadFromFile() to load complete ANSI text files. You can set .Type = adTypeText and .Charset = the appropriate ANSI encoding as required, then read Unicode back out of it via .ReadText(x), where x can be a number of bytes, or adReadAll or adReadLine. For line reading you can set .LineSeparator to adCR, adCRLF, or adLF as required.
Many Charset values are supported: KOI8 for Cyrillic, Big5 for Chinese, etc.
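For illustration only, here is a minimal sketch of that call sequence, driven from Python through the same ADODB.Stream COM object (via pywin32) rather than VB6; the calls and constants are the ones named above, while the file path and the gb2312 charset are assumptions for a file written on a Chinese code page:

import win32com.client   # pywin32; dispatches the same ADODB.Stream object VB6 would use

AD_TYPE_TEXT = 2   # adTypeText
AD_READ_ALL = -1   # adReadAll

stream = win32com.client.Dispatch("ADODB.Stream")
stream.Type = AD_TYPE_TEXT
stream.Charset = "gb2312"                    # assumption: file written on a Chinese code page
stream.Open()
stream.LoadFromFile(r"C:\data\sample.txt")   # hypothetical path to the ANSI text file
contents = stream.ReadText(AD_READ_ALL)      # Unicode string, converted from the charset
stream.Close()
print(len(contents))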
Is there a more efficient way than
int fileSize = size(readFileLines(fileLoc));
to get the total number of lines in a file? I presume this code has to read the entire file first, which could become costly for huge files.
I have looked into IO and loc to see whether some of this information might be stored alongside the file.
This is the way, unless you'd like to call wc -l via util::ShellExec 😁
Apart from streaming the file and saving some memory, counting lines is always linear in the size of the file, so you won't win much time.
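For illustration, a minimal streaming version of the count in Python (not Rascal): it reads fixed-size chunks so memory stays constant, but the work is still linear in the file size, as noted above. The file name is hypothetical:

def count_lines(path, chunk_size=1 << 20):
    """Count newlines by scanning the file in fixed-size binary chunks."""
    count = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            count += chunk.count(b"\n")
    return count

print(count_lines("some_big_file.txt"))  # hypothetical file name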
Hello, this is my first question on Stack Overflow; I'm not sure about the right forum and topic.
While participating in an Open Mainframe initiative, using Visual Studio Code and PuTTY for Unix, I developed a sample program in COBOL showing international sayings (German, English, French, Spanish, Latin for now). It works fine in batch via JCL writing to a file, and when called from REXX. In the file I can't see the special characters for the non-English sayings, but I had a lucky punch with a twin program in PL/1 (which does the same thing and shows the special characters in REXX).
Now my question: I also tried to call the program via mvscmd from a Unix bash script. It works so far, but doesn't show me the special characters. OK, as a last resort I could call mvscmd from Python. Or, alternatively, I can transfer the file from MVS to Unix (for some reason it then converts automatically and I can see my special characters).
Where is the right place to handle this? COBOL? (As I said, for some reason PL/1 can do it; I only use a standard PUT EDIT in PL/1 versus DISPLAY in COBOL.) Converting the SYSPRINT/SYSOUT?
Can any specialist help me?
Hello and sorry for the late reply. Well, the whole code is a little bit much, but I guess my problem is the following: mvscmd coded directly in the shell script.
#!/bin/sh
parm='Z08800.FYD.DATA'
#echo "arg1=>"$1"<"
[ ! -z "$1" ] && parm=$parm","$1
#echo "arg2=>"$2"<"
[ ! -z "$2" ] && parm=$parm","$2
#echo "parm=>"$parm"<"
mvscmd --pgm=saycob --args=$parm \
--steplib='z08800.fyd.load' \
--sysin=dummy \
--sysout=*
I have some more shell script, but this is the main part. I send the output directly to SYSOUT (that's the COBOL DISPLAY; I can use a fixed string or my saying read from an MVS file). When using the PL/1 program the last file is SYSPRINT instead, because PL/1 writes it via PUT EDIT.
I assume my code page is simply wrong, but I don't know how to fix it. I tried some settings in the shell, but LANG remains set to C (?). By the way, this Unix seems to be quite old, and I only have the chance to use it until August.
My main interest is to use the program on the mainframe, in JCL and/or REXX.
But they also gave us a chance with this embedded Unix (?), so I wanted to try it:
direct SYSOUT from the COBOL program to the Unix terminal.
I meant that when executing the program on the mainframe and then viewing the result file in the ISPF editor (old stuff) via PF3, I can see the German, Spanish, and French special characters. So it seems they are there, produced by COBOL and PL/1.
When transferring the MVS file (a kind of PDS) to Unix by mvscmd, it is also fine (special characters show up), but that's not what I wanted.
I tried to use Python instead of a plain shell script, but it's going even worse. I cannot direct the SYSOUT to the terminal; everything Python is able to call runs on the mainframe against the MVS file system, so I have to transfer the output afterwards. That is too much overhead in my eyes when I call, say, 7 sayings and want them displayed in the Unix terminal.
Here is my REXX that does the trick:
/* rexx */
ARG PARM1 PARM2
PARAMETER = '/Z08800.FYD.DATA'
If Length(PARM1) > 0
Then PARAMETER = PARAMETER","PARM1
If Length(PARM2) > 0
Then PARAMETER = PARAMETER","PARM2
PARAMETER = "'"PARAMETER"'"
Address TSO "Alloc File(sysprint) Dataset(*)"
Address TSO "Alloc File(sysin) Dummy"
Address TSO "Call fyd.load(saypli)" PARAMETER
Address TSO "Free File(sysprint)"
Address TSO "Free File(sysin)"
This is now the other load module, the PL/1 one, but the COBOL one does the same with SYSOUT instead of SYSPRINT.
The output is shown in my REXX terminal, which is also reached via ISPF and then option 3.4 in the edit panel. The program has no manual input but reads a file. And yes, the sayings are not allocated here; I read them by dynamic allocation, but it doesn't matter where my strings come from before they reach the DISPLAY / PUT EDIT.
And this is now the JCL. OK, it works a little differently; it stores the output in a PDS member.
//SAYCOB JOB
//COBCLG EXEC IGYWCLG,
// PARM.GO='Z08800.FYD.DATA'
// SET MBR=SAYCOB
//COBOL.SYSIN DD DSN=&SYSUID..FYD.SOURCE(&MBR),DISP=SHR
//LKED.SYSLMOD DD DSN=&SYSUID..FYD.LOAD(&MBR),DISP=SHR
//GO.SYSOUT DD SYSOUT=*
//*-------------------------------------------------------------
//*
//*-------------------------------------------------------------
//SAYCOB EXEC PGM=&MBR,PARM='Z08800.FYD.DATA,001,007'
//STEPLIB DD DSN=&SYSUID..FYD.LOAD,DISP=SHR
//SYSOUT DD DSN=&SYSUID..FYD.OUTPUT(&MBR),DISP=SHR
//*-------------------------------------------------------------
//LIST EXEC PGM=LINE80,PARM='/80'
//STEPLIB DD DSN=&SYSUID..FYD.LOAD,DISP=SHR
//SYSIN DD DSN=&SYSUID..FYD.OUTPUT(&MBR),DISP=SHR
//SYSPRINT DD SYSOUT=*
//
Here in the parameter I pass the library with my sayings, and then I allocate it from PL/1 or COBOL. I can of course show the programs, but it's a little bit much, about 200 lines... The problem is not MVS, I guess, but the Unix code page.
I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM: it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the required files.
In one of my first tries I made a very simple model, but since I wanted to check it very thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by this I mean I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam using 4 processors in parallel with:
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of error.
The problem is that reconstructPar works perfectly, but when I then try to view the results with paraFoam this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that this can happen when some files are empty that are not supposed to be, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always starts with a header entry named FoamFile. An example from a typical system/controlDict file can be seen below:
FoamFile
{
version 2.0;
format ascii;
class dictionary;
location "system";
object controlDict;
}
When the dictionary header is read, if this FoamFile entry is absent, OpenFOAM stops and raises the error message that you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably that it supports OpenFOAM's abstraction mechanisms (each file declares its own version, format, class, and location), which would be difficult otherwise.
As mentioned in the comments, adding the header entry almost always solves this problem.
Basically I am working on a Python project where I download and index files from the SEC EDGAR database. The problem, however, is that when using the requests module, it takes a very long time to save the text in a variable (between ~130 and 170 seconds for one file).
The file has roughly 16 million characters, and I wanted to see if there was any way to easily lower the time it takes to retrieve the text. Example:
import requests
url ="https://www.sec.gov/Archives/edgar/data/0001652044/000165204417000008/goog10-kq42016.htm"
r = requests.get(url, stream=True)
print(r.text)
Thanks!
What I found is in the code for r.text, specifically when no encoding was given (r.encoding is None). The time spent detecting the encoding was 20 seconds; I was able to skip it by defining the encoding.
...
r.encoding = 'utf-8'
...
Additional details
In my case, my request was not returning an encoding type. The response was 256 KB in size, and r.apparent_encoding was taking 20 seconds.
Looking into the text property function: it tests whether an encoding is set. If it is None, it calls the apparent_encoding function, which scans the text to autodetect the encoding scheme.
On a long string this takes time. By defining the encoding of the response (as described above), you skip the detection.
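Put together, a minimal sketch of the fix (utf-8 is only an assumption here; use whatever charset the document actually is):

import requests

url = ("https://www.sec.gov/Archives/edgar/data/0001652044/"
       "000165204417000008/goog10-kq42016.htm")

r = requests.get(url)
# Set the encoding explicitly before touching r.text. If r.encoding is None,
# requests falls back to r.apparent_encoding, which scans the whole body to
# guess the charset -- that scan is what costs the ~20 seconds described above.
r.encoding = "utf-8"   # assumption: the filing really is UTF-8/ASCII text
text = r.text
print(len(text))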
Validate that this is your issue
In your example above:
from datetime import datetime
import requests
url = "https://www.sec.gov/Archives/edgar/data/0001652044/000165204417000008/goog10-kq42016.htm"
r = requests.get(url, stream=True)
print(r.encoding)
print(datetime.now())
enc = r.apparent_encoding
print(enc)
print(datetime.now())
print(r.text)
print(datetime.now())
r.encoding = enc
print(r.text)
print(datetime.now())
Of course the output may get lost in the printing, so I recommend you run the above in an interactive shell; it may become more apparent where you are losing the time, even without printing datetime.now().
From @martijn-pieters:
Decoding and printing 15MB of data to your console is often slower than loading data from a network connection. Don't print all that data. Just write it straight to a file.
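A minimal sketch of that suggestion: stream the raw bytes straight to a file with iter_content, so neither decoding nor console printing happens. The local file name and chunk size are arbitrary choices:

import requests

url = ("https://www.sec.gov/Archives/edgar/data/0001652044/"
       "000165204417000008/goog10-kq42016.htm")

r = requests.get(url, stream=True)
with open("goog10-kq42016.htm", "wb") as f:        # local file name is just an example
    for chunk in r.iter_content(chunk_size=64 * 1024):
        f.write(chunk)                             # raw bytes: no decoding, no printing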
I have seen console apps print colors, and apps such as ffmpeg print text over itself instead of on a new line. How do I print over an existing line? I want to display fps in my console app at either the very top or the very bottom, and have regular printfs go there and scroll normally.
I need this for Windows, but this is meant to be cross-platform, so I will eventually have Linux and Mac implementations.
There are two simple possibilities which work on Linux as well as Windows, but only for one line:
printf("\b"); will move the cursor back by one character, so you can count how many characters you want to backspace and fire this in a loop, or, if you know you only ever write n characters, do it like printf("\b\b\b\b\b\b\b\b\b\b");
printf("text to be overwritten by next printf\r"); will return the cursor to the beginning of the line, so the next printf will overwrite it. Make sure to write a string of the same length or longer so you overwrite the old one entirely.
If you want to rewrite several lines, there is nothing as portable as ncurses; there are libraries for it on practically every operating system, and you don't have to take care of the ANSI differences.
edit: added a link to the ncurses Wikipedia page, which gives a great overview and introduction, as well as a list of links and maybe a translation into your preferred language
Check out ncurses. It has bindings for most scripting languages.
You can use '\r' instead of '\n'.
The ASCII character number 8 (a.k.a. Ctrl-H, BS, or Backspace) lets you back up one character. ASCII character number 13 (a.k.a. Ctrl-M, CR, or Carriage Return) returns the cursor to the beginning of the line.
If you are working in C try putchar(8); and putchar(13);
The magic of colors, cursor positioning, blinking, and so on lies in ANSI escape codes. Any text console capable of handling ANSI codes can use them simply by printing them to the console (e.g. by means of echo in a bash script or the printf() function in C).
Unix terminals support ANSI escape sequences, and the Windows world used to support them back in the old MS-DOS days, but multibyte console support put an end to that. However, there are other options available on Windows beyond just printing ANSI sequences. Moreover, if you have Cygwin installed on your Windows machine, ANSI codes work just as well as on any Unix terminal.
Many people mention the ncurses library, which is the de facto standard for any GUI-like text-based application. What this library does is hide all the terminal differences (Windows/Unix flavours) so that the same information is represented as identically as possible across all platforms, though from my own experience I can tell you this is not always true (e.g. typical text window frames change because the special characters are not available under all character encodings). The downside of using ncurses is that it is a complete API, and it is much harder to get started with than simply writing out some ANSI escape sequences for simple things such as changing the font color, clearing the screen, or moving the cursor to an arbitrary position.
For the sake of completeness, I paste an example of an ANSI sequence under Linux that changes the prompt to blue and shows the time:
PS1="\[\033[34m\][\$(date +%H%M)][\u#\h:\w]$ "
You can use ncurses.
The ncurses package is a subroutine library for terminal-independent screen-painting and input-event handling which presents a high-level screen model to the programmer, hiding differences between terminal types and doing automatic optimization of output to change one screenful of text into another.
Depending on the platform you are developing on, there's probably a more powerful API you could use, rather than old ASCII control codes.
e.g. If you are working on Win32 you can actually manipulate the console screen buffer directly.
A good place to start might be here
http://msdn.microsoft.com/en-us/library/ms683171(VS.85).aspx
I have been looking for similar functions/API which would allow me to access the console as something other than a stream of text for other platforms. Haven't found anything yet, but then again, I haven't been looking that hard.
Hope it helps.