Turbo Pascal: capture Writeln output using a stream?

I've got a legacy TP5 program. It compiles and runs OK under TP7. I'd like to capture and log some of the write/writeln statements. I can do a global search-and-replace for write and writeln, so I don't mind code changes like that. It does use some formatted output:
write(r:4:2)
so I'd like to keep that.
I don't know any way to capture write/writeln other than writing to a file, then reading each line back and writing it again :~( But it occurred to me that that is very like writing to a stream (introduced in TP 5.5), then streaming copies to multiple outputs.
Has anyone done this before? Is it possible? Is there another way?

AFAIK it is possible, and it is commonly done in FPC and Delphi, TP's successors. The only trouble is that the TextRec type isn't exported by TP, so you have to copy its declaration from the sources (or the documentation) into your own code.
The TextRec has a bunch of procedure variables (like function pointers in C) that you can set to your own functions to process I/O. Setting these variables is what Assign() does.
The remaining problem is finding room to store your state (e.g. the pointer to the stream); IIRC the TP TextRec is tighter than Delphi's, though its 16-byte UserData area exists for exactly this purpose.
Anyway, search for a unit called StreamIO by Peter Below. It is an FPC/Delphi unit that you will have to adapt, but at least it demonstrates the principle. It would be better to simply migrate to something newer anyway. Maybe SWAG has a TP equivalent too.
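For what it's worth, here is a minimal sketch of the technique in TP itself: a text device driver unit that tees everything written to a Text variable to both the screen and a log file. The TextRec layout is the one documented for TP 6/7; the unit name TeeOut and all routine names are my own invention, and TP 5.5's record layout may differ, so treat this as a starting point rather than drop-in code.

{ TeeOut -- sketch of a TP text device driver that copies ("tees")
  everything written to a Text variable to the screen and a log file. }
unit TeeOut;

interface

procedure AssignTee(var F: Text; LogName: String);

implementation

const
  fmOutput = $D7B2;                 { file-mode magic value from the docs }

type
  TextBuf = array[0..127] of Char;
  TextRec = record                  { not exported by TP: copied from docs }
    Handle: Word;
    Mode: Word;
    BufSize: Word;
    Private: Word;
    BufPos: Word;
    BufEnd: Word;
    BufPtr: ^TextBuf;
    OpenFunc: Pointer;
    InOutFunc: Pointer;
    FlushFunc: Pointer;
    CloseFunc: Pointer;
    UserData: array[1..16] of Byte; { 16 bytes of room for driver state }
    Name: array[0..79] of Char;
    Buffer: TextBuf;
  end;

var
  LogFile: Text;                    { module-level state keeps this simple }

{$F+}                               { driver routines must be far }

function TeeInOut(var F: TextRec): Integer;
var
  I: Word;
begin
  if F.BufPos > 0 then
    for I := 0 to F.BufPos - 1 do
    begin
      Write(Output, F.BufPtr^[I]);  { echo to the screen... }
      Write(LogFile, F.BufPtr^[I]); { ...and copy to the log }
    end;
  F.BufPos := 0;
  TeeInOut := 0;                    { 0 = no I/O error }
end;

function TeeOpen(var F: TextRec): Integer;
begin
  TeeOpen := 0;                     { nothing to do: AssignTee opened the log }
end;

function TeeClose(var F: TextRec): Integer;
begin
  Close(LogFile);
  TeeClose := 0;
end;

{$F-}

procedure AssignTee(var F: Text; LogName: String);
begin
  Assign(LogFile, LogName);
  Rewrite(LogFile);
  with TextRec(F) do
  begin
    Mode := fmOutput;               { ready for writing, no Rewrite needed }
    BufSize := SizeOf(Buffer);
    BufPtr := @Buffer;
    BufPos := 0;
    OpenFunc := @TeeOpen;
    InOutFunc := @TeeInOut;
    FlushFunc := @TeeInOut;         { flush after every write/writeln }
    CloseFunc := @TeeClose;
    Name[0] := #0;
  end;
end;

end.

With a global var Log: Text; initialised once via AssignTee(Log, 'RUN.LOG'), the search-and-replace becomes write(...) -> write(Log, ...), and formatted output such as write(Log, r:4:2) keeps working unchanged.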

Related

How do I programmatically write to *standard-input* for evaluation at the REPL?

I type the following in a REPL (Clozure Common Lisp):
(defparameter test 1)
The REPL responds with test.
Now I enter:
(format *standard-input* "(defparameter test 2)")
The REPL outputs (defparameter test 2) followed by nil.
But the value of test remains unchanged at 1.
Why is this? Isn't writing to the variable *standard-input* the same as entering the text at the REPL?
How do I achieve the desired evaluation?
Some context:
I'm making a custom front end for Common Lisp development using sockets. I need to write to standard input because even though I can evaluate code using eval and read, I cannot debug the code on errors.
For instance, entering 1 to unwind the stack and return to the top level is impossible without writing to standard input (as far as I can tell). I have the output parts figured out.
*standard-input* is an input stream, as its name implies. It's a stream you read from, not one you write to. It may be an output stream as well, but if it is then writing to it is not going to inject strings into the REPL.
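You can see the direction of the data flow with a couple of lines in the listener: if you rebind *standard-input* to a string stream, READ happily pulls a form out of it, but only because your own code is doing the reading; nothing is pushed into the REPL. A tiny sketch:

;; reading (not writing) is what *standard-input* is for
(with-input-from-string (*standard-input* "(defparameter test 2)")
  (eval (read)))   ; => TEST, and test is now 2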
I'd suggest looking at SLIME or SLY if you want to understand how to have REPLs and debuggers which interact with things over streams. In particular, SWANK is probably the interesting bit to understand, or its SLY equivalent, SLYNK (or slynk; not sure of the capitalisation). The implementations of these protocols in various Lisps are not entirely trivial, but they already exist: you don't have to write them. Screen-scraping an interface made for humans to interact with is almost always a terrible approach: it's only reasonable when there is no better way, and in this case there is a better way, in fact there are at least two.

Why do we need to compile Progress 4GL programs?

I would like to know why we need to compile Progress 4GL programs. What is really happening behind the scenes? Why do we get a .r file after compiling a program? And when we check the syntax, if it is correct we get a message box saying 'Syntax is correct', so how does it find errors and show those messages? Any explanations welcome and appreciated.
Benefits of compiled r-code include:
Syntax checking
Faster execution (r-code does not need to be recompiled each time it runs)
Security (r-code is not "human readable" and tampering with it will likely be noticed)
Licensing (r-code runtime licenses are much less expensive)
For "how its finding the errors and showing the messages" -- at a high level it is like any compiler. It evaluates the provided source against a syntax tree and lets you know when you violate the rules. Compiler design and construction is a fairly advanced topic that probably isn't going to fit into a simple SO question -- but if you had something more specific that could stand on its own as a question someone might be able to help.
The short answer is that when you compile, you're translating your program into a form the runtime can execute directly. You're asking two different questions here, so let me give you a simple answer to the first: you don't NEED to compile if, for example, you're the only one using the program. But to have your program run optimized (it has already been translated, so there is no compile step at run time) and to guarantee no one is messing with your logic, we compile the code and usually don't allow regular users to access the source.
For the second question, how the syntax checker works, I believe it would be better for you to Google and read some articles about compilers. They're complex, but in a nutshell they take what Progress expects as complete, operational statements and compare that to what you wrote. For example, if you write
FIND FIRST customer WHERE customer.active = yes NO-ERROR.
Progress will check whether customer is a table, whether customer.active is a field in that table, whether it is of the logical type (since you are comparing it to yes), and whether your whole condition reduces to a single true or false value. It goes on to check whether you specified a lock (it defaults to a share-lock if you haven't, as in my example, which is a no-no, by the way), what happens if there are multiple records (since I said FIRST, get just the first one), and finally what happens if the FIND fails. If you look up the FIND statement, there are more options to customize it, and the compiler simply compares your use of the statement to what Progress allows, collecting all the errors it finds along the way. That's why compilers will sometimes give you generic messages: since they don't know what you're trying to do, all they can do is tell you what's basically wrong with what you wrote.
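To make that concrete, here is the same FIND written the way the compiler would rather see it, with an explicit lock and the failure case handled (customer.name is assumed to exist, as in the Progress sports database):

FIND FIRST customer WHERE customer.active = yes NO-LOCK NO-ERROR.
IF AVAILABLE customer THEN
    DISPLAY customer.name.
ELSE
    MESSAGE "No active customer found." VIEW-AS ALERT-BOX.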
Hope this helps you understand.

Can I execute untrusted Common Lisp code in a restricted environment?

Suppose I wanted to take advantage of Common Lisp's ability to read and execute Common Lisp code, so that my program can execute external code written in Lisp, but I don't trust that code, so I don't want it to have access to the full power of Common Lisp. Is it possible to restrict its environment so that it can only see the packages/symbols to which I explicitly give it access, effectively creating a DSL?
To read the code, start by binding *read-eval* to nil (that stops people injecting execution during parsing, using something like #.(do-evil-stuff)). You probably want to do the reading using a custom readtable that disables most (if not all) reader macros, and to do it in a custom, one-off package, importing only the symbols you allow.
Once you've read the user-provided code, you still need to validate that there are no unexpected function/macro references in it. If you have used a custom package, you should be able to confirm that each symbol falls into one of two classes: "belongs to the custom one-off package" (this is user-supplied stuff) or "explicitly allowed from elsewhere" (you need this list anyway to construct the custom package).
Once that's been done, you can then evaluate it.
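A minimal sketch of those three steps, with an invented package name (:sandbox) and a toy import list standing in for your real whitelist (the readtable hardening mentioned above is omitted for brevity):

(defpackage :sandbox
  (:use)                                  ; inherit nothing
  (:import-from :common-lisp #:+ #:- #:* #:if #:let))

(defun read-untrusted (string)
  (let ((*read-eval* nil)                 ; no #.(...) during reading
        (*package* (find-package :sandbox)))
    (read-from-string string)))

(defun allowed-symbols-p (form)
  ;; every symbol must be accessible in :sandbox, i.e. interned there
  ;; by READ itself or explicitly imported into the package
  (cond ((null form) t)
        ((symbolp form)
         (eq (find-symbol (symbol-name form) :sandbox) form))
        ((consp form) (and (allowed-symbols-p (car form))
                           (allowed-symbols-p (cdr form))))
        (t t)))                           ; numbers, strings, characters, ...

;; only when both steps pass:
;; (let ((form (read-untrusted input)))
;;   (when (allowed-symbols-p form) (eval form)))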
However, doing this correctly would take a fair bit of care and you really should have someone else have a look at the code and actively try to break out of the sandbox.
Take a look at the section 'Reader security' in chapter 4 of Let Over Lambda, which discusses this topic in some depth. In particular, you probably want to bind *read-eval* to nil. As for restricting access to the environment: this is generally difficult in Common Lisp, as the language is designed to allow access to most parts of the system in the first place. Maybe you can elaborate the ideas of Let Over Lambda in the direction of whitelisting symbols (in contrast to the blacklisting of macro characters in the linked chapter). I don't think there are any ready-made solutions.

Difference between write() and printf()

I've recently been studying operating systems. I just want to know:
What’s the difference between a system call (like write()) and a standard library function (like printf())?
A system call is a call to a function that is not part of the application but lives inside the kernel. The kernel is a software layer that provides some basic functionality abstracting the hardware for you. Roughly, the kernel is something that turns your hardware into software.
You always, ultimately, use write() to write anything to a peripheral, whatever kind of device you are writing to. write() is designed to write only a sequence of bytes, that's all and nothing more. But since write() is considered too basic (you may want to write an integer in base ten, or a float in scientific notation, etc.), different libraries are provided by different programming environments to make this easier.
For example, the C programming language gives you printf(), which lets you write data in many different formats. So you can understand printf() as a function that converts your data into a formatted sequence of bytes and calls write() to write those bytes to the output. C++ gives you cout; Java gives you System.out.println; etc. Each of these ends in a call to write() (at least on POSIX systems).
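A small POSIX C illustration of that division of labour; both halves below send the same bytes to standard output:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    double r = 3.14159;
    char buf[64];

    /* format in user space ourselves, then issue the system call */
    int n = snprintf(buf, sizeof buf, "r = %.2f\n", r);
    write(STDOUT_FILENO, buf, n);

    /* or let stdio do the formatting (and the buffering) for us */
    printf("r = %.2f\n", r);
    return 0;
}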
One important thing to know is that such a system call is costly! It is not a simple function call, because you are calling something outside your own code, and the system must ensure you are not trying to do nasty things, etc. So it is very common for higher-level print-like functions to have buffering built in, such that write() is not always called: your data are kept in some hidden structure and written out only when it is really needed or necessary (the buffer is full, or you really want to see the result of your print).
This is exactly what happens when you manage your money. If many people give you 5 bucks each, you don't go deposit each bill at the bank! You keep them in your wallet (this is the print) up to the point where it is full or you don't want to keep them anymore. Then you go to the bank and make one big deposit (this is the write). And you know that putting 5 bucks in your wallet is much, much faster than going to the bank and making a deposit. The bank is the kernel/OS.
System calls are implemented by the operating system, and run in kernel mode. Library functions are implemented in user mode, just like application code. Library functions might invoke system calls (e.g. printf eventually calls write), but that depends on what the library function is for (math functions usually don't need to use the kernel).
System calls are for interacting with the OS: write(), for example, hands bytes to the kernel to be written to a file or device.
Standard library functions, by contrast, are part of the program: printf() formats something and prints it to the command line, and only affects the system indirectly, by eventually calling write().
I am writing a small program. At the moment it just reads each line from stdin and prints it to stdout. I can add a call to write() in the loop, and it adds a few characters at the end of each line. But when I use printf() instead, all the extra characters are clustered together and appear all at once, instead of on each line.
It turns out that using printf() means stdout gets buffered. Adding fflush(stdout); after the printf() call fixes the discrepancy in the output.
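The effect is easy to reproduce; assuming stdout is a terminal (line-buffered), this prints "from write" before "from printf" even though the calls are in the opposite order:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("from printf");                     /* no newline: stays in the stdio buffer */
    write(STDOUT_FILENO, "from write\n", 11);  /* bypasses the buffer, appears first */
    fflush(stdout);                            /* now the buffered text appears */
    return 0;
}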
I'd like to mention another point: stdio buffers are maintained in a process's user-space memory, while the write() system call transfers data directly to a kernel buffer. This means that if you fork a process after calling write() and printf(), you may see three pieces of output (subject to line vs. block buffering): the write() output once, and the printf() output twice, since the stdio buffer is duplicated in the child by fork().
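A sketch of that effect; run it with output redirected to a file (so stdout is block-buffered) and "via printf" appears twice while "via write" appears once:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    write(STDOUT_FILENO, "via write\n", 10);  /* goes straight to the kernel */
    printf("via printf\n");                   /* held in the stdio buffer */
    fork();                                   /* child inherits a copy of that buffer */
    return 0;                                 /* parent and child each flush at exit */
}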
printf() is one of the APIs, or interfaces, that the C library exposes to user space.
printf() actually uses the write() system call; write() is what is ultimately responsible for sending the data to the output.

Why does zumero_sync need to be called multiple times?

According to the documentation for zumero_sync:
If a large amount of information needs to be pulled from the server,
this function may need to be called more than once.
In my Android app that uses Zumero that's no problem; I just keep calling zumero_sync until the return value doesn't start with "0;".
However, now I'm trying to write an admin script that also syncs with my server dbfiles. I'd like to use the sqlite3 shell, and have the script pass the SQL to execute via command line arguments. I need to call zumero_sync in a loop (which SQLite doesn't support) to make sure the db is fully synced. If I had to, I could invoke sqlite3 in a loop (reading its output, looking for "0;"), or even write a C++ app to call the SQLite/Zumero functions natively. But it certainly would be easier if a single zumero_sync was enough.
I guess my real question is: could zumero_sync be changed so it completes the sync before returning? If there are cases where the existing behavior is more useful, maybe there could be a parameter for specifying which mode to use?
I see two basic questions here:
(1) Why does zumero_sync() work the way it does?
(2) Can it work differently?
I'll answer (2) first, since it's easier: yes, it could work differently. Rather, we could (and probably will, soon, since you brought this up) implement an additional function, named something like zumero_sync_complete(), which performs [the guts of] zumero_sync() in a loop and returns after the sync is complete.
We didn't implement zumero_sync_complete() because it doesn't add much value. It's a simple loop, so you can darn well write it yourself. :-)
Er, except in scripting environments which don't support loops. Like the sqlite3 shell.
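For completeness, the loop can also live one level up, in whatever drives the sqlite3 shell. A hedged sh sketch mirroring the "0;" check described in the question; the zumero_sync argument list is deliberately elided, and the database name is an example:

#!/bin/sh
# keep invoking sqlite3 until zumero_sync's result no longer starts with "0;"
while :; do
  result=$(sqlite3 mydb.db "SELECT zumero_sync(...);")  # fill in real arguments
  case "$result" in
    '0;'*) ;;   # partial pull: go around again
    *) break ;;
  esac
done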
Answer to (1):
The Zumero sync protocol is designed to give the server the flexibility to return partial results if it wants to do so. And for the sake of reducing load on the server (and increasing its scalability) it often does want to do exactly that.
Given that, one reason to expose this to the client is to increase the client's flexibility as well. As long as we're making multiple round trips, we might as well give the client an opportunity to do something (like, maybe, update a progress bar) in between them.
Another thing a client might want to do in between loop iterations is handle an error.
Or, in the case of a multithreaded client, it might want to deal with changes that happened on the client while the sync is going on.
Which raises the question of how locking should be managed. Do we hold the SQLite write lock during the entire loop, or only when absolutely necessary?
Bottom line: A robust app would probably want to implement the loop itself so that it can make its own decisions and retain full control over things.
But, as you observe, the sqlite3 shell doesn't have loops. And it's not an app. And it doesn't have threads. Or progress bars. So it's a use case where a simpler-and-less-powerful form of zumero_sync() would make sense.