AutoIt: What's the difference between InetGet and InetRead?

I see there are two functions in AutoIt to download files from the internet: InetGet and InetRead.
What's the difference between the two? The only difference I see is that InetGet has more options and is therefore better.

InetGet() supports background downloads (your script continues its work while the file keeps downloading) and returns a handle you can use with InetGetInfo(). InetRead() can do neither.
Read the manual carefully; everything is described there. :)
By the way, AutoIt's documentation is quite good, in my opinion.

There IS a big difference! InetRead() returns the downloaded data directly, so you can assign it to a variable, while InetGet() needs a filename parameter to store the download in. So if you periodically fetch small pieces of data, it may be better to use InetRead() to avoid disk usage.
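To make the contrast concrete, here is a minimal AutoIt sketch of both calls; the URLs and file paths are placeholders:

    #include <InetConstants.au3>

    ; InetRead: returns the downloaded data directly; nothing touches the disk.
    Local $dData = InetRead("https://example.com/data.txt")
    ConsoleWrite(BinaryToString($dData) & @CRLF)

    ; InetGet: writes to a file and can download in the background;
    ; the returned handle lets you poll progress with InetGetInfo().
    Local $hDownload = InetGet("https://example.com/big.zip", @TempDir & "\big.zip", _
            $INET_FORCERELOAD, $INET_DOWNLOADBACKGROUND)
    While Not InetGetInfo($hDownload, $INET_DOWNLOADCOMPLETE)
        Sleep(250)
    WEnd
    InetClose($hDownload)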

Related

Standardized filenames when passing folders between steps in pipeline architecture?

I am using AzureML pipelines, where the interface between pipeline steps is through a folder or file.
When I am passing data into the pipeline, I point directly to a single file. No problem at all. Very useful when passing in configuration files which all live in the same folder on my local computer.
However, when passing data between different steps of the pipeline, I can't provide the next step with a file path. All the steps get is a path to some folder that they can write to. Then that same path is passed to the next step.
The problem comes when the following step is then supposed to load something from the folder.
Which filename is it supposed to try to load?
Approaches I've considered:
Use a standardized filename for everything. The problem is that I want to be able to run the steps locally too, independent of any pipeline, and this makes for a very poor UX for that use case.
Check if the path is to a file; if it isn't, check all the files in the folder. If there is only one file, use it; otherwise throw an exception (see the sketch after this question). This is maybe the most elegant solution from a UX perspective, but it sounds overengineered to me. We also don't structurally share any code between the steps at the moment, so either we will have repetition or we will need to find some way to share code, which is non-trivial.
Allow custom filenames to be passed in optionally, otherwise use a standard filename. This helps with the UX, but often the filenames are supposed to be defined by the configuration files being passed in, so while we could do some bash scripting to get the filename into the command, it feels like a sub-par solution.
Ultimately it feels like none of the solutions I have come up with are any good.
It feels like we are making things more difficult for ourselves in the future if we assume some default filename. E.g., we work with multiple file types, so a default name would have to omit the extension.
But any way to do it without default filenames would also cause maintenance headaches down the line, or incur a substantial upfront cost.
The question is: am I missing something? Any potential traps, better solutions, etc. would be appreciated. It definitely feels like I am somewhat under- and/or overthinking this.
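For reference, a minimal Python sketch of the second approach, combined with the optional filename from the third; the helper name resolve_input is hypothetical, and this is just one way the logic could be shared between steps:

    from pathlib import Path
    from typing import Optional

    def resolve_input(path: str, filename: Optional[str] = None) -> Path:
        """Resolve a pipeline input that may be a file or a folder.

        A file path is used directly (the local/CLI case). For a folder,
        use `filename` when given; otherwise accept a folder containing
        exactly one file, and treat anything else as ambiguous.
        """
        p = Path(path)
        if p.is_file():
            return p
        if filename is not None:
            return p / filename
        files = [f for f in p.iterdir() if f.is_file()]
        if len(files) == 1:
            return files[0]
        raise ValueError(
            f"Ambiguous input folder {p}: expected exactly one file, found {len(files)}"
        )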

Unix touch command usage

I know you can use touch to create a new empty file.
I just learned that touch can also be used to update the access and modification times of a file. I don't quite know in what situations, and why, you would need to update the access and modification times of a file, i.e. what the usefulness of this particular function is.
Thanks!
Some utilities depend on the timestamps of files.
For example, make uses timestamps to decide whether it needs to do something (usually build), based on the timestamps of the source code and of the outputs (executables, object files, ...).
By touching a source file and then running make, you can force a rebuild.
In addition, touch has a -d option that can fake the modification time.
If you know what you are doing, this lets you avoid long build times due to unnecessary recompilations.
For example, when adding a declaration to a common header file that does not change any existing API, you can fake the header's real modification time and bypass the Makefile's dependencies.
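Under the hood, touch just sets the file's timestamps; a minimal Python sketch of the same backdating trick, with a hypothetical header path include/api.h:

    import os
    import time

    # Backdate both atime and mtime by one hour, roughly equivalent to
    # `touch -d '1 hour ago' include/api.h`, so make sees the header as
    # older than the object files that depend on it.
    # (include/api.h is a placeholder path.)
    past = time.time() - 3600
    os.utime("include/api.h", times=(past, past))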

Under what conditions on Unix can gtk_file_chooser_get_filename() return NULL signifying a non-local filename?

From the documentation for gtk_file_chooser_get_filename():
The currently selected filename, or NULL if no file is selected, or the selected file can't be represented with a local filename. Free with g_free().
Is there at least one situation where the last condition (the selected file can't be represented with a local filename) is true on a Unix system (Linux, the various BSDs, etc.)? I tried reading through the source code but got lost/confused. I'd like to know so I can determine whether I need to handle it in some special way; I don't need to know every possibility for this.
Thanks.
I haven't yet read through the source either, but I would guess that gtk_file_chooser_get_filename() essentially returns g_file_get_path (gtk_file_chooser_get_file (...)). Probably the only case in which you would need to care about the filename being NULL is when your file chooser is allowed to pick files from a network share, for example. It's probably not something you need to worry about if you set the local-only property on your file chooser.
However, it's probably good practice to use gtk_file_chooser_get_file() anyway, since you will transparently be able to handle non-local files if you have the proper GVFS modules installed.
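A PyGObject (GTK 3) sketch of the difference between the two calls; the dialog setup is kept minimal:

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    dialog = Gtk.FileChooserDialog(title="Open", action=Gtk.FileChooserAction.OPEN)
    dialog.add_buttons("_Cancel", Gtk.ResponseType.CANCEL, "_Open", Gtk.ResponseType.OK)
    dialog.set_local_only(False)  # allow GVFS locations such as sftp:// shares

    if dialog.run() == Gtk.ResponseType.OK:
        filename = dialog.get_filename()  # None when the selection is not local
        gfile = dialog.get_file()         # a GFile, valid for local and remote files
        print(filename, gfile.get_uri())
    dialog.destroy()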

Can I append or overwrite some bytes in an existing object in OpenStack Swift?

I need to append some bytes to an existing object stored in OpenStack Swift, say a log file object to which I constantly append new log lines. Is this possible?
Moreover, can I change (overwrite) some bytes (specified with an offset and length) of an existing object?
I believe ZeroVM (zerovm.org) would be perfect for doing this.
Disclaimer: I work for Rackspace, who owns ZeroVM. Opinions are mine and mine alone.
tl;dr: There's no append support currently in Swift.
There's a blueprint for Swift append support: https://blueprints.launchpad.net/swift/+spec/object-append. It doesn't look very active.
user2195538 is correct. Using ZeroVM + Swift (via the ZeroCloud middleware for Swift) you could get a performance boost on large-ish objects by sending deltas to a ZeroVM app and processing them in place. Of course you still have to read/update/write the file, but you can do it in place; you don't need to pipe the entire file over the network, which could be costly for large files.
Disclaimer: I also work for Rackspace, and I work on ZeroVM for my day job.
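Since there is no server-side append, the usual client-side fallback is read-modify-write; a minimal Python sketch using python-swiftclient, with placeholder credentials and hypothetical container/object names logs/app.log:

    import swiftclient

    # Placeholder credentials for a v1.0 auth endpoint.
    conn = swiftclient.Connection(
        authurl="https://swift.example.com/auth/v1.0",
        user="account:user",
        key="secret",
    )

    # Read-modify-write: fetch the whole object, append locally,
    # then upload the result, replacing the original object.
    headers, body = conn.get_object("logs", "app.log")
    conn.put_object("logs", "app.log", contents=body + b"new log line\n")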

How to properly debug OCaml code?

May I know how an experienced OCaml developer debugs his code?
What I am doing is just using Printf.printf. It is quite troublesome, as I have to comment all the calls out when I need clean output.
How can I better control this debugging process? Is there a special annotation to switch this logging on or off?
Thanks!
You can use Bolt for this purpose. It's a syntax extension.
By the way, OCaml has a real debugger.
There is a feature of the OCaml debugger that you may not be aware of, called time travel, which is not commonly found in debuggers for stateful programming. See section 16.4.4 of the manual. Basically, since all of the information from step to step is kept on the stack, by saving the changes associated with each step during processing, one can move back and forth through the changes in time to see the values at any given step. Think of it as running the program once, logging all of the values at each step into a data store, and then indexing into that data store by step number to see the values at that step.
You can also use ocp-ppx-debug, which will add a printf call with the right source location instead of you adding them manually.
https://github.com/OCamlPro-Couderc/ocp-ppx-debug
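If you want to stay with plain Printf, a common hand-rolled alternative is a runtime debug flag checked by a small wrapper; a minimal OCaml sketch (the names debug and dprintf are ours, not from any library):

    (* Global debug switch; here driven by the DEBUG environment variable. *)
    let debug = ref false

    (* Printf-style logging to stderr that is silent unless !debug is set.
       Printf.ifprintf consumes the format arguments without printing. *)
    let dprintf fmt =
      if !debug then Printf.eprintf fmt
      else Printf.ifprintf stderr fmt

    let () =
      debug := (try Sys.getenv "DEBUG" = "1" with Not_found -> false);
      dprintf "x = %d\n" 42  (* printed only when DEBUG=1 *)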
