How can I tell whether my process has a user interface? - axapta

When running a class that can be used either interactively or silently in batch, I want to display an hourglass only when in interactive mode.
I found the function xGlobal::clientKind() (see below), but I'm not sure it is sufficient (can't batches also run on the client?).
if (xGlobal::clientKind() == ClientType::Client)
    startLengthyOperation();

// here do the process

if (xGlobal::clientKind() == ClientType::Client)
    endLengthyOperation();

Do not bother to test the client kind when using startLengthyOperation; the method does a sufficient test itself.
Testing should be like this:
if (clientKind() == ClientType::Client)
...
Don't use xGlobal::clientKind; call clientKind() without qualification.
The ClientType enum has four values, matching what you see in "Online Users".
A batch can be run interactively from Basic/Periodic/Batch, but that is rarely used.
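A minimal X++ sketch of the resulting run method, given that startLengthyOperation/endLengthyOperation already check the client kind themselves:
// Sketch: the lengthy-operation helpers are safe to call unconditionally,
// because they already check whether a client is present.
void run()
{
    startLengthyOperation();    // no-op when not running interactively
    // ... do the process ...
    endLengthyOperation();
}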

enforce-guard fails in Zelcore and XWallet

Whenever I call a function that has (enforce-guard some-guard) from X-Wallet or Zelcore, it always fails with the error Keyset failure (keys-all).
I have no issues when doing this from Chainweaver.
How do I fix this?
This is an issue if you are also providing capabilities with your request.
To fix this, you will need to put enforce-guard within a capability too.
So you will need to do something like
(defcap VERIFY_GUARD (some-guard:guard)
  (enforce-guard some-guard)
)
And wherever you would call enforce-guard, you will then need to do
(with-capability (VERIFY_GUARD some-guard)
; Guarded code here
)
Why does this happen?
Chainweaver allows you to select unrestricted signing keys, which provides a key/guard for enforce-guard to work with.
However, X-Wallet and Zelcore don't provide this if capabilities are present on the request (otherwise they do).
It is probably better practice to put enforce-guard inside capabilities anyway and use require-capability in places where you expect the guard to pass.
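A rough Pact sketch of that pattern (the function names below are assumptions, not part of the original contract):
; The capability wraps the guard check.
(defcap VERIFY_GUARD (some-guard:guard)
  (enforce-guard some-guard))

; Entry point: grants the capability around the protected step.
(defun do-protected (some-guard:guard)
  (with-capability (VERIFY_GUARD some-guard)
    (protected-step some-guard)))

; Internal step: only callable while VERIFY_GUARD has been granted.
(defun protected-step (some-guard:guard)
  (require-capability (VERIFY_GUARD some-guard))
  "guarded code here")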

BizTalk 2013 R2 - Decision Shape not following correct logic path

Issue with an old Biz app (not designed or developed by myself).
Its orchestration receives a particular message, which is then fed into a Decision shape. If the logic below applies (copied directly from the shape's first branch expression), it should go that route.
msg_inputCanonical.CRUD == "D" && msg_inputCanonical.DbTable == "Staff"
However, using the Orchestration Debugger, I can see it following the Else branch and eventually hitting a Terminate shape.
I've checked msg_inputCanonical to confirm the values being passed through (extracted below from the tracked message part), and I can see they match the string condition in accordance with the mapping (CRUD = ChOp):
<DbTable>Staff</DbTable>
<ChOp>D</ChOp>
There's nothing else I can see that's influencing this re-route, so can anyone think of any quirks that might be causing it?
Note: I've amended the WCF-SQL stored procedure that generates msg_inputCanonical, as prior to this it wasn't trimming the CRUD/DbTable values and had been leaving trailing whitespace in them.
There is also a map that uses ltrim/rtrim functoids on the ChOp property, but again, I can't see what harm trimming an already-trimmed field would do.
I have also tried replicating the logic in a dev environment, and it works as expected going down the correct branch when I'm passing the message through.

Can multiple requests update a single environment variable in Paw?

I have a variable named primary_address_id which can be set or updated via several API requests. For example, I may call AddAddress and specify that the new address should be the primary, or I can call MakePrimaryAddress to set an existing address as the primary.
I'm coming from Postman where I have tests defined for each of these API endpoints to update primary_address_id -- simple. But I can't find a way to do this in Paw; it seems I have to set the value to the response of just a single request. Am I missing something obvious? Or is this feature planned for a future release?
A workaround is to set the value of primary_address_id to the response from GetPrimaryAddress, but that means if I'm adding or updating an address I have to make a second call just to update my environment (which I may forget to do). If I could trigger GetPrimaryAddress to run after the Add/Update/List/etc endpoints that would be an acceptable workaround, but I shouldn't need to manually make two separate requests to accomplish this.
It sounds like you will need to make two requests, but you can create groups of requests that will execute in sequence from one command.
Right-click the request list and click "New Group"; within that group you can build a sequence of requests that will update your desired environment variable each time.
Create a new group of requests
To run a group of requests, click on the group name (in this case "Address") and then click "Send Requests".
Execute group of requests in sequence
Hope this helps.

How do I have my Bot respond with arguments?

So I've built a Telegram bot, which can receive the following commands:
/list
/info 123
This works great, as I can catch /info and pass the additional arguments as ints. But, sadly, the Telegram clients don't see /info 123 as a complete command, but just the /info part. Is there a way to make it recognize the entirety of the command as the command?
I've tried Markdown-ing it: [/info 123](/info 123), but no joy. Is this possible?
I've reached out to #BotSupport with the same question, and they responded swiftly with the following answer:
Hi, at the moment it is not possible to highlight parameters of a command. In any case, you may be able to find a workaround if you use custom keyboards ;)
— #BotSupport
Custom keyboards may be an option for someone, but not for me. The solution I've gone for is to give the command as /info123. As the bot receives all / commands, I check if the received command starts with info, and if so, I remove the info part. I convert the remaining string/int to arguments, and pass that along to the relevant command.
If you mean to pass the 123 as an argument to your command info, and if you happen to use python-telegram-bot, then here's how you do it:
dispatcher.add_handler(CommandHandler('hello', SayHello, pass_args=True))
According to the documentation: pass_args Determines whether the handler should be passed the arguments passed to the command as a keyword argument called args. It will contain a list of strings, which is the text following the command split on single or consecutive whitespace characters. Default is False.
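A minimal sketch of what that handler might look like (the body of SayHello is an assumption; with pass_args=True the extra args parameter receives the list of strings):
def SayHello(bot, update, args):
    # For "/hello 123 456", args == ['123', '456'].
    update.message.reply_text("You said: " + " ".join(args))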
You can use RegexHandler() to do this.
Here is an example:
from telegram.ext import Updater, RegexHandler

def info(bot, update):
    # Strip the "/info_" prefix to get the numeric argument as a string.
    id = update.message.text.replace('/info_', '')
    update.message.reply_text(id, parse_mode='Markdown')

def main():
    updater = Updater(TOKEN)
    updater.dispatcher.add_handler(RegexHandler(r'^(/info_[\d]+)$', info))
    updater.start_polling()
Usage
The command /info_120 will return 120
and /info_007 will return 007
UPDATE
For newer versions, you may use this method instead:
MessageHandler(filters.Regex(r'^(/info_[\d]+)$'), info)
To get the arguments of a command you don't even need to use pass_args; as Moein said, you can simply get them from context.args (see the GitHub page). So you can pass as many arguments as you want, and you will get a list of strings. Here is an example from GitHub:
def start_callback(update, context):
    user_says = " ".join(context.args)
    update.message.reply_text("You said: " + user_says)
...
dispatcher.add_handler(CommandHandler("start", start_callback))
ForceReply
Upon receiving a message with this object, Telegram clients will display a reply interface to the user (act as if the user has selected the bot's message and tapped 'Reply'). This can be extremely useful if you want to create user-friendly step-by-step interfaces without having to sacrifice privacy mode.
A simple shot:
In this case, a user should send a valid number with the /audio command (e.g. /audio 3); if they forget it, we can inform them and force them to do so.
source:
https://core.telegram.org/bots/api#forcereply
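A minimal python-telegram-bot sketch of that idea (the /audio handler below is an assumption, using the same (update, context) callback style as the examples above):
from telegram import ForceReply
from telegram.ext import CommandHandler

def audio(update, context):
    # If the user forgot the number, ask for it and show a forced reply interface.
    if not context.args:
        update.message.reply_text(
            "Which audio number? Please reply with a number.",
            reply_markup=ForceReply(selective=True),
        )
        return
    update.message.reply_text("Requested audio #" + context.args[0])

# dispatcher.add_handler(CommandHandler("audio", audio))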
This is a fairly rudimentary way of creating kwargs from user input.
Unfortunately, it does require the user to be aware of the fields that can be used as parameters, but if you can provide an informative response when the user doesn't provide any detectable kwarg-style messages, then you could probably make a better experience.
As I say, it is an extremely rudimentary idea, and would probably be achieved faster with the regex filters available; it would also be more reliable when checking input from users of the "pesky" variety.
The script relies on a || delimiter preceding the keyword arguments and, as shown, will trim any extra characters like newlines and spaces.
You can remove the extra check for commit, as this is provided in order to tell the bot that you want to save your input to the database explicitly.
def parse_kwargs(update):
    commit = False
    kwargs = {}
    if update.message:
        # Everything after the first '||' is treated as comma-separated key=value pairs.
        for args in update.message.text.split('||')[1:]:
            for kw_pair in args.split(','):
                key, value = kw_pair.split('=')
                if key.strip() != 'commit':
                    kwargs[key.strip()] = value.strip()
                elif key.strip() == 'commit' and value.strip().lower() == 'true':
                    commit = True
    return kwargs, commit
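A hypothetical usage sketch (the handler name and the example message are assumptions): a message like "add staff ||name=Alice, age=30, commit=true" would yield kwargs = {'name': 'Alice', 'age': '30'} and commit = True.
def handle_message(update, context):
    # Wire parse_kwargs into an ordinary text-message handler.
    kwargs, commit = parse_kwargs(update)
    if commit:
        update.message.reply_text("Saving: " + str(kwargs))
    else:
        update.message.reply_text("Parsed (not saved): " + str(kwargs))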

File locks in R

The short version
How would I go about blocking access to a file until a specific function that involves both reading from and writing to that file has returned?
The use case
I often want to create some sort of central registry and there might be more than one R process involved in reading from and writing to that registry (in kind of a "poor man's parallelization" setting where different processes run independently from each other except with respect to the registry access).
I would not like to depend on any DBMS such as SQLite, PostgreSQL, MongoDB, etc. early on in the development process. And even though I might later use a DBMS, a filesystem-based solution might still be a handy fallback option. Thus I'm curious how I could realize it with base R functionality (ideally).
I'm aware that having a lot of reads and writes to the file system in a parallel setting is not very efficient compared to DBMS solutions.
I'm running on MS Windows 8.1 (64 Bit)
What I'd like to get a deeper understanding of
What exactly happens when two or more R processes try to write to or read from a file at the same time? Does the OS figure out the access order automatically, and does the process that came in second wait, or does it trigger an error because the file access is blocked by the first process? How could I prevent the second process from returning with an error and instead have it just wait until it's its turn?
Shared workspace of processes
Besides the rredis Package: are there any other options for shared memory on MS Windows?
Illustration
Path to registry file:
path_registry <- file.path(tempdir(), "registry.rdata")
Example function that registers events:
registerEvent <- function(
  id=gsub("-| |:", "", Sys.time()),
  values,
  path_registry
) {
  if (!file.exists(path_registry)) {
    registry <- new.env()
    save(registry, file=path_registry)
  } else {
    load(path_registry)
  }
  message("Simulated additional runtime between reading and writing (5 seconds)")
  Sys.sleep(5)
  if (!exists(id, envir=registry, inherits=FALSE)) {
    assign(id, values, registry)
    save(registry, file=path_registry)
    message(sprintf("Registering with ID %s", id))
    out <- TRUE
  } else {
    message(sprintf("ID %s already registered", id))
    out <- FALSE
  }
  out
}
Example content that is registered:
x <- new.env()
x$a <- TRUE
x$b <- letters[1:5]
Note that the content usually is "nested", i.e. an RDBMS would not really be "useful" anyway, or at least would involve some normalization steps before writing to the DB. That's why I prefer environments (unique variable IDs and pass-by-reference are possible) over lists and, if one does make the step to a true DBMS, I would rather turn to NoSQL approaches such as MongoDB.
Registration cycle:
The actual calls might be spread over different processes, so there is a possibility of concurrent access attempts.
I want other processes/calls to "wait" until a registerEvent read-write cycle is finished before starting their own read-write cycle (without triggering errors).
registerEvent(values=list(x_1=x, x_2=x), path_registry=path_registry)
registerEvent(values=list(x_1=x, x_2=x), path_registry=path_registry)
registerEvent(id="abcd", values=list(x_1=x, x_2=x),
path_registry=path_registry)
registerEvent(id="abcd", values=list(x_1=x, x_2=x),
path_registry=path_registry)
Check registry content:
load(path_registry)
ls(registry)
See filelock R package, available since 2018. It is cross-platform. I am using it on Windows and have not found a single problem.
Make sure to read the documentation.
?filelock::lock
Although the docs suggest leaving the lock file in place, I have had no problems removing it on function exit in a multi-process environment:
on.exit({filelock::unlock(lock); file.remove(path.lock)})
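A minimal sketch of how this could wrap the registerEvent() cycle from the question (registerEventLocked and the ".lock" file name are assumptions, not part of the filelock API):
library(filelock)

# Hypothetical wrapper: acquire an exclusive lock next to the registry file,
# run the read-write cycle, then release and remove the lock on exit.
registerEventLocked <- function(..., path_registry) {
  path_lock <- paste0(path_registry, ".lock")
  # lock() blocks until the lock becomes available (timeout = Inf),
  # so a second process simply waits instead of failing.
  lock <- filelock::lock(path_lock, exclusive = TRUE, timeout = Inf)
  on.exit({filelock::unlock(lock); file.remove(path_lock)})
  registerEvent(..., path_registry = path_registry)
}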
