Converting a system model into a transition system for model checking (FSM)

Currently I am trying to convert a system prototype into a transition-system model. I have some LTL properties, and I want to verify those properties using the model-checking tool NuSMV. I just need information on how to start modeling by defining atomic propositions and other mathematical aspects.
[Pictorial representation of the model]

A very simple encoding in NuSMV of that transition system would be
MODULE main
VAR
  state : {GETINFO, ACK, SEND};
ASSIGN
  init(state) := GETINFO;
  next(state) := case
    state = GETINFO : SEND;
    state = SEND    : ACK;
    state = ACK     : {GETINFO, SEND};
  esac;
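Since the goal is to verify LTL properties, a property can be declared in the same file with an LTLSPEC declaration. A minimal sketch, where the formula itself is only an assumed example of the intended behavior (every send is eventually acknowledged):

LTLSPEC G (state = SEND -> F state = ACK)

Running NuSMV on the file then reports whether each property holds and prints a counterexample trace if it does not.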
However, I think that the model you provided is a bit too simple to match your problem description, so I invite you to provide additional information about what you intend to do.

Create interface wrapping existing types with pointer-receiver methods

I need to test an app which uses Google Cloud Pubsub, and so must wrap its types pubsub.Client and pubsub.Subscription for testing purposes. However, despite several attempts I can't get an interface around them which compiles.
The definitions of the methods I'm trying to wrap are:
func (s *Subscription) Receive(
    ctx context.Context, f func(context.Context, *Message)) error

func (c *Client) Subscription(id string) *Subscription
Here is the current code. The Receiver interface (wrapper around Subscription) seems to work, but I suspect it may need to change in order to fix SubscriptionMaker, so I've included both.
Note: I've tried several variations of where to reference and dereference pointers, so please don't tell me to change that unless you have an explanation of why your suggested configuration is the correct one or you've personally verified it compiles.
import (
    "context"

    "cloud.google.com/go/pubsub"
)

type Receiver interface {
    Receive(context.Context, func(ctx context.Context, msg *pubsub.Message)) (err error)
}

// Pubsub subscriptions implement Receiver
var _ Receiver = &pubsub.Subscription{}

type SubscriptionMaker interface {
    Subscription(name string) (s Receiver)
}

// Pubsub clients implement SubscriptionMaker
var _ SubscriptionMaker = pubsub.Client{}
Current error message:
common_types.go:21:5: cannot use "cloud.google.com/go/pubsub".Client literal (type "cloud.google.com/go/pubsub".Client) as type SubscriptionMaker in assignment:
    "cloud.google.com/go/pubsub".Client does not implement SubscriptionMaker (wrong type for Subscription method)
        have Subscription(string) *"cloud.google.com/go/pubsub".Subscription
        want Subscription(string) Receiver
First, for most uses, the pstest package is probably a much easier approach for testing pubsub. But of course, your specific question can apply to any library, and the below approach can be useful for many things, not just mocking pubsub.
Your broader goal of using interfaces to mock a library like this is doable. But it is complicated when the library you wish to mock returns concrete types that you cannot mock (usually due to unexported fields). The approach to be taken is much more involved than is often worth it, as there may be easier ways to test your code.
But if you're intent on doing this, the approach you must take is to wrap the entire package in interfaces, not just the specific methods you wish to mock.
You would also need to wrap any types returned by or accepted by your interfaces that you wish to mock. This usually means you also need to modify your production code (not just your test code), so this can sometimes be a deal-breaker for existing code bases.
Where I have usually done this before is when mocking something like the standard library's sql driver, but the same approach can be applied here. In essence, you would need to create a wrapper package for your pubsub library, which you use even in your production code. Again, this can be quite intrusive on existing codebases, but for the sake of illustration, using your defined interfaces:
package mypubsub

import (
    "context"

    "cloud.google.com/go/pubsub"
)

type Receiver interface {
    Receive(context.Context, func(context.Context, *pubsub.Message)) error
}

type SubscriptionMaker interface {
    Subscription(string) Receiver
}
You can then wrap the default implementation, for use in production code:
// defaultClient wraps the default pubsub Client functionality.
type defaultClient struct {
    *pubsub.Client
}

func (d defaultClient) Subscription(name string) Receiver {
    return d.Client.Subscription(name)
}
Naturally, you'd need to expand this package to wrap most or all of the pubsub package you're using. This can be a bit daunting.
But once you've done that, then use your mypubsub package everywhere in your code, instead of directly depending on the pubsub package. And now you can easily swap out a mock implementation anywhere you need for testing.
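As a rough illustration of the payoff, a hand-rolled test double against these interfaces might look like this (a sketch; mockReceiver and mockClient are hypothetical names, and the canned behavior is an assumption):

type mockReceiver struct {
    msgs []*pubsub.Message
}

// Receive delivers the canned messages synchronously, then returns.
func (m *mockReceiver) Receive(ctx context.Context, f func(context.Context, *pubsub.Message)) error {
    for _, msg := range m.msgs {
        f(ctx, msg)
    }
    return nil
}

type mockClient struct {
    recv Receiver
}

// Subscription ignores the name and hands back the prepared Receiver.
func (m *mockClient) Subscription(name string) Receiver {
    return m.recv
}

Any code written against the mypubsub interfaces can then take a *mockClient in tests and the real defaultClient in production.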
It can't be done.
When defining the type signature of a method on an interface, it must match exactly. func (c *Client) Subscription(id string) *Subscription returns a *Subscription, and although a *Subscription is a valid Receiver, that does not count as conforming to the interface method Subscription(string) Receiver. Go requires precise matching for method signatures, not the duck-typing style that it usually uses when deciding whether a type satisfies an interface.
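The practical workaround, as the other answer shows, is a thin adapter whose method has exactly the signature the interface wants; a minimal sketch:

type clientWrapper struct {
    *pubsub.Client
}

// Subscription shadows (*pubsub.Client).Subscription and narrows the
// concrete *pubsub.Subscription return type to the Receiver interface,
// giving an exact signature match.
func (w clientWrapper) Subscription(id string) Receiver {
    return w.Client.Subscription(id)
}

var _ SubscriptionMaker = clientWrapper{}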

Difference between one-way and one-time binding in UI5

What is the difference between one-time and one-way binding in UI5?
Are there any user specific use-cases where I would use each of them?
I could not get much info from the documentation.
One-way
What it does: unidirectional data flow. Changes in the model data (e.g. via setProperty) are continuously propagated to the bound controls in the UI, but changes made in the UI are not written back to the model.
Use-case: one prominent example is the device model (a one-way JSONModel). It shouldn't accept any user inputs that might update the device information accidentally.
We have to set the binding mode to OneWay as the device model is read-only and we want to avoid changing the model accidentally when we bind properties of a control to it. By default, models in OpenUI5 are bidirectional (TwoWay). When the property changes, the bound model value is updated as well.
One-time
What it does: one-time data flow. When the binding object is evaluated, its corresponding model data are read and written to the element properties once, and never again.
Why it exists: it all comes down to the number of change listeners. Fewer listeners means less memory allocation and less stuff to maintain during the runtime. The binding mode OneTime has thus the potential, compared to other binding modes, to optimize performance and memory consumption.
When to use: for static, non-mutating data.
Note: for property bindings, a static value can also be defined via oBindingInfo.value (since 1.61) instead of relying on a model.
Use-case: the prominent example here is the ODataMetaModel. From its API reference:
This model is read-only and thus only supports OneTime binding mode. No events are fired!
For more information, see the documentation topic "Data Binding".
Assigning binding modes in UI5
Specific to a binding:
<Text text="{ path: '...', mode: 'OneTime' }"/>
For all bindings of the model:
myModel.setDefaultBindingMode("OneTime");
Or in the app descriptor in case of an ODataModel:
{
  "sap.ui5": {
    "models": {
      "myODataModel": {
        "dataSource": "...",
        "settings": {
          "defaultBindingMode": "TwoWay"
        },
        "preload": true
      }
    }
  }
}
In Expression Binding, binding modes can be defined with the following syntax:
myProperty="{:= ...}" ⇒ OneTime
myProperty="{= ...}" ⇒ OneWay
Custom formatters in binding infos (e.g. <Text text="{ ..., formatter: '.createText' }">) as well as composing with string literals (e.g. "My Name is {/name}") force the binding to become one-way. Use type instead in order to keep the binding two-way.
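For example, a sketch of a typed binding that stays two-way (the path and control are illustrative):

<Input value="{ path: '/name', type: 'sap.ui.model.type.String' }"/>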

In a Corda transaction, are particular state transitions associated with particular commands?

The commands webinar states (at around 3:10) that "input and output states are always grouped by type and that a command is required for each group." The narrator seems to imply that if a transaction consists of multiple commands then each command will be associated with a distinct subset of the state transitions proposed in the transaction.
However, such a view of things does not seem to be captured in the structure of LedgerTransaction. It consists of completely independent lists of inputs, outputs, and commands. There's nothing denoting an association between particular commands and particular input or output states.
In my contract code I can group states by type, e.g.:
override fun verify(tx: LedgerTransaction) {
    val fooStates = tx.inputsOfType<FooState>()
    val barStates = tx.inputsOfType<BarState>()
But I'm just grouping by choice - I don't have to, and there's nothing tying these groups to particular commands.
So what is the webinar referring to when it says "a command is required for each group"?
The sense behind signatures being associated with commands would be clear if the relationship between commands and state transitions existed as described in the webinar. However, in reality one doesn't sign off on particular transitions on a per-command basis, as the LedgerTransaction class does not capture such relationships.
In the key concepts section on commands one has a coupon command and a pay command and it makes sense that the set of people who have to sign off on the bond state transition may be different to those who need to sign off on the cash state transition. But in the code there's nothing tying a coupon command to the particular states that the signers associated with the command are agreeing to if they sign.
Is this stated requirement that each group must have an associated command just something that the developer should implement in their contract verify logic without being something that one tries to capture in the structure of transactions?
Good question.
You touched on grouping within the contract, and that's correct: it is down to the contract implementation. You just need to extend it further to enforce which parties are required to sign, depending on the command in the transaction.
So, your verify function might look like the below simplified version of the contract within the Option CorDapp sample:
override fun verify(tx: LedgerTransaction) {
    val command = tx.commands.requireSingleCommand<Commands>()
    when (command.value) {
        is Commands.Issue -> {
            requireThat {
                val cashInputs = tx.inputsOfType<Cash.State>()
                val cashOutputs = tx.outputsOfType<Cash.State>()
                "Cash.State inputs are consumed" using (cashInputs.isNotEmpty())
                "Cash.State outputs are created" using (cashOutputs.isNotEmpty())
                val option = tx.outputsOfType<OptionState>().single()
                "The issue command requires the issuer's signature" using (option.issuer.owningKey in command.signers)
            }
        }
        is Commands.Trade -> {
            requireThat {
                val cashInputs = tx.inputsOfType<Cash.State>()
                val cashOutputs = tx.outputsOfType<Cash.State>()
                "Cash.State inputs are consumed" using (cashInputs.isNotEmpty())
                "Cash.State outputs are created" using (cashOutputs.isNotEmpty())
                val inputOption = tx.inputsOfType<OptionState>().single()
                val outputOption = tx.outputsOfType<OptionState>().single()
                "The transfer command requires the old owner's signature" using (inputOption.owner.owningKey in command.signers)
                "The transfer command requires the new owner's signature" using (outputOption.owner.owningKey in command.signers)
            }
        }
        else -> throw IllegalArgumentException("Unknown command.")
    }
}
We first pull the command (or commands) from the transaction and that gives us context.
Based on that, we can then pull the states we're interested in from either the inputs or outputs, e.g. cash/option, and begin to check that our constraints are met.
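As an aside: if you do want the explicit per-group view the webinar alludes to, LedgerTransaction also exposes a groupStates helper that pairs the inputs and outputs of one state type by a key you choose. A rough sketch, assuming OptionState is a LinearState so linearId can serve as the grouping key:

for ((inputs, outputs, linearId) in tx.groupStates(OptionState::class.java) { it.linearId }) {
    // verify this particular option's transition here, independently of the other groups
}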
You can find the full version of the above sample at https://github.com/CaisR3/cordapp-option and the contract code can be found in base\src\main\kotlin\net\corda\option\base\contract\OptionContract.kt

Dealing with DB handles and initialization in functional programming

I have several functions that deal with database interactions (like readModelById, updateModel, findModels, etc.) that I try to use in a functional style.
In OOP, I'd create a class that takes the DB connection parameters in the constructor, creates the database connection, and saves the DB handle in the instance. The functions then would just use the DB handle from "this".
What's the best way in FP to deal with this? I don't want to hand the DB handle around throughout the entire application. I thought about partial application to "bake in" the handle, but doing that one function at a time creates ugly boilerplate.
What's the best practice/design pattern for things like this in FP?
There is a parallel to this in OOP that might suggest the right approach is to take the database resource as a parameter. Consider the DB implementation in OOP using SOLID principles. Due to the Interface Segregation Principle, you would end up with an interface per DB method and at least one implementation class per interface.
// C#
public interface IGetRegistrations
{
    Task<Registration[]> GetRegistrations(DateTime day);
}

public class GetRegistrationsImpl : IGetRegistrations
{
    private readonly DbResource _db;

    public GetRegistrationsImpl(DbResource db)
    {
        _db = db;
    }

    public Task<Registration[]> GetRegistrations(DateTime day)
    {
        ...
    }
}
Then to execute your use case, you pass in only the dependencies you need instead of the whole set of DB operations. (Assume that ISaveRegistration exists and is defined like above).
// C#
public async Task Register(
    IGetRegistrations a,
    ISaveRegistration b,
    RegisterRequest requested)
{
    var registrations = await a.GetRegistrations(requested.Date);
    // examine existing registrations and determine the request is valid;
    // throw an exception if not?
    ...
    await b.SaveRegistration( ... );
}
Somewhere above where this code is called, you have to new up the implementations of these interfaces and provide them with DbResource.
var a = new GetRegistrationsImpl(db);
var b = new SaveRegistrationImpl(db);
...
await Register(a, b, request);
Note: You could use a DI framework here to attempt to avoid some boilerplate. But I find that to be robbing Peter to pay Paul. You pay as much in learning a DI framework and making it behave as you do in wiring the dependencies yourself. And it is another piece of tech new team members have to learn.
In FP, you can do the same thing by simply defining a function which takes the DB resource as a parameter. You can pass functions around directly instead of having to wrap them in classes implementing interfaces.
// F#
let getRegistrations (db: DbResource) (day: DateTime) =
    ...

let saveRegistration (db: DbResource) ... =
    ...
The use case function:
// F#
let register fGet fSave request =
    async {
        let! registrations = fGet request.Date
        // call your business logic here
        ...
        do! fSave ...
    }
Then to call it you might do something like this:
register (getRegistrations db) (saveRegistration db) request
The partial application of db here is analogous to constructor injection. Your "losses" from passing it to multiple functions are minimal compared to the savings of not having to define an interface plus an implementation for each DB operation.
Despite being in a functional-first language, the above is in principle the same as the OO/SOLID way, just with fewer lines of code. To go a step further into the functional realm, you have to work on eliminating side effects from your business logic. Side effects include: reading the current time, random numbers, throwing exceptions, database operations, HTTP API calls, etc.
Since F# does not require you to declare side effects, I designate a border area of the code where side effects should stop. For me, the use-case level (the register function above) is the last place for side effects. In any business logic below that, I work on pushing side effects up to the use case. It is a learning process, so do not be discouraged if it seems impossible at first. Just do what you have to for now and learn as you go.
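To make "pushing side effects up" concrete, here is a tiny sketch; the Registration type is hypothetical:

// F#
type Registration = { Expires: System.DateTime }

// Before: the rule reads the clock itself, so it cannot be tested
// without controlling the system time.
let isExpiredImpure (reg: Registration) =
    reg.Expires < System.DateTime.UtcNow

// After: the time is a parameter; the rule is pure and trivially testable.
let isExpired (now: System.DateTime) (reg: Registration) =
    reg.Expires < now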
I have a post that attempts to set the right expectations on the benefits of FP and how to get them.
I'm going to add a second answer here, taking an entirely different approach; I wrote about it here. It is the same approach used by MVU to isolate decisions from side effects, so it is applicable both to UI (using Elmish) and to the backend.
This is worthwhile if you need to interleave important business logic with side effects, but not if you just need to execute a series of side effects. In the latter case, just use a block of imperative statements, in a task (F# 6 or TaskBuilder) or async block if you need IO.
The pattern
Here are the basic parts.
Types
Model - The state of the workflow. Used to "remember" where we are in the workflow so it can be resumed after side effects.
Effect - Declarative representation of the side effects you want to perform and their required data.
Msg - Represents events that have happened. Primarily, they are the results of side effects. They will resume the workflow.
Functions
update - Makes all the decisions. It takes in its previous state (Model) and a Msg and returns an updated state and new Effects. This is a pure function which should have no side effects.
perform - Turns a declared Effect into a real side effect. For example, saving to a database. Returns a Msg with the result of the side effect.
init - Constructs an initial Model and starting Msg. Using this, a caller gets the data it needs to start the workflow without having to understand the internal details of update.
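To make the shape concrete, here is a rough F# sketch of such a skeleton; the type and function definitions are illustrative and are not the actual Ump API:

// F#
type Email = { To: string; Body: string }

type Model = { Remaining: Email list }

// Declarative descriptions of side effects; no IO happens here.
type Effect =
    | SendEmail of Email
    | ScheduleSend of System.DateTime

// Events that drive the workflow forward.
type Msg =
    | DueItems of Email list * System.DateTime

// update is pure: it only decides what should happen next.
let update (msg: Msg) (model: Model) : Model * Effect list =
    match msg with
    | DueItems (email :: rest, now) ->
        { model with Remaining = rest }, [ SendEmail email; ScheduleSend now ]
    | DueItems ([], _) ->
        model, []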
I jotted down an example for a rate-limited emailer. It includes the implementation I use on the backend to package and run this pattern, called Ump.
The logic can be tested without any instrumentation (no mocks/stubs/fakes/etc). Declare the side effects you expect, run the update function, then check that the output matches with simple equality. From the linked gist:
// example test
let expected = [SendEmail email1; ScheduleSend next]
let _, actual = Ump.test ump initArg [DueItems ([email1; email2], now)]
Assert.IsTrue(expected = actual)
The integrations can be tested by exercising perform.
This pattern takes some getting used to. It reminds me a bit of an Erlang actor or a state machine. But it is helpful when you really need your business logic to be correct and well tested. It also happens to be a properly functional pattern.

Can Code be Protected From Rogue Callers In Ada?

I'm a fairly new Ada programmer. I have read the book by Barnes (twice I might add) and even managed to write a fair terminal program in Ada. My main language is C++ though.
I am currently wondering if there is a way to "protect" subroutine calls in Ada, perhaps in Ada 2012 (of which I know basically nothing). Let me explain what I mean (although in C++ terms).
Suppose you have a class Secret like this:
class Secret
{
private:
    int secret_int;

public:
    void Set_Secret_Value(int i);
};
Now this is the usual stuff: don't expose secret_int, manipulate it only through access functions. However, the problem is that anybody with access to an object of type Secret can manipulate the value, whether that particular code section is supposed to do so or not. So the danger of rogue alteration of secret_int has only been reduced to the danger of anybody altering secret_int through the permitted functions, even in a code section that's not supposed to manipulate it.
To remedy that, I came up with the following construct:
class Secret
{
    friend class Secret_Interface;
private:
    int secret_int;
    void Set_Secret_Value(int i);
    void Super_Secret_Function();
};

class Secret_Interface
{
    friend class Client;
private:
    static void Set_Secret_Value(Secret &rc_secret_object, int i)
    {
        rc_secret_object.Set_Secret_Value(i);
    }
};

class Client
{
    void Some_Function()
    {
        ...
        Secret_Interface::Set_Secret_Value(c_object, some_value);
        ...
    }
};
Now the class Secret_Interface can determine which other classes can use its private functions and, by doing so, indirectly control access to the functions of class Secret that are exposed to Secret_Interface. This way class Secret still has private functions that cannot be called by anybody outside the class, for instance the function Super_Secret_Function().
Well I was wondering if anything of this sort is possible in Ada. Basically my desire is to be able to say:
Code A may only be executed by code B but not by anybody else
Thanks for any help.
Edit:
I have added a diagram here with the kind of program structure I have in mind. It shows that what I mean is the transport of a data structure across a wide area of the software; the definition, creation, and use of a record should happen in code sections that are otherwise unrelated.
I think the key is to realize that, unlike C++ and other languages, Ada's primary top-level unit is the package, and visibility control (i.e. public vs. private) is on a per-package basis, not a per-type (or per-class) basis. I'm not sure I'm saying that correctly, but hopefully things will be explained below.
One of the main purposes of friend in C++ is so that you can write two (or more) closely related classes that both take part in implementing one concept. In that case, it makes sense that the code in one class would be able to have more direct access to the code in another class, since they're working together. I assume that in your C++ example, Secret and Client have that kind of close relationship. If I understand C++ correctly, they do all have to be defined in the same source file; if you say friend class Client, then the Client class has to be defined somewhere later in the same source file (and it can't be defined earlier, because at that point the methods in Secret or Secret_Interface haven't yet been declared).
In Ada, you can simply define the types in the same package.
package P is
   type Secret is tagged private;
   type Client is tagged private;
   -- define public operations for both types
private
   type Secret is tagged record ... end record;
   type Client is tagged record ... end record;
   -- define private operations for either or both types
end P;
Now, the body of P will contain the actual code for the public and private operations of both types. All code in the package body of P has access to those things defined in P's private part, regardless of which type they operate on. And, in fact, all code has access to the full definitions of both types. This means that a procedure that operates on a Client can call a private operation that operates on a Secret, and in fact it can read and write a Secret's record components directly. (And vice versa.) This may seem bizarre to programmers used to the class paradigm used by most other OOP languages, but it works fine in Ada. (In fact, if you don't need Secret to be accessible to anything else besides the implementation of Client, the type and its operations can be defined in the private part of P, or the package body.) This arrangement doesn't violate the principles behind OOP (encapsulation, information hiding), as long as the two types are truly two pieces of the implementation of one coherent concept.
If that isn't what you want, i.e. if Secret and Client aren't that closely related, then I would need to see a larger example to find out just what kind of use case you're trying to implement.
MORE THOUGHTS: After looking over your diagram, I think that the way you're trying to solve the problem is inferior design--an anti-pattern, if you will. When you write a "module" (whatever that means--a class or package, or in some cases two or more closely related classes or packages cooperating with each other), the module defines how other modules may use it--what public operations it provides on its objects, and what those operations do.
But the module (let's call it M1) should work the same way, according to its contract, regardless of what other module calls it, and how. M1 will get a sequence of "messages" instructing it to perform certain tasks or return certain information; M1 should not care where those messages are coming from. In particular, M1 should not be making decisions about the structure of the clients that use it. By having M1 decree that "procedure XYZ can only be called from package ABC", M1 is imposing structural requirements on the clients that use it. This, I believe, causes M1 to be too tightly coupled to the rest of the program. It is not good design.
However, it may make sense for the module that uses M1 to exercise some sort of control like that, internally. Suppose we have a "module" M2 that actually uses a number of packages as part of its implementation. The "main" package in M2 (the one that clients of M2 use to get M2 to perform its task) uses M1 to create a new object, and then passes that object to several other packages that do the work. It seems like a reasonable design goal to find a way that M2 could pass that object to some packages or subprograms without giving them the ability to, say, update the object, but pass it to other packages or subprograms that would have that ability.
There are some solutions that would protect against most accidents. For example:
package M1 is
   type Secret is tagged private;
   procedure Harmless_Operation (X : in out Secret);
   type Secret_With_Updater is new Secret with null record;
   procedure Dangerous_Operation (X : in out Secret_With_Updater);
end M1;
Now, the packages that could take a "Secret" object but should not have the ability to update it would have procedures defined with Secret'Class parameters. M2 would create a Secret_With_Updater object; since this object type is in Secret'Class, it could be passed as a parameter to procedures with Secret'Class parameters. However, those procedures would not be able to call Dangerous_Operation on their parameters; that would not compile.
A package with a Secret'Class parameter could still call the dangerous operation with a type conversion:
procedure P (X : in out Secret'Class) is
begin
   -- ...
   M1.Secret_With_Updater (X).Dangerous_Operation;
   -- ...
end P;
The language can't prevent this, because it can't make Secret_With_Updater visible to some packages but not others (without using a child package hierarchy). But it would be harder to do this accidentally. If you really wish to go further and prevent even this (if you think there will be a programmer whose understanding of good design principles is so poor that they'd be willing to write code like this), then you could go a little further:
package M1 is
   type Secret is tagged private;
   procedure Harmless_Operation (X : in out Secret);
   type Secret_Acc is access all Secret;
   type Secret_With_Updater is tagged private;
   function Get_Secret (X : Secret_With_Updater) return Secret_Acc;
   -- this will be "return X.S"
   procedure Dangerous_Operation (X : in out Secret_With_Updater);
private
   -- ...
   type Secret_With_Updater is tagged record
      S : Secret_Acc;
   end record;
   -- ...
end M1;
Then, to create a Secret, M2 would call something that creates a Secret_With_Updater, which holds an access to a Secret. It would then pass X.Get_Secret to those procedures which are not allowed to call Dangerous_Operation, but X itself to those that are. (You might also be able to declare S : aliased Secret, declare Get_Secret to return access Secret, and implement it with return X.S'Access. This may avoid a potential memory leak, but it may also run into accessibility-check issues. I haven't tried this.)
Anyway, perhaps some of these ideas could help you accomplish what you want without introducing unnecessary coupling by forcing M1 to know about the structure of the application that uses it. It's hard to tell, because your description of the problem, even with the diagram, is still at too abstract a level for me to see what you really want to do.
You could do this by using child packages:
package Hidden is
private
   A : Integer;
   B : Integer;
end Hidden;
and then
package Hidden.Client_A_View is
   function Get_A return Integer;
   procedure Set_A (To : Integer);
end Hidden.Client_A_View;
Then, Client_A can write
with Hidden.Client_A_View;
procedure Client_A is
   Tmp : Integer;
begin
   Tmp := Hidden.Client_A_View.Get_A;
   Hidden.Client_A_View.Set_A (Tmp + 1);
end Client_A;
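For completeness, the body of the child package can reach its parent's private part directly; a minimal sketch of what it might look like:

package body Hidden.Client_A_View is

   function Get_A return Integer is
   begin
      return A;  -- the child body sees Hidden's private part
   end Get_A;

   procedure Set_A (To : Integer) is
   begin
      A := To;
   end Set_A;

end Hidden.Client_A_View;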
Your question is extremely unclear (and all the C++ code doesn't help explain what you need), but if your point is that you want a type to have some publicly accessible operations and some private operations, then that is easily done:
package Example is
   type Instance is private;
   procedure Public_Operation (Item : in out Instance);
private
   procedure Private_Operation (Item : in out Instance);
   type Instance is ... -- whatever you need it to be
end Example;
The procedure Example.Private_Operation is accessible to children of Example. If you want an operation to be purely internal, you declare it only in the package body:
package body Example is
   procedure Internal_Operation (Item : in out Instance);
   ...
end Example;
Well I was wondering if anything of this sort is possible in Ada. Basically my desire is to be able to say:
Code A may only be executed by code B but not by anybody else
If limited to language features, no.
Programmatically, code execution can be protected if the provider must be handed an approved "key" before it will perform its services, and only authorized clients are supplied with such keys.
Devising the nature, generation, and security of such keys is left as an exercise for the reader.
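For instance, combining the key idea with the child-package mechanism from an earlier answer, one possible shape (a sketch only, with illustrative names) is a key type that clients cannot construct themselves:

package Keys is
   -- Unknown discriminants: clients cannot declare a Key object themselves.
   type Key (<>) is limited private;
private
   type Key is limited null record;
end Keys;

-- Only units that "with" this child package can mint keys;
-- its body sees the parent's private part.
package Keys.Minting is
   function Issue return Key;
end Keys.Minting;

-- A service that demands a key from its callers:
with Keys;
package Guarded is
   procedure Protected_Operation (K : Keys.Key);
end Guarded;

Which compilation units are allowed to "with" Keys.Minting then becomes a matter of project structure and code review rather than language enforcement.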
