Cannot resolve "coin.transfer" - Pact

I'm learning the Pact language, and I'm looking at a contract already developed and available on the mainnet.
In this contract there is:
(coin.transfer owner ADMIN_ADDRESS (* (get-price) amount))
There is no table named coin created in the contract, and the compiler gives me the error:
Cannot resolve "coin.transfer"

The coin contract exists on the blockchain.
I'm guessing you're trying to call coin.transfer locally in your REPL. To do this you will need to:
Load the coin contract and any dependencies (such as the fungible interfaces) into your REPL first
Create any accounts you want to interact with (such as the owner and ADMIN_ADDRESS accounts)
Then you should be able to interact with the coin contract as you would on the blockchain. A sketch of that REPL setup is shown below.
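For reference, here is a minimal REPL sketch of that setup. The file names, keyset/key names, account names and amounts are all assumptions - adjust them to the coin version you have locally (newer versions also need additional interfaces such as fungible-xchain-v1):

;; Load coin and its dependencies into the REPL
(begin-tx)
(env-data { "upgrade": false })     ; some coin versions read this to decide whether to create their tables
(load "fungible-v2.pact")           ; the fungible interface that coin implements
(load "coin.pact")                  ; a local copy of the coin contract
;; depending on the coin version you may instead need:
;; (create-table coin.coin-table) (create-table coin.allocation-table)
(commit-tx)

;; Create and fund the accounts the contract expects (owner and ADMIN_ADDRESS)
(begin-tx)
(env-data { "owner-ks": { "keys": ["owner-key"], "pred": "keys-all" },
            "admin-ks": { "keys": ["admin-key"], "pred": "keys-all" } })
(test-capability (coin.COINBASE))
(coin.coinbase "owner" (read-keyset "owner-ks") 1000.0)
(coin.create-account "ADMIN_ADDRESS" (read-keyset "admin-ks"))
(commit-tx)

;; Now calls like coin.transfer resolve, provided the TRANSFER capability is signed for
(begin-tx)
(env-sigs [{ "key": "owner-key",
             "caps": [(coin.TRANSFER "owner" "ADMIN_ADDRESS" 10.0)] }])
(coin.transfer "owner" "ADMIN_ADDRESS" 10.0)
(commit-tx)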

Creating sequential workflows using an ELT tool (Fivetran/Hevo), dbt and a reverse ETL tool (Hightouch)

I'm working for a startup and am setting up our analytics tech stack from scratch. As a result of limited resources we're focusing on using 3rd party tools rather than building custom pipelines.
Our stack is as follows:
ELT tool: either Fivetran or Hevo
Data warehouse: BigQuery
Transformations: dbt cloud
Reverse ETL: Hightouch (if we go with Fivetran; Hevo has built-in reverse ETL)
BI Tool: Tableau
The problem I'm having is:
With either Fivetran or Hevo there's a break in the workflow below, whereby we have to switch tools, and there's no integration within the tools themselves to trigger jobs sequentially based on the completion of the previous job.
Use case (workflow): load data into the warehouse -> transform using dbt -> reverse ETL data back out of the warehouse into a tool like Mailchimp for marketing purposes (e.g. a list of user IDs who haven't performed certain actions and to whom we therefore want to send a prompt email; this list is produced via a dbt job which runs daily)
Here's how these workflows would look in the respective tools (E = Extract, L = Load, T = Transform)
Hevo: E+L (Hevo) -> break in workflow -> T: dbt job (unable to be triggered within the Hevo UI) -> break in workflow -> reverse E+L: can be done within the Hevo UI but can't be triggered by a dbt job
Fivetran: E+L (Fivetran) -> T: dbt job (can be triggered within the Fivetran UI) -> break in workflow -> reverse E+L: Fivetran partners with a company called Hightouch, but there's no way of triggering the Hightouch job based on the completion of the Fivetran/dbt job.
We can of course just sync these up on a time-based schedule, but that means if a previous job fails the subsequent jobs still run, incurring unnecessary cost. It would also be good to be able to re-trigger the whole workflow from the last break point once it has been debugged.
From reading online I think something like Apache Airflow could be used for this type of use case, but that's all I've got thus far.
Thanks in advance.
You're looking for a data orchestrator. Airflow is the most popular choice, but Dagster and Prefect are newer challengers with some very nice features that are built specifically for managing data pipelines (vs. Airflow, which was built for task pipelines that don't necessarily pass data).
All 3 of these tools are open source, but orchestrators can get complex very quickly, and unless you're comfortable deploying kubernetes and managing complex infrastructure you may want to consider a hosted (paid) solution. (Hosted Airflow is under the brand name Astronomer).
Because of this complexity, you should ask yourself if you really need an orchestrator today, or if you can wait to implement one. There are hacky/brittle ways to coordinate these tools (e.g., cron, GitHub Actions, having downstream tools poll for fresh data, etc.), and at a startup's scale (one-person data team) you may actually be able to move much faster with a hacky solution for some time. Does it really impact your users if there is a 1-hour delay between loading data and transforming it? How much value is added to the business by closing that gap vs. spending your time modeling more data or building more reporting? Realistically for a single person new to the space, you're probably looking at weeks of effort until an orchestrator is adding value; only you will know if there is an ROI on that investment.
I use Dagster for orchestrating multiple dbt projects or dbt models together with other data pipeline processes (e.g. database initialization, PySpark, etc.).
Here is a more detailed description and demo code:
Three dbt models (bronze, silver, gold) that are executed in sequence
Dagster orchestration code
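Since the original demo code isn't reproduced here, below is a minimal sketch of what that orchestration can look like, assuming three dbt models literally named bronze, silver and gold and that the job is launched from inside the dbt project directory (both are assumptions, not details from this answer). It uses plain Dagster ops that shell out to the dbt CLI rather than the dagster-dbt integration:

import subprocess
from dagster import In, Nothing, job, op

def _dbt_run(select: str) -> None:
    # Shell out to the dbt CLI; check=True fails the op (and the run) if dbt errors.
    subprocess.run(["dbt", "run", "--select", select], check=True)

@op
def bronze() -> Nothing:
    _dbt_run("bronze")

@op(ins={"after_bronze": In(Nothing)})
def silver() -> Nothing:
    _dbt_run("silver")

@op(ins={"after_silver": In(Nothing)})
def gold():
    _dbt_run("gold")

@job
def bronze_silver_gold():
    # The Nothing dependencies enforce strict bronze -> silver -> gold ordering,
    # and a failure in any op stops the downstream ops from running.
    gold(after_silver=silver(after_bronze=bronze()))

The dagster-dbt integration can model each dbt model as a Dagster asset instead; the plain-op version above just keeps the sketch short.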
You could try the following workflow. You'd need a couple of additional tools, but it shouldn't require any custom engineering effort on orchestration.
E+L (Fivetran) -> T: use Shipyard to trigger a dbt Cloud job -> Reverse E+L: trigger a Hightouch or Census sync on completion of the dbt Cloud job
This should run your entire pipeline in a single flow.

Corda notary prevention of double spending, how to check it?

I need to check how the notary prevents double spending in the Obligation CorDapp. I started the web server UI on the localhost ports, performed multiple transactions, and when I checked the notary's log I found this:
[WARN ] 2020-06-24T08:29:33,484Z [Notary request queue processor] transactions.PersistentUniquenessProvider. - Unable to notarise: One or more input states or referenced states have already been used as input states in other transactions. Conflicting state count: 1, consumption details:
7CF1BCA8EDF25F0602BBEDF8AD41FD60336F65EAC09C5326478A4CB7CD620579(0) -> StateConsumptionDetails(hashOfTransactionId=46552C5CE153712B65585A75C4D165CD4A05304564C8797ACEF317DCD925B72E, type=INPUT_STATE).
To find out if any of the conflicting transactions have been generated by this node you can use the hashLookup Corda shell command. [errorCode=1g4005y, moreInformationAt=https://errors.corda.net/OS/4.5-RC02/1g4005y]
net.corda.core.internal.notary.NotaryInternalException: Unable to notarise: One or more input states or referenced states have already been used as input states in other transactions. Conflicting state count: 1, consumption details:
7CF1BCA8EDF25F0602BBEDF8AD41FD60336F65EAC09C5326478A4CB7CD620579(0) -> StateConsumptionDetails(hashOfTransactionId=46552C5CE153712B65585A75C4D165CD4A05304564C8797ACEF317DCD925B72E, type=INPUT_STATE).
To find out if any of the conflicting transactions have been generated by this node you can use the hashLookup Corda shell command.
I performed hashLookup on the invalid txId and found this:
hashLookup 46552C5CE153712B65585A75C4D165CD4A05304564C8797ACEF317DCD925B72E
Found a matching transaction with Id: A86E3ECE4EC12A487E413E2BDAB9D88BFEBCB418FA0224189DE0C72BBBD34B12
I believe this is how the notary has stopped the double spending, but I am unable to recreate that test. Can someone tell me what possible input transaction has led to this error? In other words, what test case leads to a double spend that is stopped by the notary?
A notary is a network service that provides uniqueness consensus by attesting that, for a given transaction, it has not already signed other transactions that consume any of the proposed transaction's input states.
In other words, the notary keeps track of all the input states (it only stores their hashes, not the real states) used in transactions, so when someone tries to use these already-spent inputs, the notary rejects the transaction, hence preventing the double spend.
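The test case that reproduces your log is simply two transactions that consume the same input state. Below is a rough Kotlin sketch using Corda's MockNetwork; IssueObligation and TransferObligation are placeholders for whatever issue/transfer flows your Obligation CorDapp exposes (their constructor arguments are elided), and the package name passed to findCordapp is also an assumption. The important detail is that the second transaction must be built over the already-consumed state (e.g. by passing the old StateRef or linearId explicitly) rather than over whatever is currently in the vault:

import net.corda.core.flows.NotaryException
import net.corda.core.utilities.getOrThrow
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.StartedMockNode
import net.corda.testing.node.TestCordapp
import org.junit.After
import org.junit.Assert.fail
import org.junit.Before
import org.junit.Test

class DoubleSpendTest {
    private lateinit var network: MockNetwork
    private lateinit var alice: StartedMockNode
    private lateinit var bob: StartedMockNode

    @Before
    fun setup() {
        network = MockNetwork(MockNetworkParameters(
            cordappsForAllNodes = listOf(TestCordapp.findCordapp("net.corda.examples.obligation"))))
        alice = network.createPartyNode()
        bob = network.createPartyNode()
        network.runNetwork()
    }

    @After
    fun tearDown() = network.stopNodes()

    @Test
    fun `notary rejects a second spend of the same input state`() {
        // 1. Issue an obligation so there is a state to spend.
        val issue = alice.startFlow(IssueObligation.Initiator(/* amount, lender, ... */))
        network.runNetwork()
        val issuedTx = issue.getOrThrow()  // extract the output StateRef / linearId from this and pass it to BOTH transfers below

        // 2. Spend it once - this notarises successfully.
        val firstSpend = alice.startFlow(TransferObligation.Initiator(/* the issued state, bob, ... */))
        network.runNetwork()
        firstSpend.getOrThrow()

        // 3. Try to consume the SAME input again: the notary has already recorded that
        //    state as spent, so it refuses to sign and the flow fails.
        val secondSpend = alice.startFlow(TransferObligation.Initiator(/* the SAME issued state, ... */))
        network.runNetwork()
        try {
            secondSpend.getOrThrow()
            fail("The notary should have rejected the double spend")
        } catch (e: NotaryException) {
            // Expected - this is the "input states have already been used" error from your notary log.
        }
    }
}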

How can I start a conference call on GSM network?

I need to make a conference voice call on a GSM network.
The most I have found in the datasheet is that the command AT+CLCC can automatically report a list of the ME's current calls when the call status changes.
How can I make conference calls with a SIM800L? I have 2 phone numbers to call.
The key command for the feature you are asking for is AT+CHLD (Call Holding Services). It is important to say that this is a well-known GSM Supplementary Service, and since AT+CHLD is a standard command it is a feature that is likely to be supported by all cellular modems, not only the SIM800.
The main constraints that any user must know are:
It is a service strictly related to VOICE calls
The network operator must support this service as well
ETSI Specifications about multi-party calls
Though it may seem a boring introduction, we need to build our procedure on a solid basis. Feel free to skip this paragraph if you are just interested in the AT command sequence.
ETSI specification TS 127.007 v15.3.0 describes its behaviour in chapter 7.13, "Call related supplementary services +CHLD":
This command allows the control of the following call related services:
a call can be temporarily disconnected from the MT but the connection is retained by the network;
multiparty conversation (conference calls);
the served subscriber who has two calls (one held and the other either active or alerting) can connect the other parties and release the served subscriber's own connection.
A further document related to MPTY calls is then referenced: 3GPP TS 22.084. Another interesting source is ETSI 300 954, which states:
The served mobile subscriber A may initiate an active MultiParty call
from an active call C and a held call B.
This means that we can obtain a conference call just by adding held calls to active calls.
AT Commands procedure
From the previous documents we can deduce that the following steps will set up a multi-party call (a complete example transcript follows the list):
Start a voice call with one of the parties by issuing ATD<number>;, or answer an incoming call with ATA
Put the first call on hold by issuing AT+CHLD=2 (well supported by your SIM800, whose documentation for +CHLD=2 states "Place all active calls on hold (if any) and accept the other (held or waiting) call.")
Start a call with the third party
Start the multiparty by issuing AT+CHLD=3 (well supported by your SIM800, whose documentation for +CHLD=3 states "Adds a held call to the conversation.")
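Putting those steps together, a terminal session on the SIM800L looks roughly like this (the phone numbers are placeholders, and the comments in parentheses are not part of the dialogue):

ATD+15550001111;     (voice call to the first party - note the trailing semicolon)
OK
AT+CHLD=2            (put the active call on hold)
OK
ATD+15550002222;     (call the second party)
OK
AT+CHLD=3            (add the held call to the conversation - the conference is now active)
OK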
About AT+CLCC
The command you mentioned in the question is not directly responsible for starting a multiparty conversation, but it is related to it: it is able to list the status of all the active calls.
Execution command AT+CLCC provides the following answer:
[+CLCC: <id1>,<dir>,<stat>,<mode>,<mpty>[,<number>,<type>,<alphaID>]
[<CR><LF>+CLCC: <id2>,<dir>,<stat>,<mode>,<mpty>[,<number>,<type>,<alphaID>]
[...]]]
OK
We will focus on just two relevant parameters:
<idN> is the ID of the Nth call. It is relevant because several options of the +CHLD command allow selectively holding/releasing a specific call X, and this ID is required in order to specify X in the command. Those options, not covered in this answer, can be useful for selecting exactly which call to add to the multiparty conversation.
<mpty> is the multiparty call flag; if it is set to 1, the call is one of the parties of a multiparty (conference) call.
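For example, once the conference set up above is active, AT+CLCC should return something along these lines (illustrative values; the fifth parameter is <mpty> and is 1 for both calls):

AT+CLCC
+CLCC: 1,0,0,0,1,"+15550001111",145
+CLCC: 2,0,0,0,1,"+15550002222",145
OK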

Corda 4 - Single Party Transaction Failed to Commit to Ledger

While upgrading from Corda 3 to Corda 4, I have an issue committing a State to our node's ledger with only one Party. A single Party is able to create a state and notarize it, but CANNOT commit it to the Corda 4 ledger without asking for an external third party.
The error Corda 4 produces (which Corda 3 did not produce) is the following:
(1) java.lang.IllegalArgumentException: A flow session for each external participant to the transaction must be provided. If you wish to continue using this insecure API then specify a target platform version of less than 4 for your CorDapp.
More specific context: Using FinalityFlow without a session yields a 'session required for external parties' error and does not complete. Adding only a session (e.g. session = initiateFlow(PartyA)) results in an error that 'local nodes should not be included.'
Is there a workaround for this? It's important (for our use case) that a single Party can create a State and modify the State information without the involvement of other parties. Other use cases (where multiple parties are involved) stem from this one. Any guidance is greatly appreciated.
I think the error message is pretty spot on here. Just change the way you call FinalityFlow during your issuance so that it doesn't contain a flow session to itself, i.e.:
return subFlow(new FinalityFlow(signedTransaction));
Although you may get a deprecation warning, in which case, do the following
return subFlow(FinalityFlow(signedTransaction, emptyList()))
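For context, a minimal Kotlin sketch of such a single-party issuance flow under Corda 4 could look like the following, where MyState and MyContract are placeholders for your own state and contract classes:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FinalityFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.flows.StartableByRPC
import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder

@InitiatingFlow
@StartableByRPC
class SelfIssueFlow : FlowLogic<SignedTransaction>() {

    @Suspendable
    override fun call(): SignedTransaction {
        val notary = serviceHub.networkMapCache.notaryIdentities.first()

        // MyState / MyContract are stand-ins for your own state and contract.
        val builder = TransactionBuilder(notary)
            .addOutputState(MyState(owner = ourIdentity), MyContract.ID)
            .addCommand(MyContract.Commands.Create(), ourIdentity.owningKey)
        builder.verify(serviceHub)

        val signedTransaction = serviceHub.signInitialTransaction(builder)

        // The node itself is the only participant, so no counterparty sessions are passed.
        return subFlow(FinalityFlow(signedTransaction, emptyList()))
    }
}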

TelemetryProcessor not called without an InstrumentationKey?

I've created a class derived from ITelemetryProcessor so I can capture telemetry data during a unit test of a C# .NET class library. Being a unit test, there is no InstrumentationKey provided, as unit tests should have no network dependencies. (I cannot factor the telemetry out to an injected interface.)
I create and use TelemetryClient instances and log custom events during the unit test methods. However, I noticed my Process() method was not getting called when I logged telemetry items.
After doing some experimentation, I realized that if I set the InstrumentationKey to a dummy GUID, my Process() method started to get called.
TelemetryConfiguration.Active.InstrumentationKey = Guid.NewGuid().ToString();
Question: why should I need to provide an InstrumentationKey in order for processors to be invoked?
Thanks
-John
TelemetryProcessors are meant to apply additional processing/filtering to telemetry items before they are sent to AI. If there is no ikey, then there is no point sending to AI, as the items will not be accepted anyway - so why incur the overhead of running all the processors?
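If you'd rather not touch the global TelemetryConfiguration.Active in tests, a sketch along these lines (the class names and dummy key are made up for illustration) builds an isolated configuration, sets a throwaway key and registers the processor explicitly:

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical processor that just records every telemetry item it sees.
public class CapturingProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;
    public CapturingProcessor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        Console.WriteLine($"Captured: {item.GetType().Name}");
        _next.Process(item); // pass the item on to the rest of the chain
    }
}

public static class TelemetryTestSetup
{
    public static TelemetryClient CreateClient()
    {
        var config = new TelemetryConfiguration
        {
            // Dummy key: without some key the items are dropped before your processor runs,
            // which is the behaviour described in the question.
            InstrumentationKey = Guid.NewGuid().ToString()
        };
        config.TelemetryProcessorChainBuilder.Use(next => new CapturingProcessor(next));
        config.TelemetryProcessorChainBuilder.Build();
        return new TelemetryClient(config);
    }
}

// Usage in a test:
//   var client = TelemetryTestSetup.CreateClient();
//   client.TrackEvent("unit-test-event");   // CapturingProcessor.Process() now fires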
