How do I reference a Cloudflare Durable Object in wrangler.toml - cloudflare-workers

I have a Cloudflare Worker which uses Durable Objects. I have based my application on the chat demo code and everything has been working fine up till now but my code base is becoming too big to realistically maintain in a single file. I have tried refactoring my durable object code to put my class in a separate file and then importing it in my main file but I get an error on publish: "New version of script does not export class which is depended on by existing Durable Objects".
Can I have my Durable Object stored in a separate file and if so how do I get Cloudflare to recognise it?

You need to re-export the Durable Object class that you import, so that your Worker's main module still exports it under the name your existing Durable Objects depend on.
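As a minimal sketch (the file names and the ROOMS binding are illustrative; what matters is that the main module exports the class named by class_name in wrangler.toml):

// chatroom.mjs - the Durable Object class in its own file
export class ChatRoom {
  constructor(state, env) {
    this.state = state;
  }
  async fetch(request) {
    return new Response("hello from the Durable Object");
  }
}

// index.mjs - the main module referenced by wrangler.toml
export { ChatRoom } from "./chatroom.mjs"; // re-export so the deployed script still exports the class

export default {
  async fetch(request, env) {
    const id = env.ROOMS.idFromName("lobby");
    return env.ROOMS.get(id).fetch(request);
  }
};

# wrangler.toml
[durable_objects]
bindings = [
  { name = "ROOMS", class_name = "ChatRoom" }
]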

Related

The transaction currently built is missing an attachment for class - Attempted to find a suitable attachment but could not find any in the storage

Full Error:
transactions.TransactionBuilder. - The transaction currently built is missing an attachment for class: com/gibtn/corda/printutilities/PrintLedgerTransaction. Attempted to find a suitable attachment but could not find any in the storage.
This has been asked here and here but I hope to get better clarification.
Problem:
I have built a set of libraries to perform common tasks in my Flows that I include in all my CorDapps. For now I just copy the JARs into each project, make some changes to the Gradle files, and everything works great.
I recently put together a small library for performing common tasks in Contracts and added the JAR the same way.
This works fine with MockNodes. But when I test with real nodes, I get this error in the CRaSH shell and the transaction fails with a NoClassDefFoundError exception.
Question:
Is what I am doing even possible? Or do I always have to keep my utility classes inside the Contracts module in IntelliJ so they are bundled together with the Contracts into a single JAR? That way when the node starts the JAR (containing the Contracts and any utilities) is added to Attachment storage as a single Attachment.
I found a way to solve this. It's a bit dirty, but initial testing seems to work. I just created a blank class in my utilities JAR that implements Contract. Its verify() method is empty. Now when the Corda node starts, it sees this Contract and adds the JAR to Attachment storage. So from the CRaSH shell, if I run:
attachments trustInfo
...my utility JAR will be listed (it wasn't before). I see that when I use one of the utility methods in a Contract, the utility JAR is included as a separate Attachment in the WireTransaction.
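For reference, the marker class can be as small as this (a Kotlin sketch; the class and package names are illustrative):

package com.gibtn.corda.printutilities

import net.corda.core.contracts.Contract
import net.corda.core.transactions.LedgerTransaction

// A do-nothing Contract whose only purpose is to make the node treat this
// utility JAR as a contract JAR and load it into Attachment storage on startup.
class LibraryMarkerContract : Contract {
    override fun verify(tx: LedgerTransaction) {
        // intentionally empty
    }
}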
I'm not crazy about this solution and will probably stop using a utility JAR for Contracts. I'll go back to copying the classes into each project. Nevertheless there is a way to do it. I would just need a more experienced Corda developer to give it their blessing before I'd go forward into production with it.

What does FIRApp.deleteApp do and when should I use it?

The iOS SDK of Firebase provides the function FIRApp.deleteApp which, according to the documentation:
Cleans up the current FIRApp, freeing associated data and returning its name to the pool for future use.
A similar function is available in the JavaScript SDK but not in the Java or C++ ones.
What does this function do, and when should I use it?
My understanding is that this function works as a destructor, and thus I am supposed to call it when my app is closing. Is that correct? Should I call it on FIRApp.defaultApp too?
A FIRApp object contains references to the configuration data of your Firebase project. It is a lightweight object, mostly using a bit of memory to keep those settings available when your application needs them.
Most applications create only a single FIRApp instance for the entire duration of their lifecycle. In such cases the resources will be automatically released when the application exits, and there is no need to explicitly delete the FIRApp instance.
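In practice, deletion mainly comes into play if you create additional, named app instances and want to release one before the process ends. A minimal Swift sketch, assuming a secondary app named "secondary" was configured elsewhere (the name is hypothetical):

import FirebaseCore

// Release a named (non-default) app when it is no longer needed.
if let secondary = FirebaseApp.app(name: "secondary") {
    secondary.delete { success in
        // The app's name returns to the pool and its resources are freed.
        print("secondary app deleted:", success)
    }
}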

How to develop a custom connector in SailPoint

I am a novice in the field of Identity and Access Management.
So far I know that SailPoint provides direct connectors to integrate well-known systems like LDAP, HR systems, OIM, and databases,
and that SailPoint also supports disconnected applications through custom connectors.
My question is: how do I develop a custom connector?
I do not have the JAR file provided by SailPoint that contains the "AbstractConnector" class,
so I cannot extend it to write my own class.
I also do not understand what I would do with that class if I had the JAR.
How will SailPoint refer to that class?
Do we need to deploy that class somewhere?
I am looking for the complete flow for developing and deploying a custom connector.
If anyone has worked on this, please help.
If you unzip your identityiq.war, you'll find a JAR file called WEB-INF/lib/connector-bundle.jar. This is the JAR where you'll find AbstractConnector. Once you've written your connector code, you will need to compile it and bundle it into a JAR file, which you will place into WEB-INF/lib.
Finally, you will need to update the ConnectorRegistry object (under Configuration on the debug screen) to reference the new class, which will make it available as an Application type. If it has custom connection parameters (as most do), you will also need an XHTML page that will be embedded into the SailPoint UI to prompt the user configuring the Application.
If you have Compass access, they have a whitepaper called Custom Connectors that you will find helpful.
All that said, I encourage you to try to find a way to use an out-of-box connector if possible.
Most of the time it is better to use the DelimitedFile connector: you can import a CSV of identity data and make it work within SailPoint's workflow. You will be able to map fields, correlate accounts, and create multi-valued group memberships rapidly. Of course, this means that SailPoint will not be connected directly to the application, and you will have to develop a workflow to extract the identities and upload them. But at least you can integrate without going the custom connector route.
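For example, a hypothetical identity feed for a DelimitedFile application might look like this (the columns and the "|" group delimiter are illustrative; field mapping and the multi-valued groups attribute are set up during schema configuration):

employeeId,firstName,lastName,email,status,groups
1001,Jane,Doe,jane.doe@example.com,Active,Finance|Payroll
1002,John,Smith,john.smith@example.com,Inactive,IT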

Where should connection strings be stored in an n-tier ASP.NET application

Folks,
I have an ASP.NET project which is pretty n-tier, by namespace, but I need to separate into three projects: Data Layer, Middle Tier and Front End.
I am doing this because...
A) It seems the right thing to do, and
B) I am having all sorts of problems running unit tests for ASP.NET hosted assemblies.
Anyway, my question is, where do you keep your config info?
Right now, for example, my middle tier classes (which use LINQ to SQL) automatically pull their connection string information from the web.config when instantiating a new data context.
If my data layer is in another project can/should it be using the web.config for configuration info?
If so, how will a unit test (typically in a separate assembly) provide such configuration info?
Thank you for your time!
We keep them in a global "Settings" file which happens to be XML. This file contains all the GLOBAL settings, one of which is a connection string pointing to the appropriate server as well as username and password. Then, when my applications consume it, they put the specific catalog (database) they need in the connection string.
We have a version of the file for each operating environment (prod, dev, staging, etc). Then, with two settings -- file path (with a token representing the environment) and the environment -- I can pick up the correct settings files.
This also has the nice benefit of a 30-second failover. Simply change the server name in the settings file and restart the applications (web) and you have failed over (of course you have to restore your data if necessary).
Then when the application starts, we write the correct connection string to the web.config file (if it is different). With this, we can change a website from DEV to PROD by changing one appSettings value.
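As an illustration, one of those settings files (all names here are purely hypothetical) might look like:

<!-- Settings.PROD.xml - one copy per environment (prod, dev, staging, ...) -->
<Settings>
  <!-- server-level connection info only; the application appends the specific catalog (database) it needs -->
  <Database server="prod-sql01" userName="appUser" password="secret" />
</Settings>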
As long as there isn't too much, it's convenient to have it in the web.config. Of course, your DAL should have absolutely no clue that it comes from there.
A good option is for your data layer to be given its config information when it is called upon to do something, and it will be called upon to do something when a web call comes in. Go ahead and put the information in your web.config. In my current project, I have a static dictionary of connection strings in my data layer, which I fill out like so in a routine called from my global.asax:
CAPPData.ConnectionStrings(DatabaseName.Foo) = ConfigurationManager.ConnectionStrings("FooConnStr").ConnectionString
CAPPData.ConnectionStrings(DatabaseName.Bar) = ConfigurationManager.ConnectionStrings("BarConnStr").ConnectionString
etc.
"Injecting" it like this can be good for automated testing purposes, depending on how/if you test your DAL. For me, it's just because I didn't want to make a separate configuration file.
For testing purposes, don't instantiate the DataContext with the default constructor; pass the connection string info to the constructor instead.
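A minimal sketch of that approach (FooDataContext is a hypothetical LINQ to SQL context; a unit test would pass its own connection string instead of reading web.config):

' Requires a reference to System.Configuration and "Imports System.Configuration".
' Read the connection string wherever the caller lives (web.config here),
' then hand it to the context explicitly.
Dim connStr As String = ConfigurationManager.ConnectionStrings("FooConnStr").ConnectionString
Using db As New FooDataContext(connStr)
    ' ... query as usual
End Using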
I prefer to use an IoC framework to inject the connection into the data context, and then inject the context into other classes.

Deploying Flex Projects Leveraging Imported Web Services

I'm sure there's a simple explanation for this, but I haven't had much luck at finding the answer yet, so I figured I'd put the word out to my colleagues, as I'm sure some of you've run into this one before.
In my (simple) dev environment, I'm working with a handful of WCF Web Services, imported into my FB3 project and targeting a local instance of the ASP.NET development Web server. All good, no problems -- but what I'd like to know now is, what's the right way to deploy this project to test, staging and production environments? If my imported proxies all point, say, to http://localhost:1234/service.svc (from which their WSDLs were imported), and all I'm deploying is a compiled SWF, does Flex Builder expect me to "Manage Web Services > Delete", "> Add", recompile and release every time I want to move my compiled Flex project from development to test, and to staging, and ultimately into production? Is there a simpler workflow for this?
Thanks in advance -- hope my question was clear.
Cheers,
Chris
If you have path names that will change depending on the environment, then you will likely need to recompile for each environment, since these will be compiled into the SWF.
I typically use ANT scripts to handle my compile/deployment process when moving between development and production environments. This gives me the ability to dynamically change any path names during the compile. These build files can be integrated into Flex Builder, making this process very easy once you have everything set up, and it can be done with one click or scheduled.
Thanks Brett. I've been meaning to dig into automating my build processes anyway, so now's probably as good a time as any. :)
You do not need to build a SWF for each environment. Here's a technique I use commonly:
1. Externalize your configuration properties into an XML file; in this case, it could be a URL for each service or a base URL used by all your services
2. When the application starts up, make an HTTPService call to load the XML file, parse it, and store your properties onto some bindable "configuration object"
3. Bind the values from that object against your objects that depend on the URLs
4. Dispatch an event that indicates your configuration is complete. If you have some kind of singleton event dispatcher used by some components in your app, use that, so that the notification is global
5. Now proceed with the rest of the initialization of your application
It takes a little work to orchestrate your app such that certain parts won't initialize until steps 1-5 take place. However, I think it's good practice to handle a lot of this initialization explicitly rather than in constructors or various initialize or creationComplete events for components. You may need to reinitialize things when a user logs out and a different user logs in; if you already have your app set up so that initialization is something you can control, then reinitialization will not be a problem.
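A rough ActionScript sketch of steps 1, 2, and 4 (the file name, property names, and the AppConfig singleton are all illustrative):

<!-- config.xml, deployed next to the SWF; edit per environment, no recompile needed -->
<config>
    <serviceBaseUrl>http://localhost:1234/</serviceBaseUrl>
</config>

import flash.events.Event;
import mx.rpc.events.ResultEvent;
import mx.rpc.http.HTTPService;

// Step 2: load the external XML file at startup.
private function loadConfig():void {
    var svc:HTTPService = new HTTPService();
    svc.url = "config.xml";
    svc.resultFormat = "e4x";
    svc.addEventListener(ResultEvent.RESULT, onConfig);
    svc.send();
}

private function onConfig(event:ResultEvent):void {
    // Store the value on a bindable configuration object...
    AppConfig.instance.serviceBaseUrl = String(event.result.serviceBaseUrl);
    // ...then announce globally (step 4) that configuration is complete.
    dispatchEvent(new Event("configComplete"));
}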
