Dynamically set Receive Pipeline - BizTalk 2016

What I'm trying to do is set up a decoupled, flexible framework for all applications I develop in the future, with as much re-use as possible. Ideally I'd end up with a single orchestration that I can plug into any other orchestration: it would take a message, send it out through a send adapter, and return the response to the calling orchestration, converting the received response to XML dynamically based on the message that was sent to the adapter. That would require being able to set the receive pipeline for the message inside the orchestration.
Am I on the right track here? I can't find much on best practices for artifact re-use in BizTalk.
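For reference, an orchestration can run a receive pipeline from an Expression shape via Microsoft.XLANGs.Pipeline.XLANGPipelineManager (it requires a reference to Microsoft.XLANGs.Pipeline.dll). Below is a minimal sketch only, assuming a hypothetical pipeline type MyApp.Pipelines.ResponseReceivePipeline, a received message AdapterResponse, a constructed message XmlResponse, and an orchestration variable pipelineOutput of type Microsoft.XLANGs.Pipeline.ReceivePipelineOutputMessages:

    // Expression shape: run the receive pipeline over the raw adapter response.
    pipelineOutput = Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline(
        typeof(MyApp.Pipelines.ResponseReceivePipeline), AdapterResponse);

    // Message Assignment shape (inside a Construct Message shape):
    // pull the first disassembled message out as the typed XML response.
    pipelineOutput.MoveNext();
    pipelineOutput.GetCurrent(XmlResponse);

Note that the pipeline is still fixed at design time by the typeof() expression; choosing it truly dynamically would mean resolving a System.Type from configuration (for example with Type.GetType on an assembly-qualified name), with the extra error handling that implies.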

This comes up from time to time, and I can tell you it just never works out. You will spend a lot of time building what is essentially a framework, only to never use it beyond a handful of situations.
In other words, no one tries this anymore because it never turned out to be useful. You might want to look at the ESB Toolkit, but even that almost always makes things more complicated than needed.
If you describe some of your scenarios, we can give the best advice.

Related

How to avoid the forge model derivative queue

I want to use the Forge Viewer as a preview tool in my web app for generated data.
The problem I have is that the Model Derivative API is sometimes slow and sometimes fast.
I read that this happens because the files are placed in a queue and processed sequentially.
In my opinion, this can be solved by:
Having the extraction.update webhook also tell me where I am in the queue, so I can give my users better progress information, or stop the process when the queue is too long.
Being able to have a private queue. I have no problem paying more credits if necessary.
Being able to generate svf2 files on my own server.
But I don't know if any of these options are possible. Or if there is another workaround.
Yes, that could be useful. I logged that request in our system: DERI-7940
Might be considered later on, but no plans currently
I'm not aware of any plans for that
We're always working on making the translation service better, but unfortunately, I cannot tell when it will meet your requirements - including the implementation of the webhook feature you mentioned.
SVF2 is specifically for very large models - is that what you are working with? If not, then I'm quite certain that translating to SVF would be faster.
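For the progress-reporting part, one interim workaround (a sketch only, not an official feature; urn and token are placeholders you would supply) is to poll the Model Derivative manifest endpoint, which returns an overall status and a progress string for the translation job:

    // Poll the Model Derivative manifest for translation status/progress.
    // `urn` is the base64-encoded design URN, `token` an OAuth token with data:read scope.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text.Json;
    using System.Threading.Tasks;

    static class DerivativeProgress
    {
        static readonly HttpClient Http = new HttpClient();

        public static async Task<(string Status, string Progress)> CheckAsync(string urn, string token)
        {
            var request = new HttpRequestMessage(HttpMethod.Get,
                $"https://developer.api.autodesk.com/modelderivative/v2/designdata/{urn}/manifest");
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);

            using var response = await Http.SendAsync(request);
            response.EnsureSuccessStatusCode();

            using var manifest = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
            return (manifest.RootElement.GetProperty("status").GetString(),     // e.g. "inprogress"
                    manifest.RootElement.GetProperty("progress").GetString());  // e.g. "35% complete"
        }
    }

This does not reveal the queue position, but polling it on a timer at least lets the web app show users whether a translation is pending, in progress, or complete.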

ASP.Net webservice/asmx/ashx/whatever programming

I need to build a proxy (maybe a bad description) that receives an XML file from a 3rd party, saves it, sends it on to another 3rd party, gets the response back and passes that back to the original 3rd party. Let's call that entire process a "unit".
Should I use a webservice? A Generic Handler? Something else?
I might have to do 20 "units" per second, but I know that each "unit" may span 30 seconds to a minute each, so really, I mean that I need to be able to have 1200 of these "units" running at the same time, in all varying stages of the process that I described above.
As far as saving the file goes, I eventually want to put this into a database, but I imagine writing the file is quicker than saving the data into a database, so I'll have another, less time-critical process grab the files and insert them into the DB at its own convenience.
The "app" will only consist of 1 page and it will be running under SSL. This will likely be the only thing on this server at any given time to ensure that this little process is not a bottleneck.
What in .Net would be a good (fast and scalable) way to go about this? I don't have any effective limit on what I would need as far as hardware goes -- so I can get a screaming machine if it would guarantee no bottlenecks.
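To make the flow concrete, the body of one such "unit" could look roughly like this (a sketch with hypothetical names: read the inbound XML, persist it, forward it, and return the downstream reply to the caller):

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class RelayUnit
    {
        // One shared HttpClient avoids socket exhaustion at 20 units/second.
        static readonly HttpClient Downstream = new HttpClient();

        public static async Task<string> RunAsync(string inboundXml, string downstreamUrl, string saveDir)
        {
            // 1. Save the raw payload; a later, non-time-critical job can load it into the DB.
            File.WriteAllText(Path.Combine(saveDir, Guid.NewGuid() + ".xml"), inboundXml);

            // 2. Forward to the other third party and wait (possibly 30-60 s) for its reply.
            using var content = new StringContent(inboundXml, Encoding.UTF8, "text/xml");
            using var reply = await Downstream.PostAsync(downstreamUrl, content);
            reply.EnsureSuccessStatusCode();

            // 3. Hand the downstream response back so it can be returned to the original caller.
            return await reply.Content.ReadAsStringAsync();
        }
    }

Keeping the whole unit asynchronous matters more than raw CPU here: 1200 in-flight units that are mostly waiting on the remote party should not each hold a thread.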
Since webservices are based on XML, you need to consider the fact that you could end up with "XML inside XML". But apart from that, I'd say webservices are a good way to go, mostly because they are compatible, easy to use, and easy to understand (for future maintainers).
There are, however, alternatives that use less CPU/memory/bandwidth. WCF provides several models for this, both in terms of hosting (under IIS or as a stand-alone process) and transfer type.
Personally I'm a fan of plain old binary transfer through TCP. REST could be one way to go as it is compatible (frontend proxy/caching for instance) and essentially gives you a binary transfer with little overhead.
I also like to leave the dirty work to IIS, so I avoid stand-alone WCF apps. I assume IIS is faster and more stable than what I can do easily.
Maybe my question on high concurrent load can be of help.
I would write a WCF service, use REST to simplify its URLs, and set the WCF service to run as a singleton so that your memory doesn't get out of control.
Good article on WCF: http://www.c-sharpcorner.com/UploadFile/sridhar_subra/116/
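As a sketch of that suggestion (illustrative names; the service would be hosted in IIS with webHttpBinding or a WebServiceHostFactory): the raw Stream programming model lets the XML pass through untouched, InstanceContextMode.Single gives the singleton, and ConcurrencyMode.Multiple keeps it serving requests in parallel.

    using System.IO;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface IRelayService
    {
        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "relay")]
        Stream Relay(Stream inboundXml);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class RelayService : IRelayService
    {
        public Stream Relay(Stream inboundXml)
        {
            WebOperationContext.Current.OutgoingResponse.ContentType = "text/xml";

            // Placeholder: in the real service, forward the body to the downstream
            // party here and return its response instead of echoing the input back.
            var buffer = new MemoryStream();
            inboundXml.CopyTo(buffer);
            buffer.Position = 0;
            return buffer;
        }
    }

With a singleton instance, any shared state inside the service must be thread-safe, since ConcurrencyMode.Multiple allows many of the 1200 concurrent units to be inside it at once.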

How do I ensure that SOAP requests from a flash client to my ASP server are coming from the flash client?

I have a flash based game that has a high score system implemented with a SOAP service. There are prizes involved and I want to prevent someone from using FireBug or similar to discover the webservice path and submit fake scores.
I considered using some kind of encryption on the data but am aware that someone could decompile the swf and work out how I did it.
I also considered using an IP whitelist, but since the incoming data will come from the user's IP and not the server's, that won't work. (I'm sure I'm missing something obvious here...)
I know that there is a tried and tested solution for this, but I don't seem to be asking google the right questions to get to it.
Any help and suggestions will be appreciated, thank you
What you want to achieve is impossible. You can only make it harder for people to do. The best you can do is to use encryption and encrypt the SWF itself, which usually means a larger file size and poorer performance.
The safest method is to evaluate or even run the whole game on the server. You can try to determine whether what the client sends you is possible at all. Rather than making sure people use your client, you're making sure people play the game according to your rules.
greetz
back2dos
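A minimal version of that server-side plausibility check might look like this (the threshold is entirely made up and would come from your game's actual scoring rules):

    using System;

    static class ScoreSanity
    {
        // Reject any score that could not have been earned in the reported play time.
        public static bool IsPlausible(int score, TimeSpan playTime)
        {
            const int MaxPointsPerSecond = 50;   // hypothetical ceiling for this game
            return score >= 0 && score <= playTime.TotalSeconds * MaxPointsPerSecond;
        }
    }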
All security is based on making things hard. It never makes things impossible. How about having your game register with a separate service when it starts up. It could use client information to build some kind of special code that would be unique for each iteration of the game. The game could morph the code in a way that would be hard to emulate. Then when the game is over the score gets submitted with the morphed code and validated on the server side.
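One way to realize that idea on the server side is a per-session secret plus an HMAC over the submitted score, which the client computes (however obfuscated) and the server re-computes and compares. This is only a sketch with illustrative names, and as noted above it merely raises the bar, since a determined attacker can still dig the secret and the algorithm out of the SWF:

    using System;
    using System.Collections.Concurrent;
    using System.Security.Cryptography;
    using System.Text;

    static class ScoreValidator
    {
        // session id -> secret handed to the game when it registered at startup
        static readonly ConcurrentDictionary<Guid, byte[]> Sessions =
            new ConcurrentDictionary<Guid, byte[]>();

        public static Guid StartSession(out byte[] secret)
        {
            secret = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(secret);

            var id = Guid.NewGuid();
            Sessions[id] = secret;
            return id;
        }

        public static bool IsScoreValid(Guid sessionId, int score, byte[] submittedMac)
        {
            if (!Sessions.TryGetValue(sessionId, out var secret)) return false;

            using (var hmac = new HMACSHA256(secret))
            {
                var expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(sessionId + ":" + score));
                return FixedTimeEquals(expected, submittedMac);
            }
        }

        // Constant-time comparison so the check itself doesn't leak information.
        static bool FixedTimeEquals(byte[] a, byte[] b)
        {
            if (a.Length != b.Length) return false;
            int diff = 0;
            for (int i = 0; i < a.Length; i++) diff |= a[i] ^ b[i];
            return diff == 0;
        }
    }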

Sharing Logic Between the Browser and the Server

I'm working on an app which will, like most apps, have a whole boatload of business logic, almost all of which will need to be executed both on the server and in the Flash-based client… And I'm trying to figure out the best (read: least complex) way to implement the rules engine.
These are the parameters of the problem:
The rules engine must both run in a web browser (ie, in Flash Player) and on the server. Duplicating the logic (eg, by writing a "server" version and a "client" version) would be an unacceptable risk.
The input/output data is fairly complex, so serialization is a nontrivial problem. We are currently using AMF for all of our serialization needs, and using another protocol would add significant complexity… So it should probably be avoided.
It is infeasible to implement a "rules description language". Experimentation has shown that rules are sufficiently complex that any such language would need to be Turing complete… Which would also add a significant amount of complexity.
The rules engine will need to make some, but not very many, service calls.
Currently, the best contenders are:
Writing the code in ActionScript, then running it on the server. In theory it's possible to start up an AVM instance, get it long-polling a gateway, then pass data back and forth that way… But that seems less than ideal. Is there a "good" way of doing this?
Writing the code in Haxe. I don't know anything about Haxe's AMF support, so that could be a deal-breaker.
Something involving Tamarin. Seems like a viable option, but I haven't done enough research to tell either way.
So, what do you think? Are any of these options clearly better than the others? Is there something I haven't thought of that's worth considering?
Finally, thanks for reading this wall of text :)
How much data are you talking about? You can use Air if you want to run it on the server and access a queue or something.

Biztalk : can a message select an orchestration to be processed by?

Can a message select between an 'older' or 'latest' version of the orchestration it would like to be processed by?
thanks
If you're talking about the version of the DLL in the GAC, I don't think this is possible. But if you maintain two separate orchestrations, you can use a promoted property to route the message to the appropriate orchestration. If it's more complicated than that, you can have a single receiving orchestration for the message type, which then calls the appropriate orchestration based on whatever criteria you can code into it. This is still the send port groups deciding which messages they want to process. Another approach would be dynamic send ports. This really gives you the freedom to move the routing of the message into the orchestration/app itself.
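To make those two options concrete (all names here are illustrative; the promoted property MyApp.PropertySchema.SchemaVersion is an assumption): routing by promoted property is just a filter expression on each orchestration's activating Receive shape, and a dynamic send port is driven from an Expression shape.

    // Filter Expression on the activating Receive shape of the "latest" orchestration
    // (the older version would filter on "1.0"):
    MyApp.PropertySchema.SchemaVersion == "2.0"

    // Expression shape driving a dynamic send port (port binding set to Dynamic):
    DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "FILE://C:\\Out\\%MessageID%.xml";
    DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";

These are XLANG/s snippets from the orchestration designer rather than ordinary C#, so they only compile inside the respective shapes.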
The Microsoft ESB 2.0 Guidance has some extensive thoughts on Itineraries, which, as I understand it, is the concept of the message carrying its specific processing steps on board. I am still digesting this, but it may be something to look at.
