DataPower to handle all the business logic (no backend server)

According to IBM, the XML Firewall (XML-FW) is there only for testing and debugging. What I'm looking for is a way to let DataPower handle all the business logic with no backend server.
So basically I need to configure a Multi-Protocol Gateway (MPG) to read from the database and send the result back to the client, with no backend server involved.
What is the recommended practice to accomplish this without using an XML-FW loop-back?

You can accomplish this within any rule in an MPG by simply setting the "skip-backside" flag to true. The behavior is the same as an XML-FW loop-back; any transformation will do.
The precise variable is var://service/mpgw/skip-backside when used in XSLT; you can use it in GatewayScript as well.
Setting this variable to 1 essentially turns any rule into a "loop-back".
Here is one of the references I found on the subject.
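For illustration, a minimal GatewayScript sketch of that flag in an MPGW request rule might look like this (the JSON payload is just a placeholder for whatever your transformation or database lookup produces):

```javascript
// GatewayScript action in an MPGW request rule.
var sm = require('service-metadata');

// Skip the backside connection entirely; the service proceeds
// straight to the response rule with whatever we write out.
sm.setVar('var://service/mpgw/skip-backside', true);

// Write the response payload back to the client.
session.output.write({ status: 'ok', source: 'datapower' });
```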

Related

How to best architect a website when each client has its own database and subdomain?

For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the session_start event, we would determine which database to use (by looking at the subdomain they come in on) and store the connection string in a session variable. Then on every page_init, we'd dynamically set any object's connection string, and we'd do the same thing in code-behind.
Is there a better approach, and will setting the connection string in page_init work? Is using a session variable wise? I've tended to avoid them except when no other solution was possible.
The problem with this model is that it is really complex and error-prone, especially when it comes to changes in the database. Imagine you need to add an extra field to the interface: if you have 100 clients, that means updating 100 different databases. When you factor in downtime, things get even worse.
I would approach it slightly differently: abstract your database layer behind an API that makes the database calls, and from the website always call that API, passing along the domain whose data you want.
You may ask what advantage this gives you. The biggest one shows up during upgrades and maintenance: one API per client is much easier to reason about than one database per client. And if you really want just one API (I would still recommend one per client, deployed automatically), you can have a switch in the call: based on parameters you pass to the API (for example, the subdomain in a request header), you choose which database to connect to.
Let me give you a sample scenario and how I would approach it (this holds whether you change the database or the API):
Say I want to add a new data field. The first step is to add the field on the backend (API or database) and deploy it. If it is an API, you can even test this by calling it and checking that the new field is now returned; this is harmless for your UI, which simply ignores a field it does not use. After that, you change the UI to actually use the field and deploy that to production.
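As a rough illustration of the "one API, subdomain-switched database" idea, here is a minimal node.js/Express sketch; the tenant names, connection strings, and the queryTenantDb helper are all invented for the example:

```javascript
// One API in front of many per-client databases, selected by subdomain.
const express = require('express');
const app = express();

// Hypothetical map from tenant subdomain to connection string; in
// practice this would come from configuration or a directory service.
const connectionStrings = {
  acme:   'Server=db1;Database=acme_db;...',
  globex: 'Server=db2;Database=globex_db;...'
};

// Resolve the tenant once, up front, from the Host header.
app.use((req, res, next) => {
  const tenant = req.hostname.split('.')[0]; // acme.example.com -> "acme"
  req.connectionString = connectionStrings[tenant];
  if (!req.connectionString) {
    return res.status(404).send('Unknown tenant');
  }
  next();
});

// Every endpoint uses the tenant-specific connection chosen above.
app.get('/widgets', async (req, res) => {
  const rows = await queryTenantDb(req.connectionString, 'SELECT * FROM widgets');
  res.json(rows);
});

// Hypothetical data-access helper; substitute your real driver here.
async function queryTenantDb(connectionString, sql) {
  return []; // open connection, run sql, return rows
}

app.listen(3000);
```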

Using Breeze with an ODBC Connection

I am in the beginning stages of writing an AngularJS client that talks to a RESTful ASP.NET Web API server, and I am trying to integrate Breeze. I have full control over both client and server code, but the one non-negotiable is that I have to connect to a DBISAM database that I'm sharing with a legacy Windows desktop app, so I cannot take advantage of the Entity Framework that most Breeze server examples use. I've successfully retrieved data by setting up a controller similar to the one in the NoDB example and am now trying to figure out the best way to get real data from my database. I am able to get data from the DB over the ODBC connection; I'm just not sure where that fits in with the Breeze way of doing things.
Given all of that, here are my specific questions:
1. Are there any Breeze server examples showing how to retrieve/save data using a database connected via ODBC that I have somehow overlooked?
2. Will I need to create an adapter to make this work? And if so, is the MongoDB adapter the closest thing to use as an example of what that code should look like?
3. Without Entity Framework, is it still better to return the metadata from the server, or should I create it on the client instead?
4. I understood the documentation to say that it is easier to have the server be Breeze-"aware", but is that still true when needing to use ODBC? Perhaps I should just use Breeze on the client side instead, similar to the Edmunds example?
Thanks for any help with figuring out the best way to proceed!
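(On question 3, for reference: if you do build the metadata on the client, a hand-written store looks roughly like the sketch below. The entity and property names are invented, not from any Breeze sample.)

```javascript
// A minimal sketch of client-side Breeze metadata (no server metadata).
var metadataStore = new breeze.MetadataStore();

// Hypothetical entity describing rows returned by the ODBC-backed API.
metadataStore.addEntityType({
  shortName: 'Widget',
  namespace: 'MyApp',
  dataProperties: {
    id:   { dataType: breeze.DataType.Int32, isPartOfKey: true },
    name: { dataType: breeze.DataType.String }
  }
});

// Attach the store to an EntityManager pointed at the Web API service.
var manager = new breeze.EntityManager({
  serviceName: 'api/widgets',
  metadataStore: metadataStore
});
```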

BizTalk sending message to Webservice without mapping

I am new to BizTalk, and I have the following requirement:
Source: Oracle (table). I created a generated schema in BizTalk.
Target: a web service which receives an "object array" (a table of source records from BizTalk) as input.
The source and target systems have the same structure, hence no mapping should be implemented; the logic should live in pipelines or the orchestration.
I need info on the following:
How to incorporate the logic in a pipeline or orchestration to map data from the source schema to the target web service schema.
This question was posed (and has since been deleted) on the other big BizTalk forum, so I'll share my answer here.
What you're asking is simply not possible. It doesn't matter that the source and destination are logically the same; they are represented by two different schemas in BizTalk. There is no way around this except to develop the web service to accept the WCF Oracle message directly.
Because of that, you must transform from the source to the destination, and Maps are how that is done. While there are technically other ways, they are harder to write, bug-prone, and would likely offer a less desirable performance profile.
A ban on Maps is just counter-productive, and as a long-time BizTalk developer I could not accept a project with such a requirement.
To be honest, it's not very clear what you are asking for. Your requirement states that no mapping is required, but then you go on to ask how to incorporate mapping in a pipeline or orchestration.
A standard approach to delivering this would be:
1. Set up your input process from Oracle by using "Consume Adapter Service" from Visual Studio's "Add Generated Items". Use the Oracle binding and set up connection properties for typed polling along with your query (see here for an example on MS SQL); change to a service contract type (for inbound operations) and you'll get a set of schemas representing your dataset, plus a binding for your typed receive port poller.
2. Use "Consume WCF Service" to point to your "sending" web service; you'll get the schemas, a binding, and a helpful orchestration with port types added to your project.
3. Create a simple map from your inbound Oracle recordset schema to your web service schema. This should be pretty straightforward if they are identical, although I suspect you'll have to deal with multiple sets of data, depending on your data.
4. Complete by wiring your orchestration together.
I appreciate this is a high-level view of what you need to do, but there are plenty of examples you can google to get started. Hope that helps.

intercept data changes and alter/validate from a server

I'm working on a solution to intercept changes to the data from our node.js server and validate/alter them before they are stored/synced to other clients.
Any strategies or suggestions on how to solve this with the current code base?
Currently, it seems like the only option is to rewrite the data after the sync operation. That would mean each client (including the server) would probably receive the sync first, then the server would rewrite the data and trigger a second sync.
To help understand the context of the question, here's what seems like an ideal strategy for my needs:
server gets a special token/key not available to clients (when security comes about)
server registers a dependency injection like firebase.child('widgets').beforeSync(myCallback)
client syncs data
server callback is notified
server modifies or validates the data
if valid, it returns it to firebase for sync ops
if invalid, aborts the sync with an error which is returned to the client
Thanks for sharing your ideas!
We've considered this type of approach. You can actually simulate this sort of behavior by structuring your data so that there is an "unvalidated" tree and a "validated" tree.
The "unvalidated" tree would be writeable by the client and the server would monitor it for changes. When a change occurs it would validate the data, and if it passes it would copy it into the "validated" tree which is only writeable by the server. You could pass errors back to the client through Firebase data as well when the validation fails.
This behavior could be packaged into a library that provides the behavior you describe. We may add this as core functionality as well, but we're still researching a variety of options.
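As a rough sketch of that "unvalidated"/"validated" tree pattern on a node.js server (the paths and the isValidWidget rule are invented for illustration, and this assumes the Firebase Admin SDK with credentials and database URL already configured):

```javascript
const admin = require('firebase-admin');
admin.initializeApp(); // assumes credentials/databaseURL are configured

const db = admin.database();

// Clients write to the "unvalidated" tree; the server watches it.
db.ref('unvalidated/widgets').on('child_added', async (snap) => {
  const widget = snap.val();
  if (isValidWidget(widget)) {
    // Promote the data into the server-only "validated" tree...
    await db.ref('validated/widgets/' + snap.key).set(widget);
  } else {
    // ...or report the failure back to the client through Firebase data.
    await db.ref('errors/widgets/' + snap.key).set('validation failed');
  }
  await snap.ref.remove(); // clear the inbox entry either way
});

// Hypothetical validation rule.
function isValidWidget(w) {
  return w && typeof w.name === 'string' && w.name.length > 0;
}
```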

Is it wrong to switch client logic in the service tier?

We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method; it is essentially an enum containing the name of the calling client application, and is used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always think of your layers as onion-like layers, where dependencies always point inwards, never outwards.
So your GUI/app layer should depend on the business logic layer, the business logic layer should depend on the data access layer, and so on.
Unless you categorize the clients (web, win, WPF, CLI) or generalize them with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of, and dependent upon, the outer layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Couldn't you create two different services, one for each application? The two services would share a lot of code, or call a single internal service with different parameterization depending on which outer service was called.
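A rough sketch of that shape, assuming something like node.js/Express (all names invented): two thin client-facing endpoints delegate to one internal implementation, and each bakes in its own policy instead of receiving a client type.

```javascript
const express = require('express');
const app = express();

// Single internal service, parameterized explicitly. Nothing here
// knows or cares which application is calling.
function findCustomers(options) {
  return fetchCustomers(options); // hypothetical data-access call
}

// Web application endpoint: its policy is fixed here, not looked up
// from a per-client configuration table on every call.
app.get('/web/customers', (req, res) => {
  res.json(findCustomers({ maskSensitive: true, pageSize: 20 }));
});

// Agent application endpoint, with its own policy.
app.get('/agent/customers', (req, res) => {
  res.json(findCustomers({ maskSensitive: false, pageSize: 100 }));
});

// Hypothetical stub so the sketch is self-contained.
function fetchCustomers(options) {
  return [];
}

app.listen(3000);
```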
From a design perspective, this is no different from having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be quite interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread; it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for, in terms of a more elegant design, is an ApplicationIdentity attribute. You'd have to write a custom one; I don't know of one in the framework right now.
This is a hard topic to discuss without a solid example.
You are right to feel that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self-explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system, which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request: something like "attended"/"secure" (agent client) vs. "unattended"/"not secure" (web client). You then exchange a session key (an HTTP cookie or an application-level session id). Sessions obviously don't work if you need to be 100% stateless, especially if you want to scale out without session replication; if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't put a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently. Whoever is using the function makes the decision which one to call.
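A tiny illustration of the difference (the function names and pricing logic are invented):

```javascript
// The "magic parameter" version: one function whose behaviour
// forks on who the caller is.
function getQuote(clientType, items) {
  const discount = clientType === 'AGENT' ? 0.10 : 0;
  return priceItems(items, discount);
}

// The alternative: separate, self-explanatory operations. The caller
// decides which to invoke, and each can evolve independently.
function getAgentQuote(items) {
  return priceItems(items, 0.10); // attended/secure pricing
}
function getWebQuote(items) {
  return priceItems(items, 0); // unattended/public pricing
}

// Hypothetical shared helper.
function priceItems(items, discount) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return subtotal * (1 - discount);
}
```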
So why is client type so wrong? Client type has no specific meaning on its own; it has many meanings, and they may change over time. It's simply informational, which is why it is a handy thing to log. An operating environment, by contrast, does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different, in a way that would break compatibility with the original request? Now you have two requests: two clients use request A and one client uses request B. If you pass a client type into each request, the server is expected to work for every possible client type, which is much harder to test and maintain!
