Can the same graphql schema be used across multiple strawberry fastapi routers? - fastapi

I have Strawberry set up to run with FastAPI. Basically, like the examples, the FastAPI initialization code creates a GraphQLRouter with a Schema object and a context_getter.
Everything works great; I use __get_context to pass some dependencies in.
What I don't know is how to conditionally call dependencies.
For example, I might want a Depends(UserManager) on some GraphQL calls and a Depends(ClientManager) on others. Normally in FastAPI, I could use different routers to assign different dependencies, but most of the Strawberry FastAPI examples use a single router for a single schema, or, more rarely, multiple routers across multiple schemas.
How do I break up a Strawberry GraphQL Schema across multiple GraphQLRouters so I can use different Depends chains for different requests?
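For reference, a minimal sketch of the setup being described, using Strawberry's FastAPI integration (UserManager here is just a placeholder standing in for the question's dependency, not a real library class):

import strawberry
from fastapi import Depends, FastAPI
from strawberry.fastapi import GraphQLRouter


@strawberry.type
class Query:
    @strawberry.field
    def hello(self) -> str:
        return "world"


schema = strawberry.Schema(query=Query)


class UserManager:
    """Placeholder for the dependency mentioned in the question."""


async def get_user_context(user_manager: UserManager = Depends(UserManager)):
    # Whatever this returns becomes info.context inside resolvers.
    return {"user_manager": user_manager}


app = FastAPI()
app.include_router(
    GraphQLRouter(schema, context_getter=get_user_context),
    prefix="/graphql",
)

# The open question is whether a second GraphQLRouter could reuse the same
# `schema` with a different context_getter (for example one built around
# Depends(ClientManager)) mounted at another prefix.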

Related

Filter data from api calls in Next.js in SSG to increase performance

I vaguely remember seeing an npm module that does this, but I can't find it. Is there a better way than just destructuring off the API in SSG (Next.js)?
Use Case:
If the API response is huge, many times we just pass that data into "getStaticProps", not realizing this could be a performance hit if we only need a few fields but pass in the giant payload.
Solution:
Just destructure what I need off of the API response.
That solution is fine, but I remember seeing an npm package that does this, has a bit of a GraphQL vibe, and allows for nested objects/arrays, etc.
What are your strategies for this?

How to add default options for fetchBaseQuery in RTK Query?

After searching the Internet for a long time, I have not found whether it is possible to add default options to this library, as in axios. Do I have to pass the same options to fetchBaseQuery every time I create an Api?
As you should create only one API in your application in almost all cases, there is really no need for "defaults". The fact that you are creating a fetchBaseQuery is already that default.
You should only create multiple apis if those are completely independent datasets. So if you had one api that queried against a Facebook service and another one that queried against a cake-recipe service, that's fine. But if it's the same database and you create one api for authors and one for books, you are not using it as intended.
This is also mentioned at about 5 different places in the RTK Query docs, for example quoting the Quick Start Tutorial:
Typically, you should only have one API slice per base URL that your application needs to communicate with. For example, if your site fetches data from both /api/posts and /api/users, you would have a single API slice with /api/ as the base URL, and separate endpoint definitions for posts and users. This allows you to effectively take advantage of automated re-fetching by defining tag relationships across endpoints.
For maintainability purposes, you may wish to split up endpoint definitions across multiple files, while still maintaining a single API slice which includes all of these endpoints. See code splitting for how you can use the injectEndpoints property to inject API endpoints from other files into a single API slice definition.

Generic Airflow data staging operator

I am trying to understand how to manage large data with Airflow. The documentation is clear that we need to use external storage rather than XCom, but I cannot find any clean examples of staging data to and from a worker node.
My expectation would be that there should be an operator that can run a staging-in operation, run the main operation, and then stage out again.
Is there such an operator or pattern? The closest I've found is the S3 File Transform operator, but it runs an executable to do the transform, rather than a generic operator such as the DockerOperator, which is what we'd want to use.
Other "solutions" I've seen rely on everything running on a single host and using known paths, which isn't a production-ready solution.
Is there such an operator that would support data staging, or is there a concrete example of handling large data with Airflow that doesn't rely on each operator being equipped with cloud-copying capabilities?
Yes and no. Traditionally, Airflow is mostly an orchestrator, so it does not usually "do" the stuff itself; it tells others what to do. You very rarely need to bring actual data to an Airflow worker; the worker is mostly there to tell others where the data is coming from, what to do with it, and where to send it.
There are exceptions (some transfer operators actually download data from one service and upload it to another), so the data passes through the Airflow node, but this is an exception rather than a rule (the more efficient and better pattern is to invoke an external service to do the transfer and have a sensor wait until it completes).
This is the more "historical", and somewhat still current, way Airflow operates. However, with Airflow 2 and beyond we are expanding this, and it is becoming more and more possible to implement a pattern similar to what you describe; this is where XCom plays a big role.
You can, as of recently, develop custom XCom backends that allow for more than metadata sharing; they are also good for sharing the data itself. You can see the docs here: https://airflow.apache.org/docs/apache-airflow/stable/concepts/xcoms.html#custom-backends, a nice article from Astronomer about it: https://www.astronomer.io/guides/custom-xcom-backends, and a very nice talk from Airflow Summit 2021 (from last week) presenting it: https://airflowsummit.org/sessions/2021/customizing-xcom-to-enhance-data-sharing-between-tasks/. I highly recommend watching the talk!
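To make that concrete, here is a rough sketch of such a backend, in the spirit of the linked docs and Astronomer guide (it is an illustrative assumption, not code taken from them; the bucket name, key scheme, and JSON-only handling are placeholders, and the exact BaseXCom method signatures vary a little between Airflow versions):

import json
import uuid

from airflow.models.xcom import BaseXCom
from airflow.providers.amazon.aws.hooks.s3 import S3Hook


class S3XComBackend(BaseXCom):
    """Stage large XCom values in S3; keep only a reference in the metadata DB."""

    PREFIX = "xcom_s3://"
    BUCKET_NAME = "my-xcom-staging-bucket"  # placeholder bucket name

    @staticmethod
    def serialize_value(value):
        # "Staging-out": upload the payload and store only a small reference string.
        if isinstance(value, (dict, list)):
            key = f"xcom/{uuid.uuid4()}.json"
            S3Hook().load_string(
                json.dumps(value), key=key, bucket_name=S3XComBackend.BUCKET_NAME
            )
            value = S3XComBackend.PREFIX + key
        return BaseXCom.serialize_value(value)

    @staticmethod
    def deserialize_value(result):
        # "Staging-in": download the payload when a downstream task pulls the XCom.
        value = BaseXCom.deserialize_value(result)
        if isinstance(value, str) and value.startswith(S3XComBackend.PREFIX):
            key = value[len(S3XComBackend.PREFIX):]
            value = json.loads(
                S3Hook().read_key(key=key, bucket_name=S3XComBackend.BUCKET_NAME)
            )
        return value

The backend is enabled via the xcom_backend setting (for example AIRFLOW__CORE__XCOM_BACKEND pointing at the class); see the docs linked above for details.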
Looking at your pattern: XCom pull is the staging-in, the Operator's execute() is the operation, and XCom push is the staging-out.
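As a toy illustration of that mapping (assuming the Airflow 2 TaskFlow API; the DAG and task names are made up), a value returned from a task is XCom-pushed and an argument is XCom-pulled, so with a backend like the sketch above the staging happens transparently:

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule_interval=None, start_date=datetime(2021, 1, 1), catchup=False)
def staging_example():
    @task
    def extract():
        # The return value is pushed as an XCom: the "staging-out" step.
        return [{"id": i} for i in range(1000)]

    @task
    def load(rows):
        # The argument is pulled from XCom before the work runs: "staging-in".
        print(f"received {len(rows)} rows")

    load(extract())


staging_dag = staging_example()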
This pattern will, I think, be reinforced by upcoming Airflow releases and some cool integrations that are coming. And there will likely be more data-sharing options in the future (but I think they will all be based on a, maybe slightly enhanced, XCom implementation).

Migrating from IBM MQ to javax.jms.* (Weblogic)

I've been looking for days at how one could migrate from using IBM WebSphere MQ to using only the QueueManager within WebLogic 10.3.x server. This would save the cost of licenses for IBM MQ. The closest I came was finding an external link which stated that IBM examples existed that did something similar (moving away from MQ to standard JMS libraries), but when I attempted to follow the link, http://www.developer.ibm.com/isv/tech/sampmq.html,
it led to a dead page :\
More specifically I am interested in
Which classes to use in my attempt to replace the following com.ibm.mq.* classes:
MQEnvironment
MQQueueManager
MQGetMessageOptions
MQPutMessageOptions
and other classes which don't have an obvious javax.jms.* alternative.
Some of the nuances & work-arounds I may encounter in this migration process.
The database we are forwarding the queue messages to is Oracle 11 Standard (with Advanced Queuing), if that changes anything, so basically we are looking to "cut out the middle-man", so to speak. Your learned responses will be highly appreciated!
You seem to be using the MQI API for MQ, for which there is no replacement at hand. There is no other way than to actually rewrite your MQ application logic to use the JMS API.
A good way might be to first migrate into JMS using the same WebSphere MQ server, since it allows you to verify your results in a reliable way.
You ask which classes to replace, say, MQGetMessageOptions with. There is no single one-to-one replacement (there are even some aspects of MQI that JMS cannot easily replace). Most of the MQPutMessageOptions and other options are available by setting parameters on sessions and messages in JMS. You really need to understand the JMS API before trying this switch.
Then, when you have JMS working with WebSphere MQ, you can do as Beryllium suggests: swap the libraries to WebLogic, switch any references to com.ibm.mq.jms.MQConnectionFactory, configure the new parameters, pray to any available god, and press run :)
I have completed an application which supported both JBossMQ and MQSeries/WebSphere MQ.
The MQSeries-specific classes I required were:
import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.mq.jms.MQTopicConnectionFactory;
These were sufficient to create the javax.jms.QueueConnection/TopicConnection.
As for WebSphere MQ, I connected directly.
As for JBossMQ I looked up the factories using JNDI.
So on top of this there is only JMS.
So the first step is to rewrite your application so that only the initializing part uses WebSphere MQ-specific classes (the ones I have listed above).
Then replace the remaining MQ-specific part with a JNDI/directory lookup for a queue connection factory provided by your application server.
Finally, remove the MQSeries-specific parts from your source.
Here is a simple example which shows how to send a message.

Routing/filtering messages without orchestrations

A lot of our use cases for BizTalk involve simply mapping and routing HL7 2.x messages from one system to another. Implementing maps and associating them with send/receive ports is generally straightforward, but we also need to do some content-based filtering on the sending side.
For example, we may want to send ADT A04 and ADT A08 messages to System X only if the sending facility is any of 200 facilities (out of a possible 1000 facilities we have in our organization), but System Y needs ADT A04, A05, and A08 for a totally different set of facilities, and only for renal patients.
Because we're just routing messages and not really managing business processes here, utilizing orchestrations for the sole purpose of calling out to the business rules engine is a little overkill, especially considering that we'd probably need a separate orchestration for each ADT type because of how the schemas work. Is it possible to implement filter rules like this without using orchestrations? The filters functionality of send ports looks a little too rudimentary for what we need, but at the same time I'd rather not develop and manage orchestrations.
You might be able to do this with property schemas...
You need to create a property schema and include the properties (from the other schemas) that you want to use for routing. Once you deploy the schema, those properties will be available for use as a filter on the send port. Start from here; you should be able to find examples somewhere...
As others have suggested you can use a custom pipeline component to call the Business Rules Engine.
And rather than trying to create your own, there is already an open-source one available called the BizTalk Business Rules Engine Pipeline Framework.
By calling BRE from the pipeline you can create complex rules which then set simple context properties on which you can route your messages.
Full disclosure: I've worked with the author of that framework when we were both at the same company.
