Make Firebase project FHIR-compatible?

I’m new to using FHIR - Fast Healthcare Interoperability Resources. Is it possible to make an existing Firebase project FHIR-compatible? My project uses Firebase mostly as a database for info sent to us by an application (NoSQL, BaaS). My idea is to convert the existing data into FHIR resources but I’m not sure what to do after that. How do I approach turning a Firebase project into a FHIR server?

FHIR is a standard for data exchange. It can be used with a wide variety of persistence technologies and varying proprietary database organizations. Each system is unique and needs to find its own way to map its particular data structures to and from the appropriate FHIR structures. Both the .NET and Java reference implementations include 'façade' capabilities designed to make it easier to express proprietary data structures over FHIR. (Just search for ".NET FHIR facade" or "Java FHIR facade".)
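To make that concrete, here is a minimal sketch of what such a mapping might look like with the Firely .NET SDK (the Hl7.Fhir NuGet packages); the Firebase document shape (FirebasePatient) is hypothetical:

```csharp
using Hl7.Fhir.Model;
using Hl7.Fhir.Serialization;

// Hypothetical shape of a patient record as it might be stored in Firebase.
public record FirebasePatient(string Id, string FirstName, string LastName, string BirthDate);

public static class PatientMapper
{
    // Map the proprietary Firebase document onto a FHIR Patient resource.
    public static Patient ToFhir(FirebasePatient source)
    {
        var patient = new Patient { Id = source.Id };
        patient.Name.Add(new HumanName().WithGiven(source.FirstName).AndFamily(source.LastName));
        patient.BirthDate = source.BirthDate; // FHIR expects YYYY-MM-DD
        return patient;
    }

    // Serialize to FHIR JSON, ready to be served from a facade endpoint.
    public static string ToFhirJson(FirebasePatient source) =>
        new FhirJsonSerializer().SerializeToString(ToFhir(source));
}
```

A facade server would then answer FHIR REST calls (e.g. GET /Patient/{id}) by fetching the Firebase document and returning the mapped resource.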

Related

Firebase + Flutter business framework

I am considering building a Firebase + Flutter framework for small business solutions.
With its fairly high level of security, Firebase + Flutter seems quite good for a number of business applications, especially on Android.
However, for this to make sense I have to solve a few problems, and I will be very grateful for help with any of the points below.
1. Is it possible to connect from a Firebase database via VPN (mainly OpenVPN) to another database via ODBC and/or through a web service (ODBC strongly preferred)? The goal is to connect Firebase with local databases in companies, especially MS-SQL databases (mainly small ERP/WMS systems). If this is not possible, how can an equivalent effect be obtained? I also need a Firebase-to-Firebase connection to automatically download changes to the framework from the main repository.
2. Many governmental and commercial systems require signing files with an X.509-based signature (mainly password-protected *.pfx, *.p12). I would like all such signing to be implemented on the Firebase server side (with the ability to manage such certificates). Are there appropriate libraries in Firebase to sign content in accordance with X.509? If not, how can content signing be achieved in Firebase + Flutter, strongly preferring that the certificate not be on the client?
3. In several places on the net I have seen the option of logging in using an X.509-compatible (or identical) certificate, but I do not see such an option in the Firebase panel. Is X.509-compliant login supported by Google in Firebase, and if not, could I ask for a step-by-step link on how to handle it myself in two variants: a) an external certificate, b) a certificate automatically generated by the client application?
Thank you in advance for your support
The subject of this question is so broad that I don't think it's possible to answer it on Stack Overflow.
Generally, Firebase is a set of cloud services, including databases, functions, authentication, and many more, that can be used in your mobile apps. One of the technologies you can use Firebase with, but not the only one, is Flutter, an engine that uses the Dart programming language. There are also APIs for many other programming languages, such as Java, JS, and Python; they may differ depending on which Firebase product you choose.
Now pointing to your questions:
I am not sure what you mean by "connecting from Firebase", but I suppose you mean connecting from your app. If you build the app in Flutter you will use Dart, which does have VPN support.
Although I don't know much about X.509, I have found that Dart supports it as well.
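For what it's worth, the server-side signing you describe (keeping the .pfx/.p12 off the client) is a standard operation on most platforms; a minimal sketch of the concept in C#, which you would translate to whatever runs your backend (e.g. a Cloud Function) — the file path and password are placeholders:

```csharp
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

// Load a password-protected PFX/P12 kept on the server, never shipped to clients.
var cert = new X509Certificate2("signing-cert.pfx", "pfx-password"); // hypothetical path/password

byte[] content = Encoding.UTF8.GetBytes("document to sign");

// Sign with the certificate's RSA private key (SHA-256 + PKCS#1 padding).
using RSA rsa = cert.GetRSAPrivateKey()!;
byte[] signature = rsa.SignData(content, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

// The client receives content + signature and verifies with the public certificate only.
bool valid = cert.GetRSAPublicKey()!
    .VerifyData(content, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
```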
Authentication is one of Firebase's products. You can also find custom authentication options there.
I think everything you are asking is possible, but of course there is no simple answer. I hope this helps.

Relationship between IBM Cloudant, PouchDB, Hoodie, Meteor

What is the relationship between IBM Cloudant, PouchDB, Hoodie, Meteor?
I was watching https://www.youtube.com/watch?v=MALKo1bSa4Y which mentions those technologies but haven't yet wrapped my head around the relationships, so I would appreciate a neat textual summary.
IBM Cloudant is a database-as-a-service based on Apache CouchDB. It's a JSON document store whose storage mechanism makes it great for keeping multiple, partially connected copies of a data set, e.g. a copy "in the cloud" and a copy on a mobile device.
PouchDB is an open-source database that can run in a browser or in Node.js and speaks the CouchDB replication protocol. It can be used to store data on the mobile device, optionally replicating the data to the cloud (CouchDB or Cloudant) when needed. This practice is often called "Offline First" development: getting your app to store and retrieve data in a local data store to give the user 100% uptime, even when there's no network connection.
Hoodie and Meteor are opinionated application development frameworks. You can use their scaffolding to build your applications. They in turn may use PouchDB for local storage and/or Cloudant or CouchDB as a server-side store.
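As a concrete illustration of the replication protocol these tools share, you can trigger a replication against any CouchDB-compatible server (CouchDB, Cloudant) by POSTing to its _replicate endpoint; a rough C# sketch, with placeholder URLs and database names:

```csharp
using System.Net.Http;
using System.Text;

// Trigger a one-off replication via CouchDB's _replicate endpoint.
// Host names and database names below are placeholders.
using var http = new HttpClient();
var json = "{ \"source\": \"http://localhost:5984/localdb\"," +
           "  \"target\": \"https://account.cloudant.com/clouddb\" }";
var body = new StringContent(json, Encoding.UTF8, "application/json");

var response = await http.PostAsync("http://localhost:5984/_replicate", body);
response.EnsureSuccessStatusCode(); // CouchDB replies with a replication status document
```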

Cloud database for Azure multi-tenant application?

I am starting to port an old single-tenant desktop application to the cloud and would like to hear your recommendations about databases for my cloud-based multi-tenant application.
My basic requirement is simple:
For each tenant, its data is separate from any other tenant's data. I can easily back up, restore, or export the data for one single tenant without affecting other tenants.
I don't really want to care about multi-tenancy in the business logic code. It should look like a single-tenant application behind the security layer, with no tenant ID passed around, etc.
Easy to query using some mature technology like LINQ.
Availability and scalability, of course: easy to set up replicas, failover, scaling up and down, etc.
I have done some investigation into multi-tenant application development. I have noticed that SQL databases from Azure and AWS are both very expensive (the cost of just a SQL database instance is close to the license fee of the original application), so I definitely can't use a separate SQL database instance per tenant.
Now I'm reading the book Developing Multi-tenant Applications for the Cloud, 3rd Edition, which uses the Azure Storage Service to implement multi-tenancy. I haven't finished the book yet, but it seems you still have to handle multi-tenancy yourself, and the sample code is already out of date.
I have seen lots of SO questions comparing Azure Table Storage with MongoDB. MongoDB is very new to me, so I'm not sure whether it could easily fulfill my requirements.
I have seen RavenDB as well; it supports multi-tenancy out of the box, but I haven't seen good sample code showing how to use it in Azure app development.
I hope to hear some good advice from the awesome SO crowd.
I would opt for RavenDB over MongoDB. Even though Raven is a newcomer to the game, it supports most of the features that traditional SQL databases support.
The volume of data you are dealing with is also a key decision point, as is the amount of traffic you are expecting.
Also keep in mind operational costs and development effort. HA and DR scenarios can be problematic when you use Raven or Mongo, because you need to host them yourself. Azure Storage, by contrast, protects you by default by maintaining three copies of your data.
So I would suggest you weigh the trade-offs carefully and choose based on your business needs, cost optimization, and development and operational effort.
Having a single instance of your application for each tenant is a very expensive way to implement an application; however, I realise that if an application was developed with a single tenant in mind, the cost of changing over can be high.
First, let's start with why you have a desktop application connecting to a database at another location. The latency can really slow down an application. Ideally you would want a locally installed database that syncs with the cloud DB, or to add appropriate caching to your application.
However, the DB would still need to differentiate between clients.
Why do you need this to go to a cloud database? Is it for backup purposes, to avoid installing a DB locally on a client's machine, to access the same data from many machines, or something else?
Unless your application is extremely large, I would recommend rewriting it as a multi-tenant application on a single SQL Azure database. The architecture chosen at the beginning of the project no longer suits your requirements, and as you expand you will run into further issues.
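If you do go the single-database route, one way to keep tenant IDs out of your business logic (the second requirement above) is to apply the filter once, centrally. A sketch of the idea using EF Core's global query filters; the entity and property names are illustrative:

```csharp
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string TenantId { get; set; } = "";
    public decimal Total { get; set; }
}

public class AppDbContext : DbContext
{
    private readonly string _tenantId; // resolved once per request by the security layer

    public AppDbContext(DbContextOptions<AppDbContext> options, string tenantId)
        : base(options) => _tenantId = tenantId;

    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every LINQ query on Orders is automatically scoped to the current tenant,
        // so business logic never mentions TenantId.
        modelBuilder.Entity<Order>().HasQueryFilter(o => o.TenantId == _tenantId);
    }
}
```

Business code then writes ordinary LINQ (e.g. context.Orders.Where(o => o.Total > 100)) and the tenant scoping happens transparently.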

Architecture For A Real-Time Data Feed And Website

I have been given access to a real time data feed which provides location information, and I would like to build a website around this, but I am a little unsure on what architecture to use to achieve my needs.
Unfortunately the feed I have access to will only allow a single connection per IP address, therefore building a website that talks directly to the feed is out - as each user would generate a new request, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, then makes it available to a website.
From a front-end connection perspective, web services sound like they may work, but would this also create multiple connections to the feed, one per user? I would also like the back-end connection to be persistent, so that data is retrieved and processed even when the site is not being visited; I believe IIS recycles web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally I would like to push the data to the website every time it changes or new data is received.
What is the best way of achieving this, and what technologies are there out there that may assist here? Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web based queries at once, which seems like quite a task.
Ideally I would be looking for a C# / ASP.NET based solution with Javascript client side, although I guess this question is more based on architecture and concepts than technological implementations of these.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component that is dedicated to reading the realtime feed. It could then publish the received data on to a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
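A rough sketch of component A under these assumptions, using the classic MSMQ API (the queue path and message label are placeholders):

```csharp
using System.Messaging; // classic MSMQ API (.NET Framework's System.Messaging assembly)

// Component A: owns the single allowed connection to the feed and fans the data
// out to a local queue for components B and C to consume.
public class FeedPublisher
{
    private const string QueuePath = @".\Private$\locationFeed"; // placeholder queue name
    private readonly MessageQueue _queue;

    public FeedPublisher()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);
        _queue = new MessageQueue(QueuePath);
    }

    // Call this from the long-lived feed read loop (one TCP connection, per the feed's limit).
    public void Publish(string update) => _queue.Send(update, "location-update");
}
```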
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then publish messages to registered clients (PubSub; this SO post seems relevant, maybe you can ask for a bit more info).
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This will handle managing subscriptions and component C would just need to publish data to an appropriate channel. There are other options all listed in the realtime web tech guide.
All these components come with a JavaScript library.
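To make components C and D concrete, here is a minimal sketch of the SignalR side (modern ASP.NET Core SignalR syntax; the hub, method, and channel names are made up):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Component D: a hub clients subscribe to. Name is illustrative.
public class LocationHub : Hub
{
    // Clients can opt in to a particular channel (e.g. one vehicle or region).
    public Task Subscribe(string channel) =>
        Groups.AddToGroupAsync(Context.ConnectionId, channel);
}

// Component C: reads the queue and pushes each update to subscribers.
public class FeedBroadcaster
{
    private readonly IHubContext<LocationHub> _hub;
    public FeedBroadcaster(IHubContext<LocationHub> hub) => _hub = hub;

    public Task BroadcastAsync(string channel, string payload) =>
        _hub.Clients.Group(channel).SendAsync("locationUpdate", payload);
}
```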
Summary
Components:
A - .NET service - that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - Could also be a simple .NET service that reads a queue.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
If you provide any more info I'll update my answer.
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/. It's not ASP.NET, but it can easily solve the problem of distributing real-time data to users. Since user connections, subscriptions, and broadcasting to channels are built into each, coding the rest is super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates then you would need a queue system to store info for the next request. That could be a simple array, or a more complicated queue system, depending on your requirements and number of users.
There may be solutions for .NET that I am not aware of that do the same thing, but those are the two I know of.

WCF Data Service or just WCF Service?

I am trying to decide which way to go. I have a solution that needs a web service and a client side, which is a Windows Phone 7 project. The WP7 project needs to communicate with the database through the WCF service.
I am a little bit confused as to which way I should go, and what the differences and advantages/disadvantages are between a regular WCF service and a WCF Data Service.
Which will be easier, considering my WP7 app needs to run queries on some tables in the database, nothing too fancy?
Any explanation will be welcomed.
Thanks
WCF Data Services are great if you need CRUD and flexible query capabilities - they allow you to expose underlying data (e.g. via Entity Framework) and control security with a minimum of development effort, as a RESTful API, especially to AJAX and SPA type client front ends. (Also, note that WebAPI now also offers similar capabilities).
WCF Services are more for Formal "Service" and "Operation" integration capabilities, where there is a lot more business focus, e.g. rules, processing, workflow, etc.
e.g. WCF would be useful to Submit a Claim for Processing (custom / rich graph of data input and output), Trigger a Nightly batch job (void response), etc.
Also, you can combine both technologies, e.g. for a CQRS type architecture, by using Data Services for the Query, and WCF for the Command type capabilities.
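To make the contrast concrete, a rough sketch of each style (NorthwindEntities, Claim, and ClaimResult are invented stand-ins for your own types): a WCF Data Service exposes an entity model directly as a queryable feed, while a plain WCF service exposes explicit operations.

```csharp
using System.Data.Services;   // WCF Data Services (OData)
using System.ServiceModel;    // "classic" WCF

// Data-centric: expose an entity model as a queryable REST feed with almost no code.
// NorthwindEntities stands in for your Entity Framework context.
public class NorthwindDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Clients query via URLs, e.g. /Products?$filter=UnitPrice gt 10
        config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
    }
}

// Operation-centric: a formal contract for business actions with rules and workflow.
[ServiceContract]
public interface IClaimService
{
    [OperationContract]
    ClaimResult SubmitClaim(Claim claim);   // rich input/output graph

    [OperationContract]
    void TriggerNightlyBatch();             // void "fire the job" operation
}
```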
