I have been using neo4j in the context of a Java Servlet web application (Maven web app archetype project). I have a separate MySQL server running which stores user information.
The application needs to provide a separate graph database for each user, preferably stored in a database rather than in a folder on the server.
I have used the following code to create a database:
GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase("Neo4j/db");
This stores the database in a local directory on the server host. It would be undesirable to 'pollute' the host by creating a local graph database for each user (e.g. it would consume host disk space for data storage, which is naturally a database responsibility). I would also prefer not to have to configure and run a separate Neo4j server.
Is there some way to keep a Neo4j database in memory, serialize it so that its data can be stored in a relational database, and deserialize it when the server needs to use the graph database?
There should be nothing preventing you from having a single Neo4j instance in which you store all users' graphs. If you think you can solve your problem by having multiple graphs, then it means that the users' graphs are disconnected. In that case, if you start a traversal in one graph, it will not reach the other graphs and thus will not expose other users' data. All you need is a link between the user in your MySQL database and the user node in Neo4j, which you can achieve by, for example, adding a property called mysqlId to the :User node in Neo4j.
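A minimal sketch of that linking idea, assuming the Neo4j 3.x embedded API (the `mysqlId` value and store path are hypothetical):

```java
import java.io.File;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class UserGraph {
    private static final Label USER = Label.label("User");

    public static void main(String[] args) {
        // One embedded database for ALL users, not one per user.
        GraphDatabaseService graphDb =
                new GraphDatabaseFactory().newEmbeddedDatabase(new File("Neo4j/db"));

        long mysqlId = 42L; // hypothetical primary key from the MySQL users table

        try (Transaction tx = graphDb.beginTx()) {
            // Link the MySQL row to its Neo4j node via a mysqlId property.
            Node user = graphDb.createNode(USER);
            user.setProperty("mysqlId", mysqlId);
            tx.success();
        }

        try (Transaction tx = graphDb.beginTx()) {
            // Later, look the node up again by the relational key and
            // traverse only this user's (disconnected) subgraph from it.
            Node user = graphDb.findNode(USER, "mysqlId", mysqlId);
            tx.success();
        }

        graphDb.shutdown();
    }
}
```

Because each user's subgraph is disconnected from the others, any traversal started from that node stays within that user's data.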
Related
After online registration, my mobile app must be usable without a network connection. It consists of a service and a UI, both accessing the same data.
I do not understand how to organize my application.
I know how to record data remotely (on my ROS) and locally, but what about the synchronization between the two .realm files?
Should my app manage the synchronization between these two databases, or is there another way?
When I create the user on my ROS, I get a .realm file on the device.
But it is unusable without calling User.LoginAsync (it throws an "incompatible histories" exception).
My organisation (a small non-profit) currently has an internal production .NET system with a SQL Server database. The customers (all local to our area) submit requests manually, which our office staff then input into the system.
We are now gearing up towards online public access, so that customers will be able to see the status of their existing requests online, and in future also be able to create new requests online. A new ASP.NET application will be developed for this.
We are trying to decide whether to host this application on-site on our servers (with direct access to the existing database) or use an external hosting service provider.
Hosting externally would mean keeping a copy of Requests database on the hosting provider's server. What would be the recommended way to then keep the requests data synced real-time between the hosted database and our existing production database?
Trying to sync back and forth between two in-use databases will be a constant headache. The question that I would have to ask you is if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on site but you do have some web infrastructure available to you, you may want to consider creating a web service which provides access to your database via a set of well-defined methods. Or, on the flip side, you could make the remotely hosted database alongside your website your production database and use a web service to access it from your office system.
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
If a web service is not practical (or you have concerns about availability), you may want to consider a queuing system for synchronization. Any change to the database (local or hosted) is also added to a message queue. Each side monitors the queue for changes that need to be made and then applies them. This accounts for one of the databases being unavailable at any given time.
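A minimal sketch of that queue-based synchronization, with plain in-memory maps standing in for the two databases (all names are hypothetical; a real system would use a durable message broker and persistent stores):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueSync {
    // A change record: which key changed and its new value.
    record Change(String key, String value) {}

    static final Map<String, String> localDb = new HashMap<>();
    static final Map<String, String> hostedDb = new HashMap<>();
    static final Queue<Change> toHosted = new ConcurrentLinkedQueue<>();

    // Every local write is also enqueued for the other side.
    static void writeLocal(String key, String value) {
        localDb.put(key, value);
        toHosted.add(new Change(key, value));
    }

    // The hosted side drains the queue whenever it is available;
    // if it is down, changes simply accumulate until it returns.
    static void drainToHosted() {
        Change c;
        while ((c = toHosted.poll()) != null) {
            hostedDb.put(c.key(), c.value());
        }
    }

    public static void main(String[] args) {
        writeLocal("request-1001", "status=SUBMITTED");
        writeLocal("request-1001", "status=APPROVED");
        drainToHosted(); // hosted side catches up in order
        System.out.println(hostedDb.get("request-1001")); // prints status=APPROVED
    }
}
```

A symmetric queue in the other direction covers changes made on the hosted side.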
That being said, I agree with @LeviBotelho: syncing two databases is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
Ultimately the data is the same, customer submitted data. Currently it is being entered by them through you, ultimately it will be entered directly by them, I see no need in having two different databases with the same data. The replication errors alone when they will pop-up (and they will), will be a headache for your team for nothing.
I have two databases in different environments, both containing the same data. Presently my application connects to one database. I need to disconnect from that database and connect to the other one. Is it possible to connect to another database? If it is possible, what do I have to modify in the application code?
Is it possible to connect to another database?
Yes.
If it is possible, what do I have to modify in the application code?
You need to change the data provider APIs, especially if you are working with a database-specific API, e.g. MS SQL Server (SqlClient) or ODP.NET (Oracle).
For further reading, see the Data Access Application Block and the .NET Data Access Architecture Guide.
I am planning to develop a fairly small SaaS service. Every business client will have an associated database (same schema among clients' databases, different data). In addition, they will have a unique domain pointing to the web app, and here I see these 2 options:
1. The domains will point to a single web app, which will change the connection string to the proper client's database depending on the domain. (That is, I will need to deploy only one web app.)
2. The domains will each point to their own web app, which is really the same web app replicated for every client but with the proper connection string to that client's database. (That is, I will need to deploy many web apps.)
This is for an ASP.NET 4.0 MVC 3.0 web app that will run on IIS 7.0. It will be fairly small, but it does need to be scalable. Should I go with 1 or 2?
This MSDN article is a great resource that goes into detail about the advantages of three patterns:
Separated DB. Each app instance has its own DB instance. Easier, but can be difficult to administer from a database infrastructure standpoint.
Separated schema. Each app instance shares a DB but is partitioned via schemas. Requires a little more coding work, and mitigates some of the challenges of a totally separate database, but still has difficulties if you need individual site backup/restore and things like that.
Shared schema. Your app is responsible for partitioning the data based on the app instance. This requires the most work, but is most flexible in terms of management of the data.
In terms of how your app handles it, the DB design will probably determine that. I have in the past done both separated DB and shared schema. In the separated DB approach, I usually separate the app instances as well. In the shared schema approach, it's the same app with logic to modify what data is available based on login and/or hostname.
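The "same app, hostname-based" variant can be sketched as a simple lookup from the request's Host header to the tenant's connection string (hostnames and JDBC URLs below are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class TenantResolver {
    // One deployed app; each tenant's domain maps to its own database.
    private static final Map<String, String> CONNECTION_STRINGS = new HashMap<>();
    static {
        CONNECTION_STRINGS.put("acme.example.com",
                "jdbc:sqlserver://dbhost;databaseName=acme");
        CONNECTION_STRINGS.put("globex.example.com",
                "jdbc:sqlserver://dbhost;databaseName=globex");
    }

    // Called once per request with the incoming Host header.
    static String connectionStringFor(String hostname) {
        String cs = CONNECTION_STRINGS.get(hostname);
        if (cs == null) {
            throw new IllegalArgumentException("Unknown tenant: " + hostname);
        }
        return cs;
    }

    public static void main(String[] args) {
        System.out.println(connectionStringFor("acme.example.com"));
    }
}
```

Adding a client then means adding a database and one map entry (in practice read from configuration), with no new deployment.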
I'm not sure this is the answer you're looking for, but there is a third option:
Using a multi-tenant database design: a single database which supports all clients. Your tables would contain composite primary keys.
Scale out when you need. If your service is small, I wouldn't see any benefit to multiple databases except for assured data security - meaning, you'll only bring back query results for the correct client. The costs will be much higher running multiple databases if you're planning on hosting with a cloud service.
If SalesForce can host their SaaS using a multitenant design, I would at least consider this as a viable option for a small service.
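A minimal sketch of that shared, multi-tenant design: every row carries a client id as part of a composite key, and every query filters on it, so one database serves all clients. Table and column names are hypothetical, and plain Java objects stand in for rows:

```java
import java.util.List;
import java.util.stream.Collectors;

public class MultiTenantTable {
    // Composite primary key: (clientId, requestId).
    record Request(int clientId, int requestId, String status) {}

    static final List<Request> REQUESTS = List.of(
            new Request(1, 100, "OPEN"),
            new Request(1, 101, "CLOSED"),
            new Request(2, 100, "OPEN")); // same requestId, different client

    // Equivalent of: SELECT * FROM Requests WHERE clientId = ?
    static List<Request> requestsFor(int clientId) {
        return REQUESTS.stream()
                .filter(r -> r.clientId() == clientId)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(requestsFor(1).size()); // prints 2
    }
}
```

The data-security point in the answer is exactly this filter: as long as every query includes the client id, a client can only ever see its own rows.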
I've always personally used dedicated servers and VPSs, so I have full control over my SQL Server (using 2008 R2). Now I'm working on an ASP.NET project that could be deployed in a shared hosting environment, which I have little experience with. My question is: are there limitations on the features of SQL Server I can use in a shared environment?
For example, if I design my database to use views, stored procedures, user defined functions and triggers, will my end user be able to use them in shared hosting? Do hosts typically provide access to these and are they difficult to use?
If so, I assume the host will give a user his own login, and he can use tools like Management Studio to operate within his own database as if it were his own server? If I provide scripts to install these, will they run under the user's credentials within his database?
All database objects are available: tables, views, stored procedures, functions, keys, certificates, and so on.
CLR integration and full-text search (FTS) are usually disabled.
Finally, you will not be able to access most server-level objects (logins, server triggers, backup devices, linked servers, etc.).
SQL Mail and Reporting Services are often turned off too.
It depends on how the other users are authenticated to the database, if it is one shared database for all users.
If every user on the host receives their own database:
If your scripts are written in a generic way (not bound to fixed usernames, for example), other users will be able to execute them on their own database and will get the same functionality. (Right-click the database and choose Tasks -> Back Up, for example.)
You could also provide plain backup dumps of a freshly set-up database, so that for other users the setup is only one click away. Also, from the beginning you should think about how to roll out changes that need to affect every user.
One possible approach is to always supply delta scripts, no matter whether you are patching errors or adding new features.
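The delta-script idea can be sketched as an ordered list of numbered scripts plus a per-database version marker, so rolling out a change to every user's database means "apply everything newer than what that database has". Script names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class DeltaScripts {
    // Every schema change ships as a new, numbered script; nothing is edited in place.
    static final List<String> SCRIPTS = List.of(
            "001_create_tables.sql",
            "002_fix_status_column.sql",
            "003_add_audit_table.sql");

    // Returns the scripts a database at 'currentVersion' still needs, in order.
    static List<String> pending(int currentVersion) {
        List<String> out = new ArrayList<>();
        for (int v = currentVersion; v < SCRIPTS.size(); v++) {
            out.add(SCRIPTS.get(v));
        }
        return out;
    }

    public static void main(String[] args) {
        // A user database patched up to script 001 still needs 002 and 003.
        System.out.println(pending(1)); // prints [002_fix_status_column.sql, 003_add_audit_table.sql]
    }
}
```

In practice the current version would be stored in a small metadata table inside each user's database, and the runner would execute each pending script and bump the version in one transaction.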