I'm working on a project that requires multiple clients to work simultaneously on synchronised data. There will be four clients working together, all connected to the same network. I've been looking at various database solutions for this. I'm not looking to go super deep into anything ridiculously advanced, so I've mostly considered Hive as a nice lightweight local database, or Firebase as a good cloud option. A key consideration is that the project will be deployed in an area with quite spotty internet, so ideally I want everything running over the local network.
Both do come with drawbacks. Firebase requires a steady network connection. Lose that and the whole system is useless.
I'd like to use Hive, but I need a way to keep the data in sync between the four computers in the system. Is it possible to put my Hive directory on a network drive and have all four computers access the boxes there?
Alternatively, are there any other options anyone can recommend? Can Firebase be run over a LAN? I know they have a local emulator suite, but is that suitable for a production app?
Thanks for any help
Your question isn't specific enough to answer directly.
Firebase Firestore provides a NoSQL database running on a Google Cloud server. Hive, on the other hand, is a purely local key-value database. According to Hive's documentation, Hive can sync data with a server, although to my knowledge this isn't commonly done.
You're asking for a way to sync data from multiple clients. As you've mentioned, the easiest solution would be to use Firebase Firestore.
Since these clients are on the same network, you can set up a local server. It might also be possible for the app itself to act as a server on the LAN. But keep in mind that every such solution will be more error-prone and harder to set up than Firestore.
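For what it's worth, if you do experiment with the Firestore emulator over the LAN, pointing a client at it is just an environment variable. A minimal Python sketch, assuming the emulator has been started so it listens on the LAN rather than only on localhost (the host address and project id below are placeholders):

    import os
    from google.cloud import firestore

    # Point the client library at a Firestore emulator on another LAN machine.
    # 192.168.1.10:8080 is a placeholder; the emulator must be configured to
    # listen on 0.0.0.0, not just localhost.
    os.environ["FIRESTORE_EMULATOR_HOST"] = "192.168.1.10:8080"

    db = firestore.Client(project="demo-local")  # any project id works against the emulator
    db.collection("jobs").document("job-1").set({"status": "queued"})

Note, though, that Google documents the emulator suite as a development tool, so treating it as a production database is at your own risk.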
I want to support the users of my Windows 10 desktop application with:
Local Data (not having to perform a fetch from the cloud every time they want new data)
Offline support
Replication with a cloud database
There can potentially be multiple users (in the order of 10-100 but not 1000) simultaneously editing the same database. I would run CouchDb as a service (i.e. in a separate process from my app).
To achieve the above, I am considering installing CouchDb on each client desktop PC (all replicating to a single main cloud CouchDb instance) alongside my application.
One of the reasons I am pursuing this approach is that it would allow my application code to be written mostly as if it only dealt with local data, while the sync/replication (which is probably quite complex) is taken care of by CouchDb.
I am using CouchDb as a replacement for a role I often see filled by SQLite, but I really want CouchDb's replication ability (which SQLite does not have).
Is the above a scenario in which I can expect CouchDb to perform well or is there something that I am not considering?
We have multiple clients doing this successfully. This is a recommended use-case for CouchDB. The limiting factor will be the cloud vm configuration, but 100s to 1000s of clients should be no problem on a decent vm setup.
CouchDB is sometimes called "A replication protocol with a database attached to it." This sounds appropriate for your use case. However, installing CouchDB (or any external database/service), particularly as part of a client application, isn't necessarily trivial. That doesn't mean it should be avoided--it's just complexity to consider when making a choice.
You might consider lighter-weight options for your client. PouchDB is the go-to solution for web apps, and often mobile apps, as it syncs with CouchDB, but has much lower resource requirements (at the cost of not being multi-tenant). For a desktop app, that may be appropriate.
Ultimately, whether CouchDB (or PouchDB, or PouchDB Server, etc) is appropriate for your use case depends on which trade-offs you're willing to make.
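To give a sense of how little the application itself has to do, here is a minimal sketch (Python, using CouchDB's plain HTTP API; URLs and credentials are placeholders) that registers continuous replication between the local CouchDB and a cloud instance via the _replicator database:

    import requests

    LOCAL = "http://admin:secret@localhost:5984"  # CouchDB running next to the app

    # One replication document per direction; CouchDB keeps these running and
    # resumes them automatically after restarts or network outages.
    for doc_id, src, tgt in [
        ("appdb-push", f"{LOCAL}/appdb", "https://user:pass@couch.example.com/appdb"),
        ("appdb-pull", "https://user:pass@couch.example.com/appdb", f"{LOCAL}/appdb"),
    ]:
        r = requests.put(f"{LOCAL}/_replicator/{doc_id}",
                         json={"source": src, "target": tgt, "continuous": True})
        if r.status_code not in (201, 202, 409):  # 409 = already registered
            r.raise_for_status()

After that, the application only ever reads and writes localhost:5984, and CouchDB handles the syncing.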
Long story short: I need networking between projects, because each project has to have separate billing.
I'd like to reach all the VMs in the different projects from a single point that I will use for provisioning systems (let's call it the coordinator node).
It looks like VPC network peering is a perfect solution for this. But unfortunately one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, so naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down?
EDIT:
It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
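As a rough sketch of what such a script could look like in Python with the Compute Engine API client (project, zone, and network names are placeholders; operation polling and error handling are omitted):

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    PROJECT, ZONE, VM = "my-project", "europe-west1-b", "my-vm"  # placeholders

    # Step 1: fetch the current instance configuration.
    inst = compute.instances().get(project=PROJECT, zone=ZONE, instance=VM).execute()

    # Step 2: make sure the persistent disks survive deletion, then delete the VM.
    for disk in inst["disks"]:
        compute.instances().setDiskAutoDelete(
            project=PROJECT, zone=ZONE, instance=VM,
            deviceName=disk["deviceName"], autoDelete=False,
        ).execute()
    compute.instances().delete(project=PROJECT, zone=ZONE, instance=VM).execute()
    # (in real code, poll the returned operation until the delete completes)

    # Step 3: re-create the VM on the new VPC network, reattaching the same disks.
    for field in ("id", "creationTimestamp", "status", "selfLink"):  # read-only fields
        inst.pop(field, None)
    inst["networkInterfaces"] = [{
        "network": f"projects/{PROJECT}/global/networks/new-vpc",                    # placeholder
        "subnetwork": f"projects/{PROJECT}/regions/europe-west1/subnetworks/new-subnet",  # placeholder
    }]
    compute.instances().insert(project=PROJECT, zone=ZONE, body=inst).execute()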
UPDATE:
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migration of a whole Instance Group, Load Balancer, etc. Check it out.
I'm developing an application that runs distributed, and I have a SQLite database that must be shared between the distributed servers.
If I'm on serverA and change a SQLite row, that change must appear on the other servers instantly; and if a server was offline and later comes back online, it must catch up so its data matches the other servers.
I'm trying to develop an HA service with small SQLite databases.
I'm thinking of something like MongoDB or RethinkDB, since their replication works well and the data stays available regardless of which servers happen to be online.
Is there a library or some other SQL-based approach to share data between servers?
I used the Raft consensus protocol to replicate my SQLite database. You can find the system here:
https://github.com/rqlite/rqlite
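Since rqlite exposes the database over a plain HTTP API, using it from an application is straightforward. A small illustrative sketch in Python (the node address and schema are invented for the example):

    import requests

    RQLITE = "http://localhost:4001"  # any node of the rqlite cluster

    # Writes go through the Raft log, so they are replicated to every node.
    requests.post(f"{RQLITE}/db/execute", json=[
        "CREATE TABLE IF NOT EXISTS foo (id INTEGER PRIMARY KEY, name TEXT)",
        "INSERT INTO foo (name) VALUES ('fiona')",
    ]).raise_for_status()

    # Reads can be served by any node, with tunable consistency.
    r = requests.get(f"{RQLITE}/db/query", params={"q": "SELECT * FROM foo"})
    print(r.json())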
Here are some options:
LiteReplica:
It supports master-slave replication for SQLite3 databases using a single master (writable node) and one or many replicas (read-only nodes).
If a device goes offline and later comes back online, the secondary/slave dbs are updated from the primary/master one incrementally.
LiteSync:
It implements multi-master replication, so we can write to the db on any node, even when the device is offline.
On both we open the database using a modified URI, like this:
"file:/path/to/app.db?replica=master&bind=tcp://0.0.0.0:4444"
AergoLite:
Blockchain based, it has the highest level of security. Stores immutable relational data, secured by a distributed consensus with low resource usage.
Disclosure: I am the author of these solutions
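For illustration, opening a database with the URI above from Python might look like the sketch below. This assumes your sqlite3 binding is built against the LiteSync/LiteReplica library; a stock SQLite build will simply ignore the extra parameters and open a plain local file:

    import sqlite3

    # Only replicates if the underlying SQLite library is the LiteSync /
    # LiteReplica build; with stock SQLite this is just a local database.
    uri = "file:/path/to/app.db?replica=master&bind=tcp://0.0.0.0:4444"
    conn = sqlite3.connect(uri, uri=True)
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()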
You can synchronize SQLite databases by embedding SymmetricDS in your application. It supports occasionally connected clients, so it will capture changes and sync them when a server comes online. It supports several different database platforms and can be used as a library or as a standalone service.
You can also use CopyCat, which supports SQLite as well as a few other database types.
Marmot looks good:
https://github.com/maxpert/marmot
From their docs:
What & Why?
Marmot is a distributed SQLite replicator with leaderless, and eventual consistency. It allows you to build a robust replication between your nodes by building on top of fault-tolerant NATS Jetstream. This means if you are running a read heavy website based on SQLite, you should be easily able to scale it out by adding more SQLite replicated nodes. SQLite is probably the most ubiquitous DB that exists almost everywhere, Marmot aims to make it even more ubiquitous for server side applications by building a replication layer on top.
I'm designing a new web project and, after studying some options with scalability in mind, I came up with two database solutions:
Local SQLite files carefully designed in a scalable fashion (one new database file for every X users, as writes will depend on user content, with no cross-user data dependence);
Remote MongoDB server (like Mongolab), as my host server doesn't serve MongoDB.
I don't trust the MySQL server at my current shared host, as it comes down very frequently (and I had problems with MySQL on another host, too). For the same reason I'm not going to use Postgres.
Pros of SQLite:
It's local, so it must be faster (I'll take care of using index and transactions properly);
I don't need to worry about TCP sniffing, as the Mongo wire protocol is not encrypted;
I don't need to worry about server outage, as SQLite is serverless.
Pros of MongoDB:
It's more easily scalable;
I don't need to worry on splitting databases, as scalability seems natural;
I don't need to worry about schema changes, as Mongo is schemaless and SQLite doesn't fully support ALTER TABLE (especially considering changing many production files, etc.).
I want help making a decision (and maybe considering a third option). Which one is better as write and read operations grow?
I'm going to use Ruby.
One major risk of the SQLite approach is that as your requirements to scale increase, you will not be able to (easily) deploy on multiple application servers. You may be able to partition your users into separate servers, but if that server were to go down, you would have some subset of users who could not access their data.
Using MongoDB (or any other centralized service) alleviates this problem, as your web servers are stateless -- they can be added or removed at any time to accommodate web load without having to worry about what data lives where.
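To make the trade-off concrete, here is roughly what the per-user file partitioning looks like (a Python sketch; the Ruby version would be analogous, and the shard count and paths are placeholders). Every one of these files is state that some specific server now has to own:

    import hashlib
    import os
    import sqlite3

    SHARDS = 16  # placeholder: one database file per group of users

    def db_for_user(user_id: str) -> sqlite3.Connection:
        # Stable hash so a given user always maps to the same file.
        shard = int(hashlib.sha1(user_id.encode()).hexdigest(), 16) % SHARDS
        os.makedirs("data", exist_ok=True)
        return sqlite3.connect(f"data/users_{shard:02d}.sqlite3")

    conn = db_for_user("alice")
    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
    conn.commit()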
I have run into a difficult situation.
I do not want to base my development on an emulator, so I want my phone (Android) to connect to my local PC, to make sure what I am developing comes out the way I want it to.
Issue #1 - I need to connect over my local network, not the internet; I can't have my PC internet-facing.
Issue #2 - No WiFi is allowed at my workplace, for security reasons.
Issue #3 - I can't publish this to an internet-facing site, since the procedure to get it onto one takes a few days per publish and would slow my development to a crawl.
What I'm looking for is a way to get my phone to connect to my local PC, maybe via USB or Bluetooth, while still having access to my local IIS. Does anyone have any idea how to accomplish this?
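For the USB route specifically, adb's reverse port forwarding (available on Android 5.0+) can expose a port on the PC to the phone over the cable. A minimal sketch, driving adb from Python (the port numbers are placeholders):

    import subprocess

    # Forward the phone's localhost:8080 to port 80 on the development PC over
    # USB. Requires USB debugging enabled on the device.
    subprocess.run(["adb", "reverse", "tcp:8080", "tcp:80"], check=True)
    # The device can now reach the local IIS site at http://localhost:8080/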
Any hack you use is going to take your environment away from the "reality" you seek.
I would really encourage you to ask your employer to give you the proper means to do your duties. For instance:
Getting you a cheap VPS on the Internet you can push updates to yourself
Setting up an encrypted AP separated from the rest of the company network.
Those two options are extremely cheap and would let you do what you need.
Can't you just use an Android emulator?
This is a generic problem that most mobile developers face, especially when there are heavy server interactions, and I am afraid that in this case there are no perfect answers. There are only workarounds.
I can see that you have tried most of the workarounds. I will suggest one more.
If your plan is to test client-side code / UI, then mock up your server on any cloud-based host (e.g. Google App Engine / Amazon EC2) and get your device to access the server code over the cloud.
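For instance, the mock can be just a few lines of Python deployed to whatever small cloud host is easiest (the endpoint and payload below are invented for illustration):

    # Throwaway mock of the real server API for client-side testing.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class MockApi(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps({"status": "ok", "path": self.path}).encode())

    HTTPServer(("0.0.0.0", 8080), MockApi).serve_forever()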
Let me know if this approach works.