Syncing tables one-way, but different tables in different directions - MariaDB

I have never worked with syncing between multiple servers before. I am working on a project that will require multiple MariaDB servers to sync specific tables. Each table will only ever be changed on one server, but different tables are changed on different servers.
Given tables A, B, C, and D:
A - settings table - only updated when users change settings
B, C, D - work tables - these will be updated every few seconds with work done by the individual servers.
Main Server - Changes A; needs B, C, D kept up-to-date in real-time.
Server 2 - Changes B; needs A kept up-to-date in real-time.
Server 3 - Changes C; needs A kept up-to-date in real-time.
Server 4 - Changes D; needs A kept up-to-date in real-time.
Looking at replication tutorials, I see lots of information about one-way and two-way, but I haven't been able to find anything that matches up with what I'm trying to do.
It is imperative that the data is synced in real-time as the information is time-sensitive. If the servers lose connection to each other, I still need the data to be synced as soon as the connection is restored.
The tables will all be updated by PHP code on their respective servers. The servers are all running Linux. Is this something that can be done with MariaDB by itself? Or would it be better to handle this in another way? I'd really like to avoid two-way replication of the entire database to all the servers as most of the data is unnecessary for any server but the main server and the server that created it.

Forget having multiple machines trying to sync with each other. Instead think about...
Plan A: Using a single server for all the work. Then there is no syncing.
Plan B: Make use of the network to allow everybody to write to a single Master, which continually replicates uni-directionally to any number of Slaves. The Slaves can be read from. (The read/write split this implies at the application level is sketched just after these plans.)
Plan C: Some form of clustering, such as Galera, so that multiple nodes are continually replicating to each other, and you can read/write to each node.
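For what Plan B looks like from the application's side, here is a minimal sketch of the read/write split: every server sends its writes to the one Master and reads the replicated tables from its local Slave. It is shown in Python with mysql.connector purely for illustration (the question's code is PHP, where the same split applies with mysqli or PDO); the hostnames, credentials, and table/column names are placeholders, not something from the question.

    import mysql.connector

    # All writes go to the single Master, even for "this server's own" table.
    master = mysql.connector.connect(
        host="master.example.com", user="app", password="secret", database="work")

    # All reads come from the local Slave, which the Master keeps up to date.
    replica = mysql.connector.connect(
        host="127.0.0.1", user="app", password="secret", database="work")

    def record_work(server_id, payload):
        # Write this server's work row (table B, C or D) to the Master.
        cur = master.cursor()
        cur.execute(
            "INSERT INTO work_b (server_id, payload) VALUES (%s, %s)",
            (server_id, payload),
        )
        master.commit()
        cur.close()

    def load_settings():
        # Read the replicated settings table (A) from the local Slave.
        cur = replica.cursor(dictionary=True)
        cur.execute("SELECT name, value FROM settings")
        rows = cur.fetchall()
        cur.close()
        return {row["name"]: row["value"] for row in rows}

When the link drops, the Slave simply falls behind and catches up automatically once the connection is restored, which covers the "sync as soon as the connection is restored" requirement.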
Beware in any sync or replication environment of the "critical read" scenario. This is when a user writes a comment on a blog post, then fails to find it on the next web page. The problem is that it has not been "sync'd" yet.
To answer your question: There is no "sync" mechanism other than the 3 things I described.
More
Connection issues are not a serious problem -- There are many cloud services out there that come very close to being available 100.0% of the time.
Latency is not really a serious problem -- you would be surprised at how far away web sites are. A thousand miles is "nothing" in the Internet today.
Minimize the number of round trips from the user to the web server if they are far apart. And minimize the number of database calls between the front end and the database. Note that it is beneficial to put the web server near the client in one case and near the database in the other case.
In a Master-Slave setup, a few gigabytes per day is a reasonable cap on traffic. How much traffic are you talking about?

Related

uWSGI and Flask: keep objects in memory between requests

My stack is currently uWSGI, Flask and nginx. I need to store data between requests: basically, I receive push notifications from another service about events, and I want to keep those events in the server's memory so the client can just poll the server every n milliseconds to receive the latest updates.
Normally this would not work, for many reasons. One is that a good production deployment runs several uWSGI processes (and maybe even several machines to scale out). But my case is very specific: I'm building a web app for a piece of hardware (think of your home router's configuration page as a good example). This means there is no need to scale. I also do not have a database (at least not a traditional one) and will normally have 1-2 clients at a time.
If I specify --processes 1 --threads 4 in uWSGI, is that enough to ensure the data is kept in memory as a single instance? Or do I also need to use --threads 1?
I'm also aware that some web servers clear memory from time to time and restart the hosted app. Do nginx/uWSGI do that, and where can I read about the rules?
I'd also welcome advice on how to design all of this if there are better ways to handle it. Please note that I'm not considering any persistent storage for this - it isn't worth the effort and may even be impossible due to hardware limitations.
Just to clarify: when I talk about one instance of the data, I mean my app.py executing exactly once and keeping the instances defined there alive for as long as the server lives.
If you don't need the data to persist past a server restart, why not just build a cache object into your application that can do push and pop operations?
A simple array of objects should suffice: one Flask route pushes new data onto the array and another pops the data off it.
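Along those lines, here is a minimal sketch of such an in-memory buffer, assuming --processes 1 so there is exactly one copy of the data. With --threads 4 the threads share it, and collections.deque is used because its append/popleft operations are thread-safe. The route names and payload shape are made up for illustration.

    from collections import deque
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # One buffer per process; deque's append/popleft are thread-safe, so the
    # uWSGI threads can share it. maxlen caps memory use on the device.
    events = deque(maxlen=1000)

    @app.route("/events", methods=["POST"])
    def push_event():
        # Called by the service that sends the push notifications (hypothetical route).
        events.append(request.get_json(force=True))
        return "", 204

    @app.route("/events", methods=["GET"])
    def pop_events():
        # Polled by the client every n milliseconds; drains whatever is pending.
        pending = []
        while True:
            try:
                pending.append(events.popleft())
            except IndexError:
                break
        return jsonify(pending)

    if __name__ == "__main__":
        # In production: uwsgi --http :8080 --processes 1 --threads 4 --module app:app
        app.run(port=8080)

Note that with more than one uWSGI process each worker would get its own private copy of the deque, which is exactly the situation the question wants to avoid, so --processes 1 is the important setting; --threads 1 is not required as long as the shared structure is thread-safe.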

How do I configure OpenSplice DDS for 100,000 nodes?

What is the right approach to use to configure OpenSplice DDS to support 100,000 or more nodes?
Can I use a hierarchical naming scheme for partition names, so "headquarters.city.location_guid_xxx" would prevent packets from leaving a location, and "company.city*" would allow samples to align across a city, and so on? Or would all the nodes know about all these partitions just in case they wanted to publish to them?
The durability services will choose a master when they come up. If one durability service is running on a Raspberry Pi in a remote location over a 3G link, what is to prevent it from trying to become the master for "headquarters" and crashing?
I am experimenting with durability settings such that a remote node uses location_guid_xxx, but the "headquarters" cloud server uses a "Headquarters" scope.
On the remote client I might do this:
<Merge scope="Headquarters" type="Ignore"/>
<Merge scope="location_guid_xxx" type="Merge"/>
so a location won't be master for the universe, but can a durability service within a location still be master for that location?
If I have 100,000 locations does this mean I have to have all of them listed in the "Merge scope" in the ospl.xml file located at headquarters? I would think this alone might limit the size of the network I can handle.
I am assuming that this product will handle this sort of Internet of Things scenario. Has anyone else tried it?
Considering the scale of your system, I think you should seriously consider using Vortex Cloud (see these slides: http://slidesha.re/1qMVPrq). Vortex Cloud will allow you to scale your system better as well as deal with NATs/firewalls. Besides that, you'll be able to use TCP/IP to communicate from your Raspberry Pi to the cloud instance, thus avoiding any problems related to NATs/firewalls.
Before getting to your durability question, there is something else I'd like to point out. If you try to build a flat system with 100K nodes, you'll generate quite a bit of discovery information. Besides generating traffic, this will also consume memory in your end applications. If you use Vortex Cloud instead, we play tricks to limit the discovery information. To give you an example, if you have a data-writer matching 100K data-readers, with Vortex Cloud the data-writer would only match one endpoint, thus reducing the discovery information by a factor of 100K!
Finally, concerning your durability question, you could configure some durability services as alignee only. In that case they will never become master.
HTH.
A+

PostgreSQL Database Slowdown

I installed PostgreSQL 9.1 on my production server and changed the configuration (postgresql.conf) to suit the system.
Everything worked fine for a week.
Then the PostgreSQL server suddenly became very slow.
Even a count(*) query on a table takes too much time.
Since then I have tried many things:
Monitored load on the system: normal, within the range of 0.5 to 1.5.
Monitored the number of users logged into the application: 200 to 400, normal.
Recreated indexes.
Killed the idle transactions.
Checked for locks (no deadlocks found).
Restarted the application server.
Restarted the database server.
After all of these activities the performance of the database server has not improved;
it still takes too much time even for normal queries.
I then dropped and recreated the database, and performance improved.
Everything works after recreating the database,
but after a few days the performance suddenly drops again.
This sounds like the active portion of the database is growing large enough that it doesn't fit in cache, causing actual disk access (which is orders of magnitude slower) for many of your reads. This is often caused by not vacuuming aggressively enough.
Other factors could be related to your configuration of PostgreSQL and the operating system. It's hard to give much advice without knowing:
exactly what version of PostgreSQL you're using (9.1 tells us the major release, but the minor release sometimes matters),
how you have PostgreSQL configured,
what OS you're running on,
what hardware you are using (cores, RAM, drive arrays, controllers), etc.
Part of that can be supplied by posting the results of running the query on this page:
http://wiki.postgresql.org/wiki/Server_Configuration
It might also help to select relname, relpages, and reltuples from pg_class for the tables involved and compare numbers when things are running well to when they are slow.
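As a rough sketch of that comparison, something like the following can be run periodically (from cron, say) and its output logged, so there is a "healthy" baseline to compare against once things slow down. It is shown in Python with psycopg2, though pasting the same SQL into psql works just as well; the connection string and table names are placeholders.

    import datetime
    import psycopg2

    TABLES = ("orders", "order_items")  # placeholder: the tables behind the slow queries

    conn = psycopg2.connect("dbname=mydb user=postgres host=localhost")  # placeholder DSN
    cur = conn.cursor()
    cur.execute(
        "SELECT relname, relpages, reltuples FROM pg_class WHERE relname IN %s",
        (TABLES,),
    )
    stamp = datetime.datetime.now().isoformat()
    for relname, relpages, reltuples in cur.fetchall():
        # A table whose relpages keeps growing while reltuples stays roughly flat
        # is bloated, which usually points at insufficient vacuuming.
        print(f"{stamp} {relname}: {relpages} pages, ~{int(reltuples)} rows")
    cur.close()
    conn.close()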
With the additional information, people should be able to make some pretty specific recommendations.

Web service that can withstand 1000 concurrent users with responses in 25 milliseconds

Our client's requirement is to develop a WCF service that can withstand 1-2k concurrent website users with a response time of around 25 milliseconds.
The service reads a couple of columns from a database and will be consumed by different vendors.
Can you suggest an architecture, or any extra steps I need to take while developing it? And how do we calculate the server hardware configuration needed to cope with this load?
Thanks in advance.
Hardly possible. You need a network connection to the service, service activation, business logic processing, a database connection (another network connection), and a database query. Because of the 2,000 concurrent users you need several application servers, so the network connection is also affected by a load balancer. I can't imagine network and hardware infrastructure able to complete such an operation within 25 ms for 2,000 concurrent users. Such a requirement is not realistic.
I guess that if you simply try to run the database query from your computer against the remote DB, you will see that even that simple task will not complete in 25 ms.
A few principles:
Test early, test often.
Successful systems get more traffic
Reliability is usually important
Caching is often a key to performance
To elaborate: build a simple system right now. Even if the business logic is very simplified, as long as it's a web service plus database access you can performance test it. Test with one user. What do you see? Where does the time go? As you develop the system and add real code, keep running that test. Reasons: a) right now you find out whether 25 ms is even achievable; b) you spot any code change that hurts performance immediately. Then test with lots of users: what degradation patterns do you hit? This starts to give you an indication of your platform's capabilities.
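As a rough sketch of that kind of test, assuming the service can be hit over plain HTTP at a placeholder URL (shown in Python with the requests library; JMeter or any other load tool does the same job): it measures per-request latency for one user and then for increasing numbers of concurrent users, which is enough to see whether 25 ms is in reach and where degradation starts.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://test-server/service/lookup?id=42"  # placeholder endpoint

    def timed_call(_):
        start = time.perf_counter()
        requests.get(URL, timeout=5)
        return (time.perf_counter() - start) * 1000.0  # milliseconds

    def run(concurrency, calls):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = sorted(pool.map(timed_call, range(calls)))
        p95 = latencies[int(len(latencies) * 0.95)]
        print(f"{concurrency:4d} concurrent: median {statistics.median(latencies):6.1f} ms, p95 {p95:6.1f} ms")

    # One user first, then increasing concurrency to find the degradation pattern.
    for users in (1, 10, 100, 1000):
        run(users, calls=max(users * 10, 100))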
I suspect that the outcome will be that a single machine won't cut it for you. And even if it does, if you're successful you will get more traffic. So plan to use more than one server.
In any case, for reliability reasons you need more than one server. And all sorts of interesting implementation details fall out when you can't assume a single server - e.g. you don't have Singletons any more ;-)
Most of the time we get good performance by using a cache. Will many users ask for the same data? Can you cache it? Are there updates to consider? In that case, do you need a distributed cache with clustered invalidation? That's the multi-server case emerging again.
Why do you need WCF?
Could you shift as much of that service as possible into static serving and cache lookups?
If I understand your question, thousands of users will be hitting your website and executing queries against your DB. You should definitely look into connection pooling for your WCF connections, but your best bet will be to avoid DB lookups altogether and have your website return data from cache hits.
I'd also look into why you couldn't just connect directly to the database for your lookups - do you actually need a WCF service in the way at all?
Look into Memcached.

ASP.NET performance measurements on a hosted platform

I have a large ASP.NET website on a hosted platform. It shares the machine with a lot of other applications. We do not have access to the machine itself (only an FTP account).
Our client is complaining that it is starting to perform rather badly, particularly around peak hours. I've run some remote measurements (using a JMeter-like tool) that tell me that, yes, it does indeed perform rather badly during peak hours. They don't tell me why, though. The client is resisting a move to a dedicated server without some hard facts.
As I see it, what I need is hard data about the machine itself. Setting up a local performance test environment would be extremely time-consuming, and I have no way to estimate the server's performance.
My question: is there a good way to collect (a lot of) performance measurements when I have limited access to the machine, and certainly no access to the performance monitor? Any code would have to run in the ASP.NET application itself, without screwing it up too much.
We had a similar problem with our asp.net application hosted on a shared server, which also started to perform badly during peak hours.
Although I don't know of an elegant solution to your question, this is what we did:
Talk to your host providers to see what additional information they can give you - it's in their best interest to keep their clients happy. Our host providers were able to give us some time with one of their network engineers who provided us with some decent CPU and memory utilization stats.
Take your own performance measurements by dumping information to a log file (using log4net) and/or the database - for example, user sessions, search times, page hits, and timing measurements around key functionality. From this information we were able to ascertain what our system's normal behaviour was for a set number of automated tests.
Set up a local server (not necessarily with the same specs as the hosted/production server) with your application loaded, and give it full load/performance/capacity testing (we used Red Gate's ANTS Profiler). The stats you gather from that will give you and your client a good indication of how the system should behave under certain loads in a known environment. Yes, this can be time-consuming, but it will give you a great performance-measuring tool so that you can catch and fix bottlenecks locally rather than in production.
Good luck.
