I was trying to determine the hardware and software requirements for developing an app, and I was wondering whether you can run a distributed transaction (through MSDTC) against two different SQL Server instances while keeping the (web) app in medium trust.
Can I?
It turns out you can. Just create an empty ASP.NET application, set the trust level to medium, and use a TransactionScope that spans two simultaneous connections to two different databases, each running an INSERT command against some table, and watch it work (after enabling the MSDTC service first).
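A minimal sketch of that test (server names, connection strings, and tables are placeholders, not the exact ones from my experiment):

    // Minimal sketch of the medium-trust MSDTC test described above.
    // Server names, databases, and tables are illustrative placeholders.
    using System.Data.SqlClient;
    using System.Transactions;

    class DtcSmokeTest
    {
        static void Main()
        {
            using (var scope = new TransactionScope())
            {
                using (var conn1 = new SqlConnection("Server=SERVER1;Database=DbOne;Integrated Security=true"))
                using (var conn2 = new SqlConnection("Server=SERVER2;Database=DbTwo;Integrated Security=true"))
                {
                    conn1.Open(); // first connection starts as a lightweight transaction
                    conn2.Open(); // a second instance promotes it to a full MSDTC transaction

                    new SqlCommand("INSERT INTO TableA (Name) VALUES ('test')", conn1).ExecuteNonQuery();
                    new SqlCommand("INSERT INTO TableB (Name) VALUES ('test')", conn2).ExecuteNonQuery();
                }

                scope.Complete(); // both inserts commit atomically; omitting this rolls back
            }
        }
    }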
I just saw the Realm Mobile Platform. I'm curious what kind of redundancy is available outside of users having a full backup locally.
Can you have multiple Realm Object Servers?
It is possible to run multiple Realm Object Servers in various configurations for greater performance or reliability. This advanced functionality is part of the Enterprise Edition.
For the Developer Edition, you can run multiple Realm Object Servers, but they all act independently of each other. For example, you could split your user data across multiple servers, with certain user groups using specific machines.
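As an illustration of that split (this routing helper is hypothetical, not part of the Realm SDK; the hostnames are placeholders):

    // Hypothetical routing helper (not a Realm API): maps a user group to one
    // of several independent Realm Object Servers. Hostnames are placeholders.
    using System;
    using System.Collections.Generic;

    static class RealmServerRouter
    {
        static readonly Dictionary<string, Uri> ServersByGroup = new Dictionary<string, Uri>
        {
            ["emea"] = new Uri("realm://ros-emea.example.com:9080"),
            ["apac"] = new Uri("realm://ros-apac.example.com:9080"),
            ["amer"] = new Uri("realm://ros-amer.example.com:9080"),
        };

        // The returned URL would be handed to the Realm SDK's sync configuration.
        public static Uri ServerFor(string userGroup)
        {
            Uri uri;
            if (!ServersByGroup.TryGetValue(userGroup, out uri))
                throw new ArgumentException("Unknown user group: " + userGroup);
            return uri;
        }
    }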
I want to monitor Oracle EBS (11i) and Oracle DB (11g) simultaneously during a load test through Dynatrace.
Oracle EBS architecture
I know we can monitor an Oracle DB using Dynatrace, but I could not find how to identify which areas or modules (e.g. Order Management, Sales, Finance, Shipping) a particular workflow/user request touches during the load test.
I found that using DC RUM we can capture metrics for the Forms server. Apart from this, I also want to monitor the Concurrent Processing server. Is this possible using Dynatrace?
With Dynatrace DC RUM you may choose one of two approaches to monitoring EBS performance.
First, DC RUM uses agentless technology to capture network traffic between all servers and clients, and as a result provides you with performance, usage, and availability details. Additionally, for the most popular network protocols, including those used when communicating with the Oracle database, Oracle Forms servers, and web servers, you can apply analyzers that provide deeper performance insight. For example, with the Oracle Forms analyzer applied to EBS monitoring, DC RUM decodes all interactions between the user and Oracle Forms, reporting user names, form names, and control names, and identifying the EBS module name. For Oracle database traffic it reports performance down to individual query executions, including the SQL, the database schema, and the user name. So, to answer your question: yes, it allows monitoring Oracle EBS and the Oracle DB simultaneously.
Second, Enterprise Synthetic allows you to create synthetic tests for key transactions in EBS. This way you can track, for example, the performance of the whole create-sales-order transaction.
DC RUM is intended for constant, systematic application performance monitoring. However, if you already have it in your company, it's also a perfect tool for evaluating the results of load tests performed on EBS.
I'm developing a distributed application, and I have a SQLite database that must be shared between the distributed servers.
If I'm on serverA and change a SQLite row, that change must reach the other servers instantly; but if a server goes offline and later comes back online, it must catch up so its data matches the other servers.
I'm trying to develop an HA service with small SQLite databases.
I'm considering something like MongoDB or RethinkDB, because their replication works well and the data stays available regardless of which servers happen to be online.
Is there a library or other SQL methodology for sharing data between servers?
I used the Raft consensus protocol to replicate my SQLite database. You can find the system here:
https://github.com/rqlite/rqlite
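As an example, a write goes through rqlite's HTTP API and is replicated to the other nodes via the Raft log; the /db/execute endpoint and default port 4001 below are taken from the project's README, and the table is a placeholder:

    // Sketch of a write through rqlite's HTTP API; the /db/execute endpoint and
    // default port 4001 are from the project's README, the table is a placeholder.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class RqliteWrite
    {
        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // rqlite accepts a JSON array of SQL statements and replicates
                // them to the follower nodes through the Raft log.
                var body = "[\"INSERT INTO foo(name) VALUES('fiona')\"]";

                HttpResponseMessage resp = await http.PostAsync(
                    "http://localhost:4001/db/execute",
                    new StringContent(body, Encoding.UTF8, "application/json"));

                Console.WriteLine(await resp.Content.ReadAsStringAsync());
            }
        }
    }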
Here are some options:
LiteReplica:
It supports master-slave replication for SQLite3 databases using a single master (writable node) and one or many replicas (read-only nodes).
If a device goes offline and later comes back online, the secondary/slave DBs are updated incrementally from the primary/master one.
LiteSync:
It implements multi-master replication, so we can write to the DB on any node, even when the device is offline.
On both we open the database using a modified URI, like this:
"file:/path/to/app.db?replica=master&bind=tcp://0.0.0.0:4444"
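A hypothetical sketch of opening the primary node with that URI, assuming the process loads the LiteSync/LiteReplica build of the SQLite library (the stock library ignores these parameters); the library name, path, and port are placeholders:

    // Hypothetical sketch: P/Invoke against the LiteSync/LiteReplica build of
    // the SQLite library (a drop-in sqlite3 replacement); the stock library
    // ignores these URI parameters. Library name, path, and port are placeholders.
    using System;
    using System.Runtime.InteropServices;

    class ReplicatedOpen
    {
        const int SQLITE_OPEN_READWRITE = 0x02;
        const int SQLITE_OPEN_CREATE    = 0x04;
        const int SQLITE_OPEN_URI       = 0x40;

        [DllImport("litesync", EntryPoint = "sqlite3_open_v2")]
        static extern int sqlite3_open_v2(string filename, out IntPtr db, int flags, string vfs);

        static void Main()
        {
            IntPtr db;
            // The primary (writable) node binds a TCP port that replicas connect to.
            int rc = sqlite3_open_v2(
                "file:/path/to/app.db?replica=master&bind=tcp://0.0.0.0:4444",
                out db,
                SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_URI,
                null);

            Console.WriteLine(rc == 0 ? "database opened" : "sqlite error " + rc);
        }
    }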
AergoLite:
Blockchain-based, it has the highest level of security. It stores immutable relational data, secured by distributed consensus, with low resource usage.
Disclosure: I am the author of these solutions.
You can synchronize SQLite databases by embedding SymmetricDS in your application. It supports occasionally connected clients, so it will capture changes and sync them when a server comes online. It supports several different database platforms and can be used as a library or as a standalone service.
You can also use CopyCat, which supports SQLite as well as a few other database types.
Marmot looks good:
https://github.com/maxpert/marmot
From their docs:
What & Why?
Marmot is a distributed SQLite replicator with leaderless, eventual consistency. It allows you to build robust replication between your nodes by building on top of the fault-tolerant NATS JetStream. This means if you are running a read-heavy website based on SQLite, you should easily be able to scale it out by adding more replicated SQLite nodes. SQLite is probably the most ubiquitous DB that exists almost everywhere; Marmot aims to make it even more ubiquitous for server-side applications by building a replication layer on top.
I saw this question:
How many users on one azure instance before I hit performance issues?
That question discusses how many users an Azure instance could support for a webpage. I'm wondering whether this would be any different for a webpage vs. a web server that client applications (such as mobile phones) call into to get data. For example, if you have a single Azure web role running that exposes a REST endpoint, how many devices could call into the service before it starts to buckle under the pressure?
How long is a string? :-)
If your app computes one million digits of pi on each web request, it will probably handle fewer concurrent web requests than an app that replies to each web request with "hello world."
(This is another, blunter, version of David's answer.)
A Web Role instance is merely a Windows Server 2008 R2 (or 2008 SP2) virtual machine of a given size (1-8 cores, 1.75-14 GB usable RAM, 100-800 Mbps network). You can run web sites, other web servers (Tomcat, for example), WCF services (through IIS or standalone ServiceHosts), etc.
Scaling is going to depend heavily on the app itself: Is it CPU-constrained? Network-constrained? Do you have queue-based workload and your queue backlog is growing?
Sometimes it's critical to scale up to larger VMs, just to handle one of the constraints mentioned. It's always wise to pick the smallest VM size to run in a baseline mode (e.g. 1 or 2 users), then scale out to more instances as needed.
It's important to identify the key performance indicators (KPIs) for your app. You can then automate your scaling with something like the Autoscaling Application Block (WASABi).
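As a rough illustration of such a KPI (this is not WASABi itself, which drives scaling from rules in its configuration), a probe that reads the backlog of a storage queue might look like this; the account, queue name, and threshold are placeholders:

    // Illustrative KPI probe (not WASABi itself): reads the backlog of an Azure
    // storage queue, a typical scale-out signal. The account credentials, queue
    // name, and threshold are placeholders.
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    class QueueBacklogKpi
    {
        static bool ShouldScaleOut()
        {
            var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
            CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("work-items");

            queue.FetchAttributes();                  // refresh the cached metadata
            int backlog = queue.ApproximateMessageCount ?? 0;

            return backlog > 500;                     // arbitrary example threshold
        }
    }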
Here's a reference page with all VM sizes, with details about CPU, local disk, network bandwidth, and RAM.
I would like to ask what the best setup would be for the following application:
ASP.NET 3.5 web site - used as the presentation layer, with a lot of AJAX and JS. It will not hit the server much.
ASP.NET WCF - the service providing all data to the application. It's responsible for validation, data modeling/preparation, and communication with the DB server.
Database - SQL Server 2005 Standard; some logic is coded on the server side as stored procedures, and some of it can be fairly time-consuming. In my opinion this is the most resource-consuming part of the app.
The website can get up to 1000 users per minute. We can have up to 4 servers in the following configuration: two quad-core Intel Xeons (8 x 2.00+ GHz cores), 16 GB RAM, SSD or RAID drives.
What is the best way to place parts of the application on the physical servers? Will they handle this kind of load?
The least scalable part of any application is the database server: you can add more web and application servers, but you can't replicate the DB with the same ease, so in the long run you will benefit if the DB contains no logic, especially no long-running logic.
In a lot of applications the limiting factor is not CPU but memory. Think about user sessions: if you store 1 MB of data per user, your machines will support roughly 64,000 simultaneous user sessions (4 servers x 16 GB RAM at 1 MB per session), which may or may not be sufficient. Both problems can be mitigated by application-level caching, but that causes its own set of problems, because now you are faced with stale data.
To scale session-based sites you will need a smart load-balancer solution that supports sticky sessions; for your loads you will most likely need a hardware load balancer.
In the application you describe, I suspect that thread management is going to be a big issue. Throwing hardware at the problem may not be the best approach.
In terms of partitioning, it depends on whether you can leverage things like caching and cache notifications. If every call to the app has to hit the DB and run a lengthy stored procedure, then you may want more DB machines and fewer front-end web servers.
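To make the caching idea concrete, here's a hedged sketch using SqlCacheDependency over a SQL Server query notification, which works against SQL Server 2005 with Service Broker enabled; the connection string, query, and cache key are placeholders:

    // Hedged sketch of caching with SQL Server query notifications via
    // SqlCacheDependency (works against SQL Server 2005 with Service Broker
    // enabled). Connection string, query, and cache key are placeholders.
    using System.Data;
    using System.Data.SqlClient;
    using System.Web;
    using System.Web.Caching;

    static class ProductCache
    {
        const string ConnString = "Server=dbserver;Database=Shop;Integrated Security=true";

        // Call SqlDependency.Start(ConnString) once at application startup.
        public static DataTable GetProducts()
        {
            var cached = (DataTable)HttpRuntime.Cache["products"];
            if (cached != null) return cached;

            using (var conn = new SqlConnection(ConnString))
            using (var cmd = new SqlCommand("SELECT Id, Name FROM dbo.Products", conn))
            {
                conn.Open();
                var dependency = new SqlCacheDependency(cmd); // register before executing

                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table);

                // The entry is evicted automatically when the query's results change.
                HttpRuntime.Cache.Insert("products", table, dependency);
                return table;
            }
        }
    }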
This is a big subject. In an attempt to provide a reasonably comprehensive answer to exactly this kind of question, I ended up writing a book about it: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.