Does Dynatrace monitor Oracle EBS (11i) completely?

I want to monitor Oracle EBS (11i) and Oracle DB (11g) simultaneously during a load test through Dynatrace.
[Oracle EBS architecture diagram]
I know we can monitor the Oracle DB using Dynatrace, but I could not find out how to identify which areas or modules (e.g. Order Management, Sales, Finance, Shipping) a particular workflow/user request touches during the load test.
I found that using DC RUM we can capture metrics for the Forms server. Apart from this, I also want to monitor the Concurrent Processing server. Is this possible using Dynatrace?

With Dynatrace DC RUM you can choose one of two approaches to monitoring EBS performance.
First, DC RUM uses agentless technology: it captures network traffic between all servers and clients and, as a result, gives you performance, usage and availability details. Additionally, for the most popular network protocols, including the ones used to communicate with the Oracle database, Oracle Forms servers and web servers, it is possible to use analyzers that provide deeper performance insight. For example, with the Oracle Forms analyzer applied to EBS monitoring, DC RUM decodes all interactions between the user and Oracle Forms, reporting user names, form names and control names, and identifying the EBS module name. For Oracle Database traffic it reports performance down to a single query execution, including the SQL, database schema and user name. So, to answer your question, it allows monitoring of Oracle EBS and the Oracle DB simultaneously.
Second, Enterprise Synthetic allows you to create synthetic tests for key transactions in EBS. This way you can, for example, track the performance of the whole "create sales order" transaction.
DC RUM is intended for constant, systematic application performance monitoring. However, if you already have it in your company, it is also a perfect tool for evaluating the results of load tests performed on EBS.

Related

How does the Realm Mobile Platform scale?

You could say I am a fan of the Realm Mobile Platform. I'm using it and it seems to be working well.
However, I am confused about how to operate it when going to production. It seems to be deployed to only one server, and even the Professional and Enterprise editions run on my single server.
Assuming Realm has thought of this (as the Enterprise edition supports 'enterprise scaling'), how does this work if all clients point to my own server URL?
Another question is how to monitor the load on that server.
Thanks!
The Professional Edition and the Enterprise Edition emit statsd compatible metrics which allow you to track the usage and load on each node in a Realm Object Server cluster. These metrics are also used internally inside the cluster in order to display statistics about the health of the cluster.
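For illustration only (the metric names below are placeholders, not Realm's documented ones), here is a minimal Java sketch of what a statsd-style collector does with such metrics: it listens on UDP port 8125 and parses lines of the form name:value|type.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.charset.StandardCharsets;

    // Minimal sketch of a statsd-format listener. Metric names such as
    // "realm.connections" are placeholders; a real deployment would point the
    // server's statsd output at a collector like statsd or Telegraf instead.
    public class StatsdListener {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket(8125)) {
                byte[] buf = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    String line = new String(packet.getData(), 0, packet.getLength(),
                            StandardCharsets.UTF_8);
                    // e.g. "realm.connections:42|g" is a gauge named realm.connections with value 42
                    System.out.println("metric: " + line.trim());
                }
            }
        }
    }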
We are obviously still adding metrics as we understand more about our customers' use cases, and fine-tuning the ones that we have.
With regards to the way the clustering works, we are currently implementing this according to an iterative process, where we add more and more features, and more and more resilience to the system with every passing day.
Basically, we have a logical load balancer process, which receives the incoming client connections, and then dispatches that to a node inside the cluster. This logical load balancer can be HA'd and LB'd itself as well, just like you would any regular WS connection handler. Handling many connections these days is easy. It's handling the quadratic merge algorithms that is expensive on the Realm Object Server, which is why the clustering is required for deployments at scale.

NServiceBus messaging across private networks

I was assigned with the re-architecture of a legacy (medical) product which is controlling several external devices. In the current architecture, we have several such stations in each customer's network, where each station is processing its own data, and they all share some of that data via a central server (that talks to the DB and BLOB storage).
I'm planning the new architecture such that it will allow more scenarios, such as monitoring the stations through a web interface, and allowing data processing to be scalable by adding additional servers.
This led me to choose NServiceBus as the messaging and communication infrastructure, and I have a pretty clear view of the new architecture.
However, another factor was recently added to the equation by my manager. He requires that the machine that communicates with the devices (hardware), will not be under the IT policies of the customer. The reason behind this, as I understand, is that we don't want the customer's IT to control OS updates, security, permissions and other settings, because we want full control over that machine in order to work properly with our hardware.
My manager thus added a requirement that this machine will be disconnected from the customer's LAN.
If I still want to deploy NServiceBus on that separated machine (because I want to pub/sub async messages to other machines, some on the customer's LAN and some not), will it require some special deployment? Will it require an NServiceBus Gateway?
EDIT: I removed the other (1st) question, as it wasn't relevant to the scope of StackOverflow.
Regarding question 2: yes, it would require the use of a "Gateway"; however, the current NServiceBus Gateway implementation does not support pub/sub, so you would have to look at alternatives.

Method to replicate sqlite database across multiple servers

I'm developing an application that runs distributed, and I have an SQLite database that must be shared between the distributed servers.
If I'm on server A and change an SQLite row, that change must appear on the other servers instantly; and if a server was offline and later comes back online, it must update all of its data to match the other servers.
I'm trying to develop an HA service with small SQLite databases.
I'm thinking of something like MongoDB or RethinkDB, because their replication works well and the data would stay available regardless of which servers are online.
Is there a library or some other SQL approach to share data between servers?
I used the Raft consensus protocol to replicate my SQLite database. You can find the system here:
https://github.com/rqlite/rqlite
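To show what using it looks like in practice, here is a rough Java sketch against rqlite's HTTP API (the /db/execute and /db/query endpoints on port 4001 in a default setup; check the rqlite docs for your version, and note the table and data here are made up):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    // Rough sketch: write through the leader (replicated via Raft), then read back.
    public class RqliteExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            HttpRequest write = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:4001/db/execute"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "[\"CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)\"," +
                            " \"INSERT INTO notes(body) VALUES('hello from server A')\"]"))
                    .build();
            System.out.println(client.send(write, HttpResponse.BodyHandlers.ofString()).body());

            // Reads can be served by any node; the consistency level is configurable.
            HttpRequest read = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:4001/db/query?q=" +
                            URLEncoder.encode("SELECT * FROM notes", StandardCharsets.UTF_8)))
                    .build();
            System.out.println(client.send(read, HttpResponse.BodyHandlers.ofString()).body());
        }
    }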
Here are some options:
LiteReplica:
It supports master-slave replication for SQLite3 databases using a single master (writable node) and one or many replicas (read-only nodes).
If a device goes offline and later comes back online, the secondary/slave dbs are updated from the primary/master one incrementally.
LiteSync:
It implements multi-master replication, so we can write to the db on any node, even when the device is offline.
On both we open the database using a modified URI, like this:
"file:/path/to/app.db?replica=master&bind=tcp://0.0.0.0:4444"
AergoLite:
Blockchain based, it has the highest level of security. Stores immutable relational data, secured by a distributed consensus with low resource usage.
Disclosure: I am the author of these solutions
You can synchronize SQLite databases by embedding SymmetricDS in your application. It supports occasionally connected clients, so it will capture changes and sync them when a server comes online. It supports several different database platforms and can be used as a library or as a standalone service.
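As a rough sketch of the embedded approach (the properties file name and port are placeholders, and the exact SymmetricDS API may differ between versions, so treat this as an outline rather than working configuration):

    import org.jumpmind.symmetric.SymmetricWebServer;

    // Outline only: start an embedded SymmetricDS node inside the application.
    // "my-node.properties" is a placeholder for the engine properties file
    // (engine name, registration URL, JDBC settings, etc.), and the class/method
    // names should be checked against the SymmetricDS version you use.
    public class EmbeddedSyncNode {
        public static void main(String[] args) throws Exception {
            SymmetricWebServer node = new SymmetricWebServer("classpath://my-node.properties");
            node.start(31415); // starts the embedded sync web server for this node
            // the application keeps running; SymmetricDS captures and syncs changes in the background
        }
    }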
You can also use CopyCat, which supports SQLite as well as a few other database types.
Marmot looks good:
https://github.com/maxpert/marmot
From their docs:
What & Why?
Marmot is a distributed SQLite replicator with leaderless, and eventual consistency. It allows you to build a robust replication between your nodes by building on top of fault-tolerant NATS Jetstream. This means if you are running a read heavy website based on SQLite, you should be easily able to scale it out by adding more SQLite replicated nodes. SQLite is probably the most ubiquitous DB that exists almost everywhere, Marmot aims to make it even more ubiquitous for server side applications by building a replication layer on top.

What is the difference between two-tier and three-tier architecture?

I'm using JDBC in my application, which contains the business logic (client). JDBC connects to the database, which is on another machine (server). In this case, my application connects directly to the database and stores and retrieves data. This is two-tier architecture, right?
In another application, for example with servlet programming, I simply have a browser on my client machine, which is the presentation layer (client tier). Let me consider my business logic as the application layer (second tier) and the database as the data layer (third tier). I'm still using JDBC to connect my application (business logic) with the database. The second and third tiers now reside on the server.
Going by the above example, in three-tier architecture only a browser is added and my business logic is kept on the server. I don't see any performance difference beyond that. If I'm wrong, please correct me and explain the exact architecture of 2-tier and 3-tier with other examples. Thanks in advance.
What you say is right.
Your first example is two-tier.
The second example is three-tier.
A three-tier architecture can represent an important performance gain if the link between browser and server is slower than the link between server and DBMS. This is because usually the business logic needs to make several calls to the DBMS and/or present to the user only a small part of the information returned by the DBMS. Having the business logic in the client while having a slow connection to the DBMS would represent an important performance penalty.
In a typical web scenario, the connection between client and server is usually several times slower than the connection between server and DBMS, and there is your performance gain.
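To make the difference concrete, here is a hedged Java sketch (the connection URL, credentials, table and column are all made up): in two-tier this method runs on the client machine, so every JDBC round trip crosses the slow client-to-DBMS link; in three-tier the same method sits in a servlet or service next to the DBMS, and the browser only receives the small final answer.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Hypothetical lookup used to contrast the two deployments.
    // Two-tier: this runs on the client, so the connection, the query and the
    // result set all travel over the client<->DBMS link.
    // Three-tier: the same code moves server-side (e.g. into a servlet), and the
    // browser gets back only the short status string.
    public class OrderStatusLookup {
        static String lookupStatus(String customerId) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521/orcl", "app_user", "app_pwd");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT status FROM orders WHERE customer_id = ?")) {
                ps.setString(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("status") : "no orders";
                }
            }
        }
    }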

What tools does your company use to manage application performance of asp.net applications?

I am not talking about application profilers or debuggers, but more specifically about managing applications in a production environment. So essentially: monitor, identify bottlenecks, deploy fixes.
For monitoring that the application is up and running, we use Nagios.
We also use good old performance monitor for monitoring database connections, memory consumption and CPU usage.
We use IPMonitor to verify uptime, and it has a lot of options for pinging the site for keyword validation, HTTP response validation, and response time. You can also use SNMP to figure out responsiveness of the processor and RAM, and remaining size on hard disks, among many other options. It supports multiple servers and types of servers, not just website or database.
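As a rough illustration of what such a keyword and response-time check does under the hood (the URL and keyword below are placeholders, not IPMonitor configuration), in Java:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    // Placeholder URL and keyword; this only illustrates the kind of check an
    // uptime monitor performs: fetch the page, verify the HTTP status, look for
    // an expected keyword in the body, and record the response time.
    public class UptimeCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/health"))
                    .timeout(Duration.ofSeconds(10))
                    .build();

            long start = System.nanoTime();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            boolean keywordFound = response.body().contains("OK");
            System.out.printf("status=%d keywordFound=%b responseTimeMs=%d%n",
                    response.statusCode(), keywordFound, elapsedMs);
        }
    }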
Additionally, we test basic uptime and response speed with AlertSite.
A 3rd party, Keynote, tests our sites to verify that they are navigable like a human would browse. They have scripts to mimic clicks and interactions.
We use Spotlight for SQL Server management, and also good old perfmon for granular problem fixing.
We recently purchased WildMetrix to monitor and troubleshoot performance issues for our ASP.NET applications. It's nice because you can easily aggregate IIS, ASP.NET, and SQL Server information into a single graph or dashboard that allows you to pinpoint possible trouble spots. We currently use it as our primary performance reporting and tracking tool, along with ELMAH for exception tracking.
