What is MS App Insights throughput? [closed] - azure-application-insights

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 3 years ago.
I am trying to figure out whether App Insights is a good solution for tracing in my app, but I cannot find any capacity numbers. Is there anything published? How many logs per second can it take per application or per box?

Here is a short summary of this documentation page as of 4/17/2017:
Total data per day: 500 GB
Free data per month: 1 GB
Throttling: 32 K events/second
Data retention: 90 days
Maximum event size: 64 K
However, the throttling limit currently seems to be temporarily lowered to 12 K events/second, as per this AI service blog entry.
P.S. The Application Insights Pricing page also has a similar table with the limits. It may be a better reference for future changes, if any.

According to this page as of 2018-10-01:
Total data per day: 100 GB
Throttling: 32 K events/second
Data retention: 90 days
Availability multi-step test detailed results retention: 90 days
Maximum event size: 64 K
Property and metric name length: 150
Property value string length: 8,192
Trace and exception message length: 10 K
Availability tests count per app: 100
Profiler data retention: 5 days
Profiler data sent per day: 10 GB
For a more detailed breakdown, also have a look at the Azure Monitor Pricing page.
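To make the throttling figure more tangible, here is a minimal Python sketch of a client-side token-bucket limiter. It is only an illustration of the idea: send_event is a hypothetical placeholder for whatever SDK call actually emits telemetry, and the 32 K events/second figure is simply taken from the table above.

    import time

    class TokenBucket:
        """Tiny token-bucket limiter to keep client-side event volume
        below a target rate (e.g. the documented ingestion throttle)."""

        def __init__(self, rate_per_second, burst):
            self.rate = rate_per_second        # tokens replenished per second
            self.capacity = burst              # maximum burst size
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the bucket capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    def send_event(event):
        """Hypothetical placeholder for the real telemetry call."""
        pass

    limiter = TokenBucket(rate_per_second=32_000, burst=1_000)

    def track(event):
        if limiter.allow():
            send_event(event)
        # else: drop or queue the event instead of tripping server-side throttling

In practice the official SDKs already offer sampling and batching, which is usually the better way to keep volume down; the sketch is only meant to put the 32 K events/second number in perspective.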

Related

Validation Time Analysis [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 days ago.
The message validation server is able to validate two types of messages: type A and type B. It is commonly believed that validating a type B message takes somewhat longer than validating a type A message. It is also commonly believed that the actual validation time depends largely on the type of message (A or B) and not, to any essential degree, on the message content. However, nobody has checked these claims in practice.
We have reliable statistics for four different sessions:
1st session: 17,648 type A messages and 11,414 type B messages were validated; the session lasted 6 minutes and 7.90 seconds.
2nd session: 6,836 type A messages and 12,618 type B messages were validated; the session lasted 4 minutes and 23.80 seconds.
3rd session: 24,616 type A messages and 17,648 type B messages were validated; the session lasted 8 minutes and 56.10 seconds.
4th session: 10,684 type A messages and 12,684 type B messages were validated; the session lasted 5 minutes and 7.78 seconds.
Tasks:
1. Determine to what precision the validation time depends only on the type of message (A or B), not on the message content.
2. Determine the average validation times for both type A and type B messages.
For this problem, I first denoted the time required for a type A message as x and for a type B message as y. Then I wrote a linear equation for each session.
Using the linear equations, I was able to determine that y < 8.98x, which would indicate that y/x is somewhere between 1 and 8.98.
I also tried this approach:
B: (number of type B messages / total messages) * time required.
I used this to get a rough estimate of the average time required for both type A and type B messages. I think finding the times this way and taking the average solved the second question.
The first question has me stumped.
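For what it's worth, one way to attack both tasks numerically is to treat each session as a linear equation nA*x + nB*y = T and solve the over-determined 4x2 system by least squares; the leftover per-session error then indicates to what precision "time depends only on the type" actually holds. A rough sketch (session data copied from the question, numpy assumed to be available):

    import numpy as np

    # Each row: (type A count, type B count); times are session durations in seconds.
    counts = np.array([
        [17648, 11414],
        [6836, 12618],
        [24616, 17648],
        [10684, 12684],
    ], dtype=float)
    times = np.array([6 * 60 + 7.90,
                      4 * 60 + 23.80,
                      8 * 60 + 56.10,
                      5 * 60 + 7.78])

    # Solve counts @ [x, y] ~= times in the least-squares sense.
    (x, y), _, _, _ = np.linalg.lstsq(counts, times, rcond=None)

    print(f"average validation time, type A: {x * 1000:.2f} ms")
    print(f"average validation time, type B: {y * 1000:.2f} ms")

    # How well does the "time depends only on the type" model fit?
    predicted = counts @ np.array([x, y])
    print("per-session error (s):", times - predicted)

If the per-session errors are small relative to the session lengths, the "content does not matter" claim holds to roughly that precision.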

Token ring protocol Efficiency and Cycle time? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 years ago.
I read about the token ring protocol in Forouzan's book.
According to the book:
Let N be the number of stations in the ring, THT the token holding time, Tt the transmission time of a packet, and Tp the propagation time of a packet on the channel/link.
Then Cycle Time = N * THT + Tp (this is the cycle time for the token)
and Efficiency = (useful time) / (Cycle Time).
Here useful time is stated as N * Tt (justified as the transmission time at each station in a single cycle of token passing),
and thus Efficiency = (N * Tt) / (N * THT + Tp).
My question is: why is this N * Tt not added to the cycle time,
so that the efficiency would become Efficiency = (N * Tt) / (N * THT + Tp + N * Tt)?
Yes, but it has already been included.
The token holding time is THT = Transmission Time + Ring Latency (for a single round of packet transmission),
i.e., THT = Tt + Tp.
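A tiny numerical sketch, with made-up values for N, Tt, and Tp, may make the point clearer: once THT already contains Tt, adding N * Tt to the cycle time again would count the transmission time twice.

    # Illustrative numbers only; not taken from the book.
    N = 10          # stations in the ring
    Tt = 1.0e-3     # transmission time of a packet (s)
    Tp = 0.2e-3     # propagation time around the ring (s)

    THT = Tt + Tp                 # token holding time already includes Tt
    cycle_time = N * THT + Tp     # cycle time for the token
    useful_time = N * Tt          # time actually spent transmitting per cycle

    efficiency = useful_time / cycle_time
    print(f"cycle time = {cycle_time * 1e3:.2f} ms, efficiency = {efficiency:.3f}")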

Speed vs Bandwidth, ISPs, misconception? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
A lot of ISPs sell their products saying: 100 Mbit/s speed.
However, compare the internet to a package delivery service, UPS for example.
The amount of packages you can send every second (bandwidth) is something different from the time it takes for them to arrive (speed).
I know there are multiple meanings of the term 'bandwidth', so is it wrong to advertise with speed?
Wikipedia ( http://en.wikipedia.org/wiki/Bandwidth_(computing) ):
In computer networking and computer science, bandwidth,[1] network bandwidth,[2] data bandwidth,[3] or digital bandwidth[4][5] is a measurement of bit-rate of available or consumed data communication resources expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
This part tells me that bandwidth is measured in Mbit/s, Gbit/s, etc.
So does this mean the majority of ISPs are advertising wrongly, when they should advertise with 'bandwidth' instead of speed?
Short answer: Yes.
Long answer: There are several aspects of data transfer that can be measured on an amount-per-time basis; the amount of data per second is one of them, but it is perhaps misleading if not properly explained.
From the network performance point of view, these are the important factors (quoting Wikipedia here):
Bandwidth - maximum rate that information can be transferred
Throughput - the actual rate that information is transferred
Latency - the delay between the sender and the receiver decoding it
Jitter - variation in the time of arrival at the receiver of the information
Error rate - corrupted data expressed as a percentage or fraction of the total sent
So you may have a 10 Mbit/s connection, but if 50% of the sent packets are corrupted, your final throughput is actually just 5 Mbit/s (even less if you consider that a substantial part of the data may be control structures instead of payload).
Latency may be affected by mechanisms such as Nagle's algorithm and ISP-side buffering.
As specified in RFC 1149, an ISP could sell you an IPoAC package with 9 Gbit/s and still be true to its word if it sent you 16 pigeons with 32 GB SD cards attached to them, with an average air time of around 1 hour, or ~3,600,000 ms of latency.
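A back-of-the-envelope sketch of the throughput point above; the 10 Mbit/s rate and the 50% error rate come from the example, while the protocol-overhead fraction is a made-up illustration:

    bandwidth_mbit = 10.0   # advertised link rate (Mbit/s)
    error_rate = 0.5        # fraction of packets corrupted, as in the example
    overhead = 0.05         # assumed fraction lost to headers/control structures

    goodput = bandwidth_mbit * (1 - error_rate) * (1 - overhead)
    print(f"effective throughput: {goodput:.2f} Mbit/s")  # ~4.75 Mbit/s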

Why does RIP (Routing Information Protocol) use a hop count of 15 hops? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm reading about the distance vector protocol RIP and came to know that the maximum hop count it uses is 15 hops. My doubt is: why is 15 used as the maximum hop count and not some other number like 10, 12, or maybe 8?
My guess is that 15 is 16 - 1, that is 2^4 - 1, or to put it otherwise: the biggest unsigned value that fits in 4 bits of information.
However, the metric field is 4 bytes long, and the value 16 denotes infinity.
I can only guess, but I would say that it allows fast checks with a simple bit-mask operation to determine whether the metric is infinity or not.
Now the real question might be: "Why is the metric field 4 bytes long when apparently only five bits are used?" and for that, I have no answer.
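If that bit-mask speculation were right, the unreachability test would be a single bit test rather than a comparison. A sketch of the idea (illustration only, not actual RIP implementation code):

    RIP_INFINITY = 16  # metric value RIP uses to mean "unreachable"

    def is_unreachable(metric):
        # Valid metrics are 1..15, so bit 4 being set can only mean 16 (infinity).
        return (metric & 0x10) != 0

    print(is_unreachable(15))  # False: still reachable
    print(is_unreachable(16))  # True: unreachable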
Protocols often make arbitrary decisions. RIP is a very basic (and rather old) protocol; you should keep that in mind when reading about it. As said above, the max hop count sits in a 4-byte field, where 16 is equivalent to infinity. 10 is not a power-of-2 number, and 8 was probably deemed too small to reach all the routers.
The rationale behind keeping the maximum hop count low is the count-to-infinity problem. Higher max hop counts would lead to a higher convergence time (I'll leave you to look up the count-to-infinity problem on Wikipedia). Certain versions of RIP use split horizon, which addresses this issue.
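To see why a small "infinity" keeps convergence time bounded, here is a toy simulation of the classic two-router count-to-infinity loop; it is a deliberately simplified model, not real RIP behaviour:

    def exchanges_until_unreachable(infinity):
        # Two routers, A and B, bouncing a stale route between them after
        # the destination goes down (the classic two-node routing loop).
        metric_a, metric_b = infinity, 2   # A lost its direct route; B still trusts A
        exchanges = 0
        while metric_a < infinity or metric_b < infinity:
            # Each router picks up the other's advertisement and adds one hop.
            metric_a = min(infinity, metric_b + 1)
            metric_b = min(infinity, metric_a + 1)
            exchanges += 1
        return exchanges

    print(exchanges_until_unreachable(16))    # converges after a handful of exchanges
    print(exchanges_until_unreachable(255))   # a larger "infinity" takes far longer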

How would you design memory for 1 GB of memory with a page size of 4K? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 11 years ago.
I was asked this question in a job interview recently. I answered that I would use a hash data structure to begin designing the system, but I couldn't answer it well. I think the interviewer was looking for answers like how I would design a page table for this.
I would like to know how to answer this question. For example, with each page being 4K, how many pages would be needed for 1 GB? Also, what other considerations should I keep in mind to design it efficiently?
This question makes sense in the context of CPUs where the TLBs are "manually" loaded and there are no predetermined page table structures, like some models of MIPS, ARM, PowerPC.
So, some rough thoughts:
1 GB is 2^30 bytes, or 2^18 = 256K pages of 4K each.
Say 4 bytes per page table entry; that's 1 MB for a single-level page table. Fast, but a bit wasteful on memory.
What can we do to reduce memory and still make it reasonably fast?
We need 18 bits per page frame number. That cannot be squeezed into 2 bytes, but we can use 3 bytes per PTE, with 6 bits to spare to encode access rights, presence, COW, etc. That's 768 KB.
Instead of the whole page frame number we can keep only 16 bits of it, with the remaining two bits determined by a 4-entry upper-level page table with a format like this:
2 MSB of the physical page number
21 bits for the second-level page table address (30-bit address, aligned on a 512K boundary)
spare bits
There is no place for per-page bits though, so let's move a few more address bits to the upper-level table to obtain:
Second-level page table entry (4K 2-byte entries = 8 KB):
4 bits for random flags
12 LSB of the page frame address
First-level page table entry format (64 4-byte entries = 256 bytes):
6 MSB of the page frame address
17 bits for the second-level page table address (30-bit address, aligned at 8K)
spare bits
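To sanity-check the arithmetic above, a small sketch; the sizes follow directly from the 1 GB / 4K figures, and the 3-byte PTE variant is the one described in the answer:

    GB = 2 ** 30
    PAGE_SIZE = 4 * 2 ** 10                  # 4K pages

    num_pages = GB // PAGE_SIZE              # 2**18 = 262,144 pages
    pfn_bits = (num_pages - 1).bit_length()  # 18 bits to number every page frame

    single_level_4B = num_pages * 4          # 1 MB single-level table
    single_level_3B = num_pages * 3          # 768 KB with 3-byte PTEs

    print(f"pages needed:           {num_pages}")
    print(f"bits per frame number:  {pfn_bits}")
    print(f"4-byte PTE table size:  {single_level_4B // 2**10} KB")
    print(f"3-byte PTE table size:  {single_level_3B // 2**10} KB")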
This looks like an open-ended question to me. The main idea is probably to see how much you know about memory and how deeply you can think through the cases to handle. Some of them could be:
Pagination - how many pages you want to keep in memory for a process and how many on disk, to optimize paging.
Page tables
Security
Others, if you can relate to and think of them
