I am using Spring Boot. To monitor JVM memory, I am using the /metrics endpoint of Spring Boot Actuator.
I can't work out what these keys actually represent:
"gc.copy.count": 1933,
"gc.copy.time": 35972,
"gc.marksweepcompact.count": 12,
"gc.marksweepcompact.time": 7515,
Can someone tell me what exactly they are?
Are CMS (Concurrent Mark Sweep) and MarkSweepCompact the same?
Also, should I use CMS (Concurrent Mark Sweep)? Or which GC algorithm should I use?
Copy, MarkSweepCompact and ConcurrentMarkSweep are different JVM collectors. For each collector, .count is the number of collections it has run since the JVM started, and .time is the cumulative time spent in those collections, in milliseconds.
You can find a description of these collectors here: http://www.fasterj.com/articles/oraclecollectors1.shtml
All of the garbage collection algorithms except ConcurrentMarkSweep
are stop-the-world, i.e. they stop all application threads while they
operate - the stop is known as 'pause' time. ConcurrentMarkSweep
tries to do most of its work in the background and minimize the pause
time, but it also has stop-the-world phases and can fall back to
MarkSweepCompact (a "concurrent mode failure"), which is fully
stop-the-world. (The G1 collector has a concurrent phase but is
currently mostly stop-the-world.)
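Since the counts are numbers of collections and the times are cumulative milliseconds, you can derive the average pause per collection from the question's values - a quick sketch in Python:

```python
# Values from the /metrics output in the question (times in milliseconds).
gc_copy_count = 1933
gc_copy_time_ms = 35972
gc_msc_count = 12
gc_msc_time_ms = 7515

# Average time per collection for each collector.
avg_copy_ms = gc_copy_time_ms / gc_copy_count   # ~18.6 ms per young-gen copy
avg_msc_ms = gc_msc_time_ms / gc_msc_count      # ~626 ms per full mark-sweep-compact

print(avg_copy_ms, avg_msc_ms)
```

Note the difference: the young-generation copy collections are short and frequent, while each full MarkSweepCompact collection pauses the application for well over half a second - which is exactly why the choice of old-generation collector matters.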
...aside from the benefit in separate performance monitoring and logging.
For logging, I am confident I can get granularity by manually adding the name of the "routine" to each call. This is how it works now, with several discrete Functions for different parts of the system.
There are multiple automatic logs: the start and finish of each routine, for example. It would be harder to find out how expensive certain routines are, but not impossible.
The reason I want the entire logic of the application handled by a single handler function is to reduce cold starts: one function means only one container, which can be kept persistently alive when there are very few users of the app.
If a month is ~2.6m seconds and we assume the system uses 1 GB RAM and 1 GHz CPU frequency at all times, that's:
2600000 * 0.0000025 + 2600000 * 0.000001042 ≈ USD $9.21 a month
...for one minimum instance.
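Spelled out as a quick script (the per-second prices are my assumed GB-second and GHz-second rates, not official GCP pricing):

```python
SECONDS_PER_MONTH = 2_600_000   # ~30 days
GB_SECOND_PRICE = 0.0000025     # assumed price for 1 GB of RAM per second
GHZ_SECOND_PRICE = 0.000001042  # assumed price for 1 GHz of CPU per second

# Memory cost plus CPU cost for one always-on instance.
monthly_cost = (SECONDS_PER_MONTH * GB_SECOND_PRICE
                + SECONDS_PER_MONTH * GHZ_SECOND_PRICE)
print(f"${monthly_cost:.2f} per month")  # → $9.21 per month
```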
I should also state that all of my functions have the bare minimum amount of global scope code; it just sets up Firebase assets (RTDB and Firestore).
From a billing, performance (based on user wait time), and user/developer experience perspective, is there any reason why it would be smart to keep all my functions discrete?
I'd also accept an answer saying "one single function for all logic is reasonable" as long as there's a reason for it.
Thanks!
If you have a very small app with ~5 endpoints and very low traffic, sure, you could do something like this. But here is why I would not:
billing and performance
The important thing to realize is that each instance of your function handles only one request at a time, so when requests arrive concurrently, new instances are spun up - there could be tens of them running at the same time.
If you would like to have just one instance handling all the traffic, you should look at GCP Cloud Run, where a single container handles multiple concurrent requests and scales out only when that is no longer sufficient.
Imagine you have several endpoints, each with different performance requirements:
one may need only 128 MB of RAM,
another may need 1 GB of RAM.
(FYI: the RAM setting also controls the CPU MHz allocated to the function - which can speed up execution in some cases.)
If you had only one function with 1 GB of RAM, every request would be served by that function, and in many cases most of the memory would go to waste.
But if you split it into multiple functions, some requests will require far fewer resources, which can save you money at larger execution volumes per month (tens of thousands and up).
Let's imagine a function with a 3-second execution time and 10k executions/month:
128MB would cost you $0.0693
1024MB would cost you $0.495
As you can see, for a small app the difference can be negligible. But at scale it matters. (*The cost can vary based on the datacenter.)
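Scaling those figures linearly makes the gap concrete (a sketch; the per-10k costs are the ones quoted above, and real GCP pricing varies by region):

```python
# Costs quoted above for 10,000 executions/month of a 3-second function.
COST_128MB_PER_10K = 0.0693
COST_1024MB_PER_10K = 0.495

def monthly_cost(executions, cost_per_10k):
    """Scale the quoted per-10k cost linearly with execution count."""
    return executions / 10_000 * cost_per_10k

for n in (10_000, 100_000, 1_000_000):
    small = monthly_cost(n, COST_128MB_PER_10K)
    large = monthly_cost(n, COST_1024MB_PER_10K)
    print(f"{n:>9} exec/month: 128MB ${small:.2f} vs 1024MB ${large:.2f}")
```

At a million executions a month that is roughly $6.93 vs $49.50 - the same factor, but now real money.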
As for the logging, I don't think it matters. In bigger systems messages often travel through several functions anyway, so you have to deal with that regardless.
As for the cold start: you just need a good UI to smooth it over. At first I was worried about it in our apps, but later on you just get used to the fact that some actions can take ~2 s to execute (cold start). And you should have the UI show a "loading" state regardless, because you don't know whether the function will take ~100 ms or 3 s due to a bad connection.
I'm trying to convert an application from using standard point-to-point MPI calls (e.g., MPI_Isend, MPI_Irecv) to using MPI-3's one-sided calls. My goal is to improve performance on my hardware, which is a system that has Infiniband hardware support and an MPI implementation that's optimized for RDMA calls. I've been told that the hardware performs particularly well with passive synchronization mode, as opposed to active synchronization (i.e., Post-Start-Complete-Wait).
However, even after reading through the MPI standard documentation and examples, I'm confused on how to actually use the calls. For context, my program has a setup phase where I will know the communication pattern and even the buffers of the send data and ultimate buffer of the receiver. So, it's straightforward to set up a window and use it.
Specifically, with passive synchronization, I'm confused about when the "receiver" knows the data in the window has been written by the sender. What I want to do is have the sender produce the message data, call MPI_Win_lock on the window, do an MPI_Put, and then wait for completion with MPI_Win_unlock. But what is an efficient / recommended way for the "receiver" (window target) to know when the message data has been written? Similarly, given that the communication pattern is iterated and the same receive buffer (the target's buffer) is used multiple times, how do I know the receiver is done consuming the buffer so it can be reused?
I can envision a couple of approaches:
I can use an MPI_Barrier after the MPI_Win_unlock and before the receiver accesses the data. (This seems like it would work, but I'm skeptical that it would yield better performance than active synchronization.)
I can possibly use MPI_Win_lock and MPI_Win_unlock on the receiver (target), locking the window while the target is actually using the data so the access epoch can't start on the origin (but is that how it works? I've read that lock and unlock don't create critical sections in the traditional sense).
Some sort of home-grown approach where the receiver polls for a nonce/flag value to be written, and knows the data is available when that happens.
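To make the third approach concrete, here is the protocol I have in mind, sketched with plain Python threads purely as an analogy (this is not MPI code; in real MPI the flag put must be ordered after the payload put, e.g. by calling MPI_Win_flush or MPI_Win_unlock between them, since one-sided operations within an epoch are unordered):

```python
import threading
import time

# Shared "window": a payload slot plus a ready flag the target polls.
window = {"payload": None, "ready": False}

def origin_put(data):
    """Origin side: write the payload first, then set the flag.
    In MPI this ordering must be enforced explicitly (e.g. MPI_Win_flush)."""
    window["payload"] = data
    window["ready"] = True

def target_wait(timeout=2.0, poll_interval=0.001):
    """Target side: poll the flag until the payload is visible."""
    deadline = time.monotonic() + timeout
    while not window["ready"]:
        if time.monotonic() > deadline:
            raise TimeoutError("no message arrived")
        time.sleep(poll_interval)
    window["ready"] = False  # reset the flag so the buffer can be reused
    return window["payload"]

t = threading.Thread(target=origin_put, args=([1, 2, 3],))
t.start()
print(target_wait())  # → [1, 2, 3]
t.join()
```

Resetting the flag before the next iteration is also how I imagine answering the buffer-reuse question: the origin would poll (or be told via a second flag) that the target has consumed the data.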
Docs for MPI_Win_lock: https://www.open-mpi.org/doc/v3.0/man3/MPI_Win_lock.3.php
In general, how does a programmer synchronize with `MPI_Win_lock` and `MPI_Win_unlock` in a way that's any more efficient than the active synchronization approach? It does feel like I need to just use post-start-complete-wait, but I'm hoping you can help me find a way to try passive synchronization as well.
The close method (at least in the Java world) is something that you, as a good citizen, have to call when you are done using the related resource. Somehow I automatically started applying the same rule to the close! function from the core.async library. As far as I understand, these channels are not tied to any I/O, so I am not sure whether it is necessary to call close!. Is it OK to leave (local) channels for garbage collection without closing them?
It's fine to leave them to the garbage collector unless closing them has meaning in your program. It's common to close a channel as a signal to other components that it's time to shut themselves down. Other channels are intended to be used forever and are never expected to be closed or garbage collected. In still other cases a channel is used to convey one value to a consumer, after which both the producer and the consumer are finished and the channel is GC'd. In this last case there is no need to close it.
When configuring a Hadoop cluster, what's the scientific method to set the number of mappers/reducers for the cluster?
There is no formula. It depends on how many cores and how much memory you have. In general, the number of mappers plus the number of reducers should not exceed the number of cores. Keep in mind that the machine is also running the Task Tracker and Data Node daemons. One general suggestion is to have more mappers than reducers. If I were you, I would run one of my typical jobs with a reasonable amount of data and try it out.
Quoting from "Hadoop: The Definitive Guide", 3rd edition, page 306:
Because MapReduce jobs are normally
I/O-bound, it makes sense to have more tasks than processors to get better
utilization.
The amount of oversubscription depends on the CPU utilization of jobs
you run, but a good rule of thumb is to have a factor of between one and two more
tasks (counting both map and reduce tasks) than processors.
A processor in the quote above is equivalent to one logical core.
But this is just theory, and most likely every use case is different, so some tests need to be performed. Still, this number is a good starting point to test with.
You should probably also look at reducer lazy loading (reduce slow-start, controlled by the mapred.reduce.slowstart.completed.maps property), which allows reducers to start later, once enough map output is available, so cluster resources are not tied up by idle reducers. I don't have much experience with this, but it seems useful.
Taken from Hadoop Gyan-My blog:
The number of mappers is decided in accordance with the data locality principle. Data locality principle: Hadoop tries its best to run map tasks on nodes where the data is present locally, to optimize network and inter-node communication latency. Since the input data is split into pieces and fed to different map tasks, it is desirable to have all the data fed to a map task available on a single node. Since HDFS only guarantees that data of size equal to its block size (64 MB by default) is present on one node, it is advised to have the split size equal to the HDFS block size so that the map task can take advantage of this data localization - therefore, 64 MB of data per mapper. If you see some mappers running for a very short period of time, try to bring down the number of mappers and make them run longer, for a minute or so.
The number of reducers should be slightly less than the number of reduce slots in the cluster (the concept of slots comes in with a pre-configuration in the job/task tracker properties while configuring the cluster), so that all the reducers finish in one wave and make full utilization of the cluster resources.
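The rules of thumb above can be turned into a back-of-the-envelope calculation - a sketch, where the block size, input size, core count and slot count are made-up example values:

```python
BLOCK_SIZE_MB = 64         # HDFS block size (64 MB default in older Hadoop)
INPUT_SIZE_MB = 10 * 1024  # example: 10 GB of input data
CORES = 16                 # example: total cores in the cluster
REDUCE_SLOTS = 8           # example: configured reduce slots

# Split size == block size -> one mapper per block (data locality).
num_mappers = INPUT_SIZE_MB // BLOCK_SIZE_MB  # 160 mappers

# Rule of thumb: 1-2x more total tasks than processors.
max_concurrent_tasks = 2 * CORES              # upper bound of 32 tasks at once

# Slightly fewer reducers than reduce slots, so they finish in one wave.
num_reducers = REDUCE_SLOTS - 1               # 7 reducers

print(num_mappers, max_concurrent_tasks, num_reducers)
```

The mapper count follows from the data, not from the cluster; only the reducer count and the concurrency bound are really yours to tune.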
Assume that I have a function called PlaceOrder, which when called inserts the order details into a local DB and puts a message (the order details) onto a TIBCO EMS queue.
Once the message is received, a TIBCO BW process will then invoke some other system (say, ExternalSystem) to pass on the order details.
Now, the way I wrote my integration test is:
Call PlaceOrder.
Sleep, then check the details exist in the local DB.
Sleep, then check the details exist in ExternalSystem.
Is the above approach correct? The test gives me confidence that the end-to-end integration is working, but is there a better way to test this scenario?
The problem you describe is quite common, and your approach is a very typical solution.
The problem with this solution is that if the delay is too short, your tests may sometimes pass and sometimes fail; but if the delay is very long, then you're just wasting time waiting, and with many tests that can add a lot of delay. Unless you can get some signal telling you the order arrived in the database, you just have to wait.
You can reduce the delay by doing lots of checks with short intervals. If your order is not there after a timeout, you fail the test.
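That "lots of checks with short intervals" idea is usually wrapped in a small poll-until helper - a sketch in Python, where the order_exists_* lambdas stand in for your own DB and ExternalSystem lookups (hypothetical names, not a real API):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.2):
    """Poll `condition` until it returns truthy or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# In the test, instead of a fixed sleep:
#   assert wait_until(lambda: order_exists_in_db(order_id))
#   assert wait_until(lambda: order_exists_in_external_system(order_id))
```

This keeps the happy path fast (it returns as soon as the order shows up) while the timeout still bounds how long a genuinely failing test can hang.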
In "Growing Object-Oriented Software, Guided by Tests"*, there is a chapter on this very subject, so you might want to get a copy if you will be doing a lot of this sort of testing.
"There are two ways a test can observe the system: by sampling its observable
state or by listening for events that it sends out. Of these, sampling is
often the only option because many systems don’t send any monitoring
events. It’s quite common for a test to include both techniques to interact
with different “ends” of its system."
(*) http://my.safaribooksonline.com/book/software-engineering-and-development/software-testing/9780321574442