Nexus 3: What happened to the "number to keep" task cleanup functionality?

In Nexus 2.14, we had a task to clean up a repository while keeping the last 20 builds ("Number to keep"). In Nexus 3, we can clean up builds older than 30 days, but it's important to us to keep at least the last 20. Is there a way to do this that I'm not seeing in Nexus OSS 3.2.1?

We have not built this for Releases as of yet. Pop over to our JIRA and let us know your interest: https://issues.sonatype.org/browse/NEXUS
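As a hedged workaround sketch (not from the original answer): later Nexus 3 releases, well after 3.2.1, expose a components REST API, and assuming that API is available, a keep-the-last-N cleanup can be scripted externally. The URL, repository name, credentials, and KEEP count below are illustrative assumptions:

# Sketch of an external "number to keep" cleanup against the Nexus 3
# components REST API (not available in 3.2.1 itself).
import requests

BASE = "http://nexus.example.com/service/rest/v1/components"
REPO, KEEP, AUTH = "my-releases", 20, ("admin", "admin123")

# Page through every component in the repository.
components, token = [], None
while True:
    params = {"repository": REPO}
    if token:
        params["continuationToken"] = token
    page = requests.get(BASE, params=params, auth=AUTH).json()
    components.extend(page["items"])
    token = page.get("continuationToken")
    if not token:
        break

# Naive newest-first sort by version string (a real script should parse
# versions properly and group by component name), then delete the rest.
components.sort(key=lambda c: c["version"] or "", reverse=True)
for comp in components[KEEP:]:
    requests.delete("%s/%s" % (BASE, comp["id"]), auth=AUTH)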

Related

Sonatype Nexus cleanup task in Waiting status

We've configured cleanup policies in Sonatype Nexus 3.22.1-02 and created the tasks Admin - Cleanup repositories and Admin - Compact blob store. We ran them manually, but they are still in Waiting status after a couple of hours. These are not scheduled tasks.
Is there a way to understand what is holding them up and give them a push, or to cancel and rerun them? TIA.
Task status Waiting means the task has completed and is waiting for its next run. Whether the last run succeeded or failed is shown in the Last result field.
In the screenshot below you can see that the last time the task ran was on January 21; it completed successfully and took 0 seconds. The task is now waiting for its next run, which will happen on January 22.
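As a side note, the same fields can be checked programmatically: Nexus 3.22 ships a tasks REST API, and a minimal sketch (the URL and credentials are illustrative) looks like this:

# List Nexus tasks with their current state and last run result.
import requests

resp = requests.get("http://nexus.example.com/service/rest/v1/tasks",
                    auth=("admin", "admin123"))
for task in resp.json()["items"]:
    # currentState is e.g. WAITING or RUNNING; lastRunResult is e.g. OK or FAILED
    print(task["name"], task["currentState"], task["lastRunResult"])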

DAG task initialization takes time

We have a Composer environment with the following configuration:
Composer version: composer-1.10.0-airflow-1.10.6
Machine type: n1-standard-4
Disk size (GB): 100
Worker nodes: 6
Python version: 3
worker_concurrency: 32
parallelism: 128
Our problem is that each task in a DAG takes a long time to initialize. For example, a DAG has three tasks: Task1 -> Task2 -> Task3. Task1 takes at least 5 minutes to initialize, but once started it completes within seconds. Task2 again takes about 5 minutes to initialize and then executes within seconds, and so on: each task's initialization is slow even though the work itself finishes quickly. The DAG is scheduled every 5 minutes, but a single run takes at least 10 minutes to complete, which affects our functionality and the execution of the process.
Here is what the three tasks do. Task1 gathers basic information, such as the storage location, from configuration files and variables. Task2 checks the storage for new files and, based on the files found, triggers the relevant DAGs. Task3 sends a success email. (A minimal sketch of this DAG shape follows below.)
I also noticed that the worker nodes did not split the work among themselves: one worker node's CPU utilization is always high compared to the other worker nodes, and I don't know what the reason could be. Another interesting point is that even when no other DAGs are running, this DAG still takes about 10 minutes to execute.
I'd appreciate your help in solving this.
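For illustration, here is a minimal sketch of the DAG shape described above, written against the Airflow 1.10.x operators; the task ids, callables, Variable name, and email address are placeholders, not code from our environment:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.models import Variable
from airflow.operators.python_operator import PythonOperator
from airflow.operators.email_operator import EmailOperator

def gather_config(**context):
    # Task1: read the storage location from an Airflow Variable.
    context["ti"].xcom_push(key="storage_path", value=Variable.get("storage_path"))

def check_new_files(**context):
    # Task2: look for new files under the storage path and trigger the
    # relevant DAGs (the triggering logic is elided in this sketch).
    storage_path = context["ti"].xcom_pull(task_ids="gather_config", key="storage_path")
    print("checking %s for new files" % storage_path)

default_args = {
    "start_date": datetime(2020, 1, 1),
    "retries": 1,
    "retry_delay": timedelta(minutes=1),
}

with DAG("file_watcher", default_args=default_args,
         schedule_interval="*/5 * * * *", catchup=False) as dag:
    task1 = PythonOperator(task_id="gather_config",
                           python_callable=gather_config, provide_context=True)
    task2 = PythonOperator(task_id="check_new_files",
                           python_callable=check_new_files, provide_context=True)
    task3 = EmailOperator(task_id="send_success_email", to="team@example.com",
                          subject="File check finished",
                          html_content="All tasks completed.")
    task1 >> task2 >> task3

If most of each run is scheduler latency rather than the work itself, collapsing the three steps into a single PythonOperator removes two scheduling round-trips per run.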
This should be a comment but I don't have the reputation required.
My initial advice is to upgrade your Composer version: 1.10.0 has a few known bugs that are fixed in later versions (the latest right now is 1.10.4). This should correct the CPU that stays at 100% (it did in our case). Are there many other DAGs running on your instance?
As I mentioned in the comment, the reason behind the high CPU pressure on that particular GKE node might become more evident after troubleshooting on the Airflow workload/GKE side.
It regularly happens that on some Airflow runtime node the compute resources (memory/CPU) run beyond the node's capacity, causing Airflow workloads (Pods) to be evicted and restarted, losing all of their process state. Meanwhile the Celery executor, which is responsible for assigning tasks to the Airflow workers, may not even be aware of the worker's broken state or time-out, and takes no action to re-assign the task to another worker.
According to the GCP Composer release notes, the vendor has provided some essential fixes in the latest composer-1.10.* patches, improving Composer runtime performance and reliability, as #parakeet said in his answer.
You can also refer to the GCP Composer known issues knowledge base to keep track of the current problems and the workarounds the vendor shares with the community.

IDMF Dynamics AX 2012 stays on Pending for scheduled task

We want to archive Dynamics AX data after a year. This comes after an infrastructure hardware change and a complete IT team change (including myself).
When I schedule a task (Meta Sync) in the acceptance environment, the task gets picked up by AXDataManagementSchedulerService.exe and the job finishes.
If I do the same steps in the production environment, the task stays in Pending status (see screenshot), and the logs do not show anything about why the task remains stuck. As far as I have checked, all the environment variables and running services are the same as in the acceptance environment, and the Windows Event Viewer doesn't show any problems.
What could prevent the task from being picked up?

FreePBX / Asterisk queue leastrecent breaks on transfer

I am running into some weird issues with queues. It almost seems as if transferring a call, or putting it on hold, does not reset the agent's lastcall time.
Scenario –
John is on a call with a customer.
Mary is sitting available waiting for calls.
John finishes with his customer, transfers the customer to an external number, puts himself back in ready, and gets the next call that comes in.
Mary is still sitting available.
After speaking with the team, they tell me that this is how it always works. If you’re on a call and you transfer your caller away you’re automatically put back into the queue as the person to get the next call. So, given this information, Mary could sit in ready status for two hours and not get a call if everything was timed just right and all of John’s calls were transfers.
This shouldn’t function this way. It should be the other way around. The next call should always go to the agent that has been available the longest.
Let me know what you think.
Asterisk version
Asterisk 1.8.11-cert10 built by root @ 89-139-19-10.digium.internal on a x86_64 running Linux on 2013-01-02 22:24:23 UTC
FreePBX Base Version: 2.10.0rc1
FreePBX Framework Version:
FreePBX Core Version: 2.10.1.1
I think the issue is with priority. You need to set up a context for the return after the transfer, in which you give the agent max priority and put them back into the queue.
It is hard to say exactly how it should look, but I am sure any Asterisk developer can do that.
See the examples at the bottom of this page:
http://www.voip-info.org/wiki/view/Asterisk+config+features.conf
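For context, leastrecent is the ring strategy configured per queue in queues.conf; a minimal sketch (the queue name is illustrative):

[support]
strategy = leastrecent   ; ring the member who least recently answered a call
timeout = 15             ; seconds to ring a member before trying the next

The behavior reported above comes down to the member's lastcall timestamp, which this strategy compares, apparently not being refreshed when the call is transferred.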

Indefinite Provisioning of EMR Cluster with Segue in R

I am trying to use the R package called Segue by JD Long, which is lauded as the ultimate in simplicity for using R with AWS by a book I read called "Parallel R".
However, for the 2nd day in a row I've run into a problem where I initiate the creation of a cluster and it just says STARTING indefinitely.
I tried this on OS X and in Linux with clusters of sizes 2, 6, 10, 20, and 25. I let them all run for at least 6 hours. I have no problem starting a cluster in the AWS EMR Management Console, though I have no clue how to connect Segue/R to a cluster that was started in the Management Console instead of via createCluster().
So my question is: is there either some way to troubleshoot the provisioning of the cluster, or a way to bypass the problem by creating the cluster manually and somehow getting Segue to work with it?
Here's an example of what I'm seeing:
library(segue)
Loading required package: rJava
Loading required package: caTools
Segue did not find your AWS credentials. Please run the setCredentials() function.
setCredentials("xxx", "xxx")
emr.handle <- createCluster(numInstances=10)
STARTING - 2013-07-12 10:36:44
STARTING - 2013-07-12 10:37:15
STARTING - 2013-07-12 10:37:46
STARTING - 2013-07-12 10:38:17
.... this goes on for hours and hours and hours...
UPDATE: After 36 hours and many failed attempts, this began working (seemingly at random) when I tried it with 1 node. I then tried it with 10 nodes and it worked great. To my knowledge nothing changed locally or on AWS.
I am answering my own question on behalf of the AWS support rep, who gave me the following belated explanation:
The problem with the EMR creation is the Availability Zone specified (us-east-1c). This Availability Zone is now constrained and doesn't allow the creation of new instances, so the job was trying to create the instances in an infinite loop.
You can see information about constrained AZs here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions-availability-zones
"As Availability Zones grow over time, our ability to expand them can become constrained. If this happens, we might restrict you from launching an instance in a constrained Availability Zone unless you already have an instance in that Availability Zone. Eventually, we might also remove the constrained Availability Zone from the list of Availability Zones for new customers. Therefore, your account might have a different number of available Availability Zones in a region than another account."
So you need to specify another AZ, or, what I recommend, not specify any AZ at all, so that EMR is able to select any available one.
I found this thread: https://groups.google.com/forum/#!topic/segue-r/GBd15jsFXkY
on Google Groups, where the topic of availability zones came up before. The zone that was set as the new default in that thread was the zone causing problems for me. I am attempting to edit the source of Segue.
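For anyone hitting the same thing, a hedged sketch of the override I was after, assuming your Segue build exposes the availability zone as a createCluster() argument (the location argument name comes from that thread's discussion and is an assumption, not verified against the original source):

# Hypothetical: point Segue at a different, unconstrained availability zone
# instead of the constrained us-east-1c default discussed above.
emr.handle <- createCluster(numInstances = 10, location = "us-east-1d")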
Jason, I'm the author of Segue so maybe I can help.
Please look under the Details section in the lower part of the AWS console and see if you can determine whether the bootstrap sequences completed. This is an odd problem because typically an error at this stage is pervasive across all users; however, I can't reproduce this one.
