Node is showing status as "not ready" and is not working; I have already installed the node controller on another system - eucalyptus

I have run the command euserv-describe-node-controllers and it shows the node as not ready. I have completed all the steps and registered the node, but it is still not working. I am using EDGE networking.

There are various factors which can affect the state of a node. It will be easier to troubleshoot by checking nc.log on the node and the cloud output log on the CLC.
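For example, a first troubleshooting pass might look like this (the log paths are assumptions based on a default package install of Eucalyptus; adjust for your layout):

# On the CLC: list the registered node controllers and their state
euserv-describe-node-controllers

# On the node controller host: watch the NC log for errors
tail -f /var/log/eucalyptus/nc.log

# On the CLC host: watch the cloud log for errors
tail -f /var/log/eucalyptus/cloud-output.log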


Corda Example Application - Spring Boot server not showing APIs

I've set up the example app and I have 3 nodes running (the tutorial says I should have 4: notary, A, B, and C); the missing node C is not my issue though. I run ./gradlew runPartyAServer and it starts the Spring Boot server, and I get a CorDapp template page at localhost:50005, but there are no APIs or the example landing page.
I tried this and it worked:
checked out github.com/corda/samples-kotlin/Basic/cordapp-example/
built and ran nodes ./gradlew clean deployNodes && ./build/nodes/runnodes
started the server ./gradlew runPartyAServer
connected to localhost:50005 and the example page loaded as expected
Check the logs in /client/logs to see if anything strange happened; you might find something like this (which could be caused by a problem with the connection to the node):
Caused by: net.corda.client.rpc.RPCException: Cannot connect to server(s). Tried with all available servers.
    at net.corda.client.rpc.internal.RPCClientProxyHandler.start(RPCClientProxyHandler.kt:282) ~[corda-rpc-4.6.jar:?]
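A couple of hedged checks along those lines (the log path and RPC port are assumptions based on the default cordapp-example layout; check your own build.gradle for the actual values):

# Look for RPC connection failures in the Spring client logs
grep -n "RPCException" clients/logs/*.log   # adjust the path if your client logs live elsewhere

# Confirm the node's RPC port is actually listening (10006 is commonly PartyA's RPC port in the sample)
ss -tlnp | grep 10006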

How to pin openstack container versions when using kolla-ansible?

When installing openstack via kolla-ansible you specify the openstack version in globals.yml, i.e. openstack_release: "victoria". This is as specific as you can get: there are no point-in-time tags, just a moving target like "victoria".
In my experience containers are updated randomly, not all-at-once, and frequently. Every time I rebuild I'm having to wait for docker to pull down things which have changed since my last deploy. This is problematic for multiple reasons, most acutely:
This is a fast-moving community-driven project. I'm having to work through new issues every few times I rebuild as a result of changes.
If I deploy onto one set of hosts, then deploy onto more hosts hours later, I'm waiting again on updates, and my stack is running containers of different versions.
These pulls take time and make my deployments vulnerable to timeouts and network problems.
To emphasize what a problem the second issue is: usually I can reset a failed deployment and try again, but not always. There have been times when I had residual issues, and due to my noobness it was quicker to dump fresh disks and start over. I'm using external ceph (the only ceph option in kolla-ansible:victoria), colocated with the compute nodes. Resetting pool/OSD state to an earlier point in time isn't in my toolbox yet, so I also wipe my OSDs and redo the ceph installation. I can pin versions on the ceph containers, but I start to sweat once the kolla-ansible installation starts. For a 4-hour total install, there's a not-small chance that another container will change in this time.
The obvious answer for anybody who does IT or software professionally is to pin my kolla:* container versions to a specific point-in-time tag, and not "victoria". I could pin each container to a digest, but that's not supported in the playbooks as written. I'd need to edit the ansible playbooks and add a variable for every container that I want to pin, and then maintain that logic as new containers are added. I'm pulling 43 containers right now. This approach feels like "2 trailer park girls go 'round the outside".
A far simpler approach, which I'm planning, is to pull all the "victoria"-tagged containers, push them back into my own docker repo under a stable tag (e.g. "victoria-feralcoder-20210321"), and then update globals.yml to use that tag. I'm new to managing my own docker repos, so I don't know if I can retag images in a pull-through cache or if I need to set up a private repo for that, so I may also have to switch kolla-ansible between docker.io and a private feralcoder repo, depending on whether I want to do a latest-pull or a pinned-pull. That would be a little "hey nineteen", cleaner and nicer, but still not quite right...
I feel like this pull-retag-push-reconfigure-redeploy approach is hack jankery. Does anybody have a better suggestion? For example, a way to not check upstream for container changes if there's already a tag match in the local mirror? Or maybe a way to pull-through-and-retag at the registry level?
Thanks in advance, and also thanks to the kolla-ansible contributors for all their work, the lack of version stability aside.
Here is one answer, for an existing deployment:
If you have already pulled containers to all your hosts, you can edit some ansible or python so that docker_container.pull=false for all containers.
This is the implementing module:
.../lib/python3.6/site-packages/ansible/modules/cloud/docker/docker_container.py.
This file might be in /usr/local/share/kolla-ansible/, or .../venvs/kolla-ansible/. When pull is false, a container that already exists on the host won't be re-pulled.
This doesn't help the situation where a host hasn't yet pulled the container and you have a version already in your local mirror. In that situation, the stack host will pull the container, and your pull-through cache will pull down any container updates since the last pull.
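If you're hunting for the right file to edit, a quick way to locate and inspect it (the search paths here are assumptions; adjust for however kolla-ansible was installed on your deploy host):

# Find the ansible docker_container module shipped with your install
find /usr/local/share/kolla-ansible /opt ~/venvs -name docker_container.py 2>/dev/null

# Inspect how the pull parameter is handled before changing it
grep -n "pull" /path/to/docker_container.py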
This is my current preferred solution, which is still, admittedly, a hack:
Pull the latest images as a batch, then tag them and push them to a local registry.
First, I need 2 docker registries: I can't push to a pull-through cache, so I also need to set up a private registry that I can push to.
I need to toggle settings in globals.yml back and forth during kolla-ansible deploy to achieve this:
When I run "kolla-ansible bootstrap-servers" I need the local registry configured, so that stack hosts are configured with appropriate insecure-registries configs.
I use "kolla-ansible pull" to prefetch the latest packages, when I want to update. For this I reconfigure globals.yml to point at kolla/*:victoria.
After I fetch the latest containers, I run a loop on one of my stack hosts to pull them from my pull-through cache, tag them to my local registry with a date stamp tag, and push them to my local registry.
Before I run the actual deploy I configure globals.yml to use my local registry and tags.
These are the globals.yml settings of interest:
## PINNED CONTAINER VERSIONS
#docker_registry: 192.168.127.220:4001
#docker_namespace: "feralcoder"
#openstack_release: "feralcoder-20210321"
# LATEST CONTAINER VERSIONS
docker_registry:
docker_registry_username: feralcoder
docker_namespace: "kolla"
openstack_release: "victoria"
My pseudocode is like this (intermediate steps pruned...):
use_localized_containers () {
    cp $KOLLA_SETUP_DIR/files/kolla-globals-localpull.yml /etc/kolla/globals.yml
    cat $KOLLA_SETUP_DIR/files/kolla-globals-remainder.yml >> /etc/kolla/globals.yml
}
use_latest_dockerhub_containers () {
    # We switch to dockerhub container fetches, to get the latest "victoria" containers
    cp $KOLLA_SETUP_DIR/files/kolla-globals-dockerpull.yml /etc/kolla/globals.yml
    cat $KOLLA_SETUP_DIR/files/kolla-globals-remainder.yml >> /etc/kolla/globals.yml
}
localize_latest_containers () {
    for CONTAINER in `ls $KOLLA_PULL_THRU_CACHE`; do
        ssh_control_run_as_user root "docker image pull kolla/$CONTAINER:victoria" $PULL_HOST
        ssh_control_run_as_user root "docker image tag kolla/$CONTAINER:victoria $LOCAL_REGISTRY/feralcoder/$CONTAINER:$TAG" $PULL_HOST
        ssh_control_run_as_user root "docker image push $LOCAL_REGISTRY/feralcoder/$CONTAINER:$TAG" $PULL_HOST
    done
}
use_localized_containers
kolla-ansible -i $INVENTORY bootstrap-servers
use_latest_dockerhub_containers
kolla-ansible -i $INVENTORY pull
localize_latest_containers
use_localized_containers
kolla-ansible -i $INVENTORY deploy

How to keep indexed a Maniphest task after editing its title

After a new Maniphest task has been created, chances are that you may need to change the task title to a new one with different keywords. However, upon editing the title, the task cannot be found by its new keywords but only by the old ones.
After manually reindexing the database, the edited tasks can be found again, but further title changes will fail in the same way until another reindex is issued.
I suppose the normal behavior is that tasks should be found anytime searching by their title without reindexing the database. Should I expect a different behavior from Maniphest?
Phabricator Version:
phabricator cb033673b6eb3dc8330d2ddea0fd358eae3b939a (Nov 16 2018)
The usual culprit is that your phabricator daemons (background workers) aren't running.
From the phabricator directory:
# Check the status of daemons:
./bin/phd status
# (re)start the daemons:
./bin/phd restart
See Managing Daemons with PHD. You can also try looking at the daemon console, which should be reachable at https://your.phabricator.url/daemon/; this shows the queue of jobs so you can see if any are failing for some reason.
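As a stopgap while the daemons catch up, Phabricator also ships a manual reindex command; a hedged example (flag names can vary between versions, see ./bin/search help index):

# Rebuild the search index for all Maniphest tasks
./bin/search index --type TASK --force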

How can the data of a Corda node be deleted without restarting the node?

When running Corda nodes for testing or demo purposes, I often find a need to delete all the node's data and start it again.
I know I can do this by:
Shutting down the node process
Deleting the node's persistence.mv.db file and artemis folder
Starting the node again
However, I would like to know if it is possible to delete the node's data without restarting the node, as this would be much faster.
It is not currently possible to delete the node's data without restarting the node.
If you are "resetting" the nodes for testing purposes, you should make sure that you are using the Corda testing APIs to allow your contracts and flows to be tested without actually starting a node. See the testing API docs here: https://docs.corda.net/api-testing.html.
One alternative to restarting the nodes would be to put the demo environment in a VMware Workstation VM, take a snapshot of the VM while the nodes are still "clean", run the demo, and then reload the snapshot.
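If restarting is acceptable for your demo loop, the reset described in the question is easy to script; a minimal sketch, assuming a node directory laid out by deployNodes and started with java -jar corda.jar (the directory path and process pattern are assumptions):

NODE_DIR=./build/nodes/PartyA   # assumption: point this at your node's directory

# 1. Stop any running corda.jar process (this kills all local nodes; narrow the pattern if needed)
pkill -f "corda.jar" || true

# 2. Delete the node's persisted state
rm -f  "$NODE_DIR/persistence.mv.db"
rm -rf "$NODE_DIR/artemis"

# 3. Start the node again
(cd "$NODE_DIR" && java -jar corda.jar &)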

build queue issues in CC.net

I have a question on how the build queue is configured in CC.net.
I believe we have an issue: when trying to "force" build a scheduled project, the server tries to run several builds at the same time and fails most of them, except the one that started first.
We need to get to a state where, regardless of how many builds are scheduled or how many we "force" start at about the same time, all build requests are placed into a build queue and executed one after another in the order they were placed, with no extra requests generated.
A "Build Failed" email is sent even though the build was actually successful.
In short, the erroneous email is likely due to the build server's scheduler/queue trying to run 2 builds instead of one when asked for a "forced" build; as a result the first one succeeds and the second one fails.
How can I correct/resolve this issue?
Thanks
Nilesh
To specify your projects' queue you need to set the queue property like this:
<project name="MyFirstProject" queue="Q1" queuePriority="1">
The default is one queue per project. If you manually set the same queue (for example Q1) for all your projects, you will have a single shared queue.
As for queuePriority, the projects not yet started in the queue are ordered by queuePriority; projects with a lower queuePriority start first.
It's all described in the CC.NET documentation, which is currently offline due to a problem at SourceForge.
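For illustration, a minimal ccnet.config fragment (project names are hypothetical) where two projects share queue Q1, so scheduled and forced builds are serialized:

<cruisecontrol>
  <project name="MyFirstProject" queue="Q1" queuePriority="1">
    <!-- triggers, sourcecontrol, tasks ... -->
  </project>
  <project name="MySecondProject" queue="Q1" queuePriority="2">
    <!-- same queue: builds for both projects run one at a time, lower queuePriority first -->
  </project>
</cruisecontrol>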
