I have installed OpenStack on a server with a 400 GB hard disk. There are four instances, each of which was set up with 10 GB.
I need more space for one of the nodes. Is it possible to increase the disk size of a running instance?
You can do this with the nova resize command.
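For example (the instance and flavor names here are placeholders; resizing moves the instance to a flavor with a larger disk and has to be confirmed afterwards):
nova flavor-list
nova resize my-instance <flavor-with-bigger-disk>
nova resize-confirm my-instance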
I tried sending almost 5000 output states in a single transaction and ran out of memory. I am trying to figure out how to increase the memory. I tried increasing it in the runnodes.bat file by tweaking the command
java -Xmx1g -jar runnodes.jar %*
But this doesn't seem to increase the heap size. So I tried running the following command for each node manually, with the memory option -Xmx1g.
bash -c 'cd "/build/nodes/Notary" ; "/Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home/jre/bin/java" "-Dname=Notary-corda.jar" "-Dcapsule.jvm.args=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -javaagent:drivers/jolokia-jvm-1.3.7-agent.jar=port=7005,logHandlerClass=net.corda.node.JolokiaSlf4Adapter" "-Xmx1g" "-jar" "corda.jar" && exit'
This solved the out-of-memory issue, but now I'm seeing an ActiveMQ large message size issue:
E 10:57:31-0600 [Thread-1 (ActiveMQ-IO-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$4#2cfd9b0a)] impl.JournalImpl.run - appendAddRecord::java.lang.IllegalArgumentException: Record is too large to store 22545951 {}
java.lang.IllegalArgumentException: Record is too large to store 22545951
at org.apache.activemq.artemis.core.journal.impl.JournalImpl.switchFileIfNecessary(JournalImpl.java:2915) ~[artemis-journal-2.2.0.jar:2.2.0]
Any idea?
This is because you are trying to send a transaction that is almost 20MB in size. In Corda 3 and earlier, the limit on transaction size is 10MB, and this amount is not configurable.
In Corda 4, the limit on transaction size can be configured by the network operator as one of the network parameters (see https://docs.corda.net/head/network-map.html#network-parameters). The logic for allowing a configurable limit is that otherwise, larger nodes could bully smaller nodes off the network by sending extremely large transactions that it would be infeasible for the smaller nodes to process.
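The parameter in question is maxTransactionSize. As a rough, illustrative sketch of what a network operator might put in the network parameters overrides file given to the network bootstrapper (values in bytes; check the exact file format against the docs linked above):
maxMessageSize=20971520
maxTransactionSize=20971520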
We have 20 data nodes and 3 management nodes. Each data node has 45 GB of RAM.
Data node RAM capacity: 45 GB x 20 = 900 GB total RAM
Management node RAM capacity: 100 GB x 3 = 300 GB total RAM
I can see that memory is completely occupied in the Hadoop Resource Manager URL, and submitted jobs are in a waiting state, since the Resource Manager shows about 890 GB of the 900 GB as occupied.
I have therefore raised a request to increase our memory capacity, to avoid usage running up to 890 GB out of 900 GB.
Now the Unix team is saying that on the data nodes about 80% of the 45 GB of RAM is actually free: the free -g command shows it as free (or cache/buffer). On the Hadoop side, however, the Resource Manager URL says memory is completely occupied and some jobs are on hold because of it. I would like to know how Hadoop calculates memory in the Resource Manager, and whether it is worth upgrading the memory, given that it fills up whenever users submit Hive jobs.
Who is right here: the Hadoop Resource Manager output or the Unix free command?
The UNIX free command is correct, because the RM shows reserved memory, not memory actually used.
If I submit a MapReduce job with one map task requesting 10 GB of memory, but the map task only uses 2 GB, then the operating system will only show 2 GB used. The RM will show 10 GB used, because it has to reserve that amount for the task even if the task doesn't use all of it.
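To make this concrete, here is a rough sketch (the jar, class, and paths are placeholders, and it assumes the job goes through ToolRunner so the -D options are honoured). The RM reserves the full mapreduce.map.memory.mb you request per task, while free on the data node only reflects what the JVM actually touches:
hadoop jar my-job.jar MyJob \
  -D mapreduce.map.memory.mb=10240 \
  -D mapreduce.map.java.opts=-Xmx8192m \
  /input /output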
I am trying to run a Kaa server on a Raspberry Pi. I have successfully compiled it from source on the ARM processor and installed the resulting .deb package.
However, when I try to start the kaa-node service I get the following error.
Starting Kaa Node...
Invalid maximum heap size: -Xmx4G
The specified size exceeds the maximum representable size.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I have searched through the /etc/kaa-node/conf directory and the bin files, but I can't see where the "4G" setting is actually set, so that I might change it to something smaller and launch this on the Pi, which has 1 GB of RAM.
Can someone point me to the correct place to make this modification, while still being able to launch the server as a service using the built-in utilities? I know I could just run it with java and pass it my own JAVA_OPTIONS.
I think you can try finding the "kaa-node" file in /etc/default/ and modifying the JAVA_OPTIONS in it.
We modify it to configure the heap size and GC for our Kaa server.
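As a rough sketch, the edit might look like this (the exact variable name in your /etc/default/kaa-node may differ, so adjust it to whatever the file actually uses):
JAVA_OPTIONS="-Xmx512m -Xms256m"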
You can try starting the kaa-node service with
service kaa-node start -Xmx500M
to limit the heap size to 500 MB.
If that doesn't work, try
export _JAVA_OPTIONS=-Xmx500m
to set a global JVM heap size limit.
I cannot find any information on this topic. Can I overbook storage with virtual disks on XenServer 6.2? E.g. I have 2 TB of storage and 2 VMs with 2 TB virtual disks each.
I'm using XenServer 6.2 with around 12 VMs and 4 TB of storage.
One of the things I've noticed is that even if the size of a VDI (virtual disk) is fixed, the amount really used on disk varies depending on how full the disks are.
For example, if you create the VMs with 2 TB each, this is a possible scenario:
VM1, after install and some moderate use, is using 1 TB out of 2 TB. The VHD file would take up around 1 TB on the storage drive.
VM2, after install, is using 200 GB out of 2 TB. Its VHD likewise takes up around that much space on the storage.
The problems you might encounter depend on the total amount of space you have on your SR.
When the VDIs grow, they don't release that space afterwards.
Let's say you delete 500 GB from VM1: its VHD will still use 1 TB on the storage. And if VM2 grows past your storage limit, problems will appear; the OS will probably stop working once there is no more space left to allocate.
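If you want to see how much of the SR is really consumed versus its total size, you can check on the host:
xe sr-list params=name-label,physical-size,physical-utilisation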
I hope my answer helps you.
I would suggest you use NFS, or re-create your LUN with an ext3 filesystem.
This will let you over-assign the storage, but you have to make sure actual disk usage doesn't go beyond 2 TB.
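As an illustrative sketch (server address and export path are placeholders), an NFS SR can be created like this, and VDIs on it are thin-provisioned VHD files:
xe sr-create name-label="NFS storage" type=nfs shared=true device-config:server=192.168.1.10 device-config:serverpath=/exports/xenserver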
We have been using a micro instance throughout our development phase. But now, as we are about to go live, we want to upgrade our instance to type medium.
I followed these simple steps: stop the running instance, change the instance type to medium, and then start the instance again. I can see the instance is upgraded in terms of memory, but the storage still shows as 8 GB. According to the published configuration, an m1.medium instance should have 1 x 410 GB of storage.
Am I doing anything wrong or missing something? Please help!
Keep in mind, EBS storage (which you are currently using) and Instance storage (which is what you are looking for) are two different things in EC2.
EBS storage is similar to a SAN volume. It exists outside of the host. You can create multiple EBS volumes of up to 1 TB each and attach them to any instance size. Smaller instances have lower available bandwidth to EBS volumes, so they will not be able to take full advantage of many volumes.
Instance storage is essentially hard drives attached to the host. While it's included in the instance cost, it comes with some caveats. It is not persistent: if you stop your instance, or the host fails for any reason, the data stored on the instance store is lost. For this reason, it has to be explicitly enabled when the instance is first launched.
Generally, it's not recommended to use instance storage unless you are comfortable with, and have designed your infrastructure around, its non-persistence.
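If you simply need more space, the usual route is to add another EBS volume. A rough sketch with the AWS CLI (IDs, availability zone, and device name are placeholders):
aws ec2 create-volume --size 410 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf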
The sizes mentioned for the instance types are just defaults. If you create an image from a running micro instance, it will keep that storage size as its default, even if the image is later launched as a medium instance.
But you can change the storage size when launching the instance:
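For example, an illustrative AWS CLI launch that enlarges the root volume (the AMI ID is a placeholder and the root device name depends on the image):
aws ec2 run-instances --image-id ami-12345678 --instance-type m1.medium --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":410}}]'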
You can also change the default storage size when creating an image:
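Again an illustrative AWS CLI sketch (instance ID, image name, and device name are placeholders):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "bigger-root-image" --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":410}}]'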
WARNING: This only resizes the volume. It will not necessarily resize the partition existing on it, nor the file system on that partition. On Linux it resized everything automagically (IIRC); on a Windows instance you will have to resize things yourself. For other OSes I have no idea.
I had a similar situation. I created an m2.medium instance with 400 GB of storage, but when I log into the shell and issue the command
df -h
... it shows an 8 GB partition.
However, the command
sudo fdisk -l
showed that the device was indeed 400 GB. The problem is that Amazon created a default 8 GB filesystem on it, and it needs to be expanded to the full size of the device. The command to do this is:
sudo resize2fs -f /dev/xvda1
where /dev/xvda1 is the mounted root volume. Use the 'df -h' command to be sure you have the right volume name.
Then simply reboot the instance, log in again, and you'll see that df -h now reports nearly 400 GB of available space. Problem solved.