Error when launching an instance "not enough resources vm instances" - eucalyptus

I was able to use the web interface but I get this error when launching an instance. "Not enough resources (0 in cluster1 < 1): vm instances."
I installed Eucalyptus 4 manually on a single physical server running CentOS 6.5 (minimal)
Various articles I found, including the one linked below, suggest that the Node Controller is not registered. Apparently the evidence of this (shown further down) is that the "free/max" column shows all zeros (0000).
http://opensource.sys-con.com/node/1349819
The registration command from the instructions seemed to work fine when I ran it:
[root@server2 eucaconsole]# /usr/sbin/euca_conf --register-nodes "172.17.1.22"
INFO: We expect all nodes to have eucalyptus installed in $EUCALYPTUS for key synchronization.
root@172.17.1.22's password:
...done
The command given in the article uses an option that is no longer supported (perhaps deprecated):
[root@server2 eucaconsole]# euca_conf --no-rsync --discover-nodes
Usage: euca_conf [options]
euca_conf: error: no such option: --discover-nodes
Below is my output, which shows all zeros in the "free" and "max" columns:
[root@server2 eucaconsole]# euca-describe-availability-zones verbose
AVAILABILITYZONE cluster1 172.17.1.22 arn:euca:eucalyptus:cluster1:cluster:cc-22/
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0000 / 0000 1 256 5
AVAILABILITYZONE |- t1.micro 0000 / 0000 1 256 5
AVAILABILITYZONE |- m1.medium 0000 / 0000 1 512 10
AVAILABILITYZONE |- c1.medium 0000 / 0000 2 512 10
AVAILABILITYZONE |- m1.large 0000 / 0000 2 512 10
AVAILABILITYZONE |- m1.xlarge 0000 / 0000 2 1024 10
AVAILABILITYZONE |- c1.xlarge 0000 / 0000 2 2048 10
AVAILABILITYZONE |- m2.xlarge 0000 / 0000 2 2048 10
AVAILABILITYZONE |- m3.xlarge 0000 / 0000 4 2048 15
AVAILABILITYZONE |- m2.2xlarge 0000 / 0000 2 4096 30
AVAILABILITYZONE |- m3.2xlarge 0000 / 0000 4 4096 30
AVAILABILITYZONE |- cc1.4xlarge 0000 / 0000 8 3072 60
AVAILABILITYZONE |- m2.4xlarge 0000 / 0000 8 4096 60
AVAILABILITYZONE |- hi1.4xlarge 0000 / 0000 8 6144 120
AVAILABILITYZONE |- cc2.8xlarge 0000 / 0000 16 6144 120
AVAILABILITYZONE |- cg1.4xlarge 0000 / 0000 16 12288 200
AVAILABILITYZONE |- cr1.8xlarge 0000 / 0000 16 16384 240
AVAILABILITYZONE |- hs1.8xlarge 0000 / 0000 48 119808 24000
The most relevant post I could find on the community forum and on Stack Overflow is the one below, which didn't help me:
Not enough resources available eucalyptus describe availability zones
Thanks in advance to anyone who can help me fix this.

Try running euca-describe-nodes.
If you get output similar to the following:
NODE PARTI00 10.111.5.176 ENABLED
Then your node is properly registered. One common problem is a lack of disk space. The standard CentOS 6.5 minimal install will sometimes create a separate partition for /home and put the lion's share of the space there. Check the INSTANCE_PATH variable in your eucalyptus.conf and make sure the directory it points to is on a partition with plenty of space.
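For example, a quick check along these lines (the config path is the standard package location, and /var/lib/eucalyptus/instances is the usual default value of INSTANCE_PATH; adjust to your layout):
# see where instances will be stored
grep ^INSTANCE_PATH /etc/eucalyptus/eucalyptus.conf
# then confirm that partition actually has room
df -h /var/lib/eucalyptus/instances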

Edit /etc/eucalyptus/eucalyptus.conf, look for MAX_CORES, and set it according to the number of instances you want to be able to launch. This solved the issue for me.
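Something along these lines, assuming the standard CentOS packages (the value 4 is only an example):
# /etc/eucalyptus/eucalyptus.conf on the Node Controller
MAX_CORES="4"
# restart the NC so it picks up the new value
service eucalyptus-nc restart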

Related

Chinese tracker TCP frame decoding

So I have this pet tracker I got from China, this model (not advertising at all). It includes an option to change the report server, so I set it up to report to my own server, but now I'd like to be able to "decode" the TCP frames.
Here are two examples of what is sent (they are not sent together). It's really not intuitive, so I'm posting this here hoping some of you are better at reading between the lines.
Frame 1
4500 0040 affa 4000 7506 3884 4dcd 9901
25bb 10b0 f05a 1e4b 1123 3ec4 0000 0000
b002 4fb0 e868 0000 0204 0550 0103 0300
0101 0402 0101 080a 0003 8282 0000 0000
Frame 2
4500 0040 affb 4000 7506 3883 4dcd 9901
25bb 10b0 d67e 1e4b 126d 9432 0000 0000
b002 4fb0 67e5 0000 0204 0550 0103 0300
0101 0402 0101 080a 0003 c629 0000 0000
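In case it helps anyone who wants to load these into a proper decoder, something like this should turn a dump back into a pcap (assuming the bytes start at the IPv4 header, hence the raw-IP link type; file names are placeholders):
printf '%s' '4500 0040 affa 4000 7506 3884 4dcd 9901 25bb 10b0 f05a 1e4b 1123 3ec4 0000 0000 b002 4fb0 e868 0000 0204 0550 0103 0300 0101 0402 0101 080a 0003 8282 0000 0000' \
  | xxd -r -p | od -Ax -tx1 -v > frame1.hex
# 101 = LINKTYPE_RAW, i.e. raw IP with no Ethernet header
text2pcap -l 101 frame1.hex frame1.pcap
tcpdump -nn -vvx -r frame1.pcap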
Here is the information about what might be sent:
Device IMEI : 013347005954573
Device "ID" (used to log in to gps18.com servers) : 4700595457
Tracker location : ~ N43.54XXXX,W1.46XXXX (censored; I don't want my exact location revealed here. This is the value sent when I request a Google Maps URL via SMS)
I can also have this sent to your server if you wish to have some samples (give me IP + Port)
Thx
I did not fully understand the usage of the server-change command.
I stumbled upon this https://decoded.avast.io/martinhron/the-secret-life-of-gps-trackers/
In fact, the tracker has to get a response from the server to work properly, which led me to set up a simple MITM (with socat); tcpdump is now much more verbose, showing the device scanning nearby WiFi networks and more, thanks China!! All without encryption, of course.
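A relay along these lines is enough (the listening port and the vendor host below are placeholders for whatever the tracker is actually configured with):
# accept the tracker's connection, forward it to the real server, and log both directions
socat -v TCP-LISTEN:8821,fork,reuseaddr TCP:vendor.example.com:8821 &
# watch the cleartext as it flows through
tcpdump -i any -nn -X port 8821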

Discrepancy in disk usage

I'm troubleshooting a disk usage issue on a CentOS system (one of the partitions was growing too fast), and I noticed that one of my directories is using 3.1 GB:
$ du -hs /var/log/mongodb/
3.1G /var/log/mongodb/
$ df -h /var/log/mongodb/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-log 4.0G 3.7G 324M 93% /var/log
However, when I analyse the directory contents, I realize it only contains one file, and that file is not that large (2.1 GB):
$ ls -larth /var/log/mongodb/
total 3.1G
drwxr-xr-x 2 mongod mongod 24 Jul 2 2019 .
drwxr-xr-x. 22 root root 4.0K May 1 03:50 ..
-rw-r----- 1 mongod mongod 2.1G May 1 08:41 mongod.log
How can this happen?
Stat command:
$ stat /var/log/mongodb/mongod.log
File: ‘/var/log/mongodb/mongod.log’
Size: 2448779949 Blocks: 4880912 IO Block: 4096 regular file
Device: fd08h/64776d Inode: 6291527 Links: 1
Access: (0640/-rw-r-----) Uid: ( 996/ mongod) Gid: ( 994/ mongod)
Access: 2020-05-01 10:02:57.136265481 +0000
Modify: 2020-05-04 10:05:37.409626901 +0000
Change: 2020-05-04 10:05:37.409626901 +0000
Birth: -
Another example, on another host:
$ df -kh | grep var
/dev/dm-3 54G 52G 2.1G 97% /var
$ du -khs /var/
25G /var/
Is this somehow related to the difference between file size and the actual space occupied on disk (due to disk blocks)? If so, how can I perform a defragmentation/optimization?
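For reference, this is roughly how I plan to compare the two measurements (the mount point is /var/log on the first host, per the df output above):
# apparent size: the sum of file sizes, roughly what ls adds up
du -sh --apparent-size /var/log/mongodb/
# allocated size: the blocks the filesystem actually charges for
du -sh /var/log/mongodb/
# deleted files that are still held open also count toward df but not du
lsof +L1 /var/log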

tcpdump - No connection and strange packets (oui unknown, ethertype Unknown (0x22f4))

I have a Xen Server which recently got a new NIC. The server was removed from the pool, the interfaces got new IDs as per the documentation, and the server was re-added.
Since then, I cannot connect on some of the interfaces the way I used to. The interfaces in question were used to connect to our Storage Servers.
Even after searching the interwebs for quite some time, I am confused by the configuration, as there seems to be a VLAN on the other side of the interfaces, but that VLAN isn't configured on the Xen Server. Still, the same configuration is present on other servers, where it seems to work.
What confuses me the most are the following messages, captured on eth4, which is part of a bridge (xenbr4) that carries its own IP:
[root@xensrv01 ~]# tcpdump -vv -i eth4
tcpdump: WARNING: eth4: no IPv4 address assigned
tcpdump: listening on eth4, link-type EN10MB (Ethernet), capture size 65535 bytes
10:10:48.175669
10:10:49.359977 STP 802.1w, Rapid STP, Flags [Learn, Forward], bridge-id 80c8.<bridge-id>.8009, length 43
message-age 2.00s, max-age 20.00s, hello-time 2.00s, forwarding-delay 15.00s
root-id 80c8.<root-id>, root-pathcost 6, port-role Designated
10:10:49.581931 <mac-1> (oui Unknown) > <mac-2> (oui Unknown), ethertype Unknown (0x22f4), length 125:
0x0000: 831b 0106 0f01 0001 01cc 3e5f 1422 a001 ..........>_."..
0x0010: 2c00 6f40 cc3e 5f14 22a0 0401 0201 0081 ,.o@.>_.".......
0x0020: 01c0 8f3d 0000 0108 0004 e0ec c001 0001 ...=............
0x0030: 0203 0000 4002 0301 b880 0205 0708 dfff ....@...........
0x0040: d803 1ee0 ec00 0100 01e0 ec01 b801 b8e0 ................
0x0050: ec07 0807 09e0 ec07 0b07 19e0 ec07 1b07 ................
0x0060: 1c91 01c0 d309 0000 0000 0000 0000 00 ...............
10:10:50.392126 <mac-3> (oui Unknown) > Broadcast, ethertype Unknown (0x9001), length 64:
0x0000: 0201 000b 0001 0000 0002 0000 0000 0000 ................
0x0010: 0000 0000 0100 0000 0000 0000 0000 0000 ................
0x0020: 0000 0000 0000 0000 0000 1600 0000 0000 ................
0x0030: 0840 .@
This is pretty much all you can see on the interface. It doesn't appear on any of the other six servers we have.
What does this mean? Is my NIC broken (again)?
I would appreciate some insight into this, as I have no idea where to look anymore and have spent a considerable amount of time already. I can provide any additional necessary information.
In the end, it came down to two things:
Having the right cables connecting the right ports to each other X)
Deleting and re-creating the network bridge
This gave me network connectivity again and the errors above stopped. I did it via Xen Center though, which left all the VMs running in the pool without a connection for some time. Not recommended ;-)
I am still trying to find out how to re-create just the bridge on a single Xen host, rather than on all of them.
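The closest thing I have found so far is re-plugging the PIF for that NIC on the affected host only; this is untested on my side, the UUID is a placeholder, and I cannot yet confirm it rebuilds the bridge the same way:
# find the PIF behind eth4 on this host
xe pif-list device=eth4 params=uuid,host-name-label
# unplug and re-plug it, which should tear down and rebuild xenbr4 on this host only
xe pif-unplug uuid=<pif-uuid>
xe pif-plug uuid=<pif-uuid>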

How to get tcpdump to save to a file without using binary?

I want to capture the packet content description and the packet data to a file with tcpdump for later inspection.
Currently I am using the -w option to save packet data to a file:
tcpdump -c 100 -w /root/tcpdump.txt
This saves the packet data to the file but also includes several lines of binary before each packet. However, I would like to have the packet content description (what's normally shown on STDOUT when running tcpdump) shown before the packet data itself (in the same file) without the binary.
So the file should save the following for each packet:
Packet content description
Packet data
Example of what I want to save to the file:
17:17:42.847059 IP some.server.com.17845 > some.host.net.55618: Flags [P.], seq 137568:137888, ack 1185, win 167, length 320
<-- Followed by the raw packet data here -->
This information is to be used for later analysis of the file so we can review the full packets going to a specific host/address.
Can anyone suggest how to do this?
tcpdump -c 100 -w /root/tcpdump.txt
If you use -w with a name that ends with .txt, you're misunderstanding what -w does.
-w writes out a completely binary file, in pcap format, which is intended to be read by tcpdump or by other programs such as Wireshark, NOT to be directly read by humans!
IF the packets, at some layer, are carrying a text-based protocol, such as the FTP control protocol, SMTP, or HTTP requests/responses and their headers, then SOME of the data in the file will be text, but it will NOT all be text. Do NOT treat that as an indication that it is, or should be, a text file.
However, I would like to have the packet content description (what's normally shown on STDOUT when running tcpdump) shown before the packet data itself (in the same file) without the binary.
The packet data itself is binary!
If you mean you want a text hex dump of the packet data, in a form such as
0x0000: 0001 0800 0604 0001 0001 0000 0010 0a78
0x0010: 0452 0000 0000 0000 0a78 0452 0101 0600
after the packet description, so that what you see is like this:
17:49:38.007886 ARP, Request who-has 10.120.4.82 tell 10.120.4.82, length 32
0x0000: 0001 0800 0604 0001 0001 0000 0010 0a78
0x0010: 0452 0000 0000 0000 0a78 0452 0101 0600
then you should do
tcpdump -c 100 -x >/root/tcpdump.txt
so that the text output of tcpdump - the output you get when you don't use -w - is redirected to /root/tcpdump.txt rather than being printed on your terminal or terminal emulator, and so that a hex dump is written as well as a packet description (that's what -x tells tcpdump to do).
This will not write out the link-layer header for the packet in the hex dump; if you want the link-layer header for the packet, e.g.
17:49:38.007886 ARP, Request who-has 10.120.4.82 tell 10.120.4.82, length 32
0x0000: ffff ffff ffff 0001 0000 0010 0806 0001
0x0010: 0800 0604 0001 0001 0000 0010 0a78 0452
0x0020: 0000 0000 0000 0a78 0452 0101 0600
then use -xx rather than -x.
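If you also want a machine-readable copy for later (for example to open in Wireshark), one option is to capture to a pcap file first and generate the text dump from it afterwards; the file names below are just examples:
# capture 100 packets in binary pcap format
tcpdump -c 100 -w /root/capture.pcap
# produce the packet descriptions plus full hex dumps from the saved capture
tcpdump -r /root/capture.pcap -xx > /root/tcpdump.txt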

Can lua source files be obfuscated/encrypted while using it with Nginx HttpLuaModule? If yes then how?

I am using Lua to create a custom authentication layer for my backend services. Nginx is compiled with the Lua module and LuaJIT, and it works fine. I would like to do some encryption of the tokens that I serve back in those Lua files, and I don't want anyone to be able to read the plain-text source files. Can these Lua source files be compiled into a binary or obfuscated/encrypted in such a way that Nginx's access_by_lua_file directive is still able to load them? I know this is not a foolproof method, but it's better than plain text.
Lua string constants are all present in the bytecode even when debugging info is absent, so viewing a string stored in the code takes essentially no effort.
$ luajit -be 'print("hello world")' hello.out
$ luajit hello.out
hello world
$ xxd hello.out
0000000: 1b4c 4a01 0229 0200 0200 0200 0434 0000 .LJ..).......4..
0000010: 0025 0101 003e 0002 0147 0001 0010 6865 .%...>...G....he
0000020: 6c6c 6f20 776f 726c 640a 7072 696e 7400 llo world.print.
$ luajit -bl hello.out
-- BYTECODE -- hello.out:0-0
0001 GGET 0 0 ; "print"
0002 KSTR 1 1 ; "hello world"
0003 CALL 0 1 2
0004 RET0 0 1
If your plan was to hide encryption tokens within the bytecode, I would suggest first devising a reversible scheme so that only an obfuscated version of each token is stored in the plain text of the source code (e.g. shuffle the characters, perform arithmetic on them, etc.) and the real value is reconstructed at runtime.
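To see why this matters, here is a quick check against a hypothetical auth.lua that contains a literal token (the file names are placeholders):
# compile to bytecode; debug info is stripped by default
luajit -b auth.lua auth.raw
# any literal secret in the source is still sitting in the bytecode as plain text
strings auth.raw | grep -i token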
