Kubernetes autoscale fails to get metrics (Heapster already installed in kube-system namespace)

I created a mini cluster with Vagrant and CentOS 7. I managed to install kube-dns and Heapster, but when I try to test autoscaling with the php-apache example it doesn't work:
failed to get CPU consumption and request: failed to unmarshall heapster response: invalid character 'E' looking for beginning of value
That's odd, because I can see the metrics (and the limits) in Grafana. My kube-dns and Heapster are in the kube-system namespace, so it should work.
I am running Kubernetes 1.2; if someone can help, that would be awesome.
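For reference, the test follows the standard php-apache walkthrough, roughly like this (the image name and exact flags are from the upstream example and may differ slightly from what I actually ran):
# Run the php-apache example and expose it on port 80
kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80
# Autoscale it at 50% CPU, between 1 and 10 replicas
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
# The error above shows up in the autoscaler status
kubectl get hpa
kubectl describe hpa php-apache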
Here are the Heapster logs:
[vagrant@localhost ~]$ kubectl logs --namespace=kube-system heapster-xzy31
I0625 11:08:50.041788 1 heapster.go:65] /heapster --source=kubernetes:http://192.168.50.130:8080?inClusterConfig=false&useServiceAccount=true&auth= --sink=influxdb:http://monitoring-influxdb:8086
I0625 11:08:50.042310 1 heapster.go:66] Heapster version 1.1.0
I0625 11:08:50.090679 1 configs.go:60] Using Kubernetes client with master "http://192.168.50.130:8080" and version v1
I0625 11:08:50.090705 1 configs.go:61] Using kubelet port 10255
E0625 11:09:00.097603 1 influxdb.go:209] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp: lookup monitoring-influxdb on 10.254.0.10:53: read udp 172.17.39.2:38757->10.254.0.10:53: read: connection refused, will retry on use
I0625 11:09:00.097624 1 influxdb.go:223] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s
I0625 11:09:00.097638 1 heapster.go:92] Starting with InfluxDB Sink
I0625 11:09:00.097641 1 heapster.go:92] Starting with Metric Sink
I0625 11:09:00.103486 1 heapster.go:171] Starting heapster on port 8082
I0625 11:10:05.003399 1 manager.go:79] Scraping metrics start: 2016-06-25 11:09:00 +0000 UTC, end: 2016-06-25 11:10:00 +0000 UTC
E0625 11:10:05.003479 1 kubelet.go:279] Node 192.168.50.131 is not ready
I0625 11:10:05.051081 1 manager.go:152] ScrapeMetrics: time: 47.581507ms size: 70
I0625 11:10:05.060501 1 influxdb.go:201] Created database "k8s" on influxDB server at "monitoring-influxdb:8086"
I0625 11:11:05.001120 1 manager.go:79] Scraping metrics start: 2016-06-25 11:10:00 +0000 UTC, end: 2016-06-25 11:11:00 +0000 UTC
I0625 11:11:05.091844 1 manager.go:152] ScrapeMetrics: time: 90.657932ms size: 132

OK, I found the solution: my cluster was misconfigured. I had to install flannel on the master too, with the option --iface=eth1 because Vagrant's default interface (eth0) is the NAT adapter.
I followed this guide: http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services, but it doesn't mention installing flannel on the master.
Now everything is working.
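For anyone hitting the same issue, this is roughly what the master needed (a sketch based on the CentOS 7 flannel package; the etcd endpoint shown is this cluster's master address, and the variable names depend on the flannel package version):
# On the master: install flannel and point it at the same etcd the minions use
yum install -y flannel
# /etc/sysconfig/flanneld (illustrative values, adjust to your flannel version)
# FLANNEL_ETCD="http://192.168.50.130:2379"
# FLANNEL_ETCD_KEY="/atomic.io/network"
# FLANNEL_OPTIONS="--iface=eth1"   # bind to the host-only interface, not Vagrant's NAT eth0
systemctl enable flanneld
systemctl restart flanneld docker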

Related

Error on Starting MySQL Cluster 8.0 Data Node on Ubuntu 22.04 LTS

When I start data node 1 (10.1.1.103) of MySQL Cluster 8.0 on Ubuntu 22.04 LTS, I get the following error:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO -- Angel allocated nodeid: 2
When I start data node 2 (10.1.1.105), I get the following error:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 11:10:04 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 11:10:04 [ndbd] ERROR -- Failed to allocate nodeid, error: 'Error: Could not alloc node id at 10.1.1.102:1186: Connection done from wrong host ip 10.1.1.105.'
The management node log file (/var/lib/mysql-cluster/ndb_1_cluster.log) reports:
2023-01-02 11:28:47 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
What is the relevance of the failure to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list (No such file or directory)?
Why is the data node on 10.1.1.105 unable to allocate a node id?
I initially installed a single Management Node on 10.1.1.102:
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
dpkg -i mysql-cluster-community-management-server_8.0.31-1ubuntu22.04_amd64.deb
mkdir /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini
The configuration set up in config.ini:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
[ndb_mgmd]
# Management process options:
hostname=10.1.1.102 # Hostname of the manager
datadir=/var/lib/mysql-cluster # Directory for the log files
[ndbd]
hostname=10.1.1.103 # Hostname/IP of the first data node
NodeId=2 # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files
[ndbd]
hostname=10.1.1.105 # Hostname/IP of the second data node
NodeId=3 # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files
[mysqld]
# SQL node options:
hostname=10.1.1.102 # In our case the MySQL server/client is on the same Droplet as the cluster manager
I then started the management server, killed the running process, and created a systemd unit file for the cluster manager:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
pkill -f ndb_mgmd
vi /etc/systemd/system/ndb_mgmd.service
I added the following configuration:
[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service
[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
I then reloaded the systemd daemon to apply the changes, started and enabled the cluster manager, and checked its status:
systemctl daemon-reload
systemctl start ndb_mgmd
systemctl enable ndb_mgmd
Here is the status of the Cluster Manager:
# systemctl status ndb_mgmd
● ndb_mgmd.service - MySQL NDB Cluster Management Server
Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2023-01-01 08:25:07 CST; 27min ago
Main PID: 320972 (ndb_mgmd)
Tasks: 12 (limit: 9273)
Memory: 2.5M
CPU: 35.467s
CGroup: /system.slice/ndb_mgmd.service
└─320972 /usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Jan 01 08:25:07 nuc systemd[1]: Starting MySQL NDB Cluster Management Server...
Jan 01 08:25:07 nuc ndb_mgmd[320971]: MySQL Cluster Management Server mysql-8.0.31 ndb-8.0.31
Jan 01 08:25:07 nuc systemd[1]: Started MySQL NDB Cluster Management Server.
I then set up a data node on 10.1.1.103, installing the dependencies, downloading the data node package, and setting up its config:
apt update && apt -y install libclass-methodmaker-perl
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
dpkg -i mysql-cluster-community-data-node_8.0.31-1ubuntu22.04_amd64.deb
vi /etc/my.cnf
I entered the address of the Cluster Management Node in the configuration:
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=10.1.1.102 # location of cluster manager
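As a sanity check at this point, a quick test that the data node can actually reach the management server on the NDB port (1186) rules out basic network problems (this check is not part of the original steps):
# From the data node: confirm the management server answers on port 1186
nc -zv 10.1.1.102 1186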
I then created a data directory and started the node:
mkdir -p /usr/local/mysql/data
ndbd
This is when I got the "Failed to open" error on data node 1 (10.1.1.103):
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO -- Angel allocated nodeid: 2
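For what it's worth, node-id allocation can also be inspected from the management node itself; ndb_mgm lists which node ids are configured and from which host each one is expected to connect (the output below is illustrative, not captured from this cluster):
# On the management node (10.1.1.102)
ndb_mgm -e show
# Illustrative output shape:
# [ndbd(NDB)]     2 node(s)
# id=2 (not connected, accepting connect from 10.1.1.103)
# id=3 (not connected, accepting connect from 10.1.1.105)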
UPDATED (2023-01-02)
Thank you @MauritzSundell. I corrected the (private) IP addresses above and no longer get:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
ERROR: Unable to connect with connect string: nodeid=0,10.1.1.2:1186
Retrying every 5 seconds. Attempts left: 12 11 10 9 8 7 6 5 4 3 2 1, failed.
2023-01-01 14:41:57 [ndbd] ERROR -- Could not connect to management server, error: ''
Also @MauritzSundell, in order to use the ndbmtd process rather than the ndbd process, does any alteration need to be made to any of the configuration files (e.g. /etc/systemd/system/ndb_mgmd.service)?
What is the appropriate reference/tutorial documentation for MySQL Cluster 8.0? Is it the "MySQL NDB Cluster 8.0" guide at:
https://downloads.mysql.com/docs/mysql-cluster-excerpt-8.0-en.pdf
Or is it the "MySQL InnoDB Cluster" documentation at:
https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-introduction.html
I am not sure I understand the difference.

Chilkat HTTP with https

I'm currently using the Chilkat HTTP ActiveX control (version 9.3.2.0) with VB6. One of the servers I download files from is switching over to HTTPS, but I can't get it to work. Using HTTP it works perfectly, but when I change the URL to HTTPS the Download call returns 0.
Here is the result of Http.LastErrorText:
ChilkatLog:
Download:
DllDate: Aug 5 2012
UnlockPrefix: **********
Username: BILL-DESKTOP:Bill
Architecture: Little Endian; 32-bit
Language: ActiveX
VerboseLogging: 0
backgroundThread: 0
url: https://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl?file=gfs.t12z.pgrb2.0p25.f000&lev_10_m_above_ground=on&lev_2_m_above_ground=on&lev_entire_atmosphere=on&lev_entire_atmosphere_%5C%28considered_as_a_single_layer%5C%29=on&lev_mean_sea_level=on&lev_surface=on&var_APCP=on&var_PRMSL=on&var_TCDC=on&var_TMP=on&var_UGRD=on&var_VGRD=on&leftlon=0&rightlon=360&toplat=90&bottomlat=-90&dir=%2Fgfs.2018120712
toLocalPath: C:\Progra~1\PCGrADS\gfs\grib\gfs_pgrbf_000.grib2
localFileAlreadyExists: 0
QuickGetToOutput_Download:
qGet_1:
simpleHttpRequest_3:
httpMethod: GET
requestUrl: https://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl?file=gfs.t12z.pgrb2.0p25.f000&lev_10_m_above_ground=on&lev_2_m_above_ground=on&lev_entire_atmosphere=on&lev_entire_atmosphere_%5C%28considered_as_a_single_layer%5C%29=on&lev_mean_sea_level=on&lev_surface=on&var_APCP=on&var_PRMSL=on&var_TCDC=on&var_TMP=on&var_UGRD=on&var_VGRD=on&leftlon=0&rightlon=360&toplat=90&bottomlat=-90&dir=%2Fgfs.2018120712
Connecting to web server...
httpServer: nomads.ncep.noaa.gov
port: 443
Using HTTPS.
ConnectTimeoutMs_1: 10000
calling ConnectSocket2
IPV6 enabled connect with NO heartbeat.
connectingTo: nomads.ncep.noaa.gov
dnsCacheLookup: nomads.ncep.noaa.gov
Resolving domain name (IPV4)
GetHostByNameHB_ipv4: Elapsed time: 140 millisec
myIP_1: 192.168.1.38
myPort_1: 55564
connect successful (1)
clientHelloMajorMinorVersion: 3.1
buildClientHello:
majorVersion: 3
minorVersion: 1
numRandomBytes: 32
sessionIdSize: 0
numCipherSuites: 10
numCompressionMethods: 1
--buildClientHello
TlsAlert:
level: fatal
descrip: handshake failure
--TlsAlert
Closing connection in response to fatal error.
Failed to read incoming handshake messages. (1)
Client handshake failed. (3)
Failed to connect to HTTP server.
connectElapsedMs: 640
--simpleHttpRequest_3
--qGet_1
--QuickGetToOutput_Download
bFileDeleted: 1
totalElapsedMs: 672
ContentLength: 0
Failed.
--Download
--ChilkatLog
What am I doing wrong?
Regards,
Bill
You are using an old version from 2012 that did not yet implement TLS 1.2; the log shows the client offering TLS 1.0 (clientHelloMajorMinorVersion: 3.1) and the server answering with a fatal handshake-failure alert. Chilkat added TLS 1.2 support many years ago, and the latest version should work fine.
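If you want to confirm what the server requires before upgrading, you can probe it with openssl from any machine (a quick TLS check, not Chilkat-specific; very new openssl builds may refuse the TLS 1.0 handshake outright):
# Old TLS 1.0 handshake, as the 2012 control attempts -- expect a handshake failure
openssl s_client -connect nomads.ncep.noaa.gov:443 -tls1 </dev/null
# TLS 1.2 handshake -- expect a certificate chain and a successful session
openssl s_client -connect nomads.ncep.noaa.gov:443 -tls1_2 </dev/null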

Upload Stemcell Error - Keystone connection timed out

I am getting a connection timeout error while uploading a stemcell to the BOSH director. I am using BOSH CLI v2. The following is my error log:
> bosh -e sdp-bosh-env upload-stemcell https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12 --fix
Using environment '10.82.73.8' as client 'admin'
Task 13
Task 13 | 05:02:40 | Update stemcell: Downloading remote stemcell (00:00:51)
Task 13 | 05:03:31 | Update stemcell: Extracting stemcell archive (00:00:03)
Task 13 | 05:03:34 | Update stemcell: Verifying stemcell manifest (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Checking if this stemcell already exists (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Uploading stemcell bosh-openstack-kvm-ubuntu-trusty-go_agent/3541.12 to the cloud (00:10:41)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 | 05:14:16 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 Started Sat Apr 7 05:02:40 UTC 2018
Task 13 Finished Sat Apr 7 05:14:16 UTC 2018
Task 13 Duration 00:11:36
Task 13 error
Uploading remote stemcell 'https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12':
Expected task '13' to succeed but state is 'error'
Exit code 1
Check OpenStack's security group for the BOSH Director machine.
The SG should contain an 'ALLOW IPv4 to 0.0.0.0/0' egress rule; if it doesn't, add at least an egress TCP rule to 10.81.102.5 on port 5000.
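For example, with the OpenStack CLI the minimal egress rule would look roughly like this (the security group name is a placeholder for whatever group the director VM actually uses):
# Allow the director VM to reach Keystone on port 5000 (group name is a placeholder)
openstack security group rule create --egress --protocol tcp --dst-port 5000 --remote-ip 10.81.102.5/32 bosh-director-sg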
Check the connection by SSHing into the director:
bbl ssh --director
nc -tvn 10.81.102.5 5000
If that doesn't help, check the network/firewall configuration.
https://bosh.io/docs/uploading-stemcells/
https://github.com/cloudfoundry/bosh-bootloader/blob/master/terraform/openstack/templates/resources.tf

How do I connect to a Kafka cluster on VirtualBox?

Here's my local setup: 3 VMs (using VirtualBox), with Kafka and ZooKeeper installed on all three. They are all talking to each other as well.
I am trying to use kafka-console-producer from my local machine, which requires a broker list. I am supplying the IPs of my VMs, but it doesn't seem to be working. I've tried the advertised.host.name properties on the VMs too, but that has no effect. Here is my server.properties from the three machines:
Server 1:
broker.id=4
port=9092
host.name=10.30.3.4
advertised.host.name=10.30.3.4
advertised.port=9092
zookeeper.connect=10.30.3.4:2181
zookeeper.connection.timeout.ms=6000
Server 2:
broker.id=3
port=9092
host.name=10.30.3.3
advertised.host.name=10.30.3.3
advertised.port=9092
zookeeper.connect=10.30.3.3:2181
zookeeper.connection.timeout.ms=6000
Server 3:
broker.id=2
port=9092
host.name=10.30.3.2
advertised.host.name=10.30.3.2
advertised.port=9092
zookeeper.connect=10.30.3.2:2181
zookeeper.connection.timeout.ms=6000
My VirtualBox VMs also have port forwarding set up; the other two machines are configured similarly, with only the forwarded ports tweaked a bit.
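The forwarding rules are roughly as follows, reconstructed from the ports used below since the original screenshot is not included here (VM names are placeholders):
# NAT port forwarding on the first VM: host 9999 -> guest 2181 (ZooKeeper), host 9502 -> guest 9092 (Kafka)
VBoxManage modifyvm "kafka-vm-1" --natpf1 "zk,tcp,,9999,,2181"
VBoxManage modifyvm "kafka-vm-1" --natpf1 "kafka,tcp,,9502,,9092"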
I am able to connect to ZooKeeper just fine:
bin/zkCli.sh -server 127.0.0.1:9999
connects to ZooKeeper on the VM without problems. But when I try kafka-console-producer, it fails as soon as I send messages:
bin/kafka-console-producer.sh --broker-list 127.0.0.1:9502 --topic partition2replica2 --timeout 3000
leads to:
[2016-02-17 15:06:36,943] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
hi
there
[2016-02-17 15:07:23,699] WARN Failed to send producer request with correlation id 3 to broker 3 with data for partitions [partition2replica2,1] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
[2016-02-17 15:07:25,318] WARN Failed to send producer request with correlation id 7 to broker 3 with data for partitions [partition2replica2,1] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async
I am not sure what I am doing wrong here. Any ideas? (I can provide ifconfig output if anyone wants.) Any help would be appreciated.
[Edit 1]: Adding the ZooKeeper quorum status output. The ensemble seems to be in quorum:
echo stat| nc 10.30.3.2 2181
Received: 81
Sent: 80
Connections: 1
Outstanding: 0
Mode: follower
Node count: 149
echo stat| nc 10.30.3.3 2181
Received: 660
Sent: 664
Connections: 1
Outstanding: 0
Zxid: 0x600000109
Mode: leader
Node count: 149
echo stat| nc 10.30.3.4 2181
Received: 293
Sent: 295
Connections: 1
Outstanding: 0
Zxid: 0x600000109
Mode: follower
Node count: 149
As far as I can tell from your setup, the ZooKeeper instances on the three nodes should also be in quorum with each other to support the three Kafka broker instances as one cluster. You have provided only the Kafka config, so I cannot tell whether they are configured that way.
You can check by using the four-letter commands on each ZooKeeper node, like below:
echo stat | nc <zk ip> <zk port>
echo mntr | nc <zk ip> <zk port>
One should be the "leader" and the other two should be "followers".
I am not sure if they will work as expected if they are not configured to be in quorum.
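For reference, a minimal sketch of a three-node quorum configuration in each node's zoo.cfg (the paths are assumptions; the server.N lines are what make the ensemble a quorum):
# zoo.cfg on every node (sketch; dataDir is an assumption)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=10.30.3.2:2888:3888
server.2=10.30.3.3:2888:3888
server.3=10.30.3.4:2888:3888
# plus a myid file in dataDir on each node, containing 1, 2 or 3 respectively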

Cannot start Plone production instances normally with plone.app.async enabled

After adding plone.app.async, I cannot start my production instances normally using 'bin/instance start'. However, the instances run fine using 'foreground' and I can start the production instances on my development machine just fine. (The machines have almost identical configurations but the production machine has almost 100GB of data in blob storage.)
Additionally, I can start the instances normally if I remove support for plone.app.async, specifically the zcml-additional section, from my buildout. And I can start the worker instance for plone.app.async just fine; it uses almost all the same sections as the regular instances, except its 'zcml-additional' is the worker variant instead of the instance one.
This happens with both the single-db and multi-db setups for plone.app.async.
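For context, the only difference between the instance and worker parts is the zcml-additional snippet, roughly like this (a sketch following the single-db layout; the exact file names should be checked against the plone.app.async README):
# buildout.cfg (sketch, single-db setup)
[instance]
zcml-additional =
    <include package="plone.app.async" file="single_db_instance.zcml" />
[worker]
zcml-additional =
    <include package="plone.app.async" file="single_db_worker.zcml" />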
The instance log shows that it gets trapped in some sort of loop during startup. Here is the log of what happens:
....
2012-02-09T18:31:27 INFO ZServer HTTP server started at Thu Feb 9 18:31:27 2012
Hostname: 0.0.0.0
Port: 8081
2012-02-09T18:31:32 INFO ZServer WebDAV server started at Thu Feb 9 18:31:32 2012
Hostname: 0.0.0.0
Port: 1980
2012-02-09T18:31:32 INFO Zope Set effective user to "plone"
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage ClientStorage (pid=16331) created RW/normal for storage: '1'
2012-02-09T18:31:34 INFO ZEO.cache created temporary cache file '<fdopen>'
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 8100)>
2012-02-09T18:31:34 INFO ZEO.zrpc.Connection(C) (127.0.0.1:8100) received handshake 'Z3101'
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage Connected to storage: ('localhost', 8100)
2012-02-09T18:31:34 INFO ZEO.ClientStorage zeostorage No verification necessary -- empty cache
2012-02-09T18:31:45 INFO ZServer HTTP server started at Thu Feb 9 18:31:45 2012
Hostname: 0.0.0.0
Port: 8081
2012-02-09T18:31:50 INFO ZServer WebDAV server started at Thu Feb 9 18:31:50 2012
Hostname: 0.0.0.0
Port: 1980
....
This repeats forever.
With a logging level of debug, I receive the following output: http://pastebin.com/nnyekuRA
Around line 58 is what I think is the culprit:
2012-02-09T17:18:22 DEBUG ZEO.ClientStorage pickled inval None '\x03\x94X\x8a\xa8\xe9\xf6\xee'
------
2012-02-09T17:18:22 BLATHER ZEO.zrpc (15892) CM.connect_done(preferred=1)
------
2012-02-09T17:18:22 BLATHER ZEO.zrpc (15892) CT: exiting thread: Connect([(2, ('127.0.0.1', 8100))])
But I have no idea why this is happening or even if this is correct.
Here is the buildout for deployment:
http://pastebin.com/u8D7swJs
The permissions were set incorrectly on the Plone 'parts' directory. This prevented 'uuid.txt' from being written in 'parts/instance/'. There were no error messages to indicate this problem.
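In practice the fix amounted to giving the effective user write access to the buildout tree, along the lines of the following (the 'plone' user matches the "Set effective user" line in the log above; adjust to whatever effective-user the buildout sets):
# Give the effective user write access to parts/ so uuid.txt can be created
chown -R plone:plone parts/
bin/instance start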
