Windows docker image build hangs on creating a directory - asp.net

I'm following the instructions given by Microsoft to create a Windows Docker image for an ASP.NET application, but for some reason the build can't get past the mkdir step.
Here is the dockerfile:
FROM microsoft/aspnet
RUN mkdir C:\storefront
RUN powershell -NoProfile -Command \
Import-module IISAdministration; \
New-IISSite -Name "Storefront" -PhysicalPath C:\storefront -BindingInformation "*:80:"
EXPOSE 80
ADD storefront/ /storefront
And here is the output I'm getting:
docker build -t storefront .
Sending build context to Docker daemon 137.8 MB
Step 1/5 : FROM microsoft/aspnet
---> e761eca2f8df
Step 2/5 : RUN mkdir C:\storefront
---> Running in a939dd7163b1
It just hangs there on mkdir.
I've already tried using md, changing the backslashes to forward slashes, and using a relative path instead of the drive letter; all of them result in the same hang.
EDIT:
Here's the log output of the build:
[12:22:10.718][ApiProxy ][Info ] proxy >> GET /_ping
[12:22:10.718][ApiProxy ][Info ] Dial name pipe \\.\pipe\docker_engine_windows
[12:22:10.718][ApiProxy ][Info ] Successfully dialed name pipe \\.\pipe\docker_engine_windows
[12:22:10.718][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:10.718369300-05:00" level=debug msg="Calling GET /_ping"
[12:22:10.718][ApiProxy ][Info ] proxy << GET /_ping
[12:22:11.106][ApiProxy ][Info ] proxy >> POST /v1.25/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=storefront&ulimits=null
[12:22:11.106][ApiProxy ][Info ] Dial name pipe \\.\pipe\docker_engine_windows
[12:22:11.106][ApiProxy ][Info ] Successfully dialed name pipe \\.\pipe\docker_engine_windows
[12:22:11.106][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:11.106869500-05:00" level=debug msg="Calling POST /v1.25/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=storefront&ulimits=null"
[12:22:34.138][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.138382000-05:00" level=debug msg="[BUILDER] Cache miss: [cmd /S /C mkdir C:\\storefront]"
[12:22:34.138][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.138382000-05:00" level=debug msg="[BUILDER] Command to be executed: [cmd /S /C mkdir C:\\storefront]"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::GetLayerMountPath Flavour 1 ID 8aa6a1776a6803a692f68e8ac6da9c5edcb38cfcdf2edd75c5706442323e7f5b"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="Calling proc (1)"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="Calling proc (2)"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::GetLayerMountPath succeeded flavour=1 id=8aa6a1776a6803a692f68e8ac6da9c5edcb38cfcdf2edd75c5706442323e7f5b path=C:\\ProgramData\\Docker\\windowsfilter\\8aa6a1776a6803a692f68e8ac6da9c5edcb38cfcdf2edd75c5706442323e7f5b"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::CreateSandboxLayer layerId 6c84bcad548f6d4b74239e882e156d7a9dffa2b38842393b237b8c4470364f2c parentId C:\\ProgramData\\Docker\\windowsfilter\\8aa6a1776a6803a692f68e8ac6da9c5edcb38cfcdf2edd75c5706442323e7f5b"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::NameToGuid Name 8aa6a1776a6803a692f68e8ac6da9c5edcb38cfcdf2edd75c5706442323e7f5b"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::NameToGuid Name 4bbfe8c329c52aca77b41bf0c3c7673ca55232f3520a74dacd3befa4ffa7a161"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::NameToGuid Name 6f7c5ba7066e3999eadcb8c7ce0ce997cf65a748f15755557b2bc3dd79d2e8cc"
[12:22:34.140][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::NameToGuid Name 4d9fc7ac017392ffdd91d3b2efe1bce1cadcab146692490e8c9300747be6ce40"
[12:22:34.141][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.140380500-05:00" level=debug msg="hcsshim::NameToGuid Name 3fc2c39416c9725cedc3e74e11d53a63338a00ec33c968657b2724cdd0da9b4a"
[12:22:34.141][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.141380400-05:00" level=debug msg="hcsshim::NameToGuid Name 70d596762a12ad5312e4594bd5d1670ee886d76d356a464a22b9fa648ab42bf9"
[12:22:34.147][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.147380700-05:00" level=debug msg="hcsshim::CreateSandboxLayer - succeeded layerId=6c84bcad548f6d4b74239e882e156d7a9dffa2b38842393b237b8c4470364f2c parentId=C:\\ProgramData\\Docker\\windowsfilter\\8aa6a1776a6803a692f68e8ac6da9c5edcb38cfcdf2edd75c5706442323e7f5b"
[12:22:34.211][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.211976300-05:00" level=debug msg="Assigning addresses for endpoint condescending_fermi's interface on network nat"
[12:22:34.211][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.211976300-05:00" level=debug msg="RequestAddress(172.25.112.0/20, <nil>, map[])"
[12:22:34.211][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.211976300-05:00" level=debug msg="attach: stdout: begin"
[12:22:34.211][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:34.211976300-05:00" level=debug msg="attach: stderr: begin"
[12:22:49.531][ApiProxy ][Info ] Cancel connection...
[12:22:49.531][WindowsDockerDaemon][Info ] time="2017-02-08T12:22:49.531681200-05:00" level=debug msg="Build cancelled, killing and removing container: 6c84bcad548f6d4b74239e882e156d7a9dffa2b38842393b237b8c4470364f2c"

Installing the latest Windows updates and a new version of Docker fixed the issue.
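A quick way to sanity-check that, assuming Docker for Windows: compare the client/engine versions and the daemon's OS details before and after updating.
# Show client and engine versions plus the daemon's OS/kernel details;
# on Windows, a base image built for a newer OS release than the host
# can lead to containers that start but never run their command.
docker version
docker info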

Related

Using templatefile to pass a list into script.yml

I am new to Terraform and struggling with using templatefile to pass a list to my script.yml.
I can pass the list devnames to a bash script without issues, but I cannot figure it out with script.yml. With the below configuration, when I run terraform plan, the list just shows up as $devnames and the content of the list devnames is never expanded.
Can you please assist?
resource "aws_imagebuilder_component" "devmappings" {
name = "devmappings"
platform = "Linux"
data = templatefile("${path.module}/script.yml", {
device_names = join(" ", local.devnames)
})
version = "1.0.0"
}
Here is script.yml; I need it to execute against the content of the list devnames:
name: 'myName'
description: 'myDescrip'
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: Install-prereqs
        action: ExecuteBash
        inputs:
          commands: ["echo $devnames; for i in `seq 0 26`; do; nvme_blkd='/dev/nvme$${i}n1'; if [ -e $nvme_blkd ]; then; mapdevs='$${devnames[i]}'; if [[ -z '$mapdevs' ]]; then; mapdevs='$${devnames[i]}'; fi; if [[ '$mapdevs' != /dev/* ]]; then; mapdevs='/dev/$${mapdevs}'; fi; if [ -e $mapdevs ]; then; echo 'path exists: $${mapdevs}'; else; ln -s $nvme_blkd $mapdevs; echo 'symlink created: $${nvme_blkd} to $${mapdevs}'; fi; fi; done"]

Upload and download files from FTP server using Unix script

Below is the code that was used when we relied on the .netrc file for automatic login. Now we can't use auto login because of a multiprotocol environment, so the script has to read the .netrc file itself and fetch the username and password. This is a generic download script that downloads files from a server. I need some help converting it to read the file and fetch the username and password.
I have added the code I used when auto login was in place. The format of the .netrc file is: machine ftp.test login test1 password test2. I need to look up ftp.test from my script and fetch test1 (username) and test2 (password) to do the FTP.
. $HOME/env
. $LIB_PATH/miip_functions.shl
OPTIND=1;ftpop=;user=;hosts=;quote=
while getopts h:f:n:q: arg
do
    case $arg in
        h) hosts="$OPTARG"
           ;;
        f) hosts=`cat $OPTARG`
           ;;
        n) ftpop=-n
           user="user $OPTARG"
           ;;
        q) quote="$OPTARG"
           ;;
        \?) logMessage ERROR "download.shl was used incorrectly."
            endRun 1
            ;;
    esac
done
shift `expr $OPTIND - 1`
if [ $# -ne 2 ] ; then
    logMessage ERROR "download.shl was used incorrectly."
    endRun 1
fi
dataset="'$1'"
filename=$2
file=`basename $2`
if [ -z "$hosts" ] ; then
    hosts=`cat $LIB_PATH/ftp.hosts 2> /dev/null`
    if [ -z "$hosts" ] ; then
        hosts="ftp.test ftp.test2"
    fi
fi
logMessage DLOAD "Starting FTP download of $file."
for host in $hosts
do
    ftp -v $ftpop $host << ! > $TMPFILE.ftp 2>&1
$user
$quote
get $dataset $filename
!
    egrep '^421 |^425 |^426 |^450 |^451 |^452 |^530 |^531 |^550 |^551|^552|^553 |^590 |^Not connected' $TMPFILE.ftp > /dev/null 2>&1
    rtn=$?
    if [ $rtn -eq 1 ] ; then
        break
    fi
done
(echo ; echo -------------- ; echo $PROGNAME ; echo --------------) >> $RUNFILE
cat $TMPFILE.ftp >> $RUNFILE
rm -f $TMPFILE.ftp
if [ $rtn -eq 1 ] ; then
    logMessage DLOAD "Completed FTP download of $file."
else
    logMessage ERROR "Download of $file failed."
fi
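One way to do the manual lookup (a sketch, assuming each machine's entry in $HOME/.netrc sits on a single line in the format quoted above):
# Pull the login and password for one machine out of $HOME/.netrc,
# e.g. the line "machine ftp.test login test1 password test2".
host="ftp.test"
entry=`grep "machine[[:space:]]*$host" $HOME/.netrc`
user_name=`echo "$entry" | awk '{for (i = 1; i <= NF; i++) if ($i == "login")    print $(i + 1)}'`
password=`echo "$entry" | awk '{for (i = 1; i <= NF; i++) if ($i == "password") print $(i + 1)}'`

# In the download script these could then replace the -n/auto-login handling,
# e.g. by sending "user $user_name $password" inside the ftp heredoc.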

Ansible Not copying directory using cp command

I have the following role:
---
- name: "Copying {{source_directory}} to {{destination_directory}}"
shell: cp -r "{{source_directory}}" "{{destination_directory}}"
being used as follows:
- { role: copy_folder, source_directory: "{{working_directory}}/ipsc/dist", destination_directory: "/opt/apache-tomcat-base/webapps/ips" }
with the parameters: working_directory: /opt/demoServer
This is being executed after I remove the directory using this role (as I do not want the previous contents)
- name: "Removing Folder {{path_to_file}}"
command: rm -r "{{path_to_file}}"
with parameters: path_to_file: "/opt/apache-tomcat-base/webapps/ips"
I get the following output:
TASK: [copy_folder | Copying /opt/demoServer/ipsc/dist to /opt/apache-tomcat-base/webapps/ips] ***
<md1cat01-demo.lnx.ix.com> ESTABLISH CONNECTION FOR USER: my.user
<md1cat01-demo.lnx.ix.com> REMOTE_MODULE command cp -r "/opt/demoServer/ipsc/dist" "/opt/apache-tomcat-base/webapps/ips" #USE_SHELL
...
changed: [md1cat01-demo.lnx.ix.com] => {"changed": true, "cmd": "cp -r \"/opt/demoServer/ipsc/dist\" \"/opt/apache-tomcat-base/webapps/ips\"", "delta": "0:00:00.211759", "end": "2016-02-05 11:05:37.459890", "rc": 0, "start": "2016-02-05 11:05:37.248131", "stderr": "", "stdout": "", "warnings": []}
What is happening is that no folder ever appears in that directory.
Basically the cp command is not doing its job, but I get no error of any kind. If I run the copy command manually on the machine, however, it works.
Use the copy module and set directory_mode to yes
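A rough ad-hoc sketch of that suggestion, assuming an Ansible version whose copy module supports remote_src for recursive directory copies (the source already lives on the target host), using the host and paths from the task output above:
# Recursively copy the remote directory with the copy module instead of
# shelling out to cp; directory_mode controls the permissions of any
# directories it creates during the copy.
ansible md1cat01-demo.lnx.ix.com -m copy \
  -a "src=/opt/demoServer/ipsc/dist/ dest=/opt/apache-tomcat-base/webapps/ips/ remote_src=yes directory_mode=0755"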

Upload Image on TryStack Server using Packer tool

I am trying to create and upload an Ubuntu-based image to the TryStack server using the Packer tool, working from Windows. I have created a sample template and a script file that sets the required environment variables; provisioning is done with Chef. But when I run the packer build command I get:
1 error(s) occurred:
* Get /: unsupported protocol scheme ""
What am I missing here?
Here are the template and script files:
template.json
{
  "builders": [
    {
      "type": "openstack",
      "ssh_username": "root",
      "image_name": "sensor-cloud",
      "source_image": "66a14661-2dfb-4370-b6d4-87aaefcffdce",
      "flavor": "3",
      "availability_zone": "nova",
      "security_groups": ["mySecurityGroup"]
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "sensorCloudCookbook.zip",
      "destination": "/tmp/sensorCloudCookbook.zip"
    },
    {
      "type": "shell",
      "inline": [
        "curl -L https://www.opscode.com/chef/install.sh | bash"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "unzip /tmp/sensorCloudCookbook.zip -d /tmp/sensorCloudCookbook"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "chef-solo -c /tmp/sensorCloudCookbook/solo.rb -l info -L /tmp/sensorCloudLogs.txt"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    }
  ]
}
openstack-config.sh
#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://128.136.179.2:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=trystack_tenant_id
export OS_TENANT_NAME="trystack_tenant_name"
export OS_PROJECT_NAME="trystack_project_name"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="same_as_trystack_tenant_name"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
You need to source openstack-config.sh before running packer build.
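For example (a sketch; openstack-config.sh is a bash script, so on Windows this needs a bash-capable shell):
# Load the OpenStack credentials (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, ...)
# into the current shell, then run the build; without them the OpenStack
# builder has no auth URL, which is what the empty-scheme error points to.
source ./openstack-config.sh
packer build template.json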

Unix troubleshooting, missing /etc/init.d file

I am working through this tutorial on daemonizing PHP scripts. When I run the following script:
. /etc/init.d/functions
#startup values
log=/var/log/Daemon.log
#verify that the executable exists
test -x /home/godlikemouse/Daemon.php || exit 0
RETVAL=0
prog="Daemon"
proc=/var/lock/subsys/Daemon
bin=/home/godlikemouse/Daemon.php
start() {
    # Check if Daemon is already running
    if [ ! -f $proc ]; then
        echo -n $"Starting $prog: "
        daemon $bin --log=$log
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch $proc
        echo
    fi
    return $RETVAL
}
I get the following output:
./Daemon: line 12: /etc/init.d/functions: No such file or directory
Starting Daemon: daemon: unrecognized option `--log=/var/log/Daemon.log'
I looked at my file system and there was no /etc/init.d/functions file. Can anyone tell me what this is and where to obtain it? Also, is the absence of that file what's causing the other error?
Separate your args within their own double quotes:
args="--node $prog"
daemon "nohup ${exe}" "$args &" </dev/null 2>/dev/null
daemon "exe" "args"
