I've been trying to use the stock templates from the WSO2 website to deploy WSO2 to AWS. The CloudFormation stack fails to create because the Auto Scaling group fails to create.
I checked the EC2 instances, and the actual instance is running and healthy.
I SSH'ed to the instance and ran:
grep -ni 'error\|failure' $(sudo find /var/log -name cfn-init\* -or -name cloud-init\*)
to check the log files for errors or failures. I didn't find any.
I then tried to run:
/usr/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
from the correct instance, filling in the stack name and region manually. I pulled this command from the YAML file on the WSO2 website. It returned an Access Denied error for the stack.
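For context: when cfn-signal is invoked with --stack/--resource, it calls the CloudFormation SignalResource API, and that call must be signed with AWS credentials, normally obtained from an instance profile attached to the launch configuration. An Access Denied error therefore usually means the instance has no role attached, or the attached role lacks the CloudFormation permissions. A quick check and a sketch of a fix (the role and policy names below are illustrative, not from the WSO2 template):

# on the instance: does it have a role at all? (no output = no instance profile)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# from a machine with IAM rights: grant the role the actions cfn-signal and
# cfn-init use (replace MY-WSO2-ROLE with the role name printed above)
aws iam put-role-policy \
  --role-name MY-WSO2-ROLE \
  --policy-name allow-cfn-signal \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["cloudformation:SignalResource", "cloudformation:DescribeStackResource"],
      "Resource": "*"
    }]
  }'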
Any help would be greatly appreciated; I feel like I'm overlooking something simple. I've included the LaunchConfiguration and the Auto Scaling group definition below in case they're useful. Happy to provide other information.
WSO2MINode1LaunchConfiguration:
  Type: 'AWS::AutoScaling::LaunchConfiguration'
  Properties:
    ImageId: !FindInMap
      - WSO2APIMAMIRegionMap
      - !Ref 'AWS::Region'
      - !Ref OperatingSystem
    InstanceType: !Ref WSO2InstanceType
    BlockDeviceMappings:
      - DeviceName: /dev/sda1
        Ebs:
          VolumeSize: '20'
          VolumeType: gp2
          DeleteOnTermination: 'true'
    KeyName: !Ref KeyPairName
    SecurityGroups:
      - !Ref WSO2MISecurityGroup
    UserData: !Base64
      'Fn::Sub': |
        Content-Type: multipart/mixed; boundary="//"
        MIME-Version: 1.0

        --//
        Content-Type: text/cloud-config; charset="us-ascii"
        MIME-Version: 1.0
        Content-Transfer-Encoding: 7bit
        Content-Disposition: attachment; filename="cloud-config.txt"

        #cloud-config
        cloud_final_modules:
          - [scripts-user, always]

        --//
        Content-Type: text/x-shellscript; charset="us-ascii"
        MIME-Version: 1.0
        Content-Transfer-Encoding: 7bit
        Content-Disposition: attachment; filename="userdata.txt"

        #!/bin/bash
        exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
        export PATH=~/.local/bin:$PATH
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          export DEBIAN_FRONTEND=noninteractive
          apt-get update
          apt install -y puppet nfs-common
          apt install -y python-pip
          apt install -y python3-pip
          pip3 install boto3
          pip install boto3
          sed -i '/\[main\]/a server=puppet' /etc/puppet/puppet.conf
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          yum install -y epel-release zip unzip nfs-utils
          yum install -y python-pip
          pip install boto3
          rpm -Uvh https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
          yum install -y puppet-agent
          echo $'[main]\nserver = puppet\ncertname = agent3\nenvironment = production\nruninterval = 1h' > /etc/puppetlabs/puppet/puppet.conf
        fi
        pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
        export PuppetmasterIP=${PuppetMaster.PrivateIp}
        echo "$PuppetmasterIP puppet puppetmaster" >> /etc/hosts
        export MI_HOST=${WSO2APIMLoadBalancer.DNSName}
        export MI_PORT=8290
        service puppet restart
        sleep 150
        export FACTER_profile=mi
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          puppet agent -vt >> /var/log/puppetlog.log
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          /opt/puppetlabs/bin/puppet agent -vt >> /var/log/puppetlog.log
        fi
        sleep 30
        service puppet stop
        sh /usr/lib/wso2/wso2am/4.1.0/wso2mi-4.1.0/bin/micro-integrator.sh start
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          echo "/usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}" >> /home/ubuntu/cfn-signal.txt
          /usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          /usr/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
        fi
        echo 'export HISTTIMEFORMAT="%F %T "' >> /etc/profile.d/history.sh
        cat /dev/null > ~/.bash_history && history -c
  DependsOn:
    - WSO2MISecurityGroup
    - WSO2APIMSecurityGroup
    - PuppetMaster
WSO2MINode1AutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    LaunchConfigurationName: !Ref WSO2MINode1LaunchConfiguration
    DesiredCapacity: 1
    MinSize: 1
    MaxSize: 1
    VPCZoneIdentifier:
      - !Ref WSO2APIMPrivateSubnet1
      - !Ref WSO2APIMPrivateSubnet2
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} WSO2MIInstance
        PropagateAtLaunch: 'true'
  CreationPolicy:
    ResourceSignal:
      Count: 1
      Timeout: PT30M
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MaxBatchSize: '2'
      MinInstancesInService: '1'
      PauseTime: PT10M
      SuspendProcesses:
        - AlarmNotification
      WaitOnResourceSignals: false
  DependsOn:
    - WSO2APIMNode1AutoScalingGroup
    - WSO2APIMNode2AutoScalingGroup
Thank you!
variables:
  TF_ROOT: ${CI_PROJECT_DIR}
  TF_CLI_CONFIG_FILE: $CI_PROJECT_DIR/.terraformrc
  TF_IN_AUTOMATION: "true"
  ARM_SUBSCRIPTION_ID: ""
  ARM_TENANT_ID: ""
  NEXUS_URL: ""

cache:
  key: "${TF_ROOT}"
  paths:
    - ${TF_ROOT}/.terraform/

.terraform-setup-and-init: &init |
  ls -al
  source <(curl -O -k $NEXUS_URL)
  ls -al
  chmod +x setup-terraform.sh
  ls -al *.sh
  source ./setup-terraform.sh
  export HTTPS_PROXY=
  terraform init -var-file="./environments/US/sev.tfvars"

.validate:
  stage: validate
  script:
    - *init
    - terraform validate

.build:
  stage: build
  script:
    - *init
    - echo "executing terraform plan, needs provider credentials... Skipping"
    # - terraform plan -out="plan.cache"
    # - terraform show -json "plan.cache" > plan.json
  artifacts:
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

.deploy:
  stage: deploy
  script:
    - *init
    # - terraform apply -input=false "plan.cache"
  only:
    variables:
      - $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

stages:
  - validate
  - build
  - deploy

tf validate:
  extends: .validate

tf plan:
  extends: .build

tf apply:
  extends: .deploy
  dependencies:
    - tf plan
I want to create a customized OpenStack openSUSE 15 image that contains some custom software and a graphical interface. I have used an existing openSUSE 15.0 image and Packer to build that image. It works fine. The Packer JSON file is as follows:
"builders": [
{
"type" : "openstack",
"ssh_username" : "root",
"image_name": "OpenSUSE_15_custom_kde",
"source_image": "OpenSUSE 15",
"flavor": "m1.medium",
"networks": "public-network"
}
],
"provisioners":[
{
"type": "shell",
"inline": [
"sleep 10",
"sudo -s",
"zypper --gpg-auto-import-keys refresh",
"zypper -n up -y",
"zypper -n clean -a",
"zypper -n addrepo -f http://download.opensuse.org/repositories/devel\\:/languages\\:/R\\:/patched/openSUSE_Leap_15.0/ R-patched",
"zypper -n addrepo -f http://download.opensuse.org/repositories/devel\\:/languages\\:/R\\:/released/openSUSE_Leap_15.0/ R-released",
"zypper --gpg-auto-import-keys refresh",
"zypper -n install -y R-base R-base-devel R-recommended-packages rstudio",
"zypper -n clean -a",
"zypper --non-interactive install -y -t pattern kde kde_plasma devel_kernel devel_python3 devel_C_C++ office x11",
"zypper -n install xrdp",
"zypper -n clean -a",
"zypper -n dup -y",
"systemctl enable xrdp",
"systemctl start xrdp",
"cloud-init clean --logs",
"zypper -n install -y cloud-init growpart yast2-network yast2-services-manager acpid",
"cat /dev/null > /etc/udev/rules.d/70-persistent-net.rules",
"systemctl disable cloud-init.service cloud-final.service cloud-init-local.service cloud-config.service",
"systemctl enable cloud-init.service cloud-final.service cloud-init-local.service cloud-config.service sshd",
"sudo systemctl stop firewalld",
"sudo systemctl disable firewalld",
"sed -i 's/GRUB_TIMEOUT=.*$/GRUB_TIMEOUT=0/g' /etc/default/grub",
"exec grub2-mkconfig -o /boot/grub2/grub.cfg '$#'",
"systemctl restart cloud-init",
"systemctl daemon-reload",
"cat /dev/null > ~/.bash_history && history -c && sudo su",
"cat /dev/null > /var/log/wtmp",
"cat /dev/null > /var/log/btmp",
"cat /dev/null > /var/log/lastlog",
"cat /dev/null > /var/run/utmp",
"cat /dev/null > /var/log/auth.log",
"cat /dev/null > /var/log/kern.log",
"cat /dev/null > ~/.bash_history && history -c",
"rm ~/.ssh/authorized_keys"
]
},
{
"type": "file",
"source": "./cloud_init/cloud.cfg",
"destination": "/etc/cloud/cloud.cfg"
}
]
}
There are no errors during the build and provisioning phases with Packer.
In a second stage, when this base image is spawned through a Heat template via the OpenStack client, I want some personalized tasks to be completed: user creation, granting SSH access (including adjusting the sshd_config file), and so on. This is done through the init_image.sh file:
#!/bin/bash
useradd -m $USERNAME -p $PASSWD -s /bin/bash
usermod -a -G sudo $USERNAME
tee /etc/ssh/banner <<EOF
You are one lucky user, if you bear the key...
EOF
tee /etc/ssh/sshd_config <<EOF
## SOME IMPORTANT SSHD CONFIGURATIONS
EOF
sudo -u $USERNAME -H sh -c 'cd ~;mkdir ~/.ssh/;echo "$SSHPUBKEY" > ~/.ssh/authorized_keys;chmod -R 700 ~/.ssh/;chmod 600 ~/.ssh/authorized_keys;'
systemctl restart sshd.service
voldata_dev="/dev/disk/by-id/virtio-$(echo $VOLDATA | cut -c -20)"
mkfs.ext4 $voldata_dev
mkdir -pv /home/$USERNAME/share
echo "$voldata_dev /home/$USERNAME/share ext4 defaults 1 2" >> /etc/fstab
mount /home/$USERNAME/share
chown -R $USERNAME:users /home/$USERNAME/share/
systemctl enable xrdp
systemctl start xrdp
For this purpose, I have created the following Heat template:
heat_template_version: "2018-08-31"

description: "version 2017-09-01 created by HOT Generator at Fri, 05 Jul 2019 12:56:22 GMT."

parameters:
  username:
    type: string
    label: User Name
    description: This is the user name, and will be also the name of the key and the server
    default: test
  imagename:
    type: string
    label: Image Name
    description: This is the Name of the Image e.g. Ubuntu 18.04
    default: "OpenSUSE Leap 15"
  ssh_pub_key:
    type: string
    label: ssh public key
  flavorname:
    type: string
    label: Flavor Name
    description: This is the Name of the Flavor e.g. m1.small
    default: "m1.small"
  vol_size:
    type: number
    label: Volume Size
    description: This is the size of the volume that should be attached in GB
    default: 10
  password:
    type: string
    label: password
    description: This is the su password and user password

resources:
  init:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ungrouped
      config:
        str_replace:
          template:
            {get_file: init_image.sh}
          params:
            $USERNAME: {get_param: username}
            $SSHPUBKEY: {get_param: ssh_pub_key}
            $PASSWD: {get_param: password}
            $VOLDATA: {get_resource: volume}

  my_key:
    type: "OS::Nova::KeyPair"
    properties:
      name:
        list_join: ["_", [{get_param: username}, 'key']]
      public_key: {get_param: ssh_pub_key}

  my_server:
    type: "OS::Nova::Server"
    properties:
      block_device_mapping_v2: [{device_name: "vda", image: {get_param: imagename}, delete_on_termination: "false", volume_size: 20}]
      name: {get_param: username}
      flavor: {get_param: flavorname}
      key_name: {get_resource: my_key}
      admin_pass: {get_param: password}
      user_data_format: RAW
      user_data: {get_resource: init}
      networks:
        - network: "public-network"
    depends_on:
      - my_key
      - init
      - volume

  volume:
    type: "OS::Cinder::Volume"
    properties:
      # Size is given in GB
      size: {get_param: vol_size}
      name:
        list_join: ["-", ["vol_", {get_param: username}]]

  volume_attachment:
    type: "OS::Cinder::VolumeAttachment"
    properties:
      volume_id: {get_resource: volume}
      instance_uuid: {get_resource: my_server}
    depends_on:
      - volume

outputs:
  instance_ip:
    description: The IP address of the deployed instances
    value: {get_attr: [my_server, first_address]}
If I use the original image in the template, I have no problems (although the build process then takes very long, and I need to reboot to get the graphical KDE interface).
However, if I use the image built with Packer, my user_data is ignored: I cannot log in, and the personalized user is not created. What have I missed? Why does it not work? As you can see, I clean cloud-init and restart the services... I am stuck big time...
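For debugging, standard cloud-init tooling on the booted instance shows which datasource was detected and what user data (if any) arrived; the server name in the OpenStack command is a placeholder:

# on the instance: datasource/stage timeline and the captured user data
cloud-init analyze show
sudo cat /var/lib/cloud/instance/user-data.txt

# from the OpenStack client: the full boot console output
openstack console log show <server-name>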
UPDATE
Here is the accessible boot log from the machine.
UPDATE 2
This is the output of cloud-init analyze show:
-- Boot Record 01 --
The total time elapsed since completing an event is printed after the "#" character.
The time the event takes is printed after the "+" character.
Starting stage: init-local
|`->no cache found #00.01000s +00.00000s
|`->no local data found from DataSourceOpenStackLocal #00.04700s +15.23000s
Finished stage: (init-local) 15.31200 seconds
Starting stage: init-network
|`->no cache found #16.01000s +00.00100s
|`->no network data found from DataSourceOpenStack #16.01700s +00.02600s
|`->found network data from DataSourceNone #16.04300s +00.00100s
|`->setting up datasource #16.09000s +00.00000s
|`->reading and applying user-data #16.10000s +00.00200s
|`->reading and applying vendor-data #16.10200s +00.00000s
|`->activating datasource #16.12100s +00.00100s
|`->config-migrator ran successfully #16.17900s +00.00100s
|`->config-seed_random ran successfully #16.18000s +00.00100s
|`->config-bootcmd ran successfully #16.18200s +00.00000s
|`->config-write-files ran successfully #16.18200s +00.00100s
|`->config-growpart ran successfully #16.18300s +00.46100s
|`->config-resizefs ran successfully #16.64500s +01.33400s
|`->config-disk_setup ran successfully #17.98100s +00.00300s
|`->config-mounts ran successfully #17.98500s +00.00400s
|`->config-set_hostname ran successfully #17.99000s +00.09800s
|`->config-update_hostname ran successfully #18.08900s +00.01000s
|`->config-update_etc_hosts ran successfully #18.10000s +00.00100s
|`->config-rsyslog ran successfully #18.10100s +00.00200s
|`->config-users-groups ran successfully #18.10400s +00.00200s
|`->config-ssh ran successfully #18.10700s +00.61400s
Finished stage: (init-network) 02.73600 seconds
Starting stage: modules-config
|`->config-locale ran successfully #35.00200s +00.00400s
|`->config-set-passwords ran successfully #35.00600s +00.00100s
|`->config-zypper-add-repo ran successfully #35.00700s +00.00200s
|`->config-ntp ran successfully #35.01000s +00.00100s
|`->config-timezone ran successfully #35.01100s +00.00200s
|`->config-disable-ec2-metadata ran successfully #35.01300s +00.00100s
|`->config-runcmd ran successfully #35.01800s +00.00200s
Finished stage: (modules-config) 00.05100 seconds
Starting stage: modules-final
|`->config-package-update-upgrade-install ran successfully #35.87400s +00.00000s
|`->config-puppet ran successfully #35.87500s +00.00000s
|`->config-chef ran successfully #35.87600s +00.00000s
|`->config-mcollective ran successfully #35.87600s +00.00100s
|`->config-salt-minion ran successfully #35.87700s +00.00100s
|`->config-rightscale_userdata ran successfully #35.87800s +00.00100s
|`->config-scripts-vendor ran successfully #35.87900s +00.00500s
|`->config-scripts-per-once ran successfully #35.88400s +00.00100s
|`->config-scripts-per-boot ran successfully #35.88500s +00.00000s
|`->config-scripts-per-instance ran successfully #35.88500s +00.00100s
|`->config-scripts-user ran successfully #35.88600s +00.00100s
|`->config-ssh-authkey-fingerprints ran successfully #35.88700s +00.00100s
|`->config-keys-to-console ran successfully #35.88800s +00.09000s
|`->config-phone-home ran successfully #35.97900s +00.00100s
|`->config-final-message ran successfully #35.98000s +00.00600s
|`->config-power-state-change ran successfully #35.98700s +00.00100s
Finished stage: (modules-final) 00.13600 seconds
Total Time: 18.23500 seconds
1 boot records analyzed
UPDATE 3
Apparently, when one does not update with zypper up, cloud-init behaves well and finds the user data. Hence, I will not update the image during provisioning. However, once provisioned, it makes sense to update.
At the end of your provisioning you should stop cloud-init and wipe its state. Otherwise, when the image is launched, cloud-init thinks it has already executed its first boot:
systemctl stop cloud-init
rm -rf /var/lib/cloud/
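Note that the Packer provisioner above already runs cloud-init clean --logs, but it does so early and then calls systemctl restart cloud-init, which recreates the state under /var/lib/cloud. The wipe has to be the very last thing the provisioner does, e.g. as the final inline entries (a sketch; on recent cloud-init versions, clean --logs is equivalent to removing /var/lib/cloud by hand):

# last two inline commands of the shell provisioner, nothing cloud-init
# related may run after them
systemctl stop cloud-init
cloud-init clean --logs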
I am trying to configure the gcloud SDK in the Cloudera VM. Below are the commands I have used. I have tried to pass python as a default parameter in install.sh, but it is still not working. Can someone guide me toward a clean approach?
curl https://sdk.cloud.google.com | bash
1. I have installed Python 3.7 on top of the existing 2.6:
(base) [cloudera@quickstart google-cloud-sdk]$ which python
alias python='/home/cloudera/anaconda3/bin'
(base) [cloudera@quickstart google-cloud-sdk]$ whereis python
python: /usr/bin/python2.6 /usr/bin/python /usr/bin/python2.6-config /usr/lib/python2.6 /usr/lib64/python2.6 /usr/include/python2.6 /usr/share/man/man1/python.1.gz
(base) [cloudera@quickstart google-cloud-sdk]$
2. Error log from sh -x install.sh:
+ _cloudsdk_which python2
+ which python2
+ CLOUDSDK_PYTHON=python2
+ unset PYTHONHOME
+ case :$CLOUDSDK_PYTHON_SITEPACKAGES:$VIRTUAL_ENV: in
+ case " $CLOUDSDK_PYTHON_ARGS " in
+ CLOUDSDK_PYTHON_ARGS=-S
+ unset CLOUDSDK_PYTHON_SITEPACKAGES
+ export CLOUDSDK_ROOT_DIR CLOUDSDK_PYTHON_ARGS
+ '[' -z python2 ']'
++ id -u
+ '[' 501 = 0 ']'
+ python2 -S /home/cloudera/google-cloud-sdk/bin/bootstrapping/install.py
Traceback (most recent call last):
  File "/home/cloudera/google-cloud-sdk/bin/bootstrapping/install.py", line 12, in <module>
    import bootstrapping
  File "/home/cloudera/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 32, in <module>
    import setup # pylint:disable=g-import-not-at-top
  File "/home/cloudera/google-cloud-sdk/bin/bootstrapping/setup.py", line 55, in <module>
    from googlecloudsdk.core import properties
  File "/home/cloudera/google-cloud-sdk/lib/googlecloudsdk/core/properties.py", line 378
    self.__sections = {section.name: section for section in sections}
    ^
SyntaxError: invalid syntax
3. After hardcoding python as the default:
+ echo Welcome to the Google Cloud 'SDK!'
Welcome to the Google Cloud SDK!
++ _cloudsdk_root_dir install.sh
++ case $1 in
+++ _cloudsdk_which install.sh
+++ which install.sh
+++ command -v install.sh
++ _cloudsdk_path=
++ case $_cloudsdk_path in
++ _cloudsdk_path=/home/cloudera/google-cloud-sdk/
++ _cloudsdk_dir=0
++ :
+++ readlink /home/cloudera/google-cloud-sdk/
++ _cloudsdk_link=
++ case $_cloudsdk_dir in
++ '[' -d /home/cloudera/google-cloud-sdk/ ']'
++ break
++ :
++ case $_cloudsdk_path in
+++ dirname /home/cloudera/google-cloud-sdk//.
++ _cloudsdk_path=/home/cloudera/google-cloud-sdk
++ :
++ case $_cloudsdk_path in
++ echo /home/cloudera/google-cloud-sdk
++ break
+ CLOUDSDK_ROOT_DIR=/home/cloudera/google-cloud-sdk
+ '[' -z '' ']'
+ CLOUDSDK_PYTHON=python
+ unset PYTHONHOME
+ case :$CLOUDSDK_PYTHON_SITEPACKAGES:$VIRTUAL_ENV: in
+ case " $CLOUDSDK_PYTHON_ARGS " in
+ CLOUDSDK_PYTHON_ARGS=-S
+ unset CLOUDSDK_PYTHON_SITEPACKAGES
+ export CLOUDSDK_ROOT_DIR CLOUDSDK_PYTHON_ARGS
+ '[' -z python ']'
++ id -u
+ '[' 501 = 0 ']'
+ python -S /home/cloudera/google-cloud-sdk/bin/bootstrapping/install.py
Traceback (most recent call last):
File "/home/cloudera/google-cloud-sdk/bin/bootstrapping/install.py", line 27, in <module>
from googlecloudsdk import gcloud_main
File "/home/cloudera/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 37, in <module>
from googlecloudsdk.command_lib.util.apis import yaml_command_translator
File "/home/cloudera/google-cloud-sdk/lib/googlecloudsdk/command_lib/util/apis/yaml_command_translator.py", line 241
if self.spec.async:
^
SyntaxError: invalid syntax
The gcloud SDK works on Python 2.7 or later 2.x, but not on Python 3 (nor the stock 2.6). I had to install 2.7; below are the steps I performed.
1. Get Python 2.7 for Anaconda.
2. Install it and provide an alternative path like /home/cloudera/anaconda2/
3. bash Anaconda2-2019.03-Linux-x86_64.sh
4. Update .bash_profile (alias python2.7='/home/cloudera/anaconda2/bin/python2.7')
5. Update CLOUDSDK_PYTHON="python2.7" in /home/cloudera/google-cloud-sdk/install.sh (it needs to go after the if block)
6. Then execute sh -x install.sh
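An alternative to editing install.sh: the installer honors the CLOUDSDK_PYTHON environment variable, so the Anaconda 2.7 interpreter from step 2 can be supplied without touching the script:

# point the installer (and later gcloud) at the 2.7 interpreter
export CLOUDSDK_PYTHON=/home/cloudera/anaconda2/bin/python2.7
sh install.sh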
Thanks
I have written CWL code that runs the Rscript command in a Docker container. I have two files, a CWL file and a YAML file, and I run them with the command:
cwltool --debug a_code.cwl a_input.yaml
The output says that the process status is success, but there is no output file, and "output": null is reported in the result. I want to know whether there is a way to confirm that the Rscript ran successfully in the Docker container, and why the output files are null.
The final part of the result is:
[job a_code.cwl] {
    "output": null,
    "errorFile": null
}
[job a_code.cwl] Removing input staging directory /tmp/tmpUbyb7k
[job a_code.cwl] Removing temporary directory /tmp/tmpkUIOnw
Removing intermediate output directory /tmp/tmpCG9Xs1
{
    "output": null,
    "errorFile": null
}
Final process status is success
R code:
library(cummeRbund)
cuff<-readCufflinks( dbFile = "cuffData.db", gtfFile = NULL, runInfoFile = "run.info", repTableFile = "read_groups.info", geneFPKM = "genes.fpkm_trac .... )
#setwd("/scripts")
sink("cuff.txt")
print(cuff)
sink()
My CWL file is:
class: CommandLineTool
cwlVersion: v1.0
id: cummerbund
baseCommand:
  - Rscript
inputs:
  - id: Rfile
    type: File?
    inputBinding:
      position: 0
  - id: cuffdiffout
    type: 'File[]?'
    inputBinding:
      position: 1
  - id: errorout
    type: File?
    inputBinding:
      position: 99
      prefix: 2>
      valueFrom: |
        error.txt
outputs:
  - id: output
    type: File?
    outputBinding:
      glob: cuff.txt
  - id: errorFile
    type: File?
    outputBinding:
      glob: error.txt
label: cummerbund
requirements:
  - class: DockerRequirement
    dockerPull: cummerbund_0
My input (YAML) file is:
inputs:
  Rfile:
    basename: run_cummeR.R
    class: File
    nameext: .R
    nameroot: run_cummeR
    path: run_cummeR.R
  cuffdiffout:
    - class: File
      path: cuffData.db
    - class: File
      path: genes.fpkm_tracking
    - class: File
      path: read_groups.info
    - class: File
      path: genes.count_tracking
    - class: File
      path: genes.read_group_tracking
    - class: File
      path: isoforms.fpkm_tracking
    - class: File
      path: isoforms.read_group_tracking
    - class: File
      path: isoforms.count_tracking
    - class: File
      path: isoform_exp.diff
    - class: File
      path: gene_exp.diff
  errorout:
    - class: File
      path: error.txt
Also, this is my Dockerfile for building the image:
FROM r-base
COPY . /scripts
RUN apt-get update
RUN apt-get install -y \
    libcurl4-openssl-dev \
    libssl-dev \
    libmariadb-client-lgpl-dev \
    libmariadbclient-dev \
    libxml2-dev \
    r-cran-plyr \
    r-cran-reshape2
WORKDIR /scripts
RUN Rscript /scripts/build.R
ENTRYPOINT /bin/bash
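One way to confirm the Rscript really runs inside the container is to invoke it by hand with docker, outside cwltool (the image tag and file names are taken from the question; mounting the current directory is an assumption). Note that the shell-form ENTRYPOINT /bin/bash in this Dockerfile replaces whatever command the container is given, which on its own can yield an exit-0 "success" with nothing executed; --entrypoint works around that here:

# run the script manually in the same image and look for its output file
docker run --rm -v "$PWD":/data -w /data --entrypoint Rscript \
  cummerbund_0 run_cummeR.R
ls -l cuff.txt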
I got the answer!
There were some problems in my program:
1. The Docker image was not pulled correctly, so the CWL runner couldn't produce any output.
2. The inputs and outputs were not defined as mandatory, so I got a success status even when I did not have proper inputs and outputs.
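On point 1: a dockerPull value such as cummerbund_0 names an image in a registry, so for an image that only exists locally the pull fails (CWL's DockerRequirement also accepts a dockerImageId field for locally built images). A quick way to see which case applies:

# what exists locally vs. what a pull would actually fetch
docker images cummerbund_0
docker pull cummerbund_0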