How do I bootstrap an EC2 instance with an external script file in Terraform? - terraform-provider-aws

Can anyone help me with the following Terraform code? I am trying to bootstrap an Amazon Linux EC2 instance using an external script file. The external script installs Tomcat on the EC2 instance.
The following code is failing to deploy Tomcat on EC2.
Terraform code:
data "template_file" "ec2_user_data" {
template = "${file("${path.cwd}/user_data_tomcat.txt")}"
}
resource "aws_instance" "lab_ec1" {
ami = "ami-009d6802948d06e52"
instance_type = "t2.micro"
key_name = "lab_keypair_1"
#key_name = "${aws_key_pair.lab_key_pair.name}"
subnet_id = "${aws_subnet.lab_subnet1.id}"
vpc_security_group_ids = [
"${aws_security_group.lab_bastion_sg.id}",
]
associate_public_ip_address = true
user_data = "${data.template_file.ec2_user_data.template}"
tags = {
Name = "lab_ec1"
}
}
External script file: user_data_tomcat.txt
#!/bin/bash
sudo yum -y install tomcat.noarch
sudo yum -y install tomcat-admin-webapps.noarch
sudo yum -y install tomcat-webapps.noarch
sudo yum -y install tomcat-lib.noarch
sudo service tomcat start
How do I bootstrap an EC2 instance with an external script file in Terraform?

It looks like the following attribute reference is incorrect:
user_data = "${data.template_file.ec2_user_data.template}"
You should use the rendered attribute instead of template, as described in the following link: https://www.terraform.io/docs/providers/template/d/file.html#rendered
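In other words, the line should read (this matches the rendered attribute documented at the link above):
user_data = "${data.template_file.ec2_user_data.rendered}"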

You must verify that logging into the system, becoming root, and running this actually starts Tomcat:
#!/bin/bash
sudo yum -y install tomcat.noarch
sudo yum -y install tomcat-admin-webapps.noarch
sudo yum -y install tomcat-webapps.noarch
sudo yum -y install tomcat-lib.noarch
sudo service tomcat start
If not, it can't work as user_data either, so verify it interactively before relying on it as user_data.
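For example, a quick interactive check might look like this (a sketch; the ec2-user login and the lab_keypair_1.pem file name are assumptions based on the Amazon Linux AMI and key pair in the question, and <instance-public-ip> is a placeholder):
# SSH in with the key pair referenced by the instance
ssh -i lab_keypair_1.pem ec2-user@<instance-public-ip>

# Run the same commands the script would run
sudo yum -y install tomcat.noarch tomcat-admin-webapps.noarch tomcat-webapps.noarch tomcat-lib.noarch
sudo service tomcat start
sudo service tomcat status

# If user_data has already run on this instance, its output is logged here
sudo tail -n 50 /var/log/cloud-init-output.log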
Make sure the path used in the template argument of the template_file.ec2_user_data data source is correct. I have revised the snippet from your question to use path.module (relative to the directory containing the .tf file) rather than path.cwd (relative to the directory where the terraform command is run):
data "template_file" "ec2_user_data" {
template = "${file("${path.module}/user_data_tomcat.txt")}"
}
Make sure that the file user_data_tomcat.txt is in the same directory as the .tf file where you declare template_file.ec2_user_data.
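For example, a layout like this would work (a sketch; main.tf is just a placeholder name for whichever .tf file declares the data source):
.
├── main.tf               # declares data "template_file" "ec2_user_data"
└── user_data_tomcat.txt  # the Tomcat bootstrap script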
You have used template in the user_data like this:
user_data = "${data.template_file.ec2_user_data.template}"
template returns the internally encoded version of the original template file without substituting variables. You have no variables in the template, so it is likely the text encoding that causes the failure; in my experience, sending that as user_data never works, since the encoding interacts badly with cloud-init or bash. You can check /var/log/cloud-init-output.log on the instance for evidence of the failure.
Instead, you should use rendered in user_data like this:
user_data = "${data.template_file.ec2_user_data.rendered}"
Here is the full, corrected resource block:
resource "aws_instance" "lab_ec1" {
ami = "ami-009d6802948d06e52"
instance_type = "t2.micro"
key_name = "lab_keypair_1"
#key_name = "${aws_key_pair.lab_key_pair.name}"
subnet_id = "${aws_subnet.lab_subnet1.id}"
vpc_security_group_ids = [
"${aws_security_group.lab_bastion_sg.id}",
]
associate_public_ip_address = true
user_data = "${data.template_file.ec2_user_data.rendered}"
tags = {
Name = "lab_ec1"
}
}

You could also use the remote-exec provisioner, but in this case I recommend using user_data instead: https://www.terraform.io/docs/providers/aws/r/instance.html#user_data
user_data = "${data.template_file.ec2_user_data.rendered}"

Related

Rstudio via docker cannot read /etc/odbc.ini, only ~/.odbc.ini

When I build and then run a Docker container that runs RStudio on Ubuntu, the ODBC connection does not work when I add the odbc.ini file during the build. However, if I leave the odbc.ini file out of the build and instead add it myself from within the running container, the connection does indeed work.
So my problem is that I am trying to get the ODBC connection up and running out of the box whenever this image is run, without the additional step of having to log in to the Ubuntu container instance and add connection details to the odbc.ini file.
Here's what the odbc.ini file looks like, with dummy data:
[PostgreSQL ANSI]
Driver = PostgreSQL ANSI
Database = GoogleData
Servername = somename.postgres.database.azure.com
UserName = docker_rstudio#somename
Password = abc123abc
Port = 5432
sslmode = require
I have a copy of this file, odbc.ini, in my repo directory and include it in the build. My Dockerfile:
FROM rocker/tidyverse:3.6.3
ENV ADD=SHINY
ENV ROOT=TRUE
ENV PASSWORD='abc123'
RUN apt-get update && apt-get install -y \
    less \
    vim \
    unixodbc unixodbc-dev \
    odbc-postgresql
ADD odbc.ini /etc/odbc.ini
ADD install_packages.R /tmp/install_packages.R
RUN Rscript /tmp/install_packages.R && rm -R /tmp/*
ADD flagship_ecommerce /home/rstudio/blah/zprojects/flagship_ecommerce
ADD commission_junction /home/rstudio/blah/zprojects/commission_junction
RUN mkdir /srv/shiny-server; ln -s /home/rstudio/blah/zprojects/ /srv/shiny-server/
If I then log in to the instance via RStudio, the connection does not work; I get this error message:
Error: nanodbc/nanodbc.cpp:983: 00000: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
If I take a look at the file with less /etc/odbc.ini I do indeed see the connection details per my top code block.
If I then copy it to home with cp /etc/odbc.ini /home/rstudio/.odbc.ini, the connection does work.
But even if I amend my Dockerfile with ADD odbc.ini /home/rstudio/.odbc.ini, the connection doesn't work. It only works when I manually add the file to /home/rstudio/.odbc.ini from within the running container.
So my problem is twofold:
No matter what I try, I cannot get /etc/odbc.ini to be detected by unixODBC and used for the connection, whether it is added via the Dockerfile or manually. I would prefer this, since I want the connection to be available to anyone using the container.
I am able to get a connection when I manually copy what's in odbc.ini above to /home/rstudio/.odbc.ini, but if I try to do this via the Docker build, the connection does not work. I do see the file there; it exists with all the correct data, it is just not detected by ODBC.
In case it's relevant:
odbcinst -j
unixODBC 2.3.6
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/rstudio/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
I believe the problem is with the format of your /etc/odbc.ini. I don't have all your scripts, but this is the Dockerfile I used:
FROM rocker/tidyverse:3.6.3
ENV ADD=SHINY
ENV ROOT=TRUE
ENV PASSWORD='abc123'
RUN apt-get update && apt-get install -y \
    less \
    vim \
    unixodbc unixodbc-dev \
    odbc-postgresql
RUN Rscript -e 'install.packages(c("DBI","odbc"))'
ADD ./odbc.ini /etc/odbc.ini
If I use an odbc.ini of this:
[mydb]
Driver = PostgreSQL ANSI
ServerName = 127.0.0.1
UserName = postgres
Password = mysecretpassword
Port = 35432
I see this (docker build and R startup messages truncated):
$ docker build -t quux2 .
$ docker run --net='host' -it --rm quux2 bash
> con <- DBI::dbConnect(odbc::odbc(), "mydb")
Error: nanodbc/nanodbc.cpp:983: 00000: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I changed the indentation of the file to this:
[mydb]
  Driver     = PostgreSQL ANSI
  ServerName = 127.0.0.1
  UserName   = postgres
  Password   = mysecretpassword
  Port       = 35432
I see this:
$ docker build -t quux3 .
$ docker run --net='host' -it --rm quux3 bash
> con <- DBI::dbConnect(odbc::odbc(), "mydb")
> DBI::dbGetQuery(con, "select 1 as a")
a
1 1
(For this demonstration, I'm running postgres:11 as another container, but I don't think that's what matters; it's the indented values.)
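A quick way to check whether unixODBC is actually picking up a DSN from /etc/odbc.ini, independent of R, is to query it with the unixODBC command-line tools (a sketch, assuming the [mydb] entry above):
# Inside the container: list the data sources unixODBC can see,
# then try to open the DSN directly.
odbcinst -q -s      # should list [mydb] if /etc/odbc.ini is being parsed
isql -v mydb postgres mysecretpassword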
I am no expert in Docker, and have failed to find specific documentation for this, but from experience it seems that every time you add a new layer (e.g. using RUN), files copied in previous layers can be "forgotten" (note: this might be completely wrong; if so, please correct me and point to the documentation).
So I would try to combine your RUN statements and ADD every file right before the RUN statement where it is needed. This has the added benefit of reducing the final image size, because of the way layers are created and kept.
FROM rocker/tidyverse:3.6.3
ENV ADD=SHINY
ENV ROOT=TRUE
ENV PASSWORD='abc123'
#add files (could also combine them into a single tar file and add it. Or add it via git, which is often used)
ADD odbc.ini /etc/odbc.ini
ADD install_packages.R /tmp/install_packages.R
ADD flagship_ecommerce /home/rstudio/blah/zprojects/flagship_ecommerce
ADD commission_junction /home/rstudio/blah/zprojects/commission_junction
#Combine all runs into a single statement
RUN apt-get update && apt-get install -y \
      less \
      vim \
      unixodbc unixodbc-dev \
      odbc-postgresql \
    && Rscript /tmp/install_packages.R \
    && rm -R /tmp/* \
    && mkdir /srv/shiny-server \
    && ln -s /home/rstudio/blah/zprojects/ /srv/shiny-server/
Note that each ADD now comes right before the statement where it is used.

AWS CodeDeploy pipe example repository: "bash: zip: command not found"

I tried to follow the Deploy to AWS with CodeDeploy instructions and used the provided bitbucket-pipelines.yml, but I'm getting this error:
cd app && zip -r ../myapp.zip *
+ cd app && zip -r ../myapp.zip *
bash: zip: command not found
Can I do anything about it?
In the example repository, the Docker image is configured as atlassian/default-image:2 in bitbucket-pipelines.yml:
image: atlassian/default-image:2
If you use another Docker image to run the pipeline, the zip CLI may be missing and you'll need to install it yourself.
Add this to your bitbucket-pipelines.yml
- apt-get update
- apt-get install -y zip
right before the
- cd app && zip -r ../myapp.zip *

Execute nginx exe in a folder with -s reload args (Consul Template, Windows)

I am using Consul Template v0.19.0 for Windows to render the nginx load-balancing config. It is working as expected.
Now I want Consul Template to execute the nginx exe in its folder with args (-s reload), as below:
Case 1:
template {
  source          = "Template/template.ctmpl"
  destination     = "F:\\IDE\\Visual Studio Collection\\Web Servers\\nginx-1.12.0\\nginx-1.12.0\\conf\\nginx.conf"
  command         = "F:\\IDE\\Visual Studio Collection\\Web Servers\\nginx-1.12.0\\nginx-1.12.0\\nginx -s reload"
  command_timeout = "60s"
}
But it throws an error like: "failed to execute command "F:\IDE\Visual Studio Collection\Web Servers\nginx-1.12.0\nginx-1.12.0\nginx.exe" from "Template/template.ctmpl" => "F:\IDE\Visual Studio Collection\Web Servers\nginx-1.12.0\nginx-1.12.0\conf\nginx.conf": child: exec: "F:IDEVisual": file does not exist".
Case 2:
Currently I have achieved this by running nginx as a service (using NSSM) and giving a command like
command = "powershell restart-service nginx"
instead of giving the full path followed by "-s reload".
But for this, I have to run nginx as a service using a tool like NSSM.
Is there any way to tell the command attribute in the Consul Template config to execute the nginx exe in its folder, as in Case 1?
Thanks.
Try this
template {
  source          = "Template/template.ctmpl"
  destination     = "F:\\IDE\\Visual Studio Collection\\Web Servers\\nginx-1.12.0\\nginx-1.12.0\\conf\\nginx.conf"
  command         = "\"F:\\IDE\\Visual Studio Collection\\Web Servers\\nginx-1.12.0\\nginx-1.12.0\\nginx\" -s reload"
  command_timeout = "60s"
}
If that doesn't work, also try these options for command:
command = "\"F:/IDE/Visual Studio Collection/Web Servers/nginx-
1.12.0/nginx-1.12.0/nginx\" -s reload"
or
command = "\"F:\\\\IDE\\\\Visual Studio Collection\\\\Web Servers\\\\nginx-
1.12.0\\\\nginx-1.12.0\\\\nginx\" -s reload"
Edit-1
So based on the discussion, it seems that your nginx config uses paths relative to the folder nginx was started from. When nginx is started from a folder, it also needs to be reloaded from that same folder, so you need to change to the folder and then execute the reload command. Two formats that you should try are:
command="cd '<NGINX FOLDER PATH>' && nginx -s reload"
or
command="cmd /k 'cd \'<NGINX FOLDER PATH>\' && nginx -s reload'"

How to completely delete a specific database from realm-object-server

I need to delete a specific database from the Realm Object Server.
I tried to re-install the Realm Object Server, but it looks like it's not touching existing databases, so after I reinstall I still see my realms.
The dashboard currently does not provide tools for administering this.
So how do I remove a realm?
As you indicated, the Realm Object Server currently doesn't support the delete feature.
If you would like to start from scratch, the easiest way (and most failure-proof) is to uninstall the RPM/Deb package, then delete (or rename) the data directory (you can check what directory that is by looking at the configuration file in /etc/realm/configuration.yml), and then reinstall the package (which will recreate the data directory).
I'll edit this answer when I get to the office with some specific commands. Can you tell me which distribution you are using in the meantime?
Edit:
Depending on which version of the beta you're running, the data directory will be under /var/realm or /var/lib/realm. I haven't tested the scripts below, but it should give you a pretty good indication of what you can do.
RHEL/CentOS
sudo service realm-object-server stop
sudo yum remove realm-object-server-de
[ -d /var/realm ] && sudo mv /var/realm /var/realm.backup
[ -d /var/lib/realm ] && sudo mv /var/lib/realm /var/lib/realm.backup
sudo yum install realm-object-server-de
sudo service realm-object-server start
Ubuntu
sudo service realm-object-server stop
sudo apt-get remove realm-object-server-de
[ -d /var/realm ] && sudo mv /var/realm /var/realm.backup
[ -d /var/lib/realm ] && sudo mv /var/lib/realm /var/lib/realm.backup
sudo apt-get install realm-object-server-de
sudo service realm-object-server start
2nd edit:
Please be aware that there's an issue tracking this in our public issue tracker: https://github.com/realm/realm-mobile-platform/issues/13

Sudo Path - not finding Node.js

I need to run node on my Ubuntu machine with sudo access. The directory containing node is in the sudo path, but when I try to run it I get "command not found". I can call node explicitly, which does work.
//works
node
>
which node
/root/local/node/bin/node
echo sudo $PATH
sudo /root/local/node/bin:/usr/bin/node:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
sudo node --version
sudo: node: command not found
//explicitly calling it works
sudo /root/local/node/bin/node
>
Um, I don't think there's such a thing as a "sudo path"; your second command there is just echoing "sudo" followed by your regular path. In any case, if you're running things with sudo you really, really should not depend on the path: you should give the explicit pathname for every command and file argument whenever possible, to minimize security risks. If sudo doesn't want to run something, you need to use visudo to add it to /etc/sudoers.
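If you do want sudo node to work, two common approaches are sketched below (assumptions: the binary really is at /root/local/node/bin/node as which node reported, and your /etc/sudoers uses the usual Defaults secure_path setting):
# Option 1: symlink node into a directory that sudo already searches
sudo ln -s /root/local/node/bin/node /usr/local/bin/node

# Option 2: run visudo and extend secure_path in /etc/sudoers, e.g.
# Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/local/node/bin"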
