Start Play Framework app from Dockerfile - sbt

I want to start a Play Framework 2 application from a Dockerfile.
Let's assume my Play project lives in the /myapp directory on Ubuntu.
When I start it manually, I would just say:
cd /myapp
sbt run
How can I run sbt run from the CMD instruction of the Dockerfile?
sbt run has to be executed from within the /myapp directory. How do I tell CMD to run it from there?

Take a closer look at the new experimental docker feature of sbt-native-packager.
You should be able to build a docker image from your play application in a few simple steps: add a maintainer and the exposed ports to your build.sbt:
import NativePackagerKeys._ // with auto plugins this won't be necessary soon

name := "play-2.3"

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.11.1"

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache,
  ws
)

// setting a maintainer which is used for all packaging types
maintainer := "Nepomuk Seiler"

// exposing the play ports
dockerExposedPorts in Docker := Seq(9000, 9443)
and then run
sbt docker:publishLocal
docker run -p 9000:9000 play-2-3:1.0-SNAPSHOT
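If you want to inspect what the plugin generates before publishing, you can stage the Docker build context first (the exact layout under target/docker varies by plugin version):
sbt docker:stage
ls target/docker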
Update - 1.x version
With sbt-native-packager 1.x (and so with Play 2.4.x) docker is enabled by default (because Play enables the JavaServerAppPackaging plugin).
If you don't have a Play application, then enable docker with

enablePlugins(JavaAppPackaging)

maintainer := "Nepomuk Seiler"

// note that the Docker scope is gone!
dockerExposedPorts := Seq(9000, 9443)
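For reference, a complete minimal 1.x build.sbt for a non-Play application could look like this (a sketch; the name and version are illustrative):

enablePlugins(JavaAppPackaging)

name := "my-app"

version := "1.0-SNAPSHOT"

maintainer := "Nepomuk Seiler"

dockerExposedPorts := Seq(9000, 9443)

Publishing then works exactly as before with sbt docker:publishLocal.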

You can just chain both commands:
FROM ubuntu
... more Dockerfile commands...
CMD cd /myapp && sbt run
You could also use the WORKDIR instruction of Dockerfiles (see http://docs.docker.io/reference/builder/#workdir) to set the working directory for your CMD instruction, though I haven't used it myself:
FROM ubuntu
... more Dockerfile commands...
WORKDIR /myapp
CMD sbt run
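WORKDIR also applies to the exec form of CMD, which skips the intermediate shell (a small variant of the same Dockerfile):
FROM ubuntu
... more Dockerfile commands...
WORKDIR /myapp
CMD ["sbt", "run"]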

Related

What is CLI equivalent for "Would you like to use `src/` directory with this project" in Next.js 13.1.2

We have an automated image-creation pipeline that uses the CLI to create a default Next.js app, install dependencies, etc.
This is our code:
WORKDIR /
RUN npx --yes create-next-app next --use-npm --js --eslint
WORKDIR /next
Yesterday the Next.js team released version 13.1.2, which broke this line.
They have added a new CLI prompt that asks:
? Would you like to use src/ directory with this project? › No / Yes
What is the option that automates the answer to this prompt?
The option you are looking for is --src-dir. So to automate the initialization using the src directory, you should run:
npx create-next-app --src-dir
See the create-next-app documentation for the full list of options.
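Applied to the pipeline above, the non-interactive call could look like this (a sketch assuming you want the src/ layout; check npx create-next-app --help on your version for the exact flags):
WORKDIR /
RUN npx --yes create-next-app next --use-npm --js --eslint --src-dir
WORKDIR /next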

Unable to install IIS AspNetCoreModuleV2 in a dockerimage (and Azure Pipelines)

I have had a problem for a few days now with the "dotnet hosting bundle" and the AspNetCoreV2 IIS module in a docker image.
I am creating a docker image with many IIS modules and requirements to run our software. The image works well except for this AspNetCoreV2 module: when the container is created, I check the installed modules with Get-WebGlobalModule and it doesn't appear.
But when I run the quiet (or passive) installation manually inside the container, the module works and appears in the IIS module list.
I tried many solutions (multi-stage builds with the Microsoft aspnetcore images, the latest version of dotnet_hosting_bundle.exe and many others), but the issue remains.
I also tried to automate the docker exec process to install this module manually and commit the result, using Azure Pipelines and a Windows agent in a VM, but it doesn't work :(.
The approach looks like this:
docker stop mycontainer
docker rm mycontainer
docker run --name mycontainer -d -it $(containerRegistry)/$(container_requirement_name):v1.0.$(Build.BuildId)
docker exec mycontainer powershell.exe -command Start-Process -FilePath 'C:\Program Files\MySoftware\PowerShell\Installer.Prerequisites\dotnet-hosting-3.1.2-win.exe' -ArgumentList "/passive","/install","/norestart" -PassThru -Wait
docker stop mycontainer
docker commit mycontainer $(containerRegistry)/$(container_requirement_name):v1.0.$(Build.BuildId).1
From Start-Process, I can see that the process is created but apparently not started.
I also tried with: cmd 'C:\Program Files\MySoftware\PowerShell\Installer.Prerequisites\dotnet-hosting-3.1.2-win.exe' /quiet /install
This task in Azure Pipelines completes without error, but when I pull the new image (pushed after these instructions), the module doesn't appear in Get-WebGlobalModule.
The module is also not present under Program Files.
I don't really understand how to install this module. All the other modules work, except this one...
Thank you very much in advance for your advice.
Setting the preference variables with the command below fixed the issue:
powershell -Command "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'Continue'"
The values of the preference variables affect how PowerShell operates and executes cmdlets. The container's default preference-variable settings may have caused PowerShell to fail to complete the installation. You can override these preference variables in your script.
Please see this document for more information about Preference Variables.
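For example, in a Windows Dockerfile you could set the preference variables via SHELL before running the installer. This is a minimal sketch: the base image and installer location are assumptions, not taken from the question.
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
# fail fast and keep progress output while the hosting bundle installs
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'Continue';"]
COPY dotnet-hosting-3.1.2-win.exe C:/installer/
RUN Start-Process -FilePath 'C:/installer/dotnet-hosting-3.1.2-win.exe' -ArgumentList '/quiet','/install','/norestart' -PassThru -Wait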

How to utilize .ebextension while using CodePipeline

I'm using CodePipeline to deploy whatever is on the master branch of the git repo to Elastic Beanstalk.
I followed this tutorial to extend the default nginx configuration (specifically the max-body-size): https://medium.com/swlh/using-ebextensions-to-extend-nginx-default-configuration-in-aws-elastic-beanstalk-189b844ab6ad
However, because I'm not using the standard eb deploy command, I don't think the CodePipeline flow is going into the .ebextensions directory and doing the things it's supposed to do.
Is there a way to use CodePipeline (so I can have CI/CD from master) and still get the benefits of .ebextensions?
Does this work if you use the eb deploy command directly? If yes, then I would try using the pipeline execution history to find a recent artifact to download and test with the eb deploy command.
If CodePipeline's Elastic Beanstalk Job Worker does not play well with ebextensions, I would consider it completely useless to deploy to Elastic Beanstalk.
I believe there is some problem with the ebextensions themselves. You can investigate the execution in these log files to see if something is going wrong during deployment:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
All the config files under .ebextensions will be executed in order of precedence when deploying to Elastic Beanstalk. So it doesn't matter whether you are using CodePipeline or eb deploy; all the files in the .ebextensions directory will be executed. You don't have to worry about that.
Be careful about the platform you're using: on "64bit Amazon Linux 2 v5.0.2" you have to use .platform instead of .ebextensions.
Create a .platform directory instead of .ebextensions.
Create the subfolders and the proxy.conf file at this path: .platform/nginx/conf.d/proxy.conf
In proxy.conf, write what you need; for the request body size it is just client_max_body_size 20M; (see the example below).
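For reference, a minimal .platform/nginx/conf.d/proxy.conf covering only the body-size case would contain just:
client_max_body_size 20M;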
I resolved the problem: you need to include the .ebextensions folder in your deployment artifact.
I was only copying the dist files, so I also had to include:
- .ebextensions/**/*
Example:
## Required mapping. Represents the buildspec version. We recommend that you use 0.2.
version: 0.2

phases:
  ## install: install dependencies you may need for your build
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - echo Installing Nest...
      - npm install -g @nestjs/cli
  ## pre_build: final commands to execute before build
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  ## build: actual build commands
  build:
    commands:
      # Build your app
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - npm run build
      ## Clean up node_modules to keep only production dependencies
      # - npm prune --production
  ## post_build: finishing touches
  post_build:
    commands:
      - echo Build completed on `date`

# Include only the files required for your application to run.
artifacts:
  files:
    - dist/**/*
    - package.json
    - node_modules/**/*
    - .ebextensions/**/*
And the config file /.ebextensions/.node-settings.config:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm run start:prod"

Creating a config that overrides some Docker settings while keeping docker:publish behavior

I'm trying to create an SBT build that can publish a Docker container either to DockerHub or to our internal Docker repository. I'm using sbt-native-packager 1.0.3 to build the Docker image.
Here's an excerpt from my build.sbt:
dockerRepository in Docker := Some("thomaso"),
packageName in Docker := "externalname",
sbt docker:publish now successfully publishes to thomaso/externalname on DockerHub.
To add the option to publish to our internal Docker repo I added a configuration called dockerInternal:
val dockerInternal = config("dockerInternal") extend Docker
I then added these two settings to override the defaults:
dockerRepository in dockerInternal := Some("docker.nrk.no/project"),
packageName in dockerInternal := "internalname",
My expectation was that sbt dockerInternal:publish should publish a Docker image to docker.nrk.no/project/internalname. Instead, I get this error message:
delivering ivy file to /home/n06944/repos/nrk.recommendations/api/target/scala-2.10/ivy-0.1-SNAPSHOT.xml
java.lang.RuntimeException: Repository for publishing is not specified.
It seems to me SBT tried to publish to Ivy, not to Docker - when I hardcode the values to the internal repo the publishing works fine and there is no mention of Ivy in the logs. The Docker configuration modifies the publish task, and I hoped that by letting dockerInternal extend Docker I would inherit the Docker-specific publish behavior. Is that an incorrect assumption? Am I missing some incantations, or is there another approach that would be better?
You forgot to import all the necessary tasks into your new config. sbt-native-packager recommends generating submodules for different packaging configurations.
If you want to fiddle around with configuration scopes (which gets messy very fast), see this other SO answer I gave.
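A minimal sketch of the submodule approach, assuming sbt-native-packager 1.x (the module names are illustrative, and the settings mirror the ones from the question):

// one module per publishing target, sharing the application code
lazy val api = (project in file("api"))
  .enablePlugins(JavaAppPackaging)
  .settings(
    dockerRepository in Docker := Some("thomaso"),
    packageName in Docker := "externalname"
  )

// a second module that packages the same code for the internal registry
lazy val apiInternal = (project in file("api-internal"))
  .enablePlugins(JavaAppPackaging)
  .dependsOn(api)
  .settings(
    dockerRepository in Docker := Some("docker.nrk.no/project"),
    packageName in Docker := "internalname"
  )

sbt api/docker:publish and sbt apiInternal/docker:publish then publish to DockerHub and the internal registry respectively, without any custom configuration scopes.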
cheers,
Muki

How to invoke docker:publishLocal as a test dependency in sbt

My project uses sbt-native-packager's Docker plugin to generate Docker containers. I'd like containerization to occur before running unit tests. (The command to do this is sbt docker:publishLocal.)
How can I wire this up in my Build.scala file so that the test task in sbt runs docker:publishLocal first, before its normal test activities?
(Keys.test in Test) <<= (Keys.test in Test) dependsOn (publishLocal in Docker)
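On newer sbt versions, where <<= is deprecated, the equivalent wiring uses := with .dependsOn (same semantics):
(Keys.test in Test) := ((Keys.test in Test) dependsOn (publishLocal in Docker)).value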
