Restore .bak file with Docker MSSQL server on macOS - asp.net

I am using the latest Docker MS-SQL Server image and trying to restore a database (.bak) file using Azure Data Studio, but I am not able to find the physical location on macOS.
Azure Data Studio points the data to /var/opt/mssql/data; however, we can't find that location on macOS.
The Docker command that I used to run the SQL Server image:
docker run --name MsSql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=xxxxxxx' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest

Using the command below solved my issue; I had to mount a host volume onto the container's data volume:
docker run --name MsSql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=FeteBird#sql' -p 1433:1433 -v /var/opt/mssql/data:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2019-latest
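If you prefer not to bind-mount a host path, another option (just a sketch; the container name MsSql matches the first run command above, and yourdb.bak is a placeholder for your backup file) is to copy the backup straight into the container with docker cp and restore from there:
docker cp yourdb.bak MsSql:/var/opt/mssql/data/yourdb.bak
Azure Data Studio can then restore from /var/opt/mssql/data/yourdb.bak inside the container.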

Related

Docker Windows Container API with EFCore connecting to microsoft/mssql-server-windows-developer container

Fact 1. I've got two Windows containers:
MSSQL Server Developer Edition (microsoft/mssql-server-windows-developer)
ASP.NET Core API 5.0 (5.0.102) (solomiosisante/consequence:api)
Fact 2. No problem accessing the SQLServer container data from Management Studio (SSMS).
Fact 3. No problem when I run the API project from Visual Studio accessing the SQLServer container.
Fact 4. The problem is when I run both containers, the error says:
fail: Microsoft.EntityFrameworkCore.Database.Connection[20004]
An error occurred using the connection to database 'Consequence' on
server 'localhost,14344'.
fail: Microsoft.EntityFrameworkCore.Query[10100]
An exception occurred while iterating over the results of a query for
context type 'Consequence.EF.Models.ConsequenceContext'.
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related
or instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to
allow remote connections. (provider: TCP Provider, error: 0 - No
connection could be made because the target machine actively refused
it.)
SQL Server Container:
The SQL container came from an image I built on top of microsoft/mssql-server-windows-developer. I attached my database, created an image, and pushed it to my existing Docker Hub repo solomiosisante/consequence:sqlexpress. I used a sqlexpress image before, hence the tag name :sqlexpress; I've yet to change it to :developer.
API Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Consequence.API/Consequence.API.csproj", "Consequence.API/"]
RUN dotnet restore "Consequence.API/Consequence.API.csproj"
COPY . .
WORKDIR "/src/Consequence.API"
RUN dotnet build "Consequence.API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Consequence.API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ASPNETCORE_URLS="https://+;http://+"
ENV ASPNETCORE_HTTPS_PORT=44388
ENV ASPNETCORE_Kestrel__Certificates__Default__Password="P#ssw0rd123"
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/src/certs/consequence.pfx
ENTRYPOINT ["dotnet", "Consequence.API.dll"]
I also tried:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
#COPY ["Consequence.API/Consequence.API.csproj", "Consequence.API/"]
#COPY ["Consequence.EF/Consequence.EF.csproj", "Consequence.EF/"]
#COPY ["Consequence.Repositories/Consequence.Repositories.csproj", "Consequence.Repositories/"]
COPY . .
RUN dotnet restore "Consequence.API/Consequence.API.csproj"
Build and Run Scripts:
SQL Container:
docker run --name=sqlserver-container -d -p 14344:1433 -e sa_password=P#ssw0rd123 -e ACCEPT_EULA=Y solomiosisante/consequence:sqlexpress
ASP.Net Core 5.0.102 Container:
dotnet dev-certs https -ep certs\consequence.pfx -p P#ssw0rd123
dotnet dev-certs https --trust
docker build --pull -t consequenceapi:latest --file Consequence.API/Dockerfile .
docker run -d -p 8088:80 -p 44388:443 -v C:\SolRepo\Consequence\certs\:C:\src\certs --name consequenceapi consequenceapi:latest
Dockerhub repo for API: solomiosisante/consequence:api
I also tried:
docker run -d -p 8088:80 -p 44388:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="P#ssw0rd123" -e ASPNETCORE_Kestrel__Certificates__Default__Path=\https\aspnetapp.pfx -v $env:USERPROFILE\.aspnet\https:C:\https\ --name consequenceapi solomiosisante/consequence:api
I've read about Docker networks and tried all sorts of things, still with no luck. I'm hoping this is not because I'm using Windows containers and that it is one of their limitations.
Given these facts, any ideas or comments? Thanks in advance.
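For reference, a rough sketch of the user-defined network approach mentioned above (the network name consequence-net is made up here, and whether this behaves identically for Windows containers with the nat driver is an assumption worth verifying):
docker network create consequence-net
docker network connect consequence-net sqlserver-container
docker network connect consequence-net consequenceapi
With both containers on the same network, the API's connection string would point at the SQL container by name, e.g. Server=sqlserver-container,1433 instead of localhost,14344, since localhost inside the API container refers to the API container itself.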

Open RStudio Project from the attached volume when starting Rocker container

I am trying to containerize my work with Docker. It would be nice to start RStudio Server with a project from a mounted volume already open when I start the container. For example, I can start the container like this:
docker run -d -p 8787:8787 -v $(pwd):/home/rstudio --name rocker rocker/verse:latest
It runs RStudio Server and I can access it on port 8787, but it would be nice if it loaded the project from /home/rstudio when I start the container.
There's a hidden file you can modify:
~/.rstudio/projects_settings/switch-to-project
Populate it with the path of the project you want RStudio to open at startup.
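For example, a sketch of wiring that up from the host (the project path /home/rstudio/myproject/myproject.Rproj is only an assumption; check whether the file should contain the .Rproj path or the project directory):
docker run -d -p 8787:8787 -v $(pwd):/home/rstudio --name rocker rocker/verse:latest
docker exec rocker mkdir -p /home/rstudio/.rstudio/projects_settings
docker exec rocker bash -c 'echo /home/rstudio/myproject/myproject.Rproj > /home/rstudio/.rstudio/projects_settings/switch-to-project'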

How to install JupyterHub with Docker on a local machine and in a sub domain

I want to run JupyterHub in a subdomain. Below are the Dockerfile, jupyterhub_config.py, and .gitlab-ci.yml.
My first question is how to configure jupyterhub_config.py: how can I load the jupyterhub_config.py into the container at build time?
How do I start JupyterHub in the .gitlab-ci.yml for tests, and how do I copy the application to the subdomain? I wrote a README.md and need a little help with JupyterHub. If it all works, I will write a complete HOWTO on installing JupyterHub on a local machine and in a subdomain at a provider.
FROM continuumio/miniconda3

# Updating packages
RUN apt-get update -y \
    && apt-get install -y --no-install-recommends \
        git \
        nano \
        unzip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install conda and Jupyter
RUN conda update -y conda
RUN conda install -c conda-forge jupyter_nbextensions_configurator \
    jupyterhub \
    jupyterlab \
    matplotlib \
    pandas \
    scipy

# Setup application
EXPOSE 8000
CMD ["jupyterhub", "--ip='*'", "--port=8000", "--no-browser", "--allow-root"]
The .gitlab-ci.yml
image: docker:latest

variables:
  CONTAINER_IMAGE: registry.gitlab.com/joklein
  DOCKER_IMAGE: jupyterhub
  TAG: 0.1.0

services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

before_script:
  - echo "$GITLAB_PASSWORD" | docker login registry.gitlab.com --username $GITLAB_USER --password-stdin

build:
  stage: build
  script:
    - docker build -t $CONTAINER_IMAGE/$DOCKER_IMAGE .
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE

test:
  stage: test
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    # - docker run $CONTAINER_IMAGE/$DOCKER_IMAGE -dt -p 8000:8000 --name $DOCKER_IMAGE

release:
  stage: release
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    - docker tag $CONTAINER_IMAGE/$DOCKER_IMAGE:latest $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
  only:
    - master

deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk update && apk add git openssh-client rsync
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
    - mkdir "${HOME}/.ssh"
    - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
    - echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
    - chmod 700 "${HOME}/.ssh/id_rsa"
    - rsync -hrvz --delete --exclude=_ public/ user#example.com:www/jupyter/
  only:
    - master
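For the test stage, the commented-out docker run line above would not work as written, because docker run expects its options before the image name; a corrected sketch might look like:
docker run -dt -p 8000:8000 --name $DOCKER_IMAGE $CONTAINER_IMAGE/$DOCKER_IMAGE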
The jupyterhub_config.py
c = get_config()
# Letsencrypt (https://letsencrypt.org/) to obtain a free, trusted SSL
# certificate.
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
c.JupyterHub.port = 443
#
# Change from JupyterHub to JupyterLab
c.Spawner.default_url = '/lab'
c.Spawner.debug = True
#
# # Specify users and admin
c.Authenticator.whitelist = {"systemuser"}
c.Authenticator.admin_users = {"systemuser"}
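Since the config above expects certificates under /etc/letsencrypt and listens on port 443, one way to wire that up at run time (a sketch only; the bind-mount path and the -f flag are assumptions) would be:
docker run -d -p 443:443 --name jupyterhub \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  jupyterhub jupyterhub -f /srv/jupyterhub/jupyterhub_config.py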
Docker base image of JupyterHub and JupyterLab
JupyterHub is a multi-user server for Jupyter notebooks. JupyterLab is the
next-generation web-based user interface for the Jupyter Project. This is a Docker base image for JupyterHub and JupyterLab that works as a stand-alone application and in a (sub)domain.
Images derived from this image can either run as a stand-alone server, or
function as a volume image for your server. You can also use them in a CI/CD
system such as GitLab CI to build your content prior to bundling it into a
standalone server container.
Building your JupyterHub image
Based on this structure, you can easily build an image for your needs. There are two options for using the image you generated:
as a stand-alone image
as a volume image for your webserver
The simplest way to build your own image is to use a Dockerfile. This is only an example. If you need more software packages you can install them with this
Dockerfile and conda.
Build the container
docker build -t jupyterhub .
Your JupyterHub with JupyterLab is automatically generated during this build.
Run the container
docker run -p 8000:8000 -d --name jupyterhub jupyterhub jupyterhub
-p is used to map your local port 8000 to the container port 8000
-d is used to run the container in the background. JupyterHub will just write logs, so there is no need to output them in your terminal unless you want to troubleshoot a server error.
--name jupyterhub names your container jupyterhub
the first jupyterhub is the image
the last jupyterhub is the command used to start the JupyterHub server
Your JupyterHub with JupyterLab is now available at http://localhost:8000.
Start / Stop JupyterHub
docker start / stop jupyterhub
Configure JupyterHub
Let's encrypt certificates for JupyterHub
To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to
demonstrate control over the domain. With Let’s Encrypt, you do this using
software that uses the ACME protocol, which typically runs on your web host.
Go to zerossl.com and generate a certificate for your domain. As a result you get four files: domain-key.txt, domain-crt.txt, domain-csr.txt, and account-key.txt. These files are base64-encoded, i.e. readable ASCII rather than binary. The certificates are already in PEM format; just change the extension to *.pem.
For JupyterHub only the files domain-key.txt and domain-crt.txt are needed.
cp domain-crt.txt fullchain.pem
cp domain-key.txt privkey.pem
Add a System user in the container
By default JupyterHub searches for users on the server. In order to be able to
log in to our new JupyterHub server we need to connect to the JupyterHub docker
container and create a new system user with a password.
docker exec -it jupyterhub bash
useradd --create-home systemuser
passwd systemuser
exit
The command docker exec -it jupyterhub bash will spawn a root shell in your
docker container. You can use the root shell to create system users in the
container. These accounts will be used for authentication in JupyterHub's
default configuration.
The first command, useradd, creates a new user named systemuser. The second, passwd, will ask you to set a password for it.
The whole process might be simpler with GitLab 12.0 (June 2019) and its Git integration for JupyterHub:
Deploying JupyterHub via GitLab’s Kubernetes integration provides an easy way to get started with Jupyter notebooks, which can be used to create and share documents that contain live code, visualizations, and even runbooks.
Starting with GitLab 12.0, JupyterLab’s Git extension is automatically provisioned and configured when installing JupyterHub onto your Kubernetes cluster.
This integration enables full version control of your notebooks as well as issuance of Git commands within Jupyter. Git commands can be issued via the Git tab on the left panel or via Jupyter’s command line prompt.
See documentation and gitlab-ce issue 47138.
jupyterhub --generate-config
This is what the documentation suggests. It created a config file in /srv/jupyterhub.

Deploy shiny app in rocker/shiny docker

Well, I'm new at Docker and I need to implement a Shiny app in a Docker Container.
I have the image from https://hub.docker.com/r/rocker/shiny/, that includes Shiny Server, but I don't know how to deploy my app in the server.
I want to deploy the app on the server, install the required packages for my app inside the container, save the changes, and export the image/container.
As I said, I'm new at Docker and I don't know how it really works.
Any idea?
I guess you should start by creating a Dockerfile in a specific folder, which would look something like this:
FROM rocker/shiny:latest
RUN echo 'install.packages(c("package1","package2", ...), \
repos="http://cran.us.r-project.org", \
dependencies=TRUE)' > /tmp/packages.R \
&& Rscript /tmp/packages.R
EXPOSE 3838
CMD ["/usr/bin/shiny-server.sh"]
Then go into this folder and build your image, giving it a name by using this command:
docker build -t your-tag .
Finally, once your image is built you can create a container, and if you don't forget to map the volume and the port, you should be able to find it at localhost:3838 with the following command launched from the folder containing the srv folder:
docker run --rm -p 3838:3838 -v $PWD/srv/shinyapps/:/srv/shiny-server/ -v $PWD/srv/shinylog/:/var/log/shiny-server/ your-tag
As said in the Docker documentation at the following address https://hub.docker.com/r/rocker/shiny/, you might want to launch it in detached mode with -d option and map it with your host's port 80 for a real deployment.
The link (https://hub.docker.com/r/rocker/shiny/) covers how to deploy Shiny Server.
Simplest way would be:
docker run --rm -p 3838:3838 rocker/shiny
If you want to extend Shiny Server, you can write your own Dockerfile and start with the shiny image as the base image (https://docs.docker.com/engine/reference/builder/).
Dockerfile:
FROM rocker/shiny:latest
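As a small sketch of that (the app folder name myapp is just an assumption), the extended Dockerfile might copy your app into Shiny Server's default app directory:
FROM rocker/shiny:latest
# copy your local app folder into Shiny Server's app directory
COPY myapp /srv/shiny-server/myapp
The app would then be served at localhost:3838/myapp once the container is running.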

how can I set the working directory in old version of docker in the run command?

I am a bit new to Docker and I have been trying to deploy a Meteor container with my Meteor application. I have been using the Dockerfile and instructions from https://registry.hub.docker.com/u/golden/meteor-dev/
However, I can't run docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application -w /opt/application meteor-dev because my Docker (version 0.5.3) does not recognize the -w flag for setting the working directory.
Is there some workaround to set the working directory with Docker 0.5.3? The working directory is already set in the Dockerfile, but I guess I need to set it again when I run the container.
Well, my workaround was to create a bash script that changes to the working directory and calls the commands one by one. I created the bash script where my source is located ("/path/to/meteor/app") and ran docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application meteor-dev bash /opt/application/start.sh, with bash as the command and my script as the argument.
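The script itself isn't shown, but a minimal sketch of such a start.sh (the meteor command is an assumption about what the image normally runs) could be:
#!/bin/bash
# change into the directory that -w would have set, then start the app
cd /opt/application
meteor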

Resources