Why is my docker container's timestamp wrong when running an ASP.NET Framework 4.7 application?

I have a very simple ASP.NET v4.7 web application that runs in a docker container on my local development laptop.
The web application tries to connect to DocumentDb, but this fails because the container's clock is completely wrong, so naturally JWT token verification fails (the token's validity window is checked against the local clock). The exact ASP.NET code does not really matter; this is more about why the docker container's time differs from the host machine's.
Note that I'm using Windows containers. My Dockerfile looks as follows:
# base image: ASP.NET 4.7.2 on Windows Server Core 1803 (same release as the host)
FROM microsoft/aspnet:4.7.2-windowsservercore-1803
ARG source
WORKDIR /inetpub/wwwroot
# copy the published output (obj/Docker/publish when no source arg is given) into the site root
COPY ${source:-obj/Docker/publish} .
When I connect to the container and run time, I get the following:
docker container exec -it <containerId> cmd
c:\>time
The current time is: 0:29:49.87
c:\>tzutil /g
South Africa Standard Time
When I do this on the host machine, my laptop, I get:
C:\>time
The current time is: 15:42:45.72
Enter the new time:
C:\>tzutil /g
South Africa Standard Time
Where does docker get that crazy timestamp? Is there a way to sync with the host machine on startup?
My laptop's operating system version is: Win 10, v1803 (build 17134.471)
I'm running: Docker for Windows CE v2.0.0.0-win81 (29211) with Docker Engine version 18.09.0

Unfortunately the only solution so far is to "Wait and let it marinate" as pointed out in http://github.com/Microsoft/aspnet-docker/issues/120
Thanks @ikkentim for pointing out the issue.
I suspected that perhaps my container did not have internet access and could not sync its clock, but was unable to verify this as it just started working fine this morning.
I will post a better answer if I find a more useful solution than "wait".
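If it happens again, a possible workaround (a sketch I have not verified, assuming the container has internet access and the Windows Time service is running inside it) is to force a resync from inside the container:

docker container exec <containerId> cmd /c "w32tm /resync"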

Related

Selenium/standalone-firefox docker on Raspberry Pi not working: how to use RSelenium on a Raspberry Pi

I am trying to use RSelenium on a raspberry pi 3 B+. I managed to get R and RSelenium installed.
I first tried to use rsDriver(browser = "firefox"), but I did not manage to get it to work (it ends up with an error saying Could not open firefox browser).
As it is recommended to use RSelenium with docker, I am trying to get docker run a Selenium/firefox standalone container.
I managed to get docker up and running. The hello-world run works, as well as an ubuntu bash (docker run -it ubuntu bash gets me an ubuntu terminal).
I pulled a standalone-firefox image with a specific version tag (3).
here are the images I have:
REPOSITORY                    TAG      IMAGE ID       CREATED         SIZE
ubuntu                        latest   f576a39bda44   2 weeks ago     46.7MB
selenium/standalone-firefox   3        d803a00f9219   3 weeks ago     756MB
hello-world                   latest   618e43431df9   10 months ago   1.64kB
I then do
sudo docker run -d -p 4445:4444 selenium/standalone-firefox:3
But there is no container when I do docker ps, and
sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
351866263f7b selenium/standalone-firefox:3 "/opt/bin/entry_poin…" 10 seconds ago Exited (1) 6 seconds ago fervent_noether
shows that the container exited immediately after starting. I tried with standalone-firefox:2.53.0 (pulling and executing), and it resulted in the same problem. What am I doing wrong? Is this version of standalone-firefox not supported by the Raspberry Pi?
More generally, does someone know how to get RSelenium working on a Raspberry Pi (with firefox as the browser)?
Edit
Following LinPy's answer, I tried pulling docker images of selenium browsers compatible with the Raspberry Pi ARM architecture. I found these:
https://hub.docker.com/u/kynetiv/
https://hub.docker.com/r/deinchristian/rpi-selenium-node-firefox
https://hub.docker.com/u/pun4drunk/
The docker containers ran without problem, but I never managed to get the remoteDriver connected to the browser in RSelenium (different errors for different reasons, which I won't detail here).
The only way I found to use RSelenium on the Raspberry Pi without a remote server was to execute the java selenium standalone server you can find here (I tried 2.53.0):
java -jar selenium-server-standalone-2.53.0.jar
And then connect to it in R:
library(RSelenium)
# connect to the standalone server listening on the default port
rmDr <- remoteDriver(port = 4444L)
rmDr$open()
It was that easy in the end.
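For completeness, a quick smoke test once the session is open (standard remoteDriver methods; the URL is just an example):

rmDr$navigate("https://www.r-project.org")  # load a page through the selenium server
rmDr$getTitle()                             # should return the page title
rmDr$close()                                # end the browser session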
I think there is a mismatch between your app and the OS architecture. It seems the application was built for amd64, but you are trying to start it on ARM.
So check your Docker and app versions and make sure they are compatible.
see this and this
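To confirm the mismatch, compare the host architecture with the one recorded in the image (command sketch; the tag matches the image pulled above):

uname -m
# armv7l on a Raspberry Pi 3, i.e. 32-bit ARM
sudo docker image inspect --format '{{.Architecture}}' selenium/standalone-firefox:3
# amd64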
You used the docker container incorrectly. You can actually see your container by doing docker ps -a, but it is not in a good state. You specified the -p argument without passing a port to it, and you passed the image without a tag. Follow the official documentation for this image and try again step by step:
https://github.com/SeleniumHQ/docker-selenium
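Whatever the cause, the container's own log output should show why it exits immediately; with an architecture mismatch this is typically an exec format error. Using the container name from the docker ps -a output above:

sudo docker logs fervent_noether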

Kibana 4.5 run as service on CentOS 7

What is the proper way to run Kibana 4.5 as a service on CentOS 7?
When I run it as ./kibana, I can connect to it from another machine without any problem. When I run it with systemctl start kibana and check with ps -ef | grep '.*node/bin/node.*src/cli', it looks like it is running but refuses connections, and then goes down. What can be the problem? Thanks in advance.
Here is content of kibana.service file
[Unit]
Description=no description given
[Service]
Type=simple
User=kibana
Group=root
Environment=CONFIG_PATH=/opt/kibana/config/kibana.yml
ExecStart=/opt/kibana/bin/kibana
Restart=always
[Install]
WantedBy=multi-user.target
I am not that much of a Linux expert, but I recently installed Kibana using yum (https://www.elastic.co/guide/en/kibana/4.5/setup.html#kibana-yum) on a minimal installation of CentOS 7 and did not face any issues whatsoever.
In order to get some debug logs and find out what is wrong in your case, edit the Kibana configuration file
/opt/kibana/config/kibana.yml
and set a filename for the logging.dest property:
logging.dest: /var/log/kibana.log
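Since the unit is managed by systemd, its status and captured stdout/stderr are also worth checking (standard systemd commands, assuming the unit is named kibana.service):

sudo systemctl status kibana.service
sudo journalctl -u kibana.service --since today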
Good luck
Igor,
I noticed a few questions you posted on Kafka, so it sounds like you need to set up a cluster that can ingest data and pass it to Elasticsearch. Kibana would be just the user interface.
In my experience, components like ELK, Kafka, Zookeeper, etc. should be managed by a watchdog process. I highly recommend looking at something like supervisord: http://supervisord.org/
You should run supervisord itself as a service and let it manage the rest. It guarantees that components start at boot and, more importantly, restarts them on failure and collects their logs. Kibana in particular is a Node.js app that writes to stdout/stderr, so to know what fails you need to collect those streams.
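A minimal supervisord program section for Kibana could look like this (a sketch based on the paths in your unit file; the log locations are illustrative):

[program:kibana]
command=/opt/kibana/bin/kibana
environment=CONFIG_PATH="/opt/kibana/config/kibana.yml"
user=kibana
autostart=true
autorestart=true
stdout_logfile=/var/log/kibana.stdout.log
stderr_logfile=/var/log/kibana.stderr.log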

Permission Issue with Docker Volume Driver for Azure File Storage

I am following the readme for this project (https://github.com/Azure/azurefile-dockervolumedriver/blob/master/contrib/init/upstart/README.md), but when I try and mount a volume on a container like this
docker volume create -d azurefile -o share=myshare --name=myvol
docker run -i -t -v myvol:/data busybox
(inside the container)
# cd /data
# touch file.txt
I get this error:
Error response from daemon: VolumeDriver.Mount: mount failed: exit status 32
output="mount.cifs kernel mount options: ip=168.61.57.82,unc=\\\\cmstoragecd.file.core.windows.net\\myshare,vers=3.0,dir_mode=0777,file_mode=0777,user=cmstoragecd,pass=********\nmount
error(13): Permission denied\nRefer to the mount.cifs(8) manual page (e.g. man mount.cifs)\n"
This is running on an Ubuntu 14.04 server on Azure. I have successfully used the extension with similar servers, but it is now not working. What can I do to debug this?
Your answer is correct. The CIFS implementation in many Linux distros does not currently support encryption, which Azure File Storage requires for cross-region SMB traffic.
Quoting the note at https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/
Note: The Linux SMB client doesn’t yet support encryption, so mounting a file share from Linux still requires that the client be in the same Azure region as the file share. However, encryption support for Linux is on the roadmap of Linux developers responsible for SMB functionality. Linux distributions that support encryption in the future will be able to mount an Azure File share from anywhere as well.
In the future, please consider contacting us directly by opening a new issue on our GitHub repository at: https://github.com/Azure/azurefile-dockervolumedriver/issues.
I managed to get around this error by using a storage account in the same region as the Azure VM. Originally I had a VM running in West Europe, using a file share in East US.
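To confirm that the region mismatch is the culprit, you can reproduce the mount manually on the VM, outside the volume driver (a sketch using the account and share names from the error output above; replace the placeholder key):

sudo mkdir -p /mnt/test
sudo mount -t cifs //cmstoragecd.file.core.windows.net/myshare /mnt/test -o vers=3.0,username=cmstoragecd,password=<storage-account-key>,dir_mode=0777,file_mode=0777

If this fails with the same permission error against a cross-region share but succeeds against a same-region one, the encryption limitation quoted above is confirmed.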

Big blue button dev instance stuck at 0 % loading

I installed BigBlueButton 0.9 on a VirtualBox VM running Ubuntu 14.04 on a Windows 7 host machine. It runs fine.
Then when I try to set up the BigBlueButton dev environment, it doesn't load the client and is stuck at 0%. It gives the response Could not find conference.
Things I have tried:
I tried rebuilding with ant; it says BUILD SUCCESSFUL
I changed the config.xml file to match 192.168.11.88, i.e. the IP nginx is running on
Also ran the command sudo bbb-conf --setip 192.168.11.88
In the Chrome console, it tells me that GET requests to IP 10.0.2.15 fail with net::ERR_CONNECTION_TIMED_OUT.
PS: This was running fine until I changed my VirtualBox network adapter setting from NAT to Bridged Adapter.
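Since switching from NAT to a bridged adapter changes the VM's address, stale references to the old NAT IP (10.0.2.15) may survive in the configuration. Re-running the setip step and letting bbb-conf verify the setup is a reasonable first check (standard bbb-conf flags):

sudo bbb-conf --setip 192.168.11.88
sudo bbb-conf --clean
sudo bbb-conf --check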

Docker restart not showing the desired effect

I have a small nginx-based test application that I want to run inside a docker container, so I followed the example given here: docker installation
So I have a folder named restartTest and it contains an index.html file that has a single line in it that says Docker Test 1. I mount this as my volume during runtime for the docker container. So the command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application is running as desired. Now what I do is change the content of the index.html file (from my localhost) to Docker Test 2 and then restart the container. I execute the following command to verify that the content has indeed changed inside the docker container:
docker exec engine2 cat /usr/share/nginx/html/index.html
And as expected, the file reads Docker Test 2. However, when I use the curl command to see if the webpage also reflects the change, I still get Docker Test 1 as the response. The index.html reflects the change, yet when I run the curl command or access the app from the browser, I still get the old content. I have tried the following, but to no avail:
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.
So I found that this is a known bug with the VirtualBox VM that is used for running Docker on Mac.
This bug only shows up when content is shared between the host machine and the VirtualBox VM. Web servers like nginx, apache (and apparently vertx) use an optimisation: whenever we request a static file, the server uses sendfile to provide the file. The bug is that in the case of VirtualBox (in the scenario described above) we always get the first version of the file, no matter what we try. The workaround for nginx and apache is to turn sendfile off (see the sketch after the list below). For vertx, however, we use the following hack:
rename the file say login.html to login.html.moved (anything)
curl :/….../login.html (we won’t get anything)
rename the file back to its original name login.html.moved to login.html
Hard refresh the page (Command + Shift + R).
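For nginx, turning sendfile off is a one-line change in the site configuration. A minimal sketch of a server block (the root path here is illustrative, not taken from the question):

server {
    listen 80;
    root /usr/share/nginx/html;
    sendfile off;  # avoid stale reads through VirtualBox shared folders
}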
For further reading about this bug, consult the following:
Link1
Link2
Link3
Link4
I assume it is a caching problem. Did you try setting expires -1 in your index.html location configuration to disable server-side caching for static files?
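That would look roughly like this (a sketch; the location block is illustrative):

location / {
    expires -1;  # sends Cache-Control: no-cache so clients and proxies revalidate
}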
