What actually worked for me was:
Download the normal/free ngrok and put it in your PATH.
Download the auth key once you create an account, and set it with "ngrok authtoken <your token>" => this creates a file under your home/.ngrok folder.
Set up a service with NSSM (install NSSM from an admin PowerShell with choco install nssm):
nssm install ngrok
In the service setup window, set the executable to "cmd.exe" with parameters: /K ngrok tcp 1234
In the newly installed service, change the log-on user to your current one. This way, when the service starts, ngrok can fetch the auth key from that user's home dir.
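For reference, here is a rough non-interactive sketch of the same setup (service name and port are the ones above; the account name and password are placeholders you would replace):

:: scripted equivalent of the GUI steps, run from an elevated prompt
nssm install ngrok cmd.exe "/K ngrok tcp 1234"
:: run the service as your own account so ngrok can find the auth token in your home dir
nssm set ngrok ObjectName ".\YourUser" "YourPassword"
nssm start ngrok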
Is there a better way?
I'm working with the docker image "mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019". I've noticed that the default user for windowsservercore is ContainerAdministrator. If I try to run the image with the user ContainerUser (docker run -u ContainerUser mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019) I get the following error: ERROR: Failed to stop or query status of service 'w3svc' error [80070005].
I think that the error is related to the permissions that the user needs to run ServiceMonitor. So, first of all, is it correct to assume that windowsservercore images must run with ContainerAdministrator and cannot run with ContainerUser?
If the assumption above is correct I would like to confirm if running the container with ContainerAdministrator can expose the container to a security issue. As far as I understand even if the ServiceMonitor.exe is started with ContainerAdministrator the external-facing process is the IIS Windows service, which runs under a local account in IIS_IUSRS group. So even if an attacker could compromise the application it will not have administrator access to the container. Can anyone confirm if this is correct?
ContainerAdministrator is a special virtual account. It is the default account when you run a container, so if your CMD instruction starts a console app, that app will run as ContainerAdministrator. If your app runs in the background as a Windows service, then the account will be the service account; ASP.NET apps, for example, run under application pool accounts.
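You can verify this quickly (a hedged check using the image tag from the question; the exact output string may differ):

# the foreground process started by CMD / docker run uses ContainerAdministrator by default
docker run --rm mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019 cmd /c whoami
# should print something like: user manager\containeradministrator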
You could refer to the link below:
Accessing the Docker host pipe inside windows container with non-admin user
I was in the same position you are in. I can't confirm your assumption (though I assume the same), but I can provide our Dockerfile, which enabled us to run as non-root (to comply with an AKS policy).
Dockerfile
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2019
SHELL ["cmd", "/S", "/C"]
# username = '1000' so the k8s policy can verify it's a non-root user
RUN net user /add 1000
# We copy some config files in the actual startup.ps1 so we need write access here
RUN icacls C:\inetpub\wwwroot\ /grant 1000:(OI)(CI)F /t
# ServiceMonitor.exe puts some environment variables in the applicationHost.config
RUN icacls C:\Windows\System32\inetsrv\Config\ /grant 1000:(OI)(CI)F /t /c
# S-1-5-32-545 is group Builtin\Users which contains user 1000. Allows user to restart the w3svc service
RUN sc.exe sdset w3svc D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)(A;;RPWPDTLO;;;S-1-5-32-545)
COPY startup.ps1 /
WORKDIR /inetpub/wwwroot
ARG source=obj/Docker/publish
COPY ${source} .
USER 1000
ENTRYPOINT ["powershell", "/startup.ps1"]
startup.ps1
# ContainerAdministrator doesn't have these variables, but the
# custom account does. If they get put into the applicationHost.config,
# IIS will try to write to the user-specific temp directory and fail (with an unrelated error)
Remove-Item env:TMP
Remove-Item env:TEMP
C:/ServiceMonitor.exe w3svc
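For completeness, a hedged sketch of how an image built from this Dockerfile could be built and checked locally (the image name and host port are illustrative, not part of our actual setup):

# build the image and run it in the background
docker build -t my-wcf-app .
docker run -d --name my-wcf-app -p 8080:80 my-wcf-app
# confirm the container is not running as ContainerAdministrator
# (should report the non-admin user "1000" set by USER in the Dockerfile)
docker exec my-wcf-app whoami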
My institute recently installed a new proxy server for our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet I have found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
at the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(of course, using my user and password)
The second approach was what was suggested by this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, if I try to test my wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my username and password are not OK, but I checked them in my browsers and my credentials work just fine.
Any idea on what this could be due to?
This problem was solved thanks to the suggestion of a user of the AskUbuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file:
nano ~/.wgetrc
Step 2 - Record the proxy info in this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
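Since this file now contains your credentials, it is also worth locking it down and re-running the test (a generic sanity check, nothing specific to my setup):

chmod 600 ~/.wgetrc          # keep the proxy credentials readable only by you
wget -q -O /dev/null http://www.google.com && echo "proxy OK"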
I am working on Facebook Messenger.
A Facebook app only accepts one URL for the webhook, but ngrok generates a new URL every time. Now I am unable to test my app because the webhook URL changed.
UPDATE May 2020
Serveo is up and running again! No installation, no signup!
All you need to do is to run this:
ssh -R <unique subdomain>:80:<your local host>:<your local port> serveo.net
like
ssh -R youruniquesubdomain:80:localhost:8000 serveo.net
UPDATE January 2020
Since there are some issues with Serveo and localtunnel, I want to share with you another free SSH-based service for exposing localhost: Localhost.run
Unfortunately, it does not provide unique subdomains, but it is SSH-based, so you do not have to install additional applications. I'm still waiting for Serveo to come back.
UPDATE April 2018
I've found Serveo just now! And it is totally incredible!
UPDATE November 2017
Probably it is not the best option for you, but I started using localtunnel instead of ngrok.
The installation and run flow is very simple:
npm install -g localtunnel
lt --port <your localhost port> --subdomain youruniquesubdomain
Then I can go to my http://youruniquesubdomain.localtunnel.me
That's it!
No more free subdomain support from ngrok... I get an error as below:
Tunnel session failed: Only paid plans may bind custom subdomains.
Failed to bind the custom subdomain 'arvindpattartestfb.ngrok.io' for the account 'arvccccc'.
This account is on the 'Free' plan.
Upgrade to a paid plan at: https://dashboard.ngrok.com/billing/plan
ERR_NGROK_313
You need to set up an auth token. You can find it here: https://dashboard.ngrok.com/auth. (This previously worked with the free version with no need to pay; custom subdomains are now a paid feature, see ngrok pricing.)
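That is, before requesting the subdomain, register the token once (the token value is a placeholder; use your own from the dashboard):

ngrok authtoken <your_auth_token>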
Then you can use it like this:
ngrok http 80 --subdomain yoursubdomain
Neither localtunnel.me nor Serveo is working for me right now, so I created a temporary solution that works for some use cases, including mine (React Native local development): using the ngrok npm package, you can save the generated ngrok URL into a JSON file, and that file can be read by any other app.
First make sure to install ngrok using npm install ngrok, then use this Node script:
const ngrok = require('ngrok');
const fs = require('fs').promises;
(async function() {
  // open a tunnel to local port 3000
  const url = await ngrok.connect(3000);
  // query ngrok's local API for the list of active tunnels
  const api = ngrok.getApi();
  let data = await api.get('api/tunnels');
  data = JSON.parse(data);
  // save the public URL of the first tunnel to config.json
  let dict = {'domain': data.tunnels[0].public_url};
  await fs.writeFile("config.json", JSON.stringify(dict));
  console.log("saved " + data.tunnels[0].public_url);
})();
Then from your app you may read it using code similar to:
const backend = require('./config.json').domain;
For a local DHIS2 installation, I did this in the terminal on an Ubuntu server.
Make sure your web app is running on the specified port; mine was on 8080.
ssh -R dani.serveo.net:80:localhost:8080 serveo.net
The beauty of serveo.net is that you can reuse the same hostname prefix in the URL before serveo.net as many times as you want, even after a power outage or an internet disconnection.
Staqlab Tunnel provides a domain for free. It works great but needs a binary to be downloaded from their website. I have been using this service for months without any hassle.
In 2022 (almost 2023) pagekite.me works for me.
It is very similar to ngrok, and requires the installation of pagekite.py (and, obviously Python).
After installation, clicking on pagekite.py opens the pagekite shell.
Run the command: 8080 subdomain.pagekite.me
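Equivalently, if you prefer to skip the interactive shell, I believe the same thing can be run as a single command (assuming pagekite.py is on your PATH and Python is installed; the subdomain is a placeholder):

python pagekite.py 8080 subdomain.pagekite.me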
I noticed that no one mentioned how to get static ngrok URLs, which is what the question was mainly about.
A way to do it is to edit the ngrok.yml file, which is located at
Linux: "~/.config/ngrok/ngrok.yml"
MacOS (Darwin): "~/Library/Application Support/ngrok/ngrok.yml"
Windows: "%HOMEPATH%\AppData\Local\ngrok\ngrok.yml"
You can have content such as:
version: "2"
authtoken: valid_auth_token
tunnels:
  first-app:
    addr: 3000
    proto: http
    hostname: yourfixedngrok_id1.ngrok.io
  second-app:
    addr: 8000
    proto: http
    hostname: yourfixedngrok_id2.ngrok.io
This will help you expose multiple ports and have a persistent URL for each of them, based on the value you set for hostname.
After that, you run your ngrok using this command:
ngrok start --all
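If you only need one of the configured tunnels, ngrok can also start tunnels by name (the names here are the ones defined in the YAML above):

ngrok start first-app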
Though it's not a solution, take it as a workaround: I had the same problem while testing. What I did was keep ngrok running on my HTTP port, so my ngrok URL does not change, while I frequently change and restart my server for testing and debugging.
I created a Fedora instance in Horizon by supplying a public key, but I didn't get any username and password to SSH into the instance. I also tried to create an instance from the shell by running this:
nova boot --config-drive=true --flavor 3 --key-name testkey --image be1437b9-b7b4-4e56-a2c3-f92cdd0848ce --user-data cloud-config.txt test
The instance launched successfully in both cases, but when I try to log in as root it asks me for a password.
So please tell me the exact way to create a Fedora instance in OpenStack and what its username and password for SSH would be.
Just to confirm: I suppose you have the corresponding .pem file for the key name that you created (testkey) and that this file has the appropriate permissions to be used for SSH access, i.e. chmod 600 on the .pem file.
If this is the case, you should be able to get into the instance just by executing the following command:
ssh -i testkey.pem root@<IP address>
Have you installed the cloud-init package from the EPEL repository?
If so, you can get into the server using the 'fedora' or 'cloud-user' user account.
http://docs.openstack.org/image-guide/content/ch_obtaining_images.html
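For example, with the keypair from the question and a Fedora cloud image, that would look something like this (the IP address is a placeholder for your instance's address):

ssh -i testkey.pem fedora@<IP address>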
Leaving the cloud-init option in nova boot aside, I have also tried this:
nova boot --flavor 3 --key-name testkey --image be1437b9-b7b4-4e56-a2c3-f92cdd0848ce test
With this command the instance launches successfully, but I still can't SSH into it.
Whereas now, when I create an instance from Horizon, I can SSH into that instance easily.
For the first login it is recommended that you generate a key pair (in Ubuntu, https://help.ubuntu.com/community/SSH/OpenSSH/Keys), inject it into the image (http://docs.openstack.org/grizzly/basic-install/yum/content/basic-install_operate.html), and SSH to the instance using the key pair. Once you are logged in, you can create a user, and using this user you can log in through the VNC console.
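Putting that together, a rough outline of the flow with the nova CLI (the flavor, key name, and image ID are the illustrative values from the question; the IP is a placeholder):

# create a keypair and keep the private key
nova keypair-add testkey > testkey.pem
chmod 600 testkey.pem
# boot the instance with that keypair
nova boot --flavor 3 --key-name testkey --image be1437b9-b7b4-4e56-a2c3-f92cdd0848ce test
# log in with the image's default cloud user (e.g. 'fedora' for Fedora cloud images)
ssh -i testkey.pem fedora@<floating-ip>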
I have a Fuse ESB standalone server running in a RHEL box. I want to connect to the Karaf console remotely to manage the bundles.
If I close my current session, how do I get back to my Karaf console again?
I have my Fuse ESB configured to use port 8101 for SSH. Will I be able to connect to it directly through my SSH client (PuTTY)?
Or do I need another Fuse ESB instance locally to access the remote Fuse instance?
Either way, I am not able to connect; it says access denied. Is there any other easier way to connect to a remote Fuse/Karaf instance?
I even tried using client.sh from the bin directory; it says authentication failure, although I have created a JAAS user with the admin role.
By the way, is just a user enough to do this, or does it also need public/private key configuration?
What is the usual approach for managing a remote Fuse/Karaf instance?
You can find many details in the JBoss Fuse documentation (the successor to Fuse ESB) at
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/
And there is a chapter on remote connecting to containers here
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/6.0/html-single/Configuring_and_Running_JBoss_Fuse/index.html#ESBRuntimeRemote
You need to pass in credentials for a user on the container that is valid and is in the admin role.
The Karaf shell also has a jaas command group, which allows you to list the users and their roles, add new users, and so on. You can also do some user management from the FMC web console that is part of Fuse ESB.
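For example, from the Karaf shell you can inspect the realm and add a user roughly like this (the jaas:* commands are standard Karaf; the user name and password below are placeholders):

JBossFuse:karaf@root> jaas:realms
JBossFuse:karaf@root> jaas:manage --realm karaf
JBossFuse:karaf@root> jaas:useradd myadmin mypassword
JBossFuse:karaf@root> jaas:roleadd myadmin admin
JBossFuse:karaf@root> jaas:update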
You might also want to check your iptables rules (http://ask.xmodulo.com/open-port-firewall-centos-rhel.html):
sudo iptables -I INPUT -p tcp -m tcp --dport 8101 -j ACCEPT
sudo service iptables save
sudo service iptables restart
From another Karaf instance you can run this command:
JBossFuse:karaf@root> ssh -l username -P password -p port hostname
e.g.
JBossFuse:karaf@root> ssh -l smx -P smx -p 8101 10.234.12.12
You have to make sure that the ssh role name that is defined in etc/org.apache.karaf.shell.cfg
# sshRole defines the role required to access the console through ssh
#
sshRole = ssh
matches the one in etc/users.properties
#
# This file contains the users, groups, and roles.
# Each line has to be of the format:
#
# USER=PASSWORD,ROLE1,ROLE2,...
# USER=PASSWORD,_g_:GROUP,...
# _g_\:GROUP=ROLE1,ROLE2,...
#
# All users, groups, and roles entered in this file are available after Karaf startup
# and modifiable via the JAAS command group. These users reside in a JAAS domain
# with the name "karaf".
#
karaf = karaf,_g_:admingroup
_g_\:admingroup = group,admin,manager,viewer,ssh
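With a matching user and role in place, a quick way to test remote access is the client script shipped with Fuse/Karaf (the host is a placeholder; the port and credentials are the defaults from the config above, so adjust them to your own JAAS user):

bin/client -h <remote-host> -a 8101 -u karaf -p karaf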