Hg clone, pull, or incoming command from any repository on an HgLab server throws a mismatch error

My question's pretty much in the title. When trying to issue a clone, pull, or incoming command with Mercurial against any repository on an HgLab server (whether that repository was created from scratch on the server or pushed to it, in both cases before issuing the supposedly erroneous command), I get a mismatch error. Here's the log:
hg --verbose --debug --traceback incoming http://user@server:81/hg/project/repository
using http://server:81/hg/project/repository
http auth: user user, password not set
sending capabilities command
[HgKeyring] Keyring URL: http://server:81/hg/project/repository
[HgKeyring] Looking for password for user user and url http://server:81/hg/project/repository
[HgKeyring] Keyring password found. Url: http://server:81/hg/project/repository, user: user, passwd: *****
comparing with http://user@server:81/hg/project/repository
query 1; heads
sending batch command
searching for changes
all local heads known remotely
sending getbundle command
Traceback (most recent call last):
File "mercurial\dispatch.pyo", line 204, in _runcatch
File "mercurial\dispatch.pyo", line 887, in _dispatch
File "mercurial\dispatch.pyo", line 632, in runcommand
File "mercurial\dispatch.pyo", line 1017, in _runcommand
File "mercurial\dispatch.pyo", line 978, in checkargs
File "mercurial\dispatch.pyo", line 884, in
File "mercurial\util.pyo", line 1005, in check
File "mercurial\commands.pyo", line 5067, in incoming
File "mercurial\hg.pyo", line 820, in incoming
File "mercurial\hg.pyo", line 783, in _incoming
File "mercurial\bundlerepo.pyo", line 509, in getremotechanges
File "mercurial\bundle2.pyo", line 1319, in writebundle
File "mercurial\changegroup.pyo", line 102, in writechunks
File "mercurial\bundle2.pyo", line 1312, in chunkiter
File "mercurial\changegroup.pyo", line 228, in getchunks
File "mercurial\changegroup.pyo", line 48, in getchunk
File "mercurial\changegroup.pyo", line 43, in readexactly
abort: stream ended unexpectedly (got 0 bytes, expected 4)
Before anyone suggests the easy solutions, it should suffice to know that I've already tried the following:
Looking up existing solutions on Stack Overflow, none of which worked. Some of them are:
Using an older version of Mercurial (downgrading from 3.5.1 to 3.4.2)
Running hg verify on both the local machine and the server to fix inconsistencies in both repositories
hg pull -r 0 http://user@server:81/hg/project/repository (gives the same error)
hg pull -f -r 0 http://user@server:81/hg/project/repository (gives the same error)
hg incoming -r 0 http://user@server:81/hg/project/repository (gives the same error)
hg incoming -f -r 0 http://user@server:81/hg/project/repository (gives the same error)
It should also be noted that hg outgoing and hg push don't give any problems whatsoever.
Please help!
Thanks guys :)

There's a bug in HgLab in the component that handles bundling the response to hg incoming or hg pull. The exact details are unclear; you'll want to contact their customer support for details (they're very responsive).
If version 1.10.6 does not have the fix, versions after that should have it.
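If you want to confirm the failure is on the server side before contacting support, one low-level check (a sketch, not from the original thread; it assumes HgLab serves the standard Mercurial HTTP wire protocol, which the successful capabilities and batch commands in the log suggest) is to query the repository's advertised capabilities directly:
# Ask the repository which wire-protocol capabilities it advertises
curl -u user "http://server:81/hg/project/repository?cmd=capabilities"
The response is a space-separated capabilities list; whether and how bundle2 appears in it is a useful detail to include in the support ticket.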

AWS SAM Local dotnetcore2.1 exception when running API Gateway

Setup
Windows 10
Docker for Windows v18.09.0
AWS SAM CLI v0.10.0
Python 3.7.0
AWS CLI v1.16.67
dotnet core sdk v2.1.403
Powershell v5.1.17134.407
Problem
I'm following the quickstart for AWS SAM Local (as well as the README generated by the init command below), using the dotnetcore2.1 runtime.
I've run the following command to initialise AWS SAM for use with dotnetcore2.1:
sam init --runtime dotnetcore2.1
Then I created the package by running
build.ps1 --target=package
Finally I start the local API Gateway service by running
sam local start-api
I then open a browser and navigate to http://localhost:3000/hello where I'm presented with the following:
PS C:\Users\user_name\Documents\Workspace\messaround\aws-sam\sam-app> sam local start-api
2019-01-04 10:39:15 Found credentials in shared credentials file: ~/.aws/credentials
2019-01-04 10:39:15 Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET]
2019-01-04 10:39:15 You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2019-01-04 10:39:16 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2019-01-04 10:40:10 Invoking HelloWorld::HelloWorld.Function::FunctionHandler (dotnetcore2.1)
2019-01-04 10:40:10 Decompressing C:\Users\user_name\Documents\Workspace\messaround\aws-sam\sam-app\artifacts\HelloWorld.zip
Fetching lambci/lambda:dotnetcore2.1 Docker container image......
2019-01-04 10:40:13 Mounting C:\Users\user_name\AppData\Local\Temp\tmpq0zka7a7 as /var/task:ro inside runtime container
2019-01-04 10:40:14 Exception on /hello [GET]
Traceback (most recent call last):
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 246, in _raise_for_status
response.raise_for_status()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\requests\models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localnpipe/v1.35/containers/102dda11417068e01873242be2383c78c7ad4e2739fd4f8b42c1e0ea494d2bbb/start
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\_compat.py", line 35, in reraise
raise value
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\apigw\local_apigw_service.py", line 153, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream_writer, stderr=self.stderr)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\commands\local\lib\local_lambda.py", line 85, in invoke
self.local_runtime.invoke(config, event, debug_context=self.debug_context, stdout=stdout, stderr=stderr)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\lambdafn\runtime.py", line 86, in invoke
self._container_manager.run(container)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\manager.py", line 98, in run
container.start(input_data=input_data)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\container.py", line 187, in start
real_container.start()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\models\containers.py", line 390, in start
return self.client.api.start(self.id, **kwargs)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\utils\decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\api\container.py", line 1075, in start
self._raise_for_status(res)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 248, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("error while creating mount source path '/host_mnt/c/Users/user_name/AppData/Local/Temp/tmpq0zka7a7': mkdir /host_mnt/c/Users/user_name/AppData: permission denied")
2019-01-04 10:40:14 127.0.0.1 - - [04/Jan/2019 10:40:14] "GET /hello HTTP/1.1" 502 -
2019-01-04 10:40:14 127.0.0.1 - - [04/Jan/2019 10:40:14] "GET /favicon.ico HTTP/1.1" 403 -
What I've tried
Resetting the shared drive credentials
Initially I thought this was a permissions error between my Windows drive and the VM running Docker... After searching the Docker forums I found this article, which I've followed. However, this doesn't seem to have changed the error message.
Any suggestions would be greatly received. Thanks
Here's how I fixed my problem:
When SAM CLI sees a zip, it unzips it into a temp directory (which looks to be C:/Users/user_name/AppData/Local/Temp/tmpq0zka7a7 in your case).
Docker must have access to that folder.
In my case, I had created a local user to give Docker access to shared drives, and that local user didn't have access to C:/Users/user_name.
I gave it access and got my problem sorted. Maybe you can fix it the same way.
Try to run the following:
docker run --rm -v c:/Users/user_name:/data alpine ls /data
It should list c:/Users/user_name content if all is fine.
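If that listing fails with a permission error, granting the shared-drives account read access to your profile directory should sort it out. A sketch (the account name DockerUser is a placeholder; substitute whichever local user you configured for Docker's shared drives):
# Grant the Docker shared-drives account recursive read/traverse access
icacls "C:\Users\user_name" /grant "DockerUser:(OI)(CI)RX" /T
After changing permissions, re-run the docker run test above before retrying sam local start-api.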
Good luck!

Install CDH 6.0.1: trouble with installing cm-agent

Following the Cloudera install doc step by step, I have run into trouble at the Install Agents step.
It said the install failed and that it cannot receive a signal.
And I found the following in the log:
[13/Nov/2018 16:44:19 +0000] 4306 MainThread agent ERROR Heartbeating to ryze-1.bigdata.com:7182 failed.
Traceback (most recent call last):
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/agent.py", line 1371, in _send_heartbeat
response = self.requestor.request('heartbeat', heartbeat_data)
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 141, in request
return self.issue_request(call_request, message_name, request_datum)
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 254, in issue_request
call_response = self.transceiver.transceive(call_request)
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 483, in transceive
result = self.read_framed_message()
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 489, in read_framed_message
framed_message = response_reader.read_framed_message()
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 417, in read_framed_message
raise ConnectionClosedException("Reader read 0 bytes.")
ConnectionClosedException: Reader read 0 bytes.
I tried to solve it with Google and have already checked the following settings:
In /etc/cloudera-scm-agent/config.ini, the port is set to 7182 and server_host is set to ryze-1.bigdata.com.
iptables is already shut down with sudo service iptables stop.
ryze-1.bigdata.com is reachable, and telnet ryze-1.bigdata.com 7183 succeeds.
OS: Centos7.4
Platform: AliCloud
So what can I do? Can anyone help me?
I disabled the SSL option.
Everything is fine now.
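For reference, the agent-side switch is use_tls in /etc/cloudera-scm-agent/config.ini. A sketch of what disabling it looks like (this assumes you also disable the corresponding TLS-for-agents option in Cloudera Manager; running without TLS is only reasonable on a trusted network):
# /etc/cloudera-scm-agent/config.ini
[Security]
use_tls=0
Then restart the agent (sudo systemctl restart cloudera-scm-agent) so the change takes effect.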

Salt master not able to connect to gitfs remote

I am trying to configure a remote GitHub repo as the Salt server root, but authentication with the pub/priv keypair keeps failing. I have given the location of the keys in the /etc/salt/master file as well.
Below are the logs I am getting:
2018-11-05 01:48:32,197 [salt.utils.gitfs :1574][ERROR ][21391] Error occurred fetching gitfs remote 'git@[github-endpoint].git': failed to start SSH session: Unable to exchange encryption keys
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/utils/gitfs.py", line 1552, in _fetch
fetch_results = origin.fetch(**fetch_kwargs)
File "/usr/lib64/python2.7/site-packages/pygit2/remote.py", line 405, in fetch
File "/usr/lib64/python2.7/site-packages/pygit2/errors.py", line 64, in check_error
GitError: failed to start SSH session: Unable to exchange encryption keys
I have checked the keypair and the connection to the GitHub endpoint.
I am able to sync the repo manually on the server.
I ran into the same issue and finally solved it with the following steps:
I created a new SSH key: ssh-keygen -f gitfs_ssh -C 'test@example.com'
Then, I read that an empty line at the end of the private key could be fatal for libssh2, so I removed the empty lines at the bottom of the file (added by ssh-keygen at creation time) and then the new key began to work.
More info in this link
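For completeness, the relevant master-side settings end up looking roughly like this (a sketch only; the repo URL and key paths are placeholders, and pygit2 is assumed as the provider since it appears in the traceback above):
# /etc/salt/master (sketch)
fileserver_backend:
  - gitfs
gitfs_provider: pygit2
gitfs_remotes:
  - git@github.com:your-org/your-repo.git
gitfs_privkey: /root/.ssh/gitfs_ssh
gitfs_pubkey: /root/.ssh/gitfs_ssh.pub
Restart the salt-master service after editing so the new key is picked up.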

Unable to resolve chef host while bringing vagrant machine up with libvirt provider

I have a simple Vagrantfile:
Vagrant.configure(2) do |config|
  config.omnibus.chef_version = '12.9.38'
  config.vm.network "private_network", type: "dhcp"
  config.vm.boot_timeout = 60
  config.vm.define "node0" do |node0|
    node0.vm.box = "baremettle/ubuntu-14.04"
    node0.vm.hostname = "node0"
    node0.vm.synced_folder "./", "/vagrant", type: "rsync"
    node0.vm.provider :libvirt do |qemu|
      qemu.driver = "kvm"
      qemu.memory = 1024
    end
  end
end
And when I try to bring the machine up I get the following:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
sh install.sh -v 12.9.38 2>&1
Stdout from the command:
ubuntu 14.04 x86_64
Getting information for chef stable 12.9.38 for ubuntu...
downloading https://omnitruck-direct.chef.io/stable/chef/metadata?v=12.9.38&p=ubuntu&pv=14.04&m=x86_64
to file /tmp/install.sh.1550/metadata.txt
trying wget...
trying perl...
trying python...
Unable to retrieve a valid package!
Version: 12.9.38
Please file a Bug Report at https://github.com/chef/omnitruck/issues/new
Alternatively, feel free to open a Support Ticket at https://www.chef.io/support/tickets
More Chef support resources can be found at https://www.chef.io/support
Please include as many details about the problem as possible i.e., how to reproduce
the problem (if possible), type of the Operating System and its version, etc.,
and any other relevant details that might help us with troubleshooting.
Metadata URL: https://omnitruck-direct.chef.io/stable/chef/metadata?v=12.9.38&p=ubuntu&pv=14.04&m=x86_64
DEBUG OUTPUT FOLLOWS:
STDERR from wget:
--2016-06-13 15:54:03-- https://omnitruck-direct.chef.io/stable/chef/metadata?v=12.9.38&p=ubuntu&pv=14.04&m=x86_64
Resolving omnitruck-direct.chef.io (omnitruck-direct.chef.io)... failed: Name or service not known.
wget: unable to resolve host address ‘omnitruck-direct.chef.io’
STDERR from perl:
Can't locate LWP/Simple.pm in @INC (you may need to install the LWP::Simple module) (@INC contains: /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.
STDERR from python:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1222, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno -2] Name or service not known>
Stderr from the command:
After that the machine is running but Chef is not installed. And if I ssh into it and try to ping, for example, google.com, I get:
vagrant#node0:~$ ping google.com
ping: unknown host google.com
But on the host machine ping works as expected, without problems.
I'm using the default libvirt network:
<network>
  <name>default</name>
  <uuid>bd07c4da-891b-4e37-b1d0-16fabb6581c2</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:79:b9:3b'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
Vagrant version is 1.8.1
Virsh version is 1.2.2
Vagrant plugins installed:
vagrant-libvirt (0.0.33)
vagrant-omnibus (1.4.1)
UPDATE:
Adding the following to the host's (or the guest's) /etc/resolv.conf
nameserver 8.8.8.8
seems to solve the issue.
But I've never had that problem with VirtualBox. Could it be that I missed something in the libvirt or Vagrant configuration?
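For reference, one way to make that DNS fix persist inside the guest instead of hand-editing /etc/resolv.conf (a sketch, not part of the original resolution; it assumes the guest uses dhclient, as a stock Ubuntu 14.04 box does, and eth0 should be replaced with the actual interface name):
# Pin the resolver via dhclient so DHCP lease renewals don't overwrite it
echo 'supersede domain-name-servers 8.8.8.8;' | sudo tee -a /etc/dhcp/dhclient.conf
sudo dhclient -r eth0 && sudo dhclient eth0   # release and renew the lease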

Error when starting a secure public server for a notebook - IPython 2.2 and tornado 4.0.2 (Debian)

I created a new profile and set it up to be publicly accessible over HTTPS, as described in the IPython documentation.
Below are the steps I followed.
Generated a hashed password:
In [1]: from IPython.lib import passwd
In [2]: passwd()
Enter password:
Verify password:
Out[2]: 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'
Created a certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
and created a new profile:
ipython profile create publicServer
Then I edited the ipython_notebook_config.py file in ~/.ipython/profile_publicServer/:
c = get_config()
# Kernel config
c.IPKernelApp.pylab = 'inline' # if you want plotting support always
# Notebook config
c.NotebookApp.certfile = u'/absolute/path/to/your/certificate/mycert.pem'
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = u'sha1:bcd259ccf...[your hashed password here]'
# It is a good idea to put it on a known, fixed port
c.NotebookApp.port = 9999
Then I executed ipython from a terminal to start the notebook using the created profile:
ipython notebook --profile=publicServer
When I try to access it using a browser, from any IP (including localhost):
https://localhost:9999
the browser hangs and never loads the page.
On the terminal I get the following error message:
ERROR:tornado.application:Exception in callback (<socket._socketobject object at 0x7f76ba974980>, <function null_wrapper at 0x7f76ba918848>)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 833, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 201, in accept_handler
callback(connection, address)
File "/usr/local/lib/python2.7/dist-packages/tornado/tcpserver.py", line 225, in _handle_connection
do_handshake_on_connect=False)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 434, in ssl_wrap_socket
context = ssl_options_to_context(ssl_options)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 411, in ssl_options_to_context
context.load_cert_chain(ssl_options['certfile'], ssl_options.get('keyfile', None))
TypeError: coercing to Unicode: need string or buffer, NoneType found
Could anybody help me fix this issue?
Cheers
I ran into this problem with a customer. It looks like the Tornado library updated how it does things, and now needs to be explicitly told that the certificate and key generated by openssl live in the same file.
Here is what you need: in ~/.ipython/profile_{yourprofile}/ipython_notebook_config.py, add the line
c.NotebookApp.keyfile = u'/absolute/path/to/your/certificate/mycert.pem'
Essentially, copy the certfile line and replace certfile with keyfile.
See: Running the Notebook Server, specifically the section "Using SSL/HTTPS".
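An alternative that sidesteps the ambiguity is to generate the key and the certificate as separate files and point the two settings at them individually (a sketch using the same openssl invocation style as above; the paths are placeholders):
# Generate a separate key and self-signed certificate
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mykey.key -out mycert.pem
# ipython_notebook_config.py
c.NotebookApp.certfile = u'/absolute/path/to/mycert.pem'
c.NotebookApp.keyfile = u'/absolute/path/to/mykey.key'
Either way, the point is the same: Tornado's SSL context needs an explicit key, whether it lives in the cert file or in its own file.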
