How do I get around 'invalid certificate' - GitAhead

I am trying to clone a GitHub repository to my disk with GitAhead. In the left sidebar, I can see the remote repository. When I double click, GitAhead takes me to the "Remote repository URL" dialog, auto-filling the fields correctly. In the next dialog, "Repository Location", I choose the local directory, and click "Clone". I get this message:
Failed to clone into '/home/[myname]/Development/[myproject]' - invalid certificate
What am I doing wrong?
[OS: Linux, openSUSE LEAP 15.2]

I also got that error for a self-created Git repository (a Gitea server). Repository access is set up via HTTPS with a self-created certificate (generated via the gitea command). Such a self-signed certificate might also be your issue, because when I changed https to http, everything worked fine.
I found a solution by reading the source code. Follow these instructions:
Open GitAhead
Open Menu Tools - Options
Open "Edit Config File..."
Add the section "[http]"
Below, add the item "sslVerify = false"
The file content from my installed GitAhead looks like this (part of the file):
[http]
sslVerify = false
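Since GitAhead's config file appears to be a standard Git config, the same setting can also be applied from the command line, and it can be scoped to the one self-signed host instead of disabling verification globally. A minimal sketch, where gitea.example.com is a placeholder for your server:
# Disable TLS verification only for the self-signed Gitea host
git config --global http.https://gitea.example.com/.sslVerify false
# Safer alternative: point Git at the self-signed certificate instead
git config --global http.sslCAInfo /path/to/your/cert.pem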

Related

WebLogic 12c sending logs to syslog

I want to send my WebLogic logs to syslog. Here is what I have done so far.
1. Included the following log4j.properties in the managed server classpath:
log4j.rootLogger=DEBUG,syslog
log4j.appender.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.syslog.Threshold=DEBUG
log4j.appender.syslog.Facility=LOCAL7
log4j.appender.syslog.FacilityPrinting=false
log4j.appender.syslog.Header=true
log4j.appender.syslog.SyslogHost=localhost
log4j.appender.syslog.layout=org.apache.log4j.PatternLayout
log4j.appender.syslog.layout.ConversionPattern=[%p] %c:%L - %m%n
2. Added the following to the managed server arguments:
-Dlog4j.configuration=file :<path to log4j properties file> -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger -Dweblogic.log.Log4jLoggingEnabled=true
3. Added wllog4j.jar and log4j-1.2.14.jar into the domain's lib folder.
4. Then, from the Admin Console, changed the logging implementation by doing the following: "my_domain_name" ---> Configuration ---> Logging ---> (Advanced options) ---> Logging implementation: Log4J
Restart managed server.
I used this as a reference, but didn't get anything in syslog (/var/log/messages). What am I doing wrong?
I would recommend a couple of items to check:
Remove the space in "DEBUG, syslog" in the properties file (it should be DEBUG,syslog).
Your last two server arguments have a space between the - and the D, so make sure that wasn't just a copy-and-paste error in this post.
Double-check that the log4j properties file is actually on the classpath.
Double-check from a ps command that the -D options made it correctly into the start command that was executed (see the example after this list).
Make sure that the managed server has a copy of the JARs, as they get synchronized from the admin server during the restart.
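As a quick way to do the ps check mentioned above, you can pull the -D options out of the running server's command line. A sketch, assuming the managed server runs as a java process on Linux:
# List only the -D system properties of the running java process
ps -ef | grep '[j]ava' | tr ' ' '\n' | grep '^-D'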
Hopefully something in there will help or give an idea of what to look for.
--John
I figured out the problem. My appender was working fine; the problem was in rsyslog.conf. I just uncommented the following properties:
# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
We were appending the messages, but the listener was absent, so rsyslog didn't know what to do with them.
The line *.debug;mail.none;authpriv.none;cron.none /var/log/messages tells rsyslog where to redirect any information at debug level or above (excluding mail, authpriv and cron): the /var/log/messages file.
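Putting the two pieces together, the relevant part of /etc/rsyslog.conf after the fix looks like this (reconstructed from the lines above; restart rsyslog after editing):
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Route debug-and-above messages (excluding mail, authpriv and cron) to /var/log/messages
*.debug;mail.none;authpriv.none;cron.none    /var/log/messages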

Open files in PHPStorm with Vagrant+Symfony application

I know you can open files from the Symfony profiler or exception file links using this in project/app/config.yml:
framework:
ide: "phpstorm://open?file=%%f&line=%%l"
More info: http://developer.happyr.com/open-files-in-phpstorm-from-you-symfony-application
However, as I'm using Vagrant, the file path on the server doesn't match my host.
I have created a PHP web application server in PhpStorm with the proper path mappings, but it still doesn't work.
Any ideas?
Thanks
When running your app in a container or in a virtual machine, you can tell Symfony to map files from the guest to the host by changing their prefix. This map should be specified at the end of the URL template, using & and > as guest-to-host separators:
// /path/to/guest/.../file will be opened
// as /path/to/host/.../file on the host
'phpstorm://%f:%l&/path/to/guest/>/path/to/host/&/foo/>/bar/&...'
Symfony FrameworkBundle Configuration - IDE
The answer given by Jeffry no longer works, unfortunately. When I configure that with my paths, the profiler throws:
ParameterNotFoundException
You have requested a non-existent parameter "f:".
I have configured the path according to this line in the Symfony docs: "This map should be specified at the end of the URL template", which results in this:
phpstorm://open?url=file://%%f&line=%%l&/path/to/guest/>/path/to/host/
However, while it does open PhpStorm, PhpStorm does not open the file, so I'm a bit stuck here now.
This solves the issue with the file not opening in PhpStorm from a Vagrant box:
phpstorm://open?file=%%f&line=%%l&/path/to/guest/>/path/to/host/
Source: https://youtrack.jetbrains.com/issue/IDEA-65879
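Putting it together in app/config.yml, with placeholder paths (assuming the Vagrant guest mounts the project at /vagrant and the host checkout lives at /home/me/project), and keeping the doubled %% that Symfony requires to escape literal percent signs in parameters:
framework:
    ide: "phpstorm://open?file=%%f&line=%%l&/vagrant/>/home/me/project/"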

Artifactory has lost track of local artifacts

I'm using Artifactory OSS 4.1.0 and Java 1.8.0_51.
When I try to download one of my local artifacts from the Artifactory web interface, I get this:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'bab1c4e18f6c5edfb65b2503a388dea2fed0deb8'"
} ]
}
But I found this file in my Artifactory data area: ./files/ba/bab1c4e18f6c5edfb65b2503a388dea2fed0deb8, and upon further inspection it is the WAR file I tried to download.
I've come across other people on the web with the same error message, but their issue was with caching external artifacts, and their workaround was to delete the cache.
Does anyone have an idea what's going on and how I can fix the problem? BTW, I did stop and restart our Artifactory server, but with no noticeable difference.
Artifactory doesn't store the binaries under the ./files directory, but under $ARTIFACTORY_HOME/data/filestore.
It looks like you had a symbolic link from the files directory to the filestore directory and this link was deleted.
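One quick way to check that theory from a shell, assuming a default $ARTIFACTORY_HOME layout (a sketch for diagnosis, not a fix):
# A healthy setup would list the link target, e.g. "files -> filestore"
ls -ld "$ARTIFACTORY_HOME/data/files"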

berks-api will not run on ubuntu in azure - get Permission denied # rb_sysopen - /etc/chef/client.pem

As part of our Chef infrastructure I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, I have bootstrapped it, and it appears as a node in my chef-server.
I have followed the instructions at GitHub - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have been run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears as if the berks-api service isn't able to run. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied # rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need to explain the setup any more let me know.
It turns out I misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use the new key. So my config for the berks-api now looks like this:
{
  "home_path": "/etc/berkshelf/api-server",
  "endpoints": [
    {
      "type": "chef_server",
      "options": {
        "url": "https://api.opscode.com/organizations/my-organization",
        "client_key": "/etc/berkshelf/api-server/berkshelf.pem",
        "client_name": "berkshelf"
      }
    }
  ],
  "build_interval": 5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory, then copy it from within Linux.
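That two-step copy can look like this (a sketch; the host and user names are placeholders):
scp berkshelf.pem azureuser@my-berks-host:~
# then, on the server:
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem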
Having done this, the service starts and works perfectly.

Web setup project fails to install dynamic-data site: "the installer was interrupted"

The last phase of the installer fails with this message:
Installation Incomplete
The installer was interrupted before [project] could be installed. You need to restart the installer to try again.
Running msiexec /i installer.msi /l*vx setup.log shows the following entries in the setup log:
INFO : [...] [ApplyWebFolderProperties]: Getting web folder property token...
INFO : [...] [ApplyWebFolderProperties]: Token is '/LM/W3SVC/1/ROOT/ProjectDir/DynamicData/Filters'.
INFO : [...] [ApplyWebFolderProperties]: Getting METADATA_HANDLE for the directory '/LM/W3SVC/1/ROOT/ProjectDir/DynamicData/Filters'.
ERROR : [...] [ApplyWebFolderProperties]: FAILED: -2147024893
ERROR : [...] [ApplyWebFolderProperties]: FAILED: -2147024893
ERROR : [...] [ApplyWebFolderProperties]: Custom Action failed with code: '3'
ERROR : [...] [ApplyWebFolderProperties]: Custom Action failed with code: '3'
INFO : [...] [ApplyWebFolderProperties]: Custom Action completed with return code: '3'
The same web application had no problems being installed with a web setup project before. The issue started after upgrading the web application from .NET 3.5 SP1 to .NET 4.0.
This blog entry points out the issue:
Which got me started thinking, I have a subfolder named filters. Changing nothing else but renaming the filters subfolder made it finish properly. I'm assuming you might have the same problems with folders named apppools, info, or 1 as well.
(Emphasis mine)
Unfortunately, Filters is a hard-coded folder name in Dynamic Data. If you look at FilterFactory, there doesn't appear to be any way to override that value, seeing as how the FilterFactory property of MetaModel is not marked virtual. If we can't change the folder name, then we have to look at fixing the installer...
The installer error is being raised by the ApplyWebFolderProperties custom action. That action isn't built into Windows Installer; it's added by the Web Setup Project. That's helpful, because it means we can remove it with WiRunSQL.vbs:
cscript WiRunSQL.vbs installer.msi "DELETE FROM CustomAction WHERE Action='WEBCA_ApplyWebFolderProperties'"
Note that the actual name of ApplyWebFolderProperties is WEBCA_ApplyWebFolderProperties. Seeing as how the action doesn't appear to be documented anywhere, caveat emptor. It doesn't appear to be too terribly important though.
To automate the workaround, you could add the command to the setup project's PostBuildEvent like so:
cscript.exe "$(ProjectDir)..\WiRunSQL.vbs" "$(BuiltOuputPath)" "DELETE FROM CustomAction WHERE Action='WEBCA_ApplyWebFolderProperties'"
If anyone knows a better way to install a folder named Filters, I'd love to hear it.