I'm trying to install a package on an old Fedora 20 virtual machine.
yum install <the_package_name> fails with an HTTP 403 error:
http://download.fedoraproject.org/<...(truncated)...>/repomd.xml:
[Errno 14] HTTP Error 403 - Forbidden
My web browser can't see anything at http://download.fedoraproject.org/pub/fedora/linux/updates/20 either, so I realize FC20 is no longer supported (EOL) and its repository URL has changed. So I fix the baseurl in /etc/yum.repos.d/fedora.repo to look like this:
baseurl=http://archives.fedoraproject.org/<...(truncated)...>
I'm sure the URL is now correct, because I can download repomd.xml with curl or wget and open it in my web browser...
But yum install <the_package_name> still fails with an HTTP 403 error! It can't access repomd.xml at the correct URL:
http://archives.fedoraproject.org/<...(truncated)...>/repomd.xml:
[Errno 14] HTTP Error 403 - Forbidden
Can you help me overcome this issue and install packages on this old Fedora (FC 20)?
Note 1: I'm working from behind a proxy (not my choice).
Note 2: Upgrading my Fedora 20 to Fedora 21 or 22 is not an option either.
Here are the suggestions (from Etan Reisner) that helped me solve the issue (a shell sketch follows this list):
1. Check the proxy configuration in /etc/yum.conf.
2. Check that every .repo file in /etc/yum.repos.d/ uses the up-to-date Fedora repo URL.
3. Run yum clean metadata so yum picks up the updated .repo file contents.
4. Try yum install <the_package> again.
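A minimal shell version of that checklist (the truncated archive path from above stays a placeholder):
# 1. See whether yum is configured to use your proxy
grep -i '^proxy' /etc/yum.conf
# 2. Verify every repo file now points at the archive URL
grep -rn '^baseurl=' /etc/yum.repos.d/
# 3. Throw away cached metadata so the new URLs are actually used
yum clean metadata
# 4. Retry the install
yum install <the_package_name>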
subscription-manager refresh did the trick on a RHEL 7.9 server box.
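In command form (with the metadata cleanup from the checklist above):
# Re-sync entitlement data and repo definitions from the subscription
# service, then drop stale metadata before retrying the install
subscription-manager refresh
yum clean metadata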
I created a VPC endpoint and allowed access to the packages, repo, and amazonlinux resources with this endpoint policy:
{"Version": "03-19-2021",
"Statement": [
{"Sid": "Amazon Linux AMI Repository Access",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [ "arn:aws:s3:::packages.*.amazonaws.com/*", "arn:aws:s3:::repo.*.amazonaws.com/*", "arn:aws:s3:::amazonlinux.*.amazonaws.com/*" ]
}]}
Refer to https://blog.saieva.com/category/aws/
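A sketch of creating such a gateway endpoint with the AWS CLI (region, VPC ID, and route-table ID below are hypothetical placeholders; the policy above is assumed saved as amazonlinux-repo-policy.json):
# Attach an S3 gateway endpoint to the VPC, restricted by the policy above
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0 \
  --policy-document file://amazonlinux-repo-policy.json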
Hi!
Please help. I want to update WordPress (or its plugins), but I always get the same error: 'Download failed: cURL error 7: .'
OS: Linux Fedora 30. Server: nginx. DB: MySQL 8.0. PHP 7.3.
I have installed all the needed PHP extensions. cURL is working: I tested downloading Google's HTML with curl, and info.php says that cURL is enabled.
Why can't WordPress update anything? Nginx's error.log shows no errors. On another laptop (Windows 10) with the same development environment, everything works.
What information should I give you to solve this problem?
- the error on the WordPress site
- cURL info (info.php)
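cURL error 7 means the connection itself failed before any HTTP happened. Since curl works from the shell but not from PHP, one check worth running on Fedora (assuming PHP-FPM runs as the apache user, as it does there by default) is whether SELinux is blocking outbound connections from the web server:
# Reproduce WordPress's update request as the PHP-FPM user
sudo -u apache curl -v https://api.wordpress.org/core/version-check/1.7/
# On Fedora, SELinux denies httpd/PHP-FPM outbound connections by default
getsebool httpd_can_network_connect
# If it reports 'off', enabling it persistently may fix the updates
sudo setsebool -P httpd_can_network_connect on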
I installed nginx from the yum package on RHEL 7. I added my own config as
/etc/nginx/conf.d/my.conf
and deleted the config file shipped with the package
/etc/nginx/conf.d/default.conf
Recently, the nginx package was updated via yum update. Now the default.conf file is present again. I would have expected yum not to touch default config files if they had been changed or deleted.
Is this normal yum behavior? Here is some information about the RHEL version and the nginx package.
root@host: [~]# yum info nginx
Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos
This system is receiving updates from RHN Classic or Red Hat Satellite.
Installed Packages
Name : nginx
Arch : x86_64
Epoch : 1
Version : 1.14.1
Release : 1.el7_4.ngx
Size : 2.6 M
Repo : installed
From repo : nginx
Summary : High performance web server
URL : http://nginx.org/
License : 2-clause BSD-like license
Description : nginx [engine x] is an HTTP and reverse proxy server, as well as
: a mail proxy server.
I upgraded the package from 1.14.0 to version 1.14.1 as shown above.
root@host: [~]# nginx -v
nginx version: nginx/1.14.1
Red Hat version:
root@host: [~]# hostnamectl
Static hostname: host.example.com
Icon name: computer-vm
Chassis: vm
Machine ID: SOME-ID
Boot ID: ANOTHER-ID
Virtualization: vmware
Operating System: Red Hat Enterprise Linux
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server
Kernel: Linux 3.10.0-862.14.4.el7.x86_64
Architecture: x86-64
If I rename my.conf to default.conf, it doesn't get replaced on a yum update.
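That matches how rpm handles config files: edits to files the package marks %config(noreplace) are preserved across upgrades (the new version arrives as .rpmnew), but a config file you deleted outright just looks missing, so the upgraded package lays it down again. You can inspect how the nginx package marks its files:
# List the files the nginx package marks as config files
rpm -qc nginx
# Verify installed files against the rpm database; modified or missing
# config files are reported here
rpm -V nginx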
I have https://packages.cloud.google.com/yum configured as a remote repo in Artifactory.
My repo file on CentOS 7.3 looks like this:
[kubernetes]
name=kubernetes
baseurl=https://artifactory.company.com/artifactory/packages.cloud.google.com-yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
When I run yum install -y kubelet it prints this error:
e7a4403227dd24036f3b0615663a37 FAILED
https://artifactory.company.com/artifactory/packages.cloud.google.com-yum/repos/kubernetes-el7-x86_64/../../pool/e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1-0.x86_64.rpm: [Errno 14] HTTPS Error 500 - Internal Server Error
Trying other mirror.
I am pretty sure the problem is the relative path in the URL: kubernetes-el7-x86_64/../../pool
If I wget the URL, it works fine because wget resolves the relative path before sending the HTTP request; yum does not, and Artifactory returns a 500 when you give it a URL with ../ in it. Does anyone know how to enable relative URLs in Artifactory, or how to get yum to resolve URLs before sending the requests?
I am running these versions:
Artifactory 5.2.0
Yum 3.4.3-150
Update: this is the HTTP response body from Artifactory:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Path element cannot end with a dot: packages.cloud.google.com-yum-cache/repos/kubernetes-el7-x86_64/../"
} ]
}
The remote repository should be set with the following URL in Artifactory:
https://packages.cloud.google.com/yum/
The baseurl on the yum client should point at the folder that contains repodata, like this:
baseurl=http://artifactory.company.com/artifactory/yum-remote/repos/kubernetes-el7-x86_64/
(The name of the remote repository here is 'yum-remote'.)
This should work without any further configuration on the Artifactory side.
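Put together with the repo file from the question, the client side would look like this (repository name 'yum-remote' as above):
[kubernetes]
name=kubernetes
baseurl=http://artifactory.company.com/artifactory/yum-remote/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1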
The error you mentioned regarding the relative path 'kubernetes-el7-x86_64/../../pool' happens while the artifact is being cached.
Artifactory cannot cache to a path that contains the '..' pattern, so the request fails.
It can be solved on the Artifactory side with a user plugin: if the path contains the '..' pattern, the plugin modifies the path where the artifact will be cached so that it no longer includes the pattern.
This is now redundant, as the registry serves paths that do not include '..'.
I have a CentOS server where I have installed the vsftpd service; however, I am getting the error
bash: sftp: command not found
Even the which sftp command can't find it.
Detailed steps below :
As root:
yum install vsftpd
Total download size: 139 k
Is this ok [y/N]: y
Configure:
vi /etc/vsftpd/vsftpd.conf
Change anonymous_enable=YES to anonymous_enable=NO
Add userlist_deny=NO after userlist_enable
Add allowed users:
vi /etc/vsftpd/user_list
Replace the contents with:
# vsftpd userlist
# userlist_deny=NO so only allow users in this file
user
Turn on the vsftpd service:
chkconfig vsftpd on
Start the service
service vsftpd start
Can someone help figure out what I'm doing wrong?
The sftp binary is provided by the openssh-clients package. Install that first:
yum install openssh-clients
then you can run sftp.
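Note that sftp is an SSH file-transfer client: it connects to the server's sshd (port 22), not to vsftpd. For example (user and address are placeholders):
# Open an SFTP session using SSH credentials
sftp user@x.x.x.x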
Assuming the vsftpd daemon is now running and can get through any firewall you have, you need to use an ftp client to connect to the server.
yum install ftp
ftp x.x.x.x <-- IP address of server
That will show that it is working. Remotely, you will need a client such as FileZilla.
I want to call the Companies House API from within R on a remote Linux server running Ubuntu 14.04 LTS.
I am starting R from the terminal, and am using the 'httr' package and making the following GET request (using 'Paul+Dodd' as an example search term):
call <- GET("https://api.companieshouse.gov.uk/search/companies?q=Paul+Dodd&items_per_page=50&start_index=1", authenticate("API_KEY_HERE", ""))
You can apply for a Companies House API key and get more information on the API here:
https://developer.companieshouse.gov.uk/api/docs/index.html
The response from the API call should be a complex list of company information.
However I am getting the following error:
Error in curl::curl_fetch_memory(url, handle = handle) :
SSL connect error
I have tried set_config( config( ssl_verifypeer = 0L ) ), which occasionally gives the correct response object but usually gives either the SSL error above or the following error:
Failure when receiving data from the peer
The above API call works when running R on my Windows desktop (running Windows 7) and on my Mac. The API call works from multiple IP addresses.
I have installed the following dependencies on the remote server, and have updated and upgraded, but the error persists:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install libcurl4-openssl-dev
sudo apt-get install libssl-dev
Finally, when I exit R and run the API call in the remote server's terminal, I get the correct response. The command I used is as follows (the URL is quoted so the shell doesn't treat the & characters as background operators):
curl -u "API_KEY_HERE": "https://api.companieshouse.gov.uk/search/companies?q=Paul+Dodd&items_per_page=50&start_index=1"
I suspect the SSL connect error in R may be an issue with CA certificates, however I am at the limit of my knowledge of SSL. Do you have any suggestions as to how I might fix this so that I can call the API from within R using httr?
After some trial and error, I found that installing the following packages:
jsonlite, mime, curl (≥ 0.9.1), openssl (≥ 0.8), and R6, using:
install.packages("package_name")
solved the issue.
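A one-shot way to install all of them from the shell (the CRAN mirror URL is an assumption):
# Install httr's dependencies, including the curl and openssl bindings
# that handle the TLS side of the request
Rscript -e 'install.packages(c("jsonlite", "mime", "curl", "openssl", "R6"), repos = "https://cloud.r-project.org")'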