I have been looking for a place to store some of my XML schemas publicly without actually having to host them. I decided GitHub Pages would be the ideal platform. I was correct except that I cannot figure out how to turn off SSL/TLS. When I try to fetch my pages with plain old HTTP I get a 301 Moved Permanently response.
So obviously this isn't a big deal. Worst case scenario it takes a little longer to download my schemas, and people generally only use schemas that they've already cached anyway. But is there really no way to turn this off?
From the GitHub help:
HTTPS enforcement is required for GitHub Pages sites created after June 15, 2016 and using a github.io domain.
So, you have two solutions:
find a github.io repository older than June 15, 2016
set a custom domain name on your github.io
But is there really no way to turn this off?
No, and a simple curl -L would follow the redirection and get you the page content anyway.
For instance (get an xml file in a tree structure):
vonc@voncvb C:\test
> curl --create-dirs -L -o .repo/local_manifests/local_manifest.xml https://raw.githubusercontent.com/legaCyMod/android_local_manifest/cm-11.0/local_manifest.xml
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 530 100 530 0 0 1615 0 --:--:-- --:--:-- --:--:-- 1743
vonc@voncvb C:\test
> tree /F .
C:\TEST
└───.repo
└───local_manifests
local_manifest.xml
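To see the enforced redirect the question mentions, a plain HEAD request over HTTP shows the 301 pointing at the HTTPS URL (hypothetical username and path, for illustration):
# plain HTTP: GitHub Pages answers with 301 Moved Permanently and an https:// Location header
curl -I http://username.github.io/schemas/example.xsd
# follow the redirect and fetch the content anyway
curl -L http://username.github.io/schemas/example.xsd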
I've been having the strangest problem for days now. I took over a WordPress website for a company that was originally developed by someone else. The codebase is a mess, but I was able to go over it and make sure it at least works.
The database is huge (70 MB) and there are a lot of plugin dependencies on the site.
However, the site now generally works without issues, and I'm hosting it on an EC2 instance with a Bitnami stack for WordPress.
The weird thing, though, is that every day (this morning, for instance) I check the site and it's down …
Service Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Additionally, a 503 Service Unavailable error was encountered while trying to use an ErrorDocument to handle the request.
When logging into the server with SSH and trying to restart Apache, I get this:
Failed to unmonitor apache: write /var/lib/gonit/state: no space left on device
Syntax OK
/opt/bitnami/apache2/scripts/ctl.sh : apache not running
Syntax OK
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
/opt/bitnami/apache2/scripts/ctl.sh : httpd could not be started
Failed to monitor apache: write /var/lib/gonit/state: no space left on device
This has happened for the 3rd time in 3 days now, even though I restored the server from a snapshot onto a 200 GB volume (for testing purposes), and all site files including uploads only take up 5 GB.
The site is now running on an EC2 instance (t2.medium) with a 200 GB volume, and this morning I can't restart Apache. Yesterday evening, when I restored from the snapshot, the site worked well and normally; it was actually even fast.
I don't know where to start investigating here. What could cause the server to run out of disk space in one night?
Thanks,
Matt
Also, one of the weirdest things: I reset everything yesterday evening from an EC2 snapshot onto a 200 GB volume and attached it to the instance. Everything was working fine. I made some changes to the files, deleted some plugins, and updated some settings.
And it seems this is all gone now. And I'm using an Elastic IP, so I couldn't have connected to the wrong instance or something.
Bitnami Engineer here. You will probably need to resize the disk of your instance, but you can investigate those issues later. These commands will show which directories hold a large number of files and how much space they use:
cd /opt/bitnami
# count files per top-level directory under /opt/bitnami
sudo find . -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
# show disk usage of each top-level directory
du -h -d 1
If MySQL is the service that is taking up the most space, you can try adding this line under the [mysqld] block of the /opt/bitnami/mysql/my.cnf configuration file:
expire_logs_days = 7
That will force MySQL to purge binary logs older than 7 days. You will need to restart MySQL after that:
sudo /opt/bitnami/ctlscript.sh restart mysql
More information here:
https://community.bitnami.com/t/something-taking-up-space-and-growing/64532/7
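To confirm whether the MySQL binary logs are what is filling the disk before changing the configuration, you can check their size directly. The data directory below is the usual Bitnami location and may differ on your stack:
# total size of the MySQL data directory
sudo du -sh /opt/bitnami/mysql/data
# list the largest files in it (binary logs usually show up as mysql-bin.NNNNNN)
sudo ls -lhS /opt/bitnami/mysql/data | head -n 20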
What you need to do is increase the size of the partition on the disk and the size of the file system on that partition. Even though you increased the volume size, these figures stayed unchanged. Creating another volume from a snapshot would not help either.
Check how to do it here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
Your df result shows
Filesystem     1K-blocks     Used Available Use% Mounted on
udev             2014560        0   2014560   0% /dev
tmpfs             404496     5872    398624   2% /run
/dev/xvda1      20263528 20119048    128096 100% /
tmpfs            2022464        0   2022464   0% /dev/shm
tmpfs               5120        0      5120   0% /run/lock
tmpfs            2022464        0   2022464   0% /sys/fs/cgroup
/dev/loop0         18432    18432         0 100% /snap/amazon-ssm-agent/1480
/dev/loop1         91264    91264         0 100% /snap/core/7713
/dev/loop2         12928    12928         0 100% /snap/amazon-ssm-agent/295
/dev/loop3         91264    91264         0 100% /snap/core/7917
tmpfs             404496        0    404496   0% /run/user/1000
where the root volume /dev/xvda1 has only 20 GB and is shown as 100% used, not 200 GB as you mentioned.
When you increase the volume size while the instance is running, the change is not automatically applied. On your EC2 instance, you have to apply the volume change as follows:
sudo resize2fs /dev/xvda1
and check the size of the volume with df -h; you will then see that the size is now 200 GB.
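If resize2fs reports that there is nothing to do, the partition itself may still be the old size and needs to be grown first. A minimal sketch, assuming the root device is /dev/xvda with partition 1 as in the df output above:
# grow partition 1 of /dev/xvda to fill the enlarged EBS volume (growpart comes with cloud-utils)
sudo growpart /dev/xvda 1
# then grow the ext4 file system on that partition
sudo resize2fs /dev/xvda1
# verify the new size
df -h /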
Atom is able to open a project and show the whole tree of the project on the left side, which is a really nice feature.
Now I'm using SSH on the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) on VirtualBox. Is there a way for Atom, located on the host OS, to open a project located on RHEL?
Well yes there is!
You just need to configure sshfs, optionally combined with autofs. Then you can access the files as if they were stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to set up SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
Additionally to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
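For example, with a hypothetical user, host and path, the whole workflow looks like this:
# create a mount point under /media so it shows up as a disk in the file browser
$ sudo mkdir -p /media/sshfs
$ sudo chown $USER /media/sshfs
# mount the remote project directory
$ sshfs devuser@rhel-vm:/home/devuser/project /media/sshfs
# open it in Atom like any local folder
$ atom /media/sshfs
# unmount when done
$ fusermount -u /media/sshfs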
I would recommend the remote-sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file, and also to define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
Complement to Remco's sshfs answer above:
If you use different users on the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote host to the UID/GID of the connecting user on the local host.
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost):
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 Apr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 Apr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 Apr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 Apr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details see man sshfs
I am writing this answer because none of the other answers worked for me.
Mounting the project as a directory and browsing it with Atom (@Remco Haszing's answer) was a brilliant idea, but in my case Atom wants to index the whole remote project, which is a heavy one, and it stops responding.
The remote-sync package is good when you work locally and then want to upload the files to the server.
Actually, remote-edit is the package meant to do this job (editing files remotely over SSH).
The problem with it is that it has been abandoned.
These helped me as replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor
I am trying to use the following Nmap scripts, http-wordpress-enum.nse and http-wordpress-plugins.nse, to scan a WordPress website.
To access this WordPress website, you have to go to the following link: http://192.168.0.1/wp/
I am having trouble running these Nmap scripts against that host. When you do
nmap -p80 --script http-wordpress-plugins.nse 192.168.0.1
no result is returned, even though I know there are plugins installed. Is that because the web address Nmap scanned is http://192.168.0.1 rather than http://192.168.0.1/wp/, so Nmap just sees that there is no actual WordPress website there and terminates the scan? Does anyone have a suggestion for how to fix this?
Thank you in advance
You should use the http-wordpress-plugins.root script argument to specify your "/wp/" path. In your case, something like:
nmap -p80 --script http-wordpress-plugins.nse --script-args http-wordpress-plugins.root="/wp/" 192.168.0.1
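If the plugin you are looking for is not among the 100 most popular ones the script checks by default, you can also raise the search depth with the search argument (documented in the script excerpt below), for example:
nmap -p80 --script http-wordpress-plugins.nse --script-args 'http-wordpress-plugins.root="/wp/",http-wordpress-plugins.search=500' 192.168.0.1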
Quoting the source code of the http-wordpress-plugins.nse script (/usr/share/nmap/scripts/http-wordpress-plugins.nse):
description = [[
Tries to obtain a list of installed WordPress plugins by brute force
testing for known plugins.
The script will brute force the /wp-content/plugins/ folder with a dictionary
of 14K (and counting) known WP plugins. Anything but a 404 means that a given
plugin directory probably exists, so the plugin probably also does.
The available plugins for Wordpress is huge and despite the efforts of Nmap to
parallelize the queries, a whole search could take an hour or so. That's why
the plugin list is sorted by popularity and by default the script will only
check the first 100 ones. Users can tweak this with an option (see below).
]]
---
-- @args http-wordpress-plugins.root If set, points to the blog root directory on the website. If not, the script will try to find a WP directory installation or fall back to root.
-- @args http-wordpress-plugins.search As the plugins list contains tens of thousand of plugins, this script will only search the 100 most popular ones by default.
-- Use this option with a number or "all" as an argument for a more comprehensive brute force.
--
-- @usage
-- nmap --script=http-wordpress-plugins --script-args http-wordpress-plugins.root="/blog/",http-wordpress-plugins.search=500 <targets>
--
-- @output
-- Interesting ports on my.woot.blog (123.123.123.123):
-- PORT STATE SERVICE REASON
-- 80/tcp open http syn-ack
-- | http-wordpress-plugins:
-- | search amongst the 500 most popular plugins
-- | akismet
-- | wp-db-backup
-- | all-in-one-seo-pack
-- | stats
-- |_ wp-to-twitter
Be warned, though, that Nmap does its best using a mix of heuristic methods, known vulnerabilities and brute force. A negative result does not mean that "something is not there, 100% sure". It just means that "Nmap could not find it", possibly because the host is well protected (e.g. the service is wisely configured, a firewall, an IDS...).
I have a problem getting IIS to allow a file download with Bit Rate Throttling. I set the file limit to 100 KB/s. There is no problem without the bitrate limitation, but with the limit I have a problem.
I'm using a code similar to described in this article:
Securing Large Downloads Using C# and IIS 7
I also tried switching off IIS Bit Rate Throttling and controlling the bitrate "by hand", calculating the bitrate with TimeSpan and using Thread.Sleep(10) in a while loop...
But all my attempts were useless, and I don't get any exceptions.
To test the download I use wget, this way:
wget -t 1 http://db.realestate.ru/yrl/RealEstateExportToYandex.xml
(you can try it with wget for Windows)
This is a 240 MB text file; wget always stops at a random position of the download, between 5% and 60%, and throws this error message:
Read error at byte ... (Connection reset by peer).
Maybe the problem is not with IIS, because on my localhost it works well, just not online on a highly loaded server.
Solved with these parameters specified in the wget command:
wget -t 1 --header="Keep-Alive: 30000" -nv http://db.realestate.ru/yrl/RealEstateExportToYandex.xml
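If the connection still gets reset partway through, it may also help to let wget retry and resume the partial download instead of starting over (standard wget options, unrelated to the server itself):
wget -c -t 10 --retry-connrefused --header="Keep-Alive: 30000" -nv http://db.realestate.ru/yrl/RealEstateExportToYandex.xml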
I'm running WordPress with: Nginx + PHP-FPM + APC + W3 Total Cache + PageSpeed.
After 3 days of researching and configuring, I finally got it all set up.
Running "top" and hitting some cached pages, it shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13387 nginx 20 0 472m 11m 4664 S 12.3 2.0 0:46.55 nginx
17577 nginx 20 0 443m 47m 29m S 0.7 8.0 0:42.88 php-fpm
17591 nginx 20 0 438m 43m 29m S 0.7 7.2 0:42.59 php-fpm
1486 mysql 20 0 851m 21m 4832 S 0.3 3.7 1:24.71 mysqld
17907 nginx 20 0 438m 48m 34m S 0.3 8.1 0:36.78 php-fpm
18065 nginx 20 0 442m 47m 29m S 0.3 8.0 0:33.49 php-fpm
18543 nginx 20 0 445m 63m 42m S 0.3 10.6 0:22.94 php-fpm
21125 root 20 0 15012 1148 868 R 0.3 0.2 0:00.86 top
1 root 20 0 19356 1388 1136 S 0.0 0.2 0:00.74 init
1) Why is every request being processed by PHP-FPM? Isn't W3 Total Cache supposed to prevent requests from being processed by PHP-FPM?
I know that my pages are being cached because every page returns this at the end of the HTML:
<!-- Performance optimized by W3 Total Cache. Learn more: http://www.w3-edge.com/wordpress-plugins/
Page Caching using disk: enhanced
2) If I install Varnish in front of Nginx, will it stop requests from being processed by PHP-FPM? (Will performance increase? I'm using a Micro EC2 with 613 MB of RAM.)
PS: The response header returned from the server is "Cache-Control: max-age=0, no-cache". I don't know if this influences W3 caching.
My specs:
Amazon Micro EC2
Linux version 3.4.48-45.46.amzn1.x86_64 Red Hat 4.6.3-2 (I think it's based on CentOS 5)
PHP 5.3.26 (fpm-fcgi)
I'm not aware of how W3 Total Cache works internally, but let me state some facts.
First of all, at the Nginx level, any PHP page must hit the PHP engine, because that's probably what your try_files directive tells Nginx to do. If W3 Total Cache keeps some sort of HTML cache of the pages, then without changes to the Nginx config you will still hit PHP even if the cache exists. And if the cache is not really in the form of HTML files, then the PHP engine probably checks for the existence of the cached page and decides whether to rebuild it or serve the cached copy instead. So you definitely need PHP to run; the difference is that it won't hit the database and won't do any processing, it will just serve the cached page.
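As an illustration, here is a minimal, hypothetical sketch of the kind of Nginx rule that serves W3 Total Cache's "disk: enhanced" page cache directly, assuming the default cache path under wp-content/cache/page_enhanced (real setups also need checks for cookies, query strings and request method):
# serve the cached page file if W3TC has written one, otherwise fall back to WordPress
location / {
    try_files /wp-content/cache/page_enhanced/$host/$request_uri/_index.html $uri $uri/ /index.php?$args;
}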
Second question, Varnish: yes, Varnish would actually be good. It would spare you the need for a cache plugin, but then you need to make sure that WordPress knows when to ask Varnish to purge cached pages. The structure of the server would be user -> varnish -> nginx -> php. If Varnish has a cached page or assets (CSS, JS, etc.), it serves them directly without passing the request to Nginx. I've tried it on a website and the response times of a cached page definitely improved; even when I did a Ctrl+F5 to request the whole page without cache, it still returned very fast, as if the page were just a plain HTML page.
You'll still need to mess around with the Varnish config, or at least that's what I did, because it takes a little bit of learning; but so far all I did was some copying and pasting from blogs and it worked just fine for me.
I installed Varnish in front of my server, but again, requests were being processed by PHP-FPM.
The problem was the lack of a slash at the end of the URL.
In WordPress, a page is a directory, so it responds as www.mysite.com/page1/.
The point is that when you hit www.mysite.com/page1 (without the slash), Nginx has to redirect to www.mysite.com/page1/ (with the slash), and that redirect is done by PHP-FPM.
After putting the slash at the end of all the links on my site, the redirect was no longer needed, and my pages were no longer processed by PHP-FPM.
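A quick way to check whether a given URL still triggers that redirect, using the placeholder domain above:
# without the trailing slash: expect a 301 generated by WordPress/PHP pointing at the slashed URL
curl -I http://www.mysite.com/page1
# with the trailing slash: expect a 200 served without touching PHP-FPM
curl -I http://www.mysite.com/page1/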