I'm trying to host an RSS feed for iTunes and I keep getting a mismatched-tag error.
Ubuntu // Apache
The feed URL is:
http://fourteenthrees.com/podcasts/feed.xml
My code is:
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
<channel>
<title>MUSIC 4 COMMITTING CRIMES</title>
<link>http://www.fourteenthrees.com</link>
<language>en-us</language>
<copyright>℗ & © 2019 Fourteen Threes</copyright>
<itunes:subtitle>Music you listen to while committing crime.</itunes:subtitle>
<itunes:author>Fourteen Threes</itunes:author>
<itunes:summary>The soundtrack to all your crimes</itunes:summary>
<description>Every episode is the soundtrack to a different crime.</description>
<itunes:owner>
<itunes:email>editor#fourteenthrees.com</itunes:email>
</itunes:owner>
<itunes:image href="http://fourteenthrees.com/images/podlogo.jpg"/>
<itunes:category text="Society & Culture" />
<itunes:category text="Arts" />
<itunes:category text="News & Politics" />
<itunes:explicit>yes</itunes:explicit>
<item>
<title>00 - TEST</title>
<itunes:subtitle>DIRTROID:the girl dies</itunes:subtitle>
<itunes:summary><![CDATA[Dirty Harry v Metroid.]]></itunes:summary>
<itunes:image href="http://fourteenthrees.com/images/podlogo.jpg"/>
<enclosure length="8727310" type="audio/x-m4a" url="http://fourteenthrees.com/podcasts/FT-01.mp3”/>
<pubDate>Thu, 12 Sep 2019 16:00:00 PDT</pubDate>
<itunes:duration>22:04</itunes:duration>
<itunes:explicit>yes</itunes:explicit>
</item>
</channel>
</rss>
This is the error from the Apple-recommended feed validator:
Sorry,This feed does not validate. line 29, column 13: XML parsing error: :29:13: not well-formed (invalid token)
Thu, 12 Sep 2019 16:00:00 PDT
I can't see what I did wrong; I could use another set of eyes. Thanks.
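For what it's worth, a quick way to pin down errors like this before uploading (assuming libxml2-utils is installed) is to run the file through xmllint, which reports the exact line and column of the first ill-formed token; curly quotes inside attribute values and unescaped & characters are the usual culprits:
xmllint --noout feed.xml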
Ubuntu // Apache (should've noted that in the first question)
Log in as root
First I deleted the /var/www/html/podcasts/ directory I had made to house the RSS feed.
cd /
sudo apt-get update -y
sudo apt-get install mysql-server -y
sudo /usr/bin/mysql_secure_installation
sudo systemctl enable apache2.service
sudo systemctl enable mysql.service
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.3
sudo systemctl restart apache2.service
Downloaded Podcast Generator
Extracted the zip file to the desktop and placed the contents into a new folder called podcasts/
Copied the podcasts folder into /var/www/html/ over SSH.
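For reference, the copy itself was just a plain scp; the local folder name and hostname here are placeholders:
scp -r ~/Desktop/podcasts user@example.com:/var/www/html/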
chmod 777 /var/www/html/
chmod 777 /var/www/html/podcasts/
chmod 777 /var/www/html/podcasts/media
chmod 777 /var/www/html/podcasts/images
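If you'd rather not leave everything world-writable, a tighter alternative (a sketch, assuming Apache runs as the default www-data user) is to give the tree to the web server user instead:
chown -R www-data:www-data /var/www/html/podcasts
chmod -R 775 /var/www/html/podcasts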
Open a browser and navigate to http://example.com/podcasts/setup
Follow the on-screen GUI instructions, fill out all the categories, and use the FTP lookup option to register your SSH-uploaded file if it is over 2 MB.
Don't be afraid to navigate with the back button in the admin area; it remembers everything you have saved.
Worked like a charm.
Related
I've installed Greenbone Security Assistant Version 9.0.1 (OpenVAS) following these instructions on Ubuntu 20.04 in VirtualBox.
sudo apt install postgresql
sudo add-apt-repository ppa:mrazavi/gvm
sudo apt install gvm
greenbone-nvt-sync
sudo greenbone-scapdata-sync
sudo greenbone-certdata-sync
Unfortunately, it does not work.
When I try to create a task with the Wizard, the task completes immediately with an empty log, and that's all.
I've run three commands:
systemctl status ospd-openvas # scanner
systemctl status gvmd # manager
systemctl status gsad # web ui
Everything is okay except ospd-openvas: its status is green and active, but there are some errors too:
Jul 20 15:00:27 alex-VirtualBox ospd-openvas[833]: OSPD - openvas:
ERROR: (ospd_openvas.daemon) Failed to create feed lock file
/var/run/ospd/feed-update.lock. [Errno 2] No such file or directory:
'/var/run/ospd/feed-update.lock'
From the error message it looks like the directory /var/run/ospd/ does not exist.
Create the directory and try to restart the service.
In Ubuntu 20.04, /var/run points to /run, which is a temporary file system. That means that if you create the directory /var/run/ospd manually, it will be gone after the next reboot. To fix it permanently (in case the missing directory is the issue), please refer to this post.
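A minimal sketch of that permanent fix using systemd-tmpfiles, assuming the scanner runs as the gvm user and group (as in the commands below):
echo 'd /run/ospd 0775 gvm gvm -' | sudo tee /etc/tmpfiles.d/ospd.conf
sudo systemd-tmpfiles --create /etc/tmpfiles.d/ospd.conf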
This may help some people with some of the issues I've been facing:
mkdir -p /var/run/ospd/
touch /var/run/ospd/feed-update.lock
chown gvm:gvm /var/run/ospd/feed-update.lock
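Then restart the scanner service (the unit name is the one checked above):
sudo systemctl restart ospd-openvas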
I'm building a custom Debian ISO with the simple-cdd utility. It worked well until I attached my own .deb package.
build-simple-cdd --dist stretch --profiles moj --force-root --local-packages /root/iso/deb
build-simple-cdd itself works properly: I can see my deb package in the tmp directory structure, and the ISO image is created successfully. However, the Debian installation fails.
I suspect that the postinst script fails, since it uses the systemctl command, which may be unavailable at that point.
#!/bin/sh
set -e
echo $1
if [ "$1" = "configure" ]; then
echo "Configuring privileges..."
chown user:user /usr/bin/Koncentrator
chmod 0755 /usr/bin/Koncentrator
echo "Enabling Koncentrator services..."
systemctl daemon-reload
systemctl enable Xvfb.service
systemctl enable Koncentrator.service
fi
I've added a systemd dependency to the control file, but it doesn't help.
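For what it's worth, the dependency in question is just the usual Depends entry in debian/control, roughly like this (illustrative only):
Depends: ${misc:Depends}, systemd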
I made a workaround for this issue: simple-cdd allows you to provide a post-installation script, and apt install can be called there without problems. Two steps are required:
Add the deb package to the installation disk. This is configured via the profile configuration file (moj.conf):
all_extras="$all_extras /root/iso/files/custompackage_0.1.3.deb"
Run apt install in the moj.postinst script:
#!/bin/sh
mount /dev/cdrom /media/cdrom
cd /media/cdrom/simple-cdd
apt install ./custompackage_0.1.3.deb
cd /
sync
umount /media/cdrom
If you want to debug your postinst script, you can insert a long sleep into it:
#!/bin/sh
sleep 10000000
...
Then switch terminals (Ctrl+Alt+F1-F6) during the finish-install phase and call chroot /target to switch into the target environment.
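A rough sketch of such a debugging session from the second console (the package name is just the example from above):
chroot /target /bin/sh            # work inside the half-installed target system
dpkg -l | grep custompackage      # e.g. check whether the package actually got installed
exit                              # leave the chroot when done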
I am confused by the Grafana installation steps (pasted below) and want to understand how I can install and run Grafana as my own user.
$ echo "deb https://packagecloud.io/grafana/stable/debian/ stretch main" | sudo tee -a /etc/apt/sources.list
$ curl https://packagecloud.io/gpg.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install grafana
Can someone explain what the above lines are doing, and how I can get Grafana installed for my user?
When I ran
echo "deb https://packagecloud.io/grafana/stable/debian/ stretch main" | sudo tee -a /etc/apt/sources.list
[sudo] password for agrawalo:
Sorry, user agrawalo is not allowed to execute '/usr/bin/tee -a /etc/apt/sources.list' as root.
This is how I finally did it.
wget https://dl.grafana.com/oss/release/grafana-6.2.5.linux-amd64.tar.gz
tar -zxvf grafana-6.2.5.linux-amd64.tar.gz
And then changed into the extracted directory and started Grafana:
cd grafana-6.2.5
./bin/grafana-server web
To use Grafana, open your browser and go to http://localhost:3000/
There you will see the login page; the default username and password are both admin.
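If you want it to keep running after you close the terminal, a simple non-root approach (just a sketch) is to background it and log to a file:
nohup ./bin/grafana-server web > grafana.log 2>&1 &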
So I've set up an Azure Data Science Virtual Machine on Linux (Ubuntu) and I've executed the following on the terminal to enable Remote R workspace, RStudio Server, R Server Operationalization and hadoop:
sudo apt update
sudo apt -y upgrade
# Hadoop is installed but doesn't seem to appear on the PATH or have its environment variable set by default
sudo echo "" >> ~/.bashrc
sudo echo "export PATH="'$'"PATH:/opt/hadoop/hadoop-2.7.4/bin" >> ~/.bashrc
sudo echo "export HADOOP_HOME=/opt/hadoop/hadoop-2.7.4" >> ~/.bashrc
#
source ~/.bashrc
#Setting up a password as none exists to begin with because of private key selection in the installation
#RStudio Server requires a password though
printf "MyPassword\nMyPassword\n" | sudo passwd sshuser
#Unfortunately hadoop fails on Data Science Virtual Machine
#error: mkdir: Call From IM-DSonUbuntu/192.168.5.4 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
# hadoop fs -mkdir /user/RevoShare/rserve2
# hadoop fs -chmod uog+rwx /user/RevoShare/rserve2
sudo mkdir -p /var/RevoShare/rserve2
sudo chmod uog+rwx /var/RevoShare/rserve2
# hadoop fs -mkdir /user/RevoShare/sshuser
# hadoop fs -chmod uog+rwx /user/RevoShare/sshuser
sudo mkdir -p /var/RevoShare/sshuser
sudo chmod uog+rwx /var/RevoShare/sshuser
#Setting up R Server Operationalisation
cd /opt/microsoft/mlserver/9.2.1/o16n
sudo dotnet Microsoft.MLServer.Utils.AdminUtil/Microsoft.MLServer.Utils.AdminUtil.dll -silentoneboxinstall MyPassword
#They say this Data Science Virtual Machine already has RStudio Server, but even though port 8787 is open, it's nowhere to be found! So I'm installing it now; after the installation it's accessible by refreshing the page that failed before.
#Perhaps it's not installed then? Or a service is not running like it should?
#https://www.rstudio.com/products/rstudio/download-server/
wget https://download2.rstudio.org/rstudio-server-1.1.414-amd64.deb
yes | sudo gdebi rstudio-server-1.1.414-amd64.deb
#They are small, leave them for debug reasons - let's have evidence the script ran this far.
#sudo rm rstudio-server-1.1.414-amd64.deb
# Remote R workspace Service needs dotnet sdk
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt update
sudo apt -y install dotnet-sdk-2.0.0
sudo apt install libxml2-dev
#Downloading and installing the Remote R service
wget -O rtvs-daemon.tar.gz https://aka.ms/r-remote-services-linux-binary-current
tar -xvzf rtvs-daemon.tar.gz
sudo ./rtvs-install -s
sudo systemctl enable rtvsd
sudo systemctl start rtvsd
#sudo rm rtvs-daemon.tar.gz
#sudo rm rtvs-install
#Fixing Remote R: For some reason, even though 'sudo systemctl enable rtvsd' runs, after every reboot the service won't become automatically active. So let's fix that.
wget https://sa0im0general.blob.core.windows.net/general-blob-container/StartRemoteRAfterReboot.sh
sudo mv StartRemoteRAfterReboot.sh /var/RevoShare/StartRemoteRAfterReboot.sh
sudo /sbin/shutdown -r 5
sudo chown root /etc/rc.local
sudo chmod 755 /etc/rc.local
sudo systemctl enable rc-local.service
sudo -s
sudo find /etc/ -name "rc.local" -exec sed -i 's/exit 0//g' {} \;
sudo echo "" >> /etc/rc.local
sudo echo "sh /var/RevoShare/StartRemoteRAfterReboot.sh" >> /etc/rc.local
sudo echo "exit 0" >> /etc/rc.local
exit
I've also tried the following, one by one, to see if any of it makes a difference to RStudio Server (it didn't, but even if it did, I want a global solution that works for the Remote R Workspace Service and R Server Operationalisation as well, not only RStudio Server):
#Configuring RStudio Server to see the R Server R
sudo echo "rsession-which-r=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> /etc/rstudio/rserver.conf
export RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.profile
source ~/.profile
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.bashrc
source ~/.bashrc
sudo echo "PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R" >> ~/.bashrc
export PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R
source ~/.bashrc
The problem is that even though "which R" points to R Server's R (i.e. typing "sudo R" shows the message "Loading Microsoft R Server packages, version 9.2.1." and loads packages like RevoScaleR), everything else fails to do so.
Accessing RStudio Server at http://THE-IP-GOES-HERE.westeurope.cloudapp.azure.com:8787 and logging in with the initial user ("sshuser"), or with any other user for that matter, will NOT load R Server, and the RevoScaleR rx functions are unavailable.
Using my local Visual Studio 2017 to access the remote workspace via "Add connection" on "Workspaces" tab loads MRO and says:
Installed R versions:
[0] Microsoft R Open '3.4.1.1347' (Default)
And finally, when I use R Server's Operationalisation and log in with the "mrsdeploy" package's "remoteLogin()", R Server packages like RevoScaleR are again not loaded, so things like "rxSummary(~., data=iris)" fail with the error 'could not find function "rxSummary"'.
The exact same thing happened when I deployed a "Machine Learning Server 9.2.1 on Linux (Ubuntu)" image from Azure.
I don't want to just use the regular open source R, I want to be able to use the R Server - that's why I deployed this VM. How can I make it so that everything loads R Server's R, not Microsoft R Open? (Like I'm able to do from terminal using "R")
As a result of my having tried all of this and the fact that R Server is loaded in the console, my mind now goes to permissions. Could it be that by default the Data Science VM doesn't have the correct permissions to allow these?
I'm at a loss
RStudio Server is installed on the Ubuntu DSVM, but the service is disabled by default as it does not support SSL. You can enable it with systemctl enable rstudio-server, then start it with systemctl start rstudio-server.
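In other words, from a shell on the VM:
sudo systemctl enable rstudio-server
sudo systemctl start rstudio-server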
RStudio Server uses the same R as Microsoft R Server, but the .libPaths are different, which is why you cannot load the MRS packages. You will need to manually set the .libPaths so they match.
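A rough sketch of how to line them up; the ML Server library directory below is an assumption, so check .libPaths() in a console where the MRS packages do load (e.g. "sudo R") and use whatever it reports:
# Compare .libPaths() in each front end first, then prepend the MRS library
# directory for your user; the exact directory is an assumption - adjust it.
echo '.libPaths(c("/opt/microsoft/mlserver/9.2.1/libraries/RServer", .libPaths()))' >> ~/.Rprofile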
I wanted a widget to view and edit the time range from within Kibana dashboards. After a lot of research I found a plugin, kibana-time-plugin. Ref: https://github.com/nreese/kibana-time-plugin
Currently I am using Kibana 5.4.0 locally. After installing the plugin I tried "bower install", as specified on the GitHub page, but I get an error:
$ bower install
/usr/bin/env: ‘node’: No such file or directory
Kibana itself also fails to start, giving the error shown in the attached screenshot (kibana5.4.0).
Can anyone guide me on this? Thanks in advance!
I think the optimization failures may be due to file permissions: the plugin files need to be accessible by the kibana user. Specifically, check this instruction:
Installing plugins with linux packages
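For a standard package install, that usually boils down to something like this (the plugin path is the one used in the script below):
sudo chown -R kibana:kibana /usr/share/kibana/plugins/kibana-time-plugin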
Here is a complete script that worked for me. I am new to Kibana and Kibana plugins, so any feedback is appreciated. Two important notes:
1) I am pulling the zip file from S3, so you will need to edit that.
2) Be sure to restart Kibana afterwards and check the logs (see the commands after the script).
#!/bin/bash
# install nodejs and npm
sudo curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash -
sudo yum install -y nodejs
sudo npm install -g bower
# copy the plugin zip and unzip it and fix the name
cd /usr/share/kibana/plugins
sudo aws s3 cp s3://<YOUR-BUCKET>/kibana-time-plugin-master.zip .
sudo unzip kibana-time-plugin-master.zip
sudo mv kibana-time-plugin-master kibana-time-plugin
# install the plugin
cd /usr/share/kibana/plugins/kibana-time-plugin
sudo sed -i -e 's/5.0.0/5.4.2/' package.json
sudo chown -R kibana:kibana *
sudo mkdir -p /home/kibana
sudo chown -R kibana:kibana /home/kibana
sudo -u kibana bower install
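The restart and log check mentioned in note 2, assuming a systemd-based install:
sudo systemctl restart kibana
sudo journalctl -u kibana -f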