How do I install SilverStripe on SourceForge for a project? I know I need a symlink... but I don't know how to set one up.
I have an htdocs folder that is read-only (once on the server) and that I can access via SFTP.
It is accessible via URL.
I have a persistent folder that is writable (once on the server) and that I can access via SFTP.
It is not accessible via URL.
I have MySQL credentials that are accepted during install, but the install can't be finished because of the missing write access.
So you're trying to install SilverStripe on SourceForge? Well, okay.
I guess you need to check that the MySQL user you're using has write access to the database. Also check that you got the database name right during the installation: if you didn't, the installer will try to create that database, and if you don't have the necessary permission (usually the case on shared hosting setups), you'll get an error complaining about the CREATE DATABASE statement.
So do I understand correctly that your problem is that you can upload SilverStripe, but you cannot install it, because the installer wants to write the config file?
Well, in this case there is actually a way to get SilverStripe running without using the installer: just enter the database information into your mysite/_config.php file.
It should look something like this:
<?php

global $project;
$project = 'mysite';

global $databaseConfig;
$databaseConfig = array(
    "type" => 'MySQLDatabase',
    "server" => 'localhost',
    "username" => 'myuser',
    "password" => 'mypass',
    "database" => 'mydatabasename',
    "path" => '',
);

MySQLDatabase::set_connection_charset('utf8');

// This line sets the current theme. More themes can be
// downloaded from http://www.silverstripe.org/themes/
SSViewer::set_theme('blackcandy');

// Set the site locale
i18n::set_locale('en_US');

// Enable nested URLs for this site (e.g. page/sub-page/)
SiteTree::enable_nested_urls();

Director::set_environment_type('dev');
// Director::set_environment_type('live');
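Once this file is in place, visiting yoursite/dev/build?flush=1 in the browser should make SilverStripe build the database schema that the installer would otherwise have created.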
Please note that SilverStripe by default requires write permission on the assets/ folder, and not only for uploading files: when the environment type is set to live, SilverStripe fetches all JS and CSS files, combines them into a single JS file and a single CSS file, and saves those into the assets/ folder.
If this is not possible, the admin will simply not load. You can work around this by letting SilverStripe generate those files on another server (your local dev server) and then uploading the files.
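If you can't get write access to assets/ at all, one more possible workaround (assuming the 2.x/3.x-era Requirements API that this config file implies) is to switch off the combining step in mysite/_config.php:
// Disable CSS/JS combining so SilverStripe never needs to write
// the combined files into assets/.
Requirements::set_combined_files_enabled(false);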
Related
I have been given the task of moving two Drupal-based websites to a new server, not because I'm a Drupal expert but because I'm the only one in the office with PHP programming skills. One is a Drupal 7 site, the other Drupal 8. These were both given to me as DevDesktop archives and SQL dumps. The Drupal 7 site was pretty straightforward: I copied the contents of the docroot up to the new server, created and populated a new MySQL database, and edited the default site settings file to point at the new database. So the Drupal 7 site works fine. Doing the same with the Drupal 8 site, the main problem seems to be that it won't load any CSS or JavaScript.
The JavaScript console threw me off the scent slightly because it said the MIME type of the CSS was incorrect, but on further inspection that's because the path to the CSS was returning a 404.
Compounding the problem is Antibot: as JavaScript isn't loading, although I have the username and password for the admin user, I can't log in, because Antibot keeps sending me back to the homepage telling me to enable JavaScript. I have edited settings.php to enable /core/rebuild.php and tried that, but it doesn't appear to make any difference. I've also manually truncated the 'cache_...' tables, and that doesn't seem to work either. Note that I DON'T have access to SSH on the new server, so I can't use drush.
Refused to apply style from '[]' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
Failed to load resource: the server responded with a status of 404 (Not Found)
What it does look like to my non-expert eye is that Drupal is somehow configured to serve up optimised versions of the CSS and JS from virtual directories /css/ and /js/, although those paths don't actually exist on the server. I checked the .htaccess file, but other than some clever stuff to deliver gzipped versions to gzip-capable browsers, I couldn't see anything in there that would get the server to the correct file. Perhaps if someone could explain how Drupal routes a request to /css/ or /js/ to the right file, that would help my understanding further.
Ultimately I think this problem exists because Drupal 8 wants to deliver optimised files, but the cache is broken and Antibot won't let me get into admin to turn off aggregation.
I have full access to the server files and database, but not drush. Is there a way to turn off the CSS & JS aggregation apart from via the admin menus?
In this situation you can disable aggregation either:
by editing settings.php or settings.local.php:
/**
* Disable CSS and JS aggregation.
*/
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
or via SQL, but you have to decode and unserialize the blob data from the config table to make the changes, and then reverse the process:
# Query:
SELECT name, CONVERT(`data` USING utf8) FROM config WHERE `name` = 'system.performance';
# Unserialize the query output and edit the data locally
$data = unserialize($output);
$data['css']['preprocess'] = FALSE;
$data['js']['preprocess'] = FALSE;
# Then serialize the data and write it back into the config table
# (the original encoding is probably LONGBLOB but it may differ depending on the backend).
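If you can run PHP on the server, here is a minimal sketch of that round trip, assuming PDO is available, the config table has no prefix, and the credentials below are placeholders:
<?php
// Minimal sketch: flip the preprocess flags directly in Drupal 8's
// config table. Host, database name and credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=drupal8;charset=utf8', 'dbuser', 'dbpass');
$stmt = $pdo->prepare("SELECT data FROM config WHERE name = 'system.performance'");
$stmt->execute();
$data = unserialize($stmt->fetchColumn());
// Turn off CSS and JS aggregation.
$data['css']['preprocess'] = FALSE;
$data['js']['preprocess'] = FALSE;
$upd = $pdo->prepare("UPDATE config SET data = ? WHERE name = 'system.performance'");
$upd->execute(array(serialize($data)));
Remember to truncate the cache tables afterwards so Drupal notices the change.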
Once you can SSH into the server, you will need to reset permissions and ownership under sites/default/files/ before enabling aggregation again:
mkdir -p sites/default/files/{css,js}
chown -R apache:apache sites/default/files/
chmod -R 0755 sites/default/files/
You may also want to check that the public file path setting (in settings.php) is properly set according to where these resources are actually located:
$settings['file_public_path'] = 'sites/default/files';
I implemented a feature that allows users to upload files. Everything was working perfectly on my machine. After we deployed it, I got the following error:
Access to the path '\...\VendorDocuments\TempFolder\2585' is denied.
I added Everyone to the list of objects that have full permissions on the VendorDocuments folder. That worked.
Now I'd like to know how to set up the permissions in a way that takes the security aspects into account.
vendorDocuments is the main folder.
Inside vendorDocuments there is another folder called TempFolder.
When a user selects a file, the file is automatically uploaded to TempFolder/UserId.
If the user decides to cancel the operation, the file inside the TempFolder is deleted.
If the user decides to proceed, the file is moved from TempFolder/UserId to a folder belonging to the vendor, still inside vendorDocuments.
VendorDocuments => TempFolder => UserId (file inside)
VendorDocuments => VendorName => DocumentId (file inside)
So in my opinion, there are two problems:
How to set up the permissions at the highest level, i.e. the vendorDocuments folder.
Whether I also need to set up permissions for every vendor folder, i.e. where files belonging to a given vendor will be saved. The reason I'm asking is that I read it's better to set up folder permissions manually. However, in this case, a vendor's own folder will be created on the fly, i.e. the first time a user belonging to that vendor uploads a file.
Sorry for the long question. This is the first time I'm working with permissions.
We take care of permissions like this by assigning an application pool identity to the application itself. This allows you to give the application's account the permissions it needs to write files to their destination. We are using IIS, and depending on your version of IIS the process is slightly different. IIS instructions: http://www.iis.net/learn/manage/configuring-security/application-pool-identities
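As a sketch, assuming an application pool named MyAppPool and the folder layout from the question (the path and pool name here are placeholders), you could grant the pool identity modify rights once at the top level:
rem Grant Modify to the app pool identity; the (OI)(CI) inheritance
rem flags mean vendor folders created on the fly are covered too.
icacls "D:\Sites\VendorDocuments" /grant "IIS AppPool\MyAppPool:(OI)(CI)M" /T
Because those permissions are inherited by subfolders created later, you shouldn't need a separate grant for each vendor folder.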
I have an instance of SilverStripe that we copied off a webserver that we host. We are trying to get it running locally so we can modify it, but when I run it locally, all assets point to the live site. I also cannot access the login or admin pages of the CMS.
When I try to access any local page, it states "Server Error" in the page content.
Is there a place in the code where I can change the asset paths to local, and also access the admin area?
Assuming you're running a local copy of the database and don't have any exotic changes to the way File is handled, SilverStripe should be resolving file paths using the BASE_PATH and BASE_URL constants.
For logging in, you'll want to add something like this to the bottom of mysite/_config.php locally:
define('SS_ENVIRONMENT_TYPE', 'dev');
SSViewer::set_source_file_comments(true);
// Show errors in the browser while debugging locally.
ini_set('display_errors', 1);
error_reporting(E_ALL);
Security::setDefaultAdmin('admin', 'admin');
// Email::setAdminEmail('admin@example.org');
define('SS_LOG_FILE', dirname(__FILE__) . '/' . basename(dirname(dirname(__FILE__))) . '.log');
ini_set('error_log', SS_LOG_FILE);
Director::set_environment_type('dev');
This should give you enough debug information to solve most issues.
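If the "Server Error" page persists, check the log file defined above (it is created inside mysite/ and named after your site's root folder) for the actual stack trace.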
I would use something like https://interconnectit.com/products/search-and-replace-for-wordpress-databases/ (which also works for SilverStripe) to do a search and replace for all occurrences of the old domain.
This would mainly work for image paths that are inside content fields, of course. Otherwise, SilverStripe should automatically convert the paths, as the accepted answer suggests.
I've just migrated a client site to her production server using the latest version of BackupBuddy, v3.0.40, and at first glance everything looks dandy, but on closer inspection most WP file functions are broken: updating WP, uploading images, uploading plugins.
I've done this a ton of times (several times on this host) and don't know why it's not working here.
I suspect it has to do with the tmp directory, but I can't see a problem.
Another possibility is that a script (Installatron via cPanel) may be interfering: I notice that there are upload folders created for all months up to 2016! I read about this being a solution to permissions issues in WP's past.
This is what I've tried:
Changing the wp-media upload location to the default, changing the 'store in year/month' setting, and general wiggling. This was imported as '/home/###/public_html/wp-content/uploads', which looks correct but unnecessary; the default is wp-content/uploads. Neither works.
Changing the permissions on the wp-content and uploads directories to 777 (not all contents).
Adding a line to wp-config.php:
define('WP_TEMP_DIR', ABSPATH . 'wp-content/'); no dice.
Uninstalling all traces of the Installatron-scripted WP installation (no files or db remain).
Repeating the migration (same backup file, identical results).
Confirming that:
I can create new posts, just not upload media.
It works on the staging server (same host).
Safe mode is off.
Apache is running as the user, thanks to suPHP.
The files were extracted by PHP via the browser.
I've compared phpinfo() to other working sites and don't notice anything out of the ordinary.
Hope you can shed some light!
Thanks, Tim
Image upload error:
“envelope-9887.jpg” has failed to upload due to an error
The uploaded file could not be moved to /home/###/public_html/wp-content/uploads/2012/07.
WordPress update error:
Download failed.: Destination directory for file streaming does not exist or is not writable.
Plugin install error:
Download failed. Destination directory for file streaming does not exist or is not writable
Sometimes when migrating you may have to look through the database options table and change a few entries, e.g.:
under the old site structure it could be /home/yoursiteid/public_html/wp-content/ etc.,
but on the new server the structure could have changed,
e.g. to /home/differentuserid/wwwroot/wp-content/.
Edit a file on the server to include:
echo getcwd() . "\n";
just to see if the home directory is the same on your current server or whether it has changed from your old server. Then have a check in your database options table and update the entries which reference the old directory structure.
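One way to find those entries, assuming the default wp_ table prefix (adjust the path fragment to match your old structure):
-- Find options that still reference the old directory layout.
SELECT option_name, option_value
FROM wp_options
WHERE option_value LIKE '%/home/yoursiteid/%';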
I found, eventually, that I'd overlooked the line
define('WP_TEMP_DIR', 'old-hard-link-here');
which I believe was nestled directly under the WP salts, camouflaged to the tired eye! Simply removing that line and setting the media path to the default fixed the issue.
I believe that line was installed by the cPanel script Installatron.
Case closed.
I want to create a file and then serve it using Meteor, but I don't want the server to restart when I create/update the file in the public directory.
The user will click on a button to create a config file on the server and I want the user to be able to download that config file.
Is there a way to do this without triggering the server to restart?
I have tried creating a link to the file and creating a hidden file but nothing has worked.
Thanks for your time.
Try meteor run --production. That might solve your problem.
The server restarts because you are running it in development mode.
When it runs in production, it doesn't restart on content changes.
The only way I know to run in production is to bundle the application first.
Have a look here: http://docs.meteor.com/#deploying
If you don't want to run in production mode, here is a workaround:
To prevent reloading, you have to generate your files in a folder that is located outside of your project directory.
Then you have your Meteor app serve the content of that folder.
Here is an example that uses the connect npm package to make your local folder /meteor/generated_files served under the URL hostname.com/downloads/:
var connect = Npm.require('connect');
var fs = Npm.require('fs');

function serveFolder(urlPath, diskPath) {
    // Nothing to serve if the folder doesn't exist on disk.
    if (!fs.existsSync(diskPath))
        return false;
    // Tell Meteor's router to leave this URL prefix alone.
    RoutePolicy.declare(urlPath, 'network');
    // Serve the folder's contents as static files.
    WebApp.connectHandlers.use(urlPath, connect.static(diskPath));
    return true;
}

serveFolder('/downloads', '/meteor/generated_files/');
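Note that Npm.require, RoutePolicy and WebApp are only available in server-side code, so this snippet belongs in a file under your server/ directory (or in a server-only package).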
I've published the very primitive package I have that does just that.