Is it possible to upgrade from 10.1.x to 10.3.x directly in one step, or do I have to upgrade first to 10.2.x and then to 10.3.x?
This is an important question for upgrading our production MariaDB servers, and I couldn't find any answer or notes about upgrading from the 10.1 series to the 10.3 series.
So do I have to do it as follows:
10.1.32 --> 10.2.16
10.2.16 --> 10.3.7
or, in one step:
10.1.32 --> 10.3.7
In general, for any upgrade for a critical production environment:
The best approach is to use or create a test environment that is as close as possible to your production environment and test the upgrade there.
Make backups and prepare a rollback plan so you are ready to undo your changes.
For MariaDB specifically, to quote from related questions on their support pages:
The main concern with skipping versions is that, while upgrading one major version is usually well-tested, skipping versions is not, so you may bump into an incompatibility.
Even if you find anecdotal indications that it worked for others, a database engine like MariaDB has possible complexities with different storage engines and the like that might make it more tricky in certain setups than in others.
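The safe two-hop path can be sketched as a script; this is a minimal sketch assuming a Debian-style package setup (the package names, the `/backup` path, and the `run` helper are assumptions, not part of any official procedure). With `DRY_RUN=1` (the default here) it only prints the commands so the plan can be reviewed before anything is touched:

```shell
# Dry-run by default: print each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. Full logical backup before touching anything (rollback material).
run mysqldump --all-databases --routines --events --result-file=/backup/pre-upgrade.sql

# 2. First hop: 10.1 -> 10.2.
run systemctl stop mariadb
run apt-get install -y mariadb-server-10.2   # assumption: Debian-style package names
run systemctl start mariadb
run mysql_upgrade                            # migrates system tables to 10.2

# 3. Second hop: 10.2 -> 10.3, same pattern.
run systemctl stop mariadb
run apt-get install -y mariadb-server-10.3
run systemctl start mariadb
run mysql_upgrade
```

Running `mysql_upgrade` after each hop is the important part: it is what migrates the system tables between major versions.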
1. Shut down the XAMPP server from the XAMPP control panel.
2. Download the ZIP version of MariaDB.
3. Rename the xampp/mysql folder to mysql_old.
4. Unzip the contents of the MariaDB ZIP file into your XAMPP folder.
5. Rename the MariaDB folder (called something like mariadb-5.5.37-win32) to mysql.
6. Rename xampp/mysql/data to data_old.
7. Copy the xampp/mysql_old/data folder to xampp/mysql/.
8. Copy the xampp/mysql_old/backup folder to xampp/mysql/.
9. Copy the xampp/mysql_old/scripts folder to xampp/mysql/.
10. Copy mysql_uninstallservice.bat and mysql_installservice.bat from xampp/mysql_old/ into xampp/mysql/.
11. Copy xampp/mysql_old/bin/my.ini into xampp/mysql/bin.
12. Edit xampp/mysql/bin/my.ini using a text editor like Notepad. Find skip-federated and add a # in front of it to comment out the line, if it exists. Save and exit the editor.
13. Start up XAMPP.
Note: if you can't get MySQL to start from the XAMPP control panel, add a skip-grant-tables line anywhere in the xampp/mysql/bin/my.ini file.
14. Run xampp/mysql/bin/mysql_upgrade.exe.
15. Shut down and restart MariaDB (MySQL).
If MySQL still does not start, follow this note (important!):
Note: if the MySQL error log contains a message like c:\xampp\mysql\bin\mysqld.exe: unknown variable 'innodb_additional_mem_pool_size=2M', remove or comment out that line in xampp/mysql/bin/my.ini.
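Taken together, the my.ini edits from step 12 and the notes above amount to a few lines; a sketch of the relevant section (option names as they appear in a typical XAMPP my.ini; your file may differ):

```ini
[mysqld]
# Step 12: comment out skip-federated if it exists.
# skip-federated

# Troubleshooting note for step 13: add this only temporarily if MySQL
# refuses to start, and remove it again afterwards (it disables authentication).
# skip-grant-tables

# Final note: this variable was removed in MariaDB 10.x, so comment it
# out or delete it if mysqld reports it as unknown.
# innodb_additional_mem_pool_size=2M
```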
Help from this link.
I ran into a problem: the compilation and the database setup went well, but when I start the worldserver, I get an error:
Loading world information...
> RealmID: 1
> Version DB world: ACDB 335.6-dev
Will clear `logs` table of entries older than 1209600 seconds every 10 minutes.
Using DataDir /azerothcore-wotlk/data/
WORLD: VMap support included. LineOfSight:true, getHeight:true, indoorCheck:true PetLOS:true
Map file '/azerothcore-wotlk/data/maps/0004331.map': does not exist!
exit code: 1
worldserver terminated, restarting...
worldserver Terminated after 1 seconds, termination count: : 6
worldserver Restarter exited. Infinite crash loop prevented. Please check your system
What could be the problem? I rechecked the permissions on the directory, including the owner, and everything is fine. I tried different DataDir paths; it is now set to DataDir = "/home/azcore/azerothcore-wotlk/data". I still get the error. How can I fix it?
First of all, if you only need to get the latest maps compatible with AzerothCore, you can download them from here.
Otherwise, set CTOOLS_BUILD='all' in your config.sh file, and afterwards run the build again using
./acore.sh compiler build
This will generate the binaries to extract the data inside azerothcore-wotlk/env/dist/bin/.
With the binaries in place, you can follow the guide here to extract the data manually; you only need to move the binaries into the WoW client directory and run them in the right order.
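The steps above can be sketched as a couple of commands; this is only a sketch against a standard AzerothCore checkout (the config.sh location and the extractor workflow are assumptions and may differ slightly between versions):

```shell
# Enable building the client-data extraction tools, then rebuild.
sed -i "s/^CTOOLS_BUILD=.*/CTOOLS_BUILD='all'/" conf/config.sh
./acore.sh compiler build
# The extractors now sit in env/dist/bin/. Copy them into the WoW 3.3.5
# client directory and run them in the order given by the extraction guide
# (maps first, then vmaps, then mmaps); the regenerated map files then
# replace the incomplete ones in your DataDir.
```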
I am using Firebird 4.0 release candidate 1 on Linux (attempting to use it either in a C application or with the ODBC driver). When I attempt to open an embedded database (/path/to/db/name.db), I get "Unable to complete network request to host XXX. Failed to establish a connection." I know this means that libEngine13.so can't be found, so it is defaulting to localhost.

However, libEngine13.so has been built, and it is in the default install location for the git repo, /path/to/firebird/gen/Release/firebird/lib/plugin, so I'm not sure why it isn't being found. I have also tried adding the folder containing it (plugin) to LD_LIBRARY_PATH, copying libEngine13.so to the same directory as libfbclient.so (/path/to/firebird/gen/Release/firebird/lib), adding it to a plugin folder in the directory containing libfbclient.so, and so on. Any ideas?
I should clarify that this issue was actually with Firebird 4.0 release candidate 1 (which I was not aware of at the time, but should have been). I used strace to check where it was looking for libEngine13.so: it was searching /path/to/firebird/install/plugins instead of /path/to/firebird/gen/Release/firebird/plugins. When I copied the library there, I then received an "unavailable database. SQLCODE: -904" error. When I switched the database location from /path/to/proj/databases to /path/to/my/dir/on/parallel/file/system/databases, it worked.
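For anyone debugging the same thing, the strace check looks roughly like this (the `isql` client and the database path are just examples; the point is to watch which directories the process actually probes for the plugin):

```shell
# Trace file-open attempts and keep only the plugin lookups; the paths
# printed show where libEngine13.so is really being searched for.
strace -f -e trace=openat ./isql /path/to/db/name.db 2>&1 | grep Engine13
```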
As outlined in http://wiki.bitplan.com/index.php/Apache_Jena#Script_to_start_Fuseki_server
I have been avoiding the complexity of Fuseki configuration files and started the server from a script for my use cases, in which I only need one dataset/endpoint. For multiple datasets/endpoints I simply used multiple servers.
Descriptions like:
https://jena.apache.org/documentation/fuseki2/fuseki-config-endpoint.html
and questions like:
fuseki Multiple services found exception
have been intimidating me, since there seem to be so many options and no straightforward way to simply say "please use these datasets from the following directories", as the command-line version can do for one dataset.
Just look at:
https://users.jena.apache.narkive.com/MNZHLT25/multiple-datasets-on-fuseki
where the user's expectation:
java -jar fuseki-0.1.0-server.jar --update --loc=data /dataset
--loc=data2 /dataset2
is unfortunately not fulfilled. Instead:
http://jena.apache.org/documentation/serving_data/index.html#fuseki-configuration-file
was the answer at the time, and that link is now outdated.
So obviously there are people out there getting Fuseki to work with multiple datasets. But how do they do it?
I know how to load a TDB store from a triple file via the command line. I know that I could use the web GUI to set up datasets and load data, but that won't work for my multi-million (and partly multi-billion) triple files.
What is a (hopefully simple) example of loading multiple triple files, making the results available as different datasets on the same Fuseki server, and having the SPARQL endpoints running (partly read-only)?
https://jena.apache.org/documentation/fuseki2/fuseki-layout.html gives a hint on the layout of files.
Using the script to start Fuseki, I inspected the run directory, which in my case was found at:
apache-jena-fuseki-3.16.0/run
There are two subdirectories, which are initially empty and stay so if you run things from the command line:
configuration
databases
By adding a dataset via the web GUI at http://localhost:3030, a directory named after the dataset, in this case:
databases/cr
and a configuration file
configuration/cr.ttl
are created.
For smaller datasets, data can now be added via the web GUI. For bigger datasets, a copy or symlink of the original loaded TDB data is necessary in the databases directory.
example symlinks:
zeus:databases wf$ ls -l
total 48
drwxr-xr-x 4 wf admin 136 Sep 14 07:43 cr
lrwxr-xr-x 1 wf admin 27 Sep 15 11:53 dblp -> /Volumes/Torterra/dblp/data
lrwxr-xr-x 1 wf admin 26 Sep 14 08:10 gnd -> /Volumes/Torterra/gnd/data
lrwxr-xr-x 1 wf admin 42 Sep 14 07:55 wikidata -> /Volumes/Torterra/wikidata2020-08-15/data/
By restarting the server without a --loc:
nohup java -jar fuseki-server.jar &
the configurations are automatically picked up.
The good news is that you do not have to bother with the details of the config files this way, as long as you do not have any special needs.
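For reference, the per-dataset files the web GUI writes into configuration/ are small assembler files. A sketch of what one for the dblp dataset might look like (the prefixes and properties follow the Fuseki 3.x assembler vocabulary; declaring only fuseki:serviceQuery is one way to keep an endpoint read-only):

```ttl
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .
@prefix :       <#> .

:service a fuseki:Service ;
    fuseki:name         "dblp" ;        # served at /dblp
    fuseki:serviceQuery "sparql" ;      # query only, i.e. read-only
    fuseki:dataset      :dataset .

:dataset a tdb:DatasetTDB ;
    tdb:location "databases/dblp" .     # the symlinked TDB directory
```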
I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay on Tomcat. I am fixing the index on one server and restarting it while the other is running. This is a production environment, so I cannot bring them both down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
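For context, CheckIndex is run directly from the Lucene core jar, along these lines (the jar file name matches the Lucene 2.2 index format shown above, but its exact name and location on your classpath are assumptions):

```shell
# Dry run first: reports broken segments without modifying anything.
java -cp lucene-core-2.2.0.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0
# Then the destructive repair, which drops unreadable segments
# (and the documents they contain).
java -cp lucene-core-2.2.0.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix
```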
If you access your search index with more than one application server, I would suggest integrating a Solr server. Then you don't have the problem of two app servers trying to write to the same files, which can be error-prone, as you have already found out.
To get Solr up and running you have to follow these steps:
Install a Solr server on any machine you like. A machine running only Solr is preferable.
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr Search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr
UPDATED: I took everyone's advice and decided plone.app.registry and 4.1.1 were not the issue. The question is: what is? And where can I find the error logs with the binary installer?
Symptom: I can't add content types (under Add New...: folders, pages, news items, etc.). It hangs on save; more specifically, portal_factory is unable to validate and move the content to the ZODB.
I had the same issue using both the unified (4.1) and binary (4.1) installers.
Environment: MacBook, Mac OS X 10.6 Snow Leopard, 32-bit.
When I run buildout I see no errors:
2012-05-08 18:13:34 INFO ZServer HTTP server started at Tue May 8 18:13:34 2012
Hostname: 0.0.0.0
Port: 8080
2012-05-08 18:14:01 WARNING ZODB.FileStorage Ignoring index for /Applications/Plone/zinstance/var/filestorage/Data.fs
2012-05-08 18:14:27 INFO Zope Ready to handle requests
When I create a new site in Plone, Terminal says: http://pastie.org/3882025
Line 23: 2012-05-08 18:16:01 INFO GenericSetup.plone.app.registry Cannot find registry
That's not an error; that's what happens whenever you start up an instance with a new Data.fs file. If there is no Data.fs.index, or the .index file is inconsistent with the Data.fs, the existing index is ignored and rebuilt. It means absolutely nothing on a new install.
There must be more information than this in the log.
I fixed this issue by following this post: http://plone.293351.n2.nabble.com/Add-new-Plone-site-creates-site-with-JS-problems-4-1-4-tt7547774.html#a7555663
Basically, I had to go to the JavaScript registry, save, empty the cache, and restart the browser (testing in Chrome only).