How to create a primary index in my Couchbase server using a query? - symfony

I have written the following code to create a primary index in the Couchbase server from my Symfony project.
// Establish connection
$cluster = new \CouchbaseCluster('http://ec2-ip_number.compute-1.amazonaws.com:8091'); // http://127.0.0.1:8091
$bucket = $cluster->openBucket("default");
$bucket->enableN1ql(array('http://ec2-ip_number.compute-1.amazonaws.com:8091'));
// Execute query
$query = 'CREATE PRIMARY INDEX `default-primary-index` ON `default` USING GSI';
$queryNql = \CouchbaseN1qlQuery::fromString($query);
$bucket->query($queryNql);
My Couchbase cluster is hosted on Amazon AWS.
After running the above code, I get the following error:
"LCB_NETWORK_ERROR: Generic network failure. Enable detailed error codes (via LCB_CNTL_DETAILED_ERRCODES, or via detailed_errcodes in the connection string) and/or enable logging to get more information
500 Internal Server Error - CouchbaseException"
I have tried many times to solve this issue but haven't been able to find a solution.

Can you run the CREATE INDEX command from the web console (Query tab)? Make sure there are no connectivity/network issues between the client and the server, or between the various Couchbase services.
Did you retry after enabling logging? What error do you see in the logs?
Please provide more details on the Couchbase version and the AWS/cluster setup.
-Prasad
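As a diagnostic step, the detailed error codes mentioned in the error message can be switched on through the connection string. Below is a minimal sketch, assuming the PHP SDK 2.x API used in the question and the couchbase:// scheme instead of http:// (the hostname is a placeholder). Since the cluster runs on EC2, the Couchbase service ports (8091-8094, 11210) also need to be reachable from the client.
// Enable detailed libcouchbase error codes via the connection string
$cluster = new \CouchbaseCluster('couchbase://ec2-ip_number.compute-1.amazonaws.com?detailed_errcodes=1');
$bucket = $cluster->openBucket('default');
try {
    $query = \CouchbaseN1qlQuery::fromString('CREATE PRIMARY INDEX `default-primary-index` ON `default` USING GSI');
    $bucket->query($query);
} catch (\CouchbaseException $e) {
    // The detailed code narrows the failure down to DNS, a closed port, a timeout, etc.
    error_log($e->getCode() . ': ' . $e->getMessage());
}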

Related

R mongolite: correct format for connecting with a MongoDB on a remote server?

I'm writing some R code that queries a MongoDB database, imports records matching the query criteria into R, performs record linkage with another data source, and then pushes the updated records back into MongoDB.
The code needs to work with any instance of the MongoDB database. Some people have it installed as standalone on their own computers, while others have it installed on their organisational servers. Note that these are servers specific to individual organisations and not the public mongo server.
To test my code, I have access to both scenarios - one instance is set up on my own computer, and I have several remote server instances as well.
The MongoDB database has some APIs, but I was struggling with adapting the APIs to include the correct syntax to form my query, so I thought I would try the mongolite package instead.
I was able to create a successful connection string for the MongoDB instance on my local computer, using my user ID (which I retrieve first with an API and save as the R object myids), my password, localhost and the port number, as below:
# Load library:
library(mongolite)
# Create connection:
con <- mongolite::mongo(collection = "person",
                        db = "go-data",
                        url = paste0("mongodb://localhost:3000",
                                     myids$userid,
                                     ":",
                                     rawToChar(password)))
I understood from reading the mongolite user manual that to create the connection string / URI, you skip the http or https part of the address and preface it with either mongodb:// when the MongoDB database is on a local computer, or mongodb+srv:// when the MongoDB database is on a remote server.
However, when I try just changing the prefix and login details for the remote server version, the connection fails. Say the URL for my remote server is https://mydb-r21.orgname.org/, which opens a web page where you can log in to the MongoDB database and interact with it via a graphical user interface. Just swapping localhost:3000 for the web address mydb-r21.orgname.org/ and supplying the relevant login credentials for that server doesn't work:
# Load library:
library(mongolite)
# Create connection:
con <- mongolite::mongo(collection = "person",
                        db = "go-data",
                        url = paste0("mongodb+srv://mydb-r21.orgname.org/",
                                     myids$userid,
                                     ":",
                                     rawToChar(password)))
When I try, this is the error I get:
Warning: [ERROR] Failed to look up SRV record "_mongodb._tcp.mydb-r21.orgname.org": DNS name does not exist.
Error: Invalid uri_string. Try mongodb://localhost
If I instead try the mongodb:// prefix (not localhost, because it isn't hosted locally), I get this:
Error: No suitable servers found (`serverSelectionTryOnce` set): [connection timeout calling hello on 'mydb-r21.orgname.org:27017']
Interestingly, the port that is suffixed in the error message is the correct one that I was expecting, but that still doesn't help me.
The documentation in the mongolite user manual and other places I've found online seems to add some read/write specifications to the connection string, but as I'm not very familiar with how connection strings are constructed, I don't know if these are specific to the databases used in their examples. I can't find any clear explanation of what the extra bits that are not part of the URL mean, e.g. as shown in this blog. All the prefixes seem to be a bit different too, so I am not even sure what would be appropriate to try in my case.
Can anyone explain why the connection string works fine with localhost:port number for the local instance, but doesn't work with the URL for the remote server / online instance?
Also what do I need to do to make the URI for the remote server valid?
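For comparison, a standard MongoDB connection string puts the credentials before the host, and mongodb+srv:// only resolves when the DNS zone publishes SRV records, which the first error message says does not exist for this host. Below is a minimal sketch, assuming the server accepts plain mongodb:// connections on port 27017 and authenticates against the admin database (both of which are assumptions about this particular server):
library(mongolite)
# Build the URI with the credentials ahead of the host; URL-encode them in case
# they contain reserved characters such as ':' or '@'
uri <- sprintf(
  "mongodb://%s:%s@mydb-r21.orgname.org:27017/go-data?authSource=admin",
  utils::URLencode(myids$userid, reserved = TRUE),
  utils::URLencode(rawToChar(password), reserved = TRUE)
)
con <- mongolite::mongo(collection = "person", db = "go-data", url = uri)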

While configuring the BPS DB in WSO2 IS 5.9.0, which scripts do I have to import into MySQL?

I am following this document: https://is.docs.wso2.com/en/5.9.0/setup/changing-datasource-bpsds/
deployment.toml Configurations.
[bps_database.config]
url = "jdbc:mysql://localhost:3306/IAMtest?useSSL=false"
username = "root"
password = "root"
driver = "com.mysql.jdbc.Driver"
Executing database scripts.
Navigate to <IS-HOME>/dbscripts. Execute the scripts in the following files, against the database created.
<IS-HOME>/dbscripts/bps/bpel/create/mysql.sql
<IS-HOME>/dbscripts/bps/bpel/drop/mysql-drop.sql
<IS-HOME>/dbscripts/bps/bpel/truncate/mysql-truncate.sql
Now, create/mysql.sql creates the tables, and the other two files are responsible for dropping and truncating the same tables. Which of them do I need to run?
Can anyone also explain the use case of the BPS datasource?
Please help.
You only need to change your BPS database if you have a requirement to use the workflow feature [1] in the WSO2 Identity Server. It is mentioned in this documentation: https://is.docs.wso2.com/en/5.9.0/setup/changing-to-mysql/
The document is supposed to mention only the relevant DB script, but it is misleading because it asks you to execute all three scripts. If you are using the workflow feature, just use the
/dbscripts/bps/bpel/create/mysql.sql
script to create the tables in your MySQL database.
[1]. https://is.docs.wso2.com/en/5.9.0/learn/workflow-management/
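For example, that script can be loaded with the mysql command-line client. A sketch assuming the IAMtest database and root login from the deployment.toml above (adjust the names to your setup):
mysql -u root -p IAMtest < <IS-HOME>/dbscripts/bps/bpel/create/mysql.sql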

Create a cluster for an existing MariaDB database

I have an existing database for which I was looking to create a new clustered environment. I tried the following steps:
Create a new database instance (OS & DB Server).
Take a backup / snapshot from existing database server for all the databases.
Import the snapshot to the new server.
Configure the cluster - referred to various sites but all giving same solution. Example reference site - https://vexxhost.com/resources/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-12-04/
Ran the command (sudo galera_new_cluster) on the primary server; the primary server started up without issue. But when we tried starting the secondary server, it crashed for some reason.
Unfortunately, at this point I don't have the logs from the failure stored or backed up. But it seemed like it tried to sync with the primary server and had some failure doing so.
As additional context for the actions above: both servers use the same username/password, a passwordless SSH connection was created between the two machines, and the syncing method is set to rsync.
Am I missing something or doing it wrong? Is there a better way to do this?
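For reference, a minimal sketch of the [galera] settings both nodes would typically share in their MariaDB configuration, using the rsync SST method mentioned above (the names and IPs are placeholders, and the wsrep_provider path varies by distribution):
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so   # path varies by distribution
wsrep_cluster_name       = "my_cluster"
wsrep_cluster_address    = "gcomm://10.0.0.1,10.0.0.2"        # placeholder node IPs
wsrep_node_address       = "10.0.0.1"                         # this node's own IP
wsrep_node_name          = "node1"
wsrep_sst_method         = rsync
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
With something like this in place, galera_new_cluster is run only on the first node; the secondary is then started normally (e.g. systemctl start mariadb) and should join and sync from the primary via rsync. The secondary's error log (typically /var/log/mysql/error.log) is the place to look for why the sync failed.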

Azure SQL Database sometimes unreachable from Azure Websites

I have an ASP.NET application deployed to Azure Websites that connects to an Azure SQL Database. This has been working fine for the last year, but last weekend I started getting errors connecting to the database, with the following stack trace.
[Win32Exception (0x80004005): Access is denied]
[SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)]
This error comes and goes staying a few hours and then goes away for a few hours. The database is always accessible from my machine.
A few things that I have tried are:
Adding a "allow all" firewall rule (0.0.0.0-255.255.255.255) has no effect
Changing the connection string to use the database owner as credential has no effect.
what does have effect is to change the azure hosting level to something else. This resolves the issue temporarily and the website can access the database for a few hours more.
What could have have happened for this error to start showing up? The application hasn't been changed since August.
EDIT:
Azure support found that there was socket and port exhaustion on the instance. What the root cause of that is remains unknown.
Craig is correct in that you need to implement SQL Azure Transient Fault Handling. You can find instructions here: https://msdn.microsoft.com/en-us/library/hh680899(v=pandp.50).aspx
From the article
You can instantiate a PolicyRetry object and wrap the calls that you make to SQL Azure using the ExecuteAction method, using the methods shown in the previous topics. However, the block also includes direct support for working with SQL Azure through the ReliableSqlConnection class.
The following code snippet shows an example of how to open a reliable connection to SQL Azure.
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.AzureStorage;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.SqlAzure;
...
// Get an instance of the RetryManager class.
var retryManager = EnterpriseLibraryContainer.Current.GetInstance<RetryManager>();
// Create a retry policy that uses a default retry strategy from the configuration.
var retryPolicy = retryManager.GetDefaultSqlConnectionRetryPolicy();
using (ReliableSqlConnection conn = new ReliableSqlConnection(connString, retryPolicy))
{
    // Attempt to open a connection using the retry policy specified
    // when the constructor is invoked.
    conn.Open();
    // ... execute SQL queries against this connection ...
}
The following code snippet shows an example of how to execute a SQL command with retries.
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.AzureStorage;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.SqlAzure;
using System.Data;
...
using (ReliableSqlConnection conn = new ReliableSqlConnection(connString, retryPolicy))
{
    conn.Open();
    IDbCommand selectCommand = conn.CreateCommand();
    selectCommand.CommandText = "UPDATE Application SET [DateUpdated] = getdate()";
    // Execute the above query using a retry-aware ExecuteCommand method which
    // will automatically retry if the query has failed (or the connection was dropped).
    int recordsAffected = conn.ExecuteCommand(selectCommand, retryPolicy);
}
The answer from Azure support indicated that the connection issues I was experiencing were due to port/socket exhaustion. This was probably caused by another website on the same hosting plan.
Some explanations for why the symptoms went away after changing the hosting service level:
Changing the hosting plan helped for a while, since this moved the virtual machine and closed all sockets.
Changing the hosting plan from level B to level S helped, since Azure limits the number of sockets on level B.

The transaction manager has disabled its support for remote/network transactions

I'm using SQL Server and ASP.NET. I have the following function:
Using js = daoFactory.CreateJoinScope()
    Using tran = New Transactions.TransactionScope()
        '...
        tran.Complete()
    End Using
End Using
However, the following exception is thrown:
The transaction manager has disabled its support for remote/network transactions.
Description of JoinScope:
Public Class JoinScope
    Implements IJoinScope
    Implements IDisposable
    '...
End Class
I have worked this way in another application in the same environment without a problem, but here I get this exception. What can I do to fix the issue?
Make sure that the "Distributed Transaction Coordinator" service is running on both the database server and the client.
Also make sure you check "Network DTC Access", "Allow Remote Clients", "Allow Inbound/Outbound" and "Enable TIP".
To enable Network DTC Access for MS DTC transactions
Open the Component Services snap-in.
To open Component Services, click Start. In the search box, type dcomcnfg, and then press ENTER.
Expand the console tree to locate the DTC (for example, Local DTC) for which you want to enable Network MS DTC Access.
On the Action menu, click Properties.
Click the Security tab and make the following changes:
In Security Settings, select the Network DTC Access check box.
In Transaction Manager Communication, select the Allow Inbound and Allow Outbound check boxes.
I had a stored procedure that calls another stored procedure on a linked server. When I executed it in SSMS it was fine, but when I called it from the application (via Entity Framework), I got this error.
This article helped me and I used this script:
EXEC sp_serveroption @server = 'LinkedServer IP or Name', @optname = 'remote proc transaction promotion', @optvalue = 'false';
For more detail, look at this:
Linked server : The partner transaction manager has disabled its support for remote/network transactions
In my scenario, the exception was being thrown because I was trying to create a new connection instance within a TransactionScope on an already existing connection:
Example:
void someFunction()
{
    using (var db = new DBContext(GetConnectionString()))
    {
        using (var transaction = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted }))
        {
            someOtherFunction(); // This function opens a new connection within this transaction, causing the exception.
        }
    }
}
void someOtherFunction()
{
    using (var db = new DBContext(GetConnectionString()))
    {
        db.Whatever // <- Exception.
    }
}
I was getting this issue intermittently. I had followed the instructions here and very similar ones elsewhere, and everything was configured correctly.
This page: http://sysadminwebsite.wordpress.com/2012/05/29/9/ helped me find the problem.
Basically, I had duplicate CIDs for MSDTC across both servers (see HKEY_CLASSES_ROOT\CID).
See: http://msdn.microsoft.com/en-us/library/aa561924.aspx section Ensure that MSDTC is assigned a unique CID value
I am working with virtual servers and our server team likes to use the same image for every server. It's a simple fix and we didn't need a restart. But the DTC service did need setting to Automatic startup and did need to be started after the re-install.
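A sketch of the reinstall sequence implied above, from an elevated command prompt (reinstalling MSDTC generates a fresh CID; the Network DTC Access settings then have to be re-applied):
msdtc -uninstall
msdtc -install
sc config msdtc start= auto
net start msdtc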
Comment from answer: "make sure you use the same open connection for all the database calls inside the transaction. – Magnus"
Our users are stored in a separate db from the data I was working with in the transactions. Opening the db connection to get the user was causing this error for me. Moving the other db connection and user lookup outside of the transaction scope fixed the error.
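A sketch of that shape of fix, with hypothetical GetUser and SaveData helpers (the point is that the lookup against the users database happens before the TransactionScope is opened, and everything inside the scope reuses one connection):
// Requires System.Transactions and System.Data.SqlClient.
// GetUser and SaveData are hypothetical helpers standing in for the real data access code.
var user = GetUser(usersDbConnectionString);   // separate users DB, queried outside the transaction
using (var scope = new TransactionScope())
using (var conn = new SqlConnection(dataDbConnectionString))
{
    conn.Open();
    SaveData(conn, user);   // every call inside the scope shares this single open connection
    scope.Complete();
}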
I post the solution below because after some searching this is where I landed, so others may too. I was trying to use EF 6 to call a stored procedure, but had a similar error because the stored procedure used a linked server.
The operation could not be performed because OLE DB provider _ for linked server _ was unable to begin a distributed transaction
The partner transaction manager has disabled its support for remote/network transactions
Jumping over to SQL Client did fix my issue, which also confirmed for me that it was an EF thing.
EF model generated method based attempt:
db.SomeStoredProcedure();
ExecuteSqlCommand based attempt:
db.Database.ExecuteSqlCommand("exec [SomeDB].[dbo].[SomeStoredProcedure]");
With:
var connectionString = db.Database.Connection.ConnectionString;
var connection = new System.Data.SqlClient.SqlConnection(connectionString);
var cmd = connection.CreateCommand();
cmd.CommandText = "exec [SomeDB].[dbo].[SomeStoredProcedure]";
connection.Open();
var result = cmd.ExecuteNonQuery();
That code can be shortened, but I think that version is slightly more convenient for debugging and stepping through.
I don't believe that SQL Client is necessarily the preferred choice, but I felt this was at least worth sharing in case anyone else having similar problems lands here via Google.
The above code is C#, but the concept of switching over to SQL Client still applies. At the very least, attempting it is a useful diagnostic.
I was having this issue with a linked server in SSMS while trying to create a stored procedure.
On the linked server, I changed the server option "Enable Promotion on Distributed Transaction" to False.
Screenshot of Server Options
If you cannot find Local DTC in Component Services, try running this PowerShell script first:
$DTCSettings = @(
    "NetworkDtcAccess",             # Network DTC Access
    "NetworkDtcAccessClients",      # Allow Remote Clients (Client and Administration)
    "NetworkDtcAccessAdmin",        # Allow Remote Administration (Client and Administration)
    "NetworkDtcAccessTransactions", # (Transaction Manager Communication)
    "NetworkDtcAccessInbound",      # Allow Inbound (Transaction Manager Communication)
    "NetworkDtcAccessOutbound",     # Allow Outbound (Transaction Manager Communication)
    "XaTransactions",               # Enable XA Transactions
    "LuTransactions"                # Enable SNA LU 6.2 Transactions
)
foreach ($setting in $DTCSettings)
{
    Set-ItemProperty -Path HKLM:\Software\Microsoft\MSDTC\Security -Name $setting -Value 1
}
Restart-Service msdtc
And it appears!
Source: The partner transaction manager has disabled its support for remote/network transactions
In case others have the same issue:
I had a similar error. It turned out I was wrapping several SQL statements in a transaction, one of which executed on a linked server (a MERGE statement in an EXEC(...) AT Server statement). I resolved the issue by opening a separate connection to the linked server, encapsulating that statement in a try...catch, and aborting the transaction on the original connection if the catch is tripped.
I had the same error message. For me, changing pooling=False to pooling=true;Max Pool Size=200 in the connection string fixed the problem.
