Can we use SQL Server as the database for the Pact Broker? I don't see any documentation about that anywhere. The problem is that I don't have PostgreSQL at my workplace and I was asked to evaluate whether SQL Server serves the purpose. Please help.
We currently test and support PostgreSQL and MySQL. I'm sorry, but SQL Server has not been tested and we can't provide any guarantees.
There are services like Pactflow [1] that you could use to offload the admin burden, if you are able to go with a SaaS product.
https://pactflow.io
I am trying to import my local SQL Server database into Azure and I have all the prerequisites in place (storage, bacpac file, etc.). When I try to import the db, I get the error below.
The Azure SQL Server firewall did not allow the operation to connect.
To resolve this, please select the "Allow All Azure" checkbox in the
Sql Server's configuration blade.
I have already enabled the "Allow Azure services and resources to access this server" option in the firewall settings and added my client IP. Is something behind the scenes preventing access? I am running my SQL Server in a Docker container.
(Screenshots omitted: imported bacpac file, Azure import operation, import error, firewall settings.)
After a week of trial and error, the database imported fine, so I'll answer my own question. Interestingly, I don't have a concrete answer, since I don't know exactly why it eventually worked, but I'll offer two tips anyway.
It might have been the cache on Azure's side. I got in contact with an Azure rep recently and they stated that the cache may not have updated yet; a stale cache could have been the cause. To clear the cache, run the following (see the DBCC FLUSHAUTHCACHE documentation):
DBCC FLUSHAUTHCACHE;
Creating a new rule that spans from 0.0.0.0 to 255.255.255.255 in your firewall settings. Note that this allows connections from every IP address, so treat it as a diagnostic step rather than a permanent setting.
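If you prefer to script that rule, it can also be created with T-SQL against the master database of the logical server; a minimal sketch (the rule name is a placeholder):

-- Run in the master database of the Azure SQL logical server.
-- This range allows every IP address; use it only as a temporary diagnostic step.
EXECUTE sp_set_firewall_rule
    @name = N'TempAllowAll',
    @start_ip_address = '0.0.0.0',
    @end_ip_address = '255.255.255.255';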
Feel free to provide more solutions in the answers. Like I said, it was likely the cache on their side. It was really odd that it didn't work for a while, even with the firewall settings configured correctly.
In a SQL Server Managed Instance I have 2 databases (for security reasons the databases have different logins). I need one database to be able to look into the other one. On a local SQL Server I was able to create a Linked Server to achieve this, but this does not seem to work on a Managed Instance.
Can someone give some hints on how to achieve this?
Managed Instance supports linked servers (unless they use MSDTC for distributed writes). Make sure that you add logins for the remote server:
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'PEER', @useself = N'False', @locallogin = NULL,
    @rmtuser = N'$(linkedServerUsername)', @rmtpassword = '$(linkedServerPassword)';
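For completeness, the linked server entry itself has to exist before the login mapping is added; a minimal sketch of creating it (the name PEER matches the snippet above, while the provider and target address are placeholder assumptions you would adapt to your instances):

-- Create the linked server entry that points at the peer Managed Instance.
EXEC master.dbo.sp_addlinkedserver
    @server = N'PEER',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'peer-mi-hostname.database.windows.net'; -- placeholder address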
If it still doesn't work, post the exact error message. The cause might be a Network Security Group blocking the port, VNets that are not peered, etc.
Has anyone tried connecting OpenEdge 11.1 or 11.7 to Crystal Reports? If so, what version of Crystal Reports should I use? Hoping someone can help. Thanks a lot.
Crystal Reports wants a SQL connection to the OpenEdge database.
If your client is a Windows client and has Progress installed on it, the ODBC/JDBC drivers should already exist. If you do not have Progress installed, the drivers are a free download from progress.com:
https://www.progress.com/odbc/openedge
https://www.progress.com/faqs/datadirect-odbc-faqs/progress-database-odbc-and-jdbc-driver-faq
If the OpenEdge DB has been configured to permit SQL access, you will need to work with the OpenEdge DBA to obtain the connection details (hostname, port, username, and password) so that you can create the appropriate connection configuration within Crystal.
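Once you have those details, a DSN-less ODBC connection string for the Progress/DataDirect driver looks roughly like the sketch below. The exact driver name and keyword spellings vary by driver version, so treat these as assumptions and verify them against the driver documentation:

Driver={Progress OpenEdge 11.7 Driver};HostName=dbhost;PortNumber=10000;DatabaseName=mydb;LogonID=sqluser;Password=secret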
If the OpenEdge DB is not currently configured to support SQL queries the DBA will need to add that capability and then provide you with the credentials.
If the database is provided as part of a 3rd-party application, you may need to work with the vendor to get that set up.
We have an existing web application with the following layers:
Web layer: ASP.NET MVC
DB layer: SQL Server 2012
SAP ETL services: ETL jobs pulling data from different data sources into SQL Server
We have QA, Stage, and Production environments for the application.
We are planning to migrate the application to Azure PaaS. For the web layer there are no issues in migrating to PaaS.
For the DB layer we followed Microsoft's migration steps using Visual Studio, and there are no issues at the database design level.
The only concern in moving the DB to PaaS is the SAP ETL service and the application jobs, which dump millions of rows into the database; we are not sure how many DTUs they will consume.
We just need help deciding: for the above scenario, is it better to move the DB to PaaS or to run SQL Server on a VM?
Thanks
I think it will be best for you to run some tests and base your decision on the results. For example, create an Azure SQL Database (Standard S0) and see how it performs. A great benefit of the PaaS offering, as you probably know, is that many aspects are handled for you automatically: upgrades, easy switching to another performance level, DB backups and restores, etc.
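While the ETL jobs are running against the test database, you can see how close you are to the limits of the chosen tier by querying sys.dm_db_resource_stats from within the user database; a minimal sketch:

-- Resource use for roughly the last hour, one row per 15-second window.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;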
There is a very good article which can give you some additional pointers on how you can evaluate the pros/cons in your situation - Choose a cloud SQL Server option: Azure SQL (PaaS) Database or SQL Server on Azure VMs (IaaS)
I would personally choose the PaaS option. I have used Azure SQL for a long time and I never want to go back to configuring and supporting my own SQL Server.
I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing ASP.NET website that links to a SQL Server database.
The database has CLR user-defined types, so it can only be hosted on an Azure VM, since the PaaS SQL offering doesn't support those types.
We initially wanted to use a vm for the database and cloud service for the front-end, but then some issues arose:
We use StateServer for storing session state, but Azure doesn't support that. We would need to configure either Table storage, SQL Databases, or a worker role dedicated to state management (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other 2 options are preferable, but they introduce cost or app-reconfiguration disadvantages.
We use SimpleMembership for user management. We would need to migrate the membership tables from our vm instance sql server to Azure's SQL Databases. This is an inconvenience as we want to keep all our tables in the same database, and splitting up the 2 may require making some code changes.
We are looking for a quick solution to have this app live as soon as possible, and at manageable cost. We are desperately trying to avoid re-factoring our code just to accommodate hosting part of the app in Azure Cloud services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for sql server) and a Cloud Service instance (for the front-end)?
It seems to me every added "background process" to a Cloud Service will require a new worker role. For example, if we wanted to enable smtp for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR etc, you'll need to run SQL Server in a Virtual Machine.
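Before committing to the VM route, it may be worth inventorying exactly which CLR types and assemblies the database depends on; a minimal sketch using the standard catalog views:

-- List user-defined CLR types and the assemblies that implement them.
SELECT t.name AS type_name,
       a.name AS assembly_name
FROM sys.types AS t
JOIN sys.type_assembly_usages AS u ON t.user_type_id = u.user_type_id
JOIN sys.assemblies AS a ON u.assembly_id = a.assembly_id
WHERE t.is_assembly_type = 1;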
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless - very easy to scale out/in without worrying about OS setup. And app setup is done through startup scripts upon bootup. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However: If you have any type of complex installations to perform that take a while (or manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out, and then create a master image from that VM. You'll still have OS and app maintenance issues to contend with, just as you would in an on-premises environment.
Let me correct you on your 3rd bullet regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VMs with some scaffolding code for startup and process monitoring. You don't need a separate role for each background process. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grained level.
Some things to consider...
If you want to be cheap, you can have your web and worker roles share the same code on a single machine by adding a RoleEntryPoint. Here is a post that shows how to do what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
Session management is painfully slow in SQL Azure DB; I would use the Azure Cache if you can, as it is fast.
SQL Server on VMs is going to cause problems for you, because you will also need to create a virtual network between that and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM, they communicate over the PUBLIC LOAD BALANCER, causing a potential security concern and network latency. So first you need to put them on a virtual network (that is an extra cost), and then you also need to host a DNS server to address the SQL Server VM. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE speaking about cross Cloud Service communication (VM->web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013) Azure VMs and Worker/Web Roles are deployed into DIFFERENT "Cloud Services". Therefore communication between them needs to be secured via a Virtual Network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That can be encrypted, but no sane architect/enterprise governance/security governance would allow this scenario in a production environment.