boxfuse dev db not provisioned correctly - cloudcaptain

I'm just starting with boxfuse and can't seem to find a way to get my dev database to be provisioned.
In my boxfuse.yml I have (for the database section):
database:
  # the name of your JDBC driver
  driverClass: com.mysql.jdbc.Driver
  # the username
  user: root
  # the password
  password: <password>
  # the JDBC URL
  url: jdbc:mysql://10.0.0.84:3306/dmsdb
  # any properties specific to your JDBC driver:
  properties:
    charSet: UTF-8
    hibernate.dialect: org.hibernate.dialect.MySQLInnoDBDialect
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "/* MyApplication Health Check */ SELECT 1"
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 32
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
If I try running it (boxfuse run), my application doesn't work at all.
boxfuse info produces the following:
Boxfuse client v.1.18.7.938
Copyright 2016 Boxfuse GmbH. All rights reserved.
Account: mlr11 (mlr11)
Info about mlr11/dms-service in the dev environment:
App Type : Single Instance with Zero Downtime updates
App URL : http://127.0.0.1:8082
DB Type : MySQL database
DB URL : jdbc:mysql://localhost:3306/boxfuse-dev-db
DB Host : localhost
DB Port : 3306
DB Database : boxfuse-dev-db
DB User : boxfuse-dev-db
DB Password : boxfuse-dev-db
DB Status : available
This is very different from what I was expecting: the URL, database, user and password don't match my boxfuse.yml file.
What am I missing? I know it must be something simple. I've done all kinds of searching and read the docs a few times, but I can't seem to find what's wrong. Any pointers will be appreciated.

From the config file you posted I am assuming this is a Dropwizard app.
Since your Boxfuse app is configured to use a MySQL database, Boxfuse automatically provisions a database in each environment when you first deploy your application there. In your case, the output you posted in your question shows the connection info for that database in the dev environment.
Boxfuse exposes these values (DB URL, user, password, ...) as environment variables (https://cloudcaptain.sh/docs/databases#envvars) and automatically configures your framework (Dropwizard, I assume) to use those instead of the ones included in your config file. It does so by passing -Ddw.database.url=$BOXFUSE_DATABASE_URL -Ddw.database.user=$BOXFUSE_DATABASE_USER -Ddw.database.password=$BOXFUSE_DATABASE_PASSWORD as arguments to the JVM.
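In effect, the launch inside the instance looks something like this (a sketch only; the jar and config file names below are placeholders for your own app's):
java -Ddw.database.url=$BOXFUSE_DATABASE_URL \
     -Ddw.database.user=$BOXFUSE_DATABASE_USER \
     -Ddw.database.password=$BOXFUSE_DATABASE_PASSWORD \
     -jar dms-service.jar server boxfuse.yml
So in dev your app should connect to jdbc:mysql://localhost:3306/boxfuse-dev-db regardless of what your config file says. That is by design, not a provisioning failure.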
Also double-check in the VirtualBox GUI that your VirtualBox installation is fully functional and able to start VMs, and that both the Boxfuse Dev VM and the instance of your application start properly.
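You can also check this from the command line instead of the GUI (assuming VBoxManage is on your PATH):
VBoxManage list vms
VBoxManage list runningvms
At a minimum the Boxfuse Dev VM should appear under runningvms while your app is up.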

Related

create cluster for existing mariadb database

I have an existing database for which I was looking to create a new clustered environment. I tried the following steps:
Create a new database instance (OS & DB Server).
Take a backup / snapshot from existing database server for all the databases.
Import the snapshot to the new server.
Configure the cluster. I referred to various sites, but they all give the same solution. Example reference site: https://vexxhost.com/resources/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-12-04/
Ran the command (sudo galera_new_cluster) on the primary server. (The primary server had no issue starting up.) But when we tried starting the secondary server, it actually crashed for some reason.
Unfortunately I don't have the logs from that failure stored or backed up at this point. But it seemed like the secondary tried to sync with the primary server and failed somewhere in that process.
As additional context on the actions performed above: both servers use the same username/password, and I created a passwordless SSH connection between the two machines. The method of syncing is set to rsync.
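For reference, the wsrep section on both nodes looks roughly like this (a sketch; the provider path, cluster name and node IPs are assumptions, not my exact values):
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "test_cluster"
wsrep_cluster_address = "gcomm://10.0.0.1,10.0.0.2"
wsrep_sst_method = rsync
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2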
Am I missing something or doing it wrong? Is there a better way to do this?

Credentials for AWS Athena ODBC connection

I want to access AWS Athena in Power BI with ODBC. I used the ODBC driver (1.0.3) that Amazon provides:
https://docs.aws.amazon.com/de_de/athena/latest/ug/connect-with-odbc.html
To access the AWS service I use the user YYY and the password XXX. To access the relevant data our administrator created a role “ExternalAthenaAccessRole#99999”.
99999 is the ID of the account where Athena runs.
To use the ODBC driver in Power BI I created the following connection string:
Driver=Simba Athena ODBC Driver;AwsRegion=eu-central-1;S3OutputLocation=s3://query-results-bucket/testfolder;AuthenticationType=IAM Credentials;
But when I enter the user YYY with the password XXX, I get the message “We couldn’t authenticate with the credentials provided. Please try again.”.
Normally I would think that I must include the role “ExternalAthenaAccessRole#99999” in the connection string, but I couldn’t find a parameter for it in the documentation.
https://s3.amazonaws.com/athena-downloads/drivers/ODBC/SimbaAthenaODBC_1.0.3/Simba+Athena+ODBC+Install+and+Configuration+Guide.pdf
Can anybody tell me how to change the connection string so that I can access the data with the ODBC driver in Power BI?
TL;DR:
When using secret keys, do not specify "User / password"; instead always click on "default credentials" in Power BI, to force it to use the local AWS configuration (e.g. C:/...$USER_HOME/.aws/credentials).
Summarized Guide for newbies:
Prerequisites:
The AWS CLI installed locally, on your laptop. If you don’t have it, just download the MSI installer from here:
https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html
Note: this quick guide only covers configuring the connection using AWS Access Keys, without federating the credentials through any other security layer.
Configure your AWS credentials locally.
From the Windows command prompt (cmd), execute: aws configure
Enter your AWS Access Key ID, Secret Access Key and default region; for example "eu-west-1" for Ireland.
You can get these Keys from the AWS console, IAM service, Users, select your user, Security, Create/Download Access Keys.
You should never share these keys, and it’s highly recommended to rotate these, for example, every month.
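After aws configure finishes, the CLI stores the keys in your user profile; the file looks like this (keys redacted here; "default" is the profile name the driver will pick up later):
%USERPROFILE%\.aws\credentials:
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX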
Download Athena ODBC Driver:
https://docs.aws.amazon.com/athena/latest/ug/connect-with-odbc.html
Important: match the driver to your Power BI installation; if you have 64-bit Power BI, download the 64-bit ODBC driver.
Install it on your laptop, where you have Power Bi.
Open the Windows ODBC Data Sources tool, add a User DSN and select Simba Athena as the driver.
Always use "Default credentials" and not user/password, since it will use our local keys from Step 1.
Configure an S3 bucket for the temporary results. You can use something like: s3://aws-athena-query-results-eu-west-1-power-bi
In the Power BI app, click on Get Data and type ODBC.
Choose "default" credentials, to use the local AWS keys (from Step 1), and, optionally, enter a "select" query.
Click on Load the data.
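Putting it together, the resulting connection string looks along these lines (a sketch; the bucket is the example from the steps above, and "Default Credentials" is, as far as I know, the driver's name for the local-profile authentication type):
Driver=Simba Athena ODBC Driver;AwsRegion=eu-west-1;S3OutputLocation=s3://aws-athena-query-results-eu-west-1-power-bi;AuthenticationType=Default Credentials;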
Important concern: I’m afraid Power BI will load all the results of the query into local memory. So if, for example, we're bringing in 3 months of data equivalent to 3 GB, we will consume that much on our local laptop.
Another important concern:
- For security reasons, you'll need to set up KMS encryption keys. Otherwise, the data is transmitted in clear text instead of being encrypted.
Relevant reference (as listed above), where you can find the steps for this entire configuration process in more detail:
- https://s3.amazonaws.com/athena-downloads/drivers/ODBC/Simba+Athena+ODBC+Install+and+Configuration+Guide.pdf
Carlos.

Unable to start Azure Storage Emulator

I've run into problems trying to run the Azure Storage Emulator on a newly installed computer.
At first it was returning
Cannot create database 'AzureStorageEmulatorDb56' : The database 'AzureStorageEmulatorDb56' does not exist. Supply a valid database name. To see available databases, use sys.databases..
However, when I ran sqllocaldb i I could see that there was a DB named 'AzureStorageEmulatorDb56'.
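(For reference, these are the commands to inspect the LocalDB instances; the instance name below is the default one and may differ on your machine:)
sqllocaldb i
sqllocaldb i MSSQLLocalDB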
I eventually ran the command
AzureStorageEmulator init -server localhost -forcecreate
which returned
Granting database access to user AzureAD\[username elided].
Database access for user AzureAD\[username elided] was granted.
Initialization successful. The storage emulator is now ready for use.
The storage emulator was successfully initialized and is ready to use.
which looks promising.
However, when I right-click the emulator's icon in the system tray and select "Start Storage Emulator", nothing happens. And if I then look in the log files I can see an error log (Error20-Jul-18-11-07.log) which contains...
7/20/2018 11:06:36 AM [Error] [ActivityId=00000000-0000-0000-0000-000000000000] Input string was not in a correct format.
There's also an Info20-Jul-18-11-07.log file which contains
7/20/2018 11:06:36 AM [Info] [ActivityId=00000000-0000-0000-0000-000000000000] Starting Service: Blob
7/20/2018 11:06:36 AM [Info] [ActivityId=00000000-0000-0000-0000-000000000000] Stopping Service: Blob
Can anyone explain what's going wrong and how I can get the local storage emulator up and running?
Try disabling logging; there seems to be a bug in the 5.5 release:
https://github.com/Azure/azure-storage-net/issues/728
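Starting the emulator from a console instead of the tray icon also tends to surface the actual error text (assuming the default install path):
cd "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator"
AzureStorageEmulator.exe start
AzureStorageEmulator.exe status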

Cannot create SQL Server DB within Amazon RDS Instance

This seems to be a common question; however, I haven't found a solution out there, and many related questions are quite vague. Anyway, I am deploying an ASP.NET MVC 5 application to AWS using the AWS Toolkit for Visual Studio Pro 2013. I have successfully published the app to Elastic Beanstalk, with the exception of my database, which exists as a localDB database (.mdf). In trying to migrate this (very small) database I have created an RDS DB instance for SQL Server Express. My issue is that I cannot create a SQL Server DB, which appears to be a common issue for VS users: I right-click on the DB instance, select "Create SQL Server Database", VS is busy for a few moments, and then nothing happens.
What I have done thus far:
I have an RDS instance created on a VPC with a security group that has an Inbound rule set to allow all traffic from my IP
I have an IAM user account with the following policies: PowerUserAccess, AmazonS3FullAccess, AmazonVPCFullAccess (I imagine some of this is redundant; I added additional policies to see if it was a permissions issue)
So to succinctly state my questions, why is Visual Studio failing to create the SQL Server DB within the database instance? Or alternatively, is there a simpler method of migrating my database to AWS?
Just FYI, these are the references I have been using to deploy my application:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_NET.quickstart.html
https://aws.amazon.com/blogs/aws/net-support-for-aws-elastic-beanstalk-amazon-rds-for-sql-server-/
I'm brand new at AWS so let me know if clarification is needed.
Update: I checked the logs for my instance and I'm getting error logs
2014-12-12 18:16:02.72 Server The SQL Server Network Interface library could not register the Service Principal Name (SPN) [ MSSQLSvc/AMAZONA-E3AJMJI ] for the SQL Server service. Windows return code: 0xffffffff, state: 53. Failure to register a SPN might cause integrated authentication to use NTLM instead of Kerberos. This is an informational message. Further action is only required if Kerberos authentication is required by authentication policies and if the SPN has not been manually registered.
And
2014-12-12 18:47:23.72 Logon Error: 17806, Severity: 20, State: 14.
2014-12-12 18:47:23.72 Logon SSPI handshake failed with error code 0x8009030c, state 14 while establishing a connection with integrated security; the connection has been closed. Reason: AcceptSecurityContext failed. The Windows error code indicates the cause of failure. The logon attempt failed [CLIENT: 113.108.150.211]
2014-12-12 18:47:23.73 Logon Error: 18452, Severity: 14, State: 1.
2014-12-12 18:47:23.73 Logon Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. [CLIENT: 113.108.150.211]
UPDATE: Issue solved. We use a proxy server in my office which seemed to cause authentication with the RDS instance to fail, not allowing me to connect from my machine. I accepted Ossman's answer as I think it solves a lot of similar questions I've come across trying to solve this.
This is an AWS Explorer for Visual Studio 2013 bug, and it actually occurs because you're using the "default security group" by default when creating your DB instance in RDS.
Access the EC2 Service in AWS Management Console.
Click on "Security Groups", and then on "Create Security Group"
Give it a Name and a Description, and select your VPC (e.g. "vpc-0846aa61").
Then add the following rule for both "Inbound" and "Outbound":
Type: "All traffic"
Source (for Inbound): "Anywhere"
Destination (for Outbound): "Anywhere"
Then Create the Security Group
Go back to your DB instance and change the "default" security group to the one you just created. This is done by clicking "Instance Actions" and then "Modify".
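If you prefer the AWS CLI over the console, the same setup looks roughly like this (the group name, sg- id and instance identifier are placeholders; outbound traffic is already allowed by default in a new security group):
aws ec2 create-security-group --group-name vs-rds-access --description "RDS access from Visual Studio" --vpc-id vpc-0846aa61
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
aws rds modify-db-instance --db-instance-identifier mydbinstance --vpc-security-group-ids sg-xxxxxxxx --apply-immediately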
Then, when you right-click your instance in Visual Studio and click on "Create SQL Server Database", the dialog should appear as expected.

MSDTC between Windows 7 (32bit) and Windows Server 2003

I am currently attempting to create a test environment for a website which is using a mixture of classic ASP and ASP.NET. (The original machines are running old versions of Windows Server, so the configuration is not too easy to mimic.)
Unfortunately, I am having problems getting the Windows 7 machine to communicate with the Server 2003 one.
The error I am getting from my test application (which simply fires a stored procedure) is as follows:
New transaction cannot enlist in the specified transaction coordinator.
After reading various articles online, I believe I have set up the COM+ side of things on the Windows 7 machine correctly. If I change my connection string to target the old server, it succeeds.
I then ran MSDTC Simulation V1.9 and the error I received was as follows:
DTCping log file: C:\Users\whelans\Desktop\dtping\[servername].log
RPC server is ready
Please Start Partner DTCping before pinging
++++++++++++Validating Remote Computer Name++++++++++++
Please refer to following log file for details:
C:\Users\whelans\Desktop\dtping\[servername].log
Invoking RPC method on [servername]
Problem:fail to invoke remote RPC method
Error(0x6D9) at dtcping.cpp #303
-->RPC pinging exception
-->1753(There are no more endpoints available from the endpoint mapper.)
RPC test failed
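Since error 1753 usually points at the RPC endpoint mapper traffic being blocked (or MSDTC not listening), I also enabled the built-in DTC firewall rule group on the Windows 7 machine, from an elevated prompt (the group name below is the stock Windows 7 one):
netsh advfirewall firewall set rule group="Distributed Transaction Coordinator" new enable=yes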
I then tried changing my connection string password, and it fails with an invalid login, so I believe the COM+ component is reaching the server's database. The user also has full permissions on the database.
I notice in the COM+ window that the component in use is spinning as if communicating with the server; however, it seems the server is rejecting the connection.
Any ideas?
EDIT: I have now also run DTCTester, as I read that DTCPing will always fail on Windows 7. Here is the result:
C:\Users\whelans\Desktop\dtping>dtctester.exe TestDatabase username password
Executed: dtctester.exe
DSN: TestDatabase
User Name: username
Password: password
tablename= #dtc9033
Creating Temp Table for Testing: #dtc9033
Warning: No Columns in Result Set From Executing: 'create table #dtc9033 (ival int)'
Initializing DTC
Beginning DTC Transaction
Enlisting Connection in Transaction
Executing SQL Statement in DTC Transaction
Inserting into Temp...insert into #dtc9033 values (1)
Warning: No Columns in Result Set From Executing: 'insert into #dtc9033 values (1)'
Verifying Insert into Temp...select * from #dtc9033 (should be 1): 1
Press enter to commit transaction.
Commiting DTC Transaction
Releasing DTC Interface Pointers
Successfully Released pTransaction Pointer.
Disconnecting from Database and Cleaning up Handles
