I am trying to install the DataStax Community Edition on my Windows machine, but my Cassandra server is not able to start.
It shows the following message:
Detected powershell execution permissions. Running with enhanced startup scripts.
Setting up Cassandra environment
---------------------------------------------------------------------
---------------------------------------------------------------------
WARNING! Automatic page file configuration detected.
It is recommended that you disable swap when running Cassandra
for performance and stability reasons.
---------------------------------------------------------------------
---------------------------------------------------------------------
Cassandra port already in use (storage_port: 7000). Aborting
Did you install the Windows service? Search under "Control Panel > Administrative Tools > View local services" for items starting with DataStax; there should be one for Cassandra, plus another two if you have OpsCenter and the DataStax agent installed. Based on the error message, I'd suspect it's already running in the background.
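If you want to confirm that something is already bound to the storage port, a quick loopback connection test will tell you (netstat would work too); here is a minimal sketch in R, where only the port number comes from your error message:

# Try to connect to the Cassandra storage port. If the connection succeeds,
# some process (most likely the already-running Cassandra service) owns it.
inUse <- tryCatch({
  con <- socketConnection("127.0.0.1", port = 7000, blocking = TRUE,
                          open = "r+b", timeout = 3)
  close(con)
  TRUE
}, error = function(e) FALSE, warning = function(w) FALSE)
cat(if (inUse) "Port 7000 is in use.\n" else "Port 7000 looks free.\n")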
You can probably ignore the warning about swap being enabled if you are just running Cassandra on your local machine. It would help if you could provide more details about how you're trying to start it and which version of Windows you're using.
We are using a Self-Hosted Integration Runtime for Azure Data Factory.
That machine had version 6 of the Exasol ODBC driver installed. We wanted to upgrade the driver, so we deleted the old one and installed the new version 7 driver.
The weird thing is that now, in the Exasol logs, we can see Data Factory sometimes connecting via driver version 7 and sometimes via driver version 6.
As an experiment, I deleted the Exasol ODBC driver from the machine completely. After that, Data Factory was still able to connect to Exasol using the driver I had just deleted.
It looks like the drivers' DLLs are cached somewhere. Where could that be?
Update 1
I captured the following actions in Process Monitor when Data Factory connected to Exasol with version 6 of the ODBC driver:
Where might these C:\Config.Msi\3739be5*.rbf and ...ASolution-6.1\ODBC\ DLLs come from? There is no C:\Config.Msi\ directory on the machine.
Update 2
I noticed that when I test the connection via the Microsoft Integration Runtime Configuration Manager on the machine, or via a Data Factory linked service, the connection is always made with version 7 of the ODBC driver.
But when I test the connection via a Data Factory dataset, in some cases the connection is made with version 6 of the driver.
You could check the registry, but clean it at your own risk. An alternative might be the Sysinternals tools, Process Monitor and Process Explorer, which could help you get to the bottom of this. Install them on the SHIR VM if you are allowed to. Process Monitor in particular is a bit like SQL Profiler (if you've ever used that), so it will be able to tell you which registry keys external processes are using. It will give you a lot of information, so you will have to make judicious use of timestamps and filtering. The proposed steps:
Start a trace using Process Monitor.
Start a pipeline using the Exasol driver.
Wait until it completes (or at least until you know it has started).
Stop the Process Monitor trace.
Spend time going through the millions of records it has captured, trying to filter down or search for your process.
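Before (or alongside) a full trace, it is also worth checking what the registry currently maps each ODBC driver name to. A minimal sketch in base R, assuming the SHIR VM has R available (readRegistry is Windows-only and reads the 64-bit registry view by default; pass view = "32-bit" to inspect the WOW6432Node entries as well):

# List the installed ODBC drivers and the DLL path each name resolves to.
drivers <- utils::readRegistry("SOFTWARE\\ODBC\\ODBCINST.INI\\ODBC Drivers",
                               hive = "HLM")
for (name in names(drivers)) {
  key <- utils::readRegistry(paste0("SOFTWARE\\ODBC\\ODBCINST.INI\\", name),
                             hive = "HLM")
  cat(name, "->", key[["Driver"]], "\n")
}

The same keys can of course be inspected by hand in regedit under HKLM\SOFTWARE\ODBC\ODBCINST.INI.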
An alternative would be to build a clean SHIR and install only the new driver. Then swap it in for the old one. You may have to get the new SHIR added to the firewall if this is an issue for you.
Honestly, I would pursue both of these approaches in parallel for a production problem. Procmon / Process Explorer can be quite labour- and time-intensive but should help you get to the bottom of the issue. Building a clean SHIR is probably the safer option in the long term, but it requires new infrastructure.
It may sound silly, but rebooting the server where the SHIR is running solved the problem.
We noticed that this server had been running for more than 30 days and decided to reboot it. Maybe restarting the Integration Runtime service itself would also have helped, but we didn't try that.
Thanks to everyone for your help.
I have updated my Windows and now R cannot run, and hence neither can RStudio. When I run the R GUI it just freezes and is unresponsive. I have added a Chromium exemption to the firewall.
I am on the Windows Insider program and have just updated to
Windows 10 Home, Insider Preview
Evaluation Copy. Build 20190.rs_prerelease.200807-1609
Note that the R GUI freezes and then shuts down on its own, so maybe the problem is with the R GUI and not RStudio.
I get the following errors in RStudio.
This site can’t be reached
127.0.0.1 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Cannot Connect to R
RStudio can't establish a connection to R. This usually indicates one of the following:
The R session is taking an unusually long time to start, perhaps because of slow operations in startup scripts or slow network drive access.
RStudio is unable to communicate with R over a local network port, possibly because of firewall restrictions or anti-virus software.
Please try the following:
If you've customized R session creation by creating an R profile (e.g. located at ~/.Rprofile), consider temporarily removing it.
If you are using a firewall or antivirus software which guards access to local network ports, add an exclusion for the RStudio and rsession executables.
Run RGui, R.app, or R in a terminal to ensure that R itself starts up correctly.
Further troubleshooting help can be found on our website:
Troubleshooting RStudio Startup
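Since the message above mentions 127.0.0.1 refusing connections, one quick thing to check from a plain terminal R session (assuming R itself starts there) is whether a loopback socket can be opened at all, since that is how RStudio talks to its rsession process. A minimal sketch on an arbitrary high port (serverSocket requires R >= 4.0):

srv <- serverSocket(54321)          # listen on an arbitrary free local port
ok <- tryCatch({
  con <- socketConnection("127.0.0.1", port = 54321, blocking = TRUE,
                          open = "r+b", timeout = 5)
  close(con)
  TRUE
}, error = function(e) FALSE, warning = function(w) FALSE)
close(srv)
cat(if (ok) "Loopback connections work.\n" else
    "Loopback blocked - check firewall/antivirus exclusions.\n")

If this fails, the firewall/antivirus explanation is likely; if it succeeds, the problem is elsewhere.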
This has been fixed with Windows 10 Insider Preview Build 20201 (released on August 26, 2020 in the Dev channel). The previous two builds were missing 64-bit APIs required by the prebuilt version of R.
Same issue.
Rolling back to the previous version solves the problem.
I think it is related to the update of the graphics features of Windows.
Here is what Microsoft said in the build 20190 changelog:
Improved Graphics Settings experience
While this isn’t a new feature altogether, we have made significant changes based on customer feedback that will benefit our customers’ Graphics Settings experience. We have made the following improvements:
We’ve updated the Graphics Settings to allow users to specify a default high performance GPU.
We’ve updated the Graphics Settings to allow users to pick a specific GPU on a per application basis.
I am using Advanced Installer to build an installer. It works on some Windows 2008 R2 servers but not on others running the same OS.
The ones it works on have been recently built; the ones it doesn't work on have been around for some time and have had programs installed and uninstalled.
What happens is that the user starts the install, gets an ODBC timeout error, and the install stops.
I have verbose logging turned on for the Advanced Installer project, and this is the error I am getting:
MSI (c) (A4:74) [10:37:48:995]: Invoking remote custom action. DLL: C:\Users\ADMINI~1.DOM\AppData\Local\Temp\3\MSICCB.tmp, Entrypoint: OnSqlFetch
Action ended 10:37:49: SqlQueryAction. Return value 3.
MSI (c) (A4:04) [10:37:49:073]: Doing action: FatalError
Action 10:37:49: FatalError.
Action start 10:37:49: FatalError.
Action ended 10:37:59: FatalError. Return value 1.
Are there any other logging options / files / registry keys / error reports I can look at that would tell me more about the ODBC timeout error?
Thanks
The log snippet you attached indicates that the SQL queries you added from the SQL Scripts page were not executed successfully. This can indeed be a consequence of an ODBC timeout error.
Since it works on some machines, most likely this is not an installer-configuration issue.
You can try testing the connection parameters to make sure. The following thread shows how to do it outside the installer:
Simplest Way to Test ODBC on Windows
You can even configure this from the Advanced Installer project so the built installer can perform the test at install time before actually connecting to the server. Here is how:
How to test SQL connection parameters?
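If the machine has R available, another quick way to exercise the ODBC path outside the installer is a short script using the DBI and odbc packages. This is only a sketch; the driver name, server, database, and credentials below are placeholders for whatever your installer's SQL connection actually uses:

library(DBI)   # install.packages(c("DBI", "odbc")) first
con <- dbConnect(odbc::odbc(),
                 Driver   = "SQL Server",  # placeholder: the ODBC driver your installer targets
                 Server   = "myserver",    # placeholder host
                 Database = "master",      # placeholder database
                 UID      = "user",        # placeholder credentials
                 PWD      = "password",
                 timeout  = 10)            # login timeout in seconds; a hang here mirrors the install-time error
print(dbGetQuery(con, "SELECT 1 AS ok"))
dbDisconnect(con)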
I created an MLOAD job using OleLoad on my Windows 7 x64 Professional machine. It loads data from Oracle 11g into Teradata 14. Everything works great when I run it locally. When I copy it to a remote Windows Server 2003 SP2 machine and run it, it fails with error code 12 and this message:
**** 07:30:57 UTY4203 Attempted to access out of range input data in field
'LOCATION_CODE', file 'myjob.amj',record number '1'.
**** 07:30:57 UTY4023 Access module warning '33' received during
'PreserveRestartInfo' operation: 'Attribute name not recognized by attached
AM'
I opened my .amj file on the remote machine to see what it would look like if I regenerated it using OleLoad's UI. When comparing the two .amj files in Beyond Compare afterward, I was surprised to see that the new .amj is very different. VARCHAR(214) is changed to FLOAT, VARCHAR(30) is changed to VARCHAR(10), etc.
All TTU 14 assemblies on the remote machine match what I have installed locally. The only difference I noticed is the version of the Oracle DLL that OleLoad appears to be using. Here's what OleLoad says on my machine when I click on Connection Info for my Oracle connection:
Provider
Name: OraOLEDB11.dll
Version: 11.2.0.1.0
DBMS
Name: Oracle
Version: 11.2.0.3.0
And on the Windows Server 2003 machine:
Provider
Name: OraOLEDB.dll
Version: 9.0.1.0.1
DBMS
Name: Oracle
Version: 11g
Now, before anyone facepalms with "Well, DUH! There's your problem!", I'll add that it would cause me a great deal of grief to install a new version of Oracle on my local machine, because I have a ton of MLOAD files that I've created for personal utilities (helper loads, if you will, for when the business needs an ad hoc report). And I can't upgrade what's on the remote server, because I'd run the risk of breaking all of the other MLOAD jobs running there.
I just wanted to mention all of that in case it was relevant, but I am hoping that it's not actually the problem and that there's a way I can get my current file to work without having to uninstall/reinstall/upgrade anything.
I believe the theory proposed at the end of my OP has been confirmed. I was able to find a machine that had the Oracle 11 client installed and migrated my jobs there. They worked flawlessly, so it is most certainly an issue with the Oracle 9 client versus the Oracle 11 client.
I'm trying to create a cluster on EC2. I have an account set up and validated with AWS. I have successfully downloaded and installed the segue package and related packages, and I have set my AWS credentials. My problem starts when I try to create a cluster; I get the following:
> library(segue)
Loading required package: rJava
Loading required package: caTools
Loading required package: bitops
Segue did not find your AWS credentials. Please run the setCredentials() function.
> setCredentials('', '') #keys hidden
> myCluster <- createCluster(numInstances=5)
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl, :
com.amazonaws.AmazonClientException: Can't turn bucket name into a URI: Illegal character in authority at index 8: https://c:\users\backup~1\appdata\local\temp\rtmp4u0n8yqaaoducils-segue.s3.amazonaws.com
Any ideas?
acesnap, I'm the author of Segue and I can say with confidence that the issue you're running into is that the Segue package has not been implemented to run on the Windows platform. I suspect the problem is that Windows does funny things with file paths, temp files, and the like. The server side of Segue is always the Amazon Elastic MapReduce service, which runs Linux, but temporary files are built on the client machine, so Segue must play nicely with the local operating system.
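You can see the failure mode from your error message with a couple of lines of R on Windows: the temp directory path gets folded into an S3 bucket name, and S3 bucket names cannot contain backslashes or colons. (The lines below are only an illustration of what appears to happen, not Segue's actual code.)

tmp <- tempdir()                          # e.g. C:\Users\...\AppData\Local\Temp\RtmpXXXXXX on Windows
bucket <- paste0(tolower(tmp), "-segue")  # roughly how the bucket name in your error was formed
cat("bucket candidate:", bucket, "\n")
# Valid S3 bucket names may contain only lowercase letters, digits, dots and
# hyphens, so the "\" and ":" from a Windows path make the resulting URI illegal:
grepl("^[a-z0-9.-]+$", bucket)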
There are several work-arounds I can think of:
Set up VirtualBox on your local machine and install Ubuntu and R.
Set up an EC2 machine and install R and Segue and then use that machine to fire off Segue jobs.
Buy a Mac or install Linux on a desktop machine (kinda obvious, I guess)
Even though my desktop machines are Mac and Linux, I use #2 above frequently. I do this because it speeds up communication between the machine running Segue and the backend cluster. It also reduces the probability that the Segue main machine will lose connectivity to the EMR backend. That matters because if communication is lost between Segue and the Amazon cloud while a job is running, the job will keep running on the cloud cluster but will have no way of returning results to the Segue main machine (the machine you submit jobs from).