Data in transit encryption for azure-kusto-spark connector - azure-data-explorer

I am currently trying to verify that the azure-kusto-spark connector encrypts data in transit. However, I do not see anywhere in the documentation a statement of which protocol the connector uses.
Could I get an answer to that question, and have it added to the documentation?

The Azure Kusto connector uses HTTPS (TLS 1.2) to communicate with Kusto or Azure Storage.
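If you need to verify the TLS floor yourself rather than take an answer on faith, one approach is to connect with a client context that refuses anything below TLS 1.2: if the handshake succeeds, the endpoint meets the bar. A minimal Python sketch (the cluster hostname is a placeholder — substitute your own endpoint):

```python
import socket
import ssl

def min_tls12_context() -> ssl.SSLContext:
    """Client-side SSL context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()            # cert + hostname checks enabled
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3 and TLS 1.0/1.1
    return ctx

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Handshake with `host` and report the TLS version actually negotiated."""
    ctx = min_tls12_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

# Example (placeholder cluster name):
# print(negotiated_tls_version("mycluster.kusto.windows.net"))
```

If the handshake fails with an SSL error against some endpoint, that endpoint could not offer TLS 1.2 or better with a valid certificate.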

Related

In Transit Encryption

I'm currently developing an application for a client, and their requirement is that the application have in-transit and at-rest encryption. I assured them that it did, and was required to provide documentation for that. I referenced this documentation from Google Cloud's website. They replied by asking whether my claim stands in light of the following section:
Using a connection directly to a VM using an external IP or network load balancer IP
If you are connecting via the VM's external IP, or via a network-load-balanced IP, the connection does not go through the GFE. This connection is not encrypted by default and its security is provided at the user's discretion
My mobile application uses the Firebase SDK to talk to the Firebase database and Firebase Functions. I have no background in networking, nor do I understand what exactly is being referenced here despite Googling the concepts. Is my data still encrypted? Does the above section apply to my use case?
No, that applies only to VMs and network load balancers. Both Cloud Functions (as long as you're using HTTPS for all requests) and the Firebase Realtime Database encrypt data in transit.

Does snowflake support ssl using ODBC?

I want to connect to Snowflake using ODBC, and I saw that it is SSL-enabled by default (Does snowflake support ssl?).
I would appreciate knowing where I can have it formally from Snowflake, as I have yet to find such documentation.
Thanks!
All Snowflake connectivity is to:
https://..snowflakecomputing.com
Even the ODBC connector is just a wrapper for HTTPS calls to the URL above. That means that everything in Snowflake (Web UI, JDBC, ODBC, SnowSQL, Python, etc.) runs over HTTPS and SSL.
It's also worth noting that, to meet the security standards listed here, all traffic must be SSL:
https://www.snowflake.com/snowflakes-security-compliance-reports/
I would read the following document, which has a number of sections that reference SSL, OCSP, and OpenSSL key-pair settings.
https://docs.snowflake.com/en/user-guide/odbc-parameters.html
Appreciate where I can have it formally from Snowflake
The Snowflake Security Policy specifies that all customer data in transit is encrypted with TLS 1.2.
https://www.snowflake.com/wp-content/uploads/2018/07/2018July12-Snowflake-Security-Policy.pdf
Reference section 3.1. Encryption of Customer Data, which states "Snowflake leverages Transport Layer Security (TLS) 1.2 (or better) for Customer Data in-transit over untrusted networks."
This statement applies to all customer data in transit, so ODBC is included.

Are the Google Cloud Endpoints only for REST?

Are the Google Cloud Endpoints only for REST?
I have a virtual machine with Cassandra, and now I need (temporarily) to expose this machine to the world (the idea is to run a Cassandra client on some computers in my home/office/...). Is Google Cloud Endpoints the best way to expose this machine to the world?
I am assuming that you are running Cassandra on a Google Compute Engine (CE) instance. When one runs a Compute Engine, one can specify that one wants a public Internet address to be associated with it. This will allow an Internet-connected client application to connect to it at that address. The IP address can be declared as ephemeral (it can be changed by GCP over time) or it can be static (I believe there is a modest charge for its allocation).

When one attempts to connect to the software running on the Compute Engine, a firewall rule will (by default) block the vast majority of incoming connections. Fortunately, since you own the CE, you also own the firewall configuration. If we look here:
https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureFireWall.html
we see the set of ports needed for different purposes. This gives us a hint as to what firewall rule changes to make.
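As a sketch of what those firewall changes might look like, the port numbers below come from the DataStax page linked above; the `gcloud compute firewall-rules create` flags are standard, but verify them against your gcloud version, and the source range is a placeholder for your own home/office addresses:

```python
# Default Cassandra ports (per the DataStax firewall doc linked above).
# Only the client-facing ports normally need to be opened to the outside;
# internode and JMX ports should stay restricted.
CASSANDRA_PORTS = {
    "cql_native_client": 9042,   # CQL binary protocol (most modern drivers)
    "thrift_client": 9160,       # legacy Thrift clients
    "internode": 7000,           # gossip/replication between nodes
    "internode_ssl": 7001,       # encrypted internode traffic
    "jmx_monitoring": 7199,      # JMX; keep firewalled
}

def gcloud_firewall_cmd(rule_name: str, port: int, source_range: str) -> str:
    """Build the gcloud command that would open one port to one CIDR range."""
    return (
        f"gcloud compute firewall-rules create {rule_name} "
        f"--allow tcp:{port} --source-ranges {source_range}"
    )

print(gcloud_firewall_cmd("allow-cql", CASSANDRA_PORTS["cql_native_client"], "203.0.113.0/24"))
# gcloud compute firewall-rules create allow-cql --allow tcp:9042 --source-ranges 203.0.113.0/24
```

Opening only 9042 to a narrow source range is far safer than exposing every port to 0.0.0.0/0.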
Cloud Endpoints is for exposing APIs that YOU develop in your own applications, and it doesn't feel like an appropriate component for accessing Cassandra.

Default Connection Policy of DocumentClient in Cosmos DB

In Microsoft's performance tips for Cosmos DB, it is recommended to use Direct mode (i.e., the TCP or HTTPS protocol) to query Cosmos DB. I just wanted to know: what is the default connection policy of the Cosmos DB DocumentClient?
var client = new DocumentClient(new Uri(endpointUrl), _primaryKey);
If I use the above code, what connection policy will be used?
https://learn.microsoft.com/en-us/azure/cosmos-db/performance-tips
If I use the above code, what connection policy will be used?
I think the performance tips article has already made it clear: if you do not set Direct mode in the SDK, it will use Gateway mode (the default).
You could see the statement:
Gateway Mode is supported on all SDK platforms and is the configured default. If your application runs within a corporate network with strict firewall restrictions, Gateway Mode is the best choice since it uses the standard HTTPS port and a single endpoint.
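As a quick summary of the trade-off, the two modes can be captured as data. This is a Python sketch even though the SDK in the question is .NET; the Gateway facts come straight from the quoted passage, while the Direct-mode entries paraphrase the performance-tips article's description of TCP connectivity to multiple endpoints, so verify the details against it:

```python
# The two Cosmos DB connectivity modes, as described in the quoted passage
# and the linked performance-tips article.
CONNECTION_MODES = {
    "Gateway": {
        "transport": "HTTPS",
        "port": 443,               # standard HTTPS port -- firewall friendly
        "endpoints": "single",
        "default": True,           # used when no ConnectionPolicy is passed
    },
    "Direct": {
        "transport": "TCP",
        "endpoints": "per-partition",  # multiple endpoints; stricter firewalls may block
        "default": False,
    },
}

default_mode = next(mode for mode, cfg in CONNECTION_MODES.items() if cfg["default"])
print(default_mode)  # Gateway
```

So the one-argument `DocumentClient` constructor in the question gets Gateway mode over HTTPS.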

What data format is used to send query results from Microsoft SQL Server to ASP.NET app?

I have tried searching MSDN and Google but have been unable to find any sources that specifically touch on this question. At the moment I can only assume JSON is used.
If so, then I guess SQL Server would process the received query and send the results back to the web app in JSON format. Which also leads me to ask: is the query also sent to SQL Server in JSON format?
Any thoughts?
The Microsoft SQL Server Wikipedia article provides the information I will need if I choose to dive deep into the mechanics of its components/services.
For the original question of this post: communication is done through the TDS protocol, as stated in the section below.
The protocol layer implements the external interface to SQL Server. All operations that can be invoked on SQL Server are communicated to it via a Microsoft-defined format, called Tabular Data Stream (TDS). TDS is an application layer protocol, used to transfer data between a database server and a client. Initially designed and developed by Sybase Inc. for their Sybase SQL Server relational database engine in 1984, and later by Microsoft in Microsoft SQL Server, TDS packets can be encased in other physical transport dependent protocols, including TCP/IP, named pipes, and shared memory. Consequently, access to SQL Server is available over these protocols. In addition, the SQL Server API is also exposed over web services.
https://en.wikipedia.org/wiki/Microsoft_SQL_Server#Architecture
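To make the quoted description concrete: every TDS packet begins with a fixed 8-byte header. Here's a small Python sketch parsing that header, with the field layout taken from Microsoft's MS-TDS specification (the sample bytes are invented for illustration):

```python
import struct
from typing import NamedTuple

class TdsHeader(NamedTuple):
    packet_type: int  # e.g. 0x01 SQL batch, 0x04 tabular result, 0x12 pre-login
    status: int       # bit flags; 0x01 = end of message
    length: int       # total packet length in bytes, header included
    spid: int         # server process id
    packet_id: int    # rolling sequence number
    window: int       # unused, always 0

def parse_tds_header(packet: bytes) -> TdsHeader:
    """Parse the fixed 8-byte header that prefixes every TDS packet.

    Per MS-TDS: type and status are single bytes, length and SPID are
    big-endian 16-bit integers, then packet id and window.
    """
    if len(packet) < 8:
        raise ValueError("TDS packets start with an 8-byte header")
    return TdsHeader(*struct.unpack(">BBHHBB", packet[:8]))

# A made-up 'SQL batch' header: type 0x01, end-of-message, 13 bytes total.
hdr = parse_tds_header(bytes([0x01, 0x01, 0x00, 0x0D, 0x00, 0x34, 0x01, 0x00]) + b"hello")
print(hdr.packet_type, hdr.length)  # 1 13
```

Note that the payload after the header is binary, not JSON: queries travel as TDS SQL-batch/RPC packets and results come back as tabular result streams, so JSON never enters the picture at this layer.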
