Import data from MongoDB Atlas to Azure Machine Learning - azure-machine-learning-studio

I'm trying to import data from MongoDB Atlas into Azure Machine Learning with a Python script. I use the following script:
import pymongo
import pandas as pd

def azureml_main(dataframe1 = None, dataframe2 = None):
    client = pymongo.MongoClient("SERVER:USERNAME:PASSWORD")
    db = client['DATABASE']
    coll = db['COLLECTION']
    cursor = coll.find().limit(10)
    df = pd.DataFrame(list(cursor))
    return df,
This gives me the following error:
Error 0085: The following error occurred during script evaluation, please view the output log for more information:
---------- Start of error message from Python interpreter ----------
Caught exception while executing function: Traceback (most recent call last):
File "C:\server\invokepy.py", line 199, in batch
odfs = mod.azureml_main(*idfs)
File "C:\temp\416f67ae321a4f7b9a2d5eda63aa127c.py", line 23, in azureml_main
df = pd.DataFrame(list(cursor))
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 977, in next
if len(self.__data) or self._refresh():
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 902, in _refresh
self.__read_preference))
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 813, in __send_message
**kwargs)
File "C:\pyhome\lib\site-packages\pymongo\mongo_client.py", line 728, in _send_message_with_response
server = topology.select_server(selector)
File "C:\pyhome\lib\site-packages\pymongo\topology.py", line 121, in select_server
address))
File "C:\pyhome\lib\site-packages\pymongo\topology.py", line 97, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: SERVERNAME:XXXXX:[WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond,SERVERNAME:XXXXX: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond,SERVERNAME:XXXXX: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Process returned with non-zero exit code 1
Is this caused by not whitelisting any IP addresses? I can't find any information on which IP addresses Azure ML uses for outbound connections. Is there a workaround for this issue?

That error has nothing to do with IP whitelisting; it means the client cannot connect to your Mongo database at all. Check your connection string, and check that your server is running. The connection string should look something like
mongodb://username:password@server:27017/yourdatabase?authSource=admin
First check that it works from your command prompt / shell using
mongo mongodb://username:password@server:27017/yourdatabase?authSource=admin
then change your Python connection to:
client = pymongo.MongoClient("<working connection string>")
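Putting that together, a minimal sketch of the Azure ML script with a well-formed connection string (the host, credentials and database names below are placeholders, not values from the question; an Atlas cluster may also use a mongodb+srv:// URI):

import pymongo
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # Placeholder URI -- substitute your own Atlas host, user, password and authSource
    uri = "mongodb://USERNAME:PASSWORD@SERVER:27017/DATABASE?authSource=admin"
    # A short server-selection timeout makes connection problems surface quickly
    client = pymongo.MongoClient(uri, serverSelectionTimeoutMS=10000)
    coll = client["DATABASE"]["COLLECTION"]
    # Materialising the cursor is where a ServerSelectionTimeoutError would be raised
    df = pd.DataFrame(list(coll.find().limit(10)))
    # The Execute Python Script module expects a pandas DataFrame (or tuple of DataFrames) back
    return df,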

Related

How to connect 2 different PCs at two different physical locations through gRPC without using Cloud services

I want to connect a client PC to another PC through gRPC. I tried this on the same Wi-Fi network and it worked.
But when the client and the other PC are on different networks, it gives the following error.
Traceback (most recent call last):
File "client.py", line 17, in <module>
response = stub.SquareRoot(number)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python38\lib\site-packages\grpc\_channel.py", line 946, in _call_
return _end_unary_response_blocking(state, call, False, None)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python38\lib\site-packages\grpc\_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"@1622034767.452000000","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":5420,"referenced_errors":[{"created":"@1622034767.452000000","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":398,"grpc_status":14}]}"
Client.py
channel = grpc.insecure_channel('192.168.0.40:50051')
stub = calculator_pb2_grpc.CalculatorStub(channel)
number = calculator_pb2.Number(value=16)
response = stub.SquareRoot(number)
Server.py
print('Starting server. Listening on port 50051.')
server.add_insecure_port('[::]:50051')
server.start()
try:
    while True:
        time.sleep(86400)
except KeyboardInterrupt:
    server.stop(0)

Getting this error Status : Failure -Test failed: IO Error: The Network Adapter could not establish the connection

I am new to Oracle. I installed Oracle SQL Developer, but each time I try to connect I get the error:
Status: Failure -Test failed: IO Error: The Network Adapter could not
establish the connection
and the OracleTNSListener service stops on its own. Each time I start the service, it stops again immediately.

Airflow Exception - Task received SIGTERM signal

I am running Airflow tasks using the SSH operator. I am pretty sure the Python program has no errors and runs successfully when I run it directly, but when it is run from Airflow it ends with a SIGTERM error towards the end of execution.
I looked into various solutions, but nothing worked. I tried increasing killed_task_cleanup_time from 60 to 1200 in the airflow.cfg file, and also tried changing hostname_callable to socket:gethostname in airflow.cfg, since I received the following warning before the error:
Warning: The recorded hostname xxx does not match this instance's hostname
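For reference, the two changes described above look roughly like this in airflow.cfg (section placement follows the stock Airflow 1.10 config and may differ in other versions):

[core]
hostname_callable = socket:gethostname
# default is 60 seconds
killed_task_cleanup_time = 1200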
Error:
[2020-10-15 10:45:34,937] {taskinstance.py:954} ERROR - Received SIGTERM. Terminating subprocesses.
[2020-10-15 10:45:34,959] {taskinstance.py:1145} ERROR - SSH operator error: Task received SIGTERM signal
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/airflow/contrib/operators/ssh_operator.py", line 137, in execute
readq, _, _ = select([channel], [], [], self.timeout)
File "/opt/anaconda3/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 956, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
Any ideas and suggestions would be really helpful. I've been stuck on this for a day now.
This problem is triggered when the recorded hostname xxx maps to an IP address that is different from the one the instance's hostname maps to, which results in the SIGTERM error. So you need to specify the IP mapping for the recorded hostname xxx.
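One common way to do that is an explicit /etc/hosts entry on the worker that maps the recorded hostname to the correct address (the values below are placeholders, not taken from the question):

# /etc/hosts on the Airflow worker -- IP address and hostname are placeholders
10.0.0.12    xxx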
Possibly this thread might help? https://issues.apache.org/jira/browse/AIRFLOW-966.
Which version of Airflow are you using, and did you check your Celery broker settings?
The solution seems to be setting the visibility timeout higher than the Celery default, which is 1 hour, to prevent Celery from re-submitting the job. I believe this only affects tasks created via manual run / CLI (not normally scheduled tasks).
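In airflow.cfg that setting looks roughly like this (the section name matches the stock Airflow config; the value is only illustrative and should exceed the runtime of your longest task):

[celery_broker_transport_options]
# Celery's default is 3600 seconds (1 hour); raise it so long-running jobs are not redelivered
visibility_timeout = 21600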

apache airflow 1.10.9 statsd enabled making scheduler crashed

My Airflow is running in CeleryExecutor mode with PostgreSQL 12. Everything goes well except when I turn statsd on:
statsd_on = True
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
The scheduler can still queue jobs, but the jobs are not running, and the scheduler log shows the error below:
[SQL: SELECT count(*) AS count_1
FROM task_instance
WHERE task_instance.pool = %(pool_1)s AND task_instance.state IN (%(state_1)s, %(state_2)s)]
[parameters: {'pool_1': 'default_pool', 'state_1': 'running', 'state_2': 'queued'}]
(Background on this error at: http://sqlalche.me/e/4xp6)
Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 1246, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib64/python3.6/site-packages/sqlalchemy/engine/default.py", line 588, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.ProtocolViolation: invalid frontend message type 97
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1495, in _validate_and_run_task_instances
self._process_and_execute_tasks(simple_dag_bag)
File "/usr/local/lib64/python3.6/site-packages/sqlalchemy/engine/default.py", line 588, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.DatabaseError: (psycopg2.errors.ProtocolViolation) invalid frontend message type 97
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
If I disable statsd, everything resumes. Is this a bug in Airflow? Any advice on how to resolve it?
I faced the same error, and after a few tests I could still get statsd metrics working. Typically, you will see the error when the following conditions are all met:
statsd enabled (statsd_on = True)
SQLAlchemy connection pool enabled
scheduler stderr log enabled (by redirecting the error log to a file, which is where you will see this error)
In my case, even though the scheduler kept throwing these error logs, statsd metrics were still delivered and tasks were also scheduled as they should be. I don't know how to measure the impact, and I also don't want to sacrifice the SQLAlchemy connection pool, so I leave statsd turned off.
(I guess other people are not seeing the error because they are missing the third condition above.)
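For reference, the combination described above corresponds roughly to these airflow.cfg options (names taken from the stock Airflow 1.10 config, so treat the section placement as an assumption for other versions):

[core]
# the SQLAlchemy connection pool referred to above
sql_alchemy_pool_enabled = True

[scheduler]
statsd_on = True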

AirflowException: Celery command failed - The recorded hostname does not match this instance's hostname

I'm running Airflow in a clustered environment on two AWS EC2 instances, one for the master and one for the worker. The worker node, though, periodically throws this error when running "airflow worker":
[2018-08-09 16:15:43,553] {jobs.py:2574} WARNING - The recorded hostname ip-1.2.3.4 does not match this instance's hostname ip-1.2.3.4.eco.tanonprod.comanyname.io
Traceback (most recent call last):
File "/usr/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 387, in run
run_job.run()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 198, in run
self._execute()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2527, in _execute
self.heartbeat()
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 182, in heartbeat
self.heartbeat_callback(session=session)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 50, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/jobs.py", line 2575, in heartbeat_callback
raise AirflowException("Hostname of job runner does not match")
airflow.exceptions.AirflowException: Hostname of job runner does not match
[2018-08-09 16:15:43,671] {celery_executor.py:54} ERROR - Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
[2018-08-09 16:15:43,681: ERROR/ForkPoolWorker-30] Task airflow.executors.celery_executor.execute_command[875a4da9-582e-4c10-92aa-5407f3b46d5f] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 52, in execute_command
subprocess.check_call(command, shell=True)
File "/usr/lib64/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'airflow run arl_source_emr_test_dag runEmrStep2WaiterTask 2018-08-07T00:00:00 --local -sd /var/lib/airflow/dags/arl_source_emr_test_dag.py' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/dist-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 55, in execute_command
raise AirflowException('Celery command failed')
airflow.exceptions.AirflowException: Celery command failed
When this error occurs the task is marked as failed on Airflow and thus fails my DAG when nothing actually went wrong in the task.
I'm using Redis as my queue and PostgreSQL as my meta-database; both are external AWS services. I'm running all of this in my company environment, which is why the full name of the server is ip-1.2.3.4.eco.tanonprod.comanyname.io. It looks like Airflow wants this full name somewhere, but I have no idea where I need to fix this value so that it gets ip-1.2.3.4.eco.tanonprod.comanyname.io instead of just ip-1.2.3.4.
The really weird thing about this issue is that it doesn't always happen. It seems to happen randomly every once in a while when I run the DAG, and it occurs sporadically across all of my DAGs, not just one. I find it strange that it's sporadic, because that means other task runs handle the hostname just fine.
Note: I've changed the real IP address to 1.2.3.4 for privacy reasons.
Answer:
https://github.com/apache/incubator-airflow/pull/2484
This is exactly the problem I am having, and other Airflow users on AWS EC2 instances are experiencing it as well.
The hostname is set when the task instance runs, via self.hostname = socket.getfqdn(), where socket is the Python standard-library module (import socket).
The comparison that triggers this error is:
fqdn = socket.getfqdn()
if fqdn != ti.hostname:
    logging.warning("The recorded hostname {ti.hostname} "
                    "does not match this instance's hostname "
                    "{fqdn}".format(**locals()))
    raise AirflowException("Hostname of job runner does not match")
It seems like the hostname on the ec2 instance is changing on you while the worker is running. Perhaps try manually setting the hostname as described here https://forums.aws.amazon.com/thread.jspa?threadID=246906 and see if that sticks.
I had a similar problem on my Mac. I fixed it by setting hostname_callable = socket:gethostname in airflow.cfg.
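To see why that setting can matter, you can compare on the machine in question what the two callables return; Airflow records whichever one hostname_callable points at (the example outputs in the comments are illustrative only):

import socket

# Airflow's default callable per the comparison above: the fully qualified name,
# e.g. ip-1-2-3-4.eu-west-1.compute.internal (illustrative)
print(socket.getfqdn())

# With hostname_callable = socket:gethostname Airflow records the short name instead,
# e.g. ip-1-2-3-4 (illustrative)
print(socket.gethostname())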
Personally when running on my Mac, I found that I got similar errors to this when the Mac would sleep while I was running a long job. The solution was to go into System Preferences -> Energy Saver and then check "Prevent computer from sleeping automatically when the display is off."
