I have a script that allows me to connect to an Azure ML Workspace, but it only works locally (interactive authentication) or on compute clusters while running experiments.
I cannot manage to make it work on Compute Instances.
from azureml.core import Workspace
from azureml.core.run import Run, _OfflineRun

run = Run.get_context()
if isinstance(run, _OfflineRun):
    workspace = Workspace(
        "subscription_id",
        "resource_group",
        "workspace_name",
    )
else:
    workspace = run.experiment.workspace
I tried to use https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.msiauthentication?view=azure-ml-py but it did not work.
from azureml.core.authentication import MsiAuthentication
from azureml.core import Workspace

msi_auth = MsiAuthentication()

workspace = Workspace(
    "subscription_id",
    "resource_group",
    "workspace_name",
    auth=msi_auth,
)
File ~/localfiles/.venv/lib/python3.8/site-packages/azureml/_vendor/azure_cli_core/auth/adal_authentication.py:65, in MSIAuthenticationWrapper.set_token(self)
63 from azureml._vendor.azure_cli_core.azclierror import AzureConnectionError, AzureResponseError
64 try:
---> 65 super(MSIAuthenticationWrapper, self).set_token()
66 except requests.exceptions.ConnectionError as err:
67 logger.debug('throw requests.exceptions.ConnectionError when doing MSIAuthentication: \n%s',
68 traceback.format_exc())
File ~/localfiles/.venv/lib/python3.8/site-packages/msrestazure/azure_active_directory.py:596, in MSIAuthentication.set_token(self)
594 def set_token(self):
595 if _is_app_service():
--> 596 self.scheme, _, self.token = get_msi_token_webapp(self.resource, self.msi_conf)
597 elif "MSI_ENDPOINT" in os.environ:
598 self.scheme, _, self.token = get_msi_token(self.resource, self.port, self.msi_conf)
File ~/localfiles/.venv/lib/python3.8/site-packages/msrestazure/azure_active_directory.py:548, in get_msi_token_webapp(resource, msi_conf)
546 raise RuntimeError(err_msg)
547 _LOGGER.debug('MSI: token retrieved')
--> 548 token_entry = result.json()
549 return token_entry['token_type'], token_entry['access_token'], token_entry
File ~/localfiles/.venv/lib/python3.8/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
971 return complexjson.loads(self.text, **kwargs)
972 except JSONDecodeError as e:
973 # Catch JSON-related errors and raise as requests.JSONDecodeError
974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I do not want to use SDK v2, as it would break all my existing code.
I don't understand why the identity is not automatically managed when starting a Compute Instance, the way it is on compute clusters.
Does anyone have a solution for this?
Use the code blocks below to connect to the existing workspace from a compute instance with the Python SDK.
Create a compute instance, then connect to the workspace and check its details:
from azureml.core import Workspace

ws = Workspace.from_config()
ws.get_details()
Connecting to the workspace using the config file:
import os
from azureml.core import Workspace

ws = Workspace.from_config()
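A compute instance is normally provisioned with a config.json for its parent workspace, which is why Workspace.from_config() resolves it without explicit identifiers. A minimal sketch, assuming that default setup, that keeps the original run-context check and falls back to from_config() for interactive and compute-instance sessions:

from azureml.core import Workspace
from azureml.core.run import Run, _OfflineRun

run = Run.get_context()
if isinstance(run, _OfflineRun):
    # Interactive session or compute instance: use the config.json shipped with
    # the instance instead of MSI or explicit subscription/resource group/name.
    workspace = Workspace.from_config()
else:
    # Inside a submitted job, the workspace comes from the run context.
    workspace = run.experiment.workspace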
Related
I have a simple application that uses Google Cloud Pub/Sub, and as of a few days ago it has ceased working. Whenever I try to interact with the service (create a topic, subscribe to a topic, etc), I receive a DeadlineExceeded: 504 Deadline Exceeded error. For example:
from google.cloud import pubsub_v1
client = pubsub_v1.PublisherClient()
client.create_topic(request={"name": "test_topic"})
I have tried:
Creating new service account credentials
Lowering security on my Xfinity/Comcast router
The one thing that worked temporarily was a factory reset of the router, but then an hour later it stopped working.
Also, this problem exists across all the Python libraries for Google Cloud.
I am quite unfamiliar with networking, so I don't even know where to begin to solve this problem. Thanks in advance.
EDIT:
Full error
In [14]: from google.cloud import pubsub_v1
...: client = pubsub_v1.PublisherClient()
...: client.create_topic(request={"name": "test_topic"})
---------------------------------------------------------------------------
_InactiveRpcError Traceback (most recent call last)
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs)
66 try:
---> 67 return callable_(*args, **kwargs)
68 except grpc.RpcError as exc:
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/grpc/_channel.py in __call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
945 wait_for_ready, compression)
--> 946 return _end_unary_response_blocking(state, call, False, None)
947
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/grpc/_channel.py in _end_unary_response_blocking(state, call, with_call, deadline)
848 else:
--> 849 raise _InactiveRpcError(state)
850
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "{"created":"#1634864865.356495000","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":81,"grpc_status":4}"
>
The above exception was the direct cause of the following exception:
DeadlineExceeded Traceback (most recent call last)
<ipython-input-14-e6f64ca6ba79> in <module>
1 from google.cloud import pubsub_v1
2 client = pubsub_v1.PublisherClient()
----> 3 client.create_topic(request={"name": "test_topic"})
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/cloud/pubsub_v1/_gapic.py in <lambda>(self, *a, **kw)
38 return staticmethod(functools.wraps(wrapped_fx)(fx))
39 else:
---> 40 fx = lambda self, *a, **kw: wrapped_fx(self.api, *a, **kw) # noqa
41 return functools.wraps(wrapped_fx)(fx)
42
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/pubsub_v1/services/publisher/client.py in create_topic(self, request, name, retry, timeout, metadata)
479
480 # Send the request.
--> 481 response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
482
483 # Done; return the response.
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py in __call__(self, *args, **kwargs)
143 kwargs["metadata"] = metadata
144
--> 145 return wrapped_func(*args, **kwargs)
146
147
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/api_core/retry.py in retry_wrapped_func(*args, **kwargs)
284 self._initial, self._maximum, multiplier=self._multiplier
285 )
--> 286 return retry_target(
287 target,
288 self._predicate,
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/api_core/retry.py in retry_target(target, predicate, sleep_generator, deadline, on_error)
187 for sleep in sleep_generator:
188 try:
--> 189 return target()
190
191 # pylint: disable=broad-except
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/api_core/timeout.py in func_with_timeout(*args, **kwargs)
100 """Wrapped function that adds timeout."""
101 kwargs["timeout"] = self._timeout
--> 102 return func(*args, **kwargs)
103
104 return func_with_timeout
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs)
67 return callable_(*args, **kwargs)
68 except grpc.RpcError as exc:
---> 69 six.raise_from(exceptions.from_grpc_error(exc), exc)
70
71 return error_remapped_callable
~/anaconda3/envs/testvenv/lib/python3.8/site-packages/six.py in raise_from(value, from_value)
DeadlineExceeded: 504 Deadline Exceeded
EDIT 2:
Operating system: macOS
Software firewall: Checked Network settings on my box and the firewall is turned off
Anti-virus software: None
Location: Chicago, USA
Region: Unsure what region my request would go to as it isn't specified
Also, I used my phone as a hotspot and I was able to connect to GCP.
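For anyone narrowing a similar problem down, a minimal sketch (the project id below is a placeholder) that uses a fully qualified topic name and a generous client-side deadline can help distinguish a slow connection from one where the traffic is being dropped:

from google.cloud import pubsub_v1

project_id = "my-project"  # placeholder; substitute your GCP project id
client = pubsub_v1.PublisherClient()
topic_path = client.topic_path(project_id, "test_topic")  # projects/<id>/topics/test_topic

# If this still fails with DEADLINE_EXCEEDED after two minutes, the requests
# are most likely being blocked on the network rather than just arriving slowly.
topic = client.create_topic(request={"name": topic_path}, timeout=120)
print(topic.name)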
Long story short: I have been given some Python code for a custom OpenAI gym environment. I can successfully run the code via ExperimentGrid from the command line, but I would like to be able to run the entire experiment from within a Jupyter notebook rather than calling scripts. This would be more convenient for some experiments that I will be doing farther down the road.
My question: Is it possible to execute an experiment on a custom OpenAI gym environment entirely from within Jupyter Notebook and if so, how? I've seen plenty of examples of people executing gym's standard environments (like SpaceInvaders-v0 or CartPole-v0) from Jupyter but even then, they are calling the environment with
env=gym.make('SpaceInvaders-v0')
and essentially executing that environment's script behind the scenes.
Below is a basic description of how my code is set up to run from the command line, and the errors that I'm getting in Jupyter.
Any advice would be appreciated. I am admittedly rather new to Gym, Python and Linux.
My basic environment code is structured like this in, say, envs/mygames/Custom_Env.py (a stripped-down skeleton follows after this list):
various import statements (numpy, gym, pyglet, copy)
class Entity()
class State()
class The_Custom_Env(core.Env) # This is the main environment class
class Shell_Class # This class calls The_Custom_Env and provides some arguments
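A hypothetical skeleton of that layout, only to illustrate the structure; every class body is a placeholder, not the real environment:

# Hypothetical skeleton of envs/mygames/Custom_Env.py; the real file also imports pyglet and copy.
import numpy as np
from gym import core, spaces


class Entity:
    """Placeholder for an object that lives in the environment."""


class State:
    """Placeholder for the environment's internal state."""


class The_Custom_Env(core.Env):  # the main environment class
    def __init__(self):
        # Placeholder spaces; the real environment defines its own.
        self.action_space = spaces.Discrete(2)
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,))

    def reset(self):
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(4, dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info


class Shell_Class(The_Custom_Env):  # calls The_Custom_Env and provides some arguments
    def __init__(self):
        super().__init__()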
In mygames/__init__.py, I import Shell_Class:
from gym.envs.mygames.Custom_Env import Shell_Class
In envs/__init__.py, I register the environment:
register(
    id='TEST-v0',
    entry_point='gym.envs.mygames:Shell_Class',
    max_episode_steps=200,
    reward_threshold=25.0,
)
Finally, if I execute a script containing this code from the command line, the experiment works without issue:
from spinup.utils.run_utils import ExperimentGrid
from spinup import ppo_pytorch
import torch

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('--cpu', type=int, default=4)
    parser.add_argument('--num_runs', type=int, default=1)
    args = parser.parse_args()

    eg = ExperimentGrid(name='super-cool-test')
    eg.add('env_name', 'TEST-v0', '', True)
    eg.add('seed', [10*i for i in range(args.num_runs)])
    eg.add('epochs', [10])
    eg.add('steps_per_epoch', 4000)
    eg.add('ac_kwargs:hidden_sizes', [(32, 32)], 'hid')
    eg.add('ac_kwargs:activation', [torch.nn.ReLU], '')
    eg.add('pi_lr', [0.001])
    eg.add('clip_ratio', 0.3)
    eg.run(ppo_pytorch, num_cpu=args.cpu)
My Jupyter Attempt
I put all of the code from Custom_Env.py in cell #1.
I then registered the environment in cell #2:
gym.register(
    id='TEST-v1',
    entry_point='__main__:Shell_Class',
    max_episode_steps=200,
    reward_threshold=25.0,
)
based on this Q/A: Register gym environment that is defined inside a jupyter notebook cell. I then make the environment in cell #3:
gym.make('TEST-v1')
and get this non-descriptive output:
<TimeLimit<Shell_Class< TEST-v1 >>>
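(That output simply means gym.make succeeded and wrapped the environment in gym's TimeLimit wrapper. A quick sanity check in a spare cell, assuming the standard gym API, confirms the environment responds:)

env = gym.make('TEST-v1')
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(reward, done, info)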
In cell #4, I tried to execute ExperimentGrid code directly within Jupyter like so:
from spinup.utils.run_utils import ExperimentGrid
from spinup import ppo_pytorch
import torch
num_runs=1
cpu=4
env_name='TEST-v1'
eg = ExperimentGrid(name='Jupyter-test')
eg.add('env_name', env_name, '', True)
eg.add('seed', [10*i for i in range(num_runs)])
eg.add('epochs', 500)
eg.add('steps_per_epoch', 4000)
eg.add('ac_kwargs:hidden_sizes', [(32, 32)], 'hid')
eg.add('ac_kwargs:activation', [torch.nn.ReLU], '')
eg.add('pi_lr', 0.001)
eg.add('clip_ratio', 0.3)
eg.run(ppo_pytorch, num_cpu=cpu)
The experiment starts up as usual but then runs into some kind of error:
> ================================================================================
ExperimentGrid [Jupyter-test] runs over parameters:
env_name []
TEST-v1
seed [see]
0
epochs [epo]
500
steps_per_epoch [ste]
4000
ac_kwargs:hidden_sizes [hid]
(32, 32)
ac_kwargs:activation []
ReLU
pi_lr [pi]
0.001
clip_ratio [cli]
0.3
Variants, counting seeds: 1
Variants, not counting seeds: 1
================================================================================
Preparing to run the following experiments...
Jupyter-test_test-v1
================================================================================
Launch delayed to give you a few seconds to review your experiments.
To customize or disable this behavior, change WAIT_BEFORE_LAUNCH in
spinup/user_config.py.
================================================================================
Running experiment:
Jupyter-test_test-v1
with kwargs:
{
"ac_kwargs": {
"activation": "ReLU",
"hidden_sizes": [
32,
32
]
},
"clip_ratio": 0.3,
"env_name": "TEST-v1",
"epochs": 500,
"pi_lr": 0.001,
"seed": 0,
"steps_per_epoch": 4000
}
================================================================================
There appears to have been an error in your experiment.
Check the traceback above to see what actually went wrong. The
traceback below, included for completeness (but probably not useful
for diagnosing the error), shows the stack leading up to the
experiment launch.
================================================================================
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
<ipython-input-14-de843fd528cf> in <module>
15 eg.add('pi_lr', 0.001)
16 eg.add('clip_ratio', 0.3)
---> 17 eg.run(ppo_pytorch, num_cpu=cpu)
~/Downloads/spinningup/spinup/utils/run_utils.py in run(self, thunk, num_cpu, data_dir, datestamp)
544
545 call_experiment(exp_name, thunk_, num_cpu=num_cpu,
--> 546 data_dir=data_dir, datestamp=datestamp, **var)
547
548
~/Downloads/spinningup/spinup/utils/run_utils.py in call_experiment(exp_name, thunk, seed, num_cpu, data_dir, datestamp, **kwargs)
169 cmd = [sys.executable if sys.executable else 'python', entrypoint, encoded_thunk]
170 try:
--> 171 subprocess.check_call(cmd, env=os.environ)
172 except CalledProcessError:
173 err_msg = '\n'*3 + '='*DIV_LINE_WIDTH + '\n' + dedent("""
~/anaconda3/envs/spinningup/lib/python3.6/subprocess.py in check_call(*popenargs, **kwargs)
309 if cmd is None:
310 cmd = popenargs[0]
--> 311 raise CalledProcessError(retcode, cmd)
312 return 0
313
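One reading of that traceback: eg.run() serializes the training call and launches it in a fresh Python subprocess (the subprocess.check_call in run_utils.py above). That subprocess never executes the notebook cells, so an environment registered under __main__ in cell #2 does not exist for it, and the child process exits with an error. A possible workaround, sketched here under that assumption rather than taken from Spinning Up's documentation, is to skip ExperimentGrid in the notebook and call the algorithm in-process, where the registration is visible:

import gym
import torch
from spinup import ppo_pytorch

# Assumes the Custom_Env cell and the gym.register(...) cell have already been run.
ppo_pytorch(
    env_fn=lambda: gym.make('TEST-v1'),
    ac_kwargs=dict(hidden_sizes=(32, 32), activation=torch.nn.ReLU),
    seed=0,
    steps_per_epoch=4000,
    epochs=10,
    pi_lr=0.001,
    clip_ratio=0.3,
)

This gives up ExperimentGrid's grid/variant handling, so it only covers the single-run case.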
I'm using Hydra for training machine learning models. It's great for doing complex commands like python train.py data=MNIST batch_size=64 loss=l2. However, if I want to then run the trained model with the same parameters, I have to do something like python reconstruct.py --config_file path_to_previous_job/.hydra/config.yaml. I then use argparse to load in the previous yaml and use the compose API to initialize the Hydra environment. The path to the trained model is inferred from the path to Hydra's .yaml file. If I want to modify one of the parameters, I have to add additional argparse parameters and run something like python reconstruct.py --config_file path_to_previous_job/.hydra/config.yaml --batch_size 128. The code then manually overrides any Hydra parameters with those that were specified on the command line.
What's the right way of doing this?
My current code looks something like the following:
train.py:
import hydra
@hydra.main(config_name="config", config_path="conf")
def main(cfg):
    # [training code using cfg.data, cfg.batch_size, cfg.loss etc.]
    # [code outputs model checkpoint to job folder generated by Hydra]
    ...

main()
reconstruct.py:
import argparse
import os
from hydra.experimental import initialize, compose

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('hydra_config')
    parser.add_argument('--batch_size', type=int)
    # [other flags and parameters I may need to override]
    args = parser.parse_args()

    # Create the Hydra environment.
    initialize()
    cfg = compose(config_name=args.hydra_config)

    # Since checkpoints are stored next to the .hydra, we manually generate the path.
    checkpoint_dir = os.path.dirname(os.path.dirname(args.hydra_config))

    # Manually override any parameters which can be changed on the command line.
    batch_size = args.batch_size if args.batch_size else cfg.data.batch_size

    # [code which uses checkpoint_dir to load the model]
    # [code which uses both batch_size and params in cfg to set up the data etc.]
This is my first time posting, so let me know if I should clarify anything.
If you want to load the previous config as is and not change it, use OmegaConf.load(file_path).
If you want to re-compose the config (and it sounds like you do, because you said you want to override things), I recommend that you use the Compose API and pass in the parameters from the overrides file in the job output directory (next to the stored config.yaml), concatenated with the current run's overrides.
This script seems to do the job:
import os
from dataclasses import dataclass
from os.path import join
from typing import Optional

from omegaconf import OmegaConf

import hydra
from hydra import compose
from hydra.core.config_store import ConfigStore
from hydra.core.hydra_config import HydraConfig
from hydra.utils import to_absolute_path


# You can also use a yaml config file instead of this Structured Config
@dataclass
class Config:
    load_checkpoint: Optional[str] = None
    batch_size: int = 16
    loss: str = "l2"


cs = ConfigStore.instance()
cs.store(name="config", node=Config)


@hydra.main(config_path=".", config_name="config")
def my_app(cfg: Config) -> None:
    if cfg.load_checkpoint is not None:
        output_dir = to_absolute_path(cfg.load_checkpoint)
        original_overrides = OmegaConf.load(join(output_dir, ".hydra/overrides.yaml"))
        current_overrides = HydraConfig.get().overrides.task

        hydra_config = OmegaConf.load(join(output_dir, ".hydra/hydra.yaml"))
        # getting the config name from the previous job.
        config_name = hydra_config.hydra.job.config_name

        # concatenating the original overrides with the current overrides
        overrides = original_overrides + current_overrides
        # compose a new config from scratch
        cfg = compose(config_name, overrides=overrides)

    # train
    print("Running in ", os.getcwd())
    print(OmegaConf.to_yaml(cfg))


if __name__ == "__main__":
    my_app()
~/tmp$ python train.py
Running in /home/omry/tmp/outputs/2021-04-19/21-23-13
load_checkpoint: null
batch_size: 16
loss: l2
~/tmp$ python train.py load_checkpoint=/home/omry/tmp/outputs/2021-04-19/21-23-13
Running in /home/omry/tmp/outputs/2021-04-19/21-23-22
load_checkpoint: /home/omry/tmp/outputs/2021-04-19/21-23-13
batch_size: 16
loss: l2
~/tmp$ python train.py load_checkpoint=/home/omry/tmp/outputs/2021-04-19/21-23-13 batch_size=32
Running in /home/omry/tmp/outputs/2021-04-19/21-23-28
load_checkpoint: /home/omry/tmp/outputs/2021-04-19/21-23-13
batch_size: 32
loss: l2
Hi, I'm a student looking to use Jupyter Notebook to present a dataset for a school task.
import seaborn as sns
spotify = sns.load_dataset('top10s.csv')
This is a dataset that I found online, and when I try to run this code I get an HTTPError:
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
<ipython-input-2-af1fc80c3c1b> in <module>
1 import seaborn as sns
----> 2 spotify = sns.load_dataset('top10s.csv')
~\Anaconda3\lib\site-packages\seaborn\utils.py in load_dataset(name, cache, data_home, **kws)
426 os.path.basename(full_path))
427 if not os.path.exists(cache_path):
--> 428 urlretrieve(full_path, cache_path)
429 full_path = cache_path
430
~\Anaconda3\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data)
245 url_type, path = splittype(url)
246
--> 247 with contextlib.closing(urlopen(url, data)) as fp:
248 headers = fp.info()
249
~\Anaconda3\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
220 else:
221 opener = _opener
--> 222 return opener.open(url, data, timeout)
223
224 def install_opener(opener):
~\Anaconda3\lib\urllib\request.py in open(self, fullurl, data, timeout)
529 for processor in self.process_response.get(protocol, []):
530 meth = getattr(processor, meth_name)
--> 531 response = meth(req, response)
532
533 return response
~\Anaconda3\lib\urllib\request.py in http_response(self, request, response)
639 if not (200 <= code < 300):
640 response = self.parent.error(
--> 641 'http', request, response, code, msg, hdrs)
642
643 return response
~\Anaconda3\lib\urllib\request.py in error(self, proto, *args)
567 if http_err:
568 args = (dict, 'default', 'http_error_default') + orig_args
--> 569 return self._call_chain(*args)
570
571 # XXX probably also want an abstract factory that knows when it makes
~\Anaconda3\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args)
501 for handler in handlers:
502 func = getattr(handler, meth_name)
--> 503 result = func(*args)
504 if result is not None:
505 return result
~\Anaconda3\lib\urllib\request.py in http_error_default(self, req, fp, code, msg, hdrs)
647 class HTTPDefaultErrorHandler(BaseHandler):
648 def http_error_default(self, req, fp, code, msg, hdrs):
--> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp)
650
651 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 404: Not Found
I do not know how to fix this, or why I am even getting this issue. I hope somebody can help me, and thank you for your time.
There are two errors in the code.
sns.load_dataset can only load the sample datasets hosted in the seaborn-data repository (https://github.com/mwaskom/seaborn-data); it fetches those CSV files online rather than reading local files.
When specifying the dataset name, there is no need to include the file extension. Below is sample code to load the tips dataset:
import seaborn as sns
tips = sns.load_dataset("tips")
tips.head()
sns.load_dataset() fetches a dataset from online; it does not import a dataset from your working directory.
Here is the documentation for the seaborn load_dataset function.
Assuming that your dataset top10s.csv is located in the same folder as your python file, you should use pandas for this instead.
import pandas as pd
spotify = pd.read_csv('top10s.csv')
Be aware that you have to install this library via pip before importing it, like so:
pip install pandas
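After that, a quick sanity check (the file is assumed to sit next to the notebook, and "year" is only a hypothetical column name; substitute a real one from your CSV):

import pandas as pd
import seaborn as sns

spotify = pd.read_csv('top10s.csv')
print(spotify.head())

# Simple plot of one column; replace "year" with an actual column from the file.
sns.countplot(data=spotify, x="year")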
You'll need to install the latest master branch of zipline. You can do that by:
pip install zipline
I'm trying to extract the openvpn.service status using the systemd D-Bus API.
In [1]: import dbus
In [2]: sysbus = dbus.SystemBus()
In [3]: systemd1 = sysbus.get_object('org.freedesktop.systemd1', '/org/freedesktop/systemd1')
In [4]: manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')
In [5]: service = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Service')
In [6]: unit = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Unit')
In [7]: unit.ActiveState('openvpn.service')
---------------------------------------------------------------------------
DBusException Traceback (most recent call last)
<ipython-input-7-22857e7dcbd7> in <module>()
----> 1 unit.ActiveState('openvpn.service')
/usr/local/lib/python3.4/dist-packages/dbus/proxies.py in __call__(self, *args, **keywords)
68 # we're being synchronous, so block
69 self._block()
---> 70 return self._proxy_method(*args, **keywords)
71
72 def call_async(self, *args, **keywords):
/usr/local/lib/python3.4/dist-packages/dbus/proxies.py in __call__(self, *args, **keywords)
143 signature,
144 args,
--> 145 **keywords)
146
147 def call_async(self, *args, **keywords):
/usr/local/lib/python3.4/dist-packages/dbus/connection.py in call_blocking(self, bus_name, object_path, dbus_interface, method, signature, args, timeout, byte_arrays, **kwargs)
649 # make a blocking call
650 reply_message = self.send_message_with_reply_and_block(
--> 651 message, timeout)
652 args_list = reply_message.get_args_list(**get_args_opts)
653 if len(args_list) == 0:
DBusException: org.freedesktop.DBus.Error.UnknownMethod: Unknown method 'ActiveState' or interface 'org.freedesktop.systemd1.Unit'.
In [8]: manager.GetUnit('openvpn.service')
Out[8]: dbus.ObjectPath('/org/freedesktop/systemd1/unit/openvpn_2eservice')
In [9]: u = manager.GetUnit('openvpn.service')
In [10]: u.ActiveState
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-10-6998e589f206> in <module>()
----> 1 u.ActiveState
AttributeError: 'dbus.ObjectPath' object has no attribute 'ActiveState'
In [11]: manager.GetUnitFileState('openvpn.service')
Out[11]: dbus.String('enabled')
There's an ActiveState property, which contains a state value that reflects whether the unit is currently active or not. I've succeeded in reading UnitFileState; however, I can't figure out how to read the ActiveState property.
After having a look at some random examples [more permanent link], I've had some luck with
import dbus

sysbus = dbus.SystemBus()
systemd1 = sysbus.get_object('org.freedesktop.systemd1',
                             '/org/freedesktop/systemd1')
manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')
service = sysbus.get_object('org.freedesktop.systemd1',
                            object_path=manager.GetUnit('openvpn.service'))
interface = dbus.Interface(service,
                           dbus_interface='org.freedesktop.DBus.Properties')

print(interface.Get('org.freedesktop.systemd1.Unit', 'ActiveState'))
[Edit:]
Documenting the versions, just in case:
dbus.version == (1, 2, 14)
systemd 244 (244.1-1-arch)
Python 3.8.1
You probably need to call dbus.Interface(sysbus.get_object('org.freedesktop.systemd1', u), 'org.freedesktop.systemd1.Unit') and access the ActiveState property on that object. Your code currently tries to access the ActiveState property on an object path, which is not the object itself.
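A minimal sketch of that suggestion, reusing the object path returned by GetUnit and reading the property through the org.freedesktop.DBus.Properties interface shown in the answer above:

import dbus

sysbus = dbus.SystemBus()
systemd1 = sysbus.get_object('org.freedesktop.systemd1', '/org/freedesktop/systemd1')
manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')

# GetUnit returns only an object path; get a proxy object for that path first.
unit_path = manager.GetUnit('openvpn.service')
unit_obj = sysbus.get_object('org.freedesktop.systemd1', unit_path)

# D-Bus properties are read via the org.freedesktop.DBus.Properties interface.
props = dbus.Interface(unit_obj, 'org.freedesktop.DBus.Properties')
print(props.Get('org.freedesktop.systemd1.Unit', 'ActiveState'))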