Google Ad Manager historical report COLUMNS_NOT_SUPPORTED_FOR_REQUESTED_DIMENSIONS - python-3.6

I have a Python script that downloads a Google Ad Manager historical report through the API.
I am using the following report job:
{
    'reportQuery': {
        'dimensions': [
            'DATE',
            'MOBILE_APP_NAME',
            'ADVERTISER_NAME',
            'ADVERTISER_ID',
            'AD_UNIT_ID',
            'AD_UNIT_NAME',
            'LINE_ITEM_NAME',
            'LINE_ITEM_ID',
            'LINE_ITEM_TYPE',
            'CREATIVE_TYPE',
            'CREATIVE_NAME',
            'CREATIVE_ID',
            'MOBILE_DEVICE_NAME',
            'PLACEMENT_NAME',
            'PLACEMENT_ID'
        ],
        'dimensionAttributes': [
            'AD_UNIT_CODE',
            'LINE_ITEM_PRIORITY',
            'CREATIVE_CLICK_THROUGH_URL',
            'ADVERTISER_EXTERNAL_ID'
        ],
        'columns': [
            'AD_SERVER_IMPRESSIONS',
            'AD_SERVER_CLICKS',
            'AD_SERVER_CTR',
            'AD_EXCHANGE_IMPRESSIONS',
            'AD_SERVER_CPM_AND_CPC_REVENUE',
            'AD_SERVER_WITHOUT_CPD_AVERAGE_ECPM'
        ],
        'dateRangeType': 'CUSTOM_DATE',
        'startDate': start_date,
        'endDate': end_date
    }
}
I am getting the error below:
Failed to generate report. Error was: [ReportError.COLUMNS_NOT_SUPPORTED_FOR_REQUESTED_DIMENSIONS # columns; trigger:'AD_EXCHANGE_IMPRESSIONS']
I am not sure which dimension I need to include for AD_EXCHANGE_IMPRESSIONS. When I generate a report with the same fields in the UI, it works fine without errors.
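For reference, here is a minimal sketch of how a report job like this is typically submitted with the googleads Python library; the API version string, the placeholder query, and the temp-file handling below are assumptions for illustration, not taken from the original script:

import tempfile
from googleads import ad_manager

# Assumes credentials are configured in googleads.yaml.
client = ad_manager.AdManagerClient.LoadFromStorage()
report_downloader = client.GetDataDownloader(version='v202205')  # assumed API version

# Placeholder query for illustration; the reportQuery above would go here instead.
report_job = {
    'reportQuery': {
        'dimensions': ['DATE'],
        'columns': ['AD_SERVER_IMPRESSIONS'],
        'dateRangeType': 'LAST_WEEK'
    }
}

report_job_id = report_downloader.WaitForReport(report_job)

with tempfile.NamedTemporaryFile(suffix='.csv.gz', delete=False) as report_file:
    report_downloader.DownloadReportToFile(report_job_id, 'CSV_DUMP', report_file)
    print('Report saved to', report_file.name)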

Related

Loading CLI output to Cisco Genie/pyats parser?

I would like to get some help here with using the Cisco Genie parser. Is it possible to load the output of a CLI command (e.g. "show version") into the Genie parser?
My customer passes me the output of "show version" for each of their devices. I have no SSH access to their devices for security reasons. I'm able to extract the output from a Python script.
But how do I load the CLI output into the Genie parser? What I usually do is below, but that is only applicable if I have an SSH connection to the device:
output = device.parse("show version")
So how do I load an output string into the parser and tell it which parser to use? I'm puzzled...
You can use the following example; here the CLI output is for the "show interface" command:
from genie.libs.parser.ios.show_interface import ShowInterfaces
parser = ShowInterfaces(device='', context='cli')
parsed_dict = parser.cli(output=str_op)
Here, str_op is the output of the CLI command as a string.
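For the situation in the question, a minimal sketch (reusing the call from this answer, and assuming the customer's output is saved under a hypothetical file name) would be:

from genie.libs.parser.ios.show_interface import ShowInterfaces

# Hypothetical file containing the raw "show interfaces" output from the customer.
with open("show_interfaces_output.txt") as f:
    str_op = f.read()

parser = ShowInterfaces(device='', context='cli')
parsed_dict = parser.cli(output=str_op)
print(parsed_dict)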
If you don't have SSH access, I can recommend the TTP module. After saving the CLI output to a text file, you can write your own template and easily parse the data you want. I have given an example below (show users).
Example Code:
from ttp import ttp

# Read the saved CLI output ("show users") from a text file.
with open("showUsers.txt") as f:
    data_to_parse = f.read()

ttp_template = """
<group name="showUsers" method="table">
{{User|re(".?")|re(".*")}} {{Type}} {{Login_Date}} {{Login_Time}} {{Idle_day}} {{Idle_time}} --
{{Session_ID}} {{From}}
</group>
"""

parser = ttp(data=data_to_parse, template=ttp_template)
parser.parse()

# Print the result in JSON format.
results = parser.result(format='json')[0]
print(results)
Example Run:
[
{
"showUsers": [
{
"From": "--",
"Session_ID": "6"
},
{
"Idle_day": "0d",
"Idle_time": "00:00:00",
"Login_Date": "08FEB2022",
"Login_Time": "10:53:29",
"Type": "SSHv2",
"User": "admin"
},
{
"From": "135.244.199.185",
"Session_ID": "132"
},
{
"Idle_day": "0d",
"Idle_time": "00:03:35",
"Login_Date": "09FEB2022",
"Login_Time": "11:32:50",
"Type": "SSHv2",
"User": "admin"
},
{
"From": "10.144.208.82",
"Session_ID": "143"
}
]
}
]
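To work with the parsed result programmatically, the JSON string returned above can be loaded back into Python objects; a small sketch based on the example output:

import json

data = json.loads(results)  # 'results' is the JSON string from the example above
for entry in data[0]["showUsers"]:
    print(entry.get("User"), entry.get("Session_ID"), entry.get("From"))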

Testing Cypress with Browserstack does not work: "Malformed Archive"

I am trying to get a Cypress example test running in Browserstack. I am following this tutorial: Run your Cypress tests.
However, when it comes to running browserstack-cypress run, I'm getting the following output:
[2020-12-4 17:00:12] - info: Reading config from /home/dennis/Repos/CMS/browserstack.json
[2020-12-4 17:00:12] - info: Reading username from the environment variable BROWSERSTACK_USERNAME
[2020-12-4 17:00:12] - info: Reading access key from the environment variable BROWSERSTACK_ACCESS_KEY
[2020-12-4 17:00:12] - info: browserstack.json file is validated
[2020-12-4 17:00:46] - error: Malformed archive
[2020-12-4 17:00:46] - error: Zip Upload failed.
[2020-12-4 17:00:46] - info: Zip file deleted successfully.
This is what my browserstack.json looks like:
{
"auth": {
"username": "<user name>",
"access_key": "<access key>"
},
"browsers": [
{
"browser": "chrome",
"os": "Windows 10",
"versions": [
"latest",
"latest-1"
]
}
],
"run_settings": {
"cypress_config_file": "./cypress.json",
"project_name": "<project name>",
"build_name": "",
"parallels": "10",
"npm_dependencies": {},
"package_config_options": {}
},
"connection_settings": {
"local": false,
"local_identifier": null
},
"disable_usage_reporting": false
}
The cypress.json file is empty:
{}
What I'm also not getting is where I define which tests I want to run and where they are located.
I appreciate any help! Thanks!
I've come across the "Malformed archive" error when the runner compresses the entire project and tries to upload it, instead of just the Cypress test files.
You should be able to fix this by moving the Cypress test files into a subfolder:
test
| cypress.json
| Browserstack.json
| cypress
   | fixtures
   | integration
   | support
   | plugins
Set the path to cypress.json in browserstack.json
Refer: https://www.browserstack.com/docs/automate/cypress/sample-tutorial

Packer-created manifest.json is not valid JSON

I am using Packer to create a base AMI and a post-processor to create a manifest.json file.
How can I make this JSON valid?
{
"builds": [
{
"name": "amazon-ebs",
"builder_type": "amazon-ebs",
"build_time": 1589466697,
"files": null,
"artifact_id": "eu-west-1:ami-04d3331ac647e751b",
"packer_run_uuid": "add4c072-7ac2-f5e9-b941-6b80003c03ec",
"custom_data": {
"my_custom_data": "example"
}
}
],
"last_run_uuid": "add4c072-7ac2-f5e9-b941-6b80003c03ec"
2020-05-14T14:31:37.246153577Z stdout P }
Error: Parse error on line 13:
...b941-6b80003c03ec" 2020 - 05 - 14 T14:
----------------------^
Expecting 'EOF', '}', ':', ',', ']', got 'NUMBER'
My eventual goal is to save the artifact_id to a variable using bash.
Thank you for the help.
In order to make the JSON valid, I had to add the strip_time attribute to the manifest post-processor in my Packer template.json:
"post-processors": [
    {
        "type": "manifest",
        "output": "manifest.json",
        "strip_path": true,
        "strip_time": true
    }
]

Google Ad Manager COLUMNS_NOT_SUPPORTED_FOR_REQUESTED_DIMENSIONS ReportError

I am trying to fetch a report of line items; it works fine from the UI but halts with an error via the API. Following is the reportQuery:
{'reportQuery': {
    'dimensions': [
        'DATE',
        'LINE_ITEM_NAME',
        'LINE_ITEM_TYPE',
        'CREATIVE_SIZE_DELIVERED'
    ],
    'adUnitView': 'TOP_LEVEL',
    'columns': [
        'TOTAL_LINE_ITEM_LEVEL_IMPRESSIONS',
        'TOTAL_LINE_ITEM_LEVEL_CLICKS',
        'TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE'
    ],
    'dimensionAttributes': [
        'LINE_ITEM_FREQUENCY_CAP',
        'LINE_ITEM_START_DATE_TIME',
        'LINE_ITEM_END_DATE_TIME',
        'LINE_ITEM_COST_TYPE',
        'LINE_ITEM_COST_PER_UNIT',
        'LINE_ITEM_SPONSORSHIP_GOAL_PERCENTAGE',
        'LINE_ITEM_LIFETIME_IMPRESSIONS'
    ],
    'customFieldIds': [],
    'contentMetadataKeyHierarchyCustomTargetingKeyIds': [],
    'startDate': {
        'year': 2018,
        'month': 1,
        'day': 1
    },
    'endDate': {
        'year': 2018,
        'month': 1,
        'day': 2
    },
    'dateRangeType': 'CUSTOM_DATE',
    'statement': None,
    'includeZeroSalesRows': False,
    'adxReportCurrency': None,
    'timeZoneType': 'PUBLISHER'
}}
The above query throws the following error when tried with the API.
Error summary: {'faultMessage': "[ReportError.COLUMNS_NOT_SUPPORTED_FOR_REQUESTED_DIMENSIONS # columns; trigger:'TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE']", 'requestId': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'responseTime': '98', 'serviceName': 'ReportService', 'methodName': 'runReportJob'}
[ReportError.COLUMNS_NOT_SUPPORTED_FOR_REQUESTED_DIMENSIONS # columns; trigger:'TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE']
400 Syntax error: Expected ")" or "," but got identifier "TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE" at [1:354]
Did I miss anything? Any ideas about this issue?
Thanks!
This issue was solved by adding the dimension "Native ad format name".

Data ingestion task: Hadoop running locally instead of on the remote Hadoop EMR cluster

I have set up a multi-node Druid cluster with:
1) 1 node running as coordinator and overlord (m4.xl)
2) 2 nodes each running both a historical and a middle manager (r3.2xl)
3) 1 node running a broker (r3.2xl)
Now I have an EMR cluster running which I want to use for ingestion tasks. The problem is that whenever I try to submit a job via the curl command, the job always starts as a local Hadoop job on both middle managers instead of being submitted to the remote EMR cluster. My data lies in S3, and S3 is also configured for deep storage.
I have also copied all the jars from the EMR master to hadoop-dependencies/hadoop-client/2.7.3/
Druid version: 0.9.2
EMR version: 5.2
Please find attached the indexing job, common runtime properties, and middle manager runtime properties.
Q1) How do I get the job to submit to the remote EMR cluster?
Q2) Logs for the indexing task are not showing up on overlord:8090; how do I enable them?
File: data_index.json
{
"type": "index_hadoop",
"spec": {
"ioConfig": {
"type": "hadoop",
"inputSpec": {
"type": "static",
"paths": "s3n://<kjcnskd>smallTest"
}
},
"dataSchema": {
"dataSource": "multi_value_test_01",
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "day",
"queryGranularity": "none",
"intervals": [
"2011-09-12/2017-09-13"
]
},
"parser": {
"type": "string",
"parseSpec": {
"format": "tsv",
"delimiter": "\u0001",
"listDelimiter": "|",
"columns": [
"article_type",
"brand",
"gender",
"brand_type",
"master_category",
"supply_type",
"business_unit",
"testdim",
"date",
"week",
"month",
"year",
"style_id",
"live_styles",
"non_live_styles",
"broken_style",
"new_season_styles",
"live_styles_qty",
"non_live_styles_qty",
"broken_style_qty",
"new_season_styles_qty"
],
"dimensionsSpec": {
"dimensions": [
"article_type",
"brand",
"gender",
"brand_type",
"master_category",
"supply_type",
"business_unit",
"testdim",
"week",
"month",
"year",
"style_id"
]
},
"timestampSpec": {
"column": "date",
"format": "yyyyMMdd"
}
}
},
"metricsSpec": [
{
"name": "live_styles",
"type": "doubleSum",
"fieldName": "live_styles"
},
{
"name": "non_live_styles",
"type": "doubleSum",
"fieldName": "non_live_styles"
},
{
"name": "broken_style",
"type": "doubleSum",
"fieldName": "broken_style"
},
{
"name": "new_season_styles",
"type": "doubleSum",
"fieldName": "new_season_styles"
},
{
"name": "live_styles_qty",
"type": "doubleSum",
"fieldName": "live_styles_qty"
},
{
"name": "broken_style_qty",
"type": "doubleSum",
"fieldName": "broken_style_qty"
},
{
"name": "new_season_styles_qty",
"type": "doubleSum",
"fieldName": "new_season_styles_qty"
}
]
},
"tuningConfig": {
"type": "hadoop",
"partitionsSpec": {
"type": "hashed",
"targetPartitionSize": 5000000
},
"jobProperties": {
"fs.s3.awsAccessKeyId": "XXXXXXXXXXXXXX",
"fs.s3.awsSecretAccessKey": "XXXXXXXXXXXXXX",
"fs.s3.impl": "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"fs.s3n.awsAccessKeyId": "XXXXXXXXXXXXXX",
"fs.s3n.awsSecretAccessKey": "XXXXXXXXXXXXXX",
"fs.s3n.impl": "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec"
}
}
}
}
File: common.runtime.properties
#
# Licensed to Metamarkets Group Inc. (Metamarkets) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Metamarkets licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
#
# Extensions
#
# This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
# based on your particular setup.
druid.extensions.loadList=["druid-kafka-eight", "druid-s3-extensions", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "mysql-metadata-storage"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
druid.extensions.hadoopDependenciesDir=hadoop-dependencies/hadoop-client/2.7.3
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=10.0.1.152
druid.zk.paths.base=/druid
#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://metadata.store.ip:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=metadata.store.ip
#druid.metadata.storage.connector.port=1527
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.0.1.140:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid123
# For PostgreSQL (make sure to additionally include the Postgres extension):
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments
# For S3:
druid.storage.type=s3
druid.storage.bucket=asfvdcs
druid.storage.baseKey=druid/segments
druid.s3.accessKey=XXXXXXXXXXXX
druid.s3.secretKey=XXXXXXXXXXXX
#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs
# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=testashutosh
#druid.indexer.logs.s3Prefix=druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
File: middle manager runtime.properties
druid.service=druid/middleManager
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
druid.server.http.numThreads=25
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=hdfs://ip-10-0-1-xxx.ap-southeast-1.compute.internal:8020/tmp/druid-indexing
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]
druid.indexer.runner.type=remote
You need to tell Druid about the Hadoop cluster. To quote the manual:
Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid nodes. You can do this by copying them into conf/druid/_common/core-site.xml, conf/druid/_common/hdfs-site.xml, and so on.
If you have already done that, then it would indicate an issue with one of the config files (this happened to me).
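As a quick sanity check, here is a minimal sketch (the conf/druid/_common path is taken from the quote above; adjust it to your actual install layout) that verifies the four Hadoop XMLs are present on a Druid node:

import os

# Standard Druid common config directory (assumed; adjust to your install layout).
druid_common = "conf/druid/_common"
required = ["core-site.xml", "hdfs-site.xml", "yarn-site.xml", "mapred-site.xml"]

for name in required:
    path = os.path.join(druid_common, name)
    print(path, "found" if os.path.isfile(path) else "MISSING")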
