I'm just trying to get the named time zone displayed, but no luck: the time zone never shows up. What am I doing wrong?
I have included the moment.min.js and moment-timezone-with-data.min.js scripts:
angular.module('app', ['angularMoment']);
You are missing the amTimezone filter; without it, the z token will not render the time zone.
{{date | amDateFormat:"h:mm A z"}} //12:00 AM
{{date | amTimezone: 'America/New_York' | amDateFormat:"h:mm A z"}} //12:00 AM EST
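If you want every date in the app rendered in the same zone, angular-moment also exposes an amMoment service for setting a default time zone once (a hedged sketch based on the angular-moment docs; verify the method name against your version):

angular.module('app', ['angularMoment'])
  .run(function (amMoment) {
    // From here on, amDateFormat uses this zone, so "z" resolves to EST/EDT etc.
    amMoment.changeTimezone('America/New_York');
  });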
I am trying to retrieve the number of requests for the last day from Application Insights using the API.
When I do it via the /metrics/requests/count?timespan=P1D endpoint
I get a sum of 35871.
But if I do it via the
/query?query=requests | where timestamp > ago(1d) | count; endpoint
I get a count of 4510.
Lastly, if I do it via the
/events/requests?timespan=P1D&$count=true endpoint I get a
#odata.count of 4510, the same as from "query".
Why is the difference in request counts between metrics and query so big?
Edit:
I have run the following query in Application Insights Logs:
requests
| summarize totalCount=sum(itemCount) by bin(timestamp, 1d)
And that returns (currently it is 12/7/2021, 8:14:47.562 PM):
| timestamp [UTC] | totalCount |
|----------------------------|------------|
| 12/7/2021, 12:00:00.000 AM | 35,871 |
That retrieves (I believe) the number of requests since the beginning of today.
Surprisingly, that matches the count obtained via /metrics:
{'value': {'start': '2021-12-06T20:13:46.054Z', 'end': '2021-12-07T20:13:46.054Z', 'requests/count': {'sum': 35871}}}
But the range of dates via /metrics/ covers roughly the last 24h (1d).
I think that the requests table in Application Insights Logs (the /query endpoint) contains a list of requests where several entries can share the same name (URI) but have different resultCode values.
I haven't found documentation that clearly confirms this, though.
With that assumption I have been able to match the requests count to that of /metrics.
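As a quick sanity check (a sketch against the standard requests schema), comparing the raw row count with sum(itemCount) over the same window shows whether each stored row really stands in for several requests:

requests
| where timestamp > ago(1d)
| summarize Rows = count(), Items = sum(itemCount)
// Rows counts stored entries (roughly the 4510 seen via /query | count),
// while Items adds up itemCount (roughly the 35871 seen via /metrics).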
Here is the query that I was chasing:
let TOTAL = requests | where timestamp > ago(1d) | summarize TotalRequests=sum(itemCount) | extend Foo=1;
let CACHED_TOTAL = materialize(TOTAL);
let FAILED = requests
| where timestamp > ago(1d)
| where resultCode hasprefix "4"
| summarize Failed=sum(itemCount)
| extend Foo=1;
let CACHED_FAILED = materialize(FAILED);
CACHED_FAILED
| join kind=inner CACHED_TOTAL on Foo
| extend PercentFailed = round(todouble(Failed * 100) / TotalRequests, 2)
| project TotalRequests, Failed, PercentFailed;
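As a side note, the same result can be computed in a single pass with sumif, which avoids the dummy Foo column and the join entirely (a sketch, equivalent under the same itemCount assumption):

requests
| where timestamp > ago(1d)
| summarize TotalRequests = sum(itemCount),
            Failed = sumif(itemCount, resultCode hasprefix "4")
| extend PercentFailed = round(todouble(Failed * 100) / TotalRequests, 2)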
I am writing Firebase security rules, and I am attempting to get a document that may have a space in its documentID.
I have the following snippet, which works well when the documentID does not contain a space:
function isAdminOfCompany(companyName) {
  let company = get(/databases/$(database)/documents/Companies/$(companyName));
  return company.data.authorizedUsers[request.auth.uid].access == "ADMIN";
}
Under the collection "Companies", I have a document called "Test" and another called "Test Company". Getting the document corresponding to "Test" works just fine, but "Test Company" does not seem to work: the company variable (first line of the function) is null according to the Firebase security rules playground.
My thought is that this has something to do with URL encoding, but replacing the space in the documentID with "%20" or "+" does not change the result. Perhaps spaces are illegal characters for documentIDs (https://cloud.google.com/firestore/docs/best-practices lists a few best practices).
Any help would be appreciated!
EDIT: As per a few comments, I will add some additional images/explanations below.
Here is the structure of my database
And here is what fields are present in the user documents
In short, the following snippet reproduces the problem (I am not actually using this, but it demonstrates the issue the same way)
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /Users/{user} {
      allow update: if findPermission(resource.data.company) == "MASTER"
    }

    function findPermission(companyName) {
      let c = get(path("/databases/" + database + "/documents/Companies/" + companyName));
      return c.data.authorizedUsers[request.auth.uid].access;
    }
  }
}
When I try to update a user called test#email.com (which belongs to company "Test"), the operation is permitted, and everything works exactly as expected.
The issue arises when a user called test2#email.com, who belongs to company "Test Company", makes the same request (with the authorization email/uid updated in the playground to match what is actually found in the company structure): the request fails because the get() call (first line of the function) cannot find the Company document corresponding to "Test Company", as indicated by the variable "c" being null in the screenshot below. It is not null when looking up "Test".
Below is a screenshot of the error message, as well as some of the relevant variables when the error occurs
Check what kind of space it is, just in case it is actually a different non-printable character. You could inspect its Unicode code point to see which one it is. That said, using spaces in the names of variables and data structures is generally considered bad practice; there are many different space characters to consider:
| Unicode | HTML     | Description        | Example |
|---------|----------|--------------------|---------|
| U+0020  | &#32;    | Space              | [ ] |
| U+00A0  | &nbsp;   | No-Break Space     | [ ] |
| U+2000  | &#8192;  | En Quad            | [ ] |
| U+2001  | &#8193;  | Em Quad            | [ ] |
| U+2002  | &ensp;   | En Space           | [ ] |
| U+2003  | &emsp;   | Em Space           | [ ] |
| U+2004  | &#8196;  | Three-Per-Em Space | [ ] |
| U+2005  | &#8197;  | Four-Per-Em Space  | [ ] |
| U+2006  | &#8198;  | Six-Per-Em Space   | [ ] |
| U+2007  | &#8199;  | Figure Space       | [ ] |
| U+2008  | &#8200;  | Punctuation Space  | [ ] |
| U+2009  | &thinsp; | Thin Space         | [ ] |
| U+200A  | &#8202;  | Hair Space         | [ ] |
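If you want to see exactly which of these your stored ID contains, dumping the code points of the string makes the culprit obvious (a small Node.js sketch; the ID value here is just illustrative):

// Print each character of the ID as a U+XXXX code point
const id = 'Test Company';
console.log([...id].map(c => 'U+' + c.codePointAt(0).toString(16).toUpperCase().padStart(4, '0')));
// A plain space prints as U+0020; a no-break space would show up as U+00A0, etc.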
I am not able to parse the JSON value below. I tried parse_json() and todynamic(), but the result column values come back empty.
[screenshot of the JSON payload]
The issue is that your payload embeds an inner payload that is not valid JSON.
It is possible to "fix" it using the query language (see the usages of replace() in the example below), but it would be best if you could write a valid JSON payload to begin with.
Try running this:
print s = #'{"pipelineId":"63dfc1f6-5a43-5bca-bffe-6a36a435e19d","vmId":"9252382a-814f-4d02-9b1b-305db4caa208/usl-exepipe-dev/westus/usl-exepipe-lab-dev/asuvp306563","artifactResult":{"Id":"execution-job-2","SourceName":"USL Repository","ArtifactName":"install-lcu","Status":"Succeeded","Parameters":null,"Log":"[{\"code\":\"ComponentStatus/StdOut/succeeded\",\"level\":\"Info\",\"displayStatus\":\"Provisioning succeeded\",\"message\":\"2020-06-02T14:33:04.711Z | I | Starting artifact ''install-lcu''\r\n2020-06-02T14:33:04.867Z | I | Starting Installation\r\n2020-06-02T14:33:04.899Z | I | C:\\USL\\LCU\\4556803.msu Exists.\r\n2020-06-02T14:33:04.914Z | I | Starting installation process ''C:\\USL\\LCU\\4556803.msu /quiet /norestart''\r\n2020-06-02T14:43:14.169Z | I | Process completed with exit code ''3010''\r\n2020-06-02T14:43:14.200Z | I | Need to restart computer after hotfix 4556803 installation\r\n2020-06-02T14:43:14.200Z | I | Finished Installation\r\n2020-06-02T14:43:14.200Z | I | Artifact ''install-lcu'' succeeded\r\n\",\"time\":null},{\"code\":\"ComponentStatus/StdErr/succeeded\",\"level\":\"Info\",\"displayStatus\":\"Provisioning succeeded\",\"message\":\"\",\"time\":null}]","DeploymentLog":null,"StartTime":"2020-06-02T14:32:40.9882134Z","ExecutionTime":"00:11:21.2468597","BSODCount":0},"attempt":1,"instanceId":"a301aaa0c2394e76832867bfeec04b5d:0","parentInstanceId":"78d0b036a5c548ecaafc5e47dcc76ee4:2","eventName":"Artifact Result"}'
| mv-expand log = parse_json(replace("\r\n", " ", replace(#"\\", #"\\\\", tostring(parse_json(tostring(parse_json(s).artifactResult)).Log))))
| project log.code, log.level, log.displayStatus, log.message
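For comparison, if the producer could emit the Log field as a real JSON array instead of a doubly escaped string, no replace() gymnastics would be needed and the query would shrink to something like this (a sketch with a made-up minimal payload):

print s = dynamic({"artifactResult": {"Log": [{"code": "ComponentStatus/StdOut/succeeded", "level": "Info", "message": "Starting artifact"}]}})
| mv-expand log = s.artifactResult.Log
| project log.code, log.level, log.message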
I coded a Java client that sends a string of meta information and a byte array through a multipart POST HTTP request to my server running CherryPy 3.6.
I need to extract both values, so I wrote the following Python 3 code on the server side to figure out how to handle the result, as I can't find any relevant documentation online that explains how to read this multipart body part:
def controller(self, meta, data):
    print("meta", meta)
    print("data", type(data))
outputs:
my meta information
<class 'cherrypy._cpreqbody.Part'>
Note: the data part contains raw binary data.
How can I read the HTTP part content into a buffer or write it out to a disk file?
Thanks for your help.
Thanks for your answer.
I've already read this doc, but unfortunately the methods read_into_file(), make_file(), read(), ... don't work for me. For example, when trying to read a zip file sent from my Java client:
Assuming data is the HTTP POST parameter.
make_file()
fp = data.make_file()
print("fp type", type(fp)) # _io.BufferedRandom
zipFile = fp.read()
outputs:
AttributeError: 'bytes' object has no attribute 'seek'
line 651, in read_lines_to_boundary
    raise EOFError("Illegal end of multipart body.")
EOFError: Illegal end of multipart body.
read_into_file()
file = data.read_into_file()
print("file type", type(file))
zipFile = io.BytesIO(file.read())
# zipFile = file.read() # => raises same error
outputs:
line 651, in read_lines_to_boundary
    raise EOFError("Illegal end of multipart body.")
EOFError: Illegal end of multipart body.
I don't understand what happens ...
Actually "data" is not a file like object but a cherrypy._cpreqbody.Part one. It holds a "file" file an _io.BufferedRandom class property.
Its read() method returns the whole body content in a binary form (bytes).
so to end up the straightforward solution is :
class BinReceiver(object):
    def index(self, data):
        # data.file is an _io.BufferedRandom holding the raw part body
        zipFile = data.file.read()
        path = "/tmp/data.zip"
        fp = open(path, 'wb')
        fp.write(zipFile)
        fp.close()
        print("saved data into", path, "size", len(zipFile))
    index.exposed = True
And this works fine.
FYI: I'm running Python 3.2.
It seems like data is a file-like object which you can call .read on. In addition CherryPy provides a method read_into_file.
See the full documentation by typing help(cherrypy._cpreqbody.Part) in your REPL.
class Part(Entity)
| A MIME part entity, part of a multipart entity.
|
| Method resolution order:
| Part
| Entity
| __builtin__.object
|
| Methods defined here:
|
| __init__(self, fp, headers, boundary)
|
| default_proc(self)
| Called if a more-specific processor is not found for the
| ``Content-Type``.
|
| read_into_file(self, fp_out=None)
| Read the request body into fp_out (or make_file() if None).
|
| Return fp_out.
|
| read_lines_to_boundary(self, fp_out=None)
| Read bytes from self.fp and return or write them to a file.
|
| If the 'fp_out' argument is None (the default), all bytes read are
| returned in a single byte string.
|
| If the 'fp_out' argument is not None, it must be a file-like
| object that supports the 'write' method; all bytes read will be
| written to the fp, and that fp is returned.
|
| ----------------------------------------------------------------------
| Class methods defined here:
|
| from_fp(cls, fp, boundary) from __builtin__.type
|
| read_headers(cls, fp) from __builtin__.type
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| attempt_charsets = ['us-ascii', 'utf-8']
|
| boundary = None
|
| default_content_type = 'text/plain'
|
| maxrambytes = 1000
|
| ----------------------------------------------------------------------
| Methods inherited from Entity:
|
| __iter__(self)
|
| __next__(self)
|
| fullvalue(self)
| Return this entity as a string, whether stored in a file or not.
|
| make_file(self)
| Return a file-like object into which the request body will be read.
|
| By default, this will return a TemporaryFile. Override as needed.
| See also :attr:`cherrypy._cpreqbody.Part.maxrambytes`.
|
| next(self)
|
| process(self)
| Execute the best-match processor for the given media type.
|
| read(self, size=None, fp_out=None)
|
| readline(self, size=None)
|
| readlines(self, sizehint=None)
|
| ----------------------------------------------------------------------
| Data descriptors inherited from Entity:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| type
| A deprecated alias for :attr:`content_type<cherrypy._cpreqbody.Entity.content_type>`.
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from Entity:
|
| charset = None
|
| content_type = None
|
| filename = None
|
| fp = None
|
| headers = None
|
| length = None
|
| name = None
|
| params = None
|
| part_class = <class 'cherrypy._cpreqbody.Part'>
| A MIME part entity, part of a multipart entity.
|
| parts = None
|
| processors = {'application/x-www-form-urlencoded': <function process_u...
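If the part body has already been read into data.file by CherryPy's normal processing (as the follow-up above found), a chunked copy to disk keeps memory usage flat; a sketch building on that same attribute:

import shutil

class BinReceiver(object):
    def index(self, data):
        # 'data' is a cherrypy._cpreqbody.Part; its processed body sits in data.file,
        # an _io.BufferedRandom. Rewind it and stream it to disk in chunks.
        data.file.seek(0)
        with open('/tmp/data.zip', 'wb') as out:
            shutil.copyfileobj(data.file, out)
        return "ok"
    index.exposed = True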
I'm using DynamoDB, and with the Query API every ComparisonOperator except "EQ" keeps giving me an "Attempted conditional constraint is not an indexable operation" error.
What is the reason?
{"TableName":"My_Table_name",
"IndexName":"titleIndex",
"Select":"ALL_ATTRIBUTES",
"KeyConditions":
{"title":
{"AttributeValueList":[{"S":"title2"}],
"ComparisonOperator":"NE"}
}
}
For the Query operation, only the following comparison operators are supported:
EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN
You can use NE with the Scan operation.
Refer: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html
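If you really need "everything except title2", the usual fallback is a Scan with the legacy ScanFilter parameter (or its modern FilterExpression equivalent), which does accept NE. A hedged sketch of the request body, reusing the table and attribute names from the question:

{
  "TableName": "My_Table_name",
  "ScanFilter": {
    "title": {
      "AttributeValueList": [{"S": "title2"}],
      "ComparisonOperator": "NE"
    }
  }
}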
That may be helpful:
DynamoDB query with Comparison Operators
You can get additional info from this:
http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html
For KeyConditions, only the following comparison operators are supported:
EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN