The visualizer does not get a response from the browser after it outputs a picture - spacy-3

How do I configure the visualizer so it does not hang after sending a picture to the browser?
displaCy sends the picture to the browser and it is drawn there, but afterwards nothing responds to keystrokes or buttons. In the browser I can only close the page with the picture, but nothing is sent back to spaCy. The program waits for a response it never gets, and I have to interrupt execution to continue; after interrupting the kernel, execution continues normally. I am running this in the Spyder environment.
My steps: start the browser and Spyder, run the visualization program, and after "Serving on http://0.0.0.0:5000 ..." appears in the console, enter the address in the browser. A picture appears and the console prints lines like: "127.0.0.1 - - [09/Feb/2023 19:06:28] "GET / HTTP/1.1" 200 3400". After that, the only thing I can do is close the page.
import spacy
from spacy import displacy
print('\n begin') #
nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence.")
displacy.serve(doc, style="dep", options={"compact": True})
print('\n end') #
console content:
begin
Using the 'dep' visualizer
Serving on http://0.0.0.0:5000 ...
127.0.0.1 - - [09/Feb/2023 19:06:28] "GET / HTTP/1.1" 200 3400
127.0.0.1 - - [09/Feb/2023 19:06:29] "GET /favicon.ico HTTP/1.1" 200 3400
127.0.0.1 - - [09/Feb/2023 19:06:29] "GET /favicon.ico HTTP/1.1" 200 3400
Shutting down server on port 5000.
end
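For what it's worth, displacy.serve starts a small web server and blocks until that server is shut down, so the script only reaches the final print after the server is interrupted. If the goal is just the picture rather than an interactive server, a minimal non-blocking sketch (a workaround under that assumption, not a fix for the serve behaviour) is to use displacy.render, which returns the SVG markup so the script continues immediately:
import spacy
from spacy import displacy
from pathlib import Path

nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence.")

# render() returns the SVG markup instead of starting a blocking web server;
# jupyter=False forces a plain string even inside an IPython/Spyder console
svg = displacy.render(doc, style="dep", options={"compact": True}, jupyter=False)
Path("sentence_dep.svg").write_text(svg, encoding="utf-8")  # open this file in the browser
print('\n end')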

Related

Private Azure Load Balancer Returning 400 Response Using NGINX

I have a brand new Azure Load Balancer configured in private mode and VMSS (Single Server) configured with nginx and the default site. Any time I try to use the load balancer nginx returns a 400 response but if I use the server directly I get a 200 response.
Looking further at the access logs, I see this:
xxx.xxx.xxx.xxx - - [30/Jun/2021:17:51:48 +0000] "\x00" 400 166 "-" "-"
xxx.xxx.xxx.xxx - - [30/Jun/2021:17:51:51 +0000] "GET / HTTP/1.1" 304 0 "-" "{Browser Info ...}"
When using the load balancer, the path is \x00 instead of / - I'm not sure what is going on here or where to look.
This was caused by a private link service on the Load Balancer that was configured for TCP Proxy V2.

Error getting data from Graphite 500 Internal Server Error (Intermittent)

I need some help tracking down a sporadic http status 500 internal server error when using Graphite.
The server is running on Ubuntu 16.04, Graphite version 0.9.15.
10.0.0.10 - - [09/Jul/2017:08:07:14 -0500] "GET /render?from=-3minute&target=aliasByNode%28divideSeries%28hosts.Test.metric1.mean%2C+hosts.Test.metric2.mean%29%2C+1%29&format=json HTTP/1.1" 500 1237 "-" "python-requests/2.7.0 CPython/2.7.12 Linux/4.4.0-62-generic"
The same request works again without any modification.
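One way to see how often it fails is to replay the exact render call from the log and retry on a 500. This is only a hedged sketch: the Graphite host name and retry count below are assumptions, and the target is simply the URL-decoded form of the one in the log.
import time
import requests

# The same render request as in the access log, with the target URL-decoded
params = {
    "from": "-3minute",
    "target": "aliasByNode(divideSeries(hosts.Test.metric1.mean, hosts.Test.metric2.mean), 1)",
    "format": "json",
}

for attempt in range(5):  # assumed retry count
    resp = requests.get("http://graphite.example.com/render", params=params)  # assumed host
    print(attempt, resp.status_code)
    if resp.status_code != 500:
        break
    time.sleep(2)  # wait briefly before repeating the identical request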

Why do I see error 206 or 404 in the log when users watch videos on my site?

I use nginx and JW Player, and sometimes I see these 206 and 404 errors, but usually the response is 200.
My log:
XX.XX.XX.XX - - [15/Apr/2013:21:23:40 +0400] "GET /route/86258d45ffc3403789b73e5ff2af83ce/106/video.flv HTTP/1.1" 206 1 "http://example.com/course/36/files/106/" "Mozilla/5.0 (Linux; U; Android 4.0.4; ru-ru; GT-P5100 Build/IMM76D) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Safari/534.30"
I read about error 206 (partial content), but I don't understand when and why it happens.
HTTP status 206 (or any other status from 200 to 299) is not an error. It indicates a partial content response which is sent if the client requests it. Since it's a video, I'm guessing the user skipped part of the video and so the player software on the client sent a partial request for the rest of it.
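You can reproduce a 206 yourself by sending a Range header, which is what the player does when the user seeks. A small sketch (the URL reuses the path from the log on a placeholder host):
import requests

# Ask for only the first kilobyte, the way a seeking video player would
resp = requests.get(
    "http://example.com/route/86258d45ffc3403789b73e5ff2af83ce/106/video.flv",  # placeholder host
    headers={"Range": "bytes=0-1023"},
)
print(resp.status_code)                    # 206 if the server honours the range, 200 otherwise
print(resp.headers.get("Content-Range"))   # e.g. "bytes 0-1023/1048576"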

"Couldn't forward the HTTP response back to the HTTP client: It seems the user clicked on the 'Stop' button in his browser."

Take a look at the code on GitHub.
What's happening is that when I try to create a Schedule (one of the models), it suddenly fails to do anything. Using the built-in Padrino server, it does not output any errors. However, with Passenger, I am getting this:
DEBUG - [22/Aug/2011 13:52:13] "GET (0.3754ms) 127.0.0.1 - - /admin/schedules - 200 3606"
[ pid=3939 thr=3067513712 file=ext/nginx/HelperAgent.cpp:921 time=2011-08-22 13:52:13.968 ]: Couldn't forward the HTTP response back to the HTTP client: It seems the user clicked on the 'Stop' button in his browser.
DEBUG - [22/Aug/2011 13:52:13] "GET (0.3211ms) 127.0.0.1 - - /admin/schedules - 200 3606"
DEBUG - [22/Aug/2011 13:52:19] "POST (0.0493ms) 127.0.0.1 - - /admin/schedules/create - 302 -"
DEBUG - [22/Aug/2011 13:52:19] "Resolving layout /var/www/fhsclock/admin/views/layouts/application"
DEBUG - [22/Aug/2011 13:52:19] "GET (0.0384ms) 127.0.0.1 - - /admin/schedules - 200 3606"
I think the second line is the problem (I'm not clicking Stop, by the way). This is the only place it happens; creating a new entry of another model works fine.
What is causing this?

How can I measure my (SAMP) server's bandwidth usage?

I'm running a Solaris server to serve PHP through Apache. What tools can I use to measure the bandwidth my server is currently using? I use Google analytics to measure traffic, but as far as I know, it ignores file size. I have a rough idea of the average size of the pages I serve, and can do a back-of-the-envelope calculation of my bandwidth usage by multiplying page views (from Google) by average page size, but I'm looking for a solution that is more rigorous and exact.
Also, I'm not trying to throttle anything, or implement usage caps or anything like that. I'd just like to measure the bandwidth usage, so I know what it is.
An example of what I'm after is the usage meter that Slicehost provides in their admin website for their users. They tell me (for another site I run) how much bandwidth I've used each month and also divide the usage for uploading and downloading. So, it seems like this data can be measured, and I'd like to be able to do it myself.
To put it simply, what is the conventional method for measuring the bandwidth usage of my server?
This depends on your setup. If you have a (near-)dedicated physical interface for your web server you could gather stats straight from the interface.
Methods to do this could include SNMP (try net-snmp) or "ifconfig", combined with RRDTool or simple logging to flat files.
An alternative is using the Apache log, which could look like this:
192.168.101.155 - - [17/Apr/2005:20:39:19 -0700] "GET / HTTP/1.1" 200 1456
192.168.101.155 - - [17/Apr/2005:20:39:19 -0700] "GET /apache_pb.gif HTTP/1.1" 200 2326
192.168.101.155 - - [17/Apr/2005:20:39:19 -0700] "GET /favicon.ico HTTP/1.1" 404 303
192.168.101.155 - - [17/Apr/2005:20:39:42 -0700] "GET /index.html.ca HTTP/1.1" 200 1663
192.168.101.155 - - [17/Apr/2005:20:39:42 -0700] "GET /apache_pb.gif HTTP/1.1" 304 -
192.168.101.155 - - [17/Apr/2005:20:39:43 -0700] "GET /favicon.ico HTTP/1.1" 404 303
192.168.101.155 - - [17/Apr/2005:20:40:01 -0700] "GET /apache_pb.gif HTTP/1.1" 304 -
192.168.101.155 - - [17/Apr/2005:20:40:09 -0700] "GET /apache_pb.gift HTTP/1.1" 404 306
192.168.101.155 - - [17/Apr/2005:20:40:09 -0700] "GET /favicon.ico HTTP/1.1" 404 303
The last number is the number of bytes transferred, excluding the headers(!). See Apache Log Docs.
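If you go the log route, summing that last field gives the outbound HTTP body bytes. A rough sketch under assumptions (common log format as in the lines above, a guessed log path, and "-" meaning no body was sent):
import re

# Match the status code and response size at the end of a common-log-format line
LINE = re.compile(r'" (\d{3}) (\d+|-)\s*$')

total_bytes = 0
with open("/var/log/apache2/access_log") as log:  # assumed log location
    for line in log:
        m = LINE.search(line)
        if m and m.group(2) != "-":
            total_bytes += int(m.group(2))        # bytes sent, excluding headers

print(f"{total_bytes / (1024 * 1024):.1f} MiB served")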
I am just guessing, but I think the usual approach is to use the same tools and services that deliver QoS (Quality of Service) features. Somewhere on the server itself, or on the network routers around it, there will be services enabled that measure the size of the packets flowing out of your server. These same services can be used to limit bandwidth for customers that need such limits enforced. I have not heard of an application you can run on the server itself to measure bandwidth; it should be possible to create one, but that's not the usual way such measurements are collected. I suspect this answer will end up not being Solaris-specific.