akka.remote.RemoteTransportException: No transport is loaded for protocol - tcp

I have noticed that Akka 2.3.2 no longer includes the akka-remote-test-experiment module. Can anyone suggest how to solve this problem:
[ERROR] [05/23/2014 12:33:38.765] [Configurations-akka.actor.default-dispatcher-15] [akka://Configurations/system/cluster/core/daemon/joinSeedNodeProcess-1] No transport is loaded for protocol: [akka], available protocols: [akka.tcp]
akka.remote.RemoteTransportException: No transport is loaded for protocol: [akka], available protocols: [akka.tcp]
at akka.remote.Remoting$.localAddressForRemote(Remoting.scala:80)
at akka.remote.Remoting.localAddressForRemote(Remoting.scala:122)
at akka.remote.RemoteActorRefProvider.rootGuardianAt(RemoteActorRefProvider.scala:338)
at akka.actor.ActorRefFactory$class.actorSelection(ActorRefProvider.scala:318)
at akka.actor.ActorCell.actorSelection(ActorCell.scala:369)
at akka.cluster.JoinSeedNodeProcess$$anonfun$receive$4$$anonfun$applyOrElse$1.applyOrElse(ClusterDaemon.scala:1084)
at akka.cluster.JoinSeedNodeProcess$$anonfun$receive$4$$anonfun$applyOrElse$1.applyOrElse(ClusterDaemon.scala:1083)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at scala.collection.TraversableLike$$anonfun$collect$1.apply(TraversableLike.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.collect(TraversableLike.scala:278)
at scala.collection.AbstractTraversable.collect(Traversable.scala:105)
at akka.cluster.JoinSeedNodeProcess$$anonfun$receive$4.applyOrElse(ClusterDaemon.scala:1083)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at akka.cluster.JoinSeedNodeProcess.aroundReceive(ClusterDaemon.scala:1067)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

This is identical to Mario's answer, just with different syntax. Change
val remote = context.actorFor("akka://RemoteSystem@127.0.0.1:5150/user/RemoteActor")
to
val remote = context.actorFor("akka.tcp://RemoteSystem@127.0.0.1:5150/user/RemoteActor")

This looks like a problem when creating the instance of akka.actor.Address. For remoting to work, the first parameter has to be "akka.tcp" instead of "akka":
val addr = Address("akka.tcp", "actorSystem", hostname, port)
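The same rule applies to every remote address the node is given, including cluster seed nodes — the error in the question comes from a seed-node address that uses the bare "akka" protocol. A minimal application.conf sketch (the provider and port are illustrative placeholders; the system name "Configurations" is taken from the log above):
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"
  remote.netty.tcp {
    hostname = "127.0.0.1"
    port = 2551
  }
  cluster.seed-nodes = [
    # must be akka.tcp://..., not akka://...
    "akka.tcp://Configurations@127.0.0.1:2551"
  ]
}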


Micropython uasyncio websocket server

I need to run a websocket server on ESP32 and the official example raises the following exception when I connect from any client:
MPY: soft reboot
Network config: ('192.168.0.200', '255.255.255.0', '192.168.0.1', '8.8.8.8')
b'Sec-WebSocket-Version: 13\r\n'
b'Sec-WebSocket-Key: k5Lr79cZgBQg7irI247FMw==\r\n'
b'Connection: Upgrade\r\n'
b'Upgrade: websocket\r\n'
b'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits\r\n'
b'Host: 192.168.0.200\r\n'
b'\r\n'
Finished webrepl handshake
Task exception wasn't retrieved
future: <Task> coro= <generator object 'echo' at 3ffe79b0>
Traceback (most recent call last):
File "uasyncio/core.py", line 1, in run_until_complete
File "main.py", line 22, in echo
File "uasyncio/websocket/server.py", line 60, in WSReader
AttributeError: 'Stream' object has no attribute 'ios'
My micropython firmware and libraries:
Micropython firmware: https://micropython.org/resources/firmware/esp32-idf3-20200902-v1.13.bin
Pip libraries installed: micropython-ulogging, uasyncio.websocket.server
My main.py:
import network
import machine

sta_if = network.WLAN(network.STA_IF)
sta_if.active(True)
sta_if.ifconfig(('192.168.0.200', '255.255.255.0', '192.168.0.1', '8.8.8.8'))
if not sta_if.isconnected():
    print('connecting to network...')
    sta_if.connect('my-ssid', 'my-password')
    while not sta_if.isconnected():
        machine.idle()  # save power while waiting
print('Network config:', sta_if.ifconfig())

# from https://github.com/micropython/micropython-lib/blob/master/uasyncio.websocket.server/example_websock.py
import uasyncio
from uasyncio.websocket.server import WSReader, WSWriter

def echo(reader, writer):
    # Consume GET line
    yield from reader.readline()
    reader = yield from WSReader(reader, writer)
    writer = WSWriter(reader, writer)
    while 1:
        l = yield from reader.read(256)
        print(l)
        if l == b"\r":
            await writer.awrite(b"\r\n")
        else:
            await writer.awrite(l)

import ulogging as logging
#logging.basicConfig(level=logging.INFO)
logging.basicConfig(level=logging.DEBUG)

loop = uasyncio.get_event_loop()
loop.create_task(uasyncio.start_server(echo, "0.0.0.0", 80))
loop.run_forever()
loop.close()
MicroPython 1.13 implements asyncio v3, which has a number of breaking changes compared to the 3-year-old sample referenced.
I suggest you refer to Peter Hinch's excellent documentation on asyncio,
and the asyncio V3 tutorial.
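To illustrate the scale of those changes, here is a minimal echo server in the v3 style (a plain-TCP sketch without the websocket handshake; the host, port, and buffer size are illustrative, not taken from the original example):
import uasyncio as asyncio

async def echo(reader, writer):
    # v3 handlers are plain coroutines; "yield from" is replaced by await
    while True:
        data = await reader.read(256)
        if not data:
            break
        writer.write(data)    # write() only buffers ...
        await writer.drain()  # ... drain() performs the actual send

async def main():
    await asyncio.start_server(echo, "0.0.0.0", 80)
    while True:
        await asyncio.sleep(3600)  # keep the scheduler alive

asyncio.run(main())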
I encountered the same problem. I looked at the old implementation of the Stream class [1] and the new one [2].
It seems to me that you need to edit server.py from uasyncio/websocket/.
You can download the files from [3] to your PC. Then, at the bottom of the file, replace the two instances of "reader.ios" with "reader.s".
Save the file to your ESP32 and it should work. Of course, you then need to use "from server import WSReader, WSWriter" instead of "from uasyncio.websocket.server import WSReader, WSWriter".
[1] https://github.com/pfalcon/pycopy-lib/blob/master/uasyncio/uasyncio/__init__.py
[2] https://github.com/micropython/micropython/blob/master/extmod/uasyncio/stream.py
[3] https://pypi.org/project/micropython-uasyncio.websocket.server/#files
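If you want to script the edit described above (a sketch assuming GNU sed; it applies exactly the substitution the answer names):
sed -i 's/reader\.ios/reader.s/g' server.py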
https://github.com/pfalcon/pycopy-lib/tree/master/uasyncio has a recent (May '21) sample that should also work on standard MicroPython.
Or check out https://awesome-micropython.com under web servers.

Unable to connect Corda node to Postgres with SSL

My Postgres DB in GCP (Google Cloud Platform) only accepts connections over SSL.
I tried the below inside my node.conf without any success:
dataSourceProperties {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://db-private-ip:5432/my_node"
    dataSource.ssl = true
    dataSource.sslMode = verify-ca
    dataSource.sslRootCert = "/opt/corda/db-certs/server-ca.pem"
    dataSource.sslCert = "/opt/corda/db-certs/client-cert.pem"
    dataSource.sslKey = "/opt/corda/db-certs/client-key.pem"
    dataSource.user = my_node_db_user
    dataSource.password = my_pass
}
I'm sure that the keys (sslMode, sslRootCert, sslCert, and sslKey) are acceptable in node.conf (even though they are not mentioned anywhere in the Corda docs), because the logs didn't contain any errors saying that those keys are not recognized.
I get this error when I try to start the node:
[ERROR] 21:58:48+0000 [main] pool.HikariPool. - HikariPool-1 - Exception during pool initialization. [errorCode=zmhrwq, moreInformationAt=https://errors.corda.net/OS/4.3/zmhrwq]
[ERROR] 21:58:48+0000 [main] internal.NodeStartupLogging. - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=18t70u2, moreInformationAt=https://errors.corda.net/OS/4.3/18t70u2]
I tried adding ?ssl=true to the end of the data source URL, as suggested in (Azure Postgres Database requires SSL Connection from Corda), but that didn't fix the problem.
Also, using the same values, I'm able to connect from my VM to the DB with the psql client:
psql "sslmode=verify-ca sslrootcert=server-ca.pem sslcert=client-cert.pem sslkey=client-key.pem hostaddr=db-private-ip user=some-user dbname=some-pass"
Turns out the JDBC driver cannot read the key from a PEM file; it has to be converted to a DER file using:
openssl pkcs8 -topk8 -inform PEM -in client-key.pem -outform DER -nocrypt -out client-key.der
chmod 400 client-key.der
chown corda:corda client-key.der
More details here: https://github.com/pgjdbc/pgjdbc/issues/1364
So the correct config should look like this:
dataSourceProperties {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://db-private-ip:5432/db-name"
    dataSource.ssl = true
    dataSource.sslMode = verify-ca
    dataSource.sslRootCert = "/opt/corda/db-certs/server-ca.pem"
    dataSource.sslCert = "/opt/corda/db-certs/client-cert.pem"
    dataSource.sslKey = "/opt/corda/db-certs/client-key.der"
    dataSource.user = db-user-name
    dataSource.password = db-user-pass
}
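As a quick sanity check that the converted file really is DER (not from the original answer, just a generic openssl probe):
openssl asn1parse -inform DER -in client-key.der > /dev/null && echo "client-key.der parses as DER"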

data channel lock error while configuring flume with multiple channels

I have tried to fan out the flow from one source to two channels. I also specified different dataDirs and checkpointDirs properties for each channel, as in the "channel lock error while configuring flume's multiple sources using FILE channels" question. I used a multiplexing channel selector. I get the following error:
18/08/23 16:21:37 ERROR file.FileChannel: Failed to start the file channel [channel=fileChannel1_2]
java.io.IOException: Cannot lock /root/.flume/file-channel/data. The directory is already locked. [channel=fileChannel1_2]
at org.apache.flume.channel.file.Log.lock(Log.java:1169)
at org.apache.flume.channel.file.Log.<init>(Log.java:336)
at org.apache.flume.channel.file.Log.<init>(Log.java:76)
at org.apache.flume.channel.file.Log$Builder.build(Log.java:276)
at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:281)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ...
My configuration file is as follows:
agent1.sinks=hdfs-sink1_1 hdfs-sink1_2
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2
agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDir=/mnt/alpha_data/
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true
agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true
agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDir=/mnt/beta_data/
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true
agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false
agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basenameHeader
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
Can someone give a solution for this?
Try setting a channel for the default property in the multiplexing selector:
agent1.sources.source1_1.selector.default=fileChannel1_1
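Note also that the lock error itself is most likely caused by a property-name typo: the file channel expects dataDirs (plural), so the dataDir lines above are ignored and both channels fall back to the default /root/.flume/file-channel/data seen in the error. The corrected lines (which the revised config below already uses):
agent1.channels.fileChannel1_1.dataDirs=/mnt/alpha_data/
agent1.channels.fileChannel1_2.dataDirs=/mnt/beta_data/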
The data channel lock error was corrected, but I still couldn't get the multiplexing to work. Config as follows.
agent1.sinks=hdfs-sink1_1 hdfs-sink1_2 hdfs-sink1_3
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.capacity=200000
agent1.channels.fileChannel1_1.transactionCapacity=1000
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDirs=/home/Flume/alpha_data
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true
agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true
agent1.sources.source1_1.basenameHeaderKey = basename
agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDirs=/home/Flume/beta_data
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true
agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_3.type=file
agent1.channels.fileChannel1_3.capacity=200000
agent1.channels.fileChannel1_3.transactionCapacity=10
agent1.channels.fileChannel1_3.checkpointDir=/home/Flume/gamma/001
agent1.channels.fileChannel1_3.dataDirs=/home/Flume/gamma_data
agent1.channels.fileChannel1_3.checkpointOnClose=true
agent1.channels.fileChannel1_3.dataOnClose=true
agent1.sinks.hdfs-sink1_3.type=hdfs
agent1.sinks.hdfs-sink1_3.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_3.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/KT
agent1.sinks.hdfs-sink1_3.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_3.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_3.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_3.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_3.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_3.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_3.hdfs.useLocalTimeStamp=false
agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sinks.hdfs-sink1_3.channel=fileChannel1_3
agent1.sources.source1_1.selector.type=replicating
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
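A likely reason the routing still doesn't happen: selector.type is now set to replicating, and a replicating selector copies every event to all channels, ignoring the mapping.* lines. To route on the basename header, the selector must stay multiplexing; also note the mapping matches the exact header value, so the basename must literally be "CA" or "AZ" for these mappings to fire. A sketch of the corrected block, reusing the names above:
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3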

Trouble connecting Firefox WebDriver with Selenium in Python

I'm running into an error trying to start up the Selenium Firefox driver. It seems like others have hit snags at this step, and there is no readily available solution online, so hopefully this question will be broadly helpful. It seems like Firefox is failing to establish an HTTP server interface when initiated through Selenium's driver. I can run Firefox from the command line with no errors.
I should specify that I am doing this via SSH login to a Linux container. I'm running Python 2.7 on Ubuntu 14.04 LTS (GNU/Linux 3.16.3-elastic x86_64). I have the latest version of Selenium (2.44) installed, and I'm using Firefox 34.0. I'm using Xvfb to spoof a display.
Below is my code, the error logs, and some related source code.
from selenium import webdriver
d = webdriver.Firefox()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/webdriver.py", line 59, in __init__
self.binary, timeout),
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 47, in __init__
self.binary.launch_browser(self.profile)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 66, in launch_browser
self._wait_until_connectable()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 105, in _wait_until_connectable
raise WebDriverException("Can't load the profile. Profile "
selenium.common.exceptions.WebDriverException: Message: Can't load the profile. Profile Dir: %s If you specified a log_file in the FirefoxBinary constructor, check it for details.
That error is raised here, due to a timeout:
def _wait_until_connectable(self):
    """Blocks until the extension is connectable in the firefox."""
    count = 0
    while not utils.is_connectable(self.profile.port):
        if self.process.poll() is not None:
            # Browser has exited
            raise WebDriverException("The browser appears to have exited "
                                     "before we could connect. If you specified a log_file in "
                                     "the FirefoxBinary constructor, check it for details.")
        if count == 30:
            self.kill()
            raise WebDriverException("Can't load the profile. Profile "
                                     "Dir: %s If you specified a log_file in the "
                                     "FirefoxBinary constructor, check it for details.")
        count += 1
        time.sleep(1)
    return True
In the log file:
tail -f logs/firefox_binary.log
1418661895753 addons.xpi DEBUG checkForChanges
1418661895847 addons.xpi DEBUG No changes found
1418661895853 addons.manager DEBUG Registering shutdown blocker for XPIProvider
1418661895854 addons.manager DEBUG Registering shutdown blocker for LightweightThemeManager
1418661895857 addons.manager DEBUG Registering shutdown blocker for OpenH264Provider
1418661895858 addons.manager DEBUG Registering shutdown blocker for PluginProvider
System JS : ERROR (null):0 - uncaught exception: 2147746065
JavaScript error: file:///tmp/tmplkLsLs/extensions/fxdriver@googlecode.com/components/driver-component.js, line 11507: NS_ERROR_NOT_AVAILABLE: Component is not available'Component is not available' when calling method: [nsIHttpServer::start]
*** Blocklist::_preloadBlocklistFile: blocklist is disabled
1418661908552 addons.manager DEBUG Registering shutdown blocker for <unnamed-provider>
One more point of information: early on, in the Firefox driver initialization, socket.bind(('127.0.0.1', 0)) was failing with a "can't assign requested address" error. I changed the location to ('0.0.0.0', 0) and edited the localhost entry in my /etc/hosts, and was able to bind that way. Not sure if that could be causing the current failure, though.
Edits per Louis's request below; I mark the two lines where I changed the localhost address.
def free_port():
    """
    Determines a free port using sockets.
    """
    free_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    free_socket.bind(('0.0.0.0', 0))  # changed from 127.0.0.1
    free_socket.listen(5)
    port = free_socket.getsockname()[1]
    free_socket.close()
    return port

def is_connectable(port):
    """
    Tries to connect to the server at port to see if it is running.

    :Args:
     - port: The port to connect.
    """
    try:
        socket_ = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        socket_.settimeout(1)
        socket_.connect(("0.0.0.0", port))  # changed again
        socket_.close()
        return True
    except socket.error:
        return False
Here's the constructor from webdriver.py:
def __init__(self, firefox_profile=None, firefox_binary=None, timeout=30,
             capabilities=None, proxy=None):
    self.binary = firefox_binary
    self.profile = firefox_profile
    if self.profile is None:
        self.profile = FirefoxProfile()
    self.profile.native_events_enabled = (
        self.NATIVE_EVENTS_ALLOWED and self.profile.native_events_enabled)
    if self.binary is None:
        self.binary = FirefoxBinary()
    if capabilities is None:
        capabilities = DesiredCapabilities.FIREFOX
    if proxy is not None:
        proxy.add_to_capabilities(capabilities)
    RemoteWebDriver.__init__(self,
        command_executor=ExtensionConnection("127.0.0.1", self.profile,
                                             self.binary, timeout),
        desired_capabilities=capabilities,
        keep_alive=True)
    self._is_remote = False
Here's the constructor from extension_connection.py:
def __init__(self, host, firefox_profile, firefox_binary=None, timeout=30):
    self.profile = firefox_profile
    self.binary = firefox_binary
    HOST = host
    if self.binary is None:
        self.binary = FirefoxBinary()
    if HOST is None:
        HOST = "127.0.0.1"
    PORT = utils.free_port()
    self.profile.port = PORT
    self.profile.update_preferences()
    self.profile.add_extension()
    self.binary.launch_browser(self.profile)
    _URL = "http://%s:%d/hub" % (HOST, PORT)
    RemoteConnection.__init__(
        self, _URL, keep_alive=True)
In posting my edits per Louis's comment, I saw that my localhost issue turned up in other locations, as the host is hardcoded in twice more. I had my server admin address the issue, changed everything in the source back to 127.0.0.1, and the problem was solved. Thanks for prompting me, @louis, and I'm sorry my question wasn't more interesting. I will accept my own answer after 2 days, when SO allows me.
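For reference, the loopback mapping the hardcoded 127.0.0.1 addresses rely on is the stock /etc/hosts entry (a standard default, not quoted from the original post):
127.0.0.1    localhost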

Haskell/ghci Exception: bind: resource busy (Address already in use)

I'm trying to work on the following code (the code is a copy from here). The problem is that when I close the server with Ctrl+C and try to run it again, I get: *** Exception: bind: resource busy (Address already in use).
In the documentation of listenOn it is written: "NOTE: To avoid the 'Address already in use' problems popped up several times on the GHC-Users mailing list we set the ReuseAddr socket option on the listening socket. If you don't want this behavior, please use the lower level listen instead." How can I fix this? (ghci version 7.6.3)
import Network (listenOn, accept, PortID(..))
import Network.Socket (Socket, isSupportedSocketOption, SocketOption(..))
import System.IO (hSetBuffering, hGetLine, hPutStrLn, BufferMode(..), Handle)
import Control.Concurrent (forkIO)

echoImpl :: Handle -> IO ()
echoImpl client = do
    line <- hGetLine client
    hPutStrLn client line
    echoImpl client

clientHandler :: Socket -> IO ()
clientHandler socket = do
    (client, _, _) <- accept socket
    hSetBuffering client NoBuffering
    forkIO $ echoImpl client
    clientHandler socket

felix :: IO ()
felix = do
    let reuseAddrSupported = isSupportedSocketOption ReuseAddr
    putStrLn $ "ReuseAddr: " ++ show reuseAddrSupported
    socket <- listenOn $ PortNumber 5002
    putStrLn $ "Echo server started .."
    clientHandler socket
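No accepted fix appears in the thread, but one plausible explanation and workaround (a sketch, not from the original post): ReuseAddr only helps with sockets left in TIME_WAIT; if the interrupted ghci session still holds the listening socket open, bind fails regardless. Making sure the socket is closed even when Ctrl+C interrupts the handler avoids this, e.g. with bracket (assuming the era's network package, where the close function is sClose; clientHandler is the function defined above):

import Control.Exception (bracket)

-- Same server, but the listening socket is guaranteed to be closed
-- when clientHandler is interrupted (e.g. by Ctrl+C in ghci).
felixSafe :: IO ()
felixSafe =
    bracket
        (listenOn $ PortNumber 5002)  -- acquire the listening socket
        sClose                        -- release; runs even on exceptions
        (\socket -> do
            putStrLn "Echo server started .."
            clientHandler socket)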
