Below is a list of asyncoro release announcements / short summaries of changes.
- 4.5.6 (2017-05-23)
asyncoro version 4.5.6 has been released. In this release:
- Channel's deliver method implementation has been changed to get the reply asynchronously from the remote server. Otherwise, if the n parameter is bigger than 1 and the channel doesn't have enough subscribers yet, delivering a message may block until enough subscribers register with the channel (which would mean other messages to that remote server can't be delivered). Now, while the reply from deliver is pending, any other messages to that remote server will be transferred.
- Fixed httpd module to discard invalid servers: if a node rejoins the cluster (e.g., the node is closed and restarted), the httpd module doesn't throw away the node and server information (so the http client still shows finished / pending jobs information). When a node is rediscovered, though, the information about old servers has to be thrown away; otherwise, httpd may show more servers than are available.
- asyncoro project is being renamed to pycos!
asyncoro project started as a simple module within the dispy project to implement asynchronous network programming and coroutines. Since then, so many features have been added to asyncoro, including message passing, distributed computing and fault handling, that the project name no longer seemed fitting. Moreover, the term "coroutine" was not quite appropriate either, as these light-weight processes are like operating system processes and don't transfer control explicitly from one to another. The asyncoro name is also very similar to the asyncore module, causing confusion. For these reasons, asyncoro is renamed to pycos. It stands for either "Python concurrent tasks", "pico tasks", or "pico os".
Programs using asyncoro can be converted to use pycos with the sed script 'asyncoro2pycos.sed' in the 'examples' directory of the 'pycos' installation; for example, sed -i -f /path/to/asyncoro2pycos.sed program.py converts 'program.py'.
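As an illustration of what such a conversion does, here is a pure-Python stand-in for the rename; the single substitution rule below is a hypothetical simplification of the real rules in asyncoro2pycos.sed:

```python
import re

def asyncoro2pycos(source):
    # hypothetical stand-in for asyncoro2pycos.sed, whose core job is
    # substitutions along the lines of s/asyncoro/pycos/g
    return re.sub(r'asyncoro', 'pycos', source)

old = "import asyncoro\ncoro = asyncoro.Coro(coro_proc)\n"
new = asyncoro2pycos(old)
print(new)  # module references now use the pycos name
```

The real sed script presumably contains more targeted rules (renamed classes, module paths), so treat this as a sketch of the idea rather than a replacement for it.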
Going forward, the asyncoro project will be maintained for the next couple of releases. If you are currently using asyncoro and have opinions / concerns about moving to pycos, please let me know.
- 4.5.5 (2017-05-02)
asyncoro version 4.5.5 has been released. In this version, httpd has been fixed to support IPv6 addresses.
- 4.5.4 (2017-04-19)
asyncoro version 4.5.4 has been released. In this release:
- Fixed IPv6 for Windows and OS X. IPv6 with Python 2.7 under Windows needs the win_inet_pton package (it is not required for Python 3.6+). For IPv6 under OS X, the netifaces package is required. Even when not required on other platforms, netifaces is strongly recommended.
- Fixed a potential deadlock in discoro module (for distributed concurrent communicating processes).
- 4.5.3 (2017-04-05)
asyncoro version 4.5.3 has been released. In this version:
- Fixed (default) host name resolution under IPv6.
- Path and directory names in Unicode are supported.
- Added show_coro_args parameter to httpd module to control whether (remote) coroutine arguments are shown in the browser.
- 4.5.2 (2017-03-13)
asyncoro version 4.5.2 has been released. In this version:
- Fixed socket's sendto method for sockets created in the main thread (instead of in coroutines).
- Fixed discoro to send StatusClosed message when a server is closed (instead of StatusInitialized).
- 4.5.1 (2017-02-27)
asyncoro version 4.5.1 has been released. In this version:
- ping_interval and zombie_period options to Computation have been removed, as they are not necessary. The scheduler will discard a computation if messages couldn't be sent to the client (after the number of errors given by disasyncoro.MaxConnectionErrors), so there is no need for zombie_period. ping_interval was meant to broadcast ping messages to discover nodes (in case UDP is lossy and not all nodes could be found with the one broadcast message at the beginning); however, clients can easily run a daemon coroutine that periodically calls the discover_peers method.
- Added await_async option to Computation.close method. If this is not set or is False, any running coroutines created with run_async will be terminated; if it is set to True, close will wait until such coroutines have finished.
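The "daemon coroutine that periodically calls discover_peers" suggestion above amounts to a small periodic loop. This stdlib sketch models that loop with a thread and a callback; the asyncoro scheduler and its discover_peers method are assumed, not shown:

```python
import threading

def periodic(interval, fn, stop):
    # call fn every `interval` seconds until `stop` is set;
    # in asyncoro, fn would be the scheduler's discover_peers and
    # this loop would be a daemon coroutine instead of a thread
    while not stop.wait(interval):
        fn()

calls = []
stop = threading.Event()
t = threading.Thread(target=periodic, args=(0.01, lambda: calls.append('ping'), stop),
                     daemon=True)
t.start()
stop.wait(0.1)   # let a few periods elapse
stop.set()
t.join()
print(len(calls))
```

The point is only the shape of the loop: a wait with a timeout doubles as both the sleep and the shutdown check.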
- 4.5.0 (2017-02-15)
asyncoro version 4.5.0 has been released. In this release, discoro_schedulers.py (and RemoteCoroScheduler) has been removed and its functionality has been moved into Computation and the discoro scheduler: the functions to create remote coroutines and obtain their results are now in Computation as run* methods. The examples have been updated to use the new API.
- 4.4.1 (2017-01-30)
asyncoro version 4.4.1 has been released. In this release:
- Fixed asyncoro.Location to resolve host names (so the 'addr' parameter can be given as a host name instead of an IP address).
- Fixed a race condition with initializing discoro nodes.
- Added get_ssl_version and set_ssl_version class methods to AsyncSocket so appropriate SSL flags can be set before any asynchronous sockets are created (e.g., even before disasyncoro module is loaded, which starts servers that use AsyncSocket).
- 4.4.0 (2017-01-17)
asyncoro version 4.4.0 has been released. In this release:
- IPv6 support has been added. IPv6 works only if the netifaces module is available. Even when using IPv4, asyncoro can use the netifaces module to determine the appropriate address to use, so installing netifaces is advised.
- Fixed SSL support. This seems to have been broken since the 4.1 release.
- Fixed sending large files with OS X (and other BSD variants).
- 4.3.4 (2016-12-06)
asyncoro version 4.3.4 has been released. This version fixes recvall with Python 2.7 under Windows with IOCP (this was broken in 4.3.2 and 4.3.3).
- 4.3.3 (2016-11-28)
asyncoro version 4.3.3 has been released. This version fixes an issue with sending large files under OS X (and other BSD variants).
- 4.3.2 (2016-11-15)
asyncoro version 4.3.2 has been released. In this version:
- The accept method of an AsyncSocket created with blocking=True now returns a connection socket that is also an instance of AsyncSocket (so, for example, it can use the send_msg and recvall methods available in AsyncSocket); this is consistent with connection sockets returned by non-blocking AsyncSocket. Until now, such connection sockets had to be converted to AsyncSocket explicitly by the user.
- Fixed recv_msg to handle receiving empty string (i.e., data with length 0).
- Fixed discoro_client5.py program in examples to send relative path (instead of absolute path, which would be invalid in remote nodes) to program discoro_client5_proc.py.
- 4.3.1 (2016-10-10)
asyncoro version 4.3.1 has been released. The changes since the last version are:
- RemoteCoroScheduler now sends status messages to status_coro (if it is a coroutine) of Computation, so client programs don't need to chain them.
- Fixed discoro_schedulers.py to not import an old symbol that is no longer valid, and removed spurious files from the distribution (these issues were due to uploading the package from a wrong directory).
- 4.3.0 (2016-09-13)
asyncoro version 4.3.0 has been released. Following is a summary of changes since the last version:
- discoro (distributed computing module) now supports initializing nodes on POSIX platforms (i.e., Linux, OS X etc., but not Windows). With this, computations can, for example, load data into memory at the node level so all server processes have (read-only) access to that data for in-memory processing. Applications can also use in-memory processing at each server, as in previous versions. Server-level in-memory processing can be used on all platforms, and with read-write access. Examples 'discoro_client9_node.py' and 'discoro_client9_server.py' have been added to illustrate these features.
- Computation and RemoteCoroScheduler now have node_available, which can be set to a coroutine function that is called by the scheduler when a node is available. This function can, for example, transfer data to that node. Computation also supports node_setup, which is executed on the node for initializing it (e.g., to read data in files into memory).
- Files sent with Computation (as components) and files transferred by node_available are now saved in the node directory, which is the parent directory of the server directories where computations execute (i.e., the current working directory of computations). In earlier versions, such files were saved at each server, whereas now they are saved only once.
- Computation supports peers_communicate boolean flag. This option indicates that computations executing on remote servers can communicate among themselves; without this option (default), each computation can only communicate with the client. In earlier versions discoronode supported discover_peers for similar behavior; discover_peers is now dropped in favor of peers_communicate.
- When RemoteCoroScheduler is used, it is no longer necessary for computation to be scheduled in client programs; RemoteCoroScheduler will schedule it.
- discoro module supports DiscoroNodeAllocate to customize allocation of nodes / servers.
- discoronode now uses separate UDP port 51351; in earlier versions nodes used 51350, same port used by clients and scheduler.
- 4.2.2 (2016-08-06)
asyncoro version 4.2.2 has been released. In this release:
- The dest_path method of (distributed) AsynCoro has been changed to create the given destination path if it doesn't exist, and to raise an exception on error instead of returning a value to indicate success/failure. Creating the destination directory fixes a problem with starting discoronode.py.
- Fixed discoronode.py for Python 3 (an octal number must be prefixed with '0o' instead of just '0').
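The octal fix above is worth a note for anyone porting similar code: Python 3 rejects the old 0-prefixed literals outright, while the 0o form works in both Python 2.7 and 3:

```python
import stat

mode = 0o700  # Python 3 syntax; the bare `0700` form is a SyntaxError in Python 3
print(mode == 448)            # octal 700 is 448 in decimal
print(mode == stat.S_IRWXU)   # read/write/execute for owner
```

This is exactly the kind of file-permission constant discoronode passes to os.chmod/os.mkdir.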
- 4.2.1 (2016-08-03)
asyncoro version 4.2.1 has been released. In this version:
- Added node_filters option to Computation to specify resource constraints on nodes to be used. This should be a list of DiscoroNodeFilter instances.
- Added --service_start, --service_stop and --service_end options to the discoronode program. These options specify the time and duration when discoro servers can be used by clients.
- Fixed the send_file method when the dir option is not given or is None.
- 4.2.0 (2016-07-27)
asyncoro version 4.2.0 has been released. The changes since the last release are:
- discoronode now runs each computation in a separate process on each processor, so each computation executes in a pristine environment with just the asyncoro module loaded and the dependencies sent with the computation. When the computation is closed, that process is finished, and a new process is started for the next computation. Each computation also starts in a clean directory with only the files transferred as dependencies. Any other files transferred by clients using asyncoro's API will be saved only under the computation's path and removed when the computation is closed. However, computations may access files elsewhere in the file system on nodes; to isolate the file system, either chroot or docker can be used.
- Added save_config and config options to discoronode to save and load configuration (the various options used to start discoronode). With these options, most of the configuration can be saved once in a file (with the save_config option), and later that file can be used with the config option to start discoronode with those options.
- Added tcp_port option to discoro scheduler when used in shared mode. With this option the scheduler can be started with a specific TCP port that can be set up for firewall / port forwarding.
- The close_peer method in the AsynCoro scheduler is now a coroutine method that waits until the close operation is complete. Earlier, this was a function that simply queued the close request, without knowing when the close completed.
- 4.1.1 (2016-07-12)
asyncoro version 4.1.1 has been released. With this version, modules transferred for distributed computing will be saved at the remote server with the same file structure relative to the client's directory (i.e., if a module and submodule are sent, the submodule files will be saved under the main module). This version also fixes a potential deadlock/crash with processing messages for remote peers in version 4.1.
- 4.1 (2016-06-08)
asyncoro version 4.1 has been released. Up to now, coroutines were expected to yield frequently, as the entire asyncoro framework, including processing of I/O events, runs in a single thread. This is (mostly) acceptable for concurrent programming, but with distributed programming, if a coroutine executes a long-running computation, even for a few seconds, it may result in loss of network packets, and remote peers may even disconnect. To avoid this, it was advised to use threads to run long-running computations, or any computation that blocks the asyncoro framework for a significant amount of time. However, this can be error prone.
In this release, I/O events are processed in a separate thread, and an additional asyncoro scheduler is used for executing "system" coroutines, created with ReactCoro (so called to indicate these are for "reactive systems", which are, somewhat ironically, implemented with synchronous programming). With this setup, coroutines created with Coro don't affect system coroutines that send/receive network traffic, process messages from peers, etc. Now, for example, clients can distribute arbitrary computations (as long as they are generator functions) for distributed computing, even if those computations are long running. To demonstrate this, the discoro_client8.py example has been changed in this release to call time.sleep (which blocks the asyncoro framework and all other coroutines from executing during that time). In earlier releases, a thread was used to avoid blocking asyncoro.
All this is transparent to the user; earlier programs work without any modifications. In fact, ReactCoro itself is not documented, as it is meant for internal use only (at least for now).
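The single-thread constraint described above can be illustrated with a minimal round-robin scheduler over plain generators (a toy model, not asyncoro's actual scheduler): every task advances only when the currently running one yields, so a task that blocks, e.g., in time.sleep, stalls all the others.

```python
def task(name, log):
    for i in range(3):
        log.append((name, i))
        yield  # cooperative: hand control back to the scheduler

def run(tasks):
    # round-robin over generators; a task that never yields (or that
    # blocks inside a step) would stall every other task
    while tasks:
        for t in list(tasks):
            try:
                next(t)
            except StopIteration:
                tasks.remove(t)

log = []
run([task('a', log), task('b', log)])
print(log[:4])  # tasks interleave: [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Moving I/O processing to its own thread, as 4.1 does, means a user coroutine that blocks in a step no longer starves the framework's own network handling.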
Following are other changes:
- Added ping_interval option to Computation (for distributed computing). The scheduler discovers nodes running when it starts; nodes that start later, or whose replies are lost (which may happen on a noisy network, such as WiFi, which can lose the UDP packets used for broadcasting), may not be detected. ping_interval can be set to a number so the scheduler broadcasts discover messages at that interval to find such nodes.
- The __repr__ of Coro and Channel has been changed to indicate whether they are running in the "user space" asyncoro or the "reactive" asyncoro. If a coroutine's string has ~ as its first character, it was created with Coro (a user coroutine); if it has !, it was created with ReactCoro. This is purely for disambiguation (and to help with debugging in case of issues); programs shouldn't rely on it, as it could change in the future.
- discoronode.py runs an additional asyncoro scheduler in the __main__ process to send periodic heart-beat messages to the current scheduler and to check whether the scheduler is a zombie, etc. This asyncoro doesn't run user computations. If the --tcp_ports option is used, it should include one additional port for this asyncoro, in addition to those for the server processes.
- Coroutines can be created with Coro using the same syntax used for threads; e.g., Coro(target=coro_proc, args=(42,), kwargs={'a': 'test'}). Other keyword arguments, such as group or name used for threads, are not supported, though. The older syntax of specifying the process and arguments positionally is also supported, so Coro(coro_proc, 42, a='test') can be used.
- Default timeouts for sending/receiving messages are given by the module variable MsgTimeout, which has a default value of 10 (seconds); in earlier releases this was hardcoded as 5 seconds. When working with slow networks, the module variable can be set in user programs (e.g., asyncoro.MsgTimeout = 15). Smaller timeouts detect network failures quicker, while bigger timeouts give enough time for sending large messages.
- Added discover_peers method to the asyncoro scheduler. This method can be used to broadcast a discover message in the local network to detect peers. Peers are detected when asyncoro starts; however, the UDP packets sent for broadcasting can be lost in some cases (e.g., with WiFi). In such cases, broadcasting periodically (or until the desired peers are found) may be useful.
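The two Coro calling conventions noted above (thread-style keywords vs. positional) can be modeled with a small argument-normalizing helper; this is a toy illustration of the dispatch, not asyncoro's code:

```python
def normalize(*args, **kwargs):
    # thread-style: Coro(target=f, args=(42,), kwargs={'a': 'test'})
    if 'target' in kwargs:
        return (kwargs['target'],
                tuple(kwargs.get('args', ())),
                dict(kwargs.get('kwargs', {})))
    # positional style: Coro(f, 42, a='test')
    target, *rest = args
    return target, tuple(rest), kwargs

def coro_proc(n, a=None):
    pass

style1 = normalize(target=coro_proc, args=(42,), kwargs={'a': 'test'})
style2 = normalize(coro_proc, 42, a='test')
print(style1 == style2)  # both resolve to the same (target, args, kwargs)
```

Either way the constructor ends up with the same generator function and arguments to start the coroutine with.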
- 4.0 (2016-05-11)
asyncoro version 4.0 has been released. In this release:
- Distributed computing support has been improved; now clients can send long-running computations (earlier, the server processes expected a computation to use yield every few seconds so the monitoring coroutines could send heartbeat messages to the scheduler). Removed discomp* examples, and revised discoro_client* examples to reflect this improvement.
- Added map_results method to discoro_schedulers; this can be used to run a computation with the arguments in a given iterator using remote servers and get their results in a list.
- Message timeout period can be set with asyncoro.MsgTimeout variable and --msg_timeout option to discoronode, with default value of 10 seconds (earlier the timeout was either 2 seconds or 5 seconds).
- discover_peers method has been added to AsynCoro; user programs can send (UDP broadcast) messages to discover peers with this method. asyncoro sends this message when first started; if running over a noisy network, such as WiFi, peers may miss it.
- UDP broadcast messages are sent over the network address used by asyncoro, so if multiple interfaces are available on a peer, the broadcast is sent specifically over the used address, instead of an arbitrary address chosen by the operating system.
- Channel, RCI and Coro instances can now be sent to remote coroutines as messages; earlier, remote coroutines had to use the appropriate locate method to get references to channels, coroutines and RCIs registered at the sender.
- 3.6.16 (2016-04-21)
asyncoro version 3.6.16 has been released. This version fixes an error when the asyncoro scheduler is being created under Windows (GitHub Issue 5).
- 3.6.15 (2016-04-15)
asyncoro version 3.6.15 has been released. In this version:
- Added discover_peers option to disasyncoro and discoronode. If this option is True (the default for disasyncoro), peers broadcast a message when starting to announce themselves and to discover other peers in the local network. If this option is False, that message is not broadcast, so peers won't discover each other during initialization; however, the peer method can be used to selectively add local or remote peers. With discoronode, the default value is False, as most often discoronode is used for computations that communicate only with the client and not with computations running at other discoronode servers. If, however, a computation (coroutine) needs to communicate with other computations on discoronode servers, the nodes can be started with the discover_peers option so each server (one per CPU on every node in the local network) broadcasts a discovery message during initialization.
- Fixed discoro scheduler so it waits until asyncoro in the client program terminates. In earlier releases, the discoro scheduler terminated at the end of '__main__' in the client program, before the client coroutine could schedule remote coroutines; the fix was to force the client coroutine to wait until the computation finished by using the value method. Now this ad hoc fix is not necessary, and all discomp* and discoro* examples have been updated accordingly.
- Added swap space availability as a percentage in node status information. This information is also shown to web clients.
- Added min_pulse_interval and max_pulse_interval options to discoronode so node availability status indications can be requested at a higher frequency than MinPulseInterval, which is 10 seconds.
- The coroutine framework is shut down cleanly when the program is exiting (earlier, discoronode could hang for up to 2 seconds as servers deadlocked while sending close messages to other servers that were also in the process of terminating).
- 3.6.13 (2016-03-28)
asyncoro version 3.6.13 has been released. In this version:
- If the psutil module is available, nodes send availability status (CPU, memory and disk space) at pulse_interval frequency. This information is sent to status_coro with a DiscoroNodeAvailInfo structure. When httpd is used, this information is shown in the web browser so cluster/application status/performance can be monitored.
- Added min_pulse_interval and max_pulse_interval options to the discoronode program. By default, nodes send availability status at the pulse_interval specified by the client, which defaults to 2*MinPulseInterval (MinPulseInterval is defined in discoro.py as 10 seconds). Nodes don't allow pulse_interval to be shorter than MinPulseInterval. If an application's performance needs to be monitored more frequently, discoronode can now be started with min_pulse_interval to override that limit; for example, discoronode.py --min_pulse_interval 5 specifies that the shortest interval clients can use is 5 seconds (they should then create Computation with the pulse_interval=5 option).
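The interaction between a client's requested pulse_interval and the node's minimum, as described above, amounts to a clamp. This sketch restates the rule; effective_pulse_interval is an illustrative helper, not an asyncoro function:

```python
MinPulseInterval = 10  # seconds, as defined in discoro.py per the notes above

def effective_pulse_interval(requested, node_minimum=MinPulseInterval):
    # a node refuses pulse_interval values below its minimum; starting
    # discoronode with --min_pulse_interval lowers that floor
    return max(node_minimum, requested)

print(effective_pulse_interval(5))      # clamped up to the default minimum of 10
print(effective_pulse_interval(5, 5))   # node started with --min_pulse_interval 5
```

So a client asking for pulse_interval=5 only gets 5-second updates if the node was started with a matching (or lower) --min_pulse_interval.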
- 3.6.12 (2016-03-07)
asyncoro version 3.6.12 has been released. This release supports/fixes loading and unloading of user modules (sent with depends) for distributed computing.
- 3.6.11 (2016-02-23)
asyncoro version 3.6.11 has been released. In this version, the discoro_ssh_ec2.py file has been added to examples, illustrating one way to use cloud computing (using Amazon EC2 and port forwarding with ssh), and the implementation of locating resources (e.g., Coro.locate) has been simplified.
- 3.6.10 (2016-02-09)
asyncoro version 3.6.10 has been released. In this version, two additional optional parameters have been added to the RemoteCoroScheduler constructor:
- proc_available can optionally be set to a coroutine, which is executed at the client when a remote server process becomes available; the coroutine can send additional data (that is not sent as part of depends), execute remote coroutine(s) on that process (e.g., to initialize the process, including reading data into memory for in-memory processing), etc. The coroutine should exit with 0 to indicate successful initialization; any other value causes the scheduler to ignore that server for running remote coroutines.
- proc_close can optionally be set to a coroutine, which is executed at the client when a remote server process is about to be closed (in which case, it can execute remote coroutines on that process, for example, to clean up the process, such as sending result files back to the client, deleting global variables that have been used, etc.), or when it has already closed (e.g., the user has closed the server manually / terminated discoronode, or the server has been closed because it was deemed a zombie, as no heartbeat messages were received for the zombie period).
- Examples illustrating various use cases / features of remote coroutine execution (discomp*.py and discoro_client*.py files in the examples directory) have been simplified using the proc_available and proc_close parameters above.
- 3.6.9 (2016-01-26)
asyncoro version 3.6.9 has been released. In this release, send_file has been changed so the timeout parameter is the number (or fraction) of seconds to deliver (i.e., for the receiver to receive and acknowledge) at most 1MB of data (earlier, the timeout was for delivering an arbitrary amount of data, making it unreliable).
- 3.6.8 (2016-01-18)
asyncoro version 3.6.8 has been released. In this version- tcp_ports option has been added to discoronode.py (program to start discoro servers for distributed communicating processes). With this option server processes can be started at given ports, instead of random (available) ports, which is default. This, and ext_ip_addr when necessary, can be used to access servers in remote networks / cloud.
- Added submit_at and submit methods to RemoteCoroScheduler (which was renamed from ProcScheduler in earlier versions) in discoro_schedulers.py. These methods are used to simplify a few examples of distributed communicating processes, especially those that use distributed in-memory processing.
- Fixed an issue with closing IOCP (Windows) sockets: when an IOCP socket is closed while data is also being sent with sendall, the closing method should check which operation the IOCP server processed instead of clearing the buffers used. This was observed only with specific steps while testing the discomp7.py example, but is likely possible otherwise.
- 3.6.7 (2016-01-12)
asyncoro version 3.6.7 has been released. This version fixes issues with socket errors; specifically, if socket I/O causes an exception while the scheduler is processing socket I/O events (mostly with SSL), the exception is thrown back to the user's coroutine instead of crashing the scheduler. With these fixes, invalid/incorrect SSL use by clients doesn't crash the server.
- 3.6.6 (2016-01-07)
asyncoro version 3.6.6 has been released. Following are changes since the last release:
- Fixed 'hash' method in 'Coro' to avoid unbounded recursion for local coroutines (broken since the 3.6.3 release).
- Fixed memory leak with discoronode under Windows.
- Fixed asynchronous pipes for Windows. 'discomp7' example has been fixed to work under Windows as well.
- Added commands "status" and "close" to discoronode.
- Added 'daemon' option to discoro and discoronode so they can be started from init scripts, for example.
- Changed 'pulse_interval' and 'zombie_period' defaults for Computation to prevent a hung/deadlocked client from hogging discoronode.
- 3.6.5 (2015-12-31)
asyncoro version 3.6.5 has been released. In this version:
- Fixed discoronode so it removes its PID file when terminating. Otherwise, the next run of discoronode would not start until the file was removed manually. This problem was introduced in the 3.6.3 release.
- Fixed installation under Python 3 (broken with the 3.6.4 release).
Sorry about the issues with the last two releases.
- 3.6.3 (2015-12-29)
asyncoro version 3.6.3 has been released. In this version:
- Added Dockerfile to build docker images to run the discoronode program in containers. This fully isolates discoronode so that executing arbitrary programs does not affect the host operating system.
- Added serve option to the discoronode program to quit after executing a given number of computations. This option can be used in conjunction with docker images to create the same environment for every computation.
- Creating asynchronous pipes in Windows has been fixed. Changed discomp7.py example (that shows how to distribute and execute standalone programs that read standard input and write to standard output) to work with Windows.
- 3.6.2 (2015-12-01)
asyncoro version 3.6.2 has been released. In this version:
- Added support for node status (total and usage of CPU, memory and disk) at ping_interval period, if the psutil module is available. The HTTP server relays this information so the status of each node can be seen in a web browser.
- Added discomp7.py to 'examples'. This example shows how a client can distribute and execute an external program that reads from standard input and writes to standard output. The input is sent from the client to the remotely executing program, and the output is received from the remote program.
- 3.6.1 (2015-11-23)
asyncoro version 3.6.1 has been released. In this version, ProcScheduler has been updated; it includes execute and execute_at methods that simplify executing remote coroutines and getting their results. See discomp6.py for an example. Examples also include discomp3.py, discomp4.py and discomp5.py, which show how remote processes that don't use yield (often) can be executed using AsyncThreadPool and distributed in-memory processing.
- 3.6.0 (2015-11-09)
asyncoro version 3.6.0 has been released. In this version, ProcScheduler has been added. This scheduler simplifies distributed computing, making it almost as easy as the dispy project, except that with asyncoro all coroutines can communicate with message passing, exchange files, etc. See 'discomp*.py' files for examples.
- 3.5 (2015-09-07)
asyncoro version 3.5 has been released. This version:
- Includes 'httpd' module to monitor a discoro cluster (nodes, servers, distributed coroutines); see HTTP Server for details.
- StatusMessage in discoro module has been changed to DiscoroStatus to avoid conflicts with other modules. location attribute has been changed to info in DiscoroStatus.
- When a coroutine has been started at a remote server, discoro now sends a DiscoroStatus message with a CoroInfo structure.
- 3.4.1 (2015-08-02)
asyncoro version 3.4.1 has been released. This version:
- Supports 'zombie_period' option. If it is given, a process closes the currently scheduled computation if it stays idle (no jobs running) for that many seconds. This discards idle computations so other scheduled computations can use discoro.
- Passing objects of user defined classes works with Python 3 under Windows, where processes may use __mp_main__ namespace for global scope, so user defined code must be imported into this namespace for object (de)serialization to work.
- 3.4 (2015-07-31)
asyncoro version 3.4 has been released. This version supports distributed communicating processes (as implemented in version 3.3) with Windows. In this version peer method of AsynCoro has been changed to take Location instance (or host name or IP address, as before).
In this version, Python 3 under Windows is not fully supported yet: client objects (instances of classes in client programs) can't be sent to discoro processes. The next version (should be out sometime next week) will have this issue fixed.
- 3.3 (2015-07-08)
asyncoro version 3.3 has been released. The major changes in this version are:
- Support for distributing computations has been expanded. It now includes a scheduler, so computations / jobs can be scheduled with the scheduler, which takes care of distributing them, monitoring jobs, nodes and processes, and informing the client so it can take appropriate action. With this, computations can easily create distributed communicating coroutines; the client and distributed coroutines can exchange data with asyncoro's message passing and transfer files. The scheduler can be started in the client program itself (if no other clients can use the nodes simultaneously) or started as a program on a node (in which case the scheduler can be used by more than one client, although the clients are serviced one after the other). See 'discoro_client.py' and 'discoro_client2.py' for simple examples of how to use discoro. The documentation on discoro will be expanded in the next few days.
- Peer status in asyncoro has been changed to messages; earlier this was supported with callbacks, but message passing is more convenient / appropriate.
- 3.1 (2014-11-02)
asyncoro version 3.1 has been released. This version handles peer (network) failures gracefully: if connections to a peer fail consecutively for MaxConnectionErrors (defined in the disasyncoro module with a default value of 10), the peer is assumed dead and sending/delivering messages to it fails immediately.
- 3.0 (2014-07-22)
asyncoro version 3.0 has been released. Major changes since 2.9 are:
- discoro Computation's 'setup' and 'cleanup' methods take functions with arguments as parameters. These functions are sent to the peer and executed there. They can prepare for computation (e.g., load modules, unpack sent files) and perform any cleanup, respectively.
- If a monitored coroutine fails (i.e., terminates due to an exception), it can be restarted by the monitor with 'hot_swap'. Only local monitors can restart the coroutine; remote monitors only get the notification about the exit status, but can't restart it.
- 'Location' class can be initialized with either IP address (as before) or host name.
- 2.9 (2014-06-23)
asyncoro version 2.9 has been released. This version adds support for AsyncFile and AsyncPipe under Windows. See the example files 'pipe_csum.py' and 'pipe_grep.py' for how to use these features.
- 2.8 (2014-06-14)
asyncoro version 2.8 has been released. Major changes since 2.7 are:
- AsynCoro (scheduler) earlier had a 'terminate' method. This has been separated into 'finish' and 'kill' methods; 'finish' waits for all non-daemon coroutines to finish before shutting down, whereas 'kill' forcefully terminates non-daemon coroutines before shutting down.
- The 'dest_path_prefix' keyword argument to AsynCoro in the disasyncoro module has been renamed to 'dest_path', and the 'dest_path' keyword argument to the 'send_file' and 'del_file' methods has been renamed to 'dir'.
- The 'discoro' module (a distributed / parallel computing framework similar to the dispy project) now has '0' as the default value for the '-c' option (number of processors to use), so by default discoro uses all processors to run servers. The '-n' option, which in earlier versions gave an IP address / host name, has been renamed to '-i'; now the '-n' option can be used to give a symbolic name for the servers. This name is appended with a hyphen followed by the processor number (starting with 0) when AsynCoro is created.
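The per-server naming described above can be illustrated as follows (the exact format is an assumption based on the description: name, hyphen, processor number starting at 0):

```python
# Hypothetical illustration of server names derived from the '-n'
# option value and the number of processors in use.
def server_names(name, cpus):
    return ['%s-%s' % (name, cpu) for cpu in range(cpus)]
```

For example, `server_names('node1', 3)` would give `['node1-0', 'node1-1', 'node1-2']`.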
- 2.7 (2014-06-06)
asyncoro version 2.7 has been released. This version improves support for distributed computing with remote coroutines. 'discoro.py' can now be invoked with the number of CPUs, to run that many instances of discoro_server so that compute-intensive coroutines can run in parallel (one per CPU, for example). These coroutines and the client can communicate using message passing. See 'discoro_client.py' for an example on how to use this module.
Note that it is up to the client to schedule jobs for effective use of CPUs on remote nodes. The dispy project is easier to use if jobs need to be scheduled, especially if many nodes are used for distributed computing. However, with dispy the client and remote computations cannot communicate (except for computations sending intermediate results to the client).
- 2.6 (2014-05-31)
asyncoro version 2.6 has been released. This version adds three features:
- The 'discoro' module provides 'Computation' and 'discoro_server'. With this, computation fragments can be sent to a remote asyncoro to create remote coroutines. 'discoro' includes a simple use case that starts 'discoro_server' to accept computations from clients. To use it as is, start the program on a computer. Clients can then send a 'Computation' to this asyncoro to create coroutines with the given computation. The client and remote coroutines can use message passing to exchange data, monitoring etc. See the 'discoro_client.py' program for an example on how to use these features.
- The 'asyncoro' module has 'CategorizeMessages', which can split messages incoming to a coroutine into different categories. The 'add' method can be used to add categorizing methods. Each method is called (with the most recently added method first) with an incoming message. The method should return a category, in which case the message is put in that category, or None, in which case the next most recently added method is called with the same message. If all methods return None, the message is added to category None. These messages can then be retrieved with the 'receive' method (similar to Coro.receive) on a per-category basis. This feature can be useful when different types of messages are being processed and they need to be handled on a priority basis, for example.
- 'AsynCoro' has a 'peer_status' method to register a callback. When a (remote) peer is discovered or terminates, the callback is called with three parameters: the name of the peer, its location and its status. The status is 1 if the peer is online and 0 if it is offline. See 'discoro_client.py' on how to use this feature.
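The dispatch rules of 'CategorizeMessages' described above can be sketched in plain Python (this only illustrates the categorization logic; the actual asyncoro API is used from within coroutines and differs in details):

```python
class Categorizer:
    """Plain-Python sketch of CategorizeMessages' dispatch rules
    (illustrative only; not asyncoro's implementation)."""
    def __init__(self):
        self.methods = []     # most recently added method is tried first
        self.categories = {}  # category -> queued messages

    def add(self, method):
        self.methods.insert(0, method)

    def put(self, msg):
        for method in self.methods:
            category = method(msg)
            if category is not None:
                self.categories.setdefault(category, []).append(msg)
                return
        # all methods returned None: file under category None
        self.categories.setdefault(None, []).append(msg)

    def receive(self, category=None):
        # retrieve the oldest queued message in the given category
        msgs = self.categories.get(category)
        return msgs.pop(0) if msgs else None
```

A receiver can then drain a high-priority category before falling back to the default (None) category.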
- 2.5 (2014-05-23)
asyncoro version 2.5 has been released. With this version, when AsyncSocket, AsyncFile and AsyncPipe are used with timeouts and the timeout expires before a read/write is complete but data has been read/written partially, that partial result is given as the result of the operation. If no data has been read/written before the timeout, then the appropriate timeout exception is thrown to the coroutine. (Earlier versions always threw the timeout exception, ignoring partially read/written data.) This version also includes an example demonstrating the hot swap feature.
- 2.4 (2014-05-18)
Asyncoro version 2.4 has been released. The changes since 2.3 are:
- The 'register' method of Channel and RCI registers with the names used in initialization; it doesn't take a 'name' parameter for registration anymore.
- Removed 'restart' method in Coro; if restart is needed, hot swap feature can be used with the same generator function.
- Added 'close' method to Channel so subscription / message passing to it are stopped.
- 2.3 (2014-05-11)
asyncoro version 2.3 has been released.
In this version AsynCoro has been split into two parts: the one in the asyncoro module is for local coroutines (i.e., without networking services) and the one in disasyncoro is for communicating with peers (for distributed programming). If a program has no need for distributed programming, no changes are needed from earlier versions. However, if distributed programming is used, the only change needed is to import disasyncoro instead of asyncoro. See 'remote_*.py' in the 'examples' directory.
Coro, AsynCoro, Channel and RCI have 'location' and 'name' "get property" methods.
- 2.2 (2014-05-08)
asyncoro version 2.2 has been released.
With this version the default/keyword parameter 'coro' is optional in the generator method used to create coroutines with Coro. If the parameter is present, it will be set to the instance of the coroutine as before, but if it is absent, it is no longer an error. If the coroutine instance is needed, it can be obtained with the 'AsynCoro.cur_coro()' method as
coro = AsynCoro.cur_coro()
AsynCoroThreadPool has been renamed to AsyncThreadPool.
AsyncThreadPool's async_task doesn't take a 'coro' parameter anymore; it is obtained with 'AsynCoro.cur_coro()' (otherwise, there is a danger that one coroutine could pass another coroutine as the parameter, blocking that other coroutine).
AsynCoroDBCursor has been renamed to AsyncDBCursor.
- 2.1 (2014-05-04)
asyncoro version 2.1 has been released. The major changes since version 2.0 are:
- AsyncFile and AsyncPipe have been added; they provide asynchronous I/O operations on files and pipes under Linux, OS X (and other Unix variants), but not under Windows (yet). AsyncFile is meant for file objects whose I/O may block (e.g., file handles used with sockets and pipes, but not regular files, which don't block on I/O). AsyncPipe supports read, write and communicate methods for chained or unchained subprocess.Popen objects. See 'pipe_csum.py', 'pipe_grep.py' and 'socket_afile.py' in the examples directory for illustrations on how to use these.
- Added 'finish' method to Coro so a coroutine can wait for another coroutine to finish and get its result.
- MonitorException is now sent as a message to the monitor coroutine, instead of being thrown as an exception. This makes it easier to process exit status notifications. See 'rci_monitor_client.py' for an example.
- Added 'atexit' to AsynCoro to register functions that will be called after asyncoro scheduler has terminated.
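The idea behind delivering exit status as a message rather than an exception (as described above for MonitorException) can be sketched generically in plain Python (hypothetical names; asyncoro's actual notification carries more detail and is delivered to a monitor coroutine):

```python
import queue

def run_monitored(fn, monitor_inbox):
    # report the outcome to the monitor as a message instead of
    # raising an exception in the monitor's own context
    try:
        result = fn()
        monitor_inbox.put(('finished', result))
    except Exception as exc:
        monitor_inbox.put(('terminated', exc))

inbox = queue.Queue()
run_monitored(lambda: 1 / 0, inbox)
status, value = inbox.get()
```

The monitor can then process notifications in its normal receive loop, alongside any other messages.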
- 2.0 (2014-04-15)
asyncoro version 2.0 has been released. This version changes the locate methods used to get references to remote (distributed) coroutines, channels, peers and Remote Coroutine Interface (RCI). Earlier versions had these as methods in the AsynCoro class, such as locate_coro, locate_channel etc. Now these are static methods in the respective classes. For example, locate_coro of earlier versions is now the static method locate in Coro, so to get a reference to a remote coroutine, use 'rcoro = yield Coro.locate("name")'. See the included examples and the online API documentation for more details on how to use them.
- 1.7 (2014-04-06)
asyncoro version 1.7 has been released. The major changes since the previous release are:
- Streaming of messages to peers is now supported. The 'peer' method of asyncoro now has a stream_send parameter. If stream_send=True, then messages to the peer are sent as a stream without closing the connection - the same connection is used for sending subsequent messages. This can dramatically improve performance if many messages (per second, say) are sent to peers.
- The peer method also supports a tcp_port option to specify the TCP port where the peer can be contacted. With this option, UDP messages are not needed to locate peers, so asyncoro running in remote networks can be used, for example, with SSH port forwarding, without having to change firewall settings at all.
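The performance benefit of streaming comes from reusing one connection for many messages. A generic length-prefixed framing scheme over a persistent socket can sketch the idea (this is not asyncoro's actual wire format, just an illustration of the technique):

```python
import socket
import struct

def recv_exact(sock, n):
    # read exactly n bytes, looping over short reads
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('connection closed')
        buf += chunk
    return buf

def send_msg(sock, payload):
    # a 4-byte big-endian length prefix lets many messages share
    # one persistent connection, one connection setup in total
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_msg(sock):
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()
send_msg(a, b'first')
send_msg(a, b'second')
assert recv_msg(b) == b'first' and recv_msg(b) == b'second'
```

Without streaming, each message would pay the cost of connection setup and teardown; with a persistent connection, only the framed bytes travel per message.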
- 1.6 (2014-03-22)
asyncoro version 1.6 has been released.
This version removes ChannelMessage, which was used in earlier versions to wrap messages sent over channels (this was inconsistent with messages sent directly to coroutines). If it is necessary to know the sender of a message, messages can be wrapped, for example, by providing an appropriate 'transform' function for the channels.
- 1.5 (2014-03-11)
asyncoro version 1.5 has been released. A short summary of changes since version 1.4:
- Fixed Channel's "deliver" method so it works on a remote channel when parameter n > 1 (n is the minimum number of recipients of a message).
- Removed UnbufferedChannel as it is superfluous.
- Coro's "set_daemon" method now takes a flag so the daemon status can be toggled.
- Added "location" method to Coro and Channel to get network address where they are running.
- More scripts are included in the 'examples' directory.