Hello everyone,
Today we released iceoryx2 v0.4!
iceoryx2 is a service-based inter-process communication (IPC) library designed
to make communication between processes as fast as possible - like Unix domain
sockets or message queues, but orders of magnitude faster and easier to use. It
also comes with advanced features such as circular buffers, history, event
notifications, publish-subscribe messaging, and a decentralized architecture
with no need for a broker.
For example, if you're working in robotics and need to process frames from a
camera across multiple processes, iceoryx2 makes it simple to set that up. Need
to retain only the latest three camera images? No problem - circular buffers
prevent your memory from overflowing, even if a process is lagging. The history
feature ensures you get the last three images immediately after connecting to
the camera service, as long as they’re still available.
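The retention semantics can be sketched with a plain ring buffer. This is a toy stand-in (the `CameraService` class and its methods are hypothetical, not iceoryx2's actual API); it only illustrates the "keep the latest N, hand them to late joiners" behavior described above:

```python
from collections import deque

class CameraService:
    """Hypothetical stand-in for a service with history size 3."""
    def __init__(self, history_size=3):
        # A deque with maxlen drops the oldest frame automatically,
        # so memory never grows even if a consumer is lagging.
        self.history = deque(maxlen=history_size)

    def publish(self, frame):
        self.history.append(frame)

    def subscribe(self):
        # A late subscriber immediately receives the retained history.
        return list(self.history)

service = CameraService()
for frame in ["frame1", "frame2", "frame3", "frame4"]:
    service.publish(frame)

# Only the latest three frames are retained.
print(service.subscribe())  # ['frame2', 'frame3', 'frame4']
```

In iceoryx2 itself, buffer and history sizes are configured per service, and the data lives in shared memory rather than in a Python object.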
Another great use case is GUI applications, such as window managers or
editors. If you want to support plugins in multiple languages, iceoryx2 lets
you connect those processes - for example, to remotely control your editor or
window manager. Best of all, thanks to zero-copy communication, you can transfer
gigabytes of data with incredibly low latency.
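The mechanism underneath zero-copy IPC is OS shared memory: the producer writes into a memory segment that the consumer maps directly, so the payload itself is never copied between processes. Here is a minimal sketch of that mechanism using only the Python standard library (iceoryx2 manages the segment lifetime, synchronization, and typing for you; this is just the raw OS primitive):

```python
from multiprocessing import shared_memory

# Producer: allocate a named shared segment and write data in place.
shm = shared_memory.SharedMemory(create=True, size=1024)
shm.buf[:5] = b"hello"

# Consumer (normally a separate process): attach to the segment by
# name and read the same physical memory - the payload is not copied.
view = shared_memory.SharedMemory(name=shm.name)
data = bytes(view.buf[:5])
print(data)  # b'hello'

view.close()
shm.close()
shm.unlink()
```

Because only a name (and some bookkeeping) crosses the process boundary, the transfer cost is essentially independent of payload size - which is why gigabyte-scale payloads stay cheap.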
Speaking of latency, on some systems, we've achieved latency below 100ns when
sending data between processes - and we haven't even begun serious performance
optimizations yet. So, there’s still room for improvement! If you’re in
high-frequency trading or any other use case where ultra-low latency matters,
iceoryx2 might be just what you need.
If you’re curious to learn more about the new features and what’s coming next,
check out the full iceoryx2 v0.4 release announcement.
Elfenpiff
Links:
* GitHub: https://github.com/eclipse-iceoryx/iceoryx2
* iceoryx2 v0.4 release announcement: https://ekxide.io/blog/iceoryx2-0-4-release/
* crates.io: https://crates.io/crates/iceoryx2
* docs.rs: https://docs.rs/iceoryx2/0.4.0/iceoryx2/
Shared memory is crazy fast, and I'm surprised that there aren't more things that take advantage of it. Super odd that gRPC doesn't do shared memory, and apparently never plans to [1].
All that said, the constructive criticism I can offer for this post is that in mass-consumption announcements like this one for your project, you should include:
- RPC throughput (with the usual caveats/disclaimers)
- A comparison (ideally graphed) to an alternative approach (e.g. domain sockets)
- Your best/most concise and expressive usage snippet
100ns is great to know, but I would really like to know how many RPC/s this translates to without doing the math myself, or to see it with realistic deserialization on the other end.
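The back-of-the-envelope version (my arithmetic, not a figure from the authors): if one-way latency is 100 ns, a request/response pair costs at least 200 ns, which caps a single ping-ponging pair of processes at roughly 5 million RPC/s before any serialization or application work:

```python
# Theoretical RPC/s upper bound from the quoted one-way latency.
one_way_latency_s = 100e-9             # 100 ns, the figure quoted above
round_trip_s = 2 * one_way_latency_s   # request + response
max_rpc_per_s = 1 / round_trip_s
print(f"{max_rpc_per_s:,.0f} RPC/s upper bound")  # 5,000,000 RPC/s upper bound
```

Real throughput with deserialization on the receiving side will be lower, and batching/pipelining can push aggregate message rates well above this single-round-trip bound.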
[0]: https://3tilley.github.io/posts/simple-ipc-ping-pong/
[1]: https://github.com/grpc/grpc/issues/19959