History

Unexicon originally started as a vehicle for trying out OpenSS7 software without having to go through the trouble of loading a system and getting it running. It grew into a custom-spun Linux distribution for telecommunications.

Following are some of the design decisions that we made along the way:

Choice of Base Distribution
  • We began the original work with live distributions based on Slackware. The first problem encountered was that Slackware did not support up-to-date kernels capable of compressing and packaging a live system. Most derivatives, such as Slax and Salix, were rolling their own kernels to support the live distribution.
  • Gentoo was considered briefly. Very briefly. Sabayon is a good live distro, but building current packages is too much work.
  • Spins based on Red Hat were ruled out because the software included in the Enterprise versions is too out of date, too few software packages are supported, and their kernels are obsolete.
  • Spins based on Debian were ruled out because Debian has too long a release cycle and the stable repositories are not up to date.
  • Others were considered, but ultimately we landed on Arch Linux. Arch is a popular mainstream distribution, is a rolling release that is always up to date, supports a long-term-support kernel (the 3.0 series), is easy to build packages for, has an accessible community and user software repository, and is responsive to bug reports. We had to ride the systemd migration and the upgrade from the 2.6.32-lts to the 3.0-lts kernel, but aside from that it is a powerful, fast and stable server environment.
Design Principles
To support a telecom spin, the system needed:
  1. Carrier-grade reliability and stability.
     Linux itself is reliable and stable. We have servers here that have wrapped their uptime clocks (more than 4 years of uptime). The usual source of problems is heat and friction, so we designed a hardware platform that has no moving parts, is convection cooled, and has a TDP under 60 W, leaving thermal headroom wide open. 48 VDC power and a NEBS-3-compatible chassis round out the solution. See the Hardware page for the solution.
  2. Self-organizing, fault-tolerant clustering.
     Clustering is difficult for designers to understand at the best of times, and operators are often confused by the approaches, so we wanted self-configuring clustering. We started with the Spread toolkit, but it required static configuration tables for which we would have had to write reams of manuals. We ported forward the ISIS toolkit, but eventually landed on a distributed, fault-tolerant state-machine approach based on reliable broadcast/multicast, with a Wackamole-style approach to IP pooling and takeover. Nodes discover the group and join in a role. Failed nodes are identified quickly (under 25 ms) and their load is redistributed over the pool. Management stations discover the pool and can visualize its operation. Nodes can be taken offline by simply yanking their plugs.
  3. Full online checkpoint and rollback for software upgrades to deployed systems.
     Few distros understand this concept. When all of your revenue is running through a set of boxes, downtime due to failed software upgrades is intolerable. By having our own distro and using LVM (the Logical Volume Manager) with its longstanding ability to snapshot volumes, we have accomplished full checkpoint and rollback of the system image, independent of the application (subscriber) database. This includes the ability to swing back and forth between the checkpointed and upgraded systems until a commit decision can be made. Offline backup and restore of the entire system image is integral.
  4. Single build for both deployment servers and management workstations.
     Linux helps out a lot here. Most distributions, including Arch, are equally suited to the server room and the desktop. The decision to use graphical login on the server platform with light-weight window managers has allowed a common software base: each load is equally suited to use as a management station or as a deployed server. A live distribution on bootable USB flash drives means you can have a management station on your keychain.
  5. Automatic networking.
     Most Linux distributions fall down hard when it comes to automatic networking. They assume well-configured WAPs or DHCP servers, and they usually assume that a user is there to configure them. Unexicon differs in that all of the networking is built to be automatic and self-configuring. The nodes will slave off of DHCP or NIS servers for VLAN 0 traffic, but they form their own spanning-tree-protocol bridged VLAN among themselves and manage their IPs on VLAN 557 through cooperation. Nodes can automatically reach centralized NOC centres housing a single Unexicon management node, providing automatic VPN access into the NOC. Firewalls, routing (RIP, OSPF, Babel, BGP), NTP, mDNS, DNS-SD and XDMCP are all automatic, with no manual configuration required. Just plug them in and they find each other.
  6. As little configuration as possible, to avoid learning curve, training, reams of paper documentation, and other dead weight.
     The choice of graphical login on the server platform simplifies much of the administration and management of the platform. Integrated management stations that provide network visualization and targeted management with self-documenting SNMP MIBs round out the solution. All telecom modules are designed for near-zero configuration. Nodes joining a cluster obtain configuration information from the distributed state machine. Base Linux system administration and Unix systems knowledge is leveraged as much as possible. Although we will have documents for add-on modules, they will mostly be descriptive. All management tools are a click away on the integrated desktop.
  7. Scale down and scale up.
     Integration of the OpenSS7 STREAMS packages provides performance gains of up to 500% over a base kernel without the packages. This allows low-cost Atom, ARM Cortex or other fanless, low-power server boards to be used without sacrificing performance. A server can handle a couple of links or trunks, or the entire signalling traffic of a small country.
  8. Minimal capital cost.
     By scaling down we scale up. A bare-bones platform can be as cheap as $300; a fully loaded, full-blown system, $1,200. Anyone can afford to buy one of these. And they scale up: four systems can replace two $4 million STPs, yet they are cheap enough to hand out to your customers for legacy SS7 access or TDM-to-VoIP trunking.
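The LVM checkpoint-and-rollback described above can be sketched with stock LVM commands. The volume names here (vg0/root) and the snapshot size are placeholders for illustration, not the layout Unexicon actually ships:

```shell
# Checkpoint: snapshot the system volume before upgrading.
# vg0/root is a placeholder; the snapshot size must cover all
# writes made while the checkpoint is held open.
lvcreate --size 10G --snapshot --name root_checkpoint vg0/root

# ... perform the software upgrade on vg0/root and test it ...

# Commit: the upgrade is good, so discard the checkpoint.
lvremove vg0/root_checkpoint

# Rollback instead: merge the snapshot back into the origin,
# reverting vg0/root to its pre-upgrade state (the merge of an
# in-use root volume completes on the next activation/reboot).
# lvconvert --merge vg0/root_checkpoint
```

Because the subscriber database lives on a separate volume, it is untouched by either the commit or the rollback.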
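The self-configuring bridged VLAN described above can be sketched with iproute2. The interface names and the address range are illustrative assumptions; only VLAN 557 and the use of spanning tree come from the text:

```shell
# Create a bridge with spanning tree enabled so nodes can be
# cross-connected without forming forwarding loops.
ip link add name br0 type bridge stp_state 1
ip link set eth0 master br0
ip link set br0 up

# Carry inter-node cluster traffic on VLAN 557.
ip link add link br0 name br0.557 type vlan id 557
ip link set br0.557 up

# The node's cooperatively managed address on the cluster VLAN
# (the 10.57.0.0/16 range is a placeholder).
ip addr add 10.57.0.1/16 dev br0.557
```

In the deployed system the address assignment itself is negotiated among the nodes rather than configured by hand as in this sketch.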

If you notice some principle or objective that we have missed above, drop us a note. See the Contact page.



© Copyright 2012, 2013  OpenSS7 Corporation.  All rights reserved.