Sonar Releases New Assistive Technology Operating System
New Software Brings Accessibility To Those In Need
Today Sonar announced the release of its new assistive-technology-based operating system, bringing accessibility options to computer users around the world. The new operating system is designed for users who have difficulty using computers because of vision impairments and other disabilities.
The new operating system is built on Manjaro Linux, a derivative of Arch Linux, in an effort not only to provide a complete system full of accessibility options for free, but to do so on a stable platform. Sonar’s adaptive technology engineering is spearheaded by Kyle Brouhard with the help of the Manjaro project leader Phil Miller.
Some of the accessibility features offered by Sonar’s new assistive-technology-based operating system include:
* Orca, a screen-reading technology that allows vision-impaired users to listen to content displayed on the monitor.
* On-screen magnification to allow users with poor eyesight to read small print displayed on the monitor.
* An on-screen keyboard that can be used to type with a mouse or trackball, suitable for users unable to use a traditional keyboard for typing.
* A custom font developed to help readers that have dyslexia.
Sonar is designed to completely replace the traditional operating systems currently available; the goal of Sonar’s new assistive technology is to bring accessibility options, free of charge, to the users who need them. The accessible operating system brings the ability to use technology to those who have been physically incapable of doing so in the past.
The software is available for immediate, free download on the company’s website. Future updates and releases will also be available at no cost, so users who need accessibility options can access technology without paying extra.
If you wish to check out Sonar and download it for free, here is the link.
The ACF would like to thank Bill Cox and Isaac Porat for their work on Speech Hub. This is a great contribution to Free software and a help to the vision-impaired community. You may be asking, what is Speech Hub? Well, here is some information taken from the Speech Hub website.
SpeechHub is the first cross-platform TTS server dedicated to the requirements of the vision-impaired community. A speech server sits between a client application, such as a screen reader or a self-voicing program, and a speech synthesizer. It accepts commands from the client application and, after optional processing, passes them to the speech synthesizer. This approach offers a uniform interface to applications that want to use speech, allowing developers to concentrate on what their application does rather than worry about speech processing and connecting to different synthesizers. The concept of a speech server is not new, and there are both commercial and Free software examples. In the context of the VI community and Free software, the best-known example is Speech Dispatcher, which is well established in GNU/Linux.
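The server-in-the-middle pattern described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names below are invented for the sketch and are not SpeechHub’s actual API.

```python
class Synthesizer:
    """Stand-in for a TTS engine backend (e.g. eSpeak)."""

    def speak(self, text):
        # A real backend would return audio samples; we return a marker string.
        return f"[audio for: {text}]"


class SpeechServer:
    """Sits between a client application and a synthesizer.

    Accepts commands from the client, optionally processes the text,
    then hands it to the synthesizer backend.
    """

    def __init__(self, synth):
        self.synth = synth

    def handle(self, command, payload):
        if command == "SPEAK":
            # Optional central processing before synthesis.
            processed = payload.strip()
            return self.synth.speak(processed)
        return "ERR unknown command"


server = SpeechServer(Synthesizer())
print(server.handle("SPEAK", "Hello world "))  # [audio for: Hello world]
```

The point of the pattern is that the client application only ever talks to the server’s uniform interface; swapping synthesizers does not change client code.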
SpeechHub’s cross-platform architecture offers flexibility, reliability, and a host of new features. It is now available for Windows and GNU/Linux; other platforms will be considered in the future.
The connection to an application is done using a client plug-in, available for:
* Windows – NVDA screen reader
* Linux – Orca, Speakup, and Yasr screen readers (still in development)
* Windows – SpeakOn, when a new version is released
SpeechHub currently supports the following synthesizers:
* eSpeak, with many voice variants (courtesy of NVDA) and amazing language support
* Mary TTS, 4 English voices usable with NVDA and Orca
* Pico TTS, voices in six languages
* Microsoft Speech API version 5 on Windows
* Microsoft Speech Platform on Windows 7 and Windows 8
* Voxin (IBM TTS) on Linux (requires a license from http://voxin.oralux.net)
Communication with the server uses a subset of, and extensions to, the standard SSIP protocol over TCP/IP, stdio, or Unix sockets. SpeechHub promotes uniformity in TTS by optionally processing text before it is sent to the synthesizers: formatting, text replacement, punctuation handling, and capital-letter indication are all customizable. Audio data samples are sent back from the synthesizer (if supported) and processed centrally by the server. It is highly responsive, and speeding up speech is built in using Sonic for most synthesizers. There is built-in support for creating audio books.
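SSIP is a line-oriented text protocol, so framing a request is simple string work. The sketch below shows how a client might frame a SPEAK request in the SSIP style (CRLF line endings, body terminated by a lone period); the exact command set SpeechHub accepts is its own subset and extension of SSIP, so treat this as an assumption about the framing only.

```python
def frame_speak(text):
    """Frame an SSIP-style SPEAK request.

    SSIP terminates lines with CRLF and ends a SPEAK body with a line
    containing a single period. Normalizing CRLF inside the payload
    keeps the terminator unambiguous.
    """
    lines = ["SPEAK", text.replace("\r\n", "\n"), "."]
    return "\r\n".join(lines) + "\r\n"


# The framed bytes would then be written to the server over TCP/IP,
# stdio, or a Unix socket.
print(repr(frame_speak("Hello")))  # 'SPEAK\r\nHello\r\n.\r\n'
```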
Communication with TTS synthesizers is through engine drivers, which are independent of the server and can be implemented in any programming language. Very little integration effort is required for new TTS engines; for example, the eSpeak SpeechHub engine driver is less than 200 lines of code. Binary TTS engines are compatible across all major Linux distros: engines can be integrated once and run everywhere, reliably, for years. Text replacement can be customized for each engine, avoiding wrong pronunciations and text that causes crashes. In the event that an engine crashes, it is restarted automatically.
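The per-engine text replacement mentioned above amounts to a substitution table applied before text reaches a given synthesizer. A minimal sketch, with an invented example table (these particular entries are illustrative, not SpeechHub’s shipped rules):

```python
# Hypothetical replacement table for one engine: strings the engine
# mispronounces (or chokes on) are mapped to safe alternatives.
ESPEAK_REPLACEMENTS = {
    "GNU/Linux": "GNU Linux",  # e.g. avoid the slash being spoken oddly
    "NVDA": "N V D A",         # e.g. force letter-by-letter reading
}


def apply_replacements(text, table):
    """Apply an engine-specific replacement table before synthesis."""
    for old, new in table.items():
        text = text.replace(old, new)
    return text


print(apply_replacements("NVDA runs on GNU/Linux", ESPEAK_REPLACEMENTS))
# N V D A runs on GNU Linux
```

Because each engine driver carries its own table, a replacement that fixes one synthesizer’s pronunciation never leaks into another’s output.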
SpeechHub is developed entirely by people with vision impairments for people with vision impairments.
As you can see, this is some great development, and the ACF is looking forward to more releases and further development of this program. Thanks go out to Bill Cox and Isaac Porat for this great work!