
[Emacspeak] Re: Introduction & Voice Configuration Questions



OK, so I will see if I can find those bindings for vocalizer; it may
take a while.

On Tue, 25 Jan 2022 18:49:29 -0500,
Tim Cross via Emacspeak wrote:
> 
> 
> John Covici via Emacspeak <emacspeak(a)emacspeak.org> writes:
> 
> > So, how would I get those voxin voices to work with emacspeak?  What
> > would have to be done to make this work?
> >
> 
> A speech server for the vocalizer voices would need to be written. I
> don't know anything about vocalizer or what that involves. However, the
> basic architecture is
> 
> Create a vocalizer speech server. This is the middleware layer which
> sits between Emacs/Emacspeak and the speech synthesizer. On GNU Linux,
> this middleware is typically written in Tcl/Tclx because it provides a
> convenient way to get C/C++ API bindings into a high-level scripting
> language (Tcl) which is used to create the basic speech server. The
> speech server is essentially a script which runs in an infinite loop,
> reading commands from stdin (sent from Emacs/Emacspeak) and dispatching
> them to the underlying TTS synthesizer (plus doing some housekeeping,
> like managing a queue of speech requests and cleaning up and/or marking
> up text to meet synthesizer-specific requirements).
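
The loop described above can be sketched as follows. This is an illustrative
Python sketch, not code from any existing server: the command names (q, d, s)
mirror the Emacspeak protocol, but the command set is simplified and
`dispatch_to_synth` is a made-up stub standing in for real synthesizer API
bindings.

```python
import sys

speech_queue = []          # queued text waiting to be spoken


def dispatch_to_synth(text):
    # Stub: a real server would call the synthesizer's C/C++ API here.
    print(f"[synth] {text}")


def handle_command(line):
    cmd, _, arg = line.partition(" ")
    if cmd == "q":         # queue text for later dispatch
        speech_queue.append(arg)
    elif cmd == "d":       # dispatch everything queued so far
        while speech_queue:
            dispatch_to_synth(speech_queue.pop(0))
    elif cmd == "s":       # stop: a real server would also silence the synth
        speech_queue.clear()


def main():
    # The "infinite loop": read commands from stdin (sent by
    # Emacs/Emacspeak) until EOF, one command per line.
    for line in sys.stdin:
        handle_command(line.rstrip("\n"))
```

A real server also does the housekeeping mentioned above (text cleanup,
markup translation) before each dispatch; that is omitted here for brevity.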
> 
> Any language can be used for the speech server. For example, on macOS,
> Python is used because there were existing Python bindings for the
> native macOS VoiceOver TTS. The only real requirement is that the
> server can read and parse the commands Emacspeak sends to it via
> stdin. One advantage of using Tcl over another language is that there
> is an existing Emacspeak-specific TTS library which handles all the
> generic parts of the interface (those that are the same for every TTS
> synthesizer). This means that much of the generic processing work the
> speech server needs to do uses the same code across all synthesizers,
> and when you implement a new speech server, you only need to add the
> synthesizer-specific parts. If you use a different language, like
> Python, you would need to implement all of the necessary code yourself
> (such as managing the queue of speech requests from Emacspeak).
> 
> Within Emacspeak itself, there are elisp voice files which map the high
> level voices used by Emacspeak to the low level commands used by the
> synthesizer to modify voice parameters (e.g. tone, pitch, etc.). As each
> synthesizer handles this differently, the elisp voice files are used to
> manage the mapping for each supported synthesizer.
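
The mapping idea can be illustrated like this. In Emacspeak it lives in elisp
(e.g. outloud-voices.el), and the voice names below (voice-bolden,
voice-monotone, voice-lighten) are real Emacspeak voices, but this is a
Python sketch and the control strings are invented stand-ins for the
synthesizer-specific commands each voice file would actually emit.

```python
# Hypothetical table: Emacspeak voice name -> synthesizer control string.
VOICE_TABLE = {
    "voice-bolden":   "[:pitch 0.8 :range 1.2]",   # invented parameters
    "voice-monotone": "[:pitch 1.0 :range 0.0]",
    "voice-lighten":  "[:pitch 1.2 :range 1.1]",
}


def annotate(voice, text):
    # Prepend the control string for the requested voice, so the
    # synthesizer renders the text with the mapped parameters.
    return VOICE_TABLE.get(voice, "") + text
```

A per-synthesizer voice file is essentially this table, rebuilt with that
synthesizer's own parameter syntax.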
> 
> So, to add vocalizer support, you would need to create a speech server
> (similar to the outloud or espeak scripts used for the IBM and espeak
> synths) and a vocalizer-voices.el file.
> 
> Details on the communication protocol used by Emacspeak when
> communicating with the speech servers are outlined in the Emacspeak info
> pages.
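
For a flavor of that protocol, a session driven by Emacspeak looks roughly
like the following (command names taken from the existing servers; consult
the info pages for the authoritative, complete set):

```text
tts_set_speech_rate 225
q {Hello, world.}
q {Second utterance.}
d
s
```

Here text is queued with q, spoken when d arrives, and s stops speech and
flushes anything still queued.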
> _______________________________________________
> Emacspeak mailing list -- emacspeak(a)emacspeak.org
> To unsubscribe send an email to emacspeak-leave(a)emacspeak.org

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do you spend it?

         John Covici wb2una
         covici(a)ccs.covici.com



