retrieving synthesized audio data?

Halim Sahin halim.sahin at freenet.de
Fri Feb 5 03:25:40 GMT 2010


hello Bill,
Here is my answer for those not subscribed to the speechd mailing list.

-----------------
hi Jacob and Luke,


@Jacob: do you need the audio data for further processing,
or do you only need to create wave files from the synthesized text?

Maybe a good start would be to add a dummy audio output driver to speechd
which writes its output data into a FIFO, or better, directly to a wave file.
This wouldn't need any API work and could (in my opinion) be implemented
quickly and without much effort!

@Luke:
On Thu, Feb 04, 2010 at 12:04:00PM -0800, Luke Yelavich wrote:
> I intend to write up some roadmap/specification documentation as to
> what I would like to work on with speech-dispatcher next. I think
> first, we get a 0.6.8 release out the door, then start thinking what
> needs major work, to ensure speech-dispatcher is still usable both as
> a system service for those who want it, and for the ever changing
> multi-user desktop environment. 

Consider that making pulse optional for Ubuntu would solve this problem
without a single new line of code.
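For reference, the audio backend is selected in speechd.conf, so "making pulse optional" is a one-line configuration change. The available method names depend on how speech-dispatcher was built, and option details may vary between versions:

```
# /etc/speech-dispatcher/speechd.conf
# Select the audio output subsystem ("pulse", "alsa", "oss", ...
# depending on the build):
AudioOutputMethod "alsa"
```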

> One such idea I have, is to consider
> dbus as a client/server communication transport layer. This could even
> go so far as to solve the issue of using system level clients like
> BrlTTY with a system level speech-dispatcher, which would then
> communicate with a user level speech-dispatcher for example.

Luke! It's only an issue because you and others prefer the wrong audio
system. I hope one day you start thinking about other things to do for
speech-dispatcher than the ..... user session integration.

The decision to use pulseaudio (only) for Ubuntu has produced tons of mails
from many unhappy users on the orca/speechd/ubuntu accessibility mailing lists.
Almost every day people ask how to use sd as a system service, etc.
BTW: it works really well this way!

Starting parallel processes and letting them communicate through dbus will
add more and more overhead to speechd and its dependencies.
And it will only produce new issues and complexity without bringing
really new features.

Many other audio apps would need to be rewritten to be compatible with this
new approach. Thanks to PA for that.
Just wondering: are you using system-level apps like
speakup/brltty for your daily work?

Just my two cents.

Halim
PS: it doesn't make sense to ignore the users' wishes in this area.
Read the mailing lists and talk with the people who are not able to use
pulse with speechd.
Also talk with other a11y projects and speechd users about whether they want
a dbus dependency, or to deal with consolekit and possibly other kits.






More information about the Ubuntu-accessibility mailing list