I’m in need of a specific kind of software, and since this is for a project involving the Mini Maestro, I thought I’d try my luck on this forum. Feel free to move/close the thread if it belongs somewhere else.
What I’m looking for is software that can map out an audio file (a recording of speech) in such a way that I can easily animate simple servo movements (one servo moving back and forth) in sync with the highs and lows of that audio file. I think I’ve read somewhere (on this forum, actually) that such software exists: the audio file is run through an algorithm that outputs a .dat file, which then goes into some kind of software framework for animatronic control.
With that said, the requirements are as follows:
Some sort of threshold has to be set so that an “event” on the timeline (a word being spoken) can be mapped as a simple 1 (HIGH), and periods of silence as a 0 (LOW).
Each event has to be tagged with a timestamp (for example, the first HIGH occurs xxxx milliseconds in).
The format of the output file doesn’t matter as long as it’s human readable and usable outside the Maestro Control Center environment.
The reason for needing this function to live outside the Maestro’s Python script is that I would like to use the GUI to build the actual sequences in accordance with the mapped audio, so that I can later easily trigger them, together with the audio file itself, from another controller. If there’s a way to automatically create sequences by feeding data into the Control Center sequencer, I would very much like to hear about it!
I’ve been glancing at a program called Sonic Visualiser, and I’ve also considered writing a Python script that does this myself.
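For what it’s worth, here is a rough sketch of what such a script could look like, using only the Python standard library. It assumes 16-bit mono PCM WAV input; the threshold value, chunk size, and file names are placeholders you would tune for your own recording, not anything specific to the Maestro:

```python
import struct
import wave

THRESHOLD = 2000   # hypothetical amplitude threshold; tune per recording
CHUNK_MS = 20      # analysis window in milliseconds

def map_audio(wav_path, out_path="events.txt"):
    """Scan a WAV file and log every HIGH/LOW transition with its time in ms."""
    with wave.open(wav_path, "rb") as wf:
        rate = wf.getframerate()
        frames_per_chunk = rate * CHUNK_MS // 1000
        state = 0          # start in silence (LOW)
        t_ms = 0
        events = []        # list of (time_ms, 1-or-0) transitions
        while True:
            data = wf.readframes(frames_per_chunk)
            if not data:
                break
            # unpack little-endian 16-bit signed samples (assumes 16-bit PCM)
            samples = struct.unpack("<%dh" % (len(data) // 2), data)
            peak = max(abs(s) for s in samples)
            level = 1 if peak >= THRESHOLD else 0
            if level != state:          # record only transitions
                events.append((t_ms, level))
                state = level
            t_ms += CHUNK_MS
    # write a human-readable, tab-separated file: <time_ms> <level>
    with open(out_path, "w") as f:
        for t, level in events:
            f.write(f"{t}\t{level}\n")
    return events
```

The output would then just be lines like `1240	1` (speech starts at 1240 ms), which could be read off manually while building the sequences in the Control Center GUI.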
The big question here is simply: does anyone know of a program that works this way? Or is there a better solution to the problem?
All tips are welcome!