MusicKit
0.0.0
This is the allocator and manager of DSP resources.
#include <MKOrchestra.h>
The MKOrchestra class manages signal processing resources used in music synthesis. Each instance of MKOrchestra represents a single "signal processor resource" that may refer to a DSP hardware processor, or to processing performed by the main processor, a networked processor (e.g. a Beowulf cluster), or perhaps one of the vector processing resources of the main processor. For historical reasons, these are collectively termed a "DSP". The signal processing resource is identified by orchIndex, a zero-based integer index.
In the basic NeXT hardware configuration, there's only one MC56001 DSP so there's only one MKOrchestra instance (orchIndex 0). On Intel-based hardware, there may be any number of DSPs or none, depending on how many DSP cards the user has installed (See "INSTALLING INTEL-BASED DSP DRIVERS" below). On PowerPC class machines the signal processing is performed on the main processor, so there's only one MKOrchestra.
The methods defined by the MKOrchestra class let you manage a DSP by allocating (TODO: should this be "assign"? i.e., alloc as normal, then assign the initialised instance?) portions of its memory for specific synthesis modules and by setting its processing characteristics. There are two levels of allocation: MKSynthPatch allocation and unit generator allocation. MKSynthPatches are higher-level entities: collections of MKUnitGenerators.
You can allocate entire MKSynthPatches or individual MKUnitGenerator and MKSynthData objects through the methods defined here. Keep in mind, however, that similar methods defined in other classes - specifically, the MKSynthPatch allocation methods defined in MKSynthInstrument, and the MKUnitGenerator and MKSynthData allocation methods defined in MKSynthPatch - are built upon, and designed to supersede, those defined by MKOrchestra. You only need to allocate synthesis objects directly if you want to assemble sound-making modules at a low level.
Before you can do anything with an MKOrchestra - in particular, before you can allocate synthesis objects - you must create and open the MKOrchestra. The MKOrchestra is a shared resource (that is, various DSP modules all use the same single MKOrchestra instance), so creation is done through the orchestra method - sending orchestra twice returns the same object. (This convention is historical and matches the ApplicationKit convention.) To open an MKOrchestra, you send it the open message, which establishes a channel of communication with the DSP that the MKOrchestra represents.

Once you've allocated the objects that you want, either through the methods described here or through those defined by MKSynthInstrument and MKSynthPatch, you can start synthesis by sending the run message to the MKOrchestra. (If your application uses the MusicKit's performance scheduling mechanism, the run message should be sent immediately before or immediately after the startPerformance message is sent to the MKConductor.) The stop method halts synthesis, and close breaks communication with the DSP. These methods change the MKOrchestra's status, which is always one of the following MKDeviceStatus values:
MKDeviceStatus | Description
MK_devOpen | The MKOrchestra is open but not running.
MK_devRunning | The MKOrchestra is open and running.
MK_devStopped | The MKOrchestra has been running, but is now stopped.
MK_devClosed | The MKOrchestra is closed.
You can query an MKOrchestra's status through the deviceStatus method.
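As a minimal sketch of this lifecycle, using only the methods described above (the umbrella header path is an assumption, and error handling is omitted):

```objectivec
#import <MusicKit/MusicKit.h>  /* header path is an assumption */

MKOrchestra *orch = [MKOrchestra orchestra]; /* shared instance for orchIndex 0 */
[orch open];                     /* status: MK_devOpen; opens a channel to the DSP */
/* ... allocate MKSynthPatches, MKUnitGenerators, or MKSynthData objects here ... */
[orch run];                      /* status: MK_devRunning; synthesis begins */
[MKConductor startPerformance];  /* if using the scheduling mechanism */
/* ... performance ... */
[orch stop];                     /* status: MK_devStopped */
[orch close];                    /* status: MK_devClosed */
```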
When the MKOrchestra is running, the allocated MKUnitGenerators produce a stream of samples that, by default, are sent to the "default sound output". On most modern hardware, that is the stereo digital to analog converter (the DAC), which converts the samples into an audio signal. This type of sound output is called "Host sound output" because the samples are sent from the DSP to the host computer.
TODO modify this: But there are a number of other alternatives. You can write the samples to the DSP serial port, to be played through any of a number of devices that have their own DACs or do digital transfer to DAT recorders. To do this, invoke the method setSerialSoundOut: with a YES argument before sending open to the MKOrchestra. This is also called "SSI" sound output. See the DSPSerialPortDevice class for more details.
Another option is to write the samples to a soundfile. You do this by invoking the method setOutputSoundfile: before sending open to the MKOrchestra. If you're writing a soundfile, the computer's DAC is automatically disabled. It is also possible to save the DSP commands as a "DSP commands format soundfile". Such files are much smaller than the equivalent soundfile. Use the method setOutputCommandsFile: to create such a file. However, support for playing DSP commands file may not continue in future releases. Therefore, we do not encourage their use.
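For example, writing samples to a soundfile might look like the following sketch (the filename, and the string type of the argument, are assumptions):

```objectivec
MKOrchestra *orch = [MKOrchestra orchestra];
[orch setOutputSoundfile:@"synth.snd"];  /* must precede open; disables the DAC */
[orch open];
[orch run];
/* ... synthesis runs; samples accumulate in the soundfile ... */
[orch stop];
[orch close];
```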
The MKOrchestra can also process sound that it receives. To do this, send setSoundIn: with a YES argument. TODO update this: <<Note that currently serial input may not be combined with writing a soundfile.>>
Every command that's sent to the DSP is given a timestamp indicating when the command should be executed. The manner in which the DSP regards these timestamps depends on whether its MKOrchestra is timed or untimed, as set through the setTimed: method. In a timed MKOrchestra, commands are executed at the time indicated by its timestamp. If the MKOrchestra is untimed, the DSP ignores the timestamps, executing commands as soon as it receives them. By default, an MKOrchestra is timed.
Since the DSP is a separate processor, it has its own clock and its own notion of the current time. Because the DSP can be dedicated to a single task - in this case, generating sound - its clock is generally more reliable than that of the main processor, which may be controlling any number of other tasks. If your application is generating MKNotes without user interaction - for example, if it's playing a MKScore or scorefile - then you should set the Music Kit performance to be unclocked, through the MKConductor's setClocked: method, and the MKOrchestra to be timed. This allows the Music Kit to process MKNotes and send timestamped commands to the DSP as quickly as possible, relying on the DSP's clock to synthesize the MKNotes at the correct time. However, if your application must respond to user-initiated actions with as little latency as possible, then the MKConductor must be clocked. In this case, you can set the MKOrchestra to be untimed. A clocked MKConductor and an untimed MKOrchestra yields the best possible response time.
If your application responds to user actions, but can sustain some latency between an action and its effect, then you may want to set the MKConductor to be clocked and the DSP to be timed, and then use the C function MKSetDeltaT() to set the delta time. Delta time is an imposed latency that allows the Music Kit to run slightly ahead of the DSP. Any rhythmic irregularities created by the Music Kit's dependence on the CPU's clock are evened out by the utter dependability of the DSP's clock (assuming that such an irregularity isn't greater than the delta time).
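The three timing configurations discussed above can be summarized in this sketch (the delta-time value is purely illustrative):

```objectivec
/* 1. Unattended playback (e.g. a MKScore or scorefile): rely on the DSP's clock. */
[MKConductor setClocked:NO];
[orch setTimed:YES];

/* 2. Lowest-latency interactive response: */
[MKConductor setClocked:YES];
[orch setTimed:NO];

/* 3. Interactive, but tolerating some latency: clocked conductor, timed DSP,
   with an imposed latency to smooth out CPU-side rhythmic irregularities. */
[MKConductor setClocked:YES];
[orch setTimed:YES];
MKSetDeltaT(0.1);  /* 0.1 seconds; an illustrative value */
```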
Since parameter updates can occur asynchronously, the MKOrchestra doesn't know, at the beginning of a MKNote, if the DSP can execute a given set of MKUnitGenerators quickly enough to produce a steady supply of output samples for the entire duration of the MKNote. However, it makes an educated estimate and will deny allocation requests that it thinks will overload the DSP and cause it to fall out of real time. Such a denial may result in a smaller number of simultaneously synthesized voices. You can adjust the MKOrchestra's DSP processing estimate, or headroom, by invoking the setHeadroom: method. This takes an argument between -1.0 and 1.0; a negative headroom allows a more liberal estimate of the DSP resources - resulting in more simultaneous voices - but it runs the risk of causing the DSP to fall out of real time. Conversely, a positive headroom is more conservative: You have a greater assurance that the DSP won't fall out of real time but the number of simultaneous voices is decreased. The default is a somewhat conservative 0.1. If you're writing samples to a soundfile with the DAC disabled, headroom is ignored. On Intel-based hardware, the differences between the clock and memory speed of various DSP cards requires some hand-tuning of the headroom variable. Slower DSP cards should use a higher headroom and faster cards should use a negative headroom.
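Tuning headroom is a single call before allocation; the values below are illustrative:

```objectivec
[orch setHeadroom:0.25];  /* conservative: safer real-time behaviour, fewer
                             simultaneous voices (suits a slower DSP card) */
[orch setHeadroom:-0.2];  /* liberal: more simultaneous voices, at the risk of
                             falling out of real time (suits a faster card) */
```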
When sending sound to the DSP serial port, there is very little latency - for example, sound can be taken in the serial port, processed, and sent out again with less than 10 milliseconds of delay. However, in the case of sound output via the NeXT monitor, there's a sound output time delay that's equal to the size of the buffer that's used to collect computed samples before they're shovelled to the NeXT DAC. To accommodate applications that require the best possible response time (the time between the initiation of a sound and its actual broadcast from the DAC), a smaller sample output buffer can be requested by sending the setFastResponse:YES message to an MKOrchestra. However, the more frequent attention demanded by the smaller buffer will detract from synthesis computation and, again, fewer simultaneous voices may result. You can also improve response time by using the higher sampling rate (44100), although this, too, attenuates the synthesis power of the DSP. By default, the MKOrchestra's sampling rate is 22050 samples per second. setFastResponse: has no effect when sending samples to the DSP serial port.
To avoid creating duplicate synthesis modules on the DSP, each instance of MKOrchestra maintains a shared object table. Objects on the table are MKSynthPatches, MKSynthDatas, and MKUnitGenerators, and they are indexed by some other object that represents the shared object. For example, the OscgafUG MKUnitGenerator (a family of oscillators) lets you specify its waveform-generating wave table as a MKPartials object (you can also set it as a MKSamples object; for the purposes of this example we only consider the MKPartials case). When its wave table is set, through the setTable:length: method, the oscillator allocates a MKSynthData object from the MKOrchestra to represent the DSP memory that will hold the waveform data that's computed from the MKPartials. It also places the MKSynthData on the shared object table, using the MKPartials as an index, by sending the message
[MKOrchestra installSharedSynthData:theSynthData for:thePartials];
If another oscillator's wave table is set as the same MKPartials object, the already-allocated MKSynthData can be returned by sending the message
id aSynthData = [MKOrchestra sharedObjectFor:thePartials];
The method installSharedObject:for: is provided for installing MKSynthPatches and MKUnitGenerators.
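Putting these calls together, a typical use of the shared object table first checks for an existing object and installs a newly allocated one only on a miss (thePartials and theSynthData stand in for objects obtained as described above):

```objectivec
id aSynthData = [MKOrchestra sharedObjectFor:thePartials];
if (aSynthData == nil) {
    /* No oscillator has installed a wave table for thePartials yet:
       install the freshly allocated MKSynthData so others can reuse it. */
    [MKOrchestra installSharedSynthData:theSynthData for:thePartials];
    aSynthData = theSynthData;
}
```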
If appropriate hardware is available, multiple DSPs may be used in concert. The MKOrchestra automatically performs allocation on the pool of DSPs. On Intel-based hardware, multiple DSP support is achieved by adding multiple DSP cards. On NeXT hardware, multiple DSP support is available via the Ariel QuintProcessor, a 5-DSP card.
The MKOrchestra class may be subclassed to support other 56001-based cards. See the ArielQP and ArielQPSat objects for an example.
The default sound output configuration may be customized by using the defaults data base. On NeXT hardware, you can specify the destination of the sound output, and on both NeXT hardware and Intel-based DSP cards with NeXT-compatible DSP serial ports, you can specify the type of the serial port device. The default sound out type is derived from the MusicKit "OrchestraSoundOut" variable in the defaults data base, which may currently have the value "SSI" or "Host". More values may be added in the future. Note that an "SSI" value for "OrchestraSoundOut" refers to the DSP's use of the SSI port, and that usage does not imply NeXT-compatibility. For example, for the Turtle Beach cards, the default is "serialSoundOut" via the on-card CODEC. (On Intel-based hardware, the determination as to whether the DSP serial port is NeXT-compatible is based on the driver's "SerialPortDevice" parameter - if its value is "NeXT", the serial port is NeXT-compatible.) You can always return to the default sample output configuration by sending the message setDefaultSoundOut.
New MKOrchestras are auto-configured with their default configuration, with a DSPSerialPortDevice object automatically created. For devices with NeXT-compatible DSP serial ports, you may change the configuration using the MKOrchestra methods such as -setSerialPortDevice:.
INSTALLING INTEL-BASED DSP DRIVERS
To install an Intel-based DSP driver, follow these steps:
1. Double click on the driver you want to install. The drivers can be found in /LocalLibrary/Devices/. For example, to install the ArielPC56D driver, double click on /LocalLibrary/Devices/ArielPC56D.config. Type the root password. It will tell you the driver was successfully installed. Click OK. You've now "installed" the driver.
2. In Configure.app, click Other. Click Add... Click Add. Select the driver (from the "Other" category) and make sure that the I/O port corresponds to your hardware configuration. From the Configuration menu, select Save. You've now "added" the driver.
3. Repeat the process for any other drivers, such as the TurtleBeach Multisound driver, /LocalLibrary/Devices/TurtleBeachMS.config.
4. If you have multiple cards of a certain type, repeat step 2, making sure to assign a different I/O address to each instance of the driver. The first will be given the name <driverName>0, where <driverName> is the name of the driver (e.g. "ArielPC56D"). The second will be given the name <driverName>1, etc. The trailing number is called the "unit." For example, if you add 2 Ariel cards to your system, they will be assigned the names "ArielPC56D0" and "ArielPC56D1". If you have one Multisound card, it will be assigned the name "TurtleBeachMS0". This assignment is done by the Configure.app application.
5. Reboot. Drivers are now installed and usable.
All DSP drivers are in the same "family", called "DSP." All DSP units are numbered with a set of "DSP indices", beginning with 0. (Note that this is distinct from the "unit" numbers.) If there is only one DSP card, there is no ambiguity. However, if there is more than one card, the user's defaults data base determines which DSP or DSPs should be used. For example, in the example given above, a user's defaults data base may have:
MusicKit DSP0 ArielPC56D1
MusicKit DSP2 ArielPC56D0
MusicKit DSP4 TurtleBeachMS0
This means that the default DSP is the one on the 2nd Ariel card that you installed. Also, notice that there may be "holes" - in this example, there is no DSP1 or DSP3. DSP identifiers up to 15 may be used. The DSP indices refer to the MKOrchestra index passed to methods like +newOnDSP:. If there is no driver for that DSP, +newOnDSP: returns nil.
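Because +newOnDSP: returns nil both for "holes" in the defaults data base and for indices with no installed driver, code probing the available DSPs can simply skip nil results (a sketch):

```objectivec
int i;
for (i = 0; i <= 15; i++) {        /* DSP identifiers up to 15 may be used */
    id orch = [MKOrchestra newOnDSP:i];
    if (orch == nil)
        continue;                   /* a "hole", or no driver for this index */
    /* ... configure and open orch ... */
}
```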
Some DSP cards support multiple DSPs on a single card. For such cards, we have the notion of a "sub-unit", which follows the unit in the assigned name with the following format: <driver><unit>-<subunit>. For example if a card named "Frankenstein" supports 4 DSPs, and there are two Frankenstein units installed in the system, the user's defaults data base might look like this:
MusicKit DSP0 Frankenstein0-0
MusicKit DSP1 Frankenstein0-1
MusicKit DSP2 Frankenstein0-2
MusicKit DSP3 Frankenstein0-3
MusicKit DSP4 Frankenstein1-0
MusicKit DSP5 Frankenstein1-1
MusicKit DSP6 Frankenstein1-2
MusicKit DSP7 Frankenstein1-3
Currently, the Music Kit provides drivers for the following cards: Ariel PC56D, Turtle Beach Multisound, I*Link i56, Frankenstein. See the release notes for the latest information on supported drivers.