Package: lr.speech.v2beta1
Service: Speech
Tiro Speech Recognition API
Host: speech.talgreinir.is
Recognize
rpc Recognize(RecognizeRequest) returns (RecognizeResponse)
Performs synchronous speech recognition: receive results after all audio has been sent and processed.
REST mapping
Verb | Path | Body |
---|---|---|
POST | /v2beta1/speech:recognize | * |
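For illustration, a minimal Python sketch of this call over REST follows. It assumes the endpoint is served over HTTPS at speech.talgreinir.is, that it accepts the standard protobuf JSON mapping (camelCase field names, base64-encoded bytes), and that any authentication is handled elsewhere; the file name and the is-IS language code are placeholders.

```python
# Hedged sketch of a synchronous Recognize call over REST. Assumes the
# standard protobuf JSON mapping (camelCase fields, base64 for bytes);
# the audio file, language code, and absence of auth are illustrative.
import base64
import requests

with open("audio.raw", "rb") as f:       # raw 16-bit PCM, 16 kHz, mono
    audio_bytes = f.read()

body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "is-IS",
        "maxAlternatives": 1,
    },
    # Exactly one of "content" or "uri" may be set (see RecognitionAudio).
    "audio": {"content": base64.b64encode(audio_bytes).decode("ascii")},
}

resp = requests.post(
    "https://speech.talgreinir.is/v2beta1/speech:recognize",
    json=body,
    timeout=60,
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    # Alternatives are ordered most-probable first.
    print(result["alternatives"][0]["transcript"])
```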
LongRunningRecognize
rpc LongRunningRecognize(LongRunningRecognizeRequest) returns (google.longrunning.Operation)
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message.
REST mapping
Verb | Path | Body |
---|---|---|
POST | /v2beta1/speech:longrunningrecognize | * |
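A hedged sketch of the asynchronous flow: submit the request, then poll the returned Operation. The polling path GET /v2beta1/{operation name} follows the usual google.longrunning REST convention and is an assumption rather than a documented mapping; the audio URI is a placeholder.

```python
# Hedged sketch: start a LongRunningRecognize job and poll the Operation.
# The polling path is assumed from the usual google.longrunning REST
# convention; verify it against the service's Operations mapping.
import time
import requests

BASE = "https://speech.talgreinir.is/v2beta1"

op = requests.post(
    f"{BASE}/speech:longrunningrecognize",
    json={
        "config": {
            "encoding": "FLAC",
            "sampleRateHertz": 16000,
            "languageCode": "is-IS",
        },
        "audio": {"uri": "https://example.com/recording.flac"},  # placeholder
    },
    timeout=60,
).json()

while not op.get("done"):
    # metadata is a LongRunningRecognizeMetadata (see below).
    print("progress:", op.get("metadata", {}).get("progressPercent", 0), "%")
    time.sleep(5)
    # Assumes Operation.name is a full resource name like "operations/123".
    op = requests.get(f"{BASE}/{op['name']}", timeout=60).json()

if "error" in op:                            # Operation.error
    raise RuntimeError(op["error"])
for result in op["response"]["results"]:     # LongRunningRecognizeResponse
    print(result["alternatives"][0]["transcript"])
```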
StreamingRecognize
rpc StreamingRecognize(stream StreamingRecognizeRequest) returns (stream StreamingRecognizeResponse)
Performs bidirectional streaming speech recognition: receive results while sending audio.
REST mapping
Verb | Path | Body |
---|---|---|
GET | /v2beta1/speech:streamingrecognize | |
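Because StreamingRecognize is a bidirectional gRPC method, it is normally called over gRPC rather than REST. Below is a hedged Python sketch of such a client: the modules speech_pb2 / speech_pb2_grpc and the SpeechStub class are hypothetical names for whatever your protobuf code generation produces, and the port, file name, and language code are illustrative.

```python
# Hedged sketch of a bidirectional StreamingRecognize client over gRPC.
# speech_pb2 / speech_pb2_grpc / SpeechStub are hypothetical generated names.
import grpc

import speech_pb2        # hypothetical: messages from lr.speech.v2beta1
import speech_pb2_grpc   # hypothetical: generated gRPC stubs

CHUNK_BYTES = 3200  # 100 ms of 16 kHz, 16-bit, mono audio


def request_stream(audio_path):
    # The first request must carry only streaming_config ...
    yield speech_pb2.StreamingRecognizeRequest(
        streaming_config=speech_pb2.StreamingRecognitionConfig(
            config=speech_pb2.RecognitionConfig(
                encoding=speech_pb2.RecognitionConfig.LINEAR16,
                sample_rate_hertz=16000,
                language_code="is-IS",
            ),
            interim_results=True,
        )
    )
    # ... and all later requests carry only audio_content.
    with open(audio_path, "rb") as f:
        while chunk := f.read(CHUNK_BYTES):
            yield speech_pb2.StreamingRecognizeRequest(audio_content=chunk)


channel = grpc.secure_channel(
    "speech.talgreinir.is:443", grpc.ssl_channel_credentials()
)
stub = speech_pb2_grpc.SpeechStub(channel)
for response in stub.StreamingRecognize(request_stream("audio.raw")):
    for result in response.results:
        marker = "final" if result.is_final else "interim"
        print(marker, result.alternatives[0].transcript)
```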
Messages
LongRunningRecognizeMetadata
Describes the progress of a long-running LongRunningRecognize call. It is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
Field | Type | Description |
---|---|---|
progress_percent | int32 | Approximate percentage of audio processed thus far. Guaranteed to be 100 when the audio is fully processed and the results are available. |
start_time | google.protobuf.Timestamp | Time when the request was received. |
last_update_time | google.protobuf.Timestamp | Time of the most recent processing update. |
LongRunningRecognizeRequest
The top-level message sent by the client for the LongRunningRecognize method.
Field | Type | Description |
---|---|---|
config | RecognitionConfig | Required Provides information to the recognizer that specifies how to process the request. |
audio | RecognitionAudio | Required The audio data to be recognized. |
LongRunningRecognizeResponse
The only message returned to the client by the LongRunningRecognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages. It is included in the result.response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.
Field | Type | Description |
---|---|---|
results | SpeechRecognitionResult[] | Output-only Sequential list of transcription results corresponding to sequential portions of audio. |
RecognitionAudio
Contains audio data in the encoding specified in the RecognitionConfig. Either content or uri must be supplied. Supplying both or neither returns google.rpc.Code.INVALID_ARGUMENT.
Field | Type | Description |
---|---|---|
oneof audio_source.content | bytes | The audio data bytes encoded as specified in RecognitionConfig. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64. |
oneof audio_source.uri | string | URI that points to a file that contains audio data bytes as specified in RecognitionConfig. |
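To make the oneof constraint concrete, a small sketch in the JSON mapping (the file name and URI are placeholders):

```python
# The audio_source oneof in the JSON mapping: set exactly one of "content"
# (base64-encoded bytes) or "uri". Both or neither yields INVALID_ARGUMENT.
import base64

with open("audio.raw", "rb") as f:
    inline = {"content": base64.b64encode(f.read()).decode("ascii")}
referenced = {"uri": "https://example.com/recording.flac"}  # placeholder

# Valid:   {"audio": inline}   or   {"audio": referenced}
# Invalid: {"audio": {**inline, **referenced}}   or   {"audio": {}}
```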
RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
Field | Type | Description |
---|---|---|
encoding | RecognitionConfig.AudioEncoding | Required Encoding of audio data sent in all RecognitionAudio messages. |
sample_rate_hertz | int32 | Required Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). |
language_code | string | Required The language of the supplied audio as a BCP-47 language tag. Example: "en-US". |
max_alternatives | int32 | Optional Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one. |
speech_contexts | SpeechContext[] | Optional A means to provide context to assist the speech recognition. |
enable_word_time_offsets | bool | Optional If true , the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false , no word-level time offset information is returned. The default is false . |
metadata | RecognitionMetadata | Optional Metadata regarding this request. |
enable_automatic_punctuation | bool | If 'true', adds punctuation to recognition result hypotheses. |
model | string | Optional Model to use for the request. This may be specified to choose a specific model, private or public. |
diarization_config | SpeakerDiarizationConfig | Optional Config to enable speaker diarization. |
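As a sketch, here is a RecognitionConfig in the JSON mapping that exercises the optional fields above; every value is illustrative rather than a recommendation:

```python
# Illustrative RecognitionConfig (JSON mapping); all values are examples.
config = {
    "encoding": "FLAC",
    "sampleRateHertz": 16000,            # 16 kHz is optimal
    "languageCode": "is-IS",
    "maxAlternatives": 5,                # up to 5 hypotheses per result
    "enableWordTimeOffsets": True,       # per-word timestamps (see WordInfo)
    "enableAutomaticPunctuation": True,
    "speechContexts": [{"phrases": ["example phrase", "another hint"]}],
    "diarizationConfig": {"enableSpeakerDiarization": True},
}
```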
RecognitionMetadata
Description of audio data to be recognized.
Field | Type | Description |
---|---|---|
interaction_type | RecognitionMetadata.InteractionType | The use case most closely describing the audio content to be recognized. |
industry_naics_code_of_audio | uint32 | The industry vertical to which this speech recognition request most closely applies. This is most indicative of the topics contained in the audio. Use the 6-digit NAICS code to identify the industry vertical - see https://www.naics.com/search/. |
microphone_distance | RecognitionMetadata.MicrophoneDistance | The audio type that most closely describes the audio being recognized. |
original_media_type | RecognitionMetadata.OriginalMediaType | The original media the speech was recorded on. |
number_of_speakers | RecognitionMetadata.NumberOfSpeakers | How many people are speaking prominently in the audio and expected to be recognized. |
recording_device_type | RecognitionMetadata.RecordingDeviceType | The type of device the speech was recorded with. |
recording_device_name | string | The device used to make the recording. Examples: 'Nexus 5X', 'Polycom SoundStation IP 6000', 'POTS', 'VoIP', or 'Cardioid Microphone'. |
original_mime_type | string | MIME type of the original audio file. For example: audio/m4a , audio/x-alaw-basic , audio/mp3 , audio/3gpp . A list of possible audio MIME types is maintained at http://www.iana.org/assignments/media-types/media-types.xhtml#audio |
obfuscated_id | int64 | Obfuscated (privacy-protected) ID of the user, to identify the number of unique users using the service. |
audio_topic | string | Description of the content. E.g. "Recordings of federal supreme court hearings from 2012". |
RecognizeRequest
The top-level message sent by the client for the Recognize method.
Field | Type | Description |
---|---|---|
config | RecognitionConfig | Required Provides information to the recognizer that specifies how to process the request. |
audio | RecognitionAudio | Required The audio data to be recognized. |
RecognizeResponse
The only message returned to the client by the Recognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages.
Field | Type | Description |
---|---|---|
results | SpeechRecognitionResult[] | Output-only Sequential list of transcription results corresponding to sequential portions of audio. |
SpeakerDiarizationConfig
Config to enable speaker diarization.
Field | Type | Description |
---|---|---|
enable_speaker_diarization | bool | If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_tag provided in the WordInfo. |
SpeechContext
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.
Field | Type | Description |
---|---|---|
phrases | string[] | Optional A list of strings containing words and phrases "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. |
SpeechRecognitionAlternative
Alternative hypotheses (a.k.a. n-best list).
Field | Type | Description |
---|---|---|
transcript | string | Output-only Transcript text representing the words that the user spoke. |
confidence | float | Output-only The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is typically provided only for the top hypothesis, and only for is_final=true results. Clients should not rely on the confidence field as it is not guaranteed to be accurate, or even set, in any of the results. The default of 0.0 is a sentinel value indicating confidence was not set. |
words | WordInfo[] | Output-only A list of word-specific information for each recognized word. |
SpeechRecognitionResult
A speech recognition result corresponding to a portion of the audio.
Field | Type | Description |
---|---|---|
alternatives | SpeechRecognitionAlternative[] | Output-only May contain one or more recognition hypotheses (up to the maximum specified in max_alternatives ). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer. |
language_code | string | Output only. The BCP-47 language tag of the language in this result. This is the language detected as most likely to be spoken in the audio. |
StreamingRecognitionConfig
Provides information to the recognizer that specifies how to process the request.
Field | Type | Description |
---|---|---|
config | RecognitionConfig | Required Provides information to the recognizer that specifies how to process the request. |
single_utterance | bool | Optional If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResult s with the is_final flag set to true . If true , the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true . |
interim_results | bool | Optional If true , interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned. |
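For example, a config for recognizing one short voice command might look like the following sketch; it assumes the same hypothetical generated speech_pb2 module as the streaming example above.

```python
# Hedged sketch: StreamingRecognitionConfig for a single short command.
# speech_pb2 is a hypothetical module generated from the v2beta1 proto.
import speech_pb2

command_config = speech_pb2.StreamingRecognitionConfig(
    config=speech_pb2.RecognitionConfig(
        encoding=speech_pb2.RecognitionConfig.LINEAR16,
        sample_rate_hertz=16000,
        language_code="is-IS",
    ),
    single_utterance=True,   # expect END_OF_SINGLE_UTTERANCE after one pause
    interim_results=True,    # surface tentative hypotheses while speaking
)
```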
StreamingRecognitionResult
A streaming speech recognition result corresponding to a portion of the audio that is currently being processed.
Field | Type | Description |
---|---|---|
alternatives | SpeechRecognitionAlternative[] | Output-only May contain one or more recognition hypotheses (up to the maximum specified in max_alternatives ). |
is_final | bool | Output-only If false , this StreamingRecognitionResult represents an interim result that may change. If true , this is the final time the speech service will return this particular StreamingRecognitionResult ; the recognizer will not return any further hypotheses for this portion of the transcript and corresponding audio. |
stability | float | Output-only An estimate of the likelihood that the recognizer will not change its guess about this interim result. Values range from 0.0 (completely unstable) to 1.0 (completely stable). This field is only provided for interim results (is_final=false ). The default of 0.0 is a sentinel value indicating stability was not set. |
language_code | string | Output only. The BCP-47 language tag of the language in this result. This is the language detected as most likely to be spoken in the audio. |
StreamingRecognizeRequest
The top-level message sent by the client for the StreamingRecognize method. Multiple StreamingRecognizeRequest messages are sent. The first message must contain a streaming_config message and must not contain audio data. All subsequent messages must contain audio data and must not contain a streaming_config message.
Field | Type | Description |
---|---|---|
oneof streaming_request.streaming_config | StreamingRecognitionConfig | Provides information to the recognizer that specifies how to process the request. The first StreamingRecognizeRequest message must contain a streaming_config message. |
oneof streaming_request.audio_content | bytes | The audio data to be recognized. Sequential chunks of audio data are sent in sequential StreamingRecognizeRequest messages. The first StreamingRecognizeRequest message must not contain audio_content data and all subsequent StreamingRecognizeRequest messages must contain audio_content data. The audio bytes must be encoded as specified in RecognitionConfig . Note: as with all bytes fields, protobuffers use a pure binary representation (not base64). |
StreamingRecognizeResponse
StreamingRecognizeResponse is the only message returned to the client by StreamingRecognize. A series of zero or more StreamingRecognizeResponse messages are streamed back to the client. If there is no recognizable audio, and single_utterance is set to false, then no messages are streamed back to the client.
Here's an example of a series of seven StreamingRecognizeResponse messages that might be returned while processing audio:
1. results { alternatives { transcript: "tube" } stability: 0.01 }
2. results { alternatives { transcript: "to be a" } stability: 0.01 }
3. results { alternatives { transcript: "to be" } stability: 0.9 } results { alternatives { transcript: " or not to be" } stability: 0.01 }
4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 } alternatives { transcript: "to bee or not to bee" } is_final: true }
5. results { alternatives { transcript: " that's" } stability: 0.01 }
6. results { alternatives { transcript: " that is" } stability: 0.9 } results { alternatives { transcript: " the question" } stability: 0.01 }
7. results { alternatives { transcript: " that is the question" confidence: 0.98 } alternatives { transcript: " that was the question" } is_final: true }
Notes:
- Only two of the above responses (#4 and #7) contain final results; they are indicated by is_final: true . Concatenating these together generates the full transcript: "to be or not to be that is the question".
- The others contain interim results. #3 and #6 contain two interim results: the first portion has a high stability and is less likely to change; the second portion has a low stability and is very likely to change. A UI designer might choose to show only high stability results.
- The specific stability and confidence values shown above are only for illustrative purposes. Actual values may vary.
- In each response, only one of these fields will be set: error , speech_event_type , or one or more (repeated) results .
Field | Type | Description |
---|---|---|
error | google.rpc.Status | Output-only If set, returns a google.rpc.Status message that specifies the error for the operation. |
results | StreamingRecognitionResult[] | Output-only This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or more is_final=false results followed by zero or one is_final=true result (the newly settled portion). |
speech_event_type | StreamingRecognizeResponse.SpeechEventType | Output-only Indicates the type of speech event. |
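Putting the notes above into practice, here is a hedged sketch of a response consumer that accumulates final results and shows only high-stability interim text; speech_pb2 is the same hypothetical generated module as before, and the 0.8 threshold is arbitrary.

```python
# Hedged sketch of consuming StreamingRecognizeResponse messages, following
# the notes above. speech_pb2 is a hypothetical generated module and the
# stability threshold is arbitrary.
import speech_pb2

END = speech_pb2.StreamingRecognizeResponse.END_OF_SINGLE_UTTERANCE


def consume(responses):
    finals = []
    for response in responses:
        if response.speech_event_type == END:
            # With single_utterance=true: stop sending audio, but keep
            # reading, since additional results may still arrive.
            print("end of utterance detected")
        for result in response.results:
            if result.is_final:
                finals.append(result.alternatives[0].transcript)
            elif result.stability >= 0.8:
                # High-stability interim text is unlikely to change; a UI
                # might render it provisionally.
                print("interim:", result.alternatives[0].transcript)
    # Concatenating the is_final results yields the full transcript.
    return "".join(finals)
```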
WordInfo
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as enable_word_time_offsets.
Field | Type | Description |
---|---|---|
start_time | google.protobuf.Duration | Output-only Time offset relative to the beginning of the audio, and corresponding to the start of the spoken word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary. |
end_time | google.protobuf.Duration | Output-only Time offset relative to the beginning of the audio, and corresponding to the end of the spoken word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary. |
word | string | Output-only The word corresponding to this set of information. |
speaker_tag | int32 | A distinct integer value is assigned for every speaker within the audio. This field specifies which one of those speakers was detected to have spoken this word. speaker_tag is set if enable_speaker_diarization = 'true' and only in the top alternative. |
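For instance, a small sketch that prints per-word timings from a Recognize REST response in the JSON mapping, where google.protobuf.Duration values arrive as strings like "1.300s"; it assumes enableWordTimeOffsets was true in the request.

```python
# Sketch: per-word timings from a Recognize REST response (JSON mapping),
# available when enableWordTimeOffsets was true in the request.
def print_word_timings(response_json):
    for result in response_json.get("results", []):
        top = result["alternatives"][0]   # word info only in top hypothesis
        for w in top.get("words", []):
            tag = w.get("speakerTag")     # set only with diarization enabled
            suffix = f" (speaker {tag})" if tag else ""
            print(f'{w["startTime"]} - {w["endTime"]}: {w["word"]}{suffix}')
```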
Enums
RecognitionConfig.AudioEncoding
Audio encoding of the data sent in the audio message. All encodings support only 1 channel (mono) audio. Only FLAC includes a header that describes the bytes of audio that follow the header. The other encodings are raw audio bytes with no header.
For best results, the audio source should be captured and transmitted using a lossless encoding (FLAC or LINEAR16). Recognition accuracy may be reduced if lossy codecs, which include the other codecs listed in this section, are used to capture or transmit the audio, particularly if background noise is present.
Name | Number | Description |
---|---|---|
ENCODING_UNSPECIFIED | 0 | Not specified. The request will return google.rpc.Code.INVALID_ARGUMENT . |
LINEAR16 | 1 | Uncompressed 16-bit signed little-endian samples (Linear PCM). |
FLAC | 2 | FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless--therefore recognition is not compromised--and requires only about half the bandwidth of LINEAR16 . FLAC stream encoding supports 16-bit and 24-bit samples, however, not all fields in STREAMINFO are supported. |
OGG_OPUS | 6 | Opus encoded audio frames in Ogg container (OggOpus). sample_rate_hertz must be 16000. |
MP3 | 8 | MPEG-1 and MPEG-2 Audio Layer III (MP3). |
GUESS | 9 | Have the server guess the encoding using heuristics. Note: google.cloud.speech.v1 now uses 9 for WEBM_OPUS, so the two APIs are incompatible. |
WEBM_OPUS | 24 | Opus encoded audio frames in a WebM container. sample_rate_hertz must be one of 8000, 12000, 16000, 24000, or 48000. Note: google.cloud.speech.v1 now uses 9 for WEBM_OPUS, so the two APIs are incompatible. |
RecognitionMetadata.InteractionType
Use case categories that the audio recognition request can be described by.
Name | Number | Description |
---|---|---|
INTERACTION_TYPE_UNSPECIFIED | 0 | Use case is either unknown or is something other than one of the other values below. |
DISCUSSION | 1 | Multiple people in a conversation or discussion. For example, in a meeting with two or more people actively participating. Typically all the primary people speaking would be in the same room (if not, see PHONE_CALL). |
PRESENTATION | 2 | One or more persons lecturing or presenting to others, mostly uninterrupted. |
PHONE_CALL | 3 | A phone-call or video-conference in which two or more people, who are not in the same room, are actively participating. |
VOICEMAIL | 4 | A recorded message intended for another person to listen to. |
PROFESSIONALLY_PRODUCED | 5 | Professionally produced audio (e.g. TV show, podcast). |
VOICE_SEARCH | 6 | Transcribe spoken questions and queries into text. |
VOICE_COMMAND | 7 | Transcribe voice commands, such as for controlling a device. |
DICTATION | 8 | Transcribe speech to text to create a written document, such as a text-message, email or report. |
RecognitionMetadata.MicrophoneDistance
Enumerates the types of capture settings describing an audio file.
Name | Number | Description |
---|---|---|
MICROPHONE_DISTANCE_UNSPECIFIED | 0 | Audio type is not known. |
NEARFIELD | 1 | The audio was captured from a closely placed microphone. E.g. phone, dictaphone, or handheld microphone. Generally, the speaker is within 1 meter of the microphone. |
MIDFIELD | 2 | The speaker is within 3 meters of the microphone. |
FARFIELD | 3 | The speaker is more than 3 meters away from the microphone. |
RecognitionMetadata.NumberOfSpeakers
How many speakers expected in the speech to be recognized.
Name | Number | Description |
---|---|---|
NUMBER_OF_SPEAKERS_UNSPECIFIED | 0 | Unknown number of persons speaking. |
ONE_SPEAKER | 1 | Only one person is the prominent speaker (ignore background voices). |
TWO_SPEAKERS | 2 | Two people are the prominent speakers (transcript should focus on the two most prominent speakers). |
MULTIPLE_SPEAKERS | 3 | Transcribe all voices. |
RecognitionMetadata.OriginalMediaType
The original media the speech was recorded on.
Name | Number | Description |
---|---|---|
ORIGINAL_MEDIA_TYPE_UNSPECIFIED | 0 | Unknown original media type. |
AUDIO | 1 | The speech data is an audio recording. |
VIDEO | 2 | The speech data was originally recorded on a video. |
RecognitionMetadata.RecordingDeviceType
The type of device the speech was recorded with.
Name | Number | Description |
---|---|---|
RECORDING_DEVICE_TYPE_UNSPECIFIED | 0 | The recording device is unknown. |
SMARTPHONE | 1 | Speech was recorded on a smartphone. |
PC | 2 | Speech was recorded using a personal computer or tablet. |
PHONE_LINE | 3 | Speech was recorded over a phone line. |
VEHICLE | 4 | Speech was recorded in a vehicle. |
OTHER_OUTDOOR_DEVICE | 5 | Speech was recorded outdoors. |
OTHER_INDOOR_DEVICE | 6 | Speech was recorded indoors. |
StreamingRecognizeResponse.SpeechEventType
Indicates the type of speech event.
Name | Number | Description |
---|---|---|
SPEECH_EVENT_UNSPECIFIED | 0 | No speech event specified. |
END_OF_SINGLE_UTTERANCE | 1 | This event indicates that the server has detected the end of the user's speech utterance and expects no additional speech. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This event is only sent if single_utterance was set to true , and is not used otherwise. |
START_OF_SPEECH | 2 | Extension: Speech has been detected in the audio stream. |
END_OF_SPEECH | 3 | Extension: Speech has ceased to be detected in the audio stream. |
END_OF_AUDIO | 4 | Extension: The end of the audio stream has been reached and it is being processed. |