Class: Google::Cloud::Dialogflow::V2::StreamingRecognitionResult

Inherits: Object
Defined in:
lib/google/cloud/dialogflow/v2/doc/google/cloud/dialogflow/v2/session.rb

Overview

Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.

Example:

  1. transcript: "tube"

  2. transcript: "to be a"

  3. transcript: "to be"

  4. transcript: "to be or not to be" is_final: true

  5. transcript: " that's"

  6. transcript: " that is"

  7. message_type: +END_OF_SINGLE_UTTERANCE+

  8. transcript: " that is the question" is_final: true

Only two of the responses contain final results (#4 and #8 indicated by +is_final: true+). Concatenating these generates the full transcript: "to be or not to be that is the question".

In each response we populate:

  • for +message_type+ = +TRANSCRIPT+: +transcript+ and possibly +is_final+.

  • for +message_type+ = +END_OF_SINGLE_UTTERANCE+: only +message_type+.
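The sequence above can be sketched as a small consumer that keeps only the final hypotheses. The +Result+ struct below is an illustrative stand-in for +StreamingRecognitionResult+, not the client library API; its field names mirror the attributes documented on this page:

```ruby
# Stand-in for StreamingRecognitionResult (illustrative only).
Result = Struct.new(:message_type, :transcript, :is_final, keyword_init: true)

# The eight responses from the example above, as stand-in objects.
results = [
  Result.new(message_type: :TRANSCRIPT, transcript: "tube", is_final: false),
  Result.new(message_type: :TRANSCRIPT, transcript: "to be a", is_final: false),
  Result.new(message_type: :TRANSCRIPT, transcript: "to be", is_final: false),
  Result.new(message_type: :TRANSCRIPT, transcript: "to be or not to be", is_final: true),
  Result.new(message_type: :TRANSCRIPT, transcript: " that's", is_final: false),
  Result.new(message_type: :TRANSCRIPT, transcript: " that is", is_final: false),
  Result.new(message_type: :END_OF_SINGLE_UTTERANCE),
  Result.new(message_type: :TRANSCRIPT, transcript: " that is the question", is_final: true)
]

# Interim transcripts are superseded by later ones; concatenating only
# the is_final results yields the full utterance.
full_transcript = results
  .select { |r| r.message_type == :TRANSCRIPT && r.is_final }
  .map(&:transcript)
  .join

puts full_transcript  # => "to be or not to be that is the question"
```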

Defined Under Namespace

Modules: MessageType

Instance Attribute Summary

Instance Attribute Details

#confidence ⇒ Float

Returns The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.

This field is typically only provided if +is_final+ is true and you should not rely on it being accurate or even set.

Returns:

  • (Float)


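Since +confidence+ is typically only meaningful on final results and defaults to the 0.0 sentinel, a caller might read it defensively. +SpeechResult+ and +usable_confidence+ below are hypothetical stand-ins for illustration, not part of the gem:

```ruby
# Stand-in for StreamingRecognitionResult (illustrative only).
SpeechResult = Struct.new(:transcript, :is_final, :confidence, keyword_init: true)

# Returns the confidence only when it can be trusted: the result must be
# final, and 0.0 is the sentinel meaning "not set".
def usable_confidence(result)
  return nil unless result.is_final
  return nil if result.confidence.to_f == 0.0
  result.confidence
end

interim = SpeechResult.new(transcript: "to be", is_final: false, confidence: 0.0)
final   = SpeechResult.new(transcript: "to be or not to be", is_final: true, confidence: 0.92)

p usable_confidence(interim)  # => nil
p usable_confidence(final)    # => 0.92
```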


# File 'lib/google/cloud/dialogflow/v2/doc/google/cloud/dialogflow/v2/session.rb', line 315

class StreamingRecognitionResult
  # Type of the response message.
  module MessageType
    # Not specified. Should never be used.
    MESSAGE_TYPE_UNSPECIFIED = 0

    # Message contains a (possibly partial) transcript.
    TRANSCRIPT = 1

    # Event indicates that the server has detected the end of the user's speech
    # utterance and expects no additional speech. Therefore, the server will
    # not process additional audio (although it may subsequently return
    # additional results). The client should stop sending additional audio
    # data, half-close the gRPC connection, and wait for any additional results
    # until the server closes the gRPC connection. This message is only sent if
    # +single_utterance+ was set to +true+, and is not used otherwise.
    END_OF_SINGLE_UTTERANCE = 2
  end
end
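The +MessageType+ constants drive the client's protocol behavior. A hypothetical dispatch might look like the sketch below; the +handle_result+ helper and its return strings are illustrative, and the module is restated here so the snippet is self-contained:

```ruby
# Restated from the class definition above.
module MessageType
  MESSAGE_TYPE_UNSPECIFIED = 0
  TRANSCRIPT = 1
  END_OF_SINGLE_UTTERANCE = 2
end

# Hypothetical dispatch on message_type (illustrative only).
def handle_result(message_type, transcript: nil, is_final: false)
  case message_type
  when MessageType::TRANSCRIPT
    # Interim transcripts may still change; only final ones are kept.
    is_final ? "final: #{transcript}" : "interim: #{transcript}"
  when MessageType::END_OF_SINGLE_UTTERANCE
    # Stop sending audio and half-close, but keep reading: the server
    # may still return additional results before it closes the stream.
    "stop sending audio; half-close and drain remaining results"
  else
    "unexpected message type #{message_type}"
  end
end

puts handle_result(MessageType::TRANSCRIPT, transcript: "to be", is_final: false)
puts handle_result(MessageType::END_OF_SINGLE_UTTERANCE)
```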

#is_final ⇒ true, false

Returns If +false+, the +StreamingRecognitionResult+ represents an interim result that may change. If +true+, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for +message_type+ = +TRANSCRIPT+.

Returns:

  • (true, false)




#message_type ⇒ Google::Cloud::Dialogflow::V2::StreamingRecognitionResult::MessageType

Returns Type of the result message.

Returns:

  • (Google::Cloud::Dialogflow::V2::StreamingRecognitionResult::MessageType)




#transcript ⇒ String

Returns Transcript text representing the words that the user spoke. Populated if and only if +message_type+ = +TRANSCRIPT+.

Returns:

  • (String)


