OpenAIWhisperParserLocal#

class langchain_community.document_loaders.parsers.audio.OpenAIWhisperParserLocal(device: str = '0', lang_model: str | None = None, batch_size: int = 8, chunk_length: int = 30, forced_decoder_ids: Tuple[Dict] | None = None)[source]#

Transcribe and parse audio files with an OpenAI Whisper model.

Runs audio transcription locally with a Whisper model from the transformers library.

Parameters:
  • device – device to use. NOTE: by default uses the GPU if available; if you want to use the CPU, set device = "cpu".

  • lang_model – Whisper model to use, for example "openai/whisper-medium".

  • forced_decoder_ids – id states for the decoder in a multilanguage model. Usage example:

    from transformers import WhisperProcessor

    processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
    # Transcribe French audio:
    forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
    # Or translate it to English instead:
    forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")

Initialize the parser.

Parameters:
  • device (str) – device to use.

  • lang_model (str | None) – Whisper model to use, for example "openai/whisper-medium". Defaults to None.

  • forced_decoder_ids (Tuple[Dict] | None) – id states for the decoder in a multilanguage model. Defaults to None.

  • batch_size (int) – batch size used for decoding. Defaults to 8.

  • chunk_length (int) – chunk length used during inference, in seconds. Defaults to 30.
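
For context, a minimal sketch of driving this parser through GenericLoader; the "audio/" directory and the "openai/whisper-tiny" model choice are placeholders, and local inference assumes the transformers, torch, and librosa packages are installed:

    from langchain_community.document_loaders.generic import GenericLoader
    from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParserLocal

    # Transcribe every .mp3 under "audio/" (placeholder path) with a locally run Whisper model.
    loader = GenericLoader.from_filesystem(
        "audio/",
        glob="*.mp3",
        parser=OpenAIWhisperParserLocal(device="cpu", lang_model="openai/whisper-tiny"),
    )
    docs = loader.load()  # one Document per transcribed file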

Methods

__init__([device, lang_model, batch_size, ...])

Initialize the parser.

lazy_parse(blob)

Lazily parse the blob.

parse(blob)

Eagerly parse the blob into a document or documents.

__init__(device: str = '0', lang_model: str | None = None, batch_size: int = 8, chunk_length: int = 30, forced_decoder_ids: Tuple[Dict] | None = None)[source]#

Initialize the parser.

Parameters:
  • device (str) – device to use.

  • lang_model (str | None) – Whisper model to use, for example "openai/whisper-medium". Defaults to None.

  • forced_decoder_ids (Tuple[Dict] | None) – id states for the decoder in a multilanguage model. Defaults to None.

  • batch_size (int) – batch size used for decoding. Defaults to 8.

  • chunk_length (int) – chunk length used during inference, in seconds. Defaults to 30.
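
A constructor sketch tying these parameters together, assuming a French-language transcription setup; the model choice is illustrative:

    from transformers import WhisperProcessor
    from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParserLocal

    processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
    forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")

    parser = OpenAIWhisperParserLocal(
        device="cpu",  # force CPU; the default picks the GPU when one is available
        lang_model="openai/whisper-medium",
        batch_size=8,
        chunk_length=30,
        forced_decoder_ids=forced_decoder_ids,
    )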

lazy_parse(blob: Blob) → Iterator[Document][source]#

Lazily parse the blob.

Parameters:

blob (Blob) – the audio blob to parse.

Return type:

Iterator[Document]
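
A sketch of lazily parsing a single audio file, where parser is an OpenAIWhisperParserLocal instance as constructed above; "interview.mp3" is a placeholder path:

    from langchain_community.document_loaders.blob_loaders import Blob

    # "interview.mp3" is a placeholder path to a local audio file.
    blob = Blob.from_path("interview.mp3")
    for doc in parser.lazy_parse(blob):
        # Each yielded Document carries the transcription text.
        print(doc.page_content[:100])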

parse(blob: Blob) → List[Document]#

Eagerly parse the blob into a document or documents.

This is a convenience method for interactive development environments.

Production applications should favor the lazy_parse method instead.

Subclasses should generally not override this parse method.

Parameters:

blob (Blob) – Blob instance

Returns:

List of documents

Return type:

List[Document]
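
The eager counterpart, under the same assumptions as the lazy_parse sketch above:

    # Same placeholder file as above; parse() materializes the lazy iterator into a list.
    docs = parser.parse(Blob.from_path("interview.mp3"))
    print(len(docs), docs[0].page_content[:100])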