
Encapsulating a speech-to-text method

Speech recognition

Requirements:

  1. Integrate the speech recognition API by following the official documentation
  2. Convert the callback-style API into a Promise-based version
  • In startRecord:
  1. Instantiate the engine per the docs and start recognition
  2. Feed the recorded audio into the recognizer
  3. Replace console Log calls in the callbacks with Logger
  • In closeRecord:
  1. Finish recognition and release resources
  2. Set the state to VoiceState.VOICEOVER
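Requirement 2 (turning a callback API into a Promise) can be sketched with a generic wrapper. The `promisify` helper and `fakeCreateEngine` below are illustrative stand-ins, not part of the speech kit:

```typescript
// Generic sketch: wrap an error-first callback API into a Promise.
type Callback<T> = (err: Error | null, result?: T) => void

function promisify<T>(fn: (cb: Callback<T>) => void): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    fn((err, result) => {
      if (err) {
        reject(err)
      } else {
        resolve(result as T)
      }
    })
  })
}

// Hypothetical callback-style API, simulating async engine creation.
function fakeCreateEngine(cb: Callback<string>): void {
  setTimeout(() => cb(null, 'engine-ready'), 0)
}

async function demo(): Promise<void> {
  // The callback API can now be awaited like any Promise.
  const status = await promisify<string>(fakeCreateEngine)
  console.log(status) // 'engine-ready'
}
demo()
```

The same pattern applies to any kit method that still takes a callback: wrap the call site once and `await` it everywhere else.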

 

  async startRecord() {
    // Start recognition: create the ASR engine
    this.asrEngine = await speechRecognizer.createEngine({
      language: 'zh-CN',
      online: 1
    })
    // Save the component's `this` so the listener callbacks below can reach the component via `_this`
    const _this = this
    this.asrEngine.setListener({
      onStart(sessionId: string, eventMessage: string) {
        console.info(`onStart, sessionId: ${sessionId} eventMessage: ${eventMessage}`);
      },
      onEvent(sessionId: string, eventCode: number, eventMessage: string) {
        console.info(`onEvent, sessionId: ${sessionId} eventCode: ${eventCode} eventMessage: ${eventMessage}`);
      },
      onResult(sessionId: string, result: speechRecognizer.SpeechRecognitionResult) {
        _this.keyword = result.result
        _this.onChange(result.result)
        console.info(`onResult, sessionId: ${sessionId} result: ${JSON.stringify(result)}`);
      },
      onComplete(sessionId: string, eventMessage: string) {
        _this.onComplete(_this.keyword)
        _this.keyword = ''
        _this.voiceState = VoiceState.DEFAULT
        console.info(`onComplete, sessionId: ${sessionId} eventMessage: ${eventMessage}`);
      },
      onError(sessionId: string, errorCode: number, errorMessage: string) {
        console.error(`onError, sessionId: ${sessionId} errorCode: ${errorCode} errorMessage: ${errorMessage}`);
      }
    })
    const recognizerParams: speechRecognizer.StartParams = {
      sessionId: '10000',
      audioInfo: {
        audioType: 'pcm',
        sampleRate: 16000,
        soundChannel: 1,
        sampleBit: 16
      }
    }
    this.asrEngine?.startListening(recognizerParams)
    // Start recording
    const audioStreamInfo: audio.AudioStreamInfo = {
      samplingRate: audio.AudioSamplingRate.SAMPLE_RATE_16000,
      channels: audio.AudioChannel.CHANNEL_1,
      sampleFormat: audio.AudioSampleFormat.SAMPLE_FORMAT_S16LE,
      encodingType: audio.AudioEncodingType.ENCODING_TYPE_RAW
    }
    const audioCapturerInfo: audio.AudioCapturerInfo = {
      source: audio.SourceType.SOURCE_TYPE_MIC,
      capturerFlags: 0
    }
    const audioCapturerOptions: audio.AudioCapturerOptions = {
      streamInfo: audioStreamInfo,
      capturerInfo: audioCapturerInfo
    }

    this.audioCapturer = await audio.createAudioCapturer(audioCapturerOptions)
    this.audioCapturer.on('readData', (buffer) => {
      console.log('mk-logger', buffer.byteLength)
      this.asrEngine?.writeAudio('10000', new Uint8Array(buffer))
    })
    await this.audioCapturer.start()
    this.voiceState = VoiceState.VOICING
  }
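The StartParams above (pcm, 16000 Hz, 1 channel, 16-bit) must match the AudioCapturer stream settings (SAMPLE_RATE_16000, CHANNEL_1, SAMPLE_FORMAT_S16LE, ENCODING_TYPE_RAW), or the recognizer receives malformed audio. A quick sanity check of the resulting data rate (the 20 ms buffer interval is an assumption for illustration; the actual readData buffer size is decided by the system):

```typescript
// Byte rate of the capture format fed to writeAudio:
// 16000 samples/s * 1 channel * 16 bits / 8 = 32000 bytes/s.
const sampleRate = 16000
const channels = 1
const bytesPerSample = 16 / 8
const bytesPerSecond = sampleRate * channels * bytesPerSample
console.log(bytesPerSecond) // 32000

// Assuming roughly 20 ms per readData callback, each buffer is about 640 bytes.
const bufferBytesPer20ms = bytesPerSecond * 0.02
console.log(bufferBytesPer20ms) // 640
```

If the logged `buffer.byteLength` diverges wildly from this order of magnitude, the stream and recognizer formats are probably mismatched.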
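A closeRecord counterpart per the requirements (finish recognition, release resources, update the state) might look like the sketch below. The interfaces are stubs standing in for the real engine and capturer; the method names (`finish`/`shutdown` on the engine, `stop`/`release` on the capturer) and their ordering are assumptions to verify against the official kit docs:

```typescript
// Sketch of closeRecord against stubbed interfaces (assumed API shape).
enum VoiceState { DEFAULT, VOICING, VOICEOVER }

interface AsrEngine {
  finish(sessionId: string): void // flush and end the recognition session
  shutdown(): void                // release engine resources
}

interface Capturer {
  stop(): Promise<void>
  release(): Promise<void>
}

class VoiceInput {
  voiceState: VoiceState = VoiceState.VOICING
  constructor(public asrEngine?: AsrEngine, public audioCapturer?: Capturer) {}

  async closeRecord(): Promise<void> {
    // 1. End recognition and release the engine
    //    (in a real app, shutdown may need to wait for onComplete to fire)
    this.asrEngine?.finish('10000')
    this.asrEngine?.shutdown()
    this.asrEngine = undefined
    // 2. Stop and release the microphone capturer
    await this.audioCapturer?.stop()
    await this.audioCapturer?.release()
    this.audioCapturer = undefined
    // 3. Update the UI state
    this.voiceState = VoiceState.VOICEOVER
  }
}
```

Clearing the references after release prevents a stale engine or capturer from being reused on the next startRecord.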


Original article: https://blog.csdn.net/2301_80345482/article/details/142403896
