How to implement speech-to-text with the Speech framework in Objective-C?

This tutorial covers how to implement speech-to-text conversion with the Speech framework in Objective-C. It is adapted from a question and answer originally posted by overseas developers, with the hope that it helps you. Let's get started.


Problem Description

I want to use the iOS Speech framework for speech recognition in my Objective-C app.

I found a few Swift examples, but nothing in Objective-C.

Is it possible to access this framework from Objective-C? If so, how?

Recommended Answer

After spending enough time looking for Objective-C examples -- even in Apple's documentation -- I couldn't find anything decent, so I figured it out myself.

Header file (.h)

/*!
 * Import the Speech framework, assign the delegate and declare variables
 */

#import <UIKit/UIKit.h>
#import <Speech/Speech.h>

@interface ViewController : UIViewController <SFSpeechRecognizerDelegate> {
    SFSpeechRecognizer *speechRecognizer;
    SFSpeechAudioBufferRecognitionRequest *recognitionRequest;
    SFSpeechRecognitionTask *recognitionTask;
    AVAudioEngine *audioEngine;
}

@end

Implementation file (.m)

- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize the speech recognizer with a locale identifier such as en_US;
    // [SFSpeechRecognizer supportedLocales] lists all supported locales.
    speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"]];

    // Set the speech recognizer delegate
    speechRecognizer.delegate = self;

    // Request authorization so the user is asked for permission and you can get an
    // authorized response. Also remember to change the .plist file; check the repo's
    // readme file or this project's Info.plist.
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        switch (status) {
            case SFSpeechRecognizerAuthorizationStatusAuthorized:
                NSLog(@"Authorized");
                break;
            case SFSpeechRecognizerAuthorizationStatusDenied:
                NSLog(@"Denied");
                break;
            case SFSpeechRecognizerAuthorizationStatusNotDetermined:
                NSLog(@"Not Determined");
                break;
            case SFSpeechRecognizerAuthorizationStatusRestricted:
                NSLog(@"Restricted");
                break;
            default:
                break;
        }
    }];
}
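
As an aside, you don't have to guess at locale identifiers: the framework can report which locales it supports. A minimal sketch for logging them (my addition, not part of the original answer):

// Log every locale the Speech framework can recognize
for (NSLocale *locale in [SFSpeechRecognizer supportedLocales]) {
    NSLog(@"Supported locale: %@", locale.localeIdentifier);
}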

/*!
 * @brief Starts listening and recognizing user input through the 
 * phone's microphone
 */

- (void)startListening {

    // Initialize the AVAudioEngine
    audioEngine = [[AVAudioEngine alloc] init];

    // Make sure there's not a recognition task already running
    if (recognitionTask) {
        [recognitionTask cancel];
        recognitionTask = nil;
    }

    // Start an AVAudioSession; in production code, check `error` after each call
    NSError *error;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&error];
    [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];

    // Start a recognition process; the block logs the input and tears the audio
    // process down when the result is final or an error occurs.
    recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    AVAudioInputNode *inputNode = audioEngine.inputNode;
    recognitionRequest.shouldReportPartialResults = YES;
    recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = NO;
        if (result) {
            // Whatever you say into the microphone after pressing the button
            // is logged to the console.
            NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
            isFinal = result.isFinal;
        }
        if (error != nil || isFinal) {
            [audioEngine stop];
            [inputNode removeTapOnBus:0];
            recognitionRequest = nil;
            recognitionTask = nil;
        }
    }];

    // Set the recording format and feed microphone buffers to the request
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [recognitionRequest appendAudioPCMBuffer:buffer];
    }];

    // Start the audio engine, i.e. start listening.
    [audioEngine prepare];
    [audioEngine startAndReturnError:&error];
    NSLog(@"Say something, I'm listening");
}

- (IBAction)microPhoneTapped:(id)sender {
    if (audioEngine.isRunning) {
        [audioEngine stop];
        [recognitionRequest endAudio];
    } else {
        [self startListening];
    }
}

Now, implement the SFSpeechRecognizerDelegate method to check whether the speech recognizer is available.

#pragma mark - SFSpeechRecognizerDelegate Delegate Methods

- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    NSLog(@"Availability:%d", available);
}
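
In a real app, this callback is the natural place to enable or disable your record button while the recognizer comes and goes (for example, when the network drops). A minimal sketch, assuming a hypothetical microphoneButton outlet that is not part of the original sample:

// Inside speechRecognizer:availabilityDidChange:
// `microphoneButton` is a hypothetical UIButton outlet, not in the original sample
self.microphoneButton.enabled = available;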

Notes

Remember to modify the .plist file to request the user's authorization for speech recognition and microphone access. The <string> values must of course be customized to your needs; you can do this either by creating and editing the entries in the Property List editor, or by right-clicking the .plist file, choosing Open As -> Source Code, and pasting the following lines before the </dict> tag.

<key>NSMicrophoneUsageDescription</key>
<string>This app uses your microphone to record what you say, so watch what you say!</string>

<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to transform your spoken words into text and then analyze them, so watch what you say!</string>

Also remember that the Speech framework requires iOS 10.0+, so your project's deployment target must be at least iOS 10.0.
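
If your deployment target is lower than iOS 10 and you only want to use speech recognition where it exists, you can guard the calls at runtime. A minimal sketch (assuming Xcode 9+ for the @available syntax):

if (@available(iOS 10.0, *)) {
    // Safe to use SFSpeechRecognizer and friends here
    [self startListening];
} else {
    NSLog(@"Speech recognition requires iOS 10.0 or later");
}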

To run and test it you only need a very basic UI: create a UIButton and wire it to the microPhoneTapped action. When pressed, the app should start listening and log everything it hears through the microphone to the console (in the sample code, NSLog is the only thing receiving the text); pressing it again should stop the recording.
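
If you'd rather not use a storyboard, the same wiring can be done in code. A minimal sketch (the title and frame are arbitrary choices, not from the original project):

// In viewDidLoad, after the speech recognizer setup
UIButton *micButton = [UIButton buttonWithType:UIButtonTypeSystem];
micButton.frame = CGRectMake(20, 100, 280, 44);
[micButton setTitle:@"Microphone" forState:UIControlStateNormal];
[micButton addTarget:self action:@selector(microPhoneTapped:) forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:micButton];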

I created a GitHub repo with a sample project, enjoy!

That wraps up this tutorial on converting speech to text with the Speech framework in Objective-C; I hope it helps.