Android Audio: Recording Audio with AudioRecord
1. Audio Capture on the Android Platform
The Android SDK provides two audio-capture APIs: MediaRecorder and AudioRecord. MediaRecorder is the higher-level one: it encodes and compresses the microphone input (e.g. to AMR or AAC) and writes the result straight to a file. AudioRecord sits closer to the hardware, offers much finer control, and delivers the raw PCM audio data frame by frame.
2. Basic AudioRecord Capture Flow
- Construct an AudioRecord object.
- Start capturing.
- Read the captured data.
- Stop capturing.
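The four steps above map directly onto the AudioRecord API. A minimal sketch of one pass through the lifecycle (error handling omitted, and assuming the RECORD_AUDIO permission has already been granted; a real app would loop over read() on a worker thread, as the wrapper class later in this article does):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class CaptureFlow {
    public static void captureOnce() {
        int sampleRate = 44100;
        int channelConfig = AudioFormat.CHANNEL_IN_MONO;
        int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

        // 1. Construct an AudioRecord object, sized from getMinBufferSize().
        int bufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
        AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, channelConfig, audioFormat, bufferSize);

        // 2. Start capturing.
        record.startRecording();

        // 3. Read one buffer of raw PCM data.
        byte[] pcm = new byte[bufferSize];
        int read = record.read(pcm, 0, pcm.length);

        // 4. Stop capturing and release the native resources.
        record.stop();
        record.release();
    }
}
```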
3. Key AudioRecord Parameters
- audioSource: the audio source; see the MediaRecorder.AudioSource constants (usually MIC).
- sampleRateInHz: the sampling rate in Hz (44100 Hz is the only rate guaranteed to work on all devices).
- channelConfig: the channel configuration, CHANNEL_IN_MONO (mono) or CHANNEL_IN_STEREO (stereo).
- audioFormat: the sample format, specifying the encoding and size of each sample (e.g. ENCODING_PCM_16BIT).
- bufferSizeInBytes: the size, in bytes, of the buffer AudioRecord writes captured audio into; it must be at least the value returned by getMinBufferSize().
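Together, the sample rate, channel count, and sample size determine the raw PCM data rate, which is worth keeping in mind when sizing buffers. A small illustrative helper (not part of the AudioRecord API) makes the arithmetic concrete:

```java
public class PcmRate {
    // Bytes produced per second of raw PCM:
    // sampleRate * channelCount * bytesPerSample.
    static int bytesPerSecond(int sampleRateInHz, int channelCount, int bitsPerSample) {
        return sampleRateInHz * channelCount * (bitsPerSample / 8);
    }

    public static void main(String[] args) {
        // 44100 Hz, stereo, 16-bit PCM (the defaults used later in this article):
        System.out.println(bytesPerSecond(44100, 2, 16)); // 176400 bytes per second
        // So 20 ms of audio at that rate is 176400 / 50 = 3528 bytes.
    }
}
```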
4. Main AudioRecord Methods
Method | Description |
---|---|
AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes) | Constructor |
static int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) | Returns the minimum buffer size; takes the same three parameters as the constructor |
Builder setAudioSource(int audioSource) | Sets the audio source (a method of AudioRecord.Builder) |
void startRecording() | Starts recording |
int read(byte[] audioData, int offsetInBytes, int sizeInBytes)<br>int read(ByteBuffer audioBuffer, int sizeInBytes)<br>int read(short[] audioData, int offsetInShorts, int sizeInShorts) | Reads audio data from the hardware into a buffer; all three overloads return the number of units read, or a negative error code |
void stop() | Stops recording |
void release() | Releases the native resources |
int getState() | Returns the initialization state (STATE_INITIALIZED / STATE_UNINITIALIZED) |
int getRecordingState() | Returns the recording state (RECORDSTATE_RECORDING / RECORDSTATE_STOPPED) |
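The read(short[], ...) overload returns whole 16-bit samples rather than bytes. To hand those samples to a byte-oriented consumer (for example a PCM16 WAV writer, which expects little-endian data), they need to be repacked. A small helper for that, introduced here only for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmConvert {
    // Packs 'count' 16-bit samples into little-endian bytes,
    // the byte layout used by PCM16 WAV data.
    static byte[] shortsToLittleEndianBytes(short[] samples, int count) {
        ByteBuffer buf = ByteBuffer.allocate(count * 2).order(ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i < count; i++) {
            buf.putShort(samples[i]);
        }
        return buf.array();
    }
}
```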
5. Implementation
1) Permission
<uses-permission android:name="android.permission.RECORD_AUDIO" />
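Declaring the permission in the manifest is not sufficient on Android 6.0 (API 23) and later: RECORD_AUDIO is a dangerous permission and must also be granted by the user at runtime. A typical check-and-request from inside an Activity (the request code 1 is an arbitrary value chosen for this example):

```java
import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Inside an Activity, before starting capture:
if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
        != PackageManager.PERMISSION_GRANTED) {
    // The result is delivered to onRequestPermissionsResult().
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.RECORD_AUDIO}, /* requestCode */ 1);
}
```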
2) A simple wrapper around the AudioRecord class
```java
import android.annotation.SuppressLint;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.SystemClock;
import android.util.Log;

public class AudioCapturer {

    private static final String TAG = "AudioCapturer";

    private static final int DEFAULT_SOURCE = MediaRecorder.AudioSource.MIC;
    private static final int DEFAULT_SAMPLE_RATE = 44100;
    private static final int DEFAULT_CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_STEREO;
    private static final int DEFAULT_AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;

    private AudioRecord mAudioRecord;
    private int mMinBufferSize = 0;
    private Thread mCaptureThread;
    private boolean mIsCaptureStarted = false;
    private volatile boolean mIsLoopExit = false;
    private OnAudioFrameCapturedListener mAudioFrameCapturedListener;

    public interface OnAudioFrameCapturedListener {
        void onAudioFrameCaptured(byte[] audioData);
    }

    public boolean isCaptureStarted() {
        return mIsCaptureStarted;
    }

    public void setOnAudioFrameCapturedListener(OnAudioFrameCapturedListener listener) {
        mAudioFrameCapturedListener = listener;
    }

    public boolean startCapture() {
        return startCapture(DEFAULT_SOURCE, DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_CONFIG,
                DEFAULT_AUDIO_FORMAT);
    }

    // The caller must hold the RECORD_AUDIO permission before calling this.
    @SuppressLint("MissingPermission")
    public boolean startCapture(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat) {
        if (mIsCaptureStarted) {
            Log.e(TAG, "Capture already started!");
            return false;
        }

        mMinBufferSize = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
        if (mMinBufferSize == AudioRecord.ERROR_BAD_VALUE) {
            Log.e(TAG, "Invalid parameter!");
            return false;
        }
        Log.d(TAG, "getMinBufferSize = " + mMinBufferSize + " bytes");

        mAudioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, mMinBufferSize);
        if (mAudioRecord.getState() == AudioRecord.STATE_UNINITIALIZED) {
            Log.e(TAG, "AudioRecord initialization failed!");
            return false;
        }

        mAudioRecord.startRecording();

        mIsLoopExit = false;
        mCaptureThread = new Thread(new AudioCaptureRunnable());
        mCaptureThread.start();

        mIsCaptureStarted = true;
        Log.d(TAG, "Audio capture started");
        return true;
    }

    public void stopCapture() {
        if (!mIsCaptureStarted) {
            return;
        }

        // Signal the capture loop to exit, then wait briefly for the thread to finish.
        mIsLoopExit = true;
        try {
            mCaptureThread.interrupt();
            mCaptureThread.join(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        if (mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            mAudioRecord.stop();
        }
        mAudioRecord.release();

        mIsCaptureStarted = false;
        mAudioFrameCapturedListener = null;
        Log.d(TAG, "Audio capture stopped");
    }

    private class AudioCaptureRunnable implements Runnable {
        @Override
        public void run() {
            while (!mIsLoopExit) {
                byte[] buffer = new byte[mMinBufferSize];
                // read() blocks until the requested number of bytes is available
                // or an error occurs; 'ret' is the number of bytes actually read.
                int ret = mAudioRecord.read(buffer, 0, mMinBufferSize);
                if (ret == AudioRecord.ERROR_INVALID_OPERATION) {
                    Log.e(TAG, "Error ERROR_INVALID_OPERATION");
                } else if (ret == AudioRecord.ERROR_BAD_VALUE) {
                    Log.e(TAG, "Error ERROR_BAD_VALUE");
                } else {
                    if (mAudioFrameCapturedListener != null) {
                        mAudioFrameCapturedListener.onAudioFrameCaptured(buffer);
                    }
                    Log.d(TAG, "Captured " + ret + " bytes");
                }
                SystemClock.sleep(10);
            }
        }
    }
}
```
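One possible way to use the wrapper above is to dump the captured frames to a raw PCM file, which can later be imported into a tool such as Audacity (as 44100 Hz, stereo, 16-bit, the wrapper's defaults). The output path and the surrounding class are illustrative, not part of the original code:

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class CaptureDemo {
    // Assumes the RECORD_AUDIO permission has already been granted.
    public static AudioCapturer recordToFile(String path) throws IOException {
        final FileOutputStream out = new FileOutputStream(path);

        AudioCapturer capturer = new AudioCapturer();
        capturer.setOnAudioFrameCapturedListener(new AudioCapturer.OnAudioFrameCapturedListener() {
            @Override
            public void onAudioFrameCaptured(byte[] audioData) {
                try {
                    // Raw PCM bytes, delivered from the capture thread.
                    out.write(audioData);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
        capturer.startCapture();
        return capturer; // Caller invokes stopCapture() and closes the stream when done.
    }
}
```

Note that onAudioFrameCaptured() is invoked on the capture thread, so any heavier processing should be handed off rather than done in the callback.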
Original post: https://blog.csdn.net/liuning1985622/article/details/138391140