FFT of an AudioBuffer

Problem description

How do I perform an FFT on an AudioBuffer using the Web Audio API?

AudioBuffer {length: 8575407, duration: 178.6543125, sampleRate: 48000, numberOfChannels: 2}
length: 8575407
duration: 178.6543125
sampleRate: 48000
numberOfChannels: 2
__proto__: AudioBuffer
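
For reference, a buffer like this is typically produced by fetching the audio file and decoding it with the AudioContext; a minimal sketch (the file name is illustrative):

  // Fetch an audio file and decode it into an AudioBuffer.
  var audioCtx = new AudioContext();

  fetch('track.mp3')                                              // illustrative URL
    .then(function (res) { return res.arrayBuffer(); })
    .then(function (data) { return audioCtx.decodeAudioData(data); })
    .then(function (buffer) { console.log(buffer); });            // AudioBuffer {length: ..., sampleRate: 48000, ...}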

The visualizations documentation calls for a stream to be plugged into the analyser:

https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API

source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
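
In that snippet, `stream` is a live MediaStream rather than a decoded buffer; a hedged sketch of the wiring the MDN page assumes, using getUserMedia for microphone input:

  // `stream` in the MDN example is a microphone MediaStream from getUserMedia(),
  // fed into the analyser through a MediaStreamAudioSourceNode.
  navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var source = audioCtx.createMediaStreamSource(stream);
    source.connect(analyser);
  });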

Update

  console.log(buffer);

  // createBufferSource() takes no arguments; the decoded AudioBuffer is
  // assigned to the node's .buffer property instead.
  source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(analyser);
  analyser.connect(audioCtx.destination);

  analyser.fftSize = 2048;
  var bufferLength = analyser.frequencyBinCount; // fftSize / 2 = 1024 bins
  var dataArray = new Uint8Array(bufferLength);

  analyser.getByteTimeDomainData(dataArray);

  console.log(analyser);

Console

AnalyserNode {fftSize: 2048, frequencyBinCount: 1024, minDecibels: -100, maxDecibels: -30, smoothingTimeConstant: 0.8, …}
fftSize: 2048
frequencyBinCount: 1024
minDecibels: -100
maxDecibels: -30
smoothingTimeConstant: 0.8
context: AudioContext {baseLatency: 0.01, destination: AudioDestinationNode, currentTime: 8.04, sampleRate: 48000, listener: AudioListener, …}
numberOfInputs: 1
numberOfOutputs: 1
channelCount: 2
channelCountMode: "max"
channelInterpretation: "speakers"
__proto__: AnalyserNode
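
The logged AnalyserNode only reflects its configuration. Since the source was never start()ed and getByteTimeDomainData() ran before any audio had passed through the graph, dataArray presumably still holds nothing but the silence value (128). A quick check, as a sketch:

  // With no audio processed yet, every time-domain byte should sit at the
  // midpoint value 128, i.e. a flat, silent waveform.
  analyser.getByteTimeDomainData(dataArray);
  console.log(dataArray.every(function (v) { return v === 128; })); // presumably true here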

Tags: javascript, audio, web-audio-api

Solution
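
One possible approach, sketched below under the assumption that audioCtx is a running AudioContext and buffer is the decoded AudioBuffer from the question: connect an AudioBufferSourceNode to the AnalyserNode, start playback, and poll getByteFrequencyData() (the byte-scaled FFT magnitudes, one value per bin) on each animation frame.

  var analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;

  var source = audioCtx.createBufferSource();
  source.buffer = buffer;                // assign the decoded AudioBuffer
  source.connect(analyser);
  analyser.connect(audioCtx.destination);

  var bins = new Uint8Array(analyser.frequencyBinCount); // 1024 magnitude bins

  function readFFT() {
    // Frequency-domain data (the FFT result); getByteTimeDomainData() would
    // return the waveform instead.
    analyser.getByteFrequencyData(bins);
    // bin i covers roughly i * sampleRate / fftSize Hz (~23.4 Hz per bin at 48 kHz)
    console.log(bins);
    requestAnimationFrame(readFFT);
  }

  source.start();
  requestAnimationFrame(readFFT);

For dB values per bin there is also getFloatFrequencyData() with a Float32Array. To analyse the buffer without audible playback, one option is to build the same graph on an OfflineAudioContext and take analyser snapshots via its suspend()/resume() during rendering, though support for that varies by browser.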

