Reading the result of a Promise from decoded audio into a buffer

Problem Description

I'm trying to fetch an audio file from my assets folder and play the decoded audio in the browser. I want to use the Web Audio API so that I can apply visual effects based on the audio data.

Currently my code is:

  let audioContext = new (window.AudioContext || window.webkitAudioContext)();
  let masterGainNode = audioContext.createGain();
  let songBuffer = null;
  let path = "../assets/sampleTrackForWeb.mp3";

  function fetchSong() {
    fetch(path)
      .then((response) => response.arrayBuffer())
      .then((arrayBuffer) =>
        audioContext.decodeAudioData(
          arrayBuffer,
          (audioBuffer) => {
            console.log(audioBuffer); // the audio buffer is here and ready to go!
            songBuffer = audioBuffer;
          },
          (error) => console.error(error)
        )
      );
  }

  fetchSong();

  console.log(songBuffer); // null???!!!

I followed the MDN docs on how to do this almost exactly. Any help is appreciated! :)

Edit: posting the MDN docs showing how they do it

var source;
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();

  request.open('GET', 'viper.ogg', true);

  request.responseType = 'arraybuffer';

  request.onload = function() {
    var audioData = request.response;

    audioCtx.decodeAudioData(audioData, function(buffer) {
        source.buffer = buffer;

        source.connect(audioCtx.destination);
        source.loop = true;
      },

      function(e){ console.log("Error with decoding audio data" + e.err); });

  }

  request.send();
}

Tags: javascript, audio, web-audio-api

Solution


The problem:
You're expecting the result synchronously, ahead of time. It's like doing this:

let A;
setTimeout(() => (A = "Albatross"), 1000);
console.log(A); // undefined
// ...Why is A not an Albatross?
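The same analogy works once the delayed value is wrapped in a Promise and awaited — a minimal sketch (the `getAlbatross` name is purely illustrative):

```javascript
// Wrap the delayed assignment in a Promise so the caller can wait for it.
const getAlbatross = () =>
  new Promise((resolve) => setTimeout(() => resolve("Albatross"), 100));

(async () => {
  const A = await getAlbatross();
  console.log(A); // "Albatross" — logged only after the timer has fired
})();
```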

Promises

Get to know Promises to better understand asynchronicity:


From MDN's decodeAudioData documentation:

The decodeAudioData() method of the BaseAudioContext interface is used to asynchronously decode audio file data contained in an ArrayBuffer. In this case the ArrayBuffer is loaded from an XMLHttpRequest or FileReader. The decoded AudioBuffer is resampled to the AudioContext's sampling rate, then passed to a callback or promise.

So, let's explore how it can be passed to a callback or a Promise.

Promise .then()

You can use .then().
Since decodeAudioData returns a Promise, chaining .then() lets you work with the decoded result:

const audioContext = new (window.AudioContext || window.webkitAudioContext)();
const fetchSong = (path) =>
    fetch(path)
        .then((res) => res.arrayBuffer())
        .then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer));

Async / Await

Or, use the sugared async/await syntax:

const audioContext = new (window.AudioContext || window.webkitAudioContext)();
const fetchSong = async (path) => {
    const response = await fetch(path);
    const arrayBuffer = await response.arrayBuffer();
    return audioContext.decodeAudioData(arrayBuffer);
};

Both of the examples above return a Promise, so they can be used like this:

const songDataPromise = fetchSong("test.mp3");   // Promise {<pending>}
songDataPromise.then((audioBuffer) => {
    console.log(audioBuffer);                    // AudioBuffer {}
    console.log(audioBuffer.getChannelData(0));  // Float32Array []
});
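Once the Promise resolves, the AudioBuffer can be wired into the audio graph to actually play the track and drive the visual effects the question asks about. A browser-only sketch, reusing `audioContext` and the `fetchSong` from the .then() example above (`playSong` and the canvas rendering are illustrative):

```javascript
// Browser-only sketch: play the decoded buffer and expose frequency
// data for visual effects via an AnalyserNode.
const playSong = (audioBuffer) => {
  const source = audioContext.createBufferSource();
  const analyser = audioContext.createAnalyser();
  source.buffer = audioBuffer;
  source.connect(analyser);
  analyser.connect(audioContext.destination);
  source.start();

  const bins = new Uint8Array(analyser.frequencyBinCount);
  const draw = () => {
    analyser.getByteFrequencyData(bins); // current spectrum, 0-255 per bin
    // ...render `bins` to a canvas here...
    requestAnimationFrame(draw);
  };
  draw();
};

fetchSong("test.mp3").then(playSong);
```

Note that most browsers require a user gesture (e.g. a click) before an AudioContext may start producing sound.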

Callback

To hand the result to a callback instead, simply chain one more .then() and pass its result to the callback function:

const audioContext = new (window.AudioContext || window.webkitAudioContext)();
const fetchSong = (path, cb) =>
    fetch(path)
        .then((res) => res.arrayBuffer())
        .then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer))
        .then(cb);  // Resolve with callback

fetchSong("test.mp3", (audioBuffer) => {
    console.log(audioBuffer);                   // AudioBuffer {}
    console.log(audioBuffer.getChannelData(0)); // Float32Array []
});
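Whichever style you choose, fetching and decoding are relatively expensive, so it can be worth caching the resulting Promise and decoding each file only once. A minimal sketch of such a memoizing wrapper (`memoizeAsync` is an illustrative name, not a library function):

```javascript
// Cache the Promise per key, so repeated and concurrent calls share
// one fetch + decode instead of starting a new one each time.
const memoizeAsync = (fn) => {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) cache.set(key, fn(key));
    return cache.get(key); // the same Promise every time
  };
};

// Usage sketch: const getSong = memoizeAsync(fetchSong);
// getSong("test.mp3") decodes once; later calls reuse the result.
```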
