How to get frame by frame from MP4? (MediaCodec)

Actually I am working with OpenGL, and I would like to put all my textures into an MP4 in order to compress them.

Then I need to get them back from the MP4 on my Android device.

I need to somehow decode the MP4 and get the frames one by one, on request.

I found this MediaCodec

https://developer.android.com/reference/android/media/MediaCodec

and this MediaMetadataRetriever

https://developer.android.com/reference/android/media/MediaMetadataRetriever

But I did not see an approach for requesting frames one by one...

If there is someone who has worked with MP4, please point me in the right direction.

P.S. I am working natively (JNI), so it does not matter whether it is done in Java or natively, but I need to find a way.

EDIT1

I am making a kind of movie (just one 3D model), so I am changing my geometry as well as my textures every 32 milliseconds. So it seems reasonable to me to use an MP4 for the textures, because each new frame (every 32 milliseconds) is very similar to the previous one...

Now I use 400 frames for one model. For the geometry I use .mtr files and for the textures I use .pkm files (because that format is optimized for Android), so I have around 350 .mtr files (because some files include subindices) and 400 .pkm files...

This is the reason why I am going to use MP4 for the textures: one MP4 is much smaller than 400 .pkm files.

EDIT2

Please take a look at EDIT1.

Actually, all I need to know is whether there is an Android API that can read an MP4 frame by frame. Maybe some kind of getNextFrame() method?

Something like this:

MP4Player player = new MP4Player(PATH_TO_MY_MP4_FILE);

void readMP4(){
   Bitmap b;

   while(player.hasNext()){
      b = player.getNextFrame();

      ///.... my code here ...///
   }
}

EDIT3

I made the following implementation in Java:

public static void read(@NonNull final Context iC, @NonNull final String iPath)
{
    long time;

    int fileCount = 0;

    //Create a MediaPlayer just to query the duration
    MediaPlayer mp = MediaPlayer.create(iC, Uri.parse(iPath));
    time = mp.getDuration() * 1000L; // duration in microseconds (1000L avoids int overflow)
    mp.release();

    Log.e("TAG", String.format("TIME :: %s", time));

    MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
    mRetriever.setDataSource(iPath);

    long a = System.nanoTime();

    //frame rate 10.03/sec, 1/10.03 s = 99700 microseconds
    for (int i = 99700; i <= time; i = i + 99700)
    {
        Bitmap b = mRetriever.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);

        if (b == null)
        {
            Log.e("TAG", String.format("BITMAP STATE :: %s", "null"));
        }
        else
        {
            fileCount++;
        }

        long curTime = System.nanoTime();
        Log.e("TAG", String.format("EXECUTION TIME :: %s", curTime - a));
        a = curTime;
    }

    mRetriever.release();

    Log.e("TAG", String.format("COUNT :: %s", fileCount));
}

and here are the execution times:

  E/TAG: EXECUTION TIME :: 267982039
  E/TAG: EXECUTION TIME :: 222928769
  E/TAG: EXECUTION TIME :: 289899461
  E/TAG: EXECUTION TIME :: 138265423
  E/TAG: EXECUTION TIME :: 127312577
  E/TAG: EXECUTION TIME :: 251179654
  E/TAG: EXECUTION TIME :: 133996500
  E/TAG: EXECUTION TIME :: 289730345
  E/TAG: EXECUTION TIME :: 132158270
  E/TAG: EXECUTION TIME :: 270951461
  E/TAG: EXECUTION TIME :: 116520808
  E/TAG: EXECUTION TIME :: 209071269
  E/TAG: EXECUTION TIME :: 149697230
  E/TAG: EXECUTION TIME :: 138347269

These times are in nanoseconds, i.e. roughly 200 milliseconds per frame... It is very slow... I need around 30 milliseconds per frame.

So, I think this method executes on the CPU. The question is whether there is a method that executes on the GPU.

EDIT4

I found out that there is a MediaCodec class:

https://developer.android.com/reference/android/media/MediaCodec

I also found a similar question here: MediaCodec get all frames from video

I understood that there is a way to read byte by byte, but not frame by frame...

So the question still stands: is there a way to read an MP4 video frame by frame?

Encephalogram answered 27/6, 2019 at 12:59 Comment(3)
"I would like to put all my textures in MP4 in order to compress them." That will lead to incredibly bad results. MPEG compression relies on frame-to-frame similarity; a series of frames that have little or no similarity between them will yield terrible visual results.Ondrej
Please check out: Android Developer Blog Post on Game Textures, OpenGL Texture Compression Support and Image compression for Android developers - Google I/O 2016 for more normal Android developmentEmmer
@NicolBolas yes, I see, but I use the same model. I edited my question, please take a lookEncephalogram

The solution would look something like the ExtractMpegFramesTest, in which MediaCodec is used to generate "external" textures from video frames. In the test code, the frames are rendered to an off-screen pbuffer and then saved as PNG. You would just render them directly.

There are a few problems with this:

  1. MPEG video isn't designed to work well as a random-access database. A common GOP (group of pictures) structure has one "key frame" (essentially a JPEG image) followed by 14 delta frames, which just hold the difference from the previous decoded frame. So if you want frame N, you may have to decode frames N-14 through N-1 first. Not a problem if you're always moving forward (playing a movie onto a texture) or you only store key frames (at which point you've invented a clumsy database of JPEG images).
  2. As mentioned in comments and answers, you're likely to get some visual artifacts. How bad these look depends on the material and your compression rate. Since you're generating the frames, you may be able to reduce this by ensuring that, whenever there's a big change, the first frame is always a key frame.
  3. The firmware that MediaCodec interfaces with may want several frames before it starts producing output, even if you start at a key frame. Seeking around in a stream has a latency cost. See e.g. this post. (Ever wonder why DVRs have smooth fast-forward, but not smooth fast-backward?)
  4. MediaCodec frames passed through SurfaceTexture become "external" textures. These have some limitations vs. normal textures -- performance may be worse, can't use as color buffer in an FBO, etc. If you're just rendering it once per frame at 30fps this shouldn't matter.
  5. MediaMetadataRetriever's getFrameAtTime() method has less-than-desirable performance for the reasons noted above. You're unlikely to get better results by writing it yourself, although you can save a bit of time by skipping the step where it creates a Bitmap object. Also, you passed OPTION_CLOSEST_SYNC in, but that will only produce the results you want if all your frames are sync frames (again, clumsy database of JPEG images). You need to use OPTION_CLOSEST.
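
For illustration, here is a minimal sketch of the retriever approach with OPTION_CLOSEST, given a file path. This is just the API usage; it does not fix the performance cost described above:

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(path);

// Duration metadata is reported in milliseconds; convert to microseconds.
long durationUs = Long.parseLong(retriever.extractMetadata(
        MediaMetadataRetriever.METADATA_KEY_DURATION)) * 1000;
long frameIntervalUs = 33333; // ~30 fps

for (long t = 0; t < durationUs; t += frameIntervalUs) {
    // OPTION_CLOSEST decodes forward from the nearest preceding sync frame
    // to the exact frame, so it is frame-accurate but pays the seek cost.
    Bitmap frame = retriever.getFrameAtTime(t, MediaMetadataRetriever.OPTION_CLOSEST);
    // ... upload the frame to a texture, etc. ...
}
retriever.release();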

If you're just trying to play a movie on a texture (or your problem can be reduced to that), Grafika has some examples. One that may be relevant is TextureFromCamera, which renders the camera video stream on a GLES rect that can be zoomed and rotated. You can replace the camera input with the MP4 playback code from one of the other demos. This'll work fine if you're only playing forward, but if you want to skip around or go backward you'll have trouble.

The problem you're describing sounds pretty similar to what 2D game developers deal with. Doing what they do is probably the best approach.

Desmund answered 30/6, 2019 at 15:53 Comment(9)
So, as far as I can see, there is no way to get frames by index? Something like decoder.getFrameByIndex(index)? As I see it, I have to do all this manipulation: the decoder posts an image to the texture surface, then OpenGL draws it, then I need to read the pixels with glReadPixels() and then create a Bitmap, and only then can I use it... So, the question is: there is no way to get a frame like this: decoder.setMP4Path(path); decoder.getFrameByIndex(index)? Sorry, but I'm new at this...Encephalogram
MediaMetadataRetriever has a getImageAtIndex() call. However, it has to do the same steps anything else does: it has to tell MediaCodec to seek to the closest previous key frame, decode delta frames until it reaches the desired one, and then get the YUV data out to an application-owned buffer. The advantage of SurfaceTexture is that GLES can use the decoded frame data without moving through app memory. Note you can grab multiple consecutive frames more efficiently with getFramesAtIndex(), but I'm assuming that won't generally help because of random access requirements.Desmund
I think I understood the concept. Just one more question: I am using this cpp sample gist.github.com/alekseytimoshchenko/… , and the question is, how does the decoder know to advance to the next frame? Is it this line that moves it: AMediaExtractor_advance(d->ex); ?Encephalogram
In your sample, the work is all done in doCodecWork(). You don't drive MediaCodec one frame at a time. You hand it a bunch of buffers with input and a bunch of buffers that will hold output, and say "go". It notifies you when input buffers are emptied and output buffers are filled. MediaCodec operates on H.264 NAL units, not .mp4 (which can have multiple audio and video streams multiplexed); MediaExtractor is used to pull the desired video elementary stream out of the .mp4 wrapper. MediaExtractor readSampleData()/advance() is doing that work.Desmund
Look at this sample github.com/kueblert/AndroidMediaCodec/blob/master/…; as I understand it, it is exactly what I need. This guy gets the output buffer here: AMediaCodec_getOutputBuffer(mCodec, status, &bufsize), and then I have the buffer. But I don't want to use OpenCV, so I found this way github.com/latelee/yuv2rgb/blob/master/yuv2rgb.c to convert YUV to RGB... and that's it, I'm free to use the RGB buffer of the frame. It is very interesting, what do you think about it?Encephalogram
And also, how do I know which output format I have? I know that it is YUV, but according to these methods github.com/latelee/yuv2rgb/blob/master/yuv2rgb.c there are a few YUV formats: YUV422, YUV420...Encephalogram
The efficient way to handle the frames is to have the video decoder pass them to the GPU as an external texture. Copying them to app space, converting them to RGB in software, and uploading them to the GPU will add unnecessary overhead. MediaCodec decoders get to choose their own YUV format, typically NV12 or I420, but on some devices (notably those with Qualcomm chipsets) it can be a proprietary format. You have to get the media format object and query the AMEDIAFORMAT_KEY_COLOR_FORMAT key, then use a YUV converter for the specific format. (your kueblert example seems device-specific.)Desmund
Hello fadden, could you please take a look at my question #57183357 thanks in advanceEncephalogram
@Desmund Using a GL texture works for most videos, but sometimes I just get some random colors, as described in this question: #57504943. Please have a look.Cyrene

I can see why it might seem easy to have all your textures in a single file, but this is a really, really bad idea.

MP4 is a video container, and its codecs are highly optimised for a list of frames which have a high level of similarity to adjacent frames, i.e. motion. They are also optimised to be decompressed in sequential order, so using a 'random access' approach will be very inefficient.

To give a bit more detail, video codecs store key frames (typically one per second, though the rate varies) and delta frames the rest of the time. The key frames are independently compressed just like separate images, while the delta frames are stored as the difference from one or more other frames. The algorithm assumes this difference will be fairly minimal after motion compensation has been performed.

So if you want to access a single delta frame, your code will have to decompress a nearby key frame and all the delta frames that connect it to the frame you want. This will be much slower than just using a single-frame JPEG.

In short, use JPEG or PNG to compress your textures and add them all to a single archive file to keep it tidy.
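
For example, here is a minimal sketch of that approach, assuming the frames are stored as PNGs inside a zip archive (the "frame_0001.png" naming scheme is just an example):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

static Bitmap loadFrame(String archivePath, int frameIndex) throws Exception {
    try (ZipFile archive = new ZipFile(archivePath)) {
        // Look up the entry for the requested frame by name
        ZipEntry entry = archive.getEntry(String.format("frame_%04d.png", frameIndex));
        try (InputStream in = archive.getInputStream(entry)) {
            // Decode the PNG into a Bitmap, ready for texture upload
            return BitmapFactory.decodeStream(in);
        }
    }
}

In practice you would keep the ZipFile open across frames instead of reopening the archive on every call.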

Vouchsafe answered 27/6, 2019 at 14:3 Comment(3)
Don't use JPEG or PNG. Use an actual compressed texture format suitable for a GPU, such as ETC2 or ASTC.Cristacristabel
@Cristacristabel I see, but I use the same model. I edited my question. Please take a lookEncephalogram
Hello! Could you please take a look at this question #56936047Encephalogram

Yes, there is a way to extract single frames from an MP4 video.

In principle, you seem to be looking for an alternative way to load textures, where the usual way is GLUtils.texImage2D (which fills a texture from a Bitmap).
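
For reference, that usual path looks roughly like this (a sketch assuming a GL context is current and `bitmap` already holds your image):

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// Copies the Bitmap's pixels into the currently bound texture
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);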

First, you should consider what the others advise, and expect visual artifacts from compression. But assuming that your textures form a related sequence (e.g. an explosion), getting them from a video stream makes sense. For unrelated images you'll get better results using JPG or PNG. And note that MP4 video doesn't have an alpha channel, which is often used in textures.

For this task you can't use MediaMetadataRetriever; it won't give you the accuracy needed to extract all frames.

You'd have to work with the MediaCodec and MediaExtractor classes. The Android documentation for MediaCodec is detailed.
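
In outline, the decode loop looks something like the sketch below (condensed from the MediaCodec documentation; `outputSurface` is assumed to come from your SurfaceTexture, and error and EOS handling are omitted):

MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(path);

// Find and select the video track.
MediaFormat format = null;
String mime = null;
for (int i = 0; i < extractor.getTrackCount(); i++) {
    format = extractor.getTrackFormat(i);
    mime = format.getString(MediaFormat.KEY_MIME);
    if (mime.startsWith("video/")) {
        extractor.selectTrack(i);
        break;
    }
}

MediaCodec codec = MediaCodec.createDecoderByType(mime);
codec.configure(format, outputSurface, null, 0);
codec.start();

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
while (true) {
    // Feed compressed samples in...
    int inIndex = codec.dequeueInputBuffer(10000);
    if (inIndex >= 0) {
        ByteBuffer inBuf = codec.getInputBuffer(inIndex);
        int size = extractor.readSampleData(inBuf, 0);
        if (size < 0) break; // end of stream (EOS flagging omitted here)
        codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
        extractor.advance();
    }
    // ...and pull decoded frames out; each released buffer is one frame step.
    int outIndex = codec.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        codec.releaseOutputBuffer(outIndex, true); // true = render to the Surface
    }
}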

Actually, you'll need to implement a kind of customized video player and add one key function: frame step.

The closest thing to this is Android's MediaPlayer, which is a complete player, but it 1) lacks frame-step and 2) is rather closed-source, because it's implemented by a lot of native C++ libraries which are impossible to extend and hard to study.

I advise this from the experience of creating a frame-by-frame video player, which I did by adapting MediaPlayer-Extended. It is written in plain Java (no native code), so you can include it in your project and add the function that you need. It works with Android's MediaCodec and MediaExtractor.
Somewhere in the MediaPlayer class you'd add a function for frameStep, and add another signal + function in PlaybackThread to decode just one next frame (in paused mode). However, the implementation of this would be up to you. The result would be that you let the decoder obtain and process a single frame, consume the frame, then repeat with the next frame. I did it, so I know that this approach works.

The other half of the task is obtaining the result. A video player (with MediaCodec) outputs frames into a Surface. Your task is to get the pixels out. I know a way to read an RGB bitmap from such a surface: you need to create an OpenGL pbuffer EGLSurface, let MediaCodec render into that surface (via Android's SurfaceTexture), then read the pixels from it. This is another nontrivial task: you need to create a shader to render the OES (external) texture and use GLES20.glReadPixels to obtain the RGB pixels into a ByteBuffer. You'd then upload these RGB bitmaps into your textures.
However, since you want to load textures, you may find an optimized way to render the video frame directly into your textures and avoid moving pixels around.
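
To make the readback path concrete, the two pieces specific to external textures look roughly like this (a sketch; the EGL/pbuffer setup around them is as in ExtractMpegFramesTest):

// Fragment shader that samples the decoder's external (OES) texture
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTexCoord);\n" +
        "}\n";

// After drawing a frame into the pbuffer surface, read it back:
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);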

I hope this helps; good luck with the implementation.

Infest answered 2/7, 2019 at 22:56 Comment(2)
Look at this sample github.com/kueblert/AndroidMediaCodec/blob/master/…; as I understand it, it is exactly what I need. This guy gets the output buffer here: AMediaCodec_getOutputBuffer(mCodec, status, &bufsize), and then I have the buffer. But I don't want to use OpenCV, so I found this way github.com/latelee/yuv2rgb/blob/master/yuv2rgb.c to convert YUV to RGB... and that's it, I'm free to use the RGB buffer of the frame. It is very interesting, what do you think about it?Encephalogram
Yes, you're headed in a good direction. Note that you're in the C++/native world, while I'm mostly in Java/Kotlin, so I don't have experience using media from C++. However, here's what to study: developer.android.com/ndk/guides/stable_apis contains the native APIs; you'd be interested in libmediandk (that's what the guy in your link above uses), and also the JNI version of SurfaceTexture, the Hardware Buffer APIs, etc. An important step is also YUV2RGB, for which you have clues; that's a simple conversion, but you have to get access to the hardware buffer from C++, and maybe you need to understand the YUV format.Infest

Actually, I want to post my implementation as it currently stands.

Here is the .h file:

#include <jni.h>
#include <memory>
#include <string>
#include <vector>

#include <opencv2/opencv.hpp>

#include "looper.h"
#include "media/NdkMediaCodec.h"
#include "media/NdkMediaExtractor.h"

#ifndef NATIVE_CODEC_NATIVECODECC_H
#define NATIVE_CODEC_NATIVECODECC_H

//Originally taken from https://github.com/googlesamples/android-ndk/tree/master/native-codec
//Conversion taken from https://github.com/kueblert/AndroidMediaCodec/blob/master/nativecodecvideo.cpp

class NativeCodec
{
public:
    NativeCodec() = default;

    ~NativeCodec() = default;

    void DecodeDone();

    void Pause();

    void Resume();

    bool createStreamingMediaPlayer(const std::string &filename);

    void setPlayingStreamingMediaPlayer(bool isPlaying);

    void shutdown();

    void rewindStreamingMediaPlayer();

    int getFrameWidth() const
    {
        return m_frameWidth;
    }

    int getFrameHeight() const
    {
        return m_frameHeight;
    }

    void getNextFrame(std::vector<unsigned char> &imageData);

private:
    struct Workerdata
    {
        AMediaExtractor *ex;
        AMediaCodec *codec;
        bool sawInputEOS;
        bool sawOutputEOS;
        bool isPlaying;
        bool renderonce;
    };

    void Seek();

    ssize_t m_bufidx = -1;
    int m_frameWidth = -1;
    int m_frameHeight = -1;
    cv::Size m_frameSize;

    Workerdata m_data = {nullptr, nullptr, false, false, false, false};
};

#endif //NATIVE_CODEC_NATIVECODECC_H

Here is the .cc file:

#include "native_codec.h"

#include <cassert>
#include "native_codec.h"
#include <jni.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <cerrno>
#include <climits>
#include "util.h"
#include <android/log.h>
#include <string>
#include <chrono>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>

#include <android/log.h>
#include <string>
#include <chrono>

// for native window JNI
#include <android/native_window_jni.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>

using namespace std;
using namespace std::chrono;

bool NativeCodec::createStreamingMediaPlayer(const std::string &filename)
{
    AMediaExtractor *ex = AMediaExtractor_new();
    media_status_t err = AMediaExtractor_setDataSource(ex, filename.c_str());

    if (err != AMEDIA_OK)
    {
        return false;
    }

    size_t numtracks = AMediaExtractor_getTrackCount(ex);

    AMediaCodec *codec = nullptr;

    for (int i = 0; i < numtracks; i++)
    {
        AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);

        int format_color;
        AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_COLOR_FORMAT, &format_color);

        bool ok = AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_WIDTH, &m_frameWidth);
        ok = ok && AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_HEIGHT, &m_frameHeight);

        if (ok)
        {
            m_frameSize = cv::Size(m_frameWidth, m_frameHeight);
        }
        else
        {
            //Asking the format for the frame width / height failed.
        }

        const char *mime;

        if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime))
        {
            return false;
        }
        else if (!strncmp(mime, "video/", 6))
        {
            // Omitting most error handling for clarity.
            // Production code should check for errors.
            AMediaExtractor_selectTrack(ex, i);
            codec = AMediaCodec_createDecoderByType(mime);
            AMediaCodec_configure(codec, format, nullptr, nullptr, 0);
            m_data.ex = ex;
            m_data.codec = codec;
            m_data.sawInputEOS = false;
            m_data.sawOutputEOS = false;
            m_data.isPlaying = false;
            m_data.renderonce = true;
            AMediaCodec_start(codec);
        }

        AMediaFormat_delete(format);
    }

    return true;
}

void NativeCodec::getNextFrame(std::vector<unsigned char> &imageData)
{
    if (!m_data.sawInputEOS)
    {
        m_bufidx = AMediaCodec_dequeueInputBuffer(m_data.codec, 2000);

        if (m_bufidx >= 0)
        {
            size_t bufsize;
            auto buf = AMediaCodec_getInputBuffer(m_data.codec, m_bufidx, &bufsize);
            auto sampleSize = AMediaExtractor_readSampleData(m_data.ex, buf, bufsize);

            if (sampleSize < 0)
            {
                sampleSize = 0;
                m_data.sawInputEOS = true;
            }

            auto presentationTimeUs = AMediaExtractor_getSampleTime(m_data.ex);

            AMediaCodec_queueInputBuffer(m_data.codec, m_bufidx, 0, sampleSize, presentationTimeUs,
                                         m_data.sawInputEOS ? AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);

            AMediaExtractor_advance(m_data.ex);
        }
    }

    if (!m_data.sawOutputEOS)
    {
        AMediaCodecBufferInfo info;
        auto status = AMediaCodec_dequeueOutputBuffer(m_data.codec, &info, 0);

        if (status >= 0)
        {
            if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM)
            {
                __android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM",
                                    "AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM :: %s", "output EOS");

                m_data.sawOutputEOS = true;
            }

            if (info.size > 0)
            {
                uint8_t *buf = AMediaCodec_getOutputBuffer(m_data.codec, static_cast<size_t>(status), /*bufsize*/nullptr);

                // The decoder outputs YUV420 semi-planar data: height * 1.5 rows of single-channel bytes.
                cv::Mat YUVframe(cv::Size(m_frameSize.width, static_cast<int>(m_frameSize.height * 1.5)), CV_8UC1, buf);

                cv::Mat colImg(m_frameSize, CV_8UC3);
                cv::cvtColor(YUVframe, colImg, CV_YUV420sp2BGR, 3);
                auto dataSize = colImg.rows * colImg.cols * colImg.channels();
                imageData.assign(colImg.data, colImg.data + dataSize);
            }

            AMediaCodec_releaseOutputBuffer(m_data.codec, static_cast<size_t>(status), info.size != 0);

            if (m_data.renderonce)
            {
                m_data.renderonce = false;
                return;
            }
        }
        // Note: the AMEDIACODEC_INFO_* status codes are negative, so they must be
        // checked before any generic "status < 0" branch, otherwise those branches
        // are unreachable.
        else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED)
        {
            __android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED",
                                "AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED :: %s", "output buffers changed");
            getNextFrame(imageData);
        }
        else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED)
        {
            auto format = AMediaCodec_getOutputFormat(m_data.codec);

            __android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED",
                                "AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED :: %s", AMediaFormat_toString(format));

            AMediaFormat_delete(format);
            getNextFrame(imageData);
        }
        else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER)
        {
            // No output buffer is ready yet; feed more input and try again.
            getNextFrame(imageData);
        }
        else
        {
            __android_log_print(ANDROID_LOG_ERROR, "UNEXPECTED INFO CODE",
                                "UNEXPECTED INFO CODE :: %zd", status);
        }
    }
}

void NativeCodec::DecodeDone()
{
    if (m_data.codec != nullptr)
    {
        AMediaCodec_stop(m_data.codec);
        AMediaCodec_delete(m_data.codec);
        AMediaExtractor_delete(m_data.ex);
        m_data.sawInputEOS = true;
        m_data.sawOutputEOS = true;
    }
}

void NativeCodec::Seek()
{
    AMediaExtractor_seekTo(m_data.ex, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
    AMediaCodec_flush(m_data.codec);
    m_data.sawInputEOS = false;
    m_data.sawOutputEOS = false;

    if (!m_data.isPlaying)
    {
        m_data.renderonce = true;
    }
}

void NativeCodec::Pause()
{
    if (m_data.isPlaying)
    {
        // flush all outstanding codecbuffer messages with a no-op message
        m_data.isPlaying = false;
    }
}

void NativeCodec::Resume()
{
    if (!m_data.isPlaying)
    {
        m_data.isPlaying = true;
    }
}

void NativeCodec::setPlayingStreamingMediaPlayer(bool isPlaying)
{
    if (isPlaying)
    {
        Resume();
    }
    else
    {
        Pause();
    }
}

void NativeCodec::shutdown()
{
    m_bufidx = -1;
    DecodeDone();
}

void NativeCodec::rewindStreamingMediaPlayer()
{
    Seek();
}

So, according to this implementation, for the format conversion (in my case from YUV to BGR) you need to set up OpenCV. To understand how to do that, check these two sources:

https://www.youtube.com/watch?v=jN9Bv5LHXMk

https://www.youtube.com/watch?v=0fdIiOqCz3o

And also, as a sample, I leave my CMakeLists.txt file here:

#For adding OpenCV take a look at these videos
#https://www.youtube.com/watch?v=jN9Bv5LHXMk
#https://www.youtube.com/watch?v=0fdIiOqCz3o
#Watch the videos, then compare with this file and do the same

set(pathToProject
    C:/Users/tetavi/Downloads/Buffer/OneMoreArNew/arcore-android-sdk/samples/hello_ar_c)
set(pathToOpenCv C:/OpenCV-android-sdk)

cmake_minimum_required(VERSION 3.4.1)

set(CMAKE_VERBOSE_MAKEFILE on)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")

include_directories(${pathToOpenCv}/sdk/native/jni/include)

# Import the ARCore library.
add_library(arcore SHARED IMPORTED)
set_target_properties(arcore PROPERTIES IMPORTED_LOCATION
    ${ARCORE_LIBPATH}/${ANDROID_ABI}/libarcore_sdk_c.so
    INTERFACE_INCLUDE_DIRECTORIES ${ARCORE_INCLUDE}
    )

# Import the glm header file from the NDK.
add_library(glm INTERFACE)
set_target_properties(glm PROPERTIES
    INTERFACE_INCLUDE_DIRECTORIES ${ANDROID_NDK}/sources/third_party/vulkan/src/libs/glm
    )

# This is the main app library.
add_library(hello_ar_native SHARED
    src/main/cpp/background_renderer.cc
    src/main/cpp/hello_ar_application.cc
    src/main/cpp/jni_interface.cc
    src/main/cpp/video_render.cc
    src/main/cpp/geometry_loader.cc
    src/main/cpp/plane_renderer.cc
    src/main/cpp/native_codec.cc
    src/main/cpp/point_cloud_renderer.cc
    src/main/cpp/frame_manager.cc
    src/main/cpp/safe_queue.cc
    src/main/cpp/stb_image.h
    src/main/cpp/util.cc)

add_library(lib_opencv SHARED IMPORTED)
set_target_properties(lib_opencv PROPERTIES IMPORTED_LOCATION
    ${pathToProject}/app/src/main/jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libopencv_java3.so)

target_include_directories(hello_ar_native PRIVATE
    src/main/cpp)

target_link_libraries(hello_ar_native ${log-lib} lib_opencv
    android
    log
    GLESv2
    glm
    mediandk
    arcore)

Usage:

You need to create the streaming media player with this method:

NativeCodec::createStreamingMediaPlayer(pathToYourMP4file);

and then just use:

NativeCodec::getNextFrame(imageData);

Feel free to ask

Encephalogram answered 11/7, 2019 at 8:9 Comment(0)
