Hey there o/
For the past month I’ve been working on a first-person-view camcorder using my Android phone and a VR headset (there are many options to 3D print :D).
It’s a hard project but quite useful if you wish to record build tutorials from your own point of view.
Anyway, I hit a dead end in development regarding OpenGL, and I don’t want to spend another month studying it just to work around a bug in Google’s code.
Is there anyone here with experience in Android development who would like to take a shot?
I assume that, if you know what you are doing, it should be an easy thing to do. I can even explain what the problem is and what needs to be done; I just don’t know how to do it.
I could offer this to the Android Development group, but it’s a ghost town there and I don’t like those guys.
Well, if you can define your issue more precisely, I can tell you whether I can help you out. I’ve been developing Android apps for years now and have dealt with numerous OpenGL ES2 engines. If you plan to pretty much develop your own engine with new fragment and vertex shaders from scratch, though, I can’t offer the time needed.
@Klaus_Daume Thanks for the interest Klaus, I really appreciate it.
But my intention is to freely distribute the solution, so you will actually be helping anyone who finds this useful \o/
I created a git repo so you can take a look at the project. I’ve never used git before, so if you find any problems just let me know.
https://github.com/Frazatto/FPCamcorder
Well… the simple answer is that there is a bug in the new Camera2 interface: it can’t automatically identify the phone orientation and correct the camera preview accordingly. Recording is fine, but while using the preview in landscape lock, the image is vertically distorted.
But the real problem is a little more complicated…
I’m trying to use three different things for which there are no examples, and the documentation is vague, outdated, or just useless.
There is actually an “official” correction for this bug (they won’t admit it exists, so it was quietly added to one of the video recording examples): it is the configureTransform() method at https://developer.android.com/samples/Camera2Video/src/com.example.android.camera2video/Camera2VideoFragment.html#l550
But it is meant to be used with a SurfaceTexture, and when using the Google VR SDK you must ditch the Surfaces and deal with OpenGL directly…
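To give an idea of what configureTransform() is doing under the hood: it scales and rotates the preview quad so the sensor’s aspect ratio survives the landscape rotation. Here is a rough sketch of just the math, in plain Java so it runs anywhere; the method and class names are mine, not from the SDK, and the actual sample does this with android.graphics.Matrix on a TextureView.

```java
// Sketch of the aspect/rotation math behind the sample's configureTransform().
// For a landscape-locked display (Surface.ROTATION_90 / ROTATION_270), the
// camera buffer arrives sensor-oriented, so its width/height are swapped
// relative to the view. All names here are hypothetical.
public class PreviewTransform {

    // Returns { scaleX, scaleY, rotationDegrees } for the preview quad:
    // scale factors are fractions of the view size (letterbox fit).
    static float[] landscapeCorrection(int viewW, int viewH,
                                       int bufW, int bufH,
                                       int displayRotation) {
        // After the 90-degree rotation the buffer's width and height swap,
        // so fit a bufH x bufW image into viewW x viewH, preserving aspect.
        float fit = Math.min((float) viewW / bufH, (float) viewH / bufW);
        float scaleX = fit * bufH / viewW;   // fraction of view width used
        float scaleY = fit * bufW / viewH;   // fraction of view height used
        // Surface.ROTATION_90 == 1, Surface.ROTATION_270 == 3
        float rotation = (displayRotation == 1) ? -90f : 90f;
        return new float[] { scaleX, scaleY, rotation };
    }
}
```

For example, a 1280x720 buffer in a 1920x1080 view at ROTATION_90 comes out with scaleY = 1.0 (full height) and a narrow scaleX, which is exactly the pillarboxed, undistorted preview you want instead of the stretched one.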
After a week trying to understand matrix transformations to replicate the “official” solution in VR, it became clear I would get nowhere, since I had never worked with 3D rendering before.
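For anyone wondering what “matrix transformations” means here: in the GL path you rotate the texture coordinates of the camera quad with a 4x4 matrix instead of transforming a view. Below is a minimal sketch of that idea, hand-rolled in plain Java so it’s runnable; in the real app you would build the matrix with android.opengl.Matrix and do the multiply in the vertex shader. Everything named here is my own illustration, not project code.

```java
// Minimal sketch: rotate texture coordinates about the quad's center with a
// 4x4 column-major Z-rotation matrix (the same layout android.opengl.Matrix
// and GLSL use). All names are hypothetical.
public class TexRotate {

    // Column-major rotation about Z by the given angle.
    static float[] rotateZ(float degrees) {
        double r = Math.toRadians(degrees);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] {
             c,  s, 0, 0,   // column 0
            -s,  c, 0, 0,   // column 1
             0,  0, 1, 0,   // column 2
             0,  0, 0, 1    // column 3
        };
    }

    // Apply the matrix to one texture coordinate (u, v), rotating around the
    // texture center (0.5, 0.5) -- the same multiply a vertex shader would do.
    static float[] apply(float[] m, float u, float v) {
        float x = u - 0.5f, y = v - 0.5f;
        return new float[] { m[0] * x + m[4] * y + 0.5f,
                             m[1] * x + m[5] * y + 0.5f };
    }
}
```

With rotateZ(90f), the corner (1, 0) maps to (1, 1), i.e. the camera image comes out rotated a quarter turn, which is the kind of correction the Surface path gets for free from configureTransform().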
So check it out and let me know what you think.
I’m a Computer Engineer and an experienced developer, but I just can’t understand how someone could think Android development is simple or well documented.
I am currently in the hospital with my son, at least until the weekend. I’ll have a look at it later. Speaking of the Camera2 interface, if I remember correctly it can use any SurfaceView for projection. The last time I worked with it was about a year ago, when I used OpenCV for real-time manipulation.
@Klaus_Daume, I hope it’s nothing serious. Anyway, I’m not in any hurry.
Yes, it can. I made it work with a SurfaceTexture while I was learning; that’s why I know about the bug and how to fix it.
But VR uses OpenGL…
You’ll see it in the repo; that’s easier than trying to explain it here.
Unfortunately I couldn’t do much yet besides cloning it and having a very first look. Kinda stressful right now.