I'm trying to run face recognition and need a stream of images from the camera.
I decided to use Flutter's own camera library.
CameraController.startImageStream(Function(CameraImage) onAvailable) seems promising, but I can't figure out how to convert the
CameraImage data into something the face recognizer can read.
Has anyone else solved this?
CameraImage is normally in YUV 420 format. (Check
cameraImage.format.group to confirm; it should be ImageFormatGroup.yuv420.)
This works well with Firebase ML, as that's the format it expects; there's a useful demo here. However, other recognizers may want other formats (AWS wants a JPEG or PNG, for example).
YUV is tricky to convert because it uses chroma subsampling, and for performance you probably want to do the conversion in native code. On Android there's a YuvImage class into which you can pass the pixel data; it has a compressToJpeg method. Create a plugin or method channel to pass the plane bytes across, build a YuvImage from them, have it compress into a ByteArrayOutputStream, and return the resulting bytes to Dart.
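One wrinkle: YuvImage expects a single NV21 byte array (full Y plane followed by interleaved V,U), while the camera hands you three separate planes, each with its own row and pixel stride. A minimal plain-Java sketch of that repacking step is below; the class and method names are my own, the stride parameters are assumptions, and in real code you'd read them from the plane metadata (bytesPerRow / bytesPerPixel on the Dart side, or Image.Plane on the Android side). It also assumes the Y plane is tightly packed.

```java
import java.util.Arrays;

public class Nv21Packer {
    /**
     * Repacks separate Y, U, V planes into NV21 layout
     * (all of Y, then interleaved V,U pairs), which is what
     * android.graphics.YuvImage expects.
     *
     * uvPixelStride / uvRowStride describe how the U and V planes
     * are laid out; assumed here, read from plane metadata in practice.
     */
    public static byte[] toNv21(int width, int height,
                                byte[] y, byte[] u, byte[] v,
                                int uvPixelStride, int uvRowStride) {
        int ySize = width * height;
        byte[] out = new byte[ySize + 2 * (width / 2) * (height / 2)];
        // Assumes the Y plane is tightly packed (rowStride == width).
        System.arraycopy(y, 0, out, 0, ySize);
        int pos = ySize;
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int idx = row * uvRowStride + col * uvPixelStride;
                out[pos++] = v[idx]; // NV21 stores V before U
                out[pos++] = u[idx];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Tiny 4x2 frame: 8 Y bytes, 2 U bytes, 2 V bytes
        // (uvPixelStride = 1, uvRowStride = 2).
        byte[] y = {1, 2, 3, 4, 5, 6, 7, 8};
        byte[] u = {10, 11};
        byte[] v = {20, 21};
        byte[] nv21 = toNv21(4, 2, y, u, v, 1, 2);
        System.out.println(Arrays.toString(nv21));
    }
}
```

With the NV21 bytes in hand, the Android side of the method channel would be roughly `new YuvImage(nv21, ImageFormat.NV21, width, height, null).compressToJpeg(new Rect(0, 0, width, height), quality, outputStream)`, then return `outputStream.toByteArray()`.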