I am currently trying to build a curved image tracking application using the Universal AR SDK. The goal is to track the label of a bottle and then display 3D content on top of it. Since the actual bottle is quite small, the camera has to be very close to the bottle before it starts tracking the label. To avoid needing to be so close, my idea was to run the image tracking only on a cropped center part of the camera video.
(not the actual bottle used)
The bordered center part would be the actual area used for image tracking. This way a user wouldn't have to be as close to the bottle, since the target would appear much bigger within the tracking area. So far I have tried two approaches:
- Instead of the default camera frame drawing, I drew the camera frame manually on a fullscreen WebGL quad and scaled the image texture by changing the UV coordinates. Sadly, this only affected the displayed video, not the tracking itself, since the whole camera frame was still being fed to the image tracker.
- I switched from the regular Zappar CameraSource to an HTMLElementSource so I could use a video element as the source. To get a video showing only the center part of the camera feed, I had to draw the camera video into a canvas (painting only the cropped center) and then stream that canvas back into a second video element. This let me create a cropped video I could feed into the Zappar image tracking, but it introduced other problems. The video always appeared skewed in some way inside the Zappar canvas (stretched horizontally or vertically, with parts of the video cut off), and it had a noticeable performance impact: the video inside the Zappar canvas lagged behind the live camera feed and stuttered a lot, probably because of the multiple conversion stages in between.
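For reference, the UV trick from the first attempt boils down to a pure remapping of the quad's texture coordinates toward the center. This is just a sketch of that math (the `zoomUV` helper and `zoom` factor are my own names, not SDK API), and, as noted above, it only changes what is displayed, not what the tracker sees:

```typescript
// Remap a UV coordinate so the fullscreen quad samples only the centre
// 1/zoom portion of the camera texture. Hypothetical helper, not SDK API.
function zoomUV(uv: [number, number], zoom: number): [number, number] {
  // Pull each coordinate toward the centre (0.5, 0.5) by the zoom factor.
  return [0.5 + (uv[0] - 0.5) / zoom, 0.5 + (uv[1] - 0.5) / zoom];
}

// Applied to the four corners of a fullscreen quad with zoom = 2,
// the quad samples the texture from 0.25..0.75 on both axes:
const corners: [number, number][] = [[0, 0], [1, 0], [0, 1], [1, 1]];
const zoomed = corners.map((c) => zoomUV(c, 2));
// zoomed → [[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]]
```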
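One likely cause of the skewing in the second attempt is that the cropped source rectangle passed to `drawImage` doesn't have the same aspect ratio as the destination canvas, so the 2D context stretches it to fit. A sketch of a helper that computes a centered crop whose aspect ratio matches the destination (the function name and parameters are illustrative, not from the SDK):

```typescript
// A source rectangle for the 9-argument form of
// CanvasRenderingContext2D.drawImage.
interface Rect { x: number; y: number; w: number; h: number; }

// Compute a centred crop of the source frame, shrunk by `zoom`, whose
// aspect ratio matches the destination canvas so drawImage won't stretch it.
function centerCrop(srcW: number, srcH: number,
                    dstW: number, dstH: number, zoom: number): Rect {
  const dstAspect = dstW / dstH;
  // Largest rectangle with the destination's aspect ratio that fits the source...
  let w = srcW;
  let h = w / dstAspect;
  if (h > srcH) { h = srcH; w = h * dstAspect; }
  // ...shrunk by the zoom factor and centred in the frame.
  w /= zoom;
  h /= zoom;
  return { x: (srcW - w) / 2, y: (srcH - h) / 2, w, h };
}

// Example: 1920x1080 camera frame, square 512x512 tracking canvas, 2x zoom.
const r = centerCrop(1920, 1080, 512, 512, 2);
// r → { x: 690, y: 270, w: 540, h: 540 }
// In the browser this would then be used as:
// ctx.drawImage(video, r.x, r.y, r.w, r.h, 0, 0, 512, 512);
```

This only addresses the distortion; the latency from chaining video → canvas → captureStream → video is a separate issue inherent to that pipeline.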
My question now is: is there another, simpler way to make something like this work reliably and with good performance?
I appreciate any help or pointers on how I could achieve this.