OBJ sequences by themselves aren’t very efficient for video use cases, though I suppose you could import all the frames as individual objects into Studio and use a script to set the right one visible each frame. It would be slow to load and a large download, but it might technically “work”.
The ideal solution for streaming volumetric content is to have the textures encoded as a video and the meshes in some custom binary format that can be compressed by exploiting the fact that the meshes don’t change much from one frame to the next.
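Purely as an illustration of why that inter-frame similarity helps (this is a hypothetical sketch, nothing to do with our actual format internals), you could store frame N’s vertices as quantised deltas against frame N−1, so mostly-static meshes become runs of tiny integers:

```python
# Hypothetical sketch: delta-encode vertex positions against the previous
# frame and quantise to 16-bit ints. QUANT is an assumed precision value.
import struct

QUANT = 1024  # quantisation steps per model unit (assumption for the sketch)

def encode_frame(prev_verts, verts):
    """Pack per-vertex deltas from the previous frame as little-endian int16s."""
    out = bytearray()
    for (px, py, pz), (x, y, z) in zip(prev_verts, verts):
        for d in (x - px, y - py, z - pz):
            # small frame-to-frame motion -> small ints -> compresses well
            out += struct.pack("<h", round(d * QUANT))
    return bytes(out)

def decode_frame(prev_verts, data):
    """Reverse of encode_frame: add dequantised deltas back onto the previous frame."""
    vals = struct.unpack("<%dh" % (len(data) // 2), data)
    verts = []
    for (px, py, pz), i in zip(prev_verts, range(0, len(vals), 3)):
        dx, dy, dz = vals[i:i + 3]
        verts.append((px + dx / QUANT, py + dy / QUANT, pz + dz / QUANT))
    return verts
```

The round trip is lossy to within one quantisation step, which is the usual trade-off for this kind of scheme.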
For that to work well the OBJ sequence needs to have certain properties - essentially it needs to be “keyframed” so that the UV unwrapping, texture coordinates, and triangle list can be shared by a range of frames. In that case the mesh animation effectively becomes just vertex animation for those frames, and because the UV coordinates are shared, consecutive textures will look similar too and will compress reasonably well as a video.
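To make that property concrete, here’s a rough sketch (hypothetical helper names, not part of any shipped tooling) of how you might check whether two frames of an OBJ sequence share topology - i.e. differ only in their `v` lines:

```python
# Hypothetical check that two OBJ frames are "keyframed" together:
# same vt (texture coordinate) records and same f (face) index lists,
# so only the v (vertex position) lines change between them.
def parse_obj(text):
    verts, uvs, faces = [], [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "vt":
            uvs.append(tuple(float(p) for p in parts[1:3]))
        elif parts[0] == "f":
            faces.append(tuple(parts[1:]))  # keep v/vt index strings as-is
    return verts, uvs, faces

def shares_topology(obj_a, obj_b):
    """True if two OBJ frames differ only in vertex positions."""
    va, ta, fa = parse_obj(obj_a)
    vb, tb, fb = parse_obj(obj_b)
    return len(va) == len(vb) and ta == tb and fa == fb
```

If every consecutive pair in a range passes a check like this, that range can share one UV layout and triangle list, and the per-frame data collapses to vertex positions only.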
We do have some work-in-progress on a runtime volumetric format, and that includes an “encoder” that takes an OBJ sequence as input, though for best results the OBJ sequence will need to have the properties I described. The format isn’t finalised or available yet but our intention would be to add support into Studio. We will also have a three.js runtime for the format that would work with our UniversalAR for three.js SDK for WebAR experiences.
I hadn’t seen the EF EVE tools before, but that certainly looks like an interesting approach, and way more affordable than dedicated volumetric capture stages. If you’d be willing to share your OBJ sequence from EVE Creator I can take a look to see whether it meets the requirements for the streaming format to work well and give it a go.
I’ll send you a private message on the forum with my email address if you’re willing to share the files.