2 Comments

    • Hi Kevin! Congratulations, you’re the first person to post a comment at my new site. 🙂

      I’m guessing you followed the link to my blog from that 2014 post of mine at foundation3d.com where I was trying to gauge interest in reviving the Hyperspace plugin. The short answer to your question is no, I never started working on a replacement for Hyperspace.

      In case you’re wondering how I did it, the work-in-progress video that I posted last week (https://vimeo.com/346674266) just uses a plain old set of single-vertex polygons that are randomly instanced in three separate layers using LightWave instancers. Each layer has 100,000 clones of those single-point polygons, with a different particle thickness for each layer (more on that in a second). Each instanced poly clone gets a randomly generated color and intensity derived from its instance ID. The instancers randomly scatter the polygons in a cube 2000 kilometers on a side, and each layer has a distance dissolve of 500 km so that a rendered frame isn’t crammed with tens of thousands of stars. Combined with the varying star particle sizes, the end result is a field of stars that fade in and out as the camera moves through the cloud of polys, all with varying sizes, colors, and intensities.
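
      If it helps to see the numbers in one place, here’s a rough Python sketch of that setup. It isn’t LightWave code, and the color ranges and per-layer thicknesses are made-up placeholders; but the cube size, clone count, dissolve distance, and the instance-ID-as-random-seed idea are the ones described above.

      ```python
      import random

      # Stand-alone sketch of the star-cloud setup -- plain Python, NOT
      # LightWave's instancer API. Value ranges marked "illustrative" are
      # assumptions, not the scene's actual settings.

      CUBE_SIZE_KM = 2000.0                    # stars scattered in a 2000 km cube
      DISSOLVE_KM = 500.0                      # distance dissolve range per layer
      CLONES_PER_LAYER = 100_000
      LAYER_THICKNESSES_KM = [0.5, 1.0, 2.0]   # illustrative particle size per layer

      def star_from_instance_id(layer_index, instance_id):
          """Derive a repeatable position, color, and intensity from the
          instance ID, the way the instancer seeds per-clone randomness."""
          rng = random.Random(layer_index * CLONES_PER_LAYER + instance_id)
          half = CUBE_SIZE_KM / 2
          position = tuple(rng.uniform(-half, half) for _ in range(3))
          color = (rng.uniform(0.6, 1.0),      # loosely star-like RGB ranges,
                   rng.uniform(0.6, 1.0),      # purely illustrative
                   rng.uniform(0.8, 1.0))
          intensity = rng.uniform(0.2, 1.0)
          return position, color, intensity

      def dissolve(distance_km):
          """Fraction of the star dissolved away: 0.0 up close, 1.0 at the
          500 km dissolve distance (a simple linear ramp stands in for
          LightWave's actual falloff here)."""
          return min(1.0, max(0.0, distance_km / DISSOLVE_KM))

      # Build one of the three layers of the cloud:
      layer_0 = [star_from_instance_id(0, i) for i in range(CLONES_PER_LAYER)]
      ```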

      To render the star cloud, I import the camera from the “main” scene where the hero model(s) were imaged with OctaneRender (the stars have to be rendered with good ol’ LightWave; OctaneRender doesn’t play well with the method I’m describing here). Importing the camera (and anything that influences the camera, such as camera target nulls or paths) ensures that the camera moves through my galaxy of 300,000 stars with the same vectors used in the original scene. I then double the scene resolution used for the hero models; this REALLY helps with getting rid of “flickering” stars. (For example, if I render the Enterprise at 1080p, I’ll render the stars at 4K.) When I composite the stars with the Enterprise in DaVinci Resolve or Adobe After Effects, I scale the star footage down by 50%. I also apply a lot more motion blur than was used in the hero model render (usually 50-100%, depending on how fast the Enterprise is moving). Finally, I crank the anti-aliasing filter radius up to 2.0; this helps blur the stars even further.
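
      To make the supersample-and-downscale step concrete, here’s a tiny numpy sketch (it’s not what Resolve or After Effects actually run internally; their scaling filters are fancier). Each final pixel becomes the average of a 2x2 block of rendered pixels, which is why sub-pixel stars stop flickering.

      ```python
      import numpy as np

      def downscale_half(frame):
          """Average each 2x2 pixel block: frame is (H, W, C), H and W even."""
          h, w, c = frame.shape
          return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

      # A 4K star render becomes a 1080p plate; a one-pixel star that would
      # pop in and out at 1080p becomes a steadier quarter-intensity dot.
      stars_4k = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in frame
      stars_1080 = downscale_half(stars_4k)
      assert stars_1080.shape == (1080, 1920, 3)
      ```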

      The parallax of moving through the instanced polygon cloud isn’t *quite* perfect, but it’s awfully close to what I got with Hyperspace. Sometimes I’ll see a star poly fly too close to the camera, which causes a weird “shooting star” effect. I also have to be sure I don’t start or end the scene too close to the edges of my little eight-billion-cubic-kilometer galaxy (or else the star density will thin out). That really hasn’t been a problem, though; as long as the camera stays at least 500 kilometers from an edge, the density remains fairly consistent.
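
      The edge problem is easy to sanity-check before committing to a render. Here’s the kind of quick test I mean, as plain Python; the camera positions would come from the imported camera path, and I’m assuming coordinates relative to the cube’s center.

      ```python
      # Check that the camera path never gets within 500 km of a face of the
      # 2000 km star cube, where the visible star density starts to thin out.

      CUBE_HALF_KM = 1000.0   # 2000 km cube, centered on the origin
      MARGIN_KM = 500.0

      def path_stays_inside(camera_positions):
          """camera_positions: iterable of (x, y, z) in km, cube-local coords."""
          limit = CUBE_HALF_KM - MARGIN_KM
          return all(max(abs(x), abs(y), abs(z)) <= limit
                     for x, y, z in camera_positions)

      # Example: a straight 300 km run down the z axis, well clear of every face.
      path = [(0.0, 0.0, float(z)) for z in range(0, 301, 10)]
      print(path_stays_inside(path))  # True
      ```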

      It was fun figuring this out, and it took less time than trying to reverse-engineer how Hyperspace worked! As for rendering speed, it’s not as fast as Hyperspace, but I have a beast of a render machine (an AMD Ryzen Threadripper 2950X), so it’s close enough. A 4K starfield frame usually renders in about eight seconds, so I just run the star layer scene in a second instance of LightWave while the main instance is rendering the Enterprise using OctaneRender. (Octane is fast, but it’s not eight-seconds-per-frame fast; the star layer finishes long before the main layer does.) OctaneRender hardly taxes my CPU at all, and of course LightWave’s native renderer doesn’t use the GPU. Both use memory, but I’ve got 64 GB installed, so the end result is that my computer barely breaks a sweat while it’s working.
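
      The two-renders-at-once workflow is nothing fancier than kicking off both jobs side by side, one on the CPU and one on the GPU. Here’s the shape of it as a Python sketch; the two command lines are pure placeholders (neither is real LightWave or OctaneRender CLI syntax), so substitute whatever batch-render invocations your setup actually uses.

      ```python
      import subprocess

      # Placeholder commands -- NOT real LightWave/OctaneRender CLI syntax.
      STAR_JOB = ["my_lightwave_batch_render", "stars_layer.lws"]   # CPU-bound
      HERO_JOB = ["my_octane_batch_render", "hero_scene.lws"]       # GPU-bound

      star = subprocess.Popen(STAR_JOB)   # CPU renderer chews through 8 s frames
      hero = subprocess.Popen(HERO_JOB)   # GPU renderer runs alongside it

      star.wait()   # the star layer finishes long before the hero layer...
      hero.wait()   # ...so the two jobs never fight over the same hardware
      ```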
