Class Renderer


  • public class Renderer
    extends java.lang.Object
    A Renderer instance represents an operating system's window.

    Typically, applications create a Renderer per window. The Renderer generates drawing commands for the render thread and manages frame latency.
    A Renderer generates drawing commands from a View, which in turn contains a Scene description.

    Creation and Destruction

    A Renderer is created using Engine.createRenderer() and destroyed using Engine.destroyRenderer(com.google.android.filament.Renderer).

    See Also:
    Engine, View
    • Method Detail

      • setDisplayInfo

        public void setDisplayInfo​(@NonNull
                                   Renderer.DisplayInfo info)
        Sets information about the display this Renderer is associated with. This information is needed to accurately compute dynamic-resolution scaling and for frame pacing.
      • setFrameRateOptions

        public void setFrameRateOptions​(@NonNull
                                        Renderer.FrameRateOptions options)
        Set options controlling the desired frame-rate.
      • setClearOptions

        public void setClearOptions​(@NonNull
                                    Renderer.ClearOptions options)
        Set ClearOptions which are used at the beginning of a frame to clear or retain the SwapChain content.
      • getEngine

        @NonNull
        public Engine getEngine()
        Gets the Engine that created this Renderer.
        Returns:
        the Engine instance this Renderer is associated with.
      • beginFrame

        public boolean beginFrame​(@NonNull
                                  SwapChain swapChain,
                                  long frameTimeNanos)
        Sets up a frame for this Renderer.

        beginFrame manages frame pacing and returns whether or not a frame should be drawn. The goal is to skip frames when the GPU falls behind, in order to keep frame latency low.

        If a given frame takes too long on the GPU, the CPU gets ahead of the GPU and the display draws the same frame twice, producing a stutter; depending on how many frames are buffered, latency increases. beginFrame() attempts to detect this situation and returns false in that case, indicating that the caller should skip the current frame.

        All calls to render() must happen after beginFrame().

        Parameters:
        swapChain - the SwapChain instance to use
        frameTimeNanos - The time in nanoseconds when the frame started being rendered, in the System.nanoTime() timebase. Divide this value by 1000000 to convert it to the SystemClock.uptimeMillis() time base. This typically comes from Choreographer.FrameCallback.
        Returns:
        true: the current frame must be drawn, and endFrame() must be called.
        false: the current frame should be skipped; the whole frame is canceled and endFrame() must not be called. However, the caller can choose to proceed as though true was returned and produce a frame anyway, by making calls to render(View), in which case endFrame() must be called.
        See Also:
        endFrame(), render(com.google.android.filament.View)
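        The timebase conversion mentioned in the frameTimeNanos parameter description can be sketched in plain Java. FrameTime and toUptimeMillis are hypothetical names used only for this sketch; they are not part of the Filament API:

        ```java
        public class FrameTime {
            // Convert a frame timestamp in the System.nanoTime() timebase (nanoseconds),
            // e.g. the value handed to Choreographer.FrameCallback.doFrame(), to the
            // SystemClock.uptimeMillis() timebase (milliseconds) by dividing by 1,000,000.
            static long toUptimeMillis(long frameTimeNanos) {
                return frameTimeNanos / 1_000_000L;
            }

            public static void main(String[] args) {
                long frameTimeNanos = 16_666_667L; // one 60 Hz frame interval
                System.out.println(toUptimeMillis(frameTimeNanos)); // prints 16
            }
        }
        ```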
      • render

        public void render​(@NonNull
                           View view)
        Renders a View into this Renderer's window.

        This is Filament's main rendering method; most of the CPU-side heavy lifting is performed here. The purpose of render() is to generate render commands that are executed asynchronously by the Engine's render thread.

        render() generates commands for each of the following stages:

        • Shadow map passes, if needed
        • Depth pre-pass
        • SSAO pass, if enabled
        • Color pass
        • Post-processing pass

        A typical render loop looks like this:

         void renderLoop(Renderer renderer, SwapChain swapChain) {
             do {
                 // typically we wait for VSYNC and user input events
                 long frameTimeNanos = System.nanoTime();
                 if (renderer.beginFrame(swapChain, frameTimeNanos)) {
                     renderer.render(mView);
                     renderer.endFrame();
                 }
             } while (!quit());
         }
         
        • render() must be called after beginFrame(com.google.android.filament.SwapChain, long) and before endFrame().
        • render() must be called from the Engine's main thread (or external synchronization must be provided). In particular, calls to render() on different Renderer instances must be synchronized.
        • render() performs potentially heavy computations and must not be called concurrently from multiple threads. Internally, however, it is highly multi-threaded to both improve performance and mitigate the call's latency.
        • render() is typically called once per frame (but not necessarily).
        Parameters:
        view - the View to render
        See Also:
        beginFrame(com.google.android.filament.SwapChain, long), endFrame(), View
      • renderStandaloneView

        public void renderStandaloneView​(@NonNull
                                         View view)
        Renders a standalone View into its associated RenderTarget.

        This call is mostly equivalent to calling render(View) inside a beginFrame(com.google.android.filament.SwapChain, long) / endFrame() block, but incurs less overhead. It can be used as a poor man's compute API.

        • renderStandaloneView() must be called outside of beginFrame(com.google.android.filament.SwapChain, long) / endFrame().
        • renderStandaloneView() must be called from the Engine's main thread (or external synchronization must be provided). In particular, calls to renderStandaloneView() on different Renderer instances must be synchronized.
        • renderStandaloneView() performs potentially heavy computations and must not be called concurrently from multiple threads. Internally, however, it is highly multi-threaded to both improve performance and mitigate the call's latency.
        Parameters:
        view - the View to render. This View must have an associated RenderTarget
        See Also:
        View
      • copyFrame

        public void copyFrame​(@NonNull
                              SwapChain dstSwapChain,
                              @NonNull
                              Viewport dstViewport,
                              @NonNull
                              Viewport srcViewport,
                              int flags)
        Copies the currently rendered View to the indicated SwapChain, using the indicated source and destination rectangle.

        copyFrame() should be called after a frame is rendered using render(com.google.android.filament.View) but before endFrame() is called.

        Parameters:
        dstSwapChain - the SwapChain into which the frame should be copied
        dstViewport - the destination rectangle in which to draw the view
        srcViewport - the source rectangle to be copied
        flags - one or more CopyFrameFlag behavior configuration flags
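        When copyFrame() is used to mirror a frame into a swap chain of a different size, the destination rectangle typically preserves the source aspect ratio. A minimal sketch of that computation; MirrorViewport and fitViewport are hypothetical helpers, not part of the Filament API, and the returned values would feed a Viewport:

        ```java
        public class MirrorViewport {
            // Compute a centered, aspect-ratio-preserving (letterboxed) destination
            // rectangle inside a dstW x dstH swap chain.
            // Returns {left, bottom, width, height}, the values a Viewport takes.
            static int[] fitViewport(int srcW, int srcH, int dstW, int dstH) {
                double scale = Math.min((double) dstW / srcW, (double) dstH / srcH);
                int w = (int) Math.round(srcW * scale);
                int h = (int) Math.round(srcH * scale);
                return new int[] { (dstW - w) / 2, (dstH - h) / 2, w, h };
            }
        }
        ```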
      • mirrorFrame

        @Deprecated
        public void mirrorFrame​(@NonNull
                                SwapChain dstSwapChain,
                                @NonNull
                                Viewport dstViewport,
                                @NonNull
                                Viewport srcViewport,
                                int flags)
        Deprecated. Use copyFrame(com.google.android.filament.SwapChain, Viewport, Viewport, int) instead.
      • readPixels

        public void readPixels​(@IntRange(from=0L)
                               int xoffset,
                               @IntRange(from=0L)
                               int yoffset,
                               @IntRange(from=0L)
                               int width,
                               @IntRange(from=0L)
                               int height,
                               @NonNull
                               Texture.PixelBufferDescriptor buffer)
        Reads back the content of the SwapChain associated with this Renderer.
        
          Framebuffer as seen on         User buffer (PixelBufferDescriptor)
          screen
          +--------------------+
          |                    |                .stride         .alignment
          |                    |         ----------------------->-->
          |                    |         O----------------------+--+   low addresses
          |                    |         |          |           |  |
          |             w      |         |          | .top      |  |
          |       <--------->  |         |          V           |  |
          |       +---------+  |         |     +---------+      |  |
          |       |     ^   |  | ======> |     |         |      |  |
          |   x   |    h|   |  |         |.left|         |      |  |
          +------>|     v   |  |         +---->|         |      |  |
          |       +.........+  |         |     +.........+      |  |
          |            ^       |         |                      |  |
          |          y |       |         +----------------------+--+  high addresses
          O------------+-------+
        
        

        readPixels must be called within a frame, meaning after beginFrame(com.google.android.filament.SwapChain, long) and before endFrame(). Typically, readPixels will be called after render(com.google.android.filament.View).


        After calling this method, the callback associated with buffer will be invoked on the main thread, indicating that the read-back has completed. Typically, this will happen after multiple calls to beginFrame(com.google.android.filament.SwapChain, long), render(com.google.android.filament.View), endFrame().


        readPixels is intended for debugging and testing. It will impact performance significantly.

        Parameters:
        xoffset - left offset of the sub-region to read back
        yoffset - bottom offset of the sub-region to read back
        width - width of the sub-region to read back
        height - height of the sub-region to read back
        buffer - client-side buffer where the read-back will be written

        The following formats are always supported:

      • Texture.Format.RGBA
      • Texture.Format.RGBA_INTEGER

        The following types are always supported:

      • Texture.Type.UBYTE
      • Texture.Type.UINT
      • Texture.Type.INT
      • Texture.Type.FLOAT

        Other combinations of format and type may be supported. If a combination is not supported, this operation may fail silently. Use a DEBUG build to get logs about the failure.

        Throws:
        java.nio.BufferOverflowException - if the specified parameters would result in reading outside of buffer.
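        The stride/alignment layout in the diagram above determines how large the user buffer must be. A minimal sketch for the RGBA/UBYTE case (4 bytes per pixel); ReadbackSize and minBufferSize are hypothetical names used only for this sketch, and a real PixelBufferDescriptor also accounts for its left/top offsets:

        ```java
        public class ReadbackSize {
            // Minimum size, in bytes, of a user buffer for an RGBA/UBYTE read-back.
            // `stridePixels` is the row length in pixels (use the image width when
            // rows are tightly packed); each row is padded up to a multiple of
            // `alignment` bytes, mirroring the .stride/.alignment fields above.
            static int minBufferSize(int stridePixels, int height, int alignment) {
                int bytesPerRow = stridePixels * 4; // RGBA, one byte per channel
                int alignedRow = (bytesPerRow + alignment - 1) / alignment * alignment;
                return alignedRow * height;
            }
        }
        ```

        For example, a 100x50 region read with 64-byte row alignment pads each 400-byte row to 448 bytes.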
      • readPixels

        public void readPixels​(@NonNull
                               RenderTarget renderTarget,
                               @IntRange(from=0L)
                               int xoffset,
                               @IntRange(from=0L)
                               int yoffset,
                               @IntRange(from=0L)
                               int width,
                               @IntRange(from=0L)
                               int height,
                               @NonNull
                               Texture.PixelBufferDescriptor buffer)
        Reads back the content of a specified RenderTarget.
        
          Framebuffer as seen on         User buffer (PixelBufferDescriptor)
          screen
          +--------------------+
          |                    |                .stride         .alignment
          |                    |         ----------------------->-->
          |                    |         O----------------------+--+   low addresses
          |                    |         |          |           |  |
          |             w      |         |          | .top      |  |
          |       <--------->  |         |          V           |  |
          |       +---------+  |         |     +---------+      |  |
          |       |     ^   |  | ======> |     |         |      |  |
          |   x   |    h|   |  |         |.left|         |      |  |
          +------>|     v   |  |         +---->|         |      |  |
          |       +.........+  |         |     +.........+      |  |
          |            ^       |         |                      |  |
          |          y |       |         +----------------------+--+  high addresses
          O------------+-------+
        
        

        Typically readPixels will be called after render(com.google.android.filament.View) and before endFrame().


        After calling this method, the callback associated with buffer will be invoked on the main thread, indicating that the read-back has completed. Typically, this will happen after multiple calls to beginFrame(com.google.android.filament.SwapChain, long), render(com.google.android.filament.View), endFrame().


        readPixels is intended for debugging and testing. It will impact performance significantly.

        Parameters:
        renderTarget - RenderTarget to read back from
        xoffset - left offset of the sub-region to read back
        yoffset - bottom offset of the sub-region to read back
        width - width of the sub-region to read back
        height - height of the sub-region to read back
        buffer - client-side buffer where the read-back will be written

        The following formats are always supported:

      • Texture.Format.RGBA
      • Texture.Format.RGBA_INTEGER

        The following types are always supported:

      • Texture.Type.UBYTE
      • Texture.Type.UINT
      • Texture.Type.INT
      • Texture.Type.FLOAT

        Other combinations of format and type may be supported. If a combination is not supported, this operation may fail silently. Use a DEBUG build to get logs about the failure.

        Throws:
        java.nio.BufferOverflowException - if the specified parameters would result in reading outside of buffer.
      • getUserTime

        public double getUserTime()
        Returns a timestamp (in seconds) for the last call to beginFrame(com.google.android.filament.SwapChain, long). This value is constant for all views rendered during a frame. The epoch is set with resetUserTime().

        In materials, this value can be queried using vec4 getUserTime(). The value returned is a highp vec4 encoded as follows:

              time.x = (float)Renderer.getUserTime();
              time.y = Renderer.getUserTime() - time.x;
         
        It follows that the following invariants are true:
              (double)time.x + (double)time.y == Renderer.getUserTime()
              time.x == (float)Renderer.getUserTime()
         
        This encoding allows the shader code to perform high precision (i.e. double) time calculations when needed despite the lack of double precision in the shader, e.g.:
        To compute (double)time * vertex in the material, use the following construct:
                      vec3 result = time.x * vertex + time.y * vertex;
         
        Most of the time, high precision computations are not required, but be aware that the precision of time.x rapidly diminishes as time passes:
        time          precision of time.x
        16.7 s        us
        4 h 39.7 s    ms
        77 h          1/60 s

        In other words, it is only possible to get microsecond accuracy for about 16s or millisecond accuracy for just under 5h. This problem can be mitigated by calling resetUserTime(), or using high precision time as described above.

        Returns:
        the time in seconds since resetUserTime() was last called
        See Also:
        resetUserTime()
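        The split encoding and the precision table above can be checked numerically in plain Java. UserTimePrecision and splitError are hypothetical names used only for this sketch:

        ```java
        public class UserTimePrecision {
            // Error of reconstructing a double time value from the (time.x, time.y)
            // float pair described above: x = (float) t, y = (float) (t - x).
            static double splitError(double t) {
                float x = (float) t;
                float y = (float) (t - (double) x);
                return Math.abs(((double) x + (double) y) - t);
            }

            public static void main(String[] args) {
                // The float pair recovers ~4 h 39.7 s to far better than a microsecond.
                System.out.println(splitError(14439.7) < 1e-9); // prints true

                // Precision of time.x alone: one float ulp at each magnitude.
                System.out.println(Math.ulp(16.7f));    // ~1.9e-6 s (microseconds)
                System.out.println(Math.ulp(14439.7f)); // ~9.8e-4 s (milliseconds)
            }
        }
        ```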
      • resetUserTime

        public void resetUserTime()
        Sets the user time epoch to now, i.e. resets the user time to zero.

        Use this method to keep the precision of time high in materials. In practice, it should be called at least when the application is paused, e.g. in Activity.onPause on Android.

        See Also:
        getUserTime()
      • getNativeObject

        public long getNativeObject()