Package ftclib.vision

Class FtcEocvColorBlobProcessor

java.lang.Object
ftclib.vision.FtcEocvColorBlobProcessor
All Implemented Interfaces:
org.firstinspires.ftc.vision.VisionProcessor, TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>

public class FtcEocvColorBlobProcessor extends Object implements TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>, org.firstinspires.ftc.vision.VisionProcessor
This class implements a vision processor on top of an EOCV color blob pipeline.
  • Field Details

    • DEF_LINE_COLOR

      private static final int DEF_LINE_COLOR
    • DEF_LINE_WIDTH

      private static final float DEF_LINE_WIDTH
    • DEF_TEXT_COLOR

      private static final int DEF_TEXT_COLOR
    • DEF_TEXT_SIZE

      private static final float DEF_TEXT_SIZE
    • colorBlobPipeline

      private final TrcOpenCvColorBlobPipeline colorBlobPipeline
    • tracer

      public final TrcDbgTrace tracer
    • linePaint

      private final android.graphics.Paint linePaint
    • textPaint

      private final android.graphics.Paint textPaint
    • annotate

      private boolean annotate
  • Constructor Details

    • FtcEocvColorBlobProcessor

      public FtcEocvColorBlobProcessor(String instanceName, Integer colorConversion, double[] colorThresholds, TrcOpenCvColorBlobPipeline.FilterContourParams filterContourParams, boolean externalContourOnly, Integer lineColor, Float lineWidth, Integer textColor, Float textSize)
      Constructor: Create an instance of the object.
      Parameters:
      instanceName - specifies the instance name.
      colorConversion - specifies the color space conversion, can be null if no color space conversion is needed. Note: the FTC EOCV input Mat format is RGBA, so the conversion must be an Imgproc.COLOR_RGBA2xxx or Imgproc.COLOR_RGB2xxx code.
      colorThresholds - specifies an array of six color threshold values, ordered as (min, max) pairs for the three channels of the (possibly converted) color space, e.g. (minRed, maxRed, minGreen, maxGreen, minBlue, maxBlue) for RGB or (minHue, maxHue, minSat, maxSat, minValue, maxValue) for HSV.
      filterContourParams - specifies the parameters for filtering contours, can be null if not provided.
      externalContourOnly - specifies true for finding external contours only, false otherwise (not applicable if filterContourParams is null).
      lineColor - specifies the line color to draw the bounding rectangle, can be null if not provided in which case default color is used.
      lineWidth - specifies the line width to draw the bounding rectangle, can be null if not provided in which case default width is used.
      textColor - specifies the text color to draw the label text, can be null if not provided in which case default color is used.
      textSize - specifies the text size to draw the label text, can be null if not provided in which case default text size is used.
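      For illustration, a minimal usage sketch: construct the processor with HSV thresholds and register it with a VisionPortal during an OpMode's initialization. The threshold values, the webcam name "Webcam 1", and the variable names are assumptions for the example, not part of this class.

        import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
        import org.firstinspires.ftc.vision.VisionPortal;
        import org.opencv.imgproc.Imgproc;

        // Inside an OpMode's initialization; hardwareMap is provided by the OpMode.
        FtcEocvColorBlobProcessor blobProcessor = new FtcEocvColorBlobProcessor(
            "RedBlob",                      // instanceName
            Imgproc.COLOR_RGB2HSV,          // EOCV input Mat is RGBA; the RGB2HSV code also accepts 4-channel input
            new double[] {0.0, 30.0, 100.0, 255.0, 100.0, 255.0},  // example thresholds: minHue, maxHue, minSat, maxSat, minValue, maxValue
            null,                           // filterContourParams: no contour filtering
            true,                           // externalContourOnly
            null, null, null, null);        // use default line color/width and text color/size

        VisionPortal visionPortal = new VisionPortal.Builder()
            .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
            .addProcessor(blobProcessor)
            .build();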
    • FtcEocvColorBlobProcessor

      public FtcEocvColorBlobProcessor(String instanceName, Integer colorConversion, double[] colorThresholds, TrcOpenCvColorBlobPipeline.FilterContourParams filterContourParams, boolean externalContourOnly)
      Constructor: Create an instance of the object.
      Parameters:
      instanceName - specifies the instance name.
      colorConversion - specifies the color space conversion, can be null if no color space conversion is needed. Note: the FTC EOCV input Mat format is RGBA, so the conversion must be an Imgproc.COLOR_RGBA2xxx or Imgproc.COLOR_RGB2xxx code.
      colorThresholds - specifies an array of six color threshold values, ordered as (min, max) pairs for the three channels of the (possibly converted) color space, e.g. (minRed, maxRed, minGreen, maxGreen, minBlue, maxBlue) for RGB or (minHue, maxHue, minSat, maxSat, minValue, maxValue) for HSV.
      filterContourParams - specifies the parameters for filtering contours, can be null if not provided.
      externalContourOnly - specifies true for finding external contours only, false otherwise (not applicable if filterContourParams is null).
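      The same construction using this convenience constructor, which leaves the annotation attributes at their defaults (the values are the same illustrative assumptions as in the previous sketch):

        FtcEocvColorBlobProcessor blobProcessor = new FtcEocvColorBlobProcessor(
            "RedBlob", Imgproc.COLOR_RGB2HSV,
            new double[] {0.0, 30.0, 100.0, 255.0, 100.0, 255.0},
            null, true);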
  • Method Details

    • toString

      @NonNull public String toString()
      This method returns the pipeline instance name.
      Overrides:
      toString in class Object
      Returns:
      pipeline instance name.
    • setAnnotationAttributes

      public void setAnnotationAttributes(Integer lineColor, Float lineWidth, Integer textColor, Float textSize)
      This method sets the annotation rectangle and text attributes such as rectangle line width/color and label text color/size.
      Parameters:
      lineColor - specifies the line color to draw the bounding rectangle, can be null if not provided in which case default color or previously set color is used.
      lineWidth - specifies the line width to draw the bounding rectangle, can be null if not provided in which case default width or previously set width is used.
      textColor - specifies the text color to draw the label text, can be null if not provided in which case default color or previously set color is used.
      textSize - specifies the text size to draw the label text, can be null if not provided in which case default text size or previously set size is used.
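      A hedged example using android.graphics.Color constants; the colors and sizes here are illustrative only, and blobProcessor is assumed from the construction sketch above.

        import android.graphics.Color;

        // Yellow bounding boxes with a thicker line and larger white labels.
        blobProcessor.setAnnotationAttributes(Color.YELLOW, 6.0f, Color.WHITE, 24.0f);
        // Passing null keeps the current setting, e.g. change only the line color.
        blobProcessor.setAnnotationAttributes(Color.GREEN, null, null, null);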
    • reset

      public void reset()
      This method is called to reset the state of the pipeline, if any.
      Specified by:
      reset in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
    • process

      public TrcOpenCvColorBlobPipeline.DetectedObject[] process(org.opencv.core.Mat input)
      This method is called to process the input image through the pipeline.
      Specified by:
      process in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
      Parameters:
      input - specifies the input image to be processed.
      Returns:
      array of detected objects.
    • getDetectedObjects

      public TrcOpenCvColorBlobPipeline.DetectedObject[] getDetectedObjects()
      This method returns the array of detected objects.
      Specified by:
      getDetectedObjects in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
      Returns:
      array of detected objects.
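      A minimal polling sketch; telemetry and blobProcessor are assumed from an OpMode context, and the typed accessors of DetectedObject are defined in TrcOpenCvColorBlobPipeline.

        TrcOpenCvColorBlobPipeline.DetectedObject[] blobs = blobProcessor.getDetectedObjects();
        if (blobs != null)
        {
            for (TrcOpenCvColorBlobPipeline.DetectedObject blob : blobs)
            {
                // toString() summarizes the detection; see TrcOpenCvColorBlobPipeline for typed accessors.
                telemetry.addLine(blob.toString());
            }
        }
        telemetry.update();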
    • setAnnotateEnabled

      public void setAnnotateEnabled(boolean enabled)
      This method enables/disables image annotation of the detected object.
      Specified by:
      setAnnotateEnabled in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
      Parameters:
      enabled - specifies true to enable annotation, false to disable.
    • isAnnotateEnabled

      public boolean isAnnotateEnabled()
      This method checks if image annotation is enabled.
      Specified by:
      isAnnotateEnabled in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
      Returns:
      true if annotation is enabled, false otherwise.
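      For example, annotation could be toggled from a gamepad button (a sketch; gamepad1 and blobProcessor are assumed from an OpMode context):

        if (gamepad1.a)
        {
            blobProcessor.setAnnotateEnabled(!blobProcessor.isAnnotateEnabled());
        }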
    • setVideoOutput

      public void setVideoOutput(int intermediateStep)
      This method sets the specified intermediate Mat of the pipeline as the video output Mat. Note: FTC supports multiple vision processors, so this class does not control the video output; calling this method throws an exception.
      Specified by:
      setVideoOutput in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
      Parameters:
      intermediateStep - specifies the intermediate mat used as video output (0 is the original input frame).
    • setNextVideoOutput

      public void setNextVideoOutput()
      This method cycles to the next intermediate Mat of the pipeline as the video output Mat. Note: FTC supports multiple vision processors, so this class does not control the video output; calling this method throws an exception.
      Specified by:
      setNextVideoOutput in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
    • getIntermediateOutput

      public org.opencv.core.Mat getIntermediateOutput(int step)
      This method returns an intermediate processed frame. Typically, a pipeline processes a frame in a number of steps. It may be useful to see an intermediate frame for a step in the pipeline for tuning or debugging purposes.
      Specified by:
      getIntermediateOutput in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
      Parameters:
      step - specifies the intermediate step (0 is the original input frame).
      Returns:
      processed frame of the specified step.
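      For example, an intermediate stage can be inspected while tuning thresholds. The step index used here is an assumption; the available steps are defined by the underlying color blob pipeline, with step 0 always being the original input frame.

        // Fetch an intermediate Mat for debugging/tuning; step 0 is the original frame.
        org.opencv.core.Mat intermediateFrame = blobProcessor.getIntermediateOutput(1);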
    • getSelectedOutput

      public org.opencv.core.Mat getSelectedOutput()
      This method returns the selected intermediate output Mat.
      Specified by:
      getSelectedOutput in interface TrcOpenCvPipeline<TrcOpenCvDetector.DetectedObject<?>>
      Returns:
      selected output mat.
    • init

      public void init(int width, int height, org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration calibration)
      This method is called to initialize the vision processor.
      Parameters:
      width - specifies the image width.
      height - specifies the image height.
      calibration - specifies the camera calibration data.
    • processFrame

      public Object processFrame(org.opencv.core.Mat frame, long captureTimeNanos)
      This method is called to process an image frame.
      Parameters:
      frame - specifies the source image to be processed.
      captureTimeNanos - specifies the capture frame timestamp.
      Returns:
      array of detected objects.
    • onDrawFrame

      public void onDrawFrame(android.graphics.Canvas canvas, int onscreenWidth, int onscreenHeight, float scaleBmpPxToCanvasPx, float scaleCanvasDensity, Object userContext)
      Called during the viewport's frame rendering operation, at some point after processFrame() has been called for that frame. Allows you to use the Canvas API to draw annotations on the frame, rather than using OpenCV calls. This allows for more eye-candy annotations since you've got a high-resolution canvas to work with rather than, say, a 320x240 image.

      Note that this is NOT called from the same thread that calls processFrame(), and may actually be called from the UI thread depending on the viewport renderer.

      Parameters:
      canvas - the canvas that's being drawn on. NOTE: do NOT get dimensions from it; use the onscreenWidth and onscreenHeight parameters below.
      onscreenWidth - the width of the canvas that corresponds to the image
      onscreenHeight - the height of the canvas that corresponds to the image
      scaleBmpPxToCanvasPx - multiply pixel coords by this to scale to canvas coords
      scaleCanvasDensity - a scaling factor to adjust e.g. text size. Relative to Nexus5 DPI.
      userContext - whatever you passed in when requesting the draw hook
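      For illustration only, a sketch of how a VisionProcessor's onDrawFrame typically maps image-pixel coordinates to canvas coordinates; objRect (an org.opencv.core.Rect), label, linePaint and textPaint are assumed local variables, and this is not the actual body of this class's implementation.

        // Scale an OpenCV bounding rect (image pixels) to canvas pixels and draw it.
        android.graphics.RectF canvasRect = new android.graphics.RectF(
            objRect.x * scaleBmpPxToCanvasPx,
            objRect.y * scaleBmpPxToCanvasPx,
            (objRect.x + objRect.width) * scaleBmpPxToCanvasPx,
            (objRect.y + objRect.height) * scaleBmpPxToCanvasPx);
        canvas.drawRect(canvasRect, linePaint);
        // Scale text relative to screen density before drawing the label.
        textPaint.setTextSize(20.0f * scaleCanvasDensity);
        canvas.drawText(label, canvasRect.left, canvasRect.top, textPaint);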