Distance Measurement
Class: DistanceMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.distance_measurement.v1.DistanceMeasurementBlockV1
Calculate the distance between two bounding boxes on a 2D plane, leveraging a perpendicular camera view and either a reference object or a pixel-to-unit scaling ratio for precise measurements.
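In practice the block needs a way to convert pixels into real-world units: either a pixel-to-centimeter ratio supplied directly, or a ratio derived from a reference object of known size. A rough sketch of the underlying arithmetic, assuming an edge-to-edge measurement along the chosen axis (the function names and the edge-to-edge convention are illustrative assumptions, not the block's exact implementation):

```python
# Hypothetical sketch of the measurement idea; not the block's actual code.

def edge_gap_px(box_a, box_b, axis="vertical"):
    """Pixel gap between the facing edges of two (x_min, y_min, x_max, y_max) boxes."""
    if axis == "vertical":
        lo, hi = sorted([box_a, box_b], key=lambda b: b[1])
        return max(0.0, hi[1] - lo[3])  # top of lower box minus bottom of upper box
    lo, hi = sorted([box_a, box_b], key=lambda b: b[0])
    return max(0.0, hi[0] - lo[2])      # left edge of right box minus right edge of left box

def distance_cm(box_a, box_b, pixel_ratio, axis="vertical"):
    """Convert the pixel gap to centimeters using a pixels-per-centimeter ratio."""
    return edge_gap_px(box_a, box_b, axis) / pixel_ratio

# Example: two boxes separated vertically, with 100 pixels per centimeter.
car = (50, 40, 200, 120)
person = (60, 300, 140, 480)
print(distance_cm(car, person, pixel_ratio=100, axis="vertical"))  # -> 1.8
```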
Type identifier
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/distance_measurement@v1
Properties
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
object_1_class_name | str | The class name of the first object. | ❌ |
object_2_class_name | str | The class name of the second object. | ❌ |
reference_axis | str | The axis along which the distance will be measured. | ❌ |
calibration_method | str | Select how to calibrate the measurement of distance between objects. | ❌ |
reference_object_class_name | str | The class name of the reference object. | ✅ |
reference_width | float | Width of the reference object in centimeters. | ✅ |
reference_height | float | Height of the reference object in centimeters. | ✅ |
pixel_ratio | float | The pixel-to-centimeter ratio of the input image, e.g. 1 centimeter = 100 pixels. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
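The calibration_method property determines which of the remaining properties matter: the reference-object mode relies on reference_object_class_name, reference_width and reference_height, while the pixel-ratio mode relies on pixel_ratio alone. A sketch of how a pixels-per-centimeter ratio could be derived from a detected reference object of known physical size (the function name and the axis-averaging strategy are assumptions, not necessarily what the block does internally):

```python
# Hypothetical sketch of reference-object calibration; names are illustrative only.

def pixel_ratio_from_reference(ref_box, reference_width_cm, reference_height_cm):
    """Derive a pixels-per-centimeter ratio from a reference box of known real size."""
    x_min, y_min, x_max, y_max = ref_box
    width_ratio = (x_max - x_min) / reference_width_cm
    height_ratio = (y_max - y_min) / reference_height_cm
    # Average the two axes to smooth out small detection errors.
    return (width_ratio + height_ratio) / 2.0

# A 5 cm x 5 cm marker detected as a 500 x 490 px box -> ~99 px per cm.
print(pixel_ratio_from_reference((0, 0, 500, 490), 5.0, 5.0))
```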
Available Connections
Compatible Blocks
Check what blocks you can connect to Distance Measurement in version v1.
- inputs: Segment Anything 2 Model, Instance Segmentation Model, Model Monitoring Inference Aggregator, Keypoint Detection Model, Byte Tracker, VLM as Classifier, Camera Focus, Perspective Correction, VLM as Detector, Google Vision OCR, Detections Stitch, Detections Stabilizer, YOLO-World Model, Multi-Label Classification Model, VLM as Detector, Dynamic Zone, Object Detection Model, OpenAI, CogVLM, Detections Transformation, Email Notification, Time in Zone, Moondream2, Dynamic Crop, Slack Notification, Detections Classes Replacement, Byte Tracker, Twilio SMS Notification, Byte Tracker, Detection Offset, Roboflow Custom Metadata, Template Matching, Time in Zone, Clip Comparison, Detections Consensus, LMM, LMM For Classification, Single-Label Classification Model, Overlap Filter, Instance Segmentation Model, Florence-2 Model, Roboflow Dataset Upload, Google Gemini, Webhook Sink, Stitch OCR Detections, Detections Merge, Velocity, Local File Sink, Bounding Rectangle, Gaze Detection, Roboflow Dataset Upload, Anthropic Claude, Detections Filter, Florence-2 Model, OpenAI, Path Deviation, Path Deviation, Llama 3.2 Vision, Identify Changes, Object Detection Model, OCR Model, Line Counter, CSV Formatter, Cosine Similarity
- outputs: SIFT Comparison, Keypoint Detection Model, Byte Tracker, Image Threshold, Polygon Visualization, Dynamic Zone, Pixelate Visualization, Identify Outliers, Circle Visualization, Dot Visualization, Slack Notification, Absolute Static Crop, Byte Tracker, Keypoint Visualization, Byte Tracker, Detection Offset, Detections Consensus, Label Visualization, Image Slicer, Keypoint Detection Model, Triangle Visualization, Anthropic Claude, Corner Visualization, Image Contours, Instance Segmentation Model, Perspective Correction, Detections Stabilizer, Line Counter Visualization, Object Detection Model, Email Notification, Color Visualization, Image Slicer, Mask Visualization, Twilio SMS Notification, Crop Visualization, Grid Visualization, Pixel Color Count, Stitch Images, Instance Segmentation Model, Bounding Box Visualization, Webhook Sink, Stitch OCR Detections, Ellipse Visualization, Dominant Color, Image Blur, Reference Path Visualization, SIFT Comparison, Image Preprocessing, Classification Label Visualization, Blur Visualization, Identify Changes, Object Detection Model, Trace Visualization, Halo Visualization
Input and Output Bindings
The available connections depend on the block's binding kinds. Check what binding kinds Distance Measurement in version v1 has.
Bindings
- input
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction]): The output of a detection model describing the bounding boxes that will be used to measure the objects.
  - reference_object_class_name (string): The class name of the reference object.
  - reference_width (float): Width of the reference object in centimeters.
  - reference_height (float): Height of the reference object in centimeters.
  - pixel_ratio (float): The pixel-to-centimeter ratio of the input image, e.g. 1 centimeter = 100 pixels.
- output
Example JSON definition of step Distance Measurement in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/distance_measurement@v1",
    "predictions": "$steps.model.predictions",
    "object_1_class_name": "car",
    "object_2_class_name": "person",
    "reference_axis": "vertical",
    "calibration_method": "<block_does_not_provide_example>",
    "reference_object_class_name": "reference-object",
    "reference_width": 2.5,
    "reference_height": 2.5,
    "pixel_ratio": 100
}
```
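In a complete workflow definition the step is typically fed by an upstream object detection step whose predictions it consumes. A minimal sketch of such a specification as a Python dict (the step names, model ID, calibration_method value, and the wildcard output selector are illustrative assumptions, not values mandated by this block):

```python
# Hypothetical minimal workflow specification; step names, model ID, and the
# calibration_method value are assumptions for illustration only.
workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # assumed public model ID
        },
        {
            "type": "roboflow_core/distance_measurement@v1",
            "name": "distance",
            "predictions": "$steps.model.predictions",
            "object_1_class_name": "car",
            "object_2_class_name": "person",
            "reference_axis": "vertical",
            "calibration_method": "pixel ratio",  # assumed value; check the block's schema
            "pixel_ratio": 100,
        },
    ],
    "outputs": [
        # Wildcard selector exposes all fields produced by the distance step.
        {"type": "JsonField", "name": "distance", "selector": "$steps.distance.*"}
    ],
}
```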